bash file descriptors, pipes and … lockf.

Bash and long-standing file descriptors are a pain, to put it politely. For example, I’d like to spawn a sub-process into which I can “write” stuff at arbitrary points in time by doing echo fneh >&4 … sounds simple enough, right? But how does one spawn that sub-process?
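One answer (a sketch, not part of the original setup: it assumes bash >= 4 for coproc, and uses cat purely as a placeholder sub-process) is to let bash create the pipe pair itself, so no fifo ever touches disk:

```shell
#!/bin/bash
# coproc starts a background process and hands us a pipe pair:
# CAT[1] writes to its stdin, CAT[0] reads from its stdout.
coproc CAT { cat; }

exec 4>&${CAT[1]}      # give the write end a stable fd number
echo fneh >&4          # "write" to the sub-process at any later point

read -r -u ${CAT[0]} line
echo "$line"           # prints: fneh

exec 4>&-              # close our copy of the write end
```

This sidesteps the named-fifo problems discussed further down, at the cost of requiring a reasonably recent bash.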

In particular, after getting stuck on the lockf situation last night I think I may have found a solution. What I plan to do is write a simple program (probably around 20 lines of C) that will perform an lockf() on its stdout (fd=1), and then go to background using fork() (the steps may need to be reversed, or something done in order to block execution of the invoking process until the lock has been obtained; this little note in lockf(3) concerns me: “A child process does not inherit these locks.”), and then keep on reading from stdin until it receives end-of-file, at which point it will simply terminate. This should then allow one to create a fifo (mkfifo) somewhere on disk, open that fifo for writing from bash (exec 3>/path/to/fifo), and then execute the lockf utility using something like “lockf /path/to/lockfile” … the idea being that it blocks until the lock has been obtained, and then keeps running in the background until its stdin closes (if the bash script terminates the fifo will be closed).

This raised a few questions (some of the answers to which are already present above). So, a few bash notes (mostly obtained from the Advanced Bash-Scripting Guide): for each operation, first the objective, then two lines; the first gives the general syntax, the second an example:

Open a file for reading:

exec fd</path/to/file
exec 3</tmp/infile.txt

Open a file for writing:

exec fd>/path/to/file
exec 3>/tmp/outfile.txt

Closing a file descriptor:

exec fd<&-
exec 0<&-  # will close stdin

Writing to a file descriptor:

command >&fd
echo fneh >&3

Reading from a file descriptor:

command <&fd
read VARNAME <&4

Renumbering file descriptors (à la dup2(2)) depends on whether it’s for reading or writing, but basically you just open the new fd and, instead of specifying a filename, give it an existing file descriptor. So to “copy” stdin from fd=0 (standard) to fd=6 for whatever reason, and fd=2 (stderr) to fd=7, you can do this:

exec 6<&0
exec 7>&2

You can perform multiple exec actions in a single line, for example:

exec 3</tmp/infile.txt 4>/tmp/outfile 6<&0 7>&2

This is interpreted left to right. And yes, order does matter, for example when doing this (saving stderr to fd 7, and redirecting the script’s stderr to /tmp/errout.txt):

exec 7>&2 2>/tmp/errout.txt
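A quick sketch (the file name is just an example) of what that buys you: writes to fd 2 now land in the file, while fd 7 still reaches the original stderr, which also lets you restore things afterwards:

```shell
#!/bin/bash
exec 7>&2 2>/tmp/errout.txt   # 7 = the original stderr, 2 = the file

echo "to the file" >&2        # lands in /tmp/errout.txt
echo "to the old stderr" >&7  # lands wherever stderr pointed before

exec 2>&7 7>&-                # restore stderr, close the saved copy
```

Had the reverse order been used (exec 2>/tmp/errout.txt 7>&2), fd 7 would simply have been a second handle on the file, and the original stderr would have been lost.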

What I can’t seem to find is a bash way of doing pipe2(2) without using mkfifo (which is open to name-prediction attacks). mktemp can’t create pipes for us, so we’re stuck with generating a random name and attempting to create it ourselves (remember --mode=0600). This still suffers from problems similar to mktemp in that the content can be hijacked by users with read/write permissions to the pipe.

I’m still working on a sane way to enforce the lock though, and it’s looking more and more like I will need to create two fifos, and pass the filename of the lockfile as a parameter to the program, so something like this (ignoring path name collisions and other failures on the fifos for the moment):

mkfifo /tmp/lockf-{in,out}.$$.fifo
trap "rm /tmp/lockf-{in,out}.$$.fifo" EXIT
# Note that exec will block waiting for the bash process
# preparing the lockf command to open the fifos first.
# Order of redirects is crucial to prevent deadlocks.
lockf /var/run/lockfile.lock </tmp/lockf-in.$$.fifo \
    >/tmp/lockf-out.$$.fifo &
exec 4>/tmp/lockf-in.$$.fifo 3</tmp/lockf-out.$$.fifo
read LOCKED <&3
exec 3<&-
[ -z "$LOCKED" ] && echo "Error obtaining lock" && exit 1
... rest of code here ...
... to explicitly unlock:
exec 4<&-

This assumes the semantics of lockf are as follows:

* Takes a single argument, filename of the lockfile.
* Will open this file for writing (creating it if required).
* Issue lockf() on the file (entire file … probably empty anyway).
* Write a single line to stdout before closing stdout.
* If unable to obtain a lock, simply close stdout (probably by terminating).
* Block on reads from stdin; when stdin is closed (EOF received), terminate (letting go of the lock).

The code for lockf is below:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
 
int main(int argc, char ** argv)
{
    int fd;
    char bfr[128];
 
    if (argc < 2) {
        fprintf(stderr, "USAGE: %s filename\n", *argv);
        exit(1);
    }
 
    /* parent process should correctly set umask
       (0066 is a good one) */
    fd = open(argv[1], O_WRONLY | O_CREAT, 0660);
    if (fd < 0) {
        perror(argv[1]);
        exit(1);
    }
 
    if (lockf(fd, F_LOCK, 0) < 0) {
        perror(argv[1]);
        exit(1);
    }
 
    fprintf(stdout, "locked\n");
    fclose(stdout);
 
    while (fgets(bfr, sizeof(bfr), stdin)) { };
    exit(0);
}

Initial (rudimentary) testing shows that this does actually work. Surprise, surprise.

I initially had two unlink(2) calls in the code, which I realized introduce other races: if not in the invoking process, then potentially in other processes. The first was where we open the file and then fail to obtain the lock (note that I don’t set alarms, but I don’t block signals either, so there are other reasons the system call may get interrupted). The other was just before the final exit, with the lock still held. This opens race conditions as follows, for the failure case (line structure is P:action, where P is a process number: 1 is us, 2 and 3 are others; action is a logical action):

1:create lockfile
2:open lockfile
2:lock lockfile
1:lock fails
1:unlink lockfile
3:create new lockfile
3:lock new lockfile

In this case both processes 2 and 3 will think they hold the lockfile. In the success case:

1:create lockfile
1:lock succeeds
2:opens lockfile
2:blocks waiting for lock
1:unlinks lockfile
1:closes lockfile
2:lock succeeds on already open fd
3:creates new lockfile
3:lock succeeds

In this case, again, we have both 2 and 3 holding the “same” lock.

In the case of portage this probably doesn’t matter too much, seeing that “1” should complete downloading the file, and the lock file is mostly for in-process stuff, for its background fetching. There are (to the best of my knowledge) no concurrent compile locks implemented, albeit there probably should be.

As it stands, strictly speaking this is a bug in portage. The moral of the story is that a lock file should never, ever, ever, be removed. Ever.

2 Responses to “bash file descriptors, pipes and … lockf.”

  1. Michael Yagliyan says:

    What about using mktemp to safely create a directory, then use mkfifo to create the fifo in there?

  2. Jaco Kroon says:

    That is not a bad idea at all. It definitely reduces the ability of rogue processes to race the pipe, but doesn’t eliminate it for processes running as the same user. Basically what you’re saying is to modify the bash code above, replacing lines 1 and 2 with:

    tdir=$(mktemp -d)
    trap "rm -rf '${tdir}'" EXIT
    mkfifo ${tdir}/{in,out}.fifo

    And lines 6-8 with:

    lockf /var/run/lockfile.lock <${tdir}/in.fifo >${tdir}/out.fifo &
    exec 4>${tdir}/in.fifo 3<${tdir}/out.fifo

    Much safer IMHO indeed. Thanks.
