From: Bart Schaefer <schaefer@brasslantern.com>
To: zsh-workers@zsh.org
Subject: Re: SIGPIPE echoing to fifo exits zsh
Date: Tue, 15 Jan 2013 18:53:41 -0800
Message-ID: <130115185341.ZM18422@torch.brasslantern.com>
In-Reply-To: <87r4lmlae4.fsf@gmail.com>
In-Reply-To: <20130115231654.GB2518@localhost.localdomain>
On Jan 15, 4:43pm, Christian Neukirchen wrote:
}
} juno% repeat 1000 echo a >/tmp/fifo
}
} Program received signal SIGPIPE, Broken pipe.
}
} Using /bin/echo (coreutils 8.20), the code works flawlessly (all lines
} get through the FIFO).
On Jan 16, 7:16am, Han Pingtian wrote:
} Subject: Re: SIGPIPE echoing to fifo exits zsh
}
} Looks like the crash occurs at this line in bin_print():
}
} 4084 /* Testing EBADF special-cases >&- redirections */
} 4085 if ((fout != stdout) ? (fclose(fout) != 0) :
} 4086 (fflush(fout) != 0 && errno != EBADF)) {
Hrm. If this were a script instead of an interactive command at the
prompt, the correct behavior would in fact be to exit on SIGPIPE. I
don't know offhand if that's also (per POSIX spec for example) correct
in this instance.
You can prevent the shell from exiting by simply ignoring the SIGPIPE
signal:
trap '' PIPE
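Here is a minimal, self-contained illustration of what the trap changes
(using an ordinary pipe rather than the FIFO, so it runs as-is in any
POSIX-ish shell; "seq" stands in for a long-winded writer):

```shell
# With SIGPIPE ignored, a writer that outlives its reader is not killed;
# its write simply fails with EPIPE and execution continues.
trap '' PIPE

# "head -1" exits after one line, closing the read end of the pipe.
# seq's later writes fail with EPIPE (error output suppressed), but
# the shell running the pipeline survives to execute the next command.
first=$( { seq 1 100000 2>/dev/null; } | head -1 )
echo "first line read: $first"
echo "shell still alive"
```

Children inherit the ignored disposition, which is why the trap in the
parent shell protects the pipeline's subshells as well.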
As for why the builtin echo gets a SIGPIPE and /bin/echo does not ...
there's a race condition involved. With the trap above in place, if
you try
a=0
repeat 100 echo $((++a)) > /tmp/fifo
you'll probably see something like this on the reading end:
1
5
14
24
34
43
51
60
70
80
89
97
100
What's happening is that the repeat loop writes a bunch of data to
the pipe (several executions of "echo") before "head -1" on the reading
side wakes up and reads it. Then "head" exits, closing the pipe, which
delivers SIGPIPE to whichever echo writes next -- and with the builtin
echo, that write comes from the shell process itself. A new "head" then
re-opens the pipe, so another few passes of the loop go by before the
next exit+SIGPIPE event.
With /bin/echo it takes about the same amount of time for each of the
two loops to spawn their external processes, so the race is much less
likely to happen -- and when a SIGPIPE does arrive, it kills only the
external echo process, not the shell.
If you put a "sleep 1" or really almost any delay into the repeat loop
-- or use a single command like "cat" that doesn't repeatedly open/close
the read end of the pipe -- you'll see all the lines get through even
with the builtin echo.
You can see this even more glaringly by using
while :; do head -1 ; done < /tmp/fifo
as the reading end; with that, each run of "head" will consume multiple
lines of the output from the writing end, discard all but the first line
each time, and produce output similar to
1
53
83
without ever triggering the SIGPIPE on the write side.
In short, when using FIFOs, synchronizing the amount of data written and
read by each side is the programmer's job; commands like "head", which
read in large blocks and discard whatever they don't print, are not meant
to work that way.
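One way to do that synchronization (purely illustrative, not something
from the reports above) is a lock-step handshake over a second FIFO: the
writer sends one line per acknowledgement, so neither side can outrun
the other and nothing is dropped:

```shell
# Hypothetical handshake sketch: data flows over $dir/data, acks over
# $dir/ack; all names and the line count of 5 are arbitrary.
dir=$(mktemp -d)
mkfifo "$dir/data" "$dir/ack"

# Reader: consume exactly 5 lines, acknowledging each one.
(
  exec 3< "$dir/data" 4> "$dir/ack"
  i=0
  while [ "$i" -lt 5 ]; do
    i=$((i + 1))
    read -r line <&3
    echo "$line" >> "$dir/out"
    echo ok >&4                  # tell the writer to send the next line
  done
) &

# Writer: send a line, then wait for its ack before sending the next.
exec 5> "$dir/data" 6< "$dir/ack"
n=0
while [ "$n" -lt 5 ]; do
  n=$((n + 1))
  echo "line $n" >&5
  read -r ack <&6
done
exec 5>&- 6<&-
wait

received=$(cat "$dir/out")
echo "$received"
rm -r "$dir"
```

Because each side opens its FIFO ends exactly once and never writes
without a matching reader waiting, there is no window in which a write
can hit a closed pipe.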
Thread overview: 5+ messages
2013-01-15 15:43 Christian Neukirchen
2013-01-15 23:16 ` Han Pingtian
2013-01-16 2:53 ` Bart Schaefer [this message]
2013-01-16 9:30 ` Han Pingtian
2013-01-16 9:44 ` Peter Stephenson