Let's assume I have a series of large commands which need to be piped to each other. Additionally, the pipeline itself must behave differently based on some global flags or arguments. There are a lot of ways to implement logic like this (e.g. functions), but one that I've become a fan of is something like this below:

    cmd1=( some long command $FLAG bar baz )
    cmd2=( grep -E "$@" )
    cmd3=(
        grep -v
        # Don't match these
        -e 'fizz|buzz'
    )

    "${cmd1[@]}" | "${cmd2[@]}" | "${cmd3[@]}"

However, conditional logic makes this annoying and a bit opaque -- functions definitely excel here. Another issue is that of adding comments to the arguments and the pipeline itself -- a line continued with a trailing '\' cannot also carry a comment. Functions obviously can contain comments within themselves.

What I've discovered is that something like this works out pretty well...

    {
        printf "%s\n" a b foo c bar d fizz XfooX XbuzzX
    } | {
        grep -E 'foo|bar'
    } | {
        # If the user specified '--no-fizz-buzz', remove those entries
        if (( NO_FIZZ_BUZZ )); then
            grep -vE 'fizz|buzz'
        else
            # Pass through everything
            cat
        fi
    }

I have a few questions about this construct.

1. Am I insane for doing this?
2. In what ways is this a terrible idea?
3. Is the use of {} better than ()?
4. How much of a performance hit does this incur, versus hand-writing a different pipeline?
5. Are there any ways to improve this? For example, replacing 'cat' in the default case.

Thanks for the attention; I'm just curious what everybody thinks about this abuse of pipelines and conditional logic.

*Zach Riggle*
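
P.S. For comparison, here's a rough sketch of the function-based alternative I'm weighing this against (the function name is just illustrative):

    # Encapsulate the conditional stage in a function so it can
    # carry comments and branch on the global flag.
    filter_fizz_buzz() {
        if (( NO_FIZZ_BUZZ )); then
            # Remove fizz/buzz entries
            grep -vE 'fizz|buzz'
        else
            # Pass through everything
            cat
        fi
    }

    printf "%s\n" a b foo c bar d fizz XfooX XbuzzX \
        | grep -E 'foo|bar' \
        | filter_fizz_buzz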