From: erik quanstrom
Date: Mon, 3 May 2010 10:53:53 -0400
To: 9fans@9fans.net
Subject: Re: [9fans] du and find

> It's always been easier for me to use python's/perl's regular
> expressions when I needed to process a text file than to use plan9's.
> For simple things, e.g. while editing an ordinary text in acme/sam,
> plan9's regexps are just fine.

i find it hard to think of cases where i would need such
sophistication and where tokenization, or tokenization plus parsing,
wouldn't be a better idea.  for example, you could write a re to
parse the output of ls -l or ps, but awk '{print $field}' is so
much easier to write and read.

so in all, i view perl "regular" expressions as a tough sell.  i
think they're harder to write, harder to read, require more (and
more unstable) code, and are slower.  one could speculate that perl,
by encouraging a monolithic rather than tools-based approach, and
cleverness over clarity, made perl regular expressions the logical
next step.  if so, i question those assumptions.

- erik
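
p.s. to make the comparison concrete, a sketch.  the listing below
is hypothetical and assumes plan 9's ls -l layout, where the length
is the sixth field (on unix the field numbers differ):

	; ls -l /bin/awk
	--rwxrwxr-x M 20 sys sys 125360 Jun 23  2009 /bin/awk

	# awk just names the field it wants
	; ls -l /bin/* | awk '{print $6}'

	# a perl-style re has to spell out the five fields it doesn't
	# want just to reach the one it does
	; ls -l /bin/* | perl -lne 'print $1 if /^\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+(\d+)\s/'

both print the same lengths; only one of them says what it means.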