From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
In-Reply-To: <455f59971ace96897640df2bff497ce3@kw.quanstro.net>
References: <455f59971ace96897640df2bff497ce3@kw.quanstro.net>
From: Jorden M
Date: Mon, 3 May 2010 14:34:27 -0400
Message-ID:
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Subject: Re: [9fans] du and find
Topicbox-Message-UUID: 19049f50-ead6-11e9-9d60-3106f5b1d025

On Mon, May 3, 2010 at 10:53 AM, erik quanstrom wrote:
>> It's always been easier for me to use python's/perl's regular
>> expressions when I needed to process a text file than to use plan9's.
>> For simple things, e.g. while editing an ordinary text in acme/sam,
>> plan9's regexps are just fine.
>
> i find it hard to think of cases where i would need
> such sophistication and where tokenization or
> tokenization plus parsing wouldn't be a better idea.

A lot of the `sophisticated' Perl I've seen uses some horrible regexes
when the job would have been done better and faster by a simple,
job-specific parser. I've yet to work out why this happens so often,
but I think I can narrow it down to a combination of ignorance,
laziness, and perhaps that all-too-frequent assumption: `oh, I can do
this in 10 lines with perl!' I guess by the time you've written half a
parser in line noise, it's too late to quit while you're behind.

> for example, you could write a re to parse the output
> of ls -l and/or ps. but awk '{print $field}' is so much
> easier to write and read.
>
> so in all, i view perl "regular" expressions as a tough sell.
> i think they're harder to write, harder to read, require more
> and more unstable code, and slower.
>
> one could speculate that perl, by encouraging a
> monolithic rather than tools-based approach,
> and cleverness over clarity, made perl expressions
> the logical next step. if so, i question the assumptions.
>
> - erik
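
To make erik's tokenization point concrete, here is a minimal sketch.
The ls -l line below is fabricated for the example, and the field
positions are assumptions (they differ between Plan 9's and Unix's
ls), so treat the field numbers as illustrative, not as a recipe:

```shell
# Simulated ls -l output line, so the example is self-contained.
# Real field positions depend on which ls produced the line.
line='--rw-rw-r-- M 9 glenda glenda 512 May  3 14:34 regex.txt'

# awk tokenizes on whitespace for free: $6 is the size column here,
# $NF is the last field (the file name). No regex required.
echo "$line" | awk '{print $6, $NF}'
# prints: 512 regex.txt
```

The same extraction as a regular expression would have to re-describe
the whitespace structure of every column to its left, which is exactly
the redundancy awk's field splitting avoids.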