Computer Old Farts Forum
 help / color / mirror / Atom feed
* [COFF] Old and Tradition was [TUHS] V9 shell
@ 2020-02-11 18:41 clemc
       [not found] ` <CAP2nic2C4-m_Epcx7rbW2ssbS850ZFiLKiL+hg1Wxbcwoaa1vQ@mail.gmail.com>
  0 siblings, 1 reply; 26+ messages in thread
From: clemc @ 2020-02-11 18:41 UTC (permalink / raw)


moving to COFF

On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike at gmail.com> wrote:

> My general mood about the current standard way of nerd working is how
> unimaginative and old-fashioned it feels.
>
...
>
> But I'm a grumpy old man and getting far off topic. Warren should cry,
> "enough!".
>
> -rob
>

@Rob - I hear you and I'm sure there is a solid amount of wisdom in your
words.   But I caution that just because something is old-fashioned does
not necessarily make it wrong (much less bad).

I ask you to take a look at the ARCHER statistics on code running in
production (ARCHER is a large HPC site in Europe):
http://archer.ac.uk/status/codes/

I think similar stats are available for places like CERN, LRZ, and some of
the US labs, but these are the ones I know, so I point to them.

Please note that Fortran is #1 (about 80%), followed by C at about 10%, C++
at 8%, Python at 1%, and all the others at 1%.

Why is that?   The math has not changed ... and open up any of those codes
and what do you see:  solving systems of differential equations with linear
algebra.   It's the same math my dad did by hand as a 'computer' in the 1950s.

There are no 'tensor flows' or ML searches running Spark in there.  Sorry,
Google/AWS et al.   Nothing 'modern' and fresh -- just solid, simple science
being done by scientists who don't care about the computer or sexy new
computer languages.

IIRC, you trained as a physicist, so I think you understand their thinking.  *They
care about getting their science done.*

By the way, a related thought comes from a good friend of mine from college
who used to be the Chief Metallurgist for the US Gov (NIST in Colorado).
He's back in the private sector now (because he could not stomach current
American politics), but he made an important observation to me a couple of
years ago.  They have 60+ years of metallurgical data that he and his peeps
have been using with known Fortran codes.   If we gave him new versions of
those analytical programs now in your favorite new HLL - pick one - your Go
(which I love), C++ (which I loathe), DPC++, Rust, Python - whatever, the
scientists would have to reconfirm previous results.  They are not going to
do that.  It's not economical.   They 'know' how the data works, the types
of errors they have, how the programs behave, *etc.*

So to me, the bottom line is that just because it's old-fashioned does not
make it bad.  I don't want to write an OS in Fortran-2018, but I can write
a system that supports code compiled with my sexy new Fortran-2018 compiler.

That is to say, the challenge for >>me<< is to build him a new
supercomputer that can run those codes for him, not change what they are
doing, and have them scale to 1M nodes, *etc.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200211/5fd66145/attachment.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd:  Old and Tradition was [TUHS] V9 shell
       [not found] ` <CAP2nic2C4-m_Epcx7rbW2ssbS850ZFiLKiL+hg1Wxbcwoaa1vQ@mail.gmail.com>
@ 2020-02-12  0:57   ` athornton
  2020-02-12  1:58     ` clemc
  0 siblings, 1 reply; 26+ messages in thread
From: athornton @ 2020-02-12  0:57 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 4710 bytes --]

---------- Forwarded message ---------
From: Adam Thornton <athornton at gmail.com>
Date: Tue, Feb 11, 2020 at 5:56 PM
Subject: Re: [COFF] Old and Tradition was [TUHS] V9 shell
To: Clem Cole <clemc at ccc.com>


As someone working in this area right now....yeah, and that’s why
traditional HPC centers do not deliver the services we want for projects
like the Vera Rubin Observatory’s Legacy Survey Of Space and Time.

Almost all of our scientist-facing code is Python, though a lot of the
performance-critical stuff is implemented in C++ with Python bindings.

The incumbents are great at running data centers like they did in 1993.
That’s not the best fit for our current workload.  It’s not generally the
compute that needs to be super-fast: it’s access to arbitrary slices
through really large data sets that is the interesting problem for us.

That’s not to say that LFortran isn’t cool.  It really, really is, and
Ondrej Cestik has done amazing work in making modern FORTRAN run in a
Jupyter notebook, and the implications (LLVM becomes the ImageMagick of
compiled languages) are astounding.

But...HPC is no longer the cutting edge.  We are seeing a Kuhnian paradigm
shift in action, and, sure, the old guys (and they are overwhelmingly guys)
who have tenure and get the big grants will never give up FORTRAN which
after all was good enough for their grandpappy and therefore good enough
for them.  But they will retire.  Scaling out is way way cheaper than
scaling up.

On Tue, Feb 11, 2020 at 11:41 AM Clem Cole <clemc at ccc.com> wrote:

> moving to COFF
>
> On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike at gmail.com> wrote:
>
>> My general mood about the current standard way of nerd working is how
>> unimaginative and old-fashioned it feels.
>>
> ...
>>
>> But I'm a grumpy old man and getting far off topic. Warren should cry,
>> "enough!".
>>
>> -rob
>>
>
> @Rob - I hear you and I'm sure there is a solid amount of wisdom in your
> words.   But I caution that just because something is old-fashioned does
> not necessarily make it wrong (much less bad).
>
> I ask you to take a look at the Archer statistics of code running in
> production (Archer large HPC site in Europe):
> http://archer.ac.uk/status/codes/
>
> I think there are similar stats available for places like CERN, LRZ, and
> of the US labs, but I know of these so I point to them.
>
> Please note that Fortran is #1 (about 80%) followed by C @ about 10%,
> C++ @ 8%, Python @ 1% and all the others at 1%.
>
> Why is that?   The math has not changed ... and open up any of those codes
> and what do you see:  solving systems of differential equations with linear
> algebra.   It's the same math my dad did by hand as a 'computer' in the 1950s.
>
> There are no 'tensor flows' or ML searches running Spark in there.  Sorry,
> Google/AWS et al.   Nothing 'modern' and fresh -- just solid simple science
> being done by scientists who don't care about the computer or sexy new
> computer languages.
>
> IIRC, you trained as a physicist, I think you understand their thinking.  *They
> care about getting their science done.*
>
> By the way, a related thought comes from a good friend of mine from
> college who used to be the Chief Metallurgist for the US Gov (NIST in
> Colorado).  He's back in the private sector now (because he could not
> stomach current American politics), but he made an important
> observation/comment to me a couple of years ago.  They have 60+ years of
> metallurgical data that has and his peeps have been using with known
> Fortran codes.   If we gave him new versions of those analytical programs
> now in your favorite new HLL - pick one - your Go (which I love), C++
> (which I loathe), DPC++, Rust, Python - whatever, the scientists would have
> to reconfirm previous results.  They are not going to do that.  It's not
> economical.   They 'know' how the data works, the types of errors they
> have, how the programs behave* etc*.
>
> So to me, the bottom line is just because it's old fashioned does not make
> it bad.  I don't want to write an OS in Fortran-2018, but I can write a
> system that supports code compiled with my sexy new Fortran-2018 compiler.
>
> That is to say, the challenge for >>me<< is to build him a new
> supercomputer that can run those codes for him and not change what they are
> doing and have them scale to 1M nodes *etc*..
>
> _______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200211/bbfdde70/attachment-0001.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12  0:57   ` [COFF] Fwd: " athornton
@ 2020-02-12  1:58     ` clemc
  2020-02-12  3:01       ` lm
  0 siblings, 1 reply; 26+ messages in thread
From: clemc @ 2020-02-12  1:58 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 7339 bytes --]

FWIW, I'm sure your observations are solid and make good sense for you.

But I ask you to consider that the words you used defending your choices
are almost exactly what my CS profs at CMU and UCB said in the 1970s,
except it was Pascal not C++ and Snobol not Python as the scripting front
end.  40 years later Fortran has matured, but as Stu Feldman famously said
in 1977, he did not know what it would look like, but in the future (the
year 2000) we would still call it Fortran.  He was right, and we still do.

Hey, I'm a die-hard systems person, and as I said, C is my primary tool to
do my own job (I don't write in Fortran), but I do understand what pays my
salary (and has for my entire career).  My experience has always been that
scientists (of both sexes) don't care about the computer language nearly as
much as they do about getting their science done and trusting the data sets
that describe it.

Don't take my word for it.  Look at the URL I gave you.  Check out the same
data from the other big sites around the world - USA, UK, EU, JP, CN, RU,
and IN.  I think you'll discover similar graphics.  Fortran has been pretty
much constant at #1.   What has changed over the years is the languages
used to prep the scientific data before running it through the primary HPC
tool.

We can take this offline if you like, but IMO the reason has been, and
continues to be, economics: the existing code base is extremely difficult
to displace.   It's not an old guy / new guy thing.  When you get money
to do your science, it's not cost effective to replace something you trust
just because you don't like the computer language.

Moreover, I know of only one case where a production code originally
written in Fortran was successfully replaced: in grad school years ago, my
old housemate Tom Quarles wrote SPICE3 in C to replace the original SPICE2,
but note that Tom had to do a huge amount of work to maintain the performance.
I also know of a couple of other commercial Mech E CAD codes and one Oil
and Gas code that tried to go to C++ but fell back to the original Fortran.

Anyway, best wishes, but be careful.   Noel's email has much wisdom.  New
is not necessarily better, and old-fashioned is not always a bad thing.

Clem

On Tue, Feb 11, 2020 at 7:57 PM Adam Thornton <athornton at gmail.com> wrote:

>
>
> ---------- Forwarded message ---------
> From: Adam Thornton <athornton at gmail.com>
> Date: Tue, Feb 11, 2020 at 5:56 PM
> Subject: Re: [COFF] Old and Tradition was [TUHS] V9 shell
> To: Clem Cole <clemc at ccc.com>
>
>
> As someone working in this area right now....yeah, and that’s why
> traditional HPC centers do not deliver the services we want for projects
> like the Vera Rubin Observatory’s Legacy Survey Of Space and Time.
>
> Almost all of our scientist-facing code is Python though a lot of the
> performance critical stuff is implemented in C++ with Python bindings.
>
> The incumbents are great at running data centers like they did in 1993.
> That’s not the best fit for our current workload.  It’s not generally the
> compute that needs to be super-fast: it’s access to arbitrary slices
> through really large data sets that is the interesting problem for us.
>
> That’s not to say that LFortran isn’t cool.  It really, really is, and
> Ondrej Cestik has done amazing work in making modern FORTRAN run in a
> Jupyter notebook, and the implications (LLVM becomes the Imagemagick of
> compiled languages) are astounding.
>
> But...HPC is no longer the cutting edge.  We are seeing a Kuhnian paradigm
> shift in action, and, sure, the old guys (and they are overwhelmingly guys)
> who have tenure and get the big grants will never give up FORTRAN which
> after all was good enough for their grandpappy and therefore good enough
> for them.  But they will retire.  Scaling out is way way cheaper than
> scaling up.
>
> On Tue, Feb 11, 2020 at 11:41 AM Clem Cole <clemc at ccc.com> wrote:
>
>> moving to COFF
>>
>> On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike at gmail.com> wrote:
>>
>>> My general mood about the current standard way of nerd working is how
>>> unimaginative and old-fashioned it feels.
>>>
>> ...
>>>
>>> But I'm a grumpy old man and getting far off topic. Warren should cry,
>>> "enough!".
>>>
>>> -rob
>>>
>>
>> @Rob - I hear you and I'm sure there is a solid amount of wisdom in your
>> words.   But I caution that just because something is old-fashioned does
>> not necessarily make it wrong (much less bad).
>>
>> I ask you to take a look at the Archer statistics of code running in
>> production (Archer large HPC site in Europe):
>> http://archer.ac.uk/status/codes/
>>
>> I think there are similar stats available for places like CERN, LRZ, and
>> of the US labs, but I know of these so I point to them.
>>
>> Please note that Fortran is #1 (about 80%) followed by C @ about 10%,
>> C++ @ 8%, Python @ 1% and all the others at 1%.
>>
>> Why is that?   The math has not changed ... and open up any of those
>> codes and what do you see:  solving systems of differential equations with
>> linear algebra.   It's the same math my dad did by hand as a 'computer' in the
>> 1950s.
>>
>> There are no 'tensor flows' or ML searches running Spark in there.
>> Sorry, Google/AWS et al.   Nothing 'modern' and fresh -- just solid simple
>> science being done by scientists who don't care about the computer or sexy
>> new computer languages.
>>
>> IIRC, you trained as a physicist, I think you understand their thinking.  *They
>> care about getting their science done.*
>>
>> By the way, a related thought comes from a good friend of mine from
>> college who used to be the Chief Metallurgist for the US Gov (NIST in
>> Colorado).  He's back in the private sector now (because he could not
>> stomach current American politics), but he made an important
>> observation/comment to me a couple of years ago.  They have 60+ years of
>> metallurgical data that he and his peeps have been using with known
>> Fortran codes.   If we gave him new versions of those analytical programs
>> now in your favorite new HLL - pick one - your Go (which I love), C++
>> (which I loathe), DPC++, Rust, Python - whatever, the scientists would have
>> to reconfirm previous results.  They are not going to do that.  It's not
>> economical.   They 'know' how the data works, the types of errors they
>> have, how the programs behave* etc*.
>>
>> So to me, the bottom line is just because it's old fashioned does not
>> make it bad.  I don't want to write an OS in Fortran-2018, but I can write
>> a system that supports code compiled with my sexy new Fortran-2018 compiler.
>>
>> That is to say, the challenge for >>me<< is to build him a new
>> supercomputer that can run those codes for him and not change what they are
>> doing and have them scale to 1M nodes *etc*..
>>
>> _______________________________________________
>> COFF mailing list
>> COFF at minnie.tuhs.org
>> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>>
> _______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>
-- 
Sent from a handheld expect more typos than usual
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200211/46f7986f/attachment.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12  1:58     ` clemc
@ 2020-02-12  3:01       ` lm
  2020-02-12 18:12         ` clemc
  2020-02-24  9:40         ` ik
  0 siblings, 2 replies; 26+ messages in thread
From: lm @ 2020-02-12  3:01 UTC (permalink / raw)


I'm following this with some interest, though I don't have a horse in
this race, hell, I'm not even at the race track.  But still, interesting.

What little Fortran background I have suggests that the difference
might be mind set.  Fortran programmers are formally trained (at least I
was; there was a whole semester devoted to this) in accumulated errors.
You did a deep dive into how to code stuff so that the error was reduced
each time instead of increased.  It has a lot to do with how floating
point works; it's not exact the way integers are.

For the record, I *hate* floating point.  For numerical work in C I have
always tried to have "domains": these 32 bits cover the domain from
0..2^32-1, the next 32 bits cover that same range scaled by 2^32, etc.
It's more work because you have to translate across domains, but it
is exact.  Floating point lets you cheat, but the price of cheating is
the accumulated error.

Maybe I'm just going off of my 35-year-old training, but it seems to me
that Fortran people pretty much live in floating point and get what you
have to do to make that work.  C / Python / systems people "understand"
floating point but don't understand what you have to do to make it work.
They think they do, but they mostly don't; they just haven't hit the
accumulated-error problem and wouldn't recognize it if they did.

What do you guys think?

On Tue, Feb 11, 2020 at 08:58:50PM -0500, Clem Cole wrote:
> Fwiw.  I'm sure your observations are solid and do make some good sense for
> you.
> 
> But I ask you to consider that the words you used defending your choices
> are almost exactly what my CS profs at CMU and UCB said in the 1970s,
> except it was Pascal not C++ and Snobol not Python as the scripting front
> end.  40 years later Fortran has matured, but as Stu Feldman famously said
> in 1977, he did not know what it would look like, but in the future (the
> year 2000) we would still call it Fortran.  He was right, and we still do.
> 
> Hey I'm a die hard systems person  and as I said C is my primary tool to do
> my own job (I don't write in Fortran) but I do understand what pays my
> salary (and has my entire career).  My experience has always been that
> scientists (of both sexes) don't care about the computer language nearly as
> much as they do about getting their science done and trusting the data sets
> that describe it.
> 
> Don't take my word for it. Look at the URL I gave you.  Check out the same
> data from the other big sites around the world - USA, UK, EU, JP, CN, RU,
> and IN.  I think you'll discover similar graphics. Fortran has been pretty
> much constant at #1.   What has changed over the years is the languages
> used to prep the scientific data before running it through the primary HPC
> tool.
> 
> We can take this offline if you like, but IMO the reason has been, and
> continues to be, economics: the existing code base is extremely difficult to
> displace.   It's not an old guy / new guy thing.  It's when you get money
> to do your science it's not cost effective to replace something you trust
> just because you don't like the computer language.
> 
> Moreover, I know of only one case where a production code originally
> written in Fortran was successfully replaced - in grad school years ago, my
> old housemate Tom Quarles wrote SPICE3 in C to replace the original SPICE2.
>  but note Tom had to do a huge amount of work to maintain the performance.
> I also know of a couple of other commercial Mech E CAD and one Oil and Gas
> code that tried to go to C++ but fell back to the original Fortran.
> 
> Anyway best wishes but be careful.   Noel's email has much wisdom.  New is
> not necessarily better and old fashioned is not always a bad thing.
> 
> Clem
> 
> On Tue, Feb 11, 2020 at 7:57 PM Adam Thornton <athornton at gmail.com> wrote:
> 
> >
> >
> > ---------- Forwarded message ---------
> > From: Adam Thornton <athornton at gmail.com>
> > Date: Tue, Feb 11, 2020 at 5:56 PM
> > Subject: Re: [COFF] Old and Tradition was [TUHS] V9 shell
> > To: Clem Cole <clemc at ccc.com>
> >
> >
> > As someone working in this area right now....yeah, and that's why
> > traditional HPC centers do not deliver the services we want for projects
> > like the Vera Rubin Observatory's Legacy Survey Of Space and Time.
> >
> > Almost all of our scientist-facing code is Python though a lot of the
> > performance critical stuff is implemented in C++ with Python bindings.
> >
> > The incumbents are great at running data centers like they did in 1993.
> > That's not the best fit for our current workload.  It's not generally the
> > compute that needs to be super-fast: it's access to arbitrary slices
> > through really large data sets that is the interesting problem for us.
> >
> > That's not to say that LFortran isn't cool.  It really, really is, and
> > Ondrej Cestik has done amazing work in making modern FORTRAN run in a
> > Jupyter notebook, and the implications (LLVM becomes the Imagemagick of
> > compiled languages) are astounding.
> >
> > But...HPC is no longer the cutting edge.  We are seeing a Kuhnian paradigm
> > shift in action, and, sure, the old guys (and they are overwhelmingly guys)
> > who have tenure and get the big grants will never give up FORTRAN which
> > after all was good enough for their grandpappy and therefore good enough
> > for them.  But they will retire.  Scaling out is way way cheaper than
> > scaling up.
> >
> > On Tue, Feb 11, 2020 at 11:41 AM Clem Cole <clemc at ccc.com> wrote:
> >
> >> moving to COFF
> >>
> >> On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike at gmail.com> wrote:
> >>
> >>> My general mood about the current standard way of nerd working is how
> >>> unimaginative and old-fashioned it feels.
> >>>
> >> ...
> >>>
> >>> But I'm a grumpy old man and getting far off topic. Warren should cry,
> >>> "enough!".
> >>>
> >>> -rob
> >>>
> >>
> >> @Rob - I hear you and I'm sure there is a solid amount of wisdom in your
> >> words.   But I caution that just because something is old-fashioned does
> >> not necessarily make it wrong (much less bad).
> >>
> >> I ask you to take a look at the Archer statistics of code running in
> >> production (Archer large HPC site in Europe):
> >> http://archer.ac.uk/status/codes/
> >>
> >> I think there are similar stats available for places like CERN, LRZ, and
> >> of the US labs, but I know of these so I point to them.
> >>
> >> Please note that Fortran is #1 (about 80%) followed by C @ about 10%,
> >> C++ @ 8%, Python @ 1% and all the others at 1%.
> >>
> >> Why is that?   The math has not changed ... and open up any of those
> >> codes and what do you see:  solving systems of differential equations with
> >> linear algebra.   It's the same math my dad did by hand as a 'computer' in the
> >> 1950s.
> >>
> >> There are no 'tensor flows' or ML searches running Spark in there.
> >> Sorry, Google/AWS et al.   Nothing 'modern' and fresh -- just solid simple
> >> science being done by scientists who don't care about the computer or sexy
> >> new computer languages.
> >>
> >> IIRC, you trained as a physicist, I think you understand their thinking.  *They
> >> care about getting their science done.*
> >>
> >> By the way, a related thought comes from a good friend of mine from
> >> college who used to be the Chief Metallurgist for the US Gov (NIST in
> >> Colorado).  He's back in the private sector now (because he could not
> >> stomach current American politics), but he made an important
> >> observation/comment to me a couple of years ago.  They have 60+ years of
> >> metallurgical data that he and his peeps have been using with known
> >> Fortran codes.   If we gave him new versions of those analytical programs
> >> now in your favorite new HLL - pick one - your Go (which I love), C++
> >> (which I loathe), DPC++, Rust, Python - whatever, the scientists would have
> >> to reconfirm previous results.  They are not going to do that.  It's not
> >> economical.   They 'know' how the data works, the types of errors they
> >> have, how the programs behave* etc*.
> >>
> >> So to me, the bottom line is just because it's old fashioned does not
> >> make it bad.  I don't want to write an OS in Fortran-2018, but I can write
> >> a system that supports code compiled with my sexy new Fortran-2018 compiler.
> >>
> >> That is to say, the challenge for >>me<< is to build him a new
> >> supercomputer that can run those codes for him and not change what they are
> >> doing and have them scale to 1M nodes *etc*..
> >>
> >> _______________________________________________
> >> COFF mailing list
> >> COFF at minnie.tuhs.org
> >> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
> >>
> > _______________________________________________
> > COFF mailing list
> > COFF at minnie.tuhs.org
> > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
> >
> -- 
> Sent from a handheld expect more typos than usual

> _______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff


-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12  3:01       ` lm
@ 2020-02-12 18:12         ` clemc
  2020-02-12 21:55           ` cym224
                             ` (2 more replies)
  2020-02-24  9:40         ` ik
  1 sibling, 3 replies; 26+ messages in thread
From: clemc @ 2020-02-12 18:12 UTC (permalink / raw)


On Tue, Feb 11, 2020 at 10:01 PM Larry McVoy <lm at mcvoy.com> wrote:

> What little Fortran background I have suggests that the difference
> might be mind set.  Fortran programmers are formally trained (at least I
> was, there was a whole semester devoted to this) in accumulated errors.
> You did a deep dive into how to code stuff so that the error was reduced
> each time instead of increased.  It has a lot to do with how floating
> point works, it's not exact like integers are.

Just a thought, but it might also be the training.   My Dad (a
mathematician and 'computer') passed a few years ago; I'd love to have
asked him.   But I suspect that when he and his peeps were doing this with
a slide rule, or at best a Friden mechanical adding machine, they were
acutely aware of how errors accumulated or not.  When they started to
convert their processes/techniques to Fortran in the early 1960s, I agree
with you that they were conscious of what they were doing.   I'm not sure
modern CS types are taught the same things as what might be taught in a
course run by a pure scientist who cares in the same way folks like our
mothers and fathers did in the 1950s and '60s.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200212/00cc0f93/attachment.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12 18:12         ` clemc
@ 2020-02-12 21:55           ` cym224
  2020-02-12 22:11           ` imp
  2020-02-16 21:47           ` wobblygong
  2 siblings, 0 replies; 26+ messages in thread
From: cym224 @ 2020-02-12 21:55 UTC (permalink / raw)


On 12/02/2020, Clem Cole <clemc at ccc.com> wrote (in part):

> But I suspect when he and his peeps were doing this with a
> slide rule or at best a Friden mechanical adding machine, they were
> acutely aware of how errors accumulated or not.

This being COFF, I feel justified in chiming in with a story from my
grad student days.

A fellow student was working on something -- I have forgotten the
actual problem but studying some behaviour near a singularity -- and
he wanted some numerical values.  So he obtained Fortran code from a
grad student in physics.  The result blew up in his region of
interest.  The person who gave him the code had no interest in
investigating because he was happy with the results in his region.
(After months of toil, he found serious approximation errors.)

N.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12 18:12         ` clemc
  2020-02-12 21:55           ` cym224
@ 2020-02-12 22:11           ` imp
  2020-02-12 22:45             ` sauer
  2020-02-13  6:57             ` [COFF] Fwd: " peter
  2020-02-16 21:47           ` wobblygong
  2 siblings, 2 replies; 26+ messages in thread
From: imp @ 2020-02-12 22:11 UTC (permalink / raw)


On Wed, Feb 12, 2020, 11:13 AM Clem Cole <clemc at ccc.com> wrote:

>
>
> On Tue, Feb 11, 2020 at 10:01 PM Larry McVoy <lm at mcvoy.com> wrote:
>
>> What little Fortran background I have suggests that the difference
>> might be mind set.  Fortran programmers are formally trained (at least I
>> was, there was a whole semester devoted to this) in accumulated errors.
>> You did a deep dive into how to code stuff so that the error was reduced
>> each time instead of increased.  It has a lot to do with how floating
>> point works, it's not exact like integers are.
>
> Just a thought, but it might also be the training.   My Dad (a
> mathematician and 'computer') passed a few years ago, I'd love to have
> asked him.   But I suspect when he and his peeps were doing this with a
> slide rule or at best a Friden mechanical adding machine, they were
> acutely aware of how errors accumulated or not.  When they started to
> convert their processes/techniques to Fortran in the early 1960s, I agree
> with you that I think they were conscious of what they were doing.   I'm
> not sure modern CS types are taught the same things as what might be taught
> in a course being run by a pure scientist who cares in the same way folks
> like our mothers and fathers did in the 1950s and 60s.
>

Most CS types barely know that 2.234 might not be an exact number when
converted to binary... A few, however, can do sophisticated analysis on the
average ULP for complex functions over the expected range.

Warner

_______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200212/25b6a976/attachment.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12 22:11           ` imp
@ 2020-02-12 22:45             ` sauer
  2020-02-12 23:05               ` lm
  2020-02-13  6:57             ` [COFF] Fwd: " peter
  1 sibling, 1 reply; 26+ messages in thread
From: sauer @ 2020-02-12 22:45 UTC (permalink / raw)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 2348 bytes --]



On 2/12/2020 4:11 PM, Warner Losh wrote:
> 
> 
> On Wed, Feb 12, 2020, 11:13 AM Clem Cole <clemc at ccc.com 
> <mailto:clemc at ccc.com>> wrote:
> 
> 
> 
>     On Tue, Feb 11, 2020 at 10:01 PM Larry McVoy <lm at mcvoy.com
>     <mailto:lm at mcvoy.com>> wrote:
> 
>         What little Fortran background I have suggests that the difference
>         might be mind set.  Fortran programmers are formally trained (at
>         least I
>         was, there was a whole semester devoted to this) in accumulated
>         errors.
>         You did a deep dive into how to code stuff so that the error was
>         reduced
>         each time instead of increased.  It has a lot to do with how
>         floating
>         point works, it's not exact like integers are. 
> 
>     Just a thought, but it might also be the training.   My Dad (a
>     mathematician and 'computer') passed a few years ago, I'd love to
>     have asked him.   But I suspect when he and his peeps were doing
>     this with a slide rule or at best an Friden mechanical adding
>     machine, they were acutely aware of how errors accumulated or not. 
>     When they started to convert their processes/techniques to Fortran
>     in the early 1960s, I agree with you that I think they were
>     conscious of what they were doing.   I'm not sure modern CS types
>     are taught the same things as what might be taught in a course being
>     run by a pure scientist who cares in the same way folks like our
>     mothers and fathers did in the 1950s and 60s.
> 
> 
> Most cs types barely know that 2.234 might not be an exact number when 
> converted to binary... A few, however can do sophisticated analysis on 
> the average ULP for complex functions over the expected range..

If that is true of some today, that is sad and disappointing. I think I 
was taught otherwise in my beginning C.S. course at UT-Austin in 1971.

If I recall correctly:
- all doctoral candidates ended up taking two semesters of numerical 
analysis. I still have the two-volume n.a. text in the attic (orange, but
not "burnt orange", IIRC).
- numerical analysis was covered on the doctoral qualifying exam.

-- 
voice: +1.512.784.7526       e-mail: sauer at technologists.com
fax: +1.512.346.5240         Web: https://technologists.com/sauer/
Facebook/Google/Skype/Twitter: CharlesHSauer


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12 22:45             ` sauer
@ 2020-02-12 23:05               ` lm
  2020-02-12 23:54                 ` [COFF] floating point (Re: " bakul
  0 siblings, 1 reply; 26+ messages in thread
From: lm @ 2020-02-12 23:05 UTC (permalink / raw)


On Wed, Feb 12, 2020 at 04:45:54PM -0600, Charles H Sauer wrote:
> If I recall correctly:
> - all doctoral candidates ended up taking two semesters of numerical
> analysis. I still have two volume n.a. text in the attic (orange, but not
> "burnt orange", IIRC).
> - numerical analysis was covered on the doctoral qualifying exam.

Pretty sure Madison required that stuff for Masters degrees.  Or maybe
undergrad, I feel like I took that stuff pretty early on.

I'm very systems oriented so I can't imagine I would have taken that 
willingly.  I hate the whole idea of floating point, it just seems so 
error prone.  


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] floating point (Re:  Old and Tradition was [TUHS] V9 shell
  2020-02-12 23:05               ` lm
@ 2020-02-12 23:54                 ` bakul
  2020-02-12 23:56                   ` bakul
  2020-02-13  1:21                   ` toby
  0 siblings, 2 replies; 26+ messages in thread
From: bakul @ 2020-02-12 23:54 UTC (permalink / raw)


On Feb 12, 2020, at 3:05 PM, Larry McVoy <lm at mcvoy.com> wrote:
> 
> On Wed, Feb 12, 2020 at 04:45:54PM -0600, Charles H Sauer wrote:
>> If I recall correctly:
>> - all doctoral candidates ended up taking two semesters of numerical
>> analysis. I still have two volume n.a. text in the attic (orange, but not
>> "burnt orange", IIRC).
>> - numerical analysis was covered on the doctoral qualifying exam.
> 
> Pretty sure Madison required that stuff for Masters degrees.  Or maybe
> undergrad, I feel like I took that stuff pretty early on.
> 
> I'm very systems oriented so I can't imagine I would have taking that 
> willingly.  I hate the whole idea of floating point, just seems so 
> error prone.

David Goldberg's article "What every computer scientist should know
about floating-point arithmetic" is a good one to read:
https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf

I still have Bill Press's Numerical Recipes book, though I haven't opened
it recently (as in not since the '80s)!

It is interesting that older languages such as Lisp & APL have a
built-in concept of tolerance. There, 0.3 < 0.1 + 0.2 is false, but
in most modern languages it is true! This is so since 0.1 + 0.2 is
0.30000000000000004. In Fortran you'd write something like 
abs(0.3 - (0.1 + 0.2)) > tolerance. You can do the same in C
etc., but for some reason it seems to be uncommon :-)
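In Python, for instance (my sketch of the tolerance idiom described above;
the tolerance value is arbitrary):

```python
import math

a = 0.1 + 0.2
print(a)         # 0.30000000000000004 on IEEE doubles
print(a > 0.3)   # True: the naive comparison trips over rounding

# The Fortran-style idiom sketched above:
tolerance = 1e-9
print(abs(0.3 - (0.1 + 0.2)) > tolerance)   # False: within tolerance

# Python's stdlib also has this built in (relative tolerance by default):
print(math.isclose(0.1 + 0.2, 0.3))         # True
```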


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] floating point (Re: Old and Tradition was [TUHS] V9 shell
  2020-02-12 23:54                 ` [COFF] floating point (Re: " bakul
@ 2020-02-12 23:56                   ` bakul
  2020-02-13  1:21                   ` toby
  1 sibling, 0 replies; 26+ messages in thread
From: bakul @ 2020-02-12 23:56 UTC (permalink / raw)




> On Feb 12, 2020, at 3:54 PM, Bakul Shah <bakul at bitblocks.com> wrote:
> 
> On Feb 12, 2020, at 3:05 PM, Larry McVoy <lm at mcvoy.com> wrote:
>> 
>> On Wed, Feb 12, 2020 at 04:45:54PM -0600, Charles H Sauer wrote:
>>> If I recall correctly:
>>> - all doctoral candidates ended up taking two semesters of numerical
>>> analysis. I still have two volume n.a. text in the attic (orange, but not
>>> "burnt orange", IIRC).
>>> - numerical analysis was covered on the doctoral qualifying exam.
>> 
>> Pretty sure Madison required that stuff for Masters degrees.  Or maybe
>> undergrad, I feel like I took that stuff pretty early on.
>> 
>> I'm very systems oriented so I can't imagine I would have taking that 
>> willingly.  I hate the whole idea of floating point, just seems so 
>> error prone.
> 
> David Goldberg's article "What every computer scientist should know
> about floating-point arithmetic" is a good one to read:
> https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf
> 
> I still have Bill Press's Numerical Recipes book though not opened
> recently (as in not since '80s)!
> 
> It is interesting that older languages such as Lisp & APL have a
> builtin concept of tolerance. Here 0.3 < 0.1 + 0.2 is false. But
> in most modern languages it is true! This is so since 0.1 + 0.2 is
> 0.30000000000000004. In Fortran you'd write something like 
> abs(0.3 - (0.1 + 0.2)) > tolerance. You can do the same in C
> etc.but for some reason it seems to be uncommon :-)

Er... not right. Fl. pt. arithmetic is hard!



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] floating point (Re: Old and Tradition was [TUHS] V9 shell
  2020-02-12 23:54                 ` [COFF] floating point (Re: " bakul
  2020-02-12 23:56                   ` bakul
@ 2020-02-13  1:21                   ` toby
  1 sibling, 0 replies; 26+ messages in thread
From: toby @ 2020-02-13  1:21 UTC (permalink / raw)


On 2020-02-12 6:54 PM, Bakul Shah wrote:
> On Feb 12, 2020, at 3:05 PM, Larry McVoy <lm at mcvoy.com> wrote:
>>
>> On Wed, Feb 12, 2020 at 04:45:54PM -0600, Charles H Sauer wrote:
>>> If I recall correctly:
>>> - all doctoral candidates ended up taking two semesters of numerical
>>> analysis. I still have two volume n.a. text in the attic (orange, but not
>>> "burnt orange", IIRC).
>>> - numerical analysis was covered on the doctoral qualifying exam.
>>
>> Pretty sure Madison required that stuff for Masters degrees.  Or maybe
>> undergrad, I feel like I took that stuff pretty early on.
>>
>> I'm very systems oriented so I can't imagine I would have taking that 
>> willingly.  I hate the whole idea of floating point, just seems so 
>> error prone.
> 
> David Goldberg's article "What every computer scientist should know
> about floating-point arithmetic" is a good one to read:
> https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf
> 
> I still have Bill Press's Numerical Recipes book though not opened
> recently (as in not since '80s)!
> 
> It is interesting that older languages such as Lisp & APL have a
> builtin concept of tolerance. Here 0.3 < 0.1 + 0.2 is false. But


Mathematica also does significance tracking. It is probably the single
most egregious omission from IEEE FP and unfortunately not corrected in
the new standard.

Check Steve Richfield's old posts on Usenet for more.

--Toby

> in most modern languages it is true! This is so since 0.1 + 0.2 is
> 0.30000000000000004. In Fortran you'd write something like 
> abs(0.3 - (0.1 + 0.2)) > tolerance. You can do the same in C
> etc.but for some reason it seems to be uncommon :-)





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12 22:11           ` imp
  2020-02-12 22:45             ` sauer
@ 2020-02-13  6:57             ` peter
  1 sibling, 0 replies; 26+ messages in thread
From: peter @ 2020-02-13  6:57 UTC (permalink / raw)


On 2020-Feb-12 15:11:27 -0700, Warner Losh <imp at bsdimp.com> wrote:
>Most cs types barely know that 2.234 might not be an exact number when
>converted to binary...

My favourite example (based on a real bug that I was involved in
investigating) is: int(0.29 * 100) = 28 (using IEEE doubles).  That result
is very counter-intuitive.  (Detailed explanation available on request).
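A short Python sketch of the mechanism, for the curious (same IEEE doubles
underneath):

```python
import math

x = 0.29 * 100
# The nearest double to 0.29 is slightly below it, so the product lands
# just under 29:
print(x)              # 28.999999999999996
print(int(x))         # 28 -- truncation exposes the tiny deficit
print(math.floor(x))  # 28 as well
print(round(x))       # 29 -- rounding recovers the intended value
```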

In some ways IEEE-754 has made things worse - before that standard, almost
every computer system implemented something different.  Floating point was
well known to be "here be dragons" territory and people who ventured there
were either foolhardy or had enough numerical analysis training to be able
to understand (and cope with) what the computer was doing.

IEEE-754 offered a seeming nirvana - where the same inputs would result in
the same outputs (down to the bit) irrespective of what computer you ran
your code on.  The underlying problems were still present but, at least in
simple cases, were masked.  Why employ an experienced numerical analyst to
develop the code when a high-school student can translate the mathematical
equations into running code in the language-du-jour?  (And some languages,
like Java, make a virtue out of hiding all the information needed to
properly handle exceptional conditions).

-- 
Peter Jeremy


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12 18:12         ` clemc
  2020-02-12 21:55           ` cym224
  2020-02-12 22:11           ` imp
@ 2020-02-16 21:47           ` wobblygong
  2020-02-16 22:10             ` clemc
  2 siblings, 1 reply; 26+ messages in thread
From: wobblygong @ 2020-02-16 21:47 UTC (permalink / raw)


I think it's implicit with fooling around with slide rules. Everything
is logarithmic, therefore imprecise beyond a certain level (floating
point number or iteration, it's the same problem). You learn to
approximate, within a certain level of confidence (or diffidence :).

(FWVLIW - I bought a Dover book on slide rule in the late 70s while at
high school, and shortly after, a real slide rule, and it's stuck with
me.)

Wesley Parish

On 2/13/20, Clem Cole <clemc at ccc.com> wrote:
> On Tue, Feb 11, 2020 at 10:01 PM Larry McVoy <lm at mcvoy.com> wrote:
>
>> What little Fortran background I have suggests that the difference
>> might be mind set.  Fortran programmers are formally trained (at least I
>> was, there was a whole semester devoted to this) in accumulated errors.
>> You did a deep dive into how to code stuff so that the error was reduced
>> each time instead of increased.  It has a lot to do with how floating
>> point works, it's not exact like integers are.
>
> Just a thought, but it might also be the training.   My Dad (a
> mathematician and 'computer') passed a few years ago, I'd love to have
> asked him.   But I suspect when he and his peeps were doing this with a
> slide rule or at best an Friden mechanical adding machine, they were
> acutely aware of how errors accumulated or not.  When they started to
> convert their processes/techniques to Fortran in the early 1960s, I agree
> with you that I think they were conscious of what they were doing.   I'm
> not sure modern CS types are taught the same things as what might be taught
> in a course being run by a pure scientist who cares in the same way folks
> like our mothers and fathers did in the 1950s and 60s.
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-16 21:47           ` wobblygong
@ 2020-02-16 22:10             ` clemc
  2020-02-16 22:45               ` krewat
  2020-02-16 23:50               ` bakul
  0 siblings, 2 replies; 26+ messages in thread
From: clemc @ 2020-02-16 22:10 UTC (permalink / raw)


On Sun, Feb 16, 2020 at 4:47 PM Wesley Parish <wobblygong at gmail.com> wrote:

> FWVLIW - I bought a Dover book on slide rule in the late 70s while at
> high school, and shortly after, a real slide rule, and it's stuck with
> me.
>
My dad taught me with a plastic slide rule he put in our stocking in the
early/mid 1960s.   This was how I (and many others) learned about
interpolation.   I also learned to make log/log paper with it.  A few years
later, my grandfather died when I was in engineering school. My grandmother
sent me his slide rule to remember him by (which I still have). Although
she did not know it at the time, I already owned the then-hot item, a TI
SR50 scientific calculator - for which I paid $150 in 1972 dollars (about
$900 in today's money).     I also got his drafting table, but I no longer
have that.   The slide rule is made of ivory on top of metal (I think
bronze, but I never had it checked).  It was probably made in the 1920s.  It
stays in a box in my desk ;-)

A slightly sad part is that I don't think either of my kids knows how to use
it, and while both have degrees in science, I don't think either wants it.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-16 22:10             ` clemc
@ 2020-02-16 22:45               ` krewat
  2020-02-16 23:50               ` bakul
  1 sibling, 0 replies; 26+ messages in thread
From: krewat @ 2020-02-16 22:45 UTC (permalink / raw)


On 2/16/2020 5:10 PM, Clem Cole wrote:
> This was how I (and many others) learn about interpolation

I learned about interpolation in either middle school or high school 
math. And used it many times in video games I had my hands in. I was a 
year or two ahead (depending on which year) in math.

Create an array (or a cache at startup) with all the logs I needed, and 
then interpolate on the fly. And do the reverse for the alog().

Quite handy.
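Something like this, in Python (a hypothetical reconstruction of the trick;
the table size and range are made up, not from the original game code):

```python
import math

# Precompute a coarse log10 table at "startup" (resolution is made up):
N = 64
step = 9.0 / N                        # cover arguments in [1, 10)
logs = [math.log10(1.0 + i * step) for i in range(N + 1)]

def table_log10(x):
    """log10 for 1 <= x < 10 via linear interpolation into the table."""
    t = (x - 1.0) / step
    i = min(int(t), N - 1)
    frac = t - i
    return logs[i] + frac * (logs[i + 1] - logs[i])

# The reverse (the alog) could interpolate the same table the other way.
print(abs(table_log10(3.7) - math.log10(3.7)))  # small interpolation error
```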

As for slide-rules, I remember my father had one, and used it regularly. 
I never even looked at it. He died when I was 16. A few years later, I 
took it out of its case, looked at it, and thought "ah! logs". I had 
never known what a slide-rule was. And I was born in 1965 ;) They 
certainly didn't teach us about slide rules in school.

ak

PS: Take the log of a number, divide by two, take the alog() and get the 
sqrt(). Divide by 3, cube root, etc. I was impressed, to say the least.
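In modern terms, a Python sketch of the same trick (on the slide rule the
log and antilog are just positions on the scale):

```python
import math

x = 16.0
# Halve the log, take the antilog: square root.
print(math.exp(math.log(x) / 2))   # ~4.0
# Divide the log by 3, take the antilog: cube root.
print(math.exp(math.log(x) / 3))   # ~2.5198
```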




^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-16 22:10             ` clemc
  2020-02-16 22:45               ` krewat
@ 2020-02-16 23:50               ` bakul
  2020-02-18  0:17                 ` dave
  1 sibling, 1 reply; 26+ messages in thread
From: bakul @ 2020-02-16 23:50 UTC (permalink / raw)


On Sun, 16 Feb 2020 17:10:03 -0500 Clem Cole <clemc at ccc.com> wrote:
>
> On Sun, Feb 16, 2020 at 4:47 PM Wesley Parish <wobblygong at gmail.com> wrote:
>
> > FWVLIW - I bought a Dover book on slide rule in the late 70s while at
> > high school, and shortly after, a real slide rule, and it's stuck with
> > me.
> >
> My dad taught me with a plastic slide rule he put in our stocking in the
> early/mid 1960s.   This was how I (and many others) learn about
> interpolation.   I also learned to make log/log paper with it.  A few years
> later, my grandfather died when I was in engineering school. My grandmother
> sent me his slide rule to remember him by (which I still have). Although
> she did not know that at the time, I already owned the then hot item, a TI
> SR50 scientific calculator - which I paid the $150 in 1972 dollars (about
> $900 in today's money).     I also got his drafting table, but I no longer
> have that.   The slide rule is made of ivory on top of metal (I think
> bronze but I never had it checked).  It was probably made in the 1920s.  It
> stays in a box in desk ;-)

What brand was your grandfather's sliderule?

I had a yellow Pickett sliderule in my undergrad days. IIRC it
had some sort of aluminum alloy slide and didn't work as
smoothly as an Aristo or a Faber-Castell. A friend had a TI
scientific calculator with RED LED digits -- may have been the
SR50.  Aesthetically it didn't hold a candle to the beautiful
sliderules!

Some sliderule simulators here: https://www.sliderules.org/
(mine looked exactly like the N600)

Online Museum here: https://sliderulemuseum.com/SRM_Home.htm

> A slightly, sad part is I don't think either of my kids knows how to use
> it, and while both have degrees in science, I don't think either wants it.

So it goes!


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-16 23:50               ` bakul
@ 2020-02-18  0:17                 ` dave
  2020-02-18 12:48                   ` jpl.jpl
  0 siblings, 1 reply; 26+ messages in thread
From: dave @ 2020-02-18  0:17 UTC (permalink / raw)


On Sun, 16 Feb 2020, Bakul Shah wrote:

> What brand was your grandfather's sliderule?

Mine was an Aristo, and I had it right through school; I have no idea where it 
is now.  I also had a circular slide rule, which was fascinating.

Trivia: slide rules were banned for exams (and no calculators in those 
days), but log tables were OK; come some important exam, some idiot of a 
teacher forgot to specify that log tables were allowed so they were 
forbidden.  I merely worked the problems through right down to the long 
division, and left them there with a note saying that we were supposed to 
use log tables; I passed...

-- Dave


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-18  0:17                 ` dave
@ 2020-02-18 12:48                   ` jpl.jpl
  0 siblings, 0 replies; 26+ messages in thread
From: jpl.jpl @ 2020-02-18 12:48 UTC (permalink / raw)


My uncle had a cylindrical sliderule. It had about a dozen "rules" that
rotated around the central cylinder, with different gratings on the
opposite side of each rule. And the central cylinder had at least a dozen,
probably 20 or more, gratings. That yielded hundreds of combinations. I
wish he had bequeathed it to me. I have no idea what became of it.

On Mon, Feb 17, 2020 at 7:18 PM Dave Horsfall <dave at horsfall.org> wrote:

> On Sun, 16 Feb 2020, Bakul Shah wrote:
>
> > What brand was your grandfather's sliderule?
>
> Mine was an Arista, and had it right through school; have no idea where it
> is now.  I also had a circular slide rule, which was fascinating.
>
> Trivia: slide rules were banned for exams (and no calculators in those
> days), but log tables were OK; come some important exam, some idiot of a
> teacher forgot to specify that log tables were allowed so they were
> forbidden.  I merely worked the problems through right down to the long
> division, and left them there with a note saying that we were supposed to
> use log tables; I passed...
>
> -- Dave


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12  3:01       ` lm
  2020-02-12 18:12         ` clemc
@ 2020-02-24  9:40         ` ik
  2020-02-24 15:19           ` lm
  1 sibling, 1 reply; 26+ messages in thread
From: ik @ 2020-02-24  9:40 UTC (permalink / raw)


Larry McVoy <lm at mcvoy.com> wrote:
> Fortran programmers are formally trained (at least I
> was, there was a whole semester devoted to this) in accumulated errors.
> You did a deep dive into how to code stuff so that the error was reduced
> each time instead of increased.  It has a lot to do with how floating
> point works, it's not exact like integers are.

I was unaware that there's formal training to be had around this but
it's something I'd like to learn more about. Any recommendations on
materials? I don't mind diving into Fortran itself either.

Background: I've been playing around with actuarial calculations
(pensions, life insurance, etc) which involve lots of accumulation over
time and I'm finding that, unsurprisingly, different implementations of
the same rules yield fairly different outcomes due to accumulated
errors.
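As a toy illustration of the effect (Python, not the actuarial code itself;
the increments are made-up numbers): a naive running sum drifts a little
each step, and a compensated sum does not.

```python
import math

# e.g. a thousand periodic increments of 0.10:
increments = [0.10] * 1000

naive = 0.0
for v in increments:
    naive += v                  # a rounding error accrues at each step

exact = math.fsum(increments)   # correctly-rounded sum of the same values

print(naive)                    # a hair under 100.0
print(exact)                    # 100.0
print(naive - exact)            # the accumulated drift
```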

Sijmen


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-24  9:40         ` ik
@ 2020-02-24 15:19           ` lm
       [not found]             ` <CAP2nic0fK+=eh=5MuY4BJH6zx4tCRMWcazmm1khYMzNmEdf8ug@mail.gmail.com>
  2020-02-24 16:27             ` [COFF] Fwd: Old and Tradition was [TUHS] " clemc
  0 siblings, 2 replies; 26+ messages in thread
From: lm @ 2020-02-24 15:19 UTC (permalink / raw)


On Mon, Feb 24, 2020 at 10:40:10AM +0100, Sijmen J. Mulder wrote:
> Larry McVoy <lm at mcvoy.com> wrote:
> > Fortran programmers are formally trained (at least I
> > was, there was a whole semester devoted to this) in accumulated errors.
> > You did a deep dive into how to code stuff so that the error was reduced
> > each time instead of increased.  It has a lot to do with how floating
> > point works, it's not exact like integers are.
> 
> I was unaware that there's formal training to be had around this but
> it's something I'd like to learn more about. Any recommendations on
> materials? I don't mind diving into Fortran itself either.

My training was 35 years ago, I have no idea where to go look for this
stuff now.  I googled and didn't find much.  I'd go to the local 
University that teaches Fortran and ask around.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] [TUHS]  Fwd: Old and Tradition was V9 shell
       [not found]             ` <CAP2nic0fK+=eh=5MuY4BJH6zx4tCRMWcazmm1khYMzNmEdf8ug@mail.gmail.com>
@ 2020-02-24 16:15               ` clemc
  2020-02-24 16:19                 ` clemc
  0 siblings, 1 reply; 26+ messages in thread
From: clemc @ 2020-02-24 16:15 UTC (permalink / raw)



First, please continue this discussion on COFF (which has been CC'ed).
While Fortran is interesting to many, it is not a UNIX topic per se.

Also, as I have noted in other places, I work for Intel - these comments
are my own and I'm not trying to sell you anything.  Just passing on 45+
years of programming experience.

On Mon, Feb 24, 2020 at 10:34 AM Adam Thornton <athornton at gmail.com> wrote:

> I would think that FORTRAN is likelier to be passed around as folk wisdom
> and ancient PIs (uh, Primary Investigators, not the detective kind)
> thrusting a dog-eared FORTRAN IV manual at their new grad students and
> snarling "RTFM!" than as actual college courses.
>
FWIW: I was at CMU last week recruiting.  Fortran, even at a leading CS
place like CMU, is hardly "folk wisdom". All the science PhDs (Chem, Mat
Sci, Bio, Physics) that I interviewed knew and used Fortran (and listed it
on their CVs) as their primary language for their science.

As I've quipped before, Fortran pays my own (and a lot of other people's)
salaries in the industry.  Check out:
https://www.archer.ac.uk/status/codes/ Fortran is about 90% of the codes
running (FWIW:  I have seen similar statistics from other large HPC sites -
you'll need to poke around).

While I do not write in it, I believe there are three reasons why these
statistics are true and *going to be true for a very long time*:

   1. The math being used has not changed.  Just open up the codes and look
   at what they are doing.  You will find that they are all solving
   systems of partial differential equations using linear algebra (-- see the
   movie:  "Hidden Figures").
   2. 50-75 years of data sets with known qualities and programs to work
   with them.  If you were able to replace the codes magically with something
   'better' (from MATLAB to Julia or Python to Java), all their data would
   have to be requalified (it is like the QWERTY keyboard - that ship
   sailed years ago).
   3. The *scientists want to do their science* for their work, to get their
   degree or prize.   The computer and its programs *are a tool* for them to
   look at data *to do their science*.   They don't care as long as they
   get their work done.


Besides Adam's mention of flang, there is, of course, gfortran; but there
are also commercial compilers available for use:  Qualify for Free Software
| Intel® Software
<https://software.intel.com/en-us/articles/qualify-for-free-software>  I
believe PGI offers something similar, but I have not checked in a while.
Most 'production' codes use a real compiler like Intel's, PGI's or Cray's.

FWIW: the largest number of LLVM developers are at Intel now.  IMO,
while flang is cute, it will be a toy for a while, as the LLVM IL really
cannot handle Fortran easily.  There is a huge project to put a number of
the learnings from the DEC Gem compilers into LLVM, and one piece is gutting
the internal IL and making it work for parallel architectures.  The >>hope<<
by many of my peeps (still unproven) is that at some point the FOSS world
will produce a compiler as good as Gem or the current Intel icc/ifort set.
(Hence, Intel is forced to support 3 different compiler technologies
internally in the technical languages group.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200224/63e7d680/attachment.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] [TUHS]  Fwd: Old and Tradition was V9 shell
  2020-02-24 16:15               ` [COFF] [TUHS] Fwd: Old and Tradition was " clemc
@ 2020-02-24 16:19                 ` clemc
  0 siblings, 0 replies; 26+ messages in thread
From: clemc @ 2020-02-24 16:19 UTC (permalink / raw)



I probably should have added, it's not just the learnings from DEC Gem
folks, but also the old "Kuck and Associates" team formerly in Champaign
(Intel moved them all to Austin).

On Mon, Feb 24, 2020 at 11:15 AM Clem Cole <clemc at ccc.com> wrote:

>
> First please continue this discussion on COFF (which has been CC'ed).
> While Fortran is interesting to many, it not a UNIX topic per say.
>
> Also, as I have noted in other places, I work for Intel - these comments
> are my own and I'm not trying to sell you anything.  Just passing on 45+
> years of programming experience.
>
> On Mon, Feb 24, 2020 at 10:34 AM Adam Thornton <athornton at gmail.com>
> wrote:
>
>> I would think that FORTRAN is likelier to be passed around as folk wisdom
>> and ancient PIs (uh, Primary Investigators, not the detective kind)
>> thrusting a dog-eared FORTRAN IV manual at their new grad students and
>> snarling "RTFM!" than as actual college courses.
>>
> FWIW: I was at CMU last week recruiting.  Fortran, even at a leading CS
> place like CMU, is hardly "folk wisdom". All the science PhD's (Chem, Mat
> Sci, Bio, Physics) that I interviewed all knew and used Fortran (nad listed
> on their CV's) as their primary language for their science.
>
> As I've quipped before, Fortran pays my own (and a lot of other people's
> salaries in the industry).  Check out:
> https://www.archer.ac.uk/status/codes/ Fortran is about 90% of the codes
> running (FWIW:  I have seen similar statistics from other large HPC sites
> - you'll need to poke).
>
> While I do not write in it, I believe there are three reasons why these
> statistics are true and* going to be true for a very long time*:
>
>    1. The math being used has not changed.  Just open up the codes and
>    look at what they are doing.  You will find that they all are all solving
>    systems of partial differential equations using linear algebra (-- see the
>    movie:  "Hidden Figures").
>    2. 50-75 years of data sets with know qualities and programs to work
>    with them.  If you were able to replace the codes magically with something
>    'better' - (from MathLab to Julia or Python to Java) all their data would
>    have to be requalified (it is like the QWERTY keyboard - that shipped
>    sailed years ago).
>    3. The *scientists want to do their science* for their work to get
>    their degree or prize.   The computer and its programs *are a tool*
>    for them look at data *to do their science*.   They don't care as long
>    as they get their work done.
>
>
> Besides Adam's mention of flang, there is, of course, gfortran; but there
> are also commerical compilers available for use:  Qualify for Free
> Software | Intel® Software
> <https://software.intel.com/en-us/articles/qualify-for-free-software>  I
> believe PGI offers something similar, but I have not checked in a while.
> Most 'production' codes use a real compiler like Intel, PGI or Cray's.
>
> FWIW: the largest number of LLVM developers are at Intel now.  IMO,
> while flang is cute, it will be a toy for a while, as the LLVM IL really
> can not handle Fortran easily.  There is a huge project to put a number of
> the learnings from the DEC Gem compilers into LLVM and one piece is gutting
> the internal IL and making work for parallel architectures.  The >>hope<<
> by many of my peeps, (still unproven) is that at some point the FOSS world
> will produce a compiler as good a Gem or the current Intel icc/ifort set.
> (Hence, Intel is forced to support 3 different compiler technologies
> internally in the technical languages group).
>
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-24 15:19           ` lm
       [not found]             ` <CAP2nic0fK+=eh=5MuY4BJH6zx4tCRMWcazmm1khYMzNmEdf8ug@mail.gmail.com>
@ 2020-02-24 16:27             ` clemc
  1 sibling, 0 replies; 26+ messages in thread
From: clemc @ 2020-02-24 16:27 UTC (permalink / raw)


On Mon, Feb 24, 2020 at 10:19 AM Larry McVoy <lm at mcvoy.com> wrote:

> On Mon, Feb 24, 2020 at 10:40:10AM +0100, Sijmen J. Mulder wrote:
> > Larry McVoy <lm at mcvoy.com> wrote:
> > > Fortran programmers are formally trained (at least I
> > > was, there was a whole semester devoted to this) in accumulated errors.
> > > You did a deep dive into how to code stuff so that the error was
> reduced
> > > each time instead of increased.  It has a lot to do with how floating
> > > point works, it's not exact like integers are.
> >
> > I was unaware that there's formal training to be had around this but
> > it's something I'd like to learn more about. Any recommendations on
> > materials? I don't mind diving into Fortran itself either.
>
> My training was 35 years ago, I have no idea where to go look for this
> stuff now.  I googled and didn't find much.  I'd go to the local
> University that teaches Fortran and ask around.
>


   1. Download the Intel Fortran compiler for your Mac, Linux or Windows
   box.  This will also give you a number of tuning tools.   This compiler can
   understand syntax from FORTRAN-66 through Fortran 2018 (and often
   without any switches - it can usually figure it out and do the right thing).
   2. Get a copy of
   https://www.amazon.com/Explained-Numerical-Mathematics-Scientific-Computation/dp/0199601429
   3. Go to the physics and chem depts at a local university and ask
   around, as Larry said.
   4. Or head to a supercomputer center in the US or EU.
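
Larry's point about coding so the error shrinks instead of grows can be
sketched with compensated (Kahan) summation. This is a minimal Python
illustration I've added for readers outside the thread, not from the
original mail; the same technique applies directly in Fortran:

```python
# 0.1 is not exactly representable in binary floating point, so adding
# it repeatedly lets one small rounding error per step pile up.
def naive_sum(xs):
    total = 0.0
    for x in xs:
        total += x          # rounding error accumulates each addition
    return total

def kahan_sum(xs):
    # Kahan compensated summation: carry the low-order bits lost by
    # each addition forward in a correction term instead of dropping them.
    total = 0.0
    c = 0.0
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y  # recovers what the addition just rounded off
        total = t
    return total

data = [0.1] * 1_000_000

naive_err = abs(naive_sum(data) - 100_000.0)
kahan_err = abs(kahan_sum(data) - 100_000.0)
# The compensated sum lands orders of magnitude closer to 100000
# than the naive loop does.
```

This is exactly the sort of "reduce the error each time" discipline the
Fortran numerical-methods training drilled in: same math, different
accumulation order and bookkeeping, very different final error.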
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200224/2d1b4b32/attachment.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
  2020-02-12 16:28 jnc
@ 2020-02-12 18:13 ` clemc
  0 siblings, 0 replies; 26+ messages in thread
From: clemc @ 2020-02-12 18:13 UTC (permalink / raw)


Sorry about that.

On Wed, Feb 12, 2020 at 11:45 AM Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Clem Cole
>
>     > Noel's email has much wisdom. New is not necessarily better and old
>     > fashioned is not always a bad thing.
>
> For those confused by the reference, it's to an email that didn't go to the
> whole list (I was not sure if people would be interested):
>
>     >> One of my favourite sayings (original source unknown; I saw it in
>     >> "Shockwave Rider"): "There are two kinds of fool. One says 'This is
>     >> old, and therefore good'; the other says 'This is new, and therefore
>     >> better'."
>
>         Noel
> _______________________________________________
> COFF mailing list
> COFF at minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/coff/attachments/20200212/6c3c9be7/attachment.html>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [COFF] Fwd: Old and Tradition was [TUHS] V9 shell
@ 2020-02-12 16:28 jnc
  2020-02-12 18:13 ` clemc
  0 siblings, 1 reply; 26+ messages in thread
From: jnc @ 2020-02-12 16:28 UTC (permalink / raw)


    > From: Clem Cole

    > Noel's email has much wisdom. New is not necessarily better and old
    > fashioned is not always a bad thing.

For those confused by the reference, it's to an email that didn't go to the
whole list (I was not sure if people would be interested):

    >> One of my favourite sayings (original source unknown; I saw it in
    >> "Shockwave Rider"): "There are two kinds of fool. One says 'This is
    >> old, and therefore good'; the other says 'This is new, and therefore
    >> better'."

	Noel


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2020-02-24 16:27 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-11 18:41 [COFF] Old and Tradition was [TUHS] V9 shell clemc
     [not found] ` <CAP2nic2C4-m_Epcx7rbW2ssbS850ZFiLKiL+hg1Wxbcwoaa1vQ@mail.gmail.com>
2020-02-12  0:57   ` [COFF] Fwd: " athornton
2020-02-12  1:58     ` clemc
2020-02-12  3:01       ` lm
2020-02-12 18:12         ` clemc
2020-02-12 21:55           ` cym224
2020-02-12 22:11           ` imp
2020-02-12 22:45             ` sauer
2020-02-12 23:05               ` lm
2020-02-12 23:54                 ` [COFF] floating point (Re: " bakul
2020-02-12 23:56                   ` bakul
2020-02-13  1:21                   ` toby
2020-02-13  6:57             ` [COFF] Fwd: " peter
2020-02-16 21:47           ` wobblygong
2020-02-16 22:10             ` clemc
2020-02-16 22:45               ` krewat
2020-02-16 23:50               ` bakul
2020-02-18  0:17                 ` dave
2020-02-18 12:48                   ` jpl.jpl
2020-02-24  9:40         ` ik
2020-02-24 15:19           ` lm
     [not found]             ` <CAP2nic0fK+=eh=5MuY4BJH6zx4tCRMWcazmm1khYMzNmEdf8ug@mail.gmail.com>
2020-02-24 16:15               ` [COFF] [TUHS] Fwd: Old and Tradition was " clemc
2020-02-24 16:19                 ` clemc
2020-02-24 16:27             ` [COFF] Fwd: Old and Tradition was [TUHS] " clemc
2020-02-12 16:28 jnc
2020-02-12 18:13 ` clemc

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).