* [TUHS] long lived programs (was Re: RIP John Backus)
From: Larry McVoy @ 2018-03-23 19:44 UTC
I'm really not worried about it. When I got into kernel programming
I had no idea what I was doing. I just kept at it, did small stuff,
and the bigger picture slowly came into focus. After a couple of years there
wasn't much that I wouldn't take on. I stayed away from drivers because I
had figured out that Sun had their stuff but it was going to be different
than SGI's stuff and both different from PC stuff (even the PC stuff was
different, I would have been working on ISA bus devices and that's long
gone so far as I know). But file systems, networking, VM, processes,
signals, all of that stuff was pretty easy after you got to know the code.
Every year someone takes some young hotshot and points them at some
"impossible" thing and one of them makes it work. I don't see that
changing.
On Fri, Mar 23, 2018 at 12:28:47PM -0700, Bakul Shah wrote:
> On Mar 22, 2018, at 2:35 PM, Clem Cole <clemc at ccc.com> wrote:
> >
> > The question (and a good one) is if you are not 'a Fortran person,' are you going to be able to understand it well enough to not do damage to it, if you modify it? Which is of course the crux of the question Bakul is asking.
>
> This is indeed the case, but I am asking not just about
> Fortran. Will we continue programming in 50-100 years in
> C/C++/Java/Fortran? That is, are we going through the
> Cambrian Explosion of programming languages now, and will it
> settle down in a few decades, or have we just started?
>
> > I suspect it is not as bad as the science fiction movies profess, because the codes have matured over time, which is not what happened with Y2K. Are there 'dusty decks'? Sure - but they are not quite so dusty as one might think, and as importantly, the codes we care about are and have been in continuous use for years. So there has been a new generation of programmers that took over the maintenance of them.
>
> Perhaps a more important question is what % of programs are
> important enough and will be around in 50-100 years.
>
> On Mar 23, 2018, at 3:43 AM, Tim Bradshaw <tfb at tfeb.org> wrote:
> >
> > My experience of large systems like this is that this isn't how they work at all. The program I deal with (which is around 5 million lines naively (counting a lot of stuff which probably is not source but is in the source tree)) is looked after by probably several hundred people. It's been through several major changes in its essential guts and in the next ten years or so it will be entirely replaced by a new version of itself to deal with scaling problems inherent in the current implementation. We get a new machine every few years onto which it needs to be ported, and those machines have not all been just faster versions of the previous one, and will probably never be so.
>
> By now most major systems have been computerized. Banks,
> govt, finance, communication, shipping, various industries,
> research, publishing, medicine etc. Will the critical
> systems within each area have as many resources as & when
> needed as the weather forecasting system Tim is talking about?
> [Of course, the same question can be asked in relation to
> the conversion I am wondering about!]
>
> On Mar 23, 2018, at 8:51 AM, Ron Natalie <ron at ronnatalie.com> wrote:
> >
> > A core package in a lot of the geospatial applications is an old piece of mathematical code originally written in Fortran (probably in the sixties). Someone, probably in the 80's, recoded the thing pretty much line for line (maintaining the horrendous F66 variable names etc.) into C. It's probably ripe for a jump to something else now.
> >
> > We've been through four major generations of the software. The original was all VAX based with specialized hardware (don't know what it was written in). We followed that with a portable UNIX version (mostly Suns, but ours worked on SGI, Ardent, Stellar, various IBM AIX platforms, Apollo DN1000's, HP, DEC Alphas). This was primarily a C application. Then right about the year 2000, we jumped to C++ on Windows. Subsequently it got back-ported to Linux. Yes, there are some modules that have been unchanged for decades, but the system on the whole has been maintained.
>
> I wonder if we will continue doing this sort of ad hoc
> but expensive rewrite for a long time....
>
> [This may be somewhat relevant to TUHS, from a future
> historian's perspective :-)]
--
Larry McVoy  lm at mcvoy.com  http://www.mcvoy.com/lm
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Clem Cole @ 2018-03-23 21:23 UTC
On Fri, Mar 23, 2018 at 3:28 PM, Bakul Shah <bakul at bitblocks.com> wrote:
>
> By now most major systems have been computerized. Banks,
> govt, finance, communication, shipping, various industries,
> research, publishing, medicine etc. Will the critical
> systems within each area have as many resources as & when
> needed as the weather forecasting system Tim is talking about?
> [Of course, the same question can be asked in relation to
> the conversion I am wondering about!]
I suspect we agree more than we disagree.
I offer the following observation. Except for high-end HPC, particularly
DoD, DoE and big-science types of applications, there has been a
'Christensen'-style disruption where a 'worse' technology was created and
loved by a new group of users, and that new technology eventually got better
and replaced (disrupted) the earlier one (Banks/Finance were COBOL - now
it's Oracle and the like, SAP et al.; communications was SS7 over custom HW,
now it's IP running on all sorts of stuff). The key is the disruptor was on
an economic curve that made it successful.
But high-end HPC is the same people, doing the same things they did before
-- the difference is the data sets are larger, the need for better
precision, different data representations (e.g., graphics). Again, the math
has not changed. But I don't see a new customer for that style of
application, which is what is needed to basically bankroll the (originally
less 'good') replacement technology. The economics aren't there to replace
it - at least so far.
The idea of the 'under served' or 'missing middle' market for HPC has been
discussed for a bit. I used to believe it. I'm not so sure now.
Which brings this back to UNIX. Linux is the UNIX disruptor - which is
great. Linux keeps 'UNIX' alive and getting better. I don't see an
economic reason to replace it, but who knows. Maybe that's what the
good folks at Google, Amazon, Intel or some university are doing. But so
far, the economics are not there.
Clem
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Warner Losh @ 2018-03-23 21:36 UTC
On Fri, Mar 23, 2018 at 3:23 PM, Clem Cole <clemc at ccc.com> wrote:
>
> Which brings this back to UNIX. Linux is the UNIX disruptor - which is
> great. Linux keeps 'UNIX' alive and getting better. I don't see an
> economic reason to replace it, but who knows. Maybe that's what the
> good folks at Google, Amazon, Intel or some university are doing. But so
> far, the economics are not there.
>
Speaking of how ancient code works... There's still AT&T code dating to v5
or older in *BSD.... It's been updated, improved upon, parts replaced, etc.
But there's still some bits dating all the way back to those early times.
Having competition from Linux is great and keeps the BSDs honest...
Warner
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Steve Johnson @ 2018-03-23 22:02 UTC
Reminds me a bit of the old saying: "This is George Washington's
Axe. Of course, it's had six new handles and four new heads since he
owned it..."
Steve
----- Original Message -----
From: "Warner Losh" <imp at bsdimp.com>
To: "Clem Cole" <clemc at ccc.com>
Cc: "TUHS main list" <tuhs at minnie.tuhs.org>
Sent: Fri, 23 Mar 2018 15:36:02 -0600
Subject: Re: [TUHS] long lived programs (was Re: RIP John Backus)
> On Fri, Mar 23, 2018 at 3:23 PM, Clem Cole <clemc at ccc.com> wrote:
> > Which brings this back to UNIX. Linux is the UNIX disruptor - which
> > is great. Linux keeps 'UNIX' alive and getting better. I don't see
> > an economic reason to replace it, but who knows. Maybe that's what
> > the good folks at Google, Amazon, Intel or some university are
> > doing. But so far, the economics are not there.
>
> Speaking of how ancient code works... There's still AT&T code dating
> to v5 or older in *BSD.... It's been updated, improved upon, parts
> replaced, etc. But there's still some bits dating all the way back to
> those early times. Having competition from Linux is great and keeps
> the BSDs honest...
>
> Warner
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Tim Bradshaw @ 2018-03-26 13:43 UTC
On 23 Mar 2018, at 19:28, Bakul Shah <bakul at bitblocks.com> wrote:
>
> By now most major systems have been computerized. Banks,
> govt, finance, communication, shipping, various industries,
> research, publishing, medicine etc. Will the critical
> systems within each area have as many resources as & when
> needed as the weather forecasting system Tim is talking about?
I think that this is indeed a problem: it just isn't a problem for the kind of huge numerical simulation that gave rise to this thread. In general, programs where
- you are continually looking for more performance,
- you are continually updating what the program can do (adding better cloud models, say),
are pretty safe. But programs which get deployed *and then just work* are liable to rot. So, for instance, a retail bank writes or buys a system to deal with mortgages: once this thing works, the chances are it will keep working for a very long time, because the number of mortgages might double in ten years or something, but it won't go up by enormous factors, and mortgages (at the retail bank end, not at the mortgage-backed-securities end) are kind of boring.
Retail banks are risk-averse so they like to avoid the risks associated with porting the thing to new platforms. And since there's no development most of the developers leave.
And then ten or twenty years later you have this arcane thing which no-one understands any more running on a platform which is falling off the end of support.
And if that was the only problem everything would be fine. In fact, several times during the life of this thing, the bank wanted to offer some new kind of mortgage. But no-one understood the existing mortgage platform any more, *so they deployed a new one alongside it*. So in fact you have *four* of these platforms, all of them critical, and none of them understood by anyone.
To bring this back to Unix history, I think this is an example of a place where Unix has failed (or, perhaps, where people have failed to make use of it properly). Half the reason these things are such a trouble is that they aren't written to any really portable API, so the bit that runs on Solaris isn't going to run on AIX or Linux, and it only might run on the current version of Solaris in fact.
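(A minimal sketch of the alternative, assuming nothing beyond POSIX.1 - the file name here is invented for illustration. Source restricted to the portable subset like this at least has a chance of building unchanged on Solaris, AIX, and Linux alike:)

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* nothing here but POSIX.1 calls: open/write/close */
        int fd = open("ledger.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;
        (void)write(fd, "posted\n", 7);
        return close(fd) != 0;
    }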
I don't know what the solution to this problem is: I think it is essentially teleological to assume that there *is* a solution.
--tim
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Paul Winalski @ 2018-03-26 16:19 UTC
On 3/26/18, Tim Bradshaw <tfb at tfeb.org> wrote:
> On 23 Mar 2018, at 19:28, Bakul Shah <bakul at bitblocks.com> wrote:
> Retail banks are risk-averse so they like to avoid the risks associated with
> porting the thing to new platforms. And since there's no development most
> of the developers leave.
>
> And then ten or twenty years later you have this arcane thing which no-one
> understands any more running on a platform which is falling off the end of
> support.
>
After grad school (1978) one of my first job interviews was for a job
as system manager for an insurance company. Their data center took
the "don't risk porting software" to an extreme. As new technology
came out they bought it, but only to run new applications. Existing
applications were never ported and continued to run on their existing
hardware. Their machine room looked like a computer museum. They had
two IBM 1400s (one in use; one cannibalized for parts to keep the
active machine going), two System/360 model 50s, with a drum and a
2321 data cell drive. Their only modern hardware was a System/370
model 158.
Operating systems seem to have taken one of two policies regarding
legacy programs. IBM's OS and DEC's VMS emphasized backwards
compatibility--new features mustn't break existing applications. VMS
software developers called the philosophy of strict backward
compatibility "The Promise" and took it very seriously. Unix, on the
other hand, has always struck me as being less concerned with backward
compatibility and more about innovation and experimentation. I think
the assumption with Unix is that you have the sources for your
programs, so you can recompile or modify them to keep up with
incompatible changes. This is fine for research and HPTC
environments, but it doesn't fly in corporate data centers, where even
a simple recompile means that the new version of the application has
to undergo expensive qualification testing before it can be put into
production. Which philosophy regarding backwards compatibility is
better? It depends on your target audience.
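(To make the recompile point concrete, here is a toy C sketch - the structs are hypothetical, not any real library's. A field inserted into a public struct leaves the source API intact but moves every later field, so an already-built binary reads the wrong offsets until it is recompiled - and that recompile is exactly what triggers the re-qualification:)

    #include <stdio.h>
    #include <stddef.h>

    /* v2 of the made-up library inserts 'blocks' into a public struct.
     * Code naming the fields compiles unchanged against either version. */
    struct rec_v1 { long size; long mtime; };
    struct rec_v2 { long size; long blocks; long mtime; };

    int main(void)
    {
        /* ...but a binary built against v1 still fetches mtime at the
         * old offset, so against a v2 library it reads 'blocks' instead */
        printf("v1: mtime at offset %zu, struct is %zu bytes\n",
               offsetof(struct rec_v1, mtime), sizeof(struct rec_v1));
        printf("v2: mtime at offset %zu, struct is %zu bytes\n",
               offsetof(struct rec_v2, mtime), sizeof(struct rec_v2));
        return 0;
    }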
-Paul W.
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Arthur Krewat @ 2018-03-26 16:41 UTC
On 3/26/2018 12:19 PM, Paul Winalski wrote:
> Unix, on the
> other hand, has always struck me as being less concerned with backward
> compatibility and more about innovation and experimentation.
For Sun, it was quite the contrary.
It was normal to run binaries from SunOS on Solaris. For the longest
time, the "xv" binary I used on SPARC hardware was compiled on SunOS.
It's even an X-windows application, and the libraries work.
Even in Solaris 10, it still runs:
-bash-3.00$ ./xv.sparc
ld.so.1: xv.sparc: warning: /usr/4lib/libX11.so.4.3: has older revision than expected 10
ld.so.1: xv.sparc: warning: /usr/4lib/libc.so.1.9: has older revision than expected 160
-bash-3.00$ file xv.sparc
xv.sparc: Sun demand paged SPARC executable dynamically linked
-bash-3.00$ uname -a
SunOS redacted 5.10 Generic_120011-11 sun4u sparc SUNW,SPARC-Enterprise
This has been deprecated as of Solaris 11, supposedly. Backwards
compatibility for Solaris binaries is also a "thing".
art k.
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Larry McVoy @ 2018-03-26 19:53 UTC
On Mon, Mar 26, 2018 at 12:41:15PM -0400, Arthur Krewat wrote:
> On 3/26/2018 12:19 PM, Paul Winalski wrote:
> > Unix, on the
> >other hand, has always struck me as being less concerned with backward
> >compatibility and more about innovation and experimentation.
> For Sun, it was quite the contrary.
>
> It was normal to run binaries from SunOS on Solaris. For the longest time,
> the "xv" binary I used on SPARC hardware was compiled on SunOS. It's even an
> X-windows application, and the libraries work.
Yeah, Sun was very good about that. You got smacked if you broke compat.
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Clem Cole @ 2018-03-27 1:08 UTC
On Mon, Mar 26, 2018 at 12:19 PM, Paul Winalski <paul.winalski at gmail.com>
wrote:
> ...
> VMS software developers called the philosophy of strict backward
> compatibility "The Promise" and took it very seriously. Unix, on the
> other hand, has always struck me as being less concerned with backward
> compatibility and more about innovation and experimentation. I think
> the assumption with Unix is that you have the sources for your
> programs, so you can recompile or modify them to keep up with
> incompatible changes.
Paul, be careful here. Yes, BSD did that. I'll never forget hearing Joy
once say he thought it was a good idea to make people recompile because
their code stayed fresh.
But as UNIX moved from universities to commercial firms, binaries became
really important.
DEC, Masscomp (being ex-DECies) and eventually Sun took that same promise
with them.
Linux has been mixed. The problem is that UNIX is more than just the
kernel itself. As the SPEC1170 work revealed in the early 1990s, there
were 1170 different interfaces that ISVs had to think about, and getting
agreement between vendors, much less between releases within a vendor,
was difficult.
And here is an interesting observation ...
The ideas behind UNIX were (more or less) HW independent. Just think how
hard it was for DEC, Masscomp, or Sun to keep VMS, Ultrix/Tru64, RTU, or
SunOS/Solaris binary compatible. It's part of why the whole UNIX
standards war of API *vs.* ABI raged. It was and is a control problem.
Linux (sort of) solves it by keeping the kernel as the definition. But
that only partly works: kernel.org streams out new kernels way faster
than the distros, and the distros cannot agree on the different
placements for things. Then you get into things like messaging stacks.
It's why Intel had to develop 'Cluster Ready.' I will not say who, to
protect the guilty, but one very well-known HPC ISV had a test matrix of
144 different Linux combinations before they could ship....
Just to give you an idea: if they developed, say, on IBM/Lenovo under
RHEL version mumble, but tried to release a binary on the same RHEL on HP
gear, it would not work, because IBM had used QLogic IB and HP Mellanox,
say. Or worse, they both had used the same vendor's gear but different
releases of the IB stack (it gets worse and worse).
The issue is that each vendor wants (needs) to have some level of control.
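(A hedged sketch of what that looked like inside an ISV's source tree - the predefined macros below are the vendors' conventional compiler predefines, and the branches are illustrative only; real trees branched this way around headers, ioctls, termio vs. termios, signal semantics, and so on:)

    #include <stdio.h>

    #if defined(sun)
    #  define PLATFORM "SunOS/Solaris"
    #elif defined(_AIX)
    #  define PLATFORM "AIX"
    #elif defined(__osf__)
    #  define PLATFORM "Digital UNIX"
    #elif defined(__linux__)
    #  define PLATFORM "Linux"
    #else
    #  define PLATFORM "unknown"
    #endif

    int main(void)
    {
        /* each branch above would, in real code, select different
         * headers, struct layouts, and system-call workarounds */
        printf("built for %s\n", PLATFORM);
        return 0;
    }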
Clem
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Bakul Shah @ 2018-03-26 19:04 UTC
> On Mar 26, 2018, at 6:43 AM, Tim Bradshaw <tfb at tfeb.org> wrote:
>
> On 23 Mar 2018, at 19:28, Bakul Shah <bakul at bitblocks.com> wrote:
>>
>> By now most major systems have been computerized. Banks,
>> govt, finance, communication, shipping, various industries,
>> research, publishing, medicine etc. Will the critical
>> systems within each area have as many resources as & when
>> needed as the weather forecasting system Tim is talking about?
>
> I think that this is indeed a problem: it just isn't a problem for the kind of huge numerical simulation that gave rise to this thread. In general, programs where
I started thinking about Fortran programs but soon expanded to
the more general problem.
>
> - you are continually looking for more performance,
> - you are continually updating what the program can do (adding better cloud models, say),
>
> are pretty safe. But programs which get deployed *and then just work* are liable to rot. So, for
Even here attention can flag over time.
> And if that was the only problem everything would be fine. In fact, several times during the life of this thing, the bank wanted to offer some new kind of mortgage. But no-one understood the existing mortgage platform any more, *so they deployed a new one alongside it*. So in fact you have *four* of these platforms, all of them critical, and none of them understood by anyone.
This is the modification problem I was talking about. Running an
unchanged binary on an emulated processor is much easier.
>
> To bring this back to Unix history, I think this is an example of a place where Unix has failed (or, perhaps, where people have failed to make use of it properly). Half the reason these things are such a trouble is that they aren't written to any really portable API, so the bit that runs on Solaris isn't going to run on AIX or Linux, and it only might run on the current version of Solaris in fact.
1) This is when the OS doesn't live as long as the application.
2) The rate of change in open source technologies is far too high.
Open source is often open loop. Hundreds of bugs remain unfixed
while new features are added.
> I don't know what the solution to this problem is: I think it is essentially teleological to assume that there *is* a solution.
It is an interesting problem even if there is no clear solution.
But now I think maybe it doesn't matter in the long run. We
will let our new AI lords worry about it :-)
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Steve Johnson @ 2018-03-27 1:21 UTC
Portability and standard APIs are very good for users, but
manufacturers hate them. Unix portability was supported by AT&T in
part because they were getting flak from non-DEC manufacturers about
Unix running only on the PDP-11. Once Unix was ported,
applications could run on hardware that was cheapest and most
appropriate to the application rather than where it was first
written. Users, much more than computer companies, benefited from
portability. And DARPA, with a similarly broad view, for the most
part did things that helped users rather than specific manufacturers.
In fact, the first thing most manufacturers did with Unix was to
change it. The rot set in quickly, leading to long boring chaotic
standards efforts over POSIX and C (remember OSF?).
On the other hand, manufacturers love open source. There are no
apparent limits on growth, and few guiding hands to prevent silly, or
downright dangerous features from creeping into the endlessly bloating
code bases. Each company can get their own version at low cost and
keep their customers happy, serene in the knowledge that if the
customers try to use another system they will have to deal with the
100+ pages of incompatible GCC options and have to tame piles of
poorly conceived and documented code. None of the stakeholders in
open source have anything to gain by being compatible, or even letting
people know when they change something incompatibly without warning.
After all, it can only hurt the other stakeholders, not them...
Yes, I'm old and cynical, and yes there are some islands of sanity
fighting the general trend. And yes, I think this is a cyclical
problem that will swing back towards sanity, hopefully soon. But
where is the AT&T or DARPA with enough smarts and resources to do
things simply, and motivation to make users happy rather than
increasing profits?
Steve
----- Original Message -----
From: "Tim Bradshaw" <tfb@tfeb.org>. . .
To bring this back to Unix history, I think this is an example of a
place where Unix has failed (or, perhaps, where people have failed to
make use of it properly). Half the reason these things are such a
trouble is that they aren't written to any really portable API, so the
bit that runs on Solaris isn't going to run on AIX or Linux, and it
only might run on the current version of Solaris in fact.
* [TUHS] long lived programs (was Re: RIP John Backus)
From: Clem Cole @ 2018-03-27 1:46 UTC
On Mon, Mar 26, 2018 at 9:21 PM, Steve Johnson <scj at yaccman.com> wrote:
> On the other hand, manufacturers love open source.
>
HW manufacturers love FOSS because it got them out of yet another thing
that cost them money. We saw the investment in compilers going away; now
we see the same in the OS. But many ISVs are still not so sure.
Funny, compilers are a strange thing ... it's not in Gnu's, much less
Microsoft's, interest to get that last few percent out of any chip as much
as it is for the chip developer. Firms like Intel have their own compiler
team; ARM and AMD pay third parties. But because of the competition, the
FOSS and even the proprietary compilers get better [certainly for languages
like Fortran, where performance is everything - which is why 'production'
shops will pay for a high-end compiler, be it from PGI, Intel or Cray, say].
Truth is, FOSS has changed the model. But there are only some people who
will pay for support (particularly the large HPC sites we can all name).
They will pay for some things, but those sites want to change everything
(yet another rant I'll leave for another day ;-)