The Unix Heritage Society mailing list
* [TUHS] Questions for TUHS great minds
@ 2017-01-17 14:27 Nelson H. F. Beebe
  0 siblings, 0 replies; 13+ messages in thread
From: Nelson H. F. Beebe @ 2017-01-17 14:27 UTC (permalink / raw)


Tim Bradshaw <tfb at tfeb.org> writes on 17 Jan 2017 13:09 +0000

>> I think we've all lived in a wonderful time where it seemed like
>> various exponential processes could continue for ever: they can't.

For an update on the exponential scaling (Moore's Law et al), see
this interesting new paper:

	Peter J. Denning and Ted G. Lewis
	Exponential laws of computing growth
	Comm. ACM 60(1) 54--65 January 2017
	https://doi.org/10.1145/2976758

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe at math.utah.edu  -
- 155 S 1400 E RM 233                       beebe at acm.org  beebe at computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------



* [TUHS] Questions for TUHS great minds
  2017-01-17 19:55     ` Steve Johnson
@ 2017-02-01 23:32       ` Erik E. Fair
  0 siblings, 0 replies; 13+ messages in thread
From: Erik E. Fair @ 2017-02-01 23:32 UTC (permalink / raw)


Steve, you're probably aware of this, but I'll chip in for everyone else: Ivan Sutherland has been working on "clockless" computing for many years now - he's at Portland State University these days. I've had the privilege of listening to him talk a little about the subject, even though I'm not an EE.

	Erik Fair



* [TUHS] Questions for TUHS great minds
  2017-01-17 13:09   ` Tim Bradshaw
  2017-01-17 13:36     ` Michael Kjörling
@ 2017-01-17 19:55     ` Steve Johnson
  2017-02-01 23:32       ` Erik E. Fair
  1 sibling, 1 reply; 13+ messages in thread
From: Steve Johnson @ 2017-01-17 19:55 UTC (permalink / raw)



Ah, the notion of clock speed...  For 75 years we have designed
circuits with a central clock.  For the last 10 years, people have
gone to great lengths to make a billion transistors on a chip operate
in synchrony, using techniques that are getting sillier and sillier
and don't provide much benefit.  For example, at lower voltages and
with thinner wires, chips become dramatically more temperature
sensitive, so all kinds of guard bands and additional hardware gook
are required to make the chips function correctly.  There are some
very interesting technologies that are not clock based, scale well,
are low power, and perform well over wide variations of voltage and
temperature.  The problem is they would require a completely new
set of design tools, and the few players in this area don't want to
rock the boat.

It's not necessary to go that far, however.  Our chip has no global
signals and will probably be faster than 6 GHz.

Steve

----- Original Message -----
From: "Tim Bradshaw" <tfb@tfeb.org>This doesn't mean that the process
will continue: eventually you hit physics limits ('engineering' is
really a better term, but it has been so degraded by 'software
engineering' that I don't like to use it).  Obviously we've already
hit those limits for clock speed (when?) and we might be close to them
for single-threaded performance in general: the current big (HPC big)
machine where I work has both lower clock speed than the previous
 one and observed lower single-threaded performance as well, although
its a lot more scalable, at least in theory.  The previous one was
POWER, and was I think the slightly mad very-high-clock-speed POWER
chip, which might turn out to be the high-water-mark of
single-threaded performance; the current one is x86.





* [TUHS] Questions for TUHS great minds
  2017-01-17 13:09   ` Tim Bradshaw
@ 2017-01-17 13:36     ` Michael Kjörling
  2017-01-17 19:55     ` Steve Johnson
  1 sibling, 0 replies; 13+ messages in thread
From: Michael Kjörling @ 2017-01-17 13:36 UTC (permalink / raw)



On 17 Jan 2017 13:09 +0000, from tfb at tfeb.org (Tim Bradshaw):
> I think we've all lived in a wonderful time where it seemed like
> various exponential processes could continue for ever: they can't.

I'm personally inclined to agree with Tim here. That's not to say that
I don't think some of those processes could be pushed a bit farther,
but as much as we would love it in some cases, a function of the form
f(x)=Ca^{Dx} (for any values of C, D and a) describing something that
can exist in the real world simply cannot continue forever before
encountering some real-world limit.

Zoom out what appears to be an exponential curve and, more often than
not, it turns out that what looked like an exponential curve was
really the first portion of an S curve or (even worse in many cases) a
portion of a parabola. Either that, or it's something like the
Tsiolkovsky rocket equation or the relativistic collinear velocity
addition formula, where the exponent is something you try very hard to
_avoid_ the effects of for one reason or another.
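
To make that concrete, here is a small numeric sketch (the constants
C, a, D and the ceiling L are arbitrary, picked only for illustration):
an exponential and a logistic "S" curve sharing the same early growth
rate track each other closely at first and then part ways at the ceiling.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double C = 1.0, a = 2.0, D = 1.0;   /* exponential f(x) = C*a^(D*x) */
    double L = 1e6;                     /* hard real-world ceiling      */

    for (int x = 0; x <= 30; x += 5) {
        double expo = C * pow(a, D * x);
        /* logistic (S) curve with the same initial value and growth rate */
        double logi = L / (1.0 + (L / C - 1.0) * pow(a, -D * x));
        printf("x=%2d  exponential=%14.1f  S-curve=%14.1f\n", x, expo, logi);
    }
    return 0;
}

Compiled with cc -lm, the two columns agree to within a few percent up
to about x=10 and diverge wildly by x=20; from inside the early region
there is no way to tell which curve you are actually on.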

What could conceivably change that picture somewhat is a total
paradigm shift in computing, kind of like if large general-purpose
quantum computers turn out to be viable after all. But even in that
case I'm pretty sure that at some point we would realize that we are
on the same kind of S curve or parabola there as well, only having
delayed the inevitable or shifted the origin.

That's not to say that even steady-state computer power can't provide
huge benefits. It absolutely can. Even the computers we have and are
able to actually build today are immensely powerful both in terms of
computational capability and storage, and they are, to a large degree,
scalable with the proper software and algorithms. The kind of computer
I have _at home_ today (which wasn't even top of the line when I put
it together a few years ago) would have been considered almost
unimaginably powerful just a few decades ago.

-- 
Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se
                 “People who think they know everything really annoy
                 those of us who know we don’t.” (Bjarne Stroustrup)



* [TUHS] Questions for TUHS great minds
  2017-01-11 18:34 ` Steve Johnson
  2017-01-11 19:46   ` Steffen Nurpmeso
@ 2017-01-17 13:09   ` Tim Bradshaw
  2017-01-17 13:36     ` Michael Kjörling
  2017-01-17 19:55     ` Steve Johnson
  1 sibling, 2 replies; 13+ messages in thread
From: Tim Bradshaw @ 2017-01-17 13:09 UTC (permalink / raw)


On 11 Jan 2017, at 18:34, Steve Johnson <scj at yaccman.com> wrote:
> 
> IMHO, hardware has left software in the dust.  I figured out that if cars had evolved since 1970 at the same rate as computer memory, we could now buy 1,000 Tesla Model S's for a penny, and each would have a top speed of 60,000 MPH.  This is roughly a factor of a trillion in less than 50 years.

This doesn't mean that the process will continue: eventually you hit physics limits ('engineering' is really a better term, but it has been so degraded by 'software engineering' that I don't like to use it).  Obviously we've already hit those limits for clock speed (when?) and we might be close to them for single-threaded performance in general: the current big (HPC big) machine where I work has both lower clock speed than the previous one and observed lower single-threaded performance as well, although it's a lot more scalable, at least in theory.  The previous one was POWER, and was I think the slightly mad very-high-clock-speed POWER chip, which might turn out to be the high-water-mark of single-threaded performance; the current one is x86.

Obviously for a while parallel scaling will mean things continue, but that crashes into other limits as well.
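
(The usual way to put a number on that parallel-scaling ceiling is
Amdahl's law; a minimal sketch, assuming a 5% serial fraction - the
figure is illustrative, not a measurement of any particular machine:)

#include <stdio.h>

int main(void)
{
    double s = 0.05;    /* assumed serial fraction of the workload */
    int cores[] = { 1, 8, 64, 1024, 16384 };

    for (int i = 0; i < 5; i++) {
        /* Amdahl's law: speedup(N) = 1 / (s + (1 - s)/N) */
        double speedup = 1.0 / (s + (1.0 - s) / cores[i]);
        printf("%6d cores -> %5.1fx speedup (ceiling %.0fx)\n",
               cores[i], speedup, 1.0 / s);
    }
    return 0;
}

Past a few hundred cores the speedup is essentially flat at 20x:
adding parallelism stops helping long before you run out of transistors.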

I think we've all lived in a wonderful time where it seemed like various exponential processes could continue for ever: they can't.

--tim



* [TUHS] Questions for TUHS great minds
  2017-01-11 17:01     ` Corey Lindsly
  2017-01-11 17:54       ` Marc Rochkind
@ 2017-01-11 20:32       ` Jacob Goense
  1 sibling, 0 replies; 13+ messages in thread
From: Jacob Goense @ 2017-01-11 20:32 UTC (permalink / raw)


On 2017-01-11 12:01, corey at lod.com wrote:
> I don't think so. RJ-11? More like a telephone killer, or home
> firestarter.

A fix for this obviously broken etherkiller would be UI. Doesn't
a telephone killer need a Honda portable generator on the
other end of that RJ-11? ;)

ObOT: Considering it is now within the realm of possibility to
simulate a ~1986 internet under a desk and stuff a ~1994
geocities into a netbook... I imagine running close to a billion
x86 emulators under the desk while running a copy of pinterest
and facebook on it. I'll probably need a larger pid_t and will have
to rewrite some shell scripts in Rust or what have you by then.



* [TUHS] Questions for TUHS great minds
  2017-01-11 18:34 ` Steve Johnson
@ 2017-01-11 19:46   ` Steffen Nurpmeso
  2017-01-17 13:09   ` Tim Bradshaw
  1 sibling, 0 replies; 13+ messages in thread
From: Steffen Nurpmeso @ 2017-01-11 19:46 UTC (permalink / raw)



Hello.

"Steve Johnson" <scj at yaccman.com> wrote:
 |>>  Or does the idea of a single OS disintegrate into a fractal cloud \
 |>>of zero-cost VM's?  What would a meta-OS need to manage that?  Would \
 |>>we still 
 |recognize it as a Unix?
 |
 |This may be off topic, but the aim of studying history is to avoid \
 |the mistakes of the past.  And part of that is being honest about where \
 |we are now...
 |
 |IMHO, hardware has left software in the dust.  I figured out that if \
 |cars had evolved since 1970 at the same rate as computer memory, we \
 |could now buy 1,
 |000 Tesla Model S's for a penny, and each would have a top speed of \
 |60,000 MPH.  This is roughly a factor of a trillion in less than 50 years.

I am even more off-topic, and of course this was only an example.
But the reference sounds so positive, yet what you quote is not
really forward-looking technology - hauling along several hundred
kilograms of batteries, that is.

Already at the end of the eighties, i think, the usual
")everybody(" knew that fuel cells are the future.  It is true
that in 1993, at eighteen, i said in a local auditorium that i
wished everybody would get an underfloor with fuel cells in the
sandwich construction that it is, four wheel-hub motors, and a
minimalistic structure that one may replace at will.

It was already possible back then (except for making the tank
sufficiently tight), just like, for example, selective cylinder
deactivation, diesel soot filters, and diesel NOx reduction cats
("urea injection").  (Trying to clean up diesel in the uncertain
conditions that millions of engines face at different altitudes
and in different climates is megalomaniacal, in my opinion.  And
that was already so back then.)

Unfortunately fuel cell development has never been pushed
politically as much as would be desirable, and was mostly left to
universities until at least about 2006, at least in Germany, to
the best of my knowledge.  It may not be popular in the U.S. at
the moment, but it is Toyota again, with the Mirai, that spends
money out of responsibility.  That is at least what i think.
(And again the question is whether a doubtful technology gets
spread millions and millions of times all over the place, or
whether only some refineries have to be improved.)

--steffen



* [TUHS] Questions for TUHS great minds
  2017-01-11  2:33 Robert Swierczek
  2017-01-11 16:25 ` Ron Natalie
@ 2017-01-11 18:34 ` Steve Johnson
  2017-01-11 19:46   ` Steffen Nurpmeso
  2017-01-17 13:09   ` Tim Bradshaw
  1 sibling, 2 replies; 13+ messages in thread
From: Steve Johnson @ 2017-01-11 18:34 UTC (permalink / raw)





>>  Or does the idea of a single OS disintegrate into a fractal cloud
of zero-cost VM's?  What would a meta-OS need to manage that?  Would
we still recognize it as a Unix?

This may be off topic, but the aim of studying history is to avoid the
mistakes of the past.  And part of that is being honest about where
we are now...

IMHO, hardware has left software in the dust.  I figured out that if
cars had evolved since 1970 at the same rate as computer memory, we
could now buy 1,000 Tesla Model S's for a penny, and each would have a
top speed of 60,000 MPH.  This is roughly a factor of a trillion in
less than 50 years.
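
(A quick back-of-the-envelope check of that factor; the Model S price
and top speed used below are rough assumptions, not quoted figures:)

#include <stdio.h>

int main(void)
{
    double price_usd = 75000.0;   /* assumed Model S price     */
    double top_mph   = 150.0;     /* assumed Model S top speed */

    double price_factor = (1000.0 * price_usd) / 0.01; /* 1,000 cars for a penny */
    double speed_factor = 60000.0 / top_mph;           /* 60,000 MPH top speed   */

    printf("price factor %.1e x speed factor %.1e = %.1e\n",
           price_factor, speed_factor, price_factor * speed_factor);
    return 0;
}

That lands around 3e12 - the same order of magnitude as the "roughly
a factor of a trillion" above.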

For quite a while, hardware (Moore's law) was giving software
cover--the hardware speed and capacity were exceeding the rate of bloat
in software.  For the last decade, that's stopped happening (the
hardware speed, not the software bloat!).  What hardware is now
giving us is many, many more of the same thing rather than the same
thing faster.  To fully exploit the hardware, the old model of
telling a processor "do this, then do this, then do this" simply
doesn't scale.  Things like multicore look to me like a desperate
attempt to hold onto an outmoded model of computing in the face of a
radically different hardware landscape.

The company I'm working for now, Wave Computing, is building a chip
with 16,000 8-bit processors on a chip.  These processors know how to
snuggle up together and do 16-, 32-, and 64-bit arithmetic.  The chip
is intended to be part of systems with as many as a quarter million
processors, with machine learning being one possible target.  There
are no global signals on the chip (e.g., no central clock).  (The
hardware people aren't perfect--it hasn't yet sunk in that making a
billion transistors operate fast while remaining in synch is
ultimately an impossible goal as the line sizes get smaller).
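
(How the 8-bit processors gang up isn't spelled out above, but the
usual trick is to chain carries between narrow lanes.  A minimal
software sketch of the idea - mine, not Wave's design - building a
32-bit add from four 8-bit additions:)

#include <stdint.h>
#include <stdio.h>

static uint32_t add32_from_8(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    unsigned carry = 0;
    for (int lane = 0; lane < 4; lane++) {
        unsigned ai = (a >> (8 * lane)) & 0xff;
        unsigned bi = (b >> (8 * lane)) & 0xff;
        unsigned sum = ai + bi + carry;     /* one 8-bit "processor"   */
        carry = sum >> 8;                   /* handed to the next lane */
        result |= (sum & 0xff) << (8 * lane);
    }
    return result;
}

int main(void)
{
    uint32_t a = 0x89abcdef, b = 0x12345678;
    printf("%08x (lane-wise) vs %08x (native)\n", add32_from_8(a, b), a + b);
    return 0;
}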

Chips like ours are not intended to be general purpose -- they act
more like FPGAs.  They allow tremendous resources to be focused on a
single problem at a time, and the focus can change quickly as
desired.  They aren't good for everything.  But I do think they
represent the current state of hardware pretty well, and the trends
are strongly towards even more of the same.

The closest analogy for programming the chip is microcode -- it's
as if you have a programmable machine with hundreds or thousands of
"instructions" that are far more powerful than traditional
instructions, able to operate on structured data and do many
arithmetic operations.  And you can make new instructions at will. 
The programming challenge is to wire these instructions together to
get the desired effect. 

This may not be the only path to the future, and it may fail to
survive or be crowded out by other paths.  But the path we have been
on for the last 50 years is a dead end, and the quicker we wise up and
grab the future, the better...

Steve

PS: another way to visualize the hardware progress:  If we punched
out a petabyte of data onto punched cards, the card deck height would
be 6 times the distance to the moon!  Imagine the rubber band....
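
(Checking the deck-height arithmetic, assuming 80 bytes per card and
the standard ~0.007 inch card thickness:)

#include <stdio.h>

int main(void)
{
    double petabyte       = 1e15;     /* bytes (decimal PB)          */
    double bytes_per_card = 80.0;     /* one 80-column card          */
    double card_mm        = 0.178;    /* ~0.007 inch card thickness  */
    double moon_km        = 384400.0; /* mean Earth-Moon distance    */

    double cards    = petabyte / bytes_per_card;
    double stack_km = cards * card_mm / 1e6;   /* mm -> km */

    printf("%.1e cards, %.1e km tall, %.1f times the distance to the moon\n",
           cards, stack_km, stack_km / moon_km);
    return 0;
}

This comes out to about 5.8x, so the factor of 6 holds up.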




* [TUHS] Questions for TUHS great minds
  2017-01-11 17:01     ` Corey Lindsly
@ 2017-01-11 17:54       ` Marc Rochkind
  2017-01-11 20:32       ` Jacob Goense
  1 sibling, 0 replies; 13+ messages in thread
From: Marc Rochkind @ 2017-01-11 17:54 UTC (permalink / raw)


A couple of answers:

"Or does the idea of a single OS disintegrate into a fractal cloud of
zero-cost VM's?"

I would say zero-cost computing. Whether that occurs in VMs, or even on what
we would recognize as a computer, seems too limiting. I think all that
matters is that the program be elaborated. (There, I used a term from the
Algol 68 report!)

"Would we still recognize it as a Unix?"

Not sure what "it" refers to, but I'm sure that any and all things
UNIX-like would be programs that could be run.

I imagine that clever marketeers will design a box that can appear to run
programs (they may or may not actually run on anything contained in the
box) and then call it a "computer", for those who still care. It could have
flashing lights, even. And, as it is nearly empty, it could range in size
from a watch (or smaller) to a big desktop box.

In today's terminology, what I see is that programs will run in the cloud.
Programs, I think, are of eternal importance. How they are executed will
become irrelevant.

Somewhere in that cloud are actual computers, of course. How they work I'm
sure will change drastically, as it has fairly often, from the beginning.

--Marc

On Wed, Jan 11, 2017 at 10:01 AM, Corey Lindsly <corey at lod.com> wrote:

> >
> > On 2017-01-11 17:25, Ron Natalie wrote:
> > > Somewhere I have an etherkiller (unfortunately it is non functional).
> >
> > FTFY
> >
>
> I don't think so. RJ-11? More like a telephone killer, or home
> firestarter.
>
> --corey
>
>



* [TUHS] Questions for TUHS great minds
  2017-01-11 16:50   ` Jacob Goense
@ 2017-01-11 17:01     ` Corey Lindsly
  2017-01-11 17:54       ` Marc Rochkind
  2017-01-11 20:32       ` Jacob Goense
  0 siblings, 2 replies; 13+ messages in thread
From: Corey Lindsly @ 2017-01-11 17:01 UTC (permalink / raw)


> 
> On 2017-01-11 17:25, Ron Natalie wrote:
> > Somewhere I have an etherkiller (unfortunately it is non functional).
> 
> FTFY
> 

I don't think so. RJ-11? More like a telephone killer, or home 
firestarter.

--corey




* [TUHS] Questions for TUHS great minds
  2017-01-11 16:25 ` Ron Natalie
@ 2017-01-11 16:50   ` Jacob Goense
  2017-01-11 17:01     ` Corey Lindsly
  0 siblings, 1 reply; 13+ messages in thread
From: Jacob Goense @ 2017-01-11 16:50 UTC (permalink / raw)


On 2017-01-11 17:25, Ron Natalie wrote:
> Somewhere I have an etherkiller (unfortunately it is non functional).

FTFY



* [TUHS] Questions for TUHS great minds
  2017-01-11  2:33 Robert Swierczek
@ 2017-01-11 16:25 ` Ron Natalie
  2017-01-11 16:50   ` Jacob Goense
  2017-01-11 18:34 ` Steve Johnson
  1 sibling, 1 reply; 13+ messages in thread
From: Ron Natalie @ 2017-01-11 16:25 UTC (permalink / raw)



I give up on prediction.    I never thought UNIX would have lasted this long.    Personal computing bounces back and forth between local devices and remote services every decade or so.

 

I will offer a few interesting observations.    Back when I was still at BRL (somewhere around 1983, probably), one of my coworkers walked in and said that in a few years they'd be able to give me a computer as powerful as a VAX, and it would sit on my desk, and I'd have exclusive use of it and be happy.    I pointed out that this was unlikely (the being-happy part), as my expectations would increase as time went on.


Ron's Rule Of Computing:   "I need a computer 'this' big" (which is accompanied by holding my arms out to about the width of a VAX 780 CPU cabinet).

 

Hanging up on the wall in one of the machine rooms I administered was a sign comparing the computer that had sat there (the ENIAC) to a then-current HP 65 calculator.     The HP 65 would have been an incredible tool to the ENIAC guys, but now it seems way dated.

 

A lot of computer discussion mentioned what would happen if a hacker had access to a CRAY computer.   Could he perhaps brute-force the crypt in the UNIX password file?    Oddly, when BRL got their first Cray (an X/MP preempted from Apple's delivery slot), I was given pretty much as much time as I wanted to try to vectorize the crypt routine.     It wasn't particularly easy, and we had other parallel processors that were easier to program and were doing a better job at the hack for less money.    I actually signed for the BRL Cray 2, but it didn't get installed until after I left.

 

Ron’s Rule of Software Deployment:   “Stop making cutovers on major holidays.” 

 

The dang government kept doing things like the long leader conversion on the Arpanet and the TCP/IP changeover on Jan 1.   It drove me nuts that our entire group ended up working over the holidays to bring the new systems up.    At one point when they were rejiggering the USENET groups, the proposal was to do it on Labor Day weekend.   I pointed out (this was the Atlanta UUG) that it violated the above rule AND was particularly bad because many of the USENET system admins were going to be back in Atlanta for the World Science Fiction Convention that weekend.

 

And finally, Ron’s Rule of Electrical Engineering:   “If two things can be plugged into each other, some fool will do so.   You better make it work, or at least benignly fail when it happens.”

 

Somewhere I have a cord that one of my employees made me which has a 110V plug on one side and an RJ-11 on the other (fortunately it is non functional).




* [TUHS] Questions for TUHS great minds
@ 2017-01-11  2:33 Robert Swierczek
  2017-01-11 16:25 ` Ron Natalie
  2017-01-11 18:34 ` Steve Johnson
  0 siblings, 2 replies; 13+ messages in thread
From: Robert Swierczek @ 2017-01-11  2:33 UTC (permalink / raw)


Not so long ago I joked about putting a Cray-1 in a watch.  Now that we are
essentially living in the future, what audacious (but realistic)
architectures can we imagine under our desks in 25 years?  Perhaps a mesh
of ten million of today's highest-end CPU/GPU pairs bathing in a vast sea
of non-volatile memory?  What new abstractions are needed in the OS to
handle that?  Obviously many of the current command line tools would need
rethinking (ps -ef, for instance).

Or does the idea of a single OS disintegrate into a fractal cloud of
zero-cost VM's?  What would a meta-OS need to manage that?  Would we still
recognize it as a Unix?



end of thread

Thread overview: 13+ messages
2017-01-17 14:27 [TUHS] Questions for TUHS great minds Nelson H. F. Beebe
  -- strict thread matches above, loose matches on Subject: below --
2017-01-11  2:33 Robert Swierczek
2017-01-11 16:25 ` Ron Natalie
2017-01-11 16:50   ` Jacob Goense
2017-01-11 17:01     ` Corey Lindsly
2017-01-11 17:54       ` Marc Rochkind
2017-01-11 20:32       ` Jacob Goense
2017-01-11 18:34 ` Steve Johnson
2017-01-11 19:46   ` Steffen Nurpmeso
2017-01-17 13:09   ` Tim Bradshaw
2017-01-17 13:36     ` Michael Kjörling
2017-01-17 19:55     ` Steve Johnson
2017-02-01 23:32       ` Erik E. Fair
