* [COFF] [TUHS] History of popularity of C
From: crossd @ 2020-05-26 19:55 UTC

Cc: to COFF, as this isn't so Unix-y anymore.

On Tue, May 26, 2020 at 12:22 PM Christopher Browne <cbbrowne at gmail.com> wrote:
> [snip]
> The Modula family seemed like the better direction; those were still
> Pascal-ish, but had nice intentional extensions so that they were not
> nearly so "impotent."  I recall it being quite popular, once upon a time,
> to write code in Modula-2, and run it through a translator to mechanically
> transform it into a compatible subset of Ada for those that needed DOD
> compatibility.  The Modula-2 compilers were wildly smaller and faster for
> getting the code working; you'd only run the M2A part once in a while
> (probably overnight!)

Wirth's languages (and books!!) are quite nice, and it always surprised and
kind of saddened me that Oberon didn't catch on more.

Of course Pascal was designed specifically for teaching.  I learned it in
high school (at the time, it was the language used for the US "AP Computer
Science" course), but I was coming from C (with a little FORTRAN sprinkled
in) and found it generally annoying; I missed Modula-2, but I thought Oberon
was really slick.  The default interface (which inspired Plan 9's 'acme')
had a neat graphical sorting simulation: one could select different
algorithms, and vertical bars of varying height were sorted into ascending
order to form a rough triangle; one could clearly see the inefficiency of,
e.g., Bubble sort vs. Heapsort.  I seem to recall there was a way to set up
the (ordinarily randomized) initial conditions to trigger worst-case
behavior for quicksort.  I have a vague memory of showing it off in my high
school CS class.

        - Dan C.
* [COFF] [TUHS] History of popularity of C
From: crossd @ 2020-06-08 12:47 UTC

[-TUHS, +COFF as per wkt's request]

On Sun, Jun 7, 2020 at 8:26 PM Bram Wyllie <bramwyllie at gmail.com> wrote:
> Dependent types aren't needed for sum types though, which is what you'd
> normally use for an array that carries its size, correct?

No, I don't think so.  I've never heard of the dimensionality of an array
in such a context described in terms of a sum type, but have often heard of
it described in terms of a dependent type.  However, I'm no type theorist.

        - Dan C.
* [COFF] [TUHS] History of popularity of C
From: cezar.ionescu @ 2020-06-08 13:46 UTC

> I've never heard of the dimensionality of an array in such a context
> described in terms of a sum type, [...]

I was also puzzled by this remark.  Perhaps Bram was thinking of an
infinite sum type such as

    Arrays_0 + Arrays_1 + Arrays_2 + ...

where "Arrays_n" is the type of arrays of size n.  However, I don't see a
way of defining this without dependent (or at least indexed) types.

> [...] but have often heard of it described in terms of a dependent
> type.

I think that's right, yes.  We want the type checker to decide, at compile
time, whether a certain operation on arrays, such as scalar product or
multiplication with a matrix, will respect the types.  That means providing
the information about the size to the type checker: types need to know
about values, hence the need for dependent types.  Ideally, a lot of the
type information can then be erased at run time, so that we don't need the
arrays to carry their sizes around.  But the distinction between what can
and what cannot be erased at run time is muddled in a dependently-typed
programming language.

Best wishes,
Cezar

Dan Cross <crossd at gmail.com> writes:
> [-TUHS, +COFF as per wkt's request]
>
> On Sun, Jun 7, 2020 at 8:26 PM Bram Wyllie <bramwyllie at gmail.com> wrote:
>
>> Dependent types aren't needed for sum types though, which is what you'd
>> normally use for an array that carries its size, correct?
>
> No, I don't think so.  I've never heard of the dimensionality of an array
> in such a context described in terms of a sum type, but have often heard
> of it described in terms of a dependent type.  However, I'm no type
> theorist.
>
>         - Dan C.
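[Editor's note: Cezar's point that "types need to know about values" is the
standard motivation for dependent types.  A minimal illustration, not from
the thread itself, is the textbook size-indexed vector, sketched here in
Lean 4: the length lives in the type, so a size-mismatched dot product is
rejected at compile time.]

```lean
-- A vector whose length n is part of its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- Scalar (dot) product: this only type-checks when both arguments
-- have the same length n; a size mismatch is a compile-time error.
def dot : Vec Nat n → Vec Nat n → Nat
  | .nil, .nil => 0
  | .cons a as, .cons b bs => a * b + dot as bs
```

Whether the index n can then be erased at run time is, as Cezar says,
exactly where such designs get subtle.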
* [COFF] [TUHS] History of popularity of C
From: crossd @ 2020-06-08 12:51 UTC

[ -TUHS and +COFF ]

On Mon, Jun 8, 2020 at 1:49 AM Lars Brinkhoff <lars at nocrew.org> wrote:
> Chris Torek wrote:
>> You pay (sometimes noticeably) for the GC, but the price is not
>> too bad in less time-critical situations.  The GC has a few short
>> stop-the-world points but since Go 1.6 or so, it's pretty smooth,
>> unlike what I remember from 1980s Lisp systems.  :-)
>
> I'm guessing those 1980s Lisp systems would also be pretty smooth on
> 2020s hardware.

I worked on a big system written in Common Lisp at one point, and it used a
pretty standard Lispy garbage collector.  It spent something like 15% of
its time in GC.  It was sufficiently bad that we forced a full GC between
every query.

The Go garbage collector, on the other hand, is really neat.  I believe it
reserves something like 25% of available CPU time for itself, but it runs
concurrently with your program: having to stop the world for GC is very,
very rare in Go programs, and even when you have to do it, the concurrent
code (which, by the way, can make use of full physical parallelism on
multicore systems) has already done much of the heavy lifting, meaning you
spend less time in a GC pause.  It's a very impressive piece of
engineering.

        - Dan C.
* [COFF] [TUHS] History of popularity of C
From: akosela @ 2020-06-08 14:14 UTC

On 6/8/20, Dan Cross <crossd at gmail.com> wrote:
> [snip]
> The Go garbage collector, on the other hand, is really neat. [...] It's
> a very impressive piece of engineering.

The real question is: can Go really be faster than C/C++?  The last time I
checked, it was at least two times slower on average.  Plus, it will not
run on older systems (I think you need at least kernel 2.6.23), and its
executables are rather big compared to C's.  In the beginning it was fun to
create clones of classic Unix tools in Go, but in reality they were still
slower and bigger than the original utilities.
I jumped on the Go bandwagon in 2010 (like a lot of people from the open
source community), but recently I find myself coming back more and more to
C.  It is still a superior language in almost all areas.  Plus it works on
everything from retro 8-bit micros to modern big HPC systems.

--Andy
* [COFF] [TUHS] History of popularity of C
From: crossd @ 2020-06-08 15:22 UTC

On Mon, Jun 8, 2020 at 10:14 AM Andy Kosela <akosela at andykosela.com> wrote:
> [snip]
> The real question is: can Go really be faster than C/C++?  The last time
> I checked, it was at least two times slower on average.

This is unanswerable as posed, because it lacks enough detail: what does it
mean to be "faster"?  It really depends on the problem being solved.  If
I'm writing straight-line matrix multiplication code on
statically-allocated structures, I see little reason why Go would be
inherently slower than C, other than that the compiler perhaps isn't as
advanced.  If my program is a systems-y sort of thing that's frequently
blocked on IO anyway, then e.g. GC overhead may not matter at all.  If my
program is highly concurrent, then perhaps I can bring some of the
concurrency primitives baked into Go to bear, and the result may be closer
to correct than a comparable C program, which lacks things like channels as
synchronization primitives and lightweight threads a la goroutines.

Putting that last point another way: more expressive abstractions allow me
to develop a correct solution faster.  As an example, for my current
project at work, I built a custom board that lets me control the power and
reset circuitry on ATX motherboards; think of it as a remote-controlled
solid-state switch in parallel with the motherboard power and reset
switches.  Each board can control up to 8 other computers.  The control
interface to the board is a USB<->UART bridge; USB also provides power.
Since I'm controlling multiple other machines, possibly in parallel, I
wrote an interface daemon that talks to the firmware on the board itself
but presents a high-level abstraction ("reboot the host named 'foo'").  I
wrote it in Go; a single goroutine owns the serial device and accepts
commands corresponding to each system on a channel; controlling each system
is itself a goroutine.  The whole thing speaks a simple JSON-based RPC
protocol over a Unix domain socket.  It's possible to control any of those
8 machines in parallel.  It was only 200 lines of code and worked correctly
almost immediately; I was really impressed with how _easy_ it was to write.
I dare say a similar program in C would have been much more difficult to
write.

Further, while GC undoubtedly has overhead, it gives rise to optimization
alternatives that aren't available in non-GC languages.  For example,
instead of atomically incrementing or decrementing a reference count in a
tight loop before manipulating some shared data structure, I don't bother
in a GC'd language; the offending memory will be automatically reclaimed
when it's no longer referenced elsewhere in the program.

Here's a really well-written post from 2011 about profiling and optimizing
a Go program, and comparing the result to (approximately) the same program
written in several other languages: https://blog.golang.org/pprof

> Plus, it will not run on older systems (I think you need at least kernel
> 2.6.23), and its executables are rather big compared to C's.

Now we have to differentiate between what we do for fun and what we do
professionally.  I don't care about targeting new software I write
professionally at old versions of Linux or obsolete hardware platforms.
Storage devices are huge, and the size of executables isn't really a
concern for the sort of hosted software I write in Go.  It may be for some
domains, but no one said that Go is for _everything_.
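[Editor's note: the shape of the daemon Dan describes above — a single
goroutine owning the shared serial device, with one controlling goroutine
per host submitting requests over a channel — is a common Go ownership
pattern.  A minimal sketch follows; the device is simulated and every name
in it is invented for illustration.  The real daemon additionally speaks
JSON-RPC over a Unix domain socket, which is omitted here.]

```go
package main

import (
	"fmt"
	"sync"
)

// command asks the port-owning goroutine to perform one operation on
// one host; the result comes back on reply.
type command struct {
	host  string
	op    string // e.g. "reboot"
	reply chan string
}

// portOwner is the only goroutine that touches the (simulated) serial
// device, so no lock around the device is ever needed.
func portOwner(cmds <-chan command) {
	for c := range cmds {
		// In the real daemon this would write to the USB<->UART bridge.
		c.reply <- fmt.Sprintf("%s: %s done", c.host, c.op)
	}
}

func main() {
	cmds := make(chan command)
	go portOwner(cmds)

	hosts := []string{"host0", "host1", "host2"}
	var wg sync.WaitGroup
	for _, h := range hosts {
		wg.Add(1)
		go func(h string) { // one controlling goroutine per host
			defer wg.Done()
			reply := make(chan string)
			cmds <- command{host: h, op: "reboot", reply: reply}
			fmt.Println(<-reply)
		}(h)
	}
	wg.Wait()
	close(cmds)
}
```

The channel serializes access to the device while the per-host goroutines
run concurrently, which is the property the original daemon relied on.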
> In the beginning it was fun to create clones of classic Unix tools in Go,
> but in reality they were still slower and bigger than the original
> utilities.

I'm not sure I buy this.  Here's an example.  I've never liked the `cut`
command; I've always thought it contained some really odd corner cases and
didn't generally solve some of the problems I had.  I've usually been
satisfied with using `awk` to do things like pull fields out of lines of
text, but in the spirit of laziness that typifies the average Unix
programmer, I wanted to type less, so I had a little Perl script that I
called "field" that I carted around with me.  One could type "field 1" and
get the first field out of a line, etc.  The default delimiter was
whitespace, it could handle leading and trailing blanks, separators could
be specified by regular expressions, and so on.

When the Harvey project got off the ground, one of the first things done
was retiring the aged "APE", which meant that awk was gone too, and of
course there was no Perl, so I rewrote `field` in C:

    https://github.com/Harvey-OS/harvey/blob/master/sys/src/cmd/field.c

It works, but has issues.  But I got to rewrite it in Go for the u-root
project:

    https://github.com/u-root/u-root/blob/master/cmds/exp/field/field.go

Not only is the Go rewrite a little shorter (-75 lines, or around 12%
shorter than the C version), I would argue it's simpler and probably a
little faster: instead of rolling my own slices in C, I just use what the
language gives me.

> I jumped on the Go bandwagon in 2010 (like a lot of people from the open
> source community), but recently I find myself coming back more and more
> to C.  It is still a superior language in almost all areas.

C is my first love as a programming language, but I really don't think one
can say that it's "still a superior language in almost all areas."  It's
just too hard to get some things right in it, and programmers who think
that they can do so are often deluding themselves.
Here's an example:

    unsigned short
    mul(unsigned short a, unsigned short b)
    {
        return a * b;
    }

Is this function well-defined?  Actually, no; not for all inputs, at least.
Why?  Consider a 32-bit target environment with a 16-bit `short`.  Since
the type `short` is of lesser rank than `int`, when the multiplication is
executed, both 'a' and 'b' will be converted by the "usual arithmetic
conversions" to (signed) `int`; if both 'a' and 'b' are, say, 60,000, then
the product will overflow a signed `int`, which is undefined behavior.  The
solution is simple enough; just rewrite it as:

    unsigned short
    mul(unsigned short a, unsigned short b)
    {
        unsigned int aa = a;
        unsigned int bb = b;
        return aa * bb;
    }

But it's shenanigans like this that make using C really dangerous.
Thankfully, Go doesn't have these kinds of issues, and the Rust people by
and large avoid them as well, by mandating explicit type conversions and
explicit requests for modular arithmetic for unsigned types.  In Rust, this
would be:

    fn mul(a: u16, b: u16) -> u16 {
        a.wrapping_mul(b)
    }

> Plus it works on everything from retro 8-bit micros to modern big HPC
> systems.

That's a fair point, but I would counter that different tools are useful
for different things.  If I want to target an 8-bit micro, C may be the
right tool for the job, but Go almost certainly wouldn't be.  I'd likely
choose Rust instead, if I could, though.

        - Dan C.
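[Editor's note: to make Dan's aside about Go concrete — the Go
specification defines unsigned arithmetic to wrap modulo 2^N, and uint16
operands are never silently promoted to a signed type, so the direct
translation of the C function is well-defined for every input.  This
sketch is the editor's, not from the original mail.]

```go
package main

import "fmt"

// mul is well-defined for all inputs: Go's unsigned arithmetic wraps
// modulo 2^16, and the uint16 operands are not promoted to a signed
// type behind the programmer's back.
func mul(a, b uint16) uint16 {
	return a * b
}

func main() {
	// 60000 * 60000 = 3,600,000,000, which wraps to 41984 mod 65536.
	fmt.Println(mul(60000, 60000))
}
```

Compare this with the C version, where the same call is undefined behavior
on a typical 32-bit-int platform.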
* [COFF] [TUHS] History of popularity of C
From: clemc @ 2020-06-08 15:38 UTC

On Mon, Jun 8, 2020 at 10:14 AM Andy Kosela <akosela at andykosela.com> wrote:
> The real question is: can Go really be faster than C/C++?

That's really a matter of economics more than architecture.  Obviously, how
to measure it and a ton of things like that will have to be agreed upon.
But in the end, it has to see wide enough use that it starts to be a
financial driver for 'enough' of the business.  If people or important
customers start to use it enough, firms like my own will start to say it's
in our best interest to help create a better code generator.  As I said, we
pay more people to work on LLVM than anyone else at this point, and when it
can finally handle Fortran as well or better, we have told the world we
will move to that compiler.  We got to that point when it was clear that it
was the compiler technology of the future and 'enough' customers demanded
it.  (BTW: we still have a ton of work going into gcc also, but the effort
there is much less than when I started at Intel.)

BTW: Apple ships clang, but guess what product compiler Apple's product
teams use for things like Aperture and GarageBand.  Intel spends a ton of
money making sure icc(1) is the best for them.  [And again, when clang's
C/C++ can match icc, our execs will want to go there in a heartbeat, I
suspect.]

A couple of years ago at an HPC workshop, I was talking to the folks there.
They all screamed for a parallel language, but were all over the map.
Chapel turns out to have a small backing (Cray wrote it for a DoD
contract), but even the spooks don't use it for production (that's all
Fortran and C).  After enough digging, we just rolled the dice with DPC++,
because the Venn diagram of 'enough' of our HPC customers showed it would
be helpful.
We shall see (that took some convincing in management to make that
investment); right now, it's a parallel effort to C++.

As I have said before, Cole's law: 'inexpensive economics beats
sophisticated architecture.'  If Go (or Rust, or whatever) starts driving
silicon sales for enough folks, you'll see IBM, Intel, et al. start getting
really excited.