Date: Fri, 20 Aug 2004 13:09:20 +1000
From: George Michaelson
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Cc: geoff@collyer.net
Subject: Re: [9fans] datakit

On Thu, 19 Aug 2004 19:43:48 -0700 geoff@collyer.net wrote:

> My experience with datakit was limited, but from what I understand
> (and can remember) the advantages were:
>
> • it provided some protection against snooping and meddling with data
> in transit by running connections directly from the network controller
> (kept in a locked closet) to the hosts and by not being a broadcast
> medium, and by securing the connections between network controllers.

This is the kind of network outcome which fails deployment tests in a
post-deregulated world. Those cupboards are now multi-keyed, with
competing vendors patching at random the mix of slow and high-speed
network services in the same KRONE frame. You get bupkes from 'locked
cupboard' wiring and head-end. Telstra deliver our 10-meg metro-ethernet
to a commodity switch which is as dumb as they come, and could be
sniffed for free once you're in the room. (Their optical demuxes at
least have remote alarms on the doors, so you get some protection
there.)

> • partly as a consequence, when a call arrived and the datakit
> controller said that it was from a particular destination, you could
> believe it.

I could say the same about X.25, I guess. I am very glad we don't use
that much any more.

> • when you placed a call, you handed a dial string like
> `nj/astro/helix!uucp' to the datakit controller and it returned with
> either an error message or a live connection.

I think having the namespace embedded low down in the model was a very
good idea, and if things like the Newcastle Connection had been ported
well to it and released into the wild early on, we might all have been
playing this game. (Plan 9's dial(2) kept that shape; there's a sketch
below.)

> You could also tell
> that controller if you expected to need a lot of bandwidth (e.g. for
> file transfer) or not, and it would try to reserve bandwidth along the
> route. So no need for DHCP, ARP, RARP, BOOTP, sockets, sockaddrs,
> inaddrs, etc., etc.

Again, end-to-end bandwidth reservation has spectacularly failed in the
marketplace because Gresham's law winds up ruling. The military still do
what they want, but banks now regard IP nets with non-reserved bandwidth
as 'adequate' and won't pay for the national and international costs
which go with this model. If banks won't pay, you've lost the
marketplace of ideas. (Cisco sell more to banks and associated bodies
than to unis and ISPs; what banks want, Cisco sell.)

> • IP, UDP and TCP were subsumed by a supposedly-simple protocol, URP.
> I believe datakit was a reliable network, so you didn't need an extra
> layer of protocol such as TCP.

Supposedly... I bet it cost a huge amount of hidden overhead! The costs
here are things like RTT delay. You can't take the checksums out of the
IP-and-above layers just because one sub-net claims to be reliable, so
you wind up doing checksums in all layers. (This wouldn't have been true
only for IP, btw. Links are not homogeneous, so absent AT&T owning
everywhere, if you want to talk everywhere you wind up having to do your
own checks anyway.)
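To make that concrete: the end-to-end check the upper layers keep, even
over a "reliable" link, is just the ones'-complement sum of RFC 1071. A
minimal sketch in C, not a tuned implementation:

    #include <stdint.h>
    #include <stddef.h>

    /*
     * Ones'-complement checksum in the style of RFC 1071: the
     * end-to-end check TCP and UDP keep even when a link underneath
     * claims to be reliable.
     */
    uint16_t
    cksum(const uint8_t *p, size_t n)
    {
        uint32_t sum = 0;

        while(n > 1){                   /* sum 16-bit words, big-endian */
            sum += (uint32_t)p[0]<<8 | p[1];
            p += 2;
            n -= 2;
        }
        if(n > 0)                       /* odd trailing byte, zero-padded */
            sum += (uint32_t)p[0]<<8;
        while(sum >> 16)                /* fold carries into low 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }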
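And on the dial-string bullet further up: Plan 9's dial(2) kept exactly
that hand-over-a-name, get-back-an-error-or-a-connection shape. A
minimal sketch assuming the standard Plan 9 C library; the machine and
service names here are made up:

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        int fd;

        /* "network!machine!service" - cf. datakit's "nj/astro/helix!uucp" */
        fd = dial("tcp!helix!uucp", nil, nil, nil);
        if(fd < 0)
            sysfatal("dial: %r");       /* either an error message... */
        fprint(fd, "hello\n");          /* ...or a live connection */
        exits(nil);
    }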
> • datakit was available in a variety of speeds and could be run great
> distances, including cross-country.

In Australia, centrex models meant that things like this ran at 2x or
more the apparent distance for rural subscribers. All telecoms north of
Gladstone in Queensland (Boyd will doubtless chime in with detail on how
sick this is) run via a concrete bunker next to the Woolloongabba
cricket ground here in Brisbane, and with Queensland 2000km long, that
means going from the Cairns GPO to a Cairns bank incurs the same RTT as
a trans-Pacific dialogue, when it's 2km across town. (Rough propagation
numbers are in the sketch further down.) Believe me, that is a royal
PITA. It winds up breaking e.g. Novell NetWare, LAT and other dumb
protocols. Datakit may have masked it, but there would have been costs.

> I'm less familiar with the drawbacks, though I have heard some of the
> moaning over the years, so I'll give it a try:
>
> • datakit hardware, particularly the controllers, was expensive.

FDDI died here for the same reason. IBM networks ran like greased
steam-engines and looked as lovely (to a sick trainspotter mind like
mine), but were so massively over-engineered you were paying for
hand-lacing of the cables in 20ft racks of gear, when other vendors sold
you wire-wrap botch-boxes for a tenth of the cost.

> • it was a centrally-managed network, arguably less open to ad-hocery
> than ethernets built from switches and routers. Some may see this as
> a disadvantage, though it enabled many of the advantages named above.

Essentially, a scaling failure in a non-monopolistic world. Personally I
mourn the retreat from the social contract behind the public telco
model, but there is a paucity of fellow travellers around me. I'd have
loved to see the core national networks worldwide retained in a small
set of public hands, with a million upper layers. I bet Datakit could
have thrived in that world.

> • some of the host interfaces (e.g., Vax ones) were reputedly
> painfully slower than the network itself.

Actually, still a problem for many people. Now that Ethernet is a
ubiquitous connect, many high-speed hosts burn their cycles doing dumb
buffering with their ether cards instead of handing off, when a $100
unit from Fry's can do on-card error checking and the like and work
sanely with the kernel. It's very odd how many places have high-speed
nets and dumb-as-shit connections to them!

> My dislike for one-problem-one-protocol-one-RFC is largely one of
> taste and conserving energy for more important things. Do we really
> need over 3,700 RFCs to make our networks work?

No. The RFC model is bust. I know: I'm chairing a WG in the IETF. It's
utterly broken.

> I don't have a strong
> opinion about the end-to-end principle or various other IETF dogma.

Mostly it's kept alive by a dwindling number of people. Anybody who
gives in to NATs (e.g. SIP and other three-way rendezvous through NAT
boxes) ditched it long ago. I think it still has merit as a debating
point and comparator, much as the OSI model does for defining an
ontology. As a concrete implementation guide, I think it has some
utility: I very much prefer not to be told by my provider what I can do
in packets, and dumb networks seem to be closer to a utility of packets
irrespective of what you do in them. Packet soup?

> We could certainly have done worse, OSI being the scariest of the
> possible disasters (and it's not quite dead yet, it keeps popping up
> in places like LDAP!).

ASN.1 lives on. I blame many people, especially Marshall Rose (SNMP).
XML is no better, but is now the RPC of choice for many. XML uber alles
has probably replaced IP uber alles as the mantra.
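And the promised back-of-envelope on the Cairns detour above. A sketch
in C; it assumes ~200km/ms propagation (about 2/3 c in fibre or copper)
and round-number distances, not surveyed route lengths:

    #include <stdio.h>

    int
    main(void)
    {
        double kmperms = 200.0;     /* ~2/3 c in fibre/copper */
        double detour  = 2*1700.0;  /* Cairns->Brisbane->Cairns, km, one way */
        double direct  = 2.0;       /* across town, km */

        printf("detour RTT: ~%.0f ms propagation alone\n", 2*detour/kmperms);
        printf("direct RTT: ~%.2f ms\n", 2*direct/kmperms);
        return 0;
    }

Call it ~34ms of raw propagation before you count the switching hops on
old gear, against tens of microseconds for the cross-town path.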
> Are you sure you don't have Datakit in your central offices or
> long-distance equipment?

Yes. I'm very sure. Australia bought a mix of Nortel and Ericsson gear
rather than AT&T gear. I don't think datakit was their provisioning
model; Nokia bandwidth management was much more like it. Telstra and
Optus got out of that space in the 90s. Edge customer nets which people
buy as 2Mbit E1 (T1 to you, I guess), and which used to home in those
things, are now provided over an ADSL/HDSL bearer and switch into a core
optical network which is weird digital muxing, and the old 2-meg stuff
is gathering dust. They made their ROI on the old core-copper stuff and
ditched it in mad rushes to ATM and other dead-meat technology.

I'm sure it lingers on, but I am also sure the places we're in now don't
use it and that it didn't make it down here.

-George