Today we’re releasing ksuid, a Golang library for unique ID generation. It borrows core ideas from the ubiquitous UUID standard, adding time-based ordering and more friendly representation formats. In doing the research that went into this library, we uncovered a compelling story that we wanted to share with a larger audience.

Ever since two or more machines found themselves exchanging information on a network, they’ve needed a way to uniquely identify things.

The first networks that resemble our contemporary ideas began with the construction of the first telephone exchanges in the 1870s. Before this crucial tipping point, telecom wires were entirely point-to-point links. While amazing at the time, they were expensive, inflexible, and unreliable. It even resulted in an almost comical explosion of copper lines zig-zagging above the roads of major cities.

While telegraphs were mainly used for important governmental and business communications, telephones were an extreme luxury. Compared to the speed of the telegraph, tying up an expensive copper line for a quick chat was incredibly frivolous. The key invention that brought telecommunications to the masses was the switchboard, which enabled the creation of exchanges and in turn vastly increased the utility of these lines. With it came the first unique identifier in a network: the telephone number.

Fast forward many decades later to the advent of the networked computer. Suddenly the granularity of things had to become orders of magnitude finer. Until this tipping point, data sent across telecom wires had been ephemeral - the networks were just a conduit. Now it became routine to store and retrieve data on demand, and thus began the titanic explosion of data that has inundated the world ever since. Given these new capabilities, the things on a network shifted from physical machines to logical pieces of data. These networks needed a way to uniquely address these pieces of data. The old systems of central control during the telecom age just wouldn’t scale. This is a mathematical inevitability as a network’s capacity for storage and retrieval increases linearly with size. This scale also brings with it a bit of chaos - failures and machine ephemerality move from a yak-shaving problem to the routine. Data no longer lives in one place; it flows freely across the network.

At this point in time, using a computer to share data actually meant sharing a physical computer. Institutions exchanged information using minicomputers and powerful mainframes with hundreds or thousands of dumb terminals. In other words, data was colocated with computation. While PCs had revolutionized computing, they lacked networking capabilities, and therefore were very fancy calculators.

Founded in 1980, Apollo Computer was one of the first companies to enter the nascent workstation market. Workstations were really the first networked computers. It sounds kind of ridiculous to use this term, but it is worth stating that at this time most of the networking technology we take for granted had yet to blossom. And in stark contrast to the mainframe world, data and compute were distributed across many interconnected computers. Thus the idea of distributed computing entered the mainstream.

Like its contemporary, Sun Microsystems, Apollo was truly full-stack. Everything had to be built from scratch, as the hardware and software of that era were not designed for the use cases they imagined. The asynchrony of networks and the demanding nature of these tasks required vastly more capable computers. Multi-tasking, security controls, networking, and mass storage were all too expensive or impractical to include in PCs at the time. However, they were considered table stakes for the vision of the workstation.

Despite an impressive technological boom in the workstation market, all of these vendors ran into the same roadblock: few developers knew anything about networks. In order to make a business case for their pricey workstations, they needed a programming environment. Developers needed a way to easily build applications that could fully exploit the networking capabilities of their respective products.

Apollo’s answer was their Network Computing System (NCS). NCS borrowed ideas from object-oriented programming and was built around Remote Procedure Calls (RPC). While now mostly obsolete, this approach achieved the end result Apollo was hoping for: any developer knew how to call a function, and object-oriented was the programming paradigm du jour.

In an article published in Network World in 1989 about RPC, one MIS Director at Burlington Coat Factory made a particularly salient observation: “It takes a good programmer only a day or so to learn how to build distributed applications using RPCs.” Cha-ching.
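To make the time-based-ordering idea concrete, here is a minimal sketch of a KSUID-style identifier: a 32-bit timestamp (seconds since a custom epoch) followed by 16 random bytes, base62-encoded at a fixed width so that sorting the strings also sorts them by creation time. This is an illustration of the layout only - the `newID` function and `epoch` constant here are hypothetical, not the ksuid library’s actual API.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
	"time"
)

// epoch shrinks the timestamp by measuring from a recent date
// instead of 1970 (an assumption for this sketch).
const epoch = 1400000000

// alphabet is in ascending ASCII order, so fixed-width base62
// strings compare the same way as the underlying 20-byte values.
const alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

func newID() string {
	buf := make([]byte, 20)

	// First 4 bytes: big-endian timestamp, so the most significant
	// bytes (and therefore string order) track time.
	ts := uint32(time.Now().Unix() - epoch)
	buf[0] = byte(ts >> 24)
	buf[1] = byte(ts >> 16)
	buf[2] = byte(ts >> 8)
	buf[3] = byte(ts)

	// Remaining 16 bytes: random payload for uniqueness.
	if _, err := rand.Read(buf[4:]); err != nil {
		panic(err)
	}

	// Base62-encode, left-padded with '0' to a fixed 27 characters
	// (62^27 > 2^160, so 20 bytes always fit).
	n := new(big.Int).SetBytes(buf)
	base := big.NewInt(62)
	out := make([]byte, 27)
	for i := range out {
		out[i] = '0'
	}
	rem := new(big.Int)
	for i := 26; n.Sign() > 0; i-- {
		n.DivMod(n, base, rem)
		out[i] = alphabet[rem.Int64()]
	}
	return string(out)
}

func main() {
	fmt.Println(newID())
}
```

Because the timestamp occupies the most significant bytes and the encoding is fixed-width, a plain lexicographic sort of these strings is also a chronological sort - the property that plain random UUIDs lack.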