The Chinese Typewriter: Past as Prologue

I stumbled across a fantastic book over the holiday break: The Chinese Typewriter: A History, by Thomas S. Mullaney. It’s a story of the process of innovation — about imagination bottlenecks and their societal consequences, how use-case goals shape (and sometimes misdirect) design outcomes, the interplay of national and international politics with information technology, the mathematics and philosophy of organizing language and knowledge, and an array of related topics — via an extended case study of the Chinese typewriter. It’s also a sort of alternative history of IT–an examination of what might happen when you can’t easily build on prior art. Reading the sample right before bed turned out to be a bad idea; I was awake thinking until 3 am.

Chinese Typewriter

The book begins with the history of the mass-produced typewriter generally. The “problem” of Chinese typewriting stemmed from the fact that the US-led international typewriter industry, after an initial proliferation of designs, very quickly standardized on the familiar keyboard with one letter per key that we now know, and thenceforth could not envision any other approach. It fell to Chinese inventors to come up with alternatives, which in the end involved neither keyboards nor keys, and essentially miniaturized the traditional movable-type schemes invented in China a thousand years earlier.

There remained the question of how to encompass the 60,000 or so non-decomposable Chinese characters in any compact mechanical device. Western Christian missionaries actually did a lot of the early work on this question, which tended to skew the results toward vocabulary most pertinent to their interests. The development of three main approaches to Chinese typewriting takes up the majority of the book, but Mullaney also takes a brief detour through the parallel challenge of encoding Chinese characters for telegraphic transmission. There were a variety of homegrown, mostly regional double-encoding schemes, all of which required the use of “special characters”. Special characters cost twice as much to transmit as “standard” Latin-alphabet characters, making telegraphy significantly more expensive for China. One doesn’t have to listen very hard to hear history rhyming with Unicode, as well as the ongoing economic impact on countries with double-byte character systems in view of their need to participate in a global communications infrastructure fundamentally designed around European languages.
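That cost asymmetry is easy to see for yourself in modern encodings. A minimal sketch (my own illustration, not from the book): in UTF-8, an ASCII letter occupies one byte, while a typical Chinese character occupies three, so byte-metered transmission still prices the scripts differently.

```python
# Illustrative only: the encoding-cost asymmetry the telegraph story
# rhymes with. ASCII letters are 1 byte each in UTF-8; most Chinese
# characters are 3 bytes each.
def utf8_cost(text: str) -> int:
    """Bytes needed to transmit `text` as UTF-8."""
    return len(text.encode("utf-8"))

latin = "typewriter"   # 10 characters
chinese = "打字机"      # 3 characters ("typewriter")

print(utf8_cost(latin))    # 10 bytes for 10 characters
print(utf8_cost(chinese))  # 9 bytes for 3 characters -- 3x per character
```

The ratio isn’t exactly the telegraph era’s doubled tariff, but the structural point is the same: the pricing unit was designed around one script’s needs.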

The challenge of coming up with a typewriter design that included a manageable but sufficient number of characters led to a widespread conclusion that the Chinese language was “incompatible with modernity.” This was not limited to foreign observers, by the way. Chinese elites, who were increasingly pursuing secondary education abroad starting in the late 19th century, were keenly aware of the new centrality of the typewriter and telegraph in government and business affairs overseas, and grew increasingly concerned that China was being left behind in the communications revolution.

There were heated debates about the best method for indexing Chinese characters in the absence of something like a standard alphabetical sequence: word-level metadata schemes, in other words, necessary for character retrieval. Rather poignantly, this led to something of an existential cultural crisis for elites: Chinese high culture had revolved around the written language for millennia–and yet it turned out there was no real consensus even about the true makeup of a character. As Mullaney puts it, “Had the fundamental essence and order of Chinese script yet to reveal itself,” even after 5,000 years of existence and intense scholarly examination?

By the 1930s, Japanese manufacturers had appropriated a couple of the most common approaches and begun gaining market share. Their gains accelerated as Japan invaded China in earnest during WWII, and continued in its aftermath, as China’s mostly small-scale manufacturing infrastructure was decimated. China’s experience with modern information technology into the 1950s was thus continually at the mercy of foreign interests–first Western bureaucrats, missionaries, and Western-run standards bodies, whose notions about information design were ill-matched to Chinese needs; and then the Japanese, who controlled the means of print production in the lead-up to and during the war.

It’s not surprising, then, that post-WWII China would be as fixated on developing self-sufficiency in technology as in food and energy production. At the same time, maintaining interoperability with the global communications system is obviously essential. While Chinese technology protectionism is strong, so too is participation in standards bodies and open source projects, the latter being a particularly useful means of ensuring baseline interoperability, as well as adaptability to Chinese environments without one-way dependency. Two leading telecom companies I work with treat contributor rankings for key open source projects as top-level internal KPIs, and other Chinese firms are increasingly taking operational leadership roles in those projects.

China’s early experiences with non-WYSIWYG, and ultimately operator-designed, input methods in both telegraphy and typewriting would form the core of keyboard-based input methods for Chinese in the computing age. Its alternate paths of IT experimentation also provided experience with predictive natural-language approaches, as well as user-driven metadata design. Ongoing keyboard challenges, especially with smartphones, provide a keen incentive to apply machine learning to predictive text–in the cloud, with an ever-expanding, real-time training set coming from over a billion internet-connected devices.
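The retrieval problem at the heart of every such input method can be sketched in a few lines. This is a toy of my own (the dictionary, frequencies, and ranking are all made up, not from the book): map an operator’s keystrokes to a ranked list of candidate characters, with static frequency standing in for the predictive models that modern cloud IMEs learn at scale.

```python
# Toy input-method sketch: pinyin syllable -> ranked candidate characters.
# Dictionary and frequency numbers are hypothetical placeholders.
CANDIDATES = {
    "ma": [("妈", 92), ("马", 87), ("吗", 95), ("码", 60)],
    "shu": [("书", 90), ("树", 70), ("数", 85)],
}

def candidates(pinyin: str, top_n: int = 3) -> list[str]:
    """Return the top-N candidates for a pinyin syllable, ranked by
    (toy) usage frequency -- the metadata scheme doing the retrieval."""
    ranked = sorted(CANDIDATES.get(pinyin, []), key=lambda cf: -cf[1])
    return [ch for ch, _ in ranked[:top_n]]

print(candidates("ma"))  # ['吗', '妈', '马']
```

Swap the static frequency table for a context-aware language model and you have, in miniature, the path from the 1920s card catalog debates to today’s cloud IMEs.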

Mullaney has a follow-up book planned, which continues the story into the computer age. In the meantime, he’s already provided plenty of pattern-matching between China’s experiences as a technology outsider in the last century and its current initiatives in technology.

What to know before you go: Some general knowledge of Chinese history in the late 19th and 20th century would be helpful in reading the book. A brief perusal of a few Wikipedia articles would probably suffice.

I’m not sure how easy or difficult the book is to follow for someone with no knowledge of the Chinese language. Mullaney describes the various theories of how characters are formed in the course of explaining the different approaches to typewriting, but I suspect a quick primer such as one normally gets in the first couple classes of Chinese 1a would have been a good addition to the Introduction. This article isn’t a bad alternative. And this provides a brief summary about traditional approaches to organizing the language.

And, irony alert! It turns out that the Chinese characters sprinkled throughout the text to illustrate the discussion don’t render on older Kindles. I wound up buying the physical book.

Meet the New Software-Defined Network. Almost the Same as the Old Network.

Occasionally when I wake up, I have some utterly obscure question percolating in my brain. Once, it was about the color of the original, undomesticated carrots (white and purple, it turns out–like their cousin the parsnip). I have to go look it up so I can go on about my day without further mental interference, which is why I can tell you, for example, that there is an online Carrot Museum that will tell you everything you want to know about carrots. Also, in Afghanistan they make a fermented beverage from wild carrots. (Thanks, brain!)

Today, I simply had to know what “palimpsest” means. Here’s what it means:

A palimpsest (/ˈpælɪmpsɛst/) is a manuscript page, either from a scroll or a book, from which the text has been either scraped or washed off so that the page can be reused, for another document.[1] …In colloquial usage, the term palimpsest is also used in architecture, archaeology, and geomorphology, to denote an object made or worked upon for one purpose and later reused for another, for example a monumental brass the reverse blank side of which has been re-engraved. [Wikipedia]

SDN is an excellent example of a palimpsest. Let’s take the SDN origin story as gospel: why, indeed, shouldn’t you be able to program a switch the way you can a non-purpose-built computing platform? The short answer is that the networking industry, and the devices it continues to produce and sell, evolved in a certain way, and the world is filled with such devices and vast numbers of people trained to interact with them via low-level communication protocols rather than higher-level constructs.

So now we’re elaborating an assortment of higher-level constructs under the common banner of SDN. This does not, however, eliminate existing network concepts, protocols–or the actual networks themselves. We are using all of these things as the foundations upon which software-defined networks will be operated, much as the “Troy” excavated (with dynamite!) by Heinrich Schliemann was, in fact, as many as nine different cities, each of which built upon the decaying foundations of the previous one. And why not? It all works moderately well, and we have lots of people who know how to make it work.

The thing about “adopting” SDN is that it’s a little bit of new technology, but a lot of mindset and process shift. This, I think, is why SDN is starting to enter a winter of discontent (or, if you’re more of a Gartnerian than a Shakespearean, a trough of disillusionment). Networking salespeople and users alike are trained to sell and buy boxes that you plug in, which largely ends the transaction until the support contract comes up for renewal or there’s an opportunity to add more boxes. SDN controllers today, by contrast, are all at the some-assembly-required stage of evolution. And if your SDN use case is BetterFasterCheaper manageability, it’s very reasonable to question why you should go through a whole bunch of new gymnastics to do more or less what you’re already doing, just in a different way.

It would be a mistake, though, to imagine that the current state of affairs is the permanent nature of SDN. As controllers mature, they will of course become more plug-and-play. Now that the industry has begun to consolidate around a few leading platforms, we will start to see more packaged SDN applications built to run on those controllers. Meanwhile, the bleeding-edge organizations moving toward active deployments now are investing heavily in training NetDevs to build their own applications for proprietary use cases. At Open Networking Summit this spring, and in subsequent media interviews, AT&T indicated that it is putting tens of thousands of network engineers through specialized coding classes. Some portion of those engineers will eventually go on to jobs at other companies, spreading their new expertise into new soil.

And as I wrote in 2013,

…Today’s discrete controllers will wind up going one of two ways:

  • Down, coupled ever more tightly into existing network operating systems, until eventually they will simply be part of OSes with better northbound APIs than before. This will be especially true of “house” controllers from existing networking vendors.
  • Up, as platforms for emerging ecosystems of network applications. There’s nothing to prevent house controllers from moving in this direction, and for major vendors to develop their own developer ecosystems (well, nothing but mindset and institutional support for such)…

There are those, in fact, who expect that controllers will eventually be packaged with applications such that the controllers themselves are transparent to the user, rather than an independent piece of software to be set up and then integrated with a parade of disparate applications. This scenario will necessitate a truly de facto industry controller platform (we’re far from that yet…), as well as a standardized architecture for peer-to-peer controller communications and mechanisms for defining the order of precedence for operations stemming from different applications.
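That precedence problem can be made concrete with a small sketch. Everything here is hypothetical (the application names, priorities, and rule shape are mine, not any real controller’s API): when several SDN applications propose rules for the same traffic, the platform needs a deterministic way to pick a winner.

```python
# Toy sketch of application-precedence resolution in an SDN controller.
# All names and priorities are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class FlowRule:
    app: str        # application that proposed the rule
    match: str      # simplified match key, e.g. a destination prefix
    action: str     # "forward:...", "drop", ...
    priority: int   # higher wins

def resolve(rules: list[FlowRule]) -> dict[str, FlowRule]:
    """Pick one winning rule per match key by application priority."""
    winners: dict[str, FlowRule] = {}
    for r in rules:
        cur = winners.get(r.match)
        if cur is None or r.priority > cur.priority:
            winners[r.match] = r
    return winners

rules = [
    FlowRule("load-balancer", "10.0.0.0/24", "forward:lb-pool", 10),
    FlowRule("firewall", "10.0.0.0/24", "drop", 100),  # security outranks LB
]
print(resolve(rules)["10.0.0.0/24"].app)  # firewall
```

Real platforms need far more than a priority integer (conflict detection across overlapping matches, per-tenant scoping, atomic updates), which is exactly why a standardized architecture matters.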

None of this, however, will change what lies beneath in the foreseeable future. We’ll still have some mix of physical and virtual forwarding devices, managed via some set of protocols–some older, some derivations or extensions of existing ones, some genuinely new ones–because SDN doesn’t necessarily change anything about forwarding architecture, and because of refresh cycles and human operator inertia. It will still be valuable for quite some time to understand how those protocols operate on the devices, even as the preferred method for doing so shifts with controller advances and the general state of administrator skillsets. It’s entirely conceivable that 15-20 years down the road, the idea of monkeying around with networking protocols themselves will seem as arcane to most as being fluent in the inner workings of the systems bus in your PC. But we’ve got a long way to go as an industry before that state of affairs appears on the horizon, and most of that shift will come well after we have some semblance of controller maturity and an SDN application ecosystem in place.

Meanwhile, I’ll apparently be mulling over things like the origins of carrots. Stupid brain.

Response: Open Networking: The Whale That Swallowed SDN

Art Fewell, whose views I greatly respect, has written a very good post on Network World entitled “Open Networking: The Whale That Swallowed SDN”. It’s a great historical summary of SDN from 2011 to the present, with some noteworthy areas of concern. I agree with the general thrust of Art’s thesis, yet at many points I found myself thinking “Yeah, but…” I started to write a few comments on the Network World page, but the comments turned into a page of their own, so here we are.

Here’s what I really liked in Art’s piece:

Languid Conversations Among the Enlightened

I’ve come to the conclusion that IT people are always endangered. Their demise is perennially imminent, and it’s always because they’re simply too stuck in their ways, and too stupid and/or lazy to let go of the tried and true, and embrace the virtues of cross-disciplinary collaboration and training.

Here’s an example of the relatively traditional version, in which a vendor tells its own core audience that they’re doomed if they don’t buy the latest thing from said vendor. In fact, we’ve got two FUD vectors conflated in this particular sample:

1) Automation is going to take away your job (even though “our customers are telling us they need automation”).

2) SDN is going to eliminate everything about how networks are currently operated and force anyone who wants to touch a network to become a programmer.

There’s a particularly tedious corollary to the threat of new technologies, one which comes up in virtually every discussion about emerging technologies.
