Technology of Thought

Review of “Formal specification as a design tool” by John V. Guttag and James J. Horning[1] and “The emperor’s old clothes” by C.A.R. Hoare[2]

Sometimes it is easy to take for granted that the foundations of modern computing systems were built by women and men using the most abstract tools of mathematics available in their time. It is a well-known fact that the pioneers of computing were logicians first – mathematicians trained in the science and philosophy of reasoning. Logicians were purists, dealing with concepts even more removed from practical human activity than those of the geometers and the forerunners of analysis, who quickly found applications for their work in physics and engineering.

But in less than a hundred years, machines dreamt up in the thought experiments of logicians like Alan Turing, Charles Babbage, Ada Lovelace, and John von Neumann became the core engines that drive modern civilization. Their machines were realized first in the 1960s as accounting aids to large transnationals, then in the 1990s as the personal computer, and finally as ecosystems of cloud servers and mobile devices running apps like Uber or Twitter – churning data probably processed by deep learning predictive algorithms. Our world now is practically run by Boolean logic.

[Figure: The design of the Analytical Engine (1840).]

The irony can’t be missed: something so pervasive now, in the age of the internet and personal computing, has as its edifice “logical systems”, a concept so unnatural for humans that it took our species three millennia to go from the development of written language in Sumer to the discovery of deductive inference by Aristotle. If we were to rewind to the time when our kind first emerged, we would have to wait through 99.85% of our entire timeline of existence to cross from being anatomically capable of discovering logic to actually discovering it.

So how did logicians manage to push reasoning from ivory-tower anonymity to ubiquity? The history of computing shows us. The graphical user interface, operating systems, compilers, and the rest were rigorously built onions – the machine’s native binary code wrapped in layer after layer of languages that pull away from “thinking in ones and zeroes” and get us nearer to how humans naturally think about achieving their individual and collective purposes. Get my coffee, sort my email, give me the best schedule, learn my food preferences – these are all natural commands now translated to algorithms in pseudo-code, then to high-level code, then to assembly language, then to machine language.
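
As a small aside on that layering (my own illustration, not an example from the papers under review): even an everyday scripting language quietly performs part of this translation for us. In Python, the standard dis module lets us peek one layer down, at the bytecode behind a near-natural command:

    import dis

    def best_schedule(tasks):
        # A near-natural command: "give me the best schedule".
        return sorted(tasks, key=lambda t: t["deadline"])

    # Peek one layer down: the bytecode the Python interpreter actually executes,
    # itself still several layers above the processor's native machine code.
    dis.dis(best_schedule)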

In the mad rush to create these layers, however, it appears that we have forgotten the largest chink in our armor: our own human thinking process. At ease with the fact that we have developed means to talk to machines, we talked with abandon – programmers coding to solve poorly specified problems, creating architectures based on illusory needs, imagining designs without robust hardware foundations, building flimsy error-checking mechanisms that won’t withstand most real-world inputs. After all, we can always patch – another layer in response to what really is a deficit in thinking.

The emerging response to such a problem is to go back to the language of first-order logic. We used logic to talk to machines; why not use it to think before we talk? Through the eyes of three prominent computer scientists, Tony Hoare, John Guttag and James Horning, we try to see how the computing community extended the legacy of the logic first used by its pioneers into the world of software engineering, and how they used mathematical elegance as the Alexandrian sword to cut through the Gordian knot of programming complexity. Continue reading “Technology of Thought”
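
To make the idea concrete (a generic illustration of mine, not an example drawn from either paper): before a single line of a sorting routine is written, first-order logic can pin down what the routine must accomplish, for instance as a Hoare-style specification:

    \{\, \mathrm{true} \,\}\;\; \mathit{sort}(a) \;\;\{\, \forall i\, (0 \le i < n-1 \rightarrow a[i] \le a[i+1]) \;\wedge\; \mathrm{perm}(a, a_{0}) \,\}

Read: whatever the starting state, after sort(a) the array is ordered and is a permutation of the original a_0. The contract comes before the code, and the code is then judged against it.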

Flaws in Microfoundations

My last article delved into the mathematics of the modern approach to macroeconomics – reducing macroeconomic behavior to an ensemble of optimizing, intelligent, microeconomic agents – the so-called “microfoundations” approach. New classical economists see the microfoundations approach as the final bridging of macroeconomics and microeconomics, spurring hopes of a single economic theory that would explain both individual and aggregate economic phenomena. (Note how this parallels physicists’ dream of uniting large-scale relativistic physics with quantum mechanics.) This has driven an orientation in economic research and pedagogy characterized by complex mathematical models capturing “deep” parameters in tastes, technology, and expectations.
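
A stylized version of what such a model looks like (a textbook sketch, not a model taken from the article): a representative household chooses consumption to maximize lifetime utility subject to a capital accumulation constraint,

    \max_{\{c_t\}} \; \sum_{t=0}^{\infty} \beta^t \, \frac{c_t^{1-\sigma}}{1-\sigma}
    \quad \text{subject to} \quad k_{t+1} = A k_t^{\alpha} + (1-\delta) k_t - c_t .

Here the discount factor \beta, the curvature of preferences \sigma, the technology parameters A and \alpha, and the depreciation rate \delta are the “deep” parameters: they are assumed to describe tastes and technology directly, and hence to stay fixed when policy changes – exactly the assumption the critiques below put under strain.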

Recently, the microfoundations approach came under attack after models with “deep” microeconomic parameters supposedly failed to predict the current global economic crisis or to recommend effective policies to mitigate it. Even recent Nobel laureate Thomas Sargent – one of the pioneers of modern macro – is under fire. Why this is so – as well as earlier, almost forgotten challenges to the microfoundations approach – is the subject of the survey paper below. Read the abstract and full text:

Abstract

The history of economics, for the most part, has been bifurcated between the study of individual economic decisions (microeconomics) and the study of aggregate economic phenomena (macroeconomics). The attempt to marry the two, by incorporating “microeconomic foundations” or “microfoundations” into explanations of macroeconomic observations and predictions, has so far won over a majority of mainstream economists, following the failure of Keynesian models to accurately predict aggregate behavior in the presence of government policy. Robert Lucas Jr. posited that people form “rational expectations” of government policy and act so as to render forecasts unstable.

However, there are persisting theoretical and empirical challenges to this research direction – the empirical instability of macro-models that incorporate microfoundations, the Sonnenschein–Mantel–Debreu result, which may spell a theoretical dead end for economic aggregation, the still unresolved Cambridge capital controversies started by the reswitching argument of Italian economist Piero Sraffa and British economist Joan Robinson in the 1960s, and the missing “representative consumer or firm” that can account for the behavior of the aggregate. These challenges suggest that aggregate economic behavior is almost impossible to deduce from the microeconomic behavior of agents. Post-Keynesianism – which asserts that long-term expectations are largely determined by non-economic, psychological processes exogenous to the model – is posited as a possible way forward.

Continue reading “Flaws in Microfoundations”

Optimal Control in Agent-based Economics

Nobel laureate in Economics Robert Lucas once described economics in terms that would have been familiar to computer scientists:

“This is what I mean by the ‘mechanics’ of economic development – the construction of a mechanical, artificial world, populated by the interacting robots that economics typically studies, that is capable of exhibiting behavior the gross features of which resemble those of the actual world.” – Robert Lucas, Jr., “On the Mechanics of Economic Development”. Journal of Monetary Economics. 1988.

The slide presentation below attempts to survey the use of optimal control theory – a staple of dynamic optimization problems – in economics, a field whose markets are populated by hyper-rational, machine-like beings with fixed preferences and the ability to calculate over infinite horizons.

The presentation is also available here: http://www.slideshare.net/jmmiraflor/optimal-control-in-agentbased-economics-a-survey
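
For readers meeting the machinery for the first time, the canonical continuous-time problem looks like this (a standard textbook Ramsey-type formulation, not necessarily the exact one used in the slides):

    \max_{c(t)} \int_0^{\infty} e^{-\rho t}\, u(c(t))\, dt
    \quad \text{subject to} \quad \dot{k}(t) = f(k(t)) - \delta k(t) - c(t),

with current-value Hamiltonian

    H = u(c) + \lambda \left[ f(k) - \delta k - c \right].

The first-order conditions \partial H/\partial c = 0 and \dot{\lambda} = \rho \lambda - \partial H/\partial k deliver the Euler equation that pins down the agent’s entire consumption path – the “machine-like” calculation over an infinite horizon referred to above.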

When the Ghost in the Machine Fails: The Costs of Customization

A Review of “Understanding BGP Misconfiguration” by Ratul Mahajan, David Wetherall, and Tom Anderson

Among routing protocols, what makes the Border Gateway Protocol (BGP) stand out is that it is defined by local operational configurations, not by a global optimization criterion. Instead of honest, full-information route announcements by homogeneous nodes to their neighbors and the rest of the network, we have arbitrary, “human” policies for selecting which routes to advertise and which routes to accept. Unfortunately, whether intentional or not, these configuration lapses tend to lead to systemic instability in the Internet, as wrongly advertised routes are often systematically re-propagated.

Mahajan, Wetherall, and Anderson’s pioneering approach to understanding the causes and effects of BGP misconfiguration continues to inform us of the dangers posed by policy-based routing, and of how adjustments such as automated verification of configurations and transactional semantics for configuration commands can mitigate them. One can also use ergonomics – the user interface can be redesigned to reduce the possibility of slips. Basically, it appears that they are championing a direction towards a “human-proof” Internet – which is a logical response to all-too-human individual flaws that can have costly effects on the rest of the network. Continue reading “When the Ghost in the Machine Fails: The Costs of Customization”
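
To give a flavor of what “automated verification of configuration” might look like in practice (a toy sketch of my own, not the authors’ tool; the AS numbers and prefix registry are hypothetical):

    import ipaddress

    # Hypothetical registry of prefixes each neighboring AS is allowed to originate.
    ALLOWED_ORIGINS = {
        64500: ["203.0.113.0/24"],
        64501: ["198.51.100.0/24"],
    }

    def verify_announcement(neighbor_as, announced_prefix):
        """Flag announcements outside the neighbor's registered prefixes,
        a common class of misconfiguration (e.g., accidental origination)."""
        prefix = ipaddress.ip_network(announced_prefix)
        allowed = [ipaddress.ip_network(p) for p in ALLOWED_ORIGINS.get(neighbor_as, [])]
        return any(prefix.subnet_of(p) for p in allowed)

    # Example: AS 64500 announcing a prefix registered to AS 64501 is rejected.
    print(verify_announcement(64500, "198.51.100.0/24"))  # False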

BGP: Paying the Price of Anarchy

A Review of “Lecture 3: Interdomain Internet Routing” by Hari Balakrishnan

The Internet is a rowdy place. Of course, we know that it was designed to be a distributed, decentralized system that can scale, but its growth in size, connections and complexity has probably exceeded the earlier projections of its creators. For instance, even with the help of Network Address Translation (NAT), which greatly reduced the pressure on IP address space, and hierarchical addressing via IP prefixes, IPv4 routing table entries continue to increase (from reaching the 50k mark in the late 1990s to exceeding 400k around 2012), in fact exceeding the 512k limit of many major old routers last year (dubbed 512KDay). It is thus amazing to discover that this immense convolution of zettabytes of packet traffic – provided and consumed by profiteering computer networks constrained by government regulations – is actually mediated by select routers that operate on a fairly simple rule – the Border Gateway Protocol (BGP, see this too).

For someone like me who is just learning about computer networks and has only recently been exposed to the protocol suite, BGP came as a shock, a rude discovery of how complicated the real Internet is. And so it took me a while before I had an appropriate grasp of the concept. But BGP may actually be simpler than most routing protocols in terms of what it does. The complication lies in the combinatorial explosion of possible configurations it can take because of the shifting capitalistic interests of autonomous Internet Service Providers (ISPs) competing and cooperating amongst each other. The complexity lies in the anarchy.

In a sense, if the Internet is an economy, then BGP is the common currency everyone uses to pay the price of this anarchy. Interaction between routers running different BGP configurations is almost like currency exchange – subject to the existing commercial agreements and disagreements of the corporate giants operating those routers. The Autonomous Systems (ASes) that organize hosts into groups serve as countries demarcating the markets that content providers tirelessly compete to reach, and all foreign trade has to go through routers running BGP.
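
That commercial logic can be made concrete with a small sketch (my own illustration, not from the lecture; the AS numbers, relationships, and routes are made up): an AS typically prefers routes learned from paying customers over routes from settlement-free peers over routes from the providers it pays – exactly the kind of local policy that BGP’s local-preference attribute encodes.

    # Hypothetical business relationships of our AS with its neighbors.
    RELATIONSHIP = {"AS64500": "customer", "AS64501": "peer", "AS64502": "provider"}

    # Higher local preference wins; revenue-generating routes come first.
    LOCAL_PREF = {"customer": 300, "peer": 200, "provider": 100}

    def best_route(routes):
        """Pick the most preferred route: local preference first,
        shorter AS path as the tie-breaker (a simplified BGP decision process)."""
        return max(routes, key=lambda r: (LOCAL_PREF[RELATIONSHIP[r["neighbor"]]],
                                          -len(r["as_path"])))

    routes = [
        {"neighbor": "AS64502", "as_path": ["AS64502", "AS64510"]},
        {"neighbor": "AS64500", "as_path": ["AS64500", "AS64520", "AS64510"]},
    ]
    print(best_route(routes)["neighbor"])  # AS64500: the customer route wins despite a longer path

Real BGP decision processes involve many more attributes, but the ordering above is what ties route selection to the business relationships described here.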

Balakrishnan was able to effectively discuss how BGP works, including the complications introduced by peering and transit. For my part, it helped to learn how BGP came to be. I tried to trace the evolution of routing protocols from ARPANET to NSFNET to the present-day Internet, in order to see how shifting political and economic considerations affected the network structure, and thus the design and mission of routers. The rest of the essay therefore explains the context that demanded the emergence of something like BGP, and why it fits so snugly with the private-sector-driven model of Internet development. Finally, we offer some perspective using concepts from distributed algorithmic mechanism design (DAMD) in order to provide a plausible explanation of why BGP magically works so well in a context of autonomous ISPs with competing commercial interests. Continue reading “BGP: Paying the Price of Anarchy”

RFC 1958 and the Internet as an Evolutionary System

A Review of “Architectural Principles of the Internet” or RFC 1958

RFC 1958 begins its discussion by describing the Internet, and its architecture, as emerging in an “evolutionary fashion… rather than a Grand Plan”. But how much of the Internet really came about through the mere accumulation of accidents? How much of it is simply a logical consequence of its minimalist design?

When we speak of evolution, we naturally associate it with the Darwinian process of natural selection. Given an initial population of relatively homogeneous organisms, we let them be exposed to mutation and other replication errors, and see whether the deviations these errors produce improve the survivability of the deviants. Survivability, of course, is measured by how well the organisms adapt to their surroundings and to other organisms. If they do adapt, they slowly increase in population. This mechanism is supposed to explain the staggering diversity of life forms and the relative stability of existing ecosystems. In some sense, just as in the Internet, heterogeneity and extremely large scale are inevitable and supported by design[1].

Is this the sense we have in mind when we say that the Internet has developed in an evolutionary fashion? What characteristics does natural selection share with Internet development[2]? And what does this imply? Continue reading “RFC 1958 and the Internet as an Evolutionary System”

Smart vs. Democratic? Public vs. Private? – Political Economy of the “End-to-end” Internet

A Review of “Rethinking the design of the Internet: The end to end arguments vs. the brave new world” by David D. Clark

Of all possible adjectives, “dumb” may be the least likely one to be ascribed to the Internet. Yet a number of computer scientists believe that it is the apt description of an Internet defined by the “end-to-end” concept – a dumb network with smart terminals (vis-à-vis the more intuitive notion of a smart network operated by less smart terminals). A number of those scientists feel that we have to abandon the “end-to-end” concept, providing more and more services at lower and lower levels, towards a more trustworthy and regulated Internet.

David Clark (another of whose articles we discussed in an earlier blog post) is among those pushing for the end of the end-to-end model. He believes that changing user requirements – including what he takes to be a need for more security and regulation – necessitate that we strengthen the core of the network. He notes that the shrinking of government’s “enabler” role and the increasing commercial use of the Internet demand this change, together with a transition towards a paradigm in which government acts as a regulator. He believes that this is consistent with similar developments in other industries such as conventional telecommunications.

On the one hand, with increasing computational capacity and improving hardware performance, it is difficult to keep treating gateways and other network-core components as mere transmission and routing technologies. More and more reliability and quality-of-service functions can be pushed “down”, so to speak, in order to create a better Internet. On the other hand, we also have to go beyond the technical question into the political, as we argue that the end-to-end model – in which we push functions “up” as far as we can towards the nodes and the network edges – is necessary for a democratic and politically free Internet.
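
The classic case for keeping such functions at the edges can be stated with a tiny sketch (a generic illustration of the end-to-end argument, not an example taken from Clark’s paper): even a perfectly reliable network core would not spare the endpoints from checking the transfer themselves, since corruption can occur outside the network – so the check ends up at the edges anyway.

    import hashlib

    def digest(data: bytes) -> str:
        # The integrity check lives at the communicating endpoints, not in the network.
        return hashlib.sha256(data).hexdigest()

    # The sender computes a digest before transmission...
    payload = b"file contents"
    sent_digest = digest(payload)

    # ...and the receiver recomputes it after reassembly, regardless of whatever
    # guarantees the network core did or did not provide along the way.
    received_payload = b"file contents"
    assert digest(received_payload) == sent_digest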

So how do we resolve this? Continue reading “Smart vs. Democratic? Public vs. Private? – Political Economy of the “End-to-end” Internet”

Internet: Cold War’s Brain Child

Review of “The Design Philosophy of the DARPA Internet Protocols” by David D. Clark

The human brain is a very robust and flexible machine. Studies have pointed out the brain’s capacity to retain its cognitive faculties even under severe stress or physical assault. There is also evidence of functional compensation by some parts of the brain in the event that other parts deteriorate or are permanently damaged. Neuroplasticity ensures an adult brain’s capacity to learn new streams of information from disparate and even simultaneous sources. The brain has also been shown capable of interpreting directly introduced electronic signals, and has thus been able to control electromechanical appendages in the case of persons with disabilities.

Since the advent of the modern neurosciences, much has been learned about the human brain. We have, in fact, used abstract models based on its biological features to design artificial intelligence techniques such as the neural networks that underpin much of today’s deep learning. But it seems that we may have inadvertently mimicked the human brain a long time ago – the architecture and design of the Internet itself, as originally conceptualized by the Defense Advanced Research Projects Agency (DARPA), seem to replicate the robustness, plasticity, and efficiency of the human brain.

The 1988 paper by MIT computer scientist David Clark, coming on the heels of the further development and widespread adoption of TCP/IP (first presented some fifteen years earlier) and the subsequent rise of inter-networking, attempts to condense the features of, and the driving logic behind, the network that started it all – the Advanced Research Projects Agency Network (ARPANET) – as well as the introduction of packet switching and, later, TCP/IP. He traces the goals and motivations of DARPA in designing ARPANET, and how they impacted and set the course of the Internet’s evolution. Continue reading “Internet: Cold War’s Brain Child”

One Protocol to Connect them All

A Review of “A Protocol for Packet Network Intercommunication” by Vinton G. Cerf and Robert E. Kahn

“You can resist an invading army; you cannot resist an idea whose time has come.”

– attributed to Voltaire[1]

Four decades seem too short a time for any one group of people to change the course of history. But that is exactly what Robert Kahn of DARPA and his recruit, Vinton Cerf of Stanford University, did. Their technical article “A Protocol for Packet Network Intercommunication” – originally published in 1974 and read by a small subset of engineers and computer scientists – laid the foundations of the worldwide Internet revolution and in the process triggered similar revolutions in almost all aspects of modern life: entertainment, education, economics, etcetera. Cerf and Kahn’s brainchild, what would later be known as the Transmission Control Protocol (TCP) and the Internet Protocol (IP), or simply TCP/IP, effectively inter-connected computer networks previously bounded by geography. In effect, Cerf and Kahn delivered humanity’s final death blow to physical distance[2].

Cerf and Kahn did this by proposing a system for interconnecting the multiple networks – then operating via packet switching (courtesy of Paul Baran’s ideas in the early 1960s) – that had been sprouting like mushrooms in universities and communication companies ever since the ARPANET began merely half a decade earlier[3]. Their motivation was similar to that of those who created the packet-switching networks they were trying to interconnect: the sharing of computer resources. Going beyond a system that only allows computers from one or two schools or buildings to communicate, Cerf and Kahn designed a common language that enables computers – connected on networks with disparate physical, media, and link layers – to pass and receive data among each other.

The problems facing Cerf and Kahn with regard to the differing implementations of packet switching were very clear. Networks often have distinct addressing schemes. Different networks also accept data of different maximum sizes, which may force the adoption of the smallest maximum size as a common denominator. Time delays also differ, an important element in the transmission of data. There is no common restoration algorithm in the case of errors. Routing and fault detection vary. How did Cerf and Kahn attack all these? Continue reading “One Protocol to Connect them All”
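
One of those problems, the mismatch in maximum data sizes, already hints at the shape of their solution: break data into fragments that fit the most restrictive network and reassemble them at the destination. A toy sketch of the idea (my own illustration, not the paper’s actual fragment format):

    def fragment(payload: bytes, mtu: int):
        # Split the payload into pieces no larger than the smallest network's
        # maximum size, tagging each piece with its offset.
        return [(offset, payload[offset:offset + mtu])
                for offset in range(0, len(payload), mtu)]

    def reassemble(fragments):
        # The destination reorders by offset and stitches the payload back together.
        return b"".join(chunk for _, chunk in sorted(fragments))

    message = b"A protocol for packet network intercommunication"
    assert reassemble(fragment(message, mtu=8)) == message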