Technology of Thought

Review of “Formal specification as a design tool” by John V. Guttag and James J. Horning [1] and “The emperor’s old clothes” by C.A.R. Hoare [2]

Sometimes it is easy to take for granted that the foundations of modern computing systems were built by women and men using the most abstract mathematical tools available in their time. It is well known that the pioneers of computing were logicians first – mathematicians trained in the science and philosophy of reasoning. Logicians were purists, dealing with concepts far removed from most practical human activity, even more remote than the work of the geometers and forerunners of analysis, who quickly found applications for their work in physics and engineering.

But in less than a hundred years, the machines dreamt up and thought-experimented by logicians and mathematicians like Charles Babbage, Ada Lovelace, Alan Turing, and John von Neumann became the core engines that drive modern civilization. Their machines were realized first in the 1960s as accounting aids for large transnationals, then in the 1990s as the personal computer, and finally as ecosystems of cloud servers and mobile devices running apps like Uber or Twitter – churning data that is likely processed by deep-learning predictive algorithms. Our world is now practically run by Boolean logic.

[Figure: The design of the Analytical Engine (1840).]

The irony can’t be missed: something so pervasive now in the age of the internet and personal computing rests on “logical systems”, a concept so unnatural for humans that it took our species three millennia to go from the development of written language in Sumer to the discovery of deductive inference by Aristotle. If we were to rewind to the time when our kind first emerged, we would have to wait through 99.85% of our entire timeline of existence to cross from being anatomically capable of discovering logic to actually discovering it.

So how did logicians manage to push reasoning from ivory-tower anonymity to ubiquity? The history of computing shows us. The graphical user interface, operating systems, compilers, and the rest were rigorously built onions – the machine’s native binary code wrapped in layer after layer of languages that pull us away from “thinking in ones and zeroes” and bring us nearer to how humans naturally think their individual and collective purposes can be achieved. Get my coffee, sort my email, give me the best schedule, learn my food preferences – these natural commands are now translated into algorithms in pseudo-code, then into high-level code, then into assembly language, then into machine language.

In the mad rush to create these layers, however, it appears we have forgotten the largest chink in our armor: our own human thinking process. At ease with the fact that we had developed the means to talk to machines, we talked with abandon – programmers coding to solve poorly specified problems, creating architectures based on illusory needs, imagining designs without robust hardware foundations, building flimsy error-checking mechanisms that won’t withstand most real-world inputs. After all, we can always patch – another layer in response to what is really a deficit in thinking.

The emerging response to this problem is to go back to the language of first-order logic. We used logic to talk to machines; why not use it to think before we talk? Through the eyes of three prominent computer scientists, Tony Hoare, John Guttag, and James Horning, we try to see how the computing community extended the legacy of the logic first used by its pioneers into the world of software engineering, and how they wielded mathematical elegance as the Alexandrian sword to cut through the Gordian knot of programming complexity.
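To give a flavor of what “thinking in logic before talking in code” can look like in practice – a rough sketch of the general idea, not the notation Guttag and Horning actually use – one can state an operation’s pre- and postconditions in first-order style and only then write the body, checking the contract with ordinary assertions. All names here are illustrative.

```python
def binary_search(xs: list[int], target: int) -> int:
    """Return an index i with xs[i] == target, or -1 if target is absent.

    Specification, first-order style:
      precondition:  for all i, 0 <= i < len(xs) - 1 implies xs[i] <= xs[i + 1]
      postcondition: (result == -1 and target not in xs)
                     or (0 <= result < len(xs) and xs[result] == target)
    """
    # The precondition is checked before the code relies on it.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "xs must be sorted"

    lo, hi = 0, len(xs) - 1
    result = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            result = mid
            break
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1

    # The postcondition is checked before the result is returned.
    assert (result == -1 and target not in xs) or xs[result] == target
    return result
```

Calling `binary_search([1, 3, 5, 8], 5)` returns 2; the point is that the specification exists, and can be agreed on and criticized, before a single line of the body is written.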