  3 THE LANGUAGE OF LOGIC

  The seven lines of the “Hello, world!” code at the beginning of this book—written in Microsoft’s C# language—do nothing until they are swallowed and munched by a specialized program called a compiler, which translates them into thirty-odd lines of “Common Intermediate Language” (CIL) that look like this:

  This, as the name of the language indicates, is a mediating dialect between human and machine. You could write a “Hello, world!” program in another Microsoft language like Visual Basic and get almost exactly the same listing, which is how the program is stored on disk, ready to run. When you do run it, the CIL is converted yet again, this time into machine code:

  Now we’re really close to computing something, but not quite yet. Machine code is a very low-level programming language, one that encodes each instruction as a number. The numbers are displayed above in hexadecimal format, which is easier for humans to read than the binary numbers (“101010110001011 …”) sent to the computer’s central processing unit (CPU). The CPU accepts these numbers, each of which represents an instruction native to that particular type of CPU; it reacts to each number by tripping its logic gates, which is to say that a lot of physical changes cascade in a purely mechanical fashion through the chips and platters in that box on your desk, and “Hello, world!” appears on your screen.

  But, but—what are “logic gates”? Before I began my investigation of the mechanics of computing, this phrase evoked some fuzzy images of ones and zeros and intricate circuits, but I had no idea how all of this worked together to produce “Hello, world!” on my screen. This is true of the vast majority of people in the world. Each year, I ask a classroom of undergraduate students at Berkeley if they can describe how a logic gate works, and usually, out of a hundred-odd juniors and seniors, I get one or two who are able to answer in the affirmative, and typically these are computer science or engineering majors. There are IT professionals who don’t know how computers really work; I certainly was one of them, and here is “Rob P.” on the “programmers” section of stackexchange.com, a popular question-and-answer site:

  This is almost embarrassing [to] ask … I have a degree in Computer Science (and a second one in progress). I’ve worked as a full-time .NET Developer for nearly five years. I generally seem competent at what I do.

  But I Don’t Know How Computers Work! [Emphasis in the original.]

  I know there are components … the power supply, the motherboard, ram, CPU, etc … and I get the “general idea” of what they do. But I really don’t understand how you go from a line of code like Console.Readline() in .NET (or Java or C++) and have it actually do stuff.1

  How logic gates “do stuff” is dazzlingly simple. But before we get to their elegant workings, a little primer on terminology: you will remember that the plus sign in mathematical notation (as in “2 + 3”) can be referred to as the “addition operator.” The minus sign is similarly the “subtraction operator,” the forward slash is the “division operator,” and so on. Mostly, we non-mathematicians treat the operators as convenient, almost-invisible markers that tell us which particular kindergarten-vintage practice we should apply to the all-important digits on either side of them. But there is another way to think about operators: as functions that consume the digits and output a result. Perhaps you could visualize the addition operator as a little machine like this, which accepts inputs on the left and produces the output on the right:

  So if you give this “Add” machine the inputs “3” and “2,” it will produce the result “5.”

  A “Subtract” operator might be imagined like this:

  So, giving this “Subtract” operator a first input of “4.2” and a second input of “2.2” will cause it to output “2.0.”
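  To make the picture of operators-as-machines concrete, here is a minimal sketch in Python (my illustration language for these asides); each operator becomes a function that consumes its inputs and returns a result, and the function names are just labels for the machines pictured above:

    # Each operator is a little machine: inputs go in, a result comes out.
    def add(a, b):
        return a + b          # the "Add" machine

    def subtract(a, b):
        return a - b          # the "Subtract" machine

    print(add(3, 2))           # 5
    print(subtract(4.2, 2.2))  # 2.0 (floating-point arithmetic may show a tiny rounding error)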

  The mathematical addition and subtraction operators above act on numbers, and only on numbers. In his 1847 monograph, The Mathematical Analysis of Logic, George Boole proposed a similar algebra for logic. In 1854, he corrected and extended these ideas about the application of symbolic algebra to logic with a seminal book, An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities. In “Boolean algebra,” the only legal inputs and outputs are the logical values “true” and “false”—nothing else, just “true” and “false.” The operators which act on these logical inputs are logical functions such as AND (conjunction), OR (disjunction), and NOT (negation). So the logical AND operator might look like this:

  The AND or conjunction operator, according to Boole, outputs “true” only when both inputs are “true.” That is, it works like this:

  Input 1 Input 2 Output

  false false false

  false true false

  true false false

  true true true

  If you gave the Boolean operator AND a first input of “false” and a second input of “true,” it would output “false.”

  The output of “(Teddy can fly) AND (Teddy is a dog)” would therefore be “false.” But the output of “(Teddy is a dog) AND (Teddy has a keen sense of smell)” would be “true.”
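  A minimal sketch of the conjunction operator in Python, which reproduces the truth table above (Python’s built-in and keyword already behaves this way for logical values):

    # Boole's AND: outputs True only when both inputs are True.
    def AND(a, b):
        return a and b

    # Walk through the four rows of the truth table.
    for a in (False, True):
        for b in (False, True):
            print(a, b, AND(a, b))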

  Other operators work similarly. The “truth table” for the Boolean OR operator would look like this:

  Input 1 Input 2 Output

  false false false

  false true true

  true false true

  true true true

  So, the output of “(Teddy can fly) OR (Teddy is a dog)” would be “true.” That is, a first input of “false” and a second input of “true” would produce the output “true.”
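  The disjunction operator, sketched the same way in Python:

    # Boole's OR: outputs True when at least one input is True.
    def OR(a, b):
        return a or b

    # "(Teddy can fly) OR (Teddy is a dog)"
    print(OR(False, True))  # True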

  If one were to adopt the convention that “false” was represented by the digit “0” and “true” by “1,” the functioning of the OR operator could be represented as follows:

  Input 1 Input 2 Output

  0 0 0

  0 1 1

  1 0 1

  1 1 1

  And so we could draw our OR operator example like this:

  The XOR operator—sometimes referred to as the “exclusive-OR” operator—is a variation of the OR operator. It outputs “true” if either, but not both, of the inputs is “true.”

  Input 1 Input 2 Output

  0 0 0

  0 1 1

  1 0 1

  1 1 0

  You can think of XOR as an “either-or” operator—it returns “true” if one of the inputs is true, but returns “false” if both of the inputs are “true” or if both of the inputs are “false.” For example, a future robot-run restaurant might use an XOR operation to test whether your sandwich order was valid: “(With soup) XOR (With salad)”—you could have soup or salad, but not both or nothing. The last line of the truth table for the XOR operator could be drawn like this:

  Boolean algebra allows the translation of logical statements into mathematical expressions. By substituting ones and zeros into our soup-XOR-salad statement above, we can get back another number we can use in further operations.
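  A sketch of that substitution in Python, using the built-in ^ operator, which computes exclusive-or on ones and zeros:

    # XOR on ones and zeros: outputs 1 when exactly one input is 1.
    def XOR(a, b):
        return a ^ b  # Python's built-in exclusive-or

    # The robot restaurant's test: "(With soup) XOR (With salad)".
    print(XOR(0, 0))  # 0 -> invalid order: neither soup nor salad
    print(XOR(0, 1))  # 1 -> valid: salad only
    print(XOR(1, 0))  # 1 -> valid: soup only
    print(XOR(1, 1))  # 0 -> invalid: both soup and salad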

  Now here’s the magical part: you can build simple physical objects—logic gates—that mechanically reproduce the workings of Boolean operators. Here is an AND logic gate built out of LEGO bricks, cogs, and wheels by Martin Howard, a physicist from the UK:

  This is a push-pull logic gate. The two levers on the left are for input: a pushed-in lever represents a value of “true” or “1,” while a lever in the “out” position represents a “false” or “0.” The mechanism of the logic gate—its gears and rods—has been set up so that when you push in the input levers on the left, the output lever on the right automatically takes a position (in or out) that follows the workings of the Boolean logical operator AND. In figure 3.14, both input levers have been pushed in (set to “true” and “true”), and so the output lever has slid into a position representing “true” or “1.” Or, in Boolean terms, “true AND true = true.” Any possible positioning of the input levers will produce the correct positioning of the output lever. The logic gate always follows the truth table for the AND operator.
  And here is a push-pull XOR logic gate that mimics the workings of the Boolean XOR operator:

  If you’re still having trouble visualizing how these LEGO logic gates work, you can watch videos at http://www.randomwraith.com/logic.html. A physical logic gate is any device that—through artful construction—can correctly replicate the workings of one of the Boolean logical operators.

  You may still be wondering how all of this leads us toward computation, toward calculation. Well, as it happens, you can also represent numbers in a binary fashion, with ones and zeros, or with absence and presence, off states and on states. In binary notation, the decimal number “3” is “11.” How does this work? You’ll recall from elementary school that in the decimal system, the position of a digit—from right to left—changes what that digit means. If you write the decimal number “393,” you are putting the digits into columns like this:

  Hundreds (10²) Tens (10¹) Ones (10⁰)

  3 9 3

  So what you’re representing when you write “393” is something like “three hundreds, plus nine tens, plus three ones” or “(3 × 10²) + (9 × 10¹) + (3 × 10⁰).” A more precise way to think about the columns in the decimal system is to say each column, from right to left, represents an increase by a factor of ten. The small superscript number—the exponent—tells you how many times to use the number in a multiplication by itself. So, the “Hundreds” column represents quantities of “10²” or “10 × 10.” Any number to the power of 1 gives you the number itself, so “10¹” is “10”; and any number to the power of zero is just “1,” so “10⁰” is “1.”
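  A one-line check of that expansion in Python:

    # "393" expanded by place value: hundreds, tens, ones.
    print((3 * 10**2) + (9 * 10**1) + (3 * 10**0))  # 393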

  In base-2 or binary notation, our column headings would look like this:

  256s (2⁸) 128s (2⁷) 64s (2⁶) 32s (2⁵) 16s (2⁴) 8s (2³) 4s (2²) 2s (2¹) 1s (2⁰)

  And you would write “393” in binary like this:

  1 1 0 0 0 1 0 0 1

  When you write the binary number “110001001,” you are putting a “1” in every column that you want to include in your reckoning of the quantity you are trying to represent. In a base-2 system, you have only two symbols you can use, “0” and “1” (as opposed to the ten symbols you use in a base-10 system, “0” through “9”). So, with “110001001,” you are representing something like “256, plus 128, plus 8, plus 1” or “(1 × 2⁸) + (1 × 2⁷) + (1 × 2³) + (1 × 2⁰)”—which equals decimal “393.”
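  Python can confirm the arithmetic; int(s, 2) reads a string of digits as a base-2 number, and bin() converts back the other way:

    # Base-2 positional notation, checked three ways.
    print(int("110001001", 2))                                # 393
    print(bin(393))                                           # 0b110001001
    print((1 * 2**8) + (1 * 2**7) + (1 * 2**3) + (1 * 2**0))  # 393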

  Decimal “9” is the same as binary “1001,” and decimal “5” is binary “101”—all very baffling to the decimal-using brain, but completely consistent and workable. So if you wanted to add “9” to “5” in binary, it would look like this:

      1001
    + 0101
    ------
      1110

  And binary “1110” is of course equivalent to “8 + 4 + 2 + 0” or decimal “14.” From the above, you can deduce the rules of binary addition:

  0 + 0 = 0

  0 + 1 = 1

  1 + 0 = 1

  1 + 1 = 0, and carry 1

  The last rule may seem a bit mystifying until you recall how addition works in decimal arithmetic. In decimal, you use the digits “0” through “9” to represent numbers; when the sum of the digits in a column exceeds “9” you write the least significant figure (“4” in the number “14”) in that column and carry the more significant figure (“1” in the number “14”) to the next column on the left. In binary notation, you can only use the digits “1” and “0”—so when you add “1” to “1,” you write “0” in that column and carry “1.”
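  Those four rules are enough to mechanize binary addition. Here is a minimal Python sketch, working right to left just as you would on paper (the function name is mine):

    # Add two binary strings, column by column, using the rules above.
    def add_binary(x, y):
        width = max(len(x), len(y))
        x, y = x.zfill(width), y.zfill(width)  # pad with leading zeros
        result, carry = "", 0
        for a, b in zip(reversed(x), reversed(y)):  # right to left
            total = int(a) + int(b) + carry
            result = str(total % 2) + result  # the digit you write down
            carry = total // 2                # the digit you carry left
        return ("1" + result) if carry else result

    print(add_binary("1001", "101"))  # 1110, i.e., 9 + 5 = 14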

  Now this may begin to remind you of Boolean logic—you’re taking inputs of zeros and ones and sending out zeros and ones. In fact, except for the “carry 1” part of the last rule, this looks very much like the truth table for the XOR logical operator:

  Input 1 Input 2 Output

  0 0 0

  0 1 1

  1 0 1

  1 1 0

  It turns out that if you put together certain logical operators in clever ways, you can completely replicate addition in binary, including the “carry 1” part. Here is a schematic for a “half adder”—built by combining an XOR operator and an AND operator—which takes in two single binary digits and outputs a sum and an optional digit to carry.

  So the half adder would function as follows:

  Input 1 Input 2 Sum Carry

  0 0 0 0

  0 1 1 0

  1 0 1 0

  1 1 0 1
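  In Python the half adder reduces to a pair of one-liners, with ^ playing the XOR gate and & playing the AND gate:

    # A half adder: XOR produces the sum digit, AND produces the carry.
    def half_adder(a, b):
        return a ^ b, a & b  # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(a, b, "-> sum", s, "carry", c)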

  And since we can build logic gates—physical objects that replicate logical operations—we should be able to build a physical half adder. And, indeed, here is a LEGO half adder built by Martin Howard.

  At last we have computation, which is—according to the Oxford English Dictionary (OED)—the “action or process of computing, reckoning, or counting; arithmetical or mathematical calculation.” A “computer” was originally, as the OED also tells us, a “person who makes calculations or computations; a calculator, a reckoner; spec. a person employed to make calculations in an observatory, in surveying, etc.” Charles Babbage set out to create a machine that would replace the vast throngs of human computers who worked out logarithmic and trigonometric tables; what we’ve sketched out above are the beginnings of a mechanism which can do exactly that and more. You can use the output of the half adder as input for other mechanisms, and also continue to add logic gates to it to perform more complex operations. You can hook two half adders together and add an OR logic gate to make a “full adder,” which will accept two binary digits and also a carry digit as input, and will output a sum and a carry digit. You can then put together cascades of full adders to add binary numbers eight columns wide, or sixteen, or thirty-two. This adding machine “knows” nothing; it is just a clever arrangement of physical objects that can go from one state to another, and by doing so cause changes in other physical objects. The revolutionary difference between it and the first device that Charles Babbage built, the Difference Engine, is that it represents data and logic in zeros and ones, in discrete digits—it is “digital,” as opposed to the earlier “analog” devices, all the way back through slide rules and astrolabes and the Antikythera mechanism. Babbage’s planned second device, the Analytical Engine, would have been a digital, programmable computer, but the technology and engineering of his time were not able to implement what he had imagined.
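  A hedged Python sketch of that construction: a full adder built from two half adders and an OR gate, then a cascade of full adders wide enough to add binary numbers eight columns wide (the function names are mine):

    # Two half adders plus an OR gate make a full adder.
    def half_adder(a, b):
        return a ^ b, a & b  # (sum, carry)

    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2  # the OR gate merges the two possible carries

    # Cascade eight full adders to add two 8-bit numbers, column by column.
    def ripple_add(x, y, width=8):
        carry, result = 0, 0
        for i in range(width):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(ripple_add(9, 5))  # 14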

  Once you have objects that can materialize both Boolean algebra and binary numbers, you can connect these components in ways that allow the computation of mathematical functions. Line up sufficiently large numbers of simple on/off mechanisms, and you have a machine that can add, subtract, multiply, and through these mathematical operations format your epic novel in less time than you will take to finish reading this sentence. Computers can only compute, calculate; the poems you write, the pictures of your family, the music you listen to—all these are converted into binary numbers, sequences of ones and zeros, and are thus stored and changed and re-created. Your computer allows you to read, see, and hear by representing binary numbers as letters, images, and sounds. Computers may seem mysteriously active, weirdly alive, but they are mechanical devices like harvesting combines or sewing machines.

  You can build logic gates out of any material that can accept inputs and switch between distinct states of output (current or no current, 1 or 0); there is nothing special about the chips inside your laptop that makes them essential to computing. Electrical circuits laid out in silicon just happen to be small, cheap, relatively reliable, and easy to produce in mass quantities. The Digi-Comp II, which was sold as a toy in the sixties, used an inclined wooden plane, plastic cams, and marbles to perform binary mathematical operations. The vast worlds inside online games provide virtual objects that can be made to interact predictably, and some people have used these objects to make computing machines inside the games—Jong89, the creator of the “Dwarven Computer” in the game Dwarf Fortress, used “672 [virtual water] pumps, 2000 [faux wooden] logs, 8500 mechanisms and thousands of other assort[ed] bits and knobs like doors and rock blocks” to put together his device, which is a fully functional computer that can perform any calculation that a “real” computer can.2 Logic gates have been built out of pneumatic, hydraulic, and optical devices, out of DNA, and out of flat sticks connected by rivets. Recently, some researchers from Kobe University in Japan announced, “We demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment.”3

  Many years after I stopped working professionally as a programmer, I finally understood this, truly grokked this fact—that you can build a logic gate out of water and pipes and valves, no electricity needed, and from the interaction of these physical objects produce computation. The shock of the revelation turned me into a geek party bore. I arranged toothpicks on dinner tables to lay out logic-gate schematics, and harassed my friends with disquisitions about the life and work of George Boole. And as I tried to explain the mechanisms of digital computation, I realized that it is a process that is fundamentally foreign to our common-sense, everyday understanding. In his masterly book on the subject, Code: The Hidden Language of Computer Hardware and Software, Charles Petzold uses telegraphic relay circuits—built out of batteries and wires—to walk the lay reader through the functioning of computing machines. And he points out:

  Samuel Morse had demonstrated his telegraph in 1844—ten years before the publication of Boole’s The Laws of Thought …