Back, back, back again!
I’ve been thinking, lately, about what humans are good for. Or more to the point, what I am good for. I’m an information worker, effectively. I am, professionally, an input/output device for a machine, or more generally, for the network of machines. In another turn of phrase, I’m a “content creator”. Yes, I include computer programs as “content” here, which is perhaps more general than people care for. But I think it makes sense to think of it this way: from the perspective of the computer, the user is an input/output device like any other. It’s difficult to draw the line exactly here: in fact, it’s the keyboard, mouse, screen, speakers, cameras, microphones, and so on that are the input/output devices. But the information channeled back into the machine by those means comes from the human. In some sense, these interfaces are equipment that the computer uses to interact with the world. One could equally ask where, exactly, the line is with a more “traditional” device: where, exactly, is the line in a hard disk? In fact the main computer doesn’t typically interact with the disk directly at all; there’s another small computer on the disk that, in turn, controls the motors and servos that spin the platters and move the read head; information is written to and read back from the magnetic surface, and the data read flows back to the main computer.
All of these gymnastics are effectively inconsequential, since we adopt the polite convention that parts that can be physically separated from the machine are separate components, and that the interface is the mechanism of their connection. The issues arise when one wishes to differentiate between different ways of considering and dividing the devices up. It is difficult to think at multiple levels of abstraction simultaneously. Compare the several layers of abstraction that underlie the internet, and how little it all matters at the level of the concept of the web.
But I am astonished at how effectively the characteristics of a human’s input and output can be mimicked by the machine. It is not surprising that it should be so: one of the basic principles of cybernetics is that a universal computer can imitate any input and output, within an arbitrary limit of precision (I don’t claim that one can get around the limits of computable functions this way, or the limits of time complexity; but in contexts where one is modeling a continuous process, one can’t expect perfect precision or proofs of correctness, so the space can be approximated to an arbitrary degree of precision). The reader should beware of these sorts of meditations on the inner nature of computers and humans, since they are all approximations. The essential question is: can the flux of input that a human produces in response to output be adequately replicated by the computer? This does not imply any question about computability, and it also gives little to no information about how to do such a thing effectively.
So suppose, then, that a machine can win the imitation game; that is, under arbitrary contexts it can impersonate a human arbitrarily well. Perhaps to do this one would need to produce an anthropoid body and let the machine explore the world, then set it down at a teletype to play the imitation game like anyone else. I don’t know. What I do know is that even without these extravagances, a reasonable level of verisimilitude is possible.
So then what is the purpose of humans? Now this is, of course, a situation where the owl of Minerva takes flight in the evening, since the question of what will become of humans in the face of automation is not new, but information workers have heretofore been exempt: power looms are a far cry from an automatic novel generator, though the impacts of the two on their domains are perhaps analogous. Generally, what is the point of doing anything the computer can do better? In anything I do, I already have to be as good as other humans; why bother when the machine can beat anybody?
The typical move here is to invoke the slogan “humans are the guarantor of meaning.” The information in the computer is, strictly, nonsense: the machine moves the signals and manipulates them as a result of the raw mechanics of its circuitry. Any meaning is projected into them by humans. Take, for example, a bunch of numbers on the disk. Now it may be that, under the proper decoding, these numbers can be mapped onto characters, and the characters in turn interpreted as representing words, and so on. But the choice of interpretive frame, if you will, makes all the difference: trying to read back a text file using the wrong character encoding produces noise. One could say that it is the prerogative of humans to give this interpretation, and that therefore we are an irreducible part of the information system. QED.
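The wrong-encoding point can be made concrete in a few lines of Python — a minimal sketch, with an arbitrary example string; the bytes themselves don’t change, only the interpretive frame applied to them:

```python
# The same bytes "mean" different things depending on the decoding applied.
data = "héllo".encode("utf-8")   # six bytes: 68 c3 a9 6c 6c 6f

print(data.decode("utf-8"))    # the intended reading: héllo
print(data.decode("latin-1"))  # same bytes, wrong frame: mojibake
```

The second call doesn’t fail; it cheerfully produces a different, nonsensical string, because nothing in the bytes themselves announces which frame is the right one.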
Except not so fast: why is an interpretation necessary at all? Imagine some machine hooked up to a fan and a thermometer, so that it uses the fan to keep the temperature read below a certain threshold, but also seeks to minimize how much the fan runs. You can imagine that it gets one number from the thermometer (input) and sends another to the fan (output). For you and me, it is easy enough to imagine getting the temperature and setting the fan to a certain speed and to only see things at this level of abstraction. But why does the machine care? All it knows is that the input number is supposed to be in a certain range, and setting the output number has an impact on the input number, so that it discovers some procedure to optimally set the output in response to the input. It might as well be dispensing water to plants in response to measurements of soil humidity — the actual mapping of inputs to outputs might be different, but the machine wouldn’t have any information about the world except this mapping. No interpretation in sight, but the system might still effectively manage the temperature.
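The thermostat-or-irrigator point can be sketched as code. This is a deliberately naive proportional controller with made-up names and thresholds; the point is that nothing in the function distinguishes temperature from soil dryness — the mapping from input number to output number is all there is:

```python
# A minimal feedback rule: one number in, one number out. The controller
# "knows" nothing about fans, plants, or the world -- only the mapping.

def make_controller(threshold, gain=0.5):
    def control(reading):
        # Output is proportional to how far the reading exceeds the
        # threshold; zero when the reading is already in range.
        return max(0.0, gain * (reading - threshold))
    return control

# The very same rule, wired to two different "worlds":
fan_speed = make_controller(threshold=25.0)    # reading: degrees C, say
water_flow = make_controller(threshold=0.25)   # reading: soil dryness, say

print(fan_speed(30.0))    # 2.5
print(water_flow(0.75))   # 0.25
print(fan_speed(20.0))    # 0.0 -- in range, do nothing
```

Whatever “interpretation” the two deployments have lives entirely outside the code, in the fan and the valve that the output numbers happen to drive.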
Perhaps the “interpretation” here is the material being of the object whose temperature is being manipulated — the “interpretation” of the input number is the thermal energy of the thing measured, and the “interpretation” of the output number is the kinetic energy of the fan. To the extent that these external things are changed by the inner mechanism of the computer, those obscure internal machinations have a “meaning.” And to this extent, the cybernetic feedback between the interior of the machine and the exterior is the same, essentially, as that which obtains in the human organism. This last is a statement of what for the sake of search engine optimization I shall call “computationalism” — the position that human cognition works basically like a computer.
So what are humans for? Perhaps there isn’t anything — why should there be? What I mean is, what should I do? What should anyone do? I can take care of myself, of my cat, of my apartment, of my family. There isn’t (yet) a machine to do those things, and why would we want to automate away things we do for enjoyment? The purpose of those things isn’t to do things effectively but to enjoy doing them. And it’s only with great difficulty that a machine can enjoy for us. Even trying to imagine what it would mean for a machine to automatically enjoy spending time with my family for me is difficult. And why would anybody do such a thing? Well, if my family were all replaced by machines I’d not enjoy spending time with them as much, so I’d send the machine to do it on my behalf. But that’s a story for another time.