Many people are familiar with Moore's Law. I have mentioned it in a previous blog entry; I may have waxed boring, but I'm blogging to improve my writing skills, so do give me a break. Essentially the law (more like a forecast, really) states that processing power doubles every two years. The forecast also helps technology companies decide how much R&D they need to do if they want to keep up, which makes it a bit of a self-fulfilling prophecy; but the trend was there to find before it was found, so one can guess that even if it had never been identified we would still have roughly the computing power we do.

Now, the futurist would say, let us assume that Moore's Law will hold for the coming decades. Soon we will have a computer so powerful that it will be able to emulate the human brain. Suppose that intelligent computer were used to design a computer more powerful and intelligent than itself. Then we build its design and use the result to design an even more powerful and intelligent computer, and so on. The rate of progress is now faster than exponential, because exponential progress is what we got with fixed human intelligence doing the designing. If we have a computer four times better than the human mind, it can design one twice as powerful as itself in six months rather than two years. We then use that computer, eight times more powerful than the human brain, to develop in three months another that is sixteen times more powerful. The result is the Singularity - an escalating spike of faster-than-exponential technological progress. Before four years had passed, the trend I have just described would make computing power practically infinite.
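To see where that "four years" comes from, here is a quick back-of-the-envelope sketch in Python - my own toy model of the scenario above, with made-up variable names, not anything the futurists actually publish:

    # Toy run of the runaway scenario: each doubling of computing power takes
    # half as long as the one before, starting from the familiar two years.
    interval = 2.0   # years needed for the first doubling
    power = 1.0      # computing power relative to a human-equivalent machine
    elapsed = 0.0

    for _ in range(12):
        elapsed += interval
        power *= 2
        interval /= 2
        print(f"{elapsed:.4f} years in: {power:g}x the human brain, "
              f"next doubling in {interval} years")

    # elapsed is 2 + 1 + 0.5 + 0.25 + ..., a geometric series that converges
    # to 4, while power doubles at every step - hence "practically infinite"
    # before four years have passed.

After only a handful of lines of output the intervals are measured in weeks, which is the whole point of the scenario.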
The chart below illustrates this:
Here we see the Singularity forming, based on the scenario above.
If we take the base 2 logarithm of computing power over this period, we get the following result:
Normal technological growth would show up as a straight line. The scenario of the Singularity results in a perverse acceleration of technological development.
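If you would rather see the numbers than the chart, this small sketch (the same toy model as before; shrink is just my own flag name) prints the base 2 logarithm of computing power under ordinary Moore's Law growth and under the runaway scenario. The first column climbs by one every two years; the second shoots upward as the four-year mark approaches:

    def log2_doublings(t_years, shrink=False):
        # Number of doublings (i.e. log2 of relative computing power) by time t.
        # shrink=False: ordinary Moore's Law with a fixed two-year doubling period.
        # shrink=True:  the runaway scenario, each interval half the previous one.
        interval, doublings, elapsed = 2.0, 0, 0.0
        while doublings < 64 and elapsed + interval <= t_years:
            elapsed += interval
            doublings += 1
            if shrink:
                interval /= 2
        return doublings

    for t in (1, 2, 3, 3.5, 3.9, 3.99):
        print(f"year {t:>4}: Moore's Law {log2_doublings(t):>2}, "
              f"runaway {log2_doublings(t, shrink=True):>2}")

Plotted on a log base 2 axis, those two columns are exactly the straight line and the accelerating curve described above.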
The scenario I have outlined above is fairly simplistic, but it is essentially what people expect from the events leading up to the Singularity. There are many doomsday or utopian predictions for what the fate of the human race will be in all this, and it is these concerns that I wish to address. Futurists believe that the Singularity is inevitable - not in the way you might believe a minor accident is inevitable if you drive around for long enough, but in the genuine, no-possible-way-of-avoiding-it sense. They do not believe it is even conceivable that the Singularity will not occur. In the words of John J. Xenakis, "No. No way. It's impossible. The Singularity cannot be stopped. It's as inevitable as sunrise."* I believe I can prove that the Singularity cannot occur.
There seem to be two general schools of thought on how the Singularity will come about. One school believes that the first hyper-intelligences will be computer or "robot" intelligences; the other, which I shall call the Singularity Cult, believes that they will be human intelligences augmented by machines. The Singularity Cult hopes that it will be them.
I do not believe that either of these scenarios will result in a singularity, because I question the assumptions that go into them. The assumptions are these:
1. Computers can become genuinely aware.
2. Machine-mind neural interface is possible.
3. Production of new intelligences will accelerate.
If assumption one is incorrect, the Singularity can never occur. Let us assume for the moment that intelligence is an emergent property of the type of architecture that the human brain uses. We can create computers with similar architecture - in fact IBM have recently done so (they say) - and they may be able to solve problems in an intelligent way. But everything that chip does will be directed by human intervention. It will not by itself decide to create anything apart from the networks and programs it needs to do what we ask it to do. It is not self-directing, and I do not see any evidence that we will ever be able to create a computer that is self-directing. It will always and only take those directives and inputs we give it.
If only assumption two is incorrect, we are all in le merde: the machines will not care about us, and they will use up all the energy in the world, returning us to the stone age overnight. In practice, though, we would destroy the machines before they could take control of the power supply, so the Singularity would not occur.
A working machine-mind interface is another thing technologists are sure of. My view is that the human brain is so complex that a reliable interface cannot be produced, no matter how many millennia of research are poured into it. The brain constantly changes its structure and the locations of various functions. It is - in a word - organic: it adapts to circumstance, and every brain is unique in the particular way it processes, stores and retrieves information. So far we understand only the most basic aspects of how the most "basic" functions - vision, the other senses, the regulatory systems - work; we are not much closer to understanding true intelligence than we were when we first used a sharpened bit of wood to kill a deer.
If assumption three is incorrect the Singularity cannot occur, and technological progress will go back to being regularly exponential, albeit at a much faster rate.
My beef with this particular assumption is on much more solid ground than the other two. There is a finite rate at which resources can flow - both power and the materials required to build the intelligences - and it takes time to develop and construct the tools necessary to build them. All of these things put a limit on the speed with which new intelligences may be created. Let's say the shortest time required to make a new one is one month. In my earlier example, that would cause the rate of progress to return to being regularly exponential: once the doubling interval hits that floor, computing power doubles at a fixed interval rather than at ever-shrinking ones. No matter how steep this new computing power/time line is, it will not result in a singularity. Technological progress will be extremely rapid, but it won't "fall up a cliff" as in the Singularity.
The following chart illustrates what might happen in this case:
Technological progress continues in a single exponential trend - on this log base 2 normalised graph it is a straight line.
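For completeness, here is the same toy model with the floor added. The one-month minimum build time is purely my illustrative assumption, matching the example above:

    import math

    FLOOR = 1.0 / 12   # one month, in years - an illustrative minimum build time
    interval, power, elapsed = 2.0, 1.0, 0.0

    while elapsed < 6:
        elapsed += max(interval, FLOOR)   # intervals shrink, but never below the floor
        power *= 2
        interval /= 2
        print(f"year {elapsed:5.2f}: log2(power) = {math.log2(power):.0f}")

    # Once the floor is reached, every doubling takes exactly one month, so
    # log2(power) climbs by a steady 12 per year: very fast, but a straight
    # line on the log chart rather than a spike.

The slope of that line depends entirely on where the floor sits, but no finite floor gives you the vertical spike.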
Much computing research is now in the field of neural network architecture and adaptive evolutionary code. It seems clear that this will eventually result in machines that far exceed the limitations of today's computers. They may see and hear and speak. They will certainly produce new inventions and new science - in fact they already have - but ultimately they will be built by humans, for humans, and work produced by them will belong to humans. We will never be mastered by intelligent machines, because such machines are not truly intelligent. They simulate intelligence because we design them to. They speak, hear and see because we want them to. They may even kill, if we design them to. They will never be able to decide not to follow an instruction that is built in. They are tools - very sophisticated tools - but they nevertheless have no more moral standing than a hammer or a crane. They are not capable of functions we do not build into them, and never will be. So whatever the future of technology is, we should not fear it.
Almost anyone will believe almost anything ... because they want to believe it's true, or because they are afraid it might be true.
- Terry Goodkind
*I have to say that I find many of his other conclusions highly questionable, although he does have some points that are worthy of consideration.