Saturday, 20 August 2011

The Fallacy of the Technological Singularity

The Technological Singularity, or usually just "The Singularity", is a concept unfamiliar to most people, I think, except technologists and futurologists, people who study computer science and artificial intelligence, and a few random others - actually quite a few people, probably. Singularity is a term with various meanings in physics, geometry, calculus, common speech and, of course, futurology. In all of them, however, it means something along the lines of "a discontinuous point" - generally a point at which the tools we use to understand the rest of the continuum no longer work. The Technological Singularity is a predicted point somewhere in the near future at which technological progress will accelerate so much that it will look "discontinuous".

Many people are familiar with Moore's Law. I have mentioned it in a previous blog entry - I may have waxed boring, but I'm blogging to improve my writing skills, so do give me a break. Essentially the law (more like a forecast, really) states that processing power doubles every two years. The forecast also helps technology companies decide how much R&D they need to do to keep up, so it is a bit of a self-fulfilling prophecy - but the trend was there to be found before it was found, so one can guess that even if it had never been identified we would still have roughly the computing power we do. Now, the futurist says, let us assume that Moore's Law holds for the coming decades. Soon we will have a computer powerful enough to emulate the human brain. Suppose that intelligent computer is used to design a computer more powerful and intelligent than itself. We build its design, then use the result to design an even more powerful and intelligent computer. The rate of progress is now faster than exponential, because exponential progress was what we achieved with fixed human intelligence - if we have a computer four times better than the human mind, it can produce a computer twice as powerful as itself in six months rather than two years. That computer, eight times more powerful than the human brain, then takes three months to develop one sixteen times more powerful, and so on. The result is the Singularity - an escalating spike of faster-than-exponential technological progress. Follow the trend and, before four years had passed, computing power would be practically infinite.
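To make the arithmetic concrete, here is a little Python sketch of the scenario. The numbers are just the illustrative ones from the example above - one human-brain unit of power and a two-year first doubling - not anybody's actual forecast.

```python
# Toy model of the runaway scenario: each generation is twice as
# powerful as the last, and a machine p times as powerful as a human
# designs its successor in 2/p years (so the first doubling takes
# the usual two Moore's-Law years).

def runaway(generations):
    power, t = 1.0, 0.0          # power in human-brain units, time in years
    history = [(t, power)]
    for _ in range(generations):
        t += 2.0 / power         # design time shrinks as power grows
        power *= 2.0             # each generation doubles capability
        history.append((t, power))
    return history

for t, p in runaway(10):
    print(f"year {t:6.3f}: {p:6.0f}x human brain power")
```

The design times form a geometric series (2 + 1 + 0.5 + ...), so however many generations you run, the total time never quite reaches four years - which is where the "practically infinite power before four years had passed" claim comes from.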

The chart below illustrates this:

[Chart: the singularity forming, based on the scenario above]

If we take the base 2 logarithm of computing power over the same period we get the following result:

[Chart: log base 2 of computing power against time. Normal technological growth would show up as a straight line; the Singularity scenario results in a perverse acceleration of technological development.]

Of course there are limiting factors. It will take time to construct every iteration. There is a limited supply of resources and energy. The computer intelligences may well increase these supplies by inventing new technologies but they do have ultimate limits which cannot be exceeded.

The scenario I have outlined above is fairly simplistic, but it is essentially what people expect from the events leading up to the Singularity. There are many doomsday or utopian predictions for the fate of the human race in all this, and it is these concerns that I wish to address. Futurists believe that the Singularity is inevitable - not in the way you might believe a minor accident is inevitable if you drive around for long enough, but genuinely, no-possible-way-of-avoiding-it inevitable. They do not believe it is even conceivable that the Singularity will not occur. In the words of John J. Xenakis, "No. No way. It's impossible. The Singularity cannot be stopped. It's as inevitable as sunrise."* I believe I can prove that the Singularity cannot occur.

There seem to be two general schools of thought on how the singularity will come about. One school believes that there will be computer or "robot" intelligences, the other, which I shall call the Singularity Cult, believes that the first hyper-intelligences will be human intelligences augmented by machines. The Singularity Cult hopes that it will be them.

I do not believe that either of these scenarios will result in a singularity, because I question the assumptions that go into them. Namely:
1. Computers can become genuinely aware.
2. Machine-mind neural interface is possible.
3. Production of new intelligences will accelerate.

If assumption one is incorrect, the Singularity can never occur. Let us assume for the moment that intelligence is an emergent property of the type of architecture that the human brain uses. We can create computers with similar architecture - in fact IBM have recently done so (they say) - and they may be able to solve problems in an intelligent way. But everything that chip does will be directed by human intervention. It will not by itself decide to create anything apart from the networks and programs it needs to do what we ask it to do. It is not self-directing, and I do not see any evidence that we will ever be able to create a computer that is self-directing. It will always and only take those directives and inputs we give it.

If only assumption two is incorrect, we are all in le merde. Aware machines with no human in the loop would not care about us; they would use up all the energy in the world and return us to the stone age overnight. In practice we would destroy the machines before they could take control of the power supply, so the Singularity would still not occur.
A workable machine-mind interface is another thing technologists are sure of. My view is that the human brain is so complex that a reliable interface cannot be produced, no matter how many millennia of research are poured into it. The brain constantly changes its structure and the locations of various functions. It is - in a word - organic: it adapts to circumstance, and every brain is unique in the particular way it processes, stores and retrieves information. So far we understand only the most basic aspects of how the most "basic" functions - vision, the other senses, the regulatory systems - work. We are not much closer to understanding true intelligence than when we first used a sharpened bit of wood to kill a deer.

If assumption three is incorrect the Singularity cannot occur, and technological progress will go back to being ordinarily exponential, albeit at a much higher multiplier.
My beef with this particular assumption is on much more solid ground than the other two. There is a finite rate at which resources can flow - both power and the materials required to build the intelligences - and it takes time to develop and construct the tools needed to build them. All of this puts a limit on the speed at which new intelligences can be created. Say the shortest time required to make a new one is one month. In my earlier example, that would cause progress to return to being ordinarily exponential: computing power doubles once per fixed interval, so it grows exponentially in linear time. No matter how steep this new computing-power line is, it will not produce a singularity. Technological progress will be extremely rapid, but it won't "fall up a cliff" as in the Singularity.

The following chart illustrates what might happen in this case:

[Chart: technological progress continuing in a single exponential trend - on this log base 2 normalised graph it is a straight line.]

Much computing research is now in the field of neural network architecture and adaptive evolutionary code. It seems clear that this will eventually result in machines that far exceed the limitations of today's computers. They may see and hear and speak. They will certainly produce new inventions and new science - in fact they already have - but ultimately they will be built by humans, for humans, and the work they produce will belong to humans. We will never be mastered by intelligent machines, because such machines are not truly intelligent. They simulate intelligence because we design them to. They speak, hear and see because we want them to. They may even kill, if we design them to. They will never be able to decide not to follow an instruction that is built in. They are tools - very sophisticated tools - but they have no more moral standing than a hammer or a crane. They are not capable of functions we do not build into them, and never will be. So whatever the future of technology is, we should not fear it.

Almost anyone will believe almost anything ... because they want to believe it's true, or because they are afraid it might be true.
- Terry Goodkind


*I have to say that I find many of his other conclusions highly questionable, although he does have some points that are worthy of consideration.

Sunday, 7 August 2011

Boom and Bust

Anyone unfortunate enough to stumble across my blog may be led to believe that I have no time or inclination for writing - after all, if I did there would be more posts, wouldn't there? The truth is rather different: I find it hard to choose what to write about, because there seem to be so many topics that urgently require my inspection. As a result I may make extensive notes about an idea and then never use it. The fact is that I have only fairly recently recovered my appetite for knowledge, so I must be a net consumer of information if I am to get anywhere with anything. Hopefully I can return the fruits of my investigations to whoever is willing to read.

Lately I've been bending my will towards understanding economics. It seems that the field is muddy even to the experts, and assumptions and pitfalls abound. The 2008 crash showed that the banks didn't really know what they were doing, and before the crisis everyone thought the world was hunky-dory, so no one else knew either, except a few like this guy. In fact I would hazard a guess that no one really knows what causes things like inflation, bubbles, deflation and crashes. All we know is that, generally, if we put pressure on point x we get response y: when a country's official interest rate is lowered, inflation increases, and vice versa.

Most of today's economy seems to revolve around investors, in one way or another. The thinking is that no investor will willingly throw money away by buying into a bad deal, but the crash of 2008 showed this belief to be a fantasy. Governments can borrow money because investors are willing to lend it to them - if they expect a return. Currencies strengthen or weaken depending on how much faith investors have that they will stay strong or weaken further. Companies can borrow money if banks think they will get a net return, and the same goes for individuals. Investors bank money if they think the bank's practices are lucrative.

It does not take long to realise that this whole thing is a positive feedback loop. It all works if people are confident, but it rapidly goes down the toilet if people lose confidence. Granted, this is a simplification, but let's work with it for a moment. What are the checks? Where is the point at which investment confidence is lost? Where is it gained? This article lists some of the causes and effects of four major crashes over the last century or so. They all have in common the formation of a "bubble" leading up to the crash. The first three each led to new rules intended to prevent future crashes. All of them failed. The economists apparently ran out of talent, too, because no additional rules or institutions have resulted from the most recent crash to prevent such a thing happening again. Instead we had costly bailouts, humiliatingly paid for by countries we might consider backward - China and the oil nations of the Middle East. In fact China is now the US's biggest creditor. The crashes we have suffered were either necessary, or they should direct us to restructure our finance. No one is claiming they were necessary, and the Arab world laughed at us, saying their financial structures were more robust. So have we changed the way we do business? I should cocoa.

You may think I am having a downer on investors - after all, they are the link in the chain that has the most influence on the positive feedback - but I'm not, really. They are part of the problem but they are not the problem. The real problem is the way we do business. This article and the comments it sparked have some very interesting views, and suggest some solutions that are presently beyond me. It suggests that financial engineers are to blame for the crash, and to an extent they may be, but what is it about the system that makes financial engineering a tempting prospect in the first place? Why do bubbles form at all?

The answers to those questions are by no means obvious, and economists debate them. There are some things we can say with a reasonable degree of conviction, though. Both the bubble and the crash are, in principle, rooted in investor psychology, and it doesn't seem possible to identify one particular cause; instead, a range of factors come into play.

A bubble is a type of inflation, with a difference. In the real world, if inflation raises the cost of, say, electricity, each kWh still has a genuine worth anchored in the cost of producing it and getting it into your house, or factory, or whatever. Demand can increase its price, but ultimately its worth can never go much above the economic benefit it gives to the buyer.

Stocks, too, are subject to supply and demand. When a company goes public, its apparent worth is divided among stocks or shares, giving each share a value defined by the total value of the company; the money raised from this partial sale is used to further the business. Over time the company's stock rises or falls in value as investors buy or sell, depending on whether they believe the company is growing or shrinking. Bad news, such as under-target profits, tends to depress the value of stock, and vice versa. But ultimately the real-terms value of stock is not known - it is only inferred from the company's past performance, market conditions and its current performance. So if investors suddenly believed a company was going down the tubes, they might sell off its stock en masse, resulting in a price drop that could make going down the tubes inevitable. A self-fulfilling prophecy. There is no anchor for the value of stocks: they are worth exactly what investors think they are worth. This becomes a problem when investors' beliefs are based on the beliefs of other investors. The dotcom bubble illustrated this nicely. Investors were so confident in the new business model that they valued such companies far above their real worth, resulting in a runaway investment boom that ended when everybody realised it was chaff. Some people legally made shed loads of cash by taking advantage of the investors' credulity. There is a word for such people: "conman". More can be found here. The bubble bursts when investors realise it can no longer be sustained and opt out. A "correction" results, which, if the number of investors in the bubble is large enough, becomes a crash, as investors with no exit strategy panic and sell as fast as possible to avoid heavy losses.
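A deliberately crude simulation makes the feedback loop visible. Everything here is made up - the fundamental value, the sentiment numbers, the panic threshold - but the shape it produces is the familiar bubble-and-crash.

```python
FUNDAMENTAL = 100.0   # assumed "real" worth - investors can't observe it directly

def simulate(steps=60):
    price, sentiment = 100.0, 0.02
    prices = [price]
    for _ in range(steps):
        if price > 3 * FUNDAMENTAL:   # the bubble is judged unsustainable...
            sentiment = -0.15         # ...and panic selling sets in
        price *= 1.0 + sentiment      # price moves with sentiment alone
        if sentiment > 0:
            sentiment *= 1.05         # rising prices attract more buyers
        prices.append(price)
    return prices

prices = simulate()
print(f"peak {max(prices):.0f}, final {prices[-1]:.0f}")
```

Because sentiment chases returns, the price climbs ever faster with nothing anchoring it to the fundamental value; once confidence breaks, there is nothing underneath to stop the fall either.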

The model allows for rapid increases in stock prices, thus growing a company very fast, which can be a good thing, for sure - but this very volatility also opens the door to a massive crash, with the company losing value and likely ending up undervalued. Instability is inherent in the system, so we either have to accept that crashes will occur and hope that the times of plenty make up for them, or we have to design a new system in which stock valuations are not so volatile.