Have you ever said to someone, ‘I’d love to write, but I don’t know what to write about’? Well, here’s a suggestion: think about what worries, excites or intrigues you, and put pen to paper.
My own example: for many years I’ve been interested in the development of artificial intelligence (AI) and its potential to radically change civilisation when AI overtakes the intellectual capabilities of the human race. This hypothetical moment is referred to as the Singularity and, once reached, nothing will ever be the same again.
This is because AI will become capable of managing its own development, driving an exponential explosion in machine intelligence; if that development follows Moore’s Law, its power will double roughly every two years. Within a few such doublings, AI will inhabit a plane far beyond our own, accelerating away at ever-increasing speed.
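As a rough illustration of that doubling arithmetic (the two-year cadence is the article’s assumption, not a prediction of mine), a few lines show how quickly repeated doubling compounds:

```python
# Sketch of Moore's-Law-style growth: capability doubles every two years
# (an assumed cadence, taken from the article's framing).
def capability_after(years, doubling_period=2, start=1.0):
    """Relative capability after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Twenty years is ten doublings: roughly a thousand-fold increase.
print(capability_after(20))  # 2**10 = 1024.0
```

The striking part is not any single doubling but the compounding: ten doublings give a factor of about a thousand, twenty give about a million.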
This moment of singularity is predicted to occur in the 2040s (which is not far off!), and the recent flurry of AI-company acquisitions by big corporations – such as Google buying DeepMind for £400m – only reinforces the sense that the singularity could be reached in our lifetimes.
Most views on the implications for humanity tend to be polarised between the utopian and the dystopian. The utopians – like Google’s futurologist Ray Kurzweil – see it as a massive step forward, since that intelligence could be brought to bear on a multitude of very human concerns: cancer, dementia, ageing, global warming. The list goes on.
Others see only threat. Movies such as the Terminator series explore the very unpleasant consequences. Even people we take seriously are worried about it, such as Stephen Hawking and Elon Musk (of PayPal, SpaceX and Tesla Motors fame).
Right now I see the future of AI splitting into two distinct stages. In the first, AI progresses to the point where it does indeed become capable of managing its own intelligence upgrades, and we benefit hugely as that intelligence is brought to bear on our most pressing challenges. This looks like a good thing.
My real concern rests with the second stage: if and when AI achieves true consciousness. This may take a long time, however, as the gulf between current classical computing power and the human brain is still massive – especially if, as some suggest, our brains are essentially quantum computers.
If this quantum nature is also the source of our consciousness, then the machine equivalent may be a long, long way off. Here I refer to the work of Sir Roger Penrose and Dr. Stuart Hameroff. Their idea of a quantum source of consciousness was not taken seriously for many years and is still by no means mainstream, though I was surprised when a Cambridge philosophy student recently mentioned to me an intense interest in Penrose’s work on consciousness.
The really interesting thing, though, is that the science appears to be slowly catching up with the theory. If it turns out that they are at least directionally correct, then achieving conscious AI (CAI) may take quite a while. You might think, ‘We don’t even have quantum computers yet.’ Well, we do. A Canadian company, D-Wave, is building them, and NASA and Google have had one since 2013, which they are using to explore AI in association with a number of universities. There is debate about the true nature of these computers, but my point stands: what is impossible today is made possible tomorrow by new inventions.
So while conscious AI may not be realised in my lifetime, it does not mean I should not care about it – after all, I have children and a deep affection for the human race. It is the prospect of CAI that really fascinates me and is one driver of my writing.