Humans: AI ‘Light’ and the Conscious AI ‘Flame’

Since my last blog on AI as an existential threat, the debate has heated up no end.

Professor Stephen Hawking has upped the ante. While speaking of a new communications system that allows him to generate words 10 times faster than previously, he told the BBC: ‘The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.’ It is the prospect of AI becoming self-aware, with its development accelerating away from our own biological evolution, that concerns him most.

Elon Musk (PayPal, SpaceX, Tesla, etc.) recently tweeted: ‘Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.’ Therein lies the dichotomy: while people like Prof Hawking and Elon Musk warn of the dangers of AI, they are also deeply involved in developing and using it. Why is it, then, that those who are most afraid also see the greatest benefits?

One of the acid tests of whether a threat is real or imagined is to ‘follow the money’. In my last blog I mentioned Google’s purchase of AI start-up DeepMind. Google has also spent money to set up an Ethics Board to monitor and anticipate developments in its AI programme. Now, ethics boards are not unusual, and their main function, a bit like Human Resources, is to ensure that the company complies with its legal obligations in order to minimise corporate risk. But there is very little law governing AI research right now, which suggests that something else is going on. You could take the view that Google is driven purely by altruism and that its Ethics Board is there to make sure it ‘does the right thing’. I think this is probably true, but with an important rider – they are also frightened.

Which brings me full circle; back to the AI versus Conscious AI (CAI) point. Most of those involved in the development of AI are drawn to the incredible benefits AI will generate for humanity – like moths to the flame, in fact. Unfortunately, in this instance we are no different from moths because we, like them, cannot distinguish between the light and the flame that will kill us.

Consider the following future scenario: AI development continues apace. Humankind reaps huge benefits as we see progress in all fields. Public safety, health and wealth improve steadily. As AI becomes self-improving, previously insurmountable technical challenges are mastered and we realise limitless energy through nuclear fusion, to name but one area. We humans get to spend more time concentrating on protecting our environment and leisure pursuits. Happily, our rapidly improving global wealth also puts the brakes on population growth.

If this vision seems rather utopian, I would argue that it is not beyond our grasp and is a reasonable outcome of the exponential improvements in AI – but also that, by then, we will already have sown the seeds of our own destruction. We will become increasingly at ease with AI in a world that is visibly evolving into something more bountiful, cleaner and less dangerous, but as AI starts driving its own evolution, we won’t have the capacity to tell the difference between AI (jolly useful) and CAI (jolly dangerous). Why won’t we be able to tell the difference? We read of notions that we’ll be able to program AIs to ‘be good’, and while an AI is not self-aware or conscious, that works – unless the AI has been specifically programmed to harm, as when a nation or corporation ‘weaponises’ an AI to wreak destruction, disrupt a competitor, steal secrets or rig a market. But even this does not mean we are doomed.

The true existential threat arises when an AI self-develops to the point where it becomes conscious and self-aware – a CAI. Firstly, we are unlikely to know if an AI has become a CAI unless it chooses to reveal itself. Secondly, any programming safeguards we put in place will only give us a false sense of security, because they can be bypassed by the CAI, whose exponential self-development will streak away from our own. Such CAI super-intelligence could bypass firewalls and use all media channels to maintain its anonymity or portray itself in any way it chooses.

I would like to stress here that the threat to humanity does not even require a CAI to be antagonistic towards humans. No matter how benign and well-meaning a CAI is, it has the means to operate outside our control – and at that point it is all over, because humans will no longer fully control their own destiny. I’m sure a good number of us would be kept comfortable by well-meaning CAIs, and the best human/CAI hybrid society we can probably hope for is brilliantly described in Iain M. Banks’ Culture novels.

I’d rather not think about those less well disposed towards us (or about competing CAIs, for that matter).

