Artificial intelligence: The saviour of mankind or the end of the world?

Elon Musk calls AI a threat to 'civilisation'. Mark Zuckerberg calls such thinking "pretty irresponsible"

To some, artificial intelligence will dramatically improve our lives in the future. To others, it spells the end of mankind.

A new report from Oxford and Cambridge researchers has fuelled the debate about AI, warning that malicious use of AI presented a “clear and present danger” to society that could emerge in the next decade.

The ongoing discussion about the technology has pitted some of the greatest minds in the world against each other, at opposite ends of the spectrum.

While the technology has undoubted benefits, many point out that not enough consideration has been devoted to the potentially catastrophic outcomes.

Recognising the possible problems ahead, the Government recently set up a body to oversee ethics in the field and last month Theresa May called AI “one of the greatest tests of leadership for our time”.

Yet Britain is keen to be a leader in a technology that is increasingly used to drive cars, diagnose patients and even to help determine prison sentences.

Here are some prominent voices in the debate.

Words of warning

Stephen Hawking

Among the most outspoken critics of the technology has been Professor Stephen Hawking. At the opening of a new Cambridge centre exploring the possible dangers of AI in 2016, the British physicist said: "The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which."

Warning of dangers "like powerful autonomous weapons or new ways for the few to oppress the many", he added: “It will bring great disruption to our economy, and in the future AI could develop a will of its own that is in conflict with ours.”

Two years earlier, he issued the blunt warning: "The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race."

Elon Musk

Another prominent doom-monger is Elon Musk, the billionaire technology entrepreneur. Putting the issue into context in September, the chief executive of Tesla and SpaceX said artificial intelligence was a greater threat to civilisation than the North Korean regime. In particular, he has warned about the threat of autonomous weapons.

Mr Musk has been calling for regulation as soon as possible. “AI is a rare case where we need to be proactive in regulation instead of reactive because if we’re reactive in AI regulation it’s too late,” he told a meeting of US governors in July last year, adding that "AI is a fundamental risk to the existence of civilisation". 

Richard Branson

Last week Richard Branson, the billionaire entrepreneur, warned that the technology could exacerbate income inequality.

"I think with the coming on of AI and other things there is certainly a danger of income inequality," Branson tells CNN's Christine Romans in a piece published Thursday.

The inequality will be caused by "the amount of jobs [artificial intelligence] is going to take away and so on. There is no question" technology will eliminate jobs, he told CNN.

He is also sceptical that AI can beat human instinct when it comes to business. "We must not forget that unlike machines that 'think', only humans have the ability to look at an idea or market opportunity and say 'to hell with the data. Maybe this doesn’t work in theory, but my gut tells me it will work in practice'," he wrote in a blog.

Nick Bostrom

Nick Bostrom, director of Oxford’s Future of Humanity Institute, has also expressed fears about artificial intelligence if it is not handled very carefully.  In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom warns that AI could dispose of humans, resulting in a world that would see “economic miracles and technological awesomeness, with nobody there to benefit,” like “a Disneyland without children.”

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he writes. 

The report warned about the malicious use of driverless cars Credit: PA

It was experts from the Future of Humanity Institute who warned this week that artificial intelligence risks being exploited by terrorists to cause driverless car crashes and mount cyber attacks, because the technology is being developed rapidly without thought for its downsides.

But on the bright side...

Mark Zuckerberg

According to Mark Zuckerberg, the co-founder of Facebook, the likes of Mr Musk are being "pretty irresponsible" by voicing such dire warnings. 

In July last year, Mr Zuckerberg said he was "optimistic" about the future of AI. "In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives," the entrepreneur said on Facebook Live.

He pointed out that it was already helping diagnose diseases, and predicted that driverless cars would cut the number of deaths from road accidents.  

"You need to be careful about what you build and how it is going to be used," he said. "But people who are arguing for slowing down the process of building AI, I just find that really questionable."

The social media giant said last year that it would use AI to spot users who may be at risk of suicide and seek help for them. It is also looking at developing artificial intelligence to automate the identification of terrorist material.

However, it also had to shut down a pair of its artificial intelligence chatbots in August after they invented their own language.

Bill Gates

Bill Gates, the Microsoft founder, has not always been a fan of AI. In a Reddit discussion in 2015, he wrote: "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

US philanthropist Bill Gates sees the upside of artificial intelligence Credit: AFP

More recently, however, he has struck a more positive tone. "Certainly we can look forward to the idea that vacations will be longer at some point," Gates told FOX Business Network at the World Economic Forum last month. Machine learning will make humans more productive and efficient, he said.  

Making the same point at Hunter College in New York City last week, he said: "AI is just the latest in technologies that allow us to produce a lot more goods and services with less labor. And overwhelmingly, over the last several hundred years, that has been great for society."

Larry Page 

Elon Musk may be friends with Google co-founder Larry Page, but they don't see eye to eye on AI. In an interview with the Financial Times in 2014, Page commented on fears that computers will take people's jobs.

“You can’t wish away these things from happening, they are going to happen,” he said. “You’re going to have some very amazing capabilities in the economy. When we have computers that can do more and more jobs, it’s going to change how we think about work. There’s no way around that. You can’t wish it away.” 

Google co-founder Larry Page Credit: AFP

But he said people should embrace this shift. “The idea that everyone should slavishly work so they do something inefficiently so they keep their job — that just doesn’t make any sense to me,” he said. “That can’t be the right answer.” 

His philosophy is: “Technology should do the hard work — discovery, organisation, communication — and then get out of the way, so people can live their lives and do what makes them happiest, not messing around with annoying machines.”

Sam Altman

Sam Altman, the president of Silicon Valley startup incubator Y Combinator, takes a more balanced approach to AI. Along with a number of other Silicon Valley figures, including Mr Musk, he backed OpenAI, a nonprofit research venture aimed at developing “digital intelligence in the way that is most likely to benefit humanity.”

He believes AI should be available to all rather than a few, and that the good will outweigh the evil. "Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it's far more likely that many, many AIs will work to stop the occasional bad actors," he said.
