
Is AI an existential threat to humanity?

If the concern is that AI systems will decide to take over and maybe kill us all, this is not possible with current AI technology or anything we're likely to see in the next couple of decades. The current exciting advances, based on machine learning and "deep learning" networks, are in recognizing patterns and structures, not in more advanced planning or the application of general world knowledge.


Even if those larger problems are eventually solved (and I’m one of the people working on that), there is no reason to believe that AI systems would develop their own motivations and decide to take over. Humans evolved as social animals with instinctual desires for self-preservation, procreation, and (in some of us) a desire to dominate others. AI systems will not inherently have such instincts and there will be no evolutionary pressure to develop them -- quite the opposite, since we humans would try to prevent this sort of motivation from emerging.


We can never say that such a threat is completely impossible for all time, so AI people should be thinking about this conceivable threat — and most of us are. But the last thing the field needs is for people with no real knowledge of AI to decide that AI research needs to be regulated before their comic-book fantasies come to life. All of the real AI experts I know (with only two or three exceptions) seem to share this view.

When it comes to existential threats to humanity, I worry most about gene-editing technology — designer pathogens. And recent events have reminded us that nuclear weapons are still around and still an existential threat. (It’s kind of ironic that one of the most visible critics of AI is a physicist.)


AI does pose some real, current or near-future threats that we should worry about:


1. AI technology in the hands of terrorists or rogue governments can do some real damage, though it would be localized and not a threat to all of humanity. One small example: a self-driving car would be a very effective way to deliver a bomb into the middle of a crowd, without the need for a suicide volunteer.

2. People who don't understand the limitations of AI may put too much faith in the current technology and put it in charge of decisions where blunders would be costly.

3. The big one, in my opinion: AI and robotic systems, along with the Internet and the Cloud, will soon make it possible for us to have all the goods and services that we (middle-class people in developed countries) now enjoy, with much less human labor. Many (but not all) current jobs will go away, or the demand for them will be greatly reduced. This is already happening. It won’t all happen at once: travel agents are now mostly gone, truck and taxi drivers should be worried, and low-level programmers may not be safe for long.


This will require a very substantial re-design of our economic and social systems to adapt to a world where not everyone needs to work for most of their lives. This could feel like we all won the lottery and can do what we want, at least for more of our lives than at present. Or (if we don't think carefully about where we are headed) it could feel like we all got fired, while a few billionaires who own the technology are the only ones who benefit. That is not a good situation even for the rich if the displaced workers are desperate and angry. Louis XVI and Marie Antoinette found this out the hard way.

4. Somewhat less disruptive to our society than item 3, but still troubling, is the effect of AI and the Internet of Things on our ideas about privacy. We will have to think hard about what we want “privacy” to look like in the future, since the default if we do nothing is that we end up with very little of it: we will be leaving electronic “tracks” everywhere, and even if these are anonymized, it won’t be too hard for AI-powered systems to piece things back together and know where you’ve been and what you’ve been doing, perhaps with photos posted online. Definitely not an “existential” threat, but worrisome, and we’re already a fair distance down this path.

Remember:

Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!


AI has made tremendous progress, and I'm wildly optimistic about building a better society that is embedded up and down with machine intelligence. But AI today is still very limited. Almost all the economic and social value of deep learning is still through supervised learning, which is limited by the amount of suitably formatted (i.e., labeled) data. Even though AI is helping hundreds of millions of people already, and is well poised to help hundreds of millions more, I don't see any realistic path to AI threatening humanity.


Looking ahead, there are many other types of AI beyond supervised learning that I find exciting, such as unsupervised learning (where we have a lot more data available, because the data does not need to be labeled). There's a lot of excitement about these other forms of learning in my group and others. All of us hope for a technological breakthrough, but none of us can predict when there will be one.
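To make the distinction concrete, here is a minimal sketch (using scikit-learn and a toy digits dataset chosen purely for illustration; the author does not name any specific library or data): supervised learning needs a human-provided label for every training example, while unsupervised learning consumes raw, unlabeled data and only discovers structure in it.

```python
# Illustrative sketch only: contrasts supervised vs. unsupervised learning.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

digits = load_digits()  # small dataset of 8x8 handwritten-digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Supervised: every training example must come with a label (y_train),
# so progress is bounded by how much labeled data we can collect.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels needed -- the model sees only raw pixel data,
# so far more data is usable, but it learns clusters, not answers.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
kmeans.fit(digits.data)
print("cluster assignments for first five images:", kmeans.labels_[:5])
```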


I think fears of "evil killer AI" are already causing policy makers and leaders to misallocate resources to address a phantom. There are other problems that AI will cause, most notably job displacement. Even though AI will help us build a better society in the next decade, we as AI creators should also take responsibility for solving the problems we'll cause in the meantime. I hope MOOCs (Coursera) will be part of the solution, but we will need more than just education.

In the near term AI serves as a tool that can magnify the amount of power an individual has. For example, someone could buy thousands of cheap drones, attach a gun to each of them, and develop AI software to send them around shooting people. If the software was good enough this could result in far more destruction than a normal terrorist attack. And I fully expect that the software part of this will become easy in the future if it isn't already today.


This is very different from the options of a terrorist group today, because right now they need humans to carry out attacks, and there is a limit to the amount of damage that can be done per person. Having relatively simple AI in place of the human here brings the marginal cost of an attack down to near zero and hurts the ability of law enforcement to stop attacks or retaliate. So there is a risk that, as AI gets better and better, it at least destabilizes things.


This is totally independent of concerns about AI "taking over" with its own "free will". I think that is a risk too, but it is much further off, and I think the near-term force-magnifier issue is just as dangerous.

Elon Musk, Stephen Hawking, and others have stated that they think AI is an existential risk.

I disagree. I don't see a risk to humanity of a "Terminator" scenario or anything of the sort.

Part of the confusion, I think, comes from how we use the term "AI" in reality and in fiction. In fiction, especially movies, "AI" means a self-aware, super-intelligent entity, with its own goals, a very broad sort of intelligence (similar to humans), and the ability to change its goals over time.

This is nothing like the sort of AI being developed. The real use of "AI" in industry is generally for very narrow pattern-matchers: a better search algorithm, an object-detection algorithm, etc.

These things are tools which we can use, for good or evil. But they're nothing like self-aware beings. Nor is there any plausible chance that they will suddenly and spontaneously become self-aware. Software just doesn't work that way. Our own brains didn't work that way. 

We became self-aware over an evolutionary process of hundreds of millions of years, because there was a continuous evolutionary pressure to understand the environment around us better, to be more flexible in our ability to learn and act, and to be able to predict the behavior of predators, prey, and other humans. It took millions of generations and billions of individuals living and dying for this to happen.

While it's possible for that sort of evolutionary process to occur in software, we're simply not doing that today, nor are we likely to.

Indeed, for the most part, AIs that are self-aware and have their own opinions would seem to be less valuable for companies to develop. Do you want a self-driving car that does what you want? Or do you want one that has an opinion about what neighborhoods you should go to, or that pouts if you haven't washed it enough?

And if a company started developing the latter, how long would it be until they'd be pressured to have ethics committees, or to give rights to those cars, or so on?

In the end, I expect we'll have AI that is better than we are at nearly every narrow task, but that is still our tool, not our master. That shouldn't be surprising. Our cars, our phones, our airplanes, our calculators, and so on all radically improve on our abilities. But they're all just tools for us.

So, in my opinion, AI does pose some real threats to our well-being — threats that we need to think hard about — but not a threat to the existence of humanity.

