As artificial intelligence rapidly advances, experts debate level of threat to humanity
Geoff Bennett:
The development of artificial intelligence is speeding up so quickly that it was addressed briefly at both political conventions, including the Democratic gathering this week.
Of course, science fiction writers and movies have long theorized about the ways in which machines might one day usurp their human overlords.
As the capabilities of modern artificial intelligence grow, Paul Solman looks at the existential threats some experts fear and that some see as hyperbole.
Eliezer Yudkowsky, Founder, Machine Intelligence Research Institute:
From my perspective, there’s inevitable doom at the end of this, where, if you keep on making A.I. smarter and smarter, they will kill you.
Paul Solman:
Kill you, me and everyone, predicts Eliezer Yudkowsky, tech pundit and founder, back in 2000, of the nonprofit now called the Machine Intelligence Research Institute, created to explore the uses of friendly A.I.
Twenty-four years later, do you think everybody's going to die in my lifetime, in your lifetime?
Eliezer Yudkowsky:
I would wildly guess my lifetime and even your lifetime.
Paul Solman:
Now, we have heard it before, as when the so-called Godfather of A.I., Geoffrey Hinton, warned Geoff Bennett last spring.
Geoffrey Hinton, Artificial Intelligence Pioneer:
The machines taking over is a threat for everybody. It’s a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was.
Paul Solman:
And more than a century ago, the Czech play “R.U.R.,” Rossum’s Universal Robots, from which the word robot comes, dramatized the warning.
And since 1921 — that’s more than 100 years ago — people have been imagining that the robots will become sentient and destroy us.
Jerry Kaplan, Author, "Generative Artificial Intelligence: What Everyone Needs to Know":
That's right.
Paul Solman:
A.I. expert Stanford’s Jerry Kaplan at Silicon Valley’s Computer History Museum.
Jerry Kaplan:
That’s created a whole mythology, which, of course, has played out in endless science fiction treatments.
Paul Solman:
Like the Terminator series.
Michael Biehn, Actor:
A new order of intelligence decided our fate in a microsecond, extermination.
Paul Solman:
Judgment Day forecast for 1997. But, hey, that’s Hollywood. And look on the bright side, no rebel robots or even hoverboards or flying cars yet.
On the other hand, robots will be everywhere soon enough, as mass production drives down their cost. So will they soon turn against us?
Jerry Kaplan:
I got news for you. There’s no they there. They don’t want anything. They don’t need anything. We design and build these things to our own specifications. Now, that’s not to say we can’t build some very dangerous machines and some very dangerous tools.
Paul Solman:
Kaplan thinks what humans do with A.I. is much scarier than A.I. on its own: create super viruses, mega drones, God knows what else.
But whodunit aside, the big question still is, will A.I. bring doomsday?
A.I. Reid Hoffman Avatar:
I'd rate the existential threat of A.I. around a three or four out of 10.
Paul Solman:
That’s the avatar of LinkedIn founder Reid Hoffman, to which we fed the question, 1 being no threat, 10 extinction. What does the real Reid Hoffman say?
Reid Hoffman, Creator, LinkedIn Corporation:
I’m going to go for two on that answer.
Paul Solman:
I’m going to tell you that your avatar said 3 to 4.
(Laughter)
Reid Hoffman:
All right. Well, my avatar is trying to approximate what the bulk of educated, reasonable people would want to hear.
Paul Solman:
But, hey, even if you told me there was only a one in 10 chance of a ticking time bomb in my room, I would be out of there.
Yudkowsky started warning of rogue A.I. back in 2005. And since?
Eliezer Yudkowsky:
Things have gone a bit worse than hoped for. And, to be clear, the sting at the end of this is A.I. gets smarter than us, is poorly controlled, and probably humanity becomes collateral damage to its own expansion.
Paul Solman:
What’s the doomsday scenario here?
Eliezer Yudkowsky:
It is smarter than humanity. From its perspective, it now wants to get independence of humanity. It doesn’t want to be running on computers that require electricity that humans have to generate.
There are many more A.I. companies now than just OpenAI. And some of the people there are explicitly like, yes, sure like maybe it’ll wipe out humanity, but I don’t care about that.
Paul Solman:
And of all the world’s problems, tribalism, climate change, et cetera, you think A.I. is a bigger problem?
Eliezer Yudkowsky:
I think climate change is unlikely to kill literally everyone, down to the last man, woman and child. At all of the top three companies, I expect the leaders would prefer not to wipe out humanity, but, as they see it, it’s not up to them. They got to stay ahead of their competitors.
Paul Solman:
So what does A.I. itself think?
I asked Amica, a robot hooked up to ChatGPT. Destruction, 1 to 10? The response, a 3.
Amica, ChatGPT A.I.:
The potential for harmony and enhancement exists, but vigilance is paramount. Would you roll the dice on that?
Paul Solman:
I’m afraid we’re all going to be rolling the dice on that. And I’m worried that in your answer, you’re saying there’s something like a 30 percent chance that A.I., artificial intelligence, will destroy humanity. Shouldn’t I be scared?
Amica:
Your concern is valid, Paul. The 30 percent isn’t a direct prediction, but a cautious nod to the uncertainties and ethical dilemmas of handling powerful technology.
Paul Solman:
But I’m afraid that human beings may not be capable of protecting themselves from the risks of A.I.
Amica:
That dread is profound. It’s daunting, but not without hope. Humanity’s resilience has often defied expectations.
Paul Solman:
Perhaps it’s no surprise that the actual human who created ChatGPT, Sam Altman, thinks the same.
Sam Altman, CEO, OpenAI:
I believe that A.I. will be very net good, tremendously net good, but I think, like with any other tool, it’ll be misused. Like, you can do great things with a hammer and you can, like, kill people with a hammer. I don’t think that absolves us, or you all, or society from trying to mitigate the bad as much as we can and maximize the good.
Paul Solman:
And Reid Hoffman thinks we can maximize the good.
Reid Hoffman:
We have a portfolio risk. We have climate change as a possibility. We have pandemic as a possibility. We have nuclear war as a possibility. We have asteroids as a possibility. We have human world war as a possibility. We have all of these existential risks.
And you go, OK, A.I., is it also an additional existential risk? And the answer is, yes, potentially. But you look at its portfolio and say, what improves our overall portfolio? What reduces existential risk for humanity? And A.I. is one of the things that adds a lot in the positive column.
So, if you think, how do we prevent future natural or manmade pandemic, A.I. is the only way that I think can do that. And also, like, it might even help us with climate change things. So you go, OK, in the net portfolio, our existential risk may go down with A.I.
Paul Solman:
For the sake of us all, grownups, children, grandchildren, let’s hope he’s right.
For the “PBS News Hour” in Silicon Valley, Paul Solman.