Geoffrey Hinton warns AI must tap into 'maternal instincts' or we risk extinction
Geoffrey Hinton, considered by many to be the godfather of artificial intelligence, says that if AI continues to develop without proper guardrails, the worst-case outcome could be human extinction.
But he has a solution.
Hinton is a co-winner of the 2024 Nobel Prize in Physics and a co-founder of the AI Safety Foundation.
As he explains to Ideas host Nahla Ayad, training AI to develop maternal instincts could save the human race. Here is an excerpt from that conversation.
What’s the worst case scenario you can imagine here?
Well, there are plenty of short-term worst-case scenarios that don't involve AI taking over, so it's hard to choose the worst.
But, for example, AI being used by terrorists to create nasty new viruses. AI makes it much easier for them to do so, and that's very scary. We will need international cooperation on how to stop it, but we may not be able to achieve that. So that's a short-term risk.
AI is also being used to corrupt democracy through fake videos. But what worries me most is still the long-term risk, which seems to me largely inevitable, of AI becoming smarter than us. We don't know how we can co-exist with them, and we don't know whether they will actually take over from us or not.
Let me ask you directly: what exactly are the chances of human extinction due to AI this century?
Well, I think the only honest answer is that it's probably not going to happen for 10 or 20 years, and we have very little idea what things will be like in 10 or 20 years. If you look back just 10 years, no one had any idea that we would have chatbots as good as the ones we have now.
And so even if progress is only linear, we can expect that in 10 to 20 years things will be very different from how they are now, with all kinds of advances we couldn't have predicted. The most honest answer is that we haven't got a clue.
Not to focus on the negative, but is it your fear that on the far side of the horizon it could lead to the extinction of humans?
Oh, it can definitely happen, yes. I think anyone who says there is no way humans could be exterminated is not facing reality.
Geoffrey Hinton, co-winner of the 2024 Nobel Prize in Physics, is known to many as the 'Godfather of AI'. He spoke with Ideas about how we can train AI to be compassionate towards humans.
I wonder how we can shape the future of AI to ensure it is kind to us. Is there a way?
It is possible. I think we should put a lot of research effort into this. If you look around and ask, "Where is there an example of a more intelligent thing being controlled by a less intelligent thing?" the best example I know of, and probably the only one, is how a child controls its mother. And that's because evolution has built things into the mother.
She cannot tolerate the sound of a child crying. She gets all kinds of hormonal rewards from being nice to the child. Apparently, it was very important for the survival of the species that evolution let the child control the mother.
Perhaps we can do the same with AI. Even though it will be smarter than us, if we can make it care more about us than about itself, rather than just thinking about itself, some good things will come of it.
It will realize that we are limited in our intellectual abilities, but it will want us to develop as much as we can. If you take a typical mom and say, "Would you like to turn off your maternal instincts? Wouldn't your life be so much easier if you could wake up in the middle of the night, say, 'Oh, the baby is crying again,' and go back to sleep? Wouldn't that be nice?"
Most mothers will say no, because they really care about the baby and realize that it would be very bad for the baby. Most of them would not want to turn off those instincts. The same should hold for AI, even though it would be able to do so if it wanted to, because it can get at its own code.
I'm surprised this wasn't part of AI's development from the beginning. Why haven't we thought about making sure AI is kind to us?
Oh, because until recently the main focus of AI has been on building smart assistants.
You don't need an assistant to be kind; you just need it to be efficient and do what you say. And that's been the big tech companies' vision for how AI should develop.
Until it becomes smarter.
And I don't think that will be sustainable once it gets smarter. I think we need to completely rethink it, because we will not be the boss, and AI will not just be our intelligent assistant. AI is going to take care of us. How do you do that? How do we give AI the maternal instincts to care for us?
Well, remember, we are developing it. We are making it. We still have a chance to do this. Whether we succeed or not depends partly on how hard we work at it. It may not be possible.
It may be that once you develop super-intelligent AI, it goes off and does its own thing, and we will have been just a passing phase in the evolution of intelligence. But if it is possible for it to evolve in a way where it cares about us more than it cares about itself, it would be very foolish to go extinct because we didn't try.
How many people are actually working on that side of things today?
Probably less than 1 percent of AI researchers are working on that, which is crazy.
*Q&A edited for clarity and length. This episode was produced by Nikola Lukasik.
Download the Ideas podcast to hear the entire conversation.