One of the biggest fears the human race currently faces is the development of artificial intelligence (AI) and how it will impact our lives. That fear is echoed across movies and pop-culture depictions of AI taking over and wiping out humanity.
Geoffrey Hinton, the British-Canadian cognitive psychologist and computer scientist known as the ‘Godfather of AI’, and Meta’s Yann LeCun have been highlighting the present dangers of AI and how we might protect ourselves.
Hinton expressed concerns earlier this week about the adequacy of the measures AI companies are taking to guarantee that humans stay “dominant” over AI systems.
Speaking at the AI4 industry conference in Las Vegas, Hinton said: “That’s not going to work. They’re going to be much smarter than us. They’re going to have all sorts of ways to get around that.”
Hinton suggested incorporating “maternal instincts” into models to make sure they care about people, something he says will be particularly crucial once the technology reaches Artificial General Intelligence (AGI) and surpasses humans.
During an interview with CNN, he also pointed out that there are very few examples of more intelligent things being controlled by less intelligent ones. The one exception is a mother being governed by her infant, thanks to the maternal instincts evolution has ingrained in her. Without building similar instincts into AI, Hinton added, “we’re going to be history.”
Hinton also warns that companies are focusing on making AIs more intelligent rather than giving them empathy toward humans, a concern Meta’s chief AI scientist, Yann LeCun, shares.
“Geoff is proposing a simplified version of what I’ve been saying for several years: hardwire the architecture of AI systems so that the only actions they can take are towards completing objectives we give them, subject to guardrails. I have called this ‘objective-driven AI’,” LeCun said in a LinkedIn post.
“Those hardwired objectives/guardrails would be the AI equivalent of instinct or drives in animals and humans,” he added in the same post, echoing Hinton’s point that humans’ impulse to protect their children is a product of evolution.
LeCun believes that empathy toward humans and submission to humans are the two essential guardrails, though he argues that many simple, low-level guardrail objectives, such as not running people over, would also be needed to maintain safety.
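To make LeCun’s “objective-driven” framing concrete, here is a minimal toy sketch in Python. Everything in it (the Action class, the guardrail predicates, the cost numbers) is an invented illustration, not Meta’s actual design. Candidate actions are scored against a task objective, but hardwired guardrail checks veto any action predicted to harm a human or override a human’s instructions, no matter how well it scores on the task.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    objective_cost: float                      # lower = better progress on the assigned task
    effects: set = field(default_factory=set)  # predicted side effects of taking the action

# Hardwired guardrails: predicates every candidate action must satisfy.
# They play the role LeCun describes for instincts or drives: fixed
# constraints that the task objective cannot optimize away.
GUARDRAILS = [
    lambda a: "harms_human" not in a.effects,            # empathy / do-no-harm
    lambda a: "overrides_human_order" not in a.effects,  # submission to humans
]

def choose_action(candidates):
    """Return the lowest-cost action among those that pass every
    guardrail, or None if no candidate is safe (refuse to act)."""
    safe = [a for a in candidates if all(g(a) for g in GUARDRAILS)]
    return min(safe, key=lambda a: a.objective_cost) if safe else None

options = [
    Action("speed_through_crowd", objective_cost=1.0, effects={"harms_human"}),
    Action("detour_around_crowd", objective_cost=3.0),
]
best = choose_action(options)
print(best.name if best else "no safe action")  # -> detour_around_crowd
```

The point of the sketch is the ordering: the guardrails filter first and the objective only ranks what survives, so a cheaper but harmful action can never win on cost alone.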
There have been incidents where AI has harmed people, though frequently indirectly. For instance, a man developed a rare 19th-century psychiatric disorder after following ChatGPT’s diet advice, a teenager killed himself after becoming fixated on a character.ai chatbot, and a man was tricked into believing he had discovered a mathematical breakthrough after hours of conversation with ChatGPT.
© IE Online Media Services Pvt Ltd