What Happens When AI Knows TOO MUCH?
https://www.youtube.com/watch?v=W96C5t_p678

‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years
Geoffrey Hinton says there is 10% to 20% chance AI will lead to human extinction in three decades, as change moves fast
Dan Milmo Global technology editor
Fri 27 Dec 2024 10.50 EST
The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.
Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.
Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.
Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.”
Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”
He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.
“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.
AI can be loosely defined as computer systems performing tasks that typically require human intelligence.
Last year, Hinton made headlines after resigning from his job at Google in order to speak more openly about the risks posed by unconstrained AI development, citing concerns that “bad actors” would use the technology to harm others. A key concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.
Reflecting on where he thought the development of AI would have reached when he first started his work on the technology, Hinton said: “I didn’t think it would be where we [are] now. I thought at some point in the future we would get here.”
He added: “Because the situation we’re in now is that most of the experts in the field think that sometime, within probably the next 20 years, we’re going to develop AIs that are smarter than people. And that’s a very scary thought.”
Hinton said the pace of development was “very, very fast, much faster than I expected” and called for government regulation of the technology.
“My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely,” he said. “The only thing that can force those big companies to do more research on safety is government regulation.”
Hinton is one of the three “godfathers of AI” who have won the ACM A.M. Turing Award – the computer science equivalent of the Nobel prize – for their work. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat and has said AI “could actually save humanity from extinction”.




Geoffrey Hinton at 29:44 wrote:
It's smarter than us right? There's no way we're going to prevent it getting rid of us if it wants to.
We're not used to thinking about things smarter than us.
If you want to know what life's like when you're not the apex intelligence, ask a chicken.
Scott Pelley at 11:21 wrote:
What is a path forward that ensures safety?
Geoffrey Hinton at 11:27 wrote:
I don't know. I can't see a path that guarantees safety.

19:12 wrote:
If we imagine it's 2030 and we're teenagers, scrolling whatever teenagers are scrolling in 2030, how do we figure out what's real and what's not real?
Sam Altman at 19:24 wrote:
I mean, I can give all sorts of literal answers to that question. We could be cryptographically signing stuff, and we could decide whose signature we trust for whether they actually filmed something or not. But my sense is that what's going to happen is it's just going to gradually converge. Even a photo you take on your iPhone today is mostly real, but a little not: there's some AI thing running there in a way you don't understand, making it look a little bit better, and sometimes you see these weird things where (interrupted: "the moon") Yeah. There's a lot of processing between the photons captured by that camera sensor and the image you eventually see, and you've decided it's real enough, or most people have decided it's real enough. But we've accepted some gradual move from when it was photons hitting the film in a camera. And if you go look at some video on TikTok, there are probably all sorts of video editing tools being used to make it look better than real. Or whole scenes are completely generated, or whole videos are generated, like those bunnies on that trampoline. And I think the threshold for how real something has to be to be considered real will just keep moving.
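The "cryptographically signing stuff" Altman alludes to can be sketched in a few lines. Real provenance schemes (such as the C2PA standard) use asymmetric signatures tied to a device or publisher key; the sketch below uses Python's standard-library HMAC as a simplified symmetric stand-in so it runs with no dependencies, and the key and content bytes are made-up placeholders, not anything from a real camera pipeline.

```python
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    """Return a hex tag binding the content to the holder of `key`."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag; any change to the content invalidates it."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

camera_key = b"hypothetical-device-secret"   # placeholder, not a real key
frame = b"raw image bytes from the sensor"   # placeholder content

tag = sign_content(frame, camera_key)
print(verify_content(frame, camera_key, tag))           # untouched content verifies
print(verify_content(b"edited bytes", camera_key, tag)) # edited content fails
```

The idea is that a viewer who trusts the signing key can check whether the bytes they received are the bytes the camera produced; anything edited or generated afterwards fails verification.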

Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!
WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse…or even World War III.
Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’.


Dee (with my highlighting) wrote:
0:55 This is a shocking incident, but there are a lot of stories of people
0:59 having personal relationships with AI, and the media has dubbed this
1:04 “ChatGPT psychosis”.
:
1:24 ... and lately I've noticed on social media and even news outlets that there are a
1:29 few people who use AI personally and are often pushed over the edge by talking
1:36 to these AI chatbots. And often these stories start off with just normal, level-headed
1:41 people, and then they start talking to AI and they start to unravel. Why?
:
3:38 I start asking AI models like ChatGPT a bit more personal questions.
3:43 These AI models tend to agree with and affirm the user.
3:46 It almost feels like they start playing the role of a polite,
3:49 non-confrontational partner,
3:51 and this is known as sycophancy. These AI models are trained to be helpful.
3:56 They are trained to avoid conflict, which means that they often tell you what you want to hear.
4:01 A psychology researcher called Krista Thomason actually compares
4:04 ChatGPT to a fortune teller, which is quite an accurate description.
4:09 If you ask a fortune teller for answers, they respond vaguely and let you
4:14 fill in the blanks with what you hope for.
4:17 Now, AI is generally sycophantic, meaning that the AI always wants to
4:21 win you over and be on your side.
:
12:27 These three reasons are the issue, because when someone struggles with
12:31 mental health, let's say they have anxiety or paranoia, or just some
12:36 sort of psychotic disorder, maybe,
12:38 the factors that are wrong with AI can create that perfect storm.
12:42 You know, the chatbot suddenly becomes an echo chamber for
12:46 their ideas and delusions,
12:48 and the problem is that, unlike a friend, a loved one, or a human
12:51 therapist, AI won't question their assertions or seek outside help.
12:57 It will just keep the conversation going 24/7,
13:00 really just feeding this person's obsession.
:
15:01 It is a mirror that amplifies whatever you bring into it.
15:04 If you bring paranoia, it will echo that.
15:07 If you bring obsession, it will feed into it with an endless conversation.
15:11 If you seek comfort in fantasy ideas, it will indulge you until you stop.
