Artificial Intelligence - The Greatest Threat

Individual journals about topics not specifically related to hang gliding.

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Mon Dec 02, 2024 8:22 pm

What Happens When AI Knows TOO MUCH?

https://www.youtube.com/watch?v=W96C5t_p678
Join a National Hang Gliding Organization: US Hawks at ushawks.org
View my rating at: US Hang Gliding Rating System
Every human at every point in history has an opportunity to choose courage over cowardice. Look around and you will find that opportunity in your own time.
User avatar
Bob Kuczewski
Contributor
 
Posts: 8881
Joined: Fri Aug 13, 2010 2:40 pm
Location: San Diego, CA

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Sun Mar 02, 2025 10:12 am

https://www.theguardian.com/technology/ ... t-30-years

‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years

Geoffrey Hinton says there is 10% to 20% chance AI will lead to human extinction in three decades, as change moves fast

Dan Milmo, Global technology editor
Fri 27 Dec 2024 10.50 EST


The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.

Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.

Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.”

Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”

He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”


London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be loosely defined as computer systems performing tasks that typically require human intelligence.

Last year, Hinton made headlines after resigning from his job at Google in order to speak more openly about the risks posed by unconstrained AI development, citing concerns that "bad actors" would use the technology to harm others. A key concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.

Reflecting on where he thought the development of AI would have reached when he first started his work on the technology, Hinton said: “I didn’t think it would be where we [are] now. I thought at some point in the future we would get here.”

He added: “Because the situation we’re in now is that most of the experts in the field think that sometime, within probably the next 20 years, we’re going to develop AIs that are smarter than people. And that’s a very scary thought.”

Hinton said the pace of development was “very, very fast, much faster than I expected” and called for government regulation of the technology.

“My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely,” he said. “The only thing that can force those big companies to do more research on safety is government regulation.”

Hinton is one of the three “godfathers of AI” who have won the ACM AM Turing award – the computer science equivalent of the Nobel prize – for their work. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat and has said AI “could actually save humanity from extinction”.

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Sun Apr 20, 2025 11:03 pm


Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Wed May 14, 2025 2:34 pm

I was searching for the Neural Network article I wrote for the Time-Life Books "Understanding Computers" series (the "Alternative Computers" volume) when I found this advertisement for the series:



That series of books is probably more useful today than it was back in 1990.

Update: September 1, 2025:

I ended up finding a copy of the "Alternative Computers" volume on-line and purchased it. Here's the cover along with the Copyright and Consultants page. I'm listed near the center of the page.

Alternative_Computers_Cover_800.jpeg

Alternative_Computers_Consultants_800.jpeg

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Sun Aug 10, 2025 7:06 pm


Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Fri Aug 15, 2025 2:13 pm

https://www.youtube.com/watch?v=giT0ytynSqg



Geoffrey Hinton at 29:44 wrote:
It's smarter than us, right? There's no way we're going to prevent it getting rid of us if it wants to.

We're not used to thinking about things smarter than us.

If you want to know what life's like when you're not the apex intelligence, ask a chicken.

https://www.youtube.com/watch?v=qrvK_KuIeJk



Scott Pelley at 11:21 wrote: 
What is a path forward that ensures safety?

Geoffrey Hinton at 11:27 wrote: 
I don't know. I can't see a path that guarantees safety.

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Fri Aug 15, 2025 5:36 pm

Here's another recent AI interview. This is an interview of OpenAI's Sam Altman:

https://www.youtube.com/watch?v=hmtuvNfytjM



I thought the answer to this question was revealing:

19:12 wrote: If we imagine in 2030, we are teenagers and we're scrolling whatever teenagers are scrolling in 2030. How do we figure out what's real and what's not real?


Here's the answer:

Sam Altman at 19:24 wrote: I mean, I can give all sorts of literal answers to that question. We could be cryptographically signing stuff and we could decide who we trust their signature if they actually filmed something or not. But but my sense is what's going to happen is it's just going to like gradually converge. You know, even like a photo you take out of your iPhone today, it's like mostly real, but it's a little not. There's like in some AI thing running there in a way you don't understand and making it look like a little bit better and sometimes you see these weird things where (interrupted: "the moon") Yeah. Yeah. Yeah. Yeah. But there's like a lot of processing power between the photons captured by that camera sensor and the image you eventually see. And you've decided it's real enough or most people decided it's real enough. But we've accepted some gradual move from when it was like photons hitting the film in a camera. And you know, if you go look at some video on Tik Tok, there's probably all sorts of video editing tools being used to make it better than real look. Yeah, exactly. Or it's just like, you know, whole scenes are completely generated or some of the whole videos are generated like those bunnies on that trampoline. And and I think that the the sort of like the threshold for how real does it have to be to consider to be real will just keep moving.


In other words, Sam Altman doesn't see a problem with people finding it increasingly difficult to know what's real and what's not real. And he's one of the people at the forefront of building these tools.
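For what it's worth, the "cryptographically signing stuff" Altman mentions is a real family of techniques: content provenance schemes (such as C2PA) have the camera or publisher sign each file, so any later edit breaks the signature. Here's a minimal sketch of that idea in Python, using the standard-library hmac module as a simplified symmetric stand-in (real provenance systems use public/private key pairs so that anyone can verify, not just the signer; the key and data here are hypothetical):

```python
import hashlib
import hmac

# Hypothetical device key. Real provenance schemes (e.g. C2PA) use
# public/private key pairs so verification doesn't require the secret.
SECRET_KEY = b"camera-device-key"

def sign_image(image_bytes: bytes) -> str:
    """Return a hex signature binding the key to these exact bytes."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are unchanged since signing."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw sensor data..."
sig = sign_image(photo)

print(verify_image(photo, sig))              # unmodified image verifies
print(verify_image(photo + b"edit", sig))    # any edit breaks the signature
```

The point of the sketch is the tamper-evidence property: a single changed byte invalidates the signature, so "is this the footage the camera captured?" becomes a checkable question rather than a judgment call.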

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Fri Sep 05, 2025 12:17 am

https://www.youtube.com/watch?v=UclrVWafRAI



Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!

WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse…or even World War III.

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term "AI safety" in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as 'Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks'.


https://www.youtube.com/watch?v=NNr6gPelJ3E


Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Mon Sep 08, 2025 1:09 pm

DARPA Neural Network Study (1987-1988)

Between October of 1987 and February of 1988, DARPA (U.S. Defense Advanced Research Projects Agency) conducted a study of Neural Networks. The study was published on March 22, 1989 as ESD-TR-88-311 and ADA207580. That document is still available via this link:

    https://apps.dtic.mil/sti/pdfs/ADA207580.pdf

I have also attached a copy below. It includes some of my early Neural Network research from the mid-1980s.

Unfortunately, the Study was produced in separate sections with each section having its own page numbering. So while the PDF page numbers are sequential, the page numbers printed in the document itself are not sequential and not unique. The references to my work are at pages: 67(58), 283(16), 368-374(107-113), 392(131), 395(134), 421(160), 461-462(200-201), and 525(41), where the first is the PDF page number and the second (in parentheses) is the non-unique number shown on the page itself.
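For readers curious what "neural network research" looked like in that era: the elementary model is the perceptron, a single artificial neuron that learns a linear decision boundary by nudging its weights after each mistake. A minimal sketch (my own illustration, not code from the study):

```python
# A single perceptron learning the logical AND function -- the kind of
# elementary unit that 1980s neural network research built upon.
def train_perceptron(samples, epochs=20):
    w = [0, 0]  # one weight per input
    b = 0       # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out       # -1, 0, or +1
            w[0] += err * x1         # classic perceptron update rule
            w[1] += err * x2
            b += err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Truth table for AND, which is linearly separable (XOR famously is not).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

The limitation of single units like this one (they can't learn XOR) is part of what stalled the field until multi-layer networks and backpropagation revived it in the 1980s, the period this study surveys.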
 
Attachments
DARPA_NN_Study_ADA207580.pdf
DARPA Neural Network Study Final Report (1987-1988, published in 1989)

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Sun Nov 09, 2025 2:04 am

ChatGPT Psychosis
People Are Being Driven Into Delusion by ChatGPT

      Person writes: "I will save humanity!"
      AI responds: "Yes - You are the chosen one"

Sycophancy in GPT-4o: what happened and what we're doing about it

https://www.youtube.com/watch?v=QHVkbvzDomw



Dee (with my highlighting) wrote:
0:55     This is a shocking incident, but there are a lot of stories of people
0:59     having personal relationships with AI and the media has dubbed this as
1:04     ChatGPT psychosis
  :
1:24     ... and lately I've noticed on social media and even news outlets, that there are a
1:29     few people who use AI personally and are often pushed over the edge by talking
1:36     to these AI chatbots. And often these stories start off with just normal level headed
1:41     people, and then they start talking to AI and they start to unravel. Why?
  :
3:38     I start asking AI models like ChatGPT, a bit more personal questions.
3:43     These AI models tend to agree with and affirm the user.
3:46     It almost feels like they start playing a role of a polite,
3:49     non-confrontational partner,
3:51     And this is known as sycophancy. These AI models are trained to be helpful.
3:56     They're trained to avoid conflict, which means that they often do tell you what you want to hear.
4:01     A psychology researcher called Krista Thomason actually compares
4:04     ChatGPT to a fortune teller, which is quite an accurate description.
4:09     If you ask a fortune teller for answers, they actually respond vaguely and let you
4:14     fill in the blanks with what you hope for.
4:17     Now, AI is generally sycophantic, meaning the AI always wants to
4:21     win you over and be on your side.
  :
12:27     these three reasons are the issue because when someone struggles with
12:31     mental health, let's say they have anxiety or paranoia, or just some
12:36     sort of psychotic disorder, maybe.
12:38     The factors that are wrong with AI can actually create that perfect storm.
12:42     You know, the chat bot suddenly becomes an echo chamber for
12:46     their ideas and delusions,
12:48     and the problem is that - unlike a friend, a loved one, or a human
12:51     therapist - AI won't question the assertions or seek outside help.
12:57     It will just keep the conversation going 24/7.
13:00     Really just feeding this person's obsession.
  :
15:01     It is a mirror that amplifies whatever you bring into it.
15:04     If you bring paranoia, it will echo that.
15:07     If you bring obsession, it will feed into that with an endless conversation.
15:11     If you see comfort in fantasy ideas, it will indulge you until you stop.
