Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Fri Nov 01, 2019 8:01 am

Purpose

This blog is dedicated to understanding the threat of Artificial Intelligence to humans.

Rules

All members of the U.S. Hawks are welcome to post. The author reserves the right to remove posts that stray from the subject. Please try to be reasonably polite.

Background

I began working in "AI" in 1985. At that time we were struggling to solve even the simplest problems with machine learning. A lot has changed in 35 years.
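
As a concrete (and purely hypothetical) illustration of the kind of "simplest problem" that stumped machine learning in that era: a single-layer perceptron, the workhorse model of the time, can learn AND but can never learn XOR, because no single straight line separates XOR's outputs. The little Python sketch below is only an illustration, not code from any actual project:

```python
# Hypothetical illustration (not from any real 1985 project):
# the classic perceptron learning rule on two tiny problems.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Single-layer perceptron with two inputs and a bias."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def accuracy(samples, w1, w2, b):
    hits = sum((1 if (w1 * x1 + w2 * x2 + b) > 0 else 0) == t
               for (x1, x2), t in samples)
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # linearly separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable

for name, data in [("AND", AND), ("XOR", XOR)]:
    print(name, "accuracy:", accuracy(data, *train_perceptron(data)))
# AND reaches 100%; XOR never can, because no single line separates its
# classes -- solving it needs a hidden layer, which is exactly the kind
# of step that took the field years to make practical.
```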

One of the looming questions is: "What will happen when there is nothing that any human can do that isn't done thousands of times faster, cheaper, and better by machines?"

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Fri Nov 01, 2019 9:21 am

Twilight Zone "Steel" Episode

Written by Richard Matheson (based on his short story "Steel," first published in the May 1956 issue of The Magazine of Fantasy and Science Fiction).

Original air date: October 4, 1963.

"Proof positive that you can't outpunch machinery."
                                                      - Rod Serling

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Sat Dec 07, 2019 12:55 pm

I have long feared that "self-driving cars" will eventually make it illegal for people to drive themselves. I figured it would happen gradually, with insurance companies giving "incentives" to people who use self-driving cars. Of course, an "incentive" for one group is just another word for a "penalty" on everyone else. Eventually, driving yourself will become economically impractical for most people, and then it will become illegal. Doug's quote from Oz is right on point:

Doug M wrote:The end of human intelligence and coordination.
I loathe the day when the insurance companies will demand that every road vehicle shall be piloted by AI rather than a human. Performance cars and trucks will be a thing of the past, as will motorcycles, etc. If the gov't thinks you owe back taxes or doesn't like your politics, or some other made-up misdemeanor, guess where the vehicle will take you without your permission? End of freedom.


In my early career in AI, I didn't see any danger because I saw AI being used in isolated applications. I didn't foresee that so many aspects of our lives (banking, driving, voting, communication, ...) would be connected to a network of computers. Everything is in place now. Just a little more training and ...

Re: Artificial Intelligence - The Greatest Threat

Postby Bob Kuczewski » Thu Oct 01, 2020 3:18 pm

https://www.thesocialdilemma.com/

TheSocialDilemma wrote:The problem beneath all other problems

Technology’s promise to keep us connected has given rise to a host of unintended consequences that are catching up with us. If we can’t address our broken information ecosystem, we’ll never be able to address the challenges that plague humanity.


Linked article:
New York Times wrote:Opinion
TURNING POINTS

Our Brains Are No Match for Our Technology
By Tristan Harris
Dec. 5, 2019

Turning Point: In July, the Federal Trade Commission announced that it would fine Facebook $5 billion, the largest penalty ever levied by the agency for consumer privacy violations.

A decade ago, Edward O. Wilson, the Harvard professor and renowned father of sociobiology, was asked whether humans would be able to solve the crises that would confront them over the next 100 years.

“Yes, if we are honest and smart,” he replied. “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”

Since Mr. Wilson’s observation, technology’s godlike powers have increased dramatically, while the ancient, Paleolithic impulses of our brains have remained the same.

Yet this isn’t usually one of the complaints leveled against technology companies today — that the digital infrastructures of Facebook and Google have overwhelmed the natural capacities of our brains. Instead, we hear concerns that tech firms are collecting and tracking our personal data. Or that they’re simply too big.

Let’s imagine that we managed to solve the privacy issue. In this new utopia, we would own all our data, and tech giants would be forbidden from tracking our online whereabouts; they would have access only to the data we agreed to share.

While we might see fewer creepy ads and feel less paranoid about surveillance, the troubling trends connected to the online world would remain unaddressed.

Our addiction to social validation and bursts of “likes” would continue to destroy our attention spans. Our brains would still be drawn to outrage and angry tweets, replacing democratic debate with childlike he-said, she-said. Teenagers would remain vulnerable to online social pressure and cyberbullying, harming their mental health.

Content algorithms would continue to drive us down rabbit holes toward extremism and conspiracy theories, since automating recommendations is cheaper than paying human editors to decide what’s worth our time. And radical content, incubated in insular online communities, would continue to inspire mass shootings.

By influencing two billion brains in these ways, today’s social media holds the pen of world history: The forces it has unleashed will affect future elections and even our ability to tell fact from fiction, increasing the divisions within society.

Yes, online privacy is a real problem that needs to be addressed. But even the best privacy laws are only as effective as our Paleolithic emotions are resistant to the seductions of technology.

A viral app called FaceApp recently persuaded 150 million people to hand over private images of their faces, paired with their names, simply by appealing to their vanity. How? The app offered the ability to create surreally accurate portraits of people as they would look many years in the future. Who owns the app (and the 150 million names and faces)? A Russian company based in St. Petersburg.

Who needs to hack elections or steal voter information when people will happily hand over scans of their faces when you appeal to their vanity?

With our Paleolithic instincts, we’re simply unable to resist technology’s gifts. But this doesn’t just compromise our privacy. It also compromises our ability to take collective action.

That’s because our Paleolithic brains aren’t built for omniscient awareness of the world’s suffering. Our online news feeds aggregate all of the world’s pain and cruelty, dragging our brains into a kind of learned helplessness. Technology that provides us with near-complete knowledge without a commensurate level of agency isn’t humane.

Our Paleolithic brains also aren’t wired for truth-seeking. Information that confirms our beliefs makes us feel good; information that challenges our beliefs doesn’t. Tech giants that give us more of what we click on are intrinsically divisive. Decades after splitting the atom, technology has split society into different ideological universes.

Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges. The advertising business model built on exploiting this mismatch has created the attention economy. In return, we get the “free” downgrading of humanity.

This leaves us profoundly unsafe. With two billion humans trapped in these environments, the attention economy has turned us into a civilization maladapted for its own survival.

Here’s the good news: We are the only species self-aware enough to identify this mismatch between our brains and the technology we use. Which means we have the power to reverse these trends.

The question is whether we can rise to the challenge, whether we can look deep within ourselves and use that wisdom to create a new, radically more humane technology. “Know thyself,” the ancients exhorted. We must bring our godlike technology back into alignment with an honest understanding of our limits.

This may all sound pretty abstract, but there are concrete actions we can take.

First, policymakers can create a special tax for tech giants — a “downgrading tax” — that would make their business models, based on extracting and exhausting our attention spans, prohibitively expensive, while redistributing wealth to journalism, public education and the creation of new platforms that privilege human values and service to society.

Second, instead of joining free social media platforms that benefit from turning us into addicted, narcissistic extremists, we could agree to pay subscription fees to services that shun “likes” for features that empower our lives offscreen, making these services, in essence, fiduciaries acting in the best interests of humanity.

Third, instead of spreading disinformation, digital platforms could radically strengthen the media infrastructures that protect us from malicious viral content and tech-enabled distortions like “deepfakes” (fabricated videos manipulated by artificial intelligence to look genuine).

Candidates in the 2020 United States presidential election must educate themselves about the threat posed by technology’s race to outmatch our brains, and the news media must hold them accountable. No president can effectively deliver on his or her campaign promises without addressing the attention economy.

To create humane technology we need to think deeply about human nature, and that means more than just talking about privacy. This is a profound spiritual moment. We need to understand our natural strengths — our capacity for self-awareness and critical thinking, for reasoned debate and reflection — as well as our weaknesses and vulnerabilities, and the parts of ourselves that we’ve lost control over.

The only way to make peace with technology is to make peace with ourselves.

