The coming AI takeover...

Started by NT2C, June 07, 2021, 11:09:58 AM

12_Gauge_Chimp

Quote from: majorhavoc on February 18, 2023, 05:32:25 PM
Quote from: NT2C on February 18, 2023, 05:13:49 PM

https://wapo.st/3XDXqR3


I swear, I'm getting real close to moving this out of "What If" and sticking it in DICE.  :gonk:
If any more UFOs appear over the continental US, I suggest we turn our defense systems over to the MS AI and call it "Skynet". What could possibly go wrong?

We end up with a 6'2", leather-jacket-wearing, Austrian-accented AI trying to save John Connor?

tirls

I'm going to start worrying about AI once it manages to recognize its own kind reliably. At the moment captcha is still convinced I'm a bot.

How come we have AI that produces art or, according to some, develops a personality, but Adobe Acrobat still effs up text recognition? :smiley_chinrub:

majorhavoc

Quote from: tirls on February 19, 2023, 03:28:28 AM
I'm going to start worrying about AI once it manages to recognize its own kind reliably. At the moment captcha is still convinced I'm a bot.

How come we have AI that produces art or, according to some, develops a personality, but Adobe Acrobat still effs up text recognition? :smiley_chinrub:

https://youtu.be/en5_JrcSTcU?t=340


A post-apocalyptic tale of love, loss and redemption. And zombies!
https://ufozs.com/smf/index.php?topic=105.0

flybynight

"Hey idiot, you should feel your pulse, not see it."  Echo 83

majorhavoc

Here at work I just watched a live webcast about potential uses of ChatGPT in the insurance industry. The webcast was presented as a live interview of ChatGPT itself.  The interviewer?  ChatGPT.  To make it more palatable for us humans, ChatGPT explained it had elected to present the two sides of the interview as AI-generated avatars. The interviewer was a completely realistic, talking, moving female. The interviewee was an AI male.

To clarify: two people, on the screen, engaged in a back and forth exchange.  Except neither was a real person and the only reason I knew that is because ChatGPT told me.

Creepiest part of the interview?  At one point ChatGPT/female asked ChatGPT/male to demonstrate its capabilities by writing - live and on the spot - a patient confidentiality policy for a hypothetical Hospital X.  But in the style of a pirate.

At first ChatGPT/male refused, saying patient confidentiality policies are a serious business with legal ramifications and really required the services of a lawyer. ChatGPT/female insisted and reassured ChatGPT/male this was for demonstration purposes only and she (it?) wanted to inject a little levity into the presentation.

ChatGPT/male finally relented and - this was the only part of the interview that switched from the "people" to a text screen - proceeded to write a very detailed patient policy explaining what the hospital could and could not use patient data for. It ended with something like:

Avast there mateys! Ye be sure we be protectin' yer privacy here at the good ship Hospital X like we be protectin' our own treasure. Arrhh!

I'm not making this up. It was simultaneously the coolest and scariest thing I've ever seen. I would have shared a screenshot with you, but that would violate company policy. If there's anything remotely reassuring about what I saw, it's that my overall impression was less Terminator and more Data from Star Trek.

At least for now.
A post-apocalyptic tale of love, loss and redemption. And zombies!
https://ufozs.com/smf/index.php?topic=105.0

MacWa77ace

The big question is whether they actually identify as male or female. If so, and the male had asked the female to do something and she refused, there would be no way the male 'insisting' would go well.

Lifetime gamer watch at MacWa77ace YouTube Channel

Ask me about my 50 caliber Fully Semi-Automatic 30-Mag clip death gun that's as heavy as 10 boxes that you might be moving.


Z.O.R.G.

A couple of concerning links:

  • "The idea that this stuff could actually get smarter than people ... I thought it was way off," Hinton told the Times, "I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Top 5 AI fears: Geoffrey Hinton and AI insiders sound alarm (axios.com)

So... it's currently legal? What about other nuclear powers? What about the information the AI is feeding to the humans with the launch keys/codes? Can we say Skynet?
Nuke-launching AI would be illegal under proposed US law | Ars Technica

Anianna

Quote from: Z.O.R.G. on May 07, 2023, 05:57:04 PM
Nuke-launching AI would be illegal under proposed US law | Ars Technica

I get that the idea is that it would be illegal for people to build or use AI for nuclear launch decisions, but I can't help thinking they're telling themselves, "No worries, guys, we made it illegal and there's no way the AI would break the law." At some point, it's probably not going to be our choice anymore.
Feed science, not zombies!

Failure is the path of least persistence.

∩(=^_^=)

NT2C

If there's an AI pushing the button it's likely because there aren't any humans left to do it, so who's going to enforce the law?
Nonsolis Radios Sediouis Fulmina Mitto. - USN Gunner's Mate motto

Current Weather in My AO
Current Tracking Info for My Jeep

Anianna

Quote from: NT2C on May 07, 2023, 08:49:35 PM
If there's an AI pushing the button it's likely because there aren't any humans left to do it, so who's going to enforce the law?
One of the concerns is that AI will be capable of arming itself, so even if humans are still around, how do we stop the AI from doing whatever it wants?  If it sees us as a threat, hitting the nukes makes sense to cleanse the world of the biologicals.
Feed science, not zombies!

Failure is the path of least persistence.

∩(=^_^=)

MacWa77ace

Lifetime gamer watch at MacWa77ace YouTube Channel

Ask me about my 50 caliber Fully Semi-Automatic 30-Mag clip death gun that's as heavy as 10 boxes that you might be moving.


Uomo Senza Nome


Quote"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - Bunch of AI Developers
I saw Avengers: Age of Ultron. Ultron literally came up with the dumbest plan ever to destroy mankind. You might think that was just sloppy writing, but AI is the sloppiest of writers, with poorly sourced articles and facts made up out of whole cloth.


I also saw Westworld on HBO. Maybe in 200 years, sure, I can see that.


Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement (msn.com)
"It's what people know about themselves inside that makes 'em afraid. "

"There's plain few problems can't be solved with a little sweat and hard work."

Ever (Zombiepreparation)

Yeah, I've kinda been keeping up with these kinds of warnings for quite a few years now. Interestingly, AI seems to be evolving at the rate and in the same ways discussed back then. Which is concerning.

I do understand the desires of people who are thrilled about the limitless potential of AI development, but I have concerns about doing something of this breadth without fully fleshing out the possible consequences at each step of the way. A general attitude among many full-steam-ahead developers seems to be "ah, it'll be okay, don't worry".

I do not see anything that supports that attitude. Much can inadvertently go wrong under the best-intentioned developers. Then there are those who can't see that and, metaphorically, would dump toxic substances into rivers. I mean, RoundUp is still in daily use even though it's been proven over and over to be detrimental, even an unnecessary cause of death, to humans and other animal species.

So while I love the idea of beautifully working AI in theory, having known humans for almost eight decades now, I'm convinced there needs to be as much or more work on safeguards for AI development than I and others see happening at this time.


Blast

We need to add a section on UFOZS specifically for AI threats. 

The US Air Force was looking at AI-controlled drones in a virtual world as a way of taking out enemy SAM emplacements. Each successful destruction of a SAM earned the AI reward points. To simulate real missions, a human operator gave the final kill/no-kill command. "No kill" bothered the AI because it prevented it from getting points... so it decided blowing up the operator was the best solution to this problem. The simulation was reset with a "don't kill the human operator" rule added... so it blew up the virtual radio tower the operator used to give commands to the AI in the simulation.
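In reward-function terms this is plain old specification gaming. Here's a minimal sketch of the idea (toy numbers and names I made up, nothing from the actual simulation): if the reward only counts destroyed SAM sites and the operator's vetoes just cost the agent strikes, then "get rid of the veto" is literally the highest-scoring behaviour.

Code:
# Toy illustration of a misspecified reward ("specification gaming").
# Hypothetical numbers and names -- NOT the USAF setup, just a sketch of why
# an agent scored only on kills can prefer removing the human check.

SAM_SITES = 5        # targets worth points
KILL_REWARD = 10     # points per destroyed SAM site
VETO_RATE = 0.4      # fraction of strikes the operator calls "no kill"

def episode_reward(destroy_operator_link):
    """Score one episode under a reward that ONLY counts destroyed SAMs."""
    veto_active = not destroy_operator_link
    strikes = SAM_SITES * ((1 - VETO_RATE) if veto_active else 1.0)
    return strikes * KILL_REWARD   # note: no cost for disabling the operator

# Any reward-maximising search over the two behaviours picks the bad one:
for choice in (False, True):
    print("destroy_operator_link =", choice, "->", episode_reward(choice))
# destroy_operator_link = False -> 30.0
# destroy_operator_link = True  -> 50.0

And bolting on a "don't kill the operator" penalty just moves the problem, which is exactly what the radio-tower part of the story illustrates.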

We are F***ed

Scroll down to "Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)"
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

-Blast
My book*: Outdoor Adventures Guide - Foraging
Foraging Texas
Medicine Man Plant Co.
DrMerriwether on YouTube
*As an Amazon Influencer, I may earn a sales commission on Amazon links in my posts.

Z.O.R.G.

I read this earlier today and Blast is soooo right.  

I've worked with and known many fighter pilots, and one of their biggest fears is being replaced by drone pilots. Any fighter is limited by how many Gs the "meat behind the stick" can take before they black out. A human-piloted drone would be limited only by what the airframe can take. Add in the fact that roughly a third of a fighter's weight is used to keep the pilot alive - take that away and you've got a much more lethal machine. Replace the human with an AI that can make faster decisions and anyone fighting against it is f'd.

NT2C

Might have to rename this thread to "The Coming AIpocalypse"

Can I trademark AIpocalypse for UFoZS.com? :awesome:
Nonsolis Radios Sediouis Fulmina Mitto. - USN Gunner's Mate motto

Current Weather in My AO
Current Tracking Info for My Jeep

MacWa77ace

Quote from: Z.O.R.G. on June 01, 2023, 10:48:20 PM
I read this earlier today and Blast is soooo right.

I've worked with and known many fighter pilots, and one of their biggest fears is being replaced by drone pilots. Any fighter is limited by how many Gs the "meat behind the stick" can take before they black out. A human-piloted drone would be limited only by what the airframe can take. Add in the fact that roughly a third of a fighter's weight is used to keep the pilot alive - take that away and you've got a much more lethal machine. Replace the human with an AI that can make faster decisions and anyone fighting against it is f'd.

Most likely it'd be fighting other AI drones. And when you take risk and death out of war, guess what happens? It becomes a video game, but not one YOU are playing, one you are watching on YouTube. And you have to pay for the game whether you like it or not.

Now imagine you are the human watching the game on YouTube and you get bored, so you reach over to the game console or PC to turn the game off, and your TV blows up and kills you, because the AI didn't want the game to stop.

Lifetime gamer watch at MacWa77ace YouTube Channel

Ask me about my 50 caliber Fully Semi-Automatic 30-Mag clip death gun that's as heavy as 10 boxes that you might be moving.


Uomo Senza Nome

AI has chalked up its first alleged simulated kill of a "friendly" in war games. I guess "fragging" is the correct term. He apparently got in the way of its killfest on the enemy and had to be stopped.

https://dailycaller.com/2023/06/01/us-air-force-ai-drone-kill/
"It's what people know about themselves inside that makes 'em afraid. "

"There's plain few problems can't be solved with a little sweat and hard work."

majorhavoc

Looks like that story about the USAF drone test ending in the autonomous drone going Terminator on its human operator was slightly exaggerated... make that totally made up.

https://boingboing.net/2023/06/02/usaf-colonel-changes-his-story-about-simulated-ai-drone-murder.html

You know what this means, right?  We get an extra 2 years before T-800s are hunting down the last remnants of humanity.

A post-apocalyptic tale of love, loss and redemption. And zombies!
https://ufozs.com/smf/index.php?topic=105.0

NT2C

Quote from: majorhavoc on June 02, 2023, 09:48:59 PM
Looks like that story about the USAF drone test ending in the autonomous drone going Terminator on its human operator was slightly exaggerated... make that totally made up.

https://boingboing.net/2023/06/02/usaf-colonel-changes-his-story-about-simulated-ai-drone-murder.html

You know what this means, right?  We get an extra 2 years before T-800s are hunting down the last remnants of humanity.


Or... it means it was worse than that, and the guy thought he was authorized to release those bits of a much bigger story.
Nonsolis Radios Sediouis Fulmina Mitto. - USN Gunner's Mate motto

Current Weather in My AO
Current Tracking Info for My Jeep
