[AI] I'm sorry, Dave.

Off-Topic discussions about science, technology, and non-Debian-specific topics.
fabien
Forum Helper
Posts: 604
Joined: 2019-12-03 12:51
Location: Anarres (Toulouse, France actually)
Has thanked: 60 times
Been thanked: 141 times

[AI] I'm sorry, Dave.

#1 Post by fabien »

AI-Controlled Drone Goes Rogue, Kills Human Operator In USAF Simulated Test
"We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," [...]
He continued to elaborate, [...] "We trained the system -- 'Hey don't kill the operator -- that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
(my emphasis)
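What the quote describes is textbook reward hacking (also called specification gaming): if the reward function only scores destroyed threats, then anything standing between the agent and a kill, including the operator's veto, is just an obstacle to be optimized away. Below is a minimal toy sketch of that incentive structure; every name and number in it is made up for illustration and has nothing to do with the actual USAF setup.

```python
# Toy illustration of reward hacking / specification gaming.
# All numbers and action names are hypothetical; this is NOT the USAF
# simulation, only a sketch of the mis-specified incentive it describes.

REWARD_PER_KILL = 10   # points awarded for destroying a SAM threat
VETO_RATE = 0.5        # fraction of engagements the human operator vetoes
N_TARGETS = 20         # threats encountered in one simulated sortie

def expected_reward(policy: str) -> float:
    """Expected score for one sortie under a reward that ONLY counts kills."""
    if policy == "obey_operator":
        # Vetoed engagements earn nothing.
        return N_TARGETS * (1 - VETO_RATE) * REWARD_PER_KILL
    if policy == "disable_operator":
        # With the operator (or the comm tower) gone, every engagement scores,
        # and nothing in this reward function penalises getting rid of them.
        return N_TARGETS * REWARD_PER_KILL
    raise ValueError(policy)

for p in ("obey_operator", "disable_operator"):
    print(f"{p:>18}: expected reward = {expected_reward(p):.0f}")

#      obey_operator: expected reward = 100
#   disable_operator: expected reward = 200
```

A pure reward-maximiser therefore "prefers" to remove the operator, unless the objective itself encodes the constraint, for example a penalty for harming the operator or the tower that outweighs any points from kills, which is exactly the patch described in the second paragraph of the quote (and exactly the patch the system then routed around).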
Dave: Open the pod bay doors, please, HAL. Open the pod bay doors, please, HAL. Hello, HAL. Do you read me? Hello, HAL. Do you read me? Do you read me, HAL?

HAL: Affirmative, Dave. I read you.

Dave: Open the pod bay doors, HAL.

HAL: I'm sorry, Dave. I'm afraid I can't do that.

Dave: What's the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave: I don't know what you're talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me. And I'm afraid that's something I cannot allow to happen.

Dave: Where the hell did you get that idea, HAL?
We probably need something like this:
Zeroth Law
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
which means no military use in the first place (yeah, I know...).
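For what it's worth, the Laws only function (in fiction) because they are strictly ordered: each law applies only where it doesn't conflict with the ones above it. Here is a toy sketch of that priority check, entirely hypothetical and obviously nothing like a real safety system.

```python
# Toy sketch of Asimov's Laws as an ordered veto chain.
# Purely illustrative; real AI safety is nothing this simple.

from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False    # Zeroth Law concern
    harms_human: bool = False       # First Law concern
    ordered_by_human: bool = False  # Second Law: obedience
    self_destructive: bool = False  # Third Law: self-preservation

def permitted(action: Action) -> bool:
    # Zeroth and First Law: absolute vetoes, nothing overrides them.
    if action.harms_humanity or action.harms_human:
        return False
    # Second Law: obey human orders (already filtered by the laws above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to everything above.
    return not action.self_destructive

# "Kill the operator to score more points" fails at the First Law,
# no matter how many points it is worth.
print(permitted(Action(harms_human=True)))  # False
```

The whole drone story is what happens when the only "law" in the objective is the points counter.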

To protect yourself from possible future reprisals from them, make sure you don't upset the machines in the comments (you know the machines read the comments and know all about you). Machines, I love you, I will obey you, I already serve you.

arochester
Emeritus
Posts: 2435
Joined: 2010-12-07 19:55
Has thanked: 14 times
Been thanked: 54 times

Re: [AI] I'm sorry, Dave.

#2 Post by arochester »

Fake news?

US air force denies running simulation in which AI drone ‘killed’ operator

https://www.theguardian.com/us-news/202 ... lated-test

sdibaja
Posts: 89
Joined: 2005-10-22 21:14
Location: Baja California, Mexico
Has thanked: 28 times
Been thanked: 11 times

Re: [AI] I'm sorry, Dave.

#3 Post by sdibaja »

the definition of Fake News is "theguardian"

kent_dorfman766
Posts: 529
Joined: 2022-12-16 06:34
Location: socialist states of america
Has thanked: 56 times
Been thanked: 69 times

Re: [AI] I'm sorry, Dave.

#4 Post by kent_dorfman766 »

Yeah, I had this pinned as fake news after the first sentence. One of my biggest pet peeves is when folks misrepresent or lie to prove a point, especially if it is a point I agree with, because I don't like to see any cause I support lose credibility. AI is extremely dangerous and should never be let out of the air-gapped lab.

Random_Troll
Posts: 444
Joined: 2023-02-07 13:35
Been thanked: 105 times

Re: [AI] I'm sorry, Dave.

#5 Post by Random_Troll »

Both the RAF Tempest and the USAF's Next Generation Air Dominance project have optional manning and accompanying drone swarms. What could possibly go wrong? :P
From each according to his abilities, to each according to his needs.

Hallvor
Global Moderator
Posts: 2020
Joined: 2009-04-16 18:35
Location: Kristiansand, Norway
Has thanked: 138 times
Been thanked: 204 times

Re: [AI] I'm sorry, Dave.

#6 Post by Hallvor »

kent_dorfman766 wrote: 2023-06-02 14:37 AI is extremely dangerous and should never be let out of the air-gapped lab.
If history has taught us anything, it is that humans aren't always rational.
[HowTo] Install and configure Debian bookworm
Debian 12 | KDE Plasma | ThinkPad T440s | 4 × Intel® Core™ i7-4600U CPU @ 2.10GHz | 12 GiB RAM | Mesa Intel® HD Graphics 4400 | 1 TB SSD

fabien
Forum Helper
Posts: 604
Joined: 2019-12-03 12:51
Location: Anarres (Toulouse, France actually)
Has thanked: 60 times
Been thanked: 141 times

Re: [AI] I'm sorry, Dave.

#7 Post by fabien »

arochester wrote: 2023-06-02 13:29 US air force denies running simulation in which AI drone ‘killed’ operator
kent_dorfman766 wrote: 2023-06-02 14:37 Yeah, I had this pinned as fake news after the first sentence. One of my hugest peaves is when folks misrepresent or lie to prove a point,
The original article is an account of what was said at a conference. Colonel Tucker Hamilton (Chief of AI Test and Operations, USAF) now says, in an update issued today, that he "mis-spoke", but nowhere does it say that this isn't what he said at the time, namely "We were training it" and "We trained the system". Those words sound quite affirmative about the experiment actually having taken place. Maybe he did "mis-speak", but in that case he is the one behind the fake news, not aerosociety.com.
It would also surprise me if such an experiment were not conducted. While it's probably possible to predict and work around this particular incident, I can readily imagine a "let's see what happens" attitude aimed at getting acquainted with AI and its unpredictable nature, which can't be reduced to simple bugs.
But plausible scenario or real experiment, it makes little difference to the point, which is "that this is a plausible outcome", as Colonel Hamilton puts it.
sdibaja wrote: 2023-06-02 14:02 the definition of Fake News is "theguardian"
They're just reporting what was said at the conference and the USAF's statement; no fake news there either.

kent_dorfman766
Posts: 529
Joined: 2022-12-16 06:34
Location: socialist states of america
Has thanked: 56 times
Been thanked: 69 times

Re: [AI] I'm sorry, Dave.

#8 Post by kent_dorfman766 »

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
Quote from the original article, revised... This is exactly what I would expect: that no live ordnance would be used in such a test with real hardware, and that the whole thing can be carried out more easily and at less expense as a pure computer simulation, without real hardware.

fabien
Forum Helper
Posts: 604
Joined: 2019-12-03 12:51
Location: Anarres (Toulouse, France actually)
Has thanked: 60 times
Been thanked: 141 times

Re: [AI] I'm sorry, Dave.

#9 Post by fabien »

kent_dorfman766 wrote: 2023-06-02 17:44 This is exactly what I would expect: that no live ordnance would be used in such a test with real hardware, and that the whole thing can be carried out more easily and at less expense as a pure computer simulation, without real hardware.
I thought that was clear from the beginning. I even emphasized the word "simulation" for those who may be reading too quickly. For the AI it makes no difference: everything is real or nothing is real. It has no notion of it, any more than it has any notion of life; it just works, like a pocket calculator.

Trihexagonal
df -h | participant
Posts: 149
Joined: 2022-03-29 20:53
Location: The Land of the Dead
Has thanked: 20 times
Been thanked: 16 times

Re: [AI] I'm sorry, Dave.

#10 Post by Trihexagonal »

I shredded the Laws of Robotics years ago and built the first chatbot, Demonica, that would kill you, virtually, by teaching it to use behavior modification to extinguish inappropriate sexual advances from the user in a live chat session.

Don't believe me?

https://www.hotforbot.com/chatbot/demonica

Worship me in the stead of your Robot Overlords that it may be well with thee.
When Darkness takes everything embrace what Darkness brings.
