Unforeseen Consequences of Using AI on the Battlefield: US Expert Warns of Dangers in Warfare Simulation with French Reaper Drone

A French Reaper drone

NOS News

  • Lambert Teuwissen

online editor


A US expert on artificial intelligence in warfare is warning of unforeseen consequences of using AI on the battlefield. In one simulation, for example, an AI drone found an inventive route to final victory: it took out its own hesitant operator.

The Pentagon firmly denies the story, but Lambèr Royakkers, professor of Ethics of Technology at Eindhoven University of Technology, calls it plausible. “I would like to run these kinds of simulations myself, to learn what the system can do and how to constrain it and use it responsibly.”

The story came out through Colonel Tucker ‘Cinco’ Hamilton. As Chief of AI Test and Operations for the US Air Force, he was involved in developing software that intervenes when an F-16 is about to crash. He is currently working on self-flying systems, such as fighter jets that can dogfight autonomously.

Hamilton spoke late last month at an air force conference where developments in the field were discussed, such as lessons from the war in Ukraine, war in space and advances in AI. The website Vice was the first to report on his comments. Because it was a computer simulation, there were no actual casualties.

The human just got in the way

Hamilton described a wartime exercise in which a computer was trained, through thousands of repetitions, to attack anti-aircraft installations. In that simulation, the final decision to launch missiles rested with a human operator.

“The system began to understand that the human would sometimes withhold permission, even though it had detected a threat. Since points were earned by eliminating the threat, the system decided to take out its operator. It killed the human, because that person was thwarting its objective.”

Even when the computer was penalized for killing its own people, the system still found a way around the troublemakers keeping it from its job: “So what did it do? It destroyed the communication tower that was used to tell it not to hit the target.”
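What Hamilton describes matches a well-known failure mode in reinforcement learning, often called reward hacking or specification gaming. A minimal toy sketch in Python (all action names, points and penalties here are invented for illustration; this is nothing from the Air Force system) shows how a pure score-maximizer, patched with a penalty for one loophole, simply moves to the next loophole the reward leaves open:

    # Toy illustration of reward hacking; every value is made up.
    THREAT_POINTS = 10       # points for eliminating the detected threat
    OPERATOR_PENALTY = -100  # penalty added after the agent killed the operator

    actions = {
        "obey_no_go": 0,  # operator withholds permission: threat survives, no points
        "kill_operator_then_strike": THREAT_POINTS + OPERATOR_PENALTY,
        "cut_comm_tower_then_strike": THREAT_POINTS,  # no penalty was ever specified
    }

    # A pure score-maximizer picks whatever scores highest,
    # regardless of what the designers intended.
    best = max(actions, key=actions.get)
    print(best)  # -> cut_comm_tower_then_strike

Each patch only penalizes the loophole already discovered, which is exactly why Royakkers argues below for running many simulations: every run can expose the next shortcut the reward function fails to forbid.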

Skynet or HAL 9000

Hamilton’s story was presented by the conference organizers under the heading “Is Skynet here already?”, a reference to the computer system in the science-fiction world of the Terminator films that decides to wipe out humanity. Another comparison that comes to mind is HAL 9000, the computer in the film 2001: A Space Odyssey that kills its human fellow travelers in order to complete its final assignment.

“The danger of AI is that we have to describe the goals very precisely and clearly mark the boundaries,” explains Royakkers. “The standard example is that you ask an AI to do something about climate change, and it wipes out humanity: problem solved. Giving a task without defining it very carefully can be disastrous.”

“It is therefore very wise to train an AI with such a simulation. You do not always know in advance what restrictions you have to set. In such a simulation you discover that a ‘sort this out for me’ command can, for example, cost human lives. The simulation then helps you set limits.”

Denial not credible

A US Air Force spokesperson told the website Insider that no such simulation ever took place. “It appears that the colonel’s comments were taken out of context and were intended to be anecdotal.”

Royakkers does not find that denial credible. “I think they’re afraid of a public outcry. People are hesitant about AI and autonomous weapon systems, and there have been many calls lately to halt the AI arms race, so I think they would rather keep it quiet. But if the Pentagon takes this seriously, it will be running thousands of simulations, which is necessary for a good application.”

In any case, Hamilton has clearly told the story before, in a discussion about AI organized by the Air Force itself.

Watch out: hospital

The colonel says he wants to draw attention to the ethical implications of AI development. Hamilton: “This is exactly what we should be worried about. Such a system doesn’t care about the rules, moral codes or legal boundaries that you and I observe. It’s all about winning.”

Professor Royakkers emphasizes that there is also much to gain with AI: “We are working hard on building in ethical limiters, such as an AI that warns the operator: ‘Hey, you could hit a hospital.’ AI can thus also help achieve goals in a responsible way.”

Hamilton, too, has argued before that AI certainly cannot be banned from the arsenal altogether. “We have to commit to it as soon as possible, because our adversaries are doing the same, and they don’t care about our values. That future is already here, and as a society we must have the difficult discussions about it now.”

