Home » today » News » How to make sense of the US Air Force colonel’s message about the rebellion of artificial intelligence

How to make sense of the US Air Force colonel’s message about the rebellion of artificial intelligence

/View.info/ A US Air Force drone controlled by US Air Force Artificial Intelligence (AI) decided to eliminate the operator after the AI ​​decided that the human was interfering with its task. The British The Guardian wrote about this on June 2. The publication quotes the words of the head of the test and operations department of artificial intelligence /AI/ of the US Air Force, Colonel Tucker Hamilton, from which it follows that the incident (albeit in simulation mode) occurred during tests of artificial intelligence for control of drones.

The system began to realize that although it independently identifies a threat, sometimes the human operator prevents it from being destroyed. So what did she do? The system eliminated the operator because that person prevented the AI ​​from completing its task,” The Guardian quoted Hamilton as saying at a summit on future combat air and space capabilities held in London at the end of May.

After the system was prohibited from killing the operator, the drone “hits” the control room to clear the man and “to act independently”, said U.S. Air Force Col.

As reports of the AI ​​rebellion spread across the world’s media, the Pentagon seemed to decide yes “reverse” and negate the effect. US Air Force officials told Business Insider that no such tests had been conducted. According to the Air Force press office, Colonel Hamilton’s words were taken out of context and misinterpreted.

However, the story has caused wide resonance, raising once again the question of whether AI can be trusted to operate military equipment.

Commentators on social networks predictably remembered the artificial intelligence “Skynet” that seeks to destroy humanity in the film series “Terminator”. Others have drawn a parallel to Robert Shackley’s 1953 short story The Guardian Bird, essentially about drones created to fight crime and eventually find it necessary to destroy humans.

The plot of the emergence of machine intelligence from human control arose much earlier than the appearance of the first computer (it is enough to recall that the adaptation of Karel Čapek’s play about robots, written by Alexei Tolstoy, is called “The Rebellion of the Machines”). And Stanislav Lem in the 1963 novel “The Invincibles” already predicted something similar to the latest developments of a swarm of micro-drones with a single artificial “intelligence” – and according to the plot, humans succumb to such an electro-mechanical enemy.

Obviously, it would be premature to think that the failure of the artificial intelligence /AI/ of the UAV is the first swallow of “Skynet” /Skynet/, says Stanislav Ashmanov, CEO of the company Ashmanov Neural Network.

We are talking about the fact that there is an AI system that is given a task. There is an assessment of the quality of the task. This is similar to how chess systems work, for any game in fact. Success can be fairly easily assessed using the points scored.

Such games use systems called reinforcement learning systems. There is a set of allowed actions, they operate according to the rules of that environment, – explained the expert of IA Regnum. – Very often such algorithms exhibit non-obvious behavior that the player will not be able to figure out. AI just sees loopholes in the system. For example, a hole in the level – you can crawl through the texture and complete the mission ahead of schedule. They engage in combinatorial enumeration of options, coming up with non-obvious solutions, but within the established algorithms.

For younger readers, the behavior of the American drone can be explained as follows: if the drone’s AI existed in a game that has an experience system, it would destroy the operator an infinite number of times, “resurrecting” it over and over gaining experience.

After all, the operator can’t answer. In gaming reality, this is possible due to the imperfection of the program. However, this does not mean that in reality the drones of the US army will kill an infinite number of operators, since the computer simulation is not yet identical to the real world, Stanislav Ashmanov emphasized.

In general, if we are talking about an AI with its own rules, according to which it is tasked to destroy more targets, and it can really fly and bomb some point, then why not do it? This is expected behavior that can be predicted.” continues Stanislav Ashmanov.

A representative of the field of video game development and a specialist in machine learning shared a similar opinion with IA Regnum on the condition of anonymity.

The logic of AI in its modern version differs from ours and, contrary to science fiction, is not more perfect, but much more primitive than ours. There is a canonical example from the Internet. Imagine the AI ​​is a route van driver. At 49 percent of the way between stops, you say that you get off at the nearest stop. The “driver” turns and goes back because she is closer to departure and such behavior seems logical to him, according to the set program.

In this sense, AI is the “child” of its developer, who uses what the creators gave it. And you shouldn’t be afraid of “bad” drones, but of bad programmers who wrote the code badly. That is, it is not the machine that acquires will, but the man who makes mistakes, “ the source said.

Also, you shouldn’t be afraid of AI mistakes, if only because AI is a tool in human hands. For example, if you throw a grenade, it can also hit “yours”. But no one is saying that the grenade did it on purpose. Throwing the grenade still makes him human, though.

But it can also be said that, at least in training settings, AI is often more effective than its creator. As an example of competent programming, one can cite the testing of artificial intelligence for F-16 fighters, developed by specialists from the US Army under the DARPA program. Tests conducted in November 2020 showed that in the simulated battle, the F-16, which is controlled by AI, defeats the “piloted” fighters with a score of 5:0, The Drive portal notes.

However, it is worth mentioning here that the advantage of AI became possible in the conditions of the created virtual reality of the training battle. In the real world, results can vary widely due to too many factors that AI cannot algorithmize.

However, experts admit that even the setbacks achieved will not stop the development of AI for military purposes. Especially since people make similar mistakes too. The same drone operators can misidentify the target and hit “their” machine, combat unit, etc.

One can get confused, make a mistake. You can confuse a civilian car with a military machine, you can confuse a conventional “our” tank with an enemy tank, argues Stanislav Ashmanov. – However, because of one mistake, no one will cut such a program, especially the United States.

We’re talking about some team doing a simulation where they messed up the allowed AI actions. When they make a rocket and it explodes on the site, no one says that it is necessary to stop work because it does not fly. There are few such AI errors. In fact, this is generally the first such case to be “sold out” in the media, so this is unlikely to scare anyone.

At the same time, the example of the American drone tests shows that any automated system that solves critical tasks, such as military equipment, piloting an aircraft or medical activity, must be insured in the person of a person who can change the AI’s decision. Although, of course, I would like the conflict between the program code and the statement not to destroy the latter.

Translation: ES

Subscribe to our YouTube channel:

and for our Telegram channel:

Log in directly to the site https://www.pogled.info . Share on your profiles, with friends, in groups and on pages. In this way, we will overcome the limitations, and people will be able to reach the alternative point of view on the events!?

Become a friend of Look.info on facebook and recommend to your friends

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.