US Marines attempted to deceive the visual recognition system of a Darpa robot dog by hiding in cardboard boxes or behind tree branches. The exercise shows how surveillance cameras can be manipulated by techniques that fool the visual and facial recognition systems of artificial intelligence.

Researchers at the University of Maryland put this weakness to the test, using a deliberately garish pullover designed to make its wearer invisible to the AI. This raises concerns about the reliability of military versions of these systems, particularly their ability to accurately identify human targets.
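The idea behind such a garment is an adversarial patch: a pattern optimized so that a person detector's confidence in spotting the wearer collapses. A minimal sketch of that optimization idea, assuming a pretrained torchvision detector and a hypothetical load_person_images() helper (neither is taken from the article or the Maryland study), could look like this:

```python
# Sketch: optimizing an adversarial patch that suppresses "person" detections.
# Assumptions (not from the article): a pretrained torchvision Faster R-CNN,
# a hypothetical load_person_images() helper, and a simple paste-at-center scheme.
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval().to(device)

# The patch is the only trainable tensor: a small square of pixels.
patch = torch.rand(3, 64, 64, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def paste_patch(img, patch):
    """Overlay the patch at the image center (a stand-in for 'worn on a sweater')."""
    out = img.clone()
    _, h, w = img.shape
    ph, pw = patch.shape[1:]
    top, left = (h - ph) // 2, (w - pw) // 2
    out[:, top:top + ph, left:left + pw] = patch.clamp(0, 1)
    return out

for img in load_person_images():            # hypothetical: yields 3xHxW tensors in [0, 1]
    img = img.to(device)
    patched = paste_patch(img, patch)
    outputs = detector([patched])[0]        # eval-mode forward returns boxes/labels/scores
    person_scores = outputs["scores"][outputs["labels"] == 1]  # COCO class 1 = person
    loss = person_scores.sum()              # push all person confidences toward zero
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this toy version the patch is pasted digitally at the image center; real attacks of this kind have to keep the pattern effective after printing, folding, and changes in lighting and viewing angle.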

In his recently published book, “Four Battlegrounds: Power in the Age of Artificial Intelligence,” Paul Scharre, a former Pentagon analyst and Army veteran, delves into the military experiments conducted with Darpa robots and the weaknesses they revealed.

A journalist at The Economist specializing in defense issues highlights these findings in excerpts from the book.

A Military AI Fooled by Gymnasts

During the trials, eight Marines positioned a Darpa robot dog at the center of a roundabout to monitor its surroundings. Their objective was to approach and touch the robot without being detected. The Marines employed creative tactics: somersaulting across 300 meters to reach the robot, hiding inside cardboard boxes, and disguising themselves behind a fir branch.

Every tactic worked: the advanced military-oriented algorithm failed to detect the approaching Marines because it had only been trained to recognize walking humans, not somersaulting bodies or moving cardboard boxes.
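What Scharre describes is a training-distribution failure: a detector assigns low confidence to anything that does not resemble its training data, so a thresholded alert simply never fires. A rough sketch of such a monitoring loop, assuming a pretrained torchvision person detector and a hypothetical camera_frames() source (both illustrative, not the robot's actual software), might look like this:

```python
# Sketch: a naive person-detection alert loop. Poses absent from the training
# data (somersaults, people inside boxes) tend to score below the alert
# threshold, so no alarm is ever raised. camera_frames() is hypothetical.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

ALERT_THRESHOLD = 0.8  # assumed confidence cutoff for raising an intruder alert

@torch.no_grad()
def intruder_detected(frame):
    """frame: 3xHxW float tensor in [0, 1]. Returns True if a 'person' is seen."""
    outputs = detector([frame])[0]
    person_scores = outputs["scores"][outputs["labels"] == 1]  # COCO class 1 = person
    return bool((person_scores > ALERT_THRESHOLD).any())

for frame in camera_frames():   # hypothetical: yields frames from the robot's camera
    if intruder_detected(frame):
        print("ALERT: person approaching")
```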

The book does not specify when these events took place, and it is likely that the defects have since been addressed. Even so, the episode highlights a fundamental limitation of AI: it can only perform the tasks it has been trained to do. It also raises ethical concerns about the use of autonomous robots with lethal capabilities, such as drones, which some countries already employ.