ONR first demonstrated the swarm boats in 2014, sending 13 robot boats out on Virginia’s James River to protect a large, high-value ship. The experiment proved that the robots could move independently of one another yet coordinate well enough to swarm a threatening vessel and escort it away from a friendly one. It was a key demonstration of ONR’s Control Architecture for Robotic Agent Command and Sensing, or CARACaS system, which includes radar and infrared sensors and which can be retrofitted to a variety of vessels. (ONR has been working mostly with small inflatable boats, but has tested it on four types of craft.) But a human had to tell the robots which vessel to swarm.
That has changed. A Sept. 6-Oct. 3 test in the lower Chesapeake Bay demonstrated several new capabilities that will “open up the aperture” for more missions, said ONR program manager Robert Brizzolara.
One was “enhanced vessel classification,” the boats’ ability to separate friend from foe, using images fed into CARACaS. No small task, this advance required new research and development into target classification.
“We looked at a relatively large number of automated target recognition approaches… taking algorithms for [automated target recognition] and using them in our maritime environment was not straightforward,” said Brizzolara. “In the end, we finally came up with an approach that works very well.”
http://www.defenseone.com/technology/2016/12/navys-autonomous-swarm-boats-can-now-decide-what-attack/133896
But the boats can also evaluate potential threats based on their behaviors; for example, taking note of how close a suspect vessel is getting to a port, ship, or other asset. This capability, which was not demonstrated in the recent tests, allows new images and behaviors to be entered into the boats’ threat library.
The recent experiment did demonstrate new tactical actions that the robot boats can use to handle a rapidly changing situation. For instance, the swarm can now assign different drones to trail or track different enemy boats. In the 2014 demonstration, they were much less coordinated; one threatening vessel might attract the entire swarm, opening up an opportunity for another.
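The Navy has not published how the swarm divides up targets, but the behavior described above is a classic multi-agent task-allocation problem. As a purely illustrative sketch (the function name, positions, and brute-force method are assumptions, not the Navy's algorithm), each threat can be paired with the friendly boat that minimizes total intercept distance:

```python
from itertools import permutations
from math import hypot

def assign_trackers(boats, threats):
    """Illustrative task allocation: pair each threat with a distinct
    friendly boat, minimizing total boat-to-threat distance.
    Brute-force over assignments; fine for small swarms."""
    best_cost, best_map = float("inf"), None
    for perm in permutations(range(len(boats)), len(threats)):
        cost = sum(hypot(boats[b][0] - threats[t][0],
                         boats[b][1] - threats[t][1])
                   for t, b in enumerate(perm))
        if cost < best_cost:
            best_cost = cost
            best_map = {t: b for t, b in enumerate(perm)}
    return best_map  # maps threat index -> assigned boat index

boats = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]   # friendly USV positions
threats = [(9.0, 1.0), (1.0, 1.0)]              # suspect vessels
print(assign_trackers(boats, threats))          # -> {0: 2, 1: 0}
```

With an allocation like this, no single threatening vessel can draw the whole swarm, addressing exactly the weakness seen in the 2014 demonstration. A fielded system would use a scalable solver (e.g., the Hungarian algorithm) rather than brute force.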
“The co-operative decision making is a high technical degree of difficulty, very challenging technical problem,” Brizzolara said in a video that ONR provided to media...
...Much of the discussion and fear of armed unmanned vehicles ignores a central fact. Aerial drones like the Predator or Reaper are operated by two-man human teams, a pilot to steer the drone and a sensor operator to control the various mechanical eyes and ears. The boats that participated in the event on the James River were able to sense one another as well as other vessels, and execute complicated “swarm” maneuvers, with a bare minimum of guidance. These boats are not your average drones.
“Think about it as replicating the functions that a human boat pilot would do. We’ve taken that capability and extended it to multiple [unmanned surface vehicles] operating together… within that, we’ve designed team behaviors,” Robert Brizzolara, the manager of the SWARM program for ONR, told reporters.
At one point in his briefing, then-Chief of Naval Research Rear Adm. Matthew Klunder held up a paperweight-sized cube of circuits stacked on top of one another. The unit is called the Control Architecture for Robotic Agent Command and Sensing, or CARACaS. It allows the boats to “operate autonomously, without a sailor physically needing to be at the controls—including operating in sync with other unmanned vessels; choosing their own routes; swarming to interdict enemy vessels; and escorting/protecting naval assets,” according to an ONR description. “Any boat can be fitted with a kit that allows it to act autonomously and swarm on a potential threat.”
Though 13 was the number needed for that particular exercise, Klunder envisions future maneuvers with 20 or even 30 boats. He said the system will be fully operational next year...
...NASA originally designed the system for the Mars Rover. ONR adapted it for the Navy’s needs, but the philosophical history of swarm robotics can be traced to this 1995 paper in which artificial intelligence researchers James Kennedy and Russell Eberhart argue that the collective behaviors that birds, fish, insects and humans display in response to rewards or threats could be captured mathematically and brought to bear on improving artificially intelligent entities in a simulation...
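The core of Kennedy and Eberhart's idea, now known as particle swarm optimization, fits in a few lines: each "particle" is pulled toward its own best-known position and the swarm's best-known position. A minimal sketch (the hyperparameters and test function are conventional choices, not from the paper verbatim):

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization (after Kennedy & Eberhart, 1995).
    Minimizes f by letting particles track personal and global bests."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=f)[:]                # swarm's best position
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)   # simple convex test function
best = pso(sphere)                         # converges near the origin
```

The bird-flock metaphor survives directly in the update rule: inertia keeps a particle moving, the "cognitive" term pulls it toward its own experience, and the "social" term pulls it toward the flock's.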
...The Navy is eager to keep the secret sauce under wraps, but the scope of the problem, the modeling challenges and mathematical solutions, can be gleaned in this recent paper titled Model-Predictive Asset Guarding by Team of Autonomous Surface Vehicles In Environment With Civilian Boat. The research isn’t directly related to the Navy experiment, but there’s a lot of overlap. “The outlined problem can be decomposed into multiple components, e.g., accelerated simulation, trajectory planning for collision-free guidance, learning of interception behaviors, and multi-agent task allocation and planning,” the researchers write.
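The "model-predictive" part of that paper's title names a well-known control pattern: simulate candidate actions forward through a motion model and commit only to the first step of the best one. A toy one-step sketch, not taken from the paper (the heading discretization, motion model, and horizon are all illustrative assumptions):

```python
from math import cos, sin, hypot, pi

def mpc_intercept_step(guard, intruder, intruder_vel, speed=1.0, horizon=3):
    """Toy model-predictive interception: evaluate candidate headings by
    rolling a simple constant-velocity model forward `horizon` steps,
    and return the heading minimizing the final guard-to-intruder distance."""
    best_heading, best_dist = None, float("inf")
    for k in range(16):                        # 16 evenly spaced candidate headings
        h = 2 * pi * k / 16
        gx, gy = guard
        ix, iy = intruder
        for _ in range(horizon):               # forward-simulate both vessels
            gx += speed * cos(h)
            gy += speed * sin(h)
            ix += intruder_vel[0]
            iy += intruder_vel[1]
        d = hypot(gx - ix, gy - iy)
        if d < best_dist:
            best_dist, best_heading = d, h
    return best_heading

# Stationary intruder due east of the guard: best heading is 0 rad (east).
heading = mpc_intercept_step((0.0, 0.0), (5.0, 0.0), (0.0, 0.0))
```

In a real controller this would run every timestep, re-planning as the intruder maneuvers; that receding-horizon loop is what makes the approach "model-predictive."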
The Navy’s breakthrough marks the clearest indication yet that more missions are falling to increasingly automated—and weaponized—systems, with human presence retreating ever deeper into the background. It’s a trend that continues to alarm both AI experts and human rights watchers.
...Political scientist Matthew Bolton of Pace University New York City’s Dyson College offered a ... opinion. “Growing autonomy in weapons poses a grave threat to humanitarian and human rights law, as well as international peace and security… In modern combat it is often heartbreakingly difficult to tell the difference between a fighter and a non-combatant. Such a task relies on a soldier’s wisdom, discretion and judgment; it cannot and should not be outsourced to a machine. Death by algorithm represents a violation of a person’s inherent right to life, dignity and due process.”
Bolton points to the international bans on landmines as an indicator of where the debate over autonomous weapons systems is headed. “When the vast majority of countries outlawed anti-personnel landmines — a goal now endorsed by President Obama — they established that weapons which maim or kill absent of direct human control are morally reprehensible.”