Thursday, March 5, 2020

Coalitions & AI

Following a previous article on multinational interoperability, a closer look at the complications was warranted, since the use of AI in modern strategic systems strains the pre-existing modalities governing how coalitions form and function.

Beyond differences in standards, cultures, and mutual trust, traditional international political-military alliances draw the bulk of their operational interoperability challenges from the diversity of doctrines, military technologies, and soldiers' skill levels. Adding Autonomous Weapons Systems (AWS) greatly increases this already substantial operational complexity by introducing several major considerations into alliance relations, chiefly:

DATA RIGHTS -
As experiments with self-driving cars have shown, AWS, unlike human soldiers, will produce and collect huge amounts of multidimensional data, and there are obvious disputes over who gets access to it. The data will cover the external environment; the AWS' interactions with that environment and with other AWS (including those of allies and non-allies); local learning and processing; anomalies and lag; security and cybersecurity incidents; and so on. Together these yield broader knowledge of the conflict and of capabilities, a picture that could be very useful to military planners and the intelligence community. Because much of this data will be collected privately by autonomous agents, there are not just significant opportunities for mutually beneficial sharing agreements between governments and organizations, but also a whole new layer of interaction/information games between states.
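The data categories above can be sketched as a minimal telemetry schema. Everything here is a hypothetical illustration: the category names, the `TelemetryRecord` fields, and the releasability check are assumptions for exposition, not drawn from any real coalition standard.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class DataCategory(Enum):
    """Illustrative taxonomy of data an AWS might produce."""
    ENVIRONMENT = auto()   # sensor readings of the external world
    INTERACTION = auto()   # exchanges with other AWS, allied or not
    LEARNING = auto()      # local model updates and processing logs
    ANOMALY = auto()       # faults, lag, degraded behaviour
    SECURITY = auto()      # cyber/physical security incidents

@dataclass
class TelemetryRecord:
    category: DataCategory
    producer: str                                      # platform that generated the record
    releasable_to: set = field(default_factory=set)    # partners cleared to see it

def releasable(record: TelemetryRecord, partner: str) -> bool:
    """In the simplest case, a sharing agreement reduces to a lookup."""
    return partner in record.releasable_to
```

Even this toy version makes the negotiation surface visible: every record carries an explicit releasability decision, which is exactly where the inter-state information games play out.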

CONTESTED C3 - 
Related to who gets what data rights is the matter of command arrangements. Decision-making, ethical guidelines, and operational procedures could differ significantly across allies when it comes to cyber-physical autonomous multi-agent systems, so AI systems present a very different set of problems in the design of an alliance's command and communication. This is also an opportunity for nations with better C3 infrastructure and more multilateral integration options to take the upper hand in setting the terms of security cooperation.

BYZANTINE AI -
Autonomous agents that display arbitrary, faulty, or malicious behavior are termed Byzantine AI; such behavior could also be the result of enemy exploitation. Managing these resources in an alliance requires special attention: formal procedures, quick consensus mechanisms, and channels of risk communication to interact and intervene, including on behalf of allies, while avoiding claims of mutual interference. For an example of bad interference, Tesla recently turned off the Autopilot of a customer's vehicle remotely over an unpaid fee; such "security features" could have drastic consequences in armed conflict, and so far we lack a valid governance framework for managing Byzantine assets.
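One standard building block behind the "quick consensus mechanisms" mentioned above is Byzantine-robust aggregation: combining agents' reports so that a bounded number of arbitrary or malicious values cannot skew the result. A minimal sketch, assuming agents report scalar estimates and at most f of them are Byzantine:

```python
import statistics

def trimmed_mean(reports, f):
    """Aggregate scalar reports from n agents while tolerating up to f
    Byzantine (arbitrary, faulty, or malicious) values, by discarding the
    f lowest and f highest reports before averaging. Requires n > 2f so
    that at least one honest report survives the trimming."""
    if len(reports) <= 2 * f:
        raise ValueError("need more than 2f reports to tolerate f Byzantine agents")
    trimmed = sorted(reports)[f:len(reports) - f]
    return statistics.fmean(trimmed)

# Honest agents report around 10.0; one compromised agent reports an
# extreme value, which the trimming removes.
print(trimmed_mean([9.8, 10.1, 10.0, 9.9, 1e6], f=1))  # close to 10.0
```

Real multi-agent systems aggregate vectors, not scalars, and use heavier machinery (coordinate-wise medians, BFT consensus protocols), but the design principle is the same: no single ally's compromised asset should be able to move the joint decision arbitrarily.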

RESOURCE PROTECTION -
Assuming that most wars will not be a decisive all-out Armageddon but will follow some sort of political Nash equilibria, and that today's friends can turn into tomorrow's foes, there are concerns about maintaining elements of competitive advantage over one's own allies. This calls for modalities protecting ML processes and intra-system communication, mutually agreed safety-testing tools, and architectural modularity in the deployed systems, among probably many other things. Further work on modularity and resource-protection mechanisms within an alliance would not just help solve the immediate interoperability challenges of a man-machine force, but also take us a step toward addressing the problem of the technology/equipment life-cycle.
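What architectural modularity plus resource protection might mean in practice can be sketched as a narrow module boundary: allies interact only through an agreed action interface and a mutually agreed safety-test suite, while the model internals stay private. Every name and method here is a hypothetical illustration, not a real framework:

```python
class SharedAutonomyModule:
    """Illustrative coalition module boundary: partners get behaviour
    and test verdicts, never the underlying model."""

    def __init__(self, model_fn):
        self._model_fn = model_fn  # private: never serialized or exported

    def act(self, observation):
        """The only operational capability exposed across the boundary."""
        return self._model_fn(observation)

    def run_safety_suite(self, test_cases):
        """Mutually agreed safety tests: partners receive pass/fail
        verdicts, not gradients, weights, or training data."""
        return [expected == self._model_fn(obs) for obs, expected in test_cases]
```

The design choice is the point: a partner can verify agreed behaviour (interoperability) without extracting the learned capability (competitive advantage), which is exactly the tension the paragraph above describes.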

(Figure: "Clearly, one player signals too much.")
There is a growing body of research in AI alignment addressing cooperation failures, AI's lack of evolutionarily established signalling mechanisms, approaches to joint optimization, and open-source interactions between AI systems of asymmetric capabilities. All of these should be top AI safety research questions for states looking to deploy AI for security-related tasks with their allies. And given that most of these tasks will be actuated via edge devices, the catalogue of security anti-patterns needs further expansion in the context of multi-agent learning systems.

Since AI integration is an inevitable and irreversible process, hopefully international alliances will produce (or adopt) a framework that finds wider applicability across the spectrum of cooperation-competition activity. What promptly comes to mind is Napoleon's somewhat ironic remark that if he must make war, he would prefer to do it against a coalition. The remark is still very relevant, even for security engineers and foreign policy wonks.
__
