Monday, May 27, 2024

Technology and Security Competitions

There circulate these days a variety of rumors of technological destruction - from an impending AI doom to a US-China technological decoupling and subsequent showdowns, or even an escalation of "small wars" into a litany of CBRN scenarios - for all we know (do we ever?), the internet could well be full of competing social botnets programming us for our imminent futures. Nevertheless, these rumors invite a closer look at the nature of technology development, coupling and competition before we mourn (gleefully?) the demise of this brave new world of ours. In this digitally inter-connected world where "there is no there and we're all here", geography still remains a fundamental reality. Westphalian states are the dominant security providers over specific geographies. There are many competing states, perennially uncertain of each other's intentions and desirous of each other's resources and geographic control. They must also maintain a moral upper hand and legitimacy over competing security providers. This security competition is one of the key troubles in contemporary governance of "cyberspace", where competing multilateral and multistakeholder logics and value systems coexist.

Let us therefore begin with the internet itself. The graph below, depicting the nation-wise publication of technical documents at the Internet Engineering Task Force (IETF), underlines the nature of technological contest among "great powers". A Thucydides vibe between a rapidly rising power (China) and a gradually declining one (US) is quite naturally apparent here. Ostensibly, cyberspace is a global commons, contested but shared, and the development of its protocols and standards, historically contingent as it is, cannot be used to make generalised remarks about the development of the AI technology stack and its technical standards. For those are developed with a much greater regional impetus, where locally clustered actors dominate markets and policy. Moreover, as AI and social bots become more pervasive, internet governance itself might have to integrate aspects of geographically contingent platform and API governance machinations.

A Thucydides' Graph of Technology Competition [Source]

This geographic characterization of the digital stack portends an incipient geopolitical logic in technology construction. To take a particular area, one could see how geography forces technology in the development of various national cyber security complexes. The US, for example, has no mortal enemies at its borders and is fairly insulated by the oceans, yet has to fight wars all over the world. It has, as a result, leaned quite heavily on developing global communications and surveillance networks, superiority in the air and the electromagnetic spectrum, and enabling cyber security partnerships such as the Five Eyes. Furthermore, one may argue that the present internet norms and architecture are in themselves a significant tool at the service of the liberal international order.

The Chinese cyber security complex, on the other hand, has leaned a lot more towards domestic control, industrial espionage, the management and expansion of territory, and its ambitions over Pacific waters - where it comes into direct conflict with the US hegemony that will continue to shape its digital-technical goals. One must note here that the construction of a state's cyber security complex entails all three properties of technology - namely technique, equipment, and organisation - the unique nature of which emerges from the geo-strategic forces underlying them.

One of the best examples of this geographically contingent interplay of social organisation, technique, and equipment is Israel. Its location and initial conditions forced it to shed all the useless pomp and hierarchy of military-technical organisations and adopt a ruthless functionalism instead, producing in effect a world-class cyber security complex without the accompanying burden of quasi-Victorian bureaucracies. In fact, Russia's embrace of "hacker culture" and asymmetric cyber capabilities with respect to the US and Europe must also be seen within the context of the collapse of the Soviet geographic and economic vision, to say nothing of the abject failure of the Soviets' symmetrical competitive strategy with the US in early internet development.

We, not to leave ourselves out, have had two main geographic adversaries - Pakistan and China. However, early on our state managers pivoted Indian strategic thinking and discourse around Pakistan, not China, and stuck to it. Pivoting our strength and capability building around the smaller adversary was certainly easier and also suited the political and professional incentives of the powers that be, given long and painful historical damages. However, this long-held strategic benchmark of a useless, weaker enemy produced a psychological and technical backwardness in our society - we chose to import, not build, our own military-security stack, including even cyber security software. In fact, it was a third-party cyber threat observer (whose services we also tried importing then, and which is now famous for its Pegasus investigations) that first brought the sweeping extent of Chinese botnets in India to the notice of the public and the government.

The geo-strategic foundations of technology development suggest that as long as our polity remains shy and inertial about the deeply geographic and military-technical nature of our competition with China, our cyber security complex and technology organisation too will continue to reflect that institutional ambiguity. Three broad lessons follow from these geo-strategic foundations, each requiring "radical acceptance" by policymakers:

A) Technocratic rationality dominates ethical rationality in hyper-competitive arenas. 

The development of dominant cyber security clusters in Tel Aviv, Washington and San Francisco indicates how closely innovation is tied to knowledge spillover and a hyper-competitive social ecosystem. Not only is there great mobility of high-end expertise across public-private organisations within these geographies, it also corresponds closely to the military-security requirements of their states. Hyper-competitive games cannot be played with ethical or ideological instruments. A case in point: present AI systems need robustness, safety, and energy efficiency - as technocratic rationality demands - but if governments instead prioritize corporate DEI policies and direct finite resources into inter-governmental virtue-signaling games, they may win some validation but will lose the broader security competition itself.

B) Global technical standardization is beyond conventional competencies of governments. 

The rapid rise of China in the Thucydides' graph above draws significantly from the contributions of companies like Huawei, Baidu, and Tencent, along with actors like China Telecom. This happened after a series of reforms in the early 2010s gave more leeway and independence to such actors in global internet governance. Thus, transnational technical standardization requires a whole-of-society, anti-Westphalian approach to address certain gaps in states' technical expertise and governance capacities. Having to navigate this conflagrating techno-security competition between the US and China, we need serious structural reforms now, across the state and industry, to dramatically alter the trajectory of our own line - and certainly not more of the same thing.

C) Technology artifacts are not technology. 

Fundamentally, technology is information (as in know-how). One may further say that technology is strategic information. Societies acquire this knowledge on their own under considerable security and civilisational stressors, wars being a prominent one. Since high ambitions often have to grapple with constrained timelines and low budgets, it is tempting for bureaucracies to buy cool stuff and say that they've acquired technology. Yet this is eventually hogwash, accompanied by an overbearing servitization component. The Americans and Israelis got this information in the process of navigating warfare, and the Chinese by stealing intellectual property instead.

This discussion also underlines a key change in national polities post-WWII (owing to the introduction of nuclear weapons and a corresponding military-scientific elite in decision-making processes): in addition to military and geopolitical planning, states also have to integrate global technological developments into their strategic calculus. With strong AI around the corner and a possibility to remake the internet (for good?), this would require considerable expertise outside the usual skills possessed by politicians and their babus, and additionally a permanent view of politics beyond electoral vicissitudes. Technology, by transforming our needs, environment and actual possibilities, slowly shapes its own operating environment. It has a life, and now a mind, of its own, throwing transnational digital governance in practice into an uncomfortable mix with the contraptions of the administrative state. Where does our geo-strategic imperative take us at this juncture?

Monday, September 11, 2023

Reconciling AI Governance and Cybersecurity

  

Recently, Sam Altman has been touring the world, attempting (perhaps) a regulatory capture of global AI developments. No wonder OpenAI does not like open-sourced AI, at all. Nevertheless, this post isn’t about AI development, but its security and standardization challenges. Generally, an advanced cyber threat environment, as well as the defenders’ cyber situation awareness and response capabilities (henceforth referred to together as ‘security automation’ capabilities), are both overwhelmingly driven by automation and AI systems. To get an idea, take something as simple as checking and answering your Gmail today, and enumerate the layers of AI and automation that go into securing and orchestrating that simple activity.

Thus, all organisations of a noticeable size and complexity have to rely on security automation systems to effect their cybersecurity policies. What is often overlooked is that there also exist some cybersecurity “metapolicies” that enable the implementation of these security automation systems. These may include the automated threat data exchange mechanisms, the underlying attribution conventions and knowledge production/management systems. All these enable a detection and response posture often referred to by marketers and lawyers as “active defense” or “proactive cybersecurity”. However, if you pick up any national cybersecurity policy, you’d be hard pressed to find anything on these metapolicies – because they are often implicit, brought into national implementations largely by influence and imitation (i.e. network effects) and not so much by formal or strategic deliberations.

These security automation metapolicies are important to AI governance and security because in the end, all these AI systems, whether completely digital or cyber-physical, exist within the broader cybersecurity and strategic matrix. And we need to be asking whether retrofitting the prevalent automation metapolicies would serve the future of AI well or not.

Avoiding Path Dependency 

Given the tendency towards path dependency in automated information systems, what has worked alright so far is getting further entrenched into the newer and adjunct areas of security automation, like the intelligent/connected vehicle ecosystem. Further, developments in the security of software-on-wheels are being readily co-opted across a variety of complex automotive systems, from fully-digitised tanks that hold the promise of decreased crew size and increased lethality to standards for automated fleet security management and drone transportation systems. Consequently, there is a rise in vehicle SOCs (Security Operations Centers) that operate on the lines of cybersecurity SOCs and use similar data exchange mechanisms, borrowing the same implementations of security automation and information distribution. That would be perfectly fine if the existing means were good enough to blindly retrofit into the emerging threat environment. But they are far from it.

For example, most cybersecurity threat data exchanges make use of the Traffic Light Protocol (TLP); however, TLP itself is only a classification of information – its execution, and any encryption regimes to restrict distribution as intended, are left to the designers of security automation systems. Thus there is a need not just for finer-grained and richer controls over data sharing with fully or partially automated systems, but also for ensuring compliance with them. Much of threat communication policy like the TLP is akin to the infamous Tallinn Manual, in that it is almost an expression of opinions that cybersecurity vendors may consider implementing, or may not. It gets more problematic when threat data standards are expected to cover automated detection and response (as is the case with automotive and industrial automation) – and may or may not have integrated an appropriate data security and exchange policy, for lack of any compliance requirement to do so.
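To make the gap concrete, here is a minimal sketch of the sort of distribution gate that TLP itself never specifies. The label semantics below follow FIRST's TLP 2.0 definitions, but the recipient classes and the gate function are hypothetical conventions invented purely for illustration:

```python
# A toy sketch of enforcement logic that TLP leaves to system designers.
# Label semantics follow FIRST's TLP 2.0; the recipient-class model is hypothetical.
TLP_AUDIENCE = {
    "TLP:RED":          {"named_recipients"},
    "TLP:AMBER+STRICT": {"named_recipients", "own_organisation"},
    "TLP:AMBER":        {"named_recipients", "own_organisation", "clients"},
    "TLP:GREEN":        {"named_recipients", "own_organisation", "clients", "community"},
    "TLP:CLEAR":        {"named_recipients", "own_organisation", "clients", "community", "public"},
}

def may_forward(tlp_label: str, recipient_class: str) -> bool:
    """Return True if a report carrying this label may reach this recipient class."""
    return recipient_class in TLP_AUDIENCE.get(tlp_label, set())

assert may_forward("TLP:AMBER", "clients")
assert not may_forward("TLP:RED", "community")
```

Nothing in TLP obliges an automated pipeline to call anything like `may_forward` before re-sharing - which is precisely the compliance gap being described.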

Another example of inconsistent metapolicies, out of numerous others, can be found in the recent rise of language generation systems and conversational AI agents. The thing is that not all conversational agents are ChatGPT-esque large neural networks. Most have been in deployment for decades as rules-based, task-specific language generation programs. Having a “common operating picture” through dialogue modeling and graph-based representation of context between such programs (as an organisation operating in multiple domains/theaters could require) was an ongoing challenge before the world stumbled upon “attention is all you need”. So now we basically have a mammoth legacy IT infrastructure in the human-machine interface, and a multi-modal AI automation paradigm that challenges it. Organisations undergoing “digital transformation” not only have to avoid inheriting the legacy technical debts but must also consider the resources and organisational requirements for efficiently operating an AI-centric delivery model. Understandably, some organisations (including governments) may not want a complete transformation right away. Lacking standardised data and context exchange between the emerging and the legacy automated systems, many users are likely to continue with the paradigm they are most familiar with, not the one which is most revolutionary.

In fact, much of cybersecurity today hinges on these timely data exchanges and automated orchestration, and thus these underlying information standards become absolutely critical to modern (post-industrial) societies and the governance of cyber-physical systems. Yet, instead of formulating or harmonising the knowledge production metapolicies needed to govern AI security in a hyper-connected and transnational threat environment, we seem to be falling into the doomer traps of existential deliverance and unending uncanny valleys. That said, one of the primary reasons for the lack of compliance and the chaotic standards development scenario in security data production is the absence of a primary governance agent.

The Governance of (Cyber) Security Information

Present automation-centric cyber threat information sharing standards generally follow a multistakeholder governance model. That means they follow a fundamentally bottom-up life-cycle approach, i.e. a cybersecurity information standard is developed and is then pushed “upward” for cross-standardisation with the ITU and ISO. This upward mobility of technical standards is not easy. The Structured Threat Information eXpression (STIX), which is perhaps the de facto industry standard now for transmitting machine-readable Cyber Threat Intelligence (CTI), is still awaiting approval from the ITU. Not that it’s really needed, because the way global governance in technology is structured, it is led by industry and not nations. The G7 have gone to the extent of formalising this, with some members even blocking diplomatic efforts towards a different set of norms.
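For the unfamiliar, here is roughly what a machine-readable CTI object looks like - a minimal, hand-built STIX 2.1 indicator. The domain and all field values are made up for illustration; real systems would typically use a conformant library rather than raw dictionaries:

```python
# A minimal sketch of a STIX 2.1 indicator built as a plain JSON object.
# The C2 domain is fictional; timestamps are fixed to millisecond precision.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected botnet C2 domain (illustrative only)",
    "pattern": "[domain-name:value = 'c2.example.com']",
    "pattern_type": "stix",
    "valid_from": now,
}
print(json.dumps(indicator, indent=2))
```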

This works well for those nation-states which have the requisite structural and productive capacities within their public-private technology partnerships. Consequently, the global governance of cyber technology standards becomes a reflection of the global order. Setting aside the naming of cyber threat actors, this had so far been relatively objective in nature. But that is no longer true with the integration of online disinformation into offensive cyber operations and national cybersecurity policies – not only can the conventional information standards run into semantic conflicts, newer value-driven standards over the information environment are also popping up. Since the production and sharing of automation-driven social/political threat indicators can be shaped by and affect political preferences, as the threats of AI-generated information and social botnets rise, the cybersecurity threat information standards also slide from a sufficiently objective to a more subjective posture. And states can do little to reconfigure this present system because the politics of cybersecurity standards has become deeply intertwined with their market-led multistakeholder development.

Cyber threat attributions are a good case in point. MITRE began as a defense contractor, and today serves as an industry-wide de facto knowledge base for computer network threats and vulnerabilities. Of the Advanced Persistent Threat groups listed in MITRE ATT&CK, close to a third are attributed to China, another third or so to Russia, Korea, the Middle East, India, South America and others, and the remaining third (which contains the most sophisticated TTPs, the largest share of zero-day exploitation, and a geopolitically aligned targeting) remains unattributed. We’ll not speculate here, but abductive reasoning about the unattributed threat cluster may leave readers with some ideas about the preferences and politics of global CTI production.
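Readers can eyeball a crude version of this tally themselves. The sketch below assumes a local copy of MITRE's public STIX bundle (enterprise-attack.json, available from https://github.com/mitre/cti) and uses a naive keyword heuristic over group descriptions - which is emphatically not how attribution works, only a quick way to see the distribution:

```python
# Naive tally of ATT&CK intrusion-set objects via a keyword heuristic.
# Assumes enterprise-attack.json has been downloaded from github.com/mitre/cti.
import json
from collections import Counter

with open("enterprise-attack.json") as f:
    objects = json.load(f)["objects"]

groups = [o for o in objects
          if o.get("type") == "intrusion-set" and not o.get("revoked")]

def bucket(description: str) -> str:
    d = description.lower()
    if "china" in d or "chinese" in d:
        return "China-linked"
    if any(k in d for k in ("russia", "korea", "iran", "india", "middle east")):
        return "Other-attributed"
    return "Unattributed/unclear"  # often the most sophisticated clusters

print(Counter(bucket(g.get("description", "")) for g in groups))
```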

A fact of life is that in cyberspace power-seeking states have been playing the roles of governance actors and sophisticated offenders at the same time, so this market-led multistakeholderism has worked out well for their operational logic – promulgating a global politics of interoperability. But it is bad for the production of cyber threat knowledge and security automation itself, which can get quite biased and politically motivated over the internet. Society has walked this path long enough to not even think of it as a problem while moving into a world surrounded by increasingly autonomous systems.

A Way Forward 

With the social AI risks looming larger, states intending to implement a defensible cybersecurity automation posture today might have to navigate a low signal-to-noise ratio in cybersecurity threat information, multiple CTI vendors and metapolicies, as well as constant pressure from industry and international organisations about “AI ethics” and “cyber norms” (we’ll not venture into a discussion of “whose ethics?” here). This chaos, as we noted, is an outcome of the design of bottom-up approaches. However, top-down approaches can lack the flexibility and agility of bottom-up ones. For this reason, it is necessary to integrate the best of multistakeholderism with the best of multilateralism.

That would mean rationalising the present bottom-up setup of information standards under a multilateral vision and framework. For while we do want to avoid partisan threat data production, we also want to make use of the disparate pool of industry expertise, which requires coordination, resolution and steering. While some UN organs, like the ITU and UNIDIR, play an important role in global cybersecurity metapolicies – they do not have the sort of top-down regulatory effect needed to govern malicious social AI over the internet, or to implement any metapolicy controls over threat sharing for distributed autonomous platforms. Therefore, this integration of multistakeholderism with multilateralism needs to begin at the UNSC itself, or at an equivalent international security organisation.

Not that this was unforeseen. When the first UN resolution assessing information technologies, particularly the internet, was made in 1998, some countries explicitly pointed out that these technologies would end up at odds with international security and stability, hinting at the reforms required at the highest levels of international security. Indeed, the UNSC as an institution has not co-evolved well with digital technologies and the post-internet security reality. The unrestricted proliferation of state-affiliated APT operations is but one example of its failure in regulating destabilising state activities. Moreover, while the council seems stuck in a 1945 vision of strategic security, there is enough reason and evidence to re-situate the idea of “state violence” in light of strategically deployed offensive cyber and AI capabilities.

While overcoming the resilience of the global order and its entrenched bureaucracies is not going to be easy, if reformed in its charter and composition, the council (or its replacement) could serve as a valuable institution to fill the void that emerges from the lack of a primary agent guiding the security and governance standards that drive security automation and AI applications in cyberspace.

It Is The Process

At this point it is necessary to call out certain misunderstandings. Regulators seem to have some ideas about governing “AI products”; at least the EU’s AI Act suggests as much. Here we must take a quiet moment to reflect on what “AI” or “autonomous behaviour” actually is – and it will soon dawn upon most of us that the present methods of certifying products may not be adequate for adaptive systems rooted in continuous learning and exchanging data with the real world. What we’re trying to say is that regulators perhaps need to seriously consider the pros and cons of a product-centric vs a process-centric approach to regulating AI.

AI, in the end, is an outcome. It is the underlying processes and policies, from data engineering practices and model architectures to machine-to-machine information exchanges and optimisation mechanisms, where the focus of governance and standards needs to be, not on the outcome itself. Further, as software shifts from an object-oriented to an agent-oriented engineering paradigm, regulators need to start thinking about policy in terms of code and code in terms of policy – anything else will always leave a giant gap between intent and implementation.
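As a thought experiment of what "policy in terms of code" might look like, consider a process-level deployment gate. Everything here - the manifest fields, the rule - is invented for illustration; it is a sketch of the shape of such a control, not any existing regulatory mechanism:

```python
# A hypothetical process-centric "policy as code" gate. All field names invented.
from dataclasses import dataclass

@dataclass
class ModelManifest:
    datasets_approved: bool         # every training dataset passed provenance review
    eval_suite_passed: bool         # behavioural evaluations ran and passed
    online_learning: bool           # the model keeps adapting after deployment
    runtime_monitor_attached: bool  # a runtime verifier watches its outputs

def may_deploy(m: ModelManifest) -> bool:
    # Process rule: a continuously learning system must ship with a runtime
    # monitor, since a one-off product certificate cannot cover its future states.
    if m.online_learning and not m.runtime_monitor_attached:
        return False
    return m.datasets_approved and m.eval_suite_passed

print(may_deploy(ModelManifest(True, True, online_learning=True,
                               runtime_monitor_attached=False)))  # False
```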

If the aforementioned chaos of today’s multistakeholder cybersecurity governance is anything to go by, AI security and governance need evidence-led threat data orchestration (consider the data that led to the final CTI, and engagement with new types of technical evidence), runtime verification of AI-driven automation in cyber defense and security systems, clear non-partisan channels and standards for cyber threat information governance, and a multilateral consensus on the same. Focusing on the final AI product alone can leave much unaddressed and potentially partisan – as we see from the ecosystem of information metapolicies that drive security automation systems worldwide – hence we need to focus on better governing the underlying processes and policies that drive these systems, and not the outcomes of those processes and policies.

Tuesday, March 7, 2023

On Unsavoury Memeplexes

 Tigress & Serpent, Thomas Landseer

When ChatGPT was released, there soon arose several continuing Chinese whispers of the online variety expressing discontent over the AI being too "lobotomised". These are valid concerns after all, for information (the semantic significance of it, not the syntactical expression itself), or the deliberate absence of it, can often be political. And consequently, from a range of quarters, we've seen people clamoring for their ideologically or theologically consistent AIs. Times are such, even computation can get cancelled. But this tendency to fortify, arm and train one's memeplexes against adversarial memes under the pressure of structural changes is a phenomenon that calls for deeper exploration, for much of what drives an abundant people (those who've sufficient access to food, mates, and circus), whether state identities or culture wars, are memeplexes undergoing constant unraveling.

State identities are a rather interesting example here, because they also carry along a bureaucratic rationalization of their adopted memeplexes. Therefore, following the principle of co-evolution of politics and technology (i.e. institutions and techniques), states in peacetime will openly pursue only the degree of cultural politics that their established bureaucratic institutions may permit - such as the US evangelising its own cultural revolution worldwide, or the Chinese digging up old religious artifacts in western Tibet to expand their claims over Indian territories, or even some babus in India conveniently forgetting to discuss all pre-invasion Indo-Tibetan boundary arrangements with their Chinese counterparts in the hopes of a bff-ship (effectively secularising the wider political discourse over western Tibet). The role which bureaucracies play in the evolution of state identities and corresponding memeplexes is an excellent case of the chief evil of the (sometimes necessary) institutionalisation of anything - that the process takes over the purpose. During existential turmoils, the phenomenon may briefly slip back into the purpose driving the process, but soon the newly emerged processes tether the purpose again, establishing a cultural homeostasis until the inertia is disturbed. And thus, in the absence of a clearly defined, objective and actionable political purpose, pragmatism dictates that bureaucratic rationalizations orchestrate state identities.

In 1945, after taking over Japan, the local US administration banned images of Mount Fuji and samurais, and even performing arts such as sword-fighting movies and Kabuki plays, while at the same time promoting pacifist literature and debating fraternisation with Japanese women. In the eyes of the West, Japan's collective mindset was feudal at its core, requiring considerable reeducation and civilizing for democratic functioning. And much of this grand effort came under the purview of an "Information Control Division". Over the decades the naming conventions have gotten a lot more politically correct - it would be scandalous for a public department to be named Information Control Division these days; Ministries of War have become Ministries (and Departments) of Defense, and so on. But bureaucratic institutions embody patterns of state behaviour, and changing their names does little to alter human affairs and organisational logics. The Twitter Files might suggest that the Information Control Division lives on, and Ukraine turning into the West's collective proxy might indicate that the ministries of war, too, live on. Survival requires old-timer memes to not only remain relevant but also interact competitively or even freeride with other memeplexes. The freeloading memeplex is a thing of beauty, a beast of prey in the realm of ideas. Like a parasite it benefits from susceptible memeplexes, and we biological hosts often have little recourse against such epenthetic memes. Glorious ideals like The Law, Justice, and Freedom et cetera are all also great examples of such memes.

Since we were speaking of state identities, justice too is an interesting meme to consider. Just a couple of centuries ago, if someone had harmed your loved ones, you might have sought to take revenge, maybe even banding together your community and friends for it. After all, revenge has its evolutionary logic - credible retaliation prevents future indiscretions, i.e. an eye for an eye does not make the whole world blind; instead, the arrangement ensures that people avoid poking into one another's eyes. But irrespective of the role which revenge plays in human affairs - and it plays a much greater one in state and international affairs - if someone harmed your loved ones today, it is expected that you would not go out seeking revenge on your own. Instead, you'd go to the police and the court (irrespective of their effectiveness) and hope that "The System" will take your revenge for you. Here is a meme that is treated differentially at different levels of society - for indignity and a lost sense of kindred honour, a man pursuing his revenge could be too barbaric and patriarchal in today's world, but the sovereign doing so is a "strategic imperative".

This is in fact the very essence of The Law among the sovereigns. Consider that, for his unparalleled specialisation in enforcing his will, in the jungle the lion is the law. Over his subjects within his territory, the lion may exercise his jurisprudence, if he has any. That jurisprudence forms the law of the land, the constitution and other sacred documents of the people living in that land. But the meme of justice takes a U-turn when the matter to be resolved is not within a lion's territory, but among the lions that hold such territories. Our salaried lions then do not say that XYZ lion is breaking the law; instead, they say that XYZ lion is disturbing the order. Just with different communities of underlying hosts, the expected behavior around the memeplex shifts from being almost mathematically formalised to being merely customary. Moreover, modern memeplexes are also partly driven by and dependent upon constant connectivity and a (computer) networked society, and thus the bureaucratic rationalization of our world necessitates reining in the unsavoury memeplexes via technical means of the computational variety as well.

How unsavoury is your memeplex? 

Often such unsavoury memeplexes can originate from and target the lions themselves - an example is cyber threat information sharing about these memeplexes. A lot of cyber security discourse today (including the just-released US National Cybersecurity Strategy) includes online influence and disinformation threats. Some "do-gooders" at NATO's StratCom COE are also trying to extend the existing threat information sharing standards and languages to cover these online informational threats. However, such information sharing involves a fair share of subjectivity and semantic conflicts (some of which I go over here), which could make releasing such "memetic threat intelligence" a bit of a political activity as well, as opposed to sharing conventional cyber threat information. It'll be a diplomatic soup if states or their CERT-type organisations do it, so there is a precipitous growth in dependence on private sector proxies - online platforms, OSINT organisations, and commercial information vendors - to do what in the old days would have been done by foreign services and information control divisions.

Clausewitz, of all the philosophers, had rightly remarked that every period carries its own cultural grammar for war. And thus even decades-old conflicts are today being fought under a new cultural grammar in Ukraine. Often the biological hosts' vainglorious assumptions of modernity dictate that they sanitise and compartmentalise their memeplexes - change the content of books, redesign classics, listen to musicians and not music, so on and so forth. Even after hundreds of thousands of years of human existence, information producers - whether it is Tulsidas or Roald Dahl - need to be retrofitted into "The 21st Century", and ironically so do the 21st century AIs. Which also prompts me to invoke Stanley Kubrick's absolute masterpiece - 2001: A Space Odyssey - where the protagonist eventually wins the race of evolution by "lobotomising" his fellow AI. Interestingly, much of the popular AGI risk discourse today is effectively synthesised into the memeplex of that one Hollywood movie.

Perhaps all of this also points at the default human tendency to let one's beliefs be coloured by one's desires. Buddha - as some Chinese communist babus may now be grappling with - postulated desire to be the root of suffering. But would the underlying biological substrates of living memeplexes ever be able to shun their desires and see things as they really are? I'd say never, for that is our prison, which Czeslaw Milosz captured beautifully when noting that men will continue to clutch at illusions when they have nothing else to hold on to.

Saturday, June 11, 2022

A Networked Peace

"To live effectively is to live with adequate information."
 
Norbert Wiener had maintained that nature has an inherent tendency to degrade the organized. The organism, according to him, is opposed to death and disintegration "as message is to noise" - postulating a perpetual conflict between the self-organising and the organised. This is evident everywhere. Take, for example, the rules-based world order. The thing about complex adaptive systems is that, given their tendencies towards self-organisation, they generally find a way to work around any and all rules. The former then has to reassert itself to turn the self-organising into the organised, for as long as it can. Thus the first secretary general of NATO defined peace as keeping the Russians out, the Americans in, and the Germans down - and the secretary general of NATO today would continue to insist upon the same peace. But undoubtedly, in an arena staged upon competing networks, one man's peace is bound to become another's war and vice versa.
 
Notwithstanding, how do we ensure that in this struggle between the self-organising and the organised, a structure evolves rather than destructs? Mr. Wiener had suggested that since all organisations are held together by communication, effective communication engineering is the key to steering society in a purposeful direction. Communication effects control. That is indeed the most fundamental insight of cybernetics, whose father understood society only through the study of the messages and communication facilities which belong to it. Thus the way to create a social structure is to restrict and regulate the information flows in that society; to destroy that structure is to destroy or divert those information flows, i.e. create new information flows. That is what the internet did too.
 
Enter digital networks and artificial agents. The internet hasn't lived up to the expectations of the founders of computing. JCR Licklider, for example, had prophesied that in the information age, unemployment would disappear from the face of the earth because of the opportunities offered by "an infinite crescendo of on-line interactive debugging". That seems unlikely to happen. A lot of the computing giants couldn't foresee the future that their dream machine, AI, would bring aboard while humanity, globally and gently, sleepwalks into an era of perpetual warring and a painful transformation of the nation-state.
 
In 2011, a startup named Pacific Social Architecting Corporation figured that if they could change these information flows using online socialbots, they'd be able to alter the social architecture of a community. They tried, and while their experiments met with some success, their startup did not. Almost a decade later, researchers writing for Nature Communications described the use of similar software agents to produce "information gerrymandering" - forcing an asymmetric assortment of influence over an unsuspecting population and creating the majority illusion for specific nodes. Even in their highly-cited paper on social botnets, Yazan Boshmaf and others had suggested early on that each social bot in an online social network is capable of two kinds of influence operations - those related to social interaction, and those related to the social structure. And we are only now witnessing the true meaning of that. To think about it, using networks of bots to create a very elaborate deception, even military deception, looks fairly doable with present developments in natural language and image processing. Unsurprisingly, some researchers have gone ahead and demonstrated the use of a guided social botnet to infiltrate and cultivate specific employees of targeted organisations(!).
 
Since communication effects control, the problem of Command and Control (C2) of operating such social machines becomes a problem of architecting a resilient and credible flow of information to effect specific organisational behaviours. The older methods of botnet command and control are no longer even suitable in this environment, as the use of social networks inevitably brings in a P2P architecture (i.e. each software agent can send encrypted commands to the rest of the network). We are thus seeing some very interesting C2 designs when it comes to bot deployments - from the use of smart contracts and blockchains to implement the C2, to embedding bot commands into Bitcoin transactions - we seem to be headed for interesting times in cyberspace. To anyone still skeptical, do check out this report on an IoT botnet that not just hacks your networked devices but also creates and operates socialbots using that infrastructure, even including a machine learning component that can be turned on or off depending upon the human users' responses on social media platforms.
 
Other than coordinating the bot network, identifying and prioritising the human users to reach using bots is another challenge. These days, digital social networks enable a large number of unintended users to join an unfolding event, e.g. the ongoing Russo-Ukrainian conflict. To those having to watch random networks form and dissolve out of the existing scale-free asymmetric networks of our modern human existence, the following things must be highlighted about the human network in order to manipulate this "network-centric" environment:
     - Which nodes have the decision rights?
     - What nodal decision rights can be accessed using software agents?
     - What is the pattern of interaction between these and other nodes?
     - What kind of inflow-outflow of information emerges from these networks?

Coming back to the 2011 social architecting experiment, a key question that arises is whether such strategic use of software agent networks can overturn the importance of the existing key nodes in today's human digital networks. The key nodes are the information junctions, a kind of intelligence device within the network. Being at the junction, these nodes have the power to regulate information flows, directly affecting a lot of other nodes, which also gives them a disproportionate amount of power over the rest of the network. It must be noted here how the information domain comes into a very intimate embrace with the cognitive and the political domains.
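Surfacing such junctions is a standard network-analysis exercise. A minimal sketch using networkx on a toy scale-free graph (real social graphs, sizes and thresholds would of course differ): nodes with high betweenness centrality sit on many shortest paths and therefore regulate the most information flow:

```python
# Toy sketch: find "information junctions" via betweenness centrality.
import networkx as nx

# A scale-free random graph stands in for a real social network here.
G = nx.barabasi_albert_graph(n=200, m=2, seed=42)
bc = nx.betweenness_centrality(G)
junctions = sorted(bc, key=bc.get, reverse=True)[:5]
print("Likely information junctions:", junctions)
```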

The original software robot was designed to provide an integrated and expressive interface to the internet as well as to change its behavior and choice of actions in response to transient system conditions. A coordinating network of such programs acting socially brings to the table an important non-human agency in human affairs. Since computers today "record, relay, represent and inform our responses" to almost all political-military-economic conflicts as they unfold, today's digital networks potentially place much coercive and destructive potential in the hands of robots. Our digital societies, as well as their dominant military-industrial-media networks, face the everyday dangers of interconnectivity - networked terrorism, recurrent misinformation cascades, and computer/biological viruses, to name a few. As Buddha used to say, everything is interconnected and life is suffering. In retrospect, it is a tribute to Turing that “bot or not” is turning into one of the defining questions of our hyper-connected age as the animal and the machine networks slowly renegotiate the structure of a new peace.

P.S. Credits to Scott Wolcott for the inspiration.
 

Wednesday, January 5, 2022

Metapolitics

Does your program have too many dependencies/relations with the outside environment?
 
Recently a question appeared on a prediction forum asking whether a nationalized AGI research group would arrive at AGI before the private sector. This is interesting because most AGI research is done privately, at the fringes of the AI community. Whether or not, and how well, dedicated nationalized organizations would pursue AGI is in some respects a little disconcerting, as the question has a metapolitical component - it agrees implicitly that AGI is great, and only differs on in whose custody it would indeed be really great.
 
Perhaps it is too early to speculate over how computing and communication technologies have altered, or could alter, the size and quality of a state's political organization. Political organizations unfortunately are not software; you cannot simply download an upgrade to patch some issues and load on additional features and functionalities. People will experience anxieties when structural changes lead to institutional changes, some of them will resort to violence, and then that violence will become a negotiation tactic for the politician. This. Happens. Every time. In that sense the people are also a strategic dummy.
 
Davidson & Rees-Mogg predicted that the computing and communication revolution would cause organized predatory violence to slip out from central control, and everyone except politicians will benefit from the death of politics. But they have their biases, I have mine. In August 1991, the hard-line coup plotters couldn't shut down Yeltsin's communications in Moscow because he had just acquired a cool new technology - the mobile phone. The history consistently suggests that computing and communication technologies have largely benefited the politicians. Carroll Quigley, writing in Weapons Systems and Political Stability, approached technologies in terms of their defensive/offensive nature - suggesting more that offensive power means larger, more intense political organization. Unfortunately, he died before his book could reach the computing age. Artificial Intelligence for example, could be described as a centralizing force, but a technology like that could hardly be assessed as inherently offensive or defensive. How will it then affect the political organization of modern nation-states if instead of the Deep Minds and the OpenAIs of the world, a nationalized organization happens to be the first one to develop AGI? 
 
Most of us live in a security bubble detached from the blood and gore of war, but it takes not long for things to slip back - Bosnia and plenty of other places have proven time and again that the law is a consensus, and no consensus lasts long enough to become The Law. That is why, even if we hate them passionately, society needs politicians to negotiate that consensus on behalf of the crowd. Would an AGI make any difference to this? I can't really say. And about that AGI - well, its uncertainty is not the same as its low probability. The actual nature and requirements of AGI are yet to be discovered and its behaviors are yet to be added. Perhaps yet another job post seeking software architects with 108 years of ML experience for an "extensible design" is in the works.

Tuesday, October 12, 2021

Rethinking GIS

Sambo sivam, jagame thandiram.
The deregulation of the Indian geospatial data sector may have opened opportunities for some leeway in India’s strategic framework, though of course slight reorientations may be required at comfortable places. The strategic security outlook over GIS still has some traces of the older days of the Bhaskara and Rohini satellites, when data handling rates used to be a few kilobytes. It may be somewhat questionable to some, but we need to put GIS’ software and hardware infrastructure at the very centre of a technology-centric foreign policy, and not at the peripheries of a shy and reticent overture towards being a global power.
Some key recommendations:

  • Build regional cooperation over co-developing GIS assets and regional mapping capabilities; this would go a long way in redefining the nature of our neighbourhood.
  • Inter-governmental programs around GIS would thus also take our own geography into account, meaning enhanced regional outreach of developmental programs using access-, interoperability-, and portability-centric solutions.
  • We have to develop a horizontal and non-hierarchical work ethic in the organisations tasked with radical technical innovation, especially in the satellites-sensors-software triangle. It has worked wonders for many; it will be good for us too.
  • In the spirit of the previous point, we need to create and globally promote open-ended and open-access GIS technology designs and standards. This cannot be done without seamless cooperation with academia and industry.
  • Again, in the spirit of the previous point, the government needs to jointly develop a framework with academia and the GIS industry to enable them to independently pursue joint R&D goals with their counterparts in other countries.
  • We need to formulate a national data standardisation policy, which includes geo-tagging of all data objects in the commercial as well as the government sector. This way the existing information infrastructure can be easily transformed into a GIS based developmental and strategic platform.
  • Related to the above point on standardisation, there should be a single source of “analysis ready” GIS data, which can cut down the painstaking data preparation, standardisation and fusion pipeline on the user side (see the sketch after this list). Perhaps a cloud-native, ML-based geospatial processing service could help monitor and facilitate the whole thing.
  • Whatever it takes, including stealthy reverse engineering or building edge AI capabilities into orbital platforms, we’ve to aggressively push Indian GIS capabilities to a level where they can be turned into a non-negotiable aspect of any joint space missions.
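For a flavour of what one “analysis ready” standardisation step could look like, here is a minimal sketch assuming geopandas is available; the file name and the target CRS are placeholders:

```python
# A toy standardisation step: one common CRS, no empty geometries.
import geopandas as gpd

def standardise(path: str, target_crs: str = "EPSG:4326") -> gpd.GeoDataFrame:
    """Load a vector layer, reproject it to a common CRS, drop empty geometry."""
    gdf = gpd.read_file(path)           # shapefile, GeoJSON, GeoPackage, etc.
    gdf = gdf.to_crs(target_crs)        # one CRS across all ingested layers
    return gdf[~gdf.geometry.is_empty]

layer = standardise("district_boundaries.shp")  # placeholder file name
print(layer.crs, len(layer))
```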

Today, information acquisition costs are relatively democratised; what is wanting is the capacity building to augment the existing service-based industries with earth observation. That would not only provide the impetus for the creation of a homegrown Indian earth observation industry, but also enable strategic thinking to integrate the vagaries and vulnerabilities of our existence and its environment into how we run our geopolitics and strategic security.

After all, the slowly evolving physical infrastructure of the internet, along with GIS assets in space and on the ground, and technologies like Artificial Intelligence, are forming the landscape of a new kind of territory where the pecking order has not yet been fully established. And therefore constructing India’s strategic interests as firmly embedded into the machine needs to go from being perhaps an engineering problem to a political solution.

Wednesday, July 21, 2021

On the utility of crisis-gaming AI risks

Systems fail because engineers protect the wrong things, or protect the right things in wrong ways.  - Ross Anderson, Security Engineering [Art by Shibara]

Recently, yours truly ran into the use of analytical wargaming as a methodology to address policy problems at the intersection of AI risks, cyber security and international politics. While the critics of wargaming do espouse some valid points, it is an exceptionally useful tool when it comes to tackling a very specific kind of problem - where you need not to predict actions but rather to anticipate the outcomes and consequences of actions. Because rationality doesn't work well with incomplete information, a lot of predictive research methods don't work well for these kinds of problems either. As forecasters say, quantification is useful but wisdom is more useful.

Anyway, system engineering principles suggest that it is best practice to build first those components which are most likely to fail or cause failure, so let us first look at the caveats generally associated with wargaming:

Reproducibility
A bit like the game of chess: while a bunch of independent games may proceed differently, over time and iterations the general characteristics inherent in the game emerge clearly. Wargaming is experimental, but not strictly an experiment. If we approach it as the simulation it is, the lack of micro-level reproducibility turns out to be a feature and not a bug.
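The chess point can be made with a toy simulation: individual runs of a noisy escalation game differ wildly, yet the aggregate statistics stabilise over iterations. All probabilities and the game itself are invented for illustration:

```python
# Toy escalation game: micro-level runs vary, macro-level statistics converge.
import random

def one_game(rng: random.Random) -> int:
    level = 0
    for _ in range(10):                      # ten decision rounds per game
        if rng.random() < 0.3:               # a player escalates
            level += 1
        elif level > 0 and rng.random() < 0.2:
            level -= 1                       # occasional de-escalation
    return level

rng = random.Random(7)
runs = [one_game(rng) for _ in range(10_000)]
print("mean final escalation level:", sum(runs) / len(runs))  # stable across seeds
```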

Rare Events
But a catastrophic risk is in fact a very rare event. And the thing about such risk scenarios is that everything is chaotic and nobody knows the truth. Gamification-based exercises in this context simplify reasoning about system behavior, so that when the crisis actually happens, decision-makers can navigate the chaos without acting from a point of anxiety and fear.

Players Matter Too Much
Yes. This is indeed the biggest constraint, and it will shape how we design our game. Player expertise is paramount, and it is best to design a game which explores questions relevant to that expertise. And gaming a crisis is not easy to do, especially without player commitment and appropriate preparations. So game designers and institutions need to at least agree upon a baseline player capability.

A basic how-to-go-about gaming crises

In a large sociotechnical environment, sometimes reliable systems can have unreliable components. Identifying those components requires us to develop conceptual models of interaction in that environment. This is exactly what wargaming does: it reduces the strategic indeterminacy which burdens AI risk governance efforts. I mean, chaos engineering is nice and useful, but mission-critical systems involving intelligent robotics or autonomy and cyberspace need to be as deterministic as possible.

In the coming days, edge robots will not only need to gather and store their data but also trend, time-series, and analyse it. They will also have their own CI/CD peculiarities. Even today, old components in complex legacy products are not easy to replace, as there often are third-party microcontrollers and actuators which are no longer in production. Gamification can help us find those weak links. It can also help decision-makers conceptualise differently, for example helping high-level politicians and planners better understand swarms as interoperating IoT instances and not just as lots of flying drones.

Slide from a presentation by Anja van der Hulst; research on forecasting crisis decision-making behavior suggests role-play simulations outperform other methods

The main rationale for gamification is to discover the structure of solutions - what elements are relevant for policy decisions, what unknown relations exist between these elements, how player motivations arise and shift, how actors (and agents) interact, what the influences of time and exertion are, etcetera. Formula 1 teams, for example, make really good use of simulations to harden safety. They simulate the vehicle and all its components in an almost digital-twin-like fashion, running containerized virtual systems. These simulations reduce time-intensive testing and provide a good cognitive model for engineers and managers to confront unexpected failures. The AI risk community at present is anyway mostly concerned with the hypothetical impacts of statistical mistakes. Crisis-gaming is great for just that sort of thing.


Sunday, March 28, 2021

The Narcissism of Technical Differences

A couple of years ago, the US quietly put a worldwide export control on convolutional neural network based geospatial imagery software, the reason being its potential use by foreign militaries. It is worth noting that these are commercial software products which, if open-sourced, their hardware dependencies aside, would not be treated under international jurisprudence as products but instead as free speech. Which brings us to the subject of export controls, dual-use emerging technologies, and legacy international institutions.
 

Technology Export & Information

 
Export control in itself is an old and important tool of statecraft, serving economic as well as military functions. However, the present dichotomous classification of technologies based on military and non-military usage is severely outdated and in fact only holds up when dealing with conventional weapons. AI in itself isn't a weapon but an enabler, and having AI superiority proves hugely advantageous when extending general capabilities across all kinds of defense systems and platforms, giving the machine learning software stack and hardware accelerators a strong military utility.

To deal with the problematic usage of such dual-use technologies, the most prominent international regime is the Wassenaar Arrangement. The arrangement, with respect to its fundamental role of helping to prevent the malicious use of technology, is pretty ineffectual when it comes to Artificial Intelligence and its ICT-based applications. The de-territorializing effects of cyberspace present a clear institutional, regulatory, and compliance gap, where arrangements like Wassenaar only end up imposing yesterday’s standards over tomorrow’s technology. For example, the first time Wassenaar was invoked against software, during the 90s, it was to stop the international adoption of cryptographic techniques over things like e-mail communications. That turned out to be a lot more than just a diplomatic failure.

And Wassenaar isn't the only international regime facing difficulty with technological changes. The MTCR (Missile Technology Control Regime), for example, has been updated to include long-range UAVs and will need further changes sooner rather than later. It must be made clear that these controls do not work as intended. There is always great difficulty in enforcing contracts in emerging markets, there is an ever-present tradeoff between transparency and secrecy, and moreover, in an environment of deception, compliance cannot be truly verified even if states bring (and they do) their own technical means of verification. This is a grand version of what game theorists call a POSG (Partially Observable Stochastic Game) - really fascinating and quite intractable stuff.

A classic legend, of dual-use exploiting institutional ambiguities, sitting in the ruins of a medieval empire

Scholars have duly noted the inherent ambiguity in the Wassenaar Arrangement, especially when dealing with intangible dual-use technologies. There is no consensus on whether the arrangement could adapt to technologies which are both socioeconomically foundational and militarily significant without ironically turning itself into a dual-use multilateral instrument of coercive and economic statecraft. Such concerns are not unfounded in international politics - take, for example, 1993, when the Americans got the Russians to cancel the transfer of cryogenic technologies to India in exchange for some US-Russia space cooperation. Software, being fundamentally information, presents more compounded challenges for export controls given the largely unregulated flow of information across borders. Basically, the trouble is that if you put sanctions on the export of fish to someone, failing to prevent him from learning to fish is a foreign policy failure. Not surprisingly therefore, a recent US AI-policy report which declared advertising technologies as "NatSecTech" stated that "Export controls should be utilized... to slow competitors’ efforts to develop indigenous industries in sensitive technologies with defense applications... (by) targeting discrete choke-points."

Proliferation & The Undefined

 
The Mahabharat is absolutely filled with instances of specialized and more devastating tools being limited-by-distribution to only a select few, though those familiar with the text would concur that this non-proliferation regime did little to control privileged actor behavior, and eventually an utterly morbid devastation followed. Multilateral export control regimes today pursue a deterrence-by-denial framework towards global security. However, in the case of ICT-based goods, when export controls suppress economic interests, proliferation shifts to a black market, even between member states, on shady darknet sites.

It takes no genius to see that legally binding restrictions on AI capabilities will be economically debilitating, so it is going to be really hard to enforce sovereignty over technology or even to know which exact technologies to control. Let us consider a hypothetical scenario: most of the power consumption (>90%) in AI operation happens due to what can be described as "data movement operations" — adjusting features, weights, and biases, transferring intermediate results, etc. — so it is understandable that a better power management technique/toolkit could make the systems significantly more energy efficient. Now consider that in a not-too-distant future these computations are happening in autonomous edge devices fielded on both sides amid a lengthy military standoff in a remote inhospitable region such as the deserted upper reaches of the Himalayas. Would an innovative intangible power model here be treated the same way an advanced battery technology is treated? What if someone strategically open-sources it?
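The >90% figure is easy to sanity-check on the back of an envelope. The per-operation energy numbers below are rough, older 45 nm estimates (after Horowitz's widely cited ISSCC 2014 figures), and the layer dimensions are invented; the point is only the ratio:

```python
# Back-of-envelope: why data movement dominates AI power consumption.
PJ_FP32_MAC = 4.6          # ~pJ per 32-bit float multiply-accumulate (compute)
PJ_DRAM_READ_32B = 640.0   # ~pJ per 32-bit off-chip DRAM access (data movement)

# Toy fully-connected layer with weights streamed from DRAM every pass:
n_in, n_out = 4096, 4096
macs = n_in * n_out                # one multiply-accumulate per weight
weight_fetches = n_in * n_out      # one DRAM read per weight

e_compute = macs * PJ_FP32_MAC
e_movement = weight_fetches * PJ_DRAM_READ_32B
print(f"data movement share: {e_movement / (e_compute + e_movement):.0%}")  # ~99%
```

A good power-management toolkit attacks exactly that second line item, which is why it could matter as much as any battery chemistry.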
 
Neural Network Power Consumption
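To put rough numbers on the hypothetical, here is a back-of-the-envelope sketch in Python. The per-operation energies are illustrative, order-of-magnitude figures of the sort often cited for a 45nm process (cf. Horowitz, ISSCC 2014); the layer shape and the zero-reuse assumption are entirely mine:

    # Back-of-the-envelope: why "data movement" dominates an AI energy budget.
    # Per-op energies are illustrative 45nm figures, order-of-magnitude only.
    PJ_FP32_MAC  = 4.6      # ~one 32-bit multiply-accumulate (add + multiply)
    PJ_DRAM_READ = 640.0    # ~one 32-bit word fetched from off-chip DRAM

    def layer_energy_pj(macs: int, dram_words: int) -> tuple[float, float]:
        """Return (compute_pJ, movement_pJ) for one forward pass of a layer."""
        return macs * PJ_FP32_MAC, dram_words * PJ_DRAM_READ

    # Hypothetical fully-connected layer: 1024x1024 weights, batch size 1,
    # every weight streamed from DRAM once (no on-chip reuse, worst case).
    macs = 1024 * 1024
    dram_words = 1024 * 1024 + 1024 + 1024   # weights + inputs + outputs
    compute, movement = layer_energy_pj(macs, dram_words)
    print(f"data movement share: {movement / (compute + movement):.0%}")  # ~99%

Under these worst-case assumptions the movement term takes roughly 99% of the budget, which is why an intangible power-management model could matter as much as the tangible battery sitting next to it.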

Given that even technologies used today in public transportation, agriculture and pollution monitoring, such as LIDAR and hyperspectral tracking, can very easily be directed to military usage, several researchers have suggested using the phrase "omni-use" to describe AI and its related emerging ecosystem. It should be clear to whomsoever-it-may-concern that they cannot control the flow of omni-use intangible items, and neither can it be decidedly ascertained which omni-use technologies must be withheld, or from whom. In an area like this, traditional controls will only bring about shadow regulations and other unsavory economics.
 

Way Forward: Institutional Corrigibility

 
Most countries with a strong R&D focus prefer their weapon systems to be approached as black boxes, not to be reverse-engineered or subjected to exhaustive component analysis. On the other side, in future insurgencies and urban conflicts we could be seeing a lot of improvised weapon systems which utilize open-source and low-cost OTS components, ones which simply would not be regulated under current approaches. What technologies are available to militaries greatly guides how societies evolve, and technologies also tend to become more efficient relatively quickly because of their near-constant evolutionary cycle of adaptation and reinvention. It is therefore fitting and important to have a better and continuous coupling with all stakeholders, and not get carried away by media headlines, trends and lobbying.

So building Institutional Corrigibility in our context can be thought of as designing mechanisms that help objective stakeholders (from academia and industry) bring necessary outside correction into the transgovernmental networks responsible for developing and implementing technology export controls. The multilateral international institutions must understand the difference between essential system complexity and accidental system complexity, and work every bit to minimize the first and obliterate the second.

Sunday, January 3, 2021

Cautious Quadrilateralities


There was the sound like a rumor without any echo of history, really beginning.

Following the recent Chinese overtures towards Arunachal, Ladakh, and Sikkim - notably sending troops, altering maps, taking land and calling for peace - some degree of sovereign urgency would hopefully now have been enkindled among Delhi's international security leaders towards reassessing some of our geostrategic priorities. Understandably (with some hindsight), we might want to avoid symmetrical competition with an obviously bigger opponent and instead focus on emerging capabilities which could break an established stalemate and provide sustainable asymmetric advantages to constrain CCP actions in future. Delhi and even Dharamshala both understand this very well, after all. And other than the slowly simmering borders and the periodic geo-economic swings, India also has a bit of a civilizational obligation to herself to see that Asia doesn't turn into a CCP fiefdom.

In this kind of environment, there has lately been a resurgence and expansion of the old US-JP-AU-IN HADR alliance (The Quad), which is in itself a center of varied interests owing to its strong political proximity to maritime dominance and to the gains originating from a post-pandemic diversification of China-based supply chains. There have even been some efforts to promulgate the idea that this arrangement is not against China. In all honesty, unless China itself joins the mix, any attempt to regulate the security of Indo-Pacific waters will have to take on a character which faces China as the opponent. This is a bit of a Tolstoyan case of the fate of nations being independent of the individual wills of their kings and ministers.

That said, there are valid reasons to reconsider the strategic utility of institutionalizing these emerging structures. Japan is a treaty-based US ally. Australia is a US ally which, with RCEP, is unlikely to break from its economic dependence on China. The US, of course, is the incumbent politico-military arbiter of the liberal world order. None of these shares a land boundary with China or has massive trigger-happy territorial disputes with it. We do. But the quadrilateral isn't about India, it is about the "Indo-Pacific". And that thick line needs more policy thickening, because given the West's ideological and economic competition with China, it is easy to surmise that the IOR, the Indo aspects of the Indo-Pacific, will receive less attention than the Pacific aspects. Besides, a third of worldwide maritime trade passes through the contested South China Sea, and the Chinese aren't exactly going to accept some fancy rules-based order through strictly diplomatic manners.

It can also be expected that, as more and more states join this arrangement, they will still try to maximize their relational benefits while minimizing their individual costs, even if that isn't in the grander strategic interests of the arrangement itself. It is well known that specific, goal-oriented coalitions with clear objectives do better than a general-purpose bloc riding the waves of international waters. The military incentives here rest with the former, or with very localized regional arrangements between states trying to order their own backyards. Unless there are strong political and economic incentives, no one is going to commit manpower and resources to expeditionary operations in defense of foreign interests. So it can fairly be argued in this case that expansion for the sake of expansion, while gaining more legitimacy in the eyes of the public and possibly even of some international institutions, would tend to dilute the coalition's maritime defense potential unless carefully guarded against.

China too is aware of the possible maritime encirclement and will focus on the global economy to find its way out. Consequently, it has revamped its foreign investment regime and launched a slew of techno-economic armamentarium, including the central bank's digital currency, which could in future help in internationalizing the renminbi, generating non-trade currency demand, and getting the CCP a hold over China's burgeoning money networks, and perhaps over the slippery oil shores as well. Considering 'all the things that need consideration', maybe the best way to think about the current stratagems is to affirmatively rephrase Derek Walcott: the sea is indeed the history.


Saturday, September 5, 2020

Fail Securely

As systems become more complex, they are also likely to fail in ways whose exact mechanisms are harder to predict and understand beforehand. With machine-learning systems there is a bigger problem than failure itself, which is exhibiting potentially dangerous behavior as and when a system or sub-system failure occurs. This concern is understandably further magnified in the context of lethal autonomous systems.

Exploitation surface of a generic ML pipeline

Given the increasingly diverse exploitation routes, the prevailing ideas suggest having a well-trained human operator periodically assess whether an AI is misinterpreting its environment. But conventional human-in-the-loop mechanisms are ill-suited to handle spatiotemporal complexity. Consider, for example, a large number of potential targets spread across difficult or featureless geography and an extremely compressed engagement timeline: if each human confirmation takes, say, ten seconds, a hundred near-simultaneous contacts already consume over a quarter of an hour. In such a situation it is simply not feasible for the AI to refer back to the human operator every time an engagement has to be made.

If adapted suitably, the old concept of kill-boxes may produce a simple socio-technical solution to the problem of how to detach from the conventional human-in-the-loop and embrace independent decision-making in military AIs, while still keeping the operator in place to monitor incoming data. It is also in line with existing cyber-security principles of network segmentation and of granting access based on a user's role, location, time, and so on.

By not depending wholly upon an autonomous system's ability to interpret context, and by limiting its "full-fledged use" to a human-generated spatiotemporal compartment, i.e. a kill-box, we would not only impart to the AI operation a human-like, non-zero probability of making high-risk "alpha zero" moves, but also allow more secure failures that cannot exacerbate the larger conflict, while retaining all the benefits of deploying advanced autonomous technology. This is especially valid for global-common type environments like space and the ocean, and of course the internet too, where there is a further need to research and manage AI security risks, as in such environments most nations are in a virtually persistent struggle with their allies and adversaries simultaneously.

A typical conventional kill-box. (image source: WikiLeaks)
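To make the compartment concrete, here is a minimal sketch in Python, with entirely hypothetical names, coordinates and thresholds. The one design decision it encodes is deny-by-default: anything outside the human-authorized box or time window degrades to operator referral rather than to engagement.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class KillBox:
        # Human-authorized spatiotemporal compartment (values hypothetical).
        lat_min: float
        lat_max: float
        lon_min: float
        lon_max: float
        opens: datetime
        closes: datetime

        def contains(self, lat: float, lon: float, t: datetime) -> bool:
            inside_space = (self.lat_min <= lat <= self.lat_max
                            and self.lon_min <= lon <= self.lon_max)
            inside_time = self.opens <= t <= self.closes
            return inside_space and inside_time

    def engagement_mode(box: KillBox, lat: float, lon: float,
                        t: datetime, confidence: float) -> str:
        # Autonomy is permitted only inside the compartment AND above a
        # confidence floor; every other branch falls back to the operator,
        # i.e. the system fails securely.
        if box.contains(lat, lon, t) and confidence >= 0.98:
            return "AUTONOMOUS_ENGAGE_PERMITTED"
        return "DEFER_TO_OPERATOR"

    box = KillBox(34.0, 34.5, 77.0, 77.8,
                  opens=datetime(2020, 9, 5, 2, 0, tzinfo=timezone.utc),
                  closes=datetime(2020, 9, 5, 6, 0, tzinfo=timezone.utc))
    t = datetime(2020, 9, 5, 3, 30, tzinfo=timezone.utc)
    print(engagement_mode(box, 34.2, 77.4, t, confidence=0.99))

The box and the confidence floor are independent gates, so a breach of either fails toward the human, which is the secure-failure posture this post argues for.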

Ideally, a military should develop its own autonomous systems instead of relying on commercial off-the-shelf products or even allies' systems, for those may come with their own inductive biases and are sometimes less likely to fully support complex missions. There are obvious economic, organizational and foreign policy incentives for doing this. Most importantly, it would let the machine behavior policy and the AI's failure modes be much more clearly defined and adversarially trained against, in a manner that does away with insecure failures while also suiting the respective country's cultural sensibilities.

Speaking of the latter, complex societies require a fair amount of organized coercion, socioeconomic incentives, cultural deterrence and mutation over a course of centuries to become eligible for the "civilization badge". So it should be natural to want your AIs to reflect that civilizational ethos. Understandably, culture isn't an engineer's problem, but the way technology (particularly ICT) affords vastly different modes of social interaction and restructures social and even political affairs sets the premise for engineering, which can then function as 'politics by other means'. Perhaps Kaczynski was right. And that's all the more reason to develop systems that embrace failures, and fail securely.