Today, the papers for the UK PONI 2020 conference were released!
A number of research topics were on offer this year. Based on topical issues facing the country, I chose the final research topic, determined to speak about artificial intelligence in some capacity.
The paper discusses potential AI applications that may develop over the next 20 years and how they might challenge the UK’s deterrence posture. It argues that dependencies on other countries need to be reconsidered and that there is currently too little European thinking about what AI means for the military.
Unfortunately, the conference had to be ‘attended’ virtually, but hopefully that trip to Whitehall will materialise in the not-too-distant future.
The Global Artificial Intelligence Race and Strategic Balance: Which Race Are We Running?
Artificial intelligence (AI) has the potential to affect ever more aspects of military and civilian life as part of the fourth industrial revolution. Countries are racing for global AI dominance, and whoever ‘wins’ shall reap the economic and geopolitical power expected to result. However, AI-enhanced technologies could pose new security risks that have not been encountered before.
This paper discusses some military and defence implications of AI development and assesses potential threats to Euro-Atlantic security. China has been identified as potentially threatening because of its high AI capability rankings, its use of AI for military applications and its poise to become the global 5G leader.
Ultimately, this paper argues that the UK and the EU should approach outsourcing critical communications infrastructure with caution and take recent security concerns involving China more seriously.
Current AI capability
AI has been likened to electricity in its potential transformative impact on the economy and enablement of other innovations.1 Many expect that the ‘winners’ of the AI development race will dominate the coming decades economically and geopolitically, thus exacerbating tensions between countries and transforming elements of national power.2
The US remains ahead in most AI capability metrics, with China second and the UK third.3 Other sources use different metrics to generate their rankings.4 AI applied to the military looms large on the agenda of the US, China, Russia and Israel.5 The implication is that whichever country leads in AI development will have a military advantage in both cyber and traditional warfare.6
According to the framework provided by John Launchbury, AI can be conceptualised as having three waves, each based on a different capability, as depicted in Figure 1.7 The world is still in the realm known as ‘weak’ or ‘narrow’ AI, in which AI is optimised for specific, narrow tasks such as speech recognition and performing repetitive functions. Strong AI, or artificial general intelligence (AGI), in which a machine will have human-like cognitive capability, remains a significant technical challenge.
Figure 1: AI conceptualisation framework.
Source: United States Government Accountability Office, ‘Artificial Intelligence: Emerging Opportunities, Challenges, and Implications’, Report to the Committee on Science, Space, and Technology, House of Representatives, GAO-18-142SP, March 2018, accessed 16 May 2020.
There are two major limitations in current AI technology: large amounts of labelled data are needed to train systems; and context is still poorly understood by the systems.8 The next five years will likely see a lot of real-world piloting while building crucial datasets.9 Opinions differ regarding the timeline for the development of AGI.10 It has been predicted that computers will routinely pass the Turing test by 2029 and the technological singularity will occur by 2045.11 However, some sources consider AGI development unlikely for the next 20 years, if at all.12 It is also believed that the capacity for at least some aspects of decision-making could be achieved by 2040.13
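The labelled-data limitation can be made concrete with a toy sketch (a hypothetical illustration, not drawn from any cited source): even the simplest supervised classifier only works once every training example carries a human-provided label, and real systems need this at the scale of millions of examples.

```python
# Toy illustration (hypothetical): a nearest-neighbour classifier,
# one of the simplest forms of supervised 'narrow' AI. Every single
# training example must carry a human-provided label.

def nearest_neighbour(train, query):
    """Classify `query` with the label of the closest labelled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Each entry is (feature vector, hand-assigned label) -- the labelling
# is the costly part that current narrow AI cannot do without.
labelled_data = [
    ((0.1, 0.2), "benign"),
    ((0.2, 0.1), "benign"),
    ((0.9, 0.8), "threat"),
    ((0.8, 0.9), "threat"),
]

print(nearest_neighbour(labelled_data, (0.85, 0.75)))  # -> threat
```

Strip the labels out of `labelled_data` and the system can classify nothing at all, which is precisely the bottleneck the paper identifies.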
The Fourth Industrial Revolution (Industry 4.0)
Industry 4.0 involves the development of smart and connected machines and systems, in which further waves of breakthroughs will take place in areas ranging from AI and the Internet of Things (IoT) to Big Data analytics and quantum computing.14
Big Data is high-volume, high-velocity and/or high-variety information, known as the 3Vs.15 The IoT, which broadly encompasses the increased connectivity of people and things, has been recognised as a key civil technology that could potentially affect US power.16 The main purpose of the increasing number and types of IoT objects is to produce useful data about our surroundings to make them smarter.17 IoT is expected to be a major producer of Big Data, the fusion and analysis of which enables accurate and reliable decision-making and management of ubiquitous environments; this is a grand future challenge in which AI plays a key role.18 AI and Big Data have already changed the economy and advanced the productivity of entire markets, and Cisco predicts that 94% of global workloads will be processed in the cloud in 2021.19 AI is emerging as a solution for managing large amounts of data, especially for making predictions based on the data sets.20
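As a minimal sketch of what such data-driven decision-making can look like in practice (a hypothetical example, not drawn from any cited source), consider automatically flagging anomalous readings in a simulated IoT sensor stream:

```python
# Minimal sketch (hypothetical): automated anomaly detection on an IoT
# sensor stream -- the kind of prediction-and-decision task that the
# fusion of Big Data and AI is meant to enable at scale.
from collections import deque

def detect_anomalies(stream, window=5, threshold=3.0):
    """Yield readings that deviate sharply from the recent rolling mean."""
    recent = deque(maxlen=window)
    for reading in stream:
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(reading - mean) > threshold:
                yield reading
                continue  # keep faulty readings out of the rolling window
        recent.append(reading)

# Simulated temperature readings with one faulty spike.
readings = [20.1, 20.3, 19.9, 20.2, 20.0, 20.4, 35.7, 20.1, 20.2]
print(list(detect_anomalies(readings)))  # -> [35.7]
```

A production system would replace the rolling-mean heuristic with a learned model, but the shape is the same: high-velocity sensor data in, automated decisions out.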
AI in Defence
The European Defence Agency (EDA) has analysed what it considers to be the 10 most disruptive innovations to come (Table 1); AI is the technology connecting the development of those highlighted.
Table 1: The 10 most disruptive defence innovations to come.
| AI and cognitive computing in defence | Robotics in defence |
| Defence Internet of Things | Autonomy in defence: systems, weapons, decision-making |
| Big Data analytics for defence | Future advanced materials for defence applications |
| Blockchain technology in defence | Additive manufacturing in defence |
| AI-enabled cyber defence | Next generation sequencing (NGS) for biological threat preparedness |
Source: European Defence Matters, ‘Disruptive Defence Innovations Ahead!’, Magazine of the European Defence Agency, Issue 14, 2017, accessed 1 July 2020.
Increased autonomy in weapons systems could provide an advantage on the battlefield, potentially allowing weaker nuclear-armed states to redress the imbalance of power, while exacerbating fears that stronger states may further solidify their dominance and engage in more provocative actions.21 Competition between global leaders may lead to the proliferation of weaponised AI.22
Additionally, the co-mingling of both AI-augmented nuclear and strategic non-nuclear weapons will exacerbate the risk of inadvertent escalation by undermining strategic stability.23
The US, China and Russia have all declared strategies to achieve offset advantage through robotics and AI.24 Command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) has been identified as a potential area of impact over the next 20 years, in which AI-enabled autonomous systems will be employed by war-fighting units.25 It has been acknowledged that both the EU and NATO are just beginning to grapple with the issue of AI in defence, whereas Russia and China have already started thinking strategically about it.26
5G, security, and political stability
The introduction of 5G will see the number of IoT sensors collecting data proliferate; 5G is up to 20 times faster than 4G, has record-setting low latency, and will allow developers to create near real-time applications.27 This, however, will not be possible without AI.28 The use of Big Data creates a trade-off between performance and privacy; if operators fail to leverage it in an ethical manner, data confidentiality issues arise around large amounts of sensitive personal information (names, ID numbers, locations, passwords).29
Countries are under pressure to protect their citizens and even political stability in the face of possible malicious or biased uses of AI and Big Data.30 Because 5G networks are the future backbone of our increasingly digitised economies and societies, ensuring their security and resilience is essential.31 Even at current capability levels, AI can be used in the cyber domain to augment attacks on cyberinfrastructure.32 There is no such thing as perfect security, only varying levels of insecurity.33 These ‘smart’ technologies rely on bidirectional wireless links to communicate with devices and global services, presenting a larger ‘attack surface’ for cyber threats to target.34 More broadly, 5G networks may lead to politically divided and potentially non-interoperable technology spheres of influence, with one sphere led by the US and another by China, and others in between (for example, the EU, South Korea and Japan).35
All of these concerns are most significant in the context of authoritarian states but may also undermine the ability of democracies to sustain truthful public debates.36 For example, ‘deepfake’ (stemming from ‘deep learning’ and ‘fake’) algorithms can create fake images and videos that humans cannot easily distinguish from authentic ones. Deepfake methods employed to promulgate misinformation threaten global security, and they also pose a serious threat to privacy and identity. The development of technologies that can assess the integrity of digital media is therefore indispensable.37
Safety and control are also areas in which AI will need to be regulated, and some also call for banning or severely limiting R&D in fields of AI such as autonomous weapons, superintelligent AI and offensive cyber capabilities.38 Virtually all AI models include a ‘black box’ aspect to the software that even the creators do not fully understand, which adds to the major challenge of preserving algorithm openness.39
The implications of AI for EU and UK security and defence are largely unknown at this stage.40 Europe is behind other global players.41 Yet, successive EU strategies have continued to reinforce the desire for European technical autonomy and even outright ‘sovereignty’ in areas of key strategic importance, including 5G and 6G.42 The EU has declared the need for a high level of data protection, digital rights and ethical standards in AI and robotics, and insists on ethical standards as well as preparedness for the social changes caused by AI.43
The China 2025 strategy aims to make China ‘a major cyber power’ and to build its capacity to shape the international governance of cyberspace according to its own interests. Some refer to the digital revolution underway and the deepening Sino-American competition as a new arms race.44
Is China a threat?
China was recently poised to become the global leader in 5G technology despite Huawei’s products and services being assessed as highly insecure.45 In January, the UK announced that Huawei would be allowed to build part of the country’s 5G core network. The US, citing the high risk that Huawei equipment poses to critical infrastructure, responded with threats of restricting intelligence cooperation.46 The UK’s Huawei Cyber Security Evaluation Centre Oversight Board reported in 2018 that ‘security critical third-party software used in a variety of products was not subject to sufficient control’; Huawei replied that it might need three to five years to mitigate these flaws, but by then, most decisions about 5G contracts will have been taken and the construction of 5G networks will already be underway.47 Once introduced, Huawei equipment will be an ‘unextractable’ part of British infrastructure.48
It can be argued that the prevailing practice of Big Data in China is much less attuned to the social, political and ethical implications that a human-centric approach would demand. Consequently, Chinese technologies and government policies have attracted growing international attention and scrutiny.49 China is implementing extensive social surveillance and an AI-based social credit scheme, which would be considered controversial in other countries: user behaviour data is collected and can later be used to grant or deny access to a range of state-provided services. The state’s increasing dependence on these AI and Big Data systems is building a digital authoritarian regime.50
If many critical components of a country’s 5G infrastructure are of Chinese origin, it gives China easier access to spy on or disrupt that country’s online communications.51 The UK needs to understand how the risks that come with foreign equipment can be mitigated, because even now, little is known about how the UK counters potential security breaches that may come with the Chinese-produced surveillance equipment installed in various London boroughs.52 Jeremy Warner of The Telegraph states that the UK risks ‘leaving the future to China in our rush to data protection’.53
In June 2019, China issued its first AI ethics code: the Beijing AI principles. This is the first public signal of some willingness within the country to discuss the ethics of AI.54 Earlier in 2020, the US Commerce Department released new regulations restricting access to US technology by various Chinese companies.55 The need for future regulation will depend a lot on the technological progress of AI, as policy and regulation may subvert its development (and vice versa).56 An international, collaborative governance and the potential for a new technology diplomacy may be key in attaining stability during Industry 4.0.57
The implications of AI technologies for national security remain largely unknown at this stage. However, the UK appears in recent times to have strategically isolated itself from its allies, first by going against the US’s wish to ban Huawei from its 5G infrastructure and then by deciding to leave the EU, which has its own strategic priority of proving geopolitical relevance. Only very recently has the UK government decided to phase out Huawei’s 5G role from the country.58 It remains extremely important, though, for the UK to fully understand the surveillance equipment already in use by identifying potential gaps in existing frameworks and enforcement mechanisms.59
The EDA should engage with non-traditional defence R&D communities and innovators to speed up access to emerging and potentially disruptive research and identify areas for additional investment to fully address future defence capability needs.60 Consideration should also be given to the development of European industrial capacity.61
Countries that have fallen behind in AI may have only two options: to join the race and possibly develop niche AI or to regulate its uses to mitigate potentially undesirable applications.62
International policy coordination remains a necessary instrument to tackle the ethical and political repercussions of AI to facilitate the global alignment of AI policy and governance.63
1. US-China Economic and Security Review Commission, ‘Emerging Technologies and Military-Civil Fusion – Artificial Intelligence, New Materials, and New Energy’, 2019 Annual Report to Congress of the US-China Economic and Security Review Commission, US-China Competition, chapter 3, Section 2.
2. Claudio Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All: The Case for a New Technology Diplomacy’, Telecommunications Policy (Vol. 44, No. 6, 2020), article 101988.
3. Tortoise Media, ‘The Global AI Index’, accessed 3 April 2020.
4. Stanford University, ‘The 2019 AI Index Report. Human-Centered Artificial Intelligence’, 2019, accessed 30 June 2020; Oxford Insights, ‘Government Artificial Intelligence Readiness Index 2019’, 2019, accessed 30 June 2020; Sarah O’Meara, ‘Will China Lead The World in AI by 2030?’, Nature (Vol. 572, 2019), pp. 427–28.
5. Congressional Research Service, ‘Artificial Intelligence and National Security’, updated 26 August 2020, accessed 30 June 2020; Forrest E Morgan and Raphael S Cohen, ‘Military Trends and the Future of Warfare: The Changing Global Environment and Its Implications for the US Air Force’, RAND Corporation, 2020; Samuel Bendett et al., ‘Russian Unmanned Vehicle Developments: Syria and Beyond’, in Center for Strategic and International Studies (CSIS), ‘Improvisation and Adaptability in the Russian Military’, 30 April 2020, pp. 38–47; Vincent Boulanin and Maaike Verbruggen, ‘Mapping the Development of Autonomy in Weapon Systems’, SIPRI Publications, November 2017; Pax, ‘Don’t Be Evil?’, August 2019, accessed 30 June 2020.
6. J Scott Brennen, Philip N Howard and Rasmus Kleis Nielsen, ‘An Industry-Led Debate: How UK Media Cover Artificial Intelligence’, Factsheet, Reuters Institute for the Study of Journalism, December 2018.
7. John Launchbury, ‘A DARPA Perspective on Artificial Intelligence’, DARPA, 2016, accessed 1 July 2020.
8. US-China Economic and Security Review Commission, ‘Emerging Technologies and Military-Civil Fusion – Artificial Intelligence, New Materials, and New Energy’.
9. Congressional Research Service, ‘Artificial Intelligence and National Security’.
10. Leopold Schmertzing, ‘Trends in Artificial Intelligence and Big Data’, European Parliamentary Research Service (EPRS) Ideas Paper Series, 2019, accessed 4 August 2020.
11. Roman V Yampolskiy, ‘Artificial Intelligence Safety and Cybersecurity: A Timeline of AI Failures’, Artificial Intelligence, 2016.
12. D F Reding and J Eaton, ‘Science & Technology Trends 2020-2040: Exploring the S&T Edge’, NATO Science & Technology Organization, 2020, accessed 9 July 2020.
13. Edward Geist and Andrew J Lohn, ‘How Might Artificial Intelligence Affect the Risk of Nuclear War?’, RAND Corporation, 2018.
14. Klaus Schwab, ‘The Fourth Industrial Revolution’, World Economic Forum, 2016.
15. Gartner, ‘Gartner Glossary’, 2020, accessed 5 August 2020; Doug Laney, ‘Application Delivery Strategies’, META Group Inc., 2001, accessed 5 August 2020.
16. SRIC-BI, ‘Disruptive Civil Technologies: Six Technologies With Potential Impacts on US Interests Out to 2025’, National Intelligence Council Conference Report, SRI Consulting Business (SRIC-BI) Intelligence, 2008, p. 48.
17. Furqan Alam et al., ‘Data Fusion and IoT for Smart Ubiquitous Environments: A Survey’, IEEE Access (Vol. 5, 2017), pp. 9533–54.
19. Schmertzing, ‘Trends in Artificial Intelligence and Big Data’.
20. Daoqu Geng et al., ‘Big Data-Based Improved Data Acquisition and Storage System for Designing Industrial Data Platform’, IEEE Access (Vol. 7, 2018), pp. 44574–82.
21. Franz-Stefan Gady, ‘Elsa B. Kania On Artificial Intelligence And Great Power Competition’, The Diplomat, 31 December 2019; Lora Saalman, ‘The Impact of AI on Nuclear Deterrence: China, Russia, and the United States’, East-West Center, accessed 14 May 2020.
22. Amy Ertan, ‘What, Who, Where, How: The Impact of Recent Military AI Innovation in Security Terms’, Defence and Security Doctoral Symposium (DSDS19), 2019, accessed 14 May 2020; James Johnson, ‘Artificial Intelligence in Nuclear Warfare: A Perfect Storm of Instability?’, Washington Quarterly (Vol. 43, No. 2, 2020), pp. 197–211.
23. James M Acton et al., ‘Entanglement: Chinese and Russian Perspectives on Non-Nuclear Weapons and Nuclear Risks’, Carnegie Endowment for International Peace, 2017; James Johnson, ‘The End of Military-Techno Pax Americana? Washington’s Strategic Responses to Chinese AI-Enabled Military Technology’, Pacific Review, 2019; James Johnson, ‘VIII. The Impact of Artificial Intelligence on Strategic Stability, Escalation and Nuclear Security’, 2019 UK Project on Nuclear Issues (UK PONI) Annual Conference Papers, 2019.
24. Ministry of Defence (MoD), ‘Joint Concept Note 1/18: Human-Machine Teaming’, 2018, accessed 6 July 2020.
25. United States Government Accountability Office, ‘Artificial Intelligence: Emerging Opportunities, Challenges, and Implications’, Report to the Committee on Science, Space, and Technology, House of Representatives, GAO-18-142SP, 2018; James S Johnson, ‘Artificial Intelligence: A Threat to Strategic Stability’, Strategic Studies Quarterly, Spring 2020.
26. Ulrike Esther Franke, ‘Not Smart Enough: The Poverty of European Military Thinking on Artificial Intelligence’, European Council on Foreign Relations, 2019, accessed 14 May 2020; EU-NATO Relations and Artificial Intelligence Conference, co-organised by the EU Institute for Security Studies (EUISS) and the Finnish Presidency of the Council of the EU at the Permanent Representation of Finland to the EU, Brussels, 14 November 2019, accessed 14 May 2020.
27. Mark Howden, ‘5G Opportunities for App Developers’, Samsung Insights, 2019, accessed 3 August 2020; Moayad Aloqaily et al., ‘Design Guidelines for Blockchain-Assisted 5G-UAV Networks’, Networking and Internet Architecture, 2020, accessed 4 August 2020.
28. Karen Hao, ‘DARPA is Betting on AI to Bring the Next Generation of Wireless Devices Online’, MIT Technology Review, 25 October 2019, accessed 3 August 2020; Geng et al., ‘Big Data-Based Improved Data Acquisition and Storage System for Designing Industrial Data Platform’.
29. Roxana Mihet and Thomas Philippon, ‘The Economics of Big Data and Artificial Intelligence’, Disruptive Innovation in Business and Finance in the Digital World (Vol. 20, 2019), pp. 29–43; Ying He et al., ‘Big Data Analytics in Mobile Cellular Networks’, IEEE Access (Vol. 4, 2016), pp. 1985–96.
30. Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’; Schmertzing, ‘Trends in Artificial Intelligence and Big Data’.
31. European Commission, ‘Report On EU Coordinated Risk Assessment Of 5G: Member States Publish A Report On EU Coordinated Risk Assessment Of 5G Networks Security’, 2019, accessed 3 August 2020.
32. Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’.
33. Yampolskiy, ‘Artificial Intelligence Safety and Cybersecurity’.
34. James Hayes, ‘Hackers Under The Hood’, Institute of Engineering and Technology, 2020, accessed 6 July 2020; Idaho National Laboratory, ‘Cyber Threat and Vulnerability Analysis of the US Electric Sector’, US Department of Energy Office of Scientific and Technical Information, 2016, accessed 6 July 2020; European Commission, ‘Report on EU Coordinated Risk Assessment of 5G’.
35. Paul Triolo, Kevin Allison and Clarise Brown, ‘Eurasia Group White Paper: The Geopolitics of 5G’, Eurasia Group, 2018, accessed 5 August 2020; Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’.
36. Miles Brundage et al., ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation Authors are Listed in Order of Contribution Design Direction’, Future of Humanity Institute, 2018, accessed 5 August 2020.
37. Gady, ‘Elsa B. Kania on Artificial Intelligence and Great Power Competition’; Thanh Thi Nguyen et al., ‘Deep Learning for Deepfakes Creation and Detection: A Survey’, Computer Vision and Pattern Recognition, accessed 5 August 2020.
38. Schmertzing, ‘Trends in Artificial Intelligence and Big Data’.
39. Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’.
40. Daniel Fiott and Gustav Lindstrom, ‘Artificial Intelligence: What Implications for EU Security and Defence?’ EUISS, accessed 17 May 2020.
42. Andrés Ortega Klein, ‘The US-China Race and the Fate of Transatlantic Relations: Part II: Bridging Differing Geopolitical Views’, CSIS, 2020; EU-NATO Relations and Artificial Intelligence conference, co-organised by the EUISS and the Finnish Presidency of the Council of the EU, Permanent Representation of Finland to the EU, Brussels.
43. European Group on Ethics in Science and New Technologies, ‘Statement on Artificial Intelligence, Robotics and Autonomous Systems’, accessed 5 August 2018; European Commission, ‘Artificial Intelligence for Europe’, (COM2018) 237 final, accessed 5 August 2020.
44. François Godement et al., ‘The China Dream Goes Digital: Technology in the Age of Xi’, European Council on Foreign Relations, 2018, accessed 5 August 2020.
45. Elsa B Kania, ‘Securing Our 5G Future: The Competitive Challenge and Considerations for US Policy’, Center for a New American Security (CNAS), 2019, accessed 6 July 2020.
46. Valentin Weber, ‘Making Sense of Technological Spheres of Influence’, London School of Economics and Political Science, 2020, accessed 6 July 2020.
47. Valentin Weber, ‘Finding a European Response To Huawei’s 5G Ambitions’, Norwegian Institute of International Affairs (NUPI), 2019, accessed 6 July 2020.
48. Sarah Young, ‘UK Defence Committee to Probe Security of 5G Network on Huawei Concerns’, Reuters, 2020.
49. Min Jiang and King‐Wa Fu, ‘Chinese Social Media and Big Data: Big Data, Big Brother, Big Profit?’, Policy & Internet (Vol. 10, Issue 4, 2018), pp. 372–92; Gaurav Shukla, ‘Google Removes Viral Indian App That Deleted Chinese Ones: 10 Points’, Gadgets 360, 2020, accessed 5 August 2020; Leo Kelion, ‘TikTok: How Would the US Go About Banning the Chinese App?’, BBC News, 3 August 2020, accessed 5 August 2020.
50. Valentin Weber, ‘AI, China, Russia, and the Global Order: Technological, Political, Global, and Creative Perspectives’, Centre for Technology and Global Affairs, University of Oxford, 2019; Max Craglia et al., ‘Artificial Intelligence: A European Perspective’, Publications Office of the EU, 2020, accessed 3 August 2020; Christina Larson, ‘Who Needs Democracy When You Have Data?’, MIT Technology Review, 20 August 2018, accessed 5 August 2020.
51. Weber, ‘Making Sense of Technological Spheres of Influence’.
53. Brennen, ‘An Industry-Led Debate’.
54. Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’.
55. Dieter Ernst, ‘Catching Up in a Technology War: China’s Challenge in Artificial Intelligence’, East-West Center, 16 June 2020, accessed 3 August 2020.
56. Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’; Schmertzing, ‘Trends in Artificial Intelligence and Big Data’.
57. Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’.
58. Aishwarya Nair and Daniel Wallis, ‘UK PM Johnson to Phase Out Huawei’s 5G Role Within Months – The Telegraph’, Reuters, 4 July 2020.
59. NIS, ‘EU Coordinated Risk Assessment of the Cybersecurity of 5G Networks’, NIS Cooperation Group, 9 October 2019, accessed 6 August 2020.
60. European Defence Matters, ‘Disruptive Defence Innovations Ahead!’, Magazine of the European Defence Agency (No. 14, 2017), accessed 1 July 2020; Loredana Ceccacci, ‘The Ethics of Big Data’, European Economic and Social Committee, 2017, accessed 18 May 2020.
61. NIS, ‘EU Coordinated Risk Assessment of the Cybersecurity of 5G Networks’.
62. Yuval Noah Harari, ‘Who Will Win the AI Race?’, Foreign Policy (Winter 2019), pp. 52–55.
63. Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All’.