An overview of the crystallinity of cocoa butter 🍫

Chocolate is one of the most interesting of food products: it is not only incredibly tasty, but it also demonstrates a plethora of scientific principles. The development of chocolate as we know it today has a rich cultural history, and over the years scientists and engineers have refined the process of making such a celestial pleasure from the bitter cocoa bean by applying their knowledge of chocolate chemistry.

Originally published in New Food Magazine, Issue 6, 2016.

The aim of this article is to outline some of the basic chemical principles behind the science of chocolate, and how these principles are utilised by scientists, engineers, chocolate makers and pâtissiers to make some of the most beautiful confections you see lining the bakeries and chocolate shops of Paris and London.

Harvesting the cocoa beans and a very brief history
Chocolate begins with the cocoa tree, or Theobroma cacao, which means ‘food of the gods’. The tree grows along a strip of the globe that spans approximately ten degrees north and south of the Equator1 known as the ‘cocoa belt’. Farmers harvest the beans and then ferment them for approximately four to seven days. They are then sun-dried, which can take several days, before being transported to the manufacturer.

It is believed that the tree was first cultivated on the southern Gulf coast of Mexico sometime before 600 BC.2 The Aztecs used roasted ground cocoa beans in a drink served at religious ceremonies, and the beans themselves were even used as a form of currency. Columbus is said to have brought cocoa beans to Spain in the 1500s, and their popularity as a drink, flavoured with various spices and a lot of sugar, flourished in Europe. Although cocoa beans were already used in confectionery items throughout Europe at the time, the Bristolian company Fry & Sons claimed to have produced the first chocolate bar, in 1847.

Manufacturing from bean to bar
Once received, the manufacturer cleans and roasts the beans. Roasting develops their flavour and aids in the cracking and removal of the shells. The beans are then lightly crushed and passed through impact or tooth rollers, before entering a winnowing machine to help separate the shell and obtain the cocoa nib inside. This prized nib is small, crunchy, and very bitter in taste. A yield of at least 83-84 percent nib should be obtained in order to maximise the profitability of the manufacturing process.3

Over half of the nib’s weight is fat,3 which is cocoa butter; the remainder is the solid part of the nib, known as ‘cocoa solids’. When the nibs are ground they form a thick paste. This paste then undergoes a refining process to reduce the cocoa solid particles to approximately 20-30 microns in size;1,2,4 if the particles are too large the resulting chocolate will have a coarse and gritty mouth-feel, but if they are too small it will be sticky on the palate. This is achieved by processing the paste through a series of rollers. The temperature of the rollers and the size of the gaps between them are important in determining the atmospheric conditions and the size of the particles.3 Sugar and lecithin (an emulsifier) are also added, as well as additional cocoa butter to provide sufficient lubrication for the sugar particles.2 Other ingredients, such as vanilla, milk powder, and ground cocoa nibs of different origins, may also be included.

Conching to form molten chocolate
The refined paste then undergoes the conching process, which was developed by Rodolphe Lindt of Switzerland in 1878,2 and is the final flavour development step in bulk chocolate production. Conching is named after the instrument in which the process is carried out and refers to the shape of the tank, which is similar to that of a conch shell. The friction generated by the mechanical action of the conch causes the cocoa butter to melt, thus creating an extremely aromatic bath of molten chocolate. More cocoa butter is added to improve its viscosity and flowability; molten chocolate is a non-Newtonian fluid or, more specifically, a mildly shear-thinning fluid.5-7 The rheology of molten chocolate has been extensively studied8 and the laminar flow of chocolate through the pipeline should be maintained at all times. Larger pipes should be installed if any turbulence occurs, because the shear stress down a pipe is proportional to its diameter.1 Temperature control is also extremely important, because below certain temperatures the fluid becomes more viscous, which may cause damage to the pipework.
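Chocolate rheology is often approximated with the Casson model, in which the apparent viscosity falls as the shear rate rises. The sketch below illustrates this shear-thinning behaviour; the yield stress and Casson viscosity values are invented for demonstration, not measured values for any real chocolate.

```python
import math

def casson_apparent_viscosity(shear_rate, yield_stress=10.0, casson_viscosity=2.0):
    """Apparent viscosity (Pa.s) of a Casson fluid at a given shear rate (1/s).

    Casson model: sqrt(tau) = sqrt(tau_0) + sqrt(eta_ca * gamma_dot),
    so tau = (sqrt(tau_0) + sqrt(eta_ca * gamma_dot))**2 and the
    apparent viscosity is tau / gamma_dot. Parameter values are illustrative.
    """
    tau = (math.sqrt(yield_stress) + math.sqrt(casson_viscosity * shear_rate)) ** 2
    return tau / shear_rate

# Shear thinning: the faster the chocolate is sheared, the lower its
# apparent viscosity.
assert casson_apparent_viscosity(50.0) < casson_apparent_viscosity(1.0)
```

At high shear rates the apparent viscosity approaches the Casson viscosity parameter, while at low shear rates the yield stress term dominates, which is one reason pumping conditions matter when sizing pipework.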

Cocoa butter is an important ingredient in chocolate because it gives the final product many of its important characteristics, such as mouth-feel, glossiness and hardness, and when cooled it contracts, thus making demoulding easier. Conching further develops the chocolate’s flavours by coating each cocoa solid particle evenly with the cocoa butter making for a mellower flavour. The molten chocolate is agitated in the conche, so that the suspended particles do not settle and homogenisation is always achieved. This process can last for days depending on the manufacturer; originally it lasted 96 hours,9 but nowadays it may last for only 7-36 hours.2,9

Tempering chocolate
The last step in the manufacturing process is to cool the molten chocolate to room temperature in a specific way that enables the removal of the latent heat and gives the final product the desired properties: a nice sheen and snap; good mouth-feel; a denser chocolate that demoulds easily; and more resistance to fat bloom, which is a common occurrence when working with chocolate products.10 Tempering is the name given to this process (the term is also applied to the cooling of glass and steel), with the aim being to cool the cocoa butter into the most stable crystal form that gives the chocolate the aforementioned desired properties.

The chocolate is cooled to the point of crystallisation, gently reheated to melt the unstable crystals, and cooled again.2 The different cocoa butter crystal forms have slightly different melting points (see Table 1), with the most stable form having the highest melting point. This process is repeated until the final tempered product is obtained; it is a tightly temperature-controlled and largely automated process. The actual temperatures needed for tempering have been found to vary depending upon the chocolate composition and the equipment being used.1
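The cycle above can be written out as a sketch using typical textbook temperatures for dark chocolate; as noted, the exact values depend on composition and equipment, so the numbers below are illustrative assumptions, not a recipe.

```python
# Illustrative tempering cycle for dark chocolate. The temperatures are
# typical textbook values and vary with composition and equipment.
TEMPER_STAGES = [
    ("melt out all existing crystal forms", 45.0),                 # degrees C
    ("cool to nucleate crystals, stable and unstable", 27.0),
    ("reheat to melt unstable forms, keeping Form V seeds", 31.5),
]

for step, (purpose, temp_c) in enumerate(TEMPER_STAGES, start=1):
    print(f"Step {step}: hold near {temp_c:.1f} C to {purpose}")
```

The key feature is that the final reheat sits between the melting points of the unstable forms and that of Form V, so only the stable seed crystals survive to template the rest of the cocoa butter.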

Polymorphism of cocoa butter and fat bloom
The ability of a substance to exist in different crystalline structural forms without any change in chemical composition is known as polymorphism. The best-known example of polymorphism in science textbooks is probably that of diamond and graphite: diamond is hard and shiny, whereas graphite is soft and dull, despite both consisting solely of carbon. It is the differences in the way the carbon atoms are structurally arranged that give the two materials their extremely different properties. The polymorphism of certain drugs is an extremely important issue within the pharmaceutical industry, and it also affects the food industry because fat molecules can crystallise in a similar way.

Cocoa butter fat molecules are complex, and when molten chocolate is cooled the molecules can stack on top of one another in different ways. Each of these ways is known as a ‘polymorph’. Historically, six different polymorphs of cocoa butter – labelled from Form I to Form VI – were identified by Wille and Lutton11 in 1966. In the same year Larsson12 used Greek letters as the nomenclature for the polymorphic forms identified. These are shown in Table 1, along with their melting points;13 the forms have been determined in the past via the X-ray diffraction technique.9 The Form I to VI nomenclature is popular in the confectionery industry, while the oils and fats industry uses the Greek letter convention.1

Table 1: The six polymorphs of cocoa butter initially discovered.

Wille and Lutton (1966)11 Larsson12 Melting point (°C)
Form I Sub-α or γ 16-18
Form II α 21-22
Form III β’2 25.5
Form IV β’1 27-29
Form V β2 34-35
Form VI β1 36

The stability of the polymorphs increases as we move from Form I to Form VI. Forms V and VI are the most stable, triple-chain packing polymorphs, whereas the other forms are all double-chain.4 When tempering chocolate, Form V is the most desirable as it gives chocolate its valued final properties.
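The melting ranges in Table 1 can be encoded directly to show why the reheating step of tempering selects for Form V; `solid_forms_at` is a hypothetical helper written only for this sketch.

```python
# Melting ranges (degrees C) of the six cocoa butter polymorphs in Table 1.
POLYMORPHS = {
    "Form I (sub-alpha/gamma)": (16.0, 18.0),
    "Form II (alpha)": (21.0, 22.0),
    "Form III (beta'2)": (25.5, 25.5),
    "Form IV (beta'1)": (27.0, 29.0),
    "Form V (beta2)": (34.0, 35.0),
    "Form VI (beta1)": (36.0, 36.0),
}

def solid_forms_at(temp_c):
    """Return the forms whose melting range starts above temp_c (still solid)."""
    return [name for name, (melt_lo, _) in POLYMORPHS.items() if temp_c < melt_lo]

# Reheating tempered chocolate to around 32 C melts Forms I-IV but leaves
# Form V (and VI) intact - the basis of the tempering step described above.
print(solid_forms_at(32.0))
```

Running this prints only the Form V and Form VI entries, mirroring the observation that the most stable forms have the highest melting points.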

Although Form VI is the most energetically stable polymorph, this form cannot be obtained from the molten chocolate.1 However, all polymorphs eventually make the solid-solid transformation to the more stable Form VI over a period of time via a thermodynamic transition. The presence of Form VI is often accompanied by the formation of fat bloom.

Fat bloom occurs when some of the fat, in the form of unstable cocoa butter crystals, rises to the surface of the chocolate giving the appearance of mould and making the chocolate unattractive and less pleasant to eat.14

The transition between the polymorphs is the result of several factors, such as purity, temperature, rate of cooling, and humidity. Research is still ongoing in both industry and academia to better understand the mechanism of fat bloom. Many hypotheses attribute the kinetics of fat migration in the particulate structure of chocolate to diffusion and capillary processes; the actual mechanism, however, is still unclear.1,15 Some studies suggest that additional ingredients may be able to help inhibit bloom.1

The final product
The manufacture of chocolate, as with many industrial processes, encounters several potential issues. This article focuses on the crystallisation of cocoa butter and how incorrect tempering affects the final product.

Fat bloom gives chocolate the appearance of mould and significantly affects its palatability; often it is a sign of poor tempering. It occurs when cocoa butter has melted out of unstable crystals, migrated to the surface, and formed new crystals there.2,15 Even if the chocolate has been tempered properly to form ‘Form V’ structure, fat bloom will naturally occur after a long period of time (~2 years)10 as a result of the solid-solid transformation to the most stable Form VI.

Therefore, the skill of the chocolate maker is to perfectly temper chocolate whilst preventing the formation of Form VI for as long as possible. A further understanding of the chemistry of chocolate would no doubt provide further significant insight into this process.

1. Beckett, S.T., Industrial Chocolate Manufacture and Use. 4th ed. 2008
2. McGee, H., On Food and Cooking: The Science and Lore of the Kitchen. 2004
3. Dand, R., 9 – Cocoa bean processing and the manufacture of chocolate, in The International Cocoa Trade (Third edition). 2011, Woodhead Publishing. p. 268-289
4. Beckett, S.T., The Science of Chocolate. 2000
5. Mohos, F., Confectionery and Chocolate Engineering: Principles and Applications. 29 Nov 2010 ed. 2010: John Wiley & Sons
6. Adam, K.T. and J.W. Helen, The fluid dynamics of the chocolate fountain. European Journal of Physics, 2016. 37(1): p. 015803
7. Roth, K., Von Vollmilch bis Bitter, edelste Polymorphie. Chemie in unserer Zeit, 2005. 39(6): p. 416-428
8. Gonçalves, E.V. and S. Caetano da Silva Lannes, Chocolate rheology. Ciência e Tecnologia de Alimentos, 2010. 30(4): p. 845-851
9. Babin, H., Colloidal properties of sugar particle dispersions in food oils with relevance to chocolate processing., in The Procter Department of Food Science (Leeds), Faculty of Maths and Physical Sciences (Leeds), The University of Leeds. 2005, University of Leeds: Ethos Import
10. Tisoncik, M. Chocolate Fat Bloom. The Manufacturing Confectioner, 2013. 65-68
11. Wille, R.L. and E.S. Lutton, Polymorphism of cocoa butter. Journal of the American Oil Chemists Society, 1966. 43(8): p. 491-496
12. Larsson, K., Classification of Glyceride Crystal Forms. Acta Chemica Scandinavica, 1966. 20: p. 2255-2260
13. Talbot, G., Science and Technology of Enrobed and Filled Chocolate, Confectionery and Bakery Products. 2007
14. Kovac, J., The Science of Chocolate (Beckett, Stephen T.). Journal of Chemical Education, 2002. 79(2): p. 167
15. Afoakwa, E.O., et al., Fat bloom development and structure-appearance relationships during storage of under-tempered dark chocolates. Journal of Food Engineering, 2009. 91(4): p. 571-581
16. Bricknell, J. and R.W. Hartel, Relation of fat bloom in chocolate to polymorphic transition of cocoa butter. Journal of the American Oil Chemists’ Society, 1998. 75(11): p. 1609-1615
17. Fryer, P. and K. Pinschower, The Materials Science of Chocolate. MRS Bulletin, 2000. 25(12): p. 25-29
18. Bakalis, S., et al., Modelling crystal polymorphisms in chocolate processing. Procedia Food Science, 2011. 1(0): p. 340-346

The Global Artificial Intelligence Race and Strategic Balance: Which Race Are We Running?

The UK Project on Nuclear Issues 2020 conference papers and recording are available online.

Today, the papers for the UK PONI 2020 conference were released!

The research topics for this year included:

  • Emerging technology and nuclear proliferation;
  • Challenges and solutions for sustaining nuclear expertise in the next generation; and
  • UK nuclear weapons and Euro-Atlantic security.
    I decided, based on the topical issues facing the country, to go for the final research topic, in which I was determined to speak about artificial intelligence in some capacity.

    The paper discusses potential AI applications that may develop over the next 20 years and how they might challenge UK deterrence posture. It argues for the need to reconsider dependencies on other countries and that there is currently too little European thinking about what AI means for the military.

    Unfortunately, the conference had to be ‘attended’ virtually, but hopefully that trip to Whitehall will materialise in the not-too-distant future.


    The Global Artificial Intelligence Race and Strategic Balance: Which Race Are We Running?

    Artificial intelligence (AI) has the potential to affect ever more aspects of military and civilian life as part of the fourth industrial revolution. Countries are racing for global AI dominance, and whoever ‘wins’ shall reap the economic and geopolitical power expected to result. However, AI-enhanced technologies could pose new security risks that have not been encountered before.

    This paper discusses some military and defence implications of AI development and assesses potential threats to Euro-Atlantic security. China has been identified as potentially threatening because of its high AI capability rankings, its use of AI for military applications, and the fact that it is poised to become the global 5G leader.

    Ultimately, this paper argues that the UK and the EU should approach outsourcing critical communications infrastructure with caution and take recent security concerns involving China more seriously.

    Current AI capability

    AI has been likened to electricity in its potential transformative impact on the economy and enablement of other innovations.1 Many expect that the ‘winners’ of the AI development race will dominate the coming decades economically and geopolitically, thus exacerbating tensions between countries and transforming elements of national power.2

    The US remains ahead in most AI capability metrics, with the UK running third, behind China.3 Other sources use different metrics to generate their rankings.4 Military applications of AI are high on the agendas of the US, China, Russia and Israel.5 The implication is that whichever country leads in AI development will have a military advantage in terms of both cyber and traditional warfare.6

    According to the framework provided by John Launchbury, AI can be conceptualised as having three waves, each based on a different capability, as depicted in Figure 1.7 The world is still in the realm known as ‘weak’ or ‘narrow’ AI, in which AI is optimised for specific, narrow tasks such as speech recognition and performing repetitive functions. Strong AI, or artificial general intelligence (AGI), in which a machine will have human-like cognitive capability, remains a significant technical challenge.

    Figure 1: AI conceptualisation framework.
    Source: United States Government Accountability Office, ‘Artificial Intelligence: Emerging Opportunities, Challenges, and Implications’, Report to the Committee on Science, Space, and Technology, House of Representatives, GAO-18-142SP, March 2018, accessed 16 May 2020.

    There are two major limitations in current AI technology: large amounts of labelled data are needed to train systems; and context is still poorly understood by the systems.8 The next five years will likely see a lot of real-world piloting while building crucial datasets.9 Opinions differ regarding the timeline for the development of AGI.10 It has been predicted that computers will routinely pass the Turing test by 2029 and the technological singularity will occur by 2045.11 However, some sources consider AGI development unlikely for the next 20 years, if at all.12 It is also believed that the capacity for at least some aspects of decision-making could be achieved by 2040.13

    The Fourth Industrial Revolution (Industry 4.0)

    Industry 4.0 involves the development of smart and connected machines and systems, in which waves of further breakthroughs will take place in areas ranging from AI and the Internet of Things (IoT) to Big Data and quantum computing.14

    Big Data is high-volume, high-velocity and/or high-variety information, known as the 3Vs.15 The IoT, which broadly encompasses the increased connectivity of people and things, has been recognised as a key civil technology that could potentially affect US power.16 The main purpose of the increasing number and types of IoT objects is to produce useful data about our surroundings to make them smarter.17 IoT is expected to be a major producer of Big Data, the fusion and analysis of which enables accurate and reliable decision-making and management of ubiquitous environments; this is a grand future challenge in which AI plays a key role.18 AI and Big Data have already changed the economy and advanced the productivity of entire markets, and Cisco predicts that 94% of global workloads will be processed in the cloud in 2021.19 AI is emerging as a solution for managing large amounts of data, especially for making predictions based on the data sets.20

    AI in Defence

    The European Defence Agency (EDA) has analysed what it considers to be the 10 most disruptive innovations to come (Table 1); AI is the technology connecting the development of those highlighted.

    Table 1: The 10 most disruptive defence innovations to come.

    • AI and cognitive computing in defence
    • Robotics in defence
    • Defence Internet of Things
    • Autonomy in defence: systems, weapons, decision-making
    • Big Data analytics for defence
    • Future advanced materials for defence applications
    • Blockchain technology in defence
    • Additive manufacturing in defence
    • AI-enabled cyber defence
    • Next generation sequencing (NGS) for biological threat preparedness

    Source: European Defence Matters, ‘Disruptive Defence Innovations Ahead!’, Magazine of the European Defence Agency, Issue 14, 2015, accessed 1 July 2020.

    Increased autonomy in weapons systems could provide an advantage on the battlefield, potentially allowing weaker nuclear-armed states to reset the imbalance of power, but exacerbating fears that stronger states may further solidify their dominance and engage in more provocative actions.21 Competition between global leaders may lead to the proliferation of weaponised AI.22

    Additionally, the co-mingling of both AI-augmented nuclear and strategic non-nuclear weapons will exacerbate the risk of inadvertent escalation by undermining strategic stability.23

    The US, China and Russia have all declared strategies to achieve offset advantage through robotics and AI.24 C4ISR has been identified as a potential area of impact over the next 20 years, in which AI-enabled autonomous systems will be employed by war-fighting units.25 It has been acknowledged that both the EU and NATO are just beginning to grapple with the issue of AI in defence, whereas Russia and China have already started thinking strategically about it.26

    5G, security, and political stability

    The introduction of 5G will see the number of IoT sensors collecting data proliferate; 5G is up to 20 times faster than 4G, has record-setting low latency, and will allow developers to create near real-time applications.27 This, however, will not be possible without AI.28 The use of Big Data creates a trade-off between performance and privacy; if operators fail to leverage it in an ethical manner, data confidentiality issues regarding large amounts of sensitive personal information (names, ID numbers, locations, passwords) become apparent.29

    Countries are under pressure to protect their citizens, and even political stability, in the face of possible malicious or biased uses of AI and Big Data.30 Because 5G networks are the future backbone of our increasingly digitised economies and societies, ensuring their security and resilience is essential.31 Even at current capability levels, AI can be used in the cyber domain to augment attacks on cyberinfrastructure.32 There is no such thing as perfect security, only varying levels of insecurity.33 These ‘smart’ technologies rely on bidirectional wireless links to communicate with devices and global services, which presents a larger ‘attack surface’ for cyber threats to target.34 5G networks may therefore lead to politically divided and potentially non-interoperable technology spheres of influence, one led by the US and another by China, with others in between (for example the EU, South Korea and Japan).35

    All of these concerns are most significant in the context of authoritarian states, but they may also undermine the ability of democracies to sustain truthful public debates.36 For example, ‘deepfake’ (from ‘deep learning’ and ‘fake’) algorithms can create fake images and videos that humans cannot easily distinguish from authentic ones. Deepfake methods employed to promulgate misinformation threaten global security, and they also pose a huge threat to privacy and identity. Technologies that can assess the integrity of digital media are therefore indispensable.37

    Safety and control are also areas in which AI will need to be regulated, and some also call for banning or severely limiting R&D in fields of AI such as autonomous weapons, superintelligent AI and offensive cyber capabilities.38 Virtually all AI models include a ‘black box’ aspect to the software that even the creators do not fully understand, which adds to the major challenge of preserving algorithm openness.39

    The implications of AI for EU and UK security and defence are largely unknown at this stage.40 Europe is behind other global players.41 Yet, successive EU strategies have continued to reinforce the desire for European technical autonomy and even outright ‘sovereignty’ in areas of key strategic importance, including 5G and 6G.42 The EU has declared the need for a high level of data protection, digital rights and ethical standards in AI and robotics, and insists on ethical standards as well as preparedness for the social changes caused by AI.43

    The China 2025 strategy is about making China ‘a major cyber power’ and building its capacity to shape the international governance of cyberspace according to its own interests. Some refer to the digital revolution underway and the deepening Sino-American competition as a new arms race.44

    Is China a threat?

    China was recently poised to become the global leader in 5G technology despite Huawei’s products and services being assessed as highly insecure.45 In January, the UK announced that Huawei would be allowed to build part of the country’s 5G core network. The US, citing the high risk Huawei equipment poses to critical infrastructure, responded with threats of restricting intelligence cooperation.46 The UK’s Huawei Cyber Security Evaluation Centre Oversight Board reported in 2018 that ‘security critical third-party software used in a variety of products was not subject to sufficient control’; Huawei replied that it may need three to five years to mitigate these flaws, but by then most decisions about 5G contracts will have been taken and the construction of 5G networks will already be underway.47 Once introduced, it will be an ‘unextractable’ part of British infrastructure.48

    It can be argued that the prevailing practice of Big Data in China is much less attuned to the social, political and ethical implications that a human-centric approach would demand. Consequently, Chinese technologies and government policies have attracted growing international attention and scrutiny.49 China is implementing extensive social surveillance and an AI-based social credit scheme – one that would be considered controversial in other countries – which enables the collection of user behaviour data that can later be used to grant or deny access to a range of services provided by the state. The regime’s increasing dependence on its AI and Big Data systems is building a digital authoritarian state.50

    If many critical components of a country’s 5G infrastructure are of Chinese origin, it gives China easier access to spy on or disrupt that country’s online communications.51 The UK needs to understand how the risks that come with foreign equipment can be mitigated, because even now, little is known about how the UK counters potential security breaches that may come with the Chinese-produced surveillance equipment installed in various London boroughs.52 Jeremy Warner of The Telegraph states that the UK risks ‘leaving the future to China in our rush to data protection’.53

    In June 2019, China issued its first AI ethics code: the Beijing AI principles. This is the first public signal of some willingness within the country to discuss the ethics of AI.54 Earlier in 2020, the US Commerce Department released new regulations restricting access to US technology by various Chinese companies.55 The need for future regulation will depend greatly on the technological progress of AI, as policy and regulation may subvert its development (and vice versa).56 International, collaborative governance and the potential for a new technology diplomacy may be key to attaining stability during Industry 4.0.57


    The implications of AI technologies for national security remain largely unknown at this stage. However, the UK appears in recent times to have isolated itself strategically from its allies, first by going against the US’s wish that Huawei be banned from its 5G infrastructure, and then by deciding to leave the EU, which has its own strategic priority of proving its geopolitical relevance. Only very recently has the UK government decided to phase out Huawei’s 5G role in the country.58 It is still extremely important, though, for the UK to fully understand the surveillance equipment already in use by identifying potential gaps in existing frameworks and enforcement mechanisms.59

    The EDA should engage with non-traditional defence R&D communities and innovators to speed up access to emerging and potentially disruptive research and identify areas for additional investment to fully address future defence capability needs.60 Consideration should also be given to the development of European industrial capacity.61

    Countries that have fallen behind in AI may have only two options: to join the race and possibly develop niche AI or to regulate its uses to mitigate potentially undesirable applications.62

    International policy coordination remains a necessary instrument to tackle the ethical and political repercussions of AI to facilitate the global alignment of AI policy and governance.63


    1. US-China Economic and Security Review Commission, ‘Emerging Technologies and Military-Civil Fusion – Artificial Intelligence, New Materials, and New Energy’, 2019 Annual Report to Congress of the US-China Economic and Security Review Commission, US-China Competition, chapter 3, Section 2.
    2. Claudio Feijóo et al., ‘Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All: The Case for a New Technology Diplomacy’, Telecommunications Policy (Vol. 44, No. 6, 2020), Article 101988.
    3. Tortoise Media, ‘The Global AI Index’, accessed 3 April 2020.
    4. Stanford University, ‘The 2019 AI Index Report. Human-Centered Artificial Intelligence’, 2019, accessed 30 June 2020; Oxford Insights, ‘Government Artificial Intelligence Readiness Index 2019’, 2019, accessed 30 June 2020; Sarah O’Meara, ‘Will China Lead The World in AI by 2030?’, Nature (Vol. 572, 2019), pp. 427–28.

    Detached HEAD state and remote repositories 🤖👾👩‍💻

    I’m currently working on a test automation framework stored in Azure DevOps, so this post is designed as a starting point for anyone looking to do something similar. My research had led me to believe that Azure DevOps behaves differently with Git than other remote repositories do, so hopefully this guide will make clear for you what I needed to know at the start, too. 🥳

    Let’s get started! 👩‍💻

    Contents 🏷️
    👾 Git
    👾 Azure DevOps
    👾 Pushing to your DevOps branch
    👾 Remote and local branches
    👾 Merge changes from another branch
    👾 Push changes to master DevOps remote branch
    👾 Pulling and merging changes
    👾 Resources

    Git is a free and open source distributed version control system (DVCS). The automation framework is stored in a repository in Azure DevOps, and so we use Git to clone the repository to our computer. We then make changes to the framework on our computer and commit and push those changes back to the repository.

    Other team members working on the automation framework will be working on the framework in the same fashion on their own computers, and so when we push the changes to the repository we may have to merge the changes from each person together (i.e. if two team members were working on the same file).

    I recommend watching this video for an explanation of branching and various Git terms, as well as to visit the Git website. Below is a schematic that ties together some Git terminology:

    A Git schematic, showing the cloning of a remote repository on your local computer and how these are pushed back to the remote.

    Setting up DevOps and an SSH key should be quite straightforward, and this page should help with that.

    Navigate to the web address where your DevOps repository is located, and create your new remote branch by clicking ‘New branch’ from the drop-down menu:

    Type in the name of your unique branch, as in the example below, and ensure that it’s based on the ‘master’ branch (the default branch of the repository):

    On your computer, follow the checklist below to set up a local repository based on the remote repository in DevOps, using PowerShell:

    1. Create a local folder in which you want the repository to be located:
      • In your C:\Users\USERNAME folder of your computer, create a new folder. For the purpose of this exercise, we will call it ‘FolderName’. This is now the directory for the automation framework, which is currently not under version control and which you want to start controlling with Git.
    2. Direct Git to the local folder you’ve just created:
      • In PowerShell, type: cd C:\Users\USERNAME\FolderName . Press enter.
    3. Create a new Git repository in that folder:
      • Type: git init . Press enter. A hidden .git folder will appear in your new folder.
    4. Now we need to clone the automation framework into our local folder.
      • First, type: git config --global http.sslverify "false" . Press enter.
      • Then type: git clone https://webaddress . Press enter. Follow the prompts in PowerShell – (it may ask you for your username and then password of your DevOps or server account).
    5. Navigate to the TestAutomation folder:
      • Type: cd TestAutomation . Press enter.
    6. To view all of the branches, including the remote-tracking ones, type: git branch -a .

      • The green ‘master’ branch is the branch you are currently on, and is the default branch that Git has created for you in your local repository.
      • The remotes/origin/master branch is a branch named ‘master’ on the remote named ‘origin’ (in DevOps). It is referred to as ‘remotes/origin/master’ or ‘origin/master.’ (Link)
      • The ‘remotes/origin/HEAD’ is the default branch for the remote named ‘origin.’ (Link)
    7. Checkout the branch in which the most up-to-date automation framework is located: git checkout remotes/origin/refactor_login . (Link)

    8. You now have all of those files in your TestAutomation folder. However, what is a ‘detached HEAD’? A ‘detached HEAD’ is when you checkout a specific commit instead of a branch, which in the above example is commit 932651b (and you can cross-check this SHA-1 hash (the number) in DevOps).

    9. To push this to your branch in DevOps, type: git push -f origin HEAD:Charlotte (links: 1, 2). This forces the detached HEAD to be pushed into the Charlotte branch created in DevOps. If you navigate to the branch in DevOps, you should see all of the files there.
    10. Now you can start working on the automation framework on your computer.
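
    The checklist above can be rehearsed end-to-end in a throwaway environment. In this minimal sketch a local bare repository stands in for the DevOps remote, and ‘FolderName’ and ‘TestAutomation’ are the placeholder names from the steps; swap in your real https://webaddress when following along for real:

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the DevOps remote: a bare repo seeded with one commit
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/seed"
cd "$tmp/seed"
git config user.email you@example.com
git config user.name "You"
echo framework > test.txt
git add test.txt
git commit -qm "initial framework"
git push -q origin HEAD
cd "$tmp"

# Step 1: create the local folder
mkdir FolderName && cd FolderName
# Step 4: clone (git clone runs 'git init' for you, so step 3 is optional here)
git clone -q "$tmp/remote.git" TestAutomation
# Step 5: navigate into the cloned folder
cd TestAutomation
# Step 6: list local and remote-tracking branches
git branch -a
```

    The same commands work in PowerShell, since Git's syntax doesn't change between shells.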

    Pushing to your DevOps branch
    Once you’ve made changes to some of the files on your computer (in your local repository), such as configuring the test automation framework to your computer, you can use this section to push the changes to your DevOps branch.

    1. Open up PowerShell and navigate to your automation framework using something similar to cd YourFolder\TestAutomation and pressing enter.
    2. Type git status and press enter to show you what files have been modified:

    3. Then type git add -A (make sure the A is capital) and press enter to add them to the staging area.

    4. If you type git status and press enter, you will see that the files have been staged:

    5. Type git commit and press enter.
      • You will then be taken to vim where you are asked to type a message:

      • Type a meaningful message such as ‘Automation framework configured to Charlotte’s VM.’ Then press the escape button, type: :wq , and press enter.

    6. To (force) push this to your branch in DevOps, type: git push -f origin HEAD:Charlotte (where ‘Charlotte’ is the name of the branch that you created in DevOps) and press enter. PowerShell may ask you for your username followed by a password.
    7. To check that this has been pushed successfully:
      • Type in git status and press enter, where PowerShell will inform you that there’s nothing to commit:
      • Navigate to your branch in DevOps (and perhaps refresh the page if needed) and you should be able to see your latest push:
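
    If you’d like to rehearse this push cycle without touching the real DevOps server, here is a minimal sketch using a local bare repository as a stand-in remote (‘Charlotte’ is the example branch name from this post, and -m is used in place of the vim editor):

```shell
set -e
tmp=$(mktemp -d)

# Local bare repo standing in for the DevOps remote
git init -q --bare "$tmp/devops.git"
git clone -q "$tmp/devops.git" "$tmp/TestAutomation"
cd "$tmp/TestAutomation"
git config user.email you@example.com
git config user.name "You"

echo "browser=chrome" > config.ini                 # a local change to the framework
git status --short                                 # step 2: see what changed
git add -A                                         # step 3: stage everything
git commit -qm "Automation framework configured"   # step 5 (-m skips the vim editor)
git push -q -f origin HEAD:Charlotte               # step 6: force-push HEAD to the remote branch
git ls-remote --heads origin                       # the Charlotte branch now exists on the remote
```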

    Remote and local branches
    When first starting with Git and remote repositories, I made the mistake of believing that different branches were synonymous with different folders.
    You have to create local branches and work on those, because remote branches can’t be worked on directly. git fetch updates your remote-tracking branches, and you then merge those into your current local branch.

    To update from a remote repository, we can use git fetch origin HEAD:name-of-local-branch, which fetches the remote HEAD into a local branch with that name.

    However, when you checkout a remote branch, Git will tell you that you’re working in a detached HEAD state. Normally with Git we do not want to be working in a detached HEAD state but on branches: we would checkout a branch, work on it, commit the changes locally and then push those changes to the master branch. However, that’s not the way it works with Azure DevOps (which apparently behaves differently from other remote repositories), and so we have to get used to working in the detached HEAD state.

    Close PowerShell and reopen it. Navigate back to the TestAutomation folder (using cd folder-address-here and pressing enter), and type in git checkout master then press enter (this checks out the local master branch):

    Git is telling me that I’m on the master branch currently. I can double-check this by typing in git status and pressing enter, and/or I can type git branch -a to list all of the branches:

    In the above screenshot, the branch in green with the asterisk next to it is the branch we are currently on. The red branches starting with remotes/origin/DevOpsBranchName list the remote branches in Azure DevOps.

    Let’s say we want the content of the Charlotte branch in DevOps. If I type git pull remotes/origin/Charlotte and press enter, I get an error:

    This is because git pull expects the name of a remote repository (such as origin), not the name of a branch. The way I can get the files in one of these remote branches into my local folder is to type git checkout remotes/origin/Charlotte and press enter:

    Git is now telling me that I’ve checked out remotes/origin/Charlotte, and the files will be available locally in my computer’s folders. However, I am now in a detached HEAD state.

    A detached HEAD state occurs when a specific commit is checked out instead of a branch. HEAD is a pointer to whatever you currently have checked out: normally it points to the tip of a branch, and when it is detached it points directly to a commit instead, meaning you’re not sitting on any branch. Git is telling me that I’m now at commit 1d5e249 (the short form of the commit’s SHA-1 hash). If I navigate to the Charlotte branch in DevOps I can see that the last commit corresponds with this SHA and includes the accompanying message:

    So, you don’t check out the remote branch itself but the latest commit on that remote branch. Now, I can type git branch -a and press enter to view all of the branches:

    As before, anything in red that starts with ‘remotes/origin/…’ leads to one of the remote branches in Azure DevOps. The green ‘branch’ with the asterisk is where we’re currently sitting (it’s not really a branch but a detached HEAD state), and the white ‘master’ branch is the default local branch that is available for us to move onto should we choose. We can create other local branches, too. However, at the moment, we’re in the detached HEAD state.

    The message has advised us to make a new local branch so that we can retain the commits we create. To do this, we type git checkout -b new-branch-name and press enter. I’m going to create a new local branch called ‘CharlotteBranch’. You can see that I’m not on a detached HEAD anymore:

    Now I’m working on CharlotteBranch with all of the files from commit 1d5e249 locally stored in my computer.
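
    This detached HEAD to local branch flow is easy to reproduce in a throwaway repository. The sketch below checks out a commit by its SHA-1 hash and then rescues it onto a branch (‘CharlotteBranch’, as in the example above):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git config user.email you@example.com
git config user.name "You"
echo one > file.txt && git add file.txt && git commit -qm "first commit"
echo two > file.txt && git commit -qam "second commit"

sha=$(git rev-parse HEAD)      # full SHA-1 hash of the latest commit
git checkout -q "$sha"         # checking out a commit rather than a branch...
git symbolic-ref -q HEAD || echo "HEAD is detached"   # ...so HEAD no longer points to a branch

git checkout -q -b CharlotteBranch    # keep any commits we make on a real local branch
git symbolic-ref --short HEAD         # prints: CharlotteBranch
```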

    When using Git, it’s common to create a new branch for each new feature, bug, fix, or enhancement that you’re working on, in which each branch compartmentalises the commits related to a particular feature. Once completed, this is then pushed and/or merged into the master branch.

    Merge changes from another branch
    Note that git merge merges the specified branch into the currently active branch. This link helps with how to resolve any merge conflicts you may encounter.

  • Follow the instructions above to turn the detached HEAD (at the latest commit of the remote branch) into a local branch on your computer.
  • Then type: git checkout "name of branch you want to merge INTO" and press enter.
  • Then type: git merge "name of branch you want to merge FROM" and press enter. (Link)
  • Delete the local branch you no longer need by typing: git branch -d "local branch name". (Link)
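
    As a minimal sketch of those bullets in a throwaway repository (the branch names here are illustrative):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git config user.email you@example.com
git config user.name "You"
main=$(git symbolic-ref --short HEAD)   # default branch name varies between git versions

echo base > notes.txt && git add notes.txt && git commit -qm "base"

git checkout -q -b feature              # do the work on a side branch
echo extra >> notes.txt && git commit -qam "feature work"

git checkout -q "$main"                 # the branch you want to merge INTO
git merge -q feature                    # the branch you want to merge FROM
git branch -d feature                   # safe to delete: it is fully merged
```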
    Push changes to master DevOps remote branch
    After completing some work on your local branch (created by following the instructions in ‘Remote and local branches’ above), you will want to push your changes to the remote branches in DevOps:
  • To add all of the files you’ve changed, write: git add --all . Then press enter.
  • Then change to your local master branch: git checkout master . Press enter.
  • Merge the contents of the local branch (in which you worked on the files that you want to add to DevOps) to the local master branch: git merge "local-branch" . Press enter.
  • Commit the files: git commit . Press enter.
    • You will then be taken to vim where you are asked to type a message. Type a meaningful message such as ‘first commit to set up repository.’ Then press the escape button, and type in: :wq , and press enter.
  • Then push to master: git push origin master . Press enter.
  • To push this to another branch in DevOps, type: git push origin HEAD:RemoteBranch (where ‘RemoteBranch’ is the name of the branch that you created in DevOps) and press enter. PowerShell will ask for your username followed by your password.
    Pulling and merging changes
    At the start of each new working day, before starting work on your branch you may wish to pull changes from the master repository so that you’re working with the most up-to-date automation framework. However, it’s recommended not to use git pull, but to use git fetch and then git merge. Why? Because git pull is magical and combines many steps in one, and so when something goes wrong it’s more difficult to understand why or what has gone wrong. The other problem is that by both fetching and merging in one command, your working directory is updated without giving you a chance to examine the changes you’ve just brought into your repository.
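
    To see why fetch-then-merge is safer, this sketch uses a local bare repository as a stand-in remote and two clones: after git fetch your working tree is untouched, and you can inspect the incoming commits before merging deliberately (all names and file contents are illustrative):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/devops.git"

# First clone seeds the remote with a commit
git clone -q "$tmp/devops.git" "$tmp/colleague"
cd "$tmp/colleague"
git config user.email c@example.com
git config user.name "Colleague"
main=$(git symbolic-ref --short HEAD)
echo v1 > app.txt && git add app.txt && git commit -qm "v1"
git push -q origin HEAD

# Second clone plays the role of your VM
git clone -q "$tmp/devops.git" "$tmp/me"

# Meanwhile, the remote moves on
echo v2 > app.txt && git commit -qam "v2"
git push -q origin HEAD

cd "$tmp/me"
git fetch -q origin                       # updates origin/<branch> only
cat app.txt                               # still v1: the working tree is untouched
git log --oneline HEAD.."origin/$main"    # inspect the incoming commits first
git merge -q "origin/$main"               # then merge deliberately
cat app.txt                               # now v2
```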

    After logging onto your computer for the first time of a day, and you want to pull changes from the remote master branch in DevOps onto your VM, follow these instructions:

  • git add -A
  • git commit (this ensures that any files you’ve been working on are saved in a commit; type your message then press the escape button, type :wq and press enter)
  • git checkout master (unless you already were working on the local master branch)
  • git fetch (type in your username and password when prompted)
  • git pull

    After I pulled from the remote master branch, I got a merge conflict in the config.ini file:

    Upon navigating to the config.ini file, I can see the merge conflicts. To see the beginning of the merge conflict in your file, search the file for the conflict marker <<<<<<<. You’ll see the changes from the HEAD or base branch after the line <<<<<<< HEAD. Next, you’ll see =======, which divides your changes from the changes in the other branch, followed by >>>>>>> BRANCH-NAME.

    Decide if you want to keep only your branch’s changes, keep only the other branch’s changes, or make a brand new change, which may incorporate changes from both branches. Delete the conflict markers <<<<<<<, =======, >>>>>>> and make the changes you want in the final merge.

    Then follow these instructions:

  • git add config.ini (where config.ini is the file in which you’ve just sorted out the merge conflicts)
  • git commit (change the merge message then press the escape button, type :wq and press enter)
  • git pull (and follow the username and password prompts)
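
    The whole conflict cycle can be rehearsed in a throwaway repository. This sketch manufactures a conflict in config.ini and resolves it as described above (the file contents and branch name are illustrative):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git config user.email you@example.com
git config user.name "You"
main=$(git symbolic-ref --short HEAD)

echo "timeout=10" > config.ini && git add config.ini && git commit -qm "base"

git checkout -q -b colleague                 # a colleague's change to the same line
echo "timeout=30" > config.ini && git commit -qam "raise timeout to 30"

git checkout -q "$main"                      # my conflicting change
echo "timeout=20" > config.ini && git commit -qam "raise timeout to 20"

git merge colleague || true                  # CONFLICT (content): expected here
grep '<<<<<<<' config.ini                    # the conflict markers are now in the file

echo "timeout=30" > config.ini               # resolve: keep the colleague's value
git add config.ini                           # mark the conflict as resolved
git commit -qm "Merge branch 'colleague', keeping timeout=30"
```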

    Below is also an example of what your commands may look like to pull the master branch onto your VM and then push your changes back into it:

  • git checkout master
  • git fetch
  • git checkout -b new_branch_name origin/master
  • … make amendments to code …
  • git add -A
  • git commit -m "Description of changes"
  • git checkout master
  • git merge new_branch_name
  • git push origin master
    When writing my PhD thesis and other publications in LaTeX, our research team used SourceTree, a great free Git GUI that I’d recommend; GitHub was the remote repository, and it all seemed much more straightforward using a graphical interface than just the command line. So I hope that this is a clear enough guide to get started with cloning an Azure DevOps repository and navigating your way around version control with some of the basic commands.

    Below are some cool 😎 resources, too:

    👾 Kill All Defects: Agile Git Integration with GitWorkflows
    👾 DevConnected
    👾 Earth Lab: git clone, add, commit, push Intro version control git
    👾 Integralist: Git internals
    👾 Stack Abuse: Git: Merge Branch into Master
    👾 Andy Leonard: Azure DevOps, SSIS, and Git Part 0 – Getting Started
    👾 Cloud Skills: Getting Started with Git and Azure DevOps: The Ultimate Guide