Beyond human endeavour: The application of artificial intelligence in command-and-control
12 September 2022
by Giles Ebbutt
Systematic's data-centric AI-based SitaWare Insight being viewed by UK staff. SitaWare Insight is intended to support staff across intelligence, planning, and operations at all levels of command. It enables data from all domains to be combined and exploited. (Systematic)
Artificial intelligence (AI) can support commanders and staffs in their roles in numerous ways. It can ease the cognitive burden and reduce workloads through its ability to process vast amounts of information far more rapidly, consistently, and accurately than a human being. This is particularly applicable to intelligence analysis, which, as a major contributor to situational awareness, is a fundamental part of the process of command-and-control (C2).
Threat assessment, course of action analysis (CoAA), and the evaluation of likely effects from particular actions are all areas where AI can help refine the options available to commanders and staffs and support decision making across all command levels.
French company Preligens has applied AI for intelligence analysis in products that aggregate information from multiple sources of intelligence, surveillance, and reconnaissance (ISR) data, including satellite imagery, infrared imagery, full-motion video, and text. AI is used to accelerate the analysis process, with workflows, processes, and outputs that adhere to NATO Standardization Agreement (STANAG) and other standard formats.
At the Eurosatory 2022 exhibition in Paris, Arnaud Guérin, Preligens CEO, explained that the objective is to take unstructured data and make it searchable by an AI algorithm. “AI is very good at searching large quantities of data for specific items, so it speeds up the analyst's task,” he said. “However, you have to be careful to ask the right question in order to avoid bias and an incomplete result.”
The AI can search text, not only for specific references but to find these in particular contexts, relationships, or meanings. It searches imagery to detect, classify, and identify military objects of interest. To do this, the algorithm draws on a database of more than nine million reference images that is constantly being fed with new data to train it further.
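As a much simplified illustration of what making unstructured text searchable involves at the lowest level, the hypothetical sketch below builds an inverted index over free-text reports. Preligens' actual algorithms are far more sophisticated, handling context, relationships, and meaning rather than bare keywords; the report names and contents here are invented.

```python
from collections import defaultdict

def build_index(documents):
    """Build an inverted index: token -> set of document ids. Making raw
    text searchable this way is the (much simplified) first step before
    any semantic or contextual AI search."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, *terms):
    """Return ids of documents containing every query term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

docs = {
    "rpt-001": "convoy observed moving north on route alpha",
    "rpt-002": "bridge on route alpha damaged",
    "rpt-003": "convoy halted at grid 1234",
}
print(search(build_index(docs), "convoy", "route"))  # {'rpt-001'}
```

Even this keyword-only version shows why speed matters: the index is built once, after which each analyst query is a cheap set intersection rather than a re-read of every document.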
Two examples of Preligens products are ROBIN and ZEBRA. ROBIN is a NATO STANAG 3596-compliant optical satellite imagery monitoring tool that can leverage commercial or sovereign data. Its customisable alerting system can be set up on pattern analysis to cue analysts to key observed elements. The software is deployable or can be hosted in the cloud.
ZEBRA is an automatic AI solution for military mapping that can detect and vectorise roads and buildings from satellite images and create maps of urban and rural areas. When supporting disaster relief operations, it can be used to rapidly detect changes and assess the extent and impact of damage.
Preligens software is used by NATO, the European Union (EU), France, Japan, the UK, and the US. It will be installed at up to 20 sites in France by the end of 2022, and Guérin said that in 2023 there will be a deployable capability, with an early instance being for the French aircraft carrier Charles de Gaulle.
In 2021 Preligens and the Joint Forces Imagery Formation and Interpretation Centre (JFIFIC) in the French Directorate of Military Intelligence (Direction du Renseignement Militaire: DRM) established project TAIIA (Traitement et Analyse d'Images par Intelligence Artificielle – Artificial Intelligence Image Processing and Analysis), which intends to build a tailored automatic activity detection tool on chosen sites of interest.
The platform receives images from France's new Airbus D&S-built Composante Spatiale Optique (CSO) three-satellite high-resolution electro-optical/infrared (EO/IR) constellation. The Preligens AI identifies objects of interest using its algorithms and image database, particularly on specific sites of interest, enabling analysts to concentrate on those tasks that provide the most value, with alert notifications based on analyst-set rules.
C2 software specialist Systematic, which produces the widely used SitaWare battle management system (BMS), has identified five advantages that AI brings to the battlefield. The first is that its ability to handle vast volumes of complex data will streamline and accelerate decision making processes spanning all levels of command, acting as a force multiplier for commanders, particularly with the emerging requirements of multidomain operations (MDO).
The second advantage is the contribution AI can make to operational planning support. Besides terrain analysis and the ability to consider a wide range of planning factors, Systematic suggests that the “technology will not only enable commanders to quickly access and consider a much wider range of data than is possible at present, it also promises intelligent and nuanced support”.
Third, Systematic suggests that AI can enable commanders to focus on conducting an operation rather than on managing systems, particularly at the tactical level. “AI tools that can access and analyse data sets on previous attacks, likely enemy tactics, or communications blackspots, for example, could provide commanders with a far greater appreciation of what needs to be considered in planning [or conducting] an operation.”
The fourth advantage Systematic identifies is that “AI will ultimately have the greatest impact in instances where it is emulating a human's abilities rather than just those of the human brain – that is, where it is able to assess information in the same way as a human. AI's ability to conduct sensor fusion and track correlation – drawing on a wide range of inputs and far quicker than human operators – will bring a step change in capabilities”.
Systematic adds that this “could be especially beneficial at the tactical level … pattern-of-life analysis tools could greatly enhance situational awareness. [Through the analysis of] video footage and sensor data collected passively, software could alert a commander to extraordinary circumstances – such as changes in the environment or an increase in the number of potential combatants – and infer if an attack is likely”.
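A toy illustration of the kind of pattern-of-life alerting described above (the function, data, and threshold are invented for illustration, not Systematic's implementation): flag an observation when it deviates sharply from the historical baseline.

```python
from statistics import mean, stdev

def pattern_of_life_alert(history, current, threshold=3.0):
    """Flag a reading as anomalous when it deviates from the historical
    baseline by more than `threshold` standard deviations (a simple
    z-score test; real systems use far richer behavioural models)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hourly counts of potential combatants observed at a watched location.
baseline = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
print(pattern_of_life_alert(baseline, 6))   # False: within normal variation
print(pattern_of_life_alert(baseline, 40))  # True: sudden surge -> alert
```

The point of the sketch is the shape of the workflow, passive collection feeding a baseline, with alerts only on departures from it, which is what lets software watch continuously while the commander attends to other tasks.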
Finally, the company identifies that speed of action is vital against a near-peer threat, and makes the proposition that the use of cloud infrastructure is essential to ensure that elements operating at the tactical level have on-demand access to advanced AI capabilities.
Hans Jørgen Bohlbro, vice-president, defence product management at Systematic, said that C2 was moving steadily towards a data-centric model. Geospatial digitisation had first enabled a map-centric model, he explained, and as network capability increased, that had shifted to a network-centric C2 model.
He noted that most processes have been digitised and there is an ever-increasing number of data sources. Analysing and understanding all the data being received will help to achieve information superiority, and that is where AI is fundamental.
“But it is not possible to do manual human processing of all the data. However, AI can mimic many ‘human' cognitive skills and do this infinitely faster and with greater consistency. It can help to sort the ‘signal from the noise'. It will not make mistakes, get distracted, or tire.” He cited video and document analysis as an example, noting that AI can make large quantities of data searchable and positing that “this will become really important”.
Bohlbro explained that while AI can predict tracks, recognise patterns, and apply doctrine to provide threat analysis and prediction, this is an iterative process as the AI learns from experience with reinforcement learning techniques used to refine the model.
He also noted that it is important to keep teaching a “new normal”, where the base assumptions underlying the calculations change. Bohlbro said that availability of data can be a challenge in training AI models. “You need a significant amount of defence-specific data, and collection and classification restrictions can make this difficult.”
Addressing CoAA, Bohlbro said that for commanders to trust the outcomes and any recommendations, they need an understanding of what has been taken into account and what has not. “There needs to be transparency, although that doesn't alter a commander's basic assessment of a recommendation, which is ‘does it seem reasonable?'”
He added that CoAA can also provide alternative conclusions based on differing criteria, such as speed of execution against resources required or likely casualties.
Systematic has developed two products that employ AI to support C2. SitaWare's AI Assisted Toolbox is designed to ease the workload of operators working on developing command layers in the software's map interface. The AI algorithm automatically adapts itself based on the decisions that users make in choosing symbology for given scenarios or areas of operation.
It will then present the user with the most common symbology and that which is most likely to be required. For example, if a command layer in a particular area is typically composed of infantry units, then the associated symbology is prioritised over others. This serves to streamline and accelerate the process, as well as helping to limit errors.
Systematic suggests that “while the benefit of the AI Assisted Toolbox at higher command levels, where more detailed, labour-intensive planning is carried out, is clear, the algorithm can greatly aid users at the tactical level. [For example,] simplifying and speeding up the interaction that a dismounted commander requires with their mobile device minimises the amount of time that they [are not physically alert to] their surroundings”.
Systematic has also developed SitaWare Insight. Bohlbro explained that this is a data-centric solution, which is “able to take any type of data, whether structured, semi-structured, or unstructured, and make it available in a federated data fabric, which then allows analysis. It integrates a wide range of military and non-military data sources including images, video, documents, and sensor data”. Bohlbro said that SitaWare Insight is intended to support staff across intelligence, planning, and operations at all levels of command and that it enables data from all domains to be combined and then exploited. “It's all about understanding the data and understanding what you're seeing,” he said.
Insight is provided as an add-on to SitaWare Headquarters (HQ), the C2 software intended primarily for use in static HQs and command posts, and is transparent to the user. It has been built with an application programming interface (API) and a software development kit to enable users to customise their installations. Bohlbro said that it also introduces a new intelligence workflow, tying together the operational and intelligence cycles. Future enhancements, he said, include object recognition, anomaly detection, and natural language processing.
AI is employed to drive activity in some constructive simulations used in command and staff training (CAST). The algorithms in MASA's SWORD constructive simulation software, for example, which had their origins in the video and serious games market, provide an intelligent simulation of military activity, with simulated intelligent and autonomous units following doctrine-compliant courses of action once they receive operational orders.
Units can execute these orders autonomously without additional input from the players, while adapting their behaviour accordingly as the situation evolves. This behaviour can be customised to match the specificity of any doctrine: vehicle speeds; weapon systems performance and sensor accuracy; unit composition, basic loads, and logistics systems; and unit behaviours and missions.
This simulation capability can be leveraged for CoAA to support operational planning, and it has been integrated with SitaWare. Hyssos Tech has also integrated SWORD with its AI-driven Sketch-Thru-Plan (STP) system, using the Command and Control Systems – Simulation Systems Interoperation (C2SIM) standard.
According to Hyssos, the STP “natural language intelligent mission planning interface solution lets warfighters develop CoA by simply speaking and sketching, with no keyboard or mouse required”. The plans thus developed can then be exported to SWORD using C2SIM and then run by the simulation.
These sorts of tools enable a commander to make a plan in a BMS and then run it a number of times at increased speed to assess the possible outcomes, using current intelligence data and knowledge of opposition doctrine. It enables staffs to run “what if” scenarios, changing the plan to achieve the most desirable outcome.
This process can be taken a step further by creating a synthetic copy of the real world for experimentation. Mike Raker, chief technology officer for Improbable Defence, said that the classic use of AI is to automate existing human tasks, citing image identification and classification as a good example. He explained that AI can also be used in the world of synthetics to complement its use in the real world. Raker noted that, unusually, the defence and security world does not use synthetics to train AI, which is common in other sectors. He cited the example of self-driving cars, which can be ‘taught' using a photo-realistic environment in which dangerous or damaging events and situations can be reproduced without the risks of doing this in the real world. “This is cheaper, safer, and more effective,” he said, “but it's much more difficult to do this in the defence environment”.
However, “we can create virtual representations of things that occur in the real world, such as military systems, critical infrastructure, or human communities, and then use AI in different ways in C2 systems to provide decision support as part of the decision/action cycle”, he said, adding that this was without replacing humans in the decision loop. “Our aim is to help planners and decision makers reduce the bubble of uncertainty by allowing them to explore options and then take the final decision themselves,” he said.
“Using synthetics can help to identify what the major variables or elements are that need to be considered, can help to focus on the priorities, and can reduce a huge range of options to a more targeted number.”
Improbable has developed a synthetic environment development platform called Skyral, described at its launch as a “platform-enabled ecosystem of technologies and services that supports the rapid development, deployment, and ongoing evolution of synthetic environments. [It] comprises a suite of proprietary tools and technologies that provide defence organisations, third-party developers, and systems integrators with everything they need to develop, deploy, and sustain complex and realistic synthetic solutions”.
“We're on the cusp of breaking the decision assistance field wide open,” Raker said. He explained that synthetics can be used to better explore the options that commanders might take and to evaluate those options more quickly. This is particularly true in the case of the cascading consequences of some CoAs.
Raker said that first-order effects resulting from a particular CoA can be fairly accurately predicted, but second- and third-order effects can be “more ephemeral and mysterious”. He said that synthetics can be effective in helping decision makers go beyond primary effects to secondary and tertiary ones and show what happens to adjacent systems following a particular action.
This could include, for example, an AI-driven analysis of the effect on population sentiment of a kinetic strike on infrastructure, which in turn affected the wider provision of services. He noted that it could show the impact of a tactical action at the operational level, perhaps through the AI-modelled impact of social media on a synthetic representation of the population.
“As the world gets more complex and interrelated, there needs to be greater understanding of how different domains interact,” Raker said. “The joint all-domain C2 (JADC2) and multidomain operations (MDO) initiatives demonstrate how important this is becoming. You need synthetics to help develop this understanding, particularly under the short timelines available to military decision makers.”
Raker highlighted the issue that the data to create a synthetic representation may not be complete, but said that the model did not have to be perfect to have utility. In fact, showing where there were gaps in the data to support demonstrating second- or third-order effects could inform intelligence requirements. Once additional data is collected against these requirements, the model can then be updated to support more accurate forecasting. However, he emphasised the importance of using the latest data to build models. “It's the speed of relevance that's important, not just the model itself,” he said.
He also noted that AI can be used to fill in data gaps. One way is to use historical data where “there is a reasonable corpus [of historical data] that can be used for current purposes”. Another is to use “techniques like procedural generation”, used in the gaming industry to create content, to generate data.
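Procedural generation, in its simplest form, means producing plausible content from rules plus seeded randomness rather than from collection. The hypothetical sketch below fabricates synthetic vehicle tracks as short random walks, one crude way to fill a gap in movement data (the function, area, and step sizes are invented for illustration).

```python
import random

def generate_synthetic_tracks(n, area=(0.0, 10.0), steps=5, seed=0):
    """Procedurally generate plausible vehicle tracks to fill gaps in
    collected data: each track is a random start point within the area
    followed by a short random walk."""
    rng = random.Random(seed)
    tracks = []
    for _ in range(n):
        x, y = rng.uniform(*area), rng.uniform(*area)
        track = [(x, y)]
        for _ in range(steps):
            x += rng.uniform(-0.5, 0.5)  # bounded step per time interval
            y += rng.uniform(-0.5, 0.5)
            track.append((x, y))
        tracks.append(track)
    return tracks

tracks = generate_synthetic_tracks(3)
```

Because the generator is seeded, the same synthetic data can be regenerated exactly, which matters when an experiment or a model-training run has to be reproducible.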
He said that AI and synthetics can be used to forecast possible developments in a common operational picture (COP), noting that the COP is a snapshot of “what is happening now, not what is happening next”. He explained that “if I can take the COP data in, make a synthetic representation, [and] allocate behaviours and concepts of operations, then I can forecast possible futures”.
“What I need to do is turn the COP into a virtual world representing the present so I can effectively fast forward and predict what the COP might look like in different time increments.”
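The fast forward described above can be illustrated in its most basic form as dead reckoning: project each track in the COP forward along its last known velocity. Real synthetic environments model behaviours and interactions, not just kinematics; the function and field names here are invented for illustration.

```python
def fast_forward_cop(tracks, minutes):
    """Dead-reckon each COP track forward in time using its last known
    course and speed -- the simplest possible 'fast forward'."""
    projected = []
    for t in tracks:
        projected.append({
            "id": t["id"],
            "x": t["x"] + t["vx"] * minutes,
            "y": t["y"] + t["vy"] * minutes,
        })
    return projected

# One track from the current COP, with velocity in km per minute.
cop_now = [{"id": "A1", "x": 0.0, "y": 0.0, "vx": 0.5, "vy": 0.2}]
for horizon in (15, 30, 60):  # predicted COP at several time increments
    print(horizon, fast_forward_cop(cop_now, horizon))
```

The gap between this sketch and what Raker describes is the allocation of behaviours and concepts of operations: a synthetic environment would change a track's course when doctrine, terrain, or other actors make it plausible to do so, rather than extrapolating blindly.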
Raker said that generating trust in the process revolved around rigorous verification and validation of the synthetic model and the modelling process. The design needs to be user-centred and it needs to be explainable and understandable, he added. Trust grows over time, he said, so the process needs to be constantly used during training.
Bradley Allsop, head of Raytheon UK's Strategic Research, offered a similar assessment. He said that AI has application in target classification, and therefore threat assessment; in generating smart forces for constructive simulation; and in the management of resources.
However, he emphasised the importance of keeping a human-in-the-loop because of the human “ability to interpret”. He said that AI can be allowed to make decisions where the consequences are well understood, but where there is less certainty, there needs to be human involvement, particularly where kinetic activity is concerned.
Allsop said that Raytheon had “done a lot of work” on how AI decisions are made when data is uncertain and explained that this could involve using an array of different models, which he called the ensemble method. “It doesn't rely on one particular system but an entire community,” he said. He added that getting data to the right place in the right format is a significant issue.
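The ensemble method, in its simplest form, combines several independent models and takes a vote, with the vote margin giving a rough confidence measure under uncertain data. The sketch below is a minimal illustration of that idea; the classifiers, thresholds, and observation fields are hypothetical, not Raytheon's.

```python
from collections import Counter

def ensemble_classify(models, observation):
    """Majority vote across an ensemble of models, with the vote margin
    serving as a crude confidence measure under uncertain data."""
    votes = Counter(model(observation) for model in models)
    label, count = votes.most_common(1)[0]
    return label, count / len(models)

# Three hypothetical classifiers with different decision rules.
models = [
    lambda obs: "threat" if obs["speed"] > 200 else "neutral",
    lambda obs: "threat" if obs["heading_in"] else "neutral",
    lambda obs: "threat" if obs["speed"] > 150 and obs["heading_in"] else "neutral",
]
label, confidence = ensemble_classify(models, {"speed": 180, "heading_in": True})
print(label, confidence)  # 'threat' from 2 of 3 models
```

As Allsop puts it, the assessment does not rely on one particular system but on a community: a single model with a bad threshold is outvoted rather than trusted outright.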
Allsop noted the use of reinforcement learning to train AI algorithms using historical data. He also said that this could be achieved in specific scenarios by only providing the AI with a desired end-state so that “it learns as it moves through the scenario”. Human input can provide reward for actions taken according to different priorities. He explained that this provided the AI with experience that it can then apply when faced with a similar scenario in future.
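Learning from nothing but a desired end-state reward is the core of the approach Allsop describes. The sketch below is a textbook tabular Q-learning toy, not any vendor's system: the agent is rewarded only for reaching the final cell of a small one-dimensional world, yet it learns a policy that moves towards it.

```python
import random

def train(n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning where the only reward is reaching the desired
    end state (the last cell of a one-dimensional grid)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:                      # explore
                a = rng.randrange(2)
            else:                                       # exploit
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            reward = 1.0 if s2 == n_states - 1 else 0.0  # end-state reward only
            q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(6)]
print(policy)  # each non-terminal state has learned to move right
```

The reward shaping Allsop mentions, human input rewarding actions according to different priorities, would replace the single end-state reward here with graded rewards along the way, steering which of many successful behaviours the agent settles on.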
Matthew George, head of Engineering, Cyber, Space & Training at Raytheon UK, also raised the issue of trust in the output from AI-enabled processes, suggesting that this falls into two areas. He said that one of these is transparency of process. “You need to enable someone who isn't a data scientist to understand the way the process works.” However, he added that trust “is really going to be built when commanders are using the capability in a representative training environment, which is actually how you build faith in any piece of military equipment”.
Building trust is a factor intrinsic to the successful utilisation of artificial intelligence and the introduction of autonomous systems across the spectrum of military activity. Christina Balis, global campaign director, Training and Mission Rehearsal at QinetiQ, described it as a multilayered problem, as trust can either be focused on the veracity of the data or on the “thought processes” of an artificial agent. “Commanders need to know how to interrogate the data and what questions to ask,” she said.
In terms of C2, Balis identifies a looming systemic problem. “We're going to get quite far in technology terms quite soon, but we're not going to have the human institution elements in place to exploit it.” In a Royal United Services Institute (RUSI) occasional paper, Trust in AI: Rethinking Future Command, she and Paul O'Neill, director of military sciences at RUSI, note that “[it is likely that] both command and control responsibilities will increasingly be shared between humans and AI systems in ways that may be hard to envision at present”.
The paper identifies five main ‘trust points' where “the question of having an appropriate level of trust is crucial”. These are: deployment trust, the purpose for which AI is used; data trust, the data inputs being used; process trust, how the data is processed; output trust, the outputs generated by the AI; and organisational system trust, the overall ecosystem for optimising the use of AI.
Noting that trust will never be absolute, Balis said that this does not necessarily matter, provided risks are reduced by having trained human operators and the right institutional structures. “You need to get to the point where you're a constructive partner and critic to AI,” she said.
Addressing this institutional issue, Balis observed that C2 is based on long-established processes and that the introduction of AI “could change everything. HQs are just not set up to exploit AI. We have digital technology but analog ways of operating, and we need to challenge the ways we organise ourselves. It is possible that AI could be better at doing some of this stuff and we could use human talent in other areas. We need to bring diversity of thought to the problem”.
Balis suggested that “you might end up with different sorts of HQ at different levels of command” depending on the application of AI, adding that experiments were being conducted to investigate this. She highlighted the British Army's experimentation programme as part of Project Theia, its overarching digitalisation programme.
The RUSI paper notes that “a thought experiment in automating the intelligence assessment process of the UK Permanent Joint Headquarters (PJHQ) identified opportunities to replace large numbers of staff, accelerate the headquarters' battle rhythm, and allow horizontal sharing of information using automatic summarisation and natural language processing. Testing this in an operational deployment, the UK's 20th Armoured Infantry Brigade Combat Team shortened parts of the planning process tenfold”.
It concludes that “when the military commander's role shifts from one of controller to that of teammate, when we can no longer ascribe only a supporting function to artificial agents, then we need to fundamentally rethink the role of humans and the structures of our institutions … we need to reassess the conditions for and implications of trust in human–machine decision making”.
The paper adds that “…AI requires commanders who can make sense of complexity, frame problems, and ask the right questions suited to the circumstances”.