CISpaces: AI for Intelligence analysts
Christina Mackenzie talks to Dr Alice Toniolo of St Andrews’ School of Computer Science to find out how a team of gifted computer scientists created an analysts’ decision-support tool using artificial intelligence
Intelligence analysts receive enormous amounts of frequently conflicting or incomplete information, which they need to make sense of and understand to reach one or more hypotheses and conclusions. These can often be vital for planning purposes and even operations, so it is crucial that the fullest meaning possible of the data can be extracted and tested. However, with so much information being generated on a daily basis, analysts need as much assistance as they can get. Thankfully, it has recently been demonstrated that artificial intelligence (AI) can offer significant support in not only making sense of these huge volumes of data, but also questioning the analysts’ reasoning about it and keeping track of the conclusions they come to.
One of the most interesting tools that has been developed is the CISpaces (Collaborative Intelligence Spaces) decision-support tool. It was built by an international team of computer scientists backed by the UK and US military, with advice from NATO experts. The team, led by Professor Timothy J Norman, was originally based at Aberdeen University in Scotland, UCLA in the US and the defence/aerospace contractor Honeywell. Funding came from the UK Ministry of Defence and the US Army Research Lab (ARL). Three of the developers have since moved on from Aberdeen. One of them, Dr Alice Toniolo, is now an AI lecturer at the School of Computer Science in St Andrews University, Scotland. NITECH caught up with her there to ask her about the paper she and a team wrote about the project and the potential for the tool.
ANALYSIS OF COMPETING HYPOTHESES
Toniolo explains the reasons for the CISpaces project: “Intelligence analysts have a huge amount of expertise, and we were trying to leverage that.” She adds that, “intelligence analysts are already helped by a lot of automation to collect, index and organize the information they get. But we sought to give them a method to help them construct more than one hypothesis, which we refer to as ‘analysis of competing hypotheses’. Basically, our aim was to help them by asking ‘have you considered this?’ And then making one or more suggestions for them to consider.”
Having concluded that there were “currently no tools or methods which allow analysts to combine the recording and interpretation of information, and that there is little understanding about how software tools can facilitate the hypotheses formation process”, the research team set out to construct such a tool. Fortunately, they had access to intelligence analysts and the support of two key defence organizations. “The project benefitted from significant time and effort from expert intelligence analysts from Dstl [the Defence Science and Technology Laboratory] and ARL, and we received valuable advice from NATO experts,” Toniolo adds.
HELP FROM THE ANCIENT GREEKS
The method allows for each hypothesis to be evaluated and given a score. “We based our work on theories of philosophy that date back to the Ancient Greeks,” Toniolo laughs. “We had to think about how people construct pros and cons. We were also interested in how people reason.”
The system tries to mitigate human bias and helps decide whether a chosen expert is really the person you’re looking for in a particular field. “There’s no point talking to a car expert when what you’re interested in is trains,” she says.
NATO’s Phoenix Alliance Ground Surveillance system has already begun adding to the huge volumes of data NATO collects (PHOTO: NCI Agency)
It’s a little bit like a more sophisticated version of the wall full of photographs and notes crisscrossed by lengths of string in various hues that you see in ‘whodunnit’ films. Except that CISpaces can only handle text, no images. “That’s because intelligence reports tend to be in textual format,” Toniolo explains.
CISpaces combines an array of AI methods such as argumentation theory (the interdisciplinary study of how conclusions can be supported or undermined by premises through logical reasoning), crowdsourced Bayesian analysis (a method of statistical inference in which Bayes’ theorem is used to update the probability for a hypothesis as more evidence or information becomes available) and provenance recording (understanding the process by which a result is generated).
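The Bayesian side of this can be illustrated with a minimal sketch. The function and numbers below are invented for illustration and are not taken from CISpaces itself; they simply show how Bayes’ theorem revises the probability of a hypothesis each time a new piece of evidence arrives.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H).

    Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E),
    where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).
    """
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Hypothetical example: an analyst initially rates a hypothesis at 30%.
# A new report would be quite likely (0.8) if the hypothesis were true,
# and unlikely (0.2) otherwise.
posterior = bayes_update(prior=0.3, likelihood_h=0.8, likelihood_not_h=0.2)
# 0.24 / (0.24 + 0.14) ≈ 0.632

# Further evidence can be chained: yesterday's posterior becomes
# today's prior.
posterior_2 = bayes_update(prior=posterior, likelihood_h=0.6,
                           likelihood_not_h=0.4)
```

Each update folds one observation into the running estimate, which is exactly the sense in which the probability of a hypothesis is “updated as more evidence or information becomes available”.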
The CISpaces developers received support from NATO experts (PHOTO: NCI Agency)
Toniolo says they combined two approaches. “The first was to analyse graphs, looking at the connections and evaluating conclusions. The second was to find the patterns of argumentation.” And then when premises and a conclusion are suggested, “we then asked questions to challenge those premises”.
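The patterns Toniolo describes resemble the textbook structure of an argumentation scheme: premises support a conclusion, and a fixed set of “critical questions” is used to challenge those premises. The sketch below uses the well-known expert-opinion scheme with hypothetical names and claims; it is an illustration of the general technique, not CISpaces’ actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Argument:
    """A scheme instance: premises supporting a conclusion, plus the
    critical questions an analyst should answer before accepting it."""
    premises: list
    conclusion: str
    critical_questions: list = field(default_factory=list)


# Hypothetical argument from expert opinion.
expert_arg = Argument(
    premises=[
        "Dr Smith is an expert in railway engineering",
        "Dr Smith asserts the bridge can carry freight trains",
    ],
    conclusion="The bridge can carry freight trains",
    critical_questions=[
        "Is the source an expert in the field the claim belongs to?",
        "Is the assertion consistent with what other experts say?",
        "Is the assertion based on evidence?",
    ],
)

# Any critical question left unanswered weakens the argument -- this is
# where a check such as 'a car expert is no authority on trains' bites.
open_challenges = list(expert_arg.critical_questions)
```

This also makes concrete the bias-mitigation point above: the first critical question is precisely the “car expert versus train expert” test, asked systematically rather than left to the analyst’s memory.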
The three key challenges the team faced were how to record the information, how to know which sources to trust and how to collect the information from crowdsourcing. The first version of the tool was ready in 2016, but the team was able to develop a second, more recent version known as CISpaces.org with funding from the UK Ministry of Defence’s Dstl Defence Accelerator. “The system is still at the prototype level, at Technology Readiness Level 3,” Toniolo says. This second version focuses on using natural language processing algorithms to extract factual claims from open information sources such as Twitter and Facebook. It has been made available, and there is further interest in the AI technologies underpinning CISpaces.
HUMAN-MACHINE COLLABORATION
In their paper Human-Machine Collaboration in Intelligence Analysis: An Expert Evaluation, the CISpaces developers explain that intelligence analysts agreed that “the AI methods implemented in CISpaces are useful in improving their daily activities, in particular thanks to the perceived improved utility of the outputs CISpaces generates”. Analysts also suggested that CISpaces has potential, particularly for collaborative and complex analysis, for training novice analysts and for maintaining an audit trail of the formation and selection of hypotheses. In addition, evaluation of the tool has demonstrated the potential impact that such a tool can have on the process of understanding complex situations, and how it can help focus human effort on identifying more credible interpretations of evidence.
The evaluation in the paper also highlights some drawbacks with the tool, but goes on to explain that “these are not due to the technologies underpinning the tool, but rather its lack of integration with existing organizational standards regarding input and output formats”.