Through its Explainable Artificial Intelligence (XAI) programme, the US Defense Advanced Research Projects Agency (DARPA) is working on technologies to enable autonomous systems to better explain their actions.
In situations where autonomous systems flag suspicious activity or material requiring further examination, DARPA has determined it would be useful for analysts to receive an explanation of why a system brought a particular photograph, piece of data, or person to their attention, David Gunning, programme manager in DARPA's Information Innovation Office (I2O), told Jane's.
According to DARPA, XAI aims to “produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”.