Logos Technologies, best known for developing sensors for aerostats and unmanned aerial vehicles, has developed an image processor small enough to fit into a Group II unmanned aircraft system, yet capable of processing about a billion pixels per second.
The company has combined graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and general-purpose processors into a 960 g box. The multimodal edge processor can handle the processing for any sensor, John Marion, Logos Technologies president, told Jane’s at the annual Association of the United States Army (AUSA) exhibition held from 8 to 10 October in Washington, DC.
Soldiers and operators must contend with huge data sets from airborne and space-borne sensors alone: a 100-megapixel wide-area motion imagery (WAMI) system running at 2 Hz can generate more than 1 terabyte of data per hour.
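The quoted WAMI data rate can be checked with back-of-envelope arithmetic. The sketch below assumes uncompressed 16-bit (2-byte) pixels, a figure not stated in the article; at that bit depth the sensor produces roughly 1.4 TB per hour, consistent with the "more than 1 terabyte" claim.

```python
# Back-of-envelope check of the WAMI data rate quoted above.
# Assumption (not from the article): 16-bit (2-byte) raw pixels, no compression.

MEGAPIXELS = 100       # sensor resolution: 100 megapixels per frame
FRAME_RATE_HZ = 2      # frames per second
BYTES_PER_PIXEL = 2    # assumed 16-bit pixel depth

pixels_per_second = MEGAPIXELS * 1_000_000 * FRAME_RATE_HZ
bytes_per_hour = pixels_per_second * BYTES_PER_PIXEL * 3600
terabytes_per_hour = bytes_per_hour / 1e12

print(f"{terabytes_per_hour:.2f} TB/hour")  # 1.44 TB/hour under these assumptions
```

At 12-bit pixel depth the figure drops to about 1.08 TB/hour, so the article's ">1 TB/hour" holds for typical raw WAMI bit depths.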
The high-performance multimodal edge processor lets operators process sensor data on board an aircraft in real time and react to it immediately.
For WAMI, the multimodal edge processor can process about a billion pixels per second; for hyperspectral bands, up to 3 million spectra per second; and for Light Detection and Ranging (LIDAR) returns, up to 6 billion points per second, Marion noted.
“This is a very adaptable processing unit,” he added.
During the 18 months Logos spent developing the multimodal edge processor, the company incorporated the latest GPU and FPGA upgrades, Marion said.
He continued that the system – as built – is very nearly space qualified. “That is the advantage of the new systems, they are already radiation tolerant. There are a couple of small steps and a little bit of testing that would need to be done. But this could be a processor for space when we are done.”