- AI for Spectrum Imaging
- Object Classification, Detection, Recognition
- Visual Tracking, Localization, and Analytics
- Multimodal Spectral Image Learning
MetalScrap: A Multimodal and Attention-Based AI (Advanced CNN, Swin Transformer, and YOLO) for Automated and Accurate Metallic Scrap Inspection
The United States (U.S.) Army Materiel Command (AMC) and other DoD agencies define demilitarization (DEMIL) as the destruction of the functional capabilities and inherent military features of DoD materials, preventing further use of the equipment and materials for their originally intended military design or capabilities. During the DEMIL process, obsolete munitions are dismantled and incinerated together to form metal scrap. Incineration destroys the energetics in the metal scrap; however, it is not unusual for some pieces to retain energetics even after incineration because of hollow sections and vented cavities. It is therefore important to identify and destroy these energetics before releasing the metal scrap to commercial dealers for recycling. The current practice employed by the U.S. Army to ascertain the destruction of energetics is to have two independent, trained, and certified inspectors examine each piece of metal scrap. The inspectors classify the metal scrap as either Material Documented as Safe (MDAS) or Material Potentially Possessing Explosive Hazard (MPPEH). Human-based approaches have several limitations:
- Lack of automation due to human operations,
- Poor classification accuracy due to limited visual capability,
- Human judgment bias based on inspectors, and
- Low DEMIL time/cost-efficiency.
To address these challenges, MetalScrap takes advantage of X-ray technology, digital imaging, and advanced deep learning algorithms to provide an alternative metal scrap inspection method that is accurate, safe, and time-efficient. Specifically, MetalScrap develops an AI-based architecture (e.g., advanced CNN, Swin Transformer, YOLO) that provides metal scrap inspection, classification, energetic residue identification, and flexible GUI (graphical user interface)-based human control:
- MetalScrap uses multiview, multimodal images (e.g., transmission and/or backscatter X-ray) as a training dataset to precisely identify explosives in images of metal scrap and to classify such pieces as MPPEH in a matter of seconds. This removes the poor accuracy, human judgment bias, and safety risks inherent in human inspection.
- MetalScrap utilizes a leading You Only Look Once (YOLO) algorithm to analyze the energetic type, location, and quantity, forming a severity report for decision-making. MetalScrap-YOLO consists of multiple functions that provide segmentation and severity analysis based on adaptable material handling, safety, and inspection standards (e.g., DODI 4140.62, DODM 4140.72), which is important for DoD applications.
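The severity-reporting step above can be sketched as an aggregation pass over detector output. The detection format, energetic class names, confidence threshold, and decision rule below are illustrative assumptions, not the MetalScrap implementation.

```python
# Hypothetical sketch: aggregate YOLO-style detections from one scanned
# metal scrap into a severity report. The detection tuple layout, class
# names, threshold, and MPPEH/MDAS rule are illustrative assumptions.

def severity_report(detections, conf_threshold=0.5):
    """Summarize energetic-residue detections into a decision record.

    Each detection is (class_name, confidence, (x, y, w, h)) in pixels.
    """
    confident = [d for d in detections if d[1] >= conf_threshold]
    counts = {}
    for name, _conf, _box in confident:
        counts[name] = counts.get(name, 0) + 1
    # Simple illustrative rule: any confident energetic detection
    # classifies the scrap as MPPEH; otherwise it is MDAS.
    return {
        "classification": "MPPEH" if confident else "MDAS",
        "energetic_counts": counts,
        "locations": [box for _, _, box in confident],
    }

report = severity_report([
    ("residue", 0.91, (120, 80, 30, 30)),
    ("residue", 0.42, (300, 200, 25, 25)),  # below threshold, ignored
])
print(report["classification"])  # MPPEH: one confident detection remains
```

In a deployed pipeline the report would also carry the adaptable standard (e.g., DODI 4140.62 criteria) used to set the threshold and decision rule.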
As a software platform, MetalScrap provides a user interface (UI) that allows inspectors to review, control, and manage the entire metallic scrap inspection process. MetalScrap can transition to Army JMC, AMC, ARDEC, AMCOM, and the Army CCDC Armaments Center to automate and enhance the DEMIL process. It classifies metal scrap as MPPEH in a matter of seconds, automating AMC and JMC metal inspection without the need to label datasets, which reduces AMC and JMC metal scrap inspection labor costs. It also has a large market among customers with DoD-like requirements, e.g., metal scrap inspection for metal recycling.
AI-based Parcel/Package Recognition and Identification via Advanced Computer Vision
Instead of relying on human parcel sorting, the company is developing an automated parcel sorting system to save labor costs. In such a system, a robot picks one parcel from a parcel pool and places it onto a console connected to a conveyor belt. As tested, one robot can work as fast as several people, operating 24/7/365.
Multi-pick is a particular issue in which a robot picks two or more parcels/packages at the same time, resulting in multiple parcels being placed in a single conveyor slot. A human then has to re-sort them manually, paying close attention to the conveyor belt. It is therefore essential to prevent multi-picks in parcel and package processing.
In response, an AI-based vision system is proposed to address the multi-pick challenge. The system improves performance in both the pre-pick and post-pick stages. In the pre-pick stage, a novel salient parcel detection algorithm uses RGB and multispectral images to detect and localize salient parcels in the pool. In the post-pick stage, an innovative video-based detection method determines whether multiple parcels have been picked. Instead of a single image, it uses a short video to detect multi-picks caused by potential occlusion between parcels. The system targets a minimum performance of a 0.01% multi-pick probability.
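The post-pick idea above, using a short video rather than one frame so that a momentary occlusion does not trigger a false alarm, can be sketched as temporal smoothing over per-frame parcel counts. The per-frame counts would come from a detector; the window length and counts below are illustrative assumptions.

```python
# Hypothetical sketch: flag a multi-pick from per-frame parcel counts in a
# short post-pick video. Requiring a run of consecutive frames (rather
# than a single image) tolerates momentary occlusion between parcels;
# the window length of 3 frames is an illustrative assumption.

def is_multi_pick(frame_counts, window=3):
    """Return True if `window` consecutive frames each show >1 parcel."""
    run = 0
    for count in frame_counts:
        run = run + 1 if count > 1 else 0
        if run >= window:
            return True
    return False

# A single noisy two-parcel frame (occlusion artifact) is not flagged:
print(is_multi_pick([1, 2, 1, 1, 1]))  # False
# A sustained run of two-parcel frames is flagged as a multi-pick:
print(is_multi_pick([1, 2, 2, 2, 1]))  # True
```

In practice the window length would be tuned against the 0.01% multi-pick probability target, trading false alarms against missed multi-picks.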
Unlike traditional machine learning, AI using deep learning discovers the rich internal feature representations required for difficult tasks such as recognizing objects of interest or understanding the inherent properties of an object.