InfoBeyond

InfoBeyond Technology is an innovative company specializing in AI, Computer Vision, Communications, and Cybersecurity within the Information Technology industry.

Contact Info

320 Whittington PKWY, STE 303
Louisville, KY, USA 40222-4917
[email protected]
(502) 919 7050

Our Strength

Computer Vision and AI

Instead of relying on the hand-crafted features of traditional machine learning, deep-learning-based AI learns the rich internal feature representations required for difficult tasks such as recognizing objects of interest or understanding their inherent properties.

Our Solutions (1/3)

Object Classification, Detection, Recognition

MetalScrap: A Multimodal and Attention-Based AI (Advanced CNN, Swin Transformer, and YOLO) for Automated and Accurate Metallic Scrap Inspection

The United States (U.S.) Army Materiel Command (AMC) and other DoD agencies define demilitarization (DEMIL) as the destruction of the functional capabilities and inherent military features of DoD materials to prevent further use of the equipment and materials for their originally intended military design or capabilities. During the DEMIL process, all obsolete munitions are dismantled and incinerated together to form metal scraps. Incineration destroys the energetics in the metal scraps. However, it is not unusual for some metal scraps to retain energetics even after incineration, due to their hollowness and venting cavities. It is therefore important to identify and destroy these energetics before handing the metal scraps over to commercial dealers for recycling. The common practice employed by the U.S. Army to ascertain the destruction of energetics in metal scraps is to have each metal scrap inspected by two independent, trained, and certified inspectors. The inspectors classify the metal scraps as Material Documented as Safe (MDAS) or Material Potentially Possessing Explosive Hazard (MPPEH). Human-based approaches have several limitations:

  • Lack of automation due to reliance on manual operations,
  • Poor classification accuracy due to limited human visual capability,
  • Judgment bias that varies from inspector to inspector, and
  • Low DEMIL time/cost efficiency.

To address these challenges, MetalScrap takes advantage of X-ray technology, digital imaging, and advanced deep learning algorithms to provide an alternative metal scrap inspection method that is accurate, safe, and time-effective. In particular, MetalScrap develops an AI-based architecture (e.g., advanced CNN, Swin Transformer, YOLO) that provides metal scrap inspection, classification, energetic residue identification, and flexible graphical user interface (GUI)-based human control:

  • MetalScrap uses multiview, multimodal images (e.g., transmission and/or backscatter X-ray) as a training dataset to precisely identify explosives in images of metal scraps and accurately classify such metal scraps as MPPEH in a matter of seconds. This removes the poor accuracy, human judgment bias, and safety risks present in human inspection.
  • MetalScrap utilizes the leading You Only Look Once (YOLO) algorithm to analyze the energetic types, locations, and quantities to form a severity report for decision-making. MetalScrap-YOLO consists of multiple functions that provide segmentation and severity analysis based on adaptable material-handling, safety, and inspection standards (e.g., DODI 4140.62, DODM 4140.72), which is important for DoD applications.
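The decision step described above — labeling a scrap item MDAS or MPPEH from detector output and summarizing the findings for a severity report — can be sketched as follows. This is a minimal illustration, not MetalScrap's actual interface; the `(label, confidence)` detection format, the `"energetic"` label, and the confidence threshold are all assumptions.

```python
# Hypothetical sketch of the MDAS/MPPEH decision step after YOLO-style
# detection. The detection tuple format and threshold are assumptions,
# not MetalScrap's real interface.

def classify_scrap(detections, threshold=0.5):
    """Label one scrap item from its per-image detections.

    detections: list of (label, confidence) pairs produced by an object
    detector run on the X-ray image(s) of the scrap item.
    Returns "MPPEH" if any energetic residue is detected above the
    confidence threshold, otherwise "MDAS".
    """
    for label, confidence in detections:
        if label == "energetic" and confidence >= threshold:
            return "MPPEH"
    return "MDAS"

def severity_report(detections, threshold=0.5):
    """Summarize confident non-inert findings (type -> count) so an
    inspector can review the basis for a decision."""
    counts = {}
    for label, confidence in detections:
        if confidence >= threshold and label != "inert":
            counts[label] = counts.get(label, 0) + 1
    return counts
```

For example, `classify_scrap([("inert", 0.9), ("energetic", 0.8)])` returns `"MPPEH"`, since a single confident energetic detection is enough to flag the item for destruction rather than release.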

As a software platform, MetalScrap provides a user interface (UI) that allows inspectors to review, control, and manage the entire metallic scrap inspection process. MetalScrap can transition to Army JMC, AMC, ARDEC, AMCOM, and the Army CCDC Armaments Center to automate and enhance the DEMIL process. It classifies metal scraps as MPPEH in a matter of seconds, facilitating automatic AMC and JMC metal inspection without the need for dataset labeling, which reduces AMC and JMC metal scrap inspection labor costs. It also has a large market among customers with DoD-like requirements, e.g., metal scrap inspection for metal recycling.

Our Solutions (2/3)

Visual Tracking, Localization, and Analytics

A2TTA: Real-time Isotope Identification and Quantification using YOLO-based Neural Network for Advanced Atom Trap Trace Analysis

The Atom Trap Trace Analysis (ATTA) technique provides high sensitivity, single/multiple-atom detection capability, and superb selectivity, which allows the Defense Threat Reduction Agency (DTRA) and other DoD agencies to rapidly monitor nuclear activities, including nuclear fuel processing/recycling, underground or on-land nuclear weapon tests, accidental nuclear leaks, etc. Nuclear monitoring in a real-time or near-real-time fashion requires ultra-fast processing. Further, isotope detection must be accomplished accurately in various contexts, e.g., low/high signal-to-noise ratio (S/N) images, spurious events, vibrations, cosmic radiation, and operation on ocean floaters, unmanned aerial vehicles (UAVs), or orbital vehicles.

The current practice employed by DTRA is to apply a traditional numerical integration algorithm to the selected Region of Interest (ROI) of ATTA's atomic fluorescence image and derive the atom quantities via statistical fitting. Current approaches have several limitations:

  • Low detection/quantification accuracy due to spurious photon counts,
  • Additional uncertainties introduced during the quantification procedure,
  • Inability to handle ultra-low/high-abundance samples, and
  • Low efficiency due to prolonged processing/analysis turnaround times.

To address these challenges, A2TTA takes advantage of charge-coupled device (CCD)/electron-multiplying CCD (EMCCD) imaging technology and advanced deep learning methods to offer accurate, real-time inference for isotope identification and quantification. In particular, A2TTA develops AI-based architectures, e.g., You Only Look Once (YOLO), advanced Transformers, etc., to provide accurate atomic identification and atom/isotope quantification under a variety of complicated contexts and operational environments:

  • A2TTA uses the leading YOLO algorithm and effectively learns a spectrum of image features to precisely identify an atom's presence via classification, across the full range of count rates.
  • A2TTA performs deep learning to deliver the atom number using all imaging and statistical features, quantifying atoms over the full range of abundance levels.
  • A2TTA implements a state-of-the-art YOLO-based algorithm with improved optimization technology, which boosts time efficiency to achieve real-time performance.
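To make the detection task concrete, a traditional (non-learned) baseline for spotting atom fluorescence in a CCD/EMCCD frame can be sketched as simple thresholding plus local-maximum detection — the kind of hand-tuned counting that spurious photon counts defeat and that A2TTA's learned detector is meant to replace. This is purely illustrative; the grid format and threshold are assumptions, not A2TTA's pipeline.

```python
# Illustrative baseline only: count candidate atom signals in a 2D
# fluorescence-intensity grid by thresholding and strict 4-neighbor
# local-maximum detection. Real spurious counts, vibration, and cosmic
# radiation make such fixed rules fragile, motivating a learned detector.

def count_spots(image, threshold):
    """Count pixels that exceed `threshold` and are strict local maxima
    among their 4-neighbors; each such pixel is one candidate atom signal."""
    rows, cols = len(image), len(image[0])
    count = 0
    for i in range(rows):
        for j in range(cols):
            v = image[i][j]
            if v < threshold:
                continue
            neighbors = []
            if i > 0: neighbors.append(image[i - 1][j])
            if i < rows - 1: neighbors.append(image[i + 1][j])
            if j > 0: neighbors.append(image[i][j - 1])
            if j < cols - 1: neighbors.append(image[i][j + 1])
            if all(v > n for n in neighbors):
                count += 1
    return count
```

The weakness of this baseline is visible in its one free parameter: the same `threshold` that catches dim atoms in low-S/N frames also counts spurious photon events, which is exactly the accuracy limitation listed above.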

AI-Based Parcel/Package Recognition and Identification via Advanced Computer Vision

Instead of human parcel sorting, the company is developing an automated parcel sorting system to save labor costs. In such a system, a robot is designed to pick one parcel from a parcel pool and place it on a console connected to a conveyor belt. As tested, one robot can work as fast as several people, on a 24/7/365 basis.

Multi-pick is a particular issue in which a robot picks two or more parcels/packages at the same time, resulting in multiple parcels being placed in a single conveyor slot. Humans then have to re-sort them manually, with high attention, on the conveyor belt. It is therefore essential to prevent multi-pick in various parcel and package processes.

In response, an AI-based vision system is advocated to address the multi-pick challenge. The AI system improves performance from both the pre-pick and post-pick aspects. In the pre-pick process, a novel salient-parcel detection algorithm is proposed that uses RGB and multispectral images to detect and localize salient parcels in the pool. In the post-pick process, an innovative video-based detection method determines whether multiple parcels have been picked. Instead of a single image, it uses a short video to detect multi-picks caused by potential occlusion between parcels. The system targets a multi-pick probability of at most 0.01%.
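The reason a short video beats a single image is that an occluded second parcel may be invisible in any one frame but visible in another. That aggregation logic can be sketched as below; the per-frame parcel counts stand in for the output of a per-frame detector, and the function name and parameters are illustrative assumptions, not the system's actual design.

```python
# Sketch of the post-pick video check: aggregate per-frame parcel counts
# from a short video so that a second parcel occluded in some frames is
# still caught if it becomes visible in any (or at least `min_frames`)
# frame(s). The counts would come from a per-frame object detector.

def is_multi_pick(per_frame_counts, min_frames=1):
    """Flag a pick as multi-pick if at least `min_frames` frames of the
    short video show more than one parcel on the picker."""
    return sum(1 for count in per_frame_counts if count > 1) >= min_frames
```

Raising `min_frames` trades recall for robustness against single-frame detector glitches — one noisy frame showing a phantom second parcel no longer triggers a false multi-pick alarm.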

Our Solutions (3/3)

Image Reconstruction

Image/Signal Recovery or Reconstruction using Near-Optimal Matrix Completion Optimization under Noise

The images/signals captured from a device are often contaminated because the observed entries are collected in a noisy environment. Meanwhile, the images/signals can be unexpectedly corrupted for any number of reasons. Further, the images/signals may be downsampled (e.g., average downsampling, bicubic downsampling, or subsampling) with a given probability. These images/signals should be recovered with as little loss of detail and precision as possible; this process is called image/signal reconstruction. Given a highly incomplete image or signal dataset with noise, traditional approaches such as regression and statistics are very limited in their ability to exactly recover the missing entries, especially when the data are dominated by unknown entries (e.g., 50% or more).

Under the Navy’s support, an image/signaling recovery or reconstruction method is developed resorting to Near-Optimal Matrix Completion (NOMC) technology. Especially, NOMC especially is a reliable matrix completion method that a low-rank image or signaling matrix could be precisely recovered with a high probability from a very low number of non-zero entries. Therefore, image/signal recovery becomes a matrix completion optimization problem with the consideration of noises. It addresses the limitation of current approach in processing the noisy matrix where the known entries are sampled in a noisy environment or they are contaminated by a number of environmental factors. In practical engineering applications, observation noises are everywhere and it is critical to “denoise” the negative impacts in the matrix completion algorithms. For a given low-rank data matrix, NOMC is able to recover the original matrix with the average Frobenius norm error of 10% while 80% of data entries are unknown. It can effectively reconstruct the original image with high quality from the downsampled image even if 50% image pixels are randomly removed.