LG Electronics Partners With Applied Intuition to Train a Camera System for Autonomous Mobile Robots With Synthetic Data

Applied Intuition and LG Electronics are partnering to accelerate the development of camera systems for autonomous mobile robots (AMRs) with synthetic training data.
Apr 12, 2023

LG is a global leader in technological innovation across various fields, including home appliances, electronic products, and automobile parts. The company’s Advanced Robotics Laboratory develops AMRs for indoor and outdoor use cases. Applied provides software solutions to safely develop, test, and deploy autonomous systems at scale. LG’s Advanced Robotics Lab uses Applied’s Synthetic Datasets to develop and test its computer vision algorithms.

“LG is one of the most well-known innovators in the technology industry,” said Qasar Younis, CEO and Co-Founder of Applied Intuition. “The company’s Advanced Robotics Lab recognizes the importance of synthetic data for successful perception algorithm training. We are proud to collaborate with the team to facilitate faster and more cost-effective camera system development.”

Synthetic camera image (left) and depth image (right) of an urban outdoor scene.

The Challenges of Real-World Training Data

Machine learning (ML) algorithms for an AMR’s perception system must be trained and tested on large amounts of diverse labeled data before they perform well enough to be deployed in a production environment. Collecting this training data in the real world is often time-consuming, costly, and dangerous.

Labeling the training data presents its own challenges, especially when high-quality ground truth is impossible to obtain. Depth and optical flow labels for camera images, which are typically estimated from lidar returns, are two such examples. Lidar point clouds are far less dense than the corresponding camera images, so the estimated ground truth is sparse: many camera pixels lack depth or optical flow values entirely. Training on this sparse ground truth limits how much ML models can benefit from the data.
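To see why lidar-derived depth labels are sparse, consider projecting a point cloud into the camera image: each 3D point covers at most one pixel, and a typical scan has orders of magnitude fewer points than the image has pixels. The sketch below illustrates this with a simple pinhole projection; all camera parameters and helper names are illustrative assumptions, not details from LG's or Applied's systems.

```python
# Minimal sketch of projecting a lidar point cloud into a camera image to
# estimate per-pixel depth. All intrinsics here are hypothetical examples.

def project_to_pixel(point, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (x, y, z) in camera coordinates."""
    x, y, z = point
    if z <= 0:  # point is behind the camera
        return None
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))
    return u, v

def sparse_depth_map(points, width, height, fx, fy, cx, cy):
    """Build a {(u, v): depth} map; most pixels end up with no depth at all."""
    depth = {}
    for p in points:
        uv = project_to_pixel(p, fx, fy, cx, cy)
        if uv is None:
            continue
        u, v = uv
        if 0 <= u < width and 0 <= v < height:
            # If several points land on the same pixel, keep the nearest one.
            depth[uv] = min(depth.get(uv, float("inf")), p[2])
    return depth
```

Even a dense scan of ~100,000 points covers well under 10% of a 1-megapixel image, which is the sparsity problem the article describes.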

Synthetic Training Data Complements Real-World Data

Synthetic training data helps solve the challenges of real-world data collection and labeling. Instead of being collected in the real world, synthetic data is generated through sensor simulation. Simulation provides deterministic control over scene contents, weather, lighting, and more. This makes it easy to define and obtain the exact data and labels needed to train a model. When training perception models, ML engineers can combine synthetic data with real data to improve an autonomous system faster and at lower costs. 
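One common way to combine the two data sources is to draw each training batch partly from real data and partly from synthetic data. The sketch below shows this idea; the mix ratio and function names are hypothetical illustrations, not details from the article.

```python
import random

# Sketch: interleaving real and synthetic samples in one training stream.
# The 50/50 default mix is an illustrative assumption, not a recommendation.

def mixed_batches(real, synthetic, batch_size, synthetic_fraction=0.5, seed=0):
    """Yield batches drawing a fixed fraction of each batch from synthetic data."""
    rng = random.Random(seed)
    n_syn = int(batch_size * synthetic_fraction)
    n_real = batch_size - n_syn
    while True:
        batch = rng.sample(real, n_real) + rng.sample(synthetic, n_syn)
        rng.shuffle(batch)  # avoid a fixed real/synthetic ordering per batch
        yield batch
```

In practice the synthetic fraction is a tuning knob: engineers can weight it toward scenarios that are rare or unsafe to collect in the real world.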

LG Uses Applied’s Synthetic Datasets

Applied’s Synthetic Datasets are labeled datasets for ML algorithm development. LG uses Synthetic Datasets to obtain dense per-pixel depth and stereo disparity ground truth, labels that are otherwise impractical to collect, and trains its stereo vision algorithms on them. With this ground truth data, LG can ultimately develop, test, and deploy safer AMRs faster than previously possible.
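For a rectified stereo pair, depth and disparity are linked by the standard relation disparity = focal_length × baseline / depth, so a dense synthetic depth image directly yields a dense disparity label for every pixel. The sketch below applies that conversion; the camera parameters are illustrative assumptions, not values from LG's robots.

```python
# Sketch: converting dense synthetic depth to stereo disparity ground truth
# via disparity_px = focal_length_px * baseline_m / depth_m.
# Focal length and baseline below are hypothetical example values.

def depth_to_disparity(depth_m, focal_length_px, baseline_m):
    """Convert a metric depth value to a disparity in pixels."""
    if depth_m <= 0:
        raise ValueError("depth must be positive")
    return focal_length_px * baseline_m / depth_m

def dense_disparity(depth_map, focal_length_px=700.0, baseline_m=0.12):
    """Apply the conversion to every pixel of a dense depth image (list of rows)."""
    return [[depth_to_disparity(d, focal_length_px, baseline_m) for d in row]
            for row in depth_map]
```

Because simulation renders an exact depth value at every pixel, the resulting disparity labels are dense, unlike the sparse labels estimated from lidar.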