Amazon SageMaker Ground Truth can now create virtual objects for AI model training
It takes massive amounts of data to train AI models. But sometimes, that data simply isn’t available from real-world sources, so data scientists use synthetic data to make up for that. In machine vision applications, that means creating different environments and objects to train robots or self-driving cars, for example. But while there are quite a few tools out there to create virtual environments, there aren’t a lot of tools for creating virtual objects.
At its re:Mars conference, Amazon today announced synthetics in SageMaker Ground Truth, a new feature for creating a virtually unlimited number of images of a given object in different positions, under different lighting conditions, and with different proportions and other variations.
With WorldForge, the company already offers a tool for creating synthetic scenes. “Instead of generating whole worlds for the robot to move around, this is specific to items or individual components,” AWS VP of Engineering Bill Vass told me. He noted that Amazon itself needed a tool like this: even with the millions of packages the company ships, it still didn’t have enough images to train a robot.
“What Ground Truth Synthetics does is you start with the 3D model in a number of different formats that you can pull it in and it’ll synthetically generate photorealistic images that match the resolution of the sensors you have,” he explained. Today, some customers purposely distress or break the physical parts of a machine, for example, and photograph them to train their models, which can quickly become quite expensive. Now they can distress the virtual parts instead and do that millions of times if needed.
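The workflow Vass describes, starting from a 3D model and varying pose, lighting, proportions, and wear, is a form of domain randomization. Here is a minimal sketch of how such variation might be sampled; the parameter names and ranges are illustrative assumptions, not the service's actual API:

```python
import random

def sample_render_params(seed=None):
    """Sample one randomized render configuration for a 3D object.

    All parameter names and ranges here are illustrative assumptions,
    not SageMaker Ground Truth's actual interface.
    """
    rng = random.Random(seed)
    return {
        # Object pose: random rotation around each axis
        "rotation_deg": [rng.uniform(0, 360) for _ in range(3)],
        # Proportion variation: scale the model up or down slightly
        "scale": rng.uniform(0.8, 1.2),
        # Lighting variation: intensity and direction of the key light
        "light_intensity": rng.uniform(0.3, 1.5),
        "light_angle_deg": rng.uniform(0, 360),
        # Simulated wear: 0.0 = pristine part, 1.0 = heavily distressed
        "distress_level": rng.random(),
    }

# Each configuration would drive one photorealistic render, so a large
# batch of seeds yields a large, varied synthetic training set.
params = [sample_render_params(seed=i) for i in range(10_000)]
```

Because each configuration is derived from a seed, the same synthetic dataset can be regenerated deterministically, which is useful when comparing model training runs.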
He cited the example of a customer that makes chicken nuggets and used the tool to simulate large numbers of malformed nuggets to train its model.
Vass noted that Amazon is also partnering with 3D artists to help companies that lack that kind of in-house talent get started with the service, which uses Unreal Engine by default but also supports Unity and the open-source Open 3D Engine. Using those engines, users can also start simulating the physics of how those objects would behave in the real world.