
Generate Synthetic Data for Aerial Object Detection with Rendered.ai

February 23, 2023 (updated February 12, 2024)

TL;DR: Rendered.ai is publicly releasing the capability to generate synthetic RGB satellite imagery for rare object detection. To try it for yourself, sign up for the Rendered.ai platform and enter the content code SATRDEMO.

Introduction

One of the greatest needs we have heard from our customers in the Earth Observation space has been the detection of rare objects in satellite imagery. Every day, hundreds of satellites are capturing terabytes of imagery of the Earth’s surface. This makes for an enormous haystack in which to find the few needles that amount to actionable intelligence. Deep learning object detection models are vital to this effort but require thousands or tens of thousands of diverse labeled images to train. “Diverse” is a key word here, as a model trained on many images of the same aircraft parked at the same location will not detect that same aircraft in a different environment, or with a different color scheme.

At Rendered.ai, we have worked with many customers to give them the capability to generate diverse synthetic datasets that can train models as well as, or better than, real data alone. This is done using physics-based simulation of 3D models within both 2D and 3D background scenarios. The approach is enhanced with intelligent placement routines, object modification for increased diversity, and domain adaptation techniques for style transfer. This article will walk through this approach, provide a public case study where this method was used to boost AP scores, and describe how you can use it for your own AI training.

Figure: Synthetic imagery of ships at port

Approach

When I say synthetic data, I mean physics-based, simulated imagery. While much attention has been given to generative AI methods in recent years, these approaches do not work for training detectors on rare objects. The obvious reason is that generative models require a large number of ground-truth images of the objects of interest for training. Furthermore, simulation pipelines can be configured at a granular level, allowing for easy experimentation and provable outcomes, whereas generative models are much more of a “black box” when it comes to output configuration.

While physics-based simulation of satellite imagery can incorporate complex sensor and atmospheric models (which we can support with DIRSIG and MODTRAN), the approach described here is a simple one: render 3D models of objects of interest within existing satellite imagery such that they match their surroundings. This requires image metadata on ground sample distance, time and location of collection, and gamma and blur properties for seamless output. Without metadata to enable simulation of lighting conditions similar to the background imagery, CV models may learn to detect discrepancies in synthetic data that will not be present in real imagery.
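To make the matching step concrete, below is a minimal sketch of compositing a rendered object chip into a background image while applying the background’s gamma and blur. The function name and parameters are illustrative assumptions, not the Rendered.ai implementation.

```python
import cv2
import numpy as np

def composite_render(background, render_rgba, top_left, gamma=1.0, blur_sigma=0.0):
    """Paste a rendered RGBA object chip into a satellite background,
    matching the background's gamma and blur so the insert blends in.
    In practice, gamma and blur_sigma come from (or are estimated
    against) the background image's metadata. Assumes the chip fits
    entirely within the background."""
    rgb = render_rgba[..., :3].astype(np.float32) / 255.0
    alpha = render_rgba[..., 3].astype(np.float32) / 255.0

    # Match the background's tone curve, then approximate its blur.
    rgb = np.power(rgb, gamma)
    if blur_sigma > 0:
        rgb = cv2.GaussianBlur(rgb, (0, 0), blur_sigma)
        alpha = cv2.GaussianBlur(alpha, (0, 0), blur_sigma)
    alpha = alpha[..., None]

    # Alpha-composite the chip over the background patch.
    y, x = top_left
    h, w = rgb.shape[:2]
    patch = background[y:y + h, x:x + w].astype(np.float32) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * patch
    background[y:y + h, x:x + w] = (blended * 255).astype(np.uint8)
    return background
```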

Context is Key

Beyond simulation, effective synthetic data balances diversity with contextual plausibility. While diversity is important, we can gain valuable contextual clues by constraining the surroundings and configurations of objects of interest to plausible observations that would be found in the real world. To ensure this, we provide regions that designate where objects can be placed within environments, and clustering algorithms that control object placement patterns within designated areas (a sketch of this kind of placement logic follows the figure below). Examples include aircraft constrained to taxiing configurations on a tarmac, ships in shipping lanes or docked at port, and vehicles parked in parking lots.

Figure: Aircraft and ground vehicles in context
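As a rough illustration of clustered, region-constrained placement, the sketch below samples grouped placement points inside a binary region mask (e.g. a tarmac or parking-lot polygon rasterized to pixels). The function and its parameters are hypothetical, not the platform’s actual placement routines.

```python
import numpy as np

def cluster_placements(region_mask, n_clusters=3, per_cluster=5,
                       spread_px=40, rng=None):
    """Sample object placement points inside a binary region mask,
    grouped into clusters to mimic real-world patterns (e.g. aircraft
    parked together on a tarmac). Assumes the region is large enough
    that rejection sampling terminates quickly."""
    rng = rng or np.random.default_rng()
    valid = np.argwhere(region_mask)  # (row, col) pixels inside the region
    centers = valid[rng.choice(len(valid), n_clusters, replace=False)]

    placements = []
    for cy, cx in centers:
        kept = 0
        while kept < per_cluster:
            # Draw a point near the cluster center; keep it only if it
            # still falls inside the allowed region.
            y, x = rng.normal([cy, cx], spread_px).astype(int)
            if (0 <= y < region_mask.shape[0]
                    and 0 <= x < region_mask.shape[1]
                    and region_mask[y, x]):
                placements.append((y, x))
                kept += 1
    return placements
```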

Domain Adaptation

While a satellite imagery channel (our term for a synthetic data application) can generate diverse generic satellite imagery, end users require synthetic data that matches the characteristics of their specific sensor and collection criteria. Working with customers, we have found that we can train a CycleGAN domain adaptation model to match the source domain of synthetic data to the target domain of real data. This model can then be applied to any new synthetic datasets as they are generated. Because the CycleGAN model activates on edges found in an image, the corresponding label and mask locations do not need to be updated. Furthermore, because we are only concerned with the style of the output image, the truth image set does not require labels or even the presence of the specific objects of interest. This combination of physics-based synthetic data with generative domain adaptation techniques provides the best of both approaches.

Figure: Synthetic image of a tower crane and the same image passed through a GAN trained on xView imagery
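As an illustration of how such an adapter might be applied at generation time, the sketch below runs a synthetic image through a pretrained synthetic-to-real CycleGAN generator in PyTorch. The checkpoint file and TorchScript export are assumptions for the example; the [-1, 1] normalization follows the common CycleGAN convention.

```python
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical TorchScript export of a synthetic->real generator.
generator = torch.jit.load("cyclegan_synth2xview.pt").eval()

to_tensor = transforms.Compose([
    transforms.ToTensor(),                       # [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # [-1, 1], CycleGAN convention
])

def adapt(image_path):
    """Restyle one synthetic image into the target sensor's domain.
    Labels and masks are reused unchanged, since only style changes."""
    x = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        y = generator(x)[0]
    y = (y * 0.5 + 0.5).clamp(0, 1)  # back to [0, 1]
    return transforms.ToPILImage()(y)
```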

Case Study

Beginning in 2021, Rendered.ai worked with Orbital Insight, a geospatial analytics company, on a Small Business Innovation Research (SBIR) project for the NGA. The goal of the project was to develop a methodology for generating and adapting synthetic satellite imagery to improve real-world object detection capabilities. While phase two of this project is still ongoing, phase one, completed in July 2021, showed breakthrough improvements in object detection using synthetic data, including a threefold improvement in AP scores in few-shot learning applications.

Figure: 2–3x improvement in AP scores for rare object detection in xView

The report, found here, demonstrates the real-world importance of the techniques described above, including image feature matching, contextual object placement, and, in particular, domain adaptation for model performance with synthetic data. The experiments done in just phase one of this project used over 500,000 pieces of synthetic data and, along with other successful customer engagements in this space, have informed the development of the RGB satellite imagery channel that we are now releasing to the public. The results of phase two, which will likely be made public later this year, have shown marked advancements past those of phase one, including successes in zero-shot learning with synthetic data only.

How To

We have released the capabilities of this channel under the content code SATRDEMO. If you already have an account with Rendered.ai, you can enter this code in the field labeled ‘Content Code’ when setting up a new workspace. If you are new to the platform, first request access using this link, and then enter the content code on the registration page once you receive your invitation link. For more information on content codes, along with full platform documentation, follow this link.

We’d love to hear from you. For more information, reach out to the Rendered.ai team.
