
Rendered.ai’s Toybox Channel 

January 22, 2024

I am pleased to report a refresh of Rendered.ai’s toybox channel, an example synthetic data application that highlights commonly used aspects of the Rendered.ai platform. Channels are our name for synthetic data applications deployed to the platform.

Computer vision scientists can use a channel like this to explore Rendered.ai’s tools for closing the content and domain gaps that arise when working with real datasets. Synthetic data engineers can explore best practices for creating Rendered.ai channels by reviewing the toybox channel’s source code and implementation.

The toybox channel is included with all new trial subscriptions and can be accessed in existing accounts by creating a new workspace with the content code “TOYBOX”.

Workspaces created with content codes come with graphs, the instructions Rendered.ai uses to create synthetic data jobs from the components of the channel.

Computer vision training specialists can use the toybox channel to experiment with the Rendered.ai tools described in the support documentation. It is ideal for dropping in custom objects, converting annotation formats, and uploading custom domain adaptation models.

 Key Features: 

  • Drop in custom 3D objects 
  • Convert annotations to formats like COCO and Pascal VOC 
  • Apply domain adaptation models to bridge the gap between synthetic and real data 
  • Randomize object colors, lighting, camera angles, and other properties
  • Visualize annotations like bounding boxes and segmentation masks 
  • Sample source code to build your own channels 

Sign up for a free trial of Rendered.ai

The Rendered.ai Application Use Guide has tutorials on creating and using graphs and datasets:

https://support.rendered.ai/rd/tutorials 

Synthetic data engineers can use the toybox channel source code to learn how Rendered.ai channels control scene objects, lights, and cameras, and how they configure the user experience. The toybox channel has nodes that perform key aspects of dataset randomization, including changing the color of objects and running a gravity simulation for each run on Rendered.ai.

Source Code: https://github.com/Rendered-ai/toybox 

Running the Toybox Channel 

Your workspace for the toybox channel comes with graphs for typical use cases. Graphs can be thought of as a node-edge user interface that configures the randomization of dataset generation. Each toy is represented as an object node which can be modified by the “Color Variation” node. 

The part of the graph that controls the scene generation
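Conceptually, a graph is just a set of nodes plus links between node fields. The sketch below models that structure as plain Python data; the node classes, field names, and link layout are illustrative stand-ins for the toybox channel's actual schema, which is normally edited in the Rendered.ai GUI rather than written by hand.

```python
# Illustrative sketch only: node and field names here are hypothetical
# stand-ins for the real toybox graph.
graph = {
    "nodes": {
        "yoyo": {"nodeClass": "Yoyo"},             # an object node for one toy
        "color": {"nodeClass": "ColorVariation"},  # randomizes object color
        "drop": {"nodeClass": "DropObjects"},      # places objects in the container
    },
    "links": {
        # feed the yoyo generator into the Color Variation node ...
        "color": {"Generators": [{"sourceNode": "yoyo", "outputPort": "Generator"}]},
        # ... then drop the color-randomized objects into the scene
        "drop": {"Objects": [{"sourceNode": "color", "outputPort": "Generator"}]},
    },
}

def upstream(graph, node):
    """Return the sorted names of nodes linked into `node`'s input fields."""
    return sorted(
        src["sourceNode"]
        for field in graph["links"].get(node, {}).values()
        for src in field
    )
```

Walking the links this way is how one can reason about what feeds what: the drop node consumes the color-randomized objects, which in turn consume the raw toy generator.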

Below is an image generated with the above graph, randomizing the colors of the bubbles and yoyos and dropping 25 objects into a container placed on a floor. The physics-based synthetic data generation allows light to interact with the clear tub, showing the tile floor through the bottom of the container.

An example image from toy box channel

 Metadata and Labels 

For each image in a dataset, there are corresponding files for annotations, metadata, a semantic segmentation mask, a depth mask, and a surface normal mask. Each object has an instance identifier used in the segmentation mask, which can be used for model training and for visualizations. The video below shows how the annotations are used to visualize bounding boxes and segmentation masks in the platform.
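Because each object's instance identifier appears directly as a pixel value in the segmentation mask, per-object masks and bounding boxes can be recovered with a few lines of NumPy. This is a generic sketch rather than Rendered.ai code; the small toy array stands in for a real segmentation mask file.

```python
import numpy as np

def instance_bboxes(seg_mask):
    """Given a 2D segmentation mask whose pixel values are instance IDs
    (0 = background), return {instance_id: (xmin, ymin, xmax, ymax)}."""
    boxes = {}
    for inst_id in np.unique(seg_mask):
        if inst_id == 0:
            continue  # skip background
        ys, xs = np.nonzero(seg_mask == inst_id)
        boxes[int(inst_id)] = (int(xs.min()), int(ys.min()),
                               int(xs.max()), int(ys.max()))
    return boxes

# Toy 4x6 mask with two instances (IDs 1 and 2)
mask = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 2, 2],
    [0, 0, 0, 0, 2, 2],
    [0, 0, 0, 0, 0, 0],
])
```

The same per-instance boolean masks (`seg_mask == inst_id`) are what a visualization overlays as colored segmentation regions.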

Rendering configurations such as for lighting and camera placement are exposed to the user in the graphs of the toybox channel. The nodes shown in the screenshot below control these settings, for example, the option to render masks for depth and normal vector estimates. 

The graphs that come with the toybox channel

 

The depth mask and normal mask for our demo image are shown below. 


Custom Objects 

Rendered.ai users can easily drop in custom objects. The process is fully documented in the tutorial, “Creating and Using Volumes.” In the toybox workspace, there is a custom volume already created, “Fruit and Baskets,” where you can drop in objects and add them to graphs. 

The graph with custom file nodes and a resulting preview image

 

Annotation Conversions  

Rendered.ai supports annotation conversions for several common formats. The raw annotations are for general use and capture more information than is needed by specific formats like COCO or Pascal VOC. Annotation conversion, also called mapping, can be done in the GUI or with Rendered.ai’s SDK, e.g., in an ML pipeline. As you can see in the screenshot below, it is straightforward to use the annotation conversion wizard for toybox datasets.

Annotation conversion on Rendered.ai Platform

 

The Rendered.ai SDK, anatools, enables performing various synthetic data tasks programmatically. Jupyter notebooks in the Rendered.ai resources repository demonstrate using anatools for annotation conversions, analytics, visualizations, and more:

https://github.com/Rendered-ai/resources/blob/main/3_anatools_for_ML/Generate%20Annotations%20for%20Datasets.ipynb 
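As a rough illustration of what such a mapping does, the sketch below converts a hypothetical raw per-image annotation record into COCO's images/annotations/categories layout. The raw schema here is invented for the example and is not Rendered.ai's actual annotation format; the COCO side follows the standard `[x, y, width, height]` bbox convention.

```python
# Minimal sketch of annotation mapping to COCO. The "raw" record schema
# below is hypothetical, not Rendered.ai's actual raw annotation format.
def to_coco(raw_images, category_names):
    categories = [{"id": i + 1, "name": n} for i, n in enumerate(category_names)]
    cat_ids = {c["name"]: c["id"] for c in categories}
    images, annotations = [], []
    ann_id = 1
    for img_id, rec in enumerate(raw_images, start=1):
        images.append({"id": img_id, "file_name": rec["filename"],
                       "width": rec["width"], "height": rec["height"]})
        for obj in rec["objects"]:
            x0, y0, x1, y1 = obj["bbox"]   # raw corner coordinates
            w, h = x1 - x0, y1 - y0
            annotations.append({
                "id": ann_id, "image_id": img_id,
                "category_id": cat_ids[obj["type"]],
                "bbox": [x0, y0, w, h],    # COCO uses [x, y, width, height]
                "area": w * h, "iscrowd": 0,
            })
            ann_id += 1
    return {"images": images, "annotations": annotations, "categories": categories}
```

A real mapping also carries over segmentation polygons and licensing fields, but the ID bookkeeping and bbox re-encoding shown here are the core of the conversion.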

 Domain Adaptation 

Rendered.ai includes tools for domain adaptation. We have seen domain adaptation serve as a useful tool for closing domain gaps introduced by lens properties, age-related distortions, atmospheric effects, and camera motion. Domain adaptation models perform a pixel-level transfer from the raw synthetic domain to a real domain. When users have representative real data for the target domain, CycleGAN models can be trained and uploaded to the Rendered.ai platform for domain-adapted dataset generation.

Tutorial demonstrating how to convert apples to oranges based on a user uploaded model: 

https://support.rendered.ai/rd/creating-and-using-domain-adaptation 
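A trained CycleGAN generator is well beyond a few lines, but the idea of a pixel-level transfer toward a target domain can be illustrated with a much simpler baseline: matching the per-channel color statistics of a synthetic image to those of a real one. This is only a conceptual stand-in, not the platform's domain adaptation mechanism.

```python
import numpy as np

def color_transfer(synthetic, real):
    """Match the per-channel mean and std of `synthetic` to `real`.
    A toy stand-in for the pixel-level transfer a trained CycleGAN
    generator performs; inputs are float arrays of shape (H, W, 3)."""
    out = np.empty_like(synthetic, dtype=np.float64)
    for c in range(3):
        s, r = synthetic[..., c], real[..., c]
        s_std = s.std() or 1.0   # guard against a flat channel
        # normalize the synthetic channel, then rescale to the real stats
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```

After the transfer, each channel of the synthetic image has the same mean and standard deviation as the real reference, which is the crudest possible version of "looking like" the target domain.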

The toybox channel comes with example pre-trained CycleGAN style transfer models for impressionist painters. Here is our demo image in the styles of Van Gogh and Monet.


 

Writing Your Own Channel 

The source code for the Rendered.ai toybox channel is a suitable place to start creating your own synthetic data application that utilizes the features of Rendered.ai. Please take a moment to read the description of the toybox GitHub repository as well as the developer guide in the support documentation. In particular, the developer guide has tutorials on setting up the development environment and on adding modifiers and object generators to a channel.

Rendered.ai Channel Software Architecture 

https://support.rendered.ai/dg/ana-software-architecture 

Besides channel development, the developer guide describes how to configure the channel UX. Channel developers write tooltips and field descriptions for nodes, set up graph validation, and control the error logging that informs users what went wrong when a run fails. The following screenshot of a graph in the toybox channel shows the Color Variation node without a required link to its “Generators” field, showing the user that the graph will not run, and why.

An example of graph validation for enhanced user functionality
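The kind of check behind that error can be sketched as a simple pass over the graph that verifies every required input field has an incoming link. The node classes and field names below are hypothetical, echoing the “Generators” example above rather than the channel's real schema.

```python
# Illustrative validation sketch; node classes and required-field names
# are hypothetical, not the toybox channel's actual schema.
REQUIRED_FIELDS = {"ColorVariation": ["Generators"], "DropObjects": ["Objects"]}

def validate_graph(nodes, links):
    """nodes: {name: nodeClass}; links: {name: {field: [sources]}}.
    Returns a list of human-readable validation errors (empty = valid)."""
    errors = []
    for name, node_class in nodes.items():
        for field in REQUIRED_FIELDS.get(node_class, []):
            if not links.get(name, {}).get(field):
                errors.append(f"Node '{name}' ({node_class}): required field "
                              f"'{field}' has no link")
    return errors
```

Surfacing these messages in the GUI before a run starts is what turns a silent job failure into an actionable hint for the user.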

 

 

For more information about Rendered.ai and the Toybox channel contact sales@rendered.ai. 

 

Resources 

https://rendered.ai 

https://support.rendered.ai 

https://sdk.rendered.ai 

https://github.com/Rendered-ai/resources 

https://github.com/Rendered-ai/toybox  
