Image Synthesis for Machine Learning
The aim is to help machine learning and computer vision researchers generate annotated training sets in Unity and on the cloud.
One of the main challenges in machine learning is obtaining large amounts of training data in the right format. Deep learning, and machine learning more generally, requires large training sets to perform well.
Virtual worlds can provide a wealth of training data. However, it must consist of more than just the final rendered image: annotations such as object categorization, optical flow, and depth are needed as well.
What does it do?
This repository contains code that is easy to add to any existing Unity project. It lets you capture image depth, segmentation, optical flow, and other ground-truth annotations as .png images with minimal intrusion:
- Image segmentation - each object in the scene is assigned a unique color
- Object categorization - objects are colored according to their category
- Optical flow - pixels are colored according to their motion relative to the camera
- Depth - pixels are colored according to their distance from the camera
- Normals - surfaces are colored according to their orientation relative to the camera
- ... and more in the future
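To make the first two annotation types concrete, here is a minimal Python sketch of how per-object annotations can be encoded into .png pixel values. The packing scheme and the `near`/`far` defaults are illustrative assumptions, not the repository's actual C# implementation: an object's integer instance id is packed losslessly into a unique 24-bit RGB color (so a segmentation image can be decoded back to ids), and a camera-space depth value is mapped to an 8-bit grayscale intensity.

```python
def id_to_color(instance_id):
    """Pack a 24-bit instance id into a unique (r, g, b) color tuple."""
    if not 0 <= instance_id < (1 << 24):
        raise ValueError("instance id must fit in 24 bits")
    return ((instance_id >> 16) & 0xFF,
            (instance_id >> 8) & 0xFF,
            instance_id & 0xFF)

def color_to_id(rgb):
    """Recover the instance id from an (r, g, b) color tuple."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

def depth_to_gray(depth, near=0.3, far=1000.0):
    """Map a camera-space depth value to an 8-bit grayscale intensity.

    `near` and `far` are assumed clipping planes; depth is clamped to
    that range, then scaled linearly to 0..255.
    """
    t = (min(max(depth, near), far) - near) / (far - near)
    return round(t * 255)

# Round-trip: every object id maps to one color and back.
assert color_to_id(id_to_color(123456)) == 123456
```

Because the id-to-color mapping is a bijection, segmentation images produced this way double as lossless annotation data, not just visualizations.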
- How to use it in an existing project
- How does it work?
- Render images on Amazon Cloud (AWS)
- ToDo list
Who do I talk to?
- email: firstname.lastname@example.org
Requirements
- Unity 5.5.0 or later
- Should work on any OS officially supported by Unity
This repository is covered under the MIT/X11 license.