I would like to contribute code to predict ISO, shutter speed, and aperture settings. One scenario I see for this is an established wedding photographer who has a distinctive style and doesn't want to miss a shot because she spent 5-10 seconds adjusting settings as she moves between scenes. Another scenario is handing a "second shooter" a camera trained on the main photographer's settings, so the second shooter's pictures come out in a style similar to the main photographer's. Another scenario is sharing styles with other photographers. Lastly, depending on the algorithm's efficiency, this could be applied to video.
The neural network would attempt to learn the three settings from the metadata of all of the previous good shoots. On a photo shoot, the network would then predict the correct settings based on the pictures already taken. For those unfamiliar with neural networks: the bulk of the processing takes place on the photographer's computer, during the training phase. The prediction code that actually runs on the camera is lightweight and would not be a burden for the camera's ARM processor.
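To make the training/prediction split concrete, here is a minimal sketch in Python/numpy, assuming a single tanh hidden layer and a handful of discrete base ISO classes. All names, shapes, and hyperparameters here are illustrative, not the actual add-on code:

```python
import numpy as np

# Illustrative set of discrete base ISO classes the network chooses among.
ISO_CLASSES = [100, 200, 400, 800, 1600, 3200]

def train(X, y, hidden=16, epochs=500, lr=0.1, seed=0):
    """Offline phase (photographer's computer): gradient descent on a
    softmax classifier with one tanh hidden layer. X is (n, d) features,
    y is (n,) integer indices into ISO_CLASSES."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = len(ISO_CLASSES)
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, k)); b2 = np.zeros(k)
    Y = np.eye(k)[y]                        # one-hot targets
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)   # softmax probabilities
        dZ = (P - Y) / n                    # cross-entropy gradient
        dW2 = H.T @ dZ; db2 = dZ.sum(0)
        dH = dZ @ W2.T * (1 - H**2)         # backprop through tanh
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def predict_iso(x, W1, b1, W2, b2):
    """On-camera phase: two small matrix-vector products and a tanh."""
    h = np.tanh(x @ W1 + b1)
    return ISO_CLASSES[int(np.argmax(h @ W2 + b2))]
```

Only `predict_iso` would need porting to the camera: a couple of matrix-vector products, which is cheap even on the ARM side.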
I have trained my first ISO prediction network on the metadata of about 20,000 images from "good" photo shoots. Right now I get about 90% accuracy for base ISO prediction, and this can probably be improved. (Disclaimer: that number comes from testing on my training set, but I want to verify I can get access to my needed inputs before refining things.) As inputs I am using:
- Measured EV
- Camera Temperature
- Measured RGGB
- WB RGGB Levels Measured
- Color Temp Measured
- Raw Measured RGGB
- Blue Balance
- Field Of View
- Hyperfocal Distance
- Light Value
- Red Balance
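As a rough sketch of how these could be pulled into feature vectors, assuming exiftool's `-j` JSON output and its spellings of the Canon tag names (exact tag names and value formats vary per model, so treat both as assumptions):

```python
import json
import re
import subprocess

# Tag names as exiftool spells them in Canon maker notes -- an assumption;
# verify against your own camera's files.
FEATURE_TAGS = [
    "MeasuredEV", "CameraTemperature", "MeasuredRGGB",
    "WB_RGGBLevelsMeasured", "ColorTempMeasured", "RawMeasuredRGGB",
    "BlueBalance", "FieldOfView", "HyperfocalDistance",
    "LightValue", "RedBalance",
]

_NUM = re.compile(r"-?\d+(?:\.\d+)?")

def numbers(value):
    """Pull every numeric token out of a tag value, so that
    '49.1 deg' -> [49.1] and '400 1024 1024 600' -> [400.0, ...]."""
    return [float(m) for m in _NUM.findall(str(value))]

def features_from_tags(tags):
    """Flatten one image's tag dict into a numeric feature vector.
    A missing tag becomes a single 0.0 -- a simplification; multi-valued
    tags would need fixed-width padding so all vectors line up."""
    vec = []
    for tag in FEATURE_TAGS:
        vals = numbers(tags.get(tag, 0))
        vec.extend(vals if vals else [0.0])
    return vec

def features_from_file(path):
    """Run exiftool in JSON mode on one file (assumes exiftool is installed)."""
    out = subprocess.check_output(["exiftool", "-j", path])
    return features_from_tags(json.loads(out)[0])
```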
This is all data that I collected from metadata. I have two big questions before I continue:
1) Are all of my inputs independent of ISO, shutter speed, and aperture? We need the input values to be (fairly) independent of those settings.
2) Is it possible to read these inputs on the fly with Magic Lantern?
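For question 1, one quick sanity check is to correlate each input column against the setting across the training set; an input strongly correlated with ISO may just be leaking the camera's own choice back into the network. A sketch, assuming the features are already in a numpy matrix:

```python
import numpy as np

def correlation_with_setting(X, setting):
    """Pearson correlation of each input column in X (n, d) against one
    setting array (n,), e.g. log2 of ISO. Values near +/-1 are suspect:
    that input is probably not independent of the setting."""
    return [float(np.corrcoef(X[:, j], setting)[0, 1])
            for j in range(X.shape[1])]
```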
Would anyone be interested in collaborating on this add-on? I am currently writing the code in Python/Octave, but I have the 550d build environment installed (I own a 550d, though eventually I need to get this working on the 5dm2), so I can also write the ARM C code for the neural network that predicts the settings from the input data. What would be very helpful is for someone to provide the input values and add the GUI code and the proper button hooks - e.g. press the shutter halfway, the neural network predicts, and the predicted settings are applied. This all assumes that the list above is valid and readable, or that we can predict from inputs that are valid and readable.
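On the Python-to-ARM-C handoff, one simple approach is to export the trained weights as C array initializers, so the on-camera forward pass is plain C with no dependencies. A sketch (the array and file names here are made up):

```python
import numpy as np

def c_array(name, arr):
    """Format a 1-D or 2-D numpy array as a flat C float initializer."""
    flat = np.asarray(arr, dtype=float).ravel()
    body = ", ".join(f"{v:.6f}f" for v in flat)
    return f"static const float {name}[{flat.size}] = {{ {body} }};"

def export_header(weights, path="nn_weights.h"):
    """weights: dict of name -> array, e.g. {'W1': W1, 'b1': b1, ...};
    writes a header the ARM C forward-pass code can #include."""
    lines = [c_array(name, arr) for name, arr in weights.items()]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```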