Deep-Driving

Prepare the Training-Data

One of the first steps when training a deep-driving model is to prepare the training data-set. The TensorFlow deep-driving project does not work directly with the original training data; you first need to translate it into tfrecord files.
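The translation itself is done by the translate-tool described below. Conceptually, it reads the original frames from the LevelDB database and re-serializes each frame as a record in a tfrecord file, roughly as in the following sketch. The plyvel package, the decode_value helper and the feature names are only illustrative assumptions; the real record layout is handled by the tool.

import plyvel                    # assumed LevelDB binding, not shipped with this repository
import tensorflow as tf

def translate_to_tfrecord(leveldb_path, tfrecord_path, decode_value):
    # decode_value is a placeholder: it has to turn one raw LevelDB value into
    # (jpeg_bytes, label_list) and depends on the layout of the original data.
    db = plyvel.DB(leveldb_path, create_if_missing=False)
    with tf.io.TFRecordWriter(tfrecord_path) as writer:
        for _, value in db.iterator():
            image_bytes, labels = decode_value(value)
            example = tf.train.Example(features=tf.train.Features(feature={
                "image":  tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
                "labels": tf.train.Feature(float_list=tf.train.FloatList(value=labels)),
            }))
            writer.write(example.SerializeToString())
    db.close()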

Download of the Training-Data

  • Download the original training and validation data from the webpage of the original project.

  • Extract the folders inside the ZIP files. In the following text, the folder with the training data is called <training-data-path> and the folder with the validation data is called <validation-data-path>.
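If you prefer to extract the archives from a script instead of a file manager, a few lines of Python are enough. The archive file names below are placeholders for whatever the downloaded ZIP files are called.

import zipfile

# Placeholder archive names; replace them with the names of the downloaded ZIP files.
for archive, target in [("training_data.zip",   "<training-data-path>"),
                        ("validation_data.zip", "<validation-data-path>")]:
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)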

Translate the Training-Data

  • The original data can be translated with the translate-tool, which provides a GUI. Thus you need to install the optional GUI Python packages, as described in the installation chapter.

  • Change to the script directory python/scripts and start the script translate.py.

cd <repository-path>/python/scripts

python translate.py
  • You should see the translator GUI:

Translator GUI

  • Press "Open LevelDB Database..." and change to the <training-data-path> directory, to select the original training data. Press "Open Folder" afterwards.

Translator GUI

  • The GUI now shows the first frame of the training data on the right side. Below the frame image you can navigate through the training data using the arrow buttons. On the left side you see the situation view of the current frame. These are the labels produced by the game.

Translator GUI

  • As a next step you have to select the folder where the tfrecord files will be stored. Press "Store to Database..." and select an empty folder, or add the name of a new folder to the path at the bottom of the window. Press "Select Folder" afterwards. Keep in mind to use a hard disk with more than 60 GB of free space. To make the training process faster, you should use an SSD.

Translator GUI

  • You can now define the index of the start-frame and the index of the end-frame to translate. By default the start-frame is 1 (the first frame of the training data) and the end-frame is 484815 (the last frame of the training data).

Translator GUI

  • Press "Start Translation" afterwards. This process may take some time (up to some hours for the full training data-set). During the translation, you can see the current frame in the GUI.

Translate the Validation-Data

  • After translating the training-data, you need to repeat this process with the validation-data.

  • Open the validation data in the GUI. For this project description <validation-data-path>\TORCS_GIST_1F_Testing_280 has been used as validation-data.

  • Select a different output directory in the "Store to Database..." screen, for example validation-data instead of training-data.

Translator GUI

  • Afterwards start the translation.

  • Close the translator tool once both data-sets have been translated correctly.

Prepare the configuration files

  • Open the train.cfg, eval.cfg and calc_mean.cfg files in an editor and adapt the paths of the training and validation data to the paths you chose as output for the translation.

  • If you have a graphics card with less than 11 GB of video memory, you might also want to decrease the batch-size in the train.cfg file from 96 to 64 (~8 GB) or 32 (~4 GB).

train.cfg

{
  "Data": {
    "BatchSize": 96,
    "TrainingPath":    "<path-to-translated-training-data>",
    "ValidatingPath":  "<path-to-translated-validation-data>",
    "ImageWidth": 280,
    "ImageHeight": 210
  },
  "Optimizer": {
    "EpochsPerDecay": 300,
    "LearnRateDecay": 0.5,
    "StartingLearningRate": 0.01,
    "WeightDecay": 0.0005,
    "Momentum": 0.9,
    "Noise": null
  },
  "Trainer": {
    "CheckpointEpochs": 10,
    "CheckpointPath": "Checkpoint",
    "EpochSize": 10000,
    "NumberOfEpochs": 2000,
    "SummaryPath": "Summary"
  },
  "Validation": {
    "Samples": 1000
  },
  "PreProcessing": {
    "MeanFile": "image-mean.tfrecord"
  },
  "Runner": {
    "Memory": null
  }
}

eval.cfg

{
  "Data": {
    "BatchSize": 128,
    "ImageHeight": 210,
    "ImageWidth": 280,
    "ValidatingPath": "<path-to-translated-validation-data>"
  },
  "Evaluator": {
    "CheckpointPath": "Checkpoint",
    "EpochSize": 10000,
    "NumberOfEpochs": 4
  },
  "PreProcessing": {
    "MeanFile": "image-mean.tfrecord"
  },
  "Runner": {
    "Memory": 0.6
  }
}

calc_mean.cfg

{
  "Data": {
    "BatchSize": 500,
    "ImageHeight": 210,
    "ImageWidth": 280,
    "TrainingPath":    "<path-to-translated-training-data>",
    "ValidatingPath":  "<path-to-translated-validation-data>",
  },
  "MeanCalculator": {
    "EpochSize": 10000,
    "MeanFile": "image-mean.tfrecord",
    "NumberOfEpochs": 100
  }
}
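A stray trailing comma or a typo in one of these files is easy to overlook. Assuming the configuration files are plain JSON, as shown above, a quick check that they still parse and use the intended batch-size can save a failed training run:

import json

# Check that the edited configuration files still parse and print the batch-size in use.
for name in ("train.cfg", "eval.cfg", "calc_mean.cfg"):
    with open(name) as f:
        cfg = json.load(f)            # raises an error on malformed JSON
    print(name, "-> BatchSize:", cfg["Data"]["BatchSize"])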

Calculate the mean-image over the Training-Data (Optional)

  • This step is optional, since a sufficient mean-image file is already delivered with the code of this repository. However, if you have changed the training data-set, recalculating the mean-image can give you better training results.

  • For calculating a mean-image you simply need to start the calc_mean.py script:

cd <repository-path>/python/scripts

python calc_mean.py
  • This process may take some minutes or hours, depending on the size of your training-data.

  • After the mean-image has been recalculated successfully, its content is shown to the user. You can simply close the corresponding window, since the mean-image is stored automatically before the image window opens.

Mean Image
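Conceptually, the mean-image is just the per-pixel average over all training frames. The following sketch shows the idea with an incremental average; decode_frame is a placeholder for decoding the actual record format, which calc_mean.py handles for you (eager execution assumed).

import glob
import numpy as np
import tensorflow as tf

def calc_mean_image(tfrecord_glob, decode_frame, height=210, width=280):
    # decode_frame is a placeholder: it must turn one serialized record into a
    # float array of shape (height, width, 3).
    mean = np.zeros((height, width, 3), dtype=np.float64)
    count = 0
    for record in tf.data.TFRecordDataset(glob.glob(tfrecord_glob)):
        frame = decode_frame(record.numpy())
        count += 1
        mean += (frame - mean) / count       # incremental per-pixel average
    return mean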

Next Step
