Let's create our own branches and leave master in a working state. I was not following this, and I believe master is currently broken. I will fix this today and push the necessary changes. After that, it will be better if we work in separate branches. :)
All the learning-related code is in the 'python_models' directory. The execution entry point is main.py. Most of the files matching test_* are outdated; they were never written as proper unit tests anyway.
In main.py you will find the program's input-parsing code, followed by training-data generation and the training loop. After this, control automatically moves on to the testing part, which can also be run separately when the --train_model flag is not set (the default).
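As a rough illustration of that flow, the flag parsing in main.py might look like the sketch below. The parser itself is an assumption; only the --train_model flag and its default-off behavior come from the text, and the other option names mirror the example commands further down.

```python
# Hypothetical sketch of the flag parsing in main.py (not the actual code).
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Pose sequence learner")
    parser.add_argument("--train_model", action="store_true",
                        help="run the training loop before testing "
                             "(default: test only)")
    parser.add_argument("--mini_seq_length", type=int, default=101)
    parser.add_argument("--num_epoch", type=int, default=100)
    return parser.parse_args(argv)

args = parse_args([])           # no flags given
print(args.train_model)         # False -> skip training, go straight to testing
```

With no flags, training is skipped and only the testing part runs; passing --train_model enables the training loop first.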
The pose sequences are organized as follows. Each file inside each subject directory contains a matrix of dimension timesteps x (num_joints * joint_param). Here joint_param is the joint parameterization. In the simplest setting, the normalized pose representation, we have 3 numbers representing the 3D position of each of the 32 joints we care about. Which joint is connected to which is encoded by the skeleton.
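Concretely, with 32 joints and 3 numbers per joint, each file holds a timesteps x 96 matrix. The snippet below illustrates that layout with numpy (the array here is a stand-in, not real data):

```python
# Illustration of the per-file layout: timesteps x (num_joints * joint_param).
import numpy as np

timesteps, num_joints, joint_param = 120, 32, 3
data = np.zeros((timesteps, num_joints * joint_param))  # one file's matrix
print(data.shape)                                       # (120, 96)

# One row is a full pose; reshape it to inspect per-joint 3D positions.
frame = data[0].reshape(num_joints, joint_param)        # (32, 3)
print(frame.shape)
```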
Input Generator / Provider
This class handles reading, normalizing, and reshaping the input, as well as configuring the post-processor. The function loadTimeSeriesData is the entry point; it loads and prepares the files. First it reads everything and dumps it into one big matrix. Then the data is reshaped to n x min_seq_length x (num_joints * dim_per_joint). This leaves out a small remainder of each file, which is used as the test-time seed sequence. The class can also return data file by file, for training in stateful mode, where the network state is retained over the whole stretch of one file.
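A minimal sketch of that reshape, assuming the dimensions above (the variable names and sizes are illustrative, not taken from loadTimeSeriesData itself):

```python
# Sketch of the reshape done when preparing training data.
import numpy as np

min_seq_length, num_joints, dim_per_joint = 50, 32, 3
big = np.random.rand(1657, num_joints * dim_per_joint)  # all rows of one file

n = big.shape[0] // min_seq_length                      # full sequences only
train = big[:n * min_seq_length].reshape(
    n, min_seq_length, num_joints * dim_per_joint)
leftover = big[n * min_seq_length:]                     # kept as test-time seed
print(train.shape, leftover.shape)                      # (33, 50, 96) (7, 96)
```

The integer division is what "leaves out some smaller sequences per file": the trailing rows that do not fill a whole sequence become the seed.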
To change the neural network structure, modify the file named PoseSquenceLearnerLSTM. The structure is fixed at construction time. Note that the structure is the same at train and test time except for the batch size, which is always 1 at test time so that one time step can be predicted at a time, running in stateful mode. A pictorial representation of the network is always dumped into the parent directory for visualization.
The post-processor class is responsible for rescaling the data and filling in missing columns. (Columns can be thrown away at training time due to a lack of variance in them.) It is also responsible for writing the output data in the proper format. This is handled by first accumulating the data to be dumped in a matrix and then writing it to disk all at once.
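The drop-and-restore idea can be sketched as follows. The helper names are illustrative, not the actual class methods:

```python
# Sketch: drop near-constant columns for training, restore them for output.
import numpy as np

def drop_constant_columns(data, eps=1e-8):
    """Remove columns whose variance is below eps; remember what was dropped."""
    keep = data.var(axis=0) > eps
    # Dropped columns are (near-)constant, so one row of values suffices.
    return data[:, keep], keep, data[0, ~keep]

def restore_columns(reduced, keep, dropped_values):
    """Re-insert the dropped constant columns before writing to disk."""
    out = np.empty((reduced.shape[0], keep.size))
    out[:, keep] = reduced
    out[:, ~keep] = dropped_values          # broadcast the constants back in
    return out

x = np.array([[1.0, 5.0, 2.0],
              [1.0, 6.0, 3.0]])             # column 0 has zero variance
r, keep, vals = drop_constant_columns(x)
print(r.shape)                              # (2, 2)
print(np.allclose(restore_columns(r, keep, vals), x))   # True
```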
Generating Exponential Maps
Please note that this is no longer necessary, as we already have the data converted to exponential maps from the SRNN paper.
Call the MATLAB function convertToExpMp with the subject number, the desired output format, and optionally the output destination path. Run MATLAB from the AIT-RA-Ship/Matlab/rotationMath directory. For example:

for i=[1, 5, 6, 7, 8, 9, 11] convertToExpMp(i, '.csv'); end
# Commandline - Train Model
python main.py --train_model --mini_seq_length=101 --num_epoch=100 --neurons_this_layer=1000 --neurons_this_layer=1000 --batch_size=16 --down_sample_in_time_by_n=3 --validation_split=0.0980 --num_predicted_time_steps=100 --file_type=.txt
# Commandline - Test Model
python main.py --mini_seq_length=101 --neurons_this_layer=1000 --neurons_this_layer=1000 --seed_seq_length=50 --num_predicted_time_steps=1000 --stateful --down_sample_in_time_by_n=3 --num_files=7 --file_type=.txt
The D3_Angles data in the corresponding subject folder is required.
Now that non-stateful mode is available, be aware that the batch size must divide both the number of training samples and the number of validation samples. The validation set is usually specified via the validation fraction, so it can easily end up not divisible by the batch size. In such cases an error will be thrown, either from the user program or from TensorFlow. For example, 1657 samples with a validation split of 0.1 and a batch size of 21 will run into problems.
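The failing case can be checked with a few lines of arithmetic. The rounding below assumes Keras keeps int(n * (1 - validation_split)) samples for training; the exact cut may differ between versions:

```python
# Check the divisibility constraint with the numbers from the example above.
num_samples, val_split, batch_size = 1657, 0.1, 21

num_train = int(num_samples * (1 - val_split))   # 1491
num_val = num_samples - num_train                # 166

# Training happens to divide evenly (1491 = 21 * 71), validation does not.
print(num_train % batch_size, num_val % batch_size)   # 0 19
```

So in this example it is the validation set, not the training set, that triggers the error.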