Submission questions

Issue #236 resolved
Chris Fotache created an issue

Hi, we have our solution working in a local environment (catkin workspace), and we want to test submitting it on one of the test environments. Since time is very short and I’m not very experienced with Docker, what are the simplest steps to do this?

I thought it would be enough to take the virtual_testbed Docker image, copy our code inside, and be done, but while the simulation starts up, our code doesn’t because of some “library not found” errors. Is there a better way to do this?

Comments (13)

  1. Alfredo Bencomo

    I suggest you clone the subt_seed example and use it as a starting-point reference for wrapping your solution.

    $ mkdir -p ~/subt_solution/src && cd ~/subt_solution/src
    
    $ hg clone https://bitbucket.org/osrf/subt_seed
    
    $ cd ~/subt_solution
    

  2. Chris Fotache reporter

    Thanks Alfredo… I was able to do that, and I have our solution in a Docker image. But before submitting, how do we test that against the simulation? To make sure everything works the same as in the local environment?

  3. Alfredo Bencomo

    Hi Chris,

    Locally, you can test it using docker-compose, as described here.

    Hint: edit docker-compose.yml and replace the solution image name with the name of your own image.
    

    Notice that you need one bridge per solution container you launch (that also includes the relay_netN). If you are satisfied with the results of your local docker-compose run, you can then upload your solution Docker image to the SubT Cloudsim portal for a test run against one of the practice/simple worlds there. Post back here if you have questions. Good luck!
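    The solution/bridge pairing described above can be sketched as a compose fragment. This is illustrative only: the service names, image names, and network name below are placeholders, so adapt the actual docker-compose.yml shipped with the testbed rather than copying this.

```yaml
# Hypothetical fragment -- names are placeholders, not the real testbed file.
version: "2.3"          # compose file version that supports the `runtime` key
services:
  solution_1:
    image: my_team_solution:latest   # replace with your solution image name
    runtime: nvidia                  # solution containers need GPU access
    networks:
      - relay_net1
  bridge_1:                          # one bridge per solution container
    image: osrf/subt-virtual-testbed:latest
    networks:
      - relay_net1
networks:
  relay_net1:
```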

  4. Chris Fotache reporter

    Thanks! I’ve now run into another problem… It might be a bit unrelated to the original topic, but hopefully you have the answer.

    When running the object detection part (I’m using a Python node), I get this error:

    AssertionError:
    Found no NVIDIA driver on your system. Please check that you
    have an NVIDIA GPU and installed a driver from
    http://www.nvidia.com/Download/index.aspx

    I checked and both the Nvidia drivers and CUDA are installed in the container. Are there known issues with running that inside a docker container?

  5. Alfredo Bencomo

    Well, it depends. The Docker container will still try to access the local hardware, so does the machine where you are running the object detection part have GPUs?

  6. Chris Fotache reporter

    Yes, it all works perfectly in the local catkin setup. I checked with nvidia-smi and nvcc that the same components are also installed in the docker container…

  7. Alfredo Bencomo

    So you get that AssertionError when you try to test your solution with docker-compose?

    If so, then can you send me the console output from these commands?

    $ docker images | grep srf/subt-virtual-testbed
    
    $ docker ps -a
    

  8. Malcolm Stagg

    One thing I ran into, in case it helps, is that an older version of docker-compose.yml did not include “runtime: nvidia” for the solution containers. Without that, they can’t access the GPU. The newest version includes it correctly.

  9. Malcolm Stagg

    In case it helps anyone else, another possible GPU failure can be from not having these lines in your Dockerfile:

    ENV NVIDIA_VISIBLE_DEVICES all
    ENV NVIDIA_DRIVER_CAPABILITIES compute,utility 
    

    This one just got me… The GPU was showing up in nvidia-smi but was not usable from Python under docker-compose/Cloudsim.
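    A quick way to check for this failure mode from inside the container, before launching the full detection node, is to inspect the environment variables the NVIDIA container runtime uses. This is only a sketch in plain Python (no framework imports); the helper name is made up for illustration.

```python
import os

# Sketch: the NVIDIA container runtime decides what to expose to a container
# based on these variables. If they are unset inside the solution container,
# frameworks can report "Found no NVIDIA driver" even when nvidia-smi works.
def gpu_env_report(environ=os.environ):
    keys = ("NVIDIA_VISIBLE_DEVICES", "NVIDIA_DRIVER_CAPABILITIES")
    return {k: environ.get(k, "<unset>") for k in keys}

if __name__ == "__main__":
    print(gpu_env_report())
```

    If both keys come back "<unset>" inside the container, the ENV lines above (or the runtime: nvidia setting) are the likely culprit.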
