gmxcoco workflow on Bluewaters
Can anyone give the gmxcoco-on-Bluewaters workflow a try?
If not, could someone have a look at: rp.session.moriarty.pharm.nottingham.ac.uk.ardita.016877.0003-pilot.0000/unit.000008/
This is the CU that performs the CoCo analysis, and it fails. However, if one executes the coco command from rp.session.moriarty.pharm.nottingham.ac.uk.ardita.016877.0003-pilot.0000/unit.000008/radical_pilot_cu_launch_script.sh by hand, that is:
pyCoCo --grid 30 --dims 3 --frontpoints 2 --topfile md-0_0.gro --mdfile "*.xtc" --output coco_out_0.gro --logfile coco.log --selection protein
it executes correctly.
Looking at rp.session.moriarty.pharm.nottingham.ac.uk.ardita.016877.0003-pilot.0000/unit.000008/STDERR we notice the following:
MPI functionality is now available through bwpy-mpi.
To enable MPI packages `module load bwpy-mpi` after bwpy
assertion !pthread_create(&thr->thrH, &attr, rout, arg) failed, line 111 of file /u/sciteam/shkurti/atlas/ATLAS/intel/..//src/threads/ATL_thread_start.c
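For what it's worth, the first two STDERR lines are the system suggesting its own setup; a minimal sketch of that setup (module names taken verbatim from the STDERR hint; whether this also resolves the ATLAS assertion above is unverified):

```shell
# Per the STDERR hint: bwpy must be loaded first, then bwpy-mpi on top of it.
module load bwpy
module load bwpy-mpi
```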
Any comments/suggestions?
Comments (8)
-
-
reporter: Can you try the access now?
And yes, coco was compiled with the openmpi stack in /projects/sciteam/gkd/modules ...
-
cd scratch/
-bash: cd: scratch/: Permission denied
The same goes for the specific pilot folder. Please try a recursive permission change with
chmod -R
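A minimal sketch of such a recursive permission change (the directory below is a placeholder created for illustration, not the actual pilot sandbox; `o+rX` grants others read access everywhere and traverse rights on directories only):

```shell
# Hedged sketch: open up a sandbox tree so others can read it and cd into it.
# "sandbox" is a placeholder; substitute the real pilot directory on scratch.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/unit.000008"
touch "$sandbox/unit.000008/STDERR"
chmod -R o+rX "$sandbox"   # o+r: others may read; o+X: execute (traverse) on dirs only
```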
-
reporter Does it work now?
-
Hmmm... this seems to be an ATLAS-specific error, perhaps. Things look fine in the shell script. I'm giving it a try too, but if the coco stage in the coam example worked fine, I wouldn't expect this to fail unless there was some change in the data (?).
-
Update: I can reproduce this.
-
Old issue. This is probably fixed, given our experiments on BW.
-
- changed status to resolved
-
I can't cd into your working directory.
Just to confirm: coco was compiled with the openmpi stack in /projects/sciteam/gkd/modules, right?