
Scott's Page

7/24

  • Samir's presentation at lab meeting today
  • Soldered the two remaining quadrature encoder ribbon cables in the same style as yesterday

7/23

  • Stripped the ends of the 10-wire ribbon cables
  • Further tore the cable so each individual wire had about 10 cm individually insulated and 1-2 cm of exposed conductor at the end
  • Soldered the exposed centimeter of each wire to the crimp heads in the appropriate configuration (See 7/9 below)
  • Pushed the crimps into their container so each wire is now connected and can be plugged into the encoder!

7/21-7/25 Week 5: Goals

  • Finish assembling the nodes for NeuroArm 3
  • Establish connection to them from computer
  • Begin writing xmos application

7/10

  • Began soldering together the connectors between the nodes and encoders. This is a long process that involves splitting two of the tiny wires inside the rainbow cables and soldering them to the crimp heads before crimping them into the casing.
  • Met the XMOS CEO and CTO on their visit to Stanford and exchanged a few words with them about their processor.

7/9

  • The encoder connectors arrived, but I've noticed that they have a different pinout than the Synapticon QEI Micro-MaTch connector.
    • Encoder Pin # <===> Synapticon QEI Pin #
      1. Ground <===> No Connect
      2. Ground <===> 5VDC Power
      3. Index - <===> Ground
      4. Index + <===> No Connect
      5. A- Channel <===> A- Channel
      6. A+ Channel <===> A+ Channel
      7. 5VDC Power <===> B- Channel
      8. 5VDC Power <===> B+ Channel
      9. B- Channel <===> Index -
      10. B+ Channel <===> Index +
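To keep the cross-wiring straight while soldering, the table above can be captured as a small lookup table. A minimal C sketch — the signal names come from the table, but the struct, array, and function names are made up for illustration:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical lookup table for the encoder -> Synapticon QEI adapter.
 * Signal names are taken from the 7/9 pinout table above; the struct and
 * function names are illustrative, not from the actual driver. */
typedef struct {
    int encoder_pin;              /* pin number on the encoder connector */
    const char *encoder_signal;   /* signal on the encoder side */
    const char *qei_signal;       /* signal on the Synapticon QEI side */
} PinMap;

static const PinMap kPinMap[] = {
    { 1, "Ground",     "No Connect" },
    { 2, "Ground",     "5VDC Power" },
    { 3, "Index -",    "Ground"     },
    { 4, "Index +",    "No Connect" },
    { 5, "A- Channel", "A- Channel" },
    { 6, "A+ Channel", "A+ Channel" },
    { 7, "5VDC Power", "B- Channel" },
    { 8, "5VDC Power", "B+ Channel" },
    { 9, "B- Channel", "Index -"    },
    {10, "B+ Channel", "Index +"    },
};

/* Look up which QEI-side signal a given encoder pin lands on. */
const char *qei_signal_for_encoder_pin(int pin) {
    for (size_t i = 0; i < sizeof kPinMap / sizeof kPinMap[0]; i++)
        if (kPinMap[i].encoder_pin == pin)
            return kPinMap[i].qei_signal;
    return NULL;  /* not a valid encoder pin */
}
```

Printing the table from code is also a quick way to double-check a freshly crimped connector against the intended mapping.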

7/8

  • Waiting for the encoder connectors to arrive

7/7

  • Continued organizing the repository and reading through the new xmos code that will constitute NeuroArm 3's driver

7/7-7/10 Week 3: Goals

  • Finish assembling the nodes for NeuroArm 3
  • Establish connection to them from computer
  • Begin writing xmos application

7/3

  • Just waiting on the connectors to arrive in the mail before I assemble the nodes
  • Downloaded the latest WaveBender release from Synapticon
    • Deleted the old applications_xmos files from the repository
    • Added the new applications_xmos files to the repository
    • Some of the files in the release were moved to applications_linux and src/synapticon as necessary
    • It will take some time to reconfigure everything to our liking

7/2

  • Mapped out the necessary connections to assemble the nodes
    • Ordered connector and assembly parts from US Digital for the encoders
    • Ordered connector set from Synapticon for the DC100's
  • Created a branch "neuroarm_version_2" in the stanford_synapticon repository
    • This branch will save the state of the repo as it exists now so future generations can recreate the wonder that was neuroarm 2
    • Master branch will continue to be updated with xmos and linux code for neuroarm 3 exclusively

7/1

  • Worked on documenting NeuroArm 2 and 3 on the wiki
  • Read about encoder connections, plan to finish this electrical assembly by next week
  • Planned with Karan and Samir to place power supplies AND nodes under the capstan on NeuroArm 3 - an entirely self-contained robot!

6/30

  • Today I began searching Synapticon's wiki for info on how to assemble the new nodes - Good news! Their wiki is much improved!!!
  • Power supplies arrived - went to the store and bought extension cables to plug them in
  • Organized and updated documentation binder
  • Met with Alex and Sam in video conference about ethercat fpga --> robot nodes

6/30-7/3 Week 2: Goals

  • Primary focus of the week is assembling DC 100 nodes and purchasing the appropriate connectors for them
  • Finish writing a few tutorials on NeuroArm 2 usage etc
  • Configure desktop environment for programming the new node software

6/27

  • Samir's Symposium
    • Samir - Introductions and an overview of our research outlook
    • Hari - Haptic fMRI: Design Challenges
    • Scott - Understanding and Using the NeuroArm Robotic Platform
    • Michael - Neural computation for robot control (and for other things!)
    • Michelle - Haptic fMRI Experimental Framework
    • Kenji - Dynamics3d and Simulation
    • Nayan - Spatial Vector and Rigid Body Dynamics algorithm
    • Jack - Haptic jamming fMRI results, etc.
    • Paul - Classifying Motion using BOLD Response in Haptic fMRI
    • Karan - NeuroArm Design and Particle Jamming Gripper
    • Kaidi - Arm Segmentation
    • April - GUI for the Kuka

6/26

  • Finalized and pushed app_template_vanilla to the repo - It applies a sine wave torque on joint zero
  • Finalized and pushed app_template_gui to the repo - It displays a representation of the robot and uses a poorly calibrated PD controller that lets the user move the end effector around in Cartesian coordinates using wasdqe on the keyboard
  • In order to perfect the robot so it does exactly what we want, we must:
    • Calibrate current to torque output on the xmos nodes correctly
    • Calibrate robot parameters correctly
    • Compute Jacobian and Op. Space Control Equations correctly
    • Calibrate PD control parameters correctly
  • Finished presentation for tomorrow

6/25

  • Cleaned up the applications_linux folder, so currently the only files in it are:
    • lib_neuroarm
    • lib_synapticon
    • app_narm_shmem_serv - This is the server application that YOUR app must interact with
    • nips_demo - This is the original demo application you can look at to build your own app, it interacts with the server app
  • freeglut (./narm_gui): OpenGL GLX extension not supported by display ':0'
    • The old Nvidia graphics card was removed, so we had to uninstall its driver and reinstall the onboard driver
  • Boiled nips_demo down into two basic applications
    • app_template_gui - A template application that utilizes the gui
    • app_template_vanilla - The most basic app that interacts with the shared memory

6/24

6/23

  • Updated this wiki with last year's stuff
  • Ordered Power Supply - We're ordering the same power supplies that we've used before for uniformity. Three 150W power supplies (24V, 6.5A) at a unit price of $53.46 from digikey. Product URL
  • Video Chat with Kwabena, Sam, and Alex who are at Cornell.
  • Verified that NeuroArm 2 is functional. Ran the shmem server and received q, dq, ddq feedback as expected. Plan to clean up the projects

6/23-6/27 Week 1: Goals

  • Order Power Supply and Begin Assembling NeuroArm 3 Nodes
  • Find a computer and desk to use and configure it for NeuroArm
  • Verify that NeuroArm 2 works, don't worry about rewriting driver to decrease acceleration feedback variability until after NeuroArm 3 works.
  • Write wiki page on using NeuroArm 2 and writing apps to interact with shmem server.
  • Prepare a short (15 min) presentation demonstrating my work, outlining function of robots, and how to write apps. (Sync with Samir on Wed night, present on Friday)

INITIAL UPDATE Summer 2014

As of the end of Spring Quarter 2014, NeuroArm 2 was operational and NeuroArm 3 was under construction. We had presented NeuroArm 2 at the NIPS conference in November and sent a robot to the team in Waterloo, Canada.

8/28

  • Finished poster today without floating data, included paragraph on operational space control
  • Printed out poster - It should be ready either by 5 pm today, or 1 pm tomorrow
  • Finished gluing neuroarm 2

8/27

  • Went to Meyer Library to ask about printing services
    • They should be available tomorrow to print my poster
  • Filled body of poster with text
    • Samir advised I include more images
    • Spent the day adding images and reworking the poster
  • Chris is finishing NeuroArm 2S - Will need to use this to get floating demo data for poster

8/26

  • Emailed Teri with the following project title:
    • Implementing Operational Space Control in NeuroArm: A Three DOF Robot
  • Finished implementing the Mathematica equations for operational space control
    • Have yet to compile successfully
    • Let Kevin work on my computer to build socket application
  • Wrote most of poster text

8/26-8/30 Week 10: Goals

  • Wow! It's the last week already.
  • Since we're running out of time, I am going to adhere to a more strict schedule:
    • Monday: Finish REU poster, email Teri with my project title, implement remaining driver functions
    • Tuesday: Print out poster, test basic operational space control, order parts for subsequent Neuroarms
    • Wednesday: Prepare for lab meeting, develop presentation involving Neuroarm 2S (this may be all-nighter depending on when Neuroarm 2S is available...)
    • Thursday: Present at lab meeting: 12:15 - 2:15, Present Poster: 3:30 - 5:00
    • Friday: Go home... :'(

8/23

  • I mostly worked out the forward transformation matrices last night
    • Bothered Kevin this morning to check against his calculations
    • Implemented the SNeuroarmConstants file with the following constants:
      1. gravity --> -9.8
      2. link lengths --> a vector of lengths of each of our 5 links
      3. link masses --> a vector of masses of each of our 5 links
      4. link com --> a vector of spatial vectors to the center of mass (com) of each link
  • I got the gravity compensation physically working... after a LOT of debugging >.<
    • Arm now floats (although it has a slight upwards pull to it)

8/22

  • Tested Synapticon's new ctrlproto_m.c with updated FREQUENCY and PRIORITY values:
    • The nodes now run with an average delay of 7.48*10^-5 seconds over 300,000 trials (std dev = 5.79*10^-9)
  • Added CMake files to applications_linux/lib_neuroarm
    • I looked at the code a little bit and got it to compile
  • Began writing out the transformation matrices for NeuroArm.
    • I decided it was an RRRbot (three rotations)
    • My Mathematica transformation matrices... don't seem to work:
(*
NeuroArm Mathematica Info
-------------------------

Naming convention:
0,1,2,3 = points on robot typically at links intersection
O = origin
q = generalized coordinate (radians)
D = displacement (meters)
R = rotation matrix
T = transformation matrix
*)

(*GENERALIZED COORDINATES*)
q0 := 0
q1 := 0
q2 := 0

(*LINK DISPLACEMENTS*)
D32 := 0.8 (*In the negative Z direction*)
D21 := 1.0 (*In the positive X direction*)
D10 := 0.6 (*In the positive Z direction*)

(*FORWARD ROTATION MATRICES*)
RO0 := {{Cos[q0],-Sin[q0],0},{Sin[q0],Cos[q0],0},{0,0,1}}
R01 := {{Cos[q1],0,-Sin[q1]},{0,1,0},{Sin[q1],0,Cos[q1]}}
R12 := {{Cos[q2],0,-Sin[q2]},{0,1,0},{Sin[q2],0,Cos[q2]}}
R23 := {{1,0,0},{0,1,0},{0,0,1}}

RO1 := RO0 . R01
RO2 := RO0 . R01 . R12
RO3 := RO0 . R01 . R12 . R23

(*FORWARD TRANSFORMATION MATRICES*)

(*Moves the beginning of a vector from the first index to the second while keeping the end of the vector at the same place*)

TO0 := {{Cos[q0],-Sin[q0],0,0},{Sin[q0],Cos[q0],0,0},{0,0,1,0},{0,0,0,1}}
T01 := {{Cos[q1],0,-Sin[q1],0},{0,1,0,0},{Sin[q1],0,Cos[q1],D10},{0,0,0,1}}
T12 := {{Cos[q2],0,-Sin[q2],D21},{0,1,0,0},{Sin[q2],0,Cos[q2],0},{0,0,0,1}}
T23 := {{1,0,0,0},{0,1,0,0},{0,0,1,-D32},{0,0,0,1}}

TO1 := TO0 . T01
TO2 := TO0 . T01 . T12
TO3 := TO0 . T01 . T12 . T23

(*Set the q values and multiply the transformation matrices by {0,0,0,1} to obtain new link coordinates*)
q0 = 0
q1 = Pi/6
q2 = 0
v = {0,0,0,1}
TO1 . v
TO2 . v
TO3 . v
MatrixForm[TO1]
MatrixForm[TO2]
MatrixForm[TO3]

8/21

  • Implemented sockets to communicate with Neurogrid
  • Added Sam to the stanford synapticon repo so he could work on the sockets stuff

8/20

  • I graphed the PD data from yesterday and a few other data sets that I have collected to prepare a poster
  • Got a list of parts from Hari to consider ordering the raspberry pi electronics setup

8/19

  • Began testing various PD constant values
    • I ran eight linear movements between 0 and 6 radians on each generalized coordinate varying the KP/KV values for each one
  • Kevin used our torque control function with Nengo to successfully compensate for gravity and apply forces (newtons) in the XYZ directions (end effector)
~/Code/Installed/nengo-f800ddb/NengoRobotControl/tests$ python clienthack.py 5008 ~/Code/stanford_synapticon.git/applications_linux/app_nengo_ctrl/torques.txt ~/Code/stanford_synapticon.git/applications_linux/app_nengo_ctrl/joints.txt 5007 ~/Code/stanford_synapticon.git/applications_linux/app_nengo_ctrl/commands.txt

~/Code/Installed/nengo-f800ddb$ ./nengo-cl NengoRobotControl/apps/neuroarm_control.py 127.0.0.1 5008 127.0.0.1 5007

~/Code/stanford_synapticon.git/applications_linux/app_nengo_ctrl$ ./master_application

8/19-8/23 Week 9: Goals

  • Implement high level operational space control driver
  • Collect tons of data on current progress, make graphs and a poster (prepare to present at lab meeting)
  • Order parts for new robots

8/14-16

  • Simplified the driver as a state machine with two states:
    1. Position control mode - commandMotorPositionsAndReadEncoders()
    2. Torque control mode - commandMotorTorquesAndReadEncoders()
  • The PD control constants are now sent from the master application and can be dynamically updated through the userdefined variable in the comm struct
    • The upper 16 bits of the userdefined integer hold the k_p value (0-65535) and the lower 16 bits hold the k_v value (0-65535)
    • I ramped up the default k_p and k_v to 2400 and 100 respectively. I will need to further test to find optimal values.
  • Commands to the driver now run in radians instead of milli-degrees
  • Beginning to implement a higher level operational space control driver that will output torques
    • Emailed Synapticon to ask how to increase the comm rate between computer and node to above 100 Hz
  • Began ordering parts for the new robots; we will buy at least 5 copies of Hari's setup, so we will have plenty of Raspberry Pi based electronics around in the future
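The gain packing described above fits in a couple of helper functions. A sketch in C — the layout (k_p in the upper 16 bits, k_v in the lower 16) is from the notes, but the function names are not from the actual driver:

```c
#include <stdint.h>

/* Pack both PD gains into one 32-bit userdefined field:
 * k_p in the upper 16 bits, k_v in the lower 16 bits. */
uint32_t pack_gains(uint16_t k_p, uint16_t k_v) {
    return ((uint32_t)k_p << 16) | k_v;
}

/* Recover the gains on the node side. */
void unpack_gains(uint32_t userdefined, uint16_t *k_p, uint16_t *k_v) {
    *k_p = (uint16_t)(userdefined >> 16);
    *k_v = (uint16_t)(userdefined & 0xFFFF);
}
```

With the default gains from the notes, pack_gains(2400, 100) produces the single integer the master would send, and unpack_gains recovers both values exactly.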

8/13

  • Made adjustments to the CStanSynapticonDriver
    • It's now working very well with position, torque, and record modes
    • I built several applications very quickly as a result: app_gc_sin_ctrl, app_driver_test
  • New robot parts / tools arrived today
    • Began cleaning out the backroom (Clark W124) to move our stuff in
    • boxes...

8/12

  • Implemented our high level CStanSynapticon Driver
    • Restructured the files in the repository to make using this driver as easy as possible
  • I thoroughly cleaned my desk in advance of the Waterloo people coming here tomorrow
    • All of the "other people's stuff" I just put in one container on the third shelf
    • All of our stuff is on the second shelf

8/12-8/16 Week 8: Goals

  • Finish position and torque control demo applications
  • Make poster for presentation using this data
  • Understand operational space control theory

8/9

  • Kevin and I tried calibrating the conversion from the magic number (current) to force at the end effector, using a scale from Kim and applying various torques.
    • I have decided that I will have succeeded in calibrating the robots when I can get one to do this.

8/8

  • Implemented the torque control xmos application to complement the pos ctrl one
    • I'll touch them both up a little bit, but we are close to having stable releases of both

8/7

  • Rewrote the script to flash the nodes
    • Due to the xflash command, the script must be run from the directory of the XDE binary
  • Here is a new image of the synapticon git file system.

8/6

  • Migrated Synapticon files into our own repository
  • There would be a ton of issues trying to migrate the files individually:
    • Edit each of the makefiles individually
    • Somehow correct the .project and .cproject xml files so that XDE can recognize and build the projects
  • I just dumped the old XDE workspace inside of applications_xmos.
    • I'll update the doc images to reflect this structure

8/5

  • I finished re-implementing the position control xmos interface
  • Wrote the record / playback demo with a clean terminal interface
  • Talked with Samir about our future robot design:
    • Better motor: RE 50 Ø50 mm, Graphite Brushes, 200 Watt Part number 370356
      • Brushed motor doesn't require commutation
      • This motor has a time constant 3.8 ms as opposed to our current 13.5 ms
    • Go with Hari's Raspberry Pi controllers rather than our current Synapticon ones

8/5-8/9 Week 7: Goals

  • Implement the standard torque control and position control xmos applications
  • Build Synapticon git and standard interfaces in a neat and organized manner
  • Continue learning operational space control basics
  • Begin recording data and making graphs for poster presentation

8/2

  • I rewrote the xmos application for position control to allow the user to toggle between record mode and actuation mode as they like.
while (1) {
    handleComm();
    switch(InOut.motor_command) {
        case RECORD_MODE: doRecord(); break;
        case ACTUATION_MODE: actuate(); break;
    }
}
  • I am no longer using the stop function so the motor never jerks following a stop command.

8/1

  • Today we did an informal position record and playback demo for most of the lab.
  • I drew this graphic that describes the basic layout of the stanford_synapticon git repository for controlling NeuroArm.
    • I can continue to migrate files into that repository and build the applications.
  • it's august... O.o

7/31

  • Today I began migrating the Synapticon libraries and files into our own NeuroArm repository.
    • Synapticon divided most of their code into modules, which makes this transition very easy.
    • I will upload an image of the NeuroArm repository file system layout after I am done with this transition.
  • I finally implemented the position demo!!!!
    • We can record a motion by pushing around the arm and then it plays back on its own.
    • Not perfect, and I'll continue to update it, but great to finally have it working.

7/30

  • Today, I am perfecting the PD control
  • Obstacle: gravity affects motor 2 (the vertical one)
    • I increased the default k_p and k_v values slightly, but there is still a constant offset from the destination position
    • I now add the equilibrium power in the vertical direction by default, so control power is added or subtracted on top of this constantly applied offset.
    • This is still kind of a hack but fixes the gravity issue temporarily:
const int antigravity = 1000;
power = antigravity + k_p * (exp_p-cur_p) + k_v * (exp_v-cur_v);
  • Implemented RecordPositions and PlaybackPositions for a position demo
    • Some bugs when playing back positions...
  • Here is an update video: Motor Control Demo [Youtube]

7/29

  • The replacement nodes from Synapticon arrived today.
    • I assembled them and attached them to NeuroArm.
  • I tested the PD control that I worked out last week on all three nodes simultaneously and they all worked!

7/29-8/2 Week 6: Goals

  • Assemble and test Synapticon replacement nodes.
  • Finish implementing PID control over all 3 nodes
  • Continue interfacing torque control with Nengo.

7/26

  • Instead of modeling position as a linear function, I will have the position control follow a cubic spline
    • I calculated this equation based on the CS225 tutorial.
    • Enter the following query into wolfram alpha to see the graph: "graph y=a+c*x+3*x^2*(b-a)/(t^2)-x^2*(2*c+d)/t-2*x^3*(b-a)/(t^3)+x^3*(c+d)/(t^2) let a=0 ; b=90 ; c=0 ; d=0 ; t=3"
    • The equation parameters are:
a = initial position (degrees)
b = final position (degrees)
c = initial velocity (degrees/second)
d = final velocity (degrees/second)
t = final time (seconds) //initial time = 0
x = any time between 0 and t seconds
y = position at time x (degrees)
  • It is possible to solve for t given a maximum allowed velocity and acceleration:
    • v_max >= (3/2)*abs(b-a)/t - (1/4)*abs(c+d)
      • In terms of t: t >= (3/2) * abs(b-a) / (v_max + (1/4)*abs(c+d))
    • a_max >= 6*abs(b-a)/(t^2) - 2*abs(2*c+d)/t
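The cubic profile and the v_max bound above translate directly into code. A C sketch with illustrative function names (assumes t > 0 and x in [0, t]):

```c
#include <math.h>

/* The cubic position profile from the Wolfram Alpha query above:
 * a,b = initial/final position, c,d = initial/final velocity,
 * t = final time, x = current time in [0, t]. */
double cubic_position(double a, double b, double c, double d,
                      double t, double x) {
    double dp = b - a;
    return a + c*x
         + 3.0*x*x*dp/(t*t) - x*x*(2.0*c + d)/t
         - 2.0*x*x*x*dp/(t*t*t) + x*x*x*(c + d)/(t*t);
}

/* Smallest t that keeps the peak velocity under v_max, from the bound
 * v_max >= (3/2)*abs(b-a)/t - (1/4)*abs(c+d) derived above. */
double min_time_for_vmax(double a, double b, double c, double d,
                         double v_max) {
    return 1.5 * fabs(b - a) / (v_max + 0.25 * fabs(c + d));
}
```

For example, moving from 0 to 90 degrees with zero end velocities and v_max = 45 deg/s gives t = 3 s, matching the parameter values in the Wolfram Alpha query above.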

7/25

http://cs.stanford.edu/groups/manips/images/stories/teaching/cs223a/handouts/control.pdf

  • Today I finally implemented system independent PD control:
//Just a few lines to get an idea of what I implemented
//des_p/des_v = desired position / velocity (user input)
//cur_p/cur_v = current position / velocity
//init_p = initial position (at the beginning of a move)
//k_p = 20, k_v = 9
while (1) {
    select {
        case EVERY_MILLISECOND:
            {cur_p, cur_v} = get_qei_data(c_qei);
            //We model expected position with a linear function
            exp_p = init_p + des_v * time_since_init;
            power = k_p * (exp_p - cur_p) + k_v * (exp_v - cur_v);
            set_commutation(power); //Make the motor spin that fast

        case WHEN_USER_INPUT_NEW_COMMAND:
            des_p = user_input_position;
            des_v = user_input_velocity;
            //If we need to move the motors
            if (cur_p != des_p) {
                init_p = cur_p;
                time_since_init = 0;
                //We model velocity as a constant function
                exp_v = des_v;
            }
    }
}

7/24

  • After carefully setting up our one remaining functional node, I finally began implementing basic position control.
//cur_p = current position
//des_p = desired position
//vi = velocity
while (difference(cur_p, des_p) > 5) {
    if (cur_p > des_p) set_commutation_sinusoidal(-vi);
    else set_commutation_sinusoidal(vi);
}
  • Basic P control (shown above)
    • The position is overshot by 10-40 degrees as it takes time to lock the motor (this was also experienced with Synapticon's PID control...)
    • Right now we are testing on a motor disconnected from the robot, so there is very little resistive torque. This means the lowest vi that generates a torque is about 50 degrees/second.
    • Decreased the overshoot to between 5 and 10 degrees by removing time delays on stop
  • Completed a basic sine wave demo [Youtube].

7/23

  • Could not finish torque calibrations on one of the nodes
    • After much debugging, it turns out that we have fried an additional node
    • Only the bottom DC 900 board is damaged, in fact only the circuit that outputs current to the motors
    • The EtherCAT and Core C22 boards are fine. They load and run code successfully, but have no output to the motors
  • Emailed Synapticon about this issue and requested an additional replacement

7/22

  • Contacted Synapticon support and mailed them the damaged SOMANET node.
  • Took torque calibrations on the motors

7/22-7/26 Week 5: Goals

  • Calibrate torque control using set_commutation and a multimeter to convert output amperage to torque based on the motor spec sheet
  • Implement position (PID) control and finally run the darn position demo. (Expectations... moderate albeit jerky success)
  • Work with Kevin on interfacing our torque control functions with Nengo / catch up with whatever he is working on.

7/19

  • Today I ripped apart pos_ctrl.xc and extracted its heart: set_commutation(int value)
    • Basically pos_ctrl.xc is a PID motor control wrapper around this set_commutation function
    • I switched pos_ctrl to utilize the qei function I wrote, but I will end up re-writing the PID control equations if we ever use this function
  • Based solely on set_commutation, we were able to set an amplitude value (between -13749 and 13749) and the motors would spin at that relative speed.
    • I found an xbox 360 controller and driver and used the right trigger to vary this amplitude: by depressing the right trigger, we could increase the constant spin rate of the motor.
    • We could also set the spin rate in a text file.
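The trigger-to-amplitude mapping could look like the sketch below. The +/-13749 amplitude range is from the notes above; the 0-255 trigger range and the function name are assumptions (typical for xbox triggers, but driver dependent):

```c
/* Map an xbox trigger reading to a set_commutation amplitude.
 * MAX_AMPLITUDE is the +/-13749 range noted above; the 0-255 trigger
 * range is an assumption about the controller driver. */
#define MAX_AMPLITUDE 13749

int trigger_to_amplitude(int trigger /* 0-255 */, int direction /* +1 or -1 */) {
    long amp = (long)trigger * MAX_AMPLITUDE / 255;  /* linear scaling */
    if (amp > MAX_AMPLITUDE) amp = MAX_AMPLITUDE;    /* clamp bad input */
    return direction * (int)amp;
}
```

A fully depressed trigger maps to the full amplitude, a released trigger to zero, with direction selecting the sign passed to set_commutation.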

7/18

  • Continued edits to the qei_server.xc and qei_client.xc system:
    • qei_server is simply an infinite loop that records:
      1. Current qei count (0-4096)
      2. Current direction (+-1)
      3. Timer outputs at each of the last 8 clicks
    • qei_client contains a single wrapper function get_qei_data(chanend c_qei) that returns
      1. Position = 360 * qei_count / 4096 (units = degrees)
      2. Direction (+-1)
      3. Velocity = 1 count distance / average of 8 time intervals (units = degrees/sec)
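In C-like terms, the two conversions above amount to the following sketch. The 4096 counts/rev figure is from the notes; the 100 MHz timer rate and the 9-timestamps-for-8-intervals layout are assumptions for illustration, not taken from the real qei_client:

```c
#include <math.h>

/* Sketch of the get_qei_data conversions described above. */
#define QEI_COUNTS_PER_REV 4096
#define TICKS_PER_SECOND   100000000.0  /* assumed 100 MHz reference timer */

/* Position = 360 * qei_count / 4096, in degrees. */
double qei_count_to_degrees(int count) {
    return 360.0 * count / QEI_COUNTS_PER_REV;
}

/* Velocity from the timer stamps of the last 8 clicks: one count's worth
 * of angle divided by the average interval between clicks.
 * timestamps[0..8] are assumed monotonically increasing timer readings. */
double qei_velocity_deg_per_sec(const unsigned timestamps[9], int direction) {
    double total_ticks = (double)(timestamps[8] - timestamps[0]);
    double avg_interval_s = total_ticks / 8.0 / TICKS_PER_SECOND;
    double deg_per_count = 360.0 / QEI_COUNTS_PER_REV;
    return direction * deg_per_count / avg_interval_s;
}
```

Averaging over the last 8 intervals smooths out the quantization jitter you would get from differencing a single pair of timestamps.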

7/17

  • Hari configured the Raspberry Pi and wrote a quadrature driver for it. I can now ssh into it when it is directly connected to my computer via ethernet over the ip address 169.254.0.2. Login: pi Password: raspberry
  • I began working to control the motors. Just as I rewrote the qei_server.xc file, I will also have to do that with pos_ctrl.xc as their position control function is extremely limited for our purposes.
    • It relies on the Hall sensor for position feedback! I will change this to QEI
    • I will remove extraneous code, and decompose functions like getPosition() and checkSignal(chanend)
    • I will not use the ack_rcv channel so we can dynamically update positions rather than wait for the previous command to finish before starting the next one

7/16

  • Obtained a response from Quadrature Encoders!!! I ended up rewriting all of qei_server.xc so that it now returns a count and direction. The time difference can also be used to calculate speed at a very high accuracy.
  • The Raspberry Pi arrived today with all of its associated parts. It looks awesome and Hari is going to configure it for me tonight, as he has an HDMI monitor. (HDMI is required for the initial configuration although I can use SSH afterwards)

7/15

  • Successfully flashed one of the nodes over ethercat using the fw_update program. I believe our previous issues with this program were that the ethercat master application was not turned off during the flashing process... sudo /etc/init.d/ethercat stop
  • We ordered more PicoBlade female connectors and ethernet cables to make our own ethercat cables out of.
  • Emailed Alvaro asking for a replacement SOMANET node and the updated module_qei folder for the Moving Magnets Demo.

7/15-7/19 Week 4: Goals

  • Obtain reliable QEI based motor control.
  • Assemble an alternate motor control setup with the Raspberry Pi.
  • Explore robot control theory, write tutorials on PID control and Operational Space control.

7/12

7/11

  • The somanet nodes failed to flash today and I could not figure out the problem. After trying each of the three nodes, one of them started smoking and now does not work. We ordered a Raspberry Pi board to develop an immediate alternative to the Synapticon controllers as backup.
  • After talking with Hari, it looks like the Raspberry Pi would not solve our problems, as I would have to program the commutation and position/torque control by myself. This leaves us waiting for Synapticon to pull through.

7/10

  • Alvaro sent us pricing information for new ethercat cables. Since this is expensive, we will try crimping them ourselves. TODO: Purchase PicoBlade terminals and ethernet cables.
  • I re-downloaded the most recent version of the Moving Magnets demo and installed it clean (i.e. no modifications to their code). Our QEI sensors still gave no feedback, so I emailed Alvaro again that this did not work. I sent him the documentation for our encoders and asked for specific debugging instructions.

7/9

  • Emailed Synapticon again to set up a meeting for the QEI. Also, asked for new ethercat cables.
  • Compiled a complete list of the electronic components that we would have to purchase for two additional robots. I also included one raspberry pi in the order.
  • Talked to Kevin about getting involved in Nengo, so we can connect it to NeuroArm. Asked him to write documentation for me to pick up.

7/8

  • Today we ran our position demo for Kwabena. I made the mistake of telling the motor to turn 2700 degrees (when I meant 270), which ended up snapping a cable on the robot. This embarrassing mistake was an additional setback... to an already semi-functional demo.
  • Emailed Synapticon asking for help debugging why our QEI's are not working.

7/8-7/12 Week 3: Goals

  • Use the Quadrature Encoders to control the motors rather than the Hall sensors as we are using now.
  • Fix the ethercat connections, use all three motors at once.
  • Continue perfecting the position demo:
    1. Keep the vertical motor from being affected by gravity
    2. Decrease the MIN_DX position change, so there are no jerks in movement
    3. Accurately replay velocities and timing in the position demo

7/5

  • Continued debugging and finishing the position control demo. We have two issues with the demo at present:
    1. The EtherCAT cord is broken between two of our nodes, forcing us to only use one node for the demo.
    2. The QEI sensor is not working, and the HALL sensor is very imprecise resulting in jerky movement.
  • For the demo with Kwabena on Monday, I have compiled three separate programs:
    1. Record: The nodes each print their current position readings and the computer takes each of these values and writes them into a file. I have a MIN_DX constant, so the computer only records changes in position above 45 degrees.
    2. Playback: The computer reads through the record file and sends a new position command to the nodes whenever there is a change in the file. This results in jerky motion, at least 45 degrees at a time and at high velocity.
    3. Manual: The computer constantly polls a text file. When we save the text file with new position values, they are sent to the motors.
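The MIN_DX filter in the Record program amounts to a small predicate. A C sketch — the 45-degree threshold is from the notes above, while the struct and function names are illustrative:

```c
#include <math.h>

/* Sketch of the record-mode filter described above: a new sample is
 * written to the record file only when it differs from the last recorded
 * position by more than MIN_DX degrees. */
#define MIN_DX 45.0

typedef struct {
    double last_recorded;  /* last position written to the file */
    int have_sample;       /* 0 until the first sample is recorded */
} Recorder;

/* Returns 1 if this position reading should be written out. */
int recorder_should_record(Recorder *r, double position) {
    if (!r->have_sample || fabs(position - r->last_recorded) > MIN_DX) {
        r->have_sample = 1;
        r->last_recorded = position;
        return 1;
    }
    return 0;
}
```

This is also why playback is jerky: any motion smaller than MIN_DX between recorded samples is simply lost.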

7/3

  • Communication occurs between main_single_node.c (on the computer) and internal_comm.xc (on the nodes). By simply setting the slv_handles[0] fields (torque in, torque out, pos in, pos out, etc) on the computer, the corresponding fields in the InOut struct are updated on the nodes, and vice versa.
    • The fields are as follows:
InOut Struct (On Nodes)       slv_handles[0] (On Computer)
-----------------------       ----------------------------
ctrl_motor             <---   motorctrl_cmd --------------⌝
in_position            <---   position_setpoint           |
in_speed               <---   speed_setpoint              |
in_torque              <---   torque_setpoint             |
in_userdefined         <---   userdef_setpoint            |
command_number                motorctrl_cmd_readback <----⌟
out_position           --->   position_in
out_speed              --->   speed_in
out_torque             --->   torque_in
out_userdefined        --->   userdef_in
  • There are currently two issues that stand in the way of running the position control demo:
    1. The QEI Sensor is not giving successful readings. It returns 0 for all position and velocity queries.
    2. The Motor locks up on program start. We need to keep it unlocked if we want to read positions while manually moving the motors for our demo. I believe this command may be in watchdog.h, however we do not have the source for that file!
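As I read the table above, each communication cycle copies fields in the directions shown. A C sketch of the two mirrored structs — the field names follow the table, but the types, the exchange function, and my reading of the command/readback arrows are assumptions:

```c
/* Sketch of the two mirrored communication structs from the table above.
 * Field names follow the table; the real headers may use other types. */
typedef struct {                 /* InOut, on the nodes */
    int ctrl_motor;
    int in_position, in_speed, in_torque, in_userdefined;
    int command_number;
    int out_position, out_speed, out_torque, out_userdefined;
} InOut;

typedef struct {                 /* slv_handles[0], on the computer */
    int motorctrl_cmd;
    int position_setpoint, speed_setpoint, torque_setpoint, userdef_setpoint;
    int motorctrl_cmd_readback;
    int position_in, speed_in, torque_in, userdef_in;
} SlaveHandle;

/* One cycle as described: setpoints flow computer -> node, measurements
 * flow node -> computer, and the node's command number echoes back. */
void exchange(SlaveHandle *m, InOut *s) {
    s->ctrl_motor     = m->motorctrl_cmd;
    s->in_position    = m->position_setpoint;
    s->in_speed       = m->speed_setpoint;
    s->in_torque      = m->torque_setpoint;
    s->in_userdefined = m->userdef_setpoint;
    m->motorctrl_cmd_readback = s->command_number;
    m->position_in = s->out_position;
    m->speed_in    = s->out_speed;
    m->torque_in   = s->out_torque;
    m->userdef_in  = s->out_userdefined;
}
```

Comparing motorctrl_cmd against motorctrl_cmd_readback is how the master can tell the node has actually seen the latest command.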

7/2

  • Tried to run the new release package from Synapticon for hours this morning with absolutely no success. After much fruitless work, I discovered that the fw_update program in fact was doing nothing. After using the JTAG adaptor to flash the nodes, I was able to run the demo programs and for the very first time we obtained movement from the motors.
  • Modifying the code in the demo programs, I found the QEI sensor is broken - it always returns position 0. The Hall sensor appears to give accurate readings, however.
  • Wrote a program to turn on and off the onboard LED's.

7/1

  • Tried to compile the master motor control program with an included call to move() in pos_ctrl.h, however the libraries were broken. Synapticon has provided unreliable code up until this point.
  • Went over a brief Solidworks tutorial with Chris.
  • Wrote the documentation for the flashing process of loading code onto somanet nodes.
  • Downloaded the new release package from Synapticon and tried installing this motor control demo, but received absolutely no feedback from the nodes... Debugging on this issue to continue tomorrow.

7/1-7/5 Week 2: Goals

  • Finalize the electrical documentation on the wiki, and continue work on the software documentation.
  • Obtain QEI and Hall sensor readings from the NeuroArm and send position and velocity commands to the motors.
  • Write a program that records position information and replays it.

6/28

  • Samir and I talked with an engineer from Synapticon last night, and we were finally able to flash the sample code, i.e., load it onto the Synapticon controllers. This involved:
    1. attaching the JTAG debugging chip to each of the three nodes individually
    2. turning off the ethercat master
    3. running flash > flash configuration in XDE.
      • This is a relatively laborious way to load (or flash) the code. Ideally, we should be able to flash the Synapticon nodes over ethercat rather than through the debugging chip. Alvaro from Synapticon sent us a new (hopefully working) version of the fw_update (firmware update) executable, and I will try installing it later today.
  • This morning, I installed a new ethercat card in the Ubuntu computer allowing us to access the internet and communicate with the robot simultaneously.
  • I am continuing to write wiki pages documenting how to configure and use NeuroArm.
  • Using the new firmware update program that Alvaro sent us, we were successfully able to flash the Synapticon nodes over ethercat from the command line without the use of the debugging chip. Now programming the robot has become feasible and the process can be scripted.

6/27

  • Created a new map [Image] of the NeuroArm computer's filesystem. I will create one final map (and delete the extraneous files) once the software configuration is done and I begin documenting the software installation process.
  • Again today, I continued trying to run sample code from the XDE environment on the robot. I followed this tutorial, which basically has me do the following:
    1. Run the ethercat driver: sudo /etc/init.d/ethercat start
    2. Go into the project folder we want to build for the robot, sn_sncn_ctrlproto, and compile the project.
    3. Run the master script on the main computer from terminal:
      sudo ./linux_ctrlproto_master_example/bin/linux_ctrlproto_ecmaster_example
    4. Run the slave script on the Synapticon controllers by building in XDE.
  • The issue I encountered was that XDE could not program the Synapticon controllers over ethercat. Even though I added the C22-ComECAT zip file via help --> add new hardware in XDE (which supposedly allows programming the devices over ethercat), no new hardware link appeared when running the project. Instead I was met with a "run in simulator" option, which accomplished nothing.

6/26

  • Created a visual map [Image] of the NeuroArm computer's file system. Most of the documents folder appears to be useless. The Synapticon folder is on the desktop, and the XDE programming environment sits on its own in a code folder...
    • TODO: Restructure the file system layout
  • Upon reinstalling the ethercat system, we were able to establish communication with the robot. However, I was unable to run the sample program from the XDE environment on the robot.
  • Samir and I restructured the files on the main computer; I will post a new layout image tomorrow.

6/25

  1. Install ethercat system for communication with synapticon controllers
  2. Run XMOS applications on robot
  • Encountered issues with the ethercat system after installation on the main computer. "sudo /etc/init.d/ethercat status" revealed that the device I specified to be the master was not communicating. Debugging to continue tomorrow.

6/24

  • Created this Wiki Page
  • Looked over XMOS sample code: Source code documentation / Main_Page
  • Ordered Power Supply - We ordered three 150W power supplies (24V, 6.5A) at a unit price of $53.46 from Digi-Key rather than a single (24V, 18A) supply, which we could only find at a much higher price: Product URL

6/24-6/28 Week 1: Goals

  • Order Power Supply and Connect to NeuroArm
  • Download XMOS Code and Install on Synapticon Parts
  • Run a Simple Program that Moves the Motors

INITIAL UPDATE

As of the end of Spring Quarter, 2013, NeuroArm was mechanically constructed (thanks to the work of Chris Aholt) and I had soldered electrical connections between the motors/encoders and Synapticon motor controllers. Samir had installed the ethercat program that we would use to communicate with the Synapticon controllers and downloaded the XMOS files from Synapticon that we would use to control NeuroArm.
