PyVXI-11 / Home

PyVXI-11: Pythonic and C++ electronics lab automation

PyVXI-11 is both a Python and a C++11 extension supporting digital communications between a computer and electronics laboratory equipment such as oscilloscopes, network and spectrum analyzers, multimeters, etc., using the standard VXI-11 protocol over Ethernet and TCP/IP. The programming language supported by lab instruments is typically SCPI, extensively documented in these instruments' programming manuals. Python is a perfect language for manipulating SCPI, which is plain text, and, with NumPy, the numerical results produced by electronics laboratory equipment. This package is a modern, Pythonic alternative to LabView, Linux GPIB, and VISA. It was created because LabView and VISA are proprietary solutions that are not particularly portable, especially to Linux. Linux GPIB is a mature open source package that does run on Linux, but it is modeled after the old fashioned "ibconf" and friends API. PyVXI-11 uses a modern programming style, both in Python applications and in the underlying C++11 extension implementation. Unlike Linux GPIB, PyVXI-11 makes no effort to support GPIB hardware; instead, it relies on a hardware VXI-11 to GPIB transceiver such as the ICS-8065.
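
The claim that Python suits SCPI text handling is easy to illustrate: an *IDN? reply (like the ones shown in the examples below) is just a comma-separated string. A minimal sketch, independent of PyVXI-11:

```python
# Parse the comma-separated fields of a SCPI *IDN? response.
# The sample string matches an instrument reply shown later on this page.
idn = 'Agilent Technologies,E8363C,SG49030112,A.09.42.10'
maker, model, serial, firmware = idn.split(',')

print(model)     # E8363C
print(firmware)  # A.09.42.10
```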

PyVXI-11 supports service request interrupts (SRQ) using a signal/slot mechanism (boost::signals2), asynchronous I/O, and hopefully any other advanced requirements of an electronics R&D lab.

Compatibility and a Plea for Collaborators using Windows or OS-X

This software was developed and runs great on Linux, as long as your compiler and boost versions are new enough. In theory, boost should enable easy porting of PyVXI-11 to Windows and OS-X. Python is already supported everywhere. I would love to see ports to Windows and OS-X, and I think they will be relatively easy, but I need colleagues who are interested and knowledgeable about release engineering on either of those platforms to make it happen. If you are interested, contact me and we will get to work!



Requirements

  1. C++11 compliant compiler. Developed with gcc-4.8.1
  2. Boost 1.54 or newer. Boost::Log finally appeared in 1.54, which drives the version requirement. Dependency on Boost is deep and completely non-negotiable for PyVXI-11.

The build system is WAF. You do not have to install it; as is normal with WAF, it ships with PyVXI-11.

If WAF complains about not finding boost::log, it is because your boost is too old: install 1.54 or newer!

Most distributions do not yet ship gcc-4.8 as the default compiler because, as of this writing, it is still quite new. Ensure your box has it somewhere; most distros allow more than one compiler to be installed at a time. Make sure WAF finds the right compiler by setting it on the command line:

$ export CXX=g++-4.8.1

If you are not using gcc, we will probably have to tweak wscript to generalize the compiler and linker options. Contact me and we will work together on that. In particular, the WAF function check_python_headers() appears to get the wrong answer on my machine, and the install directory is definitely not portable.

It is likely that your distro's default Boost version is also too old, so check that too. One is supposed to be able to download the WAF boost extension automatically with "waf update --files=boost", but this has never worked for me at my site (undoubtedly a paranoid firewall). So I keep a manually downloaded copy in the root directory. Just leave it there, and WAF should find boost.

$ ./waf configure

and, hopefully,

$ ./waf build

Basic Usage

Synchronous I/O

The simplest possible vxi11 program:

>>> import vxi11
>>> with vxi11.client('', 'inst0', 'myvna') as vna:
...     print'*IDN?')
Agilent Technologies,E8363C,SG49030112,A.09.42.10

This instantiates a client at the given IP address and prints the device's self-reported identification using the SCPI common command *IDN?. The with statement creates a Python context in which the VXI-11 client named vna is defined. Using a context ensures that the VXI-11 connection is always properly destroyed at the end of the script, even if there are exceptions or other irregular exits. This is important to leave your equipment ready for the next run; the instrument otherwise has no idea that your Python process exited. Debug messages will be identified by the "myvna" prefix. The inst0 argument is instrument specific and tells the device which internal feature to use; in my lab, this is always "inst0" except for the ICS-8065.

Here is a similar example for an old fashioned GPIB device bridged by the ICS-8065:

>>> import vxi11
>>> with vxi11.client('', 'gpib0,4', 'field') as field:
...     print'*IDN?')
ETS-Lindgren,HI-6100 Field Monitor,0, REV 2.2

assuming the ICS-8065 is at the given IP address, and the instrument is at address 4 on the GPIB bus.

VXI-11 Core Channel Commands

PyVXI-11 clients have member functions to call the VXI-11 core channel commands.

Read the status byte (returns a Python integer)

>>> client.readstb()

Trigger the instrument

>>> client.trigger()

Clear the instrument status

>>> client.clear()

Set remote and local modes

>>> client.remote()
>>> client.local()

The C++11 API has additional core channel commands that are not exposed in Python: destroy_link() and create_intr_chan(). Use vxi11.srq_client instead for a much nicer, Pythonic interface to service requests.

Debugging Facilities

PyVXI-11 will increase verbosity if you ask it to. This is useful for debugging the conversation between your script and your instrument. The class vxi11.loglevel is an enumerated type with values of decreasing verbosity:

  • debug: log everything
  • srq: log service requests
  • info: log the occasional useful information
  • error: log errors
  • silent: never log anything, used mostly internally for unit testing

You can set or query the global loglevel with client.default_loglevel; this is a static variable that affects every client. It is also in effect during client construction, showing a few extra log messages when links are created.

Client specific loglevels are accessed with client.logfilter. To see a log message, it has to pass both the global default and client specific loglevel. The logging system is implemented with boost::log.
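
The two-level rule (a message must pass both the global default and the client-specific filter) can be sketched in a few lines; the names and numeric values here are illustrative stand-ins, not the real vxi11 implementation:

```python
# Model of PyVXI-11's two-level log filtering: a message is emitted only if
# its level passes BOTH the global default and the per-client filter.
# Numeric values are illustrative; vxi11.loglevel is the real enum.
DEBUG, SRQ, INFO, ERROR, SILENT = range(5)  # decreasing verbosity

def passes(msg_level, global_level, client_level):
    # A filter set to INFO admits INFO and anything less verbose (ERROR),
    # so the message level must be at or above both thresholds.
    return msg_level >= global_level and msg_level >= client_level

print(passes(INFO, DEBUG, ERROR))   # False: the client filter blocks it
print(passes(ERROR, INFO, ERROR))   # True: passes both filters
```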

Here is how the example above would look with logging enabled:

with vxi11.client('', 'inst0', 'myvna') as vna:
    vna.logfilter = vxi11.loglevel.debug
    print'*IDN?')
myvna <-- *IDN?
myvna --> Agilent Technologies,E8363C,SG49030112,A.09.42.10
Agilent Technologies,E8363C,SG49030112,A.09.42.10
myvna destroyed VXI-11 link

Instrument Discovery

From the python prompt:

>>> import vxi11
>>> print['192.168.0.%i' % ip for ip in range(1,256)])
['', '', '', '']

This will return a list of any VXI-11 compliant instruments on your LAN with IP addresses in the probed range. The example shows three instruments of mine that happened to be powered on at the time, plus my GPIB transceiver. The call returns after one RPC timeout interval, which on my computer is several seconds (this call actually launches 256 threads to probe your LAN in parallel). You can tweak the range according to your local LAN configuration. You do not have to use discovery at all, because you likely already know the IP addresses of your gear.
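
The one-thread-per-address strategy is easy to sketch with a thread pool: all probes are in flight at once, so the total wait is one timeout rather than 256. Everything here (the function name, the injectable fake probe) is illustrative, not PyVXI-11's actual implementation; a real probe would attempt a VXI-11 RPC with a short timeout.

```python
# Parallel LAN probing sketch: one worker per candidate address.
from concurrent.futures import ThreadPoolExecutor

def discover(hosts, probe):
    """Return the subset of hosts for which probe(host) is True."""
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        alive = pool.map(probe, hosts)  # all probes run concurrently
        return [h for h, ok in zip(hosts, alive) if ok]

# Demonstration with a fake probe standing in for the RPC check.
up = {'192.168.0.5', '192.168.0.9'}
hosts = ['192.168.0.%i' % ip for ip in range(1, 11)]
print(discover(hosts, lambda h: h in up))  # ['192.168.0.5', '192.168.0.9']
```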

Python Context Manager

In my experience, it is easy to confuse most GPIB or SCPI instruments with irregular (i.e., buggy) exits. Mix threads, sockets, and service requests (SRQ), all of which are used in PyVXI-11, and opportunities are ripe for leaving your lab in a state of software disarray when your script exits. Python's destructors (__del__(self)) are not useful here, for technical reasons internal to Python, yet with PyVXI-11 it is critical that resources get properly cleaned up. To ensure that destroy() is always called no matter what, you should always use a Python context manager. In fact, the basic PyVXI-11 object, vxi11.client, is such a context manager.
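
What the context manager guarantees can be modeled in a few lines; FakeClient and its destroy() are illustrative stand-ins, not the real vxi11.client:

```python
# __exit__ runs on every path out of the with-block, so destroy() is
# guaranteed even when the body raises an exception.
class FakeClient(object):
    def __init__(self):
        self.destroyed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.destroy()
        return False  # never swallow the exception
    def destroy(self):
        self.destroyed = True  # real client would tear down the VXI-11 link

c = FakeClient()
try:
    with c:
        raise RuntimeError('irregular exit')
except RuntimeError:
    pass
print(c.destroyed)  # True: cleanup ran despite the exception
```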

Service Requests (SRQ)

Service request features are supported by two more advanced PyVXI-11 client types, vxi11.srq_client and vxi11.srq_client_with_queue. All the above examples work the same with these types, because they inherit the synchronous interface of the basic vxi11.client. These clients park a daemon thread on the VXI-11 "interrupt channel" feature of the spec. When an instrument asserts SRQ, the thread manages the interrupt request with respect to the hardware and invokes any software signal handlers your application has registered. The underlying signal mechanism is boost::signals2, with a light boost::python wrapper to expose the signal object. You can register as many callbacks with boost::signals2 as you need, not just one. In fact, the logging system already has one handler per signal, just to print out a message that the interrupt occurred. Real work would be done with additional handlers.
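
The connect-many-handlers pattern can be sketched in pure Python; this Signal class is a toy stand-in for the boost::signals2 object, showing only why multiple handlers (logging plus real work) coexist on one signal:

```python
# Toy signal/slot mechanism: any number of handlers may connect,
# and all of them fire when the SRQ thread emits the signal.
class Signal(object):
    def __init__(self):
        self._slots = []
    def connect(self, fn):
        self._slots.append(fn)
    def __call__(self):
        for fn in self._slots:  # invoke every registered handler in order
            fn()

message_available = Signal()
log = []
message_available.connect(lambda: log.append('log: SRQ arrived'))   # logging handler
message_available.connect(lambda: log.append('work: read buffer'))  # real work
message_available()  # what the interrupt thread would do on SRQ
print(log)  # ['log: SRQ arrived', 'work: read buffer']
```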

These are the signals currently supported; they are all familiar from your instrument's programming manual's description of the status byte and the standard event status register. Status byte:

  • client.status.device is a device dependent status interrupt
  • client.status.error is an error interrupt
  • client.status.questionable is a device dependent questionable status interrupt. (I've never actually used this.)
  • client.status.message is a message available (MAV) interrupt
  • client.status.operation is an operation status interrupt

and the standard event register:

  • client.event.OPC is the "operation complete" message
  • client.event.query is the query error message
  • client.event.DDE is a device dependent error
  • client.event.execution is an execution error
  • client.event.command is a command error
  • client.event.user is a user defined event
  • client.event.PON is the power on message

One annoying thing is that you have to pass in your own client IP address so the remote instrument knows where to send the service request (i.e., to your Python script). If anyone knows how to get this information automatically from a socket, tell me and I'll improve the code. In this example, the control computer (running the Python script) and the server (a network analyzer) are at the two IP addresses given.

import vxi11

def errormsg():
    print 'command error!'

with vxi11.srq_client('', '', 'inst0', 'myvna') as vna:
    vna.logfilter = vxi11.loglevel.debug
    vna.write('*SRE 32; *ESE 255') # tell hardware to SRQ
    vna.enable_srq() # tell VXI-11 to handle the SRQ
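
The numbers in *SRE 32; *ESE 255 are IEEE 488.2 bit masks, not magic: *SRE 32 sets bit 5 of the Service Request Enable register (ESB, the standard-event summary), and *ESE 255 enables all eight standard event bits, so any standard event raises SRQ. The later MAV example uses *SRE 16 (bit 4). A quick sketch, plain Python with no PyVXI-11 needed:

```python
# IEEE 488.2 enable-register bit positions used in the examples.
ESB = 1 << 5   # event status summary bit in the status byte
MAV = 1 << 4   # message available bit

print('*SRE %d; *ESE %d' % (ESB, 0xFF))  # *SRE 32; *ESE 255
print('*SRE %d' % MAV)                   # *SRE 16
```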

Asynchronous I/O

Asynchronous I/O is supported by the vxi11.queue_client and vxi11.srq_client_with_queue types. These clients buffer read, write, and SRQ calls in a separate thread, which executes each async command, in sequence, synchronously. The interface is:

  • client.queue_write(message, timeout) behaves exactly like client.write(message, timeout).
  • client.queue_read(query, requestSize, dtype, timeout) behaves like, requestSize, dtype, timeout), except that instead of immediately returning a Python string or NumPy array, you get a future that will return a Python string or NumPy array sometime later. The future object supports querying whether the answer has arrived yet with future.is_ready(), blocking the calling thread until the answer appears with future.wait(), and actually retrieving the answer with future.get(). future.get() blocks, just like future.wait(), if the result is not yet available.
  • client.queue_srqwait(signal) will block the queue until an SRQ occurs. The particular SRQ expected is indicated by the signal argument, which can be any of the members of client.status or client.event documented in the SRQ section. This feature is available only on the vxi11.srq_client_with_queue client type, not the vanilla vxi11.queue_client.
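
The queue-plus-future pattern maps directly onto Python's standard concurrent.futures; this sketch uses a one-thread pool (so commands execute in order) and a fake read function, not the real client. Note the standard future spells the operations done()/result() where PyVXI-11's future uses is_ready()/get():

```python
from concurrent.futures import ThreadPoolExecutor

def fake_read(query):
    # Stand-in for the synchronous read; a real client would do RPC I/O.
    return 'Agilent Technologies,E5071C,SG46300362,A.11.23'

with ThreadPoolExecutor(max_workers=1) as queue:  # one worker => in-order
    fut = queue.submit(fake_read, '*IDN?')  # returns a future immediately
    print(fut.result())  # blocks until the worker thread has the answer
```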

Here is the simplest example of calling an asynchronous function:

>>> vxi11.client.default_loglevel = vxi11.loglevel.debug
>>> with vxi11.queue_client('', 'inst0', 'myvna') as vna:
...   idn = vna.queue_read('*IDN?')
...   print idn.get()
myvna created VXI-11 link
myvna[queue] dispatch thread "vxiqueue_myvna"
myvna[queue] <-- *IDN?
myvna[queue] --> Agilent Technologies,E5071C,SG46300362,A.11.23
Agilent Technologies,E5071C,SG46300362,A.11.23
myvna queue daemon thread joined
myvna destroyed VXI-11 link

Here is a similar example, using a service request on the "message available" signal:

>>> vxi11.client.default_loglevel = vxi11.loglevel.debug
>>> with vxi11.srq_client_with_queue('', '', 'inst0', 'myvna') as vna:
...   vna.clear()
...   vna.write('*CLS; *SRE 16') # allow the instrument to assert MAV SRQ
...   vna.enable_srq()
...   vna.queue_write('*IDN?')
...   vna.queue_srqwait(vna.status.message)
...   idn = vna.queue_read()
...   print idn.get()
myvna created VXI-11 link
myvna[SRQ] dispatch thread "vxisrq_myvna"
myvna[SRQ] daemon entering svc_run()
myvna created VXI-11 interrupt channel on port 46511
myvna[queue] dispatch thread "vxiqueue_myvna"
myvna <-- *CLS; *SRE 16
myvna SRQ enable callback 0x1550340
myvna[queue] <-- *IDN?
myvna[SRQ] interrupt arrived from 0x1550340
myvna[SRQ] RPC lock acquired, dispatching service
myvna[SRQ] SRQ disable
myvna[SRQ] STB --> 0x50
myvna[SRQ] status: Message Available
myvna[SRQ] interrupt service complete
myvna[SRQ] SRQ enable callback 0x1550340
myvna[queue] --> Agilent Technologies,E5071C,SG46300362,A.11.23
Agilent Technologies,E5071C,SG46300362,A.11.23
myvna[SRQ] daemon exited svc_run() and terminating
myvna destroyed VXI-11 interrupt channel
myvna SRQ daemon thread joined
myvna queue daemon thread joined
myvna destroyed VXI-11 link

Usage Example

The examples above are the simplest possible, but they are not how I actually use PyVXI-11. I always derive subclasses to handle the specific instruments that I have:

class AgilentE5071C(vxi11.srq_client_with_queue):
    def __init__(self, ip='', logname='ENA'):
        vxi11.srq_client_with_queue.__init__(self, '', ip, 'inst0', logname)

    def special_vna_function(self):
        # instrument specific SCPI commands go here
        pass

with AgilentE5071C() as vna:
    print vna.special_vna_function()

Programming the ICS-8065

The ICS-8065 bridge has its own RPC programmability, similar to but not the same as VXI-11. You probably do not need these features for most laboratory automation, but I found them convenient to help debug PyVXI-11. There is a support class, vxi11.ICS8065, to access these features. The features are by and large the same as what you get if you point your browser at the ICS8065, except that the class lets you do it programmatically from Python.

Probably the most interesting feature supported by the ICS8065's own RPC is a reboot command. In my experience, the ICS8065 often runs out of resources and does not always properly shut down after a script exits. I'm hoping these bugs are fewer now, but I have definitely rebooted the ICS8065 by software command more times than I can count, while debugging PyVXI-11.

Here is an example script:

import vxi11, argparse

with vxi11.ICS8065() as ics:
    print 'interface_name:',ics.interface_name()
    print 'gpib_address:',ics.gpib_address()
    print 'comm_timeout',ics.comm_timeout()
    print 'hostname',ics.hostname()
    print 'idn:',ics.idn()
    print 'error_log:',ics.error_log()

    parser = argparse.ArgumentParser(description='Control ICS8065')
    parser.add_argument('--reboot', action='store_true', help='reboot ICS8065')
    args = parser.parse_args()
    if args.reboot:
        print 'rebooting...'

Tested Equipment

  • ICS-8065 GPIB to Ethernet transceiver. This one bears the brunt of testing because of the number of older, GPIB-only devices that I use, and the vendor has been very helpful in working out bugs in both my own software and, in one case, theirs.
  • Tektronix DPO-7000 series digitizing oscilloscope
  • Agilent E8363C PNA network analyzer
  • Agilent ENA network analyzer
  • Agilent 81160A Pulse/Function/Arbitrary Generator
  • Rohde & Schwarz SML-03 and SMR-40 frequency sources

Through the ICS-8065 transceiver, these GPIB devices are well tested:

  • ETS-Lindgren 2090 controller
  • Keithley 238 Source Measure Unit
  • Keithley 2001 Multimeter
  • Stanford Research Systems DG-645 Delay Generator
  • Pendulum CNT-01 Counter
  • Agilent 81110A Pulse Generator
  • Agilent E4448A PSA Spectrum Analyzer

Unsupported Features

I've never seemed to need these, and it is not clear how widely they are supported by the gear in my lab, anyway:

  • VXI-11 supports an "abort" channel which is not yet supported by PyVXI-11.
  • do_cmd()

Design Choices

Why use C++ rather than pure Python for the extension?

  1. (best reason) Python does not support true, concurrently executing threads. Chances are this will always be true, as far as I know, because the GIL (Global Interpreter Lock) is apparently difficult to eradicate from the Python interpreter. Launching threads in C++ achieves real concurrency by avoiding the GIL altogether. Using boost::threads, perhaps someday switching to native C++ threads, retains the cross-platform portability that pure Python would otherwise offer. PyVXI-11 uses threads extensively, especially for the asynchronous and SRQ features.
  2. OpenRPC (i.e., Sun RPC) uses an old fashioned C API, compatible with C++, not pure Python. I know there are pure Python hacks to achieve RPC as well, but using the standard C library functions seems more elegant to me, and it is one less compatibility issue to worry about if the implementations on the various platforms are ever modified.
  3. If you need to, and enjoy the pain of text (SCPI) manipulation in C++, you could actually control your electronics lab using C++ rather than Python, using the C++ objects out of PyVXI-11. You can inspect the included test suite for examples of using PyVXI-11 directly from C++.

What is the relationship to Linux-GPIB and LabView?

PyVXI-11 is an alternative to either Linux-GPIB or LabView. They serve basically the same functions, but are not compatible. I personally have never used either alternative for real work, so my views may be underinformed, but here is how I see it:

  1. PyVXI-11 is open source and intended to eventually be fully portable between Linux, OS-X, and Windows. I use Linux, where it works now, and hope to find collaborators to support the other major platforms.
  2. PyVXI-11 has a very modern design, API, and implementation.
  3. Linux-GPIB is also open source, but aims to be compatible with similar proprietary systems on Windows (and OS-X?). The Linux-GPIB API is very old fashioned, similar to the original proprietary systems it emulates.
  4. PyVXI-11 makes no attempt to support GPIB hardware. Linux-GPIB does. PyVXI-11 instead relies on a VXI-11 compatible ethernet to GPIB bridge device, such as the ICS-8065. Consequently, PyVXI-11 is a much simpler system, but requires a particular hardware device if you want to use GPIB instruments in your lab. Larger instruments ($$$) usually have VXI-11 features built in to them served up on the ethernet port, so the bridge device is not needed for those; hook them straight to your lab's ethernet switch.
  5. PyVXI-11, by itself, is rather low level. It manages raw SCPI commands and provides asynchronous I/O and asynchronous SRQ handling. It also handles NumPy arrays natively. LabView, on the other hand, supplies many graphical features and vendor supplied modules that know about an instrument's particular capabilities. Python is, of course, naturally capable of arbitrary graphics as well using a variety of toolkits, but you have to code these interfaces yourself. So if you need portability or complete control of your instrument as a power user, and do not mind reading programming manuals, PyVXI-11 may appeal. If you only need something really simple and are using Windows anyway, LabView may be easier to use.

Why is the testsuite written in C++?

All the hardest bugs in PyVXI-11 are, or were, related (not surprisingly) to thread concurrency issues like races and synchronization. On Linux, the best thread debugging tools are part of the valgrind suite, which is easier to use with compiled C++ programs. One big reason is that Python itself is far from valgrind clean, sometimes for legitimate reasons, so running valgrind and Python together is extra painful. So the testsuite is written in C++.

Why Boost::Test?

There are lots of interesting testing frameworks for C++ available these days, and many good ones among the choices. Because PyVXI-11 already had deep boost dependencies, Boost::Test was chosen, if for no other reason, to avoid yet another dependency.