Frequently Asked Questions (FAQ)
- What can I do if I get stuck?
- Where is the Bonsai install folder?
- Where can I find examples of Bonsai at work?
- How do I get a camera working?
- How do I set the image visualizer to display in full resolution?
- How do I inspect individual pixel values in the image visualizer?
- How do I write a Python transform node?
- How do I write a Python image processing transform node?
- How do I get an Open Ephys FPGA working?
- How do I get Arduino sources and sinks working?
What can I do if I get stuck?
If this FAQ or the Wiki cannot help you, there is a public Bonsai users forum. Chances are someone has run into your problem before and the solution is listed there. Otherwise, feel free to ask questions or share ideas you've come across while using Bonsai.
Where is the Bonsai install folder?
You can pick the installation folder during setup. However, being a portable app, Bonsai installs by default into the user's AppData. Unfortunately this directory is hidden in Windows, which can make finding the Bonsai folder a bit tricky. You can find it by default at C:\Users\YOURUSER\AppData\Local\Bonsai.
Where can I find examples of Bonsai at work?
First install the Starter Pack package from the package manager. This package will install all the common dependencies for both image and signal processing. Example workflows can then be found in the Tools menu, under Bonsai Gallery.
How do I get a camera working?
In Bonsai, a camera shows up as a data source of images. Each camera has its own set of vendor-specific drivers that need to be installed on the system for it to operate correctly. In addition, because different drivers expose different application programming interfaces (APIs), Bonsai needs to know how to talk to each specific camera before it will work. It is common for a specific Bonsai module to be required for each vendor.
There is a generic Bonsai camera node (CameraCapture) that will work with most webcams and simple DirectShow based devices. You can find it by installing the Vision package and then searching the Toolbox under Source / Vision / CameraCapture.
How do I set the image visualizer to display in full resolution?
Double-clicking anywhere inside the displayed image will set the visualizer window resolution to match the actual input size.
How do I inspect individual pixel values in the image visualizer?
Right-clicking anywhere inside the displayed image will toggle the visualizer status bar. The x- and y-coordinates of the mouse cursor in image space will be shown, as well as the BGRA values of the selected pixel (note: depending on the image color space, the order of the pixel components may change).
How do I write a Python transform node?
Bonsai allows writing new transform nodes directly in a workflow using Python, specifically its .NET implementation, IronPython. To do this, install the Scripting package and then search the Toolbox for Transform / Scripting / PythonTransform. Double-clicking the transform node opens a script editor with the following contents:
```python
@returns(bool)
def process(input):
    return True
```
The process function is where the main transform code is written. This function is called once for every input object arriving at the node. However, because Python is a dynamically typed, interpreted language, the result type of a Python function is not known in advance. The returns decorator therefore needs to be declared to tell Bonsai the intended type of the result. The example above specifies it for the boolean (bool) data type.
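To return a different type, only the decorator and the function body need to change. In a real PythonTransform node the returns decorator is injected by Bonsai's scripting host; the stub below merely stands in for it so this sketch can be read (and run) standalone:

```python
# In Bonsai, `returns` is provided by the scripting host.
# This stub only mimics it so the snippet is self-contained.
def returns(result_type):
    def decorator(f):
        return f
    return decorator

# A transform declaring a float result instead of bool.
@returns(float)
def process(input):
    # Illustrative body: scale each incoming numeric value by one half.
    return float(input) * 0.5

print(process(3))  # 1.5
```

Inside Bonsai you would paste only the decorated function, without the stub.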
How do I write a Python image processing transform node?
It is possible to write image processing scripts directly in IronPython transform nodes using OpenCV.NET. To do this, you need to add a reference to OpenCV.NET and import all the data types that will be used in the script. The available operators and data types for OpenCV.NET are documented on the NuDoq website. Below is an example of using the Canny edge detection operator in a Python transform:
```python
import clr
clr.AddReference("OpenCV.Net")
from OpenCV.Net import *

@returns(IplImage)
def process(input):
    output = IplImage(input.Size, IplDepth.U8, 1)
    CV.Canny(input, output, 128, 128)
    return output
```
How do I get an Open Ephys FPGA working?
Bonsai supports the Rhythm interface used by Open Ephys and Intan next-generation electrophysiology amplifiers. These devices operate off an FPGA acquisition board. To use it, install the Ephys package and then search the Toolbox under Source / Ephys / Rhd2000EvalBoard. Depending on the board you are using (Intan or Open Ephys), you will need the specific bitfile for the acquisition card FPGA; these bitfiles can be found on each distributor's website.
The output from the acquisition card is a complex data frame from which you can select the different outputs available at the board level (e.g. amplifiers, board ADCs, etc.). A member selector transform node gives access to each of these individual outputs: right-click the data source, open the Output drop-down, and click the desired output stream.
Each data stream is represented as a multi-channel, multi-sample buffer similar to what you get out of a microphone data source.
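Conceptually, such a buffer is just a channels-by-samples 2-D array, and downstream code typically selects a channel and reduces across its samples. A plain-Python sketch of that shape (the nested-list layout and the helper name are illustrative, not Bonsai API):

```python
# Illustrative only: a 2-channel, 4-sample buffer as nested lists,
# mirroring the channels-by-samples layout of multi-channel buffers.
buffer = [
    [0.0, 1.0, 2.0, 3.0],   # channel 0 samples
    [4.0, 5.0, 6.0, 7.0],   # channel 1 samples
]

def channel_mean(buf, channel):
    # Select one channel and average across its samples.
    samples = buf[channel]
    return sum(samples) / len(samples)

print(channel_mean(buffer, 0))  # 1.5
print(channel_mean(buffer, 1))  # 5.5
```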
How do I get Arduino sources and sinks working?
Arduino support in Bonsai makes use of the Firmata protocol over the serial port. To get it to work, one of the Firmata implementations needs to be uploaded to the microcontroller, which effectively turns the Arduino into a DAQ: it streams digital input pin changes and analog input (ADC) readings over the serial port and accepts commands to set digital output pins. These implementations are included with the Arduino distribution, under Examples / Firmata. Make sure the baud rate configured on the Arduino matches the one configured in Bonsai (the default is 57600 bps).
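For a sense of what travels over the wire, the sketch below composes two raw Firmata messages by hand. The byte values follow the published Firmata protocol (digital messages use a MIDI-style command byte plus two 7-bit payload bytes); the helper names are illustrative and have nothing to do with Bonsai's API:

```python
# Firmata command bytes (from the Firmata protocol specification).
DIGITAL_MESSAGE = 0x90  # digital I/O state for one 8-pin port
SET_PIN_MODE = 0xF4     # configure the mode of a single pin
OUTPUT = 0x01           # pin mode: digital output

def set_pin_mode(pin, mode):
    # Three-byte message: command, pin number, mode.
    return bytes([SET_PIN_MODE, pin, mode])

def digital_write(port, value):
    # `value` is an 8-bit mask of pin states for the port, split into
    # two 7-bit payload bytes as required by Firmata's MIDI framing.
    return bytes([DIGITAL_MESSAGE | (port & 0x0F), value & 0x7F, (value >> 7) & 0x7F])

print(set_pin_mode(13, OUTPUT).hex())       # f40d01
print(digital_write(1, 0b00100000).hex())   # 912000
```

In practice Bonsai's Arduino nodes build and parse these messages for you; this is only meant to demystify what the Firmata firmware exchanges with the host.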