Writing a Neuron2Robot TF

This tutorial assumes you have familiarized yourself with the Braitenberg experiment. If not, please consider reading the Tutorial setup first.

A TF in Python is essentially a Python function with a set of decorators. These decorators turn a plain Python function into a TF by specifying where the function parameters come from and what should happen with the function's return value. Let us begin by implementing the TF manually in Python code.


The following code will usually be generated by the BIBI configuration generator if BIBI Configurations are used.

A (not so) new TF

import hbp_nrp_cle.tf_framework as nrp

@nrp.Neuron2Robot()
def linear_twist(t):
    pass

This code creates a TF named linear_twist as a Neuron2Robot TF.

Connecting to the neuronal network

We access the neuronal network through parameters of the TF function. For this, we need to introduce a new parameter and connect it to the brain. This connection is likewise done using a decorator, which takes as input

  1. The name of the parameter that should be connected

  2. The neurons that should be connected

  3. The device type that should be created

  4. Additional device configuration

Neurons are specified via a selection expression starting from nrp.brain. Since TFs exist independently of any brain instance, the object accessible through nrp.brain records all access steps; it thus represents a function that, when given a brain instance, selects the neurons that should be connected to the TF.
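This recording behaviour can be illustrated with a simplified stand-in. The class below is purely illustrative and is not the actual CLE implementation; it merely demonstrates the pattern of recording attribute and index accesses for later replay on a concrete brain:

```python
from types import SimpleNamespace

class NeuronSelector(object):
    """Records attribute/index accesses and replays them on a brain instance."""
    def __init__(self, steps=()):
        self.steps = tuple(steps)

    def __getattr__(self, name):
        # Record an attribute access such as '.actors'
        return NeuronSelector(self.steps + (("attr", name),))

    def __getitem__(self, index):
        # Record an index access such as '[0]'
        return NeuronSelector(self.steps + (("item", index),))

    def select(self, brain):
        # Replay the recorded steps on a concrete brain instance
        result = brain
        for kind, arg in self.steps:
            result = getattr(result, arg) if kind == "attr" else result[arg]
        return result

# 'brain.actors[0]' is recorded now and only resolved later,
# once a concrete brain object is available:
selector = NeuronSelector().actors[0]
brain = SimpleNamespace(actors=["left_neuron", "right_neuron"])
# selector.select(brain) yields "left_neuron"
```

This deferred-selection pattern is why the same TF can be reused with different brain instances, including mock brains.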

The available device types are those supported by the CLE, for example nrp.leaky_integrator_alpha as used in this tutorial.

Of course, not all device types are suitable for reading purposes.

If we want to specify the devices as above, this amounts to the following Python code:

@nrp.MapSpikeSink("left_wheel_neuron", nrp.brain.actors[0], nrp.leaky_integrator_alpha)
@nrp.MapSpikeSink("right_wheel_neuron", nrp.brain.actors[1], nrp.leaky_integrator_alpha)
@nrp.Neuron2Robot()
def linear_twist(t, left_wheel_neuron, right_wheel_neuron):
    pass


The parameter mapping decorators must appear before the Neuron2Robot decorator. Otherwise an exception will be thrown.

The rationale behind the naming MapSpikeSink is that the generated devices are effectively sinks as they consume spikes.

Although we have specified how the TF connects to a neuronal simulator, we have not yet decided which neuronal simulator to use. Moreover, it is perfectly valid to use a mock neuronal simulator, e.g. for unit testing of the TF.
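The mock-based testing idea can be sketched as follows. The only assumption made about the devices here is that a leaky integrator exposes a voltage attribute holding the membrane voltage; MockLeakyIntegrator, linear_twist_logic, and the specific control law are purely illustrative and not part of the CLE API:

```python
class MockLeakyIntegrator(object):
    """Minimal stand-in for a leaky integrator device (illustrative only).

    We only assume the device exposes a 'voltage' attribute.
    """
    def __init__(self, voltage):
        self.voltage = voltage

def linear_twist_logic(t, left_wheel_neuron, right_wheel_neuron):
    # Core TF logic factored out so it can be tested without any
    # neuronal simulator: forward speed from the mean membrane voltage,
    # turning rate from the voltage difference.
    linear = 0.5 * (left_wheel_neuron.voltage + right_wheel_neuron.voltage)
    angular = right_wheel_neuron.voltage - left_wheel_neuron.voltage
    return linear, angular

# With equal voltages on both wheels, the robot drives straight:
linear, angular = linear_twist_logic(
    0.02, MockLeakyIntegrator(0.4), MockLeakyIntegrator(0.4))
# angular is 0.0 here
```

Factoring the control law into a plain function like this keeps the decorated TF itself a thin wrapper, which is what makes unit testing straightforward.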

In the last code snippet, we have not used additional device configuration. Such configuration is specific to a particular neuronal simulator and may serve various purposes. However, since the data is transferred in the TF anyhow, it is usually not very important: similar effects can be achieved more easily by varying scale factors.

Connecting to the robot

Of course, so far there is nothing to unit test, since the TF does not yet do anything. To change this, we have to assign a robot topic channel. The most convenient form is to simply capture the method's return value and send it to a robot topic. To do this, we simply add an argument to the @Neuron2Robot decorator as shown below:

@nrp.Neuron2Robot(Topic('/husky/cmd_vel', geometry_msgs.msg.Twist))

Now, we only need to ensure that we return something that is not None but an instance of geometry_msgs.msg.Twist.
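To keep the following sketch self-contained without a ROS installation, we use minimal stand-ins that mirror the Twist message layout (linear.x, angular.z); the voltage-to-velocity mapping and the scale factor are illustrative assumptions, not the tutorial's prescribed values:

```python
class Vector3(object):
    """Stand-in for geometry_msgs.msg.Vector3 (illustrative only)."""
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

class Twist(object):
    """Stand-in for geometry_msgs.msg.Twist (illustrative only)."""
    def __init__(self):
        self.linear = Vector3()
        self.angular = Vector3()

def make_twist(left_voltage, right_voltage, scale=20.0):
    # Illustrative mapping from membrane voltages to a velocity command:
    # forward speed from the mean voltage, turn rate from the difference.
    msg = Twist()
    msg.linear.x = scale * 0.5 * (left_voltage + right_voltage)
    msg.angular.z = scale * (right_voltage - left_voltage)
    return msg
```

In the actual TF, linear_twist would build such a message from the wheel neuron voltages and return it; the Neuron2Robot decorator then takes care of publishing it on /husky/cmd_vel.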

As the next step, we learn how to specify a TF in the opposite direction: Writing a Robot2Neuron TF.