The Virtual Brain Engine

This page explains the steps needed to connect The Virtual Brain (TVB) models with external simulators using NRPCore. Unlike other simulators with a Python API, such as OpenSim, TVB does not have a dedicated engine; instead, the Python JSON Engine is used directly. To do this, you need to define a class called Script, which inherits from EngineScript. The base class takes care of all tasks related to simulation control and data exchange. In particular, it provides the following methods, in which the calls to the TVB API should be embedded (a minimal skeleton is sketched after the list):

  • initialize() : executed when the engine is initialized

  • runLoop(timestep_ns) : executed when the engine is requested to advance its simulation (from EngineClient::runLoopStep)

  • shutdown() : executed when the engine is requested to shut down
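
The sketch below illustrates this structure. It is a minimal outline only: the EngineScript import path is taken from the NRPCore Python JSON Engine examples and may differ between installations, and the TVB-specific calls are left as placeholder comments.

# Minimal sketch of a Python JSON Engine script driving TVB.
# NOTE: the import path below is an assumption; check the Python JSON Engine
# documentation/examples shipped with your NRPCore installation.
from nrp_core.engines.python_json import EngineScript

class Script(EngineScript):
    def initialize(self):
        # Build the TVB connectivity, proxy nodes and CoSimulator here
        # (see the snippets in the following sections).
        pass

    def runLoop(self, timestep_ns):
        # Exchange data with the other engines and advance TVB by one step,
        # e.g. by calling a helper such as simulate_fun() shown below.
        pass

    def shutdown(self):
        # Release TVB resources if needed.
        pass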

A fully working example is available under examples/opensim_tvb/, with the code related to TVB located in examples/opensim_tvb/tvb_engine.py.

Defining proxy nodes in the brain model

Proxy nodes allow external signals to be injected into the brain model. The following code creates a single proxy node labeled “Arm”:

# Inject the proxy region into the brain
# (connectivity is a tvb.datatypes.connectivity.Connectivity instance loaded beforehand)

import numpy as np

connectivity.region_labels = np.concatenate([connectivity.region_labels,
                                             np.array(["Arm"])])
number_of_regions = connectivity.region_labels.shape[0]
arm_proxy_index = np.array([number_of_regions - 1], dtype='i')

# Place the single proxy region outside the bounding box of the existing centres
maxCentres = np.max(connectivity.centres, axis=0)
connectivity.centres = np.concatenate([connectivity.centres,
                                       maxCentres[np.newaxis]])
connectivity.areas = np.concatenate([connectivity.areas,
                                     np.array([connectivity.areas.min()])])
connectivity.hemispheres = np.concatenate([connectivity.hemispheres,
                                           np.array([True])])
connectivity.cortical = np.concatenate([connectivity.cortical,
                                        np.array([False])])

You may also want to connect the newly created region to other regions:

# Augment the connectivity matrices with the proxy region.
# motor_regions / motor_index identify the motor region(s) driving the arm and
# MIN_TRACT_LENGTH is a constant; all three are defined in the full example.
# As written, the zero-padding assumes a single motor region and a single proxy node.

for attr, val in zip(["weights", "tract_lengths"], [1.0, MIN_TRACT_LENGTH]):
    prop = getattr(connectivity, attr).copy()
    # Append a zero column and a zero row for the proxy region ...
    prop = np.concatenate([prop, np.zeros((number_of_regions - len(motor_regions), len(motor_regions)))], axis=1)
    prop = np.concatenate([prop, np.zeros((len(motor_regions), number_of_regions))], axis=0)
    # ... and wire it bidirectionally to the motor region
    prop[arm_proxy_index[0], motor_index[0]] = val  # Motor region -> Arm
    prop[motor_index[0], arm_proxy_index[0]] = val  # Arm -> Motor region
    setattr(connectivity, attr, prop)
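
As an optional sanity check (not part of the original example), the augmented matrices should now be square and include the proxy region:

# Optional sanity check (hypothetical addition): both matrices should now be
# number_of_regions x number_of_regions
assert connectivity.weights.shape == (number_of_regions, number_of_regions)
assert connectivity.tract_lengths.shape == (number_of_regions, number_of_regions)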

Extending the Simulator class

An extended version of the Simulator class, which is able to handle data coming from the proxy regions, is available in the TVB cosimulation package:

from tvb.contrib.cosimulation.cosimulator import CoSimulator as Simulator
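
The CoSimulator is built like a regular TVB simulator. The sketch below is only illustrative: the choice of model, coupling, integrator and parameters is an assumption, not the configuration used in the example.

# Illustrative construction of the co-simulator (assumed parameters, not the
# ones used in examples/opensim_tvb/).
from tvb.simulator.lab import models, coupling, integrators, monitors

simulator = Simulator(
    model=models.Generic2dOscillator(),
    connectivity=connectivity,            # the augmented connectivity from above
    coupling=coupling.Linear(),
    integrator=integrators.HeunDeterministic(dt=0.1),
    monitors=(monitors.Raw(),))
# simulator.configure() should be called once the co-simulation attributes
# below have been set.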

After creating the CoSimulator object, some of its variables need to be set:

  • voi : variables of interest. These are the state variables that will be exchanged with the proxy region.

  • proxy_inds : indices of the proxy nodes

  • exclusive : must be set to True to indicate that the data for the proxy regions are simulated externally

# Set CoSim simulator parameters
# (CosimCoupling comes from the TVB cosimulation package - see the full example
# for the exact import - and self.iF holds the proxy node indices,
# e.g. arm_proxy_index from the previous section)

simulator.voi = np.array([0, 1])  # State variables to be exchanged with the arm's cosimulator
simulator.proxy_inds = self.iF    # The nodes simulated by the arm's cosimulator

# The coupling from all TVB nodes towards the nodes of the arm's cosimulator
# is what will be transferred from TVB to the fingers:
simulator.cosim_monitors = (CosimCoupling(),)
simulator.cosim_monitors[0].coupling.a = np.array([1.0])
simulator.exclusive = True  # Arm is exclusively simulated by the arm's cosimulator (OpenSim)

Injecting data into the proxy region

The following piece of code shows how to inject the position of an external agent into the proxy region:

# Helper method of the Script class; deepcopy comes from the standard copy module.
def simulate_fun(self, simulator, elbow_position, elbow_velocity):
    motor_commands_data = deepcopy(simulator.loop_cosim_monitor_output()[0])
    motor_commands_data[1] = motor_commands_data[1][:, :, self.iF, :]

    # Start from the previous state (renamed to avoid shadowing the built-in input())
    cosim_update = deepcopy(self.prev_state)

    # Inject the arm's position into the brain
    # Meaning of the indexes - [time = 0, data = 1][sample, state variable (V, W), brain region, mode (not used, always 0)]
    # Remove the scaling factor (5) to see a change in the model's behaviour
    cosim_update[1][0, 0, 0, 0] = elbow_position * 5

    # The second state variable isn't really used in the model, but it could be set up like this:
    # cosim_update[1][0, 1, 0, 0] = elbow_velocity

    # We want to inject the data 'in the past'
    cosim_update[0][0] -= simulator.integrator.dt

    # Simulate the brain
    dtres = list(simulator(cosim_updates=cosim_update))[0]

    sim_res = list(dtres)
    # For a single time point, correct the 1st dimension:
    sim_res[0][0] = np.array([sim_res[0][0]])
    sim_res[0][1] = sim_res[0][1][np.newaxis]

    # Remember the motor commands for the next step
    self.prev_state[0] = deepcopy(motor_commands_data[0])
    self.prev_state[1] = deepcopy(motor_commands_data[1])

    return motor_commands_data, sim_res
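
To put this in context, a hedged sketch of how runLoop() might drive this helper is shown below. The attribute names (self.simulator, self.elbow_position, self.elbow_velocity) are hypothetical, and the NRP-specific datapack exchange with the OpenSim engine is deliberately left as comments; see examples/opensim_tvb/tvb_engine.py for the actual wiring.

# Hypothetical sketch only: attribute names are placeholders, and the
# datapack exchange is NRP-specific (see the full example).
def runLoop(self, timestep_ns):
    # 1. Read the arm state (elbow position/velocity) received from the
    #    OpenSim engine through the corresponding datapack.
    # 2. Advance TVB by one co-simulation step:
    motor_commands, sim_res = self.simulate_fun(self.simulator,
                                                self.elbow_position,
                                                self.elbow_velocity)
    # 3. Publish motor_commands back to the arm's engine through a datapack.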