Transceiver Functions

TransceiverFunctions (TFs) are user-defined Python functions that facilitate the exchange of data between Engines. This data is always stored in DataPacks. TFs are used in the architecture to convert, transform or combine data from one or multiple engines and relay it to another one.

To identify a Python function as a TransceiverFunction, it must be marked with the decorator:

@TransceiverFunction(engine_name)

This both allows the function to be registered upon simulation startup and links it to the Engine specified by engine_name. For more details, see this section.

To request datapacks from engines, additional decorators can be prepended to the TF, with the form

@EngineDataPack(keyword, id)

This will configure the TF to request a datapack identified by id, and make it available as keyword in the TF.

Additionally, TFs allow users to send data back to engines as well. This is accomplished via the return value of the TF, which is expected to be an array of datapacks. After the TF returns, each of these datapacks is sent to the engine it is registered with IF AND ONLY IF this engine is being synchronized in the same loop step in which the TF is executed (see here for more details about the synchronization loop structure).


An example of a TF is listed below. The Python function "transceiver_function" is declared as a TF by prepending the TransceiverFunction decorator to its definition. The decorator also specifies that the TF is linked to the engine python_2. Using the EngineDataPack decorator, the TF requests the datapack with id datapack1 from the Engine named python_1; its output is sent to the engine it is linked to, i.e. python_2. Before the TF is executed, datapack1 is retrieved from python_1, a Python object wrapping the datapack data is created and passed to the TF as the argument datapack_python. During its execution, the TF first creates a datapack rec_datapack1 linked to python_2. It then copies the data from datapack_python into rec_datapack1 and returns it. After the TF completes, rec_datapack1 is sent to python_2.

from nrp_core import *
from nrp_core.data.nrp_json import *

@EngineDataPack(keyword='datapack_python', id=DataPackIdentifier('datapack1', 'python_1'))
@TransceiverFunction("python_2")
def transceiver_function(datapack_python):
    rec_datapack1 = JsonDataPack("rec_datapack1", "python_2")
    for k in datapack_python.data.keys():
        rec_datapack1.data[k] = datapack_python.data[k]

    return [rec_datapack1]

This TF is part of an example experiment that can be found in the examples/tf_exchange folder.


To ensure that output datapacks from a TF are reliably received by their engines, it is strongly recommended that TFs return only datapacks linked to the same engine as the TF itself. The reason is that TFs are always executed in the simulation loop steps in which their linked engines return data to, and receive data from, NRP Core. In that case, it is guaranteed that output datapacks prepared by the linked TFs are always sent to them. Returning datapacks linked to other (non-linked) engines is allowed, e.g. to avoid duplicating potentially costly computations. Nevertheless, it must be understood that whether or not these non-linked engines actually receive these datapacks depends on their own time step. In other words, when a datapack D for engine A is prepared in a TF linked to engine B, then by design D may not always reach A.
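The timing condition above can be illustrated with a small, self-contained sketch (plain Python, independent of NRP Core; the time-step values are made up for illustration): an engine only receives datapacks at loop steps in which it synchronizes, so a datapack produced in a step where engine B syncs but engine A does not is never delivered to A.

```python
# Conceptual sketch of the synchronization condition (not NRP Core code).
# Assumed setup: engine B syncs every 10 ms, engine A every 25 ms.

def sync_times(time_step_ms, end_ms):
    """Loop-step times (in ms) at which an engine synchronizes."""
    return {t for t in range(time_step_ms, end_ms + 1, time_step_ms)}

b_steps = sync_times(10, 100)   # steps in which the B-linked TF executes
a_steps = sync_times(25, 100)   # steps in which engine A can receive data

# A datapack for engine A produced by a B-linked TF is delivered only
# when both engines are synchronized in the same loop step:
delivered = sorted(b_steps & a_steps)
missed = sorted(b_steps - a_steps)

print(delivered)  # [50, 100]
print(missed)     # [10, 20, 30, 40, 60, 70, 80, 90]
```

With the TF linked to engine A instead, every step in which the TF executes is by construction a step in which A synchronizes, which is why returning only datapacks for the linked engine is the safe pattern.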

Implementation Details

TransceiverFunctions are managed by the TransceiverFunctionManager and the TransceiverFunctionInterpreter. The former deals with general tasks such as loading TFs from the experiment configuration and de-/activation of TFs, while the latter handles the actual Python script execution.

During the Simulation Loop configuration stage, all TransceiverFunction configurations are read. For each configuration, the Python script containing the TF is loaded and executed. Inside these scripts, TFs must be registered with the TransceiverFunctionInterpreter by calling TransceiverFunctionInterpreter::registerNewTransceiverFunction. To automate this process, the TransceiverFunction decorator is provided: any function in the TF script preceded by this decorator is registered as a TF.

Currently, all TFs share the same PythonInterpreterState, meaning they share the same global and local Python variable pool. Therefore, a function/variable defined in one TF Python script is accessible in others.
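This sharing can be mimicked in plain Python by executing two "scripts" against the same globals dictionary (an illustrative sketch, not how NRP Core runs scripts internally): a name defined by the first script is visible while the second one executes.

```python
# Two TF scripts executed in a shared variable pool: a function defined
# in the first script is accessible from the second.
shared_globals = {}

script_1 = """
def helper(x):
    return x * 2
"""

script_2 = """
result = helper(21)
"""

exec(script_1, shared_globals)  # defines helper in the shared pool
exec(script_2, shared_globals)  # can call helper defined by script_1

print(shared_globals["result"])  # 42
```

A practical consequence is that name collisions between TF scripts can silently overwrite each other's functions and variables, so unique names should be used across scripts.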