SNAP, short for Simulation and Neuroscience Application Platform, was developed by Christian Kothe at the Swartz Center for Computational Neuroscience, a Center of the Institute for Neural Computation at the University of California San Diego, CA, USA. It is based on the open-source game engine Panda3D.
Being based on a game engine, SNAP provides great flexibility in the design of two- or three-dimensional, interactive, multimodal paradigms. It provides wrappers and convenience functions for the quick scripting of standard neuroscientific experiments, but also allows access to all functions of the underlying game engine. For example, both the traditional visual oddball task used by Zander et al. (2017) and the complex multi-task battery of the STRUM dataset (Kothe et al., 2018) were implemented in SNAP. Also of note is the Meyendtris paradigm (Krol et al., 2017). Developed for the IEEE Brain Hackathon Budapest 2016, Meyendtris was implemented from scratch on-site during the first few hours of the hackathon, illustrating the possibilities SNAP provides for flexible and rapid experiment development.
One downside of SNAP is that stimulus presentation is somewhat delayed relative to the markers written into the EEG. SNAP has built-in support for the Lab Streaming Layer (LSL) and uses it to send event markers. These markers are sent when the code executes but, depending on your hardware, the actual stimulus can appear more than 100 ms later. (We did not measure any significant jitter coming from the software: the delay appears to be constant, jittered only by the display's frame rate.)
To retain the above-mentioned advantages of SNAP while offsetting the disadvantage of the delay, we used a Brain Products Photo Sensor to measure the onset times of visual stimuli more accurately, and corrected the markers after the recording.
We recorded EEG using the BrainAmp DC, extended with a BrainAmp ExG amplifier, to which we connected the photo sensor. We could thus record the EEG and the signal coming from the photo sensor in one synchronous data stream. The photo sensor was placed in one of the corners of the display of the stimulus computer.
In SNAP, we replaced calls to the regular marker function with calls to a function that would simultaneously present a bright flash in the chosen corner of the display. Each marker was now registered both in the LSL marker stream and, upon visual presentation in the corner, by the photo sensor. The photo sensor’s measurements thus included the delay in the visual presentation pipeline.
After the data was recorded, we converted the photo sensor's sudden signal onsets into markers using a custom script. Provided the markers are called in the same drawing cycle as the stimuli they represent, these new markers indicate the exact onset times of the visual stimuli, synchronized to the EEG.
The SNAP script
Attached to this post is a SNAP script that provides a few basic functions to help speed up paradigm creation, and to generate visual markers for the photo sensor.
It includes a logging function. When enabled, the script will request a subject ID upon start-up. Now, when self.log() is called with any number of variables as arguments, it will log those variables to a semicolon-separated log file, along with the subject ID and current time stamp.
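For illustration, here is a minimal Python sketch of what such a logging helper could look like. This is not the attached script's actual implementation; the class name, the file name, and the timestamp format are assumptions.

```python
import csv
import time

class Logger:
    """Hypothetical sketch of a semicolon-separated logger, in the
    spirit of the SNAP script's self.log(): every call records the
    subject ID, a timestamp, and any number of variables."""

    def __init__(self, subject_id, filename='log.csv'):
        self.subject_id = subject_id
        self.filename = filename

    def log(self, *args):
        # prepend the subject ID and current timestamp to the logged values
        row = [self.subject_id, time.time()] + list(args)
        with open(self.filename, 'a', newline='') as f:
            csv.writer(f, delimiter=';').writerow(row)
```

Appending on every call (rather than keeping the file open) trades a little speed for robustness: the log survives even if the experiment crashes mid-session.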
Another function simplifies the display of brief instructions: self.instruct() puts vertically-aligned and word-wrapped text on the screen, and waits for the participant to press space to continue. Without instructions, you can use self.waitForUser() for a simple self-paced break.
The visual markers are implemented using self.photomarker(). In addition to calling the regular self.marker() function, it generates configurable visual flashes in one of the screen’s corners, such that these markers can be picked up using a photo sensor. The size, colour, and corner of the flashes can be configured.
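The idea behind such a function can be sketched as follows. The Panda3D drawing call is stubbed out here, and `corner_coordinates`, `send_marker`, and `show_flash` are hypothetical names, not SNAP's actual API.

```python
def corner_coordinates(corner, size):
    """Return the lower-left (x, y) of a flash quad of the given size,
    in normalised screen coordinates running from -1 to 1.
    (Hypothetical helper; SNAP's actual implementation differs.)"""
    x = 1.0 - size if 'right' in corner else -1.0
    y = 1.0 - size if 'top' in corner else -1.0
    return x, y

def photomarker(marker, send_marker, show_flash,
                corner='topright', size=0.1, color=(1, 1, 1, 1)):
    """Send the event marker and, in the same drawing cycle, draw a
    bright quad in the requested corner for the photo sensor to detect."""
    send_marker(marker)                # regular LSL event marker
    x, y = corner_coordinates(corner, size)
    show_flash(x, y, size, color)      # e.g. a card attached to render2d
```

The crucial point is that the marker and the flash are issued together, so the photo sensor's signal reflects exactly the delay that the marker itself is subject to.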
To use this script: once you have installed the Panda3D SDK and downloaded SNAP, place it in SNAP's /src/modules directory, where the SNAP launcher can find it, and run it from the launcher.
Also attached to this post is a MATLAB script to convert the recorded photo sensor data into markers. This script requires the data to be in EEGLAB format. Simply call it with the EEGLAB dataset as the first argument, and the photo sensor’s channel number as the second, and it will return the same dataset with additional markers placed at the time points of sudden photo sensor activity.
Due to noise, however, it is advisable to fine-tune the script's parameters for your own recordings, or at least to evaluate the results. The options are as follows.
First of all, the markers can be locked either to the onset (sudden increase) or offset (sudden decrease) of the photo sensor activity.
Secondly, a threshold can be set, allowing you to configure at what point a sudden change results in a marker being placed. This helps deal with noise.
A refractory period can be set to ignore any above-threshold activity for a certain amount of time after a marker has been placed. If, for example, the flashes in your paradigm last 100 ms, a refractory period of 100 ms or more ensures that no single event generates more than one marker.
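To illustrate how the threshold and refractory period interact, here is a minimal Python sketch of the onset-detection idea. The attached script itself is written in MATLAB and handles more options (locking to offsets, ignoring listed indices); the function name and default values below are assumptions.

```python
def detect_onsets(signal, srate, threshold=0.75, refractory_ms=100):
    """Return sample indices where the first derivative of the photo
    sensor signal exceeds the threshold, skipping a refractory period
    after each detection. (Illustrative sketch only; the attached
    MATLAB script is the actual implementation.)"""
    refractory_samples = int(refractory_ms / 1000.0 * srate)
    onsets = []
    i = 1
    while i < len(signal):
        if signal[i] - signal[i - 1] > threshold:
            onsets.append(i)
            i += refractory_samples  # ignore further activity for a while
        else:
            i += 1
    return onsets
```

Locking to offsets instead would simply test for a sudden decrease, i.e. `signal[i - 1] - signal[i] > threshold`.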
It is possible to plot the output of the script for evaluation purposes.
The above image shows the first derivative of the photo sensor data in red. The calculated markers are shown and counted in grey. The script's default threshold of 0.75 appears too high for this recording: a number of onsets are missed. Furthermore, because one stimulus's onset was missed, its offset was erroneously counted as an onset (marker number 4). In this case, we would therefore lower the threshold and make sure the refractory period covers the length of the marker presentation.
If it cannot be avoided that erroneous markers are generated, make a note of these markers’ indices in the plot. These indices can then be passed as a final argument to the script, and they will be ignored.
Kothe, C. A., Mullen, T. R., & Makeig, S. (in press). STRUM: A new dataset for neuroergonomics research. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan.
Krol, L. R., Freytag, S.-C., & Zander, T. O. (2017). Meyendtris: A hands-free, multimodal Tetris clone using eye tracking and passive BCI for intuitive neuroadaptive gaming. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (pp. 433-437). New York, NY, USA: ACM.
Zander, T. O., Andreessen, L. M., Berg, A., Bleuel, M., Pawlitzki, J., Zawallich, L., Krol, L. R. & Gramann, K. (2017). Evaluation of a dry EEG system for application of passive brain-computer interfaces in autonomous driving. Frontiers in Human Neuroscience, 11, 78.