Hyperscanning Setup with 10 X.ons
Using LSL and an Auditory Oddball Paradigm
A BLOG POST BY DR. ALEX KREILINGER
At a glance
- Study Name: Hyperscanning Setup with 10 X.ons
- Fields: X.on, Hyperscanning, LSL, EEG, data acquisition, data analysis
- Project run in-house at Brain Products
- Hardware used: 10 X.ons, 10 Samsung Galaxy S21 FE 5G smartphones with Android™ 13, 2 Windows® PCs, 1 router Linksys WRT1900ACS
- Software used: X.on App for Android™, PsychoPy, MATLAB®, LabRecorder, BrainVision Analyzer, BrainVision LSL Viewer
Description
With the X.on, Brain Products recently released a new EEG headset that is ideal for running experiments in classroom settings, as it makes it easy and comfortable to acquire high-quality EEG signals within mere minutes. One of the questions asked most often in this context is: “How many X.ons can you use in parallel in a single room or at close distances?”
We designed an experiment that incorporated 10 X.ons with all transmission settings set to maximum and ran an auditory oddball paradigm, simulating a typical classroom scenario with students performing a simultaneous task. In this article, we provide results and many important tips that you might want to follow if you plan on running similar experiments with multiple X.on headsets.
We are pleased to report that we successfully recorded data from 10 X.ons without any lost samples and with sampling rates very close to the nominal sampling rate. Five of the 10 X.ons were worn by participants who performed a simple auditory oddball task. It was possible to record auditory event-related potentials (ERPs) for all five participants.
Multiple X.ons in a Room
The affordable pricing, easy handling, and quick setup are ideal prerequisites for educational purposes where multiple EEG streams need to be acquired in close proximity. Because the X.on is a wireless headset that communicates with a receiver (via Bluetooth® on an Android™ smartphone or Windows® PC), it can suffer from the usual problems that wireless technology is exposed to: interference with other transmissions in the same frequency band, deteriorating transmission quality with increasing distance between sender and receiver, or a blocked line of sight. Setting up the wireless connection can also be more difficult, as it requires some careful preparation.
Therefore, the question about how many X.ons can be used simultaneously in a single room is more than justified.
The Setup
To get a good idea of what is possible, we wanted to test multiple X.ons with maximum settings, i.e., sampling rate should be set to 500 Hz, all possible channels should be activated, and streams should be sent and recorded via Lab Streaming Layer (LSL), not just saved directly to the phones.
We also wanted to stream realistic data, as opposed to flat lines that might get away with lower bit rates. Therefore, we decided to put 5 X.ons on human participants and to have the remaining 5 X.ons stream noise (ensured by connecting GND and REF to a single EEG electrode).
The distance between headsets was kept to realistic values that can be expected in a normal classroom.
The setup can be observed in the figure on the right.
- Two benches, each 220 cm long, were placed next to each other, and two participants were seated on each bench.
- Directly behind the benches, two tables of the same length were positioned, on which the 5 noise-streaming X.ons were placed.
- One additional participant, who also took the role of the operator, was seated in front of the participants at a separate table.
- Each participant held an Android™ phone that was paired with their X.on headset.
- The noise-streaming X.ons were all placed directly next to the corresponding paired Android™ phones.
- All Android™ phones were of the model Samsung Galaxy S21 FE 5G.
The Paradigm
Aiming for a proper hyperscanning scenario, we did not just want to record simultaneous EEG in one room; we also wanted to provide a task for everyone, with some context that could later be used for analysis. Because we did not want everyone to focus their gaze on a single display, we decided to use an auditory oddball paradigm. The paradigm was kept rather simple: three different sounds were played in random order. Out of 200 trials, there were 160 frequent, low-pitch sounds, 20 deviant, medium-pitch sounds, and 20 target, high-pitch sounds. The task for all participants was to count the targets and ignore the frequent and deviant sounds.
The sounds were played by a computer running PsychoPy, loud enough to be heard easily everywhere in the room. To mark the type of sound, an LSL marker was sent at the beginning of each playback: ‘1’ for frequent sounds, ‘2’ for deviants, and ‘3’ for targets.
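If you want to build a similar paradigm, the sketch below shows one way the playback and marker logic could be combined in Python with PsychoPy and pylsl. The tone frequencies, durations, and inter-stimulus interval are placeholders rather than the values used in our study; only the marker values (‘1’, ‘2’, ‘3’), the stream name “LSL_Markers”, and the 160/20/20 trial split correspond to the setup described here.

```python
# Minimal sketch of an auditory oddball with LSL markers (PsychoPy + pylsl).
# Tone frequencies, durations, and timing are illustrative placeholders.
import random
from psychopy import sound, core
from pylsl import StreamInfo, StreamOutlet

# One irregular-rate string marker stream, later recorded by LabRecorder
info = StreamInfo(name='LSL_Markers', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string',
                  source_id='oddball_markers')
outlet = StreamOutlet(info)

# 160 frequent (low pitch), 20 deviant (medium pitch), 20 target (high pitch)
trials = ['1'] * 160 + ['2'] * 20 + ['3'] * 20
random.shuffle(trials)

tones = {'1': sound.Sound(440, secs=0.2),   # frequent
         '2': sound.Sound(660, secs=0.2),   # deviant
         '3': sound.Sound(880, secs=0.2)}   # target

for marker in trials:
    outlet.push_sample([marker])  # marker at the start of each playback
    tones[marker].play()
    core.wait(1.0)                # placeholder inter-stimulus interval
```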
The Network
One important realization we made while preparing the measurements was that the network has a big effect on how smoothly hyperscanning recordings run. At first, we simply connected all Android™ phones to the company network, but this proved to be unreliable. In a busy company network, there are just too many processes running in the background that cannot be accounted for. For example, big data transfers, streams with high bit rates, or restrictive security measures can make it difficult to stream data unimpeded, or even to find the streams without issues.
Therefore, we decided to go with a dedicated network that is isolated from the company network and/or the internet. We placed a Linksys router in the room and configured it as a 5-GHz access point. All the Android™ phones connected to this router, as did the computer with PsychoPy and an additional computer that was used to record all of the streams.
For the phones, it is important to connect to a 5-GHz network. Otherwise, the fact that Bluetooth® uses the same frequency band as the 2.4-GHz alternative can lead to data transmission problems. This problem was more prominent with Samsung phones of the A-series as opposed to phones of the S-series. However, this cannot be generalized, and individual phones should be tested for their capability. The safe bet is to go with the 5-GHz network option if available.
Once all the phones were connected to the network successfully, two computers were connected to the same network: one with PsychoPy and one with LabRecorder for recording the XDF file with all streams and BrainVision LSL Viewer for online monitoring of signals.
In LabRecorder, all the streams had to be selected and recorded. Here, it can easily happen that streams do not appear in the list of detected streams. Reasons can be a wrong firewall setting, the device being connected to a different network, the stream not having started yet, or simply the stream not being in the scope of LabRecorder. There are two steps that facilitate the detection of streams.
First, provide all the IP addresses in the lsl_api.cfg file. The lsl_api.cfg file is not necessarily present on your computer. If not, you can download it and get more information on how to use it here. As explained in the link, there are three possible locations where the file can be placed: in a global folder, in a user-specific folder, or directly in the folder of the program to be configured (e.g., LabRecorder). What usually worked best for us was to add all the devices’ IP addresses to the list (you can find the IP address in the settings of your Android™ phone):
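As an example, the relevant entry is the KnownPeers list in the [lab] section of lsl_api.cfg. The IP addresses below are placeholders and need to be replaced with the addresses of your own phones and computers:

```
[lab]
KnownPeers = {192.168.1.101, 192.168.1.102, 192.168.1.103, 192.168.1.110}
```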
For LabRecorder, it also helps a lot to predefine the names of all the required streams. If you want to do this, edit the file LabRecorder.cfg in LabRecorder’s folder and list all of the streams under “RequiredStreams”, providing the stream name and host. For example, adding the data and marker stream of one X.on looks like this:
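For illustration, such an entry could look like the snippet below. The stream names and the host name are hypothetical placeholders; the actual names depend on your X.ons and phones and are shown in LabRecorder once the streams are detected:

```
RequiredStreams = ["X.on-EEG (GalaxyS21-01)", "X.on-Markers (GalaxyS21-01)"]
```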
Entering these two streams in the config file will make them appear in the list of available streams even if they have not been detected yet, indicated by a red color. Only once the streams are found does the color change to green. Do this for all streams you need, and you will not miss any of them or accidentally start the recording prematurely.
Figure 1: LabRecorder showing missing streams that are configured in ‘LabRecorder.cfg’.
The Recording
As soon as every device was running, connected, and streaming, all streams were detected in LabRecorder, and the participants were ready, we started the recording in LabRecorder and the auditory oddball paradigm in PsychoPy. In total, we performed two recordings while sitting and one additional recording in which the participants were asked to walk around the room randomly, without trying to avoid blocking the line of sight between the phones and the wireless router. This last run was recorded mainly to see if we could provoke lost samples.
We recorded three XDF files:
- Hyper_10Xons_Linksys_AEP_02.xdf,
- Hyper_10Xons_Linksys_AEP_03.xdf,
- Hyper_10Xons_Linksys_AEP_04.xdf.
The missing first file is a testament to why it is a good strategy not to start recordings prematurely: in this case one of the phones was not connected to the correct network and we had to cancel the recording and restart.
Each XDF file is approximately 70 MB in size and includes 10 X.on data streams, 10 X.on marker streams, and 1 PsychoPy marker stream.
The X.on data streams comprise 18 channels:
- 7 EEG channels,
- 1 AUX channel (not connected),
- 3 accelerometer channels for the X.on headset,
- 3 gyroscope and 3 accelerometer channels for the paired phone,
- and 1 SampleCounter channel.
The X.on marker streams contain string markers for:
- ambient light levels,
- barometric pressure,
- location,
- and the trial markers from PsychoPy (‘1’, ‘2’, or ‘3’).
The Analysis
The data was analyzed in two different ways: first with MNELAB and BrainVision Analyzer, and second with MATLAB®.
MNELAB is becoming a more and more streamlined tool for importing hyperscanning data, running some preprocessing, and exporting to the BrainVision Core Data Format (BVCDF), which is the native file format of BrainVision Analyzer. Since MNELAB version 0.8.0, it is possible to import multiple continuous data streams with heterogeneous sampling rates. When selecting the streams, resampling becomes mandatory, but the target sampling rate is adjustable. Since version 0.9.0, it is also possible to deselect marker streams from the list. For these particular recordings, this is very useful, because you can deselect the duplicated marker streams from the X.ons and only select the one LSL marker stream coming directly from PsychoPy (together with all EEG streams):
Figure 2: EEG streams can be selected by holding ‘CTRL’ and clicking on each stream. The same works with marker streams for MNELAB version ≥0.9.0. We only select the marker stream containing the original trial markers (“LSL_Markers”). The “Resample to” check box is checked automatically, but the target sampling rate can be adjusted. Because all EEG streams have a nominal sampling rate of 500 Hz, this value is automatically preselected.
Once loaded in MNELAB, it is also possible to run some additional processing before exporting to BVCDF. First, it makes sense to only select EEG channels by type, or to only select the X.ons with real EEG data. After exporting to BVCDF (please install pybv first), all of the selected signals can be opened in BrainVision Analyzer. Here, the signals were filtered, segmented based on the event markers ‘1’ and ‘3’, and averaged. The basic artifact rejection function was used, mainly to reject trials with high movement-related artifacts in the walking around condition.
XDF files were loaded with load_xdf.m from the xdf-MATLAB GitHub page. Signals from the X.ons with real EEG were loaded and segmented based on the LSL markers. Similar filtering steps were taken, but no artifacts were rejected in MATLAB®, because here we were mainly interested in the meta data and the synchronization quality. We looked at each stream’s effective sampling rate, i.e., roughly the number of collected samples divided by the time between the first and last time stamps. Ideally, it is as close to the nominal sampling rate as possible. Both values can be found in each stream’s info. When loading the XDF file with load_xdf.m, a cell array with one structure for each stream is created (Figure 3).
In addition, we checked whether all SampleCounter signals were increasing one by one to see if there were any lost samples. The marker streams were also checked to see whether all 200 expected trial markers were present. Remember, the LSL_Markers stream from PsychoPy was recorded and also re-streamed in all X.on marker streams.
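We ran these checks in MATLAB®, but the same inspection can also be done in Python with pyxdf. The sketch below is a rough equivalent under two assumptions: the SampleCounter is the last channel of each X.on data stream (as in the channel list above), and the file name refers to one of the recordings listed earlier.

```python
# Rough Python equivalent (using pyxdf) of the MATLAB checks described above.
import numpy as np
import pyxdf

streams, _ = pyxdf.load_xdf("Hyper_10Xons_Linksys_AEP_02.xdf")

for s in streams:
    name = s["info"]["name"][0]
    nominal = float(s["info"]["nominal_srate"][0])
    stamps = np.asarray(s["time_stamps"])

    if nominal > 0:  # continuous X.on data stream
        # Effective sampling rate: samples per second actually received
        effective = (len(stamps) - 1) / (stamps[-1] - stamps[0])
        deviation_pct = 100 * (effective - nominal) / nominal
        # Lost samples show up as jumps in the SampleCounter channel
        # (assumed to be the last channel, as in the channel list above)
        counter = np.asarray(s["time_series"])[:, -1]
        lost = np.any(np.diff(counter) != 1)
        print(f"{name}: {effective:.4f} Hz ({deviation_pct:+.5f} %), "
              f"lost samples: {lost}")
    else:  # marker stream (string markers)
        values = [v[0] for v in s["time_series"]]
        n_trials = sum(v in ("1", "2", "3") for v in values)
        print(f"{name}: {n_trials} trial markers (expected 200)")
```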
Figure 3: Meta data of one stream in MATLAB®. The nominal and effective sampling rates are highlighted.
The Results
In BrainVision Analyzer, we were able to show synchronized ERPs for the 5 participants.
We chose channel Cz from each involved X.on because it had the most pronounced ERPs and plotted the responses to frequent and target sounds on top of each other.
The steps taken in BrainVision Analyzer are shown in Figure 4.
Figure 4: History tree in BrainVision Analyzer. First, signals are filtered, then only the channels Cz from human participants are selected for further segmentation and averaging.
In the following three figures, each subplot represents one individual participant.
| 1st run (%) | 2nd run (%) | 3rd run (%) |
|---|---|---|
| **+0.00214** | +0.00216 | +0.00132 |
| +0.00096 | +0.00228 | +0.00046 |
| +0.0016 | +0.00066 | +0.00172 |
| +0.00174 | +0.00126 | +0.00056 |
| +0.00026 | +0.00094 | +0.00044 |
| +0.0009 | +0.00072 | +0.00128 |
| +0.00016 | +0.00144 | **+0.00204** |
| +0.0016 | +0.00098 | +0.0008 |
| +0.00076 | +0.0018 | +0.00072 |
| +0.00086 | **+0.00266** | +0.00026 |
Table 1: Deviations from the nominal sampling rates for each X.on in the three runs. The maximum values are highlighted.
The highest total deviation of 0.00266 % corresponds to 500.0133 Hz, which amounts to less than 0.1 seconds of drift per hour (2.66 × 10⁻⁵ × 3600 s ≈ 0.096 s).
In none of the recordings were lost samples detected, based on the SampleCounter channel that was streamed and recorded specifically for this purpose. The sample-to-sample difference of this signal was consistently equal to 1 in all three runs.
The expected 200 trial markers were recorded in all marker streams. There were occasional extra markers at the end of the recordings; these can be traced back to an unclean shutdown of the stimulus presentation software, as we did not close the LSL outlet properly at the end of the auditory oddball paradigm.
The averaged ERPs created in MATLAB® are shown in the following three figures, this time with the averaged responses to frequent sounds on the left and the responses to target sounds on the right.
Summary and Take-Home Messages
With some time for preparation and a stable network, it is possible to record data with maximum settings (in terms of sampling rate, bit rate, and number of channels) from 10 X.ons simultaneously and in close proximity.
We did not encounter any lost samples in the three recordings, and the effective sampling rates deviated only minimally from the nominal sampling rate. In addition, we could demonstrate clearly synchronized EEG signals in the form of auditory ERPs, in which even P300 components were visible despite the low number of 20 trials for target sounds.
One additional positive aspect of the X.on is that the latency the hardware data typically shows relative to software-based LSL markers (see this article for more information) is mitigated, because the X.on App for Android™ internally corrects the transmission delay caused by Bluetooth®. Therefore, the latency is only in the range of a few milliseconds.
During the preparation, performance, and analysis steps, we learned some valuable lessons that we have collected in the following list. If you want to recreate a similar experiment, we advise you to follow these suggestions as closely as possible:
Data Access
We hope this article gave you some ideas or helps you set up your own hyperscanning experiment. If you are still in the planning phase and want to see what to expect, you can analyze the data yourself. Due to the complexity of the recordings (21 streams in a single XDF file), we only send the link upon request. If you are interested in taking a look, please reach out to us, tell us your use case and/or specific questions, and we will be happy to send you a link to the data.