mindaffectBCI is a Python-based, open-source BCI framework. It comes with many great examples and offers virtually endless options for users to create their own BCI. The flagship example is a speller based on noise-tagging (or c-VEP, short for code-based visual evoked potential). In this type of BCI, the user simply focuses on letters that flash with very specific patterns. This enables extremely fast and accurate spelling, as you can see for yourself in the video we recorded using a LiveAmp and just 8 actiCAP slim electrodes.
This blog post demonstrates how to use the mindaffectBCI software with Brain Products amplifiers. We will first guide you through a specific example of how to use the c-VEP-based speller with our LiveAmp and then illustrate how this can be extended to any of our amplifiers with the use of LSL. In this post, we outline one specific implementation; however, mindaffectBCI offers many more types of BCI, so we encourage anyone interested to investigate further.
Install all of the necessary drivers for the Brain Products amplifier you want to use.
If you have BrainVision Recorder installed, then these drivers are already on your system. For a fresh installation, please look at the Brain Products website.
[Optional] Go to our github page and download the LSL connector for your amplifier if you plan on using the LSL option. Check the release page for your amplifier of interest (see this blog post for more information).
Make sure you have Python (we recommend version 3.8) installed on your system.
[Optional] If you haven’t done so already, install git from git-scm.com. You could also just download the zip container from the github page, but in this article we will continue using the git command.
Install a Java Development Kit (JDK), for example from https://adoptopenjdk.net.
It’s recommended to restart your computer afterwards because the path may not update right away.
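If you would like to verify that the JDK is reachable before continuing, a quick sketch in Python (which you need for mindaffectBCI anyway) can check whether the java launcher is visible on the PATH:

```python
import shutil

# Check whether the JDK's `java` launcher is visible on the PATH.
# If it is not, restart the computer (or at least open a fresh
# terminal) so the PATH changes from the JDK installer take effect.
java = shutil.which("java")
print("java found on PATH" if java else "java NOT found on PATH")
```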
Create a folder for your mindaffectBCI installation (e.g., c:\MindAffect\)
Start the command line tool (e.g., with Win+r and cmd) and change into this folder (e.g., cd MindAffect).
Run git clone https://github.com/mindaffect/pymindaffectBCI. The files will be downloaded into a folder called pymindaffectBCI.
[Optional] If you have several Python versions installed, make sure you are using the correct version. You can verify the path settings and adapt them, if necessary, following this guide. Running where python should show the correct installation at the very top.
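To double-check from within Python itself which interpreter is actually being picked up, a short sketch:

```python
import sys

# Show exactly which Python interpreter is running and its version;
# this should match the installation reported by `where python`.
print(sys.executable)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
```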
Change into the folder just created (e.g., c:\MindAffect\pymindaffectBCI) and run python -m pip install -e c:\MindAffect\pymindaffectBCI. All the necessary requirements will be installed.
To quickly test if the installation was successful, run python -m mindaffectBCI.online_bci --acquisition fakedata
and, after being prompted, select online_bci.json as the configuration file to run.
This will run a simple spelling BCI with simulated data.
If requested, allow the program in the firewall. Note: it only needs permission to send messages between different components on the current machine; it does not need an internet connection.
[Optional] Install the smart keyboard: if you want to use the smart keyboard, with a learning and editable dictionary for auto-completion, please follow these extra steps:
Go into the MindAffect folder (e.g., c:\MindAffect) and run git clone https://github.com/mindaffect/smart-keyboard
Change into this folder (e.g., c:\MindAffect\smart-keyboard) and run python -m pip install -e c:\MindAffect\smart-keyboard
3. Run MindAffect
If your test run was successful, it’s time to start with real EEG data. Currently, there are two options:
Use LiveAmp with 8 channels based on our Software Development Kit (SDK)
Use any Brain Products amplifier with LSL
3.1 Use LiveAmp with 8 channels based on our SDK
The first implementation we want to highlight makes use of our own SDK, which we gladly share with interested customers who want to write their own software and need customizable access to our amplifiers. The version from MindAffect is pre-configured to use a LiveAmp with 8 channels of EEG and a 500 Hz sampling rate, connecting to the amplifier via internal Bluetooth. In our video, we are using this version with an actiCAP slim snap cap with 64 holders. We put the actiCAP slim electrodes on the back of the head at channels POz, PO7, O1, Oz, O2, PO8, P8, and Iz. However, the exact placement of the electrodes does not matter much, as long as they are mainly covering the visual cortex.
To start the speller, run python -m mindaffectBCI.online_bci --acquisition LiveAmp.
Before this, make sure that the LiveAmp is turned on and that you have the electrodes connected. We recommend checking impedances and signal quality in BrainVision Recorder before running MindAffect, but it is possible to view the EEG signals within the MindAffect software as well.
After you run the command, you will be prompted to select a json file which specifies the configuration to run. The most relevant configuration files, which have already been downloaded, are online_bci.json (which runs a simple spelling system) and smart_keyboard.json (which runs a more advanced communication system with auto-completion and spelling correction). This file includes an entry for the default acquisition device, which we overrode with the --acquisition LiveAmp option on the command line. For later experiments, it may be better to make your own json file, so you do not have to write out the full command every time. In this case, you can also specify the configuration file directly on the command line to avoid having to select it later: python -m mindaffectBCI.online_bci --config_file smart_keyboard.json
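As a sketch of that workflow, the snippet below writes a small configuration file of your own. The key names ("acquisition", "acq_args") are assumptions modeled on the example files shipped with mindaffectBCI, so verify them against your downloaded online_bci.json:

```python
import json
import os
import tempfile

# Minimal custom configuration (key names are assumptions based on
# the mindaffectBCI example files; verify against online_bci.json).
config = {
    "acquisition": "LiveAmp",  # instead of the default fake-data driver
    "acq_args": {},            # driver-specific options would go here
}

path = os.path.join(tempfile.gettempdir(), "my_liveamp_config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

# Reload to confirm the file is valid JSON.
with open(path) as f:
    loaded = json.load(f)
print(loaded["acquisition"])  # LiveAmp
```

You could then point the speller at your file with python -m mindaffectBCI.online_bci --config_file my_liveamp_config.json (with the file placed where the software expects it).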
3.2 Use any Brain Products amplifier with LSL
Recently, MindAffect implemented LSL reading functionality. This provides immediate access to all of our amplifiers, because we provide suitable LSL connectors for all of them on our Brain Products github page. Together with our free LSL viewer, you can even visualize the data simultaneously. You also have various options for how to access the data stream, such as specifying the data type or even specific channels. Feel free to add these to the corresponding json file, for example like this:
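As an illustration only (the key names here are assumptions; consult the mindaffectBCI documentation or the shipped example files for the exact ones), such an acquisition entry might look roughly like this:

```json
{
    "acquisition": "lsl",
    "acq_args": {
        "streamtype": "EEG",
        "channels": ["POz", "PO7", "O1", "Oz", "O2", "PO8", "P8", "Iz"]
    }
}
```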
After that, you simply have to provide an LSL stream in the same network as your computer. If you want to include only specific channels (which have to be present in the LSL stream), their names can be included in the stream definition.
Now, you can play around with the regular speller or with the smart keyboard. The speller itself is self-explanatory: simply run one or more calibration sessions and you should be ready to go. How fast can you spell? Can you do better than us?
Need help with troubleshooting? Check out MindAffect’s FAQs for some additional useful tips.
Thielen J, van den Broek P, Farquhar J, Desain P (2015). Broad-Band Visually Evoked Potentials: Re(con)volution in Brain-Computer Interfacing. PLOS ONE 10(7): e0133797. https://doi.org/10.1371/journal.pone.0133797