
Pictured: Drawing a 3D mesh with cognitive input

PLEASE CHECK OUT THIS PAGE FOR THE NEW VERSION: http://bantherewind.com/cinder-emotiv

We've had the Emotiv EPOC sitting around at Fashionbuddha for almost a year now. Outside of moving a dot around with OSC messages last autumn, I never really dove into this technology. I was recently afforded the opportunity to produce a visual experiment in Cinder which leverages the EPOC. Before doing that, I needed to lay the foundation with a class that could give me access to all the data an EEG device can provide, without the hassle and math. Thus, the ciEmotiv block.

The Emotiv SDK provides a way to connect to devices in C++ and receive some interpreted data, such as facial movements and trained cognitive actions, alongside raw EEG data. To use the SDK inside of Cinder, I created an event-based system which boils all the data down into simple values: gestures and thoughts for input, and brainwave frequencies for neurofeedback. It also works with profiles which you can train using Emotiv's Cognitiv Suite. This is especially important, as it truly gives you control over actions using thoughts alone.

My first test positions a cursor in 3D space and uses cognitive input to draw a mesh. The Z coordinate of the cursor is modulated by time to give it some depth. You can also use the mouse to rotate the scene and get a different angle on what you've created.

Video: Drawing a 3D mesh with cognitive input

Video: Adding a bouncing white ball alongside the structure causes me to instinctively think the correct thoughts needed to move the tail around

Video: Increased the input sensitivity; now using the strength and frequency of beta waves to phase the spline

THE BLOCK

Download ciEmotiv for Cinder (ciEmotiv 0.0.4 beta)

This block works with, but does not include, the Emotiv SDK. Because of licensing restrictions, I cannot include any part of the SDK itself. In place of headers, static lib files, and DLLs, I have readme.txt files telling you which files go where and what to do with them if you have the SDK. If you do have the SDK, you can use this block to easily get the EPOC and future Emotiv devices working in your Cinder project. The SDK is exclusive to Windows, so this block is, too.

This block works with both the Emotiv Engine (actual devices) and the Emotiv Composer (device emulator). It also works with as many devices as you have plugged in; the getUserId() method of an EmotivEvent can be used to differentiate them, as shown in the sketch below.

ciEmotiv uses my KissFFT block for Cinder to handle the brainwave analysis. It's included in this zip, or you can get the complete package with examples on the KissFFT page.
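Here's a minimal sketch of that multi-device idea, assuming two connected headsets that report user IDs 0 and 1 (those IDs, and the mBetaA/mBetaB members, are just for illustration -- they aren't part of the block):

    // Hypothetical event callback: tell headsets apart with getUserId()
    // and route each one's beta band value to its own member variable
    // for neurofeedback. IDs 0 and 1 are assumed for this example.
    void MyApp::onData(EmotivEvent event)
    {

        // Route the beta band value by device/user ID
        if (event.getUserId() == 0)
            mBetaA = event.getBeta();
        else if (event.getUserId() == 1)
            mBetaB = event.getBeta();

    }

From there, a value like mBetaA can drive anything in your scene -- in the last video above, beta strength is what phases the spline.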
Here's a complete working application which writes data to the console:

    #include <cinder/app/AppBasic.h>
    #include <ciEmotiv.h>

    using namespace ci;
    using namespace ci::app;
    using namespace std;

    class MyApp : public AppBasic
    {

    public:

        // Cinder callbacks
        void setup();
        void quit();

        // Emotiv callback
        void onData(EmotivEvent event);

    private:

        // Emotiv
        ciEmotiv mEmotiv;
        int32_t mCallbackId;

    };

    // Setup
    void MyApp::setup()
    {

        // Add Emotiv callback
        mCallbackId = mEmotiv.addCallback<MyApp>(&MyApp::onData, this);

        // Connect to Emotiv Composer
        // if (mEmotiv.connect("", "127.0.0.1"))
        //     console() << "Connected to Emotiv Composer\n";

        // Connect to Emotiv Engine
        if (mEmotiv.connect())
            console() << "Connected to Emotiv Engine\n";

    }

    // Handles Emotiv data
    void MyApp::onData(EmotivEvent event)
    {

        // Trace event data
        console() << "Blink: " << event.getBlink() << "\n";
        console() << "Clench: " << event.getClench() << "\n";
        console() << "Cognitiv Suite action: " << event.getCognitivAction() << "\n";
        console() << "Cognitiv action power: " << event.getCognitivPower() << "\n";
        console() << "Engagement/boredom ratio: " << event.getEngagementBoredom() << "\n";
        console() << "Eyebrows up: " << event.getEyebrow() << "\n";
        console() << "Eyebrows furrowed: " << event.getFurrow() << "\n";
        console() << "Laughing: " << event.getLaugh() << "\n";
        console() << "Long term excitement score: " << event.getLongTermExcitement() << "\n";
        console() << "Looking left: " << event.getLookLeft() << "\n";
        console() << "Looking right: " << event.getLookRight() << "\n";
        console() << "Short term excitement score: " << event.getShortTermExcitement() << "\n";
        console() << "Smiling: " << event.getSmile() << "\n";
        console() << "Smirking left: " << event.getSmirkLeft() << "\n";
        console() << "Smirking right: " << event.getSmirkRight() << "\n";
        console() << "Time elapsed: " << event.getTime() << "\n";
        console() << "Device/user ID: " << event.getUserId() << "\n";
        console() << "Winking left eye: " << event.getWinkLeft() << "\n";
        console() << "Winking right eye: " << event.getWinkRight() << "\n";
        console() << "Wireless signal: " << event.getWirelessSignalStatus() << "\n";
        console() << "Brainwave - alpha: " << event.getAlpha() << "\n";
        console() << "Brainwave - beta: " << event.getBeta() << "\n";
        console() << "Brainwave - delta: " << event.getDelta() << "\n";
        console() << "Brainwave - gamma: " << event.getGamma() << "\n";
        console() << "Brainwave - theta: " << event.getTheta() << "\n";
        console() << "---------------------------------------------\n";

    }

    // Called when leaving
    void MyApp::quit()
    {

        // Disconnect
        if (mEmotiv.connected())
        {
            mEmotiv.removeCallback(mCallbackId);
            mEmotiv.disconnect();
        }

    }

    // Create the application
    CINDER_APP_BASIC(MyApp, RendererGl)

If you have trained a profile for user-specific facial and cognitive input, you can import their *.emu file as follows:

    // Load profiles from Control Panel's data location
    map<string, string> profiles = mEmotiv.listProfiles("c:\\ProgramData\\Emotiv");

    // Find my profile and load it onto the first device
    for (map<string, string>::iterator profileIt = profiles.begin(); profileIt != profiles.end(); ++profileIt)
        if (profileIt->first == "steve")
            mEmotiv.loadProfile(profileIt->second, 0);

This device is sparking a lot of ideas for me. Time to brush up on Arduino robotics...