It was already the mid-20th century when the British philosopher Gilbert Ryle poked fun at Cartesian mind-body dualism by calling it “the dogma of the Ghost in the Machine,” in which our mind and body were understood as separate and unequal entities: the mind is the master that controls the body.
Now we have come to understand that our mind and body are one and inseparable, and that what-controls-what is anything but clear-cut. But our imagination has not quite let go of the long-cherished dualism: the idea of eternal life is making a (millionth) comeback as some start to see our consciousness as data (any “Ghost in the Shell” fans?).
Is technology tugging us back to Cartesian dualism? Or is it opening up a new space for exploration in our understanding of mind, body and what humans are capable of?
Here at Geist, we want to use technology to better understand, play with and augment human capacities. By combining insights from cognitive science, psychology and the social sciences with tech (e.g. wearable sensors and deep learning), we seek to bring you that sense of wonder you felt when you first managed to balance yourself on two spinning wheels or breathed underwater with all those strange creatures.
Technology has always expanded our perspectives. So can digital tech. Let’s amplify our sense of wonder.
1. Eyewear Computing
Eyewear computing is an exciting area of exploration, since most of our sensory channels are located on the head. The window-to-the-soul metaphor may be apt after all. Early devices were bulky, pricey and short on battery life; we work instead on truly wearable devices like J!NS MEME, paving the way for a new generation of “smart eyewear” that is light-weight, low-power, easy to use and smart-looking.
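To make this concrete, here is a toy sketch of one thing head-worn sensors enable: detecting blinks from an eye-movement (EOG-like) signal by simple thresholding. The signal shape, sampling, threshold and refractory window below are illustrative assumptions on synthetic data, not the actual J!NS MEME API or algorithm.

```python
import numpy as np

def detect_blinks(signal, threshold=3.0, refractory=20):
    """Return sample indices where the signal crosses the threshold,
    skipping a refractory window after each detection so one blink
    is not counted multiple times."""
    blinks, last = [], -refractory
    for i, v in enumerate(signal):
        if v > threshold and i - last >= refractory:
            blinks.append(i)
            last = i
    return blinks

# Synthetic signal: low-amplitude noise with two blink-like spikes.
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.5, 200)
sig[50:55] += 5.0    # simulated blink
sig[120:125] += 5.0  # simulated blink

print(detect_blinks(sig))  # indices near 50 and 120
```

Real eyewear pipelines are of course more robust (filtering, adaptive thresholds, learned detectors), but the core idea, turning a raw head-mounted sensor stream into discrete behavioral events, is the same.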
2. Deep Learning the Human Mind
This project is about estimating cognitive performance fluctuations through deep learning, using a combination of physiological signals and recorded behavior patterns as data. Rather than employing deep learning to mimic or surpass particular human cognitive functions (such as visual object/face recognition or playing Go), we adopt a human-empowering framework and use deep learning to “learn deeply” about the human mind.
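The basic setup, mapping physiological features to a performance score, can be sketched with a minimal NumPy network. This is a one-hidden-layer toy standing in for a real deep model; the features, synthetic targets and sizes are illustrative assumptions, not the project's actual data or architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic dataset: 3 features per sample (imagine blink rate,
# head motion, saccade count) and a noisy nonlinear "performance" target.
X = rng.normal(size=(200, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = np.tanh(X @ true_w) + rng.normal(0, 0.05, 200)

# One hidden layer with tanh activation, trained by full-batch
# gradient descent on mean squared error.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    pred = (h @ W2 + b2).ravel()  # predicted performance score
    err = pred - y
    # Backpropagation of the averaged squared-error gradient.
    gW2 = h.T @ err[:, None] / len(y)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h**2)  # tanh derivative
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean(err**2)
print(f"final training MSE: {mse:.4f}")
```

The real project would replace the synthetic features with actual sensor and behavior streams and the toy network with a deeper, regularized model, but the framing is the same: regress a cognitive state estimate from wearable data.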
3. Effective Interactions in VR/AR/MR in Social Spaces
Wearable and mobile interactions are still dominated by buttons, touch and mid-air gestures, and Virtual and Augmented Reality remain the Wild West of User Experience and User Interface Design. We work on more effective interaction techniques, aiming to define best practices for interfaces that are subtle, discreet and unnoticeable in public and social spaces.