Art From Brainwaves, Antifreeze, And Ferrofluid

Moscow artist [Dmitry Morozov] makes phenomenal geek-art. (That’s not disrespect — rather the highest praise.) And with Solaris, he’s done it again.

The piece itself looks like something out of a sci-fi or horror movie. Organic black forms coalesce and fade away underneath a glowing pool of green fluid. (Is it antifreeze?) On closer inspection, the blob moves in response to a spectator’s brain activity. Cool.

You should definitely check out the videos. We love to watch ferrofluid just on its own — watching it bubble up out of a pool of contrasting toxic-green ooze is icing on the cake. Our only wish is that the camera spent more time on the piece itself.

Two minutes into the first video we get a little peek behind the curtain, and of course it’s done with an Arduino, a couple of motors, and a large permanent magnet. Move the motor around with input from an Emotiv EPOC brain-activity sensor and you’re done. As with all good art, though, the result is significantly greater than the sum of its parts.
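The control idea boils down to mapping a brain-activity reading onto a motor position that moves the magnet under the pool. A minimal sketch of that loop; the function names, value ranges, and servo-style angles are assumptions for illustration, not [Dmitry]’s actual code:

```python
# Hypothetical sketch: map a normalized EEG "activity" reading
# (0.0 to 1.0) onto a motor angle that positions the permanent
# magnet beneath the ferrofluid pool.

def activity_to_angle(activity, min_deg=0.0, max_deg=180.0):
    """Clamp the EEG reading to [0, 1], then scale to a motor angle."""
    activity = max(0.0, min(1.0, activity))
    return min_deg + activity * (max_deg - min_deg)

if __name__ == "__main__":
    # Out-of-range sensor values are clamped rather than rejected.
    for reading in (0.0, 0.5, 1.2):
        print(activity_to_angle(reading))
```

On real hardware the angle would be fed to a servo or stepper driver each time a new sensor sample arrives.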

[Dmitry’s] work has been covered many, many times already on Hackaday, but he keeps turning out the gems. We could watch this one for hours.

Blow Your Mind With The Brainwave Disruptor


Whether you believe in it or not, the science behind brainwave entrainment is incredibly intriguing. [Rich Decibels] became interested in the subject, and after doing some research, decided to build an entrainment device of his own.

If you are not familiar with the concept, brainwave entrainment theory suggests that low-frequency light and sound can be used to alter brain states, based on the assumption that the human brain will shift its dominant frequency to match an external stimulus. [Rich’s] device is very similar to [Mitch Altman’s] “Brain Machine”, and uses both of these methods in an attempt to place the user in an altered state of mind.

[Rich] installed a trio of LEDs into a set of goggles, wiring them along with a set of headphones to his laser-cut enclosure. Inside, the Brainwave Disruptor contains an Arduino, which is tasked with generating both the light patterns and the bit-banged audio streams.
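The audio side can be sketched with one common entrainment technique, the binaural beat: each ear gets a slightly different tone, and the brain perceives a beat at the difference frequency. Whether [Rich]’s firmware works exactly this way is an assumption; the sample rate and frequencies below are illustrative only:

```python
import math

# Binaural-beat sketch (not [Rich]'s actual firmware): the left ear
# hears base_hz, the right ear hears base_hz + beat_hz, and the
# perceived beat lands at beat_hz -- 10 Hz here, in the alpha band.

RATE = 8000  # samples per second, chosen arbitrarily for the demo

def binaural_samples(seconds, base_hz=440.0, beat_hz=10.0):
    """Return (left, right) sine-tone sample lists; right is offset by beat_hz."""
    n = int(seconds * RATE)
    left = [math.sin(2 * math.pi * base_hz * t / RATE) for t in range(n)]
    right = [math.sin(2 * math.pi * (base_hz + beat_hz) * t / RATE)
             for t in range(n)]
    return left, right
```

On an Arduino the same idea gets bit-banged: toggling output pins at the two frequencies rather than computing floating-point sines.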

Well, how does it work? [Rich] reports that it performs quite nicely, causing both visual and auditory hallucinations along with the complete loss of a sense of time. Sounds interesting enough to give it a try!

Brainwave-based Assistive Technology In The Home


Amyotrophic lateral sclerosis (ALS) is a debilitating disease that eventually causes the afflicted individual to lose all control of their motor functions, while leaving their mental faculties intact. Those suffering from the illness typically live for only a handful of years before succumbing to the disease. On some occasions, however, patients can live for long periods after their original diagnosis, and in those cases assistive technology becomes a key component in their lives.

[Alon Bukai and Ofir Benyamin], students at Ort Hermalin College in Israel, have been working hard on creating an EEG-controlled smart house for ALS patients under the guidance of their advisor [Amnon Demri]. The core of their project focuses on controlling everyday household items using brainwaves. They use an Emotiv EPOC EEG headset which monitors the user’s brainwaves when focusing on several large buttons displayed on a computer screen. These buttons are mapped to different functions, ranging from turning lights on and off to changing channels on a cable box. When the user focuses on a particular task, the computer analyzes the headset’s output and relays the command to the proper device.
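The relay step they describe amounts to a lookup: once the headset software decides which on-screen button has the user’s focus, the computer dispatches the matching command. A sketch of that layer; the button names and command tuples are hypothetical, not the students’ actual code:

```python
# Hypothetical command-relay layer: the EEG classifier hands us the
# name of the selected on-screen button, and we forward the mapped
# (device, action) pair to whatever actually talks to the hardware.

COMMANDS = {
    "lights_on":  ("lights", "on"),
    "lights_off": ("lights", "off"),
    "channel_up": ("cable_box", "channel_up"),
}

def dispatch(detected_button, send):
    """Relay the command for the button the EEG classifier selected."""
    device, action = COMMANDS[detected_button]
    send(device, action)

if __name__ == "__main__":
    log = []
    dispatch("lights_on", lambda device, action: log.append((device, action)))
    print(log)
```

Keeping the mapping in one table makes it easy to add devices without touching the EEG-analysis side.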

As of right now, the EEG-controlled home is only a project for their degree program, but we hope that their efforts help spur on further advancements in this field of research.

Continue reading to see a pair of videos demonstrating their EEG-controlled smart house in action.


Researchers Create A Brain Implant For Near-Real-Time Speech Synthesis

Brain-to-speech interfaces have been promising to help paralyzed individuals communicate for years. Unfortunately, many systems have had significant latency that has left them lacking somewhat in the practicality stakes.

A team of researchers from UC Berkeley and UC San Francisco has been working on the problem and made significant strides forward in capability. A new system developed by the team offers near-real-time speech, capturing brain signals and synthesizing intelligible audio faster than ever before.


Sleeping arctic fox (Alopex lagopus). (Credit: Rama, Wikimedia)

Investigating Why Animals Sleep: From Memory Sorting To Waste Disposal

What has puzzled researchers and philosophers for many centuries is the ‘why’ of sleep, along with the ‘how’. We human animals know from experience that we need to sleep, and that the longer we go without it, the worse we feel. Chronic sleep deprivation can even be fatal. Yet exactly why do we need sleep? To rest our bodies, and our brains? To sort through a day’s worth of memories? To cleanse our brain of waste products that collect as neurons and supporting cells busily do their thing?

Within the kingdom Animalia, one constant is that species with brains need to give them a regular break and a good sleep. Although what ‘sleep’ entails can differ significantly between species, it generally means a period of physical inactivity during which the animal’s brain patterns change significantly, with slower brainwaves. So-called rapid eye movement (REM) phases are also common, and dreaming is quite possibly a feature among many animals as well, though that is obviously hard to ascertain.

Most recently, strong evidence has emerged that sleep is essential for removing waste products, in the form of so-called glymphatic clearance. This is akin to lymphatic waste removal in other tissues; curiously enough, our brains lack a lymphatic system of their own. So is sleeping just a way to scrub our brains clean of waste?


Nature Vs Nurture In Beethoven’s Genome

When it comes to famous musicians, Beethoven is likely to hit most top ten charts. Researchers recently peered into his genome to see if they could predict his talent by DNA alone.

Using a previously identified polygenic index (PGI) for musical talent, which weights genetic variants by their estimated influence on a given trait as determined in a genome-wide association study (GWAS), the researchers compared samples of Beethoven’s DNA to two separate population studies with known musical achievement data.
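Mechanically, a polygenic index is just a weighted sum over variants. A toy version of the scoring step; the variant IDs, dosages, and effect sizes below are made up for illustration, not the study’s actual data:

```python
# Illustrative polygenic-index scoring, assuming per-variant effect
# sizes from a GWAS. Dosage is how many copies (0, 1, or 2) of the
# effect allele an individual carries at each variant.

def polygenic_index(dosages, weights):
    """Sum of (allele dosage) x (GWAS effect size) over all variants."""
    return sum(dosages[variant] * w for variant, w in weights.items())

weights = {"rs1": 0.12, "rs2": -0.05, "rs3": 0.30}   # hypothetical effect sizes
individual = {"rs1": 2, "rs2": 1, "rs3": 0}          # hypothetical genotype

print(polygenic_index(individual, weights))  # roughly 0.19
```

An individual’s percentile, like the one quoted for Beethoven, comes from ranking their score against a reference population scored the same way.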

Much to the relief of those who saw Gattaca as a cautionary tale, the scientists found that Beethoven scored only around the tenth percentile for the ability to keep a beat according to his genetic markers. According to the researchers, using genetic markers to predict the abilities of an individual can lead to incorrect conclusions, despite their usefulness for group-level analyses.

Curious about more musical science? How about reconstructing “Another Brick in The Wall (Part I)” from brainwaves or building a Square Laser Harp?

Re-Creating Pink Floyd In The Name Of Speech

For people who have lost the ability to speak, the future may include brain implants that bring that ability back. But could these brain implants also allow them to sing? Researchers believe that, all in all, it’s just another brick in the wall.

In a new study published in PLOS Biology, twenty-nine people who were already being monitored for epileptic seizures participated via a postage stamp-sized array of electrodes implanted directly on the surface of their brains. As the participants were exposed to Pink Floyd’s Another Brick In the Wall, Part 1, the researchers gathered data from several areas of the brain, each attuned to a different musical element such as harmony, rhythm, and so on. Then the researchers used machine learning to reconstruct the audio heard by the participants using their brainwaves.

First, an AI model looked at the data generated from the brains’ responses to components of the song, like the changes in rhythm, pitch, and tone. Then a second model rejiggered the piecemeal song and estimated the sounds heard by the patients. Of the seven audio samples published in the study results, we think #3 sounds the most like the song. It’s kind of creepy but ultimately very cool. What do you think?
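The decoding pipeline can be illustrated with a toy linear decoder: stage one learns a map from neural features to spectrogram values by least squares, and stage two would resynthesize audio from the prediction. Everything here (a single feature, a single spectrogram bin, the synthetic numbers) is an assumption for clarity, not the study’s model:

```python
# Toy linear decoding sketch: fit a least-squares slope from a
# simulated electrode feature to a simulated spectrogram bin, then
# "reconstruct" the bin from the neural signal alone.

def fit_slope(xs, ys):
    """Least-squares slope through the origin: w = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

neural  = [0.5, 1.0, 1.5, 2.0]   # simulated electrode feature over time
spectro = [1.0, 2.0, 3.0, 4.0]   # simulated spectrogram bin (true slope 2)

w = fit_slope(neural, spectro)
reconstructed = [w * x for x in neural]
print(w)  # 2.0 on this noiseless toy data
```

The real system fits many such maps (one per spectrogram bin, over many electrodes) and then runs a synthesis model over the predicted spectrogram to produce audio.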
