
Archive for the ‘Brain-Computer Interface’ Category

PSFK reports:

… Mind Flex requires players to wear a headset equipped with sensors that measure brainwave activity in order to levitate a ball and move it through hoops.

In this video, a representative explains that the sensors detect and measure theta-wave output, which your brain produces more of as you concentrate. The headset converts those waves into a signal and sends it to the game unit, which in turn powers tiny fans. The harder you concentrate, the faster the fans spin and the higher the ball floats.

The brain waves signal a computer, which in turn signals real-life fans, making it a sort of brain-computer interface hybrid. As more of these games go mainstream, interest in neuroscience will grow as the results become more tangible. Perhaps they will also begin to change some views toward non-materialism.
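For the curious, here is a rough sense of how a toy like this could map brain activity onto fan speed: estimate relative power in the theta band (4-8 Hz) from a short EEG window and scale it into a fan duty cycle. This is a minimal sketch with made-up thresholds and a fake signal, not Mattel's actual firmware.

```python
import numpy as np

FS = 256  # assumed EEG sample rate in Hz (illustrative)

def theta_power(eeg_window):
    """Relative power in the theta band (4-8 Hz) of one EEG window."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    theta = spectrum[(freqs >= 4) & (freqs < 8)].sum()
    total = spectrum[(freqs >= 1) & (freqs < 40)].sum()
    return theta / total if total > 0 else 0.0

def fan_command(rel_theta, floor=0.1, ceiling=0.5):
    """Map relative theta power onto a 0-255 fan duty cycle (made-up scaling)."""
    scaled = (rel_theta - floor) / (ceiling - floor)
    return int(255 * min(max(scaled, 0.0), 1.0))

# Fake one second of "EEG": noise plus a 6 Hz (theta) component.
t = np.arange(FS) / FS
window = 0.5 * np.random.randn(FS) + np.sin(2 * np.pi * 6 * t)
print(fan_command(theta_power(window)))  # harder "concentration" -> higher duty cycle
```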

Read Full Post »

Ron-Angevin and Diaz-Estrella recently ran a training study with untrained subjects to test whether they made more errors in a conventional (read: boring) training environment, manipulating a 2-D horizontal bar, or in a virtual-reality environment in which they attempted to drive a car. In both environments, the subjects could respond only by imagining moving their right or left hands.

Once feedback was enabled in both systems (after a few seconds), the virtual-reality system led to significantly fewer errors than the conventional one. The subjects reported being more engaged in the virtual-reality game, and there is strong evidence that they were focused on dodging the obstacles.

This study has applications in a surprising number of settings. Subjects respond well when presented with challenging, life-like environments that have feedback mechanisms built in. When building brain-computer interfaces, it is essential to include such a training environment.
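To make the setup concrete, here is a minimal sketch of the kind of closed feedback loop such a study relies on: band-power features from imagined left- vs. right-hand movement are classified, and the decoded command steers the "car" away from obstacles. The data are synthetic and the classifier (LDA on two channels) is my own stand-in, not necessarily what Ron-Angevin and Diaz-Estrella used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-in for mu-band (8-12 Hz) power over left/right sensorimotor channels.
# Imagined right-hand movement suppresses power over the left hemisphere,
# and vice versa (event-related desynchronization).
def fake_trial(imagined_hand):
    left_ch, right_ch = 1.0, 1.0
    if imagined_hand == "right":
        left_ch -= 0.4
    else:
        right_ch -= 0.4
    return np.array([left_ch, right_ch]) + 0.1 * rng.standard_normal(2)

# "Training phase": collect labelled trials and fit a simple classifier.
X = np.array([fake_trial(h) for h in ["left", "right"] * 100])
y = np.array([0, 1] * 100)  # 0 = left, 1 = right
clf = LinearDiscriminantAnalysis().fit(X, y)

# "Feedback phase": each decoded command nudges the car left or right,
# which is what lets the subject dodge obstacles in the VR task.
car_position = 0.0
for intent in ["left", "left", "right", "left", "right", "right"]:
    decoded = clf.predict(fake_trial(intent).reshape(1, -1))[0]
    car_position += 0.5 if decoded == 1 else -0.5
    print(f"intended {intent:5s} -> car at {car_position:+.1f}")
```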

The real question is, does this study lend any more evidence to the startling conclusion that we are living in a computer simulation?

Reference

Ron-Angevin R, Diaz-Estrella A. 2008. Brain–computer interface: Changes in performance using virtual reality technique. Neuroscience Letters 449: 123-127. doi:10.1016/j.neulet.2008.10.099.

Read Full Post »

That is the report from Nature News. They note that:

The electrode is different to others used for brain–computer interfaces, most of which are fixed to the skull rather than within a specific part of the brain. This means that the electrodes can move around, making it difficult to record from the same neurons every time or to leave the electrode in place for more than a few months at a time.

The electrode used by Guenther’s team is impregnated with neurotrophic factors, which encourage neurons to grow into and around the electrode, anchoring it in place and allowing it to be recorded from for a much longer time.

As the results of this research generate more public goodwill, more money will probably be poured into it. The use of neurotrophic factors is interesting, and success in this area would raise serious questions about whether the technique could also be used on healthy people to augment their senses or even to provide additional ones.

Read Full Post »

One way for quadriplegic patients to feed themselves and perform other motor tasks is to connect an interface to neural populations and, using a complicated algorithm, decode task-related activity to determine the parameters needed to drive external devices. In a fascinating experiment from the Washington National Primate Research Center published in Nature, Moritz et al. have shown that it is possible to create artificial connections from haphazardly chosen motor cortex cells to multiple muscles in order to restore movement to a paralyzed arm.

The researchers used operant conditioning to reward monkeys for specified wrist movements. Within 10 minutes of the first practice session the monkeys had volitional control of the firing of most of the neurons involved, which is incredible. The researchers then increased the complexity of the experiment so that the monkeys were rewarded for maintaining a certain amount of torque in the wrist, which requires that they be able both to contract the muscles and to relax them on cue. The monkeys were able to do this as well, and their control via each individual cell improved with practice.

This technique may prove to be a revolutionary strategy for restoring motor control in paralyzed patients, with signal pathways running directly from cells to muscles instead of the typical approach of decoding signals from a neural population at large. Their method piggybacks on the natural plasticity of individual cells to tremendous effect. The optimal strategy now would probably combine this new technique with the old method somehow. The paper is short and fascinating, required reading for anybody at all interested in brain-computer interfacing.
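To illustrate the core idea of an artificial cell-to-muscle connection, here is a toy sketch in which a single cortical cell's firing rate, relative to baseline, grades functional electrical stimulation of a flexor/extensor pair. The thresholds and gains are invented for illustration and are not the values used by Moritz et al.

```python
def stim_commands(rate_hz, baseline_hz=20.0, gain=1.5, max_stim=40.0):
    """Map one cortical cell's firing rate to graded functional electrical
    stimulation (FES) of a flexor/extensor pair.

    Rates above baseline drive the flexor, rates below drive the extensor;
    the further from baseline, the stronger the stimulation. All numbers
    here are illustrative, not the values used in the actual experiment.
    """
    delta = rate_hz - baseline_hz
    flexor = min(max(gain * delta, 0.0), max_stim)
    extensor = min(max(gain * -delta, 0.0), max_stim)
    return flexor, extensor  # stimulation amplitudes, arbitrary units

for rate in [5, 15, 20, 30, 45]:
    flex, ext = stim_commands(rate)
    print(f"cell at {rate:2d} Hz -> flexor {flex:4.1f}, extensor {ext:4.1f}")
```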

Reference

Moritz CT, Perlmutter SI, Fetz EE. 2008. Direct control of paralysed muscles by cortical neurons. Nature, advance online publication, October 15. doi:10.1038/nature07418.

Read Full Post »

This is obviously fascinating stuff. The lecture is by Professor Klaus-Robert Mueller, who works at the Technical University of Berlin and has studied mathematical physics and theoretical computer science in addition to his work on brain-computer interfacing (also known as BCI). Here is a link to the lecture, and although the streaming is a bit choppy, the work they are doing is worth the wait. Here are my notes:

  • The general structure of their non-invasive BCI is a brain, a measuring device, feature extraction, a classifier, and a control output. The goal could be to control some device using neural signals.
  • One challenge is that if you measure electrical activity at the scalp, each sensor picks up only a superposition of the activity from many brain sources. The problem is that there is not a single source but many quasi-independent sources working at once. This is known as the cerebral cocktail-party problem.
  • One way to separate sources of interest from sources of non-interest (and “solve” the cerebral cocktail-party problem) is to use independent component analysis (aka ICA, info here). It attempts to decompose the recorded electrical activity into statistically independent components, each attributable to a distinct source; see the ICA sketch after these notes.
  • In their research, they wanted both to predict motor movements before they occurred and to detect movements that subjects merely imagined.
  • Their “imagining movement” study included a 20-minute training phase (where they collected ~200 data points), a 5-10 minute machine-learning phase, and then a feedback phase in which the learned model was tested. After this point, the subjects were able to play computer pong with their brains!
  • Peak bit rates for the feedback phase reached 37 bits per minute, although the overall bit rate varied substantially across the six subjects (see the bit-rate sketch after these notes).
  • They were also able to have the subjects spell words using imagined movements of their left or right hands, and one innovative technique went beyond binary selection, which sped up the process.
  • These data showed that the technique can be non-invasive (they used EEG) and still produce high information transfer rates.
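Here is a minimal ICA sketch (referenced from the notes above): a few simulated “electrode” channels, each a different superposition of two underlying sources, are unmixed with FastICA from scikit-learn. The sources and mixing matrix are invented; this only illustrates the idea, not Mueller's actual analysis pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 512)

# Two hypothetical "brain" sources: a 10 Hz rhythm and a slower sawtooth.
sources = np.c_[np.sin(2 * np.pi * 10 * t),
                (t * 4) % 1 - 0.5]

# Each electrode records a different superposition of the sources --
# the "cerebral cocktail party" problem described above.
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.2],
                   [0.8, 0.3]])
recordings = sources @ mixing.T + 0.02 * rng.standard_normal((len(t), 3))

# ICA tries to recover statistically independent components from the mixtures.
ica = FastICA(n_components=2, random_state=0)
estimated_sources = ica.fit_transform(recordings)
print(estimated_sources.shape)  # (512, 2): one column per recovered component
```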
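And for a sense of where a figure like 37 bits per minute comes from, here is the standard Wolpaw information transfer rate formula. The accuracy and selection rate plugged in below are hypothetical numbers chosen only to show that a fast, accurate binary BCI lands in that neighbourhood; the lecture does not say that exactly this formula or these numbers were used.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_minute):
    """Information transfer rate (Wolpaw et al.) in bits per minute."""
    p, n = accuracy, n_classes
    bits_per_selection = math.log2(n)
    if 0 < p < 1:
        bits_per_selection += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits_per_selection * selections_per_minute

# Hypothetical example: a binary (left/right) BCI at 98% accuracy making
# 43 selections a minute comes out near the peak rate quoted in the lecture.
print(round(wolpaw_itr(2, 0.98, 43), 1))
```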

This lecture was in 2007, before all of the hoopla over the prosthetic arm movements by monkeys in Japan, but this field is (IMNSHO) about to explode. Get ready.

Read Full Post »
