Cognixion gives voice to a user’s thoughts



There are some key technical problems to solve. BCIs have historically been viewed somewhat skeptically, particularly those based on electroencephalography, because scalp EEG signals are noisy and carry limited information. So our challenge is to come up with a stimulus-response paradigm that enables sufficient expressive capability within the user interface. In other words, can I show you enough different kinds of stimuli to give you meaningful choices, so you can use the system efficiently without becoming unnecessarily tired?

Then it’s like whack-a-mole, or the digital equivalent. When we see a specific frequency come through at a certain power threshold, we act on it. How many unique frequencies can we disambiguate from one another at any given time?
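As a rough illustration of that kind of frequency-and-threshold detection (Cognixion's production algorithm isn't public, so the function name, band width, and threshold below are all assumptions), here is a minimal Python sketch: take a window of EEG samples, compute its power spectrum, and report the candidate stimulus frequency whose band power crosses the threshold.

```python
import numpy as np

def detect_stimulus(eeg, fs, target_freqs, power_threshold):
    """Return the candidate frequency whose band power exceeds the
    threshold (the strongest one, if several do), or None.

    Illustrative only; `eeg` is a 1-D window sampled at `fs` Hz.
    """
    # Power spectrum of the window: squared magnitude of the real FFT.
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    best_freq, best_power = None, power_threshold
    for f in target_freqs:
        # Sum power in a narrow band (+/- 0.5 Hz, an assumed width)
        # around the flicker frequency of this stimulus.
        band = (freqs >= f - 0.5) & (freqs <= f + 0.5)
        band_power = spectrum[band].sum()
        if band_power > best_power:
            best_freq, best_power = f, band_power
    return best_freq

# Toy check: a noisy 12 Hz oscillation is picked out of four candidates.
fs = 256
t = np.arange(2 * fs) / fs  # a two-second window
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
print(detect_stimulus(eeg, fs, [8, 10, 12, 15], power_threshold=5000.0))
```

The number of candidates you can pack in is bounded by the frequency resolution of the window (here fs/N = 0.5 Hz) and by how cleanly neighboring bands separate in noisy scalp recordings, which is exactly the disambiguation limit the question above is getting at.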

“For some people, we make things easy, and for other people, we make things possible. That’s the way we look at it: technology in service of enhancing a human’s ability to do things,” says Andreas Forsland, founder and CEO of Cognixion.


Another challenge is that a commercial device should require a nearly zero learning curve. Once you pop it on, you need to be able to use it within minutes, not hours.

So we might couple the stimulus-response technology with a display, or speakers, or haptics that give biofeedback to help train your brain: ‘I’m doing this right’ or ‘I’m doing it wrong.’ That gives people the positives and negatives as they interact with it. If you can close those feedback iterations quickly, people learn to use the system faster.
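A closed loop like that might look like the following hypothetical sketch, where each detection immediately drives a visual, audio, or haptic cue; the function names and cue strings are all assumptions, since the product's feedback design hasn't been described in detail.

```python
from typing import Callable, Optional, Sequence

def biofeedback_step(
    read_window: Callable[[], Sequence[float]],            # latest EEG samples
    detect: Callable[[Sequence[float]], Optional[float]],  # e.g. detect_stimulus above
    intended_freq: float,           # the stimulus the user is trying to select
    render: Callable[[str], None],  # display / speaker / haptic driver
) -> bool:
    """One iteration of a hypothetical train-by-feedback loop: acquire
    a window, run detection, and cue success or failure right away so
    the user can adjust on the next attempt."""
    detected = detect(read_window())
    correct = detected == intended_freq
    # Positive cue (say, a green flash or rising tone) on success,
    # negative cue otherwise; the tighter the loop, the faster people learn.
    render("positive" if correct else "negative")
    return correct
```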

Our goal is to harden and fortify the reliability and accuracy of what we’re doing algorithmically. That would give us a very robust IP portfolio that could go into mainstream applications, likely in the form of much deeper partnerships.


In terms of applications, we are pursuing a medical channel and a research channel. Making a medical device is much more challenging than making a consumer device, for a variety of reasons: validation, documentation, regulatory requirements. So it takes some time. But the initial indications for use will be speech generation and environmental control.

Eventually, we could look to expand our indications within the control ‘bubble’ to cover additional interactions with people, places, things, and content. Plus, the system could find applications within three other healthcare bubbles. One is diagnostics in areas like ophthalmology and neurology, thanks to the sensors and closed-loop nature of the device. A second is therapeutics for conditions involving attention, focus, and memory. And the third is remote monitoring in telehealth-type situations, because of the network capabilities.

The research side uses the same medical-grade hardware, loaded with different software that enables biometric analysis and development of experimental AR applications. We’re preparing for production, with delivery against initial demand early next year, and we’re actively seeking research partners who would get early access to the device.

In addition to collaborators in neuroscience, neuroengineering, bionics, human-computer interaction, and clinical and translational research, we’re also soliciting input from user experience researchers to arrive at a final set of specific technical and use-case requirements.

We think there’s tremendous opportunity here. And we’re constantly being asked, ‘When can this become mainstream?’ We have some thoughts and ideas about that, of course, but we also want to hear from the research community about the use cases they can dream up.




