Hello, if anyone from this team is reading this and needs a quadriplegic geek with beta-testing experience who would massively benefit from this technology, let me know.
This could make my life 3.7 million percent easier. Seriously.
Would it work at all (assuming your quadriplegia is spinal-cord related)? It works by measuring, at the forearm, signals sent from the brain:
>How does it do that? By measuring changes in electrical potential, which are caused by impulses that travel from the brain to hand muscles through lower motor neurons. This information-rich pathway in the nervous system comprises two parts: upper motor neurons connected directly to the brain’s motor center, and lower axons that map to muscle and muscle fibers.
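To make that quote a bit more concrete, here is a toy sketch of the standard way a surface-EMG-style signal gets turned into a muscle-activation estimate: band-pass filter, rectify, then smooth into an envelope. This is textbook signal processing, not anything CTRL-Labs has published, and the signal and every number in it are generic assumptions.

```python
# Toy illustration (not CTRL-Labs' pipeline): estimate muscle activation
# from a surface-EMG-like signal. The signal here is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                      # assumed sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)  # two seconds of data

# Synthetic "EMG": noise whose amplitude grows during a contraction.
activation = np.where((t > 0.8) & (t < 1.4), 1.0, 0.1)
emg = activation * np.random.randn(t.size)

# 1. Band-pass 20-450 Hz, roughly where surface EMG energy lives.
b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, emg)

# 2. Rectify and low-pass to get an amplitude envelope.
rectified = np.abs(filtered)
b_env, a_env = butter(2, 5, btype="lowpass", fs=fs)   # 5 Hz envelope
envelope = filtfilt(b_env, a_env, rectified)

print("peak activation at t =", t[envelope.argmax()], "s")
```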
Perhaps it could be configured to work with neck muscles instead?
Fortunately, like most quads, I have some movement below the level of injury. If you're that way inclined, here's a YouTube video of the amount of positive control I have over my body below the neck (My Range of Movement [1]). It's using that and my voice that I can post here, work, bother Sam Harris on Twitter, and so on.
So ideally it would go on my finger, or somewhere on my lower forearm such that I could use both my finger and the muscles in my lower arm to do different things.
I definitely could not have anything wrapped around my neck as it's too fragile, and the thought of having something wrapped around my neck gives me the horrors. Maybe scalp muscles could be used; that would be cool.
• Receive raw EEG, accelerometer, gyroscope, and battery data
• Leverage built-in algorithms for band powers, eye blinks, and jaw clenches
• Record and/or playback data to and from a file
• Full documentation
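For what it's worth, the "band powers" item above is ordinary spectral analysis. Here is a minimal sketch of the idea (plain numpy/scipy on a raw channel, not the vendor's SDK; the sample rate is an assumption):

```python
# Rough sketch of what a "band powers" feature boils down to: take a window
# of raw EEG and integrate its power spectrum over the classic frequency bands.
import numpy as np
from scipy.signal import welch

fs = 256                              # assumed consumer-EEG sample rate
eeg_window = np.random.randn(fs * 2)  # stand-in for 2 s of one raw channel

freqs, psd = welch(eeg_window, fs=fs, nperseg=fs)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

band_power = {
    name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                   freqs[(freqs >= lo) & (freqs < hi)])
    for name, (lo, hi) in bands.items()
}
print(band_power)
```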
It’s geared toward mobile, but I imagine you could have an iOS/Android companion app that acts as a remote for a computer over Bluetooth/WiFi.
I tried a quick search to see if anyone was working on this as a general input device for people who can't use a mouse or keyboard, but it didn't turn up anything.
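A rough sketch of the companion-app-as-remote idea above: the phone app streams small JSON events to the computer, and the computer injects them as input. The message format and port are invented for illustration; pyautogui is just one way to synthesize input events on the desktop side.

```python
# Desktop side of a hypothetical phone-as-remote setup: listen on a local
# socket, read one JSON event per line, and inject it as mouse input.
import json
import socket

import pyautogui  # pip install pyautogui

HOST, PORT = "0.0.0.0", 9999  # hypothetical choice

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen(1)
    conn, _ = server.accept()
    with conn, conn.makefile("r") as stream:
        for line in stream:              # one JSON event per line
            event = json.loads(line)
            if event["type"] == "move":
                pyautogui.moveRel(event["dx"], event["dy"])
            elif event["type"] == "click":
                pyautogui.click()
```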
Which is where you pointed out the fallacy of the title, since by that logic I could say a mouse and keyboard are controlled by my mind too: it's the electrical impulses that trigger muscle movement which end up being translated into computer input.
My experience working with EEG headsets is that you can create a few cool demos where you show people how you can control this or that very restricted interface, but only after you spend ages calibrating the system for you specifically and for that demo specifically, and make sure the power supply is very steady and there is no strong wifi or bluetooth around. And even then it'll be too frustrating to work with for you to want to use it for anything except showing it off as a demo.
This section:
"Then he strapped the bracelet on my arm. I had worse luck — the thumb on the computerized hand reflected the motions of my thumb, but the index and pinkie finger didn’t — they remained stiff. Berenzweig had me recalibrate the system by angling my wrist slightly, but to no avail."
and this:
“It acts like an antenna,” he said, “so it’s susceptible to interference.”
Sounds a lot like my experience. If you can't make a system people can just pick up and use right away, I predict it will just be too much hassle for anyone to want to use. And throwing buzzwords like ML at the problem doesn't seem to change that.
When I read that section followed by "He chalked it up to the demo’s generalized machine learning model." I also got doubtful, but then the second demo seems to indicate that they can drastically improve the results by adapting the model to the user.
In machine learning terms, it seems that they can manage supervised learning (given a known task, they can figure out how signals correspond to movements) but not unsupervised learning (given a new user's signals, they can't decode them without knowing what the user is trying to do).
So I can imagine this working if new users first had to complete a short initialization sequence (maybe gamified in some way) but could then use it with errors no more common than typos on a keyboard.
The situation could be similar to accents in speech recognition, where you need some form of speaker adaptation. You would obviously use data from more than one individual to train a model that adapts automatically, but that adaptation will still take a little time.
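Here's a sketch of what that adaptation step could look like in the simplest case: a generic classifier trained on pooled data from many users, then updated on a minute or so of labeled calibration data from the new user. This is the generic supervised-adaptation pattern, not CTRL-Labs' actual model; the feature size, gesture count, and data are all made up.

```python
# Generic-model-plus-calibration sketch using sklearn for brevity.
import numpy as np
from sklearn.linear_model import SGDClassifier

n_features = 64                     # e.g. EMG-derived features per window (assumed)
gestures = np.array([0, 1, 2])      # three gesture classes (assumed)

# "Generic" model: pretend this was trained on pooled data from many users.
pooled_X = np.random.randn(5000, n_features)
pooled_y = np.random.choice(gestures, size=5000)
model = SGDClassifier()
model.partial_fit(pooled_X, pooled_y, classes=gestures)

# New user's short calibration session: labels come from prompting the user
# to perform known gestures (the "gamified" initialization step).
calib_X = np.random.randn(120, n_features)
calib_y = np.random.choice(gestures, size=120)
for _ in range(10):                 # a few passes over the calibration data
    model.partial_fit(calib_X, calib_y)

# After adaptation, classify live windows of signal features.
live_window = np.random.randn(1, n_features)
print("predicted gesture:", model.predict(live_window)[0])
```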
tbh it depends on how much value you can get out of it. If it introduces some new paradigm of interaction that's actually worth having, then you'll probably see power users slowly trending toward it; and if they get a cohesive/useful enough ecosystem to surround it, then accessible stuff will start to appear, and suddenly it'll be worth it for the average person to put up with the initialization step.
so the setup might be shit, but if the system itself works well, and the expected value of it working well actually bears fruit, then there's still a chance it'll all work out.
> then you'll probably see power-users slowly trending to it
Think of it like this. If you have the choice between a regular mouse with 3 buttons, and a mouse with 22 buttons that randomly flies off in some direction for 3 seconds every 2 minutes or so, which would you choose as a power user?
Obviously the first, because reliably having 3 buttons is just so much better than a wonky mouse, even if it does have 19 more buttons (that you likely won't use anyway).
It's not really a case of them needing to find a new paradigm; the technology needs to get to a place where someone (like the interviewer) can at the very least just pick it up and use it. And after that they need to get to a point where it's reliable and consistent. EEG headsets have been trying to get there for some 10-15 years now without much luck, and these guys might have a good funding pitch claiming "it's different, we are using muscles!" (EEG headsets pick up your facial muscles as well, but let's forget that for a second) and of course "We're using machine learning to build a model!" (machine learning has already been applied to EEG headsets, naturally, but it's not a problem of analyzing the signals; it's about getting consistent signals from hardware that freaks out if a metal cart is nearby).
I'm not saying these guys can't do this, I'm just saying that nothing they've shown or done sets them apart from the other companies who have spent 10 years failing to provide something other than cool-looking demos in the field.
I'm thinking more along the lines of switching from QWERTY to Dvorak. There's an initial barrier to usage, but that's fine if it actually results in something (significantly) better. With Dvorak, there are power-user converts, despite the benefit being minimal.
Your many-button mouse example fails in that learning it doesn't actually lead to a significant benefit, so of course it'll never see converts; but if it did, if it were actually better than the three-button mouse, then the initialization cost would be outweighed by the long-term benefit, and it would remain possible for it to succeed.
My point is primarily about the initialization; if this tech's only substantial negative is that it has to be trained to function correctly, and its benefit is substantial, then there remains a path to success. That is, the training step (alone) does not necessarily kill the technology.
I can't think of any good examples off the top of my head.
Normally I am bearish on this type of technology, given the number of previous similar attempts with similar marketing that all failed to deliver. This line gave me pause, though:
"Within just a few seconds, moving the cursor with thought became almost second nature, and I was able to steer it up, down, and to the left and write by thinking about moving — but not actually moving — my hand."
That's quite a strong statement. If true, I'd hop on that bandwagon.
This seems like it could contribute to discreet AR tech in interesting ways. Voice control in public spaces is a bad idea for obvious reasons [1], but this gets you something roughly touchscreen-equivalent for e-glasses or whatever.
I think it is actually a plus that they are shooting for a consumer product and not a medical one. If it's available to anyone, that includes handicapped people, minus the stigma that "only crips use that." (Yes, I'm aware that is an offensive word.)
Bonus points: It will probably be more affordable as well.
You're gonna love Vimium/Vimperator/whatever. It makes the whole browser keyboard-accessible with easy bindings (if you're familiar with the Vim ones).
I really liked Vimium, but I ran into a problem where it just annihilated CPU and I had to uninstall it. It's probably been a few version changes since then, so I might give it a try again.
This is why I have a real hard time buying anything other than a Thinkpad, because of the trackpoint. If I have to move my wrists from where they're planted on the keyboard, it feels about as annoying as having to get up off the couch after I'm all comfortable because I have to go to the bathroom.
My guess is that the precision, dexterity, and energy cost of finger/hand/wrist movements make them more effective than using the mouth.
An example would be complex poses such as the "gun" shape with a hand, as opposed to "pouting" your lips: there's so much opportunity to finely tune the motion from open hand to gun shape, whereas pouting is far more linear. However, I'm not sure where or how the "thinking" part would play into this, e.g. physical muscle movements vs. the intention of muscle movements.
And there might just be too much noise and too many involuntary mouth movements (e.g. breathing, swallowing) to make it practical, plus the fact you'd look like a psychopath mumbling to yourself.
Thinking about moving without actually doing so breaks the proprioceptive feedback loop that is important in precision motor control, so I wonder if using any device working on this principle will require some learning on the part of the operator as well as the electronics. This may be more of an issue for me than for most people - I find a mouse to be a much better control than a touchpad, which I think is because moving an actual object with a small but perceptible heft and drag gives me feedback.
I have a Myo, and it's the device which taught me that what I'm actually interested in is interface technology that reduces my movement cost, and the Myo -- while being a pretty cool device and also one of the few that actually works within 10% of what's advertised -- unfortunately increases my movement cost. It's also got a pretty narrow range of actions. More than a few, but not enough to make the increase in movement cost worthwhile, IMO.
I had a job writing software for piloting a drone with the Myo. The Myo recognizes a small set of palm and finger gestures, not continuous movements, so, for example, you couldn't record yourself typing on a keyboard. It does stream arm movement and rotation, though. No mind reading either, which is the gist of the CTRL-Labs armband.
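To make that split concrete, here's a skeleton of the event model described above: a handful of discrete pose callbacks plus a continuous orientation stream. The class and method names are invented for illustration and are not the real Myo SDK API.

```python
# Hypothetical event model: discrete poses vs. a continuous IMU stream.
from dataclasses import dataclass
from enum import Enum, auto


class Pose(Enum):
    REST = auto()
    FIST = auto()
    FINGERS_SPREAD = auto()
    WAVE_IN = auto()
    WAVE_OUT = auto()


@dataclass
class ImuSample:
    roll: float
    pitch: float
    yaw: float


class DroneController:
    """Maps a few discrete gestures plus arm orientation to commands."""

    def on_pose(self, pose: Pose) -> None:
        # Discrete events only: there is no per-finger stream, so you
        # couldn't reconstruct typing from this.
        if pose is Pose.FIST:
            print("takeoff")
        elif pose is Pose.FINGERS_SPREAD:
            print("land")

    def on_imu(self, sample: ImuSample) -> None:
        # Continuous arm orientation is available, so tilt can steer.
        print(f"steer roll={sample.roll:+.2f} pitch={sample.pitch:+.2f}")


controller = DroneController()
controller.on_pose(Pose.FIST)
controller.on_imu(ImuSample(roll=0.1, pitch=-0.3, yaw=0.0))
```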
I dunno. There's some real lag you can see in the video, and, worryingly, the cart says "dogfooding cart." Typically dogfood is labeled "dogfood" for a reason--it's not really fun or stable or effective to use/interact with (yet?).
> Typically dogfood is labeled "dogfood" for a reason--it's not really fun or stable or effective to use/interact with (yet?).
“Dogfooding” is also a term for using your own products like you expect customers / clients to use them. In that usage it isn’t a criticism of the product’s quality.
I believe that their problem will be finding the small group of users that derive enormous value from it. When you really dig into all the applications they've suggested, they don't quite make sense.