Ctrl-labs’ armband lets you control computer cursors with your mind (venturebeat.com)
134 points by sahin on June 17, 2018 | 68 comments


Hello, if anyone from this team is reading this and needs a quadriplegic geek with beta-testing experience: I would massively benefit from this technology. Let me know.

This could make my life 3.7 million percent easier. Seriously.


Would it work at all (assuming your quadriplegia is spinal cord related)? It functions by measuring signals sent from the brain at the forearm:

>How does it do that? By measuring changes in electrical potential, which are caused by impulses that travel from the brain to hand muscles through lower motor neurons. This information-rich pathway in the nervous system comprises two parts: upper motor neurons connected directly to the brain’s motor center, and lower axons that map to muscle and muscle fibers.

Perhaps it could be configured to work with neck muscles instead?


Hello,

Fortunately, like most Quads I have some movement below the level of injury. If you're that way inclined, here's a YouTube video of the amount of positive control I have over my body below the neck (My Range of Movement [1]). It's using that and my voice that I can post here, work, bother Sam Harris on Twitter etc etc.

So ideally it would go on my finger, or somewhere on my lower forearm such that I could use both my finger and the muscles in my lower arm to do different things.

I definitely could not have anything wrapped around my neck as it's too fragile, and the thought of having something wrapped around my neck gives me horrors. Maybe scalp muscles could be used, that would be cool.

Thanks

[1]:https://www.youtube.com/channel/UCjnBpiirfr7LL-mUk9uSpZw/vid...


A product like Muse might be useful as a headband interface

http://www.choosemuse.com/developer/#sdk

Key Features

• Discover and connect to all Muse headbands

• Receive raw EEG, accelerometer, gyroscope, and battery data

• Leverage built-in algorithms for band powers, eye blinks, and jaw clenches

• Record and/or playback data to and from a file

• Full documentation

It’s geared toward mobile, but I imagine you could have an iOS/Android companion app that acts as a remote for a computer over Bluetooth/WiFi.

I tried a quick search to see if anyone is working on this as a general input device for people who can't use a mouse or keyboard, but it didn't turn anything up.
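To make the companion-app idea concrete, here's a rough sketch of the desktop side: a small receiver that listens for events forwarded from the phone and turns them into cursor actions. Everything in it is an assumption rather than something taken from the Muse SDK (the message format, port, and event-to-action mapping are made up); it uses pyautogui for cursor control and newline-delimited JSON over TCP.

    # Desktop-side receiver for a hypothetical phone companion app.
    # The phone would translate Muse events (blink, jaw clench, head tilt)
    # into JSON lines like {"event": "blink"} and send them here.
    import asyncio
    import json

    import pyautogui  # third-party: pip install pyautogui

    async def handle(reader, writer):
        while line := await reader.readline():
            data = json.loads(line)
            event = data.get("event")
            if event == "blink":
                pyautogui.click()                      # blink -> left click
            elif event == "jaw_clench":
                pyautogui.click(button="right")        # jaw clench -> right click
            elif event == "move":
                # relative cursor movement, e.g. derived from head tilt (gyro)
                pyautogui.moveRel(data.get("dx", 0), data.get("dy", 0))
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8765)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())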


Neat, TIL. Thanks.


Which points out the fallacy of the title, since I could say a mouse and keyboard are controlled by my mind: it's the electrical impulses that trigger muscle movement which are being transposed into computer input.


My experience working with EEG headsets is that you can create a few cool demos where you show people how you can control this or that very restricted interface, but only after you spend ages calibrating the system for you specifically and for that demo specifically, and make sure the power supply is very steady and there is no strong wifi or bluetooth around. And even then it'll be too frustrating to work with for you to want to use it for anything except showing it off as a demo.

This section: "Then he strapped the bracelet on my arm. I had worse luck — the thumb on the computerized hand reflected the motions of my thumb, but the index and pinkie finger didn’t — they remained stiff. Berenzweig had me recalibrate the system by angling my wrist slightly, but to no avail." and this: “It acts like an antenna,” he said, “so it’s susceptible to interference.”

Sounds a lot like my experience. If you can't make a system people can just pick up and use right away, I predict it will just be too much hassle for anyone to want to use. And throwing buzzwords like ML at the problem doesn't seem to change that.


When I read that section followed by "He chalked it up to the demo’s generalized machine learning model." I also got doubtful, but then the second demo seems to indicate that they can drastically improve the results by adapting the model to the user.

In machine learning terms, it seems that they can manage supervised learning (given a known task, they can figure out how signals correspond to movements) but not unsupervised learning (given a new user's signals, they can't decode them without knowing what the user is trying to do).

So I can imagine this working if new users first had to complete a short initialization sequence (maybe gamified in some way) but could then use it without errors more common than typos when using a keyboard.
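A minimal sketch of what that prompted initialization could look like, assuming a placeholder EMG capture function and a simple scikit-learn classifier (the gesture set, feature size, and record_emg_window are all hypothetical; a real system would presumably warm-start from the generic cross-user model rather than train from scratch):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    GESTURES = ["rest", "pinch", "fist", "point"]  # prompted, so labels are known

    def record_emg_window(prompted_gesture, n_features=64):
        """Placeholder: capture one windowed, featurized EMG sample from the armband."""
        return np.random.randn(n_features)

    def calibrate(reps_per_gesture=10):
        model = SGDClassifier()  # stand-in for adapting the generic model
        X, y = [], []
        for gesture in GESTURES:
            for _ in range(reps_per_gesture):
                # A real UI would prompt "make a fist now" and record while the user complies
                X.append(record_emg_window(gesture))
                y.append(gesture)
        model.partial_fit(np.array(X), np.array(y), classes=GESTURES)
        return model

    if __name__ == "__main__":
        user_model = calibrate()
        print(user_model.predict(record_emg_window("pinch").reshape(1, -1)))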


One would presume they have data collected from more than one individual to train their model.


The situation could be similar to accents in speech recognition, where you need some form of speaker adaptation. You would obviously use data from more than one individual to train a model that adapts automatically, but that adaptation will still take a little time.


tbh it depends on how much value you can get out of it. If it introduces some new paradigm of interaction that's actually worth having, then you'll probably see power users slowly trending to it; and if they get a cohesive/useful enough ecosystem to surround it, then accessible stuff will start to appear, and suddenly it'll be worth it for the average person to put up with the initialization step.

so the setup might be shit, but if the system itself works well, and the expected value of it working well actually bears fruit, then there's still a chance it'll all work out.


> then you'll probably see power-users slowly trending to it

Think of it like this. If you have the choice between a regular mouse with 3 buttons, and a mouse with 22 buttons that randomly flies off in some direction for 3 seconds every 2 minutes or so, which would you choose as a power user?

Obviously the first, because reliably having 3 buttons is just so much better than a wonky mouse, even if it does have 19 more buttons (that you likely won't use anyway).

It's not really a case of them needing to find a new paradigm; the technology needs to get to a place where someone (like the interviewer) can, at the very least, just pick it up and use it. And after that they need to get to a point where it's reliable and consistent. EEG headsets have been trying to get there for some 10-15 years now without much luck, and these guys might have a good funding pitch claiming that "it's different, we are using muscles!" (EEG headsets pick up your facial muscles as well, but let's forget that for a second) and of course "We're using machine learning to build a model!" (machine learning has already been employed on EEG headsets, naturally, but it's not a problem of analyzing the signals, it's about getting consistent signals from hardware that freaks out if a metal cart is nearby).

I'm not saying these guys can't do this, I'm just saying that nothing they've shown or done sets them apart from the other companies that have spent 10 years failing to provide anything other than cool-looking demos in this field.


I'm thinking more along the lines of switching from QWERTY to Dvorak. There's an initial barrier to usage, but that's fine if it actually results in something (significantly) better. With Dvorak, there are power-user converts, despite whatever benefit there is being minimal.

Your many-button mouse example fails in that learning it doesn't actually lead to a significant benefit, so of course it'll never see converts; but if it did, if it were actually better than the three-button mouse, then the initialization cost is overridden by the long-term benefit, and it remains possible for it to succeed.

My point is primarily about the initialization; if this tech's only substantial negative is that it has to be trained to function correctly, and its benefit is substantial, then there remains a path to success. That is, the training step (alone) does not necessarily kill the technology.

I can't think of any good examples off the top of my head


A co-worker of mine paid some $800 for a developer (read: prototype) EEG headset from a promising startup a few years ago.

The promise was lofty, but the device was near unusable in practice.

In the demo application you could learn to move the ball around a little bit but it was ridiculously hard.

Nothing like Macross Plus...


I agree it won't replace a mouse anytime soon. It might be a good solution for handicapped people, though.


Normally I am bearish on this type of technology, given the number of previous similar attempts with similar marketing that all failed to deliver. This line gave me pause, though:

"Within just a few seconds, moving the cursor with thought became almost second nature, and I was able to steer it up, down, and to the left and write by thinking about moving — but not actually moving — my hand."

That's quite a strong statement. If true, I'd hop on that bandwagon.


The article mentions all the big investors too. They had to have impressed a lot of people to get that kind of funding.

Let's hope this device holds up when it's released, and doesn't turn into another Nintendo Power Glove.


> The article mentions all the big investors too. They had to have impressed a lot of people to get that kind of funding.

On the other hand, there's Magic Leap, which from the sound of things will turn out to be somewhat scam-ish. :/


This seems like it could contribute to discreet AR tech in interesting ways. Voice control in public spaces is a bad idea for obvious reasons [1], but this gets you something roughly touchscreen-equivalent for e-glasses or whatever.

[1]: http://dilbert.com/strip/1994-04-24


I think it is actually a plus that they are shooting for a consumer product and not a medical one. If it's available to anyone, that includes handicapped people, minus the stigma that "only crips use that." (Yes, I'm aware that is an offensive word.)

Bonus points: It will probably be more affordable as well.


If I can keep my hands on the keyboard all the time, life gets a whole lot better.


Looks like the next step is almost there technologically, where we'll have programmers coding away while moving the mouse with their minds.

The next step after that would be programmers coding away with their minds..?


Give me one of these and a set of Newgle Glasses, all connected to a wristwatch computer on the other arm and we'll be able to dump laptops.

Imagine literally being able to work anywhere. On the beach, climbing a mountain, anywhere the hardware works.


I can see it: virtual screens projected in our eyes (or directly to brain), with mind-controlled keyboard, mouse and/or other input types..

So programmers of the future might just look like high-tech monks meditating.


Dreaming up artificial realities using only our brains.

One day we will all be gods.


> One day we will all be gods.

Or the entire world will be our cubicle.


Hopefully no need for them to be only artificial realities.


Probably not good when climbing a mountain.

On the obituary "Lost grip due to an auto-play ad at full volume...". ;)


What are these "Newgle" glasses? I haven't found any useful references to them.


They're a hypothetical future product like Google Glasses, but much better.


> ... we'll have programmers coding away while moving the mouse with their minds.

Personally, I'd really like to be able to use something like this for CAD design / performing operations on 3D models. :)

For bonus points, multi-user interactive product design sessions.

That should help to reduce iteration time in product development.


Why do you need a mouse at all? I barely ever use it.


Very few websites (apart from the most basic ones) are keyboard-friendly.


You're gonna love Vimium/Vimperator/whatever. It makes the whole browser keyboard-accessible with easy bindings (if you're familiar with the Vim ones).


I really liked Vimium, but I ran into a problem where it just annihilated CPU and I had to uninstall it. It's probably been a few version changes since then, so I might give it a try again.


This is why I have a real hard time buying anything other than a Thinkpad, because of the trackpoint. If I have to move my wrists from where they're planted on the keyboard, it feels about as annoying as having to get up off the couch after I'm all comfortable because I have to go to the bathroom.


I agree. A better version of this type of technology was described by David Brin in the 90s.

Why not measure the minute signals of the mouth to predict word commands instead of hand movements?


My guess is the precision, dexterity, and energy cost of finger/hand/wrist movements make them more effective than using the mouth.

An example would be complex poses such as the "gun" shape with a hand as opposed to "pouting" your lips: there's so much opportunity to finely tune the motion from open hand to gun shape, but it's more linear with pouting. However, I'm not sure where or how the "thinking" part would play into this, e.g. physical muscle movements vs. intention of muscle movements.

And there might just be too much noise and involuntary mouth movement to make it practical (e.g. breathing, swallowing), plus the fact you'd look like a psychopath mumbling to yourself.


Thinking about moving without actually doing so breaks the proprioceptive feedback loop that is important in precision motor control, so I wonder if using any device working on this principle will require some learning on the part of the operator as well as the electronics. This may be more of an issue for me than for most people: I find a mouse to be a much better control than a touchpad, which I think is because moving an actual object with a small but perceptible heft and drag gives me feedback.


What's the difference to Thalmic Labs' Myo?


I have a Myo, and it's the device which taught me that what I'm actually interested in is interface technology that reduces my movement cost, and the Myo -- while being a pretty cool device and also one of the few that actually works within 10% of what's advertised -- unfortunately increases my movement cost. It's also got a pretty narrow range of actions. More than a few, but not enough to make the increase in movement cost worthwhile, IMO.


I had a job writing software for piloting a drone with the Myo. The Myo recognizes a small set of palm and finger gestures, not continuous movements, so, for example, you couldn't record yourself typing on the keyboard. It does stream arm movement and rotation, though. No mind reading either, which is the gist of the Ctrl-labs armband.


According to the article, better machine learning leading to more consistency.


The potential for accessibility seems massive here!


This could be neat if combined with haptic gloves. VRChat or something of that class will be the VR killer app with cheap haptic feedback.


The comment about Minesweeper is a particularly smart observation that is true of AR as well.


I wonder if their error function is to sense the user's frustration when the model performs badly.

I have a feeling that actual human neural activity does not generalize well, but that ML models need to be good at adapting to individuals.


How do they close the feedback loop?

In all of these systems I've seen so far, there is a ton of calibration needed. The user should be able to do that.

Edit: Say it's a two-way street. The system takes input and renders it right back as a shock, or vibration, something. There then is an exchange: the system presents its message, the user presents theirs. Simple things, like shock, pressure, or vibration on and off. Then levels. Then modes.

Together, that results in "thought as action", much like we get when we do things with our bodies. Once there, mapping that to things like cursors could be standardized, IMHO.
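A toy version of that exchange, with all device I/O simulated (the intent decoder, the vibration patterns, and the confirmation gesture are hypothetical placeholders): the system echoes each recognized intent back to the user as a distinct pulse pattern, and the user confirms or corrects it.

    import random
    import time

    INTENTS = ["select", "scroll", "back"]
    PULSES = {"select": 1, "scroll": 2, "back": 3}  # distinct "levels" per intent

    def vibrate(pulses):
        """Stand-in for a haptic actuator: N short pulses."""
        for _ in range(pulses):
            print("bzz", end=" ")
            time.sleep(0.1)
        print()

    def read_intent():
        """Stand-in for the armband's decoder; a real one would classify EMG."""
        return random.choice(INTENTS)

    def user_confirms(echoed, intended):
        """Stand-in for a deliberate 'yes' gesture from the user."""
        return echoed == intended

    def calibration_round(intended, corrections):
        guess = read_intent()
        vibrate(PULSES[guess])                  # system presents its message
        if not user_confirms(guess, intended):  # user presents theirs
            corrections[guess] = intended       # feed the correction back into the mapping

    if __name__ == "__main__":
        corrections = {}
        for intended in INTENTS * 3:
            calibration_round(intended, corrections)
        print("corrections to apply:", corrections)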


This can work.

Moving cursors with your hand means expending calories.

Not using calories means less effort for the same work. That’s a gain.


Leaves you with more calories to spend at the gym where we pay money to burn unwanted calories!


People seem to treat gyms like carbon credits; you can do the wrong thing most of the time and then pay to offset it later.


It can be more efficient this way.


"Saving calories" is hardly the problem for most of us :)


I'm pretty sure mouse and keyboard movements are a trivial amount of calories, compared to walking down the street.


Your brain uses a whole lot of calories.


Or just install vim :-)


So mind control will be the replacement for vim movement and other stuff with a steep learning curve?


I dunno. There's some real lag you can see in the video, and, worryingly, the cart says "dogfooding cart." Typically dogfood is labeled "dogfood" for a reason--it's not really fun or stable or effective to use/interact with (yet?).


> Typically dogfood is labeled "dogfood" for a reason--it's not really fun or stable or effective to use/interact with (yet?).

“Dogfooding” is also a term for using your own products like you expect customers / clients to use them. In that usage it isn’t a criticism of the product’s quality.


I suppose it is more ergonomic


I believe that their problem will be finding the small group of users that derive enormous value from it. When you really dig into all the applications they've suggested, they don't quite make sense.


I'm waiting for EEGs to get vim keybindings


Why is it so important that the office overlooks Herald Square? Is that just poor editing, or am I missing a deeper meaning?


It's an expensive location for an office. It doesn't much matter for the story, and regardless the fact may be lost on people who don't know NYC.


Poor editing.


My brainwaves are my own, no thanks, Microsoft.


Next up, advertising based on your thoughts...


It would be nice to get some tangible benefits from the mind scanning satellites we're already paying for.



