VR/AR is still in its "our users are completely untrained" skeuomorphic UI phase. There's been a tunnel-vision focus on gaming, Virtual Reality, and immersion.
The design space for HMD and body tracking UIs is much larger than skeuomorphic VR. I've liked attaching eyeballs to my hand, to explore graphs without gymnastics or manipulation. Using aphysical kind-of-planar workspaces, to permit subpixel resolution on integrated graphics. Remapping motion, to permit fast ergonomic input from resting hands.
Non-skeuomorphic UIs aren't getting much attention yet. And even awareness of that is not yet widespread. But there's a lot of fun incoming.
Attaching eyeballs to your hands sounds really fun actually, but we are already walking a pretty thin line just trying to not make people sick. Not sure experiments like this would be comfortable for most people, which is why I think everyone usually sticks to skeuomorphic UIs.
But I agree that, VR-design-wise, we are very primitive right now.
> walking a pretty thin line just trying to not make people sick
Yes and no. There seems to be a lot of confusion about that. About which design constraints come from which goals. Yes, for the common style of immersive gaming. But when I'm working, I'm usually down around 30 fps with high variance, running on integrated graphics.
What constitutes a "horrible" "immersion-breaking" "visual artifact"? And how much will you pay to avoid it?
I'm looking at a laptop's desktop. It's obviously a display panel. And not my wooden desk. And that's fine. When looking at my HMD "desktop", it's also obviously a panel. And not my office. And that's also fine. Using emacs isn't fighting slimy zombies. Usually.
Paper novels can be immersive. Even if you sometimes notice turning a page.
The design goal "avoid reminding the user they're wearing an HMD" is a very challenging one. It prunes the design space, discouraging many things. Camera passthrough AR. Lag. Different objects having different lag. Visible boundaries in display space. And so on.
For example. If you mostly care about text, then you want to see unblurred pixels, which means only the center half of the lens-blurred display is useful. Passthrough AR can provide balance even at low fps. So it can be shown beyond the center. Which creates a visible boundary in display space. And that's fine. And if, when you turn your head, some graphical elements slowly chase others across the screen, that's fine too. So now you can run on old Intel integrated graphics. It's a different point in design space from fighting zombies in a warehouse. And it has different design constraints.
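To make that concrete, a minimal sketch of the compositing decision I'm describing (not anyone's actual renderer; the 25-degree radius and all the names are assumptions): render the workspace only inside the sharp central zone, and fill everything beyond it with passthrough, accepting the boundary.

    # Hypothetical compositing rule; the 25-degree radius is an assumption.
    SHARP_RADIUS_DEG = 25.0  # angular radius where lens blur is still low enough for text

    def composite_pixel(theta_deg, workspace_sample, passthrough_sample):
        # theta_deg: this pixel's angular distance from the lens optical center
        if theta_deg <= SHARP_RADIUS_DEG:
            # Center: full-quality rendered text workspace, worth the GPU cost.
            return workspace_sample
        # Periphery: cheap (possibly low-fps) camera passthrough keeps balance
        # and orientation cues, at the price of a visible boundary.
        return passthrough_sample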
So when people say "VR requires X", that's worth translating as "SteamVR games require X". And it can be fun to consider other things you can do with a tracked head mounted display.
>The design goal "avoid reminding the user they're wearing an HMD"
There is a difference between design decisions based on "striving to achieve presence" and design decisions based on "not make people vomit". And high fps with good tracking is closer to the "not make people vomit" group.
I agree that it's okay to break presence to do something (I would trade presence for fun, for example) and let the user push the limit if they can handle it, but you have to understand that VR is already a niche market, so devs are usually just playing it safe.
> There is a difference between design decisions based on "striving to achieve presence" and design decisions based on "not make people vomit". And high fps with good tracking is closer to the "not make people vomit" group.
A goal of avoiding visible lag requires a mechanism of predictive tracking, which has an undesired side effect of judder, which can be far more sickening than the original lag, which motivates a mitigation of higher fps and lower variance, which requires a stronger GPU (sketched below).
A goal of wide fov (absence of tunnel-vision "comfort mode") increases user sensitivity, especially during rapid head and spatial motion, increasing the needed performance and the risk of user discomfort.
A goal of avoiding visual artifacts discourages introducing visual artifacts that would increase user comfort. A goal of visual seamlessness discourages segmenting the display to permit separately optimizing for task and comfort, compromising both and motivating mitigations.
With current hardware, "presence" and comfort are conflicting objectives. Optimizing for presence is why we're pushing the envelope on comfort. But there's been a lack of community awareness of just what tradeoffs are being made, and why, and of the alternatives and opportunities. In part because of the niche economics you mentioned - if SteamVR doesn't support it, why think about it? But happily, improving hardware will make the issue mostly go away.
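To make that first chain concrete, a toy sketch (made-up numbers, not any SDK's predictor): the tracker extrapolates pose forward by the pipeline latency, and whenever head motion changes between prediction and display, the error shows up as judder.

    # Toy 1-D model of predictive tracking; every number here is made up.
    LATENCY_S = 0.020  # assumed motion-to-photon latency: 20 ms

    def predicted_yaw(yaw_deg, yaw_velocity_dps):
        # Extrapolate head yaw to the moment the frame actually reaches the eye.
        return yaw_deg + yaw_velocity_dps * LATENCY_S

    # Head turning steadily at 200 deg/s: the prediction lands on the true pose.
    shown = predicted_yaw(10.0, 200.0)   # 14.0 degrees, matches reality
    # Head stops during the frame: reality stays near 10 degrees, so the
    # ~4 degree overshoot gets yanked back next frame; that's the judder.
    overshoot = shown - 10.0
    # Higher fps and lower frame-time variance shrink LATENCY_S and hence the
    # worst-case overshoot, which is where the stronger-GPU demand comes from.
    print(shown, overshoot)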
I've played a number of games that replace your hand with the object you're using. Or hell, most of the time you see the Vive controllers and not hands in place of your own.
But by "attaching eyeballs to your hand" i though he meant decoupling the camera from HMD tracking and attaching it to the controller tracking. Like you move your hand and the view changes. Something similar to The Pale Man from Pan's Labyrinth [1].
Research shows [2,3] that the brain adapts to the unusual perspective quite fast. So technically, after a while, the brain should accommodate and it would start feeling "natural".
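If anyone wants to try it, it's roughly this little (the pose-tuple API here is hypothetical, not any particular SDK): build the view matrix from the controller's tracked pose instead of the HMD's.

    import numpy as np

    def view_matrix(position, rotation):
        # World-to-view transform: the inverse of the camera's world pose.
        # position: (3,) world position; rotation: (3, 3) world rotation matrix.
        view = np.eye(4)
        view[:3, :3] = rotation.T
        view[:3, 3] = -rotation.T @ position
        return view

    def eyeballs_on_hand_view(head_pose, controller_pose, attach_to_hand=True):
        # head_pose / controller_pose: hypothetical (position, rotation) tuples
        # delivered each frame by whatever tracking API you're using.
        position, rotation = controller_pose if attach_to_hand else head_pose
        return view_matrix(position, rotation)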
From a couple decades of gaming experience, you don't need much more than eyeballs, fingers and a little bit of forearm.
I am convinced people who talk about immersion in VR have never been addicted to any game. It doesn't take that much to lose yourself inside of one. After the first few minutes both VR and non-VR games are the same once you're fluent with the controls.
> After the first few minutes both VR and non-VR games are the same once you're fluent with the controls.
This couldn't be farther from the truth.
Take a game as simple as Zombie Training Simulator. Holding your hands up and firing virtual guns is a completely different experience than pointing a mouse and clicking. Physically bending over to pick up a grenade, then throwing it, is far more immersive than pressing the number 5 and holding a mouse button down to prime it, then releasing it to throw.
Then there's Ultrawings. Having head tracking, being able to look around by just pivoting your head, makes the flying experience feel so much better, even if the graphics are dated by 10 years in terms of texture and model detail.
When I got my Vive and was playing around in The Lab, one of the experiences put me on the side of a mountain. I stepped off a cliff edge and could feel my heart rate increase slightly as my brain expected me to fall to my death. The immersion is real.
And I have plenty of gaming experience. I've been gaming since I got an NES for Christmas when I was 5 in 1987.
> When I got my Vive and was playing around in The Lab, one of the experiences put me on the side of a mountain. I stepped off a cliff edge and could feel my heart rate increase slightly as my brain expected me to fall to my death. The immersion is real.
Not disputing anything you're saying but I actually have this same experience playing games where you can fall and the player accelerates (particularly if the camera is first person or right behind the player). Just looking at the screen when I jump off a cliff in something like WoW makes my stomach feel like 20lbs of iron while I'm falling. I've kinda learned to enjoy it at this point.
I have experienced both, and they are rather different. On the computer, I just get this sinking feeling in my stomach when falling, but there's no hesitation before jumping. In VR, my brain did not want me to step near the ledge at all. I had to probe the floor with my foot, make sure there actually was solid floor there in reality, and really will myself into moving there.
In the end, I made a tiny, tiny step and leaned forward enough that the game registered me as having jumped. The two experiences aren't even comparable. In one game where I was in a hot air balloon, I even freaked out and frantically searched for the button to exit the game.
Yeah, playing The Climb sets off my fear of heights pretty well. I've climbed a lot in the real world, and I think the lack of input from my feet actually heightens the effect, since your feet are pretty integral to having the strength to hold yourself on.
I get the same thing, but it only kicks in once the fall is long enough that you know that your character won't survive.
What's interesting is that after far too many hours of WoW, that fall distance is so ingrained that I get that feeling at the same point in other games, even if they have a longer 'safe' fall distance.
In my experience every little bit of verisimilitude makes a huge difference. Little things like Oculus' hands, which line up perfectly with your proprioception and have the fingers move depending on your finger position, make a huge difference. Just a few of these little things and you get new people in a horror game completely forgetting that they even have the option of taking the headset off.
I think there will be a lot of difficulty with wording and meaning here, because we don't have a consistent terminology for the different kinds and levels of immersion or focus.
For example, the distinction between being deeply involved in and distracted by a game (which does not require special tools) versus the extent to which the game will trigger automatic bodily reactions.
In other words, someone deeply "immersed" in a chess match isn't quite the same as someone "immersed" in a VR game so that they fall over at a jump-scare.
I don't want to pick on this person, but imagine if someone said "from a couple of decades of watching movies, I know that all a movie needs is.." Experiencing a medium is very different than developing a medium. And there are a lot of rules that hold true for your subjective experience of a medium that may not be universal rules.
It's even worse because they are using one medium to infer the needs of another medium. It's more like saying "From a couple of decades of reading science fiction books, I know all a science fiction movie needs".
Reminds me of the Jurassic Park PC game where you could look down and check out your character's gender.
Given that you had to enter pin codes by setting the hand to pointing mode and moving around with the mouse, I wonder how it would translate to a VR experience today.
Trespasser
Yeah, there was a bit with a keypad for getting into the InGen employee village. I spent ages trying to use the pointy finger and the mouse to press the keys on the keypad... I couldn't do it, but there was a break a bit further along the fence/wall. They wrote the code on the wall (upside down) next to the keypad, but I guess they realised it was too cumbersome, so they made a break in the wall further along.
I said they are "the same" only in terms of how your brain can immerse/rewire itself to understand those new controls as if they were artificial limbs. Even with the same level of fluency, some controls are definitely better than others depending on the domain.
So VR is definitely not pointless. Motion tracking for your head/arms/feet unlocks many applications that were previously hard or impossible (e.g., 3D sculpting using a mouse/tablet vs a motion controller).
But it's not always better than mouse/keyboard input for, let's say, typing or shooting a target. Having more irrelevant degrees of freedom makes it harder to master your input controllers, even if you already have a lifetime of practice with them in the real world (e.g., I am okay with doing parkour levels of movement with WASD/Space but won't ever try it with my real limbs, even though in theory I know how to climb over a real-world fence).
Sorry to break it to you, but many games (like first-person shooters) currently have no depth perception and would be greatly enhanced by it. Literally every 3D game you play today can and will be enhanced in VR for this reason.
I don’t think this is true. In a gaming context VR provides more immediacy of immersion, much greater presence and wider accessibility. These factors also make VR a compelling case outside of gaming for all sorts of tasks.
In deep trance states, the body progressively fades away and you perceive what would ordinarily feel like body sensations as other things, as the mind loses its frame of reference to cause and effect. All experience thus gets created by mind.
I imagine with practice anything is possible regarding VR. The mind can make anything believable if you just relax and let it happen.
> I imagine with practice anything is possible regarding VR. The mind can make anything believable if you just relax and let it happen.
That's generally good advice for journeying of any kind, whether we're talking about VR, psychedelics, drumming, meditation, lucid dreaming, or reading a really good book.
I am learning VR development in my free time just to explore this. I want to make a VR guided meditation program, to help beginning meditators reach deeper states than they would normally be able to.
I think it's more important for the body parts that are there to move correctly than to have them visible and not match. I remember trying a Vive game that had arms, but sometimes they would contort into impossible poses and it was extremely jarring.
Having spent many hours in both the Vive and the Rift, while I prefer the Vive in general, the visible hands you get with the Touch controllers on the Rift are just fantastic.
> I think it's more important for the body parts that are there to move correctly than to have them visible and not match.
A laser-hatchet hand and a disembodied floating mecha-claw that respond snappily are ultimately more real than a "realistic" arm that moves like a cartoon.
BTW, I recall an experiment that found that to get someone to associate a virtual hand with their real one, the researchers had to prick the real hand at the same time as the person saw the virtual hand getting pricked.
By the same token, if you want someone to associate a virtual elbow with their real elbow, you'd better not have it doing stuff their real elbow isn't doing. Better off just not having the elbow.
One interesting aspect is the difference between displaying yourself and others in VR.
For yourself, you do not want to show body parts that cannot be tracked correctly, because of the jarring effect of the mismatch, as you say. But for showing other people you can get away with more; for example, you can show full arms even if the elbows are not exactly in the right place (as long as they are not unphysical, of course).
> I remember trying a Vive game that had arms, but sometimes they would contort into impossible poses and it was extremely jarring [.....] the visible hands you get with the Touch controllers on the Rift are just fantastic.
Neither of the things mentioned here are specific to Vive or Rift.
A Rift game can give you arms that contort into impossible poses, just like a Vive game can give you visible hands. It's just software and whatever the author of whatever game you're playing decided to do.
Did you play a game that worked on both Vive and Rift, and the versions were different or something?
You're correct about the arms, but I believe the Rift has additional sensors on its Touch controllers [1], allowing it to record your thumb position and update the hand model accordingly.
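Something like this, I'd guess (the sensor names and curl values are my assumptions, not the actual SDK):

    # Guessed sketch, not the Oculus SDK: capacitive "is this digit resting here"
    # flags drive canned curl amounts (0 = straight, 1 = fully curled) on the hand model.
    def hand_pose_from_touch(thumb_on_stick, finger_on_trigger, grip_held):
        return {
            "thumb": 0.6 if thumb_on_stick else 0.0,          # resting vs. thumbs-up
            "index": 0.8 if finger_on_trigger else 0.0,       # on trigger vs. pointing
            "middle_ring_pinky": 1.0 if grip_held else 0.2,   # fist vs. relaxed
        }

    # Lifting thumb and index off the controller reads as a point / thumbs-up.
    print(hand_pose_from_touch(False, False, True))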
If you try both the Vive and the Rift, you'll see that the default for the Rift is fake hands that mimic yours. It's cool, but I personally couldn't be fooled. I prefer the Vive controllers.
First thing I noticed when I played a demo of Dark Secret: it shows your character in a mirror as a way to show you what your character looks like (there's the mask mirror from the Oculus Dreamdeck as well). Of course I see the character's head tilt with my head movement, but most people, when looking in a mirror, make an expression (smile, etc.), and the fact that it didn't show my actual facial expression reflected in the mirror was off-putting.
Second experience was Job Simulator. I picked up something, dropped it on the floor. Tried to kick it away with my feet, but of course there are no foot sensors, so that failed. Also tried to hip-close drawers and file cabinets, but of course there's no hip sensor.
If I were in a VR meetup, I'd want to be able to express things with my hands, meaning I need all 5 fingers so I can point, give the middle finger, make the OK gesture, pick up a glass and stick my little finger out, make the shaka sign, etc.
You really only need hands so you can see what you’re interacting with and how you’re interacting with them.
I think if we had some other way of controlling your avatar/viewpoint, such as some kind of direct mind interface, you could put people into nearly any kind of body at all and they would adapt.
I guess that all depends on what you look forward to doing in that virtual reality, and how many (and what kind of) stimulating sensor options are available.