
The really big, really difficult one will be enabling dynamic focal depth. The lack of focal planes makes it difficult to imagine VR providing a lifelike experience.

Some volumetric/panoptic displays are getting closer, along with eye tracking, but this seems like one of those things you cannot fix in software. Just adding higher pixel resolution, improving "physics", or creating a more beautiful environment will not solve this. So, 20 years is probably right.



Nvidia and many others are working on this problem right now and making a lot of headway. The devices are called light field displays. One of the biggest players in the space is Magic Leap, which has attracted quite a bit of funding ($1.4B last I checked).

https://research.nvidia.com/publication/near-eye-light-field... https://www.technologyreview.com/s/534971/magic-leap/


It'll be interesting to see what Magic Leap finally produces. Maybe it'll be an embarrassing bust, maybe it'll jump the whole field ahead five or ten years. I don't think I've ever seen so much money go to such a (public) unknown before.


I have talked to two developers at my company who have flown to Florida and taken Magic Leap's red pill. They were both floored by the experience!


It does seem to be a company like something out of fiction: all the skeptics who go see what's behind the curtain come back swearing it's amazing, but refusing to talk about it! I'm fascinated to see what they actually produce, and I'd be sweating a bit if I worked in VR tech at any other company.


Yeah, light field displays exist today and will only get better/cheaper. Also, accommodation is only one of many depth cues, alongside stereoscopy and parallax, and one I personally don't miss all that much. When everything else is perfect, sure, it needs to be tackled, but there are much bigger fish to fry at the moment.


But 'all' you have to do is track the pupil and push focus onto the object they're looking at. We're already rendering focal-length effects live in most games; there's no reason we couldn't shift the focal length according to what the viewer's pupils are pointed at.

Alternatively, they could use a modified approach, where the direction the head is pointing chooses the relative DOF: basically you render a fairly wide depth of field around whatever the head is pointed at.
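
A rough sketch of those two strategies, purely illustrative (the gaze-confidence threshold and the DOF widths are made-up numbers, and the focus distances are assumed to come from a gaze or head raycast like the one discussed further down the thread):

    from dataclasses import dataclass

    @dataclass
    class FocusState:
        distance_m: float   # where the renderer puts the focal plane
        dof_width_m: float  # how much depth stays acceptably sharp around it

    def choose_focus(eye_hit_dist, eye_confidence, head_hit_dist):
        """Pick a focal plane for the next frame.

        eye_hit_dist:   metres to whatever the gaze ray hit, or None if lost
        eye_confidence: 0..1 quality estimate reported by the eye tracker
        head_hit_dist:  metres to whatever the head-forward ray hit
        """
        if eye_hit_dist is not None and eye_confidence > 0.8:
            # Eye-tracked mode: focus tightly on the object being looked at.
            return FocusState(distance_m=eye_hit_dist, dof_width_m=0.5)
        # Fallback mode: the head direction chooses the focus, with a fairly
        # wide depth of field around whatever the head is pointed at.
        return FocusState(distance_m=head_hit_dist, dof_width_m=10.0)

    print(choose_focus(0.6, 0.95, 4.0))   # reading something up close
    print(choose_focus(None, 0.0, 50.0))  # tracker lost the pupil -> wide DOF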

Anyway, you can get a pretty lifelike experience with just some basic atmospheric perspective. Look at photography with a very wide depth of field [1], where everything is in focus simultaneously. You sorta just get used to it and accept it.

[1] http://cdn.shutterbug.com/images/0814picture03.jpg


Except you can't fake the feedback of muscles adjusting focus that the real world requires without different physical focal planes. You can mimic the focal effect, but your brain has the ability to recognize motor inputs to the eyes and act on this information (or lack thereof).

I think what we really need is a way to target specific focal distances via some sort of dynamic refraction. Otherwise we'll never get the proper motor feedback needed for immersion.


Light field displays seem to be the answer. They're also much more computationally heavy.


So, dynamic lenses on the VR display.


As someone who works on eye tracking for VR headsets, this is much easier said than done. Doing it well requires levels of accuracy, latency, and reliability that are not available in any eye tracker on the market today.


What about SMI, which is already tracking pupils at 250 Hz? https://www.youtube.com/watch?v=Qq09BTmjzRs That tech is still expensive and I don't know how accurate or reliable it is, but if it's good enough for foveated rendering, I would have expected it to be good enough for focus as well. Care to elaborate on why it wouldn't work, or is it just beyond your horizon of "today"?


It's not accurate or reliable enough (in addition to being very far from a consumer price point). Reliability in particular is difficult to assess and usually lacking in eye tracking systems. Also, just because the cameras are running at 250 Hz doesn't mean the latency is 1/250 of a second, or even close to that.
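
For a sense of why the frame rate alone doesn't give you the latency, here's a back-of-the-envelope budget with purely hypothetical stage timings (not measurements of SMI's or anyone else's device):

    # Hypothetical end-to-end budget for an eye-tracked focus update, in ms.
    # A 250 Hz camera only bounds the sampling interval (4 ms); every stage
    # downstream adds on top of that.
    stages_ms = {
        "camera exposure + readout": 4.0,         # one 250 Hz frame period
        "pupil detection / gaze estimation": 3.0,
        "transport to the render host": 1.0,
        "wait for the next render frame (90 Hz)": 11.1,
        "display scan-out": 11.1,
    }
    total = sum(stages_ms.values())
    print(f"worst-case motion-to-focus latency ~ {total:.0f} ms")  # ~30 ms, not 4 ms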


The only reason for the lack of availability is the size of the market. The tech is already here, and you are probably holding it in your hand right now. Avago will be more than happy to sell you a tiny combo chip packing a camera/DSP able to capture >10K frames/s, track at >5 m/s, and report position at over 1 kHz.


As someone working in the industry, would you care to guess how many years away we are from being able to do this at an acceptable level? Is this something for the next gen of hardware or is it much further away?


Maybe VR would work better if you just fed the same view to both eyes and ignored stereoscopy. Jaron Lanier used to do that with his original VR rig in the 1980s when one of the machines went down. He said nobody noticed.


I can assure you that the tech has advanced enough in the last three decades that you would immediately notice the lack of stereoscopic (parallax-based) depth perception.


The problem is your eyes can focus at different focal planes: on an object, past an object, etc. I'd imagine it would be pretty jarring if the software just blurred everything around an object that happened to be at the x, y coordinates where your pupils were pointing.


You project into the scene to discover the distance of the object, then set the aperture and/or focal length of your camera as appropriate.
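
Something along these lines; a toy sketch where spheres stand in for scene geometry and the focus-to-aperture mapping is invented:

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        """Distance along a unit-length ray to the first sphere hit, or None."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    def autofocus(gaze_origin, gaze_dir, spheres, far=100.0):
        """Project the gaze ray into the scene and derive camera focus settings."""
        hits = [d for center, radius in spheres
                if (d := ray_sphere_hit(gaze_origin, gaze_dir, center, radius))]
        focus = min(hits, default=far)        # nearest thing under the gaze
        aperture = 0.05 / max(focus, 0.1)     # invented mapping: nearer focus,
        return focus, aperture                # shallower depth of field

    scene = [((0, 0, 2), 0.5), ((0, 0, 30), 5.0)]   # a cup and a distant wall
    print(autofocus((0, 0, 0), (0, 0, 1), scene))   # -> focuses on the cup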


This doesn't help. You need to physically change focus in your eye. Your brain uses the feedback from the eye muscles to ascertain distance. If your eye muscles don't need to adjust focus, then they're reporting that everything is at the same distance. If this disagrees with the stereoscopy or the context then you will get nausea.


> "Your brain uses the feedback from the eye muscles to ascertain distance."

Source? I'm under the impression that focus is a brain-first, muscles-second process. If the object was already in focus, I don't think your eyes would mind.


It's called the vergence-accommodation conflict. There's quite a bit of literature on the problem in relation to VR. Here's one paper picked at random from Google. http://jov.arvojournals.org/article.aspx?articleid=2122611
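
The conflict is easy to put numbers on: the headset's optics fix the accommodation distance at the screen's focal plane, while vergence follows the rendered depth, so the two demands drift apart for anything off that plane. A toy calculation, assuming a typical IPD and a 1.5 m focal distance for the optics (both numbers are assumptions, not specs of any particular headset):

    import math

    IPD_M = 0.063          # assumed interpupillary distance
    SCREEN_FOCUS_M = 1.5   # assumed fixed focal distance of the HMD optics

    def demands(virtual_dist_m):
        """Vergence angle plus the vergence/accommodation mismatch in dioptres."""
        vergence_deg = math.degrees(2 * math.atan(IPD_M / (2 * virtual_dist_m)))
        accommodation_d = 1 / SCREEN_FOCUS_M   # eyes must focus on the screen...
        vergence_d = 1 / virtual_dist_m        # ...but converge at the virtual depth
        return vergence_deg, abs(vergence_d - accommodation_d)

    for d in (0.3, 1.5, 10.0):
        angle, conflict = demands(d)
        print(f"object at {d:4.1f} m: vergence {angle:4.1f} deg, mismatch {conflict:.2f} D")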


If that were true, people with only one eye would be nauseous all of the time.

As I understand it (which I may not), this paper (which purports to be the only paper to have tested this effect) says that they test vergence by displaying the images at various distances to replicate varying levels of vergence.

They find that the closer it is to a realistic vergence/accommodation match, the more comfortable the eyes are.

However, it should obviously be easier to tune a 3D image that is at a realistic distance than one that is not.

It seems to me they may very well just be showing that effect: that it is easier to adjust the illusion to render realistically at distance. This would make perfect sense.

This would then just argue for better tuning.


Light field displays?


What do you do when the pupil is ambiguously pointing at both an object 30 cm in front of your eyes and a mountain on the horizon?


Wide DOF


VR is much more difficult than one might expect. We are really just slightly more advanced than blue-and-red glasses at the moment.

But instead of trying to adjust for how our brain works, we could offer an enhanced experience, so that IRL feels "clunky" in comparison.


Isn't that what Magic Leap is up to?



