Where they'd keep an image on the screen saccade invariant (i.e., compensate for any saccades that were made).

Slowly moving a letter/character/word/whatever to the edges of the participant's field of view and asking when the character was no longer legible.
It's easy enough to get quality-control clamped for a 600 DPI display if you only have to make them a square inch in size; and your memory bandwidth and parallel processing needs go way down if the outside rings can actually be treated as a small screen, rather than as an extremely-high-resolution screen displaying a (monotonous and blurry) image.

Of course, the big obvious problem with such a display is that the eye moves, and moves faster than you could possibly move around the display.

It was surprising how good people are at detecting changes in a blurry picture that's way out in their periphery.

My prof used to joke, whenever he'd discuss fovea and periphery, that if we had a fovea the size of our full field of view, we'd need a brain the size of an elephant to process all that information.

No reason you couldn't use it to deflect a continuous parallel matrix of rays by a constant amount, rather than one continuously-shifting beam.
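The bandwidth savings from concentric lower-DPI rings can be put in rough numbers. Here's a back-of-the-envelope sketch in Python; the 600 DPI fovea figure comes from the discussion above, but the zone sizes and per-ring DPIs are made-up assumptions purely for illustration:

```python
# Pixel budget for a foveated display: a small high-DPI fovea patch
# surrounded by concentric square rings whose DPI drops with each ring,
# compared to driving the whole field at fovea DPI.
# Zone sizes and DPI falloff below are illustrative assumptions.

def ring_pixels(inner_halfwidth_in, outer_halfwidth_in, dpi):
    """Pixels needed to tile a square ring between two half-widths (inches)."""
    outer = (2 * outer_halfwidth_in * dpi) ** 2
    inner = (2 * inner_halfwidth_in * dpi) ** 2
    return int(outer - inner)

FOVEA_DPI = 600
# (inner half-width, outer half-width, dpi) per zone, coarser toward the edge
zones = [(0.0, 0.5, 600), (0.5, 1.0, 300), (1.0, 2.0, 150), (2.0, 4.0, 75)]

foveated = sum(ring_pixels(a, b, dpi) for a, b, dpi in zones)
uniform = int((2 * 4.0 * FOVEA_DPI) ** 2)  # whole 8-inch field at 600 DPI

print(f"foveated: {foveated:,} px")  # ~1.2 million
print(f"uniform:  {uniform:,} px")   # ~23 million
print(f"savings:  {uniform / foveated:.1f}x")
```

With these (arbitrary) zones, the foveated layout needs roughly 1/20th the pixels of a uniform 600 DPI panel, which is the "memory bandwidth and parallel processing needs go way down" point in concrete terms.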
Heck, you could use an array of coherent emitters (laser diodes) rather than point-source diodes, and use phosphor on the intermediary panel like the good old days.

It's interesting how we're very sensitive to sudden changes (thus movement) in our periphery, but are so bad at classifying/identifying static imagery.

I remember reading about right-eye and left-eye dominance.

It was spectacularly obvious what was going on if you were watching the screen while someone else had the helmet on. I _so_ wanted one of those Reality Engines back then; I suspect my phone now has more graphics processing power, though (I'm pretty sure my Galaxy S6 in a Gear VR does a significantly better job than that multi-million-dollar military project ~20 years ago...).

It's always seemed to me that we could get much higher-quality VR if we could manage to set up a foveated display: a high-DPI display embedded in concentric rings of progressively-lower-DPI displays.

My point: if you were able to detect saccades of the eyes and relatively accurately calculate their position, you could have the region of the image that's unattended be colourful noise; the theory says that your visitors would be none the wiser (of course, this would break down with multiple people looking at the same image).

Even cooler: procedurally re-generate parts of the image that are unattended, so that you're looking at an ever-shifting image but wouldn't quite be able to pin down what's happening. v=ubNF9QNEQLA

This is also the principle behind foveated rendering (https://en.wikipedia.org/wiki/Foveated_imaging), where you only render the graphics in high detail where the viewer is looking and use a low-res image for the rest, with the goal of saving computing power. Our peripheral vision is just that bad (at least for static imagery; movement detection is actually pretty good).
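The foveated-rendering idea boils down to: full detail under the gaze point, a cheap downsampled stand-in everywhere else. A minimal NumPy sketch of the compositing step, assuming a grayscale frame and a circular foveal region (my own toy version; real engines save work by never rendering the periphery at full resolution in the first place, rather than degrading a finished frame):

```python
# Toy foveated-rendering sketch: keep full detail near the gaze point,
# substitute a block-averaged low-res version outside it.
import numpy as np

def foveate(frame, gaze, radius, coarse_factor=8):
    """Return `frame` at full res within `radius` px of `gaze` (x, y),
    block-averaged over coarse_factor x coarse_factor tiles elsewhere.
    Assumes a 2D grayscale frame."""
    h, w = frame.shape
    f = coarse_factor
    # Crude low-res proxy: average f x f blocks, then blow them back up.
    coarse = frame[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(coarse, f, axis=0), f, axis=1)
    low = frame.astype(float).copy()
    low[:coarse.shape[0], :coarse.shape[1]] = coarse
    # Eccentricity mask: True inside the foveal circle, False outside.
    ys, xs = np.mgrid[0:h, 0:w]
    foveal = (xs - gaze[0]) ** 2 + (ys - gaze[1]) ** 2 <= radius ** 2
    return np.where(foveal, frame, low)

frame = np.random.rand(64, 64)
out = foveate(frame, gaze=(32, 32), radius=10)
```

Swap the block-averaged `low` image for procedural noise and you get the "colourful noise in the unattended region" experiment from the comment above; the gaze-tracking and saccade-detection part is the genuinely hard bit.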