"I'd argue that today's VR hasn't beaten century-old stereoscopy just yet."
Somehow, what struck me the most during my trip to Berlin sat in a quiet corner of the Deutsches Technikmuseum. Many of the museum’s other interactive exhibits were grand and detailed, but this one was just a lever and two holes in the wall for your eyes. Anyway, I kneeled and peered into the thing. On the other side of that wall, a Victorian woman was staring into a stereoscope in her room, on a card titled “The Stereograph as an Educator”. It was a stereoscopy exhibit, and pulling the lever drew a new card, carrying a new sight.

"The Stereograph as an Educator"
Stereoscopy was what people invented after they figured out what our second eye was for. Each card carried two film photographs of the same scene, taken from slightly different angles, and lenses forced the eyes to fuse them into a single scene with the illusion of depth. It wasn’t truly 3D, though, because simply moving your head broke the illusion; today’s VR is certainly 3D by contrast. Still, fooling the eyes with stills was enough to make stereoscopy spectacularly popular a century ago! I saw the appeal first-hand.
I don’t know how long I kneeled there, and I don’t know how many times I pulled that lever, but I know that one scene compelled me to take a picture through the lenses. And over two whole weeks of travel, I only brought myself to do that twenty times! I saw “the most spectacular pageant of modern times, the Durbar, Delhi, India”: brilliantly decorated elephants carrying little carriages, yet those carriages must have held people. The scene was evidently captured from a distant hill, yet the crowd following the procession still covered the grand steps and the rolling plains. And despite the distance, I could’ve picked out the few faces that were smiling for the camera, or studied their headwear and clothes, all either ornate, military, or ordinary.

My photo of "The most spectacular pageant of modern times, the Durbar, Delhi, India" is somehow the best-resolution copy on the Internet—as far as I know right now...
I think what took hold of me at the time was the flood of detail at all the right depths. I felt my brain buzzing over being suddenly transported to a new place. For a split second, I even realized that I was filling in the noise of the crowds, the whistling of the wind, and that wind brushing against my side, as if I were on that hill in Delhi in 1903. For a moment, bubbling within me was an uncanny nostalgia for a time I never lived. It was a deeper experience than pictures figuratively jumping out of a page. It was a deeper experience than any VR I have tried to date. Somehow.
The crux of this gap in visceral experience, in my opinion, is just resolution. In 2010, Apple declared that they had achieved “retina” resolution on the iPhone 4. Specifically, they made the pixels so small that, at a normal viewing distance, you couldn’t make out any single one of them anymore. Not only was it a brilliant marketing term, but it was also a philosophical point realized by technology. More generally, I’d say that “retina” resolution is the point where the digital space, in which computers work, becomes so finely defined (in terms of pixels, refresh rate, etc.) that people can no longer distinguish it from real space. So, as the ways we interact with computers approach “retina” resolution, the barrier between our real world and the digital world continues to blur.
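As a rough sanity check of Apple’s claim, here’s a back-of-envelope sketch in Python. The 326 pixels-per-inch density, the 11-inch viewing distance, and the roughly 1-arcminute acuity limit are ballpark figures I’m assuming for illustration, not numbers pulled from Apple:

```python
import math

# Back-of-envelope check of the "retina" claim for the iPhone 4.
# Assumed figures: ~326 pixels per inch, ~11 inches viewing distance,
# and ~1 arcminute as the resolving limit of 20/20 vision.
PPI = 326                 # iPhone 4 pixel density (pixels per inch)
VIEWING_DISTANCE_IN = 11  # assumed typical phone viewing distance (inches)
ACUITY_ARCMIN = 1.0       # ~1 arcminute: rough limit of 20/20 vision

pixel_pitch_in = 1 / PPI                                 # physical size of one pixel
pixel_angle_rad = pixel_pitch_in / VIEWING_DISTANCE_IN   # small-angle approximation
pixel_angle_arcmin = math.degrees(pixel_angle_rad) * 60

print(f"One pixel subtends ~{pixel_angle_arcmin:.2f} arcminutes")
if pixel_angle_arcmin < ACUITY_ARCMIN:
    print("Below the ~1 arcminute limit: individual pixels blur together")
else:
    print("Above the limit: pixels are still individually visible")
```

With those assumptions, one pixel comes out to about 0.96 arcminutes, just under the limit, which is exactly the kind of margin the marketing leaned on.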
For example, computer audio has already hit this general kind of “retina”. At CD quality, 44,100 times per second (once every ~23 microseconds), a computer interprets 16 bits as one of 65,536 possible values and sends that value to a speaker. With this kind of precision, the sound produced is virtually indistinguishable from sound produced naturally.
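To make those numbers concrete, here’s a minimal sketch that writes one second of CD-quality audio with Python’s standard wave module. The 440 Hz tone and its amplitude are arbitrary choices of mine for the example:

```python
import math
import struct
import wave

SAMPLE_RATE = 44_100  # samples per second -> one sample every ~23 microseconds
AMPLITUDE = 16_000    # well within the signed 16-bit range (-32768..32767)
FREQ_HZ = 440         # an arbitrary test tone (concert A)

# One second of a sine wave, quantized to 16-bit integer samples.
samples = [
    int(AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)            # mono
    f.setsampwidth(2)            # 2 bytes = 16 bits per sample
    f.setframerate(SAMPLE_RATE)  # 44,100 samples per second
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Play the resulting file and it sounds like a pure tone, not a staircase of 44,100 discrete steps; that’s the “retina” point for hearing.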
However, modern consumer VR hasn’t reached this point yet. The old Google Cardboard, for example, blatantly missed it because it brought a phone screen very close to the eyes. Sure, how else would a small screen fill your field of view? But magnifying the screen like that pushed the resolution needed for “retina” far past what most phone screens offered at the time. When I later tried VR on an Acer Windows Mixed Reality headset, things had clearly come a long way, but it still fell short of the “retina” standard. On both, a “screen door” effect, caused by the unlit spacing between the pixels, was a literal barrier between me and the digital VR world.
From what I’ve read, the Oculus Quest 2 has done a better job since, but it hasn’t completely solved the issue either. On top of that, the Quest 2 offers a relatively small field of view and limited integrated GPU power for rendering detail. Although VR has made long strides toward a totally immersive experience, today’s headsets still don’t compare with stereoscopy on the visual front. Using today’s VR, I still can’t gaze at that expansive, detailed scene from that hill in Delhi in 1903.
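One way to see how far there is to go is to compare pixels per degree against the roughly 60 needed to put one pixel per arcminute. The per-eye resolutions and fields of view below are ballpark figures I’m assuming for illustration, not official specs:

```python
# Rough pixels-per-degree comparison against a ~60 PPD "retina" threshold
# (about one pixel per arcminute). All headset numbers are assumed ballparks.
RETINA_PPD = 60

headsets = {
    # name: (horizontal pixels per eye, approx. horizontal field of view in degrees)
    "Google Cardboard (1080p phone)": (960, 90),
    "Acer Windows Mixed Reality": (1440, 95),
    "Oculus Quest 2": (1832, 97),
}

for name, (pixels, fov_deg) in headsets.items():
    ppd = pixels / fov_deg
    print(f"{name}: ~{ppd:.0f} pixels/degree "
          f"({ppd / RETINA_PPD:.0%} of the retina threshold)")
```

Even with generous assumptions, the Quest 2 lands somewhere around a third of the way to “retina”, which matches what my eyes told me.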
The fact is, those stereograph cards were just photographs. Film photographs had as much resolution as their grain would allow, and that was plenty. In that exhibit, the cards sat close enough to fill a large part of my vision, and there was still detail and “resolution” to spare. No blockiness, no “screen doors”, no other digital artifacts kept me from immersing myself in the scene. Stereoscopy never needed to solve the “retina” crux or cross any gap, and that makes it both a relic and ahead of its time.
So I’m not interested in what VR offers today, and I’m not about to leap into the metaverse with an Oculus Quest 2 headset for work and play, as Zuckerberg would like. And yet, I couldn’t be more excited for what VR has yet to become.