Shared reality: Exploring VR-like environments with your smartphone
Virtual reality (VR) devices offer great potential for immersive communication and interactivity. But so far, few of us have been willing to put on a clunky headset to make those possibilities come alive.
What if we could have a similar feeling of being in the room with a friend or colleague halfway around the world by using an everyday smartphone or tablet?

ATLAS PhD researcher Rishi Vanukuru builds tools to do just that.
“We're always going to be separated from family, from friends, from colleagues, and I think we can do better than just waiting for technology to advance in the next five or 10 years,” Vanukuru explained. “We can do more with the devices that we all have, that we're all familiar with, and use them to improve the experience of remote interaction today.”
He also aims to address access to these tools in his work, noting, “There's a divide between video calls that are widely available but not spatial, and augmented and virtual reality headsets that offer spatial interaction but are not widely accessible. My work bridges that gap using everyday technology to allow for more spatial interactions.”
Vanukuru is a member of the ACME Lab, directed by Professor Ellen Do. It is a space particularly conducive to this kind of research. He related that in the ACME Lab, “a key through-line through all of our work is building tools to help people be creative. The way that I interpret that is that for me, collaboration is one of the best amplifiers for creativity. What can we do to make better tools to help people be collaborative and therefore be more creative?”

A sense of space
The three-dimensionality of a room, the tactile nature of objects in that space, the ability to interact with those objects—these features create the sense of immersion we feel in well-executed virtual environments. Typically, though, you need a VR headset and additional controllers to experience them.
But our smart devices also have lots of sensors—for tracking motion, location, light, depth, biometrics and more. Vanukuru explores ways those sensors can be used to bring the immersive qualities of VR to everyday video calls.
Vanukuru stated, “My hunch has been that we can do a lot more with devices that we all carry around, like phones and tablets. My work [aims] to maximize the potential of these devices for spatial collaboration in everyday contexts.”
For example, imagine your car has broken down on the side of the road. You might call an expert to talk you through a possible solution, but if you don’t know much about what’s under the hood, you will likely be hard-pressed to make the fix. With Vanukuru’s technology, dubbed DualStream, the caller can transmit a live 3D rendering of the car to an expert, who can then point to specific parts directly, improving the sense of shared presence over a standard video call.

“His work shows that we do not need expensive, specialized hardware to experience meaningful, embodied collaboration. Instead, we can use the sensors already in our pockets to transform how we share and interact within remote spaces,” said Do.
The importance of partnership
This research has been supported in part by an ongoing partnership with Ericsson Research, which works at the forefront of information and communications technology.
“A lot of this work has been helped by this active collaboration that we've had with Ericsson Research, both in Silicon Valley in California and with researchers in Sweden,” Vanukuru said. “For three years now we've had biweekly meetings with them where they've been giving inputs on the work and how it might progress.”
Professor Do elaborated on the importance of such relationships, saying, “Partnering with industry leaders like Ericsson Research is vital because it enables our academic prototypes, such as DualStream or Shared Reality, to be tested against the technical realities of global networking and communication standards. This collaboration has not only resulted in high-impact research on network-adaptive AR, but also directly contributed to international standards, ensuring our innovations have a clear pathway to real-world deployment.”
Vanukuru concluded by noting, “Being at ATLAS has given me the space to question dominant narratives around what technology can or should be. It has also given me the freedom to use design as a means to explore alternate possibilities, and see what we can do with technologies that are already familiar to us and how we can use them to do more in the present.”
Authors: Rishi Vanukuru, Krithik Ranjan, Ada Yi Zhao, David Lindero, Gunilla H. Berndtsson, Gregoire Phillips, Amy Banić, Mark D. Gross, Ellen Yi-Luen Do
Abstract: Mobile video calls are widely used to share information about real-world objects and environments with remote collaborators. While these calls provide valuable visual context in real time, the experience of interacting with people and moving around a space is significantly reduced when compared to co-located conversations. Recent work has demonstrated the potential of Mobile Augmented Reality (AR) applications to enable more spatial forms of collaboration across distance. To better understand the dynamics of mobile AR collaboration and how this medium compares against the status quo, we conducted a comparative structured observation study to analyze people's perception of space and interaction with remote collaborators across mobile video calls and AR-based calls. Fourteen pairs of participants completed a spatial collaboration task using each medium. Through a mixed-methods analysis of session videos, transcripts, motion logs, post-task exercises, and interviews, we highlight how the choice of medium influences the roles and responsibilities that collaborators take on and the construction of a shared language for coordination. We discuss the importance of spatial reasoning with one's body, how video calls help participants "be on the same page" more directly, and how AR calls enable both onsite and remote collaborators to engage with the space and each other in ways that resemble in-person interaction. Our study offers a nuanced view of the benefits and limitations of both mediums, and we conclude with a discussion of design implications for future systems that integrate mobile video and AR to better support spatial collaboration in its many forms.