I helped the R&D team at the New York Times on several projects between 2019 and 2022. They’re a crew of hybrid designer/developers and strategists, exploring how emerging technologies can be applied in service of journalism.
One project involved helping them think about design patterns for Mixed Reality. How might the NYT respond if MR interfaces go mainstream in the near future? I spent a few months sketching and prototyping in Unity, on AR-equipped phones and headsets like the HoloLens and Oculus Quest, building up maps, taxonomies and patterns for how this stuff might work in the context of journalism.
Framing and mapping
I typically start exploratory projects with a little bit of mapping and orientation to help us set a rough direction. This is usually pretty fast and loose – the goal is to sketch out a broad range of territory to explore with prototypes.
After doing a little mapping, the next job is to populate the maps with lots of little prototypes, helping us learn more about how this stuff actually feels – what are the affordances of the tech? What does it do well? What does it suck at? What are the applications, and what are the implications?
Kitchen table MR
One of the primary use cases for news consumption is the morning breakfast routine – if we were to sit at the kitchen table, pour the first coffee of the day, and pop on some MR glasses, what types of interface and information design would be possible? Which bits would be better or worse than on a screen, or in print, and what might this mean for storytelling?
Responsive layout in MR
Should interfaces be able to float around, or lock to the surfaces around us? Do certain surfaces (walls, tables, worktops) lend themselves to certain types of information? When does 3D work best, and when is 2D superior? The best way to find out is to build a lot of tiny demos and try them all out.
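One recurring rule of thumb from those demos can be sketched as a tiny decision function. This is a hypothetical illustration, not code from the actual prototypes – the surface and content names are my own labels for the kinds of things we tried:

```typescript
// A minimal sketch (not the actual project code): a rule-of-thumb mapping
// from detected surface types to the kinds of news presentation that
// tended to suit them in small demos. All names here are hypothetical.

type Surface = "wall" | "table" | "floor" | "none";
type Presentation = "headline-panel" | "3d-model" | "ambient-ticker" | "floating-card";

// Pick a presentation given the surface the content will anchor to:
// flat, glanceable 2D for walls; 3D models for horizontal surfaces you
// can walk around; a floating card when no usable surface is found.
function pickPresentation(surface: Surface, prefers3D: boolean): Presentation {
  if (surface === "table" && prefers3D) return "3d-model";
  if (surface === "wall") return "headline-panel";
  if (surface === "floor") return "ambient-ticker";
  return "floating-card";
}
```

The point of a sketch like this isn't the specific rules – it's that each rule is cheap to change, so every tiny demo can test a different mapping.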
Fully immersive stories
Some types of stories work brilliantly well at 1:1 scale, but what happens if you don’t have enough space in your living room? We’ve all seen those videos of people in VR headsets running into walls.
Perhaps we need responsive, adaptive layouts that can scale between fully immersive environments, tabletop- or plinth-scale models, and HUD-based floating panels and controls. Stories might need an adaptive sense of scale too.
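That fallback chain – immersive, then tabletop, then HUD – is easy to express as code. A hedged sketch, with thresholds that are illustrative guesses rather than values from the project:

```typescript
// A sketch of the "adaptive sense of scale" idea: given the free floor
// area the headset reports, fall back from a fully immersive scene to a
// tabletop model, and finally to a HUD panel. The 6 m² threshold is a
// hypothetical placeholder, not a measured value.

type LayoutMode = "immersive" | "tabletop" | "hud";

function chooseLayoutMode(freeFloorAreaM2: number, hasTable: boolean): LayoutMode {
  if (freeFloorAreaM2 >= 6) return "immersive"; // enough room to walk the scene at 1:1
  if (hasTable) return "tabletop";              // shrink to a plinth-scale model
  return "hud";                                 // no space: floating panels only
}
```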
Adapting to mobility
Another important use case is ‘not being hit by a bus while using this’. How might MR interfaces adapt to us walking around, dealing with distractions and alerts, and what might this mean for storytelling?
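One simple way to prototype this is to tier the interface by how fast the wearer is moving. Again a hypothetical sketch – the speeds and tiers are illustrative, not from the project:

```typescript
// An illustrative sketch of mobility adaptation: as the wearer speeds up,
// strip the visual interface down and push content toward brief glanceable
// alerts or audio. The speed cut-offs are hypothetical placeholders.

type MobilityTier = "stationary" | "walking" | "hurrying";

function mobilityTier(speedMetersPerSec: number): MobilityTier {
  if (speedMetersPerSec < 0.2) return "stationary"; // full layout available
  if (speedMetersPerSec < 1.5) return "walking";    // glanceable alerts only
  return "hurrying";                                // audio-first, visuals parked
}
```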
Finding the design patterns
At the end of each prototyping phase, we’d have enough sketch prototypes to begin finding generalizable patterns and rules. I’d then put together little interactive diagrams to help capture and distill what we learned.
This project was one of many MR explorations going on at the time – you can read more over on the New York Times R&D site.