
Guided Meshing in Create

By Jonathan Brodsky

Just over a year ago, we brought Magic Leap One to the world. With it, we introduced Project Create, a flagship Magic Leap app from Magic Leap Studios. Project Create brings you into a delightfully weird new world of colorful characters, art and physics. In celebration of this anniversary, we wanted to share some of our learnings with developers who are also pushing the boundaries of Magic Leap One experiences.

Create is a physics game that’s played with and against your real-world environment. The excitement comes from building something magical in a familiar space. Having a rocket ship smash into your ceiling or a T-Rex chase a ball over your couch was the kind of new experience we wanted to explore. The quality of the spatial map (or “world mesh,” terms we use interchangeably) was critical to the experience of the game.

Our Constraints

One of the technical constraints that we put on Create early on was no live meshing. The user’s space would be scanned when the app started, and Create would use that scanned mesh for the rest of the experience. Because of our AI pathing system, we needed a quality mesh before any virtual objects could be placed in the world. There were also design problems around digital content ending up hidden behind or inside new geometry when a spatial map updates. Finally, an initial scan let us verify that the mesh met minimum quality requirements before the game began.

Common Meshing System Misconceptions

When an immersive app launches, its spatial map doesn’t start with a blank slate. Instead, when the app requests a spatial map, it inherits the current world representation from the operating system. If the user has started the headset and launched an immersive app without looking around, the spatial map inherits any world mesh directly in front of the user. In most situations this works really well, and it’s about all you need to get started, since world collisions can happen in the space the user cares about.

For Create, the initial mesh presented a special challenge because, from the perspective of a user looking at this data, it seemed like the job of mapping their room was already done. We tried different ways to tell users that they needed to look around their room, including text instructions and pointing to specific locations. In the end, we went for a more guided experience so players could only start playing in Create once they had enough mesh.

The Guided Experience

At its core, Create’s meshing experience is a set of hard-coded points around the user, which the user is then invited to look at. When the user looks at a point, the depth sensor has time to gather world mesh data and hopefully build out a spatial map. We don’t require a specific order or a minimum quality; we only need the user to look in those directions. We called these hard-coded locations “waypoints.”

The Waypoints

Waypoints are a set of hard-coded offsets from a central point. They’ll often seem to be floating in the air not far from the user’s starting point. The waypoints for Create include one in each cardinal direction from where the player stands, four above the player on the ceiling, and one on the floor. This set was chosen because we needed a ceiling, wall, and floor mesh to contain and support the objects we created. An app that only needed a floor could tune its waypoint locations to collect only floor data.
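To make this concrete, here is a minimal sketch, in Python for brevity (Create itself was not built this way), of what such a waypoint set might look like. The direction values are illustrative guesses, not Create’s actual offsets.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    direction: tuple[float, float, float]  # unit vector from the central point
    completed: bool = False

def make_default_waypoints() -> list[Waypoint]:
    d = 0.7071  # ~1/sqrt(2): ceiling directions tilted 45 degrees upward
    directions = [
        ( 0.0,  0.0, -1.0),  # forward
        ( 0.0,  0.0,  1.0),  # back
        (-1.0,  0.0,  0.0),  # left
        ( 1.0,  0.0,  0.0),  # right
        ( 0.0,    d,   -d),  # ceiling, tilted forward
        ( 0.0,    d,    d),  # ceiling, tilted back
        (  -d,    d,  0.0),  # ceiling, tilted left
        (   d,    d,  0.0),  # ceiling, tilted right
        ( 0.0, -1.0,  0.0),  # floor
    ]
    return [Waypoint(direction=v) for v in directions]
```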

So, when Create starts, it skips waypoint directions that, from the user’s perspective, already have world mesh behind them.

(Illustrations by Javier Busto)
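A sketch of that skip pass, reusing the Waypoint type above. raycast_world_mesh is a hypothetical stand-in for whatever spatial-map raycast your engine provides; it should return a hit distance or None.

```python
def skip_already_meshed(waypoints, origin, raycast_world_mesh, max_dist=5.0):
    """Mark any waypoint whose direction already resolves to world mesh."""
    for wp in waypoints:
        if raycast_world_mesh(origin, wp.direction, max_dist) is not None:
            wp.completed = True
```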

As onboarding runs, we push the waypoints out to an axis-aligned bounding box. The bounding box is defined by combining the headpose and Control positions with the world mesh results from the depth sensor camera. Throughout onboarding, the bounding box defines the placement of waypoints in the absence of world mesh. If the world mesh is closer than the bounding box, we pull the waypoint in to meet the world mesh. Each waypoint is defined as a direction (up, forward, right, left). To derive a position for a waypoint from its direction, we raycast in that direction until we hit either the world mesh or the bounding box.
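Here is a sketch of that placement logic under the same assumptions as above: positions are plain (x, y, z) tuples, the central point sits inside the box, and raycast_world_mesh is again a hypothetical engine query.

```python
def expand_box(box_min, box_max, p):
    """Grow the AABB to include a point (headpose, Control, or mesh hit)."""
    return tuple(map(min, box_min, p)), tuple(map(max, box_max, p))

def ray_exit_aabb(origin, direction, box_min, box_max):
    """Distance at which a ray starting inside the AABB exits it."""
    t_exit = float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-6:
            continue  # parallel to this slab; a ray from inside never exits here
        t_exit = min(t_exit, max((lo - o) / d, (hi - o) / d))
    return t_exit

def place_waypoint(wp, origin, box_min, box_max, raycast_world_mesh):
    """Raycast along the waypoint direction; prefer the mesh hit, else the box."""
    t_box = ray_exit_aabb(origin, wp.direction, box_min, box_max)
    t_mesh = raycast_world_mesh(origin, wp.direction, t_box)
    t = t_mesh if t_mesh is not None else t_box
    return tuple(o + t * d for o, d in zip(origin, wp.direction))
```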

Reticle, Lock-on, Completion

The way we ask the player to look at each point is by providing a digital content reticle. The reticle points to the incomplete waypoint nearest the user’s forward headpose. When the player looks at that reticle, we move it into a lock-on state and display a timer. The timer runs as long as the waypoint stays within a minimum distance of the center of the player’s view. If there isn’t any mesh behind the waypoint, the timer runs at a slower rate, which gives the system more scanning time and helps us build a better spatial map for that location.
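Per frame, the timer update might look like the sketch below. The angle, duration, and slow-rate values are invented for illustration, both vectors are assumed to be unit length, and whether the timer pauses or resets when the gaze drifts away is a design choice; this sketch pauses it.

```python
import math

LOCK_ON_ANGLE_DEG = 10.0  # how close to view center the waypoint must stay
LOCK_ON_SECONDS = 2.0     # dwell time needed to complete a waypoint
SLOW_RATE = 0.5           # timer rate while no mesh exists behind the waypoint

def update_lock_on(timer, dt, head_forward, to_waypoint, has_mesh_behind):
    """Advance the lock-on timer one frame; returns (timer, completed)."""
    cos_angle = sum(a * b for a, b in zip(head_forward, to_waypoint))
    if cos_angle >= math.cos(math.radians(LOCK_ON_ANGLE_DEG)):
        # Slower accrual when there is no mesh behind the waypoint gives the
        # depth sensor more dwell time on that part of the room.
        timer += dt * (1.0 if has_mesh_behind else SLOW_RATE)
    return timer, timer >= LOCK_ON_SECONDS
```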


When the timer is complete, we tell the user that they have finished the waypoint and remove it from the active list. The reticle then points to the next closest waypoint.
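Picking the reticle’s next target then reduces to a dot product over the remaining waypoints, as in this sketch:

```python
def next_waypoint(waypoints, head_forward):
    """Pick the incomplete waypoint nearest the player's forward headpose."""
    remaining = [wp for wp in waypoints if not wp.completed]
    if not remaining:
        return None  # guided meshing is complete
    return max(remaining, key=lambda wp: sum(
        a * b for a, b in zip(head_forward, wp.direction)))
```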

Ideate and Iterate

We had some false starts. In building this system, we tried spawning waypoints that would guide the user to only look at the holes in the world mesh. In many cases, these holes were there because the system could never resolve them. The depth camera is limited by mirrors, windows, and dark surfaces. We just couldn't resolve these gaps. Asking users to look at them was a really painful experience.

We also tried a version of Create that messaged which waypoints had failed when we were unable to find a mesh behind them within a specific time period. This resulted in users endlessly chasing "incomplete" waypoints, feeling increasingly frustrated with both our system and their performance of the task. Announcing success proved a much better experience.

We briefly tried filling in gaps in the mesh, such as default floors or walls. It turns out that even the simplest room is really complex. Adding a fake floor resulted in areas where our digital content got trapped between the physical world and our estimated world. There is much opportunity for further exploration in this area and we’re excited to see how others solve these problems.

Finally, we were unable to find a great way to guide users to look at the back of objects during mesh building. We knew we wanted to enable a seated and stationary option and could imagine lots of scenarios where you would want to experience Create without wandering around. In the end, we compromised and allowed a seated person to complete guided meshing. This worked okay because from a seated position you would never care about the far side of objects. However, for a person who starts off sitting or standing but then moves around the room, there would be more gaps in our mesh. It was a compromise we could live with.

Create's Meshing Today

In the last year, we’ve shipped a number of revisions of the Create onboarding experience, tweaking it in response to further playtesting and user feedback. One of the big improvements was to soften every requirement. The guided meshing experience now provides something of a tutorial on how spatial mapping works on Magic Leap One. It shows users a raw view of the data, then explains how they can look at things to help the system understand what’s around them.

Once the user understands the system, they can self-direct and mesh to whatever degree they want for their experience. We explain these paths by front-loading guidance about what users can do, and then give them the option to do unguided meshing.

We hope this gives you a starting point to design your own onboarding experiences. Not every experience has the same requirements as Create, but hopefully this offers a baseline to get started.

We’re excited to see where you’ll go with it!
