Real-time volumetric video

In 2018, we were contacted by Ericsson, who wanted to explore what could be done using 5G as an enabler for Augmented Reality experiences. The end result turned out pretty well! Here’s a brief write-up of the projects and what we were able to achieve.

1. Real-time viewing of remote band members

Ericsson already had a setup for low-latency audio, called “MusiConnect”. By adding volumetric video to the mix, the idea was to showcase how a band could perform together, even though some of its members were somewhere else.

The setup included a couple of Kinect v2 cameras, beefy PCs for processing, a prototype 5G link provided by Ericsson’s lab in Aachen, and of course a Microsoft HoloLens for viewing. We also added an iPad “spectator view” that would capture the experience and optionally display it for bystanders on a large-screen TV.

2. Real-time volumetric video calls

Inspired by that success, it was time to build something a bit more challenging. By creating a fully bidirectional experience for MWC 2019, we gave visitors a way to experience a not-too-distant future where video calls are in 3D and you can see the person on the other end as if he or she were standing right in front of you. In addition to the bidirectional setup, the new application also used real-time meshing instead of point clouds, further improving the image quality.

Next steps

At some point, it would be fun to continue exploring this project and, for example, upgrade it to use the Azure Kinect and HoloLens 2. There have been significant improvements in both the hardware and the software, so we expect to be able to create a “high definition” experience pretty soon. Stay tuned!

Folder and code setup for a medium-sized Unity project

Despite the many blog posts on the topic, I have yet to find a simple explanation of an easy-to-use, low-cost, robust setup for a small developer team that builds reasonably complex software, involving front end, back end, UX, and several third-party components, whether from the Asset Store or from public GitHub repositories.

Ideally, the setup should be robust and support real-world situations, such as working on a future version while applying bug fixes to the current one. In our case it must also support multiple platform configurations where some code is shared, but some plugins and code are platform-specific.

After using Unity for a couple of years, a few common principles have emerged that help keep a project with multiple contributors and third-party libraries sane.

  • Unity’s root folder cannot be fully controlled: several third-party assets and libraries require that they sit directly in the root.
  • Content should primarily be grouped by function rather than by asset type, despite the prevalence of folders such as “Textures” and “Prefabs”, and despite Unity itself requiring certain folder names such as “Plugins”. An example layout follows below.
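
To make the second point concrete, a project following these principles might end up looking something like this (the folder names here are only examples, not a requirement):

    Assets/
      App/              (our own code and content, grouped by feature)
        Startup/
        Networking/
        UI/
      Toolkit/          (reusable in-house code, pulled in as a submodule)
      ThirdParty/       (well-behaved third-party libraries, also a submodule)
      Plugins/          (folder name required by Unity; platform-specific binaries)
      RootHoggingAsset/ (assets that insist on the root stay in the root)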

After much deliberation, and 2+ years of working with Unity, these are the steps I have taken to set up the shared environment for my team.

  1. Use a collaboration platform that allows private git repositories at a low cost. I’ve found that Visual Studio Online works well for our purposes, with free private repositories for up to five users.
  2. Use Visual Studio for syncing repositories. Nowadays, it’s available for both PC and Mac, and it works with submodules. More on that later.
  3. Use a common main project with branches for your stable (master) and preview (release) versions.
  4. Put all “well-behaved” third-party libraries in a separate repository and clone it as a submodule of your main project (see the command sketch after this list). Contrary to most accounts on the web, I’ve found that submodules are fairly reliable and an excellent way to bring shared code into your Unity projects. Until recently they could not be managed directly in Visual Studio, though, and required some careful setup on the git command line.
  5. Optionally, if some of your code is, or is likely to become, reusable in other projects, put it in a separate “Toolkit” repository.
  6. Put any platform-specific content in separate repositories and link them in as submodules as well.
  7. When your wireframe application is working well, fork the release project once for each developer to reduce the risk of inadvertent changes to the master or release projects.
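
For steps 3 and 4, the corresponding git commands might look like this (the repository URLs and paths are placeholders for illustration):

    # Step 3: stable work lives on master, upcoming work on a release branch
    git checkout master
    git checkout -b release

    # Step 4: add the shared third-party repo as a submodule inside Assets/
    git submodule add https://example.com/our-team/unity-third-party.git Assets/ThirdParty
    git commit -m "Add third-party libraries as a submodule"

    # Anyone cloning the main project should pull the submodules as well
    git clone --recurse-submodules https://example.com/our-team/main-project.git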


These are just my own thoughts on how to set up Unity for maximum productivity. What is your favourite configuration? Please share your best tips in the comments! Thanks!

Creating local world anchors on HoloLens 2

When HoloLens 1 was released, one of its greatest strengths was the ability to create persistent, so-called world anchors that would reliably lock objects to a position in the real world.

World anchors could be retrieved during later sessions, or even shared across devices! This was groundbreaking in 2015 and still pretty cool today.

With the HoloLens 2, the competition has caught up a bit and it is now possible to create spatial anchors on most AR/MR platforms. Still, the ability to create robust anchors using only the onboard device capabilities is an attractive feature of the HoloLens.

Microsoft are now pushing the use of Azure Spatial Anchors, in many ways a superior successor to the “old” world anchors. Created and managed by a cloud service, it leverages both cached, local anchors and an online database that can grow to host millions of anchors.

Still, sometimes it is preferable to use the classic world anchors, for instance when you cannot, or do not want to, use any cloud services. I wanted to see whether this was still possible, since the “WorldAnchorManager” script is available in MRTK (the Mixed Reality Toolkit).

Turns out that with some minor tweaks, it’s perfectly possible to create world anchors on HoloLens 2.

Based on the steps at https://docs.microsoft.com/en-us/windows/mixed-reality/develop/unity/persistence-in-unity, I wrote a small example app that lets the user place a cube at any location, automatically creating and saving an anchor each time the cube is released. When the app is closed and restarted, the anchor is loaded and the cube is placed at its saved location.
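
Under the hood this relies on the legacy WorldAnchorStore API, which still works on HoloLens 2. Here is a minimal sketch of the save/restore cycle; the component name and anchor id are my own, and the full version lives in the repository below:

    using UnityEngine;
    using UnityEngine.XR.WSA;               // WorldAnchor (legacy WSA API)
    using UnityEngine.XR.WSA.Persistence;   // WorldAnchorStore

    // Minimal sketch: persist one object's pose across sessions.
    // "CubeAnchor" is an arbitrary id chosen for this example.
    public class AnchorPersistence : MonoBehaviour
    {
        const string AnchorId = "CubeAnchor";
        WorldAnchorStore store;

        void Start()
        {
            // The store loads asynchronously; all work happens in the callback.
            WorldAnchorStore.GetAsync(OnStoreLoaded);
        }

        void OnStoreLoaded(WorldAnchorStore loadedStore)
        {
            store = loadedStore;
            // Restore a previously saved anchor onto this object, if any.
            store.Load(AnchorId, gameObject);
        }

        // Call before the user starts moving the object: an object cannot
        // be moved while a WorldAnchor component is attached to it.
        public void RemoveAnchor()
        {
            var anchor = GetComponent<WorldAnchor>();
            if (anchor != null) DestroyImmediate(anchor);
            if (store != null) store.Delete(AnchorId);
        }

        // Call when the user releases the object at its new position.
        public void SaveAnchor()
        {
            if (store == null) return;
            var anchor = gameObject.AddComponent<WorldAnchor>();
            store.Save(AnchorId, anchor);
        }
    }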

The source can be found here: https://github.com/anders-lundgren/mrtk-world-anchors

Verified on Unity 2019.4.9f1, Visual Studio 2019, HoloLens 2 (Build 10.0.19041.1377).

In addition to the steps above, I added the base MRTK components according to the MRTK getting started tutorial at https://docs.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-base-02. I also added a simple debugging prefab from this example: https://docs.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-asa-02#importing-the-tutorial-assets, and the manipulation handler from this example: https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_ManipulationHandler.html.
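
For completeness, here is roughly how the manipulation handler’s events can be wired to the persistence sketch above (AnchorPersistence is my hypothetical component from before, not an MRTK class; the same wiring can also be done in the Inspector):

    using Microsoft.MixedReality.Toolkit.UI;   // ManipulationHandler (MRTK 2.x)
    using UnityEngine;

    // Hypothetical glue component: removes the anchor while the cube is
    // being moved and saves a new one when the user releases it.
    [RequireComponent(typeof(ManipulationHandler))]
    public class SaveAnchorOnRelease : MonoBehaviour
    {
        void Start()
        {
            var persistence = GetComponent<AnchorPersistence>();
            var handler = GetComponent<ManipulationHandler>();
            handler.OnManipulationStarted.AddListener(_ => persistence.RemoveAnchor());
            handler.OnManipulationEnded.AddListener(_ => persistence.SaveAnchor());
        }
    }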