
Hi everyone,

 

After creating many hundreds of pages of personal research material over the past seven years - part of a much vaster archive belonging to my company - I decided it was time to put more of it to practical use in modern-day applications such as software development for the forthcoming Intel Project Alloy merged-reality headset.  The natural place to start was at the beginning of the archive, in 2010.

 

I rediscovered an idea for creating living avatars for classroom teaching.  The teacher would wear an all-covering white bodysuit, whilst an image projector moved around the classroom ceiling on rails, tracking the teacher's position and projecting an image onto the suit's surface to change the teacher's appearance.  For example, if the teacher got down on hands and knees then the projector could make them look like a bear by projecting a bear image onto the suit from above.

 

 

Because the Project Alloy headset can scan large real-world objects such as furniture and convert them into virtual objects of a similar size, I realized that this could provide a new way to make living avatars a reality.  If the headset could scan furnishings and convert them into a virtual representation, I wondered, perhaps it could do the same for living people it observes, since to the camera they should be no different from moving furniture.  This would allow any person observed by the headset wearer to take on a virtual form of similar size and shape.

 

And as Alloy constantly scans the room (a feat made possible by its advanced vision processing chip) rather than just taking a single calibration scan at start-up (as Microsoft's Kinect camera did), in theory it ought to be able to update the virtual representation of the observed person in real time.

 

Perhaps, with Project Alloy, we will all have the opportunity to interact with friends and colleagues as animals, heroes, villains and creatures beyond imagination ...

Hi everyone,

 

In this article, I will showcase the design philosophy of the User Interface (UI) for the character selection screen of my company's RealSense-enabled game, 'My Father's Face'.  The game is targeted for release on PC and Intel Project Alloy merged-reality headset in 2017.

 

In the offline mode of 'My Father's Face', one or two players can play, with the second player using a joypad and able to seamlessly drop in and out of the game without interrupting Player 1's game. 

 

1.jpg

 

Because the game centers around a pair of characters, Player 1 is asked to select a second character to define a family, even if they will only be controlling their own character.  Any combination of male and female characters can be selected (male / female, female / male, male / male and female / female), enabling the player or players to form their ideal family unit.

 

Where possible, icons are used instead of text to broaden the appeal of the game to international audiences and make it economical to create a range of language localizations for the game.  The male and female gender of the characters is therefore conveyed through the universal symbols for gender, rather than stating 'male' and 'female' in text.

 

In the top corner of the screen, icons representing the controls for returning to the title screen or starting the game are prominently displayed, informing the player how to enter the game quickly and easily if they do not wish to take time making character selections on their first time with the game.  If Start is activated immediately then the default character pairing selected is Male for Player 1 and Female for Player 2.

 

The player selector is controlled with the arrow keys, the joypad left stick or the left hand of Player 1 with the RealSense camera controls.

 

When 'Player 2' is highlighted, the text below the selector changes to inform Player 2 that they can use their joypad to make their own character selection, overriding the Player 2 character chosen by Player 1 if they wish to do so.

 

2.jpg

 

Both characters are controllable within the character selection room before the game even begins, in order to provide a gentle practice environment where nothing can go wrong and the players can try out different buttons, keys and – in the case of the camera controls – body movements.

 

Because the character selection screen is the first time that the players are exposed to the game's controls, however, we do not wish to overwhelm them with control information and create confusion.  So whilst most of the character controls are available at this time, the on-screen UI only informs the player of the most vital core controls for navigating the selection room – turn left, turn right and walk forwards.

 

These controls are overlaid on the camera view as a Heads Up Display (HUD), so that even if the player characters walk across them they remain visible at the front of the screen, with the characters passing behind the UI elements.
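
As a rough illustration of how this kind of always-on-top HUD can be achieved in Unity (a minimal sketch only, not the game's actual UI code - the class name is hypothetical), a Canvas set to the 'Screen Space - Overlay' render mode is always drawn after the 3D scene, so the characters can never walk in front of it:

// Hypothetical sketch: keep HUD control icons drawn on top of the 3D scene.
// The Canvas and its child icon images are assumed to be set up in the editor.
using UnityEngine;

public class HudOverlaySetup : MonoBehaviour
{
    void Awake()
    {
        Canvas hudCanvas = GetComponent<Canvas>();

        // Screen Space - Overlay canvases are rendered after the 3D scene,
        // so the player characters always appear behind the HUD icons.
        hudCanvas.renderMode = RenderMode.ScreenSpaceOverlay;
    }
}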

 

In the above image, the keyboard and joypad controls for turning and walking are displayed, as the camera control icons have not been created at this stage in the project.  The player can use the RealSense camera to turn 360 degrees left or right with a tilt or a left-right turn of the head, whilst walking forwards is currently achieved by leaning forwards in the chair (this action will likely be changed in the final Project Alloy headset release).

 

3.jpg

 

As demonstrated in the image below, other character controls such as crouching and side-stepping are also accessible in the character selection room if the player discovers them through exploration.  They are not required in order to leave the selection room though, ensuring that a first-time player will never become frustratingly trapped in the starting room, unable to progress to the main game.

 

4.jpg

 

When the Start control is activated, the game begins, either in full-screen mode if only Player 1 is playing, or in multiplayer split-screen if the '2 Player' option is highlighted at the top of the screen when Start is activated.

 

Upon starting, a set of doors to the outside world opens up, revealing a much larger environment outside the cozy, closed-off confines of the selection room and beckoning the player or players to move forwards into this new world using the basic navigation controls.

 

5.jpg

 

At this point, the full range of controls becomes accessible.  'My Father's Face' uses an innovative control system in which all the control advantages of a camera are available through a joypad, with the button layout designed to keep complex body and limb movements comfortable and easy to perform whilst walking or running.

 

On the pad, control of the arms is mapped to the left and right sticks, whilst turning is accomplished with the left and right analog triggers and walking with the left digital bumper button above the left analog stick.  This layout lets the fingers flow subconsciously over the pad, allowing the players to navigate, touch and explore the world with all the motion and tactile capabilities of their real-world body.
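
As a rough illustration of this mapping in Unity (a hedged sketch only, not the game's actual input code - the axis names assume a standard Input Manager configuration for an Xbox-style pad, where JoystickButton4 corresponds to the left bumper):

// Hypothetical sketch of the pad mapping described above.
// "LeftStickX/Y", "RightStickX/Y", "LeftTrigger" and "RightTrigger" are assumed
// Input Manager axis names rather than defaults.
using UnityEngine;

public class PadControlSketch : MonoBehaviour
{
    public float turnSpeed = 90f;   // degrees per second
    public float walkSpeed = 2f;    // metres per second

    void Update()
    {
        // Arms: left and right sticks (in the real game these values would drive the arm rigs).
        Vector2 leftArm  = new Vector2(Input.GetAxis("LeftStickX"),  Input.GetAxis("LeftStickY"));
        Vector2 rightArm = new Vector2(Input.GetAxis("RightStickX"), Input.GetAxis("RightStickY"));

        // Turning: the two analog triggers pull the character left or right.
        float turn = Input.GetAxis("RightTrigger") - Input.GetAxis("LeftTrigger");
        transform.Rotate(0f, turn * turnSpeed * Time.deltaTime, 0f);

        // Walking: hold the left bumper to move forwards.
        if (Input.GetKey(KeyCode.JoystickButton4))
            transform.Translate(Vector3.forward * walkSpeed * Time.deltaTime);
    }
}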

 

6.jpg

 

The characters can also be controlled with a keyboard and mouse combo, with the arrow keys the default for turning left and right and walking forwards and back, and the arms and crouch action controlled one at a time with the mouse, its buttons and the scroll-wheel.  The left mouse button opens and closes the currently selected arm's hand, and the right mouse button toggles control between the left and right arm.

 

An auto-walk control toggle enables the player character to walk and run automatically whilst it is active, freeing up the player's hand to focus comfortably on other controls such as turning and arm movement.  This allows spectacular, complex Power Rangers-style moves to be performed intuitively with the player character.

 

When the RealSense camera controls are being used, the two arms mirror Player 1's real-life arm movements almost 1:1, reproducing every range of upper and lower arm movement that the real arms can make.  This opens up almost limitless possibilities for realistic interaction, from hugging Player 2 (or more than hugging!) to operating an in-game computer keyboard or using a handle.

 

This control capability is also available with the joypad or mouse, but using both of your real arms simultaneously with the RealSense camera creates an incredible level of immersion in the virtual environment that makes you believe that you are really there.  This sensation is amplified when switching the camera into the first-person 'through the eyes' view, which lets you see the world as your real eyes do, watch your arms moving in front of your line of vision and look down at your own virtual torso and feet.

 

7.jpg

 

You are not merely limited to interacting with a friend sitting next to you either.  Players can set up a private online session through match-making and meet in a shared instance of the game world, each player using the controls at their real-world location to independently control their character in the online environment.  This is made possible by the Unity engine's UNet networking system.
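
For those curious about how this works under the hood, the sketch below shows the general UNet pattern involved (a minimal, hedged example rather than the game's own code - the class and field names are purely illustrative): each player's avatar derives from NetworkBehaviour and only responds to its local owner's input, whilst UNet replicates the resulting movement to the other player's copy of the world.

// Minimal UNet sketch (illustrative only): each connected player drives their own avatar.
using UnityEngine;
using UnityEngine.Networking;

public class NetworkedAvatarSketch : NetworkBehaviour
{
    public float walkSpeed = 2f;    // metres per second
    public float turnSpeed = 90f;   // degrees per second

    void Update()
    {
        // Only the owning player reads local input; UNet (via a NetworkTransform
        // component on the same prefab) replicates the movement to the remote session.
        if (!isLocalPlayer)
            return;

        float turn = Input.GetAxis("Horizontal");
        float walk = Input.GetAxis("Vertical");

        transform.Rotate(0f, turn * turnSpeed * Time.deltaTime, 0f);
        transform.Translate(Vector3.forward * walk * walkSpeed * Time.deltaTime);
    }
}

In practice the avatar prefab would also carry a NetworkTransform component and be registered as the player prefab on a NetworkManager, which handles the match-making session described above.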

 

When this level of multiplayer virtual character control is made available online to people in any location with a good internet connection, the potential number of applications of the technology – from physically tactile personal long-distance relationships to professional team meetings and collaborative creativity – increases exponentially.

 

You can view the most recent “tech trailer” demonstration video for 'My Father's Face' (minus the latest character selection UI elements) here:

 

 

Link: 'My Father's Face' Tech Trailer 7 - YouTube

 

'My Father's Face' will be released by Sambiglyon (www.sambiglyon.org) for PC and Intel Project Alloy headset in 2017.

Project Alloy, Intel's new "merged reality" headset, is scheduled to be available for purchase in the Q4 2017 time window.  If you are hoping to develop a large application for Alloy in time for the launch window then you will want to start thinking now about how to set up a stopgap development environment before a proper Development Kit is available.

 

This was an approach commonly taken in the videogame development industry in the past, where developers set up PCs with a specification approximating what they thought a new game console's would be and then switched to actual development kits later in development, once the console platform-owner (e.g. Nintendo, Microsoft or Sony) could supply them with one.

 

In this guide, we will look at some useful guidelines for preparing your project idea for development.

 

1.  What specification of PC should I target for my development machine?

 

The official specification for the Alloy headset is due to be released to developers sometime around the middle of 2017.  Unlike headsets such as Oculus Rift and HTC Vive, the Alloy headset will not be tethered to a PC by a cable.  Instead, it will contain a full PC board inside the headset.

 

One of the few concrete details available is that the headset will use some form of the 7th generation Kaby Lake processor.  As the specification of the GPU that will provide the graphics for the headset is currently unknown, developers should aim relatively low in regard to the graphics power that their application will require.  If the GPU is more powerful than expected then that will be a pleasant surprise.  But if you design an application that requires a high-end video card to run well, it will be much harder to scale the application down to meet a lower specification.

 

This does not mean that you should lower your ambitions.  Instead, you should be aiming to extract maximum performance from the hardware available by creating highly optimized code, art and other project assets.  This is a principle practiced by videogame developers for decades, when their dreams did not quite match the capability of the target hardware.  Indeed, many useful lessons about optimizing for the biggest bang for your processing buck can be learned by looking to the games of the past.

 

My own Alloy development machine is a Skylake 6th generation laptop with 8 GB of memory and Intel HD 520 integrated graphics.  This was my machine of choice because I believe that it is a reasonable approximation of the hardware that may be found in the final Alloy headset's PC board.  My previous development machine was an i3 4th generation Haswell with 6 GB of memory and a 2013-era Nvidia GT 610 video card.

 

The video card, even at four years old, was a key factor in the performance of my project.  Once the project was transferred to the Skylake laptop with integrated graphics, it slowed down noticeably, even though the processor is superior.  Rather than being discouraged by this, I view it as a positive challenge.  As highly optimized as my code already is, I know that there is still more I can do to squeeze extra performance out of it.  And the better the performance that I can achieve on this development machine, the better it will run on final Alloy equipment if its specification exceeds that of my dev laptop.

 

As an example of how performance gains can be made by thinking carefully about your design: in the Unity game creation engine that RealSense is compatible with, the amount of graphics processing required can be reduced by using a method called Static Batching.  This is where you place a tick in a box labeled 'Static' for objects that are stationary and will never move.  Unity then groups static objects that share the same material into a combined 'batch', meaning that Unity has to issue fewer draw calls each frame and so the overall project should run faster.
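
For scenery that is assembled at runtime rather than placed in the editor, the same optimization can also be requested from script.  The snippet below is only a small sketch under my own assumptions (the 'StaticScenery' parent object is hypothetical): it asks Unity to combine every stationary child of that object into static batches, the scripted equivalent of ticking the 'Static' box.

// Hypothetical sketch: statically batch scenery that was assembled at runtime.
// "StaticScenery" is an assumed empty parent object holding the stationary meshes.
using UnityEngine;

public class StaticSceneryBatcher : MonoBehaviour
{
    void Start()
    {
        GameObject sceneryRoot = GameObject.Find("StaticScenery");

        if (sceneryRoot != null)
        {
            // Combines the child meshes that share a material into static batches,
            // reducing the number of draw calls needed each frame.
            StaticBatchingUtility.Combine(sceneryRoot);
        }
    }
}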

 

2.  What control methods will Alloy support for my application?

 

Previous Intel demonstrations and hands-on sessions by developers at events where Intel has presented the in-development headset give us some idea of what to expect.  Via its "inside-out" tracking, Alloy can - like the RealSense Developer Kits - track hand movements / gestures and facial movements.  So if you have experience with developing RealSense applications with the Developer Kit cameras then that knowledge should be relatively easy to adapt for Alloy applications.

 

Alloy has also been shown to be compatible with physical handheld controllers with 'six degrees of motion' - forward, back, up, down, left and right.  Until final hardware is available, using an existing Bluetooth-enabled handheld motion controller such as PlayStation Move with your development PC is likely to be sufficient to prototype such controls.

 

In regard to locomotion, an Alloy application can update the user's position in the virtual environment as they walk through a room with their real-life feet.  If your application will be a sit-down experience though then you may find it easier to assign movement to a hand gesture via Alloy's five-finger detection capability, or to a button on the handheld controller.
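
As a rough sketch of how a sit-down movement option might be wired up in Unity (hedged - the gesture flag and button name below are my own placeholders, not actual Alloy or RealSense SDK calls), forward motion is simply applied while the chosen gesture or controller button is active:

// Hypothetical sketch: move the user forwards while a gesture or controller
// button is active, instead of relying on real-world walking.
using UnityEngine;

public class SeatedLocomotionSketch : MonoBehaviour
{
    public CharacterController body;     // assumed to be attached to the player rig
    public float moveSpeed = 1.5f;       // metres per second

    // In a real application this flag would be set by the SDK's gesture event
    // (e.g. an open-hand gesture) or by polling the handheld controller.
    public bool moveGestureActive = false;

    void Update()
    {
        bool buttonHeld = Input.GetButton("Fire1");  // stand-in for a controller button

        if (moveGestureActive || buttonHeld)
        {
            // SimpleMove applies gravity and moves along the rig's facing direction.
            body.SimpleMove(transform.forward * moveSpeed);
        }
    }
}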

 

3.  How can I truly take advantage of this new "Merged Reality" medium of bringing real-world elements into a virtual environment?

 

You may be interested in also reading my article on designing applications with "User Imagined Content" for the Alloy headset.

 

Advanced 'User Imagined Content' Design Principles For Intel Project Alloy Applications

 

In conclusion: develop smart and aim high!

Edit: PDF manuals for the R3 SDK can be found in the following folder:

 

C:\Program Files (x86)\Intel\RSSDK\docs\PDF

 

The manuals for particular SDK optional modules seem to be added as each module is installed.

 

Hi everyone,

 

I have been investigating the process for setting up the RealSense camera in the Unity game creation engine with the '2016 R3' SDK.  Up until now, attempts to get it working have failed, because the structure of the R3 SDK is very different to the previous R2 version.  I have made some progress though and will share it in this article, updating it as new discoveries are made.

 

STEP 1

Download the 2016 R3 SDK at this download location and install it:

 

Intel® RealSense™ SDK | Intel® Software

 

1.jpg

 

Download and install the 'Essentials' module first, and then any optional modules from the same page that your project may require (Face Tracking, Hand Tracking, etc).

 

When downloading some of the optional modules such as Face and Hand, you may encounter an error message during installation about not being able to connect to the network to download a particular file.  This is the 'Visual C++ 2015 Redistributable'.  Although this message is alarming, it seems to occur not because the file is missing from the download, but rather because it is already installed on your machine and therefore does not need to be installed again.

 

STEP 2

 

If you have used earlier versions of the RealSense SDK with Unity before, then you should be prepared to make big changes in your thinking, because the R3 SDK's Unity implementation is completely different, both in folder structure and in the names of the DLL library files that make the camera function.

 

Because of this, it is fruitless to try to import RealSense's new Unity implementation into an existing RealSense-equipped project, as you will only get red error messages due to your project being unable to locate the files that it is looking for.  The folder and file changes render the Unity Toolkit files from SDK versions up to 2016 R2 practically useless in their current form.  You will therefore have to start fresh with a brand new Unity project file.

 

STEP 3

 

Open Unity and go to 'File > New Project' to start a new, clean Unity project.

 

2.jpg

 

3.jpg

 

STEP 4

 

The default installation location for the Unity-related files for the RealSense R3 SDK on your computer should be:

 

C:\Program Files (x86)\Intel\RSSDK\framework\Unity

 

As you can see below, the structure of R3's Unity framework is very different to the framework you may have used in previous SDK versions.

 

4.jpg

 

The files listed in your particular PC's folder will depend on which of the optional modules you downloaded, if any.  There should at least be an 'Intel.Realsense.core' file there, representing the 'Essentials' module, with further files added such as Face and Hand when you install optional modules.

 

STEP 5

 

Double left-click on the 'Intel.Realsense.core' file whilst Unity is open.  This will cause an 'Import Unity Package' window to pop up in Unity.

 

5.jpg

 

Left-click on the 'Import' button at the base of the window to import the files of the 'Essentials' RealSense module into your new Unity project.

 

6.jpg

 

STEP 6

 

Repeat the process for all of the optional module files that you have in your SDK Unity framework folder until they are all imported into the 'RSSDK' folder in your Unity project.

 

7.jpg

 

If you browse through the folders of the optional modules then you can see how much it has changed from the Unity Toolkit of version R2 and earlier.  Instead of only two primary DLL library files to operate the camera, there is now a separate DLL file for each optional module.

 

8.jpg

 

The original familiar library files have also disappeared, replaced in the Core folder by 'Libpxccore_c'.

9.jpg

 

It is clear, then, why attempting to use these files in an existing RealSense-equipped project in Unity that contains the old Unity Toolkit files generates red errors.

 

STEP 7

 

Having completed the setup of our new 2016 R3 SDK project in Unity, we can now run the new project for the first time.  And the result is ... nothing happens to the camera at all, not even a green light.  It is still some kind of forward progress from having red errors, though!

 

10.jpg

 

Quite simply, because there are no sample scripts provided with the R3 implementation of Unity, the scene does not know what to do with the camera because there are no scripts telling it to activate.

 

Even if a simple cube object is created and a test script placed inside it that contains RealSense camera code, it red-errors because the structure of the RealSense implementation in Unity has changed so much.

 

What will be necessary from this point onwards is to work out how to write scripts that will work with the new R3 structure.

 

It is likely that the new modular structure of the RealSense SDK from '2016 R3' onwards will be the standard set for subsequent RealSense SDK releases in 2017 / 2018, and that 2016 R2 will be obsoleted in the same way that previous SDKs before R2 were.

 

Developers who wish to make use of Unity therefore stand the best chance of future-proofing their applications and making them easier to upgrade by adopting and learning the new modular system now, rather than persisting with the previous Unity implementation that spanned from the launch of RealSense in 2014 to version 2016 R2.

 

Continuing the process of getting started with the 2016 R3 SDK in Unity: although R3 is not supplied with a Unity Toolkit package of tools and sample scripts like the previous SDKs were (due to R3's new and very different structure), a sample Unity program is provided - a Unity version of the 'RawStreams' program that regular users of the RealSense SDK will be familiar with.

 

The default installation location for this sample is: C:\Program Files (x86)\Intel\RSSDK\sample\core\RawStreams.Unity

 

1.jpg

This sample is not yet listed in R3's Sample Browser application.  So in order to make it runnable, the folder needs to be copied to a location where data can be saved to it as well as read from it.  The desktop is a suitable location to place it.

 

Right-click on the folder and select the 'Copy' option from the menu.  Then go to the desktop, right-click on it and select the 'Paste' option from the menu to place a copy of the folder there.

 

3.jpg

As usual with RealSense's Unity sample programs, it can be run without setting up a new Unity project by

 

- Starting up Unity

- Clicking on the 'Open' option on its project selection window

- Browsing to the folder containing the RawStreams.Unity folder, highlighting it and left-clicking the 'Select Folder' option to open the sample in Unity.

 

5.jpg

 

Upon clicking the 'Select Folder' option, you will be notified that the version of Unity that you are opening the sample in is newer than the version that the sample was created in, assuming that your Unity version is newer than 5.2.3.  Left-click the 'Continue' button to proceed with opening the sample.

 

6.jpg

 

Once Unity has updated the sample's files, it will open in its default New Project view.

 

7.jpg

 

We need to load the sample's Scene file into Unity before we can use it.  Left-click on the 'File' menu and select the 'Open Scene' option.  Browse to the RawStreams.Unity > Assets > Scenes folder and select the file called 'main'.

 

8.jpg

 

The RawStreams sample project will now load into Unity.

 

9.jpg

 

Left-click on the small triangular 'Play' button at the center-top of the Unity window to run the RawStreams.Unity sample program.

 

Success!  A dual window stream of the RGB and depth cameras is displayed.

 

10.jpg

 

Having successfully run the new RawStreams.Unity sample, we will open its 'RawStreamsController' script in the Unity script editor to learn more about how scripting works in Unity under the 2016 R3 SDK.

 

Left-click on the object in Unity's left-hand Hierarchy panel called 'RawStreamsController' to display its settings in the right-hand Inspector panel, including the 'RawStreamsController' script file that provides the programming for the sample program.

 

1.jpg

 

Left-click on the small gear-wheel icon at the end of the row containing the script's name to open the script's menu, and left-click on the 'Edit Script' menu option to open the script in the Unity script editor.

 

2.jpg

 

using UnityEngine;
using System.Collections;
using Intel.RealSense;
// For each subsequent algorithm module "using Intel.RealSense.AlgorithmModule;"

public class RawStreamsController : MonoBehaviour {

  [Header("Color Settings")]
  public int colorWidth = 640;
  public int colorHeight = 480;
  public float colorFPS = 30f;
  public Material RGBMaterial;

  [Header("Depth Settings")]
  public int depthWidth = 640;
  public int depthHeight = 480;
  public float depthFPS = 30f;
  public Material DepthMaterial;

  private SenseManager sm = null;
  private SampleReader sampleReader = null;
  private NativeTexturePlugin texPlugin = null;

  private System.IntPtr colorTex2DPtr = System.IntPtr.Zero;
  private System.IntPtr depthTex2DPtr = System.IntPtr.Zero;

  void SampleArrived (object sender, SampleArrivedEventArgs args)
  {
    if (args.sample.Color != null) texPlugin.UpdateTextureNative (args.sample.Color, colorTex2DPtr);
    if (args.sample.Depth != null) texPlugin.UpdateTextureNative (args.sample.Depth, depthTex2DPtr);
  }

  // Use this for initialization
  void Start () {

    /* Create SenseManager Instance */
    sm = SenseManager.CreateInstance ();

    /* Create a SampleReader Instance */
    sampleReader = SampleReader.Activate (sm);

    /* Enable Color & Depth Stream */
    sampleReader.EnableStream (StreamType.STREAM_TYPE_COLOR, colorWidth, colorHeight, colorFPS);
    sampleReader.EnableStream (StreamType.STREAM_TYPE_DEPTH, depthWidth, depthHeight, depthFPS);

    /* Subscribe to sample arrived event */
    sampleReader.SampleArrived += SampleArrived;

    /* Initialize pipeline */
    sm.Init ();

    /* Create NativeTexturePlugin to render Texture2D natively */
    texPlugin = NativeTexturePlugin.Activate ();

    RGBMaterial.mainTexture = new Texture2D (colorWidth, colorHeight, TextureFormat.BGRA32, false); // Update material's Texture2D with enabled image size.
    RGBMaterial.mainTextureScale = new Vector2 (-1, -1); // Flip the image
    colorTex2DPtr = RGBMaterial.mainTexture.GetNativeTexturePtr (); // Retrieve native Texture2D Pointer

    DepthMaterial.mainTexture = new Texture2D (depthWidth, depthHeight, TextureFormat.BGRA32, false); // Update material's Texture2D with enabled image size.
    DepthMaterial.mainTextureScale = new Vector2 (-1, -1); // Flip the image
    depthTex2DPtr = DepthMaterial.mainTexture.GetNativeTexturePtr (); // Retrieve native Texture2D Pointer

    /* Start Streaming */
    sm.StreamFrames (false);
  }

  // Use this for clean up
  void OnDisable () {

    /* Clean Up */
    if (sampleReader != null) {
      sampleReader.SampleArrived -= SampleArrived;
      sampleReader.Dispose ();
    }

    if (sm != null) sm.Dispose ();
  }

}

 

*****************************

 

The header of the script provides our most useful clue about how RealSense camera scripting works in Unity in the R3 SDK.

 

3.jpg

To specify that the script uses the RealSense camera, we must place in the header:

 

using Intel.RealSense;

 

The comment below this line informs us that to access the main and optional feature modules of the R3 SDK, we must use the format:

 

using Intel.RealSense.AlgorithmModule;

 

replacing the word AlgorithmModule with the name of the module.

 

If we revisit the SDK's Unity framework folder, we can find the module names that should be provided in the script header when those features are accessed in a script.

 

4.jpg

We can assume that the Essentials (core) module is already referenced in the script by 'using Intel.RealSense', since the script could not function without access to the Essentials module.  The algorithms that need to be listed separately are therefore the additional optional ones that are installed in your Unity R3 project.

 

If our script were to use the Face (face) and Hand (hand) algorithms, then our header may look like this:

 

using UnityEngine;

using System.Collections;

using Intel.RealSense;

using Intel.RealSense.Face;

using Intel.RealSense.Hand;

 

However, we provide this information here just for the purposes of learning scripting in Unity with R3, since the Face and Hand optional modules are not used in the RawStreams.Unity sample.

 

Subsequent experimentation revealed that if you do not have the referenced modules installed in your project then the module name will be highlighted in red text to indicate that Unity cannot find the module.

 

Once a module is installed, Unity makes it easy to confirm what the correct algorithm name is: simply type the first letter of that algorithm's name after 'using Intel.RealSense.'

 

1.jpg

 

The other interesting detail we can learn from the RawStreamsController script is that the Sense Manager component is still used in Unity in R3, just as it has been with previous versions of the SDK.

 

5.jpg

Finally, if we turn our attention away from the scripting of the sample and look at Unity's 'Assets' panel, a browse through the folder structure of RSSDK and its sub-folders demonstrates the differences between SDK versions R2 and R3 that were shown at the very beginning of this article when setting R3 up in Unity.

 

6.jpg

 

The 'Intel.RealSense' file in the Plugins folder is the one referenced in scripts by 'using Intel.RealSense', whilst the camera's library file in the 'x86_64' folder is called 'libpxccore_c' (matching the 'core' name of the Essentials module), replacing the familiar pair of library files in the 'Plugins' and 'Plugins_Managed' folders that were used up until the R2 SDK.