
Intel® RealSense Community

10 Posts authored by: MartyG

Hi everyone,


The virtual reality news site Road To VR reports - with a confirmation statement from Intel - that the Project Alloy wireless 'merged reality' headset that was due to be released by the end of this year has unfortunately been canceled.


Given that a number of manufacturers had recently debuted Windows Mixed Reality headsets whilst there was no fresh news of Alloy, the possibility that the project had been canceled seemed strong.  It is sadly confirmed now.


RealSense SDK 2.0 documentation

Posted by MartyG Sep 13, 2017

Hi everyone,


Now that the new open-source, cross-platform RealSense SDK 2.0 for Windows and Linux has launched, I went through the documentation files and assembled some useful introductory information links.  SDK 2.0 is an upgraded version of Librealsense (which has also been referred to in recent times by Intel by the alternate name 'RealSense Cross Platform API').


It is distinguished from the previous version of Librealsense by being identified as a "development" branch (one that is still in development and may have issues), whilst the older Librealsense version resides on the stable "main" branch.




GitHub - IntelRealSense/librealsense at development



librealsense/ at development · IntelRealSense/librealsense · GitHub



librealsense/examples at development · IntelRealSense/librealsense · GitHub



librealsense/tools at development · IntelRealSense/librealsense · GitHub




Since Intel acquired the vision technology company Movidius a year ago, people have been wondering how that technology would manifest as an Intel product.  Although the new RealSense D415 and D435 cameras were undoubtedly influenced by the acquisition, the most visible manifestation of the purchase has been revealed today: the Movidius Myriad X vision chip.


Although not a RealSense product, it is worth highlighting the Myriad X on this forum because of the natural parallels of its possible applications with the kind of projects that developers use RealSense cameras for.


You can read more about the Myriad X in this Intel news release.


Introducing Myriad X: Unleashing AI at the Edge | Intel Newsroom


Edit: further details about Myriad X can be found on its information page.


Myriad™ X: Ultimate Performance at Ultra-Low Power | Machine Vision Technology | Movidius

Hi everyone,


As my company's RealSense-enabled full game 'My Father's Face' heads for its late 2017 release date on the target platforms of PC and Intel Project Alloy headset, I thought it would be a good time to share some new preview images.  We hope it will give you creative inspiration for what you can achieve with the RealSense camera in your own projects when you really harness its true power!









Features of 'My Father's Face' include:


- A huge, fully explorable, physics-driven island in an exciting new universe where you can go anywhere that you can see.

- Choice of either male or female characters to play as, and local and online multiplayer in a shared world with mixed or same genders.  Walk together, run together, play together, work together and touch together!

- Live lip sync that replicates the player's mouth movements as they speak into their real-life microphone and fully animates the virtual face.

- Player-controlled characters that utilize RealSense technology to precisely mirror the player's body and arm / hand movements and facial expressions almost 1:1, thanks to technology two years in the making.

Hi everyone,


After creating many hundreds of pages of personal research material over the past seven years - part of a much vaster archive belonging to my company - I decided that it was time that I tried to put more of it into practical use for modern-day applications such as development of software for the forthcoming Intel Project Alloy merged-reality headset.  The natural place to start was at the beginning of the archive, in 2010.


I rediscovered an idea for creating living avatars for classroom teaching.  The teacher would wear an all-covering white bodysuit, whilst an image projector would move around the classroom ceiling on rails, tracking the teacher's position and projecting an image onto the suit's surface that would change the teacher's appearance in doing so.  For example, if the teacher got down on hands and knees then the projector could make them look like a bear by projecting a bear image onto the suit from above.



As the Project Alloy headset can scan large real-world objects such as furniture and convert them into virtual objects of a similar size, I realized that this could provide a new way to make living avatars a reality.  If the headset can convert furnishings into a virtual representation, I wondered, maybe it could do the same for living people it observes, as to the camera they should be no different from moving furniture.  This would allow any person observed by the headset wearer to take on a virtual form of similar size and shape.


And as Alloy is constantly scanning the room (a feat made possible by its advanced vision processing chip), rather than just taking a single calibration scan at start-up (like Microsoft's Kinect camera did), in theory it ought to be able to update in real-time the virtual representation of the living person that the headset's camera is observing.


Perhaps, with Project Alloy, we will all have the opportunity to interact with friends and colleagues as animals, heroes, villains and creatures beyond imagination ...

Hi everyone,


In this article, I will showcase the design philosophy of the User Interface (UI) for the character selection screen of my company's RealSense-enabled game, 'My Father's Face'.  The game is targeted for release on PC and Intel Project Alloy merged-reality headset in 2017.


In the offline mode of 'My Father's Face', one or two players can play, with the second player using a joypad and able to seamlessly drop in and out of the game without interrupting Player 1's game. 




Because the game centers around a pair of characters, Player 1 is asked to select a second character to define a family, even if they will only be controlling their own character.  Any combination of male and female characters can be selected (male / female, female / male, male / male and female / female), enabling the player or players to form their ideal family unit.


Where possible, icons are used instead of text to broaden the appeal of the game to international audiences and make it economical to create a range of language localizations for the game.  The male and female gender of the characters is therefore conveyed through the universal symbols for gender, rather than stating 'male' and 'female' in text.


In the top corner of the screen, icons representing the controls for returning to the title screen or starting the game are prominently displayed, informing the player how to enter the game quickly and easily if they do not wish to take time making character selections on their first time with the game.  If Start is activated immediately then the default character pairing selected is Male for Player 1 and Female for Player 2.


The player selector is controlled with the arrow keys, the joypad left stick or the left hand of Player 1 with the RealSense camera controls.


When 'Player 2' is highlighted, the text below the selector changes to inform Player 2 that they are able to use their joypad to make their own character selection to override the choice of Player 2 character made by Player 1, if they wish to do so.




Both characters are controllable within the character selection room before the game even begins, in order to provide a gentle practice environment where nothing can go wrong and they can try out different buttons, keys and – in the case of the camera controls – body movements.


Because the character selection screen is the first time that the players are exposed to the game's controls, though, we do not wish to overwhelm them with control information and create confusion.  So whilst most of the character controls are available at this time, the on-screen UI only informs the player of the existence of the most vital core controls for navigating the selection room – turn left, turn right and walk forwards.


These controls are overlaid on the camera as a Heads Up Display (HUD) so that even if the player characters walk across them, they are always visible at the front of the screen and the characters walk behind the UI elements.


In the above image, the keyboard and joypad controls for turning and walking are displayed, as the camera control icons have not been created at this stage in the project.  The player can use the RealSense camera to turn 360 degrees left or right with a tilt or a left-right turn of the head, whilst walking forwards is currently achieved by leaning forwards in the chair (this action will likely be changed in the final Project Alloy headset release).




As demonstrated in the image below, other character controls such as crouching and side-stepping are also accessible in the character control room if the player discovers them through exploration.  They are not a requirement to leave the selection room though, ensuring that the first-time player will never become frustratingly trapped in the starting room and unable to progress to the main game.




When the Start control is activated, the game begins, either in full-screen mode if only Player 1 is playing, or in multiplayer split-screen if the '2 Player' option is highlighted at the top of the screen when Start is activated.


Upon starting, a set of doors to the outside world opens up, revealing a much larger environment outside the cozy, closed-off confines of the selection room and beckoning the player or players to move forwards into this new world using the basic navigation controls.




At this point, the full range of controls becomes accessible.  'My Father's Face' uses an innovative control system in which all the control advantages of a camera are available through a joypad, with the button layout designed to maximize comfort whilst carrying out complex body and limb movements with the greatest of ease during walking / running.


On the pad, control of the arms is mapped to the left and right sticks, whilst turning is accomplished with the left and right analog triggers and walking with the left digital bumper button above the left analog stick.  This control layout enables the fingers to flow subconsciously over the pad, enabling the players to navigate, touch and explore the world with all the motion and tactile capabilities of their real-world body.
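The pad mapping described above can be sketched in Unity as follows.  This is a minimal illustration using only standard Unity input calls, not the game's actual code; the axis and button names ("RightTrigger", "LeftBumper" and so on) are hypothetical entries that you would define yourself in Unity's Input Manager.

```csharp
// Sketch of the pad mapping described above, using standard Unity input calls.
// The axis/button names are assumed Input Manager entries, not built-in names.
using UnityEngine;

public class PadControls : MonoBehaviour
{
    public float turnSpeed = 90f;  // degrees per second
    public float walkSpeed = 2f;   // metres per second

    void Update()
    {
        // Turning on the left and right analog triggers
        float turn = Input.GetAxis("RightTrigger") - Input.GetAxis("LeftTrigger");
        transform.Rotate(0f, turn * turnSpeed * Time.deltaTime, 0f);

        // Walking on the left digital bumper button
        if (Input.GetButton("LeftBumper"))
        {
            transform.Translate(Vector3.forward * walkSpeed * Time.deltaTime);
        }

        // Arm control would read the "LeftStick" / "RightStick" axes similarly
        // and drive the arm joints of the character rig.
    }
}
```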




The characters can also be controlled with a keyboard and mouse combo, with the arrow keys the default for turning left and right and walking forwards and back, and the arms and crouch action controlled one at a time with the mouse, its buttons and the scroll-wheel.  The left mouse button opens and closes the currently selected arm's hand, and the right mouse button toggles control between the left and right arm.


An auto-walk control toggle enables the player character to walk and run automatically whilst it is active, freeing up the player's hand to focus comfortably on other controls such as turning and arm movement.  This allows spectacular, complex Power Rangers style moves to be enacted intuitively with the player character.
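The auto-walk mechanic can be sketched as a simple toggle script in Unity.  This is an illustrative fragment, not the game's own implementation, and it assumes a joypad button named "AutoWalk" has been defined in Unity's Input Manager.

```csharp
// Minimal sketch of an auto-walk toggle.  "AutoWalk" is a hypothetical
// Input Manager binding assumed for illustration.
using UnityEngine;

public class AutoWalk : MonoBehaviour
{
    public float walkSpeed = 2f;   // metres per second
    private bool autoWalkActive;

    void Update()
    {
        // Toggle the auto-walk state each time the button is pressed
        if (Input.GetButtonDown("AutoWalk"))
        {
            autoWalkActive = !autoWalkActive;
        }

        // While active, the character walks forwards with no further input,
        // freeing the player's hands for turning and arm movement
        if (autoWalkActive)
        {
            transform.Translate(Vector3.forward * walkSpeed * Time.deltaTime);
        }
    }
}
```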


When the RealSense camera controls are being used, the two arms mirror Player 1's real life arm movements almost 1:1, able to make every movement range of the upper and lower arms that the real arms can.  This opens up almost limitless possibilities for realistic interaction, from hugging Player 2 (or more than hugging!) to operating an in-game computer keyboard or using a handle.


This control capability is also available with the joypad or mouse, but using both of your real arms simultaneously with the RealSense camera creates an incredible level of immersion in the virtual environment that makes you believe that you are really there.  This sensation is amplified when switching the camera mode into the first-person 'through the eyes' view, which lets you see the world as your real eyes do, see your arms moving in front of your line of vision and look down at your own virtual torso and feet.




You are not merely limited to interacting with a friend sitting next to you either.  Players can set up a private online session through match-making and meet in a shared instance of the game world, each player using the controls at their real-world location to independently control their character in the online environment.  This is made possible by the Unity engine's UNet networking system.


When this level of multiplayer virtual character control is made available online to people in any location with a good internet connection, the potential number of applications of the technology – from physically tactile personal long-distance relationships to professional team meetings and collaborative creativity – increases exponentially.


You can view the most recent “tech trailer” demonstration video for 'My Father's Face' (minus the latest character selection UI elements) here:



Link: 'My Father's Face' Tech Trailer 7 - YouTube


'My Father's Face' will be released by Sambiglyon for PC and Intel Project Alloy headset in 2017.

Project Alloy, Intel's new "merged reality" headset, is scheduled to be available for purchase in the Q4 2017 time window.  If you are hoping to develop a large application for Alloy in time for the launch window then you will want to start considering now how to set up a stopgap development environment before a proper Development Kit is available.


This was an approach commonly taken in the videogame development industry in the past, where developers set up PCs with a specification approximating what they thought a new game console's would be, and then switched to actual development kits later in development once the console platform-owner (e.g. Nintendo, Microsoft or Sony) could supply them with one.


In this guide, we will look at some useful guidelines for preparing your project idea for development.


1.  What specification of PC should I target for my development machine?


The official specification for the Alloy headset is due to be released to developers sometime around the middle of 2017.  Unlike headsets such as Oculus Rift and HTC Vive, the Alloy headset will not be tethered to a PC by a cable.  Instead, it will contain a full PC board inside the headset.


One of the few concrete details available is that the headset will use some form of the 7th generation Kaby Lake processor.  As the specification of the GPU that will provide the graphics for the headset is currently unknown, this means that developers should aim relatively low in regard to the graphics power that their application will require.  If the GPU is more powerful than expected then that will be a pleasant surprise.  But if you design an application that requires a high end video card to run well then it will be much harder to scale the application down to meet a lower specification.


This does not mean that you should lower your ambitions.  Instead, you should be aiming to extract maximum performance from the hardware available by creating highly optimized code, art and other project assets.  This is a principle practiced by videogame developers for decades, when their dreams did not quite match the capability of the target hardware.  Indeed, many useful lessons about optimizing for the biggest bang for your processing buck can be learned by looking to the games of the past.


My own Alloy development machine is a Skylake 6th generation laptop with 8 GB of memory and Intel HD 520 integrated graphics.  This was my machine of choice because I believe that it is a reasonable approximation of the hardware that may be found in the final Alloy headset's PC board.  My previous development machine was an i3 4th generation Haswell with 6 GB of memory and a 2013-era Nvidia GT 610 video card.


The video card, even being 4 years old, was a key factor in the performance of my project.  Once the project was transferred to the Skylake laptop with integrated graphics it slowed down noticeably, even though the processor is superior.  Rather than being discouraged about this, I view it as a positive challenge.  As highly optimized as my code is already, I know that there is still more that I can do to squeeze more performance out of it.  And the better the performance that I can achieve on this development machine, the better it will run on final Alloy equipment if its specification exceeds that of my dev laptop.


As an example of where performance savings can be made by thinking carefully about your design: in the Unity game creation engine that RealSense is compatible with, the amount of graphics processing required can be reduced by using a method called Static Batching.  This is where you place a tick in a box labeled 'Static' for objects that are stationary and will never move.  Unity will then group objects that share the same material into a combined 'batch', reducing the number of draw calls needed to render them onscreen, and so the overall project should run faster.
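Besides ticking the 'Static' box by hand in the Inspector, objects can be marked for Static Batching from an editor script.  The sketch below uses Unity's real GameObjectUtility editor API; the "Scenery" tag is an assumption made for illustration, not something the original project defines.

```csharp
// Hypothetical editor helper: marks every object tagged "Scenery" as static
// for batching purposes, equivalent to ticking the 'Static' box by hand.
using UnityEngine;
using UnityEditor;

public static class StaticBatchMarker
{
    [MenuItem("Tools/Mark Scenery As Static")]
    public static void MarkSceneryStatic()
    {
        foreach (GameObject go in GameObject.FindGameObjectsWithTag("Scenery"))
        {
            // Flag the object for Static Batching only
            GameObjectUtility.SetStaticEditorFlags(go, StaticEditorFlags.BatchingStatic);
        }
    }
}
```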


2.  What control methods will Alloy support for my application?


Previous Intel demonstrations and hands-on sessions by developers at events where Intel has presented the in-development headset give us some idea of what to expect.  Via its "inside-out" tracking, Alloy can - like the RealSense Developer Kits - track hand movements / gestures and facial movements.  So if you have experience with developing RealSense applications with the Developer Kit cameras then that knowledge should be relatively easy to adapt for Alloy applications.


Alloy has also been shown to be compatible with physical handheld controllers with 'six degrees of motion' - forward, back, up, down, left and right.  Until final hardware is available, using an existing Bluetooth-enabled handheld motion controller such as PlayStation Move with your development PC is likely to be sufficient to prototype such controls.


In regard to locomotion, an Alloy application can update the user's position in the virtual environment as they walk through a room with their real-life feet.  If your application will be a sit-down experience though then you may find it easier to assign movement to a hand gesture via Alloy's five-finger detection capability, or to a button on the handheld controller.
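A gesture-driven movement control for a sit-down experience could be prototyped along these lines.  Alloy's gesture API has not been published, so the gesture flag here is a placeholder that your hand-tracking code would set; only standard Unity APIs are used.

```csharp
// Sketch of gesture-driven locomotion for a seated experience.  The
// 'moveGestureActive' flag is a stand-in for whatever detection the final
// Alloy SDK provides (e.g. a closed-fist gesture), set by code elsewhere.
using UnityEngine;

public class GestureLocomotion : MonoBehaviour
{
    public float moveSpeed = 2f;      // metres per second
    public bool moveGestureActive;    // set by your hand-tracking code

    void Update()
    {
        if (moveGestureActive)
        {
            // Move the user forwards along their current facing direction
            transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);
        }
    }
}
```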


3.  How can I truly take advantage of this new "Merged Reality" medium of bringing real-world elements into a virtual environment?


You may be interested in also reading my article on designing applications with "User Imagined Content" for the Alloy headset.


Advanced 'User Imagined Content' Design Principles For Intel Project Alloy Applications


In conclusion: develop smart and aim high!

Edit: PDF manuals for the R3 SDK can be found in the following folder:


C:\Program Files (x86)\Intel\RSSDK\docs\PDF


The manuals for particular SDK optional modules seem to be added as each module is installed.


Hi everyone,


I have been investigating the process for setting up the RealSense camera in the Unity game creation engine with the '2016 R3' SDK.  Up until now, attempts to get it working have failed, because the structure of the R3 SDK is very different to the previous R2 version.  I have made some progress though and will share it in this article, updating it as new discoveries are made.



Download the 2016 R3 SDK at this download location and install it:


Intel® RealSense™ SDK | Intel® Software




Download and install the 'Essentials' module first, and then any optional modules from the same page that your project may require (Face Tracking, Hand Tracking, etc).


When downloading some of the optional modules such as Face and Hand, you may encounter an error message during installation about not being able to connect to the network to download a particular file.  This is the 'Visual C++ 2015 Redistributable'.  Although this message is alarming, it seems to occur not because the file is missing from the download, but rather because it is already installed on your machine and therefore does not need to be installed again.




If you have used earlier versions of the RealSense SDK with Unity before then you should be prepared to need to make big changes in your thinking, because the R3 SDK's Unity implementation is completely different, both in folder structure and the names of the DLL library files that make the camera function.


Because of this, it is fruitless to try to import RealSense's new Unity implementation into an existing RealSense-equipped project, as you will only get red error messages due to your project being unable to locate the files that it is looking for.  The folder and file changes render the Unity Toolkit files from SDK versions up to 2016 R2 practically useless in their current form.  You will therefore have to start fresh with a brand new Unity project file.




Open Unity and go to 'File > New Project' to start a new, clean Unity project.








The default installation location for the Unity-related files for the RealSense R3 SDK on your computer should be:


C:\Program Files (x86)\Intel\RSSDK\framework\Unity


As you can see below, the structure of R3's Unity framework is very different to the framework you may have used in previous SDK versions.




The files listed in your particular PC's folder will depend on which of the optional modules you downloaded, if any.  There should at least be an 'Intel.Realsense.core' file there, representing the 'Essentials' module, with further files added such as Face and Hand when you install optional modules.




Double left-click on the 'Intel.Realsense.core' file whilst Unity is open.  This will cause an 'Import Unity Package' window to pop up in Unity.




Left-click on the 'Import' button at the base of the window to import the files of the 'Essentials' RealSense module into your new Unity project.






Repeat the process for all of the optional module files that you have in your SDK Unity framework folder until they are all imported into the 'RSSDK' folder in your Unity project.




If you browse through the folders of the optional modules then you can see how much it has changed from the Unity Toolkit of version R2 and earlier.  Instead of only two primary DLL library files to operate the camera, there is now a separate DLL file for each optional module.




The original familiar library files have also disappeared, replaced in the Core folder by 'Libpxccore_c'.



It is clear, then, why attempting to use these files in an existing RealSense-equipped project in Unity that contains the old Unity Toolkit files generates red errors.




Having completed the setup of our new 2016 R3 SDK project in Unity, you can now run the new project for the first time.  And the result is ... nothing happens to the camera at all, not even a green light.  It is some kind of forward progress from having red errors though!




Quite simply, because there are no sample scripts provided with the R3 implementation of Unity, the scene does not know what to do with the camera because there are no scripts telling it to activate.


Even if a simple cube object is created and a test script placed inside it that contains RealSense camera code, it red-errors because the structure of the RealSense implementation in Unity has changed so much.


What will be necessary from this point onwards is to work out how to write scripts that will work with the new R3 structure.


It is likely that the new modular structure of the RealSense SDK from '2016 R3' onwards will be the standard set for subsequent RealSense SDK releases in 2017 / 2018, and that 2016 R2 will be obsoleted in the same way that previous SDKs before R2 were.


Developers who wish to make use of Unity therefore stand the best chance of future-proofing their applications and making them easier to upgrade by adopting and learning the new modular system now, rather than persisting with the previous Unity implementation that spanned from the launch of RealSense in 2014 to version 2016 R2.


Continuing the process of getting started with the 2016 R3 SDK in Unity: although R3 is not supplied with a Unity Toolkit package of tools and sample scripts like in the previous SDKs due to R3's new and very different structure, a sample Unity program is provided - a Unity version of the 'RawStreams' program that regular users of the RealSense SDK will be familiar with.


The default installation location for this sample is: C:\Program Files (x86)\Intel\RSSDK\sample\core\RawStreams.Unity



This sample is not yet listed in R3's Sample Browser application.  So in order to make it runnable, the folder needs to be copied to a location where data can be saved to it as well as read from it.  The desktop is a suitable location to place it.


Right-click on the folder and select the 'Copy' option from the menu.  Then go to the desktop, right-click on it and select the 'Paste' option from the menu to place a copy of the folder there.



As usual with RealSense's Unity sample programs, it can be run without setting up a new Unity project by:


- Starting up Unity

-  Clicking on the 'Open' option on its project selection window

- Browsing to the folder containing the RawStreams.Unity folder, highlighting it and left-clicking the 'Select Folder' option to open the sample in Unity.




Upon clicking the 'Select Folder' option, you will be notified that the version of Unity that you are opening the sample in is newer than the version that the sample was created in, assuming that your Unity version is newer than 5.2.3.  Left-click the 'Continue' button to proceed with opening the sample.




Once Unity has updated the sample's files, it will open in its default New Project view.




We need to load the sample's Scene file into Unity before we can use it.  Left-click on the 'File' menu and select the 'Open Scene' option.  Browse to the RawStreams.unity > Assets > Scenes folder and select the file called 'main'.




The RawStreams sample project will now load into Unity.




Left-click on the small triangular 'Play' button at the center-top of the Unity window to run the RawStreams.Unity sample program.


Success!  A dual window stream of the RGB and depth cameras is displayed.




Having successfully run the new RawStreams_Unity sample, we will open its 'RawStreamsController' script in the Unity script editor to learn more about how scripting works in Unity under the 2016 R3 SDK.


Left-click on the object in Unity's left-hand Hierarchy panel called 'RawStreamsController' to display its settings in the right-hand Inspector panel, including the 'RawStreamsController' script file that provides the programming for the sample program.




Left-click on the small gear-wheel icon at the end of the row containing the script's name to open the script's menu, and left-click on the 'Edit Script' menu option to open the script in the Unity script editor.




using UnityEngine;
using System.Collections;
using Intel.RealSense;
// For each subsequent algorithm module: "using Intel.RealSense.AlgorithmModule;"

public class RawStreamsController : MonoBehaviour {

  [Header("Color Settings")]
  public int colorWidth = 640;
  public int colorHeight = 480;
  public float colorFPS = 30f;
  public Material RGBMaterial;

  [Header("Depth Settings")]
  public int depthWidth = 640;
  public int depthHeight = 480;
  public float depthFPS = 30f;
  public Material DepthMaterial;

  private SenseManager sm = null;
  private SampleReader sampleReader = null;
  private NativeTexturePlugin texPlugin = null;

  private System.IntPtr colorTex2DPtr = System.IntPtr.Zero;
  private System.IntPtr depthTex2DPtr = System.IntPtr.Zero;

  void SampleArrived (object sender, SampleArrivedEventArgs args)
  {
    if (args.sample.Color != null) texPlugin.UpdateTextureNative (args.sample.Color, colorTex2DPtr);
    if (args.sample.Depth != null) texPlugin.UpdateTextureNative (args.sample.Depth, depthTex2DPtr);
  }

  // Use this for initialization
  void Start () {

    /* Create SenseManager instance */
    sm = SenseManager.CreateInstance ();

    /* Create a SampleReader instance */
    sampleReader = SampleReader.Activate (sm);

    /* Enable Color & Depth streams */
    sampleReader.EnableStream (StreamType.STREAM_TYPE_COLOR, colorWidth, colorHeight, colorFPS);
    sampleReader.EnableStream (StreamType.STREAM_TYPE_DEPTH, depthWidth, depthHeight, depthFPS);

    /* Subscribe to sample arrived event */
    sampleReader.SampleArrived += SampleArrived;

    /* Initialize pipeline */
    sm.Init ();

    /* Create NativeTexturePlugin to render Texture2D natively */
    texPlugin = NativeTexturePlugin.Activate ();

    RGBMaterial.mainTexture = new Texture2D (colorWidth, colorHeight, TextureFormat.BGRA32, false); // Update material's Texture2D with enabled image size
    RGBMaterial.mainTextureScale = new Vector2 (-1, -1); // Flip the image
    colorTex2DPtr = RGBMaterial.mainTexture.GetNativeTexturePtr (); // Retrieve native Texture2D pointer

    DepthMaterial.mainTexture = new Texture2D (depthWidth, depthHeight, TextureFormat.BGRA32, false); // Update material's Texture2D with enabled image size
    DepthMaterial.mainTextureScale = new Vector2 (-1, -1); // Flip the image
    depthTex2DPtr = DepthMaterial.mainTexture.GetNativeTexturePtr (); // Retrieve native Texture2D pointer

    /* Start streaming */
    sm.StreamFrames (false);
  }

  // Use this for clean-up
  void OnDisable () {

    /* Clean up */
    if (sampleReader != null) {
      sampleReader.SampleArrived -= SampleArrived;
      sampleReader.Dispose ();
    }

    if (sm != null) sm.Dispose ();
  }
}







The header of the script provides our most useful clue about how RealSense camera scripting works in Unity in the R3 SDK.



To specify that the script uses the RealSense camera, we must place in the header:


using Intel.RealSense;


The comment below this line informs us that to access the main and optional feature modules of the R3 SDK, we must use the format:


using Intel.RealSense.AlgorithmModule;


substituting the word AlgorithmModule for the name of the module.


If we revisit the SDK's Unity framework folder then we can find out the module names that the script expects to be provided in the script header if those features are accessed with the script.



We can assume that the Essentials (core) module is already referenced in the script as 'using Intel.RealSense', otherwise the script would not be able to function without accessing the Essentials module.  So the algorithms that should be listed are the additional optional ones that are installed in your Unity R3 project.


If our script were to use the Face (face) and Hand (hand) algorithms, then our header may look like this:


using UnityEngine;

using System.Collections;

using Intel.RealSense;

using Intel.RealSense.Face;

using Intel.RealSense.Hand;


However, we provide this information here just for the purposes of learning scripting in Unity with R3, since the Face and Hand optional modules are not used in the RawStreams_Unity sample.


Subsequent experimentation revealed that if you do not have the referenced modules installed in your project then the module name will be highlighted in red text to indicate that Unity cannot find the module.


Once a module is installed, Unity makes it easy to confirm what the correct algorithm name is by typing the first letter of that algorithm's name after 'using Intel.RealSense.'




The other interesting detail we can learn from the RawStreamsController script is that the Sense Manager component is still used in Unity in R3, just as it has been with previous versions of the SDK.



Finally, if we turn our attention away from the scripting of the sample and look at Unity's 'Assets' panel, a browse through the folder structure of RSSDK and its sub-folders demonstrates the differences between SDK versions R2 and R3 that were shown at the very beginning of this article when setting R3 up in Unity.




The 'Intel.RealSense' file in the Plugins folder is connected to in the scripting with 'using Intel.RealSense', whilst the camera's library driver file in the 'x86_64' folder is called 'libpxccore_c' (matching the 'core' name of the Essentials module), replacing the familiar pair of library files in the 'Plugins' and 'Plugins_Managed' folders that were used up until the R2 SDK.


*  Image (c) Marty Grover, from the forthcoming PC RealSense game 'My Father's Face'


In the fictional headset above, long hours of use are achieved in the untethered battery-driven headset by a ring of ball-bearings inside the yellow component on each side of the helmet.  As the wearer's body moves, the bearings continually roll clockwise and anti-clockwise under gravity through a small electromagnet, inducing an electrical current that is directed to the headset's battery storage - much like the alternator on a motor vehicle engine supports the vehicle's electrical systems with a small charge when the engine is running.




In the fourth quarter of 2017, Intel's RealSense-equipped Project Alloy headset is due to be released for purchase.  The Alloy headset contains a full PC with an integrated graphics GPU, enabling it to be used without a cable tethering the user to an external PC, as is required with the Oculus Rift and HTC Vive VR headsets.


Alloy is not simply a Virtual Reality system though.  It is referred to as Merged Reality, where the user's real-world body and those of other persons nearby are visible in a virtual environment, and real-world objects can be brought into the virtual location and interact with the digital objects in it.


Microsoft uses a similar term - Mixed Reality - for their HoloLens headset, though its approach is the reverse of Alloy's: the user views a real-world environment that has virtual content overlaid onto it.  There is enough commonality between Intel's and Microsoft's approaches though that the two companies have partnered to create a standard for Head Mounted Displays (HMDs).  And, like HoloLens, Alloy will utilize Microsoft's Windows Holographic shell, which is currently used in the HoloLens headset and will be integrated into mainstream Windows 10 in the middle of 2017.


Around this time, Intel will release the detailed specification for Alloy so that developers can begin preparing for it ahead of its release.  We will not preempt that announcement in this article by trying to make our own predictions of the spec's contents.  Instead, we are going to look at design principles that can be incorporated into your Alloy applications to make them truly transformative for their end-users.


With the rise of augmented reality, people are already becoming living avatars via mobile devices such as handhelds and wearables.  But we can take these portable computing technologies even further by using them to draw out dormant mental and physical potential.




Ever since virtual worlds as we know them have existed, the ones with the greatest longevity have been those that have offered User Generated Content (UGC) – whether to a small degree (crafting items from pre-made raw materials) or to a much larger extent (creating complex objects from very basic geometric shapes in online virtual worlds such as Second Life).


Everything goes in circles though, and what was old becomes new again eventually.  As technology shrinks in physical size and clunky interfaces are stripped away to be replaced by more intuitive Natural User Interfaces and Augmented Reality, the barriers between the real world and digital are dissolving: so much so that we are seeing the return of Human 1.0 – the power of imagination or, to put it in a more modern way, User Imagined Content (UIC).


The Buddhist and Hindu religions believe strongly in imagined worlds taking on a real, tangible existence.  These faiths call such manifestations Tulpas.  In Buddhism they are benign, whilst Hinduism considers them to have the potential to be spiritually dangerous, because a person can become so obsessed with fantasy worlds that they lose touch with their real-world life.


Children create UIC every day through play.  They and their friends take a basic story concept from their favorite entertainment media - whether from a book, a television show or movie, or the internet - take on a role from that entertainment, and allot the remaining unfilled character roles of their piece of imaginative theater to invisible cast members.  They are using their minds to paint their stage-play directly onto the canvas of the real world.  It does not matter that others cannot see what they are painting; somehow, like real actors on a stage or set, a group of young friends knows what is happening in their live-action recreation without having to explain it to each other, because they know the basic rules of the original media that their play is based on and create something new within the safety of that guiding framework.


A merged-reality environment can be used as the basis for superimposing a user's mental imaginings over their virtual surroundings, so that they can attempt to apply their own rules to that environment to test it, bend it to their will, and then break it in a way that makes it even better.  All that the developer needs to do is to give the user subtle direction that suggests how the user may begin understanding the basic rules of that environment - like the starting town in an online Massively Multiplayer game - and then set the user free to explore and adapt the rule-set to their particular brain's own style of processing information, learning from it and turning it into action.  This is not too dissimilar from the principles of a good tutorial at the start of a traditional videogame.




Learning is at its most effective when the user is having so much fun that they do not even realize that they are learning!   This is the primary reason that a school exists – to impart theoretical knowledge that can – sooner or later - be applied to real-world situations.  By giving users the tools to harness their imagination and then setting them free, they can then teach themselves through their play.


During that play, they draw on both conscious recollection and, to a lesser extent, a wealth of information lodged deep in their unconscious long-term memory that their mind has recorded and filed away during their lifetime.  Like the creation of dreams during sleep, those conscious and unconscious pieces of memory combine to generate a narrative that is acted out during play with thoughts and with physical body language (some aspects of which are intended, and others that are instinctive auto-actions.)


My first experience of the concept of optimizing the brain to make the most efficient use of a large volume of stored memories was as a teenager when I watched the Gerry Anderson TV puppet show 'Joe 90'.


Joe 90 - Wikipedia


The basic premise was that a hypnotically pulsing gyroscope machine called BIG RAT was used to install the memories and brain pattern of a specific adult into an ordinary child called Joe as he sat inside the gyroscope.  The memories were stored in Joe's spectacles (and lost from his mind if he removed them), and enabled him to carry out secret agent missions and perform adult actions such as flying a fighter jet.


A little later in life, I learned of the existence of self-help CDs that could be played through headphones whilst a person slept, loading the information into their unconscious and so making it easier to recall that information during their waking life if they were exposed to memory cues that triggered the pre-installed data and pulled the memory (or a shard of it) into their conscious mind.


Alloy merged-reality application designers can give their users prompts through sounds and imagery that trigger instinctive emotional and physiological responses - such as "fight or flight" fear responses - that are common to most people due to being hard-wired into the brain and nervous system from birth.  This is not unique to Alloy though, as any VR headset could provide an environment with such cues.


Where Alloy can truly take this further into new and powerful "living and learning" experiences is through its Merged Reality nature, in which the real-world body is not shut out from the virtual world but can instead become a part of it.




Once the user enters a virtual environment and is given prompts in that environment that are designed in such a way that the mind can subconsciously tie them to information it has previously absorbed, the mind can bring that information naturally to the surface of the consciousness in an “Oh yeeahh, I get it!” moment.  Once they have grasped the basics of a concept from these realizations, they can then explore that idea further using the tools available to them in that virtual world.  If a concept is presented to them in a way that is compelling enough to powerfully resonate with them, they are likely to want to continue exploring it without being asked, thinking about it even after taking the headset off and making plans that they can take into their next session in that environment.


This can form the basis of an endless learning loop (virtual world, real world, virtual world, real world), each new cycle building on the results of the preceding one in a continuous, harmonious amplification.  In the field of electronics waveform theory, this is known as Constructive Interference.  Conversely, when learning is disjointed and conflicted, the progress of previous sessions is canceled out by Destructive Interference.


It is desirable that a new idea is followed up on as soon as practically possible, so that the fresh knowledge does not have a chance to be forgotten before it can be reinforced.  Research into sales techniques has demonstrated that a customer will begin to forget the details of a sales pitch after three days, and after a week they can barely remember it, giving a rival company an opportunity to make a successful competing pitch to that customer.  This is why a salesperson is keen to close a deal as soon as possible after the initial sales lead.


In the case of the developer of Alloy applications, it could be said that the developer's competitor is everything that captures the user's attention in between one headset session and the next – television, the internet, sports and hobbies, friends and family, etc.  It is in the interests of the developer that the user returns for their next session with the headset before daily life can erode the learning momentum that has been built up.


Like a sales executive, a developer should therefore seek to continue the learning dialog with their users at the first opportunity, whilst avoiding pressure tactics and giving the user space to return at a time of their choosing, when it is most comfortable for them to do so.  An excellent way to maintain the energy and excitement of their headset session is to lead them to real-world content such as websites and videos that they can explore at their leisure.  This serves not only to keep the user thinking about their next visit to the Alloy application but also feeds new information into their mind that their subconscious can draw upon during a headset session to create even more learning and progression possibilities.




One of the biggest obstacles to enjoying virtual reality / merged reality headsets is the physical headset itself.  Unlike subtle augmented-reality eyewear such as Google Glass, any person wearing a full headset risks ridicule from anyone else in the room who is not wearing one and is not sympathetic to the charms of such equipment.  The problem with play is that while you can do it privately in isolation to a certain extent if you do not want to involve other people, it is often also a social expression that relies on an individual having a certain amount of self-confidence to share the play experience with others, or even to accept being observed at play by others.


As traditional interfaces such as joypad, mouse and keyboard give way to touch and motion detection though, it is more vital than it has ever been that people are equipped from an early age to lose their fear of public performance, as they otherwise will risk falling behind their peers in their personal development.  Physical activities such as live roleplay provide an outlet for frustration, and so youths should be learning how to harness their full potential and energies early on in life so that they grow up knowing how to direct their mind and hence their behavior and responses to difficult situations.


Having doubts and fears during play introduces turbulence into one's thought processes that can quickly lead to mental paralysis.  Once that happens to a user then they are likely to be extremely averse to taking part in subsequent occurrences of that activity.  The same emotion that can paralyze a non-confident person can though, when channeled correctly, be used as a powerful fuel for harnessing their latent potential. Once they see for themselves what they are truly capable of, the confidence problem will be taken care of because, if they believe with a conviction beyond certainty that they can accomplish a goal even if they have not yet attempted it, the barriers to that aim will crumble.


Fictional heroes such as the Power Rangers draw the strength to survive from their determination to win, and so their mind empowers their abilities, not the other way around.  With a Positive Mental Attitude, the entire human body becomes a living 'power morpher' that enables a person to attain a “heroic” state.  Users who can break through perceived limitations – restrictions that may have developed as a result of low self-esteem or a negative home and peer environment - will effectively become 'super powered' in regard to the amount of potential that they can harness, and others who see them will be inspired to follow their example.




Personal development plans for an individual user can be built around a simple formula. Three of the most important requirements for unrestricted access to inner potential are Calmness, Belief and Need.  These components can be better understood if described in terms of the parts of a kitchen tap and the water inside it.


Calm is represented by the water itself.  Calmness clears the mind of distracting thoughts, doubts and fears and gets it ready for the issuing of thought commands, so that you can access and summon your potential and then use it for something.  Until you have Belief and Need though, most of that water - or in other words, our potential - will remain inside the tap's water pipe and will not be usable by us.


Belief is the water pressure in the pipe that determines how strongly the water flows out when it is released by the turning of the tap-head.  The greater our belief in ourselves, the more of our untapped potential our mind will release.


Need is the twistable head of the tap that controls how much of our potential is released.  When we turn open our inner tap-head, we make it possible for the dormant potential inside us to rush out.


The urgency of your Need to draw on your capability determines how much potential will be released by your mind so that you can use it.  The maximum amount of potential that a person may summon can be increased if they have a combination of Perfect Calm, Strongest Belief and Strongest Need.  The summoning, release and use of inner potential is a team-up between mind and body.


The mind should always be in charge of the body, not the other way around.  Once the user's mind has become familiar with the codes to unlocking their potential through tech-assisted play, they will be able to call on it at will in everyday life without the need for hardware.  When they have been shown that they can achieve something once, they will, from that first-hand experience, know they can achieve it again at any time and place in the future.  We mentioned earlier how humans draw on mental resources such as short and long-term memories.  In fact, “Drawing Out” is precisely how we can go about turning the theoretical power-formula of Calm-Belief-Need into living reality.




Through play inside and outside of an Alloy session, users can – once they become less self-conscious - use their mind and / or body movements to 'direct' their psychology / physiology and physical posture as though real life is a movie set where they, as director, can influence any aspect of their life through a fully energized state of mind and body where doubts, negativity and self-imposed limitations fall away.


For younger users, there are plenty of examples in popular pre-teen and teen media that can be referred to in order to defuse those users' concerns that they would look silly representing their thoughts with movement (a couple of prime examples being the “jutsu” hand movements used to activate special powers in the hit teen ninja cartoon series 'Naruto,' or the set of hand motions made with portable devices in series such as 'Power Rangers' and 'Digimon').


Personal visualizations can be externalized with Drawing Out so peers can experience them too, the choreography making an instant impression on the audience in the same way that they are often hypnotically enraptured by the dance moves of their favorite music stars.  Alloy, which enables users to incorporate the movements of their entire body into manipulation of a virtual environment, is a perfect medium for doing so.


After a formula for success has been demonstrated by one person, that success can be replicated by others if they closely follow the observed formula themselves, with some tweaks of their own to suit their individual circumstances.  Careful iteration on what has gone before and been proven to work is a key principle of progress in most aspects of the real world – science and technology, business, sports and innumerable other areas.  In fact, it is a core principle of science that a finding cannot be accepted until it has been replicated a number of times with the same results.  And once the basics are proven, the theory is developed and greater discoveries are made, which in turn are tested, proven and iterated on yet again.




The future of living and learning is not a person passively sitting with a videogame controller and moving an avatar, but putting the whole of their mind and body into their interactions with the world.  It is not enough though that a system should work.  To ensure the best results from it, especially with younger users, it should also be so simple to use that its mechanics are invisible to the user, and they are able to focus exclusively on succeeding at the task at hand.


To make this possible, we need to essentially automate the potential-harnessing interactions between the Alloy headset and the mind of the student, so that once they achieve the super-potentialized state they can maintain it for the rest of the session without thinking about it, until the roleplay ends and they can relax and “power down.”


When designing a learning system based on the principles of Drawing Out, a useful benchmark for ease of use to bear in mind is “Is it likely to be usable by a profoundly physically impaired person who can at least bend a part of their body, such as a finger or toe?”  To achieve such a level of automation of a user's thought processes that this becomes feasible, developers can utilize a combination of the human nervous system and physical feedback sensations.


One of the almost endless amazing things about the human brain is that, being a super-computer of unparalleled complexity, it can handle more than one activity at the same time.  You can be thinking about something and at the same time have your brain work on another task automatically in the background.  You can program the brain to carry out a specific internal process when you move a part of your body in a certain way (for example, pushing your thumb and fore-finger together.)


In the psychological science of Neuro-Linguistic Programming (NLP), the touch-based programming instruction is known as an “anchor” and the stimulus that activates that instruction is called a “trigger.”  The more that you practice the chosen touch-gesture anchor whilst thinking about the task you want to be activated when the movement is made, the more certain your brain will become that it should start the designated processing task, or enter into a particular psychological / physiological state, whenever it detects that particular body-movement trigger.


The brain will remember to keep automatically carrying out the task that you have assigned to the chosen physical gesture for as long as you keep that body part or parts tensed.  This is because it is constantly being reminded to do the task by the feeling of tension which travels to the brain through the nervous system.


To demonstrate the concept clearly and powerfully, let's do an example exercise.


Step One

Think in your mind an instruction that clearly describes to the brain what you want to happen whilst a particular part of your body is tensed.  The example thought-command we will use in this exercise is “Give me more energy.”


Step Two

The next step is to select a part of your body to place in tension.  You do not need to be looking at that part in order for the technique to work.  In this exercise, we will utilize a finger as our means of creating physical tension.  Bend a finger – any finger – on one of your hands a little and then hold it in that position ... not a lot, just enough so that you have a continuous feeling of tension in that digit.  The feeling in the finger as you keep it tensed will keep reminding the brain that it is supposed to raise your body's energy level for as long as the nervous system keeps telling it that the finger is tensed.


Step Three

Think or do anything else that you want to while keeping the finger bent.  You will find that you can now do two things at the same time – what you would normally be doing and the additional task that you have programmed into your finger without any division of concentration!


When you are ready to end the exercise, simply relax your finger to cease the mental programming instruction that was linked to that finger.




We can make our physical feedback system even simpler if we replace the need for a conscious bending action with physical feedback from the hardware, such as a vibration / rumble function.  It still works because the rumble feedback reminds the user's mind to process a set mental instruction (e.g. our aforementioned energy-raising command) in place of the reminder transmitted to the brain by the nervous system via body-part tension.


A developer could program their application to send pulses of vibration of varying durations and intensities into the user's body.  If the user is taught that these pulse patterns correspond to specific meanings, then this would be another form of instinctive non-visual feedback.  Whilst it seems unlikely that the final Alloy specification will contain a rumble feature in the headset, Alloy does also support handheld physical motion-tracked controllers with six degrees of freedom (movement along, and rotation around, three axes), and these controllers could easily have a rumble feedback function incorporated into them.




The Alloy headset is constantly scanning the real world environment and absorbing the details of objects observed by its built-in RealSense camera.  It can then convert those real world objects into fictional objects that have roughly the same shape and proportions as the original object, enabling the contents of a real-life room to be incorporated into a virtual simulation in the headset.


The presence of such objects can actually be detrimental to immersion in a simulation in some cases though.  The effectiveness of a pretend reality relies upon believing completely in the imagery that you create, and that belief can be eroded if you are consciously aware of stimuli that are contrary to the alternative reality you are trying to hold onto in your mind.


As an example, you may be sitting on a real-world chair whilst using the Alloy headset.  If the virtual environment also contains a chair that you can sit down on, then the physical sensation of the real-world chair beneath you provides a perfect sense of "being there" in the simulation.


If, however, the simulation involves a scenario such as flying through the air as a bird - like in Ubisoft's 'Eagle Flight' VR game - then the sensation of the physical chair beneath you would act as a constant reminder that you are still physically grounded.  Even standing up during the experience would not help much with this particular scenario, because you will still feel the ground beneath your feet.  Short of installing a body-lifting wind tunnel in the floor or floating in a swimming pool, it would be hard to shake off the cognitive dissonance caused by the feeling that you are not actually in the circumstances that you are trying hard to believe yourself to be part of.




If the user cannot block out the physical stimuli that cause them to doubt the scenario, then the developer can reinforce their mental conviction about the truth of the imagery by utilizing disruptive sensations as an integral part of the scene.


Let's say that the user is lying down on their bed or couch with the Alloy headset on in the real world, trying to convince themselves that they are standing in the hall of a magical castle.  They may be able to clearly see themselves and the castle room in the headset, but it is not a perfect simulation, because the attempt to convince the mind that they are standing up in the scene is being disrupted by the sensation of the surface of the bed or couch under their back.


But the problem can be solved simply by changing the narration of the roleplay scene to take account of what the user is feeling.  In the example of the magical castle, if the user is lying down instead of standing up - a status that could perhaps be automatically detected by the sensors in the headset - then the simulation could change the description of the scene from standing in a hall to lying on a bed in a castle bedroom, similar to how a smartphone or tablet changes its screen orientation from portrait to landscape when turned on its side.


By incorporating what particular parts of your body are feeling into the simulation, the mind will become even more convinced that the artificial reality that you are visualizing is true because the physical stimuli will back up your belief in the truth of the fantasy.




You may have heard of the well-known saying “The thought is the deed.”  In the realm of psychology, a physical condition can manifest because of something that is happening in the mind (a very basic example being the creation of spots on the skin by a state of high stress.)  Therefore, we can adjust our physiology in specific ways by crafting a narrative with VR scenario programming or with our internal mindscape.


If we want to think about the idea of role-playing unreal situations in an otherwise real world, we can look at the example of 'magical transformation' characters in classic cartoon fiction, such as Sailor Moon, He-Man and She-Ra, who appear to be surrounded by a magical field that changes their physical form from an ordinary, everyday one to a super-powered one.  Thinking about the mechanics of such fictional universes can provide useful insights about the concept of incorporating unreality into the real world.


If we were to try to convince our mind that we were in possession of Prince Adam's magical Power Sword (with which he transforms into He-Man by holding the sword aloft and speaking the magical words, “By the power of Grayskull”), it would not be enough for us to just imagine the Power Sword in the hand, say the magic words, and expect to be instantly transformed.  Instead, we might have to set up all of the conditions in our visualization that make the transformation possible in the fictional world of He-Man.


These would be:


- That you are holding the sword when the special activation words are spoken – a belief that would be reinforced by holding any long, thin device that can be gripped with a closed hand, such as the previously mentioned Alloy physical handheld controllers.


- That you believe completely that when you speak the magic words “By the power of Grayskull”, the expected release of power from the sword will occur (this could be synchronized with a burst of vibration from the controller held in the hand to reinforce the belief that something is happening); and


- That when the power is released, your physiology and / or psychology will be changed positively in some way, even if you don't literally transform into He-Man.




We mentioned earlier about the importance of iteration of accepted standards over time in order to continuously improve them year on year.  This is true for the Drawing Out system detailed in this article as well.  Once techniques have been documented in some manner then, like a particular martial art, they have a chance – if they are subsequently widely accepted - of becoming a standard that others can use as a reference to successfully train themselves.


They can then use the rules of that standard as the basis to develop their own take on it and hopefully also share their iterations with others as the originator did so that that iteration can also be built upon.  If an open-source community can be developed around the standard then Alloy developers can create and share their own modules to plug into the framework of the core standard, and also contribute feedback to development of the core itself.




Alloy has enormous potential for use in school classrooms, in scenarios where either every member of the class has an Alloy headset or - much more affordably - one student at a time is wearing the headset and the rest of the class are excitedly participating as advisors to guide the wearer.


Involving additional participants in an Alloy session has the potential to be a recipe for chaos though unless all members of the group are working to a common goal.  For inspiration about how to arrange this, we can look to the 1970s, where the British boys action comics 'Warlord' and 'Bullet' created a powerful sense of purpose among their young readers by encouraging them to form 'secret agent clubs' with other kids in their street and their neighborhood.


Club members built club-houses and carried out activities such as charity work, exercising (to become fitter secret agents), going exploring, investigating and solving local minor crimes, helping people in trouble and giving first aid.  The comics supported the clubs by publishing accounts of their heroic adventures on the letters page (some of which were likely fictional) and rewarding contributors with practical prizes like a pennant to hang on their club-house wall or a pendant to carry with them to inspire courage.


A teen reader who wore a pendant whilst they tackled an army assault course reported that they felt that their pendant gave them extra strength to climb over 14 foot walls easily.  Another pendant-wearer attributed greater success at school sports to it.  These instances of heightened performance were likely a form of positive stimuli feedback like that described earlier in this article, in which absolute belief in the metaphysical properties of an inanimate object or the values that it represented translated into a greater release of dormant physical and mental potential.


Another key tenet of the clubs was information security.  The comics suggested that each club of young agents use secret passwords and make up its own message encryption code so that its communications could not be cracked if intercepted by non-agents (usually brothers and sisters who weren't in the club!).


Clubs also often found their own solutions to problems that the comic editors never anticipated.  Members who reached their mid-teens and felt they were too old and grown-up for their club, and should therefore quit, instead became its leaders, acting as the “boss” who coordinated activities and arranged missions for the younger members.  Some clubs even charged a small weekly fee that was pooled to purchase equipment and uniforms for members, such as compasses, first-aid materials and wet-weather boots.


It is no longer the Seventies, though, and society has changed greatly - for the worse in some respects.  Kids no longer care about health as much as they used to, and the risk of child endangerment in public places is higher.  So whilst it is not realistic to closely emulate that era, it offers lessons that can be adapted for the present day.  Some examples include:


-  Documenting students' progress on a blog or social network, similar to how clubs reported their news to each other via the letters pages of the comics.


-  Group members could access mission details and logs online and upload evidence collected during investigations, which other team members could then sort and process into actionable information for the current Alloy wearer to act upon as group leader.


-  Emulate the courage-boosting pendants with additional internet-connected wearables that have a scientifically provable and easily measurable effect on the wearer, giving them an incentive through gamification to live a healthier lifestyle.


-  Adapt the password and code-creating practices into an educational message for students in the present about taking care of their online safety and security.


-  Use Alloy as a means for students to make games, with the ability to incorporate real-world bodies and objects serving as a new form of video game development that focuses primarily on creating and performing outstanding stories.  Instead of having to learn drawing and coding, students can use their real-world bodies, hands and equipment in conjunction with the virtual content, acting out scenarios without having to awkwardly use traditional control methods.


The only experience they need in order to use the scenarios that they create is therefore the life experience they have been accumulating since birth, because they can interact with the simulation using their bodies in the same way that they would with equivalent objects in the real world.  The integration of living users with virtual tools also lets those users attempt to solve the problems they are dealing with through 'sandbox' experimentation, testing possible solutions in a way that is not possible when sitting around a table or in a lecture hall in the real world.


There are also opportunities for teachers and school administrators to expand their Alloy teaching program into an open discussion with massed participants from other schools in a district / county / state within an online-enabled merged-reality environment, sharing best practices and thus filling in specific knowledge gaps at each school.  The professional participants in this large-scale collaboration could also engage in 'pro tournament play' in the sandbox simulation, taking turns to try out different approaches in order to see whose methods work best.




A system that encourages mass interaction can also be used by schools as a model of mass cooperation that regards everyone in the school – from administrators through to teachers and students – as individuals who, when they come together, resemble the cells in the structure of a tree.  In a real tree, sap rises from the roots, up through the trunk, and ultimately arrives at the leafy canopy at the top.  This means that nothing living at the top is immune to what is happening at the ground and middle levels.


Also like a real tree, there are both helpful and harmful or disruptive elements in the structure of the school.  Individuals could be regarded as 'Tree Cells' (a play on 'T-cells', the cells in animal immune systems), with the aim being to ensure that helpful Tree Cells greatly outnumber harmful ones, and that students and staff who are negative cells – perhaps because of work stress, family background, learning difficulties or other factors in their lives – are helped to become healthy cells too.  This philosophy mirrors the saying “it takes a village to raise a child”, with the school community as a whole refusing to turn away and abdicate responsibility for someone in that community who needs help – even if they take some persuading to accept it.


Each individual in a school can provide help in the areas where they have the power to intervene: students helping students (and making teachers aware of significant problems among friends that are beyond a student's ability to help), and staff helping other staff.  If common problems keep occurring, they could be addressed with group training programs, making use of tools such as the Rift headset and mixed-reality presentation technology.


In the US military, soldiers are paired up as 'Battle Buddies' in a program designed to reduce suicides, with each soldier looking out for the well-being of their buddy both in battle and outside of it.  Whilst a single-digit minority of soldiers surveyed about the program strongly resented being so deeply responsible for someone else's welfare, the majority believed it to be an excellent idea.  When people are mutually dependent on each other for success, the better one performs, the better their partner will do in their respective role.




The Alloy headset, with its ability to enable free, untethered movement, is a game-changer for virtual reality.  As powerful as it is in its own right, when combined with the techniques described in this article its potential becomes limitless, creating a foundation to reach even greater heights as new simulation technologies emerge!

Foreword: Grateful thanks go to Jesus Garcia of Intel customer support for contributing to the information in this article.


In this article, we highlight questions and answers regarding technical information about the Intel® RealSense™ range of cameras, and explain how technology companies may integrate RealSense into their products.


1. What are the types of RealSense camera available and their approximate range?

2. Where can I purchase RealSense Developer Kit cameras?

3. What is the software development tool chain for the RealSense cameras?

4. Which operating systems (OS) do the RealSense camera SDKs support?

5. What are the model and part numbers of the RealSense Developer Kit cameras?

6. What are the UL laser safety certificate numbers for the RealSense camera range?

7. Where can I find the official data sheets for the RealSense camera range?

8. Where can I find other self-help resources for the RealSense camera range?

9. How may I use RealSense technology for commercial purposes?

10. How may I ask about purchasing RealSense components in bulk quantities?

11. What will the commercial price per unit be? Can I obtain a discounted price for bulk purchases?

12. Where can I find customer support and tutorials during the development of my RealSense project / product?


1. What are the types of RealSense camera available and their approximate range?


Short Range


SR300 (1.5 m)


Long Range


R200 (approx. 4 - 5 m)

ZR300 (3.5 m indoors and longer range outdoors)


2. Where can I purchase RealSense Developer Kit cameras?


Development kits are available in the Intel Click store. We recommend purchasing kits from Click wherever possible. Intel ships Developer Kits to numerous countries, including:


United States, Argentina, Australia, Austria, Belarus, Belgium, Brazil, Canada, China, Czech Republic, Denmark, Finland, France, Georgia, Germany, Hong Kong, Iceland, India, Indonesia, Ireland, Israel, Italy, Japan, Latvia, Liechtenstein, Lithuania, Luxembourg, Malaysia, Malta, Mexico, Netherlands, New Zealand, Norway, Philippines, Poland, Portugal, Romania, Russia, Singapore, Slovakia, Slovenia, South Korea, Spain, Sweden, Switzerland, Taiwan, Turkey, UAE, United Kingdom, Vietnam.


Please check individual store listings to confirm whether a particular product can be shipped to your country. If Intel does not ship directly to your country, you can enquire with your local approved Intel product distributor.


USA and Canada


Rest of world


3. What is the software development tool chain for the RealSense cameras?


Intel's RealSense SDK currently supports the F200 camera (the now-unavailable predecessor of the SR300) and the SR300 camera. 


Active development of tools for the R200 ceased at the '2016 R2' SDK version, though the open-source SDK Librealsense will continue to support it with updates.


The ZR300 camera is supported by the RealSense SDK for Linux.


4. Which operating systems (OS) do the RealSense camera SDKs support?


The RealSense SDK supports the R200 on Windows 10 or Windows 8.1 (with an OS update), and the SR300 on Windows 10 only.


The RealSense SDK for Linux is only supported on Ubuntu 16.04 running on Intel® Joule 570x.


Open-source RealSense camera support for Linux and Mac OSX users of the F200, R200 and SR300 cameras is also provided by the Librealsense SDK.
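To give a flavor of what working with Librealsense looks like, here is a minimal depth-capture sketch in C++. It is illustrative only: it assumes the legacy Librealsense (1.x) library is installed and a supported camera (F200, R200 or SR300) is connected, and it omits the fuller error handling a real application would need.

```cpp
#include <librealsense/rs.hpp>
#include <cstdint>
#include <cstdio>

int main() try
{
    rs::context ctx;                          // enumerates connected RealSense devices
    if (ctx.get_device_count() == 0)
    {
        printf("No RealSense device connected\n");
        return 1;
    }

    rs::device * dev = ctx.get_device(0);     // use the first camera found
    printf("Using device: %s\n", dev->get_name());

    // Configure and start depth streaming
    dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
    dev->start();

    dev->wait_for_frames();                   // block until a coherent frame set arrives

    // Raw depth values are 16-bit units; multiply by the depth scale for metres
    const uint16_t * depth =
        reinterpret_cast<const uint16_t *>(dev->get_frame_data(rs::stream::depth));
    printf("Depth scale: %f metres/unit, first pixel: %u units\n",
           dev->get_depth_scale(), depth[0]);

    dev->stop();
    return 0;
}
catch (const rs::error & e)
{
    printf("RealSense error: %s\n", e.what());
    return 1;
}
```

Note that SDK 2.0 (the "development" branch) replaces this device-centric API with a higher-level `rs2::pipeline` abstraction, so code written against the stable branch, as above, will need porting if you move to the new SDK.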


5. What are the model and part numbers of the RealSense Developer Kit cameras?




Model #: VF0830

Part #: MM#939143




Model #: 06VF081000003

Part #: MM#943228, H89061-XXX




Model #: 995176

Part #: 995176


6. What are the UL laser safety certificate numbers for the RealSense camera range?




Laser camera module, "Intel® RealSense™ 3D Camera Front F200", Model(s) H53987-XXX (A)


R200 / LR200


Notes: the LR200 is a near-identical version of the R200 with improved RGB quality and newer IR emitter components.  There are two listings for the R200 because there have been two versions of its circuit board: the original, and a smaller, more power-efficient later version.


Laser camera module, "Intel® RealSense™ 3D Camera Rear LR200", Model(s) J31114-XXX(A)

Laser camera module, "Intel® RealSense™ 3D Camera Rear R200", Model(s) H55024-XXX(A), H72161-XXX(A)

Laser camera peripheral, "Intel® RealSense™ 3D Camera Rear R200", Model(s) H81017-XXX(A)




Laser camera module, "Intel® RealSense™ 3D Camera Rear ZR300", Model(s) J27384-XXX(A)




Laser camera module, "Intel® RealSense™ Camera SR300 (Falcon Cliffs)", Model(s) H89061-XXX, J26805-XXX (A)


The database that the certificate numbers are sourced from can be viewed at the UL certificate website.


7. Where can I find the official data sheets for the RealSense camera range?








8. Where can I find other self-help resources for the RealSense camera range?


RealSense Web Site (must see)


RealSense Customer Support


9. How may I use RealSense technology for commercial purposes?


The official policy on the Intel online store pages for the RealSense developer kits is:


"The Camera is intended solely for use by developers with the Intel® RealSense SDK for Windows solely for the purposes of developing applications using Intel RealSense technology. The Camera may not be used for any other purpose, and may not be dismantled or in any way reverse engineered."


There are commercial products that make use of RealSense technology, such as drones, laptop PCs and the Razer Stargazer camera (a Razer-branded version of the SR300). The distinction is that they contain the RealSense camera circuit board and do not use the Developer Kits purchased from the Intel store.


The development kits for sale on the Intel store are meant for development purposes only and come with a 90-day return policy. They are therefore not recommended for commercial use or productization.


For commercial use, Intel recommends that you purchase camera modules (the boards that are inside the development kits) from Intel approved distributors.


USA and Canada


Rest of world


10. How may I ask about purchasing RealSense components in bulk quantities?


If you want to use RealSense cameras in a product and want to buy in bulk, you should purchase the camera modules from Intel distributors. Just about any Intel distributor can order these modules for you.


USA and Canada


Rest of world



11. What will the commercial price per unit be? Can I obtain a discounted price for bulk purchases?


Please contact a local Intel approved distributor using the links above to ask about pricing and discount programs.


12. Where can I find customer support and tutorials during the development of my RealSense project / product?


Support is provided by Intel support staff and community members through the Intel RealSense support community site.


There are also many tutorial articles on using RealSense available on Intel websites and from other sources.
