

*  Image (c) Marty Grover, from the forthcoming PC RealSense game 'My Father's Face'

 

In the fictional headset pictured above, long hours of untethered, battery-driven use are made possible by a ring of ball bearings inside the yellow component on each side of the helmet.  As the wearer moves, the bearings roll clockwise and anti-clockwise under gravity past a small electromagnet, inducing an electrical current that is fed to the headset's battery - much as the alternator on a motor vehicle engine supplies a small charge to the vehicle's electrical systems while the engine is running.

 

INTRODUCTION

 

In the fourth quarter of 2017, Intel's RealSense-equipped Project Alloy headset is due to be released for purchase.  The Alloy headset contains a full PC with an integrated graphics GPU, enabling it to be used without a cable tethering the user to an external PC, as is the case with the Oculus Rift and HTC Vive VR headsets.

 

Alloy is not simply a Virtual Reality system though.  It is referred to as Merged Reality, where the user's real-world body and those of other persons nearby are visible in a virtual environment, and real-world objects can be brought into the virtual location and interact with the digital objects in it.

 

Microsoft uses a similar term - Mixed Reality - for their HoloLens headset, though its approach is the reverse of Alloy's: the user views a real-world environment that has virtual content overlaid onto it.  There is enough commonality between Intel's and Microsoft's approaches, though, that the two companies have partnered to create a standard for Head Mounted Displays (HMDs).  And, like HoloLens, Alloy will utilize Microsoft's Windows Holographic shell, which is currently used in the HoloLens headset and will be integrated into mainstream Windows 10 in the middle of 2017.

 

Around this time, Intel will release the detailed specification for Alloy so that developers can begin preparing for it ahead of its release.  We will not preempt that announcement in this article by trying to make our own predictions of the spec's contents.  Instead, we are going to look at design principles that can be incorporated into your Alloy applications to make them truly transformative for their end-users.

 

With the rise of augmented reality, people are already becoming living avatars via mobile devices such as handhelds and wearables.  But we can take these portable computing technologies even further by using them to draw out dormant mental and physical potential.

 

USER IMAGINED CONTENT

 

Ever since virtual worlds as we know them have existed, the ones with the greatest longevity have been those that have offered User Generated Content (UGC) – whether to a small degree (crafting items from pre-made raw materials) or to a much larger extent (creating complex objects from very basic geometric shapes in online virtual worlds such as Second Life).

 

Everything goes in circles though, and what was old becomes new again eventually.  As technology shrinks in physical size and clunky interfaces are stripped away to be replaced by more intuitive Natural User Interfaces and Augmented Reality, the barriers between the real world and digital are dissolving: so much so that we are seeing the return of Human 1.0 – the power of imagination or, to put it in a more modern way, User Imagined Content (UIC).

 

The Buddhist and Hindu religions believe strongly in imagined worlds taking on a real, tangible existence; these traditions call such creations tulpas.  In Buddhism they are regarded as benign, whilst Hinduism considers them to have the potential to be spiritually dangerous, because a person can become so obsessed with fantasy worlds that they lose touch with their real-world life.

 

Children create UIC every day through play.  They and their friends take a basic story concept from their favorite entertainment media - whether it be from a book, television show / movie or the internet - and then take on a role from that entertainment, allotting the remaining unfilled character roles of their piece of imaginative theater to invisible cast members.  They are using their minds to paint their stage-play directly onto the canvas of the real world.  It does not matter that others cannot see what they are painting; somehow, like real actors on a stage or set, a group of young friends know what is happening in their live-action recreation without having to explain it to each other, because they know the basic rules of the original media that their play is based on and create something new within the safety of that guiding framework.

 

A merged-reality environment can be used as the basis for superimposing a user's mental imaginings over their virtual surroundings, so that they can attempt to apply their own rules to that environment to test it, bend it to their will, and then break it in a way that makes it even better.  All that the developer needs to do is to give the user subtle direction that suggests how the user may begin understanding the basic rules of that environment - like the starting town in an online Massively Multiplayer game - and then set the user free to explore and adapt the rule-set to their particular brain's own style of processing information, learning from it and turning it into action.  This is not too dissimilar from the principles of a good tutorial at the start of a traditional videogame.

 

PROVIDING THE USER WITH AN OUTLINE TO FILL IN

 

Learning is at its most effective when the user is having so much fun that they do not even realize that they are learning!   This is the primary reason that a school exists – to impart theoretical knowledge that can – sooner or later - be applied to real-world situations.  By giving users the tools to harness their imagination and then setting them free, they can then teach themselves through their play.

 

During that play, they draw on both conscious recollection and, to a lesser extent, a wealth of information lodged deep in their unconscious long-term memory that their mind has recorded and filed away during their lifetime.  Like the creation of dreams during sleep, those conscious and unconscious pieces of memory combine to generate a narrative that is acted out during play with thoughts and with physical body language (some aspects of which are intended, and others that are instinctive auto-actions.)

 

My first experience of the concept of optimizing the brain to make the most efficient use of a large volume of stored memories was as a teenager when I watched the Gerry Anderson TV puppet show 'Joe 90'.

 


 

The basic premise was that a hypnotically pulsing gyroscope machine called BIG RAT was used to install the memories and brain patterns of a specific adult into an ordinary child called Joe as he sat inside the gyroscope.  The memories were stored in Joe's spectacles (and lost from his mind if he removed them), and enabled him to carry out secret agent missions and perform adult actions such as flying a fighter jet.

 

A little later in life, I learned of the existence of self-help CDs that could be played through headphones whilst a person slept, loading the information into their unconscious and so making it easier to recall that information during their waking life if they were exposed to memory cues that triggered the pre-installed data and pulled the memory (or a shard of it) into their conscious mind.

 

Alloy merged-reality application designers can give their users prompts through sounds and imagery that trigger instinctive emotional and physiological responses - such as "fight or flight" fear responses - that are common to most people due to being hard-wired into the brain and nervous system from birth.  This is not unique to Alloy though, as any VR headset could provide an environment with such cues.

 

Where Alloy can truly take this further into new and powerful "living and learning" experiences is through its Merged Reality nature, in which the real-world body is not shut out from the virtual world but can instead become a part of it.

 

A SERIES OF A-HA MOMENTS

 

Once the user enters a virtual environment and is given prompts in that environment that are designed in such a way that the mind can subconsciously tie them to information it has previously absorbed, the mind can bring that information naturally to the surface of the consciousness in an “Oh yeeahh, I get it!” moment.  Once they have grasped the basics of a concept from these realizations, they can then explore that idea further using the tools available to them in that virtual world.  If a concept is presented to them in a way that is compelling enough to powerfully resonate with them, they are likely to want to continue exploring it without being asked, thinking about it even after taking off the headset and making plans that they can take into their next session in that environment.

 

This can form the basis of an endless learning loop (virtual world, real world, virtual world, real world), each new cycle building on the results of the preceding one in a continuous, harmonious amplification.  In the field of waveform theory, this is known as Constructive Interference.  Conversely, when learning is disjointed and conflicted, the progress of previous sessions is canceled out by Destructive Interference.
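The waveform analogy can be made concrete in a few lines of code.  This is a generic signal-processing sketch (nothing Alloy-specific): two waves that are in phase reinforce each other and double in amplitude, while two waves half a cycle apart cancel out.

```python
import math

def superpose(phase_shift, samples=1000):
    """Sum two unit sine waves and return the peak amplitude of the result."""
    peak = 0.0
    for i in range(samples):
        t = 2 * math.pi * i / samples
        combined = math.sin(t) + math.sin(t + phase_shift)
        peak = max(peak, abs(combined))
    return peak

print(round(superpose(0.0), 2))      # in phase: peaks add (constructive) -> 2.0
print(round(superpose(math.pi), 2))  # half a cycle apart: waves cancel -> 0.0
```

The same shape applies to the learning loop: sessions that build on each other amplify progress, while conflicting sessions cancel it.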

 

It is desirable that a new idea is followed up on as soon as practically possible, so that the fresh knowledge does not have a chance to be forgotten before it can be reinforced.  Research into sales techniques has demonstrated that a customer will begin to forget the details of a sales pitch after three days, and after a week they can barely remember it, giving a rival company an opportunity to make a successful competing pitch to that customer.  This is why a salesperson is keen to close a deal as soon as possible after the initial sales lead.

 

In the case of the developer of Alloy applications, it could be said that the developer's competitor is everything that captures the user's attention in between one headset session and the next - television, the internet, sports and hobbies, friends and family, etc.  It is in the interests of the developer that the user returns for their next session with the headset before daily life can erode the learning momentum that has been built up.

 

Like a sales executive, a developer should therefore seek to continue the learning dialog with their users at the first opportunity, whilst avoiding pressure tactics and giving the user space to return at a time of their choosing, when it is most comfortable for them to do so.  An excellent way to maintain the energy and excitement of a headset session is to lead users to real-world content such as websites and videos that they can explore at their leisure.  This serves not only to keep the user thinking about their next visit to the Alloy application but also feeds new information into their mind that their subconscious can draw upon during a headset session to create even more learning and progression possibilities.

 

CONFIDENCE BUILDING

 

One of the biggest obstacles to enjoying virtual reality / merged reality headsets is the physical headset itself.  Unlike subtle augmented-reality eyewear such as Google Glass, any person wearing a full headset risks becoming the subject of ridicule from anyone else in the room who is not wearing one and is not sympathetic to the charms of such equipment.  The problem with play is that while you can do it privately in isolation to a certain extent if you do not want to involve other people, it is often also a social expression that relies on an individual having a certain amount of self-confidence to share the play experience with others, or even to accept being observed at play by others.

 

As traditional interfaces such as the joypad, mouse and keyboard give way to touch and motion detection though, it is more vital than ever that people are equipped from an early age to lose their fear of public performance, as they will otherwise risk falling behind their peers in their personal development.  Physical activities such as live roleplay provide an outlet for frustration, and so youths should learn how to harness their full potential and energies early in life, so that they grow up knowing how to direct their mind - and hence their behavior and responses - in difficult situations.

 

Having doubts and fears during play introduces turbulence into one's thought processes that can quickly lead to mental paralysis.  Once that happens to a user then they are likely to be extremely averse to taking part in subsequent occurrences of that activity.  The same emotion that can paralyze a non-confident person can though, when channeled correctly, be used as a powerful fuel for harnessing their latent potential. Once they see for themselves what they are truly capable of, the confidence problem will be taken care of because, if they believe with a conviction beyond certainty that they can accomplish a goal even if they have not yet attempted it, the barriers to that aim will crumble.

 

Fictional heroes such as the Power Rangers draw the strength to survive from their determination to win, and so their mind empowers their abilities, not the other way around.  With a Positive Mental Attitude, the entire human body becomes a living 'power morpher' that enables a person to attain a “heroic” state.  Users who can break through perceived limitations – restrictions that may have developed as a result of low self-esteem or a negative home and peer environment - will effectively become 'super powered' in regard to the amount of potential that they can harness, and others who see them will be inspired to follow their example.

 

TURNING ON THE POTENTIAL TAP

 

Personal development plans for an individual user can be built around a simple formula. Three of the most important requirements for unrestricted access to inner potential are Calmness, Belief and Need.  These components can be better understood if described in terms of the parts of a kitchen tap and the water inside it.

 

Calm is represented by the water itself.  It clears the mind of distracting thoughts, doubts and fears and gets it ready for the issuing of thought commands, so that you can access and summon your potential and then use it for something.  Until you have Belief and Need though, most of that water - or in other words, our potential - will remain inside the tap's water pipe and will not be usable by us.

 

Belief is the water pressure in the pipe that determines how strongly the water flows out when it is released by the turning of the tap-head.  The greater our belief in ourselves, the more of our untapped potential our mind will release.

 

Need is the twistable head of the tap that controls how much of our potential is released.  When we twist open our inner tap-head, we make it possible for the dormant potential inside us to rush out.

 

The urgency of your Need to draw on your capability determines how much potential will be released by your mind so that you can use it.  The maximum amount of potential that a person may summon can be increased if they have a combination of Perfect Calm, Strongest Belief and Strongest Need.  The summoning, release and use of inner potential is a team-up between mind and body.

 

The mind should always be in charge of the body, not the other way around.  Once the user's mind has become familiar with the codes to unlocking their potential through tech-assisted play, they will be able to call on it at will in everyday life without the need for hardware.  When they have been shown that they can achieve something once, they will, from that first-hand experience, know they can achieve it again at any time and place in the future.  We mentioned earlier how humans draw on mental resources such as short and long-term memories.  In fact, “Drawing Out” is precisely how we can go about turning the theoretical power-formula of Calm-Belief-Need into living reality.

 

DRAWING ONTO THE WORLD WITH MENTAL PAINT

 

Through play inside and outside of an Alloy session, users can - once they become less self-conscious - use their mind and / or body movements to 'direct' their psychology, physiology and physical posture, as though real life were a movie set where they, as director, can influence any aspect of their life through a fully energized state of mind and body in which doubts, negativity and self-imposed limitations fall away.

 

For younger users, there are plenty of examples that can be referred to in popular pre-teen and teen media to defuse those users' concerns that they would look silly representing their thoughts with movement (a couple of prime examples being the “jutsu” hand movements that activate special powers in the hit teen ninja cartoon series 'Naruto,' or the sets of hand motions made with portable devices in series such as 'Power Rangers' and 'Digimon').

 

Personal visualizations can be externalized with Drawing Out so peers can experience them too, the choreography making an instant impression on the audience in the same way that they are often hypnotically enraptured by the dance moves of their favorite music stars.  Alloy, which enables users to incorporate the movements of their entire body into manipulation of a virtual environment, is a perfect medium for doing so.

 

After a formula for success has been demonstrated by one person, that success can be replicated by others if they closely follow the observed formula themselves, with some tweaks of their own to suit their individual circumstances.  Careful iteration on what has gone before and been proven to work is a key principle of progress in most aspects of the real world – science and technology, business, sports and innumerable other areas.  In fact, it is a core rule of science that a theory cannot be considered to be a scientific law until it has been replicated a number of times with the same results.  And once the basics are proven, the theory is developed and greater discoveries are made, which in turn are tested, proven and iterated on yet again.

 

DRAWING OUT

 

The future of living and learning is not users sitting passively with a videogame controller and moving an avatar, but putting the whole of their mind and body into their interactions with the world.  It is not enough though that a system should work.  To ensure the best results from it, especially with younger users, it should also be so simple to use that its mechanics are invisible to the user, who is then able to focus exclusively on succeeding at the task at hand.

 

To make this possible, we need to essentially automate the potential-harnessing interactions between the Alloy headset and the mind of the student, so that once they achieve the super-potentialized state then they can maintain that state for the rest of the session without thinking about it until the roleplay ends and they can relax and “power down.”

 

When designing a learning system based on the principles of Drawing Out, a useful benchmark for ease of use to bear in mind is “Is it likely to be usable by a profoundly physically impaired person who can at least bend a part of their body, such as a finger or toe?”  To achieve such a level of automation of a user's thought processes that this becomes feasible, developers can utilize a combination of the human nervous system and physical feedback sensations.

 

One of the almost endless amazing things about the human brain is that, being a super-computer of unparalleled complexity, it can handle more than one activity at the same time.  You can be thinking about something and at the same time have your brain work on another task automatically in the background.  You can program the brain to carry out a specific internal process when you move a part of your body in a certain way (for example, pushing your thumb and forefinger together).

 

In the psychological practice of Neuro-Linguistic Programming (NLP), the touch-based programming instruction is known as an “anchor” and the stimulus that activates that instruction is called a “trigger.”  The more you practice the chosen touch-gesture anchor whilst thinking about the task you want to be activated when the movement is made, the more certain your brain will become that it should start the designated processing task - or enter a particular psychological / physiological state - whenever it detects that particular body-movement trigger.

 

The brain will remember to keep automatically carrying out the task that you have assigned to the chosen physical gesture for as long as you keep that body part or parts tensed.  This is because it is constantly being reminded to do the task by the feeling of tension which travels to the brain through the nervous system.
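The anchor-and-trigger pattern described above can be mirrored on the application side.  The sketch below is hypothetical: the gesture input is a scripted list of frames standing in for whatever hand-tracking API the final Alloy SDK exposes.  It shows the core idea that an anchored background task runs only for as long as the trigger gesture is held.

```python
# Application-side "anchor and trigger" loop. Each frame of tracking data
# reports whether the anchor gesture (e.g. thumb and forefinger pressed
# together) is currently held; the anchored task runs only on those frames.

def run_anchor_loop(gesture_frames, on_anchor_held):
    """Call on_anchor_held() for every frame in which the gesture is held."""
    active_frames = 0
    for pinched in gesture_frames:
        if pinched:               # trigger: the body-movement cue detected
            on_anchor_held()      # anchored task keeps running while held
            active_frames += 1
    return active_frames

# Simulated tracking data: the user holds the pinch for three frames,
# releases, then pinches again for two frames.
frames = [True, True, True, False, True, True, False]
print(run_anchor_loop(frames, lambda: None))  # task ran on 5 of 7 frames
```

Releasing the gesture stops the task, just as relaxing the tensed body part stops the mental instruction.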

 

To demonstrate the concept clearly and powerfully, let's do an example exercise.

 

Step One

Form in your mind an instruction that clearly describes to the brain what you want to happen whilst a particular part of your body is tensed.  The example thought-command we will use in this exercise is “Give me more energy.”

 

Step Two

The next step is to select a part of your body to place in tension.  You do not need to be looking at that part in order for the technique to work.  In this exercise, we will utilize a finger as our means of creating physical tension.  Bend a finger - any finger - on one of your hands a little and then hold it in that position: not a lot, just enough so that you have a continuous feeling of tension in that digit.  The feeling in the finger as you keep it tensed will keep reminding the brain that it is supposed to raise your body's energy level for as long as the nervous system keeps telling it that the finger is tensed.

 

Step Three

Think or do anything else that you want to while keeping the finger bent.  You will find that you can now do two things at the same time – what you would normally be doing and the additional task that you have programmed into your finger without any division of concentration!

 

When you are ready to end the exercise, simply relax your finger to cease the mental programming instruction that was linked to that finger.

 

READY TO RUMBLE

 

We can make our physical feedback system even simpler if we replace the need for a conscious bending action with physical feedback from the hardware, such as a vibration / rumble function.  It still works because the rumble feedback reminds the user's mind to process a set mental instruction (e.g. our aforementioned energy-raising command), in place of the reminder transmitted to the brain by the nervous system via body-part tension.

 

A developer could program their application to send pulses of vibration of varying durations and intensities into the user's body.  If the user is taught that these pulse patterns correspond to specific meanings, this would be another form of instinctive non-visual feedback.  Whilst it seems unlikely that the final Alloy specification will include a rumble feature in the headset itself, Alloy does also support handheld, physically motion-tracked controllers with six degrees of freedom (movement along and rotation about three axes), and these controllers could easily have a rumble feedback function incorporated into them.
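As an illustration of such a pulse vocabulary, the sketch below maps named meanings to sequences of (duration, intensity) pulses.  The pattern names and the send callback are invented for this example; a real implementation would route each pulse to whatever rumble call the controller SDK actually provides.

```python
# Hypothetical haptic "vocabulary": named meanings mapped to pulse
# patterns of (duration_ms, intensity) pairs.

PULSE_VOCABULARY = {
    "danger_near": [(100, 1.0), (100, 1.0), (100, 1.0)],  # three sharp bursts
    "energy_up":   [(400, 0.5)],                          # one long soft pulse
    "task_done":   [(80, 0.8), (250, 0.3)],               # short, then fading
}

def play_pattern(meaning, send_pulse):
    """Send each pulse of the named pattern through the given callback."""
    for duration_ms, intensity in PULSE_VOCABULARY[meaning]:
        send_pulse(duration_ms, intensity)

# Stand-in transport that just records what would be sent to hardware.
sent = []
play_pattern("task_done", lambda d, i: sent.append((d, i)))
print(sent)  # [(80, 0.8), (250, 0.3)]
```

Keeping the vocabulary small and the patterns rhythmically distinct makes them easier for the user to learn to recognize without looking.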

 

INCORPORATING REALITY INTO UNREALITY

 

The Alloy headset is constantly scanning the real world environment and absorbing the details of objects observed by its built-in RealSense camera.  It can then convert those real world objects into fictional objects that have roughly the same shape and proportions as the original object, enabling the contents of a real-life room to be incorporated into a virtual simulation in the headset.
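One plausible way to implement this substitution is to match each scanned object's bounding-box dimensions against a catalog of virtual props and pick the closest fit.  The prop names and dimensions below are invented purely for illustration.

```python
# Sketch of proxy substitution: given the bounding-box dimensions of a
# scanned real-world object, pick the virtual prop whose proportions
# differ from it the least. The prop catalog is hypothetical.

VIRTUAL_PROPS = {
    "treasure_chest": (0.9, 0.5, 0.5),   # width, depth, height in meters
    "stone_pillar":   (0.4, 0.4, 2.0),
    "banquet_table":  (1.8, 0.9, 0.8),
}

def best_proxy(scanned_dims):
    """Return the prop whose dimensions best match the scanned object."""
    def mismatch(prop_dims):
        return sum(abs(a - b) for a, b in zip(scanned_dims, prop_dims))
    return min(VIRTUAL_PROPS, key=lambda name: mismatch(VIRTUAL_PROPS[name]))

# A scanned real-world desk (1.6 x 0.8 x 0.7 m) is closest in shape to
# the banquet table, so that is the prop it becomes in the simulation.
print(best_proxy((1.6, 0.8, 0.7)))  # banquet_table
```

A production system would of course use richer shape descriptors than a bounding box, but the matching principle is the same.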

 

The presence of such objects can actually be detrimental to immersion in a simulation in some cases though.  The effectiveness of a pretend reality relies upon believing completely in the imagery that you create, and that belief can be eroded if you are consciously aware of stimuli that are contrary to the alternative reality that you are trying to hold onto in your mind.

 

As an example, you may be sitting on a real-world chair whilst using the Alloy headset.  If the virtual environment also contains a chair that you can sit down on, then the physical sensation of the real-world chair beneath you provides a perfect sense of "being there" in the simulation.

 

If, however, the simulation involves a scenario such as flying through the air as a bird - like in Ubisoft's 'Eagle Flight' VR game - then the sensation of the physical chair beneath you would act as a constant reminder that you are still physically grounded.  Even standing up during the experience would not help much with this particular scenario, because you will still feel the ground beneath your feet.  Short of installing a body-lifting wind tunnel in the floor or floating in a swimming pool, it would be hard to shake off the cognitive dissonance caused by the feeling that you are not actually in the circumstances that you are trying hard to believe yourself to be part of.

 

IF YOU CAN'T AVOID YOUR FEELINGS, USE THEM

 

If the user cannot block out the physical stimuli that cause them to doubt the scenario, the developer can reinforce their mental conviction about the truth of the imagery by utilizing those disruptive sensations as an integral part of the scene.

 

Let's say that the user is lying down on their bed or couch with the Alloy headset on in the real world, trying to convince themselves that they are standing in the hall of a magical castle.  They may be able to clearly see themselves and the castle room in the headset, but it is not a perfect simulation, because the attempt to convince the mind that they are standing up in the scene is being disrupted by the sensation of the surface of the bed or couch under their back.

 

But the problem can be solved simply by changing the narration of the roleplay scene to take account of what the user is feeling.  In the example of the magical castle, if the user is lying down instead of standing up - a status that could perhaps be automatically detected by the sensors in the headset - then the simulation could change the description of the scene from standing in a hall to lying on a bed in a castle bedroom, similar to how a smartphone or tablet changes its screen orientation from portrait to landscape when turned on its side.
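Such posture-aware narration could be driven by the headset's orientation sensors.  The sketch below is hypothetical - the pitch threshold and scene names are invented - but it shows the portrait/landscape-style switch in miniature.

```python
# Sketch of posture-aware scene selection: choose a scene variant from
# the headset's reported pitch angle, the way a phone flips between
# portrait and landscape. The pitch value stands in for a real sensor API.

def scene_for_posture(pitch_degrees):
    """0 degrees = upright, facing the horizon; 90 = facing the ceiling."""
    if pitch_degrees > 60:           # user is lying on their back
        return "castle_bedroom"      # narrate the scene from a bed
    return "castle_hall"             # default: standing in the great hall

print(scene_for_posture(5))    # upright -> castle_hall
print(scene_for_posture(80))   # lying down -> castle_bedroom
```

Smoothing the sensor reading over a second or two would stop the scene flickering between variants as the user shifts position.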

 

If the simulation incorporates what particular parts of your body are feeling, the mind will become even more convinced that the artificial reality you are visualizing is true, because the physical stimuli will back up your belief in the truth of the fantasy.

 

THE MECHANICS OF MAGICAL PEOPLE

 

You may have heard of the saying “The thought is the deed.”  In the realm of psychology, a physical condition can manifest because of something that is happening in the mind (a very basic example being the appearance of spots on the skin under high stress).  Therefore, we can adjust our physiology in specific ways by crafting a narrative with VR scenario programming or with our internal mindscape.

 

If we want to think about the idea of role-playing unreal situations in an otherwise real world, we can look at the example of 'magical transformation' characters in classic cartoon fiction, such as Sailor Moon, He-Man and She-Ra, who appear to be surrounded by a magical field that changes their physical form from an ordinary, everyday one to a super-powered one.  Thinking about the mechanics of such fictional universes can provide useful insights into the concept of incorporating unreality into the real world.

 

If we were to try to convince our mind that we were in possession of Prince Adam's magical Power Sword (with which he transforms into He-Man by holding the sword aloft and speaking the magical words, “By the power of Grayskull”), it would not be enough for us to just imagine the Power Sword in our hand, say the magic words and expect to be instantly transformed.  Instead, one might have to set up all of the conditions in their visualization that make the transformation possible in the fictional world of He-Man.

 

These would be:

 

- That you are holding the sword when the special activation words are spoken – a belief that would be reinforced by holding any long, thin device that can be gripped with a closed hand, such as the previously mentioned Alloy physical handheld controllers.

 

- That you believe completely that when you speak the magic words “By the power of Grayskull”, the expected release of power from the sword will occur (this could be synchronized with a burst of vibration from the controller held in the hand to reinforce the belief that something is happening); and

 

- That when the power is released, your physiology and / or psychology will be changed positively in some way, even if you don't literally transform into He-Man.

 

DOCUMENTING STANDARDS

 

We mentioned earlier the importance of iterating on accepted standards over time in order to continuously improve them year on year.  This is true for the Drawing Out system detailed in this article as well.  Once techniques have been documented in some manner then, like a particular martial art, they have a chance - if they are subsequently widely accepted - of becoming a standard that others can use as a reference to successfully train themselves.

 

They can then use the rules of that standard as the basis to develop their own take on it, and hopefully also share their iterations with others as the originator did, so that those iterations can also be built upon.  If an open-source community can be developed around the standard, Alloy developers can create and share their own modules to plug into the framework of the core standard, and also contribute feedback to the development of the core itself.

 

SCALING ALLOY UP TO LARGE SCHOOL CLASSROOM GROUPS

 

Alloy has enormous potential for use in school classrooms, in scenarios where either every member of the class has an Alloy headset or - much more affordably - one student at a time is wearing the headset and the rest of the class are excitedly participating as advisors to guide the wearer.

 

Involving additional participants in an Alloy session has the potential to be a recipe for chaos though, unless all members of the group are working towards a common goal.  For inspiration about how to arrange this, we can look to the 1970s, when the British boys' action comics 'Warlord' and 'Bullet' created a powerful sense of purpose among their young readers by encouraging them to form 'secret agent clubs' with other kids in their street and neighborhood.

 

Club members built club-houses and carried out activities such as charity work, exercising (to become fitter secret agents), exploring, investigating and solving minor local crimes, helping people in trouble and giving first aid.  The comics supported the clubs by publishing accounts of their heroic adventures on the letters page (some of which were likely fictional) and rewarding contributors with practical prizes like a pennant to hang on their club-house wall, or a pendant to carry with them to inspire courage.

 

A teen reader who wore a pendant whilst they tackled an army assault course reported that they felt that their pendant gave them extra strength to climb over 14 foot walls easily.  Another pendant-wearer attributed greater success at school sports to it.  These instances of heightened performance were likely a form of positive stimuli feedback like that described earlier in this article, in which absolute belief in the metaphysical properties of an inanimate object or the values that it represented translated into a greater release of dormant physical and mental potential.

 

Another key tenet of the clubs was information security.  The comics suggested that each club use secret passwords and make up its own message encryption code, so that communications could not be cracked if intercepted by non-agents (usually brothers and sisters who weren't in the club!).

 

Clubs also often found their own solutions to problems that the comic editors never anticipated.  Members who reached their mid-teens and felt they were too old and grown-up for their club, rather than quitting, became leaders who coordinated activities and arranged missions for the younger members.  Some clubs even charged a small weekly fee that was pooled to purchase equipment and uniforms for members, such as compasses, first-aid materials and wet-weather boots.

 

It is no longer the Seventies though, and society has changed greatly, in some respects for the worse.  Children are generally less physically active than they used to be, and the perceived risk of child endangerment in public places is higher.  So whilst it is not realistic to closely emulate that era, it provides lessons that can be adapted for the present day.  Some examples include:

 

-  Documenting students' progress on a blog or social network, similar to how clubs reported their news to each other via the letters pages of the comics.

 

-  Group members could access mission details and logs online, and upload collected evidence from investigations for other team members to sort and process into actionable information, which the current Alloy wearer - acting as group leader - could then act upon.

 

-  Emulate the courage-boosting pendants with internet-connected wearables that have a scientifically measurable effect on the wearer, and give them an incentive, through gamification, to live a healthier lifestyle.

 

-  Adapt the password and code-creating practices into an educational message for students in the present about taking care of their online safety and security.

 

-  Use Alloy as a means for students to make games, using its ability to incorporate real-world bodies and objects as a new form of video game development that focuses primarily on creating and performing outstanding stories.  Instead of having to learn drawing and coding, students can use their real-world bodies, hands and equipment in conjunction with the virtual content, and can act out scenarios without having to awkwardly use traditional control methods.

 

The only experience they therefore need in order to efficiently use the scenarios that they create is the life experience that they have been accumulating since birth, because they can use their bodies to interact with the simulation in the same way that they would with equivalent objects in the real world.  The integration of living users with virtual tools also provides those users with the ability to attempt to solve the problems that they are dealing with through 'sandbox' experimentation and testing of possible solutions in a way that is not possible when sitting around a table or in a lecture hall in the real world.
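The password and code-creating suggestion above can also double as a first programming exercise.  The sketch below is a hypothetical classroom example (not from the original comics): a simple Caesar shift cipher in Python, useful precisely because it is easy to crack, which opens a discussion about what real online security actually requires.

```python
def caesar(text, shift):
    """Shift each letter by `shift` places, wrapping around the alphabet.

    Non-letter characters (spaces, punctuation) pass through unchanged,
    and decryption is simply encryption with the opposite shift.
    """
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

secret = caesar("Meet at the club house", 3)
print(secret)              # "Phhw dw wkh foxe krxvh"
print(caesar(secret, -3))  # shifts back to the original message
```

Students who then try breaking each other's messages by testing all 25 possible shifts learn the limits of secrecy-by-obscurity first-hand.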

 

There are opportunities also for teachers and school administrators to expand their Alloy teaching program into an open discussion with massed participants from other schools in a district / county / state within an online-enabled merged-reality environment, sharing best practices with each other and thus filling in specific knowledge gaps at each school.  The professional participants in this large-scale collaboration could also engage in 'pro tournament play' in the sandbox simulation, taking turns to try out different approaches in order to see whose methods work the best.

 

FORMING A CULTURE OF 'TREE CELLS' IN THE SCHOOL COMMUNITY

 

A system that encourages mass interaction can also be utilized by schools as a model of mass cooperation that thinks of everyone in the school – from administrators through to teachers and the students – in terms of individuals who, when they come together, resemble the cells in the structure of a tree.  In the structure of a real tree, sap rises from the roots, up through the trunk and ultimately arrives at the leafy canopy at the peak of the tree.  This means that nothing living at the top is immune to what is happening at the ground and middle levels.

 

Also like a real tree, there are both helpful and harmful / disruptive elements in the structure of the school.  Individuals could be regarded as being 'Tree Cells' (a play on words of 'T-Cells', the cells in animal immune systems), with the aim being to ensure that there are many more helpful Tree Cells in the school and that students and staff who are negative cells – perhaps because of work stress, their family background, learning difficulties or other factors in their life - are helped to become healthy cells too.  This philosophy mirrors the saying “it takes a village to raise a child”,  with the school community as a whole refusing to turn away and abdicate responsibility for taking care of somebody in that community who is in need of help – even if they take some persuading to accept it.

 

Each individual in a school can provide help in the areas where they have power to intervene: students helping students and making teachers aware of significant problems with friends that are beyond their help, and staff helping other staff.  If there are common problems occurring then they could be addressed with group training programs, making use of tools such as the Rift headset and mixed-reality presentation technology.

 

In the US Military, soldiers are paired up as 'Battle Buddies' in a program designed to reduce suicides, with each buddy looking out for the well-being of their buddy both in battle and outside of it.  Whilst a single-digit minority of soldiers surveyed about the program strongly resented having to be so deeply responsible for someone else's welfare, the majority believed it to be an excellent idea.  If people are mutually dependent on each other for success then the better that one performs, the better that their partner will do in their respective role.

 

CONCLUSION

 

The Alloy headset and its ability to enable free, untethered movement is a game-changer for virtual reality.  As powerful as it is in its own right, when it is combined with the techniques described in this article, its potential becomes limitless and can create a foundation to reach even greater heights as new simulation technologies emerge!

Announcement

All SDK versions prior to 2016 R2 (v10.0.26.0396) have been discontinued and are no longer available for download from Intel websites.

 

Why

A security issue was found in the web component of SDK versions prior to 2016 R2 (v10.0.26.0396). See this notice for further details and uninstallation instructions.

 

SDK 2016 R2 (v10.0.26.0396), minus the web component, is still downloadable from here. The current version of the SDK, 2016 R3 (v11.0.27.1384), is available for download here.

Foreword: Grateful thanks go to Intel customer support's Jesus Garcia for contributing to the information in this article.

 

In this article, we highlight questions and answers regarding technical information about the Intel® RealSense™ range of cameras, and explain how technology companies may integrate RealSense into their products.

 

1. What are the types of RealSense camera available and their approximate range?

2. Where can I purchase RealSense Developer Kit cameras?

3. What is the software development tool chain for the RealSense cameras?

4. Which operating systems (OS) do the RealSense camera SDKs support?

5. What are the model and part numbers of the RealSense Developer Kit cameras?

6. Where can I find the official data sheets for the RealSense camera range?

7. How may I use RealSense technology for commercial purposes?

8. How may I ask about purchasing RealSense components in bulk quantities?

9. What will the commercial price per unit be? Can I obtain a discounted price for bulk purchases?

10. Where can I find customer support and tutorials during the development of my RealSense project / product?

 

1. What are the types of RealSense camera available and their approximate range?

 

Short Range

 

SR300 (1.5 m)

 

Long Range

 

R200 (approx. 4-5 m)

ZR300 (3.5 m indoors; longer range outdoors)

 

2. Where can I purchase RealSense Developer Kit cameras?

 

Development kits are available in the Intel Click store, and we recommend purchasing kits from Click where possible. Intel ships Developer Kits to numerous countries, including:

 

United States, Argentina, Australia, Austria, Belarus, Belgium, Brazil, Canada, China, Czech Republic, Denmark, Finland, France, Georgia, Germany, Hong Kong, Iceland, India, Indonesia, Ireland, Israel, Italy, Japan, Korea, Latvia, Liechtenstein, Lithuania, Luxembourg, Malaysia, Malta, Mexico, Netherlands, New Zealand, Norway, Philippines, Poland, Portugal, Romania, Russia, Singapore, Slovakia, Slovenia, South Korea, Spain, Sweden, Switzerland, Taiwan, Turkey, UAE, United Kingdom, Vietnam.

 

Please check individual store listings for confirmation of whether a particular product can be shipped to your country. If Intel does not ship directly to your country, you can enquire with your local approved Intel product distributor.

 

USA and Canada

 

http://www.intel.com/content/www/us/en/resellers/where-to-buy/overview.html

 

Rest of world

 

http://intel.ly/2lXn4Cw

 

3. What is the software development tool chain for the RealSense cameras?

 

Intel's RealSense SDK currently supports the F200 camera (the now-unavailable predecessor of the SR300) and the SR300 camera. 

 

Active development of tools for the R200 ceased at the '2016 R2' SDK version, though the open-source SDK Librealsense will continue to support it with updates.

 

https://software.intel.com/en-us/blogs/2016/01/26/realsense-linux-osx-drivers

 

The ZR300 camera is supported by the RealSense SDK for Linux.

 

https://github.com/IntelRealSense/realsense_sdk

 

4. Which operating systems (OS) do the RealSense camera SDKs support?

 

The RealSense SDK for Windows supports the R200 on Windows 10, or on Windows 8.1 with an OS update, and supports the SR300 on Windows 10 only.

 

The RealSense SDK for Linux is only supported on Ubuntu 16.04 running on Intel® Joule 570x.

 

https://github.com/IntelRealSense/realsense_sdk

 

Open-source RealSense camera support for Linux and Mac OSX users of the F200, R200 and SR300 cameras is also provided by the Librealsense SDK.

 

https://github.com/IntelRealSense/librealsense

 

5. What are the model and part numbers of the RealSense Developer Kit cameras?

 

R200

 

Model #: VF0830

Part #: MM#939143

 

SR300

 

Model #: 06VF081000003

Part #: MM#943228, H89061-XXX

 

ZR300

 

Model #: 995176

Part #: 995176

 

What are the UL laser safety certificate numbers for the RealSense camera range?

 

F200

 

Laser camera module, "Intel® RealSense™ 3D Camera Front F200", Model(s) H53987-XXX (A)

 

R200 / LR200

 

Note: the LR200 is a near-identical version of the R200 with improved RGB quality and newer IR emitter components.  There are also two listings for the R200 because there have been two versions of its circuit board: the original, and a smaller, more power-efficient later version.

 

Laser camera module, "Intel® RealSense™ 3D Camera Rear LR200", Model(s) J31114-XXX(A)

Laser camera module, "Intel® RealSense™ 3D Camera Rear R200", Model(s) H55024-XXX(A), H72161-XXX(A)

Laser camera peripheral, "Intel® RealSense™ 3D Camera Rear R200", Model(s) H81017-XXX(A)

 

ZR300

 

Laser camera module, "Intel® RealSense™ 3D Camera Rear ZR300", Model(s) J27384-XXX(A)

 

SR300

 

Laser camera module, "Intel® RealSense™ Camera SR300 (Falcon Cliffs)", Model(s) H89061-XXX, J26805-XXX (A)

 

The database that the certificate numbers are sourced from can be viewed at the UL certificate website.

 

http://bit.ly/1J9WNG6

 

6. Where can I find the official data sheets for the RealSense camera range?

 

R200

 

https://software.intel.com/sites/default/files/managed/d7/a9/realsense-camera-r200-product-datasheet.pdf

 

SR300

 

https://software.intel.com/sites/default/files/managed/0c/ec/realsense-sr300-product-datasheet-rev-1-0.pdf

 

ZR300

 

http://click.intel.com/media/ZR300-Product-Datasheet-Public-002.pdf

 

Where can I find other self-help resources for the RealSense camera range?

 

RealSense Web Site (must see)

 

https://software.intel.com/en-us/realsense/home

 

RealSense Customer Support

 

http://www.intel.com/content/www/us/en/support/emerging-technologies/intel-realsense-technology/intel-realsense-cameras.html?wapkw=realsense

 

7. How may I use RealSense technology for commercial purposes?

 

The official policy on the Intel online store pages for the RealSense developer kits is:

 

"The Camera is intended solely for use by developers with the Intel® RealSense SDK for Windows solely for the purposes of developing applications using Intel RealSense technology. The Camera may not be used for any other purpose, and may not be dismantled or in any way reverse engineered."

 

There are commercial products that make use of RealSense technology, such as drones, laptop PCs and the Razer Stargazer camera (a Razer-branded version of the SR300). The distinction is that they contain the RealSense camera circuit board and do not make use of the Developer Kits purchased from the Intel store.

 

The development kits that are for sale on the Intel store are meant for development purposes only. They come with a 90-day return policy. Therefore they are not recommended for commercial use or productization. 

 

For commercial use, Intel recommends that you purchase camera modules (the same modules that are inside the development kits) from Intel-approved distributors.

 

USA and Canada

 

http://www.intel.com/content/www/us/en/resellers/where-to-buy/overview.html

 

Rest of world

 

http://intel.ly/2lXn4Cw

 

8. How may I ask about purchasing RealSense components in bulk quantities?

 

If you want to use RealSense cameras in a product and want to buy in bulk, you should purchase the camera modules from Intel-approved distributors. Just about any Intel distributor can order these modules for you.

 

USA and Canada

 

http://www.intel.com/content/www/us/en/resellers/where-to-buy/overview.html

 

Rest of world

 

http://intel.ly/2lXn4Cw

 

 

9. What will the commercial price per unit be? Can I obtain a discounted price for bulk purchases?

 

Please contact a local Intel approved distributor using the links above to ask about pricing and discount programs.

 

10. Where can I find customer support and tutorials during the development of my RealSense project / product?

 

Support is provided by Intel support staff and community members through the Intel RealSense support community site.

 

https://communities.intel.com/community/tech/realsense

 

There are also many tutorial articles on using RealSense available on Intel websites and from other sources.


Whether you are having problems installing the RealSense drivers or running RealSense applications, providing RealSense logs to Intel Customer Support will speed up the troubleshooting process. If you are not sure which logs are relevant to your situation, collect all logs as instructed in this article.

 

Provide System Info File

First, provide a system info file.

  1. Press the Windows logo key + R.
  2. Type msinfo32 and press Enter.
  3. In the System Information console, click File > Export, and attach the exported .txt file to the service ticket or RealSense Community post.

 

Provide Driver Installation Logs

If the camera driver, Intel® RealSense™ Depth Camera Manager (DCM), does not install correctly:

  1. Capture and send a screenshot of the DCM installation error.
  2. Zip the full contents of all of the following log directories.
    • %temp%\micl_tmp_%username%
    • %windir%\Temp\micl_tmp_SYSTEM
    • %windir%\INF\setupapi.*

NOTE: "%temp%" is typically "C:\Users\<username>\AppData\Local\Temp", and "%windir%" is typically "C:\Windows".

  3. Attach the zip file and screenshot to the service ticket or RealSense Community post.
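The directory-zipping step above can be scripted rather than done by hand.  The sketch below is a hypothetical helper, not an official Intel tool; it expands the environment variables from the instructions and archives whichever of the log directories exist on the machine.

```python
import os
import zipfile

# Log locations named in the DCM instructions above.  The %VAR% syntax is
# expanded by os.path.expandvars on Windows; on other platforms these
# paths simply won't exist and are skipped.
LOG_DIRS = [
    r"%temp%\micl_tmp_%username%",
    r"%windir%\Temp\micl_tmp_SYSTEM",
]

def collect_logs(dirs, archive_path):
    """Zip every file found under the given directories into one archive."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for d in dirs:
            root = os.path.expandvars(d)
            if not os.path.isdir(root):
                continue  # skip locations that don't exist on this system
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    # store paths relative to the log dir to keep the zip tidy
                    zf.write(full, os.path.relpath(full, root))
    return archive_path

collect_logs(LOG_DIRS, "dcm_logs.zip")
```

The setupapi.* files under %windir%\INF would still need to be added separately (or the list above extended), since they are individual files rather than a directory.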

 

Provide Application Execution Logs

If you are having problems running RealSense applications or RealSense SDK samples, collect and provide execution logs. You will need to download and install the Intel® RealSense™ SDK.  Click here for detailed instructions with screen shots on collecting SDK logs.

  1. If you are using the SR300 or F200 cameras, download and install the RealSense SDK Essentials package from here. If you are using the R200 camera, download the RealSense SDK R2 from here.
  2. Run the SDK debug tool by executing "C:\Program Files (x86)\Intel\RSSDK\bin\x64\SDK_Info.exe" as an Administrator.
  3. In the SDK_Info tool, navigate to the Logging tab.
  4. Click the “Enable All Logs” button.
  5. Reproduce the issue by running whichever app is causing the failure (do not close SDK_Info - it can be minimized, but closing it will stop the logging).
  6. After the failure has occurred, return to SDK_Info and click the “Save” button on the Logging tab.
  7. Zip the created directory and attach it to your service ticket or RealSense Community post.

Due to a security issue with one of the dependencies of the Intel® RealSense™ SDK Web Component, Intel has decided to discontinue marketing and development of the component and all related software.  This applies to all versions of the Intel® RealSense™ SDK.

 

Please see this notice for further details and uninstallation instructions.

Problem

The Intel® RealSense™ cameras may sometimes experience power delivery problems on the target platform. This can result in connectivity issues where the camera is not fully recognized by the system.

 

Symptoms

  • Not able to install the camera driver - Intel® RealSense™ Depth Camera Manager. Error says, "The installer failed to detect an Intel® RealSense™ 3D camera on this system."
  • Windows Hello does not work. Only applicable to SR300 and F200 cameras.
  • You hear the USB disconnect/connect sound
  • Camera disconnects after you press "Stop" on a stream from a RealSense SDK sample
  • The Virtual Driver disappears from Device Manager -> Imaging Devices. RGB and Depth nodes may still be visible.

 

Possible Workarounds

The following workarounds have helped some users resolve connectivity issues but are not guaranteed to work in every case.

  1. First, ensure the target system meets minimum system requirements for your RealSense camera.
    1. SR300 system requirements
    2. R200 system requirements
  2. Ensure the camera is connected to a powered USB 3.0 port on the target system. Try all the powered USB 3.0 ports.
  3. Update chipset drivers. See this article for more details.
  4. Connect the camera to an external powered USB 3.0 hub.
  5. Set the USB 3.0 host controller to not turn off power to the device:
    1. Go into Device Manager
    2. Right click the Intel(R) USB 3.0 eXtensible Host Controller - 1.0 (Microsoft) (or similar for your device)
    3. Click Properties
    4. Click the Power Management tab
    5. Deselect Allow the computer to turn off this device to save power

 

If none of these workarounds solve the connectivity problems then please contact Intel Customer Support for further assistance. You can open a web ticket or post in the RealSense Community forum. When first contacting support, please specify that you have gone through the steps outlined in this article and provide the article's URL.