After creating many hundreds of pages of personal research material over the past seven years - part of a much vaster archive belonging to my company - I decided it was time to put more of it to practical use in modern-day applications, such as developing software for the forthcoming Intel Project Alloy merged-reality headset. The natural place to start was at the beginning of the archive, in 2010.
I rediscovered an idea for creating living avatars for classroom teaching. The teacher would wear an all-covering white bodysuit, whilst an image projector mounted on ceiling rails would move around the classroom, tracking the teacher's position and projecting an image onto the suit's surface to change the teacher's appearance. For example, if the teacher got down on hands and knees, the projector could make them look like a bear by projecting a bear image onto the suit from above.
The Project Alloy headset can scan large real-world objects such as furniture and convert them into virtual objects of a similar size, and this made me realize that it could provide a new way to make living avatars a reality. If the headset can convert furnishings into virtual representations, I wondered, perhaps it could do the same for living people in its view, since to the camera a moving person should be little different from moving furniture. Any person observed by the headset wearer could then take on a virtual form of similar size and shape.
And because Alloy constantly scans the room (a feat made possible by its advanced vision processing chip), rather than taking a single calibration scan at start-up (as Microsoft's Kinect camera did), in theory it ought to be able to update the virtual representation of an observed person in real time.
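The idea above can be sketched in a few lines of Python. This is only an illustration of the concept under my own assumptions - it is not based on any actual Alloy SDK. Here each depth "frame" is pretended to be a point cloud of (x, y, z) samples, `bounding_dimensions` and `AvatarProxy` are hypothetical names of my own, and the proxy is resized on every frame so it keeps matching the person's current pose instead of being frozen at calibration time:

```python
# A minimal sketch, assuming depth frames arrive as point clouds of
# (x, y, z) tuples. Nothing here comes from a real headset SDK.

def bounding_dimensions(points):
    """Width, height and depth of a scanned point cloud."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

class AvatarProxy:
    """Hypothetical virtual stand-in for an observed person."""
    def __init__(self):
        self.size = None  # refreshed on every scan, never frozen

    def update(self, points):
        # Continuous scanning: every new frame replaces the old size,
        # unlike a one-shot calibration scan taken only at start-up.
        self.size = bounding_dimensions(points)

# Two fake frames: standing upright, then down on hands and knees.
proxy = AvatarProxy()
proxy.update([(0.0, 0.0, 2.0), (0.5, 1.8, 2.25), (0.2, 0.9, 2.5)])
standing = proxy.size
proxy.update([(0.0, 0.0, 2.0), (0.9, 0.7, 2.25), (0.5, 0.3, 2.5)])
crawling = proxy.size
print(standing, crawling)
```

With continuous updating, the proxy's height drops as soon as the person crouches; a Kinect-style single calibration scan would leave the virtual form stuck in its start-up shape.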
Perhaps, with Project Alloy, we will all have the opportunity to interact with friends and colleagues as animals, heroes, villains and creatures beyond imagination ...