Gesture control veterans launch new company and introduce Fluid Experience Technology
Palo Alto, California, USA (Business Wire) – The team that developed key elements of the Microsoft Xbox One® Kinect technology has launched a new company, Aquifi, to introduce Fluid Experience Technology. Fluid Experience Technology is a new, software-only platform that will bring new economics and usability to the perceptual-computing and gesture-control markets by enabling a new style of “adaptive” interface.
Because the technology works with inexpensive, commodity image sensors, smartphones, tablets, PCs, wearable devices, and other machines will all be able to have interfaces that adjust automatically to their users.
The term “Fluid Experience Technology” (coined by the company) implies that barriers are removed between human and machine, with the latter interpreting a user’s movements and gestures, and initiating intuitive actions in response.
“Within the next decade, machines will respond to us and our needs through intuitive interpretation of our actions, movements, and gestures,” said Nazim Kareemi, Aquifi CEO. “Our fluid experience platform represents the next generation in natural interfaces, and will enable adaptive interfaces to become ubiquitous, thanks to our technology’s breakthrough economics.”
Kareemi, a co-founder of Aquifi, also co-founded Canesta – a pioneer in gesture control and perceptual computing, which was acquired by Microsoft for its unique technology. Another Canesta founder, and several of its former executives, joined with Kareemi to develop and launch Aquifi, and its new Fluid Experience paradigm.
Aquifi’s Fluid Experience Technology differs from today’s gesture-control technologies on several fronts.
- It can interpret user movements over a wide area, as opposed to shallow or narrow “interaction zones”;
- It can interpret far more than simple hand gestures or gross body positions – for example, the 3-D position of a user’s face, or even whose face is in view (facial fingerprinting);
- It can adapt its response based upon machine learning;
- It is a software-only solution that can use inexpensive, commodity imaging sensors, rather than specialised chips or other expensive hardware;
- It can be used across a full spectrum of machines to simplify both the developer’s and user’s experience.
These capabilities will permit innovations such as devices that can be controlled subtly and from casual positions, rather than with highly stylised actions performed in a specific window in space, or devices that enter and leave power-saving mode depending on whether the user is looking at them. Devices could even “auto-lock” when an unrecognised face looks at them – greatly enhancing security.
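As a rough illustration only, the gaze-driven power saving and face-based auto-lock described above can be reduced to a simple state rule. In a real system the two boolean inputs would come from the platform’s vision pipeline; here they are plain parameters, and every name in this sketch is hypothetical rather than part of Aquifi’s actual API:

```python
from dataclasses import dataclass


@dataclass
class DeviceState:
    """Hypothetical device state: awake vs. power-saving, locked vs. unlocked."""
    awake: bool = False
    locked: bool = True


def update_state(face_in_view: bool, face_recognised: bool) -> DeviceState:
    """Apply the adaptive rules sketched in the text:
    - the device wakes when someone is looking at it, and sleeps otherwise;
    - it stays locked unless the face in view belongs to a recognised user.
    """
    awake = face_in_view  # power-saving mode follows the user's gaze
    locked = not (face_in_view and face_recognised)  # auto-lock for strangers
    return DeviceState(awake=awake, locked=locked)
```

For example, `update_state(face_in_view=True, face_recognised=False)` yields a device that is awake but locked – the “auto-lock on an unrecognised face” behaviour – while looking away returns it to a locked, power-saving state.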
As the technology is introduced to developers over the next six months, in addition to some of the examples above, new applications may include:
- Wearable applications, such as augmented reality that uses smartphone-based 3-D object scanning and room mapping;
- Safer in-car applications that combine gestures with voice feedback, so drivers don’t have to look at the screen;
- And, in the long term, ideas limited only by the imaginations of application developers.