Archives For Ambient Intelligence


The design industry’s reigning paradigm is in crisis. It’s time to evolve from human-centered design to humanity-centered design, write Artefact’s Rob Girling and Emilia Palaveeva.

If followed blindly and left unchecked, this cult of designing for the individual can have disastrous long-term consequences. A platform designed to connect becomes an addictive echo chamber with historic consequences (Facebook); an automation system designed to improve safety undermines our ability to seek information and make decisions (the airplane autopilot); a way to experience a new destination like a local squeezes lower-income residents out of affordable housing (Airbnb). Each of these examples is recognized as a real feat of product or service design. Yet by focusing on the individual user alone, we often fail to take into account broader cognitive and social biases. By zeroing in on the short-term impact and benefits of our designs, we spare ourselves the really hard question: Are we designing a world we all want to live in, today and tomorrow?

To be agents of positive change, we as designers need to think more broadly about the direct and secondary consequences of our work. We need to be clear-eyed about what we are striving to do and minimize the chances of creating more problems than we solve. To do that, we need to integrate our discipline with systems thinking, which entails understanding how systems work and evolve over time. This will allow us to anticipate and mitigate the negative longer-term consequences of well-intentioned solutions. As a result, we will be poised to design systems that have minimal negative impact, create and sustain equity, and build on technological advances without disrupting the foundations of society. We have the responsibility to evolve from human-centered design thinkers into humanity-centered designers.


Great Wave Data

January 16, 2018


With augmented reality (AR) and virtual reality (VR) emerging as the next computing platforms, app developers have increasingly focused on building AR and VR apps.

One of the companies aiming to be on the cutting edge of analytics app development for VR and AR is GREAT WAVE. By helping people understand and analyze data more quickly, such a tool could provide richer, more insightful experiences than those derived from paper and screens. Studies conducted by researchers at Stanford and by the neuroscience and analytics team of the AR developer Meta (in conjunction with Accenture) demonstrate how the use of 3D information can amplify people’s efficiency and ability to focus on tasks.

Have a look at the video of GREAT WAVE:


2D vs 3D

October 8, 2017

Given the lack of studies that have systematically examined the perceptual cues our brains use to rapidly process procedural tasks, Meta decided to partner with Accenture Labs on a pilot study examining the use of perceptual cues in AR. More specifically, they wanted to measure the effect an additional perceptual cue (motion) would have on the time it takes to complete a procedural task. The team operated under the hypothesis that integrating both stereo and motion perceptual cues could further reduce the limitations of 2D instructions, ultimately enabling people to complete a procedural task more quickly.

At the pre-race expo for this year’s Bay to Breakers, the colorful annual footrace in San Francisco, California, the team of Meta and Accenture researchers set up the procedural task of assembling a physical lighthouse Lego set.

They defined three conditions based on the different types of instructions participants were to receive: 2D Paper, Holographic Static 3D (Stereo Cue), and Holographic Dynamic 3D (Stereo & Motion Cues).

Comparing the three instruction conditions, they found that Dynamic 3D Instructions enabled participants to complete each step more quickly; participants using Static 3D Instructions and 2D Paper Instructions were much slower in comparison. This confirmed their hypothesis that using both the stereo and motion perceptual cues in AR instructions speeds up assembly time. Interestingly, the researchers found that participants using Static 3D Instructions were the slowest of the three conditions. This was especially surprising because, based on past studies conducted in 2003 and 2013, they expected people using any kind of 3D instructions to perform the Lego building task more quickly than those using 2D Paper Instructions.
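To make the shape of such a comparison concrete, here is a minimal sketch of how per-condition completion times might be summarized. The numbers are invented purely for illustration and do not come from the Meta/Accenture study; only the relative ordering mirrors the reported result.

    import statistics

    # Hypothetical per-step completion times in seconds; illustrative only,
    # not data from the study.
    completion_times = {
        "2D Paper": [42, 38, 45, 40],
        "Static 3D (Stereo Cue)": [48, 51, 47, 50],
        "Dynamic 3D (Stereo & Motion Cues)": [30, 28, 33, 29],
    }

    for condition, times in completion_times.items():
        print(f"{condition}: mean {statistics.mean(times):.1f}s, "
              f"stdev {statistics.stdev(times):.1f}s")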

Check out this video:


Models of Diversity

March 11, 2016


Gave a talk on New Narratives at the Models of Diversity conference at ETH and ZHDK in Zurich.

The main aim of the conference was to create three-way discourses in search of correlations and models that can foster deeper levels of creative exchange across the disciplines of art, science, sociology, and philosophy. It was organized as a round table with paired presentations by art researchers, scientists, and theorists from diverse fields of inquiry, alongside dynamic moderators who tried to stimulate discussion.

Tango project

December 4, 2015

I had the chance to explore Google’s Tango with a team of developers. It is great software, and one can only hope it lives up to its potential. The first consumer implementation will come packaged with Lenovo’s PHAB PRO next year.
The essential aim is to give your mobile device full spatial awareness: the ability to understand the world around it and your relation to it, enabling the device to provide augmented reality experiences. A Project Tango device ‘sees’ its environment through a combination of three core functions.

First up is motion tracking, which allows the device to determine its position and orientation using a range of sensors, including the accelerometer and gyroscope. Second is depth perception: the device is able to examine the shape of the world around you. Here it relies on Intel’s RealSense 3D camera, which enables accurate gesture control and snappy 3D object rendering, among a number of other features.

Finally, Project Tango incorporates area learning, which means that it maps out and remembers the area around it.
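To make the three functions a little more concrete, here is a minimal, purely illustrative sketch of how they might fit together. None of the names below come from the actual Tango SDK; they are hypothetical stand-ins for pose data (motion tracking), a point cloud (depth perception), and a remembered map (area learning).

    from dataclasses import dataclass, field
    from typing import List

    import numpy as np

    @dataclass
    class Pose:                    # motion tracking: where the device is and how it is oriented
        position: np.ndarray       # (x, y, z) in meters
        rotation: np.ndarray       # 3x3 rotation matrix

    @dataclass
    class DepthFrame:              # depth perception: the shape of the surroundings
        points: np.ndarray         # N x 3 point cloud in the device frame

    @dataclass
    class AreaDescription:         # area learning: a remembered map of a space
        landmarks: List[np.ndarray] = field(default_factory=list)

        def remember(self, pose: Pose, depth: DepthFrame) -> None:
            # Store depth points transformed into world coordinates.
            world = depth.points @ pose.rotation.T + pose.position
            self.landmarks.extend(world)

    def place_virtual_object(pose: Pose, offset_m: float = 1.0) -> np.ndarray:
        """Anchor a virtual object one meter in front of the device."""
        forward = pose.rotation @ np.array([0.0, 0.0, -1.0])  # device's forward axis
        return pose.position + offset_m * forward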


Magic Leap

October 26, 2015


Magic Leap, Inc., a developer of novel human computing interfaces and software, announced in a newsletter the recent closing of its Series A round of venture capital. Magic Leap has now raised more than $50 million in its seed and Series A rounds to develop its proprietary technology platform. Magic Leap will use the funds to advance the product development and commercialization of its proprietary human computing interface technology, known as “Cinematic Reality™”.

At Engadget, Mariella Moon states that she can’t decipher exactly what Magic Leap is, but she argues that Magic Leap is:

a headset that superimposes digital images onto the real world. In that respect, it’s similar to Microsoft’s HoloLens, which is just slightly less mysterious (since we’ve actually seen it). But based on the things Abovitz said in his AMA at reddit, like “Our vision for AR and VR is a true replication of visual reality,” there’s a chance that it can also block the outside world entirely with virtual reality. (Update: Rachel Metz confirmed to Engadget on Twitter that it’s capable of doing full VR.)

This points to the reason the company is calling its technology “cinematic reality” rather than AR or VR: it works a bit differently from either of them. Standard AR and VR use stereoscopic 3D, a technique that tricks you into perceiving an object as three-dimensional by showing each eye a slightly different image of the same object, each from its own angle. The Oculus Rift and Samsung’s Gear VR headset are two well-known examples of this technique.
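As a rough illustration of the stereoscopic technique, here is a minimal sketch of deriving a view matrix for each eye from a single head view matrix. The function name and the interpupillary distance value are illustrative assumptions, not part of any headset’s SDK.

    import numpy as np

    def eye_view_matrices(head_view: np.ndarray, ipd_m: float = 0.063):
        """Derive left/right eye view matrices by offsetting the head view
        by half the interpupillary distance (ipd_m, in meters)."""
        half = ipd_m / 2.0
        left_offset = np.eye(4)
        right_offset = np.eye(4)
        left_offset[0, 3] = half     # camera shifts left, so the world shifts right
        right_offset[0, 3] = -half   # camera shifts right, so the world shifts left
        return left_offset @ head_view, right_offset @ head_view

    # Each eye renders the scene with its own matrix; the small horizontal
    # disparity between the two images is what the brain fuses into depth.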


In his AMA, Abovitz revealed that he’s not a fan of stereoscopic 3D and believes it can cause “temporary and/or permanent neurologic deficits.” So instead, Magic Leap uses a Lilliputian projector to shine light and images into the user’s eyes, the startup told Metz of MIT’s Technology Review. Your brain apparently won’t be able to detect the difference between light from the projector and light from the real world: the result is lifelike digital images that show reflections the way real physical objects would.






HoloLens

July 22, 2015

Microsoft HoloLens puts you at the center of a world that blends holograms with reality. With the ability to design and shape holograms, you’ll have a new medium to express your creativity, a more efficient way to teach and learn, and a more effective way to visualize your work and share ideas. Your digital content and creations will be more relevant when they come to life in the world around you.