
This Anthro Life is a round-table, open-format discussion offering an anthropological take on the people, objects, ideas, and possibilities of everyday life around the world. It is hosted by Adam Gamwell and Ryan Collins, and there are nearly 100 episodes to choose from.


The Intuitive Appeal of Explainable Machines

by Andrew D. Selbst (Data & Society Research Institute; Yale Information Society Project) and Solon Barocas (Cornell University)
February 19, 2018, 59 pages


As algorithmic decision-making has become synonymous with inexplicable decision-making, we have become obsessed with opening the black box. This Article responds to a growing chorus of legal scholars and policymakers demanding explainable machines. Their instinct makes sense; what is unexplainable is usually unaccountable. But the calls for explanation are a reaction to two distinct but often conflated properties of machine-learning models: inscrutability and non-intuitiveness. Inscrutability makes one unable to fully grasp the model, while non-intuitiveness means one cannot understand why the model’s rules are what they are. Solving inscrutability alone will not resolve law and policy concerns; accountability relates not merely to how models work, but whether they are justified.

In this Article, we first explain what makes models inscrutable as a technical matter. We then explore two important examples of existing regulation-by-explanation and techniques within machine learning for explaining inscrutable decisions. We show that while these techniques might allow machine learning to comply with existing laws, compliance will rarely be enough to assess whether decision-making rests on a justifiable basis.

We argue that calls for explainable machines have failed to recognize the connection between intuition and evaluation and the limitations of such an approach. A belief in the value of explanation for justification assumes that if only a model is explained, problems will reveal themselves intuitively. Machine learning, however, can uncover relationships that are both non-intuitive and legitimate, frustrating this mode of normative assessment. If justification requires understanding why the model’s rules are what they are, we should seek explanations of the process behind a model’s development and use, not just explanations of the model itself. This Article illuminates the explanation-intuition dynamic and offers documentation as an alternative approach to evaluating machine learning models.


Daniel Kahneman has referred to the human mind as a “machine for jumping to conclusions.” Intuition is a basic component of human reasoning, and reasoning about the law is no different. It should therefore not be surprising that we are suspicious of strange relationships in models that admit of no intuitive explanation at all. The natural inclination at this point is to regulate machine learning such that its outputs comport with intuition.

This has led to calls for regulation by explanation. Inscrutability is the property of machine learning models that is seen as the problem, and the target of the majority of proposed remedies. The legal and technical work addressing the problem of inscrutability has been motivated by different beliefs about the utility of explanations: inherent value, enabling action, and providing a way to evaluate the basis of decision-making. While the first two rationales may have their own merits, the law has more substantial and concrete concerns that must be addressed. But those that believe solving inscrutability provides a path to normative evaluation also fall short of the goal because they fail to recognize the role of intuition.

Solving inscrutability is a necessary step, but the limitations of intuition will prevent such assessment in many cases. Where intuition fails us, the task should be to find new ways to regulate machine learning so that it remains accountable. Otherwise, if we maintain an affirmative requirement for intuitive relationships, we will potentially lose out on many discoveries and opportunities that machine learning can offer, including those that would reduce bias and discrimination.

Just as restricting our evaluation to intuition will be costly, so would abandoning it entirely. Intuition serves as an important check that cannot be provided by quantitative modes of validation. But while there will always be a role for intuition, we will not always be able to use intuition to bypass the question of why the rules are the rules. Sometimes we need the developers to show their work.

Documentation can relate the subjective choices involved in applying machine learning to the normative goals of substantive law. Much of the discussion surrounding models implicates important policy discussions, but does so indirectly. Often, when models are employed to change our way of making decisions, we tend to focus too much on the technology itself, when we should be focused on the policy changes that either led to the adoption of the technology or were wrought by the adoption. Quite aside from correcting one failure mode of intuition, then, the documentation has a separate worth in laying bare the kinds of value judgments that go into designing these systems, and allowing society to engage in a clearer normative debate in the future.

We cannot and should not abandon intuition. But only by recognizing the role intuition plays in our normative reasoning can we recognize that there are other ways. To complement intuition, we need to ask whether people have made reasonable judgements about competing values under their real-world constraints. Only humans know the answer.

2D vs 3D

October 8, 2017

Given the lack of studies that have systematically examined the perceptual cues our brains use to rapidly process procedural tasks, Meta decided to partner with Accenture Labs on a pilot study examining the use of perceptual cues in AR. More specifically, they wanted to measure the effect that an additional perceptual cue (motion) would have on the time it takes to complete a procedural task. The team operated under the hypothesis that integrating both stereo and motion perceptual cues could further reduce the limitations of 2D instructions, ultimately enabling people to complete a procedural task more quickly.

At the pre-race expo for this year’s Bay to Breakers, the colorful annual footrace in San Francisco, California, the team of Meta and Accenture researchers set up the procedural task of assembling a physical lighthouse Lego set.

They defined three conditions based on the different types of instructions participants were to receive: 2D Paper, Holographic Static 3D (Stereo Cue), and Holographic Dynamic 3D (Stereo & Motion Cues).

Comparing the three instruction conditions, they found that Dynamic 3D Instructions enabled participants to complete each step most quickly, supporting the hypothesis that combining stereo and motion perceptual cues in AR instructions speeds up assembly time. Interestingly, participants using Static 3D Instructions were the slowest of the three conditions, slower even than those using 2D Paper Instructions. This was especially surprising to the researchers because, based on past studies conducted in 2003 and 2013, they expected people using any kind of 3D instructions to complete the Lego building task more quickly than those using 2D Paper Instructions.
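The comparison the researchers ran can be sketched in a few lines of code. The numbers below are invented for illustration only (they are not the study's actual measurements); they simply reproduce the ordering the team reported, with Dynamic 3D fastest and Static 3D slowest:

```python
# Hypothetical sketch: per-participant assembly times (in seconds),
# grouped by instruction condition. All values are illustrative.
from statistics import mean

times = {
    "2D Paper": [310, 295, 330, 305],
    "Static 3D": [340, 355, 325, 360],
    "Dynamic 3D": [240, 255, 230, 250],
}

# Average completion time per condition.
averages = {cond: mean(ts) for cond, ts in times.items()}

# Rank conditions from fastest to slowest.
for cond, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{cond}: {avg:.1f} s")
```

With these invented values, the printout ranks Dynamic 3D first and Static 3D last, mirroring the surprising result that static holographic instructions underperformed even paper.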




Mozilla’s Open Innovation Toolkit is a community-sourced set of practices and principles for incorporating human-centered design into your product development process. It provides a collection of easy-to-use, self-serve techniques and methods gathered from industry best practices in innovation. Whether you have a new idea or a working prototype to test, the Open Innovation Toolkit may help.



Augmented Narratives

August 7, 2017

I started my new research project on Augmented Narratives, which will involve the META2 and OCTAGON platforms. For users, good UX design for Augmented Reality platforms should facilitate physical and psychological immersion in the mediated experience. A holistic, multi-dimensional approach that incorporates qualitative experience and a deep understanding of the psychological aspects of optimum user experience is imperative for such environments to be successful.

The creation of such a flexible, holistic, and enveloping environment, one that allows well-tuned variations and personalized adjustments, requires new forms of digital storytelling and the application of new user experience design paradigms based on a deep knowledge of the users’ data-scape. How can we assess and organize these new worlds in order to create the best experiences?



Ethnography is the study of people and cultures, and ethnographic research is imperative to design research: how does a group relate to or understand a product, what are that group’s needs, and what are the tech trends in that group? Designer Caroline Sinders explains in Fast Co.Design that we need data ethnography, a term she defines as the study of the data that feeds technology, examined from a cultural perspective as well as a data-science perspective. Data ethnography is a narrower, but no less crucial, field: data is a reflection of society, and it is not neutral; it is as complex as the people who make it.

Data and artificial intelligence systems are a civil issue, a civic issue, and a human issue. Understanding that data is complicit in how AI works is a step toward making equitable technology systems. Imagine an open-source, transparent data ethnography group that combines the skill sets of data scientists and ethnographers, and imagine the kind of change that could create.


Sakari Tamminen and Elisabet Holmgren of the Finnish/US innovation agency Gemic published a paper at EPIC entitled “The Anthropology of Wearables: The Self, The Social, and the Autobiographical”.

A wide range of new digital products lumped together under the category of ‘wearables’ or ‘wearable technology’ raises fundamental questions about the way we think about our individual bodies and the species Homo sapiens. The paper traces three different relationships to what are called ‘wearables’ and extends the notion to cover all material technologies, beyond pure ‘hi-tech’ products, that mediate the relations between our various embodied practices and the world. The paper thus develops a general cultural approach to wearables, informed by empirical examples from the US and China, and ends by mapping valuable design spaces for the next generation of digital technologies that are getting closer to our bodies and our skin, even venturing beneath it.

In their paper, the authors state that ‘wearables’ should not be defined primarily through their form factors (technological objects one can ‘wear’) or technical functions such as monitoring or nudging. Rather, wearable technology should be understood in terms of the relationship these devices have to our bodies, our social selves, and our personal identities, in order to arrive at more useful insights about the role of these technologies in our lives.

Tamminen has just summarised his January article in a shorter piece entitled “Reconsidering the Value of Wearables”.


See also the interview with Intel anthropologist Todd Harple on fashion tech at putting things first.