Coded Portraits

Towards understanding, appropriating, and reclaiming machine vision

This is what I look like to a machine.

Each of these images is the result of a machine asking—and answering—the question, “what is a person?”

These ways of seeing embody their own kinds of logic.

In the logic of this algorithm, I am seen as a series of boxes that can be recognized and named.

This logic determines that I am a group of points defined in relation to one another, which can be used to guess my emotional state.

And in the logic that informs this way of seeing, I am a series of separate pieces—hair, left eye, right eye, upper lip, and so on—that can be broken apart and examined individually.

But “see” isn’t quite the right word. This image is a much closer representation of how the machine “sees” me. For the world—for us—to be legible to a machine, we need to be translated into numbers.

These portraits, then, are an interface mediating between the machine’s way of seeing and our own. Their particular aesthetic qualities emerge from a collaboration between scientists and the software they build, as they attempt to make the world and its inhabitants machine-readable.

What if we could re-code these images with different kinds of logic?

@daintyfunk, CV Dazzle, the clown, 2020

Wendy Red Star, Déaxitchish/Pretty Eagle, 2014

1. Momtaza Mehri, “The Beautiful Ones,” Real Life, 2017

2. Joy Buolamwini, Stephanie Dinkins, Dainty Funk (Maud Acheampong), Shawné Michaelain Holloway, Wendy Red Star, Olivia M. Ross, Stephanie Syjuco

This question is the seed for a series of self-portraits exploring the tensions between automated and human ways of seeing and knowing.

As a cis white woman, I’m far from an expert on the dangers and possibilities of visibility. For marginalized and racialized communities, visibility and invisibility can carry great risk. And recognition can be powerful. As poet Momtaza Mehri writes, “we devise and circulate our own nomenclature. Our own ways of being seen.”1

This work is inspired by and in dialogue with femme and nonbinary Black, Indigenous, and Filipinx artists2 who create re-coded portraits—ones that go beyond camouflage and, instead, claim space and complexity.

Below, I propose a framework for understanding, appropriating, and reclaiming what Joy Buolamwini has termed “the coded gaze.”

Stephanie Syjuco, Cargo Cults (Cover-Up), 2016

Olivia M. Ross, @cyberdoula, 2020

Joy Buolamwini, still from The Coded Gaze: Unmasking Algorithmic Bias, 2016

Experiments in re-coding

What would it mean to recreate these invasive forms of vision? What ways of knowing are illegible or inaccessible to a machine? In what ways do I want to be known, to myself and to others?

These experiments are a dialogue between my original photograph, the ways it was “seen” and interpreted by a machine, and my response to the gaps and tensions that this “seeing” exposed.

Ultimately, the re-coded portraits conceal as well as reveal: the altered images are no longer recognizable as human to facial recognition algorithms.

A framework for re-coding

1. Select a photograph.

2. Choose a type of automated recognition to explore.

3. Choose a re-coding prompt.

4. Re-create the way a machine sees your image. Using the prompt as a guide, add information and context that reflects what you consent to share about yourself.

Type of machine vision: Recognition

Re-coding prompt: Self-presentation

With ‘self-presentation’ as a re-coding prompt, I used the areas highlighted by the algorithm to add context about how I chose to represent myself—both in person (my hairstyle and clothing) and in the photograph itself (my expression and pose).

Model: DenseCap (github)
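For readers who want to try a similar re-coding in code, the sketch below shows the basic gesture with the Pillow imaging library: keep the regions a dense-captioning model highlighted, but overwrite its captions with your own words. The region tuples, the replacement text, and the file name portrait.jpg are hypothetical placeholders, not DenseCap’s actual output format.

from PIL import Image, ImageDraw

# Hypothetical regions standing in for a dense-captioning model's output:
# (x, y, width, height, machine_caption)
regions = [
    (40, 30, 220, 260, "a woman with long hair"),
    (90, 120, 140, 60, "a striped shirt"),
]

# Replacement captions, keyed by the machine caption they displace
# (placeholder examples of what one might consent to share).
recoded = {
    "a woman with long hair": "the haircut I chose for myself",
    "a striped shirt": "a shirt borrowed from a friend",
}

img = Image.open("portrait.jpg").convert("RGB")
draw = ImageDraw.Draw(img)
for x, y, w, h, caption in regions:
    # Keep the machine's box, but annotate it with your own words.
    draw.rectangle([x, y, x + w, y + h], outline="white", width=2)
    draw.text((x + 4, y + 4), recoded.get(caption, caption), fill="white")
img.save("recoded_portrait.jpg")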

Type of machine vision: Emotion detection

Re-coding prompt: Memory

Using ‘memory’ as a re-coding prompt, I answered the questions the algorithm asks (e.g. which emotions are shown? what is visible?) with my own recollections.

Model: Google Vision AI (demo)
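For those curious about the questions themselves, here is a minimal sketch of asking them programmatically, assuming the google-cloud-vision Python client is installed and authenticated and a local file named portrait.jpg; field names follow recent versions of that client.

from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("portrait.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# "Which emotions are shown?" is answered per detected face, as a likelihood
# on a fixed scale (VERY_UNLIKELY through VERY_LIKELY) for a few emotions.
response = client.face_detection(image=image)
for face in response.face_annotations:
    print("joy:", face.joy_likelihood)
    print("sorrow:", face.sorrow_likelihood)
    print("anger:", face.anger_likelihood)
    print("surprise:", face.surprise_likelihood)

# "What is visible?" is answered as a ranked list of labels with scores.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))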

Type of machine vision: Segmentation

Re-coding prompt: Sensory

To re-code the form of machine vision that breaks my face apart into separate, named pieces, I used ‘sensory’ as a prompt to augment the legend with sense memories specific to each of my body’s parts.

Model: Face-Parser (github)
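As a rough illustration of how that legend could be rebuilt in code, the sketch below assumes a face-parsing model has already produced a per-pixel label map saved as portrait_labels.npy; the class indices, part names, and sense memories are placeholders rather than Face-Parser’s actual label scheme.

import numpy as np

# Placeholder mapping from class index to part name; real face-parsing
# models (often trained on CelebAMask-HQ) define their own set of classes.
PART_NAMES = {1: "skin", 2: "left eye", 3: "right eye", 4: "upper lip", 5: "hair"}

# Placeholder sensory annotations, keyed by part name.
sense_memories = {
    "hair": "the smell of chlorine after a swim",
    "upper lip": "cold winter air on the walk home",
}

label_map = np.load("portrait_labels.npy")  # HxW array of class indices

# Rebuild the legend: pair each part the model found with a sense memory.
for index, name in PART_NAMES.items():
    mask = label_map == index
    if mask.any():
        coverage = 100 * mask.mean()
        note = sense_memories.get(name, "")
        print(f"{name}: {coverage:.1f}% of the image. {note}".strip())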

About

This project was created by Livia Foldes. It was developed in collaboration with Jasmin Liang and Franziska Mack, with generous feedback from Shannon Mattern, Melissa Friedling, and Richard The.

Bibliography

Agostinho, Daniela. “Chroma Key Dreams: Algorithmic Visibility, Fleshy Images and Scenes of Recognition.” Philosophy of Photography, vol. 9, no. 2, 1 Oct. 2018, pp. 131–155, 10.1386/pop.9.2.131_1.

Ajana, Btihaj. Governing through Biometrics: The Biopolitics of Identity. Palgrave Macmillan, 2013.

Anderson, Steve F. Technologies of Vision: The War between Data and Images. Cambridge, Massachusetts, The MIT Press, 2017.

Browne, Simone. Dark Matters: On the Surveillance of Blackness. Durham, Duke University Press, 2015.

Campt, Tina M. Listening to Images. Durham, Duke University Press, 2017.

Daub, Adrian. “The Return of the Face.” Longreads, 3 Oct. 2018.

Crawford, Kate, and Roel Dobbe. AI Now 2019 Report. AI Now Institute, 2019.

House, Brian. “Stalking the Smart City.” Urban Omnibus, 2 May 2019.

Lehmann, Claire. “Color Goes Electric.” Triple Canopy, 31 May 2016.

Levin, Boaz, and Vera Tollmann. “Bunker-Face.” transmediale.de, 2018.

Mattern, Shannon. “All Eyes on the Border.” Places Journal, 25 Sept. 2018.

Mehri, Momtaza. “The Beautiful Ones.” Real Life, 16 Mar. 2017.

Pipkin, Everest. “On Lacework: Watching an Entire Machine-Learning Dataset.” Unthinking Photography, July 2020.

Robertson, Hamish, and Joanne Travaglia. “Big Data Problems We Face Today Can Be Traced to the Social Ordering Practices of the 19th Century.” Impact of Social Sciences, 13 Oct. 2015.

Roth, Lorna. “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity.” Canadian Journal of Communication, vol. 34, no. 1, 28 Mar. 2009, 10.22230/cjc.2009v34n1a2196.

Schmitt, Philipp. “Tunnel Vision.” Unthinking Photography, Apr. 2020.

Sharpe, Christina Elizabeth. In the Wake: On Blackness and Being. Durham, Duke University Press, 2016.

Slevin, Tom. “Vision, Revelation, Violence: Technology and Expanded Perception within Photographic History.” Philosophy of Photography, vol. 9, no. 1, 1 Apr. 2018, pp. 53–70, 10.1386/pop.9.1.53_1.

Steyerl, Hito. “In Defense of the Poor Image.” e-flux, 2009.

Sun Kim, Christine. “Artist Christine Sun Kim Rewrites Closed Captions.” Pop-Up Magazine, 13 Oct. 2020.

Woodall, Richard. “Lying Eyes.” Real Life, 30 Jan. 2020.