Empathy Translator

Design Intervention + Media Premise

The Empathy Translator is a machine-learning prototype for an imagined tablet application used by healthcare professionals to view and maintain patient records through a smart data-visualization filter, intended to maximize their ability to imagine the patient’s discomfort.

AI is already in common commercial use to outsource emotional labor. Most visibly, machine-learning algorithms calculate a personality profile from an individual’s online activity, enabling marketers to tailor web ads to that person’s interests and shopping patterns. You could expect the ads that appear in the Instagram feed of a 14-year-old boy to differ significantly from those in the feed of a 40-year-old stay-at-home mom. This project is premised on the possibility that a doctor’s personality or worldview (a gendered one, for example (Billock 1)) could inhibit their ability to interpret a patient and ultimately lead to medical care that is not in the patient’s best interest. It aspires to intervene by subverting the user data normally aggregated by marketers, turning it into a means for those users to demand a more personal quality of medical care from their doctors. Beyond its intended use by doctors, this project was inspired by the social media accounts of chronically unwell people who use those platforms primarily to document their illnesses. I became aware of this demographic when a friend started using her Instagram to document her struggles with CRPS.

Design Problems

The Empathy Translator is a design-thinking exercise in imagining ways to intervene at the intersection of two problems - the inadequate accessibility of information on the internet, and the boxed-in models available for imagining pain in healthcare - from the perspective that the two relate to each other as players in “the migration of cultural materials into networked environments” (Burdick and Drucker 3).

Problem A

The internet is a tool with singular potential to make and share knowledge all over the world in a transcendently enlightening way. Although the internet has indisputably enabled radical new ways of making and sharing knowledge (Smith vii), it hasn’t done so in a way that even approaches its full potential. I believe the information accessible to a typical search-engine user is marred by at least three failings of digital data-sharing culture:

1. Peer-reviewed content is often accessible only to people belonging to certain organizations or for a potentially exorbitant cost.

2. The nature of the data itself and the hierarchy in which a search engine presents it often has more to do with behind-the-scenes sponsorships than with the empirical objectives of the inquiry.

3. There’s no easy way for the average user to synthesize these bodies of biased data from different sources to cobble together something closer to a holistic snapshot.

+ Problem B

In a country where everything is a commodity, including (or even especially) medical care, there are some serious problems with the one-sided ownership of data. American healthcare is a jumble of forced solutions to a diverse assortment of more or less unsolvable variables (Gawande 8-9), a consequence of which is that American medical patients often receive pre-packaged and impersonal care from their providers. Pain scales like the McGill Pain Index are among these pre-packaged care offerings (though the McGill is on the more sophisticated side).

= Informational Problem

The relationship I see between these two seemingly unrelated problems is this: both have to do with the politics of information design.

It used to be that research was done using information contained in books, with books being contained in libraries. Library scientists were the professionals who facilitated the retrieval of information in this system. But now, in the world of ubicomp, information is no longer confined to books; it’s in every “smart object.” Everything is a library, and everyone is a librarian. Traditional library science is no longer adequate for taming information in this brave new world. I like the way information architect Abby Covert frames this in her book How to Make Sense of Any Mess. “…information,” she explains, “isn’t a thing. It’s subjective, not objective. It’s whatever the user interprets from the arrangement or sequence of things they encounter” (Covert 20).

Library scientist R. David Lankes, author of the Atlas of New Librarianship, proposes a professional approach to information in this paradigm in which the sense-maker lets the metaphorical horses remain wild instead of trying to break them. He writes, “Rather than cataloging artifacts and assuming they are self-contained, we need to build systems that focus on the relationships” (Lankes 141).

This state of information science poses some intimidating/exciting challenges/opportunities addressed in burgeoning design fields like new librarianship and the digital humanities. These practices embrace the role of technology in manufacturing the informational chaos we live in and make spaces for non-deterministic, computer-assisted approaches to creatively structure and restructure pieces of it.

A digital humanities approach using AI to build new models for thinking about pain excites me for a couple of reasons. Digitized medical research can be obscured behind immense paywalls; so even though the content could potentially be accessed by anyone online, effectively it will only be accessed by medical students and doctors. The digitization of medical knowledge creates a technological extension of an analog power hierarchy. Theoretically, the doctor can use their proximity to privileged knowledge as a tool to exercise authority over patients rather than working more collaboratively with them. The patient’s lack of access limits how they can and will be allowed to participate in conversations about their own medical care; moreover, it means they can’t freely contribute to the bodies of knowledge being made and used by their doctors - and so the hierarchy lives on, augmented now by the internet.

So many people have a device on their desks and in their pockets that makes research and education possible immediately, at an immense scale. The internet could be a School of Athens-esque student’s paradise, but I’d say it’s more of a Garden of Earthly Delights academic obstacle course. Although computer technology enables and amplifies the unequal distribution of academic capital, it’s important to remember that it is only a tool: people are the culprits behind its abuse. My goal with this project was to investigate if and how a subversion of AI could help the consumers and producers of digital content co-create better, non-deterministic bodies of knowledge, bridging the power gap created by unequal access to information.


Intended Technological Function

The imagined application gives the patient the option to link a social media account to their medical chart; it also aggregates personality data about the doctor viewing the chart. My hope, essentially, was to iterate visually and semantically on the McGill Pain Index (MPI) using this sequence of steps:

First I’d build a neural network trained on the MPI model (or one like it).

Then I’d sort a larger body of data through that MPI filter.

Next, I’d parse that filtered body of data through pre-existing neural networks in common use, such as IBM’s sentiment analysis and personality insights modules.

After making observations about this second generation of filtered data, I’d invent two or three reinterpretations of the MPI whose legibility varies depending on the temperament of the person doing the interpreting.

Finally, the ML module would synthesize the patient data into a written report and visualization expressing the pain symptoms described in that data in language tailored to the doctor’s personality. The hope is that such a system could enable a doctor to provide compassionate, respectful treatment in spite of disparate frames of reference, and create opportunities for a more collaborative relationship between doctor and patient.
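The tailoring step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the project's actual module: the function name, the symptom strings, the profile format (loosely modeled on Big Five percentiles like those IBM Personality Insights returns), and the thresholds are all invented for the example.

```python
# Hypothetical sketch of the report-tailoring step. The profile dict loosely
# mirrors Big Five percentiles (0.0-1.0); names and thresholds are
# illustrative assumptions, not the real system.

def tailor_report(symptoms, doctor_profile):
    """Render the same symptom data in language suited to the reader."""
    empathetic = doctor_profile.get("agreeableness", 0.5) >= 0.5
    analytic = doctor_profile.get("openness", 0.5) >= 0.7

    lines = []
    for s in symptoms:
        if empathetic:
            # Person-centered phrasing for a high-agreeableness reader.
            lines.append(f"The patient describes living with {s}.")
        else:
            # Terse, clinical phrasing otherwise.
            lines.append(f"Reported symptom: {s}.")
    if analytic:
        # Add a summary statistic for a reader who favors abstraction.
        lines.append(f"{len(symptoms)} distinct symptom(s) extracted from the linked account.")
    return "\n".join(lines)


report = tailor_report(
    ["burning pain in the left hand", "sleep disruption"],
    {"agreeableness": 0.82, "openness": 0.4},
)
```

The point of the sketch is only that the underlying data stays constant while the rendering varies with the reader's profile.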

Technological Approach + Delivered Function

I encountered a number of obstacles that forced me to change my initial strategy.

First I’d build a neural network trained on the MPI model (or one like it).

I went about this by aggregating tweets containing hashtags related to pain. I tried working with data about two types of pain: the physical pain of a toothache (using tags like #toothache, #cavity, #rootcanal) and the psychological pain of depression (#depression, #depressed, #imdepressed, #mentalhealth). What I found when preparing this data to train my first NN was a very frustrating validation of my initial design problem: the data was junk. I scraped between 500 and over 1,000 tweets per queried tag, only to find in every case that substantially more than half of them were advertisements. This meant that extracting enough good data was not practical given my production schedule, even after training a preliminary NN to filter out the ads.
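The preliminary ad-filtering network isn't reproduced here, but the triage it performed can be illustrated with a simple keyword heuristic. The sample tweets and the marker list below are invented for the example; a real filter would need a learned model, since ads rarely announce themselves this plainly.

```python
# Minimal sketch of filtering promotional tweets out of hashtag-scraped data.
# Markers and sample tweets are invented; this stands in for the preliminary
# ad-filtering NN mentioned in the text.

AD_MARKERS = {"discount", "promo", "order now", "free shipping", "whitening kit"}

def looks_like_ad(tweet_text):
    """Flag a tweet if it contains any promotional marker phrase."""
    text = tweet_text.lower()
    return any(marker in text for marker in AD_MARKERS)

def filter_ads(tweets):
    """Keep only the tweets that don't look like advertisements."""
    return [t for t in tweets if not looks_like_ad(t)]

sample = [
    "day three of this #toothache and I can't sleep",
    "50% discount on our whitening kit! #toothache",
    "finally scheduled the #rootcanal, wish me luck",
]
kept = filter_ads(sample)  # drops the promotional tweet
```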

Then I’d sort a larger body of data through that MPI filter.

So rather than aggregating content by tag, I decided to scrape all the content from a single Twitter profile. After some informal experiments with the profiles of real people who write about their pain, I realized that not only would it be unethical to appropriate this content, it would be grossly irresponsible to process it through my very bastardized understanding of a mental model used in a field in which I am far from an expert.

Next, I’d parse that filtered body of data through pre-existing neural networks in common use, such as IBM’s sentiment analysis and personality insights modules.

At this point I ditched the MPI, and the project became about how a patient and doctor communicate through their personalities in general, not about pain specifically. I did this by running a diverse selection of Twitter profiles (Gertrude Stein as the patient, four Muppets as the doctors) through IBM Personality Insights. Exploration of the technology took a firm lead over the conceptual investigation I’d started with.
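For readers unfamiliar with the service, the useful part of a Personality Insights response is a list of Big Five traits, each with a percentile. The sketch below parses a mock response in that shape; the field names follow the documented v3 response format, but the numbers (and the offline stand-in for the now-retired live service) are invented for illustration.

```python
# Sketch of extracting Big Five percentiles from a Watson Personality
# Insights v3-style response. `mock_profile` stands in for the JSON the live
# service returned; field names follow the documented shape, values are invented.

mock_profile = {
    "personality": [
        {"trait_id": "big5_openness", "name": "Openness", "percentile": 0.91},
        {"trait_id": "big5_agreeableness", "name": "Agreeableness", "percentile": 0.34},
    ]
}

def big_five(profile):
    """Map each top-level Big Five trait name to its percentile."""
    return {t["name"]: t["percentile"] for t in profile["personality"]}

scores = big_five(mock_profile)
```

A dictionary like `scores` is what the comparison between the "patient" and "doctor" profiles would operate on.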

After making observations about this second generation of filtered data, I’d invent two or three reinterpretations of the MPI whose legibility varies depending on the temperament of the person doing the interpreting.

I did create many reinterpretations of the information I sorted, but the avenue that brought me to that point was very different from the one I had imagined.


Conceptual Results + Future Iterations

Ultimately, given my newness to neural networks, the ontological heft of the topics I wanted to work with, and my lack of expertise in those topics, the ambition of the project as I initially envisioned it vastly exceeded what I could achieve in a few weeks as an individual rather than as part of a transdisciplinary team. That said, as I became increasingly aware of this, I was able to adjust my expectations and keep my enthusiasm for the work and its implications steady. The initial idea was built on a complicated conceptual narrative, but from the beginning I knew that, on a personal level, what I primarily wanted from the project was to become acquainted with the technology. Now that I have, I’m interested in continuing to work toward the conceptual objectives I started with.

Materials Used:

  • IBM Watson Studio - Personality Insights - Machine Learning

  • Twitter Developer

  • Twelts (Chrome extension)

  • Python

  • Jupyter Notebook

  • Matplotlib

  • Adobe Illustrator

Project Brief:

Create a project that reflects a future world where people and AI systems co-exist and interact on an everyday, casual basis. Design a particular scenario and/or question and play it out in a detailed way to discover new things about AI and people. How will people and ubiquitous smart things have relationships, collaborate, cooperate, understand each other, and generally co-exist? How will this interaction work as people and things get to know each other, come and go, and change? What will the diverse and multiple communities of people and other intelligent systems be like in the mundane, day-to-day future?