
It’s Not Sci-Fi: Giving Language to the Harms AI is Creating, Now

Maybe you’ve been in this scenario: you’re talking to a friend or family member about your work. You mention that you work with incredible leaders pushing to govern Big Tech better. Your friend, mom, or nephew asks you: why? Why should I care if Instagram is listening to me?

Advocates calling for regulation and governance of artificial intelligence (AI), both generative and predictive, have named the importance of communicating about the current, real-world harms Big Tech is perpetuating and profiting from. Too often, surveillance is dismissed as an everyday occurrence (a platform listening in on your conversations and then serving you tailored ads) or, perhaps worse, technology is cast as something that will either save humanity or ruin it (think Ex Machina, Terminator: robots that either take over or, plot twist, cure cancer!).

Describing an issue with real examples of how it harms people today is essential for bringing your audiences toward the specific solution you’re proposing. The way we describe any issue sets a listener up to understand what should be done about it and, ultimately, who is responsible for solving that problem.

When we perceive an issue as larger than life, rooted in the future or impossible to solve (read: sentient robots), it can feel difficult to hold accountable the people and companies responsible for current, real-world harms. But together we can, by describing the problem as solvable and naming the people responsible for solving it.

Spitfire put together a fact sheet that includes talking points and story examples of current-day, real-world harms from AI. The fact sheet can serve as a tool for those naming the need for greater governance of Big Tech to incorporate into their work and make their own.

The harms of AI and algorithmic systems are varied and, sadly, well established. They can be seen across nearly every aspect of our lives, every region of the country and around the world: 

  • From the surveillance and over-policing of Black and brown communities,

  • To catastrophic instances of mistaken identity that lead to police shootings and wrongful arrests,

  • To the criminalization of youth based on common, nonviolent behaviors in school,

  • To pushing housing further out of reach for Black and brown people,

  • To discriminating against workers with disabilities in the hiring process and on the job, 

  • To devaluing human labor,

  • To undermining human creativity and stealing the work of writers, artists and creators,

  • To exacerbating the climate crisis.

These harms are the consequences of letting Big Tech self-regulate, and self-regulation doesn’t work. We need strong civil rights protections, and nonuse needs to be a meaningful option. For that, advocates need winning narratives that effectively frame the issues technology poses as solvable, and then go beyond naming what’s wrong to name what’s possible.

When communicating about technology justice, establishing the problem is important, and the frame we use matters: framing the problem as real-world, current and solvable, rather than future-oriented with ambiguous responsibility, is essential. But don’t stop there in your communications. We encourage advocates to use this resource in tandem with a solid messaging foundation: name a shared value before introducing the issue, name a clear solution, and always communicate what’s possible when technology works for people and not the other way around.

Click here to download the full guide
