Episode 9: Azza Altiraifi
As the use of surveillance technologies continues to rise, so does their reach into our day-to-day lives, from education to employment, web searches to doorbells. Numerous studies have shown that surveillance technologies are inherently biased and discriminatory, and that's especially true for people with disabilities.
This episode features Azza Altiraifi, a disabled organizer and researcher, and Senior Program Manager at a progressive economic messaging organization. Previously, Azza was a research and advocacy manager at the Center for American Progress's disability policy initiative, where she spearheaded advocacy campaigns and researched and published articles on mental health policy, surveillance and advancing economic security for disabled people.
Episode 8: lydia x.z. brown
State governments are increasingly relying on AI tools and systems to determine whether people qualify for public benefits and to what extent they receive them. For people with disabilities, this can mean losing critical support without warning or explanation. Algorithms are designed to make decisions based on patterns, but disabilities are diverse, nuanced and sometimes not even physically apparent.
This episode features lydia x.z. brown, policy counsel with the Center for Democracy and Technology’s Privacy and Data Project, focused on disability rights and algorithmic fairness and justice. Their work investigates algorithmic harm and injustice in public benefits determinations, hiring algorithms and algorithmic surveillance that disproportionately impact disabled people, particularly multiply marginalized disabled people.
Episode 7: Alex Givens
In recent years, more and more companies — big and small — have deployed AI-powered tools in the workplace. While these tools are ostensibly intended to make hiring and supervising workers easier for managers, there's tremendous risk of discrimination embedded within what is effectively automated surveillance technology. The harms of algorithmic bias, the systematic discrimination born of artificial intelligence software, are becoming better known. What is less familiar are the deep systemic harms AI can have on people with disabilities.
This episode features Alex Givens, president and CEO at the Center for Democracy and Technology, which works to promote democratic values by shaping technology policy and architecture. Alex is an advocate for using technology to increase equality, amplify voices and promote human rights.
Episode 6: Alvaro Bedoya
The aftermath of the violence at the U.S. Capitol on January 6th has driven calls from policymakers and the press for expanding the use of surveillance and facial recognition technologies, which has civil rights and justice advocates concerned.
Though the use of these technologies has many feeling that the perpetrators of the insurrection are being brought to justice, advocates worry that — especially in the hands of police — their use will only reinforce a pattern of discrimination, surveillance, over-policing and censorship of communities of color, oftentimes those working to build a more just society.
Today's guest, Alvaro Bedoya, is Founding Director of the Center on Privacy & Technology at Georgetown Law School, where he is also a Visiting Professor of Law and Director of the Federal Legislation Clinic.
Episode 5: Steven Renderos
The stories shared through media and technology platforms hold power in shaping our culture and our understanding of people and communities who are often underrepresented. At a time when misinformation and what my guest today calls “organized lies” overwhelmingly move into the mainstream, it’s important we take a look at who owns these stories and who shapes the narrative.
This episode features a conversation with Steven Renderos, Executive Director of MediaJustice, a national racial justice hub fighting for a future in which all people of color are connected, represented, and free. Steven was previously MediaJustice’s longtime Campaign Director, leading initiatives for prison phone justice and net neutrality, fighting giant corporate mergers and pushing for platform accountability measures.
Episode 4: Brandi Collins-Dexter
As we usher in a new presidential administration, how can we continue working to hold technology and media accountable for aiding the spread of false information and hate speech plaguing our society? How does the intersection of technology, media and race influence culture?
This episode features a conversation with Brandi Collins-Dexter, Visiting Fellow at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School of Government. She is also a senior fellow at Color Of Change, the nation’s largest online racial justice organization, where she formerly served as Senior Campaign Director.
Episode 3: Mutale Nkonde
In the wake of the police murders of George Floyd, Breonna Taylor, Tony McDade, Elijah McClain and so many other Black people, our society faces a reckoning, 400 years overdue, about anti-Black violence and white supremacy.
Tech companies are beginning to express their support for racial justice. But we need more than their words: Black people continue to be underrepresented in tech while disproportionately impacted by anti-Blackness hardwired into our algorithms and artificial intelligence.
This episode features a conversation with Mutale Nkonde, CEO of AI for the People.
Episode 2: Hannah Sassaman
In the midst of COVID-19 and uprisings calling for the end of police violence across the U.S. and around the world, lawmakers and leaders are turning to technology for a cheap and decisive solution. But what should we do when these solutions increase surveillance, unjustly placing eyes on Black and Brown people?
This episode features a conversation with Hannah Sassaman, policy director at the Movement Alliance Project (MAP). Based in Philadelphia, MAP connects communities and builds power for working families at the intersection of race, technology and inequality.
Episode 1: Neema Singh Guliani
Technology has the power to foster connection, community and learning, and to promote equity and justice. But it can just as easily be used as a tool for surveillance, division and discrimination, and to amplify inequality.
We know that Amazon’s facial recognition software has difficulty identifying female and darker-skinned faces. Studies have shown that AI technology used for job recruiting often favors male candidates, as the AI models and algorithms are developed and tested using men’s resumes.
In this episode Jen chats with Neema Singh Guliani, senior legislative counsel at the American Civil Liberties Union, about the consequences of this phenomenon.