Whether it’s driverless cars, improving healthcare, or optimizing logistics, AI has become a transformative force across the globe. Yet the same algorithms that make our lives easier can be fraught with controversy surrounding their application. The increasing use of AI has stoked fears of job loss, fake media and misinformation campaigns, and pervasive surveillance. Because the datasets used in machine learning often contain bias, applications such as predictive policing and facial recognition can encode racial, gender, and other biases that disproportionately impact already vulnerable populations.
However, activists and community leaders are tackling issues of ethics in data and artificial intelligence, working diligently to identify concerns and promote inclusion and transparency. Here are a few of those voices.
Safiya Umoja Noble, PhD – Bestselling Author, Algorithms of Oppression: How Search Engines Reinforce Racism
Dr. Safiya Umoja Noble is an Associate Professor in the Department of Information Studies at the University of California, Los Angeles, where she serves as Co-Director of the UCLA Center for Critical Internet Inquiry. Her research has shown that racist and sexist bias, misinformation, and profiling are frequently unnoticed byproducts of search engine algorithms. She is the author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, Algorithms of Oppression: How Search Engines Reinforce Racism. You can learn more about Dr. Safiya Umoja Noble’s work by following her on Twitter @SafiyaNoble or by visiting her website SafiyaNoble.com.
Olga Russakovsky – Assistant Professor, Princeton University, Department of Computer Science
Olga Russakovsky is an Assistant Professor with the Department of Computer Science at Princeton University. Russakovsky leads the Princeton Visual AI Lab, whose research brings together the fields of computer vision, machine learning, and human-computer interaction while promoting fairness, accountability, and transparency. She has been recognized for her work to fight bias in artificial intelligence through research and mentorship. Russakovsky co-founded the national nonprofit, AI4ALL, which brings together high school students from underrepresented groups to learn the basics of the field through intensive training, group projects and guest lectures. You can learn more about Olga Russakovsky’s work by following her on Twitter @orussakovsky.
Cathy O’Neil – Founder, ORCAA – O’Neil Risk Consulting & Algorithmic Auditing
Cathy O’Neil has been an independent data science consultant since 2012 and has worked for clients including the Illinois Attorney General’s Office and Consumer Reports. She is the author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, which examines how algorithms, rather than human beings, are increasingly used to make judgments that impact human lives. Cathy is the founder of ORCAA – O’Neil Risk Consulting & Algorithmic Auditing – a consulting company that helps organizations manage and audit their algorithmic risks. Recognizing that companies increasingly use mathematical models to streamline important decisions, ORCAA performs focused algorithmic audits for accuracy, bias, consistency, transparency, fairness, and legal compliance. You can learn more about Cathy O’Neil’s work by following her on Twitter @mathbabedotorg, visiting her website mathbabe.org, or at the ORCAA website www.orcaarisk.com.
Joy Buolamwini – Founder, Algorithmic Justice League
Motivated by personal experiences of algorithmic discrimination, Buolamwini launched the Algorithmic Justice League, whose mission is to raise awareness about the impacts of AI. The organization works to equip advocates with empirical research and amplify the voices of the most impacted communities, all while galvanizing researchers, policymakers, and industry practitioners to mitigate AI harms and biases. You can learn more about Joy Buolamwini’s work by following her on Twitter @jovialjoy or by visiting the https://www.ajl.org/ website.
Kate Crawford – Co-Founder, AI Now Institute
Kate Crawford is a Distinguished Research Professor at NYU and a Senior Principal Researcher at MSR-NYC. She studies the social implications of data systems, machine learning, and artificial intelligence. She is also the co-founder of the AI Now Institute at New York University, the world’s first university institute dedicated to researching the social implications of artificial intelligence and related technologies. You can learn more about Kate Crawford’s work by following her on Twitter @katecrawford or by visiting the katecrawford.net website.
Abeba Birhane – PhD candidate, University College Dublin
Abeba Birhane is currently a PhD candidate in cognitive science at University College Dublin. She studies the dynamic and reciprocal relationships between emerging technologies, personhood, and society. Specifically, she explores how technologies that are a part of our personal, social, political, and economic spheres shape what it means to be a person. In her research, Birhane draws on theoretical frameworks from traditions such as embodied cognitive science, dialogism, complexity science, critical data studies, and philosophy of technology. You can learn more about Abeba Birhane by following her on Twitter @Abebab or by visiting her website, https://abebabirhane.com/.
Rachel Thomas – Director, USF Center for Applied Data Ethics
Rachel Thomas is Director of the USF Center for Applied Data Ethics and Co-Founder of fast.ai. Fast.ai believes that deep learning is transforming the world, and its goal is to make deep learning easier to use and more accessible to people from all backgrounds. It accomplishes this by offering free courses for coders, a software library, cutting-edge research, and community building. You can learn more about Rachel Thomas and fast.ai by following her on Twitter @math_rachel or by visiting fast.ai.
The Future of Ethics in Data and AI
As our reliance on AI deepens, we must hold ourselves and our technologies accountable by developing models and systems that mitigate bias and address ethical concerns. As AI continues to evolve, researchers, companies, and governments must work together to establish and implement guidelines that ensure AI is deployed as ethically as possible.