To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Rashida Richardson is senior counsel at Mastercard, where she focuses on legal issues relating to privacy and data protection as well as AI.
Formerly the director of policy research at the AI Now Institute, the research institute studying the social implications of AI, and a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy, Richardson has been an assistant professor of law and political science at Northeastern University since 2021. There, she specializes in race and emerging technologies.
Rashida Richardson, senior counsel, AI at Mastercard
Briefly, how did you get your start in AI? What attracted you to the field?
My background is as a civil rights attorney, where I worked on a range of issues including privacy, surveillance, school desegregation, fair housing and criminal justice reform. While working on these issues, I witnessed the early stages of government adoption and experimentation with AI-based technologies. In some cases, the risks and concerns were apparent, and I helped lead a number of technology policy efforts in New York State and City to create greater oversight, evaluation or other safeguards. In other cases, I was inherently skeptical of the benefits or efficacy claims of AI-related solutions, especially those marketed to solve or mitigate structural issues like school desegregation or fair housing.
My prior experience also made me hyper-aware of existing policy and regulatory gaps. I quickly noticed that there were few people in the AI space with my background and experience, or offering the analysis and potential interventions I was developing in my policy advocacy and academic work. So I realized this was a field and space where I could make meaningful contributions and also build on my prior experience in unique ways.
I decided to focus both my legal practice and academic work on AI, specifically on policy and legal issues concerning its development and use.
What work are you most proud of (in the AI field)?
I’m happy that the issue is finally receiving more attention from all stakeholders, but especially policymakers. There’s a long history in the United States of the law playing catch-up or never adequately addressing technology policy issues, and five or six years ago it felt like that might be the fate of AI. I remember engaging with policymakers in formal settings like U.S. Senate hearings and educational forums, and most treated the issue as arcane or something that didn’t require urgency despite the rapid adoption of AI across sectors. Yet in the past year or so, there’s been a significant shift: AI is now a constant feature of public discourse, and policymakers better appreciate the stakes and the need for informed action. I also think stakeholders across all sectors, including industry, recognize that AI poses unique benefits and risks that may not be resolved through conventional practices, so there’s more acknowledgement — or at least appreciation — of the need for policy interventions.
How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?
As a Black woman, I’m used to being a minority in many spaces, and while the AI and tech industries are extremely homogeneous fields, they’re not novel or that different from other fields of immense power and wealth, like finance and the legal profession. So I think my prior work and lived experience helped prepare me for this industry, because I’m hyper-aware of preconceptions I may have to overcome and challenging dynamics I’ll likely encounter. I rely on my experience to navigate, because I have a unique background and perspective, having worked on AI across all sectors: academia, industry, government and civil society.
What are some issues AI users should be aware of?
Two key issues AI users should be aware of are: (1) the capabilities and limitations of different AI applications and models, and (2) the great uncertainty regarding the ability of current and prospective laws to resolve conflicts or certain concerns regarding AI use.
On the first point, public discourse tends to emphasize the benefits and potential of AI applications over their actual capabilities and limitations. This issue is compounded by the fact that AI users may not appreciate the difference between AI applications and models. Public awareness of AI grew with the release of ChatGPT and other commercially available generative AI systems, but those AI models are distinct from other types of AI models that consumers have engaged with for years, like recommendation systems. When the conversation about AI is muddled — where the technology is treated as monolithic — it tends to distort public understanding of what each type of application or model can actually do, and the risks associated with their limitations or shortcomings.
On the second point, law and policy regarding AI development and use are evolving. While there are a variety of laws (e.g. civil rights, consumer protection, competition, fair lending) that already apply to AI use, we’re in the early stages of seeing how these laws will be enforced and interpreted. We’re also in the early stages of policy development that’s specifically tailored for AI — but what I’ve noticed both from legal practice and my research is that there are areas that remain unresolved by this legal patchwork and will only be settled when there’s more litigation involving AI development and use. Generally, I don’t think there’s great understanding of the current status of the law and AI, and how legal uncertainty regarding key issues like liability can mean that certain risks, harms and disputes may remain unsettled until years of litigation between businesses, or between regulators and companies, produce legal precedent that provides some clarity.
What is the best way to responsibly build AI?
The challenge with building AI responsibly is that many of the underlying pillars of responsible AI, such as fairness and safety, are based on normative values, for which there are no shared definitions or understandings. So one could presumably act responsibly and still cause harm, or one could act maliciously and rely on the absence of shared norms to claim good-faith action. Until there are global standards or some shared framework for what it means to responsibly build AI, the best way to pursue this goal is to have clear principles, policies, guidance and standards for responsible AI development and use that are enforced through internal oversight, benchmarking and other governance practices.
How can investors better push for responsible AI?
Investors can do a better job at defining, or at least clarifying, what constitutes responsible AI development or use, and taking action when AI actors’ practices do not align. Currently, “responsible” or “trustworthy” AI are effectively marketing terms because there are no clear standards to evaluate AI actors’ practices. While some nascent regulations like the EU AI Act will establish some governance and oversight requirements, there are still areas where AI actors can be incentivized by investors to develop better practices that center human values or societal good. However, if investors are unwilling to act when there is misalignment or evidence of bad actors, then there will be little incentive to adjust behavior or practices.