What caught our attention & what we want to see next
LA Tech4Good statement on the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
by Eva Sachar
Milestone Directive! On October 30th, 2023, the Biden-Harris Administration released the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The executive order marks a significant stride in AI regulation and includes measures that address data ethics and equity. Thus far, efforts to develop responsible AI have been largely voluntary and driven by academia, nonprofits, and subsets of the tech industry; this executive order makes clear the federal government’s intention to take responsibility for AI regulation efforts.
Data and AI are high-impact tools that can also have terribly unjust consequences for already vulnerable populations. We are pleased to see that the executive order specifically calls for action in the application areas where the risks of harm are greatest, especially healthcare, criminal justice, housing, and education.
What caught our attention
Advancing Equity & Civil Liberties: The executive order repeatedly acknowledges algorithmic discrimination. The past decade has produced several instances of data and AI systems propagating inequities and denying Americans equal opportunity and justice. We need to be vigilant in applying AI to sensitive use cases, to ensure that we build systems that benefit the public and avoid unintended harms. With this order, the government will continue to promote technical evaluations, oversight, community engagement, and regulation.
AI & Criminal Justice: To identify where AI can enhance law enforcement efforts, the President has requested a report on the use of AI in the criminal justice system, including risk assessments, sentencing, police surveillance, and predictive policing. The government will also work toward increasing public awareness of potentially discriminatory uses and effects of AI. This matters even outside of criminal justice - how do you know when you’ve been denied bail or a loan because of algorithmic discrimination? You would first have to know that an algorithm was used at all. We were also pleased to see the intention to educate investigators and prosecutors on civil rights violations resulting from algorithmic discrimination - those ensuring justice must be data literate and understand how these technologies work. There is an endless list of cases of facial recognition leading to false arrests, of predictive policing disproportionately affecting Black and brown communities, and of these technologies derailing people’s lives. Read more here: How Wrongful Arrests Derailed 3 Men's Lives, Facial Recognition Software Led to His Arrest, Predictive Policing Software Terrible at Predicting Crimes, and Chicago Police Department’s Heatlist.
AI & Public Benefits: We’ve seen several examples of individuals being put at risk by poorly designed systems because of their gender, race, income, or disability. This has severe consequences when vulnerable people are incorrectly labeled high risk or are unable to verify their identity to access public benefits. The executive order recognizes this and asks agencies to prevent and address algorithmic discrimination to ensure equitable administration of public benefits.
AI & the Broader Economy: Housing, consumer financial markets, hiring, and credit were plagued by discriminatory practices long before data & AI were commonplace - think redlining, gentrification, predatory lending, and discriminatory hiring. As a result, data and models are particularly prone to learning from historical and persistent discrimination and replicating these inequities. As mandated in the order, it is critical that we assess and mitigate biases across the broader economy as we look to technology to be a fairer and more efficient tool for, and in some cases a replacement of, human judgment. If we don’t, we’ll keep seeing cases like A Racially Biased Scoring System Helps Pick Who Receives Housing in LA, The Secret Bias Hidden in Mortgage-Approval Algorithms, and Can Algorithms Violate Fair Housing Laws?.
AI & Healthcare: HHS, working with agencies including DOD and the VA, will establish an HHS AI Task Force to develop a strategic plan that includes policies and frameworks on the responsible deployment and use of AI in healthcare, public health, and human services. When algorithms are used in every facet of healthcare - diagnosis, treatment, care experience, outcomes, and cost - they have life-or-death consequences. The importance of this can’t be overstated. For a deep dive: Hospital Algorithm Designed to Predict Sepsis, When Healthcare is Decided by Algorithms, Who Wins?, and The Fall of Babylon.
AI & Education: We’re already seeing school systems embrace AI for administration, personalized learning, grading, and tutoring. While this technology has the potential to make education more effective and accessible, it will be crucial that we continue to evaluate these tools to ensure that we’re not worsening outcomes for certain students. To learn more, we recommend reading Turnitin biased against non-native English speakers and Wisconsin’s Racially Inequitable Dropout Algorithm. The order tasks the Secretary of Education with developing resources, policies, and guidelines to ensure the responsible development and deployment of AI in education.
AI & Innovation: Various agencies are tasked with reporting on the potential for AI to strengthen our nation’s resilience against climate change impacts and to build an equitable clean energy economy for the future. To learn more about the potential for environmental tech, we recommend Climate Change AI, For AI on the Grid, it’s All About the Data, How AI is Being Used in Energy Right Now, and Using AI to Cut Energy Waste from Buildings.
What we want to see next
There are many more critical areas of ethical data and AI practice that this executive order does not cover. Here are some of the questions we think are most pressing to tackle next.
The executive order doesn't address how to design data and algorithmic systems with ethics, equity, and human-centered principles in mind. The design process includes aligning with ethical principles, deliberating whether a technology tool should be built or used at all, engaging diverse stakeholders, and considering how it should be used. These steps are crucial for ensuring that using data and AI will result in equitable outcomes. Many of the best practices mandated by the order start after the design phase, which somewhat dilutes their potential for encouraging holistic change.
Predictive policing is a particularly high-risk application of AI; multiple examples demonstrate how urgently regulation is needed here. While the executive order does address civil rights in the criminal justice system, much of the AI deployed in this space is used at the state and local levels, and that use remains unregulated.
How will the government evaluate whether an algorithm follows established best practices? This is especially pressing in high-impact areas with histories of discrimination, like healthcare, housing, and criminal justice. The order repeatedly mentions “best practices” and “standards”, but who defines them, and how will they be enforced?
In these critical domains like healthcare, education, and criminal justice, how will the public be informed and educated about the creation and use of AI tools, their impacts, and opportunities to opt out of their use or report harm they may cause?
Washington’s AI advisors are, notably, also funded by Silicon Valley giants. How much will this influence regulation efforts? Will those companies steer solutions away from regulation that would impact their own businesses?
Social media and the rising risk of disinformation campaigns fueled by emerging AI tools go largely unmentioned - what safeguards and regulations can help mitigate these risks?
We hope that reports about the potential for AI to address climate change and clean energy will also address AI’s environmental impact, and offer some solutions. To learn more, we recommend AI Could Soon Need As Much Electricity as an Entire Country and The Carbon Impact of Artificial Intelligence.
An overall critique we recommend is The White House Is Preparing for an AI-Dominated Future by Karen Hao and Matteo Wong. The Executive Order is here, and a fact sheet is here.
We at LA Tech4Good are encouraged by this Executive Order, and we remain committed to fostering social change by teaching equitable and ethical data practices. Our vision is to see technology embraced and employed in support of social change, helping address the myriad critical injustices in the world and create new solutions. Technology and data are tremendous tools for social change, and we aim to strengthen the tech and social change ecosystem through skill-building and community-building.
About the author
Eva Sachar serves as Partnership Lead and Data Equity Workshop Facilitator with LA Tech4Good. She strives to inspire empathy and perspective in her work implementing advanced analytics solutions and data ethics practices in the healthcare and public service industries.