Digital epistemology: the third digital spool of educational justice
Guest article by Mary Lang
Truth will never be the same: Responsible AI requires diverse perspectives
Educational justice means education for all and for each, with the guiding principle that all students should receive the education they deserve. Every student deserves an education that equitably equips and empowers them to realize their greatest potential in ways relevant to their world. Our world deserves students who have received just educations.
In the years since the first installments of this series on K-12 educational justice, we have seen the first two digital spools – digital equity and digital literacies – put on paths to progress, propelled by many committed educators, administrators, policymakers, and parents.
However, in those same few years, AI has fast-tracked the final digital spool of K-12 educational justice – epistemology – onto a potential path to uncertainty. Epistemology is the branch of philosophy concerned with the nature and construction of knowledge. It asks questions like: What is knowledge? How do we know what we know? Are there criteria for justified belief?
That technology-enabled changes to the nature and construction of knowledge (aka “digital epistemology”) will impact educational justice seems obvious. What is less obvious is how education leaders can think, talk, and act on this relationship in service to educational justice.
An educator’s thinking toolkit on digital epistemology
The practical toolkit offered in this final installment does not contain a collection of rubrics and templates; rather, it is a “thinking” toolkit. It walks through four current conditions in the AI lifecycle to help spark the conversations concerning epistemology that need to happen in K-12 education leadership circles, offering starter questions and links to interdisciplinary resources.
Following the four conditions, the toolkit offers a recap of interdisciplinary discussions touching on these conditions from the recent Symposium on Increasing Diversity in AI Education and Research at Stanford University.
1. The condition of goal agreement
Experts agree the goal of diverse perspectives is foundational to responsible AI
There is broad agreement across AI thought leaders and practitioners — including Dr. Timnit Gebru, Dr. Fei-Fei Li, John Etchemendy, Emily Bender, Mary Lou Maher, Dr. Joy Buolamwini, Ethan Mollick, and Dr. Ebony McGee — that developing and deploying responsible AI requires diverse perspectives.
Mollick, in his book Co-Intelligence: Living and Working with AI, suggests why diverse perspectives are critical. Solving the “Alignment Problem” is how we ensure AI serves, rather than hurts, human interests. Solving the Alignment Problem means AI must be “shaped through an inclusive process representing diverse voices,” a challenge Mollick concedes cannot be solved in a lab.
Goal Opening Question: What diverse perspectives do our specific students embody, and how are those perspectives currently reflected in AI applications our students already use?
2. The condition of fairness as a confirmed challenge
Data confirm diverse perspectives and fairness remain an AI challenge
Chapters 3 (Responsible AI), 6 (Education), and 8 (Diversity) of the 2024 AI Index Report from Stanford HAI (Human-Centered Artificial Intelligence) offer data and analysis on diverse perspectives and fairness in AI. Data from Code.org, the Computing Research Association, and Informatics Europe shed some hopeful light (for example, some narrowing of certain racial imbalances in K-12) alongside some durable challenges (for example, the demographics of AI developers often differ from those of users, and robust, standardized evaluations for LLM responsibility are seriously lacking). The AI Index underscores that this lack of diversity can perpetuate or even exacerbate societal inequities and biases.
Fairness Opening Question: Are we growing only AI consumers or are we also growing AI developers?
3. The condition of epistemic injustice
A fragile constitution of knowledge opens the door to epistemic injustice
Philosopher Miranda Fricker defines epistemic injustice as harm done to a person’s capacity as a knower. This includes a person’s access to epistemic goods – the building blocks of knowledge – such as education, information, good advice, and resources for research.
Epistemic injustice can be testimonial in nature, meaning unfairness in whose word is trusted. An injustice of this kind occurs when someone is ignored, or not believed, because of their gender, sexuality, age, race, disability, or, broadly, because of their identity.
Imagine a future where participation in knowledge creation and knowledge sharing is determined by AI-defined “knowledge” mechanically trained on just a sliver of our world. Against that prospect, increasing the diversity of perspectives across the AI ecosystem seems one obvious, common-sense safeguard against this form of testimonial epistemic injustice.
In his 2021 book The Constitution of Knowledge: A Defense of Truth, Jonathan Rauch presents a system for drawing public conclusions about truth and falsehood. He argues that structures to repel assaults on truth already exist – namely experts, who will push back if not convinced.
The sheer scale of AI’s explosion in recent years threatens to overwhelm that argument. In the whack-a-mole hunt for knowledge and truth created by machine learning, a protective epistemic infrastructure to save truth does not exist.
Absent structures to repel assaults on truth, K-12 education has a role to play in taking more aggressive steps to collect, understand, and act on data that can guide efforts to diversify perspectives in STEM education and help offset epistemic harms.
Epistemic Injustice Opening Question: What examples can we already identify of epistemic injustice in our classrooms?
4. The condition of mounting epistemic concerns
Concerns of future AI-fueled knowledge damage or eradication are growing
AI’s underlying technology, machine learning, will play an increasing role in modern epistemology, the very nature of how humans understand knowledge. Without diverse perspectives across the AI ecosystem, rapid advances in machine learning will continue to fuel epistemic injustice.
Machine learning is an acknowledged source of bias (see, for example, A Survey on Bias and Fairness in Machine Learning). In the 2018 backlash against Google’s human-sounding voice assistant Duplex, sociologist Zeynep Tufekci called the technology’s lack of transparent design “horrible and so obviously wrong.”
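To make the mechanism concrete, here is a minimal, hypothetical sketch (not drawn from any of the works cited above) of one common way machine learning produces group-level bias: when one group dominates the training data, a model can learn patterns that fit the majority and misfire on the minority. All data and numbers below are synthetic and assumed purely for illustration.

```python
# A minimal, hypothetical sketch of bias arising from skewed training data.
# All data is synthetic; NumPy and scikit-learn are assumed available.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features; `shift` moves this group's distribution,
    # and the "true" labeling rule also depends on the shift.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# The majority group dominates training; the minority is underrepresented.
X_maj, y_maj = make_group(1000, shift=0.0)
X_min, y_min = make_group(50, shift=1.5)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Overall accuracy can look fine while masking a gap between groups.
for name, shift in [("majority", 0.0), ("minority", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

Run as written, the model scores well on the majority group and near chance on the underrepresented one: the classifier learned the majority’s pattern and applied it everywhere, which is the basic shape of the fairness problem the survey above catalogs in far richer detail.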
Leading linguists and scientists, including Bender and Gebru, flagged the potential for epistemic degradation in their seminal 2021 work “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” They called upon the AI ecosystem to “recognize that applications that aim to believably mimic humans bring risk of extreme harms. Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
Eric Horvitz, Microsoft’s Chief Scientific Officer and a member of the President’s Council of Advisors on Science and Technology (PCAST), publicly flagged his concerns in his 2022 paper “On the Horizon: Interactive and Compositional Deepfakes,” where he warned that “In the absence of mitigations, interactive and compositional deepfakes threaten to move us closer to a post-epistemic world, where fact cannot be distinguished from fiction.”
I am not convinced (yet) that an AI-fueled post-epistemic world is inevitable.
Knowledge and truth do not disappear, but they do reconfigure. Machine learning can fuel epistemic injustice, which would accelerate that reconfiguration. Rather than post-epistemic, I consider us in the crosshairs of a heavily “Mepistemic” world.
Mepistemic is the (somewhat playful, yet entirely serious) term I created from a synthesis of Machine + Epistemic. It describes synthetic human behavior and information – more machine than human – that tries to pass for actual, verifiably individual human behavior and knowledge. I do not (yet) unilaterally condemn all mepistemic information, but I do believe that, left unchecked, AI’s mepistemic forces will widen societal equity gaps.
At this point, mepistemology is not only inevitable but already upon us.
Despite this inevitability, however, we have a fighting chance to preserve the human core of epistemology, most notably through the growing movement focused on responsible and ethical AI. Here are some positive indicators.
In 2019, Stanford University launched the interdisciplinary Stanford Institute for Human-Centered AI (HAI), co-directed by Dr. Li and Etchemendy. Its AI4ALL program offers 9th-grade students a three-week online program to explore the field of AI, with the aim of increasing diversity in the field.
❝[T]he creators and designers of AI must be broadly representative of humanity. This requires a true diversity of thought — across gender, ethnicity, nationality, culture and age, as well as across disciplines.❞ –Stanford Institute HAI
In 2021, Dr. Gebru launched the Distributed AI Research (DAIR) Institute, an interdisciplinary, globally distributed AI research institute rooted in the belief that AI is not inevitable, that its harms are preventable, and that when its production and deployment include diverse perspectives and deliberate processes, AI can be beneficial.
Still, anyone who has attempted to get Siri to accurately interpret simple song requests, or had a child report that they know something is true “because Siri told me so,” might want to brace themselves for a potential flood of mepistemic advice from the “post-brain transplant Siri” Apple announced just this week.
Mounting Epistemic Concerns Opening Questions:
1. Is a post-epistemic world inevitable?
2. Do we already see evidence of mepistemic information?
3. Where do we, as education leaders, currently sit on the continuum of AI danger, AI harms, AI opportunities and AI benefit?
Hopeful signs: interdisciplinary discussions on the four current conditions of AI
Earlier this year, Dr. Ebony McGee, Johns Hopkins University Professor of Innovation and Inclusion, wrote in Education Week that “Diverse perspectives of thought, background, race, gender, and ability are not just nice to have, they are a necessity for innovation.” She went on to say, “Without radical change beginning at the K-12 level, STEM fields will fall short of creating the innovations that will make our world better, safer, and cleaner.”
At the Association for the Advancement of Artificial Intelligence (AAAI) 2024 Spring Symposium, Dr. McGee highlighted this in her talk, Beyond the Exclusionary Algorithm: The Urgent Need for Anti-Racist AI Education and Research.
Dr. Kamau Bobb, Director of STEM Education Strategy at Google, reminded participants that the AI infrastructure is made exceptionally fragile by its lack of diverse perspectives. Anshul Sonak, Global Director of Intel Digital Readiness, shared workforce statistics that brought this fragility into sharp focus.
“…it’s a bit unacceptable to leave destiny only in the hands of just a few tech professionals...The urgency of demystifying AI for all and diversifying the talent base has never been more critical.” ~ Anshul Sonak, Intel Corporation
Putting hope into practice, Sri Yash Tadimalla and Maher introduced a promising framework for an “AI Identity” at the symposium.
Each time leading AI voices such as AAAI support this sort of public event – in this case a symposium focused specifically on diversity in AI – they help shape responsible AI, raise awareness of epistemic integrity and epistemic justice, and foster conversations that can contribute to educational justice for all and for each.
Definition: AI Identity
AI Identity is defined across internal and external dimensions.
• Internally, AI Identity includes the collective characteristics, values, and ethical considerations embodied in the creation of AI technologies.
• Externally, AI Identity is shaped by individual perception, societal impact, and cultural norms.
This duality highlights the interplay between the creation of AI technology and AI’s broader interaction with society.
Worthy models for digital epistemology
Today, we have the opportunity to build models that shape the digital epistemic dimension of educational justice and that are “worthy of AI’s promise, challenges and harms” (Martin Tisné).
Collective efforts to embrace diverse perspectives across the AI ecosystem must be a part of those worthy models.
We must pursue a future education system that centers inclusive, human-first solutions while welcoming technologies that can propel equitable outcomes.
AI’s promise calls K-12 educators to spark their students’ love of learning, to bring meaning to student learning, to ignite the energy they will need to persist in their learning, and to earn the essential trust in their learning that will empower them to take their learning and change our world.
Let us answer that call with the rich, diverse perspectives and the thoughtful, deliberate processes that can face the challenge, minimize the risks and harms, and bring the positive promise of AI to life.
About the author
Mary Lang works to contribute to progress toward educational justice. Currently, Mary is an EdSAFE AI Alliance 2024 Women in AI Fellow and serves on the Steering Committee for the AI Initiative for Educational Justice at the Center for Leadership, Equity and Research (CLEAR). Mary also serves as the Organizational Change Management Officer for the Los Angeles County Office of Education. Previously, Ms. Lang taught in public and private universities and designed socio-technical transitions across all sectors.
The views expressed here are solely those of the author.
About the AAAI 2024 Spring Symposium
The Association for the Advancement of Artificial Intelligence (AAAI) 2024 Spring Symposium on Increasing Diversity in AI Education and Research was held at Stanford University on March 25-27, 2024. The Increasing Diversity in AI Education and Research cohort approached the future of AI as one that must be built on the principles of equity, accessibility, and diverse perspectives. It was three days of thoughtful conversations and reflection on the intersections of diverse perspectives, history, AI, and society.
Notes
See The Routledge Handbook of Epistemic Injustice for more examples of epistemic injustice.
Dr. Barbara Tomlinson coined the term “Epistemic Machine” in her 2019 book Undermining Intersectionality: The Perils of Powerblind Feminism.