InterXAI brings together scholars, developers, and critics to reimagine Explainable AI (XAI) through the lens of human-centered design and intertextual analysis. We believe explanations aren't just for machines; they're for people.
By linking literary methods with technical interpretation, we aim to reveal the layered meanings in model behavior, decisions, and narratives.
What You Can Do with InterXAI
Submit a Case Study
Share your applied work or conceptual insights on how humans interact with, interpret, or critique AI explanations.
Collaborate With Us
Join interdisciplinary projects blending digital humanities, NLP, and human-centered AI design.
Explore Case Studies
Browse existing case studies and see how others are engaging critically with XAI across domains.
Read the Blog
Read reflective pieces on the tensions, promise, and methods of human-in-the-loop AI understanding.
Why Intertextuality?
Modern AI explanations often miss the human texture: the narratives, biases, and values that shape understanding. Intertextual critique bridges that gap, inviting diverse voices into the meaning-making process of AI behavior.
By examining text reuse, metaphors, references, or alternative framings, we unlock a richer interpretive layer behind machine logic.
Sample Use Cases
- Analyzing how users interpret LLM-generated answers via user comments
- Reframing algorithmic recommendations through narrative theory
- Highlighting text reuse or inspiration in machine summaries or predictions
Meet the Project Lead
Felix B. Oke is the founder and project lead of InterXAI. With expertise in Digital Humanities, Natural Language Processing, and Explainable AI, Felix is dedicated to bridging critical human insight with algorithmic transparency. He leads the design and development of tools that foreground interpretability and accountability in machine learning systems.
Email: bfiliks4xt@gmail.com | GitHub | LinkedIn