InterXAI: Human-Centered Explainable AI and Intertextual Critique

A platform for exploring human-centered Explainable AI and intertextual critique.


InterXAI brings together scholars, developers, and critics to reimagine Explainable AI (XAI) through the lens of human-centered design and intertextual analysis. We believe explanations aren't just for machines; they're for people.

By linking literary methods with technical interpretation, we aim to reveal the layered meanings in model behavior, decisions, and narratives.


🚀 What You Can Do with InterXAI

📥 Submit a Case Study

Share your applied work or conceptual insights on how humans interact with, interpret, or critique AI explanations.

๐Ÿค Collaborate With Us

Join interdisciplinary projects that blend digital humanities, NLP, and human-centered AI design.

🧠 Explore Case Studies

Browse existing case studies and see how others are engaging critically with XAI across domains.

๐Ÿ“ Read the Blog

Reflective pieces on the tensions, promise, and methods of human-in-the-loop AI understanding.


🧭 Why Intertextuality?

Modern AI explanations often miss the human texture: the narratives, biases, and values that shape understanding. Intertextual critique bridges that gap, inviting diverse voices into the meaning-making process around AI behavior.

By examining text reuse, metaphors, references, and alternative framings, we uncover a richer interpretive layer behind machine logic.


💡 Sample Use Cases


👤 Meet the Project Lead

Felix B. Oke

Felix B. Oke is the founder and project lead of InterXAI. With expertise in Digital Humanities, Natural Language Processing, and Explainable AI, Felix is dedicated to bridging critical human insight with algorithmic transparency. He leads the design and development of tools that foreground interpretability and accountability in machine learning systems.

📧 bfiliks4xt@gmail.com | 🔗 GitHub | 🌐 LinkedIn
