InterXAI: Human-Centered Explainable AI and Intertextual Critique

A platform for exploring human-centered Explainable AI and intertextual critique.


Understanding Intertextuality in Explainable AI

July 20, 2025

Intertextuality is a foundational concept in literary theory, but it has profound implications for Explainable AI. At its core, intertextuality refers to how texts draw meaning from other texts—through allusion, quotation, influence, or structure.

What if XAI worked the same way?

In this post, we explore how intertextual logic—juxtaposing AI outputs with human-authored documents, prior knowledge, or cultural references—can make AI explanations more grounded, transparent, and ethical.

We also introduce our working framework, which maps machine salience scores against human-authored critical references.
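To make the idea concrete, here is a minimal sketch of what such a mapping could look like. Everything below is illustrative: the function names, the token-level salience dictionary, and the lexical-overlap scoring are assumptions, not the framework's actual implementation (which might use embeddings or attention maps instead).

```python
# Hypothetical sketch: align a model's token salience scores with
# human-authored reference texts by measuring lexical overlap between
# the most salient tokens and each reference's vocabulary.
# All names and data here are illustrative assumptions.

def top_salient_tokens(salience, k=3):
    """Return the k tokens with the highest salience scores."""
    ranked = sorted(salience.items(), key=lambda kv: -kv[1])
    return {tok for tok, _ in ranked[:k]}

def intertextual_alignment(salience, references, k=3):
    """Score each reference text by its overlap with the salient tokens.

    Returns a dict mapping reference name -> fraction of the top-k
    salient tokens that also appear in that reference.
    """
    salient = top_salient_tokens(salience, k)
    scores = {}
    for name, text in references.items():
        vocab = set(text.lower().split())
        scores[name] = len(salient & vocab) / max(len(salient), 1)
    return scores

# Toy example: a loan-decision explanation juxtaposed with two documents.
salience = {"loan": 0.9, "denied": 0.8, "income": 0.7, "the": 0.1}
references = {
    "policy_doc": "income thresholds govern whether a loan is denied",
    "news_article": "the weather was pleasant this spring",
}
print(intertextual_alignment(salience, references, k=3))
```

A high overlap with the policy document and a near-zero overlap with the unrelated article would suggest the model's attention is grounded in the relevant human-authored context; a richer version would replace word overlap with semantic similarity.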

📚🔍🤖
