InterXAI: Human-Centered Explainable AI and Intertextual Critique

A platform for exploring human-centered Explainable AI and intertextual critique.


From GPT to Human Critique: A Look at Our Model

July 20, 2025

Our InterXAI pipeline combines three layers of interpretation (sketched in code below):

  1. Model Explanation (e.g., SHAP, LIME, attention weights)
  2. Human Commentary (scholarly or critical notes)
  3. Intertextual Overlay (matching or clashing sources)
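
To make these layers concrete, here is a minimal Python sketch of how a single prediction might carry all three. The `LayeredInterpretation` class and its field names are illustrative placeholders, not the actual InterXAI schema.

```python
from dataclasses import dataclass, field


@dataclass
class LayeredInterpretation:
    """One model output annotated with all three interpretation layers (illustrative only)."""

    # Layer 1: feature attributions from a model explainer (e.g. SHAP values),
    # keyed by feature name.
    model_explanation: dict[str, float]
    # Layer 2: scholarly or critical notes in free text.
    human_commentary: list[str] = field(default_factory=list)
    # Layer 3: intertextual references, each tagged as matching or clashing
    # with the model's reading: (source, "match" | "clash").
    intertextual_overlay: list[tuple[str, str]] = field(default_factory=list)


record = LayeredInterpretation(
    model_explanation={"lexical_overlap": 0.42, "sentiment": -0.13},
    human_commentary=["The model leans heavily on surface overlap, which the critic disputes."],
    intertextual_overlay=[("Author X (2019)", "clash")],
)
```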

In this post, we provide an overview of our architecture—from GPT-based content generation to side-by-side comparison with curated human annotations.
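As a rough illustration of that comparison step, the snippet below lays a machine-generated explanation out next to a curated human note in two columns. The `side_by_side` helper is a hypothetical stand-in for the platform's actual interface.

```python
import textwrap


def side_by_side(model_text: str, human_text: str, width: int = 38) -> str:
    """Render a machine-generated explanation beside a curated human annotation."""
    left = textwrap.wrap(model_text, width) or [""]
    right = textwrap.wrap(human_text, width) or [""]
    header = f"{'MODEL (GPT)':<{width}} | {'HUMAN CRITIQUE':<{width}}"
    rows = [header, "-" * (2 * width + 3)]
    for i in range(max(len(left), len(right))):
        l = left[i] if i < len(left) else ""
        r = right[i] if i < len(right) else ""
        rows.append(f"{l:<{width}} | {r:<{width}}")
    return "\n".join(rows)


print(side_by_side(
    "The classifier emphasises lexical overlap with the source passage.",
    "The critic argues the overlap is superficial and misses the allusion.",
))
```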

We believe this hybrid approach opens up a new form of critical XAI that’s especially useful in education, ethics, and digital humanities research.

🧩🧑‍🏫💡

