InterXAI: Human-Centered Explainable AI and Intertextual Critique

A platform for exploring human-centered Explainable AI and intertextual critique.


📄 InterXAI White Paper

Reimagining Explainable AI through Intertextual Critique

Version 1.0 – July 2025


Executive Summary

InterXAI is a human-centered platform that bridges Explainable AI (XAI) and intertextual analysis. It empowers users to compare machine-generated model explanations with human interpretations, fostering critical engagement, accountability, and enriched understanding of AI behavior.

Combining computational linguistics, literary methods, and digital humanities, InterXAI offers a hybrid framework for interpretability and critique in machine learning systems.


1. Vision & Purpose

Modern AI systems are increasingly opaque, and existing XAI tools often center the machine's perspective. InterXAI flips the lens: we ask not just how a model works, but how its outputs are interpreted, challenged, and reframed by human users.

By drawing from hermeneutics, reader-response theory, and intertextuality, InterXAI provides a method to analyze, visualize, and question AI decisions in light of cultural, ethical, and narrative frames.


2. Core Features

🧠 XAI Engine – generates machine explanations of model outputs and behavior.

✋ Human Annotation Engine – captures human interpretations, annotations, and critiques of those same outputs.

🔗 Intertextual Linker – connects outputs and explanations to related texts, sources, and cultural frames.

🧲 Comparison Orchestrator – aligns machine-generated explanations with human readings for side-by-side critique.
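
The sketch below is a hypothetical illustration of how these four components might fit together as data structures plus a comparison step; the class names, fields, and matching logic are assumptions made for exposition, not the actual InterXAI API.

```python
# Hypothetical data-model sketch; names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelExplanation:
    """Machine-generated explanation produced by the XAI Engine."""
    output_id: str
    rationale: str                          # e.g. a feature-attribution summary
    salient_terms: List[str] = field(default_factory=list)


@dataclass
class HumanAnnotation:
    """Human interpretation captured by the Human Annotation Engine."""
    output_id: str
    annotator: str
    interpretation: str
    referenced_texts: List[str] = field(default_factory=list)  # input to the Intertextual Linker


def compare(machine: ModelExplanation, human: HumanAnnotation) -> Dict[str, object]:
    """Comparison Orchestrator: align a machine explanation with a human reading."""
    assert machine.output_id == human.output_id, "explanations must refer to the same output"
    machine_terms = {t.lower() for t in machine.salient_terms}
    human_terms = {w.strip(".,;:!?").lower() for w in human.interpretation.split()}
    return {
        "output_id": machine.output_id,
        "shared_terms": sorted(machine_terms & human_terms),
        "machine_only": sorted(machine_terms - human_terms),
        "intertextual_links": human.referenced_texts,
    }
```

Surfacing shared and machine-only terms in this way gives users a concrete starting point for questioning where the machine's account and the human reading diverge.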


3. Use Cases


4. System Architecture

Deployment: the frontend is served via GitHub Pages and Netlify, and a companion Streamlit dashboard supports real-time analysis and comparison.
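
As a rough sketch of how such a dashboard could support side-by-side, real-time comparison, the following Streamlit snippet is a hypothetical illustration; the widgets, labels, and comparison logic are assumptions, not the project's actual code.

```python
# Hypothetical Streamlit dashboard sketch; not the actual InterXAI implementation.
import streamlit as st

st.title("InterXAI: Explanation vs. Interpretation")

col_machine, col_human = st.columns(2)
with col_machine:
    st.subheader("Machine explanation")
    machine_text = st.text_area("Paste the XAI Engine output", key="machine")
with col_human:
    st.subheader("Human interpretation")
    human_text = st.text_area("Enter your reading of the model output", key="human")

if st.button("Compare"):
    machine_terms = set(machine_text.lower().split())
    human_terms = set(human_text.lower().split())
    st.write("Shared vocabulary:", sorted(machine_terms & human_terms))
    st.write("Raised only by the machine explanation:", sorted(machine_terms - human_terms))
    st.write("Raised only in the human reading:", sorted(human_terms - machine_terms))
```

A two-column layout like this keeps the machine's account and the human reading visible at the same time, which is the comparison the platform is built around.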


5. Technical Stack


6. Contribution & Collaboration

InterXAI is an open, evolving project; contributions and collaboration are welcome.


7. Future Roadmap


8. Licensing & Ethics

InterXAI values transparency, interpretability, and scholarly openness.


9. Contact

Project Lead: Felix B. Oke
📧 bfiliks4xt@gmail.com
🔗 GitHub | LinkedIn
🌐 Website: interxai.netlify.app


📥 Download This White Paper

⬇️ Download the InterXAI White Paper (PDF)