Episodes
Saturday Dec 07, 2024
There is a crisis in Explainable Artificial Intelligence (XAI), stemming from the conflicting research goals of two cultures: BLUE XAI, focused on human-centred values such as trust and fairness, and RED XAI, which prioritises model validation, exploration, and debugging. The paper argues that RED XAI is currently under-explored and highlights several key challenges within it, including the need for complementary explanation methods, better benchmarks, and a more exploratory research mindset.