Maximilian Muschalik
Artificial Intelligence and Machine Learning at LMU Munich.
Akademiestr. 7
80799 Munich
Hi, I am Maximilian Muschalik, a PhD student at Prof. Eyke Hüllermeier’s AIML chair at LMU Munich. I also like Shapley interactions quite a lot, so I work on them and develop shapiq.
Research Focus
My research centers on explainable artificial intelligence (XAI), with a focus on explaining black-box machine learning models.
Currently, I mainly work on Shapley-based explanations and on extensions of Shapley values to higher-order interactions. There, we are mostly concerned with developing approximation methods that compute Shapley interactions efficiently, since exact computation requires evaluating exponentially many coalitions and is therefore infeasible for most real-world applications. We have bundled our research and methods into the Python package shapiq, which unifies state-of-the-art algorithms for efficiently computing Shapley values and any-order Shapley interactions in an application-agnostic framework. More recently, we have pushed Shapley-based explanations into new modalities, including graph neural networks, vision-language encoders, and hyperparameter optimization. If you are interested in Shapley interactions, I would be happy to hear from you. You can also check out our blog post on Shapley interactions.
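To see where the exponential cost comes from, here is a minimal brute-force sketch of the Shapley value and the pairwise Shapley interaction index for a toy cooperative game. This is purely didactic, not shapiq's API: the game `v` (coalition worth equals squared coalition size) and all function names are made up for illustration, and both functions enumerate every coalition, which is exactly what becomes infeasible as the number of players grows.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all 2^(n-1) coalitions per player."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                # Weight of coalition S in the Shapley formula: |S|! (n-|S|-1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of player i to coalition S.
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

def shapley_interaction(i, j, players, v):
    """Pairwise Shapley interaction index via the discrete derivative."""
    n = len(players)
    others = [p for p in players if p not in (i, j)]
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            S = frozenset(S)
            # Weight: |S|! (n-|S|-2)! / (n-1)!
            weight = factorial(len(S)) * factorial(n - len(S) - 2) / factorial(n - 1)
            # Discrete derivative of v with respect to the pair {i, j} at S.
            delta = v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)
            total += weight * delta
    return total

# Toy game: the worth of a coalition is the square of its size.
v = lambda S: len(S) ** 2
phi = shapley_values([0, 1, 2], v)

# Efficiency axiom: Shapley values sum to v(N) = 9.
assert abs(sum(phi.values()) - 9) < 1e-9
# By symmetry, each of the three players gets phi = 3.
assert abs(phi[0] - 3.0) < 1e-9
```

Both loops visit every subset of the remaining players, so the runtime is exponential in the number of players; approximation methods like those in shapiq exist precisely to avoid this enumeration.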
Another area of my XAI research is developing methods that explain model predictions in dynamic learning environments. Specifically, we investigate the challenges of producing accurate and timely explanations for models that must constantly adapt to changing data streams and learning tasks. In such dynamic settings, traditional XAI methods can be computationally expensive or unable to provide faithful explanations in a timely manner.
My research is part of the TRR 318 Constructing Explainability.
news
| Date | Announcement |
|---|---|
| Jan 20, 2026 | Our paper HyperSHAP: Shapley Values and Interactions for Explaining Hyperparameter Optimization has been presented at AAAI 2026 in Singapore. |
| Dec 5, 2025 | Our paper Explaining Similarity in Vision-Language Encoders with Weighted Banzhaf Interactions has been presented at NeurIPS 2025 in San Diego. |
| Nov 8, 2025 | Our paper Investigating the Impact of Conceptual Metaphors on LLM-based NLI through Shapley Interactions has been presented at EMNLP 2025 Findings in Suzhou. |
| May 5, 2025 | Our paper Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory has been presented at AISTATS 2025. |
| May 3, 2025 | Our paper Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks has been presented at ICLR 2025. |