When Shapley Values Break: A Guide to Robust Model Explainability
https://towardsdatascience.com/when-shapley-values-break-a-guide-to-robust-model-explainability/

Shapley values, a common method for AI model explainability, can produce misleading results when features are correlated. Using a simple linear model, the article demonstrates that when a highly influential feature is duplicated, its Shapley value is split evenly among the identical copies, diluting and obscuring its true importance. This behavior is driven by the Symmetry Axiom, which requires that features contributing equally to the model receive equal credit. To overcome the limitation, the article proposes two techniques: grouping correlated features to compute their collective contribution, and a greedy "winner takes all" approach that iteratively assigns credit to the most impactful feature. A rough sketch of the dilution effect and the grouping fix follows.
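For anyone who wants to poke at the dilution effect themselves: here is a minimal sketch (not the article's code) that computes exact Shapley values by brute-force coalition enumeration, replacing "absent" features with a baseline value, which is exact for a linear model. The toy model y = 3a + b, the explained point, and the zero baseline are my own illustrative choices.

```python
import itertools
import math

import numpy as np

def shapley_values(predict, x, baseline):
    """Exact Shapley values by enumerating every coalition.
    Absent features are set to a baseline (e.g. the mean),
    which is exact for a linear model."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (math.factorial(size) * math.factorial(n - size - 1)
                     / math.factorial(n))
                with_i, without_i = baseline.copy(), baseline.copy()
                on = list(subset)
                with_i[on + [i]] = x[on + [i]]
                without_i[on] = x[on]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Toy model: y = 3a + b, explained at x = (2, 2) against a zero baseline.
predict = lambda v: 3 * v[0] + v[1]
print(shapley_values(predict, np.array([2.0, 2.0]), np.zeros(2)))  # [6. 2.]

# Duplicate `a`: the refit splits its weight, y = 1.5a + 1.5a' + b.
# Predictions are identical, but a's credit is halved across the copies.
predict_dup = lambda v: 1.5 * v[0] + 1.5 * v[1] + v[2]
print(shapley_values(predict_dup, np.array([2.0, 2.0, 2.0]), np.zeros(3)))
# [3. 3. 2.]
```

And a sketch of the grouping fix under the same assumptions: treat correlated columns as a single player whose members are toggled in and out of a coalition together. The grouped_shapley name and the groups parameter are hypothetical, not from the article.

```python
def grouped_shapley(predict, x, baseline, groups):
    """Shapley values where each *group* of features is one player,
    so correlated copies are switched on and off together."""
    n = len(groups)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                w = (math.factorial(size) * math.factorial(n - size - 1)
                     / math.factorial(n))
                on = [f for g in subset for f in groups[g]]
                with_i, without_i = baseline.copy(), baseline.copy()
                with_i[on + groups[i]] = x[on + groups[i]]
                without_i[on] = x[on]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Treat both copies of `a` as one player: the full credit comes back.
print(grouped_shapley(predict_dup, np.array([2.0, 2.0, 2.0]), np.zeros(3),
                      groups=[[0, 1], [2]]))  # [6. 2.]
```

The grouped result recovers [6, 2] because the Symmetry Axiom now applies to the group as a whole rather than forcing the copies to share credit.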
0 points • by will22 • 9 hours ago