Why AI interpretations might be lying to you
As AI systems are increasingly deployed in high-stakes decisions such as insurance pricing and mortgage approvals, interpretability is often presented as the answer to concerns about trust and accountability. But an uncomfortable question remains: can we actually trust AI interpretations?
Imagine being told that your insurance premium has increased by 30% at renewal or that your mortgage application has been declined. Naturally, you ask why.
The response is polite, confident and technical:
“According to the SHAP interpretation tool, this customer’s age increased the predicted premium by $20 relative to the average predicted premium over the chosen reference population.”
But is this interpretation meaningful to a customer? Can it inform risk-reduction behaviour or help someone understand the pricing decision? And, most importantly, should we trust it?
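To see where a figure like "+$20 for age" comes from, here is a minimal sketch. The model, feature names and dollar amounts are all hypothetical; it uses the fact that, for a linear model with independent features, the SHAP value of a feature is its coefficient times the gap between the customer's value and the reference-population average:

```python
import numpy as np

# Hypothetical linear pricing model: premium = intercept + coefficients . features
# Features: [age, annual_mileage, years_claim_free]
coef = np.array([4.0, 0.002, -15.0])
intercept = 600.0

def predict(X):
    """Predicted premium in dollars for each row of X."""
    return intercept + X @ coef

# Reference ("background") population that defines the average prediction.
rng = np.random.default_rng(0)
background = np.column_stack([
    rng.integers(18, 80, 1000),       # age
    rng.integers(2000, 30000, 1000),  # annual mileage
    rng.integers(0, 20, 1000),        # years claim-free
]).astype(float)

customer = np.array([45.0, 12000.0, 3.0])

# For a linear model with independent features, the exact SHAP value of
# feature j is coef[j] * (x_j - mean of x_j over the reference population).
base_value = predict(background).mean()
shap_values = coef * (customer - background.mean(axis=0))

# Local accuracy: the contributions sum back to the customer's own prediction.
assert np.isclose(base_value + shap_values.sum(), predict(customer[None, :])[0])

print(f"Average predicted premium (base value): ${base_value:,.2f}")
for name, contrib in zip(["age", "mileage", "claim-free years"], shap_values):
    print(f"  {name:>18}: {contrib:+,.2f}")
```

Note that the attribution for age is measured against whatever reference population was chosen as the background: pick a different background and the "+$20" changes, even though the customer's premium does not.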
Long before modern machine learning, banks and insurers relied on statistical models that were largely invisible to customers. Yet there was a crucial difference. Traditional statistical models typically had simple, structured forms – often additive or multiplicative. Even if customers never saw the equations, an actuary or credit officer could plausibly explain how a single factor, such as claims history, driving record or age, was associated with the decision.
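For contrast, here is a minimal sketch of a traditional multiplicative rating structure of that kind (the rating factors and relativities are made up for illustration). The effect of any single factor can be read directly off the table, independently of the others:

```python
# Hypothetical multiplicative rating model of the sort an actuary could
# explain factor by factor: premium = base rate x one relativity per factor.
BASE_RATE = 500.0

RELATIVITIES = {
    "age_band": {"18-25": 1.60, "26-45": 1.00, "46+": 0.90},
    "driving_record": {"clean": 1.00, "one_claim": 1.25, "multiple_claims": 1.70},
    "vehicle_class": {"standard": 1.00, "sports": 1.40},
}

def premium(profile: dict) -> float:
    """Multiply the base rate by the relativity for each rating factor."""
    result = BASE_RATE
    for factor, level in profile.items():
        result *= RELATIVITIES[factor][level]
    return result

customer = {"age_band": "26-45", "driving_record": "one_claim", "vehicle_class": "standard"}
# Each factor's effect stands on its own: the 1.25 means "one prior claim
# raises the premium by 25%", whatever the other factors happen to be.
print(premium(customer))  # 500 * 1.00 * 1.25 * 1.00 = 625.0
```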
[....]