Caffinance - July 2025

Caffinance Talk – New Date in Planning
Thank you for the strong interest in the upcoming talk:
“Champion vs. Challenger – How to deconstruct an AI challenger model to validate traditional credit scoring”
Since the original July 2 date falls within the holiday season and conflicts with other events for several potential participants, we are currently planning to reschedule the talk to early September or late October. The exact date will be announced shortly.
If you were interested but couldn’t commit due to scheduling, feel free to reach out or leave a comment – your feedback is valuable and will help us find the most suitable new date.
For those who had already planned to attend on July 2 or will be in Frankfurt that evening: Andree will still be on site and happy to connect with anyone interested in the topic.
Agenda:
17:30 Open registration
18:00 Andree’s talk
18:45 Q&A
19:00 Reception & networking
Registration:
Please confirm your attendance by emailing mouna.soufan@mathfinance.com. Space is limited, so please register early.
Abstract:
The title of this presentation reflects a key question in modern risk analytics: Can complex AI models outperform simpler approaches without sacrificing transparency and compliance?
Over the past decade, financial institutions have shown increasing interest in the application of machine learning techniques. At the same time, these developments have raised concerns among supervisory authorities.
Two publications are particularly relevant in this context: the EBA’s discussion and follow-up papers on IRB models, and the AI Act proposed by the European Commission.
According to the EBA, machine learning models are subject to the same regulatory requirements as traditional approaches. Furthermore, the EBA emphasizes in its follow-up report that many aspects of the planned AI Act are already covered by existing banking regulation. This suggests that both supervisors and European legislation are open to modern technologies.
Nevertheless, AI models embody a fundamental trade-off between innovation and accountability. While they offer improved predictive performance, they also risk being perceived as black boxes—difficult to interpret, explain, or validate in a regulatory context.
This presentation examines the integration of modern machine learning techniques into an enhanced, yet regulation-compliant, validation framework using a credit scoring example. A logistic regression model is used as a benchmark, and an XGBoost model serves as the AI challenger, applied to a retail mortgage dataset. The analysis includes model architecture, discriminatory power and calibration metrics. A systematic SHAP-based decomposition is used to attribute predictions to individual features and to highlight interaction effects not visible to linear models.
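The champion-vs-challenger setup described above can be sketched in a few lines. The example below is a minimal, hypothetical illustration: it uses a synthetic dataset in place of the retail mortgage data, and scikit-learn's GradientBoostingClassifier as a lightweight stand-in for XGBoost (in practice one would use xgboost.XGBClassifier and a SHAP-based decomposition on top). AUC measures discriminatory power and the Brier score measures calibration, as in the validation framework outlined in the abstract.

```python
# Minimal champion-vs-challenger sketch on synthetic credit-scoring data.
# Assumptions: synthetic features stand in for borrower attributes, the
# label for a default flag; a gradient-boosted model stands in for XGBoost.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic portfolio: ~10% "defaults".
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)       # benchmark
challenger = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

for name, model in [("champion (logit)", champion),
                    ("challenger (boosted trees)", challenger)]:
    p = model.predict_proba(X_te)[:, 1]
    # Discriminatory power: AUC; calibration: Brier score (lower is better).
    print(f"{name}: AUC={roc_auc_score(y_te, p):.3f}, "
          f"Brier={brier_score_loss(y_te, p):.3f}")
```

In a real validation exercise the challenger's predictions would additionally be decomposed with SHAP values to attribute scores to individual features and surface interaction effects, which is the transparency step the regulatory discussion centres on.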
The aim of this talk is not only to provide technical insight, but also to contribute to the broader discussion of how risk managers can responsibly integrate advanced analytics into established validation and governance frameworks.