Many organisations rely on technological systems to help mitigate compliance risk. Think, for example, of the post-event transaction monitoring systems used throughout the financial sector to mitigate the risks of money laundering and terrorism financing. Increasingly sophisticated quantitative methods, such as machine learning, are finding their way into this field. How can the compliance professional stay on top of these developments?

You are not alone

Each of these technological systems has a clear goal: the mitigation of a particular type of risk. It receives input data and, through some quantitative approach, produces an estimate of the risk at hand. This fits well within the established definition of a model provided by the Office of the Comptroller of the Currency and the Board of Governors of the Federal Reserve System in their Sound Practices for Model Risk Management (2011). Compliance professionals can take comfort in knowing that they’re not alone and that the challenge they face is not completely new.
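
To make that definition concrete, here is a deliberately simplified sketch in Python of what such a model boils down to: input data in, a quantitative risk estimate out. The features, weights and thresholds below are invented for illustration and do not come from any real monitoring system.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float          # transaction amount in EUR
    is_cross_border: bool  # whether the counterparty sits in another jurisdiction
    customer_risk: float   # customer risk rating between 0 (low) and 1 (high)

def money_laundering_risk(tx: Transaction) -> float:
    """Toy model: turn input data into a single risk estimate between 0 and 1.

    The weights and the 10,000 EUR reference amount are illustrative
    assumptions, not thresholds from any real monitoring system.
    """
    amount_signal = min(tx.amount / 10_000, 1.0)        # larger amounts score higher
    border_signal = 1.0 if tx.is_cross_border else 0.0  # cross-border adds risk
    return 0.5 * amount_signal + 0.2 * border_signal + 0.3 * tx.customer_risk

# A transaction is flagged for review when the estimate exceeds a threshold.
flagged = money_laundering_risk(Transaction(8_500, True, 0.6)) > 0.7
print(flagged)  # True for this invented example
```

Real systems are of course far more elaborate, but the structure is the same: input data, assumptions and a quantitative method that produces an estimate of risk.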

Sound model risk management

These sound practices explain how an organisation can take good care of its models. Models are simplifications of reality that, unavoidably and by their very design, focus attention on key factors and ignore others. Any such simplification rests on numerous assumptions that don’t necessarily hold in general. Hence, whenever you use a model, there’s a risk that the model is fundamentally incorrect or that you’re misusing it. This is called model risk, and the sound practices provide guidance on how to manage it.

As a compliance professional, you can play a crucial role in the management of model risk. After all, you’re intimately familiar with the compliance risks your organisation faces. Your expertise is invaluable for effectively challenging the assumptions underlying any compliance model. Moreover, you have deep insight into the risks the model is meant to mitigate, and any outcome analysis would greatly benefit from that knowledge. Even without a quantitative background, you can thus contribute to ensuring the soundness of these models and to uncovering their limitations.
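
As a hedged illustration of what such an outcome analysis might look like in practice, the sketch below compares the alerts a hypothetical model generated with the cases investigators later confirmed. All identifiers and figures are made up for the example.

```python
# Toy outcome analysis: compare the alerts a model generated with the cases
# that investigators later confirmed. All identifiers and figures are invented.
alerts = {"tx-001", "tx-002", "tx-003", "tx-004"}  # flagged by the model
confirmed = {"tx-002", "tx-004", "tx-007"}         # confirmed by investigation

true_positives = alerts & confirmed
precision = len(true_positives) / len(alerts)      # share of alerts that were justified
recall = len(true_positives) / len(confirmed)      # share of confirmed cases the model caught

print(f"precision: {precision:.0%}, recall: {recall:.0%}")  # precision: 50%, recall: 67%
# The missed case "tx-007" is exactly the kind of finding a compliance
# professional can use to challenge the assumptions behind the model.
```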

Fruitful discussion

On December 12th, I joined Maurice Jongmans as a guest on the Dutch podcast Compliance Adviseert, hosted by Erik Reissenweber, for the episode Automated Learning & AI. We spoke for about an hour and covered the role of the compliance officer in the management of model risk. Additionally, we addressed the integrity risks that can arise from the use of artificial intelligence, as described in DNB’s General principles for the use of Artificial Intelligence in the financial sector (2019).

During the podcast, Maurice spoke about the model used within his organisation to mitigate the risks of fraud, money laundering and terrorism financing. Seemingly effortlessly, a fruitful discussion arose between Erik and Maurice on the soundness, efficacy and efficiency of the model. It’s precisely through these types of conversations that a compliance professional can start to effectively challenge a model — regardless of the modelling techniques used.