DEFENSESTORM BLOG
Monday, January 24th, 2022
If you have good data, if you’re monitoring your ML, and if you pair it with great people, then you’re much better positioned to protect your institution as cyberthreats continue to evolve.
In my last post I wrote about why machine learning should be something that cyber security, fraud, and AML teams are thinking about, and I ended with three general guidelines that we use when building machine learning. As a reminder, they were:

1. Good data is necessary to build ML that works effectively
2. Monitor the performance of your AI/ML
3. AI still needs people to be effective for financial institutions
Let’s talk about each of these in a little more detail.
Good data is necessary to build ML that works effectively
You’ll hear about this one from any AI/ML practitioner, but good data is necessary to build anything that will work effectively, which hopefully makes inherent sense. Imagine you’re teaching a baby what an apple is for the first time… how would you pick the apples you showed them? Would you focus on only red apples, or include green ones as well? Would you show them a pear and other similar-looking objects to help them learn to tell the difference? If you want a baby to really understand what an apple is, then it’s important to hit those cases and more.
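To put that apple lesson in slightly more concrete terms, here’s a toy, entirely made-up example of what “good data” looks like for a simple classifier: the positives cover the real variety (red and green apples), and the negatives include look-alikes (pears) so the model can’t learn a lazy shortcut like “red means apple”.

```python
# Toy, made-up training data illustrating coverage of the positive class and
# "hard negative" look-alikes; nothing here comes from a real dataset.
training_examples = [
    ({"color": "red",   "shape": "round"},    "apple"),
    ({"color": "green", "shape": "round"},    "apple"),      # not only red apples
    ({"color": "green", "shape": "teardrop"}, "not_apple"),  # pear: a look-alike negative
    ({"color": "brown", "shape": "round"},    "not_apple"),  # another near miss
]

# A model trained only on the first row would happily call anything red an
# apple; the other rows are what force it to learn the actual concept.
```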
Similarly, in the machine learning world, knowing what you’re building for – and NOT building for – is extremely important. First, we need to validate that we have the right data to solve the problem at hand, while also ensuring that customer data is safe and secure. Then we focus on the application itself, which often requires that what we build is trained on banking data specifically (one of many reasons we think “built for banking” matters), and is also implemented in such a way that it takes into account each individual client’s data. This approach helps us make sure that we’re knowledgeable about and reacting appropriately to things that are specific to FIs, while also acknowledging there are differences amongst FIs.
Monitor the performance of your AI/ML
One only needs to look at the news to see why it matters to monitor the performance of your AI. Though they were recently cleared of wrongdoing, Apple and Goldman Sachs were in the news in 2019 because they were unable to explain to consumers how they determined credit limits for credit cards. Regulatory guidance already exists for modeling (e.g., OCC’s Model Risk Management Handbook), and we expect even more guidance as ML becomes more pervasive and readily available; this means that from a compliance perspective, it’s going to be increasingly imperative to understand how you’re making decisions and to be able to articulate it to an examiner. Explainable AI and ethical AI are areas I won’t delve into here, but many companies – especially those that don’t have a focus on banking – discount the importance of being able to explain what their models are doing. Partnering with a company that chooses machine learning models that can be explained is a good way to mitigate potential risks in the future.
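To make the explainability point concrete, here’s a minimal sketch in Python – a toy illustration with invented login-risk features, not anything from our product – of why an inherently interpretable model is easier to defend in front of an examiner: its coefficients state exactly how much each input pushed the decision.

```python
# A toy sketch of an explainable model. The features, data, and labels below
# are invented for illustration; this is not how any particular product works.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["failed_logins_last_hour", "new_device", "logins_last_24h"]

# Hypothetical historical events and whether an analyst flagged them.
X = np.array([
    [0, 0, 2],
    [1, 0, 3],
    [6, 1, 1],
    [8, 1, 0],
    [0, 1, 4],
    [7, 0, 1],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = flagged as suspicious

model = LogisticRegression().fit(X, y)

# Each coefficient is a plain-language statement you can put in front of an
# examiner: how much each feature raised or lowered the risk score.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```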
It also makes sense to monitor your ML from an operational perspective, which I’ll explain in the context of something that exists in our tool today. In addition to supporting rules-based alerting, we have an ML-based feature called PatternScout that analyzes activity for each client over various devices and network configurations, and will generate an alert if things look abnormal. As you can imagine, there’s a tradeoff between having a low threshold (i.e., generating a bunch of investigations that may not lead to anything) vs a high one (i.e., not starting enough investigations and thus missing something critical). By continuously monitoring how PatternScout is performing, we’re able to balance those two things for our clients – which ultimately impacts the value they get out of our service. It also helps us stay up to date with what’s happening in an ever-changing landscape of cyber threats, which leads us to the third guideline.
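Before moving on to that third guideline, here’s a toy sketch of the threshold tradeoff just described. The scores and “ground truth” below are simulated, and this isn’t PatternScout’s actual logic – it’s only meant to show why the numbers you get from continuously monitoring an ML feature (alert volume versus missed incidents) are what let you keep a threshold calibrated.

```python
# Simulated anomaly scores and outcomes, invented purely to illustrate the
# low-vs-high threshold tradeoff; this is not PatternScout's implementation.
import numpy as np

rng = np.random.default_rng(42)
scores = rng.uniform(0.0, 1.0, 1000)                       # anomaly score per event
truly_bad = (scores + rng.normal(0.0, 0.15, 1000)) > 0.9   # pretend ground truth

for threshold in (0.5, 0.8):
    alerts = scores > threshold
    missed = truly_bad & ~alerts
    print(f"threshold={threshold}: {alerts.sum():4d} alerts opened, "
          f"{missed.sum():3d} real incidents missed")
```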
AI still needs people to be effective for financial institutions
To us, AI and people are complementary – not replacements for one another. Machines are really good at performing certain types of tasks, assuming the boundaries and rules are already defined. They’re also great at looking through tons of data. But they’re not necessarily good at being creative or sensing that “something just feels off about this”. In other words, you often can’t replace a gut feeling when it comes to cyber security, fraud, or AML – that’s why our view is that great ML paired with great talent is what will keep your FI ahead of the curve.
I’ll use one last example in a different domain to illustrate that the best applications of AI still require human input. Google is one of many companies at the forefront of ML innovation. Within Gmail, they have a feature called Smart Compose that uses ML to help you compose emails more quickly. As well-funded as they are, Google still knows that their technology is not good enough to predict with 100% accuracy what you want to type (despite having phenomenal training data… in other words, all your previous emails), so the way it works is this: As you’re typing, Gmail will suggest things you might want to type next. If the suggestion is right, you can press a button and it will autocomplete. If it’s wrong, you keep typing as if nothing was there. When the technology works, you can reap its benefits, but when it’s wrong, it doesn’t get in your way. There are other examples, but the point is this: we’re not even close to the point where AI is taking over the world – we’re still figuring out how to make it work reasonably well in narrow, real-world contexts. And for AI to work well, it needs to work with humans.
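To make that pattern concrete, here’s a bare-bones sketch of the “suggest, don’t act” idea: the machine only offers a completion when it’s confident, and the human decides what actually happens. The suggest() function and its confidence threshold are entirely hypothetical – this is not Gmail’s real API.

```python
# A hypothetical sketch of human-in-the-loop suggestion: the model proposes,
# the person disposes. None of this reflects Gmail's actual implementation.

def suggest(prefix: str) -> tuple[str, float]:
    """Stand-in for a predictive model: returns (completion, confidence)."""
    canned = {
        "Thanks for your": ("time.", 0.92),
        "Per my": ("last email,", 0.55),
    }
    return canned.get(prefix, ("", 0.0))

def compose_step(prefix: str, confidence_floor: float = 0.8) -> None:
    completion, confidence = suggest(prefix)
    if confidence >= confidence_floor:
        # Confident enough: show the suggestion, but the user still has to accept it.
        print(f"Suggest: '{prefix} {completion}' (press Tab to accept, or keep typing)")
    else:
        # Not confident: stay out of the way entirely.
        print(f"No suggestion for '{prefix}'; the user keeps typing undisturbed.")

compose_step("Thanks for your")  # high confidence -> offer a completion
compose_step("Per my")           # low confidence  -> don't interrupt
```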
Hopefully I’ve achieved my goal of explaining why machine learning should matter to a financial institution like yours, and after reading these two posts, you have:
While nothing is foolproof, if you have good data, if you’re monitoring your ML, and if you pair it with great people, then you’re much better positioned to protect your institution as cyberthreats continue to evolve, thus building more trust with your customers.