Inside Internal Controls

News and discussion on implementing risk management

Don’t outsmart yourself: AI and compliance

I’m a big fan of artificial intelligence. The older I get, the more I appreciate that real intelligence needs all the help it can get. Corporate ethics and compliance officers, however, need to pause before betting big on AI as a solution to all our needs.

We can begin by considering where AI (or regtech, or any other name we put on it) claims to offer the most benefit: financial services. Client onboarding, account creation, due diligence, investor expertise, fiduciary obligations: these are complicated compliance burdens that every financial firm carries. At the same time, online brokerage firms now offer “robo-adviser” services, where you enter a few demographic and financial criteria and algorithms then recommend investment options to you. Those low-cost services squeeze the profit margins of traditional firms.

So if you’re a financial advisory firm, courting wealthy clients around the world who want to act right now, and robo-advisers are sucking away the revenue dollars you need to pay for high-touch services—well, of course you want regtech and AI that can automate as much of your compliance workload as possible. Who wouldn’t?

What financial firms really want to accomplish with AI is to create a more customer-centric experience. Firms need to simplify the customer experience so they can cut compliance costs and impress customers at the same time.

Beyond financial firms, other businesses want to do the same thing with AI and regtech: cut compliance costs and impress “customers,” even if the customer is an employee or some third party that crosses paths with your business. You want to automate away the chores of robust compliance, and still reap all the benefits of robust compliance.

Where it all goes wrong

The shortcoming of AI is that it cannot appreciate the context of a situation unless someone has already pre-programmed and pre-defined the context the application should consider. That is, AI performs flawlessly in situations where it knows exactly what to do, yet fails badly in situations where it doesn’t, which is precisely when you need intelligence the most.

The perfect example of this is—wait for it—airline overbooking. AI programs have become superb arbiters of who on a flight should be bumped. The AI can analyze spending habits, travel patterns, flight connections, and innumerable other variables to find the single passenger, among many hundreds, who should be removed from a flight. Except that the AI can’t account for unique passenger variables that algorithms can’t foresee or quantify: important business appointments, family emergencies, major life events.
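The bumping logic described above can be sketched as a simple score over observable variables. Every feature name and weight below is hypothetical; the point is what the score can never see.

```python
# Toy sketch of an overbooking "bump" score. Feature names and weights
# are hypothetical; only observable, pre-defined variables enter the
# score -- a family emergency or a major life event never does.

def bump_score(passenger):
    """Lower score = more likely to be bumped, based only on observables."""
    return (
        2.0 * passenger["annual_spend"] / 10_000           # spending habits
        + 1.5 * passenger["trips_per_year"] / 50           # travel patterns
        + 3.0 * (1 if passenger["has_connection"] else 0)  # connections are costly to disrupt
    )

def choose_bump(passengers):
    """Pick the single passenger with the lowest score."""
    return min(passengers, key=bump_score)

passengers = [
    {"name": "A", "annual_spend": 40_000, "trips_per_year": 30, "has_connection": True},
    {"name": "B", "annual_spend": 2_000, "trips_per_year": 4, "has_connection": False},
]

# Passenger B scores lowest and gets bumped -- even if B is flying to a
# family emergency, a context no input variable captures.
print(choose_bump(passengers)["name"])  # B
```

However many variables you add, the selection is only as context-aware as the inputs someone thought to define in advance.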

So if your business wants to embrace AI, know the limits that AI has.

Three points to ponder

  1. Appreciate the difference between big data and AI

Big data combines reams of information with analytics capability to provide insights the human brain might not otherwise find, but it does not make decisions. AI “makes decisions,” although really it selects actions from a predetermined set of choices, based on certain inputs.

Compliance officers (along with CTOs, CIOs, and others who decide how to use AI) need to understand where their company should draw that line: between analysis a computer program can automate, and a decision that humans should make. For example, AI could automate much due-diligence work. But would you automate the triage of whistleblower retaliation complaints, or the disclosure of FCPA violations?
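That line can be sketched in a few lines of code. Everything here is hypothetical (the transaction amounts, the outlier threshold, the action names); the contrast is that the analytics step surfaces an insight but decides nothing, while the “AI” step merely selects one of a fixed set of actions.

```python
# Toy contrast between big-data analytics and an "AI" decision.
# All data, thresholds, and action names are hypothetical.

transactions = [120, 95, 4_800, 110, 105]  # hypothetical daily amounts

# Big-data-style analytics: surface an insight, decide nothing.
mean = sum(transactions) / len(transactions)
outliers = [t for t in transactions if t > 3 * mean]

# "AI"-style decision: map the insight to one of a predetermined set of choices.
def decide(outlier_count):
    """Select an action from a fixed set, based on one input."""
    if outlier_count == 0:
        return "approve"
    if outlier_count <= 2:
        return "flag_for_review"
    return "escalate"  # past this point, a human should take over

print(decide(len(outliers)))  # flag_for_review
```

Deciding which branches of `decide` a human should own, rather than the program, is exactly the line-drawing exercise described above.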

  2. Understand the importance of input values

The great advantage of AI is speed: once it starts, it runs incredibly fast. The risk is that AI starts from a flawed position and races the company to a conclusion it can’t countenance. You might assess the loans in a portfolio incorrectly and misstate liquidity risk; your social network’s “people you may know” feature might systematically exclude minorities. AI works on models and algorithms, and if those inputs rest on faulty data or on assumptions that contradict your firm’s values, you’ll be in trouble sooner than you think.
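A minimal sketch of the liquidity example, with entirely hypothetical loan amounts and default rates, shows how one flawed input assumption flows through an otherwise flawless calculation:

```python
# Minimal sketch (hypothetical numbers): the same fast, correct model,
# fed a flawed default-rate assumption, misstates risk across an
# entire portfolio.

def expected_losses(portfolio, default_rate):
    """Fast and automated -- and only as good as its input assumption."""
    return sum(loan * default_rate for loan in portfolio)

portfolio = [100_000] * 1_000  # 1,000 identical hypothetical loans

honest = expected_losses(portfolio, default_rate=0.05)
flawed = expected_losses(portfolio, default_rate=0.01)  # faulty input data

# The algorithm ran flawlessly both times; the flawed input understates
# expected losses by a factor of five across the whole portfolio.
print(honest - flawed)  # 4000000.0
```

Nothing in the code is buggy; the damage comes entirely from the input, and the speed of the computation just means the wrong answer arrives sooner.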

  3. Remember problems of scale

I mentioned earlier that financial firms want AI to deliver a more “customer-centric” experience to each client. At a conference I attended, someone said: “Aren’t they just trying to improve customer service? Isn’t that old news?”

Well, yes; but remember the state of affairs decades ago: firms were smaller, so they could offer better client service, even in a paper-based world. So could small airlines, not worried about overbooking; or retailers, who knew specific customers by sight.

The challenge today is to scale up that high-touch environment to large volumes of customers—while each one brings a specific context, and that context might trip up your business. You need to be smart about it. Otherwise a little AI can be a dangerous thing.

By: Matt Kelly

Ethics & Compliance Matters™ is the official blog of NAVEX Global®. All articles posted on the Inside Internal Controls blog originally appeared on NAVEX Global’s Ethics and Compliance Matters blog.