AI vs. AI: The Race Between Fraudsters and Fraud Detection Gets Automated

Swami Vaithianathasamy of Signifyd discusses why an automated solution is the best way to keep pace with the constantly evolving tactics of digital fraud

While it’s tempting to picture those committing digital fraud as lone wolves spending hours in their bedrooms, working to weasel their way into someone’s account, in reality professional fraud operations look more like the JP Morgan trading floor.

Like any other enterprise, sophisticated fraud operations have been turning to artificial intelligence (AI) and machine learning to scale their businesses while increasing efficiency, accuracy and profitability. Not surprisingly, but ironically, the key reason fraudsters are deploying AI is to take on the AI used to protect retailers, banks and other businesses.

Automated bot attacks are proliferating

And the malicious use of AI is no fringe trend. Globally, bot attacks on ecommerce sites increased from under 100 million in Q1 2017 to nearly 1.4 billion in Q2 2018.

The surge is attributed to the steadily growing number of data security breaches. ThreatMetrix points to spikes in bot attacks that coincide with some of the year’s most notorious breaches. For instance, ThreatMetrix notes, one of 2017’s highest attack rates came in Q2, just as the Equifax breach, which affected the records of 148 million consumers, was getting underway.

And just as the wisest businesses turn to a combination of human and machine to get the optimal result, fraudsters balance the speed and scale of machines with the intuition, experience and expertise of humans to get the job done.

Given the logistics of a concerted fraud enterprise, it becomes clear why the best in the dark business turn to machines: bots can test stolen credentials against login pages at a speed and scale no human team could match. Then, once the machine breaks into an existing account, a human takes over to ensure that the browsing and checkout behaviour looks human, so as not to raise the suspicions of the machine-learning models and the human beings protecting merchants from fraud.

Fraudsters, like enterprises, need speed and scale to succeed

But as businesses, fraud rings need to take over thousands of accounts, or more, and because account takeovers are ultimately discovered, they need to constantly compromise new accounts to keep their cash flow positive.

Beyond the scale challenge, fraudsters also work in a world where time is of the essence. The window between a data theft that yields thousands or millions of stolen identities and the moment the theft is discovered is prime time for creating and stealing credit and ecommerce accounts.

Again, a human, or even a team of humans, in a room is not going to be up to the task. Machines, however, are exceptionally good at the tasks necessary to take over accounts, and they never rest.

Fraud-protection systems that use big data, AI and domain expertise to foil criminals are constantly learning. When properly designed, they sift through orders in milliseconds, sorting fraudulent orders from legitimate ones with incredible accuracy.

Incredible accuracy, but not perfect accuracy. Sometimes a machine or a machine aided by a human with intuition and experience will ship an order that should have been declined. Or the system might hold back an order that should have been shipped. A properly designed system will include a feedback loop that will feed the circumstances of that error back into the machine, so it learns from its mistakes.
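
To make that feedback loop concrete, here is a minimal, illustrative sketch in Python, assuming a scikit-learn-style classifier and synthetic data; it is not Signifyd’s actual pipeline. Confirmed mistakes, such as a shipped order that comes back as a chargeback, are appended to the training set and the model is refit:

```python
"""Illustrative feedback loop for a fraud model: misclassified orders
are folded back into the training data so the model learns from its
mistakes. All features, labels and thresholds here are synthetic."""
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # order features (synthetic)
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # 1 = fraud (synthetic label)

model = GradientBoostingClassifier().fit(X, y)

def record_outcome(features, predicted, actual, X, y):
    """Feed a confirmed error (a chargeback on a shipped order, or a
    legitimate order that was declined) back into the training data."""
    if predicted != actual:
        X = np.vstack([X, features])
        y = np.append(y, actual)
    return X, y

# A shipped order later comes back as a chargeback: the true label was 1.
order = rng.normal(size=(1, 5))
predicted = int(model.predict(order)[0])
X, y = record_outcome(order, predicted, actual=1, X=X, y=y)
model.fit(X, y)  # periodic retraining closes the loop
```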

Fraudsters’ machines learn from anti-fraud machines

On the other side, the fraudsters’ machines are learning the same way. In recent years, some fraudsters have sought to speed up that learning process by actually stealing the fraud-protection model they are preparing to go up against. The heist, known as “model extraction,” exploits the practice of organisations hosting their models in the cloud and inviting users to accelerate the model’s learning by sending it data to act upon.
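
The mechanics are easy to illustrate. In the toy sketch below, an entirely hypothetical “victim” model stands in for a cloud-hosted scoring endpoint; the attacker trains a surrogate on nothing but query-and-response pairs:

```python
"""Toy model extraction: query a hosted model, train a surrogate on the
answers. The 'victim' is a local stand-in for a cloud scoring endpoint."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Stand-in for the hosted fraud model the attacker can only query.
X_private = rng.normal(size=(500, 4))
y_private = (X_private.sum(axis=1) > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# The attacker sends probe "transactions" and records each decision.
probes = rng.normal(size=(2000, 4))
answers = victim.predict(probes)

# A surrogate trained on those pairs approximates the victim, letting
# the attacker rehearse evasion tactics offline.
surrogate = DecisionTreeClassifier(max_depth=5).fit(probes, answers)

holdout = rng.normal(size=(500, 4))
agreement = (surrogate.predict(holdout) == victim.predict(holdout)).mean()
print(f"surrogate matches the victim on {agreement:.0%} of unseen orders")
```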

The difficulty of essentially decoding the model depends entirely on the complexity of the model. In a previous role, I once sought insight into the skills and thinking of fraudsters with a simple experiment. For a set of transactions, I created a rule that said any order under £43 would be approved, while orders of £43 or more would require a more thorough review of a broad range of attributes to determine whether the person, payment method, device and location all lined up as a legitimate buyer. It took fraudsters less than a minute to figure out that the crucial factor was order value, and the £42 orders came pouring in.
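
In code, the rule from that experiment amounts to a single observable threshold; the sketch below (hypothetical beyond the £43 figure itself) shows how quickly probing exposes it:

```python
REVIEW_THRESHOLD_GBP = 43.00  # the rule from the experiment above

def decide(order_value_gbp: float) -> str:
    """Approve anything under the threshold; send the rest to review."""
    if order_value_gbp < REVIEW_THRESHOLD_GBP:
        return "approve"        # the fast path the fraudsters found
    return "manual_review"      # person, payment, device, location checks

# Probing with increasing order values exposes the threshold in minutes.
for value in range(40, 46):
    print(f"£{value}: {decide(value)}")   # £40-£42 approve, £43+ review
```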

Stealing a fraud model is only half the equation

Those engaged in model extraction work in a similar fashion. Essentially, they are reverse-engineering the model, or enough of it, to exploit it. This sort of extraction works particularly well against traditional, static, rules-based models that produce a score upon which a merchant makes ship-or-don’t-ship decisions.

Of course, just “stealing” a fraud prevention model isn’t enough. The fraud ring needs a vast supply of identities and personally identifiable information to go on a fraudulent shopping spree. Unfortunately, such personal data is available in abundance.

Combined with AI, this pilfered data allows a criminal to place a nearly unending string of orders, trying a tremendous number of combinations of attributes. The criminal relies on a process of elimination, reinforcing the combinations that nudge the ship-or-don’t-ship score toward approval.
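
A crude sketch of that probing loop, with a deliberately simplistic stand-in for the merchant’s hidden score, looks something like this:

```python
"""Process-of-elimination probing: mutate one attribute at a time and
keep any change that moves the score toward 'ship'. The score function
is a simplistic stand-in for a merchant's hidden model; the attribute
names and value pools are invented for illustration."""
import random

random.seed(0)
ATTRIBUTES = ["email_domain", "device_id", "shipping_zip", "card_bin"]
POOL = {a: [f"{a}_{i}" for i in range(5)] for a in ATTRIBUTES}

def score(order: dict) -> float:
    """Stand-in for the ship-or-don't-ship score (higher = ship)."""
    return sum(sum(map(ord, v)) % 100 for v in order.values()) / 400

order = {a: random.choice(POOL[a]) for a in ATTRIBUTES}
best = score(order)
for _ in range(200):                       # process of elimination
    attr = random.choice(ATTRIBUTES)
    candidate = dict(order, **{attr: random.choice(POOL[attr])})
    if score(candidate) > best:            # reinforce favourable combinations
        order, best = candidate, score(candidate)
print(f"score climbed to {best:.2f} after probing")
```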

The answer lies in fraud models that stay a step ahead of automated fraud tactics

While the tactics are new, the cat-and-mouse game in fraud is not. And so there are defences, available now and in the works, to turn the advantage back to the good guys.

The AI-powered fraud-protection models, obviously, are already helping retailers and other businesses stay a step ahead of fraudsters. Models that go beyond static rules and learn in real-time provide another shield against determined and nimble fraud operations.

And, of course, models that go beyond simply delivering a score for a merchant to ponder, instead automating the order decision and assuring the retailer that a poor decision won’t come out of his or her pocket, go a long way toward mitigating any maliciousness that an AI-powered fraudster can cause.

If the first rule of managing online fraud and mitigating risk is to remember that fraudsters are entrepreneurs, then maybe the second rule is to make sure the businesses they run are not profitable enough to be worth running.

Choosing the right AI and remaining vigilant when it comes to changes in fraudsters’ tactics and technology will go a long way to achieving that goal.