Businesses, health systems, utilities and even governments run on data — unseen torrents of it constantly sloshing back and forth across globe-spanning networks, at a speed and volume too large for any human brain to fathom. Fraudsters often lurk within that hidden world, exploiting weaknesses in systems or using crafty techniques to mask their activities. 

Organizations can use big data technology to guard against various types of fraud, from data breaches to false claims that an item never arrived, by training algorithms to recognize what is and is not normal behavior within a system. The technology required for such operations is complex and requires big investments from e-commerce experience builders like Signifyd, which uses data science technology to protect its customers from abuse. Meanwhile, cybersecurity outfits like ActZero use similar technology to help businesses recognize the potential presence of hackers within their own systems. 

To learn more about what data science-driven cybersecurity looks like in practice, we caught up with data science leaders at Signifyd and ActZero. 

 

Diana Rodriguez
Senior Director of Data Science • Signifyd

Company background: Signifyd helps retailers produce e-commerce experiences for customers. The company provides a financial guarantee against approved orders that turn out to be fraudulent, which places data security for customers and vendors front and center. 

 

Describe the data sets your technology runs on and how that data is collected.

Our data sets consist of hundreds of billions of dollars’ worth of transaction data from thousands of online merchants selling in every retail vertical in more than 100 countries around the world. If you want to envision that data set of transactions, think of a top 10 online merchant. Now think bigger — and bigger still. That commerce network is at the core of what we do, and I constantly work with and on the technology that drives it. Our data is enriched with data sets from third-party providers, which amplifies our ability to understand the identity and intent of every order placed on our global commerce network. Our commerce network data is collected through custom API integrations with some retailers and through standard Signifyd applications available through all the major e-commerce platforms such as Shopify Plus, Magento, Salesforce, BigCommerce and others.

 

How Signifyd Uses Data Science

Signifyd’s models harvest their most valuable insights from behavior patterns that indicate whether an e-commerce order is legitimate or fraudulent, or whether a customer complaint involves an honest failure on a retailer’s part or a dishonest attempt by a fraudster to take advantage of the retailer. Those anomalies can show up as disparities between shipping and billing addresses, or in transaction history, device ID and location, among thousands of other signals.
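Signifyd’s actual models and signal set are proprietary, but the idea of combining many weak signals into one decision can be sketched in a few lines. The signal names, weights and threshold below are illustrative assumptions, not the company’s real feature set:

```python
# Toy sketch of multi-signal fraud scoring. Signal names and weights are
# illustrative assumptions, not Signifyd's actual (proprietary) model.

def fraud_score(order, weights=None):
    """Combine several weak boolean signals into one risk score in [0, 1]."""
    weights = weights or {
        "address_mismatch": 0.3,   # shipping vs. billing address differ
        "new_device": 0.2,         # device ID never seen for this account
        "geo_mismatch": 0.3,       # IP location far from billing address
        "rushed_shipping": 0.2,    # expedited shipping on a first order
    }
    score = sum(w for signal, w in weights.items() if order.get(signal))
    return min(score, 1.0)

def decide(order, threshold=0.5):
    return "review" if fraud_score(order) >= threshold else "approve"

# A single anomalous signal alone does not trip the threshold...
print(decide({"new_device": True}))                              # approve
# ...but several together do.
print(decide({"address_mismatch": True, "geo_mismatch": True}))  # review
```

The point of the sketch is the shape of the decision, not the numbers: no single signal can condemn an order on its own, which echoes the warning later in the interview about relying too heavily on one data point.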

 

What are the most valuable insights or patterns you look for in the data? 

Our challenge is to apply the latest developments in machine learning to turn fuzzy concepts like trust into solvable, quantitative problems: Do I trust this order and the consumer behind it? Fraud is also an adversarial problem. Unlike, say, a self-driving car, where innovators, developers and society at large all share an interest in making sure the artificial intelligence works well and evolves rapidly, fraudsters are out to foil our artificial intelligence. As our solutions become more effective, fraudsters shift their targets and tactics. We need to anticipate those moves and stay ahead of them. It certainly makes life interesting.

By analyzing transactions on multiple data points, we are able to see the full picture and better distinguish the good from the bad, as well as enable merchants to properly enforce policies that meet their business goals. Watching the changes in behavior through the data has inspired Signifyd to develop additional solutions for merchants, including those tackling unauthorized reselling, fraudulent returns and false claims that an ordered item never arrived, to name a few. All the while, as we analyze patterns, adjust the model and evolve our view of fraud, we need to keep in mind that the primary goal is not stopping fraudsters but enabling good buyers to buy. That makes life better for end consumers and also for our merchant-customers, who see higher revenue and build customer lifetime value for their enterprises.

 

“It can be dangerous to rely too heavily on one data point.”

 

What are some of the potential drawbacks of using data science to solve problems like fraud, and how can technologists avoid or mitigate them?

We can never forget that the machine learning models we build to make data understandable and actionable are the products of human minds. Human minds are wonderful things, but they are not flawless: they come with biases and preconceived notions of outcomes. Because humans conceive of and develop models, they can inject those biases into the models, which in turn introduce biases into the outcomes the models produce.

Data scientists are building very complex models, and they apply various model interpretability methods to balance high performance with accountability and explainability. Employing interpretability methods is one way to better explain how our models function and to diagnose issues of bias or fairness that might otherwise go undetected.
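One widely used interpretability method is permutation importance: shuffle one feature’s values and measure how much the model’s error grows. The model and data below are invented for illustration; they are not Signifyd’s:

```python
import random

# Minimal sketch of permutation importance, one common model
# interpretability method. The model and data here are invented
# for illustration, not Signifyd's.

def model(x):
    # Hypothetical risk model: feature 0 matters a lot, feature 2 not at all.
    return 0.8 * x[0] + 0.2 * x[1] + 0.0 * x[2]

def mse(xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, feature, trials=20, seed=0):
    """Average error increase when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    base = mse(xs, ys)
    increases = []
    for _ in range(trials):
        col = [x[feature] for x in xs]
        rng.shuffle(col)
        shuffled = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(xs, col)]
        increases.append(mse(shuffled, ys) - base)
    return sum(increases) / trials

rng = random.Random(1)
xs = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
ys = [model(x) for x in xs]  # labels generated by the model itself

# Shuffling the heavily weighted feature hurts far more than the unused one.
print(permutation_importance(xs, ys, 0))  # clearly positive
print(permutation_importance(xs, ys, 2))  # 0.0 (feature unused)
```

A diagnostic like this can reveal when a model leans far harder on one input than intended, which is exactly the kind of hidden bias or fairness issue interpretability work tries to surface.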

One advantage of basing our decisioning on thousands of signals is that we do not ascribe too much weight to any one or two of them. It can be dangerous to rely too heavily on one data point. Consider account takeover fraud, in which a fraudster who has obtained a password, for instance, takes control of a legitimate consumer’s account. If decisions were based solely on the behavior tied to that email address, the account would appear suspicious and the legitimate consumer’s ability to transact would be compromised. That’s why our models base their decisions on a multitude of signals.

 

How do you think data science technology will evolve over the next year?

Like all science, data science will continue to advance through experimentation, trial and error, vigorous debate and rigorous study. Most of all, it will continue to advance through collaboration. Data science is moving fast and its power is being extended to more and more facets of our everyday lives. New ideas and techniques are being shared every day. Understanding how we can build off of the collaboration in the space to continue to refine and explore new ways of building models will be an ongoing pursuit. Flexibility in experimentation will be key, with clear goals guiding the continuing exploration.

 

Alexis Yelton
Director of Data Science • ActZero

Company background: ActZero provides an AI-powered security platform for small and medium-sized businesses, detecting anomalies that may betray the presence of bad actors within a system. The company raised $40 million in funding earlier this year, which it is using to publicly launch its product. 

 

Describe the data sets your technology runs on and how that data is collected.

We build ML and mathematical models on computer, network and cloud software logs in order to detect and respond to cyber threats. This data is semi-structured and very big: we see terabytes of data per day.

 

What are the most valuable insights or patterns you look for in the data? 

We look for two kinds of patterns in the data. The first is patterns known to be indicative of cyberattacks, including but not limited to known suspicious commands, suspicious processes executing and command line character entropy. The second is patterns that indicate anomalous behavior. For these, we learn what is normal behavior for a user, machine or customer and highlight unusual behavior using anomaly detection algorithms.
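Command line character entropy, one of the known-bad indicators mentioned above, can be sketched concretely: obfuscated or base64-encoded payloads tend to use a wider, more uniform spread of characters than ordinary commands. The threshold below is an illustrative assumption, not ActZero’s:

```python
import math
import string

# Sketch of one signal the interview mentions: command line character
# entropy. Encoded or obfuscated payloads tend to have higher Shannon
# entropy than ordinary commands. The 4.5 bits/char threshold is an
# illustrative assumption, not ActZero's.

def shannon_entropy(s):
    """Bits per character of the string's empirical character distribution."""
    if not s:
        return 0.0
    counts = {}
    for ch in s:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_obfuscated(cmdline, threshold=4.5):
    return shannon_entropy(cmdline) > threshold

# A stand-in for an encoded blob: all 64 base64 characters, maximally diverse.
encoded_blob = string.ascii_letters + string.digits + "+/"

print(looks_obfuscated("ls -la /var/log"))   # False (~3.2 bits/char)
print(looks_obfuscated("cmd /c " + encoded_blob))  # True
```

In practice a signal like this would be one input among many, combined with the learned-baseline anomaly detection described above rather than used as a verdict on its own.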

 

“We build simple features that can serve as output in their own right, then more complex ones.”

 

What are some of the potential drawbacks of using data science to solve problems, and how can technologists avoid or mitigate them?

The biggest drawback is obvious but essential to understand: Machine learning models and even simpler mathematical models are costly to build, run and maintain. You need a robust code framework and infrastructure to process data and to run and monitor algorithms. These types of models also require substantial maintenance. 

We have spent a great deal of time mitigating these issues at ActZero, and we do so in a number of ways. Firstly, we take an iterative but also additive approach to rolling out models. We build simple features that can serve as output in their own right, then more complex ones. All of these are input into a feature store from which future pipelines can draw. Then we build heuristics based on logic and statistics that use these features to produce meaningful predictions. Finally, we build an ML model if there is a business case for one. We reuse our framework — mainly via our feature store and data science pipelines — as much as possible to accelerate and simplify productionization.
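The layered approach described above, simple features feeding a shared store, heuristics built on those features, and an ML model only when warranted, can be sketched in miniature. The store design and the features here are illustrative assumptions, not ActZero’s implementation:

```python
# Toy sketch of the additive rollout described in the interview: simple
# features go into a shared store, heuristics combine them, and an ML
# model is added later only if there is a business case. The features
# and thresholds are illustrative assumptions, not ActZero's.

class FeatureStore:
    """Keeps registered feature computations so later pipelines can reuse them."""
    def __init__(self):
        self._features = {}

    def register(self, name, fn):
        self._features[name] = fn

    def compute(self, event):
        return {name: fn(event) for name, fn in self._features.items()}

store = FeatureStore()
# Simple features that can also serve as output in their own right.
store.register("failed_logins", lambda e: e.get("failed_logins", 0))
store.register("off_hours", lambda e: e.get("hour", 12) < 6 or e.get("hour", 12) > 22)

def heuristic_alert(features):
    """Logic/statistics layer built on stored features, short of a full ML model."""
    return features["failed_logins"] >= 5 and features["off_hours"]

event = {"failed_logins": 7, "hour": 3}
feats = store.compute(event)
print(feats)                   # {'failed_logins': 7, 'off_hours': True}
print(heuristic_alert(feats))  # True
```

Because every layer reads from the same store, a later ML model can consume the identical feature dictionaries the heuristics already use, which is what makes the rollout additive rather than a rewrite.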

 

How do you think data science technology will evolve over the next year?

There has been a proliferation of companies that sell machine learning frameworks for feature stores, ML pipelines and auto-ML. Over the next year, I see these dropping in cost and experiencing significant market adoption as the need grows for data science solutions.

 
