
Real-Time Fraud Detection in the Cloud

Using machine learning agent ensembles

This article explores how to detect fraud among online banking customers in real time by running an ensemble of statistical and machine learning algorithms on a dataset of customer transactions and demographic data. The algorithms, namely Logistic Regression, Self-Organizing Maps and Support Vector Machines, are operationalized using a multi-agent framework for real-time data analysis. The article also explores the cloud as an environment for real-time analytics by deploying the agent framework in a cloud environment that meets computational demands by letting users provision virtual machines within managed data centers, freeing them from the worry of acquiring and setting up new hardware and networks.

Real-time decision making is becoming increasingly valuable as data collection and analytics techniques advance. With the increase in processing speed, the classical data warehousing model is moving toward a real-time model. A platform that enables the rapid development and deployment of applications, reducing the lag between data acquisition and actionable insight, has become of paramount importance in the corporate world. Such a system can be used for the classic case of deriving information from data collected in the past, and also as a real-time engine that reacts to events as they occur. Examples of such applications include:

  • A product company can get real-time feedback on its new releases using data from social media
  • Algorithmic trading that reacts in real time to fluctuations in stock prices
  • Real-time recommendations for food and entertainment based on a customer's location
  • Traffic signal operations based on real-time information about traffic volume
  • E-commerce websites can detect in real time whether a customer transaction is authentic or fraudulent

A cloud-based ecosystem enables users to build an application that detects fraudulent customers in real time based on their demographic information and financial history. Multiple algorithms are used to detect fraud, and their outputs are aggregated to improve prediction accuracy.

The dataset used to demonstrate this application comprises various customer demographic variables and financial information, such as age, residential address, office address, income type, income frequency and bankruptcy filing status. The dependent variable (the variable to be predicted) is called "bad"; it is a binary variable taking the value 0 (not fraud) or 1 (fraud).

Using the Cloud for Effective Use of Resources
A system that allows the development of applications capable of producing results in real time runs multiple services in tandem and is highly resource intensive. Deploying the system in the cloud lets maintenance and load balancing be handled efficiently, and gives the user more time to focus on application development. For fraud detection, the active components include the following (a sketch of how agents might exchange messages over ActiveMQ follows the list):

  • ActiveMQ
  • Web services
  • PostgreSQL
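
The article does not show how these components are wired together. Purely as an illustration, here is a minimal sketch of a single agent publishing a score over ActiveMQ's STOMP interface using the stomp.py client; the broker host, credentials, queue name and message fields are all assumptions, not details from the article:

```python
import json
import stomp  # stomp.py client; ActiveMQ exposes STOMP on port 61613 by default

# Hypothetical broker location and credentials
conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)

# A Tier-2 algorithm agent could publish its score for a downstream agent to consume
conn.send(destination="/queue/fraud.scores",
          body=json.dumps({"algorithm": "svm", "score": 1}))
conn.disconnect()
```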

This approach combines the strengths and synergies of both cloud computing and machine learning technologies. It gives a small company, or even a startup, that is unlikely to have the specialized staff and infrastructure required for such a computationally intensive approach the ability to build a system that makes decisions based on historical transactions.

Agent Paradigm
As multiple algorithms are to be run on the same data, a real-time agent paradigm is chosen to run them. An agent is an autonomous entity that may expect inputs and send outputs after performing a set of instructions. In a real-time system, these agents are wired together with directed connections to form an agency. An agent typically exhibits one of two behaviors: cyclic or triggered. Cyclic agents, as the name suggests, run continuously in a loop and do not need any input. These are usually the first agents in an agency and are used to stream data into the agency by connecting to an external real-time data source. A triggered agent runs every time it receives a message from a cyclic agent or another triggered agent. Once it consumes one message, it waits for the next message to arrive.

Figure 1: A simple agency with two agents

In Figure 1, Agent 1 is a cyclic agent while Agent 2 is a triggered agent. Agent 1 finishes its computation and sends a message to Agent 2, which uses the message as an input for further computation.
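
The agent framework itself is not published in the article. As a minimal Python sketch of the cyclic/triggered pattern just described (all class and function names here are hypothetical), the two-agent agency of Figure 1 might be expressed as:

```python
import queue
import threading
import time

class TriggeredAgent:
    """Runs its computation once for every message it receives."""
    def __init__(self, compute):
        self.inbox = queue.Queue()
        self.compute = compute
        self.downstream = []  # directed connections to other agents

    def connect(self, agent):
        self.downstream.append(agent)

    def run(self):
        while True:
            msg = self.inbox.get()          # wait for the next message
            result = self.compute(msg)
            for agent in self.downstream:
                agent.inbox.put(result)

class CyclicAgent(TriggeredAgent):
    """Runs continuously in a loop; typically streams data into the agency."""
    def __init__(self, produce):
        super().__init__(compute=None)
        self.produce = produce

    def run(self):
        while True:
            record = self.produce()         # e.g., read one transaction
            for agent in self.downstream:
                agent.inbox.put(record)
            time.sleep(0.01)                # pace the stream for this demo

# Wire up the two-agent agency of Figure 1
agent1 = CyclicAgent(produce=lambda: {"txn_id": 1})
agent2 = TriggeredAgent(compute=lambda msg: print("received", msg))
agent1.connect(agent2)
for a in (agent1, agent2):
    threading.Thread(target=a.run, daemon=True).start()
time.sleep(0.1)  # let the demo run briefly
```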

Feature Selection and Data Treatment
The dataset used to demonstrate the fraud detection agency has 250 variables (features) pertaining to the demographic and financial history of the customers. To reduce the number of features, a Random Forest was run on the dataset to obtain variable importances, and the top 30 variables were selected on that basis. This reduced dataset was used to run a set of classification algorithms.
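
As a hedged sketch of this step using scikit-learn (the file name, column names and forest settings are assumptions; the article does not specify them):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("transactions.csv")   # hypothetical file: 250 features plus "bad"
X, y = df.drop(columns="bad"), df["bad"]

# Fit a Random Forest purely to obtain variable importances
rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X, y)

# Rank features by importance and keep the top 30 for the downstream classifiers
importance = pd.Series(rf.feature_importances_, index=X.columns)
top30 = importance.sort_values(ascending=False).head(30).index
X_reduced = X[top30]
```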

Algorithms for Fraud Detection
The fraud detection problem is a binary classification problem for which we have chosen three different algorithms to classify the input data into fraud (1) and not fraud (0). Each algorithm is configured as a triggered agent for our real-time system.

Logistic Regression
This is a probabilistic classification model in which the dependent variable (the variable to be predicted) is binary or categorical. With a binary dependent variable, favorable outcomes are represented as 1 and non-favorable outcomes as 0. Logistic regression models the probability of the dependent variable taking the value 0 or 1.

For the fraud detection problem, the dependent variable "bad" is modeled to give each customer a probability of being fraudulent or not. The equation takes multiple variables as input and returns a value between 0 and 1, which is the probability of "bad" being 0. If this value is greater than 0.7, the customer is classified as not fraud.
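
A minimal scikit-learn sketch of this scoring rule, assuming X_train and y_train come from the reduced feature set described above:

```python
from sklearn.linear_model import LogisticRegression

# Train on historical data (X_train, y_train assumed from the feature-selection step)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

def classify(row):
    """Apply the article's rule: P("bad" = 0) > 0.7 means 'not fraud'."""
    p_not_fraud = model.predict_proba([row])[0][0]  # column 0 holds the class-0 probability
    return "not fraud" if p_not_fraud > 0.7 else "fraud"
```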

Self-Organizing Maps (SOM)
This is an artificial neural network that uses unsupervised learning to represent the data in a lower number of dimensions (typically two). This lower-dimensional representation of the input data is called a map. Like most artificial neural networks, SOMs operate in two modes: training and mapping. "Training" builds the map using input examples, while "mapping" automatically classifies a new input vector.

For the fraud detection problem, the fifty-dimensional input space is mapped to a two-dimensional lattice of nodes. Training is done on data from the recent past, and new data is mapped using the trained model, which places it in either the "fraud" cluster or the "not fraud" cluster.

Figure 2: x is an input vector in a higher dimension, discretized in 2D using wij as the weight matrix
Image Source: http://www.lohninger.com/helpcsuite/kohonen_network_-_background_information.htm
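
The article does not say which SOM implementation it uses. One hedged sketch with the third-party MiniSom library follows; the map size and the node-labeling step are assumptions, since turning an unsupervised map into a fraud/not-fraud classifier requires labeling the nodes somehow:

```python
import numpy as np
from collections import defaultdict
from minisom import MiniSom  # third-party SOM library (one possible choice)

# Train a 10x10 map on scaled historical feature vectors (X_hist, y_hist assumed)
som = MiniSom(10, 10, X_hist.shape[1], sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(X_hist, num_iteration=10_000)

# Label each map node by the majority "bad" value of the training rows it wins
votes = defaultdict(list)
for x, label in zip(X_hist, y_hist):
    votes[som.winner(x)].append(label)
node_label = {node: int(round(np.mean(v))) for node, v in votes.items()}

def classify(x_new):
    # Map the new point to its best-matching node and return that node's cluster label
    return node_label.get(som.winner(x_new), 0)  # default to "not fraud" for unseen nodes
```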

Support Vector Machines (SVM)
This is a supervised learning technique generally used for classifying data. It requires a training dataset in which the data is already classified into the required categories. From this it creates a hyperplane, or a set of hyperplanes, that can be used for classification. The hyperplane is chosen so that it separates the different classes and the margin between the samples in the training set is widest.

For the fraud detection problem, SVM classifies the data points into two classes. The hyperplane is chosen by training the model on past data. Using the variable "bad", the classes are labeled "0" (not fraud) and "1" (fraud). New data points are classified using the hyperplane obtained during training.

Figure 3: Of the three hyperplanes which segment the data, H2 is the hyperplane which classifies the data accurately

Image Source: http://en.wikipedia.org/wiki/File:Svm_separating_hyperplanes.png
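
A brief scikit-learn sketch of this step; the kernel choice is an assumption, since the article only describes a separating hyperplane, which a linear kernel mirrors:

```python
from sklearn.svm import SVC

# Train a maximum-margin classifier on historical data (X_train, y_train assumed)
svm = SVC(kernel="linear")
svm.fit(X_train, y_train)

# Classify incoming transactions: 1 = fraud, 0 = not fraud, matching the "bad" variable
predictions = svm.predict(X_stream)
```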

Fraud Detection Agency
A four-tier agency is created to build a workflow process for fraud detection.

Streamer Agent (Tier 1): This agent streams data in real-time to agents in Tier 2. It is the first agent in the agency and its behavior is cyclic. It connects to a real-time data source, pre-processes the data and sends it to the agents in the next layer.

Algorithm Agents (Tier 2): This tier has multiple agents running an ensemble of algorithms with one agent per algorithm. Each agent receives the message from the streamer agent and uses a pre-trained (trained on historical data) model for scoring.

Collator Agent (Tier 3): This agent receives scores from the agents in Tier 2 and generates a single score by aggregating them. It then converts the score into an appropriate JSON format and sends it to a UI agent for consumption.

User Interface Agent (Tier 4): This agent pushes the messages it receives to a socket server. Any external socket client can be used to consume these messages.

Figure 4: The Fraud detection agency with agents in each layer. The final agent is mapped to a port to which a socket client can connect
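
The article does not state which aggregation rule the collator uses; a majority vote across the three algorithm agents is one plausible choice, sketched here with hypothetical message fields:

```python
import json

def collate(scores):
    """Aggregate per-algorithm votes (0/1) into a single verdict via majority vote."""
    fraud_votes = sum(scores.values())
    verdict = 1 if fraud_votes > len(scores) / 2 else 0
    return json.dumps({"scores": scores, "fraud": verdict})

# e.g., messages collected from the three Tier-2 agents
print(collate({"logistic": 1, "som": 1, "svm": 0}))  # -> {"scores": {...}, "fraud": 1}
```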

Results and Model Validation
The models were trained on 70% of the data and the remaining 30% of the data was streamed to the above agency simulating a real-time data source.

Under-sampling: The ratio of the number of 0s to the number of 1s for the variable "bad" in the original dataset is 20:1, which would bias the models toward 0. To overcome this, we sample the training dataset by under-sampling the 0s to maintain the ratio at 10:1.
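
A short pandas sketch of this under-sampling step, assuming the training split lives in a DataFrame train_df with the "bad" column:

```python
import pandas as pd

def undersample(df, target="bad", ratio=10):
    """Keep every fraud row (bad = 1) and sample ratio-times as many non-fraud rows."""
    fraud = df[df[target] == 1]
    clean = df[df[target] == 0].sample(n=ratio * len(fraud), random_state=42)
    return pd.concat([fraud, clean]).sample(frac=1, random_state=42)  # shuffle

train_balanced = undersample(train_df, ratio=10)  # the 10:1 ratio used in the article
```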

The final output of the agency is the classification of the input as fraudulent or not. Since the value for the variable "bad" is already known for this data, it helps us gauge the accuracy of the aggregated model.

Figure 5: Accuracy for detecting fraud ("bad" = 1) for different sampling ratios between the number of 0s and the number of 1s in the training dataset

Conclusion
Fraud detection can be improved by running an ensemble of algorithms in parallel and aggregating the predictions in real time. This entire end-to-end application was designed and deployed in three working days, which shows the power of a system that enables easy deployment of real-time analytics applications. The workflow is inherently parallel, as the agents run as separate processes communicating with each other. Deploying the system in the cloud makes it horizontally scalable, owing to effective load balancing and hardware maintenance. The cloud also provides greater data security and makes the system fault tolerant by making processes mobile. This combination of a real-time application development system and cloud-based computing enables even non-technical teams to rapidly deploy applications.

References

  • Gravic Inc., "The Evolution of Real-Time Business Intelligence," http://www.gravic.com/shadowbase/pdf/white-papers/Shadowbase-for-Real-Time-Business-Intelligence.pdf
  • Bernhard Schölkopf and Alexander J. Smola (2002), "Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning)," MIT Press
  • Christopher Burges (1998), "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, Kluwer Publishers
  • Kohonen, T. (Sep 1990), "The Self-Organizing Map," Proceedings of the IEEE
  • Samuel Kaski (1997), "Data Exploration Using Self-Organizing Maps," Acta Polytechnica Scandinavica: Mathematics, Computing and Management in Engineering Series No. 82
  • Rokach, L. (2010), "Ensemble-Based Classifiers," Artificial Intelligence Review
  • Robin Genuer, Jean-Michel Poggi, and Christine Tuleau-Malot, "Variable Selection Using Random Forests," http://robin.genuer.fr/genuer-poggi-tuleau.varselect-rf.preprint.pdf

More Stories By Roger Barga

Roger Barga, PhD, is Group Program Manager for the CloudML team at Microsoft Corporation where his team is building machine learning as a service on the cloud. He is also a lecturer in the Data Science program at the University of Washington. Roger joined Microsoft in 1997 as a Researcher in the Database Group of Microsoft Research (MSR), where he was involved in a number of systems research projects and product incubation efforts, before joining the Cloud and Enterprise Division of Microsoft in 2011.

More Stories By Avinash Joshi

Avinash Joshi is a Senior Research Analyst in the Innovation and Development group of Mu Sigma Business Solutions. He is currently part of a team that works on generating insights from real-time data streams in financial markets. Avinash joined this team in 2011 and has interests ranging from marketing mix modeling to algorithmic trading.

More Stories By Pravin Venugopal

Pravin Venugopal is a Senior Research Analyst in the Innovation and Development group of Mu Sigma Business Solutions. He is currently part of a team that is developing a low latency platform for algorithmic trading. Pravin received his Masters degree in Computer Science and has been a part of Mu Sigma since 2012. His interests include analyzing real-time financial data streams and algorithmic trading.
