Invented by Kaitlin Newman, Jeffrey Rule, Rajko Ilincic, Capital One Services LLC
The Capital One Services LLC invention works as follows: a system and method for training a machine learning model to detect fraudulent calls is disclosed. The model is trained using audio clips, voice detection, call handler feedback, and public knowledge about commercial risks, allowing it to detect and divert fraudulent calls.
Background for “System and method for detecting fraud using recorded voice clips and machine learning”
The Payment Card Industry (PCI) standards govern obligations for major credit card companies to protect customer information when payments are made with a card. PCI regulations require, for example, that companies provide telephonic assistance so customers can modify their information, such as phone numbers and correspondence addresses. Customer support call centers are frequent targets for fraudsters because accounts can be accessed once customer information is obtained. Up to 90% of all calls to customer call centers on any given day are fraudulent callers trying to gain improper access to customers’ accounts.
According to one aspect of the invention, a system is described that detects fraudulent calls arriving from an incoming call network at a call center having a number of call handlers. The system comprises a fraud training table with at least one entry that stores information about at least one characteristic a call handler has flagged as indicating a fraudulent caller. The characteristic can be selected from the group of audible characteristics or origin characteristics. The system also includes a machine learning model that receives training input from the fraud training table, and a call center interface coupled between the incoming call network and the call handlers to selectively route a call from the network to a handler based on the machine learning model.
The method also includes steps to monitor characteristics of the call to locate audible characteristics identified in the fraud training table and, in response to locating such an audible characteristic, to calculate a probability that the call is from a fraudulent caller.
A further aspect of the invention is a method of detecting fraudulent calls received by a call center. The method includes training a machine learning model to recognize characteristics of fraudulent calls, including background noise, to produce a trained machine learning model. Training can use a fraud training table containing a number of entries corresponding to different characteristics of fraudulent calls. The method comprises receiving a call at the call center and dispatching a call agent that includes a copy of the trained machine learning model to handle the call. The call agent analyzes the characteristics of the call to identify those linked to fraudulent calls and, in response, calculates the probability that the call is fraudulent. The call is selectively routed to a call handler based on that probability, which can be displayed on the call handler’s GUI. The method further includes determining, in response to at least one of call handler input or the probability, that the call is from a fraudulent caller, and diverting it.
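The claimed method (receive a call, score it against known fraud characteristics, display the probability, and route accordingly) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the names (`Call`, `score_call`, `FRAUD_THRESHOLD`) and the max-over-matches scoring rule are assumptions, since the patent does not specify how characteristics are combined or what threshold triggers diversion.

```python
from dataclasses import dataclass, field

FRAUD_THRESHOLD = 0.8  # assumed cutoff; the patent does not name a value

@dataclass
class Call:
    audio_features: dict          # stand-in for characteristics parsed from audio
    gui: dict = field(default_factory=dict)
    routed_to: str = ""

def score_call(call, fraud_table):
    """Take the highest FPF among matched characteristics (an assumption;
    the patent does not specify a combination rule)."""
    matched = [fpf for name, fpf in fraud_table.items()
               if name in call.audio_features]
    return max(matched, default=0.0)

def handle_incoming_call(call, fraud_table):
    probability = score_call(call, fraud_table)
    call.gui["fraud_probability"] = probability  # shown on the handler's GUI
    call.routed_to = "fraud_branch" if probability >= FRAUD_THRESHOLD else "handler"
    return call
```

In this sketch, the fraud table is a plain mapping from characteristic name to fraud probability factor; a real system would hold richer entries, as described later for FIG. 2.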
According to an aspect of the invention, it was determined that fraudulent calls can share certain audible features, such as background sounds, speech patterns, and selected words. A system and method according to the invention train a machine learning model to detect fraudulent phone calls. The model is trained using audio clips, voice detection, call handler feedback, and public knowledge about commercial risks, allowing it to detect and divert fraudulent calls.
In one embodiment, a system to detect fraudulent calls includes a machine learning model adapted to receive data from call handlers about calls determined to be fraudulent. The system parses the audio of received calls using voice recognition to extract characteristics associated with fraud. The extracted characteristics can be used to train a call center interface to detect and eliminate fraudulent calls more quickly. A probability of fraud based on the machine learning model can be displayed on a call handler’s GUI.
Reference is now made to the drawings, wherein like reference numbers are used throughout to refer to like elements. To provide a better understanding of the novel embodiments, many specific details are provided in the following description. These details may not be necessary to practice the novel embodiments. In other instances, well-known devices and structures are shown in block diagram form to simplify the description. The intention is to cover all modifications, equivalents, and alternatives that fall within the scope of the claims.
As used in this application, the terms “system” and “component” are intended to refer to computer-related entities: hardware, a combination of hardware and software, or software in execution. Examples of these are shown by the computing architecture 100. Components can include, but are not limited to, processes running on a processor, hard drives, multiple storage devices (of optical or magnetic storage media), objects, executables, threads of execution, programs, and computers. As an example, both an application running on a web server and the web server itself can be components. A component may be part of a process or thread, and may be located on a single computer or distributed across two or more. Components can also communicate with each other via various communication media to coordinate their operations. Information may be exchanged in a unidirectional or bidirectional manner as part of the coordination. The components could, for example, communicate information via signals sent over communications media, with the information represented as signals allocated to different signal lines; in such allocations, each message is a signal. In other embodiments, however, data messages may be used instead, sent over various connections. Examples of connections include serial interfaces and bus interfaces.
The computing architecture 100 comprises various computing elements, such as processors (multi-core, single-core, etc.), memory units, controllers, and peripherals. It also includes oscillators, timing components, video cards, audio cards, multimedia input/output devices, power supplies, and so on. The embodiments are not limited to the computing architecture 100.
FIG. 1 shows one embodiment of a computing architecture 100 that may include a call center server 110. The call center server 110 is configured to route incoming calls from users, such as user 111, via network 150 to call center service representatives (“call handlers” for short) at workstations 190, 192, and 194, who provide customer service support. In some embodiments, the call center server 110 can be a processing system that includes one or more computers or servers connected via one or more network links. In some cases, the call center server can be a distributed computer system, with each server having one or more processors 113, each of which can include one or more processing cores for executing the software instructions that control data and information.
The call center server 110 includes several components that are specific to the invention. These include fraud training table 120, machine learning model 160, voice recognition system 165, and call center interface 170.
According to an aspect of the invention, a machine learning model 160 estimates the validity of a caller based on a training set built from fraudulent call data (i.e., data obtained from a caller later determined to be fraudulent). Training is the process used to construct machine learning models, and it can be at least partially automated, e.g., with minimal or no human interaction. During training and operation, input data is iteratively fed to the machine learning model, allowing it to identify patterns relating to the input or relationships between input and output data. As an example, the machine learning model can react to a “call handler detected fraud” message by retrieving audio files for the call from the data store. The audio clips are then processed by voice recognition system 165 to extract fraudulent call characteristics. The voice recognition system 165 compares incoming audio files with an existing data set to identify possible audio matches. The machine learning model analyzes the extracted characteristics to identify patterns of fraudulent calls, and records these characteristics in the fraud training table 120. With training, the machine learning model becomes a trained model able to detect fraudulent callers in real time.
In one embodiment, fraud training table 120 is configured to store a plurality of training entries, each associated with a fraudulent call characteristic. The machine learning model 160 continuously updates the fraud training table to reflect fraudulent caller behavior, allowing trending fraudulent activity to be detected and diverted appropriately. As will be explained in greater detail below, as call handlers flag fraudulent calls, the calls are processed in real time to extract fraudulent call characteristics, so that patterns can be identified more quickly and their impact reduced.
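The feedback loop described above (handler flags a call, characteristics are extracted, the table is updated so trending activity surfaces) can be sketched minimally. The class name and the use of a simple counter are illustrative assumptions; the patent's table holds richer entries, and feature extraction would be done by a voice recognition component rather than passed in as strings.

```python
from collections import Counter

class FraudTrainingTable:
    """Minimal sketch of fraud training table 120: each entry counts how
    often a characteristic appeared in calls flagged by call handlers."""

    def __init__(self):
        self.counts = Counter()

    def record_flagged_call(self, characteristics):
        # `characteristics` stands in for features the voice recognition
        # system would extract from a flagged call's audio clip
        self.counts.update(characteristics)

    def trending(self, top_n=3):
        """Return the characteristics most associated with flagged calls."""
        return [c for c, _ in self.counts.most_common(top_n)]
```

Because the table is updated on every flagged call, a newly popular fraud tactic (say, a recurring background noise) rises in `trending()` as soon as a few handlers report it.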
Call center interface 170 is used to route selected calls from network 150 to call handlers located at workstations 190, 192, and 194. According to an aspect of the invention, a call received by the call center server is handled by a call agent, such as call agent 182, 184, or 186. Call agents, like call agent 182, are copies of the software code for the current machine learning model. The call agent 182 uses the trained machine learning model to monitor the data stream of the incoming call, and redirects the call if it detects fraudulent activity. As used in this application, “diverting” a call can include forwarding it to an individual or administrative branch, terminating it, or otherwise moving the call away from the customer service representative.
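The call agent's monitoring behavior amounts to watching the live data stream and diverting the moment the fraud score crosses a threshold. A hedged sketch, where `model_score` stands in for the trained model and the threshold and return values are invented for illustration:

```python
def monitor_call(audio_chunks, model_score, threshold=0.8):
    """Sketch of a call agent (e.g., agent 182): score each chunk of the
    incoming data stream and divert as soon as the fraud probability
    crosses the threshold. Names and threshold are illustrative."""
    for chunk in audio_chunks:
        probability = model_score(chunk)
        if probability >= threshold:
            return "diverted"   # e.g., forwarded to fraud staff or terminated
    return "completed"          # call finished without triggering diversion
```

The key property is that diversion happens mid-call, as soon as the evidence suffices, rather than after the call ends.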
It should be noted that while the call agents 182, 184, and 186 are shown in FIG. 1 as part of the call center interface 170, other embodiments can be envisioned where the call agent operates on the local workstation 190. Where the call agent is deployed is a design decision that will depend on factors such as the architecture of the call center, the geographic distribution of the call handlers, and similar considerations. The present invention is not limited to the use of software agents; although their use is described in this document, similar functionality can be achieved by a shared service running on a cloud computing system.
In one embodiment, the call agent includes additional functionality to maintain a running Fraud Probability Factor (FPF) for each associated call. The FPF can be a numerical value that represents the likelihood that the caller is fraudulent. The FPF is updated dynamically in real time as new information (such as audible responses or background noise) is obtained from or about the caller during the call. In one embodiment, as will be explained in greater detail below, the FPF is displayed to the call handler, alerting the handler to the potential for fraud and allowing the handler to divert the call appropriately. Other or concurrent embodiments may automatically divert calls at the server 110 when the FPF meets a predetermined threshold.
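A running FPF of this kind could be maintained as follows. This is a sketch under an explicit assumption: the patent does not say how per-characteristic probabilities are combined, so a noisy-OR rule (each new cue independently reduces the probability the call is legitimate) is used here purely as an example.

```python
class FraudProbabilityFactor:
    """Sketch of a running FPF: starts at zero and rises in real time as
    each new audible cue is observed. The noisy-OR combination rule is an
    assumption; the patent does not specify one."""

    def __init__(self):
        self.p_legit = 1.0  # probability the call is legitimate so far

    def observe(self, characteristic_fpf):
        # Each observed characteristic independently reduces p_legit
        self.p_legit *= (1.0 - characteristic_fpf)
        return self.value

    @property
    def value(self):
        return 1.0 - self.p_legit
```

With this rule, two moderate cues (FPF 0.5 each) push the running value to 0.75, matching the intuition that accumulating weak evidence should raise, but never exceed, certainty.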
The data store 140 is shown as part of the call center server 110, and in some embodiments it is used to store call audio data. Data store 140 may include one or more memory devices (volatile and non-volatile) configured to store instructions used by one or more processors 113 to perform certain operations. For example, data store 140 can be configured to contain one or more software instructions, such as programs, that perform operations when executed by one or more processors 113.
FIG. 2 shows an example fraud training table entry that includes a characteristic type, characteristic data, and a fraud probability factor. The characteristic type identifies a category of fraudulent call characteristic; for example, it could be an audible characteristic, an origin characteristic, or another call attribute. Audible characteristic types include, but are not limited to, background noises, speech patterns, accents, and voices. Background noises associated with fraud include sirens, screaming children, loud noises, and other noises meant to stress the call handler. Origin characteristics can include area codes, country codes, phone numbers, and IP addresses.
The characteristic data field 204 contains the data associated with the characteristic type. Audible characteristic data, for example, may include an audio clip. Origin characteristic data can include the area code, country code, or other data indicating the origin of the call.
The fraud probability factor field 206 stores the FPF attributed to the characteristic of the table entry. In one embodiment, the FPF can range from 0 to 1, where 0 indicates a very low likelihood that the caller is fraudulent and 1 indicates a certainty. The invention is not limited to the range 0 to 1; other ranges of values, or other methods of quantifying risk, can be substituted.
In one embodiment, the FPF for each fraud entry is calculated when the machine learning model 160 first identifies the potentially fraudulent characteristic, and is then dynamically updated in real time to reflect the correlation between that characteristic and fraudulent calls. For example, an incoming call with a loud background sound, such as a siren, can be parsed to extract the noise characteristic. The noise is then assigned an initial FPF reflecting the uncertainty over whether the sound is a reliable indicator that the call comes from a fraudulent source. The FPF for the characteristic increases as more fraudulent calls that include the siren are detected.
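One simple way to realize this behavior (an uncertain initial value that rises as the characteristic keeps appearing in confirmed fraud calls) is a smoothed frequency estimate. The formula below is an assumption for illustration; the patent gives no update rule.

```python
def update_fpf(fraud_hits, total_calls, prior_fraud=1, prior_total=2):
    """Sketch of the dynamic per-characteristic FPF stored in field 206.
    A brand-new characteristic starts near 0.5 (uncertain) and moves
    toward 1.0 as more calls containing it turn out to be fraudulent.
    The smoothed-frequency (Beta posterior mean) rule is an assumption."""
    return (fraud_hits + prior_fraud) / (total_calls + prior_total)
```

For the siren example: with no history the FPF is 0.5; after the siren is heard on 10 calls of which 9 were confirmed fraudulent, the FPF rises to about 0.83, so the characteristic becomes a stronger signal over time.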