Machine learning 'RegTech' being driven by post-Paris attacks AML rules
Sarah Clark, general manager, Identity at Mitek Systems, discussed the state of the art at Finovate 2017 in London.
Regulatory reforms following the 2015 terrorist attacks in Paris are just one example of the tailwinds driving smarter anti-money laundering (AML) and know your customer (KYC) technologies.
The European Commission recommended early transposition of the latest iteration of the Anti-Money Laundering Directive and is pressing ahead with certain amendments. Under the amendments, dubbed AMLD4.1, the monetary threshold triggering identity verification and KYC checks for prepaid cards was cut from the previous annual €2,500 limit to a monthly €150 limit.
The transformation from in-person verification to digital is being paired with more stringent AML regulations, says Sarah Clark, general manager, Identity at Mitek Systems.
"Going forward many more payments are going to require enhanced customer due diligence," said Clark. "For PayPal and other similar platforms, those are mobile payments or digital payments, so that's a pain point we are very active in.
"How do you meet that regulation, but not frustrate the end user? Say it's someone just trying to send money to their son and they want to do it now. Okay, well thanks to our technology you don't need to do anything too cumbersome – just scan your ID and continue through to complete your transaction."
Clark also pointed to specific products that are under intense scrutiny; the most obvious example being prepaid cards, which were used in preparation for some of the Paris attacks.
"As result you cannot get anonymous prepaid cards since the enactment of the new AML laws. So those providers need effective ways to let people sign up for prepaid cards through digital channels," she said.
Mitek is a leader in the exploding field of biometric and machine learning-enhanced digital KYC on-boarding. The company works with numerous banks and processes over 100,000 IDs each week, replicating in a secure and compliant manner what used to be an in-person interaction in a bank branch.
A common way to ensure the "liveness" of a selfie capture is to ask the user to blink. This is easy and keeps the user in control, while machine learning algorithms run behind the scenes to confirm the user is a real person and capture the image.
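Mitek has not published how its blink check works, but a widely used open-source approach tracks the "eye aspect ratio" (EAR) computed from facial landmarks and counts a blink when the eyes briefly close. The sketch below illustrates that generic idea only; the landmark ordering, thresholds and frame counts are assumptions, not the company's implementation.

```python
import numpy as np

# Generic eye-aspect-ratio (EAR) blink detector, illustrative only.
# `eye` is a (6, 2) array of landmark coordinates around one eye,
# ordered as in the common 68-point facial-landmark model.

def eye_aspect_ratio(eye: np.ndarray) -> float:
    # Vertical distances between upper and lower eyelid landmarks.
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

class BlinkDetector:
    """Counts blinks from per-frame eye landmarks (assumed thresholds)."""

    def __init__(self, ear_threshold: float = 0.21, min_frames: int = 2):
        self.ear_threshold = ear_threshold  # below this, the eye is "closed"
        self.min_frames = min_frames        # consecutive closed frames per blink
        self._closed_frames = 0
        self.blinks = 0

    def update(self, left_eye: np.ndarray, right_eye: np.ndarray) -> int:
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        if ear < self.ear_threshold:
            self._closed_frames += 1
        else:
            if self._closed_frames >= self.min_frames:
                self.blinks += 1  # eye re-opened after being closed: one blink
            self._closed_frames = 0
        return self.blinks
```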
Clark explained that while the platform is essentially software that returns automated results, there is also a review process carried out by experts, some of them trained through Interpol. These reviewers look for forgery techniques, which are then used to train the platform to detect them in future.
"We observe people trying to get away with stuff: photo manipulation, changing some of the fields to create a synthetic type of ID, starting with something that may have been real initially.
Clark said significant advances have been made in computer vision, and various forgery techniques are now fed into deep learning models.
"We feed these tagged samples of real IDs and forged IDs through our deep learning platform and it can really pick up like subtle differences and then it can classify something that appears to have been tampered with," she said.
"Liveness detection is key. One way to try and game the system is to use an ID and put a photo in front of the camera. That won't work; it's going to know that it's a photo. While you are facing the camera, it's actually evaluating different aspects of the light on your face.
"A more sophisticated attack might to use a video of you. It can tell the difference. It's not a dumb camera. There are a lot of computer vision algorithms running continuously on each and every frame that it's looking at to evaluate, is that a live human, or is it a spoofing attempt."