The balancing act between preventing fraud and reducing friction comes to a head in the tools we use to manage and mitigate fraud. These are the very elements that risk introducing friction in the name of preventing fraud. And while every tool used in this context is certainly well-intended, it remains an open question whether each is effective at the job it's intended for. Namely, if these tools are, in fact, slowing down users, are they at least bringing fraud down to a negligible level?
TL;DR: not really. The bots that plague the digital experience today are sophisticated enough to evade the tools and features deployed to stop them. That, in turn, suggests that the only thing those tools and features accomplish is increasing user friction.
Below, we look at one of the most common tools used for fraud prevention: CAPTCHA. We explore new HUMAN research into just how much friction users experience when trying to complete CAPTCHA challenges, and we review the business impact of relying on a defeatable technology to prevent fraud.
CAPTCHA fields and tools, also occasionally referred to as cognitive challenges, became one of the front-line weapons in the effort to prevent automated attacks from reaching the sensitive information at the heart of the tech stack. But how do the humans who spend time solving these challenges feel about that time? How frustrated are they with the process?
HUMAN recently completed a research study asking 1,000 consumers about their impressions of and frustrations with various styles of CAPTCHA. The results suggested that while the tools aren't often a showstopper for a human trying to do something online, they can introduce a level of annoyance that may push a consumer to look elsewhere:
These statistics may not be especially surprising, but they underline one major takeaway: CAPTCHA and cognitive challenges introduce friction into the online user experience. What's more, they don't do enough to stop automation by bots. Search for any combination of "captcha" and "solver" and you'll find numerous services claiming to solve thousands of CAPTCHA challenges for only pennies. Some of these even have human solvers in the background, doing the work a cybercriminal needs done to get on with their attacks.
If a tool doesn’t effectively protect against the mechanism of attack and frustrates the real users attempting to use the site, that’s a failed tool.
One major realization I've had in the past few years is that too many organizations perceive bot mitigation and management as a checkbox item in their cybersecurity planning. A little bit of user frustration is treated as the price of protection, as if friction were somehow a sign of security. But when cognitive challenges are easily dispensed with (both by bots and for the benefit of bots), that frustration can become the source of significant pain for the business.
ESG's recent research into bot management trends asked cybersecurity decision-makers how long they believed it would take to recover market share and customer trust following a bot attack:

It's 2022. User friction as a substitute for security has long since played out, and there are ways to mitigate threats without slowing down users. BotGuard for Applications doesn't ask every session and every login attempt to demonstrate humanity before continuing; it scans for signs of automation and only then takes action, up to and including preventing the request from proceeding at all. And BotGuard completes that scan and decision in less time than it takes to blink.
Test-drive BotGuard for Applications for 30 days with a single line of code and see for yourself just how seamless effective cybersecurity can be.