How AI Could Enable Financial Crimes and Result in More Chargebacks and Other Issues

Artificial intelligence has the potential to be the next revolutionary technology that reshapes human civilization. Yet while artificial intelligence may deliver many benefits, some unscrupulous parties may misuse it to perpetrate crimes and engage in other unethical activities. AI may prove to be a useful tool for cybercriminals, for example, and could lead to more data breaches, chargebacks, and other issues.
How might artificial intelligence enable cybercriminals? Ultimately, AI is a tool, and just as it may deliver large productivity boosts and insightful data analysis to ethical parties, it can provide the same for hackers and other criminals. Let’s look at some specific ways AI could fuel cybercrime.
Zero-Day Exploits Could Lead to More Data Breaches  
Data breaches are one of the most serious and persistent cybersecurity threats. Not only might data breaches damage a company's reputation and lead to fines from government authorities, but they can also spur identity theft and account takeovers. In turn, these activities could lead to increased chargebacks. Unfortunately, AI could enable data breaches by making zero-day exploits easier to discover.
Zero-day exploits rank among the most dangerous cybersecurity threats. Essentially, a zero-day exploit takes advantage of a previously unknown weakness or gap in software's code, allowing hackers to attack the software itself, such as by breaking into databases to steal information.
In the past, finding zero-day exploits often meant combing through lines of code manually in search of weaknesses, a process that could take a long time. Crucially, AI tools can work much more quickly than humans. Just as artificial intelligence is already being trained to write code, it is also being trained to analyze code and find weaknesses.
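To make "weaknesses in code" concrete, here is a minimal, hypothetical Python sketch of the kind of flaw that automated analysis, AI-assisted or otherwise, is trained to flag: a database query built from raw user input, which opens the door to SQL injection. The function names and table schema are invented purely for illustration.

```python
# Hypothetical example: the kind of flaw automated code analysis hunts for.
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # FLAW: user input is spliced directly into the SQL statement.
    # Input such as "x' OR '1'='1" would return every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # FIX: a parameterized query keeps user input out of the SQL itself.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A scanner that spots the first pattern and suggests the second is useful to defenders and attackers alike, which is exactly the race described below.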
Ethical hackers could use the same capability to find such gaps and close them, but they're in a race against cybercriminals who can use AI to identify zero-day exploits and then use them to break into security systems. This could lead to data breaches, and once hackers have the data, they could use stolen credit card numbers and other bits of sensitive information to make unauthorized purchases.
For merchants, unauthorized purchases could lead to customer complaints, lost inventory, and ultimately, chargebacks.
AI Could Supercharge Phishing and Similar Strategies
Many hackers skip code altogether and focus on social engineering to get people to hand over credit card numbers, login credentials, and other bits of sensitive data. Unfortunately, hackers may already be using AI to supercharge social engineering and phishing tactics.
With phishing, a criminal will pretend to be a legitimate authority, such as the tech team or customer service department for a merchant website (e.g. Amazon). The fraudster might claim that an account has been compromised and that a customer needs to hand over their login information so that the “tech team” can get in and secure their account.
Hackers can use AI-enabled chatbots to talk with potential victims via text and other chat programs. A fraudster might set up a spoofed customer service portal, for example, and then direct customers to contact the “customer service” department there to resolve the problem. Yet those customers won’t be talking with customer service reps, but with an AI chat program set up by the fraudsters. ChatGPT and other AI bots can already emulate human speech quite convincingly. Similar AI chatbots may be able to use convincing, human-like speech and programmed social engineering tactics to get people to hand over sensitive information.
Social engineering, especially through chat programs and text messaging, can take quite a bit of time. As with zero-day exploits, AI can save fraudsters time by automating these conversations, ultimately allowing them to target countless people with minimal effort.
Notably, even if it was the customer who handed over their account login information or credit card numbers, it’s often the merchant who foots the bill. Let’s say a fraudster gets a customer to hand over the password to their account with a retail website and then uses that account to make an unauthorized purchase with the customer’s credit card. The merchant will likely end up eating the cost should the customer file a chargeback with their card issuer.
AI Threats Will Evolve at a Rapid Pace
The scenarios above are just a few examples of how AI could be used by fraudsters. There are many other risks, and new AI cybercrime risks will emerge. For example, AI has already been used to spoof voices. You might get a phone call from a loved one who is seemingly in trouble and needs you to send them some cash. Except it might not be your loved one at all, but an AI-spoofed voice.
AI will likely play a role in other common types of fraud, and perhaps more importantly, will likely open the door to new types of crime. Consider that the Internet has provided consumers the world over with tremendous convenience and also created new industries, like online retail. Yet it has also made possible email fraud and online data breaches, among many other types of criminal activity.
Even the most informed and insightful tech gurus are just beginning to understand the potential of artificial intelligence. AI may deliver incredible conveniences and help found entirely new industries. Yet it might also pave the way for various types of financial crimes. This could cost consumers and merchants dearly, and businesses may face rising chargebacks as fraudsters become more productive in their criminal undertakings.
Of course, AI can also be used for good. For example, it can monitor transactions in real time and pause suspicious ones before they settle. This could prevent chargebacks and other issues, especially when used in tandem with a chargeback management platform.
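As a rough illustration, here is a minimal, hypothetical Python sketch of rule-based transaction screening. Production fraud-monitoring systems rely on trained machine-learning models and far richer signals; every field name and threshold below is an assumption made for the example.

```python
# Hypothetical sketch of rule-based transaction screening.
# Real systems use trained models and far richer signals;
# the fields and thresholds below are illustrative only.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float        # purchase amount in dollars
    country: str         # country the order was placed from
    avg_amount: float    # customer's historical average purchase
    new_device: bool     # first time this device has been seen

def risk_score(tx: Transaction) -> float:
    """Sum simple risk signals into a score between 0 and 1."""
    score = 0.0
    if tx.amount > 5 * tx.avg_amount:  # unusually large purchase
        score += 0.4
    if tx.new_device:                  # unrecognized device
        score += 0.3
    if tx.country != "US":             # assumption: US-based customer base
        score += 0.2
    return min(score, 1.0)

def should_pause(tx: Transaction, threshold: float = 0.6) -> bool:
    """Hold the transaction for review when the score crosses the threshold."""
    return risk_score(tx) >= threshold

if __name__ == "__main__":
    tx = Transaction(amount=950.0, country="RO", avg_amount=40.0, new_device=True)
    print(should_pause(tx))  # True: large amount, new device, unusual country
```

In practice, a flagged transaction would be routed to manual review or step-up verification rather than simply printed, and the scoring would come from a model trained on historical fraud data.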