2024: These Five Business Fraud Threats Are Set to Explode

Global fraud losses are on a staggering growth trajectory, causing significant disruption to businesses and consumers. Losses worldwide are estimated at $5.4 trillion, with the UK accounting for roughly $185 billion. In the United States, the cost of fraud for financial services companies has risen by 9.9%, underscoring the severity of the issue.

The main drivers behind this growth are technological advancement and the continued evolution of social engineering. As people increasingly shop through online and mobile channels, fraudsters are following suit, using advanced technical means to carry out fraud. Social engineering, meanwhile, exploits the human element, the most complex and persistent security vulnerability, making users who lack security awareness easy targets for fraudsters.

Social media platforms have become an important tool for fraudsters. The world's 4.8 billion social media users provide a vast pool of potential targets, yet most lack the security awareness and training needed to identify and avoid fraud. Phishing campaigns continue unabated, and as AI tools mature and spread, phishing lures are becoming increasingly convincing. These tools can generate realistic text and content, making people far more susceptible to being duped.

Dingxiang Defense Cloud Business Security Intelligence Center predicts that the top five business fraud risks in 2024 will be as follows:

AI: New Threats from AI Grow Exponentially

As artificial intelligence (AI) becomes increasingly pervasive across industries, its security implications are a growing concern. AI is becoming a new weapon for attackers, who are using it to pose unprecedented risks to businesses and individual users. In 2024, every industry will face a surge in cyberattacks that use machine learning tools.

Spreading misinformation. Generative AI can be used to spread misinformation or create realistic phishing emails. Some criminals have reportedly begun using generative AI tools like ChatGPT to write phishing emails that sound as professional as those from legitimate businesses. These emails often pose as messages from banks or other institutions, asking victims to provide personal information or funds; complying can lead to financial loss or identity theft.

Disrupting cybersecurity. Malicious or faulty code generated by AI could have a devastating impact on cybersecurity. As more businesses use AI for applications such as data analytics, healthcare, and user interface customization, hackers could exploit vulnerabilities in these applications to launch attacks. According to reports, the number of AI security research papers has exploded over the past two years, and the 60 most commonly used machine learning (ML) models each contain, on average, at least one security vulnerability. Hackers could exploit these vulnerabilities to control or destroy the devices and systems that rely on these models.

Increasing fraud risk. Fraudsters can also use AI technologies to mimic legitimate ads, emails, and other forms of communication, increasing the risk of fraud. This AI-driven approach will lead to an increase in low-quality activity, as the entry barrier for cybercriminals will be lowered and the likelihood of deception will increase.

Manipulating data and decisions. Beyond cybersecurity threats, generative AI could also be used to manipulate data and decisions. Attackers could attempt to poison the data that AI systems train on or consume, so that organizations relying heavily on AI are systematically misled in their decision-making. This could involve deleting key information from the data or injecting false information, causing AI systems to produce inaccurate results.
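To make the idea concrete, here is a minimal, self-contained sketch (not from the original article) showing how label-flipping poisoning of training data degrades a simple classifier; the dataset and model are synthetic stand-ins for a real fraud model.

```python
# Illustrative sketch: label-flipping data poisoning on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(flip_fraction: float) -> float:
    """Flip a fraction of training labels (the 'poison') and report test accuracy."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = np.random.RandomState(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # corrupt the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned fraction={frac:.0%}  test accuracy={train_and_score(frac):.3f}")
```

Even this crude attack visibly erodes accuracy as the poisoned fraction grows, which is why data provenance and validation matter for AI-dependent decisions.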

Fake Identity: Increasingly Difficult to Identify in 2024

Impersonation fraud has long been a common scam tactic, but with the development of generative AI (GenAI), synthetic identity theft and fraud are now easier than ever. The technology lets fraudsters create identities at scale, making it simple to generate believable synthetic IDs.

Built on the fusion of “deep learning” and “fakery,” deepfake technology can create convincing fake audio, video, or images, allowing fraudsters to quickly assemble identities that look far more believable. By piecing together elements of real personal information and combining them with fabricated identifiers, fraudsters can create entirely new identities and use them for a range of fraudulent activities, such as credit card fraud and online scams.


According to McKinsey Institute data, synthetic identity fraud has become the fastest-growing type of financial crime in the United States and is on the rise globally. In fact, synthetic identity fraud accounts for 85% of all current fraud activity. Additionally, GDG research shows that over 8.6 million people in the UK use a false or someone else’s identity to obtain goods, services, or credit.

Detecting synthetic identity theft is challenging because these identities typically combine real elements (such as a genuine address) with fabricated information. The mix of legitimate components and false details makes detection and prevention extremely difficult. And because these fraudulent identities have no prior credit history or related suspicious activity, traditional fraud detection systems struggle to flag them.
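One commonly cited defensive signal is an identity element, such as a national ID number, that shows up across applications with conflicting names or birth dates. The sketch below uses hypothetical field names and a toy schema (not from the article) purely to illustrate that check:

```python
# Hypothetical sketch: flag applications whose national ID already appears
# with a different name or date of birth, a common synthetic-identity signal.
from collections import defaultdict

def find_reused_ids(applications):
    """applications: iterable of dicts with 'national_id', 'name', 'dob' (assumed schema)."""
    seen = defaultdict(set)
    flagged = []
    for app in applications:
        identity = (app["name"].strip().lower(), app["dob"])
        if seen[app["national_id"]] and identity not in seen[app["national_id"]]:
            flagged.append(app)          # same ID, conflicting personal details
        seen[app["national_id"]].add(identity)
    return flagged

apps = [
    {"national_id": "123-45-6789", "name": "Alice Smith", "dob": "1990-01-01"},
    {"national_id": "123-45-6789", "name": "Bob Jones",  "dob": "1985-06-15"},  # flagged
]
print(find_reused_ids(apps))
```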

Social media platforms have become a major channel for exploiting synthetic identities. Using AI, fraudsters can create and distribute highly customized, convincing content targeted to individuals based on their online behavior, preferences, and social networks. Because this content blends seamlessly into users’ feeds, it spreads rapidly and widely, making this form of cybercrime more efficient for criminals and harder for users and platforms to counter.

For financial institutions, this is even more concerning. Fraudsters can use AI to learn the business processes of individual financial institutions; armed with an understanding of how each organization operates, they can write scripts that quickly fill out forms and assemble what appears to be a credible identity for credit fraud.

This is especially concerning for new account fraud and application fraud. Every bank has its own account opening workflow, its own technology, and its own language at onboarding, and applicants must appear credible in order to open an account. Fraudsters can use GenAI tools to learn each bank’s screen layouts and onboarding stages, then script the process to fill out forms quickly and present what looks like a credible identity to carry out new account fraud. Banks will not only need to answer the question “Is this a good fit?” but also “Is my customer human or AI?”
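On the defensive side, even simple server-side signals can help separate scripted submissions from human ones. The sketch below is an illustrative example only, not a description of any bank's controls; the field name and timing threshold are assumptions:

```python
# Hypothetical sketch: two cheap server-side signals that an account-opening
# form was filled by a script rather than a person.
import time

MIN_PLAUSIBLE_FILL_SECONDS = 5  # assumed threshold; real systems tune this per form

def looks_automated(form_data: dict, form_rendered_at: float) -> bool:
    # Signal 1: a hidden "honeypot" field that human users never see or fill.
    if form_data.get("website_hp"):
        return True
    # Signal 2: the form came back faster than a person could plausibly type it.
    elapsed = time.time() - form_rendered_at
    return elapsed < MIN_PLAUSIBLE_FILL_SECONDS
```

Production systems layer many more signals on top (device fingerprinting, behavioral biometrics, document verification), but the question they answer is the same one posed above: human or script?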

As AI technology continues to develop and spread, more and more impersonation fraud is expected to emerge, and fake identities, fake accounts, and the like will become increasingly common. Individual users need to stay vigilant and raise their security awareness, while businesses and organizations need to keep strengthening their AI security measures and internal training to mitigate the risk.

Malicious Crawlers: Still Rampant in 2024

As artificial intelligence (AI) technology continues to develop, its demand for data keeps growing. The emergence of generative AI and large models has placed unprecedented demands on data: the pre-training corpus of OpenAI's GPT models grew from roughly 5GB for GPT-1 to 40GB for GPT-2 and about 45TB for GPT-3. The market has gradually reached a consensus that whoever holds the data holds the advantage, and data is the key battleground in the competition among large models.


Currently, AI training data comes mainly from two sources: self-collection and crawling. Self-collected data requires substantial manpower, resources, and time, so its cost is high. Crawled data is far easier to obtain, which poses a huge challenge to data security, and data leaks and privacy violations are becoming increasingly common. In particular, with the rise of cybercrime-as-a-service, purchasing malicious crawler services and technologies is becoming easier. The threat of malicious crawlers stealing data is expected to keep growing in 2024.

Malicious crawlers are automated programs that simulate user behavior to access and scrape website data. They are often used to illegally obtain personal information, trade secrets, and other data. What they steal includes not only publicly available data on the Internet, such as user data on social media, but also unauthorized data such as internal corporate data, personal privacy data, and sensitive data including financial and medical records. In 2022, the National Security Agency (NSA) released a report stating that data stolen by malicious crawlers has become an important source of cyberattacks.
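A basic anti-scraping control is per-client rate limiting; real bot management layers many more signals (fingerprinting, behavioral analysis, challenge pages) on top. The sketch below is a minimal illustration with assumed thresholds, not a production design:

```python
# Hypothetical sketch: sliding-window request counter per client IP,
# one of the simplest controls against high-volume scraping bots.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120  # assumed limit; real limits depend on the endpoint

_requests = defaultdict(deque)

def is_rate_limited(client_ip: str) -> bool:
    now = time.time()
    window = _requests[client_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                 # drop requests outside the window
    return len(window) > MAX_REQUESTS_PER_WINDOW
```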

Account Takeover: ATO Fraud Will Increase Significantly in 2024

Account takeover (ATO) fraud is a form of identity theft in which fraudsters obtain legitimate user credentials through phishing and malware, or buy leaked personal data on the dark web, and then use technical means to take over and operate the stolen account.

ATO fraud has been on the rise for years. In 2022, ATO fraud accounted for more than one-third of all fraud activity reported to the FTC. In 2020, ATO fraud grew by a staggering 350% year-over-year, with 72% of financial services companies experiencing such attacks. In 2021, account takeovers led to 20% of data breaches, resulting in losses of over $51 billion for consumers and businesses.

The rise in account takeover fraud is due in part to the growth of phishing attacks and the increasing frequency of data breaches, which make it easier for fraudsters to obtain users' personal information and passwords. At the same time, AI technology helps fraudsters quickly identify exploitable accounts and automatically generate effective attack tools.
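Stolen credentials typically become takeovers through credential stuffing, i.e. replaying leaked username/password pairs at scale. One classic footprint is a single source IP failing logins against many distinct usernames within a short window; the sketch below (hypothetical threshold, not from the article) flags that pattern:

```python
# Hypothetical sketch: flag source IPs that fail logins against many distinct
# usernames in one window, the classic credential-stuffing footprint behind ATO.
from collections import defaultdict

DISTINCT_USER_THRESHOLD = 20  # assumed; tuned to the site's traffic profile in practice

def suspicious_ips(failed_logins):
    """failed_logins: iterable of (source_ip, username) pairs for failed attempts in one window."""
    users_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        users_per_ip[ip].add(user)
    return {ip for ip, users in users_per_ip.items() if len(users) >= DISTINCT_USER_THRESHOLD}
```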

Fraudsters transfer balances, points, and vouchers out of the account, and they also use stolen accounts to send phishing emails and post fake content, generating fake orders, fake followers, and fake reviews to make scams look more authentic. They may also use stolen accounts to set up fake product or service sales websites, post threatening or harassing content, or spread hate speech. In short, account theft causes serious losses: individuals may face financial loss and identity theft, while businesses may suffer financial and reputational damage.

As AI technology continues to develop and become more widespread, it is expected that account takeover fraud will become more common and difficult to prevent in 2024.

Internal Leaks: Increased Risk of Enterprise Internal Breaches in 2024

Internal threats have become a major challenge for businesses. Data shows that internal threats have surged by 44% in recent years. These threats can come from the actions of employees, customers, or suppliers, whether malicious or negligent. In particular, employees with privileged access are the biggest source of fraud risk.

Early research found that language models can leak private information, and as generative AI is used more and more widely in business, it brings new risks and challenges. Internal data leaks are expected to become one of the most significant business risks in 2024.

The BYOAI (bring your own AI) phenomenon is becoming increasingly common, with employees using personal AI tools in the workplace. This creates a risk of leaking sensitive corporate secrets, even if unintentionally. In addition, fraudsters are using AI to carry out sophisticated attacks such as deepfakes, making them harder for victims to recognize and defend against. For example, fraudsters can create fake emails or documents to deceive employees or bypass security systems.
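One mitigation some organizations adopt against BYOAI leakage is scanning text before it leaves for an external AI tool and blocking obvious secrets. The sketch below is a rough illustration only; the patterns and policy are assumptions, not the article's recommendation:

```python
# Hypothetical sketch: block prompts containing obvious secrets before they are
# sent to an external AI tool -- a very rough stand-in for real DLP controls.
import re

SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # private key material
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                        # AWS access key ID format
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                       # US SSN-like pattern
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any known secret pattern."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)

print(is_safe_to_send("Summarize our Q3 roadmap"))                    # True
print(is_safe_to_send("Debug this: AKIAABCDEFGHIJKLMNOP leaked?"))    # False
```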

2024-01-04