Executive summary
Convier is pleased to present the Nordic Fraud Outlook 2024, offering a deep dive into the prevailing fraud landscape within the Nordic financial market. Through extensive discussions with subject matter experts across major financial institutions and consulting firms, this report provides comprehensive insights into current fraud trends, technological impacts and recommendations for how institutions can overcome operational hurdles to combatting fraud.
The current trends are concerning. Fraud is evolving at an unprecedented pace, with each passing year revealing new complexities and challenges that demand our attention. We have seen actors engaged in criminal activities become more professional, sophisticated and ruthless, and the volume of fraudulent attempts keeps increasing.
Although recent technological advancements may help institutions become more effective in detecting fraudulent schemes, they also enhance the capabilities of fraudsters. Digital interactions are increasingly being infiltrated by fraudsters using voice replicators and deep fakes, threatening the fabric of trust in financial services. Financial institutions have a major challenge in ensuring that customer interaction and dialogue can be trusted.
The adoption of AI is currently a race between actors engaged in criminal activities and the ones designated to protect the economy from being misused. However, financial institutions should not be alone in the fight against crime. In most cases, they represent the last line of defense against the criminals, and it is vital that other parties join in protecting individuals from being defrauded.
We would like to thank the participants in the report for sharing their knowledge and insight.
Introduction
For this report, we have invited subject matter experts from large Nordic financial institutions and consulting firms to discuss the current and emerging fraud trends and what the future of financial crime prevention in the finance sector will look like. We wanted to find out what challenges financial institutions face in applying effective fraud prevention programs and what they can do to prepare for the emerging threats.
The interviews all touched on three main topics:
- Major trends affecting the fraud landscape,
- How technology is being used for good and for bad,
- Recommendations for what institutions can do to keep pace with evolving fraud trends.
This report presents a summary of the insights gained from these interviews. Each main topic is outlined in a separate section, but a common theme is how new technology will shape the development of both fraud and fraud prevention. Among the topics discussed are whether technology is changing the strategies of fraudsters, who will win the AI arms race and why regulations are not the hurdles hindering adoption of new technology. The final section also mentions concrete suggestions for how financial institutions can become more effective in preventing financial crime.
We would like to thank the participants from DNB, Danske Bank, SEB, Handelsbanken, KPMG, KLP, Deloitte and a Nordic Insurance Company for taking the time to participate in the survey and share their thoughts and insight.
For any questions on the contents of this report, please reach out to andreas.engstrand@convier.com.
Current fraud trends
2023 was a remarkable year for AI and the possibilities of technology. At the same time, it was an uncertain year marked by significant geopolitical instability, rising interest rates and high inflation. Every such downturn brings a corresponding wave of fraud, and this time was no different. According to the interview subjects, the Nordic financial market experienced a notable increase in fraud cases in 2023.
The emergence of a groundbreaking technology paired with a globally unstable socioeconomic situation is bound to affect the landscape of financial crime. The following section sheds light on what the experts are noticing to be the current trends in fraud today.
Social engineering is the most effective strategy
Despite all the advancements in technology, the core strategy of fraudulent attacks largely remains the same. Individuals are still the primary target for fraudsters in the banking sector, and social engineering continues to be the most effective tool. The reason is simple: if the fraudster can gain the victim’s trust, they can get the victim to make the payments themselves instead of having to steal their online banking credentials.
These types of payments are called “Authorized Push Payments (APP)”. An individual is manipulated into making transactions to a fraudster through social engineering or impersonation. Perpetrators leverage scenarios that create a false sense of legitimacy, such as posing as recruiters, suppliers, or familiar individuals, and tailor attacks to each victim. Love scams and investment scams also fall into this category.
One of the interviewees described an example of a “safe account fraud” recently observed in Sweden:
- The scammer sends an SMS informing the recipient that the iPhone they ordered has been shipped.
- The message includes a comment at the bottom saying “should you have any questions, please reach out to our customer center”.
- Since the recipient has not ordered an iPhone, the scammer expects the recipient to call the phone number in the message.
- Once the individual calls, the scammer acts as a customer support agent. The agent gives them the impression that they may have been scammed and promises to help solve the case.
- They let the individual believe that the call is being transferred to the police, where the individual can report the case to a fictitious “police officer”, in reality another scammer.
- Once “reported”, the individual is put back in dialogue with the “customer support agent”, who will help refund the money. To do so, the individual must transfer funds to a “secure account” provided by the agent before the transaction can be refunded.
These Authorized Push Payments are significantly harder to detect because the victims themselves make the transactions through their own phone or computer.
Attacks are becoming more sophisticated
Actors engaged in criminal activities are getting better. With sophisticated translation software, language is no longer a barrier. Phishing emails can be realistically generated in seconds by an LLM, and phishing websites are now exact copies of original websites.
These schemes were easier to recognize as fraudulent just a few years back, when fraudsters spoke poor English and knew little about the social dynamics of their targets. But they have learned. Now, the fraudster is fluent in the same language as the victim and may even come from the same area. Even highly alert, tech-savvy people get roped into phishing-like attacks today, simply because they look and feel completely legitimate.
Criminals are getting organised
In the last 12 months, media and financial institutions have reported multiple examples of how criminal networks exploit the financial system to generate revenue and transfer illicit funds:
Fake Spotify streams
In September 2023, an investigation report from Svenska Dagbladet revealed that criminal networks in Sweden are reportedly using fake Spotify streams to launder money. The illicit funds, stemming from drug deals, robberies, fraud, and contract killings, are converted to Bitcoin and utilized to pay individuals for generating false streams of songs associated with artists linked to the criminal gangs. An English version of the article can be read in The Guardian.
Organised criminals are increasingly infiltrating legitimate businesses
In an article in Dagens Næringsliv, DNB reports a trend where organized criminals increasingly infiltrate legitimate Norwegian businesses through corruption and tax evasion. Similar trends are reported in Sweden: according to an article in the Financial Times, Swedish gangs now make more money from fraud and from delivering welfare services than from drug trafficking. The criminals run schools and care homes, benefitting from the Swedish welfare system.
Several interviewees confirmed that they have observed this alarming trend towards more professionalized and organized criminals.
Economic downturn gives rise to more fraud by customers
With increased inflation and interest rates, individuals have less money to spend. The difficult economic landscape seen after the pandemic has given rise to a new trend: insider threats.
Insider threats
The most notable case in 2023 involved a temporary Manpower contractor who defrauded a bank of NOK 74 million (USD 7.4 million). The case represents the largest bank robbery ever recorded in Norway.
The former employee was on a temporary contract and worked in the customer support division at SpareBank 1 SMN. The method he used was simple:
- He transferred funds from the bank’s cash register to his personal accounts in the bank. In the transaction description field, he wrote “Test”.
- In court, he explained that he did this to test whether the payments were processed in the system.
- He continued to transfer funds to his personal accounts, from which the money was subsequently transferred to accounts in other Norwegian banks, and then to a fund manager in Germany.
Although the payments were flagged in the transaction monitoring system a few days later, the investigators initially believed the employee’s explanation that this was only a test of the system. The case went to trial at the end of the year, and the former employee was sentenced to six years in prison.
Loan Fraud
There are two notable trends related to loan fraud: Fictitious documentation and help from insiders.
Heightened interest rates mean that individuals can secure less financing than before. In response, some have resorted to falsifying documentation to obtain greater financing.
In 2022, the police arrested five employees at DNB and Nordea accused of corruption in relation to approving loan applications. Nordea suspected that a total of NOK 150 million (USD 15 million) had been disbursed based on fabricated documentation. The full article is available in Aftenposten.
The emergence of potential internal collaboration poses new challenges for fraud detection. Not only are institutions required to monitor customer activity, but also employee activity, requiring a multifaceted approach.
Insurance Fraud – Fictitious claims
Insurance claims fraud refers to the deliberate attempt by an individual or entity to deceive an insurance company to obtain illegitimate benefits or payments. This can involve exaggerating damages or losses, providing false information, staging accidents, or submitting fake documentation.
The financial institutions surveyed for this report noted that the number of insurance claims fraud cases tends to increase as inflation rises, which is what they observed in 2023.
They also observed customers starting to use ChatGPT to help them write claims. Incomplete or vaguely described claims have traditionally been a red flag, but with ChatGPT the descriptions were much more detailed and almost looked too good to be true.
Who will win the AI race?
2023 was a remarkable year for AI. With OpenAI democratizing access to AI technology through ChatGPT, anyone can generate, consume and process information online at a much faster pace than before.
As illustrated in the previous section, this technology can serve as a highly efficient tool for creating sophisticated scams. AI can help scammers attack at a faster rate and in higher volumes, while also making each fraud attempt more effective and believable.
For financial institutions, AI could become an important tool for rapid, real-time detection of anomalies and identification of patterns invisible to the human eye. However, adopting AI in practice can prove challenging, as most institutions struggle with legacy technology infrastructures and slow bureaucratic processes for making operational changes.
The adoption of AI is currently a race between fraudsters and the ones designated to protect the economy from being misused. On the topic of how this race will play out, the following insights emerged:
Using AI for bad – How AI can make fraudsters more effective
Scams are easier to conduct
Interviewees expect fraud attacks to continue increasing. With new technology making it faster and easier to tailor phishing emails and websites, the volume will go up. One interviewee expected attacks to become close to fully automated, with multiple companies attacked simultaneously. They also envisioned a strategy where the attackers create customer relationships at multiple financial institutions in parallel using the same stolen identity.
Initiating these types of fraud attacks will become easier with tools such as FraudGPT.
Example
FraudGPT, “Your Cyber Criminal Co-pilot”, was first advertised in July 2023. The tool has a subscription-based pricing model of $200 per month or $1,700 per year, and can help criminals generate phishing websites and emails and develop malware with a few prompts.
Personalized attacks
A concern that was raised is how AI can enable more targeted attacks by harvesting open data. Two main examples were discussed in the interviews: Profiling of victims and identity theft.
Example 1 – Profiling of victims
Technology can be used to harvest open data about individuals, helping fraudsters tailor more sophisticated and targeted attacks. Imagine an algorithm that collects information from a person’s social media profiles. Based on the person’s activities, who they follow, what they like and what they comment on, the fraudster can generate a profile that helps them tailor an attack. With each attack fully personalized for the individual victim, it becomes significantly more challenging for targets to differentiate between a fraudulent attack and a real-life situation.
Example 2 – Identity theft using deepfakes
There is a concern that we have only seen a glimpse of what deepfakes can do. Scams are evolving to exploit personal relationships, with criminals pretending to be relatives in need of financial assistance. With data found online about an individual, it is possible to make a high-quality deepfake video that is difficult to distinguish from a real one.
One interviewee expected to see more attempts by criminals using deepfakes in dialogue between the bank’s customer support and customers, but noted that criminals pretending to be relatives may become an even larger issue for their customers unless solutions emerge that can detect such attacks in real time.
Using AI for good – How institutions can apply AI to combat financial crime
During our discussions, it was evident that AI will play a crucial role in preventing and detecting fraud. The participants unanimously recognized the importance of applying new technology to keep abreast of current trends.
However, opinions differed concerning its implementation and how financial institutions can best use AI. AI is no magic wand that will prevent fraud automatically, and there is no easy answer for how best to apply it. Implementing AI solutions requires a strategic and nuanced approach, involving continuous refinement and adaptation to evolving threats.
These are the areas where the interviewees believe AI will make an impact.
Pattern recognition
AI is seen as having a major potential in pattern recognition and handling of large datasets. This can enable detection of more complex activity across a variety of data points that the human eye would never be able to see.
One of the challenges in achieving this potential is feeding AI tools the right data. It is difficult to determine which data is relevant, and even harder to access it in a format and at a quality level that machines can use. Current data stores often hold information as PDF documents or even JPG image files. Data is also usually fragmented across multiple disjointed systems, making it hard to put it into context. Proper data governance and strategy will be key for any institution serious about implementing AI in its operations.
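The fragmentation problem can be illustrated with a minimal sketch. All field names, identifiers and the matching key below are hypothetical, and real-world record matching is far harder than joining on a single shared ID; the point is only that context emerges when fragments from disjoint systems are merged into one profile.

```python
def consolidate(records):
    """Merge fragments about the same customer from disjoint systems
    into one profile, keyed on a shared identifier (a simplifying
    assumption -- real entity matching is much harder)."""
    profiles = {}
    for rec in records:
        profile = profiles.setdefault(rec["national_id"], {})
        for field, value in rec.items():
            # First system wins; resolving conflicts is a governance decision.
            profile.setdefault(field, value)
    return profiles

# Fragments from two hypothetical source systems about one customer.
records = [
    {"national_id": "010170-12345", "name": "Kari Nordmann", "kyc_risk": "low"},
    {"national_id": "010170-12345", "phone": "+47 900 00 000"},
]
profiles = consolidate(records)
```

Even this toy merge assumes the data is already structured; extracting fields from PDFs and images, as the interviewees describe, is a prerequisite step that often dominates the effort.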
Offload overloaded investigators
Financial institutions receive numerous alerts in their fraud detection systems every day. The challenge is generating accurate alerts: avoiding unnecessary alerts on legitimate transactions while never letting suspicious ones slip past. The static rules that trigger alerts today tend to be either too strict (many false alerts) or too lenient (missing suspicious transactions). Interviewees hope AI will be able to evaluate transaction and customer behavior data at a scale that allows for detecting trends and patterns, generating alerts based on what is abnormal for a given customer rather than applying catch-all rules across all customers. This would in turn lead to much more accurate automated transaction monitoring, lowering the volume of alerts that are unnecessarily handled manually.
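The difference between a catch-all rule and a per-customer baseline can be sketched in a few lines. This is an illustrative toy, not a production detector: the function names and the threshold are invented, and real systems use far richer features than transaction amounts. It scores each new transaction against the customer's own history using the median absolute deviation, so a large payment that is routine for one customer does not trigger the same alert it would for another.

```python
from statistics import median

def anomaly_score(history, amount):
    """Score how unusual `amount` is relative to this customer's own
    past transaction amounts, using median absolute deviation (MAD)."""
    med = median(history)
    mad = median(abs(x - med) for x in history)
    if mad == 0:
        mad = 1.0  # avoid division by zero for perfectly flat histories
    return abs(amount - med) / mad

def should_alert(history, amount, threshold=6.0):
    """Alert only when the transaction is abnormal for *this* customer,
    instead of applying one static amount limit to all customers."""
    return anomaly_score(history, amount) >= threshold
```

For a customer who usually transfers around 100, a transfer of 1,000 scores far above the threshold, while 115 does not; a customer who routinely moves 1,000 would see neither flagged.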
A symbiosis of artificial and human intelligence
Although the interviewees believed there are multiple benefits of new technology, they were careful about being over-reliant on what AI can do. A significant amount of investigation time is today spent on data collection, comparison, and reporting. These types of repetitive and time-consuming tasks are ideal for outsourcing to machines, as it may free up time for investigators to focus on the complex tasks requiring reasoning and human judgement. Some interviewees were however skeptical about AI algorithms handling the final decision making in fraud cases given the potential impact it may have on individuals.
Rather than viewing AI as a replacement for human analysts, AI is seen as a way to reduce the level of noise and accelerate repetitive tasks, enhancing the efficiency of investigators.
Overcoming the hurdles
The final topic of discussion centered on operational challenges in thwarting fraud within a financial institution. This section presents key recommendations for effectively preparing for emerging fraud trends.
The role of regulations
The interviewees were asked if there were regulatory challenges with addressing the emerging fraud trends. Specifically, discussions centered around two key application areas:
- Are there current regulations that prevent institutions from working effectively to combat fraud?
- Are there new regulatory changes that may change how institutions will combat fraud in the future?
Interestingly, regulations did not emerge as a significant obstacle when it came to monitoring and investigating customer activity. Some pointed out certain challenges related to monitoring employee activity to detect insider fraud. However, the major impediment highlighted was how regulations were interpreted and implemented within an organization.
The bureaucratic complexities, such as requiring consensus from multiple entities including the technology management board, the risk management committee, the IT/GDPR legal unit, and the AML/fraud team, were seen as substantial hurdles. These internal challenges can impede the testing and adoption of new technologies.
A second element mentioned was a lack of knowledge. Several respondents stressed the importance of training relevant decision makers and gatekeepers in technology to reduce friction when testing new technology that can improve the institution's fraud framework.
Information sharing and collaboration
Information sharing and collaboration have been focal points of industry discussions in recent years. The insights shared by the interviewees shed light on the challenges associated with collaboration.
Collaboration proves to be a formidable task. While some positive initiatives have been launched, their effectiveness remains limited. Successful collaboration necessitates active participation from all stakeholders. One-way communication or limited intelligence sharing hinders its proper functioning. It appears that the industry may still be in its infancy when it comes to sharing intelligence.
One interviewee dissected collaboration into different levels:
- Intelligence: Current collaboration is in a nascent stage, both organizationally and culturally. It heavily relies on individual relationships and informal networks. Success depends on whether a team within one organization has strong relations with another institution.
- Detection: Collaboration in this space is currently non-existent. A pertinent question arises: Who will bear the cost of detection across institutions?
- Reaction: This is the area where collaboration is currently most successful. However, challenges persist. Collaboration within the banking sector and within the insurance sector individually is commendable, but there is potential for even greater collaboration between banks and insurance companies.
Recruit more specialized investigation & intelligence teams
As criminals become increasingly specialized and organized, financial institutions must adapt. This adaptation calls for subject matter experts with a deep understanding of various product types or customer segments to effectively comprehend the strategies employed by criminals.
During interviews, one participant highlighted the disparity between investigators in insurance and banking. In insurance, investigators often conduct extensive and thorough examinations of customers or claims, ensuring the accuracy of factual analyses. This may involve on-site visits, meticulous collection of customer documentation, and analysis of documentation for potential tampering. In contrast, the banking sector tends to face higher pressure on volume, resulting in quicker investigations.
Fraud investigators in banking may find valuable insights by considering the meticulous approach of insurance companies when investigating leads.
Increased responsibility
Financial institutions should anticipate not only a surge in transaction volumes and alerts but also an escalating level of responsibility placed on them. Law enforcement is currently grappling with significant capacity constraints, leading to the dismissal of multiple cases. Since financial institutions possess the necessary data for analyzing and detecting financial crimes, they should brace themselves for an increased role in both detecting and investigating criminal activities.
Detection as a preventive measure
Most monitoring platforms detect fraudulent activity at the moment the customer makes a transaction. By then, it may often be too late. Institutions should assess whether it is possible to stop a transaction earlier in the process, for example while the customer is typing in the amount and account number in the transaction form. If the customer is about to empty their account, they can be prompted before approving the transaction. Asking the customer a few questions to raise awareness of potential fraud might make them realize that they are being tricked.
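An earlier intervention point could look like the sketch below: before the payment form is submitted, the institution checks for warning signs and returns awareness prompts to show the customer. The function name, the 90% "nearly empty" threshold and the prompt wording are all illustrative assumptions, not a real bank's rules.

```python
def pretransfer_warnings(balance, amount, payee_is_new):
    """Return awareness prompts to display *before* the transfer is
    approved. Thresholds and wording are hypothetical examples."""
    warnings = []
    if amount >= 0.9 * balance:
        warnings.append(
            "This transfer would nearly empty your account. Has anyone "
            "instructed you to move your money to a 'safe account'?")
    if payee_is_new:
        warnings.append(
            "You have never paid this account before. Have you verified "
            "the recipient through a channel you trust?")
    return warnings
```

A victim mid-scam who is about to empty their account to an unknown payee would see both prompts, while an everyday payment passes through with no friction.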
Detection at different stages of the value chain
It is imperative to understand the value chain of each fraud vector. After a customer has been defrauded, where is the money wired? Before a customer wires the transaction, what happens? The transaction is the final step in the fraud value chain for that customer, but it may be the first step in a money laundering incident. The whole chain must be mapped to understand where to place relevant controls in order to intercept an attack.
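One way to make such a mapping concrete is to represent each stage of a fraud vector together with a control that could intercept it. The chain below is a hypothetical example for an APP scam like the "safe account" case described earlier; the stage names and controls are illustrative, not a complete taxonomy.

```python
# Hypothetical value chain for an APP scam, mapped stage by stage to a
# control that could intercept the attack at or before that point.
APP_SCAM_CHAIN = [
    ("contact",      "fraudster reaches victim by SMS or phone", "SMS-sender and caller-ID verification"),
    ("manipulation", "victim is coached into paying",            "in-app fraud awareness prompts"),
    ("transaction",  "victim authorizes the push payment",       "real-time transaction monitoring"),
    ("layering",     "funds hop across mule accounts",           "cross-bank intelligence sharing"),
]

def controls_before(stage):
    """List the controls available at or before a given stage,
    i.e. every chance to intercept the attack until that point."""
    controls = []
    for name, _description, control in APP_SCAM_CHAIN:
        controls.append(control)
        if name == stage:
            break
    return controls
```

Viewed this way, the customer's transaction is only the third interception opportunity, and the layering stage that follows it belongs to the money laundering chain of the receiving banks.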
Be realistic and iterative when implementing new technology
There are multiple misconceptions around technology, and AI especially. Some expect AI to be a plug-and-play tool that will handle fraud detection end-to-end. Others fear that AI systems will analyze significantly more than they are tasked with. Before implementing a new AI platform, it is important to be realistic about what task the technology should solve, and to properly assess whether the required data is available to achieve the desired goals.
It is advisable to avoid biting off more than one can chew when starting a technological transformation project. More success can be achieved by taking an iterative approach, starting by automating simple and well-defined tasks. The learnings from these smaller projects become stepping stones toward more advanced technology systems in the long run. Building an internal culture of iteratively adapting and improving processes and systems will yield vastly better results compared to an endless chain of failed mega-projects.
Conclusion
In conclusion, the collective insights provide a comprehensive view of the evolving landscape in the fight against fraud. The effectiveness of social engineering employed by fraudsters is a pressing concern, underscoring the need for enhanced vigilance and adaptive defense strategies. The rising sophistication of attacks and the increasing organization and professionalism among criminals necessitate a proactive and strategic response from financial institutions.
Furthermore, the economic downturn has amplified the challenges, with customers turning to fraudulent activities. The dual potential of AI, both as a tool for good and malicious intent, raises a pivotal question about the ongoing race between criminals and institutions in leveraging this technology.
To stay ahead, banks must prioritize recruiting specialized personnel, acknowledging that detection not only identifies but also prevents potential threats. Comprehensive understanding and mapping of the entire fraud value chain are imperative for developing robust preventive measures. Finally, a realistic approach to implementing new technology is crucial, emphasizing the importance of balancing innovation with practicality to fortify defenses against the dynamic landscape of financial fraud.