Swift, the global financial messaging cooperative, has announced two AI-powered experiments, developed with its member banks, to combat cross-border payment fraud, which could save the industry billions of dollars in fraud-related costs.
The first pilot aims to enhance Swift’s existing Payment Controls service by using an AI model to more accurately identify indicators of potentially fraudulent activity.
This enhancement leverages historical patterns on the Swift network, refined with real data from Payment Controls’ customers.
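As a purely illustrative sketch of the kind of model involved, the snippet below trains a simple anomaly detector on hypothetical historical payment features and flags outliers among new transactions. The features, data, and library choice are assumptions made for the example; they are not details of Swift’s Payment Controls models.

```python
# Illustrative only: fit an IsolationForest on hypothetical historical
# payment features and score new transactions. Feature names, data, and
# thresholds are invented for this example and do not reflect Swift's
# actual models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical historical features: log-scaled amount, hour of day,
# and number of intermediaries in the payment route.
historical = np.column_stack([
    rng.normal(8.0, 1.0, 10_000),   # log(amount)
    rng.integers(0, 24, 10_000),    # hour sent
    rng.integers(1, 4, 10_000),     # intermediary count
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical)

# Score incoming payments: -1 marks a potential anomaly for review.
new_payments = np.array([[8.2, 14, 2], [13.5, 3, 6]])
print(model.predict(new_payments))   # e.g. [ 1 -1 ]
```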
In a separate experiment, Swift has partnered with ten major financial institutions, including BNY Mellon, Deutsche Bank, DNB, HSBC, Intesa Sanpaolo and Standard Bank, to test how AI can analyze anonymously shared data.
The initiative has the potential to transform how sensitive data is shared and to improve fraud detection globally. It could also lead to more widespread use of information sharing in fraud detection, building on its success in assessing cybersecurity threats.
The group will employ secure data collaboration and federated learning techniques to enable financial institutions to exchange relevant information while maintaining strong privacy controls.
Swift’s AI anomaly detection models will analyze this enriched dataset to identify potentially fraudulent patterns.
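For readers unfamiliar with federated learning, the sketch below shows the basic idea under simplifying assumptions: each participating institution trains a small model on its own data and only the resulting model weights are shared and averaged, never the underlying records. The bank names, data, and the logistic-regression model are hypothetical and do not describe Swift’s actual implementation.

```python
# Illustrative sketch of federated averaging (FedAvg): institutions train
# locally and exchange only model weights, which are averaged centrally.
# All data and model choices here are hypothetical.
import numpy as np

def local_train(weights, features, labels, lr=0.1, epochs=5):
    """One institution's local logistic-regression update; raw data never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-features @ w))
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
global_weights = np.zeros(3)

# Each bank holds its own (anonymized) transaction features and fraud labels.
banks = [(rng.normal(size=(200, 3)), rng.integers(0, 2, 200)) for _ in range(4)]

for round_ in range(10):
    # Only weight vectors are exchanged, never the underlying records.
    updates = [local_train(global_weights, X, y) for X, y in banks]
    global_weights = np.mean(updates, axis=0)

print(global_weights)
```

In practice, such schemes are typically combined with additional privacy controls, such as secure aggregation, which this sketch omits.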
Fraud cost the financial industry US$485 billion in losses in 2023. AI has great potential to reduce these costs and help achieve the G20 goal of faster cross-border payments.
Tom Zschach, Chief Innovation Officer at Swift, said:
“AI has great potential to significantly reduce fraud in the financial industry. This is a very exciting prospect, but it will require strong collaboration. Swift has a unique ability to bring financial institutions together to harness the benefits of AI for the benefit of the industry, and we are excited about the potential these two pilot programs have to help further strengthen the cross-border payments ecosystem.”
Swift is working with the community to build an AI governance framework to ensure accuracy, explainability, fairness, auditability, security, and privacy are integral to all aspects of AI applications.
These pilots are rooted in the responsible use of AI and are aligned with emerging global standards such as ISO/IEC 42001, the NIST AI Risk Management Framework, and the EU AI Act.