The pros and cons of AI-powered bots in payments and transactions (Part 2)

March 24, 2025

In Part 1, we looked at how AI-powered bots are revolutionising payments by enabling instant, automated and secure transactions, and integrating with blockchain and stablecoins to optimise financial operations. Whilst this improves efficiency and fraud detection, it also raises concerns about market manipulation, regulatory oversight and real user adoption; AI-driven payments are set to reshape finance - but can banks and regulators keep up?

In addition to enhancing speed, AI contributes to cost reduction by automating tasks that were once performed by human workers or intermediaries. This reduces the reliance on manual labour, lowers overhead expenses and decreases transaction fees, thereby making financial services more affordable. For example, Stripe utilises AI-powered automation to optimise recurring payments, enabling businesses to cut administrative costs whilst boosting efficiency. However, it is crucial to consider whether these savings come with risks - although AI can reduce costs, issues such as system malfunctions or security breaches must be carefully managed. Businesses implementing AI should therefore weigh the potential financial savings against these emerging risks.

Furthermore, AI’s skill in detecting patterns and abnormalities in data makes it an essential asset for fraud prevention; by analysing transaction data in real-time, AI can identify suspicious behaviours and prevent fraud before it takes place. Mastercard’s Decision Intelligence system is a perfect illustration of this, as it reduces false declines and boosts fraud detection accuracy. However, the constant evolution of fraud presents a challenge:

· can fraudsters learn to bypass AI systems?

· as AI technology continues to improve, ensuring it stays one step ahead of fraudsters is vital for the protection of financial transactions (a minimal sketch of this kind of real-time anomaly flagging is shown below).
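Mastercard’s Decision Intelligence is proprietary, so purely as a hypothetical illustration, the Python sketch below shows the basic pattern such systems build on: score each incoming transaction against a customer’s recent behaviour in real time and flag outliers before the payment completes. The class name, window size and z-score threshold are all assumptions, not anything Mastercard has published.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyFlagger:
    """Toy real-time fraud screen: flags a transaction whose amount
    deviates sharply from the cardholder's recent spending pattern."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)   # rolling window of recent amounts
        self.z_threshold = z_threshold        # std devs that count as "suspicious"

    def score(self, amount: float) -> bool:
        """Return True if the transaction looks anomalous."""
        if len(self.history) < 10:            # not enough data yet: let it through
            self.history.append(amount)
            return False
        mu, sigma = mean(self.history), stdev(self.history)
        is_anomalous = sigma > 0 and abs(amount - mu) / sigma > self.z_threshold
        if not is_anomalous:                  # only learn from "normal" behaviour
            self.history.append(amount)
        return is_anomalous

flagger = AnomalyFlagger()
for amt in [12.50, 9.99, 15.00, 11.20, 8.75, 14.30, 10.00, 13.60, 9.10, 12.00, 950.00]:
    if flagger.score(amt):
        print(f"Flagged for review: £{amt:.2f}")   # the £950 outlier is caught
```

Production systems replace the simple statistical rule with machine-learning models trained on many features (merchant, location, device, timing), but the shape is the same: a learned model of ‘normal’ behaviour, consulted in the milliseconds before a transaction is approved.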

AI-powered bots such as Google Pay and Apple Pay go beyond fraud prevention by offering 24/7 availability, executing transactions and handling payments around the clock. This ‘always-on’ capability is especially advantageous for global platforms such as Aave, a decentralised finance (DeFi) service that needs real-time transaction processing. However, the constant operation of AI systems comes with potential risks. Excessive reliance on these systems may create weaknesses, particularly if there are cyberattacks or system breakdowns. Therefore, as AI continues to play a key role in financial services, maintaining its reliability and security will be essential.

AI’s integration with blockchain technology streamlines financial transactions, enhancing both efficiency and transparency, and smart contracts enable AI bots to automate processes such as loan management, carrying out transactions automatically based on set conditions. Ethereum’s smart contracts are a prime example, allowing bots to oversee complex financial agreements without intermediaries (a simple sketch of this pattern appears below). However, as programmable payments become more common, this raises important questions about user autonomy. The growing automation of financial systems may result in a hyper-automated ecosystem that diminishes individuals’ control over their financial choices.

AI’s ongoing influence in payments, and more generally in financial markets, could lead to a host of unintended consequences. Speaking at the University of Chicago Booth School of Business in London, Andrew Bailey, Governor of the Bank of England, recently said (using the analogy of Apple’s introduction of the iPhone): “I think the question is how do we get the benefits of digital technology in the world of payments. If we were to assume there are no benefits of digital technology in the world of payments, we would probably be failing the test of imagination.” (Apple had surveyed potential customers asking whether they would like a phone with certain characteristics, and most respondents said no - that’s a failure of imagination.) Understandably, the Bank of England is treading carefully and not rushing into issuing a GBP CBDC. Nonetheless, for industry to innovate and create digital money, and so allow AI bots to transact, there is a need for regulatory clarity or the creation of a digital payments sandbox (similar to the Bank of England’s digital securities sandbox).
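To make the conditional-execution pattern mentioned above concrete: the contracts the article refers to are written in on-chain languages such as Solidity, but the logic is easy to sketch in Python. Below is a hypothetical loan-monitoring rule of the kind an AI bot might enforce - every name and threshold (the 150% collateral ratio, the Loan fields) is an illustrative assumption, not a real Aave or Ethereum interface.

```python
from dataclasses import dataclass

@dataclass
class Loan:
    borrower: str
    debt: float              # outstanding debt, in GBP-equivalent
    collateral_value: float  # current market value of posted collateral

    @property
    def collateral_ratio(self) -> float:
        return self.collateral_value / self.debt if self.debt else float("inf")

# Hypothetical threshold: below 150% collateralisation, act automatically,
# mirroring the "set conditions" a smart contract would encode on-chain.
LIQUIDATION_RATIO = 1.5

def monitor(loans: list[Loan]) -> None:
    """One pass of an automated loan-management bot: no intermediary,
    just a predefined rule executed whenever its condition is met."""
    for loan in loans:
        if loan.collateral_ratio < LIQUIDATION_RATIO:
            # A real DeFi bot would submit an on-chain liquidation
            # transaction here; this sketch just reports it.
            print(f"Liquidate {loan.borrower}: ratio {loan.collateral_ratio:.2f}")
        else:
            print(f"{loan.borrower} healthy: ratio {loan.collateral_ratio:.2f}")

monitor([Loan("alice", debt=1000, collateral_value=1800),
         Loan("bob", debt=1000, collateral_value=1200)])
```

The autonomy concern the article raises is visible even in this toy: once the rule is encoded, liquidation happens automatically whenever the condition is met, with no human in the loop to exercise discretion.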

AI bots are also offering highly personalised financial services by analysing how users behave and delivering recommendations that align with individual preferences. Indeed, Larry Fink, CEO of BlackRock, the world’s biggest asset manager, has said: “We believe the next step going forward will be the tokenisation of financial assets, and that means every stock, every bond […] will be on one general ledger.” He also believes that we will see the tokenisation of most asset classes, which will then enable mass customisation of individuals’ portfolios using AI bots that buy and sell 24/7. A good example of how this is happening already is Bank of America’s Erica chatbot, which gives personalised savings advice to users.

However, this highly bespoke advice comes at the cost of data collection, leading to privacy concerns. Essentially, how much personal data are users willing to disclose for the sake of personalisation? Striking a balance between the advantages of tailored financial guidance and user privacy will be vital as AI continues to evolve in finance. The other important factor is by whom and how the AI bots are being programmed - and how do we ensure that human bias (intentional or unintentional) does not get programmed into AI?

Hence, whilst AI bots offer substantial benefits in the financial and payments sectors, there are, without doubt, challenges in AI-powered financial ecosystems. One significant challenge of AI-powered systems is their inability to match the nuanced judgment that human decision-makers bring to complex financial choices. This has been highlighted by Jonathan Hall, an advisor to the Bank of England, who stressed a number of concerns in his speech, “Monsters of the Deep”. AI is proficient in processing large amounts of data quickly but struggles in situations that require human intuition or subjective assessment. A noteworthy case from 2018 involved Wells Fargo’s AI system mistakenly rejecting mortgage applications from qualified borrowers, showing the limitations of AI in handling high-stakes decisions that require empathy or discretion. This raises the important issue of whether AI should be trusted with critical financial decisions, or whether human oversight should remain essential to ensure fairness and accuracy.
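Returning to the personalisation point above: Erica’s internals are proprietary, so the sketch below is only a toy illustration of behaviour-based advice - estimate a user’s routine spend from recent transactions and suggest a monthly savings transfer. The function name, the 20% safety buffer and the figures are all assumptions.

```python
# Toy illustration of behaviour-based personalisation (not Erica's actual
# logic): estimate a user's monthly spend and suggest a savings amount.

def suggest_savings(monthly_income: float, transactions: list[float],
                    buffer_ratio: float = 0.2) -> float:
    """Recommend a monthly savings transfer: income minus observed spend,
    minus a safety buffer. Returns 0 if there is no spare cash."""
    observed_spend = sum(transactions)
    spare = monthly_income - observed_spend
    recommendation = spare * (1 - buffer_ratio)   # hold back a buffer
    return max(recommendation, 0.0)

# Hypothetical user: £2,400 income, one month's outgoings below.
outgoings = [850.0, 120.0, 65.0, 240.0, 310.0, 95.0]
print(f"Suggested monthly transfer: £{suggest_savings(2400.0, outgoings):.2f}")
```

Note that even this toy needs the user’s full transaction history to work, which is precisely the data-collection trade-off the article describes.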

Meanwhile, security vulnerabilities in AI-powered systems are another significant challenge. Whilst automated bots can enhance efficiency, they are susceptible to hacking if their underlying code contains vulnerabilities. In 2021, the Poly Network hack exposed these risks, with hackers exploiting flaws in the platform’s smart contract code to steal over $600 million in cryptocurrencies. This event brings forth an important question: does the efficiency and speed of AI-powered payments make financial systems more vulnerable to cyberattacks?

The adoption of AI in financial services certainly faces substantial regulatory hurdles - whilst AI is transforming the payments sector, global regulations governing its application remain at an early stage. The European Union's Artificial Intelligence Act and ongoing US discussions reflect the regulatory uncertainty surrounding AI-driven financial systems. With regulations varying across different regions, internationally operating financial institutions such as Binance and Kraken may face significant compliance challenges. This raises another important issue: how will inconsistent regulations influence the scalability and effectiveness of AI-powered financial systems?

And, without doubt, privacy is a vital concern when it comes to AI systems in financial transactions, particularly as these systems often rely on large quantities of consumer data to operate effectively, raising potential risks to privacy and data security. An example of this is Facebook's Libra project, which was heavily criticised over privacy concerns, leading to its rebranding as Diem.

The key challenge is balancing AI's efficiency and personalised service offerings with the need to protect consumer privacy. Can AI-powered payment systems secure consumer data whilst delivering customised experiences? The effectiveness of AI systems also hinges on the quality of the data with which they are trained. Poor-quality or biased data can result in unfair decisions, potentially contributing to financial exclusion. A prime example of this was Amazon's AI recruitment tool, which was abandoned after it was discovered that it favoured male candidates. Similarly, AI-powered financial systems could unintentionally reinforce discrimination if the data used is incomplete or biased, posing a serious barrier to financial equity.

Over-reliance on automation also introduces the risk of unintended consequences, especially within financial markets. A key instance of this was the 2010 “Flash Crash”, when algorithmic trading bots caused a swift market collapse, temporarily erasing around $1 trillion in equity value in minutes. Hence, this leads to yet more important questions: what dangers do unsupervised AI bots present in financial markets, and how can we minimise such risks moving forward?

Job displacement is becoming an increasing concern as AI steps in to handle tasks once managed by humans, particularly in customer service and routine banking operations. JPMorgan Chase, for instance, reduced its workforce due to the improved efficiency of automated systems. As AI continues to dominate more areas of finance, societies need to prepare for the potential loss of jobs and the broader social impact of this shift - although there will also be huge demand for staff able to implement AI-powered solutions, review the code and carry out ongoing monitoring of these AI bots.

Finally, accountability presents a major issue when AI systems make errors. The Tesla Autopilot accidents have sparked significant legal questions about who is liable for mistakes made by autonomous systems. In the financial sector, a similar issue arises regarding responsibility when AI bots make errors - should it be the developers, the financial institutions that use the technology, or the users themselves?
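On the data-quality point above, one simple way teams probe for the kind of bias the Amazon example illustrates is to compare outcome rates across groups. The sketch below is a minimal, assumed demographic-parity check on loan approvals; the field names and the 0.8 threshold (the common ‘four-fifths’ rule of thumb from employment testing) are illustrative choices, not a payments-industry standard.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok                 # bool counts as 1 or 0
    return {g: approved[g] / totals[g] for g in totals}

def parity_check(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Four-fifths rule of thumb: the worst-treated group's approval rate
    should be at least `threshold` times the best-treated group's rate."""
    return min(rates.values()) >= threshold * max(rates.values())

# Illustrative records only: (group, approved) pairs; "A"/"B" are placeholders.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print("Passes parity check:", parity_check(rates))   # False: B is disadvantaged
```

A check like this catches only the crudest disparities; fuller fairness audits also examine error rates and proxies for protected attributes hidden elsewhere in the data.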

AI-powered bots are certainly reshaping financial transactions, offering unparalleled speed, efficiency and fraud detection whilst driving automation in payments, lending and asset management. Firms such as Stripe and Mastercard are harnessing AI to streamline operations, but this rapid shift raises the questions of whether AI-driven finance can remain secure against evolving cyber threats and whether regulatory clarity can be achieved without stifling innovation. Ultimately, who is accountable when AI makes financial errors? With AI integrating into blockchain and digital money ecosystems, its potential is limitless - but so are the risks. As finance moves toward a hyper-automated future, the real challenge is not simply adoption - it is control. That is, will we master AI, or will AI redefine the financial system on its own terms?

This article first appeared in Digital Bytes (18th of March, 2025), a weekly newsletter by Jonny Fry of Team Blockchain.