Central Banks Tread Carefully with Artificial Intelligence
Global central banks are largely limiting their use of artificial intelligence to low-risk tasks, prioritizing caution over rapid adoption due to concerns about cybersecurity and operational reliability, according to a new survey by the UK-based think tank OMFIF.
The report, compiled through discussions with ten central banks across Europe, Africa, Asia, and Latin America (facilitated by a working group from BNY, Bridgewater, and Capital Group), reveals a measured approach shaped by past financial crises and increasing cybersecurity threats.
While AI is rapidly transforming industries like finance – offering potential benefits in risk management, fraud detection, and efficiency – 61% of central banks surveyed report that AI is not yet meaningfully integrated into their operations. For now, they view it as a helpful tool for tasks like scanning news, identifying market anomalies, and summarizing reports, rather than a core strategic asset.
A key concern is the potential for AI models to misinterpret unusual events – a critical weakness for institutions tasked with navigating rare but impactful economic shocks. “Model reliability remains a top worry,” the OMFIF report states.
The survey also highlighted disparities in preparedness. Some central banks boast dedicated data science teams and robust security infrastructure, while others face limitations in staffing, funding, and governance, hindering experimentation.
Still, the report emphasizes that central banks are not looking to replace human judgment with AI. “Central banks have no interest in outsourcing judgement to machines,” OMFIF stated. “AI can summarise, filter and accelerate, but decisions remain with people.” Human oversight will remain the cornerstone of decision-making, ensuring accountability and maintaining public trust.
The findings underscore the need for robust governance frameworks as AI technology continues to evolve, particularly as regulation struggles to keep pace with its rapid development and its implications for privacy, data security, and employment.