Cross-party MPs in the UK have put tough questions to the government and financial regulators. According to a recent report by the UK Parliament's Treasury Committee, the UK government, the Bank of England, and the Financial Conduct Authority (FCA) have taken an overly passive "wait-and-see" approach to AI risks in the financial sector, one that could cause "serious harm" to consumers and the financial system as a whole.

Currently, over 75% of firms in the City of London have adopted AI, with insurers and international banks in particular using it widely for core operations such as credit assessment and claims processing. However, with no laws specifically targeting AI, firms are left to navigate vague existing guidelines on their own, raising risks such as opaque algorithmic decisions, discrimination against vulnerable groups, and financial fraud.
What worries experts more is that widespread AI use could produce a "herd effect" in the market: if multiple financial institutions' AI systems make similar defensive decisions during an economic shock, a chain reaction could follow and even escalate into a systemic financial crisis. In addition, the financial industry's heavy reliance on infrastructure provided by a few U.S. tech giants poses cybersecurity risks of its own.
The committee has urged regulators to abandon the "wait-and-see" approach as soon as possible, recommending the launch of specialized AI stress tests and the publication, by the end of the year, of clear operational guidelines defining liability when AI causes losses. Although the Bank of England and other institutions say they are already conducting risk assessments, MPs have warned that regulatory efforts must keep pace with technological advancement.
Key Points:
⚠️ Delayed Risk Control: UK MPs criticize the government and regulators for taking a passive approach to AI regulation in finance, which could lead to serious social and economic harm.
📉 Triggering Systemic Crises: The similarity of AI algorithms may cause financial institutions to take convergent actions during market turbulence, thereby amplifying risks and triggering financial crises.
📑 Calls for Specialized Testing: The report recommends that regulators introduce market stress tests specifically for AI and clarify legal accountability for AI in areas such as loan approvals and insurance assessments.
