Artificial intelligence (AI) represents a clear and present danger to our money and our markets. Using AI, bad actors and rogue states can cheaply and easily engage in market manipulation, financial disinformation, and other market misconduct that threatens the stability, integrity, and security of our financial system like never before. The intelligence behind this new technology may be artificial, but the losses in market value and trust are real.
In a recent article, I examine the impact of AI on misinformation and market misconduct and the risks that it poses.
Financial disinformation, manipulation, and deepfakes
For as long as markets have existed, attempts have been made to manipulate them. AI is simply the latest and arguably most consequential means for bad actors to engage in market manipulation and financial misconduct. An axiom of future markets could be: anything that can be manipulated with AI will be manipulated with AI. AI-powered applications like ChatGPT, Sora, and Gemini make misconduct cheaper, easier, and more scalable, especially schemes that trade on information and misinformation.
These methods include sophisticated, high-speed schemes such as pinging, in which AI programs rapidly submit and cancel orders to trick other machines into revealing their trading intentions, and spoofing, in which large orders are placed with no intention of execution to trick other participants' AI systems into reacting, thereby distorting price discovery. Furthermore, the use of “financial deepfakes” deserves particular scrutiny because of its pernicious impact. With relatively inexpensive AI tools, bad actors can easily produce highly convincing fake images, documents, videos, and audio recordings of companies and executives to move a stock or the entire market. The proliferation of financial deepfakes could erode trust in market integrity as investors come to doubt the financial information they see and hear.
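The underlying article contains no code; purely as a rough illustration of the order-based schemes above, the Python sketch below shows one naive way a market surveillance system might flag a spoofing-like pattern: a trader whose large resting orders are almost always canceled rather than executed. The Order fields, the size threshold, and the flagging cutoff are all hypothetical simplifications.

```python
from dataclasses import dataclass

# Hypothetical order record; real surveillance systems consume full
# exchange audit-trail data with far more fields and context.
@dataclass
class Order:
    trader_id: str
    size: int
    was_canceled: bool  # True if canceled before any execution

def spoofing_score(orders: list[Order], large_size: int = 1000) -> float:
    """Fraction of a trader's large orders canceled without execution.

    A crude proxy: spoofers post large orders they never intend to fill,
    so a cancel rate near 1.0 on large orders is one possible red flag.
    """
    large = [o for o in orders if o.size >= large_size]
    if not large:
        return 0.0
    canceled = sum(1 for o in large if o.was_canceled)
    return canceled / len(large)

# Example: a trader whose large orders are almost always canceled.
history = [
    Order("T1", 5000, True),
    Order("T1", 4000, True),
    Order("T1", 6000, True),
    Order("T1", 100, False),
]
if spoofing_score(history) > 0.9:
    print("flag for review: possible spoofing pattern")
```

In practice, of course, surveillance must distinguish legitimate cancellations, which are a routine part of market making, from manipulative ones, which is precisely why detecting AI-driven manipulation is so much harder than this sketch suggests.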
Systemic risks of speed and opacity
The injection of AI into finance creates new, interrelated systemic risks. During the 2008 financial crisis, the central vulnerability was “too big to fail,” a function of institutional size. With AI, regulators must now also monitor risks of speed (“too fast to stop”) and opacity (“too opaque to understand”).
First, regarding “too fast to stop”: the AI-driven acceleration of financial activity creates a systemic risk that misinformation and misconduct could destabilize the financial system before corrective or preventive measures can be implemented to stem the fallout.
Second, regarding “too opaque to understand”: the black-box nature of many AI systems introduces a layer of complexity that obscures the identification and remediation of systemic vulnerabilities and market misconduct. Once a machine is programmed to achieve a goal, the means by which it learns and operates in pursuit of that goal, whether legal or illegal, can be a mystery even to its human overseers. This opacity makes it difficult to predict and diagnose problems as they unfold, and to quickly and accurately fix them after the fact.
Geopolitical threats
The proliferation of AI in the financial sector also introduces a new dimension of geopolitical risk, as adversaries use relatively inexpensive AI tools to design sophisticated attacks that exploit weaknesses in financial AI systems. Rather than engaging in costly, protracted military conflicts with uncertain outcomes, nations are increasingly turning to economic and trade warfare. Rogue states can use AI to spread misinformation, harm rival economies, reap illicit gains, or damage the political prospects of foreign leaders. The inherent complexity, opacity, and ubiquity of AI algorithms in our daily and financial lives exacerbate the impact of these threats, increasing the speed and scale at which an attack can escalate into a catastrophic systemic failure.
Pragmatic recommendations
Although broad consensus on AI regulation remains politically difficult to achieve, steps can be taken now. First, regulators should strengthen incentives and sanctions to push financial intermediaries to better manage AI risks. This approach can address immediate concerns quickly and encourage firms to put strong protections in place. Second, investors, especially retail investors, should favor long-term passive investment strategies; by doing so, they can sidestep the costly short-term volatility of an AI-affected market. Finally, policymakers should revitalize traditional regulatory tools such as public disclosures, human reviews, and stress testing to include scenarios that specifically target AI risks, and that reinvention should leverage AI itself to help regulators and policymakers keep pace.
Conclusion
Addressing the issues raised by AI, disinformation, and market misconduct will be one of the most important and demanding challenges for policymakers, regulators, and business leaders. While no single solution exists, developing an initial plan for a safer, more robust marketplace is well within our capabilities.
This post comes to us from Professor Tom C.W. Lin of Temple University's Beasley School of Law. It is based on his recent article in the Ohio State Law Journal, “Artificial Intelligence, Misinformation, and Market Misconduct,” available here.
