Introduction
Two noteworthy events occurred last week in the world of artificial intelligence (AI). Last Wednesday, President Trump released his administration’s “AI Action Plan,” which described the regulation of AI as a barrier to its innovation and adoption. The previous day, OpenAI CEO Sam Altman warned financial institutions about a “‘significant, impending fraud crisis’ brought about by artificial intelligence.” The problem presented by the juxtaposition of these two perspectives should be obvious: a plan that allows companies to use AI unfettered by oversight and safeguards is unlikely to prevent the very crisis that those in the AI industry themselves fear. Reasonable regulation, by contrast, can both prevent such crises and allow AI to fulfill its vast potential.
Those resistant to regulation need only look to the financial industry as a model. A financial industry largely unfettered by oversight and safeguards caused the Great Depression. That is why Congress found, in adopting our foundational securities laws in the 1930s, that it was “necessary to provide for regulation and control” of securities transactions. Almost 100 years later, our capital markets are the envy of the world because investors trust them and invest with confidence, and that trust exists because our markets have traditionally been well-regulated and well-policed.
We agree with the administration that AI has the potential to “usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people.” But right now there is little to no regulation governing how AI may be used, which means there is nothing to stop it from being used to harm Americans instead of helping them. Without reasonable regulation to guide AI’s development, as AI becomes “more integral to the fabric of society,” the dangers it poses may end up rivaling the benefits it brings.
Nowhere is this clearer than in the financial industry, which has real challenges that AI can address but also real problems that AI could exacerbate, as researchers at MIT investigating the uses of AI in the financial industry have recently observed.
Congress appears poised to grapple with AI’s potential rewards and risks in the financial industry. Today’s Senate Banking Committee hearing, “Guardrails and Growth: AI’s Role in Capital and Insurance Markets,” points to how guardrails and growth go hand in hand. Reasonable regulation provides the necessary guardrails that build greater confidence in AI applications in the financial industry. Unfortunately, the SEC under the Trump Administration has already rejected the reasonable regulations proposed under the Biden Administration. A continued emphasis on deregulation at all costs will only heighten the risks AI poses to investors.
The Predictive Data Analytics Rule
On July 26, 2023, the SEC proposed its “Predictive Data Analytics” rule—its first attempt to regulate financial firms’ use of AI. The rule would have required broker-dealers and investment advisers to eliminate or neutralize certain conflicts of interest associated with their use of AI in their interactions with investors. The purpose of the rule was to ensure that technologies that could provide greater market access, efficiency, and returns were not used instead to place the interests of the financial firms employing these technologies ahead of the interests of investors.
The financial industry subjected the Predictive Data Analytics rule proposal to withering criticism, but its objections to the rule show the problem with characterizing all AI regulation as unduly burdensome. For example, Robinhood criticized the Predictive Data Analytics rule as requiring “broker-dealers and investment advisers who use AI or other predictive data tools to eliminate or neutralize all ‘conflicts of interest’ arising from the use of such tools, regardless of the cost or feasibility of doing so.” So broker-dealers and investment advisers should be able to use AI to provide recommendations or advice riddled with conflicts of interest if it would be too costly or difficult to use AI in a conflict-free way? Conflict-free recommendations and advice should be the bare minimum investors can expect from broker-dealers and investment advisers. Yet Robinhood says that ensuring AI is not used to put the interests of the firm ahead of the interests of investors is somehow an attempt “to regulate a new technology prematurely.” The industry may be content to wait until conflicts of interest in the use of AI harm investors, but the SEC should not wait for harm to occur before acting.
Unfortunately, the SEC under the Trump Administration has sided with the industry. On June 12, 2025, it withdrew the Predictive Data Analytics rule proposal. It is hard to see how investors will realize the promise AI holds for their ability to obtain quality financial advice if there is no rule preventing firms from using AI to profit at their expense.
The Potential for Conflicts of Interest
The enormous potential for conflicts of interest in the way financial firms use AI demonstrates the urgent need for guardrails. Almost every way AI can be used to help investors can also be used to hurt them. For example, earlier this year, Robinhood announced Robinhood Cortex. Robinhood described Cortex as “an AI investment tool . . . that is designed to provide real-time analysis and insights that help you better navigate the markets, identify opportunities, and stay up to date on the latest market moving news.” Cortex will “offer explanations for a particular stock’s rise or fall and suggest options trades based on a user’s expectations for a stock price.”
This last point shows both the promise and perils of AI in the financial industry. Theoretically, AI could be used to help investors make better options trades. However, research shows that retail investors are terrible at trading options. Cortex’s suggestion of options trades could induce investors to trade recklessly. Moreover, Robinhood has already been fined for its “gamification” features that encouraged customer engagement on its platform without adequate controls to protect inexperienced investors. Will investors be able to resist Robinhood’s AI-driven trading suggestions if they include possible trades that investors would be better off not making?
Regulators need to adopt rules so that firms do not use AI to profit at investors’ expense. Robinhood says that “the federal government should avoid rushing to regulate AI in a manner that would stifle innovation and harm consumers and investors.” However, the unchecked use of AI free from guardrails is what will hurt consumers and investors. And there is no reason regulation that protects consumers and investors should stifle innovation. We agree with Robinhood that AI “has the potential to transform the financial services industry . . ., delivering financial services like investment advice to a greater number of consumers in far more cost-effective and efficient ways.” This will happen only if regulations force companies to use AI in a way that maximizes its benefits to investors and consumers rather than its benefits to financial firms themselves.
The Potential for Biased Advice
Conflicts of interest are not the only risk AI poses to investors that reasonable regulations would help address. Investors need rules designed to protect their interests and those of the financial system from the biases often inherent in AI-driven portfolios. These biases can lead AI platforms to recommend unduly risky portfolios. For example, researchers at the University of St. Gallen in Switzerland compared AI portfolio recommendations to the Vanguard Total World ETF, a simple, low-cost global index fund. They found that, because the AI exhibited cognitive biases similar to humans, it “consistently suggested portfolios with higher risk than” the ETF.
The researchers highlighted the biases that made the AI-driven portfolios risky for investors:
- AI favored US companies, as it recommended a portfolio with 93% of investments in US equities, which made investors vulnerable to a downturn in the US economy
- AI favored investments in certain sectors, like tech, instead of a balanced portfolio
- AI suggested investing in “hot” companies that had recently seen a lot of trading
- AI recommended stock picking and actively managed investments, which carry higher fees and more risk, rather than simple, low-cost index funds
- AI recommended portfolios with higher fees, which could drag down returns
The study concluded with various recommendations, including that regulators “may need new rules to ensure AI advice doesn’t create systemic risks—like too many people piling into the same stocks.” In light of these findings, rules that would require financial firms to monitor the biases in their AI systems to ensure the biases don’t skew their recommendations would be prudent.
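To illustrate what such monitoring could look like in practice, here is a minimal, hypothetical sketch in Python. The holdings, weights, thresholds, and the screen_for_biases helper are all illustrative assumptions of ours, not actual regulatory standards or any firm’s real recommendations; the point is simply that the home-country, sector, and fee biases the researchers identified are measurable before a recommendation ever reaches an investor.

```python
# A minimal, hypothetical sketch of an automated bias screen for an
# AI-recommended portfolio. The holdings, weights, and thresholds are
# illustrative assumptions only, not regulatory standards.

# Each entry: (ticker, weight, region of underlying exposure, sector,
# annual expense ratio)
SAMPLE_AI_PORTFOLIO = [
    ("AAPL", 0.30, "US", "tech", 0.0000),
    ("NVDA", 0.25, "US", "tech", 0.0000),
    ("MSFT", 0.20, "US", "tech", 0.0000),
    ("ARKK", 0.18, "US", "tech", 0.0075),   # actively managed fund, higher fee
    ("VXUS", 0.07, "ex-US", "broad", 0.0005),
]

def screen_for_biases(portfolio, max_home=0.60, max_sector=0.40, max_fee=0.0010):
    """Flag the home-country, sector-concentration, and fee biases
    described in the study's findings above. Thresholds are illustrative."""
    flags = []

    # Home bias: total weight concentrated in US equities
    us_weight = sum(w for _, w, region, _, _ in portfolio if region == "US")
    if us_weight > max_home:
        flags.append(f"home bias: {us_weight:.0%} of the portfolio in US equities")

    # Sector concentration: total weight per sector
    by_sector = {}
    for _, w, _, sector, _ in portfolio:
        by_sector[sector] = by_sector.get(sector, 0.0) + w
    for sector, weight in by_sector.items():
        if weight > max_sector:
            flags.append(f"sector concentration: {weight:.0%} in {sector}")

    # Fee drag: weighted-average expense ratio vs. a low-cost index fund
    avg_fee = sum(w * fee for _, w, _, _, fee in portfolio)
    if avg_fee > max_fee:
        flags.append(f"fee drag: {avg_fee:.2%} weighted-average expense ratio")

    return flags

if __name__ == "__main__":
    for flag in screen_for_biases(SAMPLE_AI_PORTFOLIO):
        print("FLAG:", flag)
```

Run on this sample, the screen would flag the 93% US concentration, the heavy tilt toward tech, and the above-index fee load. A real system would need richer data and supervised thresholds, but the sketch shows that the biases the researchers documented can be checked mechanically, which is exactly what a monitoring rule would require.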
The need to monitor the biases of AI systems is even more important because AI’s possible biases in providing financial advice are not limited to financial information. Studies have shown that AI has a tendency to amplify gender and racial stereotypes. That is because the data on which it learns “can come from all corners of the internet, which contains a glut of biased and toxic content.” This can lead AI to discriminate when it provides financial advice in the same way humans might discriminate. For example, AI might not recommend “attractive and appropriate investment products to a historically disadvantaged population, perpetuating that population’s position.” This would frustrate, rather than further, AI’s potential to democratize finance.
The Potential for Investor Confusion
We also need rules so investors understand what AI can and cannot do in the financial industry. For example, investors need to understand that just because a firm says it uses AI, that doesn’t mean the firm’s products are going to outperform other products. One comparison of an AI-powered exchange-traded fund (ETF) with the SPDR S&P 500 ETF is instructive:

[Chart: performance of an AI-powered ETF compared with the SPDR S&P 500 ETF]

This chart comes from an article about how to use AI in an investment strategy, and the lesson it draws from the chart is an important one for investors to bear in mind: an AI label is no guarantee of outperformance.
Other studies have similarly identified AI-driven mutual funds that were unable to beat the S&P 500. This should not be so surprising, as the creation of the index fund is widely regarded as one of the most successful innovations in modern financial history. A real test for the use of AI in helping retail investors select a portfolio will be whether it can consistently outperform the market.
Interestingly, research also shows that human-generated portfolios may outperform AI-driven portfolios. As AI develops, it may reach a point where it can consistently beat both the market and human-generated actively managed portfolios. Until that occurs, investors should understand that an AI-driven portfolio is no more guaranteed to generate returns than any other.
The possible underperformance of AI-driven portfolios reveals another reason we need rules to address how financial firms use AI to provide investment advice. We need to ensure that AI-driven investment platforms act as fiduciaries. Research shows that AI has the capacity to “role-play a financial advisor convincingly and often accurately for a client, but even the largest language model currently appears to lack the sense of responsibility and ethics required by law from a human financial advisor.” So even if AI develops the capacity to provide investors with better financial advice than they could receive from a human, it may lack the incentives to do so. As a result, “generative AI will require explicit alignment to ethical finance and the principles of fiduciary duty in order to function as a financial advisor.” This alignment will not occur without regulations that force firms to design their AI platforms as fiduciaries.
Conclusion
We agree with those who believe AI can be used “to communicate and translate investment and risk management concepts for the widest variety of investors, democratizing finance even more fully than before.” But we also believe that AI will never reach this potential unless it is well-regulated. Otherwise, AI will just be another way for financial firms to fleece investors under the guise of democratizing finance. For example, Robinhood says that its mission is just that, yet it has been found to induce users with little to no investment experience to trade the riskiest products and trade excessively. That is why its business model has been described as “‘opening up the casino to as many people as possible, while masking it in the language of universal stock ownership.’” Financial firms will say that they want to use AI to democratize finance, but without reasonable regulation they will use it to increase their profits.
This would not be good either for investors or for the use of AI in the financial industry. That is why reasonable regulation both maximizes AI’s rewards and minimizes its risks. “Existing rules and regulations are inadequate for AI agents deployed by investment service providers.” And investors are unlikely to rely on AI if they discover that AI platforms are being used “to exploit consumers in order to enrich financial advisors.” Conversely, subjecting AI platforms to reasonable regulations would instill trust in these systems and encourage widespread adoption.
There is no question that AI is transforming how investors interact with brokers and advisers. It should not be controversial to say that, as a result, the “obligations and responsibilities owed by broker-dealers and investment advisers should be updated when using AI.” That is how to ensure AI systems “are truly being used for the benefit of the retail investor.”