AI Not a Financial Advisor: CNMV Warns of Serious Flaws in ChatGPT, Gemini, DeepSeek, and Perplexity Usage


Artificial intelligence is now used for virtually everything, an undeniable and unstoppable trend that has extended to investment decision-making. The CNMV (Spain's National Securities Market Commission) has stepped in to inject some reality: while AI models like ChatGPT, Gemini, DeepSeek, and Perplexity can assist in analyzing financial information, they are not suitable as autonomous financial advisors. The supervisory body, responsible for overseeing Spain's securities markets, has highlighted significant flaws, errors, and "hallucinations" when these models are used without human oversight. It's perhaps unsurprising, but crucial to acknowledge.

The core issue, as one might expect, lies in how these tools are utilized. Requesting a quick answer without context or methodology differs greatly from tasking them with clear instructions, reliable data, and subsequent human review. This is where the problems begin for investors, particularly retail investors, who often seek straightforward answers to complex financial decisions that can have significant monetary consequences.

CNMV Warns Against Using AI as a Financial Advisor by Letting Responses Dictate Investments

The CNMV has released a noteworthy study titled “Large Language Models and Stock Investing: Is the Human Factor Required?”, authored by Ricardo Crisóstomo and Diana Mykhalyuk, technicians from the organization’s Directorate General of Strategy and International Affairs. The study assesses the reasoning capabilities of four advanced language models – ChatGPT, Gemini, DeepSeek, and Perplexity – in investment decisions, representing some of the most widely used options.

While the study doesn't include Claude, its findings offer little room for complacency. According to the CNMV, these AI systems exhibit recurring reasoning errors, computational mistakes, incorrect financial interpretations, and reliance on outdated or fabricated information. These are the well-known AI hallucinations, but in a domain where a mistake means more than a poor answer: with money at stake, it can mean real financial losses.

The research also points out a critical detail: errors are more frequent when queries are simple, unstructured, and lack context. In essence, asking an AI "which stock should I buy?" or "is this company a good investment?" is a fundamentally flawed approach. The AI might present its response convincingly and in a well-organized manner, yet still rely on incorrect data, misinterpreted ratios, or outdated information.
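The gap between a context-free query and a well-specified one can be sketched in code. The following Python snippet is an illustrative sketch only (the prompt wording and helper names are assumptions, not taken from the CNMV study): it builds the two kinds of prompts as plain strings, so the contrast in supplied data and methodology is visible regardless of which model would eventually consume them.

```python
def naive_prompt(question: str) -> str:
    # The kind of context-free query the study flags as most error-prone.
    return question

def structured_prompt(question: str, data: dict[str, str], methodology: str) -> str:
    # Supplies reliable data and an explicit methodology, and demands sourced
    # reasoning, so a human reviewer can verify every figure the model uses.
    context = "\n".join(f"- {key}: {value}" for key, value in data.items())
    return (
        "You are assisting with financial analysis. Use ONLY the data below.\n"
        "Data (from official regulated filings):\n"
        f"{context}\n"
        f"Methodology: {methodology}\n"
        f"Task: {question}\n"
        "Cite the data line behind each claim. If the data is insufficient, "
        "say so instead of guessing."
    )

bad = naive_prompt("Which stock should I buy?")
good = structured_prompt(
    "Assess whether the P/E ratio suggests over- or under-valuation "
    "versus the sector.",
    data={
        "Share price (EUR)": "14.20",        # illustrative figures only
        "EPS, trailing 12m (EUR)": "1.18",
        "Sector median P/E": "15.4",
    },
    methodology="P/E = share price / trailing EPS; compare to sector median.",
)
```

The structured version still needs the human review step the CNMV insists on; the point is only that it gives a reviewer something concrete to check.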

The Human Factor Remains Paramount for Successful Investment

This is why the CNMV emphasizes the indispensable human factor. The integration of AI into financial markets presents not only a technological challenge but also an organizational one, given its rapid adoption. Rigorous verification, systematic human validation, and control mechanisms are essential to detect errors before they impact real investment decisions.

The logical conclusion is straightforward: if AI were superior to human professionals in investment tasks, human advisors would have long been replaced. Furthermore, publicly traded companies also utilize AI, meaning the playing field is somewhat balanced, and the human element continues to be the differentiating factor.

Another significant aspect highlighted by the study is the quality of sources, which is crucial for anyone actively involved in the stock market. The CNMV stresses that models perform better when grounded in official, regulated, and standardized information compared to general internet content, which can contain conflicting narratives, biases, or non-comparable data. Information originating from regulators like the CNMV reduces noise, enhances comparability, and leads to more coherent financial responses. While not infallible, it’s far more reliable.
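As a rough illustration of that preference for regulated sources, a retrieval pipeline could rank documents by origin before handing them to a model. This is a minimal sketch under stated assumptions: the domain whitelist and the two-tier scoring are invented for illustration and are not part of the CNMV study.

```python
from urllib.parse import urlparse

# Hypothetical whitelist of regulator/official domains; illustrative only.
OFFICIAL_DOMAINS = {"cnmv.es", "esma.europa.eu", "sec.gov"}

def source_score(url: str) -> int:
    """Rank official regulated sources above general internet content."""
    host = urlparse(url).hostname or ""
    # Match the domain itself or any subdomain of a whitelisted regulator.
    if any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS):
        return 2  # official, standardized filings
    return 1      # general content: usable, but needs extra scrutiny

def rank_sources(urls: list[str]) -> list[str]:
    # Stable sort: official sources first, original order otherwise preserved.
    return sorted(urls, key=source_score, reverse=True)
```

A design note: keeping general content rather than discarding it mirrors the study's framing — unofficial sources add noise and non-comparable data, but the fix is prioritization plus human review, not blanket exclusion.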

The practical takeaway is that AI can be a valuable tool for organizing information, comparing data, or preparing analyses, but it should not be making decisions for us. This is especially true when personal finances are involved. Perhaps the CNMV foresees that using AI as a financial advisor could lead to significant financial ruin for many, or perhaps they simply don’t want individuals to become overly wealthy by misusing AI.

Regardless of the underlying motivation, the CNMV is not issuing a veiled prohibition but rather a warning: misusing AI can lead to the evaporation of invested capital due to a series of poor decisions based on the AI’s recommendations.