ChatGPT is usually wrong on financial matters

Yeah, pretty much. LLMs are trained on massive datasets derived from publicly available online sources like websites, books, and code repositories.

Which means, increasingly, that LLMs are being trained on material much of which was itself produced by other LLMs (or even by the same LLM). To the extent that LLMs produce output that is incomplete, unreliable, or downright false, this becomes a self-reinforcing problem: errors in one model's output feed into the next round of training data.