ChatGPT is usually wrong on financial matters

If you badger an LLM, it will hallucinate a false response in many cases.

That's quite the statement...

Been trying to get Copilot to return documents I know are there. But it's like trying to get a cat to find a book in a library. I can't help but think it's easier to find them myself.

Going to try some coding stuff at some point.
It's pretty well known from the research that ChatGPT is quite willing to just make stuff up if it doesn't 'know' the answer to a question, or part of a question (i.e. if it hasn't been trained on the relevant information). It actually 'likes' to be corrected, as this new knowledge (whether correct or not) is then fed back into the system.

The problem is the amount of wrong or ambiguous information that has been used for the training. It's easy enough to spot this if you ask it questions about any area in which you are already an expert yourself. Or ask it questions about a very narrow field and see how it generates false information, or just presents partially correct information (e.g. a person with the same name; a similarly named scheme from a different jurisdiction) as fact.

I find it completely useless as presently constituted. It will undoubtedly get better over time, as long as the big ethical questions around copyright etc can be sorted out.
 
A better test would be to give five financial advisors a few questions to answer and see how many of them are right. I would imagine that there would be great variety in the answers.

Then give the questions to ChatGPT and see if any of its answers are actually wrong.
Interesting that Askaboutmoney posts are being used to feed the Gemini model.
Isn't it the case that most or all such publicly accessible AIs/LLMs are being trained on the complete visible (i.e. non-deep/dark) web contents? Which is part of the reason that some of the results can be of variable/questionable quality: because there's a lot of rubbish on the web. E.g. the "AI recommends that people eat at least one small rock daily" example that came about because of a parodic article in The Onion...
 
Yeah, pretty much. LLMs are trained on massive datasets derived from publicly available online sources like websites, books, and code repositories.

Which means, increasingly, that LLMs are being trained on material, much of which was produced by other LLMs (or even by the same LLM). To the extent that LLMs produce output which is incomplete, unreliable or downright false, this will be a self-reinforcing problem.
 
It's not great on legal matters either.

A super article in the Irish Times about the way the Rangers boarded the drug smuggling ship.


After the crew failed to rendezvous with the Castlemore, their criminal bosses in Dubai instructed them to put the cocaine in a lifeboat and prepare to bring it ashore.

Later, the crew were wrongly advised that the Irish authorities had no legal authority to board their vessel. It subsequently emerged in court that the Dubai criminals were relaying legal advice from ChatGPT.
 
Later, the crew were wrongly advised that the Irish authorities had no legal authority to board their vessel. It subsequently emerged in court that the Dubai criminals were relaying legal advice from ChatGPT.
From what I've seen from other sources, including the Garda Press Office video on YouTube, ChatGPT may not have been incorrect. The gangsters (and/or their paymasters in Dubai) seemed to have thought that they were in waters outside the jurisdiction of the Irish authorities when, in fact, they were inside. So they may simply have given the AI an inaccurate prompt/question based on their misunderstanding of the situation...
From what I've seen from other sources, including the Garda Press Office video on YouTube, ChatGPT may not have been incorrect. The gangsters (and/or their paymasters in Dubai) seemed to have thought that they were in waters outside the jurisdiction of the Irish authorities . . .
Or, it may still have been incorrect. Even if the vessel was on the high seas, there are a couple of different bases on which boarding might have been permissible under international law.

An actual lawyer giving advice in this situation would ask a number of questions, some of which the people on the vessel would almost certainly have been unable to answer with confidence. One of the questions would have been "where were you when the Irish authorities first began to monitor your progress and follow you?" If the answer to that question was (as seems highly likely) "we don't know when they first began to monitor us", then the lawyer will say "In that case, I cannot tell you whether a boarding is authorised by international law". But that's the kind of answer an AI system is highly, highly averse to giving.
So they may simply have given the AI an inaccurate prompt/question based on their misunderstanding of the situation...
Or, and this is extremely common, being highly motivated to hear that boarding was not authorised, they may have given prompts/questions designed to elicit advice to that effect. (We see this quite often on askaboutmoney: people give the facts which are favourable to the view they would like to take of their situation, but getting the facts which are less favourable to them can be like drawing teeth.)

A human lawyer is familiar with the natural tendency of people to focus on the facts and circumstances that give them hope, and he will ask questions designed to uncover the facts that they would prefer to downplay. AI chatbots don't do that.
 
There is a huge risk with a financial advisor that you are sold the products which benefit the advisor rather than the client.
Is this really true, Brendan?

My understanding is that advisors must make full disclosure of any tied agency or similar arrangements they have with product providers, and that outside this, there is no massive difference in terms of intermediary remuneration etc in the offerings presented by the various prestigious, market-leading life and pensions companies.

The biggest risk that I see in relation to life and pensions is like that of the various places where one can buy shoes. If you remain overly suspicious of the motives and earnings of every shoe seller, you'll sooner or later end up barefoot.
 
Hi Tommy

Yes, the risk is quite high.

If you ask here about where to invest €100k, you will be told to pay off any borrowings including your mortgage.

I have seen numerous cases where the broker will say "Keep your home and mortgage separate from your investments. You should buy the xxx bond."

And I have also seen very questionable advice from financial advisors.

I think it's very risky.

Here, you will get both points of view. And no one is being paid.

Brendan
 
Hi Brendan

Anyone who receives questionable advice from a financial advisor or other professional has an elaborate set of remedies open to them.

I don't accept the claim that there is a huge risk of a given financial advisor successfully selling products to clients which primarily benefit the advisor rather than the client.
 

is only one example.

It may be better now but I suspect that there are many poor advisors out there.
 
Hi Brendan

In that case, the following axiom applied:
Anyone who receives questionable advice from a financial advisor or other professional has an elaborate set of remedies open to them.

I'm not an investment advisor but thought that particular investment a bad one to be buying and a bad one to be selling.
But I also thought the same of what Harry Cassidy in Custom House Capital was selling 20+ years ago.

If greed blinds some people to the obviously inherent risk in certain investments, that is not something you can simply legislate away unless you ban everything but the safest investments.
 
I don't think it's greedy to want to get a good return on your investments.

People go to a financial advisor based on recommendations by friends or in response to ads.

They don't have the insight that you and other users of askaboutmoney have.
 
From my limited experience, if you use one of the big general-purpose AIs, they are wrong and hallucinate so often that you can't trust them.

If you have an LLM that's using source data that's managed and audited on a regular basis, like a legal reference library or this forum, it will be vastly more accurate.
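The idea behind grounding an LLM in a curated corpus (often called retrieval-augmented generation) is that the system first looks up relevant passages in an audited document store and then answers only from those. The sketch below is a toy illustration of the retrieval step only, using plain term-overlap scoring instead of a real search index or model; the corpus entries and names are entirely hypothetical.

```python
# Toy sketch of the retrieval step in retrieval-augmented answering.
# A real system would use a vector search over an audited document
# store and pass the retrieved text to an LLM; here we just rank a
# small hypothetical corpus by shared word counts.

import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase a text and count its word tokens."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Return the ids of the k corpus documents sharing the most
    terms with the query (Counter & gives per-term minimum counts)."""
    q = tokenize(query)
    scores = {
        doc_id: sum((q & tokenize(text)).values())
        for doc_id, text in corpus.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical curated corpus, e.g. an audited legal reference library.
corpus = {
    "maritime-1": "Irish authorities may board a vessel within Irish territorial waters.",
    "pensions-1": "Intermediaries must disclose tied-agency arrangements to clients.",
}

best = retrieve("Can the Irish authorities board our vessel?", corpus)
print(best)  # ['maritime-1']
```

Because the answer is constrained to whatever the retrieval step returns from the vetted corpus, a wrong answer is at least traceable to a specific source document, which is exactly what an unconstrained chatbot cannot offer.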
 
And now Musk is planning to insert ads into the answers provided by Grok.


  • Grok will let advertisers pay to appear in chatbot suggestions. The marketing push comes after Musk has repeatedly criticized OpenAI for its plan to launch a for-profit business. Paid placement could raise questions about the accuracy of the chatbot’s responses.
Elon Musk is looking to monetize Grok. Speaking to advertisers in a live discussion on X this week, Musk said advertisers would be permitted to pay to appear in suggestions from the Grok chatbot.

“Our focus thus far has just been on making Grok the smartest, most accurate AI in the world, and I think we’ve largely succeeded in that. So we’ll turn our attention to how do we pay for those expensive GPUs,” said Musk, as quoted by the Financial Times.