ChatGPT is usually wrong on financial matters

Yeah, pretty much. LLMs are trained on massive datasets derived from publicly available online sources like websites, books, and code repositories.

Which means, increasingly, that LLMs are being trained on material, much of which was produced by other LLMs (or even by the same LLM). To the extent that LLMs produce output which is incomplete, unreliable or downright false, this will be a self-reinforcing problem.
 
It's not great on legal matters either.

A super article in the Irish Times about the way the Rangers boarded the drug-smuggling ship.


After failing to rendezvous with the Castlemore, their criminal bosses in Dubai instructed them to put the cocaine in a lifeboat and prepare to bring it ashore.

Later, the crew were wrongly advised that the Irish authorities had no legal authority to board their vessel. It subsequently emerged in court the Dubai criminals were relaying legal advice from ChatGPT.
 
Later, the crew were wrongly advised that the Irish authorities had no legal authority to board their vessel. It subsequently emerged in court the Dubai criminals were relaying legal advice from ChatGPT.
From what I've seen from other sources, including the Garda Press Office video on YouTube, ChatGPT may not have been incorrect. The gangsters (and/or their paymasters in Dubai) seemed to have thought that they were in waters outside the jurisdiction of the Irish authorities when, in fact, they were inside. So they may simply have given the AI an inaccurate prompt/question based on their misunderstanding of the situation...
 
From what I've seen from other sources, including the Garda Press Office video on YouTube, ChatGPT may not have been incorrect. The gangsters (and/or their paymasters in Dubai) seemed to have thought that they were in waters outside the jurisdiction of the Irish authorities . . .
Or, it may still have been incorrect. Even if the vessel was on the high seas, there are a couple of different bases on which boarding might have been permissible under international law.

An actual lawyer giving advice in this situation would ask a number of questions, some of which the people on the vessel would almost certainly have been unable to answer with confidence. One of the questions would have been "where were you when the Irish authorities first began to monitor your progress and follow you?" If the answer to that question was (as seems highly likely) "we don't know when they first began to monitor us", then the lawyer will say "In that case, I cannot tell you whether a boarding is authorised by international law". But that's the kind of answer an AI system is highly, highly averse to giving.
So they may simply have given the AI an inaccurate prompt/question based on their misunderstanding of the situation...
Or — and this is extremely common — being highly motivated to hear that boarding was not authorised, they may have given prompts/questions designed to elicit advice to that effect. (We see this quite often on askaboutmoney — people give the facts which are favourable to the view they would like to take of their situation, but getting the facts which are less favourable to them can be like drawing teeth.)

A human lawyer is familiar with the natural tendency of people to focus on the facts and circumstances that give them hope, and he will ask questions designed to uncover the facts that they would prefer to downplay. AI chatbots don't do that.
 
Relying on AI is like relying on AAM.

If you end up misled by either, it's generally because you either neglected to provide a full outline of your relevant circumstances or simply asked the wrong questions, or an incomplete set of them.

There in large part lies the value of a professional advisor.
 
I think that there is a big difference between AI and askaboutmoney.

  • If you get a wrong answer on askaboutmoney, it will usually be corrected very quickly.
  • If you ask the wrong question on askaboutmoney, you will be told so.
  • And many times there is no clear-cut answer, just opinions, and you will get different views on askaboutmoney which allow you to weigh up the pros and cons - maybe AI does this as well?
A professional advisor on a fairly factual matter like tax would usually be better.

But on a general financial question, e.g. a Moneymakeover, askaboutmoney is better than a financial advisor as different views are given. There is a huge risk with a financial advisor that you are sold the products which benefit the advisor rather than the client.
 
Moneymakeovers are great precisely because they pose a wide range of questions. In an ideal world, all AAM queries would require a pre-query questionnaire but that would be unwieldy and too time consuming for what is a voluntary service.

But there are so many times on AAM that I see wrong or questionable answers and just don't have the time or energy to correct them. And that's only in my field of expertise.
 
There is a huge risk with a financial advisor that you are sold the products which benefit the advisor rather than the client.
Is this really true, Brendan?

My understanding is that advisors must make full disclosure of any tied agency or similar arrangements they have with product providers, and that outside this, there is no massive difference in terms of intermediary remuneration etc in the offerings presented by the various prestigious, market-leading life and pensions companies.

The biggest risk that I see in relation to life and pensions is like that of the various places where one can buy shoes. If you remain overly suspicious of the motives and earnings of every shoe seller, you'll sooner or later end up barefoot.
 
Nevertheless, if you're given a wrong or incomplete answer on AAM your chances of having it queried or corrected by another poster are a lot higher than if you got the same wrong/incomplete answer on ChatGPT.

So I think the results in the reliable advice steeplechase are:
  1. Advice from a qualified and competent professional
  2. Advice you get on Askaboutmoney
  3. Advice you get from technologies yet to be invented
Also ran: ChatGPT.
 
Hi Tommy

Yes, the risk is quite high.

If you ask here about where to invest €100k, you will be told to pay off any borrowings including your mortgage.

I have seen numerous cases where the broker will say "Keep your home and mortgage separate from your investments. You should buy the xxx bond."

And I have also seen very questionable advice from financial advisors.

I think it's very risky.

Here, you will get both points of view. And no one is being paid.

Brendan
 
Hi Brendan

Anyone who receives questionable advice from a financial advisor or other professional has an elaborate set of remedies open to them.

I don't accept the claim that there is a huge risk of a given financial advisor successfully selling products to clients which primarily benefit the advisor rather than the client.
 

That is only one example.

It may be better now but I suspect that there are many poor advisors out there.
 
Hi Brendan

In that case, the following axiom applied:
Anyone who receives questionable advice from a financial advisor or other professional has an elaborate set of remedies open to them.

I'm not an investment advisor but thought that particular investment a bad one to be buying and a bad one to be selling.
But I also thought the same of what Harry Cassidy in Custom House Capital was selling 20+ years ago.

If greed blinds some people to the obviously inherent risk in certain investments, that is not something you can simply legislate away unless you ban everything but the safest investments.
 
I don't think it's greedy to want to get a good return on your investments.

People go to a financial advisor based on recommendations by friends or in response to ads.

They don't have the insight that you and other users of askaboutmoney have.
 