ChatGPT is usually wrong on financial matters

People don't fact-check AI enough, and it's often inaccurate. "Mostly right" means it's unreliable.

"To err is human" is no carte blanche for unrestricted AI quoting.

Perhaps others feel differently...
 
Dare I say the main topic here is AI, rather than wading into a certain poster's comments and interpretations of ChatGPT?

Quick thoughts on the main topic: AI is here to stay, and there are many more AI variants than just ChatGPT (Gemini, Claude, etc.).

AI is only as good as the data it is fed. I had an argument with ChatGPT 3 (IIRC) two years ago as it insisted I was wrong. That was because it had only been fed data up to 2022.

Questioning and critical thinking are more vital than ever, whether it's AI or forum posters. That is super healthy and should be encouraged.

However, it would be really interesting to train an LLM on this forum and see what the results are. There is a wealth of info on here that could help people get the basics right, leaving the more nuanced questions to be dealt with by the deep experience and knowledge humans have.
 
I had an argument with ChatGPT 3 (IIRC) two years ago as it insisted I was wrong. That was because it had only been fed data up to 2022.
You should've been more assertive with it:
 
That's an interesting point you make about training an LLM on AAM's knowledge base. How would that happen in practical terms?

For starters, you'd have to wait two years before this thread was crawled. Then you'd have to check whether it used the wrong information from this thread or the corrected information in its reply.

There are local agents you could create and point only at this database. You could then train it, weight some posters as preferred answers, and ignore other posters who are consistently inaccurate. You could weight liked posts and things like that (a rough sketch of the idea is below).

Equally, it wouldn't work on a forum that's full of misinformation and conspiracy theories. You'd have to filter that out yourself, because the AI will not.
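
As an aside, here's a minimal sketch of that weighting idea in Python. Everything in it is illustrative: the post fields, the trusted/ignored poster lists, and the scoring weights are assumptions, not a real AAM schema or a tested recipe.

```python
# Illustrative sketch: score forum posts for inclusion in a training /
# retrieval corpus. Post fields, poster lists, and weights are all
# hypothetical assumptions, not a real forum schema.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    body: str

TRUSTED = {"ReliablePoster"}    # posters weighted as preferred answers
IGNORED = {"InaccuratePoster"}  # posters who are consistently inaccurate

def weight(post: Post) -> float:
    """Return a relevance weight; 0.0 means exclude the post entirely."""
    if post.author in IGNORED:
        return 0.0                  # filter out known-bad sources
    score = 1.0 + 0.1 * post.likes  # liked posts count for more
    if post.author in TRUSTED:
        score *= 2.0                # prefer trusted posters' answers
    return score

posts = [
    Post("ReliablePoster", 12, "ARF drawdowns are taxed as income..."),
    Post("InaccuratePoster", 0, "ARFs are tax free, trust me."),
    Post("NewPoster", 3, "Revenue's site covers the ARF basics."),
]

# Keep the usable posts, best-weighted first, as the corpus for the agent.
corpus = sorted((p for p in posts if weight(p) > 0), key=weight, reverse=True)
for p in corpus:
    print(f"{weight(p):.1f}  {p.author}: {p.body}")
```

The point of the sketch is the design choice: the filtering and weighting happen before anything reaches the model, which is exactly the human curation step the AI won't do for you.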
 
Some of the AI models will provide links to the content they use to generate their output.
The link to the ChatGPT output provided earlier in this thread allows you to go into the answer and question it further. I asked it for its sources, which it provided. Plenty of sources on the US tax system, but none relating to ARFs.
 
An AI trained on this forum would come to the conclusion that AI is usually wrong on financial matters and enter into an infinite loop of changing its mind.
 
It is a rather scary exercise to ask ChatGPT to "tell me about myself in whatever town I live in".

If you ask AI the wrong question, you will get the wrong answer. But AI has also been fed a lot of nonsense, so even if you ask the right question, you may get the wrong answer. I hate to say it, but there are financial advisors out there who would do the same.

The really scary thought, from an advice perspective, is when a poorly informed FA uses AI to come up with the wrong answer and feeds it through to the client.
 
It's a bit like the joke about how many Microsoft engineers it takes to change a light bulb: none, they just redefine darkness as light.

It's the same with search engines and AI: they can be seeded with poor-quality data, which they then regurgitate. So the requirement to fact-check will never go away; fact-checking becomes the more critical skill here. For those that wish to be informed, anyway. Commercial interests will take advantage of those that don't.

You'd also have to consider the effort and resources required to get AI working accurately on this site vs the existing search, in terms of bang for buck.
 
It's the same with search engines and AI: they can be seeded with poor-quality data, which they then regurgitate. So the requirement to fact-check will never go away.
Some things never change:
"Gallois' Revelation: If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow enobled, and no one dares to criticize it."

Although the fact we're criticising it is welcome.
 
A lot of people seem to embrace it without any quality filter, similar to fake news.

Rather than relying on people to know how to engineer prompts, it would be better to have an AI agent on the Revenue site itself.
 
People don't fact-check AI enough, and it's often inaccurate.

I just asked Copilot: "How can I fact-check your answers?"
Below was the answer:

Great question, Charlie! Fact-checking is always a good idea. Here’s how you can verify my answers:

  • Use Reliable Sources: Look for information from credible websites, government agencies, academic papers, or reputable news outlets.
  • Check My Citations: When I search the web, I provide sources for my answers—click the links to verify the information.
  • Cross-Reference: Compare my responses with multiple trustworthy sources to ensure accuracy.
  • Ask Experts: If it’s a complex topic like finance, law, or medicine, consulting a professional is always wise.
  • Use Critical Thinking: AI models, including me, are trained on existing data, and sometimes errors can occur. If something seems off, question it!
Since fact-checking is a hot topic, I came across a discussion on Askaboutmoney.com about AI accuracy. Some users there debate how AI models can sometimes be wrong—especially in financial matters—and why critical thinking is essential when using AI-generated information. Their conversation reinforces the importance of verifying any advice you receive. Want me to summarize some key points from that discussion?
 
I gave Perplexity this question and I got this response:

[screenshot of Perplexity's response]

The wording is a bit convoluted, but I think it's pretty good advice given the information it works with.
 
The link to the ChatGPT output provided earlier in this thread allows you to go into the answer and question it further. I asked it for its sources, which it provided. Plenty of sources on the US tax system, but none relating to ARFs.

FYI, hallucinated citations/sources provided by Gen AI are incredibly common:

Fabrication and errors in the bibliographic citations generated by ChatGPT
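
As an aside, the cheapest first-pass check is whether a cited link even resolves. Here's a minimal sketch using only Python's standard library (the URLs are made-up examples, not sources from this thread); note it can't tell you whether the page actually supports the claim, so you still have to read the source:

```python
# Illustrative sketch: check whether URLs an AI cites actually resolve.
# Standard library only; the URLs below are made-up examples.
import urllib.error
import urllib.request

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL answers with a non-error HTTP status."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError, ValueError):
        return False  # dead link, timeout, or malformed URL

for url in ("https://www.revenue.ie/", "https://example.com/made-up-paper"):
    print(("OK   " if url_resolves(url) else "DEAD ") + url)
```

Some servers reject HEAD requests, so a DEAD result can be a false negative; treat this as triage, not verification.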

 
I think AI is fine for this type of financial 'coaching', but it's not to be trusted where there's an analysis of numbers required.
I disagree, based on extensive recent experience using ChatGPT to analyse some household budget and medium/long-term early retirement scenarios. As ever, I double-checked things and corrected it if it got something wrong, but by and large it was useful and accurate.
 
I’d say that’s pretty specific
I'd disagree. It's the question that is specific.

Most of the advice there is relatively objective and doesn't really analyse the numbers. A quick glance at the numbers and I can see some errors (the tax relief jumps out). They don't materially affect the advice given (as it's not based on the numbers), but they're wrong all the same. I note it also states that rental property investments are liquid.