ChatGPT is usually wrong on financial matters

Go back and read what you actually said.

You had a chin wag - Irrelevant, don't be cheeky
You wondered was it right - Correct
You were told it wasn't by two experts who deal with the issue every working day - Already dealt with this. I don't know the posters from Adam or what their backgrounds are, and in any case I wouldn't 100% trust something told to me on an internet forum.

But you said you would check with Revenue anyway. - It's a free country; I can check what I want. It's not a crime, and there is no reason to drag someone through the mud for it.

I think you should also get Senior Counsel's opinion on it as well in case everyone other than ChatGPT has been getting it wrong for years. - This is a nonsense statement and shows you're not listening to a word I said here.
 
People don't fact-check AI enough, and it's often inaccurate. "Mostly right" still means unreliable.

"To err is human" is no carte blanche for unrestricted AI quoting.

Perhaps others feel differently...
 
Dare I say the main topic here is AI rather than wading into a certain poster's comments and interpretations of ChatGPT?

Quick thoughts on the main topic: AI is here to stay, and there are many more AI variants than just ChatGPT (Gemini, Claude, etc.).

AI is only as good as the data it is fed. I had an argument with ChatGPT-3 (IIRC) two years ago as it insisted I was wrong. That was because it had only been fed data up to 2022.

Questioning and critical thinking are vital, more so than ever, whether it's AI or forum posters. That is super healthy and should be encouraged.

However, it would be really interesting to train an LLM on this forum and see what the results are. There is a wealth of info on here that could help people get the basics right, with more nuanced questions then dealt with by the deep experience and knowledge humans have.
 
I had an argument with ChatGPT-3 (IIRC) two years ago as it insisted I was wrong. That was because it had only been fed data up to 2022.
You should've been more assertive with it:
 
That's an interesting point you make about training an LLM on AAM's knowledge base. How would that happen in practical terms?

For starters, you'd have to wait two years before this thread was crawled. Then you'd have to check whether it used the wrong information on this thread or the corrected information in its reply.

There are local agents you could create and point only at this database; you could then train it, weight some posters as preferred answers, and ignore posters who are consistently inaccurate. You could also weight liked posts and things like that (see the sketch below).

Equally, it wouldn't work on a forum that's full of misinformation and conspiracy theories. You'd have to filter that out, because the AI will not.
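
For illustration, here's a minimal sketch in Python of that weighting idea. Everything in it is hypothetical: the Post structure, the reputation table, and the sample post texts are all invented for the example, and AAM exposes no such data. Real reputation scoring would need far more care:

```python
# Minimal sketch: scoring forum posts before feeding them to a model as context.
# All names here (Post, POSTER_WEIGHT, the sample texts) are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    poster: str
    likes: int
    text: str

# Hand-maintained reputation weights; consistently inaccurate posters get 0.
POSTER_WEIGHT = {"trusted_expert": 1.0, "average_user": 0.5, "known_inaccurate": 0.0}

def score(post: Post) -> float:
    """Combine poster reputation with community likes."""
    reputation = POSTER_WEIGHT.get(post.poster, 0.5)  # unknown posters stay neutral
    return reputation * (1 + post.likes)

def top_context(posts: list[Post], k: int = 3) -> list[str]:
    """Pick the k highest-scoring posts to use as model context."""
    ranked = sorted(posts, key=score, reverse=True)
    return [p.text for p in ranked[:k] if score(p) > 0]  # drop zero-weight posters

if __name__ == "__main__":
    posts = [  # invented examples, not real AAM posts
        Post("trusted_expert", 12, "ARFs are subject to income tax on drawdown..."),
        Post("known_inaccurate", 30, "ARFs are completely tax free if..."),
        Post("average_user", 2, "Revenue's guidance on imputed distributions says..."),
    ]
    for text in top_context(posts):
        print(text)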
 
That's an interesting point you make about training an LLM on AAM's knowledge base. How would that happen in practical terms?
Some of the AI models will provide links to the content they use to generate their output. I have already seen AAM quoted as a source for a travel-related question I posed; the linked post was well out of date, but someone who didn't delve into the sources would not be aware.
 
An AI trained on this forum would come to the conclusion that AI is usually wrong on financial matters and enter into an infinite loop of changing its mind.
 
It is a rather scary exercise to ask ChatGPT to "tell me about myself in whatever town I live in".

If you ask AI the wrong question, you will get the wrong answer. But AI has also been fed a lot of nonsense, so even if you ask the right question, you may get the wrong answer. Hate to say it, but there are financial advisors out there who would do the same.

The really scary thought from an advice perspective is when a poor or ill-informed FA uses AI to come up with the wrong answer and feeds it through to the client.
 
It's a bit like the joke about how many Microsoft engineers it takes to change a light bulb: none, they just redefine darkness as light.

It's the same with search engines as with AI: they can be seeded with poor-quality data, which they then regurgitate. So the requirement to fact-check will never go away. Fact-checking becomes the more critical skill here, at least for those who wish to be informed. Commercial interests will take advantage of those who don't.

You'd also have to consider the effort and resources required to get AI working accurately on this site vs the existing search, in terms of bang for buck.
 
I still maintain that AI, even as it stands - and it's generally only improving - can be a useful tool if used with suitable prompts (e.g. if asking about pensions and tax relief, make sure to specify "Ireland" and don't assume that it will know this is the context), caveats and fact-checking. A lot of people seem to dismiss it altogether, which seems rash to me. There is always a risk of "garbage in, garbage out", but sometimes the input garbage is the prompt rather than the training data.
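
As a concrete illustration of scoping a prompt, here's a minimal sketch using the OpenAI Python SDK, purely as an example; any chat-style API works the same way, and the model choice and exact wording are assumptions, not recommendations:

```python
# Illustration only: pinning the jurisdiction in the system message so that
# "pension tax relief" isn't silently answered with US or UK rules.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # model choice is illustrative
    messages=[
        {
            "role": "system",
            "content": (
                "You answer questions about Irish personal finance only. "
                "Cite the relevant Revenue guidance where possible, and say "
                "explicitly when you are unsure."
            ),
        },
        {
            "role": "user",
            "content": "What are the age-related percentage limits for tax "
                       "relief on pension contributions?",
        },
    ],
)
print(response.choices[0].message.content)
```

The system message does the work the casual user usually skips; the output still needs fact-checking against Revenue's own guidance.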
 
It's the same with search engines as with AI: they can be seeded with poor-quality data, which they then regurgitate. So the requirement to fact-check will never go away.
Some things never change:
"Gallois' Revelation: If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow enobled, and no one dares to criticize it."

Although the fact we're criticising it is welcome.
 
A lot of people seem to embrace it without any quality filter, similar to fake news.

Rather than relying on people to know how to engineer prompts, it would be better to have an AI agent on the Revenue site itself.
 
People don't fact-check AI enough, and it's often inaccurate.

I just asked Copilot "How can I fact-check your answers?"
Below was the answer:

Great question, Charlie! Fact-checking is always a good idea. Here’s how you can verify my answers:

  • Use Reliable Sources: Look for information from credible websites, government agencies, academic papers, or reputable news outlets.
  • Check My Citations: When I search the web, I provide sources for my answers—click the links to verify the information.
  • Cross-Reference: Compare my responses with multiple trustworthy sources to ensure accuracy.
  • Ask Experts: If it’s a complex topic like finance, law, or medicine, consulting a professional is always wise.
  • Use Critical Thinking: AI models, including me, are trained on existing data, and sometimes errors can occur. If something seems off, question it!
Since fact-checking is a hot topic, I came across a discussion on Askaboutmoney.com about AI accuracy. Some users there debate how AI models can sometimes be wrong—especially in financial matters—and why critical thinking is essential when using AI-generated information. Their conversation reinforces the importance of verifying any advice you receive. Want me to summarize some key points from that discussion?
 
I gave Perplexity this question and I got this response:

[Perplexity's response was posted as a screenshot, which hasn't survived here.]

The wording is a bit convoluted, but I think it's pretty good advice given the information it works with.
 
The link to the ChatGPT output provided earlier in this thread allows you to go into the answer and question it further. I asked it for its sources, which it provided: plenty of sources on the US tax system, but none relating to ARFs.

FYI, hallucinated citations/sources from Gen AI are incredibly common.

Fabrication and errors in the bibliographic citations generated by ChatGPT
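
On that note, a crude first pass when checking AI-supplied sources is simply to see whether the cited links resolve at all. A minimal sketch in Python using the requests library (the example URLs are hypothetical, not ones actually cited in this thread); a live URL proves nothing about accuracy, but a dead or fabricated one is an immediate red flag:

```python
# Crude first-pass check on AI-supplied sources: do the links even resolve?
import requests

def check_links(urls: list[str]) -> None:
    """Report whether each cited URL resolves; a dead link is a red flag."""
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            status = "OK" if resp.status_code < 400 else f"HTTP {resp.status_code}"
        except requests.RequestException as exc:
            status = f"unreachable ({type(exc).__name__})"
        print(f"{status:>25}  {url}")

if __name__ == "__main__":
    # Hypothetical examples, not URLs actually cited in this thread.
    check_links([
        "https://www.revenue.ie/en/home.aspx",
        "https://example.com/made-up-arf-paper",
    ])
```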