Dr Strangelove
Quote: "I think AI is fine for this type of financial 'coaching', but it's not to be trusted where there's an analysis of numbers required."

I disagree, based on extensive recent experience using ChatGPT to analyse some household budget and medium/long-term early retirement scenarios. As ever, I double-checked things and corrected it when it got something wrong but, by and large, it was useful and accurate.
Quote: "I'd say that's pretty specific."

I'd disagree. It's the question that is specific.
Quote: "I disagree based on extensive recent experience ..."

Let's look at the example posted.
Quote: "It would be interesting to hear forum members' experience and tests with it."

I tested it with questions taken from QFA practice exams (hardly a difficult test). It was wrong about 50% of the time.
Quote: "I tested it with questions taken from QFA practice exams (hardly a difficult test). It was wrong about 50% of the time."

Did you sense any pattern in the type of questions it got wrong or right? Or was it completely random?
Quote: "Did you sense any pattern in the type of questions it got wrong or right?"

I didn't see any particular pattern. Numbers were sometimes right, sometimes wrong. It was almost as if someone not great with numbers was doing the calculations (I suppose that's likely the case in the data it was trained on). Sometimes it hallucinated regulations that didn't exist, or just got them backwards (something allowed wasn't allowed, etc.).
Quote: "If that's true, I wonder if the user could possibly increase the accuracy of the responses by wording the questions in a particular way."

I've been using LLMs since late 2022.
If you badger an LLM, it will hallucinate a false response in many cases.