"I disagree based on extensive recent experience."
Let's look at the example posted.

"It would be interesting to hear forum members' experience and tests with it."
I tested it with questions taken from QFA practice exams (hardly a difficult test). It was wrong about 50% of the time.

"I tested it with questions taken from QFA practice exams (hardly a difficult test). It was wrong about 50% of the time."
Did you sense any pattern in the type of questions it got wrong or right? Or was it completely random?

"Did you sense any pattern in the type of questions it got wrong or right?"
I didn't see any particular pattern. Numbers were sometimes right, sometimes wrong. It was almost as if someone not great with numbers was doing the calculations (I suppose that's likely the case in the data it was trained on). Sometimes it hallucinated regulations that didn't exist, or just got them backwards (something allowed wasn't allowed, etc.).

If you badger an LLM, it will hallucinate a false response in many cases.

"Interesting that Askaboutmoney posts are being used to feed the Gemini model."
Isn't it the case that most or all such publicly accessible AIs/LLMs are being trained on the complete visible (i.e. non-deep/dark) web contents? That's part of the reason some of the results can be of variable/questionable quality: there's a lot of rubbish on the web. E.g. the "AI recommends that people eat at least one small rock daily" example that came about because of a parody article in The Onion...

"Isn't it the case that most or all such publicly accessible AIs/LLMs are being trained on the complete visible (i.e. non-deep/dark) web contents?"
Yeah, pretty much. LLMs are trained on massive datasets derived from publicly available online sources like websites, books, and code repositories.

"People don't fact check AI enough and it's often inaccurate. Mostly right means it's unreliable."
That could apply to most humans too.

"Later, the crew were wrongly advised that the Irish authorities had no legal authority to board their vessel. It subsequently emerged in court that the Dubai criminals were relaying legal advice from ChatGPT."
From what I've seen from other sources, including the Garda Press Office video on YouTube, ChatGPT may not have been incorrect. The gangsters (and/or their paymasters in Dubai) seemed to have thought that they were in waters outside the jurisdiction of the Irish authorities when, in fact, they were inside. So they may simply have given the AI an inaccurate prompt/question based on their misunderstanding of the situation...

"From what I've seen from other sources, including the Garda Press Office video on YouTube, ChatGPT may not have been incorrect. The gangsters (and/or their paymasters in Dubai) seemed to have thought that they were in waters outside the jurisdiction of the Irish authorities..."
Or, it may still have been incorrect. Even if the vessel was on the high seas, there are a couple of different bases on which boarding might have been permissible under international law.

"So they may simply have given the AI an inaccurate prompt/question based on their misunderstanding of the situation..."
Or, and this is extremely common, being highly motivated to hear that boarding was not authorised, they may have given prompts/questions designed to elicit advice to that effect. (We see this quite often on Askaboutmoney: people give the facts which are favourable to the view they would like to take of their situation, but getting the facts which are less favourable to them can be like drawing teeth.)

"Askaboutmoney is better than a financial advisor, as different views are given."
I would absolutely and very seriously doubt this, Brendan.

"There is a huge risk with a financial advisor that you are sold products which benefit the advisor rather than the client."
Is this really true, Brendan?