Academics apologise for AI blunder implicating Big Four
A group of academics has apologised to the Big Four after it emerged that a submission they made to the Australian Parliament contained AI-generated material implicating the firms in non-existent scandals.
Replies (10)
Dreadful example of how AI sometimes makes things up - AT THE MOMENT. The large language models are still learning and will only get BETTER. They will never again be as stupid as they are now (and they're not that stupid generally).
I find it helps to think of ChatGPT as a bright and eager-to-please 15-year-old with access to a world of data.
It tries to help but needs very clear and specific instructions. It occasionally makes things up when it can't easily find the facts you have requested, and it hopes you won't notice. So you always need to check what it produces. You can then ask it if it is sure and to check its sources.
Many people are disappointed with what ChatGPT produces, but that's often because they didn't provide enough context for their request.
I get your enthusiasm, Mark, but it's the same old SISO, and that cannot be changed, because quite simply the input is either a determination or an interpretation but very few actual facts. Why is that, Mark? It's because everything we have learned about most things is changing, which means those changes will change. For processing loads of data, yes, every time.
Mark Lee: I regret to disagree, for the simple reason that AI produces a considerable volume of factually inaccurate content in so little time that inevitably it will be fed back into 'the system' until it is impossible to look back and determine what is true and what is not.
So on that basis the learning will get worse, not better, as there is no current method of 'watermarking' to distinguish what has been generated by AI from what has been properly researched. There are sayings about taking shortcuts which should be heeded here.
It's a bit like when people who complain of bad grammar/spelling themselves use bad grammar/spelling. Incidentally, I hear good grammar/spelling is the only currently reliable thing about ChatGPT etc. (which makes it easier to spot unless perhaps you deliberately add bad grammar/spelling to disguise it).
Come on, it is so obviously artificial and absolutely useless except for some basic rudimentary tasks, and will be so for a long, long time after we are all dead. The main danger is from the idiots relying on it.
Millennium Bug
Sarbanes Oxley
360 Reviews
MTD
Cryptocurrency
AI
all will be "pining for the fjords"
Feel free to add other examples!