A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.
Maybe people need to learn that AI hallucinates
Yeah, I’m mind-blown how, after 3 years, people still don’t know how to use LLMs effectively in the use cases where they bring value (by reducing work time).
Using AI like this has helped me enormously in work and life. Like, I learned a lot of C and C++, how Linux kernel modules work, how PO/POT files work; it helped me with translations, introduced me to music production, and helped me set up appFlowy and sort out general Windows/Linux issues.
Maybe the owners of LLMs need to be held responsible for the problematic software they release
There’s no problem here. User error
you misspelled “is fucking wrong all the goddamn time”
It would be more accurate to say that, rather than knowing anything at all, they have a model of the statistical relationship between a series of tokens and the tokens that follow, i.e. which words are apt to follow other words. Because the training set contains many true things, the words produced in response to queries often contain true statements, and almost always contain statements that LOOK like true statements.
Since it has no inherent model of the world to draw on, only those statistical relationships, you should check anything important.
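If the “statistical relationship between tokens” framing sounds abstract, here’s a toy sketch of the idea in plain Python, with a made-up three-sentence corpus and a hypothetical `generate()` helper: count which token follows which, then keep sampling a likely next token. Real LLMs use neural networks trained on vastly more text, but the point stands either way: nothing in this loop checks whether the output is true, only whether it is statistically plausible.

```python
# Toy sketch of "next-token statistics" (bigram counts) -- not a real LLM.
# The corpus and the generate() helper are made up for illustration.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# For each token, count which tokens follow it and how often.
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Produce text by repeatedly sampling a likely next token.

    The output looks fluent because it mimics local word order,
    but nothing here checks whether the result is true.
    """
    token, out = start, [start]
    for _ in range(length):
        counts = next_counts.get(token)
        if not counts:
            break
        candidates, weights = zip(*counts.items())
        token = random.choices(candidates, weights=weights)[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))
```

Run it a few times and you get fluent-looking remixes, something like “the dog sat on the mat . the cat chased …”: grammatical shape, no grounding.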
You say “more accurate”, but all I see is a very roundabout way of saying “fucking wrong all the goddamn time”.
It produces things that appear to be cohesive sentences. There is no reason to assign correctness to a sentence.
Maybe you should tell that to the companies that shove it into every crevice of every website and app. Why is it on search results? Why is it summarizing emails? Why is it literally doing anything? It’s useless. Actually, it’s less than useless: it’s misleading and harmful, and the companies should be held liable for it.
So then what’s the use of the program if it uses a bunch of energy to just make shit up?
Sometimes you need a machine that makes things up according to a given specification.
Because it makes up things that are 99% correct, and in some areas that 99% plus verification and expansion can beat the 100% manual route time-wise.
What models are you seeing where things are 99% correct? Google’s search chatbot can’t even keep Windows vs. Mac hotkey commands straight.
It’s pretty good at getting grammar correct.
And when it hallucinates harmful things, protections need to be put on the output.
OK, so explain specifically what this means.
If you have a service, and that service is generating things that harm people, you should have to stop it.
We value the gains, both immediate and presumed, more than the harm.
Surely. Otherwise it’d be shut down. Like they did with my gun machine :-(