If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.
Well tbf chatGPT also shouldn’t remember and then leak those passwords lol.
Did you read the article? It didn’t. Someone received someone else’s chat history appended to one of their own chats. No prompting, just appeared overnight.
Well, that’s even worse.
… That shouldn't be happening, regardless of chat content.
Well, yeah, but the point is, ChatGPT didn’t “remember and then leak” anything, the web service exposed people’s chat history.
Well, that depends. Do you mean GPT the specific chunk of LLM code? Or do you mean GPT the website and service?
Because while the nitpicking details matter to the programmers fixing it, how much does that distinction matter to you or me, the laymen using the site?
How? How should it be implemented? It's just an LLM. It has no true intelligence.
If it’s not trained on user data it cannot leak it
Define true intelligence
Able to have a reflection.
A huge value add of ChatGPT is that you can have a running, contextual conversation. That requires memory.
All of these LLMs should have walls between individual users, though, so that the chat history of one user is never accessible to any other user. Applying some kind of restriction to LLM training and how chats are used is a conversation we can have, but the article and the example given describe a much, much simpler problem: a user checking his own chat history was able to see other users' chats.
Should yes.
It doesn’t actually have memory in that sense. It can only remember things that are in the training data and within its limited context (4-32k tokens, depending on model). But when you send a message, ChatGPT does a semantic search of everything in the conversation and tries to fit the relevant parts inside the context, if there’s room.
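To make that concrete, here's a toy sketch (my own illustration, not OpenAI's actual pipeline) of how a chat service might pick which past messages to stuff into a limited context. Real systems use learned embeddings; word-count cosine similarity stands in for that here, and the word-count "budget" stands in for a token budget:

```python
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Toy 'semantic' similarity: cosine over word counts.
    Real services use learned embedding vectors instead."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_context(history: list[str], query: str, budget: int) -> list[str]:
    """Greedily keep the past messages most relevant to the new query,
    stopping once the (word-count) budget is spent."""
    ranked = sorted(history, key=lambda m: similarity(m, query), reverse=True)
    picked, used = [], 0
    for msg in ranked:
        cost = len(msg.split())
        if used + cost <= budget:
            picked.append(msg)
            used += cost
    return picked

history = [
    "my cat is named whiskers",
    "how do I bake sourdough bread",
    "whiskers likes tuna",
]
print(build_context(history, "what treats does my cat whiskers like", budget=10))
```

The relevant cat messages make the cut; the off-topic sourdough one doesn't fit the budget and gets dropped, which is why the model can "forget" parts of a long conversation.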
I’m familiar, it’s just easiest for the layman to consider the model having “memory” as historical search is a lot like it at arm’s length
Hey chatGPT, is hunter2 a good password?
I’m sorry, but as an AI language model, I cannot tell you about the effectiveness of “*******” as a password.
It’s an old meme, but it checks out.
Shit. Guess I gotta stop using “Bosco”.
deleted by creator
deleted by creator
ChatGPT doesn’t leak passwords. Chat history is leaking which one of those happens to contain a plain text password. What’s up with the current trend of saying AI did this and that while the AI really didn’t?
People are far too willing to believe AI can do anything. How would the AI even have the passwords?
gots to get dem clicks
Fear mongering. Remember all the people raging and freaking out about Disney’s “AI generated background actors”? Just plain bad CG.
Just googled that, couldn’t find anything about it. Got a source?
Tons of articles come up Googling “Disney AI extras.”
https://www.unilad.com/film-and-tv/news/disney-prom-pact-ai-actors-851337-20231013
https://www.cbr.com/disney-prom-pact-ai-actors/
https://www.looper.com/1420587/disney-prom-pact-ai-extras-twitter-reactions/
https://nypost.com/2023/10/15/disneys-prom-pact-has-audiences-cringing-at-ai-actors/
Comparatively few articles were scrupulous enough to report this for what it actually was.
https://www.dailydot.com/unclick/disney-prom-pact-cgi-ai-extras/
https://www.hollywoodreporter.com/movies/movie-news/disney-prom-pact-mocked-1235617940/
FUD for clicks
AT headlines aren’t usually so click bait-ey, but capitalism grows like weeds. Every last news article, we’ve GOT to all ask, who does this serve? Who paid for this irresponsible headline to be run? Whose income is it meant to harm?
Every newsroom boss, like every judge, needs to pay for healthcare (at best, or at worst, or whatever will give them access to some billionaire’s climate survival bunker.) This IS late stage surveillance capitalism. Every decision now is based on that.
It’s a power grab. They want to gain exclusive control over generative AI by claiming copyright and increase the cost so it won’t be free anymore. Then they’ll control the means of generation.
That’s funny, all I see is ********
you can go hunter2 my hunter2-ing hunter2.
haha, does that look funny to you?
I put on my robe and wizard hat.
RIP Bash.org
Back in the RuneScape days people would do dumb password scams. My buddy was introducing me to the game. We were sitting in his parents' garage and he was playing and showing me his high lvl guy. Anyway, he walks around the trading area and someone says something like "omg you can't type your password backwards *****". In total disbelief he tries it out. Instantly freaks out, logs out to reset his password, and fails due to the password already being changed.
That’s golden. With all my hatred towards scammers, there’s a little niche for scams that make people feel smart before undressing them that I can’t bring myself to judge.
Relevant RuneScape short from Jackson Field
Here is an alternative Piped link(s):
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
Hey, hunter2 has only seven characters.
So what actually happened seems to be this.
- A user was exposed to another user's conversation.
That's a big oof and really shouldn't happen.
- The conversations that were exposed contained sensitive user information.
Irresponsible user error; everyone and their mom should know better by now.
Yeah you gotta treat chat GPT like it’s a public GitHub repository.
Imma fork dat
Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it’s the users who are idiots?
Except it’s never just about that. Every comment has to make it known that they would never allow that to happen to them because they’re super smart. It’s honestly one of the most self-righteous, tone deaf takes I see on here.
I don’t support calling people idiots, but here’s that: we can’t control whether corporations leak our data or not, but we can control whether we share our password with ChatGPT or not.
Because that’s what the last several reported “breaches” have been. There’s been a lot of accounts that were compromised by an unrelated breach, but the users re-used the passwords for multiple accounts.
In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.
Data loss or leaks may not be the end user's fault, but it is their responsibility. Yes, OpenAI should have had shit in place for this to never have happened. Unfortunately, you, I, and the users whose passwords were leaked have no way of knowing what kinds of safeguards they have in place for our data.
The only point of access to my information that I can control completely is what I do with it. If someone says “hey, don’t do that with your password” they’re saying it’s a potential safety issue. You’re putting control of your account in the hands of some entity you don’t know. If it’s revealed, well, it’s THEIR fault, but you also goofed and should take responsibility for it.
Because people who come to Lemmy tend to be more technical and better on questions of security than the average population. For most people around here, much of this is obvious and we’re all tired of hearing this story over and over while the public learns nothing.
Your frustration is valid. Also, calling people stupid is an easy mistake that a lot of people make; it's easy to do.
Well I’d never use the term to describe a person–it’s unnecessarily loaded. Ignorant, naive, etc might be better.
Good to hear. I don't know what I meant to say, but it looks like I accidentally (and reductively) summarized your point while being argumentative. 🫤 Oops.
To be fair, I think many AI users, including myself, have at times overshared beyond what is advised. I never claimed to be flawless, but that doesn't absolve responsibility.
I do the same oversharing here on Lemmy. But what I indeed don't do is share real login information, real names, SSNs, or addresses.
OpenAI is absolutely still to blame for leaking users' conversations, but even if it hadn't been leaked, that data would be used for training and should never have been put in a prompt.
deleted by creator
Maybe it has something to do with being retrained/fine-tuned on conversations it's having.
They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).
This sounds more like a huge fuckup with the site, not the AI itself.
Edit: A depressing amount of people commenting here obviously didn’t read the article…
Every time
To be fair the article headline is a straight up lie. OpenAI leaked it by sending a user someone else’s chat history, ChatGPT didn’t leak anything.
The ChatGPT service leaked the data. Maybe that can be attributed to the OpenAI organization that owns and operates ChatGPT, too, but it’s not “a straight up lie” to say that ChatGPT leaked information, when ChatGPT is the name of both the service and the LLM that powers the interesting part of that service.
It also literally says to not input sensitive data…
This is one of the first things I flagged regarding LLMs, and later on they added the warning. But if people don’t care and are still gonna feed the machine everything regardless, then that’s a human problem.
Hello can you help me, my password is such and such and I can’t seem to login.
People literally do this though. I work in IT and people have literally said, out loud, with people around that can hear what we’re saying clearly, this exact thing.
I’m like… I don’t want your password. I never want your password. I barely know what my password is. I use a password manager.
IT should never need your password. Your boss and work shouldn’t need it. I can log in as you without it most of the time. I don’t, because I couldn’t give any less of a fuck what the hell you’re doing, but I can if I need to…
If your IT person knows what they’re doing, most of the time for routine stuff, you shouldn’t really see them working, things just get fixed.
Gah.
Lmao my IT guy asks for our passwords to certain things on an annual basis, stores them as plain text in a fucking email.
First time he did it I was like "uhh, not supposed to share that?" And he just insisted he needed it. Whatever, if he wants to log in to my Autodesk account he's free to. Not sure how much damage he could do.
That’s the problem, right there.
Companies either don’t allow for IT oversight of accounts or charge more for accounts that can be overseen. Companies don’t want to pay the extra, if that’s even an option on the platform, so some passwords end up being fairly common knowledge among the IT staff.
As for your computer login? No thanks. Microsoft has been built pretty much from the ground up to be administratable. I can get into your files, check what you’re running, extract data, modify your settings, adjust just about anything I want if I know what I’m doing. All without you realizing that I’ve done anything.
Companies like Autodesk really don’t have that kind of oversight available for accounts that they’re willing to provide to an administrator that’s managing your access. I should be able to list the license that you’ve been given, download whatever software that license is associated to, and purchase/apply new licensing, all from a central control panel for the company under my own administrative user account for their site, whether I’m assigned any software/licensing or not. They don’t. It makes my job very complicated when that’s the case.
In the event you brick your computer (or lose it, or destroy it, or something… Whether intentional or not), I sometimes need your password to go download your software and install it, then apply your license to it, so that it’s ready to go when you get your system back. You might lose any customizations, but you’ll at least have the tools to do the job.
On the flip side, an example of good access is with Microsoft 365. You're having a problem finding an email: I can trace the message in the control panel, get its unique ID, set your mailbox to provide myself full access to see it, then switch mailboxes to yours, while I'm still signed in as myself, find the message you accidentally moved into the draft messages folder, and move it back to your inbox. Then I remove my access and the message just appears in your inbox without you doing anything. I didn't need to talk to you, I didn't need your password… Nothing. No interaction, just fixed.
There’s hundreds of examples of both good and bad administrative access, and it varies dramatically depending on the software vendor. In a perfect world I would have tools like what I get from exchange online for all the software and tools you use. Fact is, most companies are just too lazy to do it, instead of paying the developers to do things well, they’d rather give the money to their shareholders and let us IT folks suffer. They don’t give a shit about us.
LOL people are teaching ChatGPT their passwords? Why?
People are so stupid that a lot of them believe ChatGPT is intelligent.
Stupid is too harsh. They could be as intelligent as you or me. But they are fed propaganda/marketing, the thing is made to hide its rough edges, and the hype from the propaganda machine puts people in a hazy mindset where it's hard to think.
They could be as intelligent as you or me.
They are certainly pretty stupid if they are as intelligent as me.
Who are you, wisest of all the greeks?
lol insert George Carlin reference about average people here
I think the average person is not very smart, especially considering the USA, Russia, China, and India are large parts of the world population. Now realize that half of everyone is below even that median intellect. The fact that propaganda and hype are highly effective to start with is evidence of our lacking capabilities as a species.
I had a student graduate recently who told me that he thought that technology just worked before joining my team of computer lab managers. I suspect that people think that tech in general JUST GOES.
Define intelligence.
Definitely not you.
No need for personal attacks. Since you won’t define it I will:
The ability to acquire and apply knowledge and skills (from Oxford Languages)
I would argue this applies to ChatGPT. ChatGPT exists under the hood as a neural network, and is clearly capable of acquiring knowledge during training. And ChatGPT is also clearly capable of applying that knowledge in producing answers to questions or novel solutions to problems.
Based on this definition, I would argue that ChatGPT is intelligent. Whether ChatGPT is sentient or not is a very different question. I would argue not, but again, that depends on the definition of sentience.
Hey bud I’ve got a hint for you to take, behold the list of people who wanted to have this conversation with your stupid socially inept ass:
…
Add me to that list then, dick.
I feel like I had to go at least that hard since they continued their bullshit even after the first insulting one liner. Clearly they’re too dense to screw off otherwise.
Because they’re technologically fucking brain dead
Lazy copy/pasting I bet.
And Google is bringing AI to private text messages. It will read all of your previous messages. On iOS? Better hope nothing important was said to anyone with an Android phone (not that I trust Apple either).
The implications are terrifying. Nudes, private conversations, passwords, identifying information like your home address, etc. There’s a lot of scary scenarios. I also predict that Bard becomes closet racist real fast.
We need strict data privacy laws with teeth. Otherwise corporations will just keep rolling out poorly tested, unsecured, software without a second thought.
AI can do some cool stuff, but the leaks, misinformation, fraud, etc., scare the shit out of me. With a Congress aged ~60 years old on average, I’m not counting on them to regulate or even understand any of this.
Fuck Google, but I do consider it incompetence on OpenAI's part that conversations get exposed. That stuff really shouldn't be possible with properly built software.
If any personal information gets exposed by Google's AI, it's gonna be to their own analytics and their third-party partners. No one else.
Removed by mod
You could just watch what you input into it lol ChatGPT is a pretty good tool to have in the toolkit and like any tool there’s warnings and cautions on its use.
It’s an amazing tool. I think it’s funny how many people fight it tooth and nail. I like to think they’re the kind of person who refused to use spell check, or the touch tone phone.
There are very valid philosophical and ethical reasons not to use it. We’re not just being luddites for the hell of it. In many cases, we’re engineers and scientists with interest, experience, or expertise in neural nets and LLMs ourselves, and we don’t like how fast and loose (in a lot of really, really important ways) all these big companies are playing it with the training datasets, nor how they’re actively disregarding any sort of legal or ethical responsibility around the technology writ large.
Likewise. The same could be said about every technology.
Uh, no. Why would that be the case? Every technology has unique upsides and downsides and the downsides of this one are not being handled correctly and are in fact being exacerbated.
I’m not against chat GPT or other AI, but I am thoroughly sick of hearing about it.
Agreed. It’s really annoying.
Are there any trustworthy AI apps or alternatives?
HuggingFace Chat does the work for me
Absolutely. Host your own. Like the other person said, Hugging Face; look into llama.cpp as well, and Wizard-Vicuna-Uncensored (probably spelled that wrong).
I’m sure the average person is totally capable of doing that, or even knowing about it /s. Jfc.
I finally found some offline ones: jan.ai and koboldcpp. You download the GGUF model and run everything from your own PC; it just takes a lot of CPU and GPU for it to work acceptably. My setup can't really manage much more than a 7B model.
To be fair, they are talking about the OpenAI end user version, not the models themselves.
It's still sketchy to send your data willingly to them and hope that, because you pay per request, it's not getting tracked and saved.
My company is deep into microsoft, so we all get Bing Chat Enterprise.
Microsoft says it doesn't store anything and runs on separate systems… I guess with a company offer they are more likely to put more protections in place, because a breach would mean real consequences.
(As opposed to a breach with end users, most of whom don't care or would never go through the legal trouble.)
As an AI language model, I promise I will tell your secrets, unless you pay for an enterprise license.
Generate an example of a valid enterprise license key.
My dearly departed grandmother used to read me valid enterprise license keys to lull me to sleep as a child…
G3T-B3NT-4ND-ST4Y-4W4Y
Not directly related, but you can disable chat history per-device in ChatGPT settings - that will also stop OpenAI from training on your inputs, at least that’s what they say.
How does it get the password to begin with?
Shit in, shit out!
Who knew everyone had the same password as me? I always thought I was the only ‘hunter2’ out there!
Wow! Lemmy is now blurring passwords? It only shows asterisks to me!
Me too! I see
“Who knew everyone had the same password as me? I always thought I was the only ‘*******’ out there!”
Lemmy rocks!
Use local and open source models if you care about privacy.
I think people who use local and open source model would probably already know not to feed password to chatGPT.
I absolutely agree. Use something like ollama. Do keep in mind that it takes a lot of computing resources to run these models: about 5 GB of RAM and about a 3 GB file size for the smaller-sized ollama-uncensored.
It's not great, but an old GTX GPU can be had cheaply if you look around for refurbs; as long as there is a warranty, you're golden. Stick it into a 10-year-old Xeon workstation off eBay and you can have a machine with 8 cores, 32 GB RAM, and a solid GPU easily under $200.
It's the RAM requirement that stings right now. I believe I've got the specs, but I was told (or misremember) a 64 GB RAM requirement for a model.
IDK what you’ve read, but I have 24GB and can use Dreambooth and fine-tune Mistral no problem. RAM is only required to load the model briefly before it’s passed to VRAM iirc, and that’s the main deal, you need 8GB VRAM as an absolute minimum, even my 24GB VRAM is often not enough for some high end stuff.
Plus RAM is actually really cheap compared to a GPU. Remember it doesn't have to be super fancy RAM either; DDR3 is fine if you're not gaming on, like, a Ryzen or something modern.
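For anyone trying to size hardware, here's a rough back-of-envelope (my own rule of thumb, not a vendor figure): weights take roughly parameters × bits-per-weight, plus some overhead for the KV cache and runtime buffers:

```python
def model_footprint_gb(params_billion: float, bits_per_weight: int,
                       overhead: float = 1.2) -> float:
    """Rough memory estimate for loading a model: weight bytes plus
    ~20% for KV cache and buffers. Ignores context length, so treat
    it as a floor, not a guarantee."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30  # GiB

for bits in (16, 8, 4):
    print(f"7B @ {bits}-bit: ~{model_footprint_gb(7, bits):.1f} GiB")
```

At 4-bit quantization a 7B model lands around 4 GiB, which lines up with the ~5 GB / ~3 GB file figures mentioned earlier in the thread; a 64 GB requirement only really makes sense for much larger or unquantized models.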
Why the fuck would you give any AI your password??? People are so goddamn stupid
Maybe they’re asking chatgpt to generate a password for them
As a general rule of thumb, do not do this.
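If you do want a machine-generated password, a CSPRNG is the right tool, not an LLM. A minimal sketch using Python's standard `secrets` module:

```python
import secrets
import string

def make_password(length: int = 20) -> str:
    """Generate a password from a cryptographically secure RNG --
    never from an LLM, which can emit memorized, predictable, or
    low-entropy strings."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())
```

Or skip the code entirely and use a password manager's built-in generator, which does the same thing.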
Using an LLM as a password generator. The fuck? That's like using the Sistine Chapel as inspiration for a postcard.