- cross-posted to:
- hackernews@derp.foo
Stephen King: My Books Were Used to Train AI. One prominent author responds to the revelation that his writing is being used to coach artificial intelligence.
Lol. AI training is more like human awareness of a subject or style. LLMs are not Artificial General Intelligence. They have no persistent memory. They are just a complicated way of categorizing and associating subjects, mixed with a probability of what word comes next. The only thing they are really doing is answering what word comes next. Thar be no magic in them thar dragons.
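For what it's worth, the "what word comes next" framing can be sketched with a toy bigram model. This is a deliberately simplified illustration (real LLMs use transformers over subword tokens, not word counts), but the objective is the same idea: predict the next token from what came before.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng=random.Random(0)):
    """Sample the next word in proportion to observed frequency."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

# "the" is followed by cat (twice), mat, and fish in the corpus,
# so those are the only possible continuations.
print(next_word("the"))
```

Scaled up by many orders of magnitude and with learned representations instead of raw counts, that's still the core training objective.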
This is just another market hype article. AI can’t reproduce a work or replace the author. It can write a few lines that may reflect a similar style just like any human also familiar with the work and style.
Unless you want to go back to the medieval era of thought policing, all of these questions about AI training are irrelevant.
It’s an interesting philosophical question to ask whether we humans, when writing something, based on the sum total of all the things we’ve seen, heard, read, etc., aren’t just also working out which is the most likely next word to make a good story. *
One question that could be worth asking though is whether this should have been done without permission. From experience talking with authors, that’s a bigger concern than whether they’ll be replaced.
*Totally agree with you that current LLMs are a long way from that. And humans don’t work at the word level either, so the abstraction is different, but the principle might be the same.
I only play with offline AI stuff, but I usually use a model that is way more powerful than the average in the offline LLM space (Llama 2 70B). I have tried building a Leto II character from God Emperor of Dune, and I trained several LoRA layers on various data, including the entire GEoD book. The book itself had very little value. It lacks the right context scope and detail needed to do anything relevant. It becomes possible to ask about some high-level plot or character info, but I remember more than the model could accurately pull out.
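The LoRA layers mentioned above boil down to a simple trick: freeze the base weight matrix and train a small low-rank correction on top. Below is a minimal numpy sketch of that math (not the actual PEFT/Llama tooling, and the dimensions are toy-sized for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 8, 8, 2   # rank r is much smaller than the layer dims
alpha = 4.0                 # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

def lora_forward(x):
    """Base projection plus the scaled low-rank correction B @ A."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted layer starts out
# identical to the frozen base layer; training then nudges A and B.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B get gradient updates during fine-tuning, which is why stacking several LoRA adapters on a 70B model is feasible on consumer hardware.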
The really cool cutting-edge thing of the future will be authors embracing AI for their own creative inspiration, and whoever is first to fully integrate with AI. Train yourself and your characters so that a screen can generate images of characters and locations as they develop page by page. The identity of each character can build with a random seed and a persistent token, so that every read has a different accompanying look and feel. In the future I'm sure it will be possible to limit a character's context to the information read by the user (this is a major key to individualized learning - probably the biggest future application of LLMs). A character/user matched scope would make interaction and direct questioning next level cool. Forget book clubs, now you converse with a circle of characters, or the author as a digitized character.
AI is the framework. All the hype and BS is because of stupid greedy people trying to make AI a product. This is next-level privacy-invasive nonsense. Proprietary AI must die. The future is offline AI as a simple tool utilized to create a richer or more accessible future experience.
I’ve been trying to make GPT-4 turn my ideas into short stories, but that has turned out to be a really tough nut to crack. I need to specify my vision in excruciating detail, and I still need to spend hours editing the text. Occasionally, GPT goes totally off the rails, so I need to hold its hand every step of the way in order to get what I want.
Other than that, it’s been very fun and educational.
You could compare the situation to a farmer using a horse to plow the field. Without the horse, it’s going to take forever, but with the horse you can get stuff done. It’s just that you have to steer the horse all the time. You can’t expect the horse to do all the work, but you can make it do a lot of it.
all i have to say is show me one truly original thought that doesn’t have a basis in other thoughts and i’ll start thinking this AI learning other people’s styles is a bad thing.
just one.
unfortunately for everyone poo pooing it, the fact of the matter is everything is a remix of a remix of a remix. ai is just one more step in a long line of steps of mimicry and adaptation.
and if someone carbon copies someone else’s work, that’s really obvious and then gets called the fuck out. but if you are taking someone’s style and combining it with subjects they’d never use and other styles, then you’ve just used ai to do the exact thing we’ve always done.
this knee jerk reaction stuff, honestly, i can’t wait for it to dissipate
People believing this stuff is AGI also makes me think of my poor, illiterate, provincial grandmother, who, when she moved to live with us “in the big city,” used to get really confused when she saw the same actor in more than one soap opera on TV: she confused the imitation of real life which is acting (soap opera acting, even, which is generally pretty bad) with actual real life.