- cross-posted to:
- sneerclub@awful.systems
- tech@kbin.social
Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis
Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.
But you don’t really “know” anything either. You just have a network of relations stored in the fatty juice inside your skull that gets excited just the right way when I ask it a question, and it wasn’t set up that way by any “intelligence”, the links were just randomly assembled based on weighted reactions to the training data (i.e. all the stimuli you’ve received over your life).
Thinking about how a thing works is, imo, the wrong way to decide whether something is “intelligent” or “knows stuff”. The mechanism is neat to learn about, but it’s not what ultimately determines whether you know something. It’s much more useful to ask whether it can produce answers, especially to novel inquiries, which is where an LLM distinguishes itself from a book or even a typical search engine.
And again, I’m not trying to argue that an LLM is intelligent, just that whether it is or not won’t be decided by talking about the mechanism of its “thinking”.
We can’t determine whether something is intelligent by looking at its mechanism, because we don’t know anything about the mechanism of intelligence.
I agree, and I formalize it like this:
Those who claim LLMs and AGI are distinct categories should present a text-processing task, i.e. text input and text output, that an AGI can do but an LLM cannot.
So far I have not seen any reason not to consider these LLMs to be generally intelligent.
Literally anything based on opinion or creating new info. An AI cannot produce a new argument; a human can.
It took me 2 seconds to think of something LLMs can’t do that AGI could.