In the future, everything will be owned and nothing taken care of.
In the future, everything will be owned and nothing taken care of.
Adversarial attacks on training data for LLMs is in fact a real issue. You can very very effectively punch up with regards to the proportion of effect on trained system with even small samples of carefully crafter adversarial inputs. There are things that can counter act this, but all of those things increase costs, and LLMs are very sensitive to economics.
Think of it this way. One, reason why humans don’t just learn everything is because we spend as much time filtering and refocusing our attention in order to preserve our sense of self in the face of adversarial inputs. It’s not perfect, again it changes economics, and at some point being wrong but consistent with our environment is still more important.
I have no skepticism that LLMs learn or understand. They do. But crucially, like everything else we know of, they are in a critically dependent, asymmetrical relationship with their environment. The environment of their existence being our digital waste, so long as that waste contains the correct shapes.
Long term I see regulation plus new economic realities wrt to digital data, not just to be nice or ethical, but because it’s the only way future systems can reach reliable and economical online learning. Maybe the right things happen for the wrong reasons.
It’s funny to me just how much AI ends up demonstrating non equilibrium ecology at scale. Maybe we’ll have that self introspective moment and see our own relationship with our ecosystems reflect back on us. Or maybe we’ll ignore that and focus on reductive world views again.
It’s hilarious to me how unnecessarily complicated invoking moore’s law is to say anything…
With Moore’s Law: “Ok ok ok, so like, imagine that this highly abstract, broad process over huge time period, is actually the same as manufacturing this very specific thing over a small time period. Hmm, it doesn’t fit. ok, let’s normalize the timelines with this number. Why? Uhhh because you know, this metric doubles as well. Ok. Now let’s just put these things together into our machine and LOOK it doesn’t match our empirical observations, obviously I’ve discovered something!”
Without Moore’s Law: “When you reduce the dimensions of any system in nature, flattening their interactions, you find exponential processes everywhere. QED.”
Recently, a sign showed up in El Paso advertising San Francisco as a sanctuary city, as a great “own the libs,” I suppose because SF would receive of applicants overwhelming their social service programs?
It didn’t work.
Also meta but while I am big on slamming AI enshitification, I am still bullish on using machine learning tools to actually make products better. There are examples of this. Notice how artists react enthusiastically to the AI features of Procreate Dreams (workflow primarily built around human hand assisted by AI tools, ala what photoshop used to be) vs Midjourney (a slap in the face).
The future will involve more AI products. It’s worthy to be skeptical. It’s also worthy to vote with your money to send the signal: there is an alternative to enshitification.
You can read their blog about the AI-crap, in terms of their approach and philosophy. In general, it is optional and not part of the major experience.
The main reason I use kagi is immediately obvious from doing seaches. I convinced my wife to switch to it when she ask, “ok but what results does it show when I search sailor moon?” and she saw the first page (fan sites, official merch, fun shit she had forgotten about for years).
What you need to know is that you pay money, and they have to give you results that you like. It’s a whole different world.
Helpful reminder to spread the word on Google alternatives this holiday season. Bought Kagi subscriptions as stocking stuffers for my loved ones. Everyone who I have convinced to give it a try has been impressed thus far.
SEO will pillage the commons. It has been for years and years. Community diversity and alternative payment models for search are part of the bulwark.
Maybe unpopular take here, but I love discord as an excellent fit for specific use cases. I think plenty of groups that should be web forums use discord wrong, but for several of my favorite communities:
Good examples for me are: Friend of Friend Groups for organizing dinners or parties Online gaming communities Book clubs Co-worker chat alternative to slack
Elon: “I created OpenAI! It only exists because of me!” Also Elon: “I created this new AI, which I copied from OpenAI, because it was… mine all along?”
Oddly, r/buttcoin is still doing well enough that it’s one of the few places I still stop by on reddit. Can’t say the same for any community still on twitter, dough.
Exponential progress, I see.
Wouldn’t it be funny if, not only do we not get super intelligence in the next couple of years, but we do still get energy, resource, and climate crisises, which we don’t get to excuse and kick the can on?
Looking forward to when the grizzly bear grunts in his direction and he has to decide which reaction is the clear non consent one.
The irony in all this is that if they just dropped the utilitarianism and were just honest about feelings guiding their decision making, they could be tolerable. “I’m not terribly versed in the details of the gun violence issue, but I did care about malaria enough to donate to some functional causes.” Ok, fine, you’re now instantly just a normal person.
duh, duh duh, duh, duuuuuuuuuh, yup.
Takes like this are one of the many things I pull out to point out how naive and misguided most x-risk obsessive people are. And especially Mr. Altman.
Despite wide fears of synthetic gain of function attacks, as it turns out, it’s actually really hard to create a new virus meaningfully stronger than the standard endemic ones that already exist. Many countries and labs have legitimately tried. Lots of papers and research. It’s, really really hard to beat nature at the microbiological scale; Viruses have to not only be virulent, but it has to contend with extremely unpredictable intermediate environments. The current endemic viruses got there through many mutations and adaptations inside environments that they were already at least successful (and not in vitro). And in the end, what would be the point? Once a virulent virus breaks out, you have very little control. Either it works really well and backfires or, even far more likely, it doesn’t do that much at all, but it does piss other nations off.
It’s not impossible. But honestly, yeah, I don’t comprehend x-riskers who obsess over this.
Desperation of delusion. “End of all value” => “I don’t understand things, so I better at least have control!” I wonder if these kinds of people would send literal Nazis to my doorstep if I suggested that I don’t have any stake either way in the “coin flipping on the end of my world view.”
100% cross platform
also
Main downside is CSS and DOM.
Yeah should have just stuck with “at least it’s a scripting language” (that doesn’t support 64bit integers out of the box).
There’s a difference between “can” and “cost”. Code is syntactic and formal, true, but what about pseudo code that is perfectly intelligible by a human? There is, afterall, a difference between sharing “compiled” code that is meant to be fed directly into a computer and sharing “conceptual” code that is meant to be contextualized into knowledge. Afterall, isn’t “code” just the formalization of language, with a different purpose and trade off?
What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!