

We're already behind schedule; we're supposed to have AI agents in two months (actually we were supposed to have them in 2022, but ignore the failed bits of earlier prophecy in favor of the parts you can see success for)!
He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.
His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like
And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far
My own scoring:
The first prompt programming libraries start to develop, along with the first bureaucracies.
I don't think any sane programmer or scientist would credit the current "prompt engineering" "skill set" as comparable to programming libraries, and AI agents still aren't what he was predicting for 2022.
Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.
There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.
Revenue is high enough to recoup training costs within a year or so.
Hahahaha, no… they are still losing money per customer, much less recouping training costs.
Instead, the AIs just make dumb mistakes, and occasionally "pursue unaligned goals" but in an obvious and straightforward way that quickly and easily gets corrected once people notice
The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people that don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.
The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.
They also try to contrive scenarios
Emphasis on the word "contrive".
The age of the AI assistant has finally dawned.
So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful if narrow use-case apps by 2022-2024, so we are already off target for this prediction.
I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the Kool-Aid will buy it.
I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term "0–2 paradigm shifts," so he can claim prediction success for stuff LLMs do, and "paradigm shift" is vague enough that he could still claim success if it's been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).
Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?
Committing to a hard timeline at least means making fun of them and explaining how stupid they are to laymen will be a lot easier in two years. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced grifters, er, forecasters know to keep things vaguer so they will be able to retroactively reinterpret their predictions as correct.
I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.
They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.
Galaxy-brain insane take (free to any lesswrong lurkers): they should develop the usage of IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear, if I see the term red-teaming one more time), while biological science has plenty of terminology to steal and repurpose they haven't touched yet.
Yeah, there might be something like that going on causing the "screaming." Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn't any effort to do that here.
I agree. There is intent going into the prompt fondlers' efforts to prompt the genAI; it's just not very well developed intent, and it is using the laziest, shallowest method possible to express itself.
If you understood why the splattered paint was art, you would also understand why the AI generated images arenāt art (or are, at best, the art of hacks). It seems like you understand neither.
Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1
Lol, Altman's AI-generated purple prose slop was so bad even Eliezer called it out (as opposed to making a doomer-hype point):
Perhaps you have found some merit in that obvious slop, but I didn't; there was entropy, cliche, and meaninglessness poured all over everything like shit over ice cream, and if there were cherries underneath I couldn't taste it for the slop.
Is this water running over the land or water running over the barricade?
To engage with his metaphor: this water is dripping slowly through a purpose-dug canal by people who claim they are trying to show the danger of the dikes collapsing, but who are actually serving as the hype arm for people who claim they can turn a small pond into a hydroelectric power source for an entire nation.
Looking at the details of "safety evaluations," it always comes down to them directly prompting the LLM and baby-step walking it through the desired outcome, with lots of interpretation to show even the faintest traces of rudiments of anything that looks like deception or manipulation or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever property they want to anthropomorphize into the LLM to make it seem more threatening.
My understanding is that it is possible to reliably (given the reliability required for lab animals) insert genes for individual proteins. I.e., if you want a transgenic mouse line that has neurons that will fluoresce under laser light when they are firing, you can insert a gene sequence for GCaMP without too much hassle. You can even get the inserted gene to be under the control of certain promoters so that it will only activate in certain types of neurons and not others. Some really ambitious work has inserted multiple sequences for different colors of optogenetic indicators into a single mouse line.
If you want something more complicated that isn't just a sequence for a single protein, or at most a few proteins, never mind something nebulous on the conceptual level like "intelligence," then yeah, the technology or even basic scientific understanding is lacking.
Also, the gene insertion techniques that are reliable enough for experimenting on mice and rats aren't nearly reliable enough to use on humans (not that they even know what genes to insert in the first place for anything but the most straightforward of genetic disorders).
One comment refuses to leave me: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=C7MvCZHbFmeLdxyAk
The commenter makes an extended, tortured analogy to machine learning… in order to say that maybe genes with correlations to IQ won't add to IQ linearly. It's an encapsulation of many lesswrong issues: veneration of machine learning, overgeneralizing of comp sci into unrelated fields, a need to use paragraphs to say what a single sentence could, and a failure to actually state firm, direct objections to blatantly stupid ideas.
My favorite comment in the lesswrong discussion: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=oyDCbGtkvXtqMnNbK
It's not that eugenics is a magnet for white supremacists, or that rich people might give their children an even more artificially inflated sense of self-worth. No, the risk is that the superbabies might be Khan and kick-start the Eugenics Wars. Of course, this isn't a reason not to make superbabies; it just means the idea needs some more workshopping via Red Teaming (hacker lingo is applicable to everything).
Soyweiser has likely accurately identified that you're JAQing in bad faith, but on the slim off-chance you actually want to educate yourself, the rationalwiki page on Biological Determinism and Eugenics is a decent place to start to see the standard flaws and fallacies used to argue for pro-eugenic positions. Rationalwiki has a scathing and sarcastic tone, but that tone is well deserved in this case.
To provide a brief summary, in general, the pro-eugenicists misunderstand correlation and causation, misunderstand the direction of causation, overestimate what little correlation there actually is, fail to understand environmental factors (especially systemic inequalities that might require leftist solutions to actually have any chance at fixing), and refuse to acknowledge the context of genetics research (i.e. all the Neo-Nazis and alt righters that will jump on anything they can get).
The lesswrongers and SSCers sometimes whine that they don't get fair consideration, but considering they take Charles Murray the slightest bit seriously, they can keep whining.
That was literally the inflection point on my path to sneerclub. I had started to break from lesswrong before, but I hadn't reached the tipping point of saying it was all bs. And for SSC, and Scott in particular, I had managed to overlook the real message buried in thousands of words of equivocating and bad analogies and bad research in his earlier posts. But "you are still crying wolf" made me finally question what Scott's real intent was.
I normally think gatekeeping fandoms and calling people fake fans is bad, but it is necessary and deserved in this case to assume Elon Musk is only a surface-level fan grabbing names and icons without understanding them.
This is a good summary of half of the motive to ignore the real AI safety stuff in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn't thought about the extent to which Altman's plan is "hey morons, hook my shit up to fucking everything and try to stumble across a use case that's good for something" (as opposed to the "we're building a genie, and when we're done we're going to ask it for three wishes" he hypes up); that makes more sense as a long-term plan…
Bonus: a recent comment is skeptical: