Everything about that movie is a fever dream.
The actors who played Merry and Pippin discussed this in a video once, and went back and forth on where they’d put it. I particularly liked a suggestion for Moria: “I’m so fuckin’ sorry.”
But the best reply to this idea is Boromir at the council: “The trilogy has no f-bomb. The trilogy needs no f-bomb.”
“Loser.”
Fuzzing.
Mike: “You guys watch Joe Don Baker movies?”
Nothing good is allowed to happen ever again.
Deltarune lookin’ ass.
I had not. There’s a variety of demos for guessing what comes between frames, or what fills in between lines… because those are dead easy to train from. This technology will obviously be integrated into the process of animation, so anything predictable Just Works, and anything fucky is only as hard as it used to be.
Accidental compliment to a bunch of forty-somethings.
What doesn’t exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what’s going on, about once per second, they could probably do a minute in an hour. This bullshit makes that feasible.
We have the technology to fill in crisp motion at whatever framerate the creator wants. If they’re unhappy with the machine’s guesswork, they can insert another frame somewhere in-between, and the robot will reroute to include that instead.
We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.
We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.
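For the record, the pre-neural version of that in-betweening trick is ancient: estimate optical flow between two drawings, then warp partway along it. Here’s a crude sketch of that classical baseline, assuming OpenCV - it smears the moment anything overlaps, which is exactly the part the trained models fix:

```python
# Crude in-betweening: estimate dense optical flow between two key
# drawings, then backward-warp to fake the frame partway between them.
# This is the classical baseline, not any product's actual method.
import cv2
import numpy as np

def tween(frame_a, frame_b, t=0.5):
    """Guess the frame a fraction t of the way from frame_a to frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # flow[y, x] is how far the pixel at (x, y) moves between the frames.
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp frame_a partway along the flow. Approximate: the flow
    # is indexed at the source pixel rather than the midpoint, so fast
    # motion and occlusions smear. Learned interpolators fix exactly this.
    map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

# mid = tween(cv2.imread("key_a.png"), cv2.imread("key_b.png"))
```

Unhappy with the guess at t=0.5? Draw that frame yourself and interpolate to it from both sides - that’s the “reroute to include that instead” part.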
Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.
Video generators are going to eat Hollywood alive. A desktop computer can render anything: feed it a rough sketch and describe what it’s supposed to be. The input could be some kind of animatic, or you and a friend in dollar-store costumes, or literal white noise. And it’ll make that look like a Pixar movie. Or a photorealistic period piece starring a dead actor. Or, given enough examples, how you personally draw shapes using chalk. Anything. Anything you can describe to the point where the machine can say it’s more [thing] or less [thing], it can make every frame more [thing].
Boring people will use this to churn out boring fluff. Do you remember Terragen? It’s landscape rendering software, and it was great for evocative images of imaginary mountains against alien skies. Image sites banned it, by name, because a million dorks went ‘look what I made!’ and spammed their no-effort hey-neat renders. Technically unique - altogether dull. Infinite bowls of porridge.
Creative people will use this to film their pet projects without actors or sets or budgets or anyone else’s permission. It’ll be better with any of those - but they have become optional. You can do it from text alone, as a feral demo that people think is the whole point. Even a clumsy effort to do things the hard way makes the results massively better. Get the right shapes moving around the screen, and the robot will probably figure out which ones are which, and remove all the pixels that don’t look like your description.
The idiots in LA think they’re gonna fire all the people who write stories. But this gives those weirdos all the power they need to put the wild shit inside their heads onto a screen in front of your eyeballs. They’ve got drawers full of scripts they couldn’t hassle other people into making. Now a finished movie will be as hard to pull off as a decent webcomic. It’s gonna get wild.
And this’ll be great for actors, in ways they don’t know yet.
Audio tools mean every voice actor can be a Billy West. You don’t need to sound like anything, for your performance to be mapped to some character. Pointedly not: “mapped to some actor.” Why would an animated character have to sound like any specific person? Do they look like any specific person? Does a particular human being play Naruto, onscreen? No. So a game might star Nolan North, exclusively, without any two characters really sounding alike. And if the devs need to add a throwaway line later, then any schmuck can half-ass the tone Nolan picked for little Suzy, and the audience won’t know the difference. At no point will it be “licensing Nolan North’s voice.” You might have no idea what he sounds like. He just does a very convincing… everybody.
Video tools will work the same way for actors. You will not need to look like anything, to play a particular character. Stage actors already understand this - but it’ll come to movies and shows in the form of deep fakes for nonexistent faces. Again: why would a character have to look like any specific person? They might move like a particular actor, but what you’ll see is somewhere between motion-capture and rotoscoping. It’s CGI… ish. And it thinks perfect photorealism is just another artistic style.
Ah, so assholes trying to stomp the meaning out of an important term.
Oh hey, it’s Richard O’Brien.
Authoritarians worldwide begging to get got.
Dear powerful assholes: it doesn’t take much to stay on the tolerable side of pissing people off, and still get a boner from exercising control over the little people. Human beings will put up with a lot. But when you start locking people up for life, just for publicly sassing you… all you’re doing is driving that near-universal anger to places you won’t see it building.
Shit, I got charged to read text messages. I’d get annoyed with people for replying “OK.”
When they work.
Sometimes they decide there’s no word in the English language that begins with a K, so you get a long pause, the word “thus,” and no alternate guesses.
Sometimes they decide this five or six times in a row and you give up and tap it out letter by letter like some kind of neanderthal.
Jesus, people, they’re not asking ChatGPT to guess who wins.
This is rollback netcode. This is literally just rollback netcode, plus a buzzword.
Neural networks are sixty years old. All that changed recently is how hard we can train them.
And this application is where neural networks should be downright magical: given complex events, you need a simple answer, and approximate guesses work okay. If the network is wrong… you roll back. Just like we already fucking do, with the lag-reducing prediction written by human beings.
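If you’ve never seen it spelled out, rollback is mostly bookkeeping. A toy sketch - every name here is made up for illustration, and the predictor is the dumbest one possible, “assume they repeat their last input.” A trained network slots into exactly that one function:

```python
# Toy rollback: simulate ahead using a guessed remote input, and when
# the real input arrives for a past frame, rewind to a snapshot and
# re-simulate. Illustrative only; not any engine's real API.
import copy

def simulate(state, local_input, remote_input):
    # Stand-in for the game's deterministic step function.
    return state

class Rollback:
    def __init__(self, initial_state):
        self.state = initial_state
        self.frame = 0
        self.history = {}        # frame -> (snapshot, local input, guessed remote input)
        self.last_remote = None  # most recent confirmed remote input

    def predict_remote(self):
        # Dumbest predictor: the remote player repeats their last input.
        # This is the line a neural network would replace. Being wrong
        # costs the same either way: one rollback.
        return self.last_remote

    def advance(self, local_input):
        guess = self.predict_remote()
        self.history[self.frame] = (copy.deepcopy(self.state), local_input, guess)
        self.state = simulate(self.state, local_input, guess)
        self.frame += 1

    def confirm_remote(self, frame, actual_input):
        # Called when the network delivers the real input for a past frame.
        # (Assumes `frame` is still inside the history window.)
        self.last_remote = actual_input
        snapshot, local, guess = self.history[frame]
        if guess == actual_input:
            return  # prediction held; the frames we showed were correct
        # Mispredicted: rewind, replay the frame with the real input,
        # then re-simulate every frame since with fresh predictions.
        self.history[frame] = (snapshot, local, actual_input)
        self.state = simulate(snapshot, local, actual_input)
        for f in range(frame + 1, self.frame):
            _, local_f, _ = self.history[f]
            new_guess = self.predict_remote()
            self.history[f] = (copy.deepcopy(self.state), local_f, new_guess)
            self.state = simulate(self.state, local_f, new_guess)
```

Swap predict_remote for a model and nothing else changes. That’s the whole patent.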
The real thing to get worked up over is - fuck software patents.
Recommendations:
DKC Tropical Freeze is everything GDQ is about: it looks fast, it’s a little broken, and everyone onstage has a great time.
Ocarina of Time is a no-logic randomizer, so all the items are shuffled without concern for whether the game is beatable. Sometimes getting to a boss takes three separate glitches, and then hitting them takes five.
Super Sheffy World is the best of four-ish Kaizo / Mario Maker games this year. Fast-paced and comically difficult. But I’d say Kaizo Mario World 3 was the better run, if only for the final boss.
Vice City’s hard-mode mod is a delightful trainwreck. The game actively does not want to be in a speedrun.
Tetris showcases are always fun. This year they did Grandmaster 3 on Shirase mode and Grandmaster 2 on Death mode.
Elden Ring was a lockout bingo race - two runners trying to check off random goals.
Super Metroid races are the finale for a reason.
They’re just shuffling cards. This is the shape a valid rebuttal would have, so they perform that, regardless of whether it has any basis in reality.