Frame generation is fucking huge.
Especially since it works best at high frame rates. Like, if you were playing at 30fps, doubling to 60 might be a perceptible difference because of how long the gap between frames is.
But going from 60 to 120, it’s still 50% “fake” frames, but the time between “real” frames is much smaller, allowing more frequent corrections to whatever the “fake” frames are predicting.
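The math behind that is simple: the gap between real frames is what bounds how stale a generated frame can get. A quick sketch (the 2x multiplier is just an example; generators can insert more than one frame):

```python
# Back-of-envelope math: the time between "real" frames at each base rate.
def real_frame_gap_ms(base_fps: float) -> float:
    # Only the base frames carry actual game state; the gap between them
    # determines how long a "fake" frame goes without a correction.
    return 1000.0 / base_fps

for base in (30, 60):
    gap = real_frame_gap_ms(base)
    print(f"{base}fps doubled to {base * 2}fps: {gap:.1f}ms between real frames")
# 30fps base leaves ~33.3ms between corrections; 60fps base only ~16.7ms.
```

Half the gap means any prediction error gets corrected twice as often, which is why frame generation looks better the higher your starting frame rate is.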
So while it won’t help a bad computer run anything, it can help a “mid” computer make what it can run look a lot better, because you can crank up a bunch of options and still maintain the fps you were getting without it.
Why wouldn’t it?
It’s talking about two things under the “AI” label, which is actually a pretty good use of the term:
Rendering a lower-res frame and upscaling it.
Generating additional frames based on what might happen in between real frames.
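For the second one, real frame generation uses motion vectors and neural networks, but a toy linear blend shows the basic idea of synthesizing a frame partway between two real ones (pixel lists here are hypothetical stand-ins for actual frame buffers):

```python
# Toy sketch of "in-between" frame generation. Real implementations are far
# more sophisticated; this just illustrates interpolating between two real
# frames to manufacture an intermediate one.
def blend_frames(frame_a, frame_b, t=0.5):
    # frame_a / frame_b: flat lists of pixel values (stand-in frame buffers);
    # t=0.5 produces the frame halfway between them in time.
    return [a + (b - a) * t for a, b in zip(frame_a, frame_b)]

real_0 = [0, 100, 200]   # hypothetical pixels at real frame N
real_1 = [50, 100, 100]  # hypothetical pixels at real frame N+1
fake = blend_frames(real_0, real_1)
print(fake)  # [25.0, 100.0, 150.0]
```

The “fake” frame is a guess about motion between the real ones, which is exactly why more frequent real frames make the guesses less noticeable.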
There’s no valid reason not to use that. Hardware costs more, so you’d be paying a lot more money for the same performance. And with fewer people making that choice, the price differential would be even greater.
Like, this is right. They can’t make them cheap enough that people will buy without this.
It’s facts bro