The paper includes a chart of average frame gen times at various resolutions, across the test scenarios where they compared against other frame generation methods.
Here's their new method's frame gen times, averaged across all their scenarios.
540p: 2.34ms
720p: 3.66ms
1080p: 6.62ms
Converted to FPS by assuming constant frametimes (FPS = 1000 / frametime in ms), that's about...
540p: 427 FPS
720p: 273 FPS
1080p: 151 FPS
Now let's try pixels per millisecond of frametime to guesstimate an efficiency factor:
540p: 518400 px / 2.34 ms = 221538 px/ms
720p: 921600 px / 3.66 ms = 251803 px/ms
1080p: 2073600 px / 6.62 ms = 313233 px/ms
Plugging pixel count vs. efficiency factor into a graphing system and using a power-curve best fit, you get these estimated efficiency factors for the unlisted resolutions:
1440p: 361423 px/ms
2160p: 443899 px/ms
Which works out to roughly the following frame times:
1440p: 10.20 ms
2160p: 18.69 ms
Or in FPS:
1440p: 98 FPS
2160p: 53 FPS
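For anyone who wants to poke at this themselves, here's a quick Python sketch of the same rough math (the listed times are from the paper; the power fit and the extrapolation are my own guesstimate, not anything the authors provide):

```python
import numpy as np

# Average frame gen times from the paper, per resolution
times_ms = {(960, 540): 2.34, (1280, 720): 3.66, (1920, 1080): 6.62}

px = np.array([w * h for w, h in times_ms])     # pixels per frame
eff = px / np.array(list(times_ms.values()))    # "efficiency" in px/ms

# Power-curve best fit eff = a * px^b, done as a line fit in log-log space
b, log_a = np.polyfit(np.log(px), np.log(eff), 1)
a = np.exp(log_a)

# Extrapolate to the unlisted resolutions
for name, (w, h) in {"1440p": (2560, 1440), "2160p": (3840, 2160)}.items():
    n = w * h
    est_eff = a * n**b                          # estimated px/ms
    est_ms = n / est_eff                        # implied frametime
    print(f"{name}: {est_eff:,.0f} px/ms -> {est_ms:.2f} ms -> {1000/est_ms:.0f} FPS")
```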
... Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher quality frame gen, which doesn't add input lag the way DLSS or FSR frame generation does, is only worth it if it can generate a frame faster than you could otherwise fully render one normally.
(I want to again stress here this is very rough math, but I am ironically forced to extrapolate performance at higher resolutions, as no such info exists in the paper.)
I.e., if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively... this frame gen would be pointless.
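Put as a trivial break-even check (my framing, not the paper's):

```python
def framegen_worth_it(native_fps: float, gen_time_ms: float) -> bool:
    # Frame gen only pays off if extrapolating a frame is faster than
    # just rendering one natively at your current framerate.
    return gen_time_ms < 1000.0 / native_fps

print(framegen_worth_it(240, 6.62))  # 1080p @ 240 FPS: ~4.17 ms/frame -> False
print(framegen_worth_it(60, 6.62))   # 1080p @ 60 FPS: ~16.7 ms/frame -> True
```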
I... guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.
But ... this is GPU tech.
Which, like DLSS, requires extensive AI training sets.
And it is apparently proprietary to Intel... so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse engineers it for other GPUs), which basically everyone would have to buy new, as Intel only just started making GPUs.
It's not gonna somehow be a driver/chipset upgrade for existing Intel CPUs.
Basically this seems to be fundamental to Intel's gambit to make its own new GPUs stand out: build GPUs for less cost, with less hardware devoted to the G-buffer, and use this frame gen method in lieu of that.
> Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher quality frame gen, which doesn't add input lag the way DLSS or FSR frame generation does, is only worth it if it can generate a frame faster than you could otherwise fully render one normally.
The point of this method is that it takes fewer computations than going through the whole rendering pipeline, so it will always be able to produce a frame faster than performing all the calculations, unless we're at extreme cases: very low resolution, very high FPS, or a very slow GPU.
> I.e., if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively... this frame gen would be pointless.
Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (an RTX 4070 Ti). Remember, the time to run a model depends on GPU performance, so a faster GPU will run this model faster. I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases listed above.
> I... guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.
It can. This method only needs access to the frames, which can easily be accessed by the OS.
> But ... this is GPU tech.
This can run on whatever you want that can do math (CPU, NPU, GPU); they simply chose a GPU. Plus, it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.
> And it is apparently proprietary to Intel... so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse engineers it for other GPUs), which basically everyone would have to buy new, as Intel only just started making GPUs.
Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged, to reproduce and iterate on the method described in the paper. Also, the experiment didn't even use Intel hardware; it was an NVIDIA GPU and an AMD CPU.
> The point of this method is that it takes fewer computations than going through the whole rendering pipeline, so it will always be able to produce a frame faster than performing all the calculations, unless we're at extreme cases: very low resolution, very high FPS, or a very slow GPU.
I feel this is a bit of an overstatement; otherwise you'd only render the first frame of a game level and then just use this method to extrapolate every single subsequent frame.
Realistically, the model has to return to actual fully pipeline-rendered frames from time to time to re-reference itself, otherwise you'd quickly end up with a lot of hallucination/artifacts: kind of an AI version of a shitty video codec that morphs into nonsense when it's only generating partial new frames based on detected change from the previous frame.
It's not clear at all, at least to me, from the paper alone, how often, or under what conditions, reference frames are referred back to... after watching the video as well, it seems they are running 24-second, 30 FPS scenes and functionally doubling this to 60 FPS, referring to some number of history frames to extrapolate half of the frames in the completed videos.
So, that would be a 1:1 ratio of extrapolated frames to reference frames.
This doesn't appear to actually be working in a kind of real time, moderated tandem between real time pipeline rendering and frame extrapolation.
It seems to just be running already captured videos as input, and then rendering double FPS videos as output.
...But I could be wrong about that?
I would love it if I missed this in the paper and you could point out to me where they describe in detail how they balance the ratio of, or the conditions in which, a reference frame is actually referred to... all I'm seeing is basically 'we look at the history buffer.'
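If I'm reading it right, the cadence would be something like this toy sketch (extrapolate() is a stand-in for their network, and the 1:1 ratio and the history depth of 3 are my read of the video, not something the paper spells out):

```python
from collections import deque

def extrapolate(history):
    # Stand-in for the paper's model: predict one future frame
    # from the history buffer of fully rendered frames.
    return f"extrapolated from {list(history)}"

rendered_30fps = ["F0", "F1", "F2", "F3"]  # fully pipeline-rendered input
history = deque(maxlen=3)                  # assumed history-buffer depth
output_60fps = []

for frame in rendered_30fps:
    history.append(frame)
    output_60fps.append(frame)                 # reference frame
    output_60fps.append(extrapolate(history))  # generated in-between frame

print(output_60fps)  # every other frame is extrapolated: a 1:1 ratio
```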
> Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (an RTX 4070 Ti).
That's a good point, I missed that, and it's worth mentioning they ran this on a 4070 Ti.
> I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases listed above.
Unfortunately they don't actually list any baseline for frametimes through the normal rendering pipeline. It would have been nice to see that as a sort of 'control' column, where all the scores for the various 'visual difference/error from standard fully rendered frames' metrics are 0 or 100 or whatever; then we could compare some numbers on how much quality you lose for faster frames, at least on a 4070 Ti.
If you control for a single given GPU then sure, other than edge cases, this method will almost always result in greater FPS for a slight degradation in quality...
...but there's almost no way this method is not proprietary, and thus your choice will be between price-comparing GPUs with their differing rendering capabilities, not something like 'do I turn MSAA to 4x or 16x?', available on basically any GPU.
More on that below.
> This can run on whatever you want that can do math (CPU, NPU, GPU); they simply chose a GPU. Plus, it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.
Yes, this is why I said this is GPU tech. I did not figure it needed to be stated that, ok, yes, technically you can run it locally on a CPU or NPU or APU, but it's only going to actually run well on something resembling a GPU.
I was aiming at the practical upshot for the average computer user, not a comprehensive breakdown for hardware/software developers and extreme enthusiasts.
> Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged, to reproduce and iterate on the method described in the paper. Also, the experiment didn't even use Intel hardware; it was an NVIDIA GPU and an AMD CPU.
To be fair, when I wrote it originally, I used 'apparently' as a qualifier, indicating a lack of 100% certainty.
But uh, why did I assume this?
Because most of the names on the paper list the company they are employed by, there is no freely available source code, and corporate-funded research is generally made proprietary unless explicitly indicated otherwise.
Much research done by universities ends up proprietary as well.
This paper only describes the actual frame gen method in relatively broad strokes; the meat of the paper is devoted to analyzing its comparative utility, not thoroughly discussing and outlining exact opcodes or whatever.
Sure, you could try to implement this method based off of reading this paper, but that's a far cry from 'here's our MIT-licensed alpha driver, go nuts.'
...And, now that you bring it up:
Intel filed what seem to me to be two different patent applications, almost 9 months before the paper we are discussing came out, which are directly related to this academic publication; 2 of the 3 credited inventors on the patents also have their names on this paper.
This one appears to be focused on the machine learning / frame gen method, the software:
So yeah, looks to me like Intel is certainly aiming at this being proprietary.
I suppose it's technically possible they do not actually get these patents awarded, but I find that extremely unlikely.
EDIT: Also, lol, video game journalism professional standards strike again: whoever wrote the article here could have looked this up and added the highly relevant 'Intel is pursuing a patent on this technology' information to their article in maybe a grand total of 15 to 30 extra minutes, but nah, too hard I guess.
The ideal framerate booster was already invented; it's called asynchronous space warp.
Frames are generated by the GPU at whatever rate it can do, and then the latest frame is "updated" using reprojection at the framerate of the display, based on input.
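Conceptually it's something like this (a toy sketch; the real thing is a GPU warp pass inside the VR runtime, not CPU code, and the numbers here are made up):

```python
def reproject(frame, rendered_pose, current_pose):
    # Stand-in for the warp: shift the old frame to match the newest input
    return f"{frame} warped by {current_pose - rendered_pose:+.2f}"

latest_frame, latest_pose = "frame0", 0.0
RENDER_EVERY = 3                  # GPU only finishes a real frame every 3 vsyncs
for vsync in range(9):            # the display loop runs at full refresh rate
    current_pose = vsync * 0.1    # stand-in for head/mouse movement
    if vsync % RENDER_EVERY == 0:
        latest_frame, latest_pose = f"frame{vsync}", current_pose
    print(reproject(latest_frame, latest_pose, current_pose))
```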
It blows my mind that we're wasting time with fucking frame generation when a better way to achieve the same result has been used in VR (where adding latency is a GIANT no-no) for nearly a decade.
This is a hilariously bad take for anything that's not VR. Async warping causes frame smearing on detail that is really noticeable when the screens aren't so close that your peripheral blind spots make up for it.
It's an excellent tool in the toolbox, but to pretend that async reprojection 'solved' this kind of means you don't understand the problem itself.
Edit: also, the LTT video is very cool as a proof of concept, but it absolutely demonstrates my point regarding smearing. There are also many, MANY cases where a clean frame with legible information would be preferable to a less latent smeared frame.
I'm not pretending it solves anything other than the job of increasing the perceived responsiveness of a game.
There are a variety of potential ways to fill in the missing peripheral data, or even occluded data, other than simply stretching the edge of the image. Some of which very much overlap with what DLSS and frame generation are doing.
My core argument is simply that it is superior to frame generation. If you're gonna throw in fake frames, reprojection beats interpolation.
Frame generation is completely unfit for purpose, because while it may spit out more frames, it makes games feel LESS responsive, not more.
ASW does the opposite. Both are "hacky" and "fake" but one is clearly superior in terms of the perceived experience.
One lets me feel like the game is running faster, the other makes the game look like it runs faster, while making it feel slower.
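The latency difference is easy to put rough numbers on (simplified arithmetic, ignoring the generator's own runtime and display timing):

```python
native_fps = 60
native_ms = 1000 / native_fps  # ~16.7 ms per real frame

# Interpolation: the fake frame sits between real frames N and N+1, so frame N
# has to be held back until N+1 exists: roughly one full frametime of added lag.
interpolation_added_lag_ms = native_ms

# Reprojection/extrapolation: the shown frame is built only from frames and
# input you already have, so nothing is held back.
reprojection_added_lag_ms = 0

print(f"interpolation: ~+{interpolation_added_lag_ms:.1f} ms, "
      f"reprojection: ~+{reprojection_added_lag_ms} ms")
```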
This solution by Intel is better, essentially because it works more like ASW than other implementations of frame generation.
This is great, and I hope this technology can be implemented on older hardware that just barely misses today's high system requirements.
I hope this is not used as a crutch by developers to hide really bad optimization and performance, as they have already been doing with upscalers like FSR/DLSS.
No, I fucking hope not. Older games rendered an actual frame. Modern engines render a noisy, extremely ugly mess and rely on temporal denoising and frame generation (which is why most modern games only show you scenes with static scenery and a very slow moving camera).
Just render the damn thing properly in the first place!
I think you are misunderstanding, because I agree with you when the games minimum hardware requirements are met.
I am saying I hope this technology can be used so that hardware below the minimum requirements could potentially still get decently playable framerates on newer titles, the obvious drawback being decreased visual quality. I agree that upscaling, particularly TAA and its related effects, should not be used to lower system requirements when developers do not design their game well or rely on ugly effects. But I think this could be useful for old systems, or perhaps integrated graphics chips, depending on how the technology works. That was what I meant; sorry I was not clear enough initially.
Depends what you want to render. High FPS requirements in conjunction with movement where the human eye is the bottleneck is a perfect interpolation case; in such a case the bad frames aren't really seen.