Nvidia demos AI method to convert 30fps video into 480fps slow motion video
Noisiv
https://arxiv.org/pdf/1712.00080.pdf
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation
Abstract:
Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences.
While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled.
We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame.
By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. We use 1,132 240-fps video clips, containing 300K individual video frames, to train our network. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
https://abload.de/img/screenshot2018-06-191wekav.png
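The flow-combination and fusion steps the abstract describes can be sketched in NumPy. This is a minimal illustration under my own assumptions: the function names are placeholders, the inputs are assumed to be already-computed flows and already-warped frames, and the two U-Nets that actually estimate, refine, and predict visibility are omitted entirely.

```python
import numpy as np

def approx_intermediate_flows(f01, f10, t):
    """Linearly combine the bi-directional flows at time t in (0, 1)
    to approximate the flows from the intermediate frame back to
    frames 0 and 1 (the 'linearly combined at each time step' step).
    f01, f10: optical flow fields of shape (H, W, 2)."""
    ft0 = -(1.0 - t) * t * f01 + t * t * f10          # flow t -> 0
    ft1 = (1.0 - t) ** 2 * f01 - t * (1.0 - t) * f10  # flow t -> 1
    return ft0, ft1

def fuse_warped(w0, w1, v0, v1, t):
    """Visibility-weighted linear fusion of the two warped images.
    w0, w1: frames 0 and 1 warped to time t, shape (H, W, C)
    v0, v1: soft visibility maps in [0, 1], shape (H, W, 1);
    occluded pixels (visibility near 0) contribute nothing."""
    num = (1.0 - t) * v0 * w0 + t * v1 * w1
    den = (1.0 - t) * v0 + t * v1
    return num / np.maximum(den, 1e-8)  # guard against fully occluded pixels
```

At t close to 0 the approximated flow toward frame 0 vanishes and the fused result leans on frame 0, which is why the same learned weights can generate arbitrarily many intermediate frames.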
Noisiv
Conclusion:
In this paper, we propose an end-to-end CNN that can produce as many intermediate video frames as needed between two input images.
We first use a flow computation CNN to estimate the bidirectional optical flow between the two input frames, and the two flow fields are linearly fused to approximate the intermediate optical flow fields. We then use a flow interpolation CNN to refine the approximated flow fields and predict soft visibility maps for interpolation.
We use more than 1.1K 240-fps video clips to train our network to predict seven intermediate frames.
Ablation studies on separate validation sets demonstrate the benefit of flow interpolation and visibility maps. Our multi-frame approach consistently outperforms state-of-the-art single-frame methods on the Middlebury, UCF101, slowflow, and high-frame-rate Sintel datasets. For unsupervised learning of optical flow, our network outperforms the recent DVF method [15] on the KITTI 2012 benchmark.
https://arxiv.org/pdf/1712.00080.pdf
Extraordinary
TheDeeGee
What about Integer Scaling?
A feature people have really, really wanted for years?
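For context on what's being asked for: integer scaling upscales by a whole-number factor with no interpolation at all, so each source pixel becomes a sharp block instead of being blurred by bilinear filtering. A minimal sketch (the function name is mine, not any driver API):

```python
import numpy as np

def integer_scale(img, factor):
    """Nearest-neighbor upscale by a whole-number factor: every source
    pixel is repeated into a factor x factor block, preserving hard
    pixel edges (no filtering blur)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```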
poornaprakash
How authentic will the AI-interpolated slow-motion video be? The AI simply adds those non-existent frames by approximation. Now a video's authenticity is in question, especially in front of the judiciary.
jortego128
Not sure why all the hate; this is neat stuff. Regardless of whether it's from NV or AMD, I think things like this that use the GPU for things other than gaming are fantastic. If they could get this working with standard CUDA or OpenCL on consumer-grade GPUs, it would be awesome for video editors.
Mateja
this is absolutely amazing! BFA in animation here and no you can't see visual artifacts (from these videos). some are complaining that NVidia is trying to get rich with a new gimmick. welcome to capitalism, corporations try to make money. without getting into politics on why we should be more socialist I will say that free market competition does force competitors to create superior products. this is a good example of that.
some say this effect is pointless. maybe you folks are missing the point. it's not about watching your movies in slow motion ... it's about video quality at normal speeds, with higher framerates. because people get headaches from 30fps images panning across their gigantic 90" 4k TVs like some jarring slide show. yes, source material is ideal, but there's no way to go back in time and refilm everything from the first 100 years of cinema... if you don't notice framerates, good for you. people also said they didn't notice or want gimmicky color TVs. lucky for us, quality keeps improving. the last best attempt I've seen at framerate interpolation was on my LG tv. it does a pretty sweet job at low settings, but at high settings you would see serious artifacts on objects moving quickly across a background. (one drawback I will say about this technology is it fried the board in my tv and I had to get it replaced. if they can implement this stuff w/o obsolescing my $3000 hardware after 5 years, then maybe...)
having studied animation for 4 years, artifacts introduced by post processing are painfully obvious to me. I see nothing distracting or artificial whatsoever in any of these sample videos. the motion is remarkably smooth and natural, as if the source material itself was filmed on high speed cameras. can't wait for this to be standard in every display! that said, it would also be cool to have the option to see the original source material in its native format, like the on/off option for frame smoothing on my tv.
Larry Cañonga
How GPU intensive is that? Can we render a 4K game at say 30fps and upconvert to 120fps?!