Intel Labs shows photo-realistic image enhancement of GTA V done with AI

Seen this yesterday, didn't notice it was Intel. Impressive, really. I hope Intel pushes gaming even further than Nvidia did with ray tracing; their R&D budget is tremendous.
So, now GTA5 looks like GTA4, nice! And why 720p, can't the AI scale to at least 1080p?
Silva:

So, now GTA5 looks like GTA4, nice! And why 720p, can't the AI scale to at least 1080p?
As if GTA 4 ever looked like that.
I'd rather see this than ray tracing.
A grey depression filter is not realism. For this kind of training you should use RAW data sets, not JPEGs.
Well, not sure why you have to comment so negatively; this is a great example of style transfer and it looks good. You can use a different style if you have the data to train the algo.
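For context on what "training with a different style" means in practice: classic style-matching methods (the Gatys et al. formulation) score how close two images are in style by comparing feature statistics called Gram matrices. The sketch below is a minimal NumPy illustration of that statistic only; it is not the network used in the Intel demo, and all names and shapes here are made up for the example.

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise feature correlations: reshape (C, H, W) to (C, H*W),
    then compute the normalized (C, C) correlation matrix."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(feat_a, feat_b):
    """Mean squared difference between Gram matrices; 0 when the
    feature statistics (the 'style') match exactly."""
    ga, gb = gram_matrix(feat_a), gram_matrix(feat_b)
    return float(np.mean((ga - gb) ** 2))

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))   # stand-in for CNN features
print(style_loss(feats, feats))            # identical features -> 0.0
print(style_loss(feats, feats * 2) > 0)    # different statistics -> positive
```

Swapping the training style then amounts to swapping which target statistics (or, in the Intel demo's case, which real-world dataset) the network is pushed toward.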
GTA IV's graphics were ahead of its time. Intel finally invests in R&D? Wow, shocked. Looking at the footage, I'm honestly really impressed; it's beautiful. A 720p video about a tech showcasing GRAPHICS, though. Gj Intel. Next time show how you run the Intel presentation on an AMD CPU.
It looks like they set digital vibrance to 0 and just improved lighting and shadows.
The 720p video though
Alessio1989:

A grey depression filter is not realism. For this kind of training you should use RAW data sets, not JPEGs.
Small-minded. What you're seeing is what they trained it with; one can train it with whatever the hell one wants. Why are you thinking inside the box? The people who figure out and create this kind of stuff know everything there is to know about it. Calling it a depression filter is straight-up disrespectful, especially after seeing its result. You should be ashamed of yourself.
The video is plain awful. All their AI did was make everything grey and drab; its only accomplishment was making the streets appear smoother. The original being compared to is way too red, but LA is nowhere near this grey even when it is overcast. The color really did not come out till the very end and still seemed too dull. The AI needs a lot more training.
Gee, hard to tell, with the video being compressed in non-HD 720p! Give it the 4K treatment, then get back to us!
It doesn't necessarily look more representative of real life, it just looks like what you'd see in a typical dashcam.
I think the point was to demonstrate real time color grading being injected into a game. Everyone here is getting caught up on the unimportant details.
Denial:

I think the point was to demonstrate real time color grading being injected into a game. Everyone here is getting caught up on the unimportant details.
They're important if you don't want gamers to dismiss your work as irrelevant and limited by your choice of hardware. People nowadays want to play at 4k, not 720p. Can this system manage that? It appeared to be chugging at times. They're also rarely interested in playing for extended periods in the drab cityscapes of an overcast Germany. This is GTA5, after all, not Euro Truck Simulator; it might be useful in ETS, since Europe has a wide variety of climates, and frame rate maybe isn't quite as big an issue. GTA gamers might have liked seeing the Vivid visualisation, but the videographer buried that lede at the end. Maybe they could throw in Mars, if they had any roads on Mars. Or the world of Futurama. Or turn cars into catbuses. Wouldn't mind giant trees in the sky, either, just make them stable. Also, I liked the reflections on the car hood in one comparison; their version seemed to have fewer. I guess it's likely to be less stable, though, too.
GreenReaper:

They're important if you don't want gamers to dismiss your work as irrelevant and limited by your choice of hardware. People nowadays want to play at 4k, not 720p. Can this system manage that? It appeared to be chugging at times.
No, I don't think they are important. This is a proof of concept, an academic experiment designed to derisk a potential value-add to some future product. When DLSS came out, no one said "man, this is irrelevant because I saw a presentation on autoencoders at SIGGRAPH in 2017 and they looked awful". When Intel inevitably launches some product called Intel X-Streme RealCC™ or some bullshit and everyone is hitting Ctrl+R to inject a 4K realism filter into their games, no one is going to remember this demo.
Clouseau:

The video is plain awful. All their AI did was make everything grey and drab; its only accomplishment was making the streets appear smoother. The original being compared to is way too red, but LA is nowhere near this grey even when it is overcast. The color really did not come out till the very end and still seemed too dull. The AI needs a lot more training.
There is way more to it: they show that they can improve details, making blurry, broken text stand out sharp and readable, changing distant vegetation, etc. At the end they also show that they can preserve the original color, although it seems they need to feed the AI a lot more images to make it perfect. It's just the beginning, but it clearly shows that the AI has quite a lot of potential for photo-realistic enhancement without the negative side effects of the loads of currently available filters that aim for a similar effect; besides that, it can change on the fly.
IceVip:

Small-minded. What you're seeing is what they trained it with; one can train it with whatever the hell one wants. Why are you thinking inside the box? The people who figure out and create this kind of stuff know everything there is to know about it. Calling it a depression filter is straight-up disrespectful, especially after seeing its result. You should be ashamed of yourself.
Yes... typical responses from some "gurus" here. A lot of gamer morons who actually know FA about what's going on outside their little gaming world. They think buying a few off-the-shelf PC components and putting them together like LEGO somehow makes them a COMPUTER SCIENCE PHD.
IceVip:

Small-minded. What you're seeing is what they trained it with; one can train it with whatever the hell one wants. Why are you thinking inside the box? The people who figure out and create this kind of stuff know everything there is to know about it. Calling it a depression filter is straight-up disrespectful, especially after seeing its result. You should be ashamed of yourself.
The training approach is interesting for sure, but they showed it in the wrong way (I would pitch it more at a content/artist offline creation pipeline instead). The result shows up with a greyish depression effect, since that comes from the training dataset. It is not disrespectful, it's a fact. As is the fact that the training dataset's quality is far from good for a lighting simulation. When you train on a dataset, if the dataset is "wrong" for your task (as compressed picture formats are for lighting physics simulation), the result is wrong (like fucking up the whole albedo G-buffer). And this overshadows the improvements in the reflection results, which are pretty amazing. And yes, I passed the machine learning course at uni and I read the paper.
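On the RAW-vs-JPEG point above: one concrete issue is that JPEGs store gamma-encoded sRGB values, which are non-linear in scene radiance, so a loss computed directly on them weights errors very differently than a linear-light (radiance-proportional) loss would. The sRGB transfer function below is the standard one; the gap comparison is purely illustrative, not anything from Intel's paper.

```python
import numpy as np

def srgb_to_linear(v):
    """Invert the standard sRGB transfer function (input in [0, 1])."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

# The same 0.1 step in gamma-encoded (JPEG/sRGB) values corresponds to
# very different steps in linear light, i.e. in actual scene radiance:
dark_gap   = float(srgb_to_linear(0.20) - srgb_to_linear(0.10))
bright_gap = float(srgb_to_linear(0.90) - srgb_to_linear(0.80))
print(f"dark gap: {dark_gap:.4f}, bright gap: {bright_gap:.4f}")
# A pixel loss on the encoded values treats both steps as equally large;
# in linear light the bright step is several times bigger.
```

RAW sensor data is (approximately) linear in radiance, which is why the commenter argues it is the better fit when the target is a physically meaningful lighting result.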
geogan:

Yes... typical responses from some "gurus" here. A lot of gamer morons who actually know FA about what's going on outside their little gaming world. They think buying a few off-the-shelf PC components and putting them together like LEGO somehow makes them a COMPUTER SCIENCE PHD.
But I am pretty sure anyone here who is not an average guru3d user must always insult the others, citing their "questionable"/fake PhD degrees, to place themselves above the average user. And I am pretty sure you should know that the choice of dataset is far from a secondary factor. So next time try not to dump on other people from atop your throne of arrogance, which looks more like a toilet judging by your average posts.
Am impressed and not impressed. It's a proof of concept; the only problem is that it was not presented as one. It was presented as "hey, look over here and see what we are doing", not "look at what we are working on". Things like this are so easily taken out of context when put out there standalone, yet that is what we were presented with. Always lead with your best: make a showcase for it and then dissect it. One has to wonder why they led with what they did. They felt the street texture was their crowning achievement? The overall grey tonality made that stand out. It is a cool presentation from a technical standpoint, but the choice of what they led with set the tone for the whole thing... just awful. Their best stuff was all the way at the end; they needed to lead with that, then go to "look at our street" and move forward, and end with the beginning. The flow seemed all wrong; it played more like a documentary of how they progressed from grey to color.