NVIDIA’s researchers came up with an impressive algorithm that’s able to generate realistic faces. Some of them are so realistic that you may have a hard time figuring out that they were computer-generated. If you’re up for a challenge, there’s now a website where you can test how many fake faces you can distinguish from real ones. It can get more difficult than you may think.
Nvidia unveiled their new Creator Ready Drivers (CRD) for the Titan, RTX 20, GTX 10 and GTX 16 series graphics cards at GTC 2019 last week and now they’re ready to download.
Nvidia claims they offer increased performance and greater stability with the apps that many photographers, video editors and other creatives use on a daily basis. Applications like Adobe Photoshop CC and Premiere Pro CC both see up to a 9% performance increase, according to Nvidia.
We’ve seen NVIDIA’s impressive content-aware tool and noise-removal tool. They have recently developed a generative adversarial network (GAN) that generates new realistic faces and makes it easy to customise their styles. That’s right, these super-realistic faces you can see in the lead image are not real at all!
If you’ve ever tried slowing down a video shot at 30 fps, you know that it becomes choppy and unusable. Nvidia has an AI-based solution for that which can turn your standard videos into watchable slow motion. The algorithm predicts what should come between two frames and fills in the space between them. As a result, you can get perfectly usable slow motion videos even if they were shot at 30 fps.
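Nvidia’s approach uses a neural network to predict the in-between frames, but the basic contract is easy to sketch. Here is a minimal, naive illustration in Python: it synthesises intermediate frames by simple cross-fading between two frames. This is not Nvidia’s algorithm (their network estimates motion, so objects actually move rather than ghosting), just a toy version of the idea of filling the space between two frames.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Naively synthesise frames between two input frames by linear
    cross-fading. Nvidia's network instead predicts motion between the
    frames, but the overall goal is the same: invent the frames that
    should sit between frame_a and frame_b."""
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # fractional position between the frames
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Turning 30 fps footage into 120 fps means inserting 3 frames per gap.
a = np.zeros((4, 4), dtype=np.float64)
b = np.full((4, 4), 100.0)
mid = interpolate_frames(a, b, 3)
print(len(mid))       # 3 new frames per original pair
print(mid[1][0, 0])   # the middle frame sits halfway between: 50.0
```

Cross-fading like this is exactly what produces the blurry “ghosting” you see in cheap slow-motion filters; the point of the AI approach is to replace the blend with a motion-aware prediction.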
When I first started on my journey of learning my way around Photoshop, I was a full-on Apple product fanboy; it just seemed like every creative was using an Apple machine and that I should do the same. Once I went full time, I realised I could build myself a PC that would be much more powerful for the price.
After my build was complete, the first thing I did was install the Wacom drivers and Photoshop, only to find that some things didn’t work quite as I had expected them to. There were major brush lag issues and the annoyance of Windows Ink, which would be a shock to anyone having just moved over to Windows from OS X.
When the Content-Aware Fill tools were added to Photoshop a number of years back, they were hailed as being the best thing since Photoshop itself was created. Now, with a couple of clicks, you could get rid of the stuff you didn’t want in your image and Photoshop would magically replace it with what you wanted. The reality was that it didn’t always do what it said on the tin, leading to the nickname “Content-Aware Fail”.
It’s come a long way since then. It’s smarter than it was, and soon it may expand to include fill sources from Adobe’s vast stock library. Nvidia’s taking a slightly different approach, though, using deep learning AI to help fill in the gaps, and while the aesthetics are still fairly primitive, the AI seems to do a great job of recognising what’s what.
Gone are the days when a photographer had to invest hours creating a look or developing a style. And when I say “gone are the days”, you know that I will discuss either A.I. or blockchain. It’s A.I. this time.
NVIDIA just released a new tool called FastPhotoStyle, which takes a visual style and applies it to a photo. But how do you define a visual style? Easy: with another photo that has the style you want.
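FastPhotoStyle itself is a deep-learning method (it uses whitening and colouring transforms plus a smoothing step), but the contract it exposes is simple: content photo in, style photo in, restyled photo out. Here’s a crude sketch of that contract in Python using per-channel statistics matching — a much simpler technique in the spirit of Reinhard colour transfer, not Nvidia’s actual algorithm:

```python
import numpy as np

def transfer_style_stats(content, style):
    """Crude 'style transfer': shift each colour channel of the content
    image so its mean and spread match the style image's. FastPhotoStyle
    does something far more sophisticated, but the input/output contract
    is the same: content photo + style photo in, restyled photo out."""
    out = np.empty(content.shape, dtype=np.float64)
    for c in range(content.shape[-1]):
        cc = content[..., c].astype(np.float64)
        sc = style[..., c].astype(np.float64)
        scale = sc.std() / (cc.std() + 1e-8)  # match the channel's spread
        out[..., c] = (cc - cc.mean()) * scale + sc.mean()  # and its mean
    return np.clip(out, 0, 255).astype(np.uint8)

# A warm, low-contrast style photo pushes the content's colours warm.
rng = np.random.default_rng(0)
content = rng.integers(0, 256, (8, 8, 3)).astype(np.uint8)
style = np.full((8, 8, 3), 200, dtype=np.uint8)
restyled = transfer_style_stats(content, style)
print(restyled.shape)  # same shape as the content image
```

Statistics matching like this only moves global colour and contrast around; the appeal of a learned approach is that it can transfer much richer local texture and tone while keeping the result photorealistic.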
This whole “computational photography” thing always felt a little bit weird. But it also intrigued me. The idea that a computer can realistically create things that weren’t actually shown in the original shot is pretty amazing. Maybe it was seeing this scene in Blade Runner as a kid that did it for me. It was pure fantasy back then, but we’re getting there.
A new “computational zoom” technology developed by researchers at Nvidia and UCSB brings us a step closer to Deckard’s reality. Essentially it allows the photographer to change the focal length and perspective of an image in post, but this description barely does it justice. It actually allows you to simulate multiple focal lengths simultaneously. Here, watch this video, and it’ll all make sense.