We’ve seen NVIDIA’s impressive content-aware fill and noise-removal tools. Now the company has developed a generative adversarial network (GAN) that can easily customise the style of realistic faces and generate entirely new ones. That’s right: the super-realistic faces you can see in the lead image are not real at all!
If you’ve ever tried slowing down a video shot at 30 fps, you know that it becomes choppy and unusable. Nvidia has an AI-based solution that can turn your standard videos into watchable slow motion. The algorithm predicts what should come between two frames and fills in the gap. As a result, you can get perfectly usable slow-motion video even if it was shot at 30 fps.
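To get a feel for what “filling in the space between frames” means, here is a deliberately naive sketch that cross-fades between two frames. This is only an illustration of the interpolation idea, not Nvidia’s method: their network estimates motion between frames and synthesises genuinely new content, rather than simply blending pixels.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Generate intermediate frames by linear blending.

    A toy stand-in for learned frame interpolation: real models
    estimate optical flow and synthesise motion, not just cross-fade.
    """
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # blend weight for frame_b
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Going from 30 fps to 120 fps needs 3 new frames per original pair.
a = np.zeros((4, 4, 3), dtype=np.float32)        # black frame
b = np.full((4, 4, 3), 255, dtype=np.float32)    # white frame
mids = interpolate_frames(a, b, 3)
```

Even this crude blend shows why interpolation works at all: each new frame is closer to its neighbours than the originals are to each other, so motion appears smoother.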
When I first started on my journey of learning my way around Photoshop, I was a full-on Apple product fanboy; it just seemed like every creative was using an Apple machine, so I should do the same. Once I went full time, I realised I could build myself a PC that would be much more powerful for the price.
After my build was complete, the first thing I did was install the Wacom drivers and Photoshop, only to find that some things didn’t work quite as I had expected. There were major brush-lag issues, and the annoyance of Windows Ink would be a shock to anyone who had just moved over to Windows from OS X.
When the Content-Aware Fill tools were added to Photoshop a number of years back, they were hailed as being the best thing since Photoshop itself was created. Now, with a couple of clicks, you could get rid of the stuff you didn’t want in your image and Photoshop would magically replace it with what you wanted. The reality was that it didn’t always do what it said on the tin, leading to the nickname “Content-Aware Fail”.
It’s come a long way since then. It’s smarter than it was, and soon it may expand to include fill sources from Adobe’s vast stock library. Nvidia’s taking a slightly different approach, though, using deep learning AI to help fill in the gaps, and while the aesthetics are still fairly primitive, the AI seems to do a great job of recognising what’s what.
Gone are the days when a photographer had to invest hours creating a look or developing a style. And when I say “gone are the days”, you know that I will discuss either A.I. or blockchain. It’s A.I. this time.
NVIDIA just released a new tool called FastPhotoStyle, which takes a visual style and applies it to a photo. But how do you define a visual style? Easy: with another photo that has the style you want.
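The simplest version of “style from a reference photo” is transferring colour statistics: make the content image’s per-channel mean and spread match the style image’s. The sketch below does exactly that. To be clear, this is a classic colour-transfer trick used purely for illustration; FastPhotoStyle itself uses learned whitening and colouring transforms on deep features, which is far more sophisticated.

```python
import numpy as np

def transfer_color_stats(content, style):
    """Match each channel's mean and std of `content` to `style`.

    A crude illustration of taking a "look" from a reference photo;
    not FastPhotoStyle's actual algorithm.
    """
    content = content.astype(np.float64)
    style = style.astype(np.float64)
    out = np.empty_like(content)
    for c in range(content.shape[2]):
        c_mean, c_std = content[..., c].mean(), content[..., c].std()
        s_mean, s_std = style[..., c].mean(), style[..., c].std()
        # Normalise the content channel, then rescale to style stats.
        scaled = (content[..., c] - c_mean) / (c_std + 1e-8)
        out[..., c] = scaled * s_std + s_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: push a gradient image toward a flat warm-grey "style".
content = np.arange(48, dtype=np.float64).reshape(4, 4, 3)
style = np.full((4, 4, 3), 100, dtype=np.float64)
result = transfer_color_stats(content, style)
```

The appeal of defining style by example, rather than by sliders, is that the reference photo carries the whole look at once.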
This whole “computational photography” thing always felt a little bit weird. But it also intrigued me. The idea that a computer can realistically create things that weren’t actually shown in the original shot is pretty amazing. Maybe it was seeing this scene in Blade Runner as a kid that did it for me. It was pure fantasy back then, but we’re getting there.
A new “computational zoom” technology developed by researchers at Nvidia and UCSB brings us a step closer to Deckard’s reality. Essentially it allows the photographer to change the focal length and perspective of an image in post, but this description barely does it justice. It actually allows you to simulate multiple focal lengths simultaneously. Here, watch this video, and it’ll all make sense.
For those who hate having to get up from their computers and leave the house, photography can be a tricky prospect. Now, Nvidia have presented the world with a possible solution to this problem in the form of Ansel: an in-game “camera system”. OK, so it’s not really photography, but it is quite an interesting technology.
The Ansel photo mode lets you step outside of the game’s predetermined destiny for your character, positioning a fully free-form camera to compose your shot, with 360-degree stereoscopic captures viewable on a range of devices, including your mobile phone.