In a recent update to its popular image editing app Photoshop, Adobe has implemented a new feature called Depth Blur (currently in beta). It’s designed to reduce the depth of field via AI after you’ve taken a shot, and that raises a burning question: Will we still be using fast lenses once this feature works flawlessly (and possibly finds its way into video)?
Much of what makes an image stand out comes from the lens it was taken with. All the light has to pass through that lens. Also important, though perhaps less so in today's world, are the sensor and the image pipeline behind it. That may be a bold statement, but think about it:
Almost every current (photo) camera shoots very decent raw or processed image formats, and the same is true for video: flat log profiles are available in almost every semi-pro camera. In some ways, the sensor has become more of a clinical capture device than a look-defining tool.
With lenses, I’d say the opposite is true: the coating used matters, zoom vs. fixed focal length matters, sharpness varies enormously between lenses, and, perhaps most importantly for some, so does the aperture (or T-stop for cine glass). Really fast lenses are usually very expensive, no-compromise, high-end pieces of technology.
So does AI put an end to all this? Just shoot at f/5.6 and add a cinematic shallow depth of field in post-processing? Adobe’s Depth Blur might lead the way here.
Adobe Photoshop Depth Blur
As seen in a quick review video by Andy Day of Fstoppers, the new feature is far from perfect, but it’s certainly a start, and the “better” the input photo is for the AI algorithms, the better the result will be. Also important to note: the AI in the cloud is… learning.
And since everything related to computers gets better with time, I think this feature is just a glimpse into the future. Light field technology (anyone remember LYTRO’s giant light field rig?) exists, but it still isn’t quite ready for prime time.
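To make the idea concrete: a depth-driven blur boils down to blurring each pixel more the farther it sits from a chosen focal plane. The sketch below is my own minimal illustration of that principle, not Adobe's actual algorithm; it assumes a per-pixel depth map already exists (the part Photoshop estimates with AI), and the function names are hypothetical.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur using wrap-around shifts (illustrative, not fast)."""
    r = k // 2
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (k * k)

def depth_blur(img, depth, focus=0.5, strength=2.0):
    """Blend sharp and blurred pixels by distance from the focal plane.

    `depth` is a per-pixel map in [0, 1]. Pixels at depth == focus stay
    sharp; pixels far from the focal plane receive the full blur.
    """
    blurred = box_blur(img)
    w = np.clip(np.abs(depth - focus) * strength, 0.0, 1.0)
    return (1.0 - w) * img + w * blurred
```

Real implementations use a variable-radius, bokeh-shaped kernel instead of a single blend, but the core dependency is the same: the result is only as good as the estimated depth map, which is exactly why "better" input photos give Depth Blur better results.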
Bokeh and beyond
Adding nice bokeh is one thing, but where does this end? You can already digitally change a person’s age and retouch all kinds of “problems”. So is Adobe’s Depth Blur really the only thing AI can help us with? I don’t think so.
The question is: Will cinematography become even more computationally intensive than it already is? Just point and shoot, then add a cinematic look and feel? To be honest, I find that a bit scary, even though I’m a tech nerd and like to play with expensive cameras, lenses, and gear in general. It’s getting harder and harder to tell whether an image is actually real, computer-generated, or just a bit tweaked. Totally artificial backdrops are one thing, yes, but actors?
In Martin Scorsese’s The Irishman (here’s an article on VFX de-aging in that movie), Robert De Niro already gets a fair amount of CGI treatment, but to my eye it wasn’t always 100% convincing. That will change quickly, especially as a program like Adobe Photoshop brings more and more AI technologies into the mainstream: the models will keep training, and over time other (video) applications will also benefit from the sheer mass of training images behind them.
So, no more need for decent glass? I think we’re safe for a little while, but then again, film stock didn’t exactly survive either, did it? Maybe the future of vintage glass is just that: vintage. For snapping a photo or recording a video the “old-fashioned” way. We’ll see what the AI-driven future has in store for us.
Perhaps cameras and lenses will just become capture devices for AI-driven post-production, a path we’re definitely already on in terms of cameras. And maybe lenses will follow suit.
What do you think? Is AI the future for everything photo/video? Or do you prefer the what-you-see-is-what-you-get approach? What do you think about the Depth Blur feature? Share your thoughts in the comments below!