
Star trails made in an hour by merging over 200 shorter photos together on the computer. Compared to shooting one long picture, it gives me a cleaner photo as if I'd used a camera with a bigger sensor. Plus I can take out any frames that didn't work, and I can choose how long the trails are by choosing how many photos I merge. Phones already do all this instantly with every photo they take. Cameras will soon too, letting us choose settings after shooting. Lake Moogerah, Queensland, Australia
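The star-trail merge described above is usually a "lighten" blend: for every pixel, keep the brightest value seen across all the frames, so the stars' paths accumulate while the dark sky stays dark. Here's a minimal sketch of that idea on a made-up 8x8 "image", with an invented star that drifts one pixel per frame (the frame data and sizes are illustrative, not from the original photos):

```python
# Hypothetical burst: 5 tiny grayscale frames (lists of pixel rows).
# One bright "star" pixel drifts one column to the right each frame.
frames = []
for t in range(5):
    img = [[10 for _ in range(8)] for _ in range(8)]  # dark sky background
    img[3][t] = 250                                   # the star, moving right
    frames.append(img)

# Lighten-blend stack: keep the brightest value ever seen at each pixel.
trail = [[max(f[r][c] for f in frames) for c in range(8)] for r in range(8)]

print(trail[3][:5])  # the star's path survives as a trail: [250, 250, 250, 250, 250]
```

Dropping a frame that didn't work is just leaving it out of `frames`, and shorter trails come from merging fewer frames.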
It's called "Computational Photography", and it's going to change everything.
Until now a camera’s quality has depended on three things: the quality of its lenses, its sensor, and its engineering. For decades, SLR cameras have been the pinnacle of all three. But a fourth factor is starting to trump them all: the processing power inside the camera. It’s already happening in phone cameras: they’ve surged forward in the last 12 months to rival dedicated cameras, without improving anything in the physical camera on the phone. And it’s not slowing down. Computer processing gets better so much faster than lenses, sensors or engineering that cameras are poised to join phones in a phase of super-rapid improvement.
Size won't matter much longer
The size of a camera’s sensor has always been the biggest factor in its quality. But not for much longer. Phones with tiny sensors have already bridged the gap to their bigger cousins by merging lots of photos together. Modern phones shoot all the time, and when you press the button they reach back in time to piece together a photo from up to 15 pictures, even dealing intelligently with moving subjects. It effectively makes their sensor 15 times bigger, or multiplies their action-stopping power or low-light shooting ability by 15. In just 2 years, the size of the sensor has become 15 times less important. Next year, it’ll be 20-30 times less important.
Bigger sensors will always be better, but faster processing will soon dwarf their advantage - probably within 2 generations of camera (3-5 years). In anticipation I’m gradually replacing my bulky full-frame cameras and lenses with smaller half-frame mirrorless cameras and lenses. There's a small quality penalty today, but my back is thanking me already.
Choose your settings after taking a photo
The craft of photography will become more playful and easier to learn, because we'll refine settings after shooting. We'll be able to change the shutter speed by picking how many photos we merge, change the look of the aperture with focus stacking (many cameras already do this - just slowly), and, with 3D modelling, choose where we add blur into pictures by depth. Phones do all this already, and cameras are lagging behind. For now.
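Picking shutter speed after the fact is the easiest of these to see: if the camera recorded a burst of short frames, summing the light from the first N of them is equivalent to a single N-times-longer exposure, so the "shutter speed" becomes a slider you move later. A toy sketch, with invented numbers for one pixel watching a passing car's headlight:

```python
# Hypothetical burst: 20 frames, each a 1-second exposure of one pixel.
# A car's headlight passes during frames 5-9, adding light to those frames.
FRAME_SECONDS = 1.0
frames = [5.0] * 20                 # base light collected per 1 s frame
for i in range(5, 10):
    frames[i] += 40.0               # extra light while the car passes

def exposure(n_frames):
    """Photo built from the first n_frames: summing their light is the
    equivalent of one single n_frames-second shutter speed."""
    return sum(frames[:n_frames]), n_frames * FRAME_SECONDS

light, shutter = exposure(4)        # stop before the car arrives
print(light, shutter)               # 20.0 light units, a 4.0 s "exposure"

light, shutter = exposure(12)       # include the car's light trail
print(light, shutter)               # 260.0 light units, a 12.0 s "exposure"
```

The same frame either appears in the merge or it doesn't, which is also how an unwanted moment (a plane crossing a star-trail shot, say) gets edited out.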
Don't buy an SLR?
Digital SLRs can’t join the computational revolution because they have a mechanical mirror that gets in the way. Their lenses WILL work on future cameras, and SLRs are good value today, so there’s no reason to avoid them. Just be aware that they won’t be around forever. The future belongs to “mirrorless” cameras that will play the computational game when their processors let them.
Don't buy any camera yet?
No - now is a good time to buy a camera. Prices are lower than this time last year, and now we know that today’s lenses will be compatible with tomorrow’s cameras, there’s no reason to hold back. Just be aware that as soon as computational photography kicks in, cameras will improve at the same rate as phones and computers: not so fast that you don’t buy one, but fast enough that we don’t think of them as lifelong investments. At least for now, the old adage to skimp on cameras and buy the best lenses still applies.
Dean Holland, December 2018.