A couple of years ago I wrote about Google's AI camera, Google Clips ($250). This is a device that you plonk on the kitchen table (or hang around your neck), and which automatically takes photos whenever your favourite landscape, child or pet steps into frame.
I compared it with a Nikon DSLR ($3,300) and proclaimed that AI devices would consume the lower end of the market while creative photographers would cling on to their interchangeable lenses.
On reflection I'm not surprised by the news this week that Google Clips has been withdrawn. As we all know, AI relies heavily on machine learning, which requires huge volumes of data and experiments to accurately predict everything from eye disease to your next favourite artist on Spotify.
Google tried to teach its camera about composition, subject focus and other skills using photo libraries, but even then the results were disappointing.
When that didn't work, Google asked users to sit down with their cameras at the end of the day and go through the pictures highlighting bad and good images.
At this point you're probably asking the same question as me. Why shell out $250 for a smart camera that you need to teach (and teach and teach) until it can take a half-decent snap of your Pomeranian?
So if AI isn't ready yet, what is driving creativity in consumer photography? Smartphone cameras have accelerated in quality as predicted. They're pretty much the only 'wow' factor at every Galaxy, iPhone and Pixel launch.
But apps are more important, I think. I'm constantly staggered by the storytelling creativity on Instagram Stories and, more recently, TikTok, where memes spread at lightning speed, challenging users to invent new content and outperform the rest of the community.
As for the high-end, professionals will cling to their Canons and Nikons, but if you're a consumer, why spend $1,000 on a bulky camera when a smartphone is equally versatile and fits in your pocket?
DSLRs have played a large part in my life for about 15 years now, but I can't see myself upgrading my old Nikon D90. Not when I'll be buying a new smartphone in about six months' time. Pixel 4, anyone?
We know what AI is good at doing: making a decision with a clearly correct answer. If there's a dog in an image, Google can identify it. The dog is a fact. But for Google to figure out when your dog is doing something you want to see again, like the guilty face it makes after digging through the garbage, it needs a ton of outside information that won't be the same for every user. In other words, AI doesn't work when it has to quantify something qualitative. That spans from whether a person is trustworthy to whether a photo is good or bad.
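A toy sketch makes the distinction concrete (the photo names and labels below are invented for illustration, not anything from Google Clips): an objective label like "contains a dog" is the same for every user, while a subjective label like "good photo" conflicts between users, so a single shared model has no consistent ground truth to learn from.

```python
# Toy illustration: objective labels agree across users;
# subjective "good photo" labels do not.

photos = ["garbage_face", "sleeping", "mid_blink", "fetch_catch"]

# Objective fact: every user agrees whether a dog is in frame.
contains_dog = {p: True for p in photos}

# Subjective judgment: two users rate the same photos differently.
user_a_good = {"garbage_face": True,  "sleeping": False,
               "mid_blink": False,    "fetch_catch": True}
user_b_good = {"garbage_face": False, "sleeping": True,
               "mid_blink": False,    "fetch_catch": True}

# A single global model only has a consistent target where users agree.
agreed = [p for p in photos if user_a_good[p] == user_b_good[p]]
disputed = [p for p in photos if user_a_good[p] != user_b_good[p]]

print(agreed)    # → ['mid_blink', 'fetch_catch']
print(disputed)  # → ['garbage_face', 'sleeping']
```

The disputed photos are exactly where Clips needed each owner to sit down and hand-label their own images, which is the per-user training chore described above.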