Last year, AI rocked the world by producing artificial images that were, by the standards of the time, incredibly realistic. That is, of course, provided we weren't too fussed about how many fingers each person had on their hand, or about the accuracy of other details in the image! Then the later release that year of the image generator Flux 1.0 raised the quality bar to new heights.
This year, the pace of change and improvement has been equally dramatic.
I spoke recently at the club about the coherence problems I experienced when making ‘More Tea Vicar’ back in the early months of 2025: keeping the imagery consistent from shot to shot was almost impossible.
Perhaps no surprise, then, that the biggest shift in AI this year is coherence. Those early AI tools could generate accurate single shots, but keeping characters, faces, and styles consistent across a whole sequence was beyond them.
That’s changing. The latest generation of models can now produce multi-shot clips where the same character carries through different angles and expressions, edging us ever closer to real filmmaking. (The accompanying images were generated using DALL·E 3 and Flux Context.)
The big question now isn’t “Can AI make videos?” – it’s “How are we actually going to use it?”
Will video clubs lean on AI for scriptwriting, storyboarding, filler shots, whole scenes, music, dialogue… or maybe something else entirely?
The technology is already here – the real challenge is deciding how we want to use it. And that means taking the time to separate the truly useful from the “not so good.”