
ANOTHER INFINITE NUMBER OF MONKEYS

Back in 2018, Martin Christie began this column with reference to an infinite number of monkeys, a theory first proposed by mathematicians over a hundred years earlier, in that rarefied atmosphere where simple adding up becomes philosophy. The premise was that given enough time and a sufficient number of typewriters, a monkey bashing away at the keys at random would eventually write a Shakespeare play. It’s a fanciful concept, although he has it on the good authority of no less than Professor Brian Cox that, given the nature of infinity, it is not only possible but actually likely.
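For the curious, the arithmetic behind the claim is easy to sketch. The calculation below is an illustration of my own, not from the original column: assuming a simplified 27-key typewriter (26 letters plus a space bar), each keystroke hits a given character with probability 1/27, so the odds of a short phrase appearing by chance can be computed directly.

```python
# Illustrative back-of-envelope maths for the "infinite monkeys" claim.
# Assumption: a 27-key typewriter (26 lowercase letters plus space),
# with every key equally likely on each press.

KEYS = 27

def chance_of_typing(phrase: str) -> float:
    """Probability that one run of len(phrase) random keystrokes
    reproduces the phrase exactly."""
    return (1 / KEYS) ** len(phrase)

def expected_attempts(phrase: str) -> int:
    """Expected number of independent runs before the first success
    (mean of a geometric distribution with p = (1/KEYS)**n)."""
    return KEYS ** len(phrase)

phrase = "to be or not to be"  # 18 characters, spaces included
print(f"one-shot probability: {chance_of_typing(phrase):.3e}")
print(f"expected attempts:    {expected_attempts(phrase):.3e}")
```

Even this single half-line of Hamlet needs on the order of 10^25 attempts on average, which is why the theorem only works with infinity on its side.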

Given the finite nature of the universe, as far as we know, it is unlikely there would be a stage, or even a planet, available to perform it on. It also assumes that the primates would actually think it worthwhile to pursue a literary career rather than hanging around in the trees, eating bananas and generally having a good time in evolutionary terms. A couple of more recent experiments introducing chimpanzees, the smartest and quickest learners, to laptops produced no more interest from them than something to chew or urinate on. There is, of course, an important difference between something produced by a random sequence of actions and something created by intelligent thought and learning. It’s the difference between banging two rocks together by chance to make a spark and using the discovery to design a jet engine. While many animals have learned to make simple tools to aid survival, only the human brain has been able to extend that talent to more complicated creations, whether functional or merely artistic. Until now, that is. A good part of the discussion on artificial intelligence involves whether it is intelligent in a human, intuitive sense or merely a predictive machine, albeit a very sophisticated one. Can it produce something truly original, or just an alternative version of what has already been?

The question has prompted heated discussion among artists and other creatives, although that discussion often fails to acknowledge that any creation of the mind is also largely the product of a library of sights and other sensory influences stored in memory. So what is the difference when a machine uses its enormous computing power to trawl a massive bank of resources and generate an innovative image? The issue is surely how much of a difference an individual can make, but it is one that is unlikely to be resolved, as any conclusion will always be in danger of being overtaken by technology.

Much of the broader debate about AI, now that it is very much in the public domain, seems to be led by those who are apparently unaware they have already been taking advantage of it for some years in services and devices. It has come to the fore now that they realise there may be consequences they hadn’t anticipated at the time.

Regular readers will know that as far as digital imaging is concerned, I am both an enthusiastic user and a serious critic in equal measure. My contention is that, like any useful tool, it’s important to know how to use it, even if you can’t get your head around all the complex algorithms. If you drive a car, you do not need to know the intimate workings of the internal combustion engine and its reciprocating parts, but you do need to be aware of what will happen if you keep your foot on the accelerator!

As far as print is concerned, we have been using AI-assisted tools for some time, mostly in selecting and replacing pixels to produce a seamless image, compared to the clumsy cloning and patching that was the only previous option. This applies to colour as well as shapes, which are all the same electronic patterns as far as the processor is concerned. How it interprets them can be varied and sometimes apparently random. That’s why some manual override is usually a safer bet.

A more important reason is that no one tool in Photoshop does everything perfectly — there are always at least six ways of doing something, and each one has its benefits and disadvantages. Only experience or trial and error will reveal what they are. A one-click fix-all is still a big step away, at least to the professional eye, if you are attempting something to match reality rather than a digital fantasy.

Generative Fill

At the time of writing, the next big and much-anticipated upgrade, introducing Generative Fill, had yet to arrive, but it is expected in the second half of the year and may well be part of our next column after the summer break.

Unlike previous upgrades, this one has been extensively previewed and available as a beta version for some time, but perhaps bugs are still being ironed out. After all, it takes the concept of content-aware to a whole new level, though it may not be the magic bullet that solves everything, at least for those of us who have to turn screen illusion into hard paper reality.

GenFill is linked to the Contextual Bar, which appeared in the previous update and is actually an adaptive toolbar that aims to provide the features you want exactly where you need them instead of having to search for them — like putting a spanner close to hand for the next nut.

But unlike previous filling aids, it doesn’t just sample pixels in the existing image: it can draw on an almost infinite reference library of online images to source its patch and then blend it — more or less seamlessly — into the original. And this image search can be driven by a text prompt describing the features required rather than a visual comparison. So you can type absolutely anything into the menu, and GenFill will try to serve it up. It’s content-aware on steroids, as Jeremy Clarkson would probably describe it.

You may have spotted one of the limitations already. Like many tasks these days, it relies on a good internet connection, not only for sourcing the image but for the processing power in Adobe’s servers, which would cripple the average user’s desktop. As we always say, other editing programs are available, but the effect will be even more notable for those trying to use the cheaper, alternative versions of this intelligent software on their phones. Even with Photoshop, the patches can often be low-resolution, screen-dimension fills that, in large files, cause blurred edges and other odd anomalies.

If you are trying to create a totally artificial, unnatural vista, this is probably not an issue, but when using GenFill to remove unwanted features from images, particularly for large print output, it obviously needs to be handled more gently. As with the Healing tools and Content-Aware Fill, the trick is not to ask it to do too much at a time. Small dabs are better than big brush strokes, and if you are requesting actions by word, be very specific about what you want it to do. Be careful what you wish for, as they say.

More of this when we have the full version shortly, but be warned that more images edited by computer are likely to be coming your way, as well as claims that you don’t need any professional skills to perform magic.

Of course, if you haven’t upgraded your computers to 2023 performance specifications, you may struggle with the latest updates, which are both memory- and graphics-dependent. If you are lucky, you may be able to upgrade both to keep up. It’s worth checking what your existing system will handle, as a number of basic office desktops can be improved.

Check out what the geeks on YouTube recommend as well as what the manufacturers tell you, as many basic machines can be tuned up into cheap gaming computers.

Even without GenFill, the last Photoshop upgrade, 24.5, included a really handy feature that almost slipped under the radar: the Remove tool, which Adobe very much undersold. Yes, it removes things, but in order to do that, it has to sample the pixels around the object and replace it with something appropriate; otherwise, it would just leave a big hole. So in effect, it becomes another option alongside healing, cloning and content-aware.

If it doesn’t pop up in your brushes, you may need to find it in the toolbar and refresh. It’s not perfect, or at least it’s no better than the task you give it to tackle, but if you are familiar with the other brushes and how they work, you will be able to achieve things that previously would have been impossible or extremely difficult.

If you look at the ostrich behind bars and then breaking out, you may think it’s just one of those tricks you can do on your phone with the press of a button. But it’s actually a high-resolution image which has been modified almost perfectly for print — not the sort of instant-fix gimmick you see online. It’s exactly the sort of transformation customers will think you can do in a second, but it’s worth knowing how complicated it is, and how much professional skill you can offer, if they are prepared to pay for it.

Similarly with the much-anticipated GenFill: until we’ve had time to test it with various images and difficulty levels, the verdict will have to wait.

But it does some things really well and others rather badly. As a quick challenge, I used an Adobe Stock AI-generated image of the monkey with the laptop and then manually superimposed the screenshot of Hamlet, because it looked like the best match.

When I asked GenFill to place a Shakespearean image instead, it got a bit confused, presumably because of the many options at its disposal. I probably should have been more specific instead of letting the software decide what was best. Similarly, when I asked for a human to replace the chimpanzee, it got a little confused putting the constituent parts together and came up with a figure that looked like it had been jumbled up in a teleport from some science fiction. Lesson learned: make sure the computer knows exactly what you want. I know customers often expect us to be mind readers, but artificial intelligence hasn’t yet installed that ability.

You may also spot that AI struggles with very human features like hands and noses, something that no doubt will be corrected in future versions. However, it is likely that the human touch will still be needed to supervise and perhaps correct the machine-created results. Otherwise, monkeys may well randomly press the right keys to generate something resembling Shakespeare, but whether anyone will want to watch it is another matter.

*Text generated entirely without artificial assistance
