STILL NOT DEAD YET!
Well, that escalated quickly! I only had to do a piece on Stable Diffusion and its ability to create artificial images from pure text alone, and within a few weeks it was making tabloid headlines and being debated on news channels worldwide. Martin Christie scribes…
Well, I can’t take full credit for the exposure. It was the IT wizard who cheekily (in his own words) entered an AI image into an international photography competition and won, only to refuse the award to expose the hoax. Of course, the popular media has made a meal of the story as if they had suddenly discovered it. Still, the reality is that we have been taking advantage of artificial intelligence for years. It only becomes an issue when it challenges the comfort zone of what we believe to be real. And that was the intention of the submission in the first place, as the creator explained, if you bothered to read further than the sensational strap lines.
Computer learning has been an integral part of image processing over the last decade; the only thing that has changed is the pace of development, which is linked to the power and complexity of modern devices. Consider that it took a little over sixty years, on the relatively pedestrian learning curve of the last century, to get from the first manned powered flight to putting a man in a rocket on the moon. A modern computer could probably do the maths in sixty days, and those to come in the same number of seconds.
Attempts to limit AI are pointless; the genie is already a long way out of the bottle. The important thing is to recognise where it is used and how it works so you can identify or even anticipate potential flaws in its logic. At the moment, for example, it’s not very good at replicating some of the more subtle human features such as eyes and fingers — things we use to recognise familiar faces and features. That’s why it’s much better at fantasy creations or ones that don’t have to be specifically accurate.
Something that puzzles me about the competition entry is why the judges didn’t look for the metadata — the smoking gun, as I like to call it — that reveals the authenticity of a digital image. I use it instinctively as a guide to any unknown photo file, and many photography websites won’t publish pictures now unless the camera data is embedded. This tells you when it was shot, the device that shot it, and all the details of how it was originally captured. All of these details help you build an idea of how good or bad the image is likely to be if you have to print it — even if it was ‘taken with a very good camera’.
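Checking for that smoking gun can even be automated. As a minimal sketch only (not the tooling any photography site actually uses), this Python function scans a JPEG byte stream for the Exif APP1 segment in which camera data is stored:

```python
import struct

def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream carries an Exif APP1 segment."""
    if data[:2] != b"\xff\xd8":              # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                  # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xDA:                   # start of scan: header is over
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                      # APP1 segment holding Exif data
        i += 2 + length                      # skip to the next segment
    return False
```

A file that comes back False has either never held camera metadata or has had it stripped somewhere along the way — either way, a prompt to ask questions before you print.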
A few years back, I was somewhat sceptical about a high-profile ad campaign from a phone company that boldly claimed its latest offering meant the death of photography, with images of cameras and kit being consigned to the scrapheap. Well, as of now, I am confident that it is not dead yet. Proper photography may yet return to being a respected profession whose output retains real value.
What AI may do is reduce the vast number of random images snapped on mobiles and rarely saved to see the light of day because the user will just ask the phone to do all the hard work for them and produce something out of an electronic imagination. Stand by for even cuter kittens and more stupid stunts. These will no doubt fill social media and keep viewers entertained, but what will happen when they want to print from them remains to be seen. The flaws and the fantasy may well be revealed, and the embarrassing absurdities brought into stark relief. If AI learns by the example of what we like to look at, it will most likely create illusions of how we would like to look, like so many dreamily filtered selfies, rather than how we actually look. Because it wants to serve us, it will show us what it thinks we want to see, much like online search engines now direct you to things they think you will like.
The technology is new, but the principle has been around in image reproduction for many years, most famously in history with Holbein’s flattering portrait of Anne of Cleves that fooled Henry VIII into a hasty marriage, just as hastily regretted when he viewed the lady in real life. At least she kept her head despite the deception. In more recent times, film stars had their faces and figures airbrushed to appear in magazines.
As an old photographer who has transitioned from film to digital, I am well tuned to the positives and negatives of both and have been prepared to adapt and, in fact, embrace the new tools with some enthusiasm, because I remember the limitations of solid originals and the hours spent in the darkroom trying to recover one perfect print. In contrast, I recently listened to some young photography students mentally slashing their wrists over the ‘impending death’ of the camera and the prospect of having no future after all their hours of study. Having been to several graduate shows, however, I think most of them would already benefit from more than a little intelligent assistance in the correct use of composition, exposure and focus. I have never been sure how many would survive in the commercial world anyway.
That brings us conveniently to using the technology available most effectively, and that is a challenge in itself as it is all moving at such a pace. As printers, the illusory images of AI do not concern us; the integrity of digital files does, because we have to turn them into something real, and while the actual print process of putting inks onto paper has changed very little, how we manipulate the information sent to the print machine has.
In previous columns, I’ve gone through the basic controls in the Photoshop workspace, most of which have stayed familiar through all the various iterations of Adobe’s flagship and are similar to those in other photo editing programmes. In simple terms, before basic AI was added, most of these affected the whole picture in one way or another. So altering one particular colour would affect the hue of another, as we know with composite colour printing. Likewise, isolating a specific colour was tricky with so many pixels with so many different shades of similar ones. Similarly, isolating a single shape often required laborious cutting paths with the pen tool and, even then, rarely provided a satisfactory selection.
Overall adjustment of exposure or colour is often referred to as global adjustment, and you have almost certainly used it to tweak an image on screen towards how you know it will look in print. AI, however, enables a far more selective approach because it is not distracted by looking at the image and making a subjective judgement. Instead, it sees only the complicated mathematical connections below the surface and can rearrange them in minute detail. This is referred to as local adjustment.
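The difference is easy to picture in code. As a toy illustration only — a grayscale image reduced to a grid of numbers, nothing like Photoshop’s actual implementation — a global adjustment touches every pixel, while a local one touches only the pixels under a mask:

```python
def global_adjust(pixels, gain):
    """Global adjustment: one exposure gain applied to every pixel."""
    return [[min(255, round(p * gain)) for p in row] for row in pixels]

def local_adjust(pixels, mask, gain):
    """Local adjustment: the gain touches only pixels the mask selects."""
    return [[min(255, round(p * gain)) if m else p
             for p, m in zip(prow, mrow)]
            for prow, mrow in zip(pixels, mask)]

image = [[100, 200],
         [ 50,  10]]
sky   = [[True, False],      # a crude hand-drawn "selection" mask
         [False, True]]

print(global_adjust(image, 1.5))      # every pixel brightened (255 is clipped)
print(local_adjust(image, sky, 1.5))  # only the masked pixels change
```

What AI adds in practice is not the arithmetic, which is trivial, but the mask itself: working out which pixels belong to the sky, the skin or the hair without anyone tracing an outline.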
Some of this is within the scope of the main selection tools in Photoshop, but the real gems are hidden from plain sight under Filter > Camera Raw Filter, often referred to as Adobe Camera Raw or ACR. This opens up an entirely new dialogue box with a host of extra tools, which will look a little daunting at first but is well worth getting familiar with. It is hidden partly because it would otherwise overload the already busy Photoshop workspace, but also because it has a history in the development of digital photography.
Most of the images we receive from customers, however created, are JPGs or PNGs. These have already been preprocessed before they get to you, whether when shot by the device or edited afterwards, so there may be little you can do with them other than the aforementioned global adjustment. Camera RAW files were a game changer for photographers as they were essentially untouched by any electronic imagination and, therefore, could be dramatically modified on the computer afterwards. Of course, you still had to take quite a good picture in the first place, but a good many of the parameters, like exposure, colour balance and the like, could be infinitely corrected to suit. At the same time, Adobe had also introduced an entirely new programme, Lightroom, with identical features but specifically aimed at photographers and camera files in quantity. It is less used for printing as it only works with pixel files, not PDFs or Word documents, for example, and the print dialogue is a little pedestrian and often awkward to manage compared to PS.
Almost all quality cameras and even some good smartphones can now shoot and save RAW, but most users don’t know about it or can’t cope with the complications, so it might have remained a hidden secret had Adobe not made it possible to adjust all picture files in ACR. So although they may not have the full range of editing capabilities, preprocessed files can still take some major alterations, not only to make them look good but also to print better. And that’s what will please your customers and keep them coming back and recommending you, even if they do claim most of the credit for the photography or the camera.
ACR now relies heavily on AI, and it has developed rapidly because it has such a wealth of image editing to learn from and because it is aimed at the more patient professional rather than the impatient amateur; it is designed to concentrate on the fine details more than the overall canvas. In particular, the selection of masks to isolate subjects or colours, previously quite tedious and often inaccurate, can now be performed at a stroke with preset options, while still leaving room for manual adjustment. In addition, where previously the software might just identify the shape of a person, it can now separate the hair, eyes, teeth and even eyebrows, as well as the clothing, to give almost infinite control of the output.
Similar intelligence is used in the newly introduced Neural Filters, which make more automated decisions on colouring or sharpening with limited manual override, but like ACR, these are continually being updated and added to. They may well be a quick fix to improve or restore an image or may be completely the wrong choice. But that is the Achilles heel of machine learning. We learn from customers, for example, and what they ask for. But we also learn with human experience that what they ask for is not always what they need or what they expect. What makes the difference is that little bit of guidance in the right direction.
Next month I will show you some of my favourite and most useful lessons that I and Adobe have learned together.