
DIGITAL IMAGERY


PIXEL PROPHET: I AM NOT A ROBOT

Inevitably, the media has been dominated by events in Downing Street this last month. But, for Martin Christie, the standout highlight was the video of Larry the cat seeing off an urban fox, showing that at least one inhabitant of No. 10 knew how to justify his salary.

The uncertainty over who was actually in charge from day to day was otherwise rather overshadowed by at least one historic event in the halls of power, and one which may very well have more long-term implications than the fate of a 'here today, gone tomorrow' politician, to quote a famous interview.

The House of Lords, that venerable estate often referred to as God's waiting room and sometimes more attuned to nineteenth-century ways than those of the twenty-first, had a genuinely extraordinary guest speaker — a robot. Not just any robot, but Ai-Da, claimed to be the world's first ultra-realistic humanoid. Ultra-realistic, seriously? Despite the attempt at an empathetic face and an occasional blink, with a mechanical arm and a claw hand, she/it looked a lot more like a subtle attempt at assimilation by the Borg. I doubt Captain Jean-Luc Picard would have been fooled for a second if she had stood on the bridge of the starship Enterprise.

The Lords, however, gave this thing due respect and recognition to the extent that when it was asked a particularly awkward question, it fell silent and just looked at the floor, leaving many to assume it was actually the new prime minister.

In fact, Ai-Da is an artist who, according to her creator, uses artificial intelligence to produce original work from a learning algorithm. There is no doubt that some of the creations are visually quite stunning. Still, as I discussed last month, there are many in the arts, including photography, who are uncomfortable with this process and question whether it can be genuinely creative. It is a debate that will run and run, as it has philosophical as well as technical aspects.

We are all familiar with computer-generated images on screen, where wobbly sets and dodgy stunts have been replaced with impressive artificially created characters and backgrounds. But those techniques have now enabled facsimiles of people so convincing that it is hard to distinguish the truth from the fraud, at least at first glance. So while many are created for amusing cameos on social media, there are, of course, troubling implications for a future of separating fake from reality. It not only affects the visual arts; I keep getting paid-for electronic ads for automated blog creators that can spew out algorithm-based nonsense in seconds like a sausage factory.

I suspect some of the press releases we receive use these systems as I can never decipher what they are about, containing as they do too many adjectives and very few descriptive nouns. There’s one promotion with a real person’s head and the title ‘This tool writes copy for you’, proving that a machine lacks a sense of irony or understanding of the double entendre.

At this point, I should assure readers that this column is not only written by a human being but checked for sense and spelling by another one and laid out by a third. Now, this may seem like a great waste of human resources, but at least if you don't like it, there is someone you can complain to and hold to account.

I may be shouting against the wind, but if you have followed this column, you will know that my argument is not against the use of AI itself but against the possibilities of its misuse.

In the last century, the science fiction writer Isaac Asimov introduced laws of robotics to be installed in the brain of every thinking machine, the first being that no robot could harm a human or, through inaction, allow one to come to harm. Stories and films that followed speculated that the machines would become so smart they would plot against their creators and take over. Unfortunately, the reality is that we are much more likely to let them take over through ignorance and laziness.

We have to deal with both of those conditions, working as we do at the sharp end of customer demand and with products often created by customers using AI in one form or another. Unlike other retail shops, which have products on the shelf for people to choose from, our output doesn't exist until we turn it into hard-copy reality. So at the counter we have to deal with people with sophisticated mobile phones but no idea how to find anything or save it at sufficient quality to make it usable, or alternatively people who are apparently technically aware but lacking any real understanding. The latter are likely to be more impatient, as they don't understand why you can't just press a button and make it happen. It doesn't seem to matter whether they are young or old; they share the same qualities in equal measure. How often do you hear the phrase 'but it looks good on my phone'?

It's life, Jim, but not as we know it…

Of course, there are also a good many people — though we probably don't see too many of them — to whom the whole concept of sending information by electronic means is complete anathema. How they survive in a world where even buying a bus ticket can be a technical challenge is a wonder. Nevertheless, they will arrive with an old carrier bag of dog-eared originals of various materials, expecting you to turn them into reproduction gems.

There is another kind of customer — one who had something done; they can't remember when, but you must have it on file somewhere, and they are sure you will remember it. It is less of an issue nowadays, when people have the files in emails stored on their phones — even if they don't know where they are. Previously, the archives were written to dozens of discs, catalogued and stored for future reference, but sorting through them to find anything was laborious. Of course, it's easier now, when a single hard drive can contain thousands of files, but the problem of finding the correct one remains.

This is one area where AI can definitely be a friend rather than a foe with much more intelligent searching potential, as long as you have a system in place for the logical saving of items in the first place.
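To show what 'a system in place for logical saving' might look like in practice, here is a minimal sketch in Python. The folder scheme (year/month/customer_job) and the function names `archive_path` and `find_jobs` are my own hypothetical conventions, not anything built into Bridge or Lightroom; the point is simply that a predictable structure makes even a dumb keyword search useful.

```python
import os
from datetime import date

def archive_path(root, customer, job, when=None):
    """Build a predictable folder path: root/YYYY/MM/customer_job."""
    when = when or date.today()
    return os.path.join(root, f"{when.year:04d}", f"{when.month:02d}",
                        f"{customer.lower().replace(' ', '_')}_{job}")

def find_jobs(root, keyword):
    """Walk the archive and return every folder whose name mentions the keyword."""
    keyword = keyword.lower()
    return [os.path.join(dirpath, d)
            for dirpath, dirs, _ in os.walk(root)
            for d in dirs if keyword in d.lower()]
```

With that scheme in place, a customer who 'can't remember when' only needs to remember their own name for the job folder to turn up.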

Both Adobe Bridge and Lightroom have excellent ways of organising files in shared folders and catalogues for ease of access and future reference. These are being further refined to make it easier to share with work colleagues as a group, rather than swapping memory sticks or emails, which will inevitably result in duplicate copies of the original that may be modified or flawed.

Workflow is one of the big things Adobe has been introducing, but it's hiding away in one of the workspace options in Bridge alongside Essentials, Libraries and Output. I recommend you have a look and use the little button that says Learn More to guide you through how it can help. It can certainly make work easier by organising regular tasks and, by the nature of its automation, leaving less chance of making mistakes. And, of course, there is the additional safety check that you can view all the information about the file — when it was created, modified etc. — as well as its shape, size and format.
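That 'safety check' of file facts is also easy to automate yourself. The sketch below, assuming plain Python and nothing Adobe-specific, gathers the basics (name, size, modified date, format) that you would want to glance at before accepting a job at the counter; `file_info` is a hypothetical helper name.

```python
import os
import datetime

def file_info(path):
    """Gather the basic facts worth checking before accepting a file for print."""
    st = os.stat(path)
    return {
        "name": os.path.basename(path),
        "size_kb": round(st.st_size / 1024, 1),          # tiny files rarely print well
        "modified": datetime.datetime.fromtimestamp(
            st.st_mtime).isoformat(timespec="seconds"),  # which version is this?
        "format": os.path.splitext(path)[1].lstrip(".").lower() or "unknown",
    }
```

A 40 KB 'photo' flagged here is exactly the sort of thing that 'looks good on my phone' but won't survive enlargement.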

In October, Adobe introduced a whole raft of updates across the Creative Cloud programmes, so it's worth checking that you are both up to date and aware of what they are; the announcements often keep some features very useful for us in the shade and highlight more eye-catching items.

While all programmes have a print output option, none are specifically designed around a commercial printing environment, which is why, particularly with large format, production houses would use RIP (Raster Image Processor) software to supervise workflow. But much of that can now be done in Photoshop or Lightroom, not only in laying up but in often quite subtle adjustments of colour, density etc. to refine the final print. That way, if there is an issue with the output, it is much easier to trace whether the problem lies with the printer or with what you have sent to it. There is also less that can go wrong, whether through selecting the wrong option or the software reading the wrong instructions.

Lightroom won’t be most people’s first choice for printing as it was primarily designed for photographers processing lots of images in one session rather than individual customers’ files. Nevertheless, as a photographer, I have been using it from the very first edition, usually swapping into Photoshop for the final edit, as there were a number of things LR just didn’t do. Now, however, with version 12, there aren’t many things that it doesn’t do.

One of the crucial advantages of editing any file in LR is that it is non-destructive — something you could only achieve in PS by using multiple layers, which became complex and were easily deleted or merged, making any further adjustment impossible. You must have faced the dilemma of the dreaded question 'do you want to save changes' before closing a file, only to find you made the wrong choice and wiped out all the history of painstaking work and any possibility of revision.

Lightroom saves you all of that, and unless you actually want to overwrite the original file, it makes you save it as another version, or several other versions, so that it is easy to go back and start again or, with so-called virtual copies, to branch off at any point in the process. But because it didn't use layers, it wasn't possible to exploit some of the tricks you could perform in PS, like using selections and masks to isolate people and features and edit them individually. These possibilities have been appearing in dribs and drabs in recent versions; now there's a virtual flood. Not only can you select people, but their hair, eyes, teeth etc. are all automatically recognised by smart learning algorithms, and not just people: anything familiar can be isolated and modified. Of course, anything a little more unusual or less well defined will need a bit of human attention to detail, but this will save a lot of time, as customers often want someone taken out of a group shot or made more prominent when they are in shadow or bleached out by sunlight.
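The idea behind non-destructive editing and virtual copies can be sketched in a few lines of plain Python. This is purely an illustration of the principle, not Lightroom's actual internals: the `VirtualCopy` class and its methods are hypothetical, and the 'image' is just a dictionary of settings. The key point is that adjustments are recorded as a recipe and only applied to a copy on export, so the original is never touched and every branch stays revisable.

```python
from copy import deepcopy

class VirtualCopy:
    """Non-destructive edit: the original is never modified; only a
    list of adjustment instructions is stored and replayed on export."""

    def __init__(self, original, steps=None):
        self.original = original           # untouched source settings
        self.steps = steps or []           # the recipe of adjustments

    def adjust(self, name, amount):
        self.steps.append((name, amount))  # record the step, don't apply it
        return self

    def branch(self):
        """A 'virtual copy': same original, independent recipe."""
        return VirtualCopy(self.original, deepcopy(self.steps))

    def render(self):
        """Apply the recipe to a copy of the original on export."""
        img = dict(self.original)
        for name, amount in self.steps:
            img[name] = img.get(name, 0) + amount
        return img
```

Because each branch carries its own recipe against the same untouched original, there is no 'do you want to save changes' moment where history can be wiped out.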

It's not just restoring valuable old photos, which is a premium service; it opens up the possibility of almost while-you-wait tweaks that will make all the difference to images so often badly composed and exposed by the nature of instant photography. Where you would previously have rolled your eyes when they said, 'can you do anything on the computer', knowing it would take hours of painstaking correction, now you can say, 'sure, no problem, give me half an hour!'

One of the other things that was difficult in LR compared to PS was cloning and healing. Now the awkward patching tool has been replaced with proper content-aware capabilities, which is a massive step forward, as moving from one program to another to perform relatively similar tasks was a pain.

As ever, there's far more than I could possibly detail in these pages, so do check out the new features on the Adobe website, the helpful prompts in the programmes themselves, and the many recommended YouTube tutorials, a few of which are listed below.

Matt Kloskowski: https://youtu.be/Hu2FX3HwNSw
Photoshop Cafe: https://youtu.be/-RGuC_D48K4
Julieanne Kost: https://youtu.be/xnECvQ0wNBU

Now it’s much easier to select people or objects in a photo without tedious cutting out
