
DIGITAL IMAGERY

PIXEL PROPHET IN THE EYE OF THE BEHOLDER

Martin Christie reflects on last month’s column on Artificial Intelligence and the machine learning upon which it is based, his train of thought encouraged by a couple of things that caught his attention.

In light of the increasing use of AI in all manner of processes, the point I was making was that it should be considered an invaluable aid, rather than an absolute direction — a guide and not a dictator of facts. Like any tool, unless you understand how it works and what it does, you may not end up with the results you want, and, moreover, not even realise it.

What drew my attention was a particularly irritating advert for Alexa — that little talking pillbox of wisdom that so many people have adopted like it was an amusing pet rather than a substitute for thinking.

A witless father is harassed by his precocious child for historical facts and resorts to Amazon’s soothing vocal database to save his educational reputation, and worse still, smugly pretends he knew the facts all along.

I have nothing in particular against Alexa or any other smart device, rather against the marketing promotion that it will make you more intelligent, more creative and more successful than you would be without it. The logical progression is that you will gradually become unable to do anything without electronic assistance. Of course, we do so many things every day that are done better, quicker and safer with the aid of computers; the whole pace of life, social and economic, now depends upon them. But the potential flaw lies in the very nature of machine learning itself. In simple terms, it is basically linear in nature: because there is A, there is B, therefore C and so on. It can get to XYZ, and probably very quickly, but it has to flash through the alphabet to get there. Until the binary system upon which it is based is made redundant by an alternative, that will continue to be the pattern, albeit a highly advanced one.

The process is not able to make the exceptional lateral jumps intelligent organisms have evolved to make through a mixture of experience, intuition and perception via their various senses. But researchers are trying to fill in the gaps of electronic education. Enter deep learning, not a new idea but an extension of machine learning: an attempt to mimic the workings of the human brain by learning from its mistakes as well as its successes, copying its intricate internal actions.

You may not have heard of it, but you are already living and working with it; in fact, you may already be suffering from it, as it is lurking in the background of your computer, smartphone and many other modern devices. It may be a well-intentioned invasion, desperate to make itself more useful by anticipating what you like and what you may want to do next. Still, the demands on the memory banks and computing power of your machine are considerable.

Deep learning is an intrinsic part of almost all current software, so if you have updated anything in the last few years, you are running with it. You don’t have a choice — there was no tick box to check that you wanted or needed it.

In last month’s column, my contention was that, however smart the technology, it still needed human intelligence to cast an eye over its results and check their efficacy. The intention of deep learning in the longer term is to perfect machines that require no human supervision at all. The device will, in effect, be marking its own exam paper.

Replacing human judgement in roles that are dangerous or tedious is nothing new and largely commendable. It has generally made the workplace safer and less arduous, and the speed of complex calculations has made previously insurmountable tasks possible. But a device that, in effect, has its own agenda is one that needs to be viewed with caution.

It is not the stuff of science fiction, with robots intent on dominating the human race. Quite the reverse: they are being developed to serve us and to be content to do so. But relying on binary logic to determine our best interests and desires has a potential flaw. When a machine decides what we want, even what we need, that is the moment we stop having any choice at all.

If you are new to this column, you may already be wondering what all this has to do with digital imaging; regular readers will indulge me, as they know I will eventually get round to my point of reference. The simple fact is that we no longer work in an isolated bubble, if we ever did. Work has invaded our home environment and vice versa, and our lifestyle, as well as our customers’, has changed forever. Their choices and expectations are very much dominated by a small piece of apparently intelligent metal and plastic they carry in their hands and consult at all times. That reality has more relevance to the overall activities of on-demand print than the precise detail of a particularly useful tool in Photoshop.

PHOTOSHOP’S AI GUIDE

In fact, Photoshop has very much taken over the functionality of this column, using its own AI to guide you on how to do things and how to use its options, rather than leaving you to search for tips and tutorials online. It is a much faster alternative to trawling through links that now, more frequently, direct you to things they think you ought to know or like rather than things you actually want, for all the reasons previously discussed.

There was always a Help option hiding, mostly unnoticed and unused, in the top toolbar. Not only has the reference library behind it been expanded, but a more intuitive guide is on hand by pressing Control (Command on Mac) and F for Find, which will help you source the right tool and learn how to use it. Somewhat ironically, this Control F discovery reveals the very human possibility that has always been available in Photoshop: that there is more than one way of doing things, rather than letting the software dictate what is right and what is wrong.

But don’t worry, there is no danger of Adobe’s machine learning, which is called Sensei, replacing me anytime soon. There are still many more mysteries in the digital world to be discovered, more by accident than design, and solved more by thinking outside the box than within it.

Sensei is a term used in martial arts, meaning teacher, and suggests that it is your sage and guiding hand in all things creative, which of course, in many ways it is. But who teaches the teacher? That’s more than a philosophical question and one which will continue to challenge AI.

One of those challenges is facial recognition, intended simply to identify individual faces and widely used in security and social media. It is purely based on biometric data: measurements of the regular dimensions of a face. It might just as easily identify something that closely matches those clues, such as a feature that looks like a face in the bark of a tree, or alternatively fail to recognise a face when matching features are missing, if the person is wearing a mask, for example. Amazon had to abandon selling it to police forces, not because it impinged on human rights, but because it just didn’t work, or at least couldn’t be relied on.

As humans, we rely on many more senses to identify people, read their expressions and emotions, and communicate with each other. Birds that nest in great colonies can find their young amongst thousands of other apparently identical ones because they can spot and sense more than the colour of their plumage. If it was down to facial recognition technology, they would probably starve.

In a more frivolous way, facial recognition is also used to change features, making individual faces more appealing or amusing by adding bunny ears or doggy tongues to a profile, for example. But apart from entertainment, the technology is potentially useful for other reasons. If you understand it, you can apply it intelligently rather than randomly, as most people tend to do.

As an old fashioned photographer who grew up with film, I found image capture was more intuition than science. You had to learn what the camera saw because you couldn’t see the results until the film was processed and printed. But then that’s how you learned: working backwards, analysing the mistakes and trying not to make them again. In many ways, that sounds like how deep learning is supposed to work, and in many ways, it does. However, history teaches us that mistakes can sometimes be happy discoveries, and technically correct answers may not always be the ones you are really looking for. It’s a process called serendipity: something useful found by chance.

That’s very much how I’ve applied myself to digital photography and printing and why this ramble is the theme of this month’s column. I appreciate that those new to the industry may find the technology a little bewildering, as I did at first. There seemed to be so many experts explaining ICC profiles, calibration and colour gamut, all sounding so knowledgeable and correct. Even a little device measured colour and told you precisely what Pantone number it should be. All very clever and scientifically accurate.

But at the end of the day, it’s what the customer expects that is the right colour, even if the technology says otherwise because, in the real world, there are margins for error or perception that computers don’t allow for.

It’s not a perfect world. So I’ve always fallen back on the old darkroom principle of when in doubt, do a test strip and then work backwards to your source rather than the other way around.

If there’s nothing wrong with the printer, then there’s something wrong with what you’re sending to the printer, which is usually the case when dealing with customers’ files rather than ones you have nurtured yourself. You never know what buried treasures lie within that digital file until you try to open it up.

A BASIC LESSON

So this month’s very basic lesson in printing is about the simplest Photoshop tool, the colour picker, often overlooked because of the more sophisticated image management options now available. But it is the quickest way to make some basic adjustments, to see which direction you need to be heading in and whether what you see on the screen bears any similarity to what prints out.

The colour picker shows the values in RGB and CMYK, as well as LAB, for any chosen pixel or group of pixels (remembering you can alter the number sampled by clicking on Sample Size at the top of the workspace). It also has HSB, which stands for Hue, Saturation and Brightness.

Hue is the actual colour, Saturation is the amount of it, and Brightness is just that, bearing in mind that anything printed will be reflected light, not backlit.

But you can easily review and balance out these choices by moving the sliders back and forth, and see how brightness, or the lack of it, makes a difference to the shade you want.

The most common issue in print is that the output is too dark and dense compared to the customer’s expectation, because they have never seen it as anything more than a little bright screen image. Adjusting brightness is the simplest way to correct this while keeping the original colour, without meddling with adjustments at the printer end. You can do that by eye, without having to consult any artificial intelligence.
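For readers who like to see the arithmetic behind the sliders, the same keep-the-hue, lift-the-brightness adjustment can be sketched in a few lines of Python using the standard colorsys module, whose HSV model corresponds to Photoshop’s HSB. This is a minimal illustration only; the starting colour and the 25% brightness lift are arbitrary example values, not anything prescribed by Photoshop.

```python
import colorsys

# An example dark blue, as RGB fractions (0.0 to 1.0).
r, g, b = 0.10, 0.20, 0.80

# Convert to HSV (Photoshop's HSB): hue, saturation, value/brightness.
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Lift brightness by 25%, capped at 1.0, without touching the hue,
# mimicking a correction for a print that comes out too dark.
v = min(1.0, v * 1.25)
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)

# Converting back confirms the hue and saturation are unchanged;
# only the brightness has shifted.
h2, s2, v2 = colorsys.rgb_to_hsv(r2, g2, b2)
```

The point of working in HSB rather than RGB is exactly what the sliders show: you can change one perceptual quality, brightness, while leaving the colour itself alone.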

Hue vs Saturation and Brightness
