DIGITAL IMAGERY
PIXEL PROPHET
EXPECT THE UNEXPECTED!
Later this year, we shall see the official introduction of self-drive cars on our roads, initially for “evaluation” purposes, but, as Martin Christie suggests, there is no doubt that this trend is coming whether we are asking for it or not.
It’s all about taking control from fallible human beings and putting faith in intelligent machines. The logic is very simple: most accidents are caused by human error, so removing the human element should make the roads safer and, perhaps as part of the pressure to push the technology forward, make traffic flow more smoothly on increasingly congested roads.
However, those promoting this technology seem to have more faith in it than those who work with intelligent machines on a daily basis — like us.
And you have to ask, in a working environment increasingly at the mercy of technical foibles and failures, whether extending this prospect to the transport system is entirely wise. When that development also puts lives at risk, rather than causing mere inconvenience, it is surely worth more detailed investigation.
Regular readers will know I often start with an apparently unconnected rant about some topical subject, on the basis that we do not work in a bubble and anything that affects the outside world can in some way affect us, especially when it involves machine learning, which has a more direct presence in our own working space.
Some years ago, I took a dig at earlier attempts at teaching cars to think for themselves, with companies like Google finding their vehicles struggled to cope with the complicated calculations of entering a roundabout, or any unpredictable event. Machine learning is all about predictability, which is why your computer keeps offering you things you liked before. But human beings are by their very nature unpredictable; that’s how we learnt and evolved, from banging rocks together to make fire to building space rockets.
So I accept that Artificial Intelligence has progressed dramatically since then, as it is used to some extent in almost every device we use daily. But there is a big difference between applying it to your shopping selection and putting your life, or the lives of others, in its hands. Surely common sense would predict that self-driving cars could only work if all vehicles were driven by the same logic, and that mixing them with erratic individual behaviour is a recipe for disaster. And I don’t just mean other cars. The rush to promote two wheels in city centres, for example, has produced cyclists who conform to no Highway Code rule or any sense of orderly direction, and, much worse, electric scooters with untrained and inexperienced riders often travelling faster than regular traffic.
Even as a pedestrian, I now have to use all my extended senses to walk safely to work across town, with potential hazards likely to assault me from any direction, and this must be a similar experience across the country. It would seem the best advice for survival is that offered by the late Douglas Adams in his manual for humans in an alien world, ‘The Hitchhiker’s Guide to the Galaxy’: expect the unexpected. This is not a philosophy artificial intelligence will embrace easily or quickly.
It is, however, a condition we have had to get used to on a regular basis in print on demand, which is how I justify this little preamble ramble: it’s best to expect the worst, because then you may be pleasantly surprised rather than the other way round.
If your default mode is to assume that whatever the customer has sent you is wrong in some way, then you won’t be disappointed: you will instinctively look for a fault before the job has gone too far along the print process, rather than assume the file is correct and discover later that it doesn’t fit the dimensions or print properly. We all make mistakes; it’s all about reducing the chances of them happening before they get expensive in time and paper.
Of course, reducing mistakes is also the impeccable logic behind the self-driving car. But when you hear that the driver, now a passenger, will be allowed to watch movies while being whisked through busy streets or zoomed along bustling motorways, and only take control ‘when it becomes necessary’, the flaw is obvious: it relies on AI to judge when it is necessary.
By their very nature, accidents are unpredictable; otherwise, there would be fewer of them. Many are avoided by simple experience and judgement that anticipate they may occur, often only split seconds before they do. Removing both that second sight and the reaction time would seem to be total folly. It is a human condition as important in the workplace as it is on the road.
People already have so many distractions in their daily lives. As we know, they are not even concentrating on their email messages while also scrolling through social media, browsing shopping choices and admiring cute kittens. The one-job-at-a-time philosophy is a distant memory, but it used to go with the advice that it was necessary to do at least one thing properly.
Last month I used the example of a customer who had supplied several similar images, which all turned out to have been taken on different devices, with varying quality and clarity. This is increasingly common, as people generally have no specific memory of how an image originated and often no idea how it was edited or saved. The first call in checking images should therefore be Adobe Bridge or Lightroom (other software is available), to examine the finer details contained in the metadata.
This is your first line of defence against the unexpected. In this case, although the images appeared to be almost identical at first glance, under more careful scrutiny, they weren’t.
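If you have a whole batch to check, the same details can be pulled out with a few lines of scripting. Here is a minimal sketch, assuming Python with the Pillow library installed; the filenames are hypothetical:

```python
# Sketch: dump key EXIF fields from a batch of JPEGs to spot
# images that came from different devices. Assumes Pillow is
# installed (pip install Pillow); filenames are hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

FILES = ["artwork1.jpg", "artwork2.jpg", "artwork3.jpg"]  # hypothetical

for path in FILES:
    with Image.open(path) as img:
        exif = img.getexif()
        # Map numeric EXIF tag IDs to readable names
        info = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
        print(path)
        for field in ("Make", "Model", "DateTime", "Software"):
            print(f"  {field}: {info.get(field, 'not recorded')}")
        print(f"  Pixel size: {img.width} x {img.height}")
```

Two files reporting different Make or Model values, or wildly different pixel dimensions, were almost certainly not taken on the same device, however alike they look on screen.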
This month I had an artist who had had several images taken of different original artworks over a period of time, but again each on a different camera, phone or the like. The artist had no idea which was which until I spilt the beans, and we resolved it would be better to have them scanned properly rather than rely on “a friend who has a good camera.”
The problem is that so many people use digital imaging daily without understanding how it works and what is involved. Our eyes are amazing: unlike a camera, we have two of them, and our brains are able to adjust quickly to lighting conditions and focus on the subject. That stereoscopic vision, an essential part of animal survival, allows us to determine perspective and dimension.
These are all concepts the simple single-lens camera struggles to interpret, regardless of how clever the manufacturers claim it to be, so it’s common to get some optical distortion of anything that has straight lines, like buildings, or straight edges, like anything square or rectangular. So an image may not quite fit the shape you want to print it in.

[Images: camera focus points missing the subject; a grid is better than doing it by eye; adjusting to straighten the perspective; the Brighton Pavilion in all its glory]
Photoshop has some really clever tools to deal with this now, so you can alter the perspective not only of the whole image but of any particular part or parts of it. Hidden under the familiar Crop tool is a more sophisticated one called Perspective Crop, which works by selecting four corner points on the image and automatically correcting the distortion. This may not be a perfect fix, but any small sections left out can be dragged back into the main one using the Warp tool, which can pick out small pieces to make minor adjustments.
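For anyone curious about what Perspective Crop is doing behind the scenes, it is conceptually a four-point perspective transform. A minimal sketch of the same idea, assuming Python with OpenCV and NumPy installed; the corner coordinates and filenames are hypothetical:

```python
# Sketch: straighten a skewed rectangle by mapping four corner
# points onto a true rectangle, conceptually what Photoshop's
# Perspective Crop does. Coordinates and filenames are hypothetical.
import cv2
import numpy as np

img = cv2.imread("building.jpg")

# The four corners of the distorted shape, picked out by eye
# (top-left, top-right, bottom-right, bottom-left)
src = np.float32([[120, 80], [980, 140], [1010, 900], [90, 860]])

# The rectangle we want those corners to become
width, height = 900, 780
dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

# Compute the 3x3 perspective (homography) matrix and apply it
matrix = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, matrix, (width, height))
cv2.imwrite("building_corrected.jpg", corrected)
```

Photoshop works out the equivalent of that matrix for you the moment you drop the four corner points.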
It gets better, as there is a tool that combines both functions under Edit > Puppet Warp, enabling you to select particular items to be placed in proportion to the background. If you click on the tool, handy tips should pop up to show you how it all works, as it’s too much to cover on these pages. There are also lots of helpful online tutorials that show how easy this can be, and which can demonstrate it far better visually than in print.
Be aware, though, that all the virtual experts will show you the way they prefer to do things, so you need to consult a few to pick a favourite.
Like most things in Photoshop, there are always several ways of doing things, so it’s best to choose what suits your working practice. At its simplest, you can convert the image into a layer, or preferably a smart object, and Ctrl-dragging with the pointer gently adjusts the overall skew.
Selecting a grid view as an overlay helps get this accurate if you are not confident doing it by eye or with guides. The size of the grid can be altered in Preferences > Guides, Grid & Slices, so you can adjust to the very millimetre if you want to be that precise.
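If you like to know what is going on underneath, that skew is nothing more mysterious than a shear transform. A rough sketch, again assuming Python with OpenCV; the shear factor and filenames are hypothetical:

```python
# Sketch: a skew is just a shear transform, which is what you are
# judging by eye against the grid when you Ctrl-drag. The shear
# factor and filenames here are hypothetical.
import cv2
import numpy as np

img = cv2.imread("leaning_building.jpg")
h, w = img.shape[:2]

shear = -0.08  # small horizontal shear to pull verticals upright
M = np.float32([[1, shear, 0],
                [0, 1,     0]])

straightened = cv2.warpAffine(img, M, (w, h))
cv2.imwrite("straightened.jpg", straightened)
```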
The most basic failings in customer images are usually exposure and focusing, again caused by relying on automated settings rather than manual intervention. Focusing a camera lens has always relied on sharp edges for definition. Pre-digital film cameras would often have a small split screen in the centre of the viewfinder, which could be used to align straight lines. Digital lenses still need the same aid; otherwise, they are searching for something solid in a world of fog. On more professional cameras, you can change the points the camera will focus on, or the spread of those points. On simpler devices, it may just be a matter of choosing between landscape and portrait by picking a tree symbol or a head. Choosing the right one will help.
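That hunt for something solid can even be measured. Contrast-detect autofocus essentially scores edge contrast and moves the lens until the score peaks; one common rough measure is the variance of the Laplacian, sketched below, assuming Python with OpenCV (the filename and threshold are hypothetical):

```python
# Sketch: score image sharpness the way contrast-detect autofocus
# roughly does, by measuring edge contrast. The filename and
# threshold are hypothetical.
import cv2

img = cv2.imread("customer_photo.jpg", cv2.IMREAD_GRAYSCALE)

# The Laplacian responds strongly at edges; its variance is a
# crude but effective sharpness score (higher = sharper)
score = cv2.Laplacian(img, cv2.CV_64F).var()

print(f"Sharpness score: {score:.1f}")
if score < 100:  # hypothetical threshold, tune per image size
    print("Warning: image may be soft or out of focus")
```

A low score does not prove an image is out of focus, a plain blue sky scores low too, but it is a quick flag for files worth a closer look before they hit the press.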
Exposure, that is, the length of time taken to capture light from the subject, has a similar problem: without some guidance as to which part of the light is the correct one, the camera will make up its own mind from a pre-programmed bit of software developed several years before in a laboratory on the other side of the world.
Left to its own devices, the camera will try to let in the maximum amount of light to get the best result, but as it cannot control the external environment and make it any brighter, it is likely to opt for the slowest shutter speed and largest lens aperture it can. Slow shutter speeds and wide apertures have always been the cause of poor image capture in photography, and no amount of digital magic and manipulation will do anything to change that.
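The trade-off the camera is juggling is simple arithmetic: each stop, halving the shutter time or closing the aperture by one f-number step, halves the light. The standard exposure value formula makes the gap plain; a small sketch with hypothetical settings:

```python
# Sketch: the exposure sum an automatic camera is juggling.
# Exposure value EV = log2(N^2 / t), where N is the f-number and
# t the shutter time in seconds; any pair giving the same EV lets
# in the same light. The settings below are hypothetical.
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number ** 2 / shutter_s)

# Bright day: plenty of light allows a fast, sharp combination
print(round(exposure_value(11, 1/500), 1))   # EV ~ 15.9

# Dim interior: to gather enough light the automatic camera is
# forced to a slow shutter and wide aperture, and blur creeps in
print(round(exposure_value(2.8, 1/15), 1))   # EV ~ 6.9
```

Those two scenes sit about nine stops apart; it is that gap, not any fault in the software, that drags an automatic camera toward the slow shutter speeds and wide apertures just described.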
You can now do quite a lot of exposure compensation and colour adjustment in post-production — much more than you could ever do with film in a darkroom. But you are still only making a poor image a little bit better, maybe enough to keep a customer happy at least.
The frustration is often having to deal with what could have been a really great image had it been taken properly in the first place, rather than trying to rescue a disaster. Even if the customer goes away content, it’s not very satisfying as a printer.
It is, in effect, the least-worst option rather than the best possible outcome, which rather neatly takes us full circle to the driverless car. It will face similar choices in the event of a potential accident: which is the most acceptable result? Personally, I’d still prefer a human decision in that choice.