Thinking Out Of The Box



Martin Christie – Digital Imaging Lead, Colourfast

Last month saw the anniversary of the birth of William Henry Fox Talbot in 1800. Regarded as the father of photography, he saw great changes in his seventy-seven years from the technology he helped develop, which turned a written, unseen world into one whose reality could be captured.

He would have been a teenager when the first brief reports of the Battle of Waterloo appeared in newspapers days after the event, but within a few decades actual images of the horrors of the Crimean War were available for public view.
It’s hard to appreciate the effects of this visual revolution on a previous century unless you compare it with our present, when we can not only capture images but apparently create them at will. Like the Victorians, we cannot imagine where this will lead, but view it with equal amounts of excitement and apprehension.

I was following a podcast in which a photographer who had only ever worked with digital questioned whether she could have adapted to shooting film. Ironically, speaking as one who has now spent decades with both, I found understanding film was more of a hindrance than a help. Those new to electronic imaging simply accepted it, rather than having to come to terms with the fact that, unlike holding a strip of developed celluloid to the light, what you were viewing wasn’t actually real. It was just a jumble of digits that only made sense to an intelligent processor.

Once you get your head around that, you can begin to identify the patterns that determine how colour and detail are transferred to the screen and eventually to print, and why the result isn’t always what you wanted or expected. Trial and error was always an essential darkroom technique, so that’s nothing new. What has changed is how we get there.
The traditional organic learning curve involves exploring possibilities until a solution is reached, even if some patience is required. But the time taken is not necessarily wasted because often other valuable information is accumulated along the way. 

The journey can be as interesting as the destination, or as an ancient philosopher exclaimed, better to know how to go if not where, than where to go and not how. The old sage was commenting on the naivety of human ambition, but if he were with us today he might well be reflecting on those placing total faith in artificial intelligence without any understanding of how it works.

If you are a regular reader of this column you will know that this is not just a knee-jerk reaction to the new, but a fairly well thought out view based on real experience. I have been advocating some of the brilliant image editing tools that have been enhanced with AI. They have been slipping into Photoshop for some years; it’s just that now that trickle feed has become an avalanche. And just like a thick covering of snow, it is in danger of obscuring all the particular features underneath.
So why is that important if you can just press a button and don’t need to know anything about the tedious details? Well, apart from the fact that for most of the last twenty years these pages have been trying to talk you through those actions, if you are working professionally there really are some things you need to understand.

Back in the early days of digital, lots of people took up photography because it was easy – just point and click, no skill needed, and after all there was Photoshop. It took a while for them to realise that editing was actually a lot more difficult than shooting, and that it was therefore probably better to take a good photograph in the first place. And that needed expertise, not just fancy equipment. Hence, despite spanning two centuries of technology, I’m not about to be redundant anytime soon, and I’m just as ready to embrace the challenge of AI with some enthusiasm.

The one thing that is becoming more difficult is writing this column, as I’m not about to call on ChatGPT for easy answers. Instead I’m drawing on decades of experience and trying to compress cumulative wisdom into easily digestible paragraphs. And that is a very human ability, not the mechanical method of summarising the work of a lifetime into convenient bites.

That’s why you always get this lengthy preamble rather than a list of bullet points. It’s just not that simple. It’s also why it’s often easier to direct you to advice available online for particular features, as any one of them would need most of the space available here, and a year of columns would barely cover the last few months’ updates.

However, one of the limitations of the many podcasts, and especially the official Adobe ones, is that they deal with original files that are virtually perfect in the first place, so it’s like polishing the family silver. Unfortunately, for those of us working at the sharp end of print on demand, the input is more often like doggie deposits.

And that’s where AI tends to gasp for lack of virtual oxygen. It needs the vital elements of digital detail to create its visual magic, and it often needs a bit of guidance and encouragement to perform correctly. That’s why you have to view an image as the computer sees it, not just as it appears on your screen. Our brains can make creative leaps a computer cannot: just because we know what something should look like doesn’t mean the AI does.
The first simple example is of photo restoration, and there are now numerous automated versions of this for dealing with damaged and distressed originals. But they all depend on the amount of information that can be pulled out of the image, even when replacing parts that may be missing.

A familiar issue is colour fading, often when a photo has been on display in a frame rather than hidden from sunlight in an album. Traditional photographic paper is light sensitive - that’s how it works. And although the process is supposed to stop at the finishing stage, there is inevitable chemical fallout over time, especially with some of the budget mini labs, which may not have been quite so scrupulous in maintaining the condition of their developing tanks.



As a result, ironically, some more recent prints may suffer more than those of Victorian times, when price and health and safety were less of a concern. This customer’s family group is a typical example. Much of the colour has been drained, so the computer has very little reference for the clothing being worn or even some of the facial detail. It can only identify colour by comparing tonal differences, which is why blues and reds that seem obvious to our eyes are very difficult for AI to distinguish.
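
To see the problem the way the software does, remember that a colouriser working from tonal values is effectively looking at a greyscale image. The little sketch below is my own illustration with made-up pixel values, not any particular tool’s code; it shows how a faded red and a faded blue can land on virtually the same grey level once the standard luma weighting is applied.

```python
# A minimal illustration with hypothetical pixel values: why reds and blues that
# look obviously different to us can be nearly identical to software that only
# sees tone.

def luma(r, g, b):
    """Approximate perceived brightness using the standard Rec. 601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

faded_red_jacket = (140, 60, 60)    # hypothetical faded red
faded_blue_jacket = (60, 80, 170)   # hypothetical faded blue

print(round(luma(*faded_red_jacket)))    # 84
print(round(luma(*faded_blue_jacket)))   # 84 - the same grey value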

If you just rely on the software to colourise, it does a fair job but not a great one, as you can see in the before and after. The colours are a bit random and the human features are not very sympathetic. What is needed is some basic Photoshop skill to enhance the details: pulling out facial features, increasing selective hues, and generally adding contrast. Remember, digital imaging relies on the difference between one pixel and the next to determine the result. We rely on human intuition.
And a certain amount of patience.
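
For those who like to see the principle in code rather than sliders, here is a rough sketch using the free Python Pillow library. The filenames and the amounts are hypothetical examples of mine, not a prescription, and the real work in Photoshop would be done selectively rather than globally.

```python
# A rough sketch of the idea, not the actual Photoshop workflow: after an automatic
# colourise pass, add a gentle global contrast lift and a modest saturation boost.
# Requires Pillow (pip install Pillow); the filenames are hypothetical.
from PIL import Image, ImageEnhance

img = Image.open("colourised.jpg")

img = ImageEnhance.Contrast(img).enhance(1.15)   # 1.0 = unchanged; small steps only
img = ImageEnhance.Color(img).enhance(1.25)      # nudge the hues without oversaturating

img.save("colourised_tweaked.jpg", quality=95)
```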

As you can see, the difference between an image restored entirely by AI and one that is a collaboration between computer and human is quite significant. Significant enough to charge for as a premium service, rather than settling for a result that is merely acceptable, and a little disappointing, when the image has great personal value.

The second example, the girl with a motorbike, is one of my own photos, not shot in the best lighting conditions, taken more than a decade ago and two generations of digital cameras back. As you can see, the original appears very flat and featureless overall. Back then it would have taken a lot of manual manipulation in Photoshop to pull anything out of it and make it more exciting.

But because it was taken with what was then a good-quality DSLR and lens, unlike most of the customer input that is our general diet, it is quite a big file, with a lot of pixel information that is not immediately apparent to the eye. But it is there, inside the computer, and can be pulled out with careful coaxing: tweaking the shadows and highlights, balancing the whites and blacks. All of those subtle actions are human judgement, directed by experience, rather than automated by a processor. That practical skill, now combined with the potential of AI, can produce something a bit special.
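
As a very loose sketch of what that coaxing amounts to under the bonnet, the example below is my own simplified Python version, not the Photoshop adjustments themselves, and the filenames are hypothetical. It sets the black and white points and then gently lifts the shadows.

```python
# A simplified sketch, not the actual edit described above: set black and white
# points, then lift the shadows with a gentle tone curve. Uses NumPy and Pillow;
# the filenames are hypothetical.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("flat_original.jpg")).astype(np.float32) / 255.0

# Black and white points: stretch the 2nd-98th percentile out to the full range.
lo, hi = np.percentile(img, (2, 98))
img = np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# A gamma slightly below 1 lifts the shadows and midtones more than the highlights.
img = img ** 0.85

Image.fromarray((img * 255).astype(np.uint8)).save("lifted.jpg", quality=95)
```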

Once again, this is not done in a single giant leap, but in incremental steps, pausing in between just long enough to check the finer detail before the process goes too far. One of the most valuable lessons I have learned is to stop and walk away, even for a few minutes, because with fresh eyes you usually spot something you want to change.

That’s not a luxury always available for print on demand if the customer is demanding instant solutions at the shop counter, but it is an option if there is no immediate deadline. This whole edit took about an hour, though no more than half of that was spent on the actual editing. The rest of the time I could spend resetting my brain with a more menial task, like making coffee.



Obviously every image is different, but this just illustrates what can be done once you explore the true digital content of a file rather than what is apparent on the screen. All the information should be there in the metadata: how it was taken, the device, the settings etc. All are vital clues as to what you can do, or what AI can do, if you take the time to look for them in File Info, whether in Photoshop, Bridge or Lightroom.
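
If you prefer to dig those clues out programmatically rather than through File Info, the same EXIF data can be read with a few lines of Python using the Pillow library. The filename below is hypothetical, and the tags you actually get will depend on what the device and any previous editing have left behind.

```python
# A quick sketch, not a specific tool recommendation: read the clues that File Info
# shows from a file's EXIF data. Uses Pillow; the filename is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("customer_scan.jpg") as img:
    print(img.size, img.mode)              # pixel dimensions and colour mode
    for tag_id, value in img.getexif().items():
        name = TAGS.get(tag_id, tag_id)    # e.g. Make, Model, DateTime, Software
        print(f"{name}: {value}")
```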

That detail is the journey the file has taken to arrive where it is – in the box, as it were. The clues to where it can go are all in how it got there.
 