Martin Christie – Digital Imaging Lead, Colourfast
Despite all the dramatic events on earth, or maybe because of them, a lot of attention in April turned to the bigger picture - literally the whole world - as people focused on the images coming back from the Artemis spacecraft on its record-breaking journey around the moon and back again.
For those of us old enough to have seen some of the originals first time around - mostly in black and white - it’s easy to forget that for several new generations this was the first time such sights had been viewed live, and it generated some real excitement in a world full of so many illusions.
Something a number of viewers commented on was the comparison between the iconic Apollo capture of the earth, more than half a century ago, and the present-day one. The Apollo image was of course shot on film rather than digitally, yet at least at first glance it looked like the better picture. The truth, however, is hidden in plain sight, as most photographers would spot fairly quickly.
Back in 1972 the sun was above or behind the space capsule, so light was falling on the earth and illuminating it, whereas in 2026 the little white halo immediately below the sphere tells you the sun is behind it. As I am often asked to explain photographic lighting, I point out that it is just as important to understand how the sun works as how the camera does.
Most importantly, because it is a digital image, we know a lot more about it than about its famous ancestor, as NASA was kind enough to upload everything to its project website - not only the file but all the associated metadata, so we can see what, when and how. It tells us that the widely circulated picture isn’t the original, or even an edited version, but more of a snapshot - a lower-quality JPEG for speedy transmission rather than a full-resolution RAW file.
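If you fancy trying the same detective work on your own files, the embedded data is easy enough to pull out. Here is a minimal sketch in Python using the Pillow library; a recent Pillow is assumed, and the filename is just a stand-in.

```python
# Minimal sketch: reading the "what, when and how" embedded in a digital image.
# Assumes a recent Pillow (pip install Pillow); the filename is a stand-in.
from PIL import Image, ExifTags

with Image.open("orion_earthrise.jpg") as im:
    exif = im.getexif()
    # Camera make, model and timestamp live in the base IFD...
    for tag_id, value in exif.items():
        print(ExifTags.TAGS.get(tag_id, tag_id), value)
    # ...while exposure time, ISO and the like sit in the Exif sub-IFD.
    for tag_id, value in exif.get_ifd(ExifTags.IFD.Exif).items():
        print(ExifTags.TAGS.get(tag_id, tag_id), value)
```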
That JPEG-for-speed compromise is exactly the dilemma we face daily in print on demand, of course, where the impatience of the customer usually overtakes the option of providing a file suitable for print, although in this case NASA could hardly be blamed for wanting to fast-feed the fact-hungry media.
The other thing that raised eyebrows in the detail was that the image was actually taken with a ten-year-old camera. Admittedly the Nikon D5 is still an awesome bit of kit, but at a tad over 20 megapixels it would hardly look like a state-of-the-art choice on specification unless you factor in its renowned reliability and robust build, as well as an exceptional ability to record detail in low light - something you need when emerging from the dark side of the moon and an on-camera flash is not going to carry a quarter of a million miles.
All in all, a very high-profile example of judging a file not just by what you see on the screen, but by what is hidden behind it in the fine detail.
Of course Artemis carried more than one camera to capture internal and external views, so NASA would end up with thousands of images to sort through, compared with probably a few hundred, fifty years ago. You might think Artificial Intelligence would make this task easier, but there is a likely catch. AI tools are able to match similar images based on object recognition, including many individual details. But as almost every external photo is likely to include the moon or the earth, or both, it may not be much help.
And there is the possibility that a valuable image might be rejected because it is not entirely perfect. It all depends on the criteria chosen for selection. A frame might be slightly out of focus or badly exposed - still a great view, but it could end up in the trash all the same.
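To make the point concrete, here is a deliberately crude sketch of the kind of pass-or-fail test an automated cull might apply - not NASA’s method, just an illustration using the OpenCV library, with thresholds plucked out of the air. A historic but slightly soft frame would fail it just as surely as a genuine dud.

```python
# Crude automated cull: reject frames that look soft or badly exposed.
# Assumes OpenCV (pip install opencv-python); the thresholds are arbitrary.
import cv2

def keep_frame(path, min_sharpness=100.0, min_mean=30, max_mean=225):
    img = cv2.imread(path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance suggests soft focus
    brightness = gray.mean()                            # extreme means suggest poor exposure
    return sharpness >= min_sharpness and min_mean <= brightness <= max_mean

print(keep_frame("frame_0421.jpg"))  # a great but slightly blurred view would fail here
```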
Search engines have been available to us for years and have obviously improved dramatically, but there is a limitation in that they still depend upon matching familiar patterns rather than taking intuitive leaps. So they can quickly skip through millions of files to select those with a specific subject, like a dog or a cat. They can also read text and titles, so anything with the subject’s name in it can be found. But anything where the image and the text are combined, or in effect disguised as part of artwork, is challenging. Even the best OCR can struggle with non-standard characters.
All digital camera images start with metadata embedded, so they can be catalogued by time and date automatically, and if you are using a program like Adobe Lightroom, as NASA does, all the varied camera data is available, enabling pictures to be organised down to the second.
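That automatic cataloguing is nothing mysterious - the capture time is sitting in the file waiting to be read. Here is a rough sketch of filing shots into date folders, again assuming a recent Pillow, with purely illustrative folder names.

```python
# Rough sketch: filing camera originals into date folders from the embedded capture time.
# Assumes a recent Pillow; the folder names are illustrative only.
from datetime import datetime
from pathlib import Path
from PIL import Image, ExifTags

source, archive = Path("card_dump"), Path("archive")

for jpg in source.glob("*.jpg"):
    with Image.open(jpg) as im:
        stamp = im.getexif().get_ifd(ExifTags.IFD.Exif).get(ExifTags.Base.DateTimeOriginal)
    if not stamp:
        continue  # no capture time recorded, leave it for manual sorting
    taken = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
    dest = archive / taken.strftime("%Y-%m-%d")
    dest.mkdir(parents=True, exist_ok=True)
    jpg.rename(dest / jpg.name)
```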
Unfortunately for us, customers are nowhere near as organised, as most images come regurgitated via phone apps with source data stripped and random numerical titles that carry no clues.
So unless you methodically archive jobs manually, it becomes very difficult to recall them from much further back than an individual’s immediate memory. Even trying to follow the email trail can be confusing if there are multiple entries, often with duplicate files from different dates. Without those vital details, it’s not like looking for a needle in a haystack - more like one in an entire wheatfield.
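One way to hunt through that wheatfield is a perceptual hash, which matches pictures by what they look like rather than by their names or their stripped metadata. Here is a rough sketch using the ImageHash library; the folder name and the distance threshold are only illustrations.

```python
# Rough sketch: grouping visually identical files whose names and metadata tell you nothing.
# Assumes Pillow and ImageHash (pip install ImageHash); the folder name is hypothetical.
from pathlib import Path
from PIL import Image
import imagehash

seen = {}
for path in Path("customer_uploads").glob("*.jpg"):
    with Image.open(path) as im:
        h = imagehash.phash(im)  # perceptual hash: similar pictures give similar hashes
    match = next((p for p, other in seen.items() if h - other <= 5), None)
    if match:
        print(f"{path.name} looks like a duplicate of {match.name}")
    else:
        seen[path] = h
```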
It’s important to understand the difference between AI and Generative AI, the latter being the easy fix that is increasingly being adopted as the first response to any problem. AI on its own is simply machine learning, recognising familiar things and acting on experience - in many ways similar to our own way of collecting knowledge, just quicker, and in a very lateral direction.
Gen AI is a whole new concept, using that collective knowledge to create entirely new products conceived by a computer imagination rather than an organic one. Essentially it’s a shortcut, albeit an attractive one in an impatient world. But if you read last month’s column, you’ll know my argument is that the journey is just as important as the destination in terms of education.
This is completely ignored, for marketing reasons, in the massive social media promotion of editing software that will save time and do things better. I particularly cringe at straplines urging me to ‘stop wasting time’ editing the old way. Well, learning has never been a waste of time; even if it is not immediately applied, it adds to the wealth of knowledge.
Despite the hype, AI has not reinvented the wheel in terms of digital image editing. The basic techniques have been learned and tuned over the years, and improved as technology allows. Fortunately Adobe has kept the basic tools available while introducing all the sophisticated extras over the top of them rather than instead of them. Unfortunately this also means Photoshop looks very complicated, and without an understanding of those essentials it may be difficult to know where to start. As a result it’s very tempting to go for a one-click solution if it’s available.
The issue there is that in our role, preparing customer files for print, we are more often solving problems we didn’t actually create, and there’s always a danger of making things worse and wasting time and paper on samples that just don’t work out. And unless you go through due process you’ll never pinpoint where it went wrong.
The failsafe in PS is the layer system. This is vital because it means you can work on at least one copy above the original, not only to compare changes but to be able to modify or entirely remove them, and thus work non-destructively rather than generating an entirely new image. Generating one might be quicker, but if it doesn’t work you have to start again, and you haven’t gained any understanding of the process.
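The same principle applies if you ever script an edit rather than click one: leave the original untouched and treat every adjustment as a separate, removable step applied to a working copy. Here is a simple sketch with the Pillow library - not a Photoshop recipe, and the filenames and settings are only examples.

```python
# Non-destructive in spirit: the original file is never overwritten, and every
# adjustment is a separate, reversible step applied to a working copy.
# Assumes Pillow; the filenames and settings are purely illustrative.
from PIL import Image, ImageEnhance

original = Image.open("customer_original.jpg")

adjustments = [
    ("contrast", 1.10),  # gentle contrast lift
    ("colour", 0.95),    # pull the saturation back slightly
]

working = original.copy()  # the "layer" we edit; the original stays intact
for name, factor in adjustments:
    if name == "contrast":
        working = ImageEnhance.Contrast(working).enhance(factor)
    elif name == "colour":
        working = ImageEnhance.Color(working).enhance(factor)

working.save("customer_proof.jpg")  # drop or change a step and re-run to compare
```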
Creating a new layer is much more than a duplication service, as there is a whole raft of blending modes and adjustment options that enable both minor and major manipulation of an image. This is particularly important for print because of the nature of composite colour: even small adjustments barely seen on screen can make a much larger change in hue when converted to paper. I have learned that the hard way over the years, as the waste bin will testify.
Back in the dark ages of digital we started with controls for brightness and contrast, as well as exposure, and those sliders are still there in PS, though they really are crude tools best avoided. The essentials of a digital image go far beyond those basic actions, as devices can capture so much more raw data and the software is able to handle it.
The fundamental ingredients are hue, the actual colour; saturation, the amount of that colour; and lightness or luminosity, how that colour is revealed by reflected light. If we were dealing with one single colour it would be easy, but of course we are juggling a combination of millions to make up a picture, all based on three or four ingredients - RGB or CMYK. That’s why understanding this balancing act is so important.
The best way to do that is to see the effects of tweaking them individually with an adjustment layer.
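For a single colour the arithmetic behind those three ingredients is simple enough to show in a few lines. Here is a toy example using Python’s built-in colorsys module, nudging only the saturation of one colour, just to make the balancing act visible; the colour values are arbitrary.

```python
# Toy example: one RGB colour split into hue, lightness and saturation, then nudged.
import colorsys

r, g, b = 180 / 255, 120 / 255, 60 / 255    # a warm mid-brown, as fractions of 255
h, l, s = colorsys.rgb_to_hls(r, g, b)      # note the order: hue, lightness, saturation

s = min(1.0, s * 1.2)                        # 20% more saturation, nothing else touched
r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)

print(tuple(round(c * 255) for c in (r2, g2, b2)))  # same hue, simply more of it
```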
That adjustment function is now greatly improved by the ability to select specific colours in much finer detail than was previously possible. By default the overall colours are made available, but by changing the criteria to Prominent, the dominant colours are identified. This is a much greater aid than it might seem, as the human eye is easily distracted by an overall colour cast, and concentrating on that may miss more important targets. A perfect example of mind and machine working in tandem.
Improvements in other selection tools have been the most significant step forward for professional editing in PS in recent years - far more than the headline Gen AI gimmicks. Previously, the entirely manual picking of pixels was both time-consuming and imprecise.
In a previous column you may have seen the individual spokes of a motorcycle cut out by machine learning, where years ago you would have struggled to get the perfect circle of a wheel. The same precision puts colour control at the click of a mouse.
As you can see in the portrait example, you can change the impact of the foliage using the adjustment layer alone, without having to select the individual leaves or isolate the subject - although all of those are easily done now. There’s always more than one way to make changes in PS. It just depends on what you are trying to achieve, and on whether you know how to achieve it, rather than letting a computer decide for you.
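As a rough analogue of what that adjustment layer is doing behind the scenes, here is a sketch that builds a mask from the green hues alone and quietens only those pixels, leaving the subject untouched. It assumes the Pillow and NumPy libraries, and the filename and hue range are purely illustrative.

```python
# Rough sketch: tone down the foliage by targeting the green hues, with no manual selection.
# Assumes Pillow and NumPy; the filename and the hue range are illustrative.
import numpy as np
from PIL import Image

im = Image.open("portrait_in_garden.jpg").convert("HSV")
h, s, v = (np.asarray(band, dtype=np.float32) for band in im.split())

# Hue runs 0-255 in Pillow's HSV mode; roughly 60-130 covers the greens.
greens = (h > 60) & (h < 130)
s[greens] *= 0.6  # knock the saturation back only where the mask is true

channels = [Image.fromarray(c.clip(0, 255).astype(np.uint8)) for c in (h, s, v)]
Image.merge("HSV", channels).convert("RGB").save("portrait_foliage_toned.jpg")
```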