How Camera Clubs Can Address the AI Dilemma

A small but growing number of clubs and competitions are beginning to address the AI problem by separating competitions by editing philosophy rather than trying to police specific tools. This approach removes much of the ambiguity surrounding AI tools.

The model typically uses three clearly defined categories.

1. Straight Photography (or “Nature / Documentary”)

This is the strictest category, designed to preserve photographic authenticity.

Typical rule

The image must represent the scene as photographed. No elements may be added, removed, or relocated.

Allowed adjustments

  • exposure and contrast

  • colour balance

  • cropping

  • sharpening

  • noise reduction

  • minor dust spot removal

Not allowed

  • removing objects (people, branches, signs, etc.)

  • compositing

  • sky replacement

  • generative AI

  • significant cloning or healing

This model is widely used by organisations such as the Photographic Society of America and many nature competitions.

2. Altered Reality / Creative Photography

This category allows extensive editing and compositing, but still requires photographic origin.

Typical rule

All visual elements must originate from photographs taken by the entrant.

Allowed:

  • compositing multiple photographs

  • sky replacement (from the photographer’s own images)

  • object removal

  • creative colour grading

  • extensive cloning

Still not allowed:

  • text-prompt AI image generation

  • generative AI objects

  • stock images

  • images captured by someone else

The philosophy is:

creative manipulation is allowed, but the raw material must still be photography.

3. Open Digital Art / AI Art

Some organisations are now adding a third category.

Here almost anything is allowed:

  • generative AI

  • text prompts

  • stock elements

  • illustration

  • heavy digital painting

This category recognises that AI imagery is becoming a legitimate creative medium, but it is no longer treated as photography.

Why this system works

Instead of debating specific software tools, it focuses on creative intent.

Straight: Faithful representation.

Altered Reality: Creative photography.

Digital / AI Art: Image creation.

This structure prevents endless arguments about individual tools like:

  • generative remove

  • content-aware fill

  • AI upscaling

because the category's intent is clear.

An example rule set some clubs are adopting

Category A — Authentic Photography

Minimal editing only; the scene must remain unchanged.

Category B — Creative Photography

Extensive editing allowed, but all elements must be photographic captures by the entrant.

Category C — Digital / AI Art

Any digital techniques allowed.
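The three-category rule set above can be thought of as a simple lookup: each category either names the techniques it permits, or permits everything. The sketch below is purely illustrative — the category labels and technique names are hypothetical, not an official club rule encoding.

```python
# Hypothetical encoding of the A/B/C category rule set.
# A value of None means "anything allowed" (Category C).
PERMITTED = {
    "A: Authentic Photography": {
        "exposure", "colour balance", "crop", "sharpen",
        "noise reduction", "dust spot removal",
    },
    "B: Creative Photography": {
        "exposure", "colour balance", "crop", "sharpen",
        "noise reduction", "dust spot removal",
        "compositing own photos", "object removal", "extensive cloning",
    },
    "C: Digital / AI Art": None,
}

def is_allowed(category: str, technique: str) -> bool:
    """Return True if the technique is permitted in the given category."""
    allowed = PERMITTED[category]
    return allowed is None or technique in allowed

print(is_allowed("A: Authentic Photography", "object removal"))  # False
print(is_allowed("B: Creative Photography", "object removal"))   # True
print(is_allowed("C: Digital / AI Art", "generative fill"))      # True
```

The point of the structure is that a technique is judged against the category's intent, not against an ever-growing list of banned software tools.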

Why this is becoming necessary

AI tools are improving extremely quickly. Within a few years, it will be very difficult to tell whether:

  • an object was cloned away

  • an object was generated

  • a sky was real or synthetic

By separating categories, clubs avoid the need to detect AI usage.

An additional rule that simplifies judging

Some competitions now include a declaration such as:

“Entrants must be able to supply the original capture file(s) for all photographic elements used in the image.”

This protects authenticity in the first two categories.

In effect, the emerging international standard is:

  • photography categories → no AI-generated scene content

  • AI categories → anything allowed

This preserves photography while acknowledging that AI image creation is a different medium.

The Use of AI in Camera Club Competitions

I asked ChatGPT a detailed question about the authenticity of photographs in camera club competitions. In brief, I discussed the available tools and how their use might affect authenticity. Here is the answer.

The uncertainty you describe is widely recognised in photography organisations and competitions. The key issue is whether a tool merely modifies existing pixels or generates new image content. Documentation from Adobe and commentary by photographic educators clarify this distinction fairly well, even if the ethical rules used by camera clubs have not yet caught up. Below is a structured interpretation based on current documentation and common competition rules.

1. The critical distinction: analytical AI vs generative AI

Most modern software uses AI in two fundamentally different ways:

1. Analytical (non-generative) AI

• Examines the pixels already in the image.

• Adjusts, selects, or enhances them.

• Does not create new image content.

Examples:

• AI subject selection

• Sky masking

• Noise reduction

• Lens blur simulation

These tools analyse the image and then modify the existing pixel data. They do not invent objects or scenery. (matiash.com)

From an authenticity standpoint, these are usually considered equivalent to traditional editing.

2. Generative AI

• Creates new pixels that were not present in the original image.

• Uses trained models rather than simply copying neighbouring pixels.

Example tools:

• Generative Remove

• Generative Fill

• Generative Expand

• Text-prompt image generation

Adobe’s documentation explicitly states that Lightroom’s Generative Remove uses the Firefly generative AI model to generate new fill content after an object is removed. (Adobe Help Centre)

It even produces multiple variations for the user to choose from, which confirms that the replacement area is synthesised rather than copied from the photo. (PetaPixel)

For camera club authenticity rules, this is the dividing line.

2. Lightroom’s Remove Tool — three different technologies

Lightroom now combines three distinct mechanisms, which behave very differently.

Clone tool

• Direct pixel copying.

• Source area → target area.

Equivalent to classic cloning in Photoshop.

Authenticity status: Usually acceptable in competitions.

Heal tool

• Samples nearby pixels.

• Blends colour and tone from surrounding areas.

It still derives its content from the photograph itself. (Adobe Help Centre)

Authenticity status: Typically acceptable (traditional retouching).

Generative Remove

• Uses Adobe Firefly.

• Generates new pixels to fill the removed area.

The fill is not guaranteed to come from the image itself. (Adobe Help Centre)

Authenticity status: Many competitions now consider this AI generation, even though it is used for object removal.

3. Why generative removal behaves differently

Several clues reveal that generative AI is being used:

Multiple generated versions

Lightroom offers three alternative fills for the removed area. (PetaPixel)

Traditional pixel cloning could not produce multiple plausible scenes.

Internet connection required

The generative fill is often produced by a cloud model rather than local pixel analysis. (Michael Frye Photography)

Lower resolution patches

Generated areas may have slightly lower resolution than the surrounding image because they are synthesised rather than copied. (Michael Frye Photography)

4. Super-resolution and upscaling

Upscaling tools (e.g., Super Resolution in Lightroom or AI upscalers) occupy a grey area.

Technically, they do create new pixels, but those pixels are:

• mathematically inferred from the original image

• not arbitrary new content

The process estimates detail based on patterns in the existing data.

In practice:

Bicubic enlargement: Interpolates pixels mathematically, usually acceptable.

AI Super Resolution: Predicts fine detail, usually acceptable but debated.

Generative Expand: Invents entirely new scene content, usually not acceptable.

5. Generative Expand (Photoshop)

Generative Expand clearly crosses the authenticity boundary.

It can:

• extend landscapes

• create new sky, buildings, water, etc.

• fabricate entirely new visual content.

Although the style matches the image, the added scene elements were never photographed.

Most competitions would classify this as image creation rather than photography.

6. Why camera clubs are struggling

Traditional editing rules assumed a simple model:

Photograph → edit existing pixels

AI introduces a new model:

Photograph → generate new pixels

Because the output can look perfectly realistic, the boundary is no longer obvious.

7. How photographic organisations are responding

Major competitions and salons increasingly define rules around image provenance, not just editing tools.

Typical modern wording:

Allowed

• tonal adjustments

• colour grading

• cropping

• sharpening

• noise reduction

• cloning of minor distractions

Not allowed

• adding elements

• replacing skies

• generative fill

• AI-generated imagery

• compositing external images

The difficulty is that generative removal looks similar to traditional cloning.

8. Practical guidance for camera clubs

A workable rule set emerging in many places is:

Safe edits (generally acceptable)

• exposure and colour adjustment

• cropping

• sharpening

• noise reduction

• lens corrections

• dodging and burning

• cloning/healing using existing pixels

Grey area

• AI upscaling

• AI denoise

• AI masking

These analyse the image but don’t invent new scene content.

Usually prohibited in “photography” categories

• Generative remove

• Generative fill

• Generative expand

• sky replacement using external images

• prompt-based generation

Because new scene content is created.

9. A useful rule of thumb

A simple conceptual test:

“Could the missing pixels theoretically be reconstructed from the original photo?”

If yes → editing
If no → generation

Examples:

Remove dust spot via clone: Reconstruction.

Heal small branch: Reconstruction.

Remove person via generative fill: Generation.

Expand canvas with new mountains: Generation.

Your observation about the Lightroom tool is correct:
When “Use Generative AI” is enabled, Lightroom is not borrowing pixels from the image — it is synthesising them.
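The rule of thumb above amounts to a one-question classifier: does the affected area derive from the original capture, or is it synthesised? The sketch below is a toy illustration under that assumption — the tool names and the true/false judgements are examples drawn from this section, not a definitive tool list.

```python
# Toy sketch of the rule of thumb: an edit counts as "editing" if the
# affected pixels could be reconstructed from the original capture,
# and "generation" if they are synthesised by a model.
RECONSTRUCTS_FROM_ORIGINAL = {
    "clone": True,              # copies existing pixels
    "heal": True,               # blends nearby pixels
    "generative fill": False,   # synthesises new pixels (e.g. Firefly)
    "generative expand": False, # invents scene content beyond the frame
}

def classify(tool: str) -> str:
    """Unknown tools default to 'generation' — the cautious assumption."""
    if RECONSTRUCTS_FROM_ORIGINAL.get(tool, False):
        return "editing"
    return "generation"

for tool in ("clone", "heal", "generative fill", "generative expand"):
    print(tool, "->", classify(tool))
```

Defaulting unknown tools to "generation" mirrors how a cautious competition rule would treat a tool whose mechanism is undocumented.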


Quotations for the New Year

Every year, you make a resolution to change yourself.
This year, make a resolution to be yourself.  ~ Anon.

Art is what we call…the thing an artist does. It’s not the medium or the oil or the price or whether it hangs on a wall or you eat it. What matters, what makes it art, is that the person who made it overcame the resistance, ignored the voice of doubt and made something worth making. Something risky. Something human. Art is not in the …eye of the beholder. It’s in the soul of the artist.  ~ Seth Godin.

Sharpness is a bourgeois concept.  ~ Henri Cartier-Bresson.

Isn’t it remarkable how photography has advanced without improving?  ~ Charles Sheeler, remarking to Ansel Adams.

Great photography is about depth of feeling, not depth of field.  ~ Peter Adams.

You don’t make a photograph just with a camera. You bring to the act of photography all the pictures you have seen, the books you have read, the music you have heard, the people you have loved.  ~ Unknown.

If you can smell the street by looking at the photo, it’s a street photograph.  ~ Bruce Gilden.

Photography is an immediate reaction; drawing is a meditation.  ~ Henri Cartier-Bresson.

A photographer is like a cod, which produces a million eggs in order that one may reach maturity.  ~ George Bernard Shaw.

Be yourself. I much prefer seeing something, even if it is clumsy, that doesn’t look like somebody else’s work.  ~ William Klein.

If Photoshop is the answer, you’re asking the wrong question.  ~ Dean Farrell.

DJI Air 3S has landed

The idea of buying a Sony A7 V got replaced by a drone. Ever since the DJI Phantom was released in 2013, I've had a case of gear acquisition syndrome, but with a mighty effort, I resisted it. Recently, I read that the DJI Mavic 4 Pro had landed in the marketplace with a Hasselblad camera boasting a 100 MP sensor, two other cameras with different focal lengths and lots of other impressive specs. I was hooked. Then reality set in again. It was the most sophisticated camera drone yet, but its weight meant that I would have to register it, and I thought it might be too difficult for a raw beginner to fly. I did some research and settled on the next best, the DJI Air 3S.

https://store.dji.com/au/product/dji-air-3s-fly-more-combo-rc-2?from=site-nav&vid=173571

The Air 3S also has impressive specs, including two cameras, the main one having a 1” sensor, but not a Hasselblad lens. It has facial recognition, subject tracking, and automated flight settings, such as orbiting a selected subject. Cool! The control software, with sophisticated obstacle avoidance utilising Lidar and a host of other sensors, is impressive. The return-to-home feature doesn’t use GPS technology. It memorises the outward-bound flight path, then retraces it.

Without dipping into the voluminous settings, I flew it from a crowded launch area without crashing! To better understand its capabilities, I need to work through the settings and learn to work the joysticks instinctively so that I can concentrate on framing images. One of my beliefs is that good photography depends on good composition*, which in turn depends on camera position and orientation. This device will open a whole world of possibilities regardless of whether the aircraft height is three metres or 100 metres. The future looks interesting.

* Good composition and a score of other things!

Note: The Civil Aviation Safety Authority (CASA) has rules! However, it recommends a number of apps that make compliance easy. I use ‘OpenSky’.

Nothing to do with photography

With advancing age, one is prone to reminiscing about the past, often through rose-coloured glasses. In such a mood, I came across this forgotten epistle hidden deep within my hard drive. It was penned by Senior Building Science Lecturer Derrick Kendrick and delivered on the occasion of the 50th anniversary of the graduation of his most skilled and admiring students. It’s worth noting that the first-year architecture class comprised 45 students, none of whom were female—such deprivation! Five years later, at graduation, only 18 remained. This was due in part to Professor Rolf (ex-British Army) Jensen’s belief, referred to by Derrick, that it was not impossible to teach everything. The attrition might also have been caused by students fleeing to an arts course where there were many women.

Read More

A different way to judge photos

The Port Adelaide Camera Club has changed its judging procedure. They no longer provide judges with a preview of images. I wasn't alone in being unhappy about that. Conventional judging attempts to create a spread of scores centred around seven out of ten points, ranging from a minimum of five for a very small proportion of images to a maximum of ten for those considered the best in the set.

Recently, I was invited to judge at the club. In the first session, I judged conventionally. During the tea break, I raised the matter with the president. She explained that the 'no previews' rule was the result of members' preference for their images to be commented upon and judged individually without reference to others in the set. Unconventional, but now I understand the reason for the 'no previews' rule.

In the second session, I judged following this requirement. Judged may not be the best word, as the process is more akin to a critique of each image, in which strengths, weaknesses, and opportunities for improvement are discussed. A number between one and ten is provided to summarise the quality of the image. There is no need for a spread of scores between images in the set.

I left the meeting feeling good about having come to understand the club's approach, but also thinking that it provided members with what they were looking for - commentary and advice.

I suspect that many judges would remain unhappy with the Port Adelaide club's approach, but I believe that the overall quality of images presented at club competitions is high; therefore, why not acknowledge that and refrain from attempting to create an artificial spread by splitting hairs between scores of seven and eight, or eight and nine?

The system of forced ranking may suit those of us who are competitive, but I suspect the majority's interest is in learning and improving their image-making skills. Constructive feedback and encouragement are fundamental to this.

The Port Adelaide Camera Club is attempting to address this need.