The Use of AI in Camera Club Competitions

I asked ChatGPT a detailed question about the authenticity of photographs in camera club competitions. In brief, I discussed the available tools and how their use might affect authenticity. Here is the answer.

The uncertainty you describe is widely recognised in photography organisations and competitions. The key issue is whether a tool merely modifies existing pixels or generates new image content. Documentation from Adobe and commentary by photographic educators clarify this distinction fairly well, even if the ethical rules used by camera clubs have not yet caught up. Below is a structured interpretation based on current documentation and common competition rules.

1. The critical distinction: analytical AI vs generative AI

Most modern software uses AI in two fundamentally different ways:

1. Analytical (non-generative) AI

• Examines the pixels already in the image.

• Adjusts, selects, or enhances them.

• Does not create new image content.

Examples:

• AI subject selection

• Sky masking

• Noise reduction

• Lens blur simulation

These tools analyse the image and then modify the existing pixel data. They do not invent objects or scenery. (matiash.com)

From an authenticity standpoint, these are usually considered equivalent to traditional editing.

2. Generative AI

• Creates new pixels that were not present in the original image.

• Uses trained models rather than simply copying neighbouring pixels.

Example tools:

• Generative Remove

• Generative Fill

• Generative Expand

• Text-prompt image generation

Adobe’s documentation explicitly states that Lightroom’s Generative Remove uses the Firefly generative AI model to generate new fill content after an object is removed. (Adobe Help Centre)

It even produces multiple variations for the user to choose from, which confirms that the replacement area is synthesised rather than copied from the photo. (PetaPixel)

For camera club authenticity rules, this is the dividing line.

2. Lightroom’s Remove Tool — three different technologies

Lightroom now combines three distinct mechanisms, which behave very differently.

Clone tool

• Direct pixel copying.

• Source area → target area.

Equivalent to classic cloning in Photoshop.

Authenticity status: Usually acceptable in competitions.
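The mechanics of cloning can be sketched in a few lines. This is an illustration in Python with NumPy, not Adobe's implementation: the point is simply that every target pixel is copied verbatim from elsewhere in the same image.

```python
import numpy as np

# Toy greyscale "image": a 6x6 array of pixel values.
img = np.arange(36, dtype=np.uint8).reshape(6, 6)

# Clone: copy a 2x2 source region over a 2x2 target region.
# No new values are created; the target is an exact copy of the source.
src = img[0:2, 0:2].copy()
img[4:6, 4:6] = src

# The cloned area is identical to the source area.
assert np.array_equal(img[4:6, 4:6], img[0:2, 0:2])
```

Because the output pixels exist verbatim elsewhere in the photograph, cloning is traceable to the original capture.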

Heal tool

• Samples nearby pixels.

• Blends colour and tone from surrounding areas.

It still derives its content from the photograph itself. (Adobe Help Centre)

Authenticity status: Typically acceptable (traditional retouching).
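Healing can be sketched the same way. Real heal tools also match texture and structure; this simplified Python example only shows the essential property, that the replacement value is derived from surrounding pixels of the same image rather than synthesised.

```python
import numpy as np

# Toy heal: replace a "blemish" pixel with the mean of its 8 neighbours.
img = np.array([[10., 10., 10.],
                [10., 99., 10.],   # 99 = blemish to remove
                [10., 10., 10.]])

# Gather the eight surrounding pixels and blend them.
neighbours = np.concatenate([img[0], img[2], [img[1, 0], img[1, 2]]])
img[1, 1] = neighbours.mean()

print(img[1, 1])  # 10.0 — entirely determined by the surrounding pixels
```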

Generative Remove

• Uses Adobe Firefly.

• Generates new pixels to fill the removed area.

The fill is not guaranteed to come from the image itself. (Adobe Help Centre)

Authenticity status: Many competitions now consider this AI generation, even though it is used for object removal.

3. Why generative removal behaves differently

Several clues reveal that generative AI is being used:

Multiple generated versions

Lightroom offers three alternative fills for the removed area. (PetaPixel)

Traditional pixel cloning could not produce multiple plausible scenes.

Internet connection required

The generative fill is often produced by a cloud model rather than local pixel analysis. (Michael Frye Photography)

Lower resolution patches

Generated areas may have slightly lower resolution than the surrounding image because they are synthesised rather than copied. (Michael Frye Photography)

4. Super-resolution and upscaling

Upscaling tools (e.g., Super Resolution in Lightroom or AI upscalers) occupy a grey area.

Technically, they do create new pixels, but those pixels are:

• mathematically inferred from the original image

• not arbitrary new content

The process estimates detail based on patterns in the existing data.

In practice:

Bicubic enlargement: Interpolates pixels mathematically, usually acceptable.

AI Super Resolution: Predicts fine detail, usually acceptable but debated.

Generative Expand: Invents entirely new scene content, usually not acceptable.
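The "mathematically inferred" point can be made concrete. Bicubic interpolation is more elaborate, but one-dimensional linear interpolation (here via NumPy's `np.interp`) illustrates the same principle: every new pixel is a weighted average of its existing neighbours, so nothing is invented.

```python
import numpy as np

# A 1-D "row of pixels"; interpolation roughly doubles its length.
row = np.array([10.0, 20.0, 40.0, 80.0])

# New sample positions fall halfway between the originals; each new
# value is a weighted average of its two neighbours.
x_new = np.linspace(0, len(row) - 1, 2 * len(row) - 1)
upscaled = np.interp(x_new, np.arange(len(row)), row)

print(upscaled)  # [10. 15. 20. 30. 40. 60. 80.]
```

Every interpolated value lies between its original neighbours, which is why upscaling is usually treated as editing rather than generation. AI super-resolution replaces the simple weighted average with a learned predictor, which is where the debate begins.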

5. Generative Expand (Photoshop)

Generative Expand clearly crosses the authenticity boundary.

It can:

• extend landscapes

• create new sky, buildings, water, etc.

• fabricate entirely new visual content.

Although the style matches the image, the added scene elements were never photographed.

Most competitions would classify this as image creation rather than photography.

6. Why camera clubs are struggling

Traditional editing rules assumed a simple model:

Photograph → edit existing pixels

AI introduces a new model:

Photograph → generate new pixels

Because the output can look perfectly realistic, the boundary is no longer obvious.

7. How photographic organisations are responding

Major competitions and salons increasingly define rules around image provenance, not just editing tools.

Typical modern wording:

Allowed

• tonal adjustments

• colour grading

• cropping

• sharpening

• noise reduction

• cloning of minor distractions

Not allowed

• adding elements

• replacing skies

• generative fill

• AI-generated imagery

• compositing external images

The difficulty is that generative removal looks similar to traditional cloning.

8. Practical guidance for camera clubs

A workable rule set emerging in many places is:

Safe edits (generally acceptable)

• exposure and colour adjustment

• cropping

• sharpening

• noise reduction

• lens corrections

• dodging and burning

• cloning/healing using existing pixels

Grey area

• AI upscaling

• AI denoise

• AI masking

These analyse the image but don’t invent new scene content.

Usually prohibited in “photography” categories

• Generative remove

• Generative fill

• Generative expand

• sky replacement using external images

• prompt-based generation

All of these create new scene content.

9. A useful rule of thumb

A simple conceptual test:

“Could the missing pixels theoretically be reconstructed from the original photo?”

If yes → editing
If no → generation

Examples:

Remove dust spot via clone: Reconstruction.

Heal small branch: Reconstruction.

Remove person via generative fill: Generation.

Expand canvas with new mountains: Generation.
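The rule of thumb can be expressed as a small lookup. The tool names and their classifications below are illustrative only, not an official competition ruling; a club adopting this test would maintain its own lists.

```python
# Hypothetical classifier for the "could the pixels be reconstructed
# from the original photo?" test. Categories are illustrative.
SOURCE_DERIVED = {"clone", "heal", "crop", "sharpen", "noise_reduction"}
SYNTHESISED = {"generative_remove", "generative_fill",
               "generative_expand", "sky_replacement",
               "prompt_generation"}

def classify(edit: str) -> str:
    """Return 'editing' if output pixels derive from the photo,
    'generation' if they are synthesised, else 'grey area'."""
    if edit in SOURCE_DERIVED:
        return "editing"
    if edit in SYNTHESISED:
        return "generation"
    return "grey area"

print(classify("heal"))             # editing
print(classify("generative_fill"))  # generation
print(classify("ai_upscaling"))     # grey area
```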

Your observation about the Lightroom tool is correct:
When “Use Generative AI” is enabled, Lightroom is not borrowing pixels from the image — it is synthesising them.