How NGW Works
No Guesswork uses a multi-stage vision pipeline to reverse-engineer a portrait photo into a reproducible lighting setup. Here's what happens from the moment you upload a photo to the moment you get a blueprint.
Overview
Most photographers look at a great portrait and wonder: "How did they light that?" The answer is usually buried in shadow shapes, catchlight positions, and tonal characteristics that take years of experience to read. NGW automates that read — extracting every visual cue systematically, then matching it to a known lighting pattern and generating the exact setup to recreate it.
The process takes 2–5 seconds and produces: a pattern identification with confidence score, a per-light blueprint with positions and power ratios, camera settings, and an on-set step-by-step guide.
The 7-Stage Pipeline
Image Ingestion & Validation
The image is checked for suitability — face detection, resolution, and quality thresholds. Images with no detectable face, severe blur, or extreme exposure are flagged. The system also detects whether the image is colour or black-and-white (which affects shadow threshold calibration).
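The kinds of checks described above can be sketched in a few lines. This is an illustrative sketch, not NGW's actual implementation — the thresholds (`MIN_SIDE`, `EXPOSURE_BOUNDS`, the monochrome tolerance) are assumptions, and face detection is omitted since it would use a separate detector:

```python
import numpy as np

MIN_SIDE = 512                   # assumed minimum resolution threshold
EXPOSURE_BOUNDS = (0.05, 0.95)   # assumed mean-luminance limits

def validate(image: np.ndarray) -> list[str]:
    """Return validation flags for an RGB image with values in [0, 1].

    Face detection itself would use a separate detector; omitted here.
    """
    flags = []
    h, w = image.shape[:2]
    if min(h, w) < MIN_SIDE:
        flags.append("low_resolution")
    luma = image.mean()
    if not (EXPOSURE_BOUNDS[0] < luma < EXPOSURE_BOUNDS[1]):
        flags.append("extreme_exposure")
    # Colour vs. B&W: near-zero spread between channels means monochrome,
    # which switches shadow-threshold calibration downstream.
    if image.ndim == 3 and np.abs(image - image.mean(axis=2, keepdims=True)).max() < 0.01:
        flags.append("monochrome")
    return flags
```

A well-exposed colour portrait passes with no flags; a uniform grey frame is flagged `monochrome`; a tiny blown-out image collects both size and exposure flags.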
Cue Extraction
Visual lighting cues are extracted from the image:
- Shadow analysis — nose shadow position, direction, and length; cheek shadow shape
- Catchlight detection — position in the iris (reveals key light height and angle), shape (reveals modifier type), and count (reveals number of sources)
- Tonal mapping — key-side vs. shadow-side luminance ratio (fill ratio)
- Background analysis — whether background is lit separately, tone relative to subject
- Colour temperature — dominant CCT from skin tone and highlight colour
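Of these cues, the fill ratio is the most directly quantifiable: it is the log-ratio of key-side to shadow-side luminance, expressed in stops. A minimal sketch of that computation, assuming the two face halves have already been segmented:

```python
import numpy as np

def fill_ratio_stops(key_side: np.ndarray, shadow_side: np.ndarray) -> float:
    """Estimate the fill ratio in stops from the mean luminance of each
    face half. A 1-stop difference means the shadow side receives half
    the light of the key side; ~2 stops reads as dramatic portrait contrast.
    """
    key = float(np.mean(key_side))
    fill = float(np.mean(shadow_side))
    return float(np.log2(key / max(fill, 1e-6)))
```

For example, a key side averaging 0.6 against a shadow side averaging 0.15 yields log2(4) = 2 stops.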
Reference Image Read
A high-level descriptive read of the image is generated — scene, subject, mood, framing, and the photographer's apparent intent. This feeds the narrative cards in the results screen and provides context for the blueprint recommendations.
Pattern Resolution
Three independent classifiers evaluate the extracted cues:
- Shadow classifier — identifies the lighting pattern from shadow shape alone
- Catchlight classifier — infers key position and modifier from catchlight analysis
- Tonal classifier — identifies contrast range and fill approach
The three classifiers vote on a pattern. Agreement across classifiers raises confidence; disagreement triggers ambiguity warnings.
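The voting step can be sketched as a simple majority rule. The confidence values and the unanimity rule below are illustrative assumptions — the real weighting is more nuanced — but they show the shape of the agreement-raises-confidence logic:

```python
from collections import Counter

def resolve_pattern(votes: dict[str, str]) -> tuple[str, float, list[str]]:
    """Combine per-classifier pattern votes into (pattern, confidence, warnings).

    Assumed rule: unanimous agreement maps to high confidence, a 2-of-3
    majority to moderate, and a three-way split to low confidence. Any
    disagreement attaches an ambiguity warning.
    """
    tally = Counter(votes.values())
    pattern, count = tally.most_common(1)[0]
    confidence = {3: 0.9, 2: 0.6, 1: 0.35}[count]
    warnings = [] if count == len(votes) else ["classifier_disagreement"]
    return pattern, confidence, warnings
```

When the shadow and catchlight classifiers agree on loop but the tonal classifier reads Rembrandt, the result is loop at moderate confidence with a disagreement warning attached.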
Lighting Intelligence Synthesis
The raw pattern match is enriched with quantitative estimates: light count, key position (angle and height), fill ratio, modifier family, detected environment, and colour temperature. These feed the blueprint directly.
Blueprint Generation
A per-light blueprint is assembled from the pattern + intelligence data. For each light: role (key/fill/rim/hair/background), position, modifier recommendation, distance, and power hint relative to the key. Camera settings (aperture, ISO, shutter, white balance) are computed from the detected environment and modifier family.
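The per-light structure can be pictured as a small record per light. The field names and the example values below are illustrative assumptions, not NGW's actual output schema:

```python
from dataclasses import dataclass

@dataclass
class Light:
    role: str           # key / fill / rim / hair / background
    position: str       # e.g. "45° camera-left, 45° above eye line"
    modifier: str       # recommended modifier family
    distance_m: float   # distance from subject
    power_stops: float  # relative to the key (key = 0.0)

def rembrandt_blueprint() -> list[Light]:
    """Illustrative two-light Rembrandt setup (assumed values)."""
    return [
        Light("key", "45° camera-left, 45° above eye line", "softbox", 1.2, 0.0),
        Light("fill", "on-axis, at lens height", "reflector", 1.5, -2.0),
    ]
```

Expressing power as stops relative to the key keeps the blueprint portable across strobes of different absolute output.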
On-Set Step Generation
The blueprint is translated into a sequenced step-by-step guide for use on set: position key first, add fill and set ratio, position rim, dial in camera settings, test exposure checklist. In Shoot Mode, these steps are served one at a time in a large, readable format designed to be used at arm's length.
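The sequencing rule described above — key first, then fill, then accents, then camera and test steps — can be sketched as a simple ordered pass over the blueprint's light roles. The ordering and step wording here are assumptions for illustration:

```python
STEP_ORDER = ["key", "fill", "rim", "hair", "background"]  # assumed priority

def onset_steps(roles: list[str]) -> list[str]:
    """Sequence blueprint lights into on-set steps, then append the
    camera and test-exposure steps. A sketch of the ordering rule only.
    """
    steps = [f"Position the {role} light" for role in STEP_ORDER if role in roles]
    steps += ["Dial in camera settings", "Run the test-exposure checklist"]
    return steps
```

Note that the input order doesn't matter — a blueprint listing fill before key still produces a key-first sequence.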
Confidence Scoring
Each analysis produces a confidence score between 0 and 1:
- High (≥0.75) — Multiple independent cues agree. The pattern is clearly identifiable from the image.
- Moderate (0.50–0.74) — Cues present but some ambiguity. The pattern is likely correct but small placement changes may produce inconsistent results.
- Low (<0.50) — Insufficient or conflicting cues. The image may lack a clear shadow pattern (flat light, soft fill obscuring shadows, face turned away from camera).
Low confidence doesn't mean NGW is wrong — it means the source image itself doesn't contain enough information to be certain. A heavily retouched beauty image with all shadows removed will always produce low confidence.
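The banding above maps directly onto threshold checks — a minimal sketch of how a raw score resolves to a band:

```python
def confidence_band(score: float) -> str:
    """Map a 0–1 confidence score onto the documented bands."""
    if score >= 0.75:
        return "high"
    if score >= 0.50:
        return "moderate"
    return "low"
```

The boundaries are inclusive at the lower edge of each band, so 0.75 is high and 0.50 is moderate.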
Pattern Library
NGW matches against 28 distinct lighting patterns across several risk tiers:
- High-volume portrait patterns (loop, Rembrandt, clamshell) — the most commonly confused; these require the most signal for confident classification
- Standard portrait patterns (split, butterfly, broad, short, rim-only, three-point, etc.)
- Modifier-defined patterns (beauty dish, ring flash, parabolic)
- Environment-defined patterns (window light, outdoor fill-flash, golden hour)
- Tonal styles (high key, low key)
See the full Pattern Library for descriptions, identification tips, and setup instructions for each.
Blueprints & Gear Matching
The blueprint adapts to your gear. If you enter your equipment in the wizard, NGW substitutes equivalent modifiers from your kit. If you select "Best Setup," it recommends the ideal rig regardless of what you own.
Power values are expressed as relative ratios (key = reference, fill = 1–2 stops below) rather than absolute watt-seconds, since the ratio is what determines the look — not the absolute output level.
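The stop-to-ratio arithmetic is simple: each stop halves the output. A one-line sketch of the conversion:

```python
def relative_power(stops_below_key: float) -> float:
    """Power as a fraction of the key for a light set N stops below it.

    Each stop halves the output: 1 stop below = 50% of key power,
    2 stops below = 25%.
    """
    return 2.0 ** (-stops_below_key)
```

So a fill specified at 1–2 stops below the key runs at 25–50% of the key's power, whatever the key's absolute watt-second setting happens to be.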
Shoot Mode
Shoot Mode serves the blueprint as an on-set assistant. Three depth levels:
- Photographer — Conversational, context-aware instructions. Assumes working photographer knowledge.
- Assistant — Direct commands. Designed to be handed to a lighting assistant.
- Learning — Each step includes a "why this matters" explanation, plus role context, spec hints, and effect descriptions. Designed for photographers who want to understand, not just follow steps.
Known Limitations
- Heavy post-processing — Images with significant retouching, skin smoothing, or shadow removal reduce cue signal. Confidence will be lower.
- No-face images — NGW is optimised for portrait photography. Product shots, landscapes, and other images without faces will not produce useful pattern matches.
- Very dark images — Low-key images with minimal highlight detail make catchlight detection unreliable.
- Mixed ambient + flash — Complex mixed-source setups may be partially misread. The dominant source is usually identified correctly; secondary sources may be missed.
- Non-standard lighting — Projected patterns (gobo, venetian blind, stencil), coloured gels, and LED panels with unusual spectral profiles are outside the core classifier training.
See it in action
Upload any portrait reference and watch the pipeline run in real time — pattern identification, confidence score, and full blueprint in under 5 seconds.
Try it free →