IALD 25: Exploring AI as a Lighting Design Partner

Jul 1, 2025 | News

Foad Shafighi Discusses AI at IALD Enlighten Europe 2025

At the recent IALD Enlighten Europe conference in Valencia, Foad Shafighi of HGA delivered one of the event’s most thought-provoking sessions: an in-depth exploration of artificial intelligence and its expanding role in lighting practice. His presentation blended caution, curiosity, and optimism, making a compelling case that AI is not a passing trend but a powerful force reshaping how designers imagine and deliver their work. It was the most detailed AI presentation I have seen thus far in our industry.

Seeing AI as a Collaborator, Not a Rival

Foad began by addressing the elephant in the room: widespread anxiety about AI’s potential to displace creative professionals. “I’m aware that some of you worry it will replace designers,” he acknowledged. “My plan is simple: learn how to use it before it uses us.”

He emphasized that generative AI—which can write text, produce imagery, or even draft code—is simply a tool. Like any technology, it reflects the intentions of its operator. The difference, he said, is the speed and scope: “We’ve never had a tool that can iterate this fast—but that same speed comes with a risk: when unsure, it may confidently produce inaccurate information.”

Research with AI

Comparing Large Language Models

To demystify how AI can fit into daily lighting workflows, Foad compared several general-purpose large language models (LLMs). He cited ChatGPT 4.5, which excels at:

  • Drafting proposals and design narratives
  • Summarizing codes and standards in plain language
  • Brainstorming marketing copy or project descriptions

Its clarity and breadth make it a strong all-around assistant. However, Foad pointed out that its context window—roughly 8,000 tokens—lags behind most other current models, which typically support around 128,000 tokens; Google’s Gemini, in particular, can handle up to 1 million. That limit can make especially large or complex documents hard to process in one pass.
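To put those token figures in perspective, a common rule of thumb (an assumption here, not a figure from the talk) is that English prose averages about 0.75 words per token. A quick back-of-envelope sketch:

```python
# Back-of-envelope conversion from a model's context window (in tokens)
# to roughly how much English prose it can hold at once.
# 0.75 words per token is a rough rule of thumb, not an exact spec.

WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Convert a token budget into an approximate English word count."""
    return int(tokens * WORDS_PER_TOKEN)

for window in (8_000, 128_000, 1_000_000):
    print(f"{window:>9,} tokens is roughly {approx_words(window):>7,} words")
```

By that estimate, an 8,000-token window holds about 6,000 words of context, while a 1-million-token window holds around 750,000: the gap Foad was pointing at when comparing models on large documents.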

Foad warned, “Never rely on LLMs for standards and code compliance. They don’t have access to the full text of these documents, and when they hit a gap—they hallucinate.”

Next, he discussed Anthropic Claude Sonnet 4, which he described as “particularly good at editing and rewriting.” Claude’s strengths lie in polishing text, generating multiple narrative options, and improving tone. He encouraged designers to experiment with both tools, since their outputs often differ in nuance and style. “It’s about finding which one resonates with how you communicate,” he said.

Introducing LightingAgent.AI

Recognizing the pitfalls of relying solely on generalist AI, Foad unveiled a personal passion project, independent of his work at HGA, called LightingAgent.AI: a grounded LLM trained exclusively on lighting design literature. “Large language models can be confident liars,” he said. “I wanted something grounded in actual research.”

LightingAgent.AI draws from a verified knowledge base of standards, specifications, and academic publications. When a query falls outside its scope, the system simply replies: “I don’t know.” This approach, he explained, is built on Retrieval-Augmented Generation (RAG), where the AI retrieves supporting documents before generating a response. He shared a diagram illustrating how a user question moves through a retrieval engine, LLM processing, and ultimately produces an answer with citations.
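The retrieve-then-generate loop he diagrammed can be sketched in a few lines. This is a toy illustration of the RAG pattern, not LightingAgent.AI’s actual implementation: the two-passage corpus, the keyword-overlap scoring, and the threshold are all hypothetical stand-ins.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG): retrieve supporting
# passages first, answer with citations when the knowledge base covers the
# question, and reply "I don't know." when it does not.
# The corpus and scoring below are illustrative stand-ins only.

KNOWLEDGE_BASE = [
    {"source": "IES recommended-practice note (illustrative)",
     "text": "open office horizontal illuminance target 300 lux"},
    {"source": "CIE glare note (illustrative)",
     "text": "unified glare rating limits discomfort glare in offices"},
]

def retrieve(question: str, min_overlap: int = 2):
    """Score each passage by shared keywords; keep passages above a threshold."""
    q_words = set(question.lower().split())
    hits = []
    for doc in KNOWLEDGE_BASE:
        overlap = len(q_words & set(doc["text"].lower().split()))
        if overlap >= min_overlap:
            hits.append((overlap, doc))
    hits.sort(key=lambda h: h[0], reverse=True)
    return [doc for _, doc in hits]

def answer(question: str) -> str:
    """Answer with citations when retrieval succeeds; refuse otherwise."""
    docs = retrieve(question)
    if not docs:
        return "I don't know."
    citations = ", ".join(d["source"] for d in docs)
    return f"{docs[0]['text']} [sources: {citations}]"
```

The key design choice is the refusal path: when retrieval comes back empty, the system says “I don’t know” instead of letting the generator improvise, which is exactly the failure mode Foad warned about in generalist LLMs.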

Foad invited attendees to join a waitlist for beta testing and to contribute content to expand the model’s database. “It will be stronger if the community participates,” he said.

Prompt Engineering: Medium, Subject, Context

A large part of the presentation centered on prompt engineering: the art of describing, in precise detail, what you want an AI image generator to create. Foad offered a straightforward framework, breaking every prompt into three parts:

  • Medium: The visual style you want to evoke, such as a photorealistic rendering, watercolor, or sketch. For example, photography styles might create nostalgic warmth, like an image shot on Kodak Portra 400 film from a drone perspective at dusk.
  • Subject: The focus of the lighting—what exactly is illuminated. This includes specifying elements such as linear coves, backlit panels, track heads, or uplights. You can also define the space type, whether it’s a quiet library corner, a glowing hotel lobby, or a vibrant fitness studio.
  • Context: The surrounding environment and mood. This covers the time of day—morning haze, golden hour, twilight, or deep night—and weather conditions that affect atmosphere, such as fog for soft diffusion or rain for reflective surfaces. Context is essential for creating emotion and setting the tone.

Each example emphasized the same principle: the more specific the prompt, the more accurate and compelling the image.
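The three-part framework maps naturally onto a small helper that assembles a prompt string. The function below is a sketch built from the examples cited in the talk, not a tool Foad showed:

```python
# Compose an image-generation prompt from Foad's three framework parts:
# medium (visual style), subject (what is lit), context (environment and mood).
# The example values echo those mentioned in the presentation.

def build_prompt(medium: str, subject: str, context: str) -> str:
    """Join the three framework parts into one comma-separated prompt."""
    return ", ".join([medium, subject, context])

prompt = build_prompt(
    medium="photorealistic rendering, shot on Kodak Portra 400 film, drone perspective",
    subject="hotel lobby with linear coves and backlit panels",
    context="golden hour, light fog for soft diffusion",
)
print(prompt)
```

Keeping the three parts as separate arguments makes it easy to iterate on one dimension at a time, for instance swapping the medium from photorealism to watercolor while holding subject and context fixed.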

Foad demonstrated this with a range of visuals. A cinematic rendering of a high-end restaurant captured layered ambient lighting, brass pendants, and a shallow depth of field, evoking an aspirational mood. A watercolor illustration of a reading lounge showed how softer styles can be useful in the early stages of design exploration without committing to full photorealism. Another prompt produced a moody black-box theater lobby, using deep contrast lighting to create a dramatic and intimate feel.

Sketch to Rendering: Stable Diffusion in Action

Moving beyond text, Foad turned to Stable Diffusion, an open-source AI image model gaining popularity among designers. Paired with accessible tooling such as the Automatic1111 web UI and the ControlNet extension, it can transform a napkin sketch or 2D plan into a sophisticated visualization. He emphasized that this approach is especially useful for early concept development, when fast iterations are critical.

“Instead of investing in a 3D model right away, you can generate an image to test your ideas,” he explained. “It’s about lowering the barrier to visualization.”

Foad showed side-by-side slides demonstrating how a simple sketch of a hospitality suite evolved into a refined rendering, complete with realistic fabric textures and lighting falloff. For many in the audience, it was the first time seeing such direct before-and-after comparisons.

The Broader Impact: Accessibility and Sustainability

While much of the discussion focused on creativity, Foad also shared a perspective on social impact. AI-powered visualization, he noted, has the potential to democratize design. Smaller firms can produce high-quality images without expensive software or specialist training. And by shortening revision cycles, AI tools can reduce material waste and rework—factors that carry both environmental and financial benefits.

A Future of Human-Machine Collaboration

Foad closed on an optimistic note: “AI can’t replace the spark of inspiration that drives lighting design. But it can help us express that inspiration faster and more clearly.” In his view, the future isn’t about choosing between human or machine—it’s about learning how the two can work together.

For designers willing to experiment, this future is already here. And judging by the enthusiastic questions that followed, many in the room were ready to explore it.

Editor’s Note: For those interested in seeing Foad’s presentation up close—along with the visuals he shared and an update on the progress of LightingAgent.AI—he will be presenting again at IALD Enlighten Americas 2025, taking place 9–11 October in Tucson, Arizona.
