3D lighting control for photographs is now a reality thanks to a groundbreaking tool developed by researchers at Simon Fraser University (SFU). Set to debut at SIGGRAPH 2025, this innovation introduces Blender-style lighting control to ordinary photos, allowing creators to change lighting with precision and realism after the image is captured.
The process starts by generating a 3D model of the photo scene. This model includes surface color and shape but intentionally excludes lighting. Researchers built the tool using earlier work from SFU’s Computational Photography Lab. Once the virtual scene is ready, users can position digital light sources just like they would in 3D programs such as Blender or Unreal Engine. The system then simulates those lights using well-established computer graphics techniques.
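The "well-established computer graphics techniques" the article mentions are not detailed, but a minimal sketch of what rendering a user-placed point light against a lighting-free scene model might look like is shown below. All names and the simple Lambertian-diffuse model are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def relight_point_light(albedo, normals, points3d, light_pos, intensity=1.0):
    """Rough relighting of a lighting-free scene model with one virtual point light.

    albedo:    (H, W, 3) per-pixel surface color, with lighting removed
    normals:   (H, W, 3) unit surface normals from the reconstructed 3D model
    points3d:  (H, W, 3) 3D position of each pixel's surface point
    light_pos: (3,) position of the user-placed light, as in Blender
    """
    to_light = light_pos - points3d                       # vector toward the light
    dist2 = np.sum(to_light ** 2, axis=-1, keepdims=True) # squared distance
    l_dir = to_light / np.sqrt(dist2)                     # normalized light direction
    # Lambertian diffuse term: cosine of the angle between normal and light
    ndotl = np.clip(np.sum(normals * l_dir, axis=-1, keepdims=True), 0.0, None)
    # Inverse-square falloff, as a physical point light would behave
    return albedo * intensity * ndotl / np.maximum(dist2, 1e-6)
```

Because the scene model stores only color and shape, any number of such virtual lights can be summed into the scene without fighting lighting that was baked into the original photo.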
A specially designed neural network then transforms the rough lighting simulation into a realistic photograph. Unlike most generative AI tools, which guess at how light should interact with a scene, this system gives creators control over every lighting decision. The results consistently reflect the user’s creative choices with physical accuracy rather than random output.
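The two-stage flow described above can be sketched as a simple composition. This is an illustrative outline only: the `refine` step here is a placeholder tone-map standing in for the paper's trained neural network, and all function names are assumptions:

```python
import numpy as np

def rough_render(albedo, shading):
    # Stage 1: classic graphics — lighting-free surface color
    # multiplied by the simulated shading from the virtual lights.
    return albedo * shading

def refine(render):
    # Stage 2 placeholder: the real system applies a trained neural
    # network that turns the rough simulation into a realistic photo.
    # Here we only tone-map so the sketch runs end to end.
    return render / (1.0 + render)

def relight(albedo, shading):
    # The user controls stage 1; stage 2 only adds realism,
    # so the output tracks the user's lighting choices.
    return refine(rough_render(albedo, shading))
```

The key design point is that the controllable physical simulation comes first, and the neural step only polishes it, rather than generating the lighting itself.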
Photographers, digital artists, and filmmakers can now relight their images without reshoots or expensive gear, making the tool a cost-effective option for creative professionals. Whether you need dramatic shadows, softer highlights, or a different ambient tone, you can achieve it quickly and precisely, with the kind of lighting control usually reserved for full 3D modeling environments.
Generative AI models often behave like black boxes. They rely on massive datasets and can produce unpredictable results. SFU’s method avoids that problem by focusing on physical simulation. Instead of starting with an AI-generated guess, the tool simulates real-world lighting behavior. This shift gives users a reliable, controllable, and creative environment for relighting.
Photographers can now fine-tune lighting for mood, branding, or editorial style. Content creators can save time and money by adjusting lighting in post-production rather than arranging complex on-set setups. For visual effects artists, this tool fits seamlessly into standard pipelines, enhancing control and realism.
While the current system supports only static images, the research team is already exploring ways to extend it to video. Relighting footage in post-production could transform how creators handle lighting, and once implemented for video, the technology would offer even more value to industries like advertising, filmmaking, and AR/VR.
The research paper, titled “Physically Controllable Relighting of Photographs,” builds on earlier projects focused on separating image illumination from surface detail. This foundation allows the tool to simulate how light interacts with scene depth and surface materials.
SFU’s Computational Photography Lab has published this research in the SIGGRAPH 2025 conference proceedings. The team also provides an explainer and video demonstrations on their website, outlining how the system works and what future improvements could look like.
By giving creators full 3D lighting control for photographs, this tool opens up new creative freedom and eliminates the limitations of traditional photo editing. Users no longer need to rely on trial-and-error or artificial intelligence prompts. Instead, they can directly shape their images with tools that understand the laws of light.