Apple's Deep Fusion photography system has finally arrived as part of the newest developer beta of iOS 13, version 13.2 beta 1. Deep Fusion is a new image processing pipeline for medium-light images, which Apple senior VP Phil Schiller called "computational photography mad science" when he first introduced it. According to reports, the feature wasn't ready when the phones shipped two weeks ago, and while the iPhone 11 and 11 Pro already have impressive cameras, Deep Fusion is meant to be a major step forward for indoor and medium-light shooting.
Deep Fusion gives the iPhone 11 and 11 Pro cameras three modes of operation that kick in automatically based on the light level and the lens in use. The standard wide-angle lens uses Apple's enhanced Smart HDR for bright to medium-light scenes, Deep Fusion for medium to low light, and Night mode for dark scenes. The telephoto lens uses Deep Fusion most of the time, with Smart HDR taking over only for extremely bright scenes. The ultrawide lens always uses Smart HDR, as it supports neither Deep Fusion nor Night mode.
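To make those lens-and-light rules concrete, here is a rough Swift sketch of the decision logic as described above. The type and function names are invented for illustration only; Apple has not published how the camera actually makes this choice.

```swift
// Illustrative only: hypothetical types modelling the mode selection described above.
enum Lens { case ultraWide, wide, telephoto }
enum LightLevel { case bright, medium, low, dark }
enum CaptureMode { case smartHDR, deepFusion, nightMode }

func captureMode(for lens: Lens, in light: LightLevel) -> CaptureMode {
    switch (lens, light) {
    case (.ultraWide, _):
        return .smartHDR          // ultrawide supports neither Deep Fusion nor Night mode
    case (.wide, .bright), (.wide, .medium):
        return .smartHDR          // wide lens: enhanced Smart HDR in bright to medium light
    case (.wide, .low):
        return .deepFusion        // wide lens: Deep Fusion in medium to low light
    case (.wide, .dark):
        return .nightMode         // wide lens: Night mode in dark scenes
    case (.telephoto, .bright):
        return .smartHDR          // telephoto: Smart HDR only when extremely bright
    case (.telephoto, _):
        return .deepFusion        // telephoto: Deep Fusion the rest of the time
    }
}
```

For example, `captureMode(for: .wide, in: .low)` would return `.deepFusion`, matching the behaviour described above.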
Notably, Deep Fusion is invisible to the user: there is no indicator in the camera app or in the photo roll, and it does not show up in the EXIF data either. According to reports, Apple does not want people to think about how to get the best photo; the camera should simply sort it out for them. Behind the scenes, though, Deep Fusion does a great deal of work and operates very differently from Smart HDR.
For starters, by the time a person presses the shutter button, the camera has already grabbed four frames at a fast shutter speed to freeze motion, along with four standard frames. When the shutter is pressed, it captures one longer-exposure shot to pull in fine detail. Three of the regular shots and the long-exposure shot are then merged into what Apple calls a "synthetic long", a major difference from Smart HDR.
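As a rough mental model of that buffering step, the sketch below shows four short frames and four standard frames held before the shutter press, with the long exposure fused into a synthetic long afterwards. The names and the simple averaging are stand-ins; Apple's real pipeline is not public.

```swift
// Illustrative only: Frame and the averaging below are stand-ins for Apple's pipeline.
// All frames are assumed to share the same resolution.
struct Frame {
    var exposureSeconds: Double
    var pixels: [Float]
}

struct DeepFusionBuffer {
    var shortFrames: [Frame]      // four fast-shutter frames grabbed before the shutter press
    var standardFrames: [Frame]   // four standard frames, also grabbed before the press

    /// On shutter press: merge three of the standard frames with the single
    /// long exposure into one "synthetic long" frame.
    func syntheticLong(merging longExposure: Frame) -> Frame {
        let inputs = Array(standardFrames.prefix(3)) + [longExposure]
        var merged = [Float](repeating: 0, count: longExposure.pixels.count)
        for frame in inputs {
            for i in merged.indices { merged[i] += frame.pixels[i] }
        }
        for i in merged.indices { merged[i] /= Float(inputs.count) }
        return Frame(exposureSeconds: inputs.map { $0.exposureSeconds }.reduce(0, +),
                     pixels: merged)
    }
}
```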
Deep Fusion then picks the short-exposure image with the most detail and merges it with the synthetic long exposure. The merged image is run through four detail-processing passes, with the sky and walls in the lowest band and skin, hair, fabric, and so on in the highest, after which the final image is generated. The whole process takes slightly longer than a Smart HDR shot, so the user briefly sees a proxy image while Deep Fusion runs in the background, before the final, more detailed version pops in.
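That selection-and-merge step could be pictured roughly as follows. The `detailScore` heuristic and the band weights here are invented for illustration; Apple has not documented how the real processing decides which regions fall into which band or how heavily each is blended.

```swift
// Illustrative only: a toy version of "pick the most detailed short frame,
// then merge by detail band". Frame mirrors the earlier sketch; all frames
// are assumed to share the same resolution.
struct Frame {
    var exposureSeconds: Double
    var pixels: [Float]
}

enum DetailBand { case low, midLow, midHigh, high }   // sky/walls lowest, skin/hair/fabric highest

/// Crude stand-in for "amount of detail": variance of pixel values.
func detailScore(_ frame: Frame) -> Float {
    let mean = frame.pixels.reduce(0, +) / Float(frame.pixels.count)
    return frame.pixels.reduce(0) { $0 + ($1 - mean) * ($1 - mean) }
}

/// Blend the sharpest short exposure into the synthetic long, leaning harder on the
/// short frame in high-detail bands (skin, hair, fabric) than in low ones (sky, walls).
func fuse(shortFrames: [Frame], syntheticLong: Frame,
          bandOfPixel: (Int) -> DetailBand) -> Frame {
    guard let bestShort = shortFrames.max(by: { detailScore($0) < detailScore($1) }) else {
        return syntheticLong
    }
    let weights: [DetailBand: Float] = [.low: 0.1, .midLow: 0.3, .midHigh: 0.6, .high: 0.9]
    var out = syntheticLong.pixels
    for i in out.indices {
        let w = weights[bandOfPixel(i)] ?? 0.5
        out[i] = out[i] * (1 - w) + bestShort.pixels[i] * w
    }
    return Frame(exposureSeconds: syntheticLong.exposureSeconds, pixels: out)
}
```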