How does the Deep Fusion feature work on the iPhone 11 and iPhone 11 Pro?

When Apple presented the new iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max, the company highlighted, alongside the improved camera, a new Deep Fusion feature. The feature promises to make an already great camera even better. Unfortunately, it was not available on the new smartphones at launch.


With the release of iOS 13.2, Deep Fusion has finally become available to all owners of the iPhone 11 and iPhone 11 Pro. Now your photos will look even better, especially if you like wearing sweaters.

The way the feature works is genuinely complicated. It uses machine learning and several exposures for a single shot, then combines them all into a polished picture in the background. The feature will not always be active, but when it is, you will notice the difference.

Here is a brief description of the feature from Apple itself:

The Deep Fusion system for iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max uses the Neural Engine in the A13 Bionic chip to take multiple photos at different exposure values, perform pixel-by-pixel analysis, and combine the best parts of the original images. This produces high-quality photos with excellent detail and texture and less noise, which is especially noticeable when shooting in medium and low light.

So why do you need this feature? It copes best with shooting in medium light. Below we’ll take a closer look at how it works.

Less noise, better quality

When you take a photo with Deep Fusion, the camera immediately captures three different frames. This is possible because there is no delay when shooting. After that, three more frames are taken with a longer exposure to capture more detail.

The three ordinary frames are then combined with the long-exposure frames.

The feature selects the short-exposure frame with the most detail and combines it with the best long-exposure shot. The final photo is built from these two frames, and this is the main difference between Deep Fusion and Smart HDR.
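
As a rough mental model (and nothing more: Apple has not published the actual algorithm), the selection step can be imagined like this, with frames reduced to simple luminance grids and a made-up sharpness score standing in for the real analysis:

```swift
import Foundation

// Frames are modelled as 2D luminance arrays of equal size (an assumption of this sketch);
// `sharpness` is a crude gradient-energy score used to pick the most detailed short frame.
typealias Frame = [[Double]]

/// Rough sharpness estimate: sum of squared differences between horizontal neighbours.
func sharpness(of frame: Frame) -> Double {
    var score = 0.0
    for row in frame {
        for x in row.indices.dropFirst() {
            let dx = row[x] - row[x - 1]
            score += dx * dx
        }
    }
    return score
}

/// Pick the short-exposure frame with the most detail and blend it with the
/// long-exposure frame (a plain 50/50 average stands in for the real fusion).
func fuse(shortExposures: [Frame], longExposure: Frame) -> Frame {
    guard let best = shortExposures.max(by: { sharpness(of: $0) < sharpness(of: $1) }) else {
        return longExposure
    }
    var result = longExposure
    for y in result.indices {
        for x in result[y].indices {
            result[y][x] = (best[y][x] + longExposure[y][x]) / 2
        }
    }
    return result
}
```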

Next comes the usual processing. The image goes through four more pixel-by-pixel steps, all in the background and almost instantly. The sky and walls get the least attention, while hair, skin, clothing and other detail-rich elements get the most. The feature gathers detail from all the frames and then adjusts the color, light and tone of the final shot.
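
The region-dependent weighting can be sketched in a similarly simplified way. The 3x3 local-variance measure and the weighting constant below are assumptions made purely for illustration, reusing the same toy frame representation as the sketch above:

```swift
import Foundation

// Each pixel is blended according to how much texture its neighbourhood contains:
// detail-rich areas (hair, fabric) lean on the sharp frame, flat areas (sky, walls)
// lean on the cleaner long-exposure frame. Both frames are assumed aligned and equal in size.
typealias Frame = [[Double]]

/// Variance of the 3x3 neighbourhood around (y, x), clamped at the image borders.
func localVariance(_ f: Frame, _ y: Int, _ x: Int) -> Double {
    var values: [Double] = []
    for dy in -1...1 {
        for dx in -1...1 {
            let ny = y + dy, nx = x + dx
            if ny >= 0, ny < f.count, nx >= 0, nx < f[ny].count {
                values.append(f[ny][nx])
            }
        }
    }
    let mean = values.reduce(0, +) / Double(values.count)
    return values.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(values.count)
}

/// Blend two aligned frames pixel by pixel: more texture means more weight on the
/// sharp frame, less texture means more weight on the clean frame.
func detailWeightedFuse(sharp: Frame, clean: Frame) -> Frame {
    var out = sharp
    for y in sharp.indices {
        for x in sharp[y].indices {
            let w = min(1.0, localVariance(sharp, y, x) * 10)  // 0...1 detail weight
            out[y][x] = w * sharp[y][x] + (1 - w) * clean[y][x]
        }
    }
    return out
}
```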

After all of this (and the process happens almost instantly), you get the final photo.

At the top of this article, you can see the result of the Deep Fusion feature. Apple is very fond of demonstrating it on models wearing sweaters.

Smart HDR

Once again, keep in mind that Deep Fusion will not always be active, and some lenses do not support it at all.

The telephoto lens will use Deep Fusion most of the time, with Smart HDR taking over in bright light. The standard wide lens uses Deep Fusion in medium to low light, and Smart HDR only activates in bright conditions. The ultra-wide lens does not support Deep Fusion at all; it uses Smart HDR only.

Most often, in low light, Night mode will still take over, and Deep Fusion will only be activated in certain situations.
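
Pulling the lens and lighting behaviour described above into one place, a rough decision sketch might look like this. The light-level thresholds, type names and the normalized brightness value are illustrative assumptions, not anything Apple documents:

```swift
enum Lens { case telephoto, wide, ultraWide }
enum CaptureMode { case smartHDR, deepFusion, nightMode }

/// `lightLevel` is a normalized 0...1 scene brightness (an assumption for this sketch).
func captureMode(for lens: Lens, lightLevel: Double) -> CaptureMode {
    switch lens {
    case .ultraWide:
        // The ultra-wide lens does not support Deep Fusion; it relies on Smart HDR.
        return .smartHDR
    case .wide:
        if lightLevel < 0.15 { return .nightMode }   // very dark scenes
        if lightLevel < 0.60 { return .deepFusion }  // medium to low light
        return .smartHDR                             // bright light
    case .telephoto:
        // The telephoto lens uses Deep Fusion most of the time,
        // falling back to Smart HDR only in bright light.
        return lightLevel < 0.75 ? .deepFusion : .smartHDR
    }
}

// Example: captureMode(for: .wide, lightLevel: 0.4) returns .deepFusion
```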

The feature relies on the A13 Bionic processor, so it is not available on older iPhone models. Most likely, only true iPhone photography enthusiasts will notice it in action, but its benefits are hard to deny.