
Extracting Both Images from P9 Dual Camera


jbudman

New member
Sep 21, 2016
3
0
Hi All,

I am trying to test some image analysis applications with the Huawei P9. Is it possible to extract two images (one from each camera) from a single shot? I know one of the cameras has a monochrome lens, and I know how to obtain just the monochrome image, but it would be extremely valuable if I could obtain both images from just one shot.

Looking forward to your assistance,

Josh
 

ScareIT

Member
Aug 19, 2016
22
13
I don't want to dampen your enthusiasm, but from my tests there are no two images from one shot.
I didn't take an engineering approach, only some empirical tests, and from these I gather that:
- when you select Monochrome mode, the P9 activates the left camera (left when facing the back of the phone)
- in all the other modes, the P9 activates the right camera (the one between the flash and the left camera)

The P9 doesn't create two images and then combine them; it always shoots just one. How did I come to this conclusion? You can try it at home yourself:
I chose a few static subjects and shot them with the phone on a tripod, first in the normal way and then covering each of the two cameras in turn with black tape.
Neither by eye nor with image-comparison software (I used Beyond Compare from Scooter Software) did I find any difference at all: no extra brightness, no extra contrast, no better image definition. :(
I tested in a bright environment and in a dark one, with PRO mode enabled and disabled, and I tried to be as thorough as I could (honestly, I skipped RAW and tested only JPEGs). My conclusion is that the two cameras do different jobs, but they are definitely NOT working together.
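For anyone who wants to repeat this comparison numerically rather than by eye, here's a minimal Python sketch using Pillow and NumPy. The file names are just placeholders for your "covered" and "uncovered" shots:

```python
import numpy as np
from PIL import Image

def compare_shots(path_a, path_b):
    """Pixel-level difference between two same-sized photos."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    diff = np.abs(a - b)
    return {"mean": float(diff.mean()), "max": int(diff.max())}

# e.g. compare_shots("normal.jpg", "bw_lens_covered.jpg")
```

If the second camera really contributed anything, the mean difference between the two shots should be clearly above plain JPEG noise.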
 

Gray44

Senior Member
Dec 13, 2010
156
38
Thanks for testing, but did you also try this outdoors on a landscape shot? Maybe then we'll see different results?

Otherwise this is yet ANOTHER thing Huawei lied about.
 

oTToToTenTanz

Senior Member
Aug 11, 2008
648
64
Hey guys. I did a quick test shooting in bokeh mode / the aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.

If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial to get depth information).

Thus, to extract two images from one shot, the best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.
 

ScareIT

Member
Aug 19, 2016
22
13
Great, oTToToTenTanz!
I can confirm that! Both cameras are essential for the wide aperture effect: if you try to shoot in bokeh mode with one covered, an alert appears asking you to check that the lens is clear, the blurred effect disappears, and it becomes impossible to edit the depth in post-production.

I have two hypotheses:
- the phone really combines the two pictures to recover depth (the strategy used in all 3D cameras), so in some way there should be a possibility to get both pictures
- the phone uses the laser emitter to project IR around the subject, then the monochrome camera picks up the infrared information (and given that its sensor has no RGB filter, it should be very efficient at that) and stores it to obtain an accurate depth map (I mean something like this: https://www.youtube.com/watch?v=dgrMVp7fMIE)
Nice things to try!
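The first hypothesis, depth from two slightly offset pictures, is just classic stereo matching. As a toy illustration only (certainly not Huawei's actual algorithm), here's a minimal one-scanline block matcher in NumPy: for each patch in the left image, it searches for the horizontal shift that best matches the right image.

```python
import numpy as np

def disparity_row(left, right, y, block=5, max_disp=16):
    """Toy block matching on one scanline: for each column x of the left
    image, find the horizontal shift d whose patch in the right image
    has the smallest sum of absolute differences (SAD)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros(w, dtype=int)
    for x in range(half + max_disp, w - half):
        patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
        costs = [np.abs(patch - right[y - half:y + half + 1,
                                      x - d - half:x - d + half + 1].astype(int)).sum()
                 for d in range(max_disp)]
        disp[x] = int(np.argmin(costs))  # larger disparity = closer object
    return disp
```

Real implementations (e.g. OpenCV's StereoBM) do this densely over the whole image and with many refinements, but the principle is the same: disparity per pixel is inversely related to distance.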
 
  • Like
Reactions: jbudman

jbudman

New member
Sep 21, 2016
3
0
Additional Info on Depth

Hey guys. I did a quick test shooting in bokeh mode / the aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.

If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial to get depth information).

Thus, to extract two images from one shot, the best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.

Hey oTToToTenTanz,

Really appreciate your (and everyone else's) help on this! Can you give me some more info on how you actually extract the depth info in a usable form, e.g. a matrix? Does the saved image just come out as an RGB-D image?

Thanks so much,

Josh
 

Tijauna

Member
Aug 9, 2015
18
4
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones. They'll look the same...
 
  • Like
Reactions: jbudman

PerpulaX

Member
Apr 24, 2008
32
9
Hoppegarten
As far as I understand it, there are two cases in which both cameras are used.

One is for the wide-aperture ("bokeh") mode, in which a depth map is created from both pictures that have a slightly different perspective. I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
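One way to check the "way too large" suspicion without special tools is to simply count how many bytes sit past the JPEG end marker. A rough Python sketch (naive, as noted in the docstring):

```python
def trailing_bytes_after_jpeg(path):
    """Count the bytes stored after the JPEG End-Of-Image marker (FF D9).
    Naive: an embedded EXIF thumbnail carries its own EOI marker, so a
    real P9 file may need proper segment-by-segment parsing instead."""
    data = open(path, "rb").read()
    eoi = data.find(b"\xff\xd9")
    if eoi == -1:
        raise ValueError("no EOI marker: not a JPEG?")
    return len(data) - (eoi + 2)
```

A plain JPEG should report (close to) zero trailing bytes; a wide-aperture shot with extra data appended should report a large number.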

The other case is landscape shots in low light. Several people reported that covering the second camera in this scenario results in much darker images. This seems like a silly limitation, but I believe I understand why it's there. The two images that the cameras take differ in perspective (obviously, due to the fact that the cameras are mounted next to each other), which is quite difficult to adjust for when trying to combine both sensors' data. However, when focusing at infinity, for example when taking landscape shots, the difference in perspective is negligible, so that in this case the two sensors' data can be easily combined to improve low-light performance.

Maybe it would be possible to combine both sensors' output at closer distances in a satisfactory way, but it seems that Huawei chose not to implement that. If I find a way to extract the second sensor's data from a wide-aperture image, I'll poke around a bit to see if it would be possible to combine them.
 

PerpulaX

Member
Apr 24, 2008
32
9
Hoppegarten
I did some poking around on my lunch break. I threw a wide-aperture image into JPEGsnoop and it came up with two images in the file (four if you count the thumbnails, as well), the first one being the processed, "bokeh" image, while the second is the original color image without any processing. I assume that this is the image that is used to re-process the wide-aperture image when editing the focus point or aperture through the gallery app.

JPEGsnoop also told me that there's more data after the image segments. Since it couldn't work out what that data is for (it's past the end of the actual JFIF file), I checked it out in a hex editor. I found a marker "edof" (extended depth-of-field?) followed by what looks like some header data, followed by lots of repeating bytes. This block is about 1/16 the size of the image in pixels (so 1 byte for each 4x4 pixel block). I'm not sure whether that's a small greyscale version of the image itself or a depth map, but I suspect the latter.
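For anyone who wants to repeat this without JPEGsnoop, a rough byte scan is enough to see the same structure: the offsets of embedded JPEG Start-Of-Image markers and of the "edof" tag. This is a quick sketch, not a real JPEG parser:

```python
def scan_wide_aperture_file(path):
    """Rough scan of a wide-aperture JPG: offsets of embedded JPEG
    Start-Of-Image markers (FF D8 FF) and of the 'edof' tag. The SOI
    scan will also pick up thumbnails, as JPEGsnoop did."""
    data = open(path, "rb").read()
    soi_offsets, pos = [], 0
    while (pos := data.find(b"\xff\xd8\xff", pos)) != -1:
        soi_offsets.append(pos)
        pos += 3
    return soi_offsets, data.find(b"edof")
```

On the file described above you'd expect several SOI offsets (main image, second full image, thumbnails) and an "edof" offset near the end.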

So, I'm afraid that it will be impossible to extract the monochrome image sensor data from a wide-aperture image, as it's not there anymore. :(
 
Last edited:
  • Like
Reactions: ScareIT

ScareIT

Member
Aug 19, 2016
22
13
I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
I can confirm that. I did a few shots of a single subject (always using a tripod):
- the pictures in normal mode, and in wide aperture mode with the B&W camera covered, come out at about 2.5 MB (max resolution); the photo's Title/Subject/Description is marked as "edh"
- the same subject in wide aperture mode (with the B&W camera fully working) comes out at about 5.5 MB (more than double); the photo's Title/Subject/Description is marked as "edf"; if this photo is opened in image-editing software, no alpha layers or other extra visual information appears anywhere; if the photo is saved back, its size becomes comparable to the same photo without the wide aperture effect

Since the depth information doesn't appear in any editing software, I suppose it is hidden inside the JPEG file with some kind of steganography technique. I tried to examine the file with some ready-to-use tools (like stegdetect, which should be able to detect whether a JPEG file is standard or has something hidden in it), but I only get some mismatching-header errors, nothing that lets me understand where and how the depth information is stored and, above all, whether the black-and-white picture is also stored inside.
 

dragon-tmd

Senior Member
Jan 16, 2005
213
67
BRD
The camera seems to be making two images for every shot. You can, for instance, take a picture and then edit it with the on-board effects. If I make the picture e.g. partially B&W, I can see that it uses an original B&W picture taken with the original shot. This is not an artificial B&W conversion.

The question is: where is it stored, or is the necessary information only "combined"?
 

zoubla88

New member
Dec 9, 2016
1
0
PerpulaX, ScareIT, you guys are right:
- the 992x744 depth map is stored as 8-bit values at the end of the file; use the HxD editor to extract the image (look for the ASCII tags "edof" & "DepthEn")
- the displayed JPG is the one saved to your SD card after blur processing
- the hidden JPEG in the EXIF data is the original shot, without blur processing
So that explains why you can re-edit your picture anytime on your P9 even after renaming it... or simply have fun with the depth map, for instance for cutting out subjects in Photoshop ;)
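Following that description, the same extraction can be scripted instead of done by hand in HxD. A sketch with Pillow, under the assumption (which may not hold on every firmware) that the map really is the last 992*744 bytes of the file:

```python
from PIL import Image

DEPTH_W, DEPTH_H = 992, 744  # depth-map size reported above

def extract_depth_map(jpeg_path, out_path="depth.png"):
    """Save the trailing 992x744 block of 8-bit samples as a grayscale
    image. Assumption: the map sits at the very end of the file; on a
    real P9 file the 'edof'/'DepthEn' headers may need to be skipped."""
    data = open(jpeg_path, "rb").read()
    n = DEPTH_W * DEPTH_H
    if len(data) < n:
        raise ValueError("file is smaller than one depth map")
    depth = Image.frombytes("L", (DEPTH_W, DEPTH_H), data[-n:])
    depth.save(out_path)
    return depth
```

If the output looks shifted or scrambled, the offset of the map within the trailing block is probably wrong; in that case search for the "edof" tag first and slice relative to it.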
 

Devil Trigger

Senior Member
Mar 30, 2012
268
90
Rome
OnePlus 6
PerpulaX, ScareIT, you guys are right:
- the 992x744 depth map is stored as 8-bit values at the end of the file; use the HxD editor to extract the image (look for the ASCII tags "edof" & "DepthEn")
- the displayed JPG is the one saved to your SD card after blur processing
- the hidden JPEG in the EXIF data is the original shot, without blur processing
So that explains why you can re-edit your picture anytime on your P9 even after renaming it... or simply have fun with the depth map, for instance for cutting out subjects in Photoshop ;)

Can you explain what's possible to do in post-processing? What can I do with the photo?
 

jbarbar

Member
Aug 27, 2015
7
3
You can do exactly the same things as the Huawei gallery app (at least).
For Photoshop, there are plenty of tutorials on using depth maps with the Lens Blur filter.
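To give a rough idea of what such a depth map lets you do, here's a toy depth-weighted blur in Pillow/NumPy. This is only a crude stand-in for Lens Blur (which simulates a real aperture, not a Gaussian), and it assumes the depth map has already been scaled to the photo's size; the function name is made up for illustration:

```python
import numpy as np
from PIL import Image, ImageFilter

def depth_blur(photo, depth, focus=0.0, max_radius=8):
    """Toy lens blur: blend a sharp and a Gaussian-blurred copy of `photo`,
    weighting each pixel by how far its depth value is from `focus` (0..1).
    Pixels at the focus depth stay sharp; far-off pixels go fully blurred."""
    sharp = np.asarray(photo.convert("RGB"), dtype=np.float32)
    soft = np.asarray(photo.convert("RGB").filter(
        ImageFilter.GaussianBlur(max_radius)), dtype=np.float32)
    d = np.asarray(depth.convert("L"), dtype=np.float32) / 255.0
    w = np.clip(np.abs(d - focus), 0.0, 1.0)[..., None]  # 0 = in focus
    out = (1.0 - w) * sharp + w * soft
    return Image.fromarray(out.astype(np.uint8))
```

Changing `focus` re-renders the shot with a different plane in focus, which is presumably what the P9 gallery app does with the hidden original image plus the depth map.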
 

Bumper03

Member
Dec 11, 2012
45
9
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones. They'll look the same...

Hi!
I think the P9 does take two pictures and combines them in low-light conditions. Here are two examples where something went wrong with combining the images and the two frames become visible: https://goo.gl/photos/cK5q2TEisEU7rmpz9


What do you think?

Abel
 
