
Extracting Both Images from P9 Dual Camera

By jbudman, Junior Member on 21st September 2016, 08:08 AM
Hi All,

I am trying to test some image analysis applications with the Huawei P9. Is it possible to extract two images (one from each camera) from a single shot? I know one of the cameras has a monochrome lens, and I know how to obtain just the monochrome image, but it would be extremely valuable if I could obtain both images from just one shot.

Looking forward to your assistance,

Josh
 
 
22nd September 2016, 01:26 PM |#2
ScareIT, Junior Member
I don't want to dampen your enthusiasm, but from my tests, there are no two images from one shot.
I didn't take an engineering approach, just some empirical tests, and from those I gather that:
- when you select Monochrome mode, the P9 activates the left camera (on the left when facing the back of the phone)
- in all the other modes, the P9 activates the right camera (the one between the flash and the left camera)

The P9 doesn't create two images and then combine them; it always shoots just one. How did I come to this conclusion? You can try it at home:
I chose a few static subjects and photographed them with the phone on a tripod, first in the normal way and then covering each of the two cameras in turn with black tape.
Whether by naked eye or with image comparison software (I used Beyond Compare from Scooter Software), I found no difference at all: no extra brightness, no extra contrast, no better image definition.
I tested in a bright environment and in a dark one, with PRO mode enabled and disabled, and tried to be as thorough as I could (honestly, I skipped RAW mode and tested only JPEGs). My conclusion is that the two cameras do different jobs, but they are definitely NOT working together.
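The covered-lens comparison described above can also be scripted rather than eyeballed; here is a minimal sketch using Pillow and NumPy (the file names are placeholders for two tripod shots of the same static subject):

```python
# Compare two photos pixel by pixel, as in the covered-lens test above.
# Assumes two JPEGs shot from a tripod; the file names are placeholders.
from PIL import Image
import numpy as np

def mean_abs_diff(path_a, path_b):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        raise ValueError("images differ in size")
    return float(np.abs(a - b).mean())

# A value near zero suggests the second camera contributed nothing:
# print(mean_abs_diff("both_lenses.jpg", "mono_lens_covered.jpg"))
```

JPEG compression noise means the result will rarely be exactly zero even for identical scenes, so it is the magnitude of the difference that matters, not its presence.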
22nd September 2016, 04:00 PM |#3
Senior Member
Thanks for testing, but did you also try this outdoors on a landscape view? Maybe then we'd see different results?

Otherwise this is yet ANOTHER thing Huawei lied about.
22nd September 2016, 05:29 PM |#4
ScareIT, Junior Member
Yes, I did.
I'm thinking about making a full post with photo comparisons. Let's see.
22nd September 2016, 09:43 PM |#5
Senior Member
Quote:
Originally Posted by ScareIT

Yes, I did.
I'm thinking about making a full post with photo comparisons. Let's see.

That would be nice!
23rd September 2016, 07:51 AM |#6
oTToToTenTanz, Senior Member
Hey guys. I did a quick test shooting in bokeh mode, i.e. the aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.

If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial for getting depth information).

Thus, to extract two images from one shot, the best bet is to try it in bokeh mode. Even then I don't know if it's possible, but the phone definitely uses both lenses in that case.
23rd September 2016, 09:12 AM |#7
ScareIT, Junior Member
Great, oTToToTenTanz!
I can confirm that! Both cameras are essential to enable the wide aperture effect: when you try to shoot in bokeh mode, an alert appears asking you to check that the lens is clear, the blur effect disappears, and it's impossible to edit the depth in post-production.

I have two hypotheses:
- the phone really combines the two pictures to reconstruct depth (a strategy used in all 3D cameras), so in some way it should be possible to get both pictures
- the phone uses the laser pointer to project IR around the subject, and the monochrome camera captures the infrared information (given that its lens has no RGB filter, it should be very efficient at that) and stores it to obtain an accurate depth map (I mean something like this: https://www.youtube.com/watch?v=dgrMVp7fMIE)
Nice things to try!
26th September 2016, 04:22 PM |#8
jbudman, OP, Junior Member
Additional Info on Depth
Quote:
Originally Posted by oTToToTenTanz

Hey guys. I did a quick test shooting in bokeh mode, i.e. the aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.

If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial for getting depth information).

Thus, to extract two images from one shot, the best bet is to try it in bokeh mode. Even then I don't know if it's possible, but the phone definitely uses both lenses in that case.

Hey oTToToTenTanz,

Really appreciate your (and everyone else's) help on this! Can you give me some more info on how you actually extract the depth info in a usable form, e.g. as a matrix? Does saving the image just produce an RGB-D image?

Thanks so much,

Josh
26th September 2016, 06:17 PM |#9
Tijauna, Junior Member
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones. They'll look the same...
27th September 2016, 10:15 AM |#10
PerpulaX, Member, Berlin
As far as I understand it, there are two cases in which both cameras are used.

One is for the wide-aperture ("bokeh") mode, in which a depth map is created from both pictures that have a slightly different perspective. I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.

The other case is landscape shots in low light. Several people reported that covering the second camera in this scenario results in much darker images. This seems like a silly limitation, but I believe I understand why it's there. The two images that the cameras take differ in perspective (obviously, due to the fact that the cameras are mounted next to each other), which is quite difficult to adjust for when trying to combine both sensors' data. However, when focusing at infinity, for example when taking landscape shots, the difference in perspective is negligible, so that in this case the two sensors' data can be easily combined to improve low-light performance.
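The benefit of combining two sensors' data once parallax is negligible can be illustrated with plain frame averaging; the following is a toy NumPy sketch of the generic principle, not Huawei's actual pipeline (which is not public):

```python
# Toy illustration: averaging two noisy exposures of the same scene
# reduces noise by roughly sqrt(2). This is NOT Huawei's algorithm,
# just the generic principle behind combining aligned sensor data.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 128.0)               # idealized "true" scene
shot_a = scene + rng.normal(0, 10, scene.shape)  # two noisy captures
shot_b = scene + rng.normal(0, 10, scene.shape)

combined = (shot_a + shot_b) / 2.0
print(round(shot_a.std(), 1), round(combined.std(), 1))
```

The combined frame's noise standard deviation comes out close to the single frame's divided by sqrt(2), which is exactly the kind of low-light gain a second sensor could provide when the two views line up.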

Maybe it would be possible to combine both sensors' output at closer distances in a satisfactory way, but it seems that Huawei chose not to implement that. If I find a way to extract the second sensor's data from a wide-aperture image, I'll poke around a bit to see if it would be possible to combine them.
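The "way too large" observation above is easy to check: locate the JPEG end-of-image marker and measure what follows. A rough sketch (the file name is a placeholder; note that EXIF thumbnails embed their own FFD8/FFD9 pairs, so this measures from the last end-of-image marker):

```python
# Rough check for data appended after the JPEG end-of-image marker.
# The file name is a placeholder. EXIF thumbnails contain their own
# FFD8/FFD9 pairs, so we measure from the LAST end-of-image marker;
# appended binary data could itself contain the byte pair, so treat
# the result as an estimate.
def trailing_bytes(path):
    data = open(path, "rb").read()
    eoi = data.rfind(b"\xff\xd9")    # last end-of-image marker
    if eoi == -1:
        raise ValueError("no JPEG EOI marker found")
    return len(data) - (eoi + 2)     # bytes after the final EOI

# Usage: print(trailing_bytes("wide_aperture_shot.jpg"))
```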
27th September 2016, 12:46 PM |#11
PerpulaX, Member, Berlin
I did some poking around on my lunch break. I threw a wide-aperture image into JPEGsnoop and it came up with two images in the file (four if you count the thumbnails, as well), the first one being the processed, "bokeh" image, while the second is the original color image without any processing. I assume that this is the image that is used to re-process the wide-aperture image when editing the focus point or aperture through the gallery app.

JPEGsnoop also told me that there's more data after the image segments. Since it couldn't work out what that data is for (it's past the end of the actual JFIF file), I checked it out using a hex editor. I found a marker "edof" (extended depth-of-field?) followed by what looks like some header data, followed by lots of repeating bytes. This block is about 1/16 the size of the image in pixels (so 1 byte for each 4x4 pixel block). I'm not sure whether that's a small greyscale version of the image itself or a depth map, but I suspect it's the latter.

So, I'm afraid that it will be impossible to extract the monochrome image sensor data from a wide-aperture image, as it's not there anymore.
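The findings above can be reproduced with a few lines of Python; here is a sketch that counts start-of-image markers and looks for the "edof" block (the file name is a placeholder, and a naive byte scan can over-count markers that happen to occur inside compressed data):

```python
# Naive scan of a wide-aperture JPEG for embedded images and the
# "edof" block described above. The file name is a placeholder; a
# plain byte scan can miscount markers occurring inside entropy-
# coded data, so treat the counts as approximate.
def inspect(path):
    data = open(path, "rb").read()
    return {
        "soi_markers": data.count(b"\xff\xd8"),  # start-of-image markers
        "edof_offset": data.find(b"edof"),       # -1 if absent
        "file_size": len(data),
    }

# Sanity check on the 1-byte-per-4x4-block estimate: for a 12 MP
# shot (e.g. 3968 x 2976), the depth block would be
# 3968 * 2976 / 16 = 738,048 bytes.
# Usage: print(inspect("wide_aperture_shot.jpg"))
```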