
He does have a point in that a 12 MP CMOS sensor will have 12 million sensor elements, not 36 million. Colour filters are placed in front of each pixel so that RGB data can be extracted. Usually, 1/4 of the pixels are R, 1/4 are B and 1/2 are G. The raw sensor data for each pixel thus contains either R, G, or B, at varying intensities depending on the passband of each filter. The data is then combined by a demosaicing (debayering) algorithm to reconstruct full colour: surrounding colour information is interpolated so that each pixel ends up with R, G, and B values.
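For the curious, here's roughly what the simplest version of that interpolation looks like. This is only a sketch of bilinear demosaicing, assuming an RGGB mosaic layout; the function name and kernel choice are illustrative, and real camera pipelines use much more sophisticated edge-aware algorithms:

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        """Bilinear demosaicing of a 2D raw Bayer mosaic (RGGB layout).

        raw: 2D float array of raw sensor values.
        Returns an H x W x 3 RGB array.
        """
        h, w = raw.shape

        # Boolean masks marking where each colour was actually sampled.
        r_mask = np.zeros((h, w), bool)
        g_mask = np.zeros((h, w), bool)
        b_mask = np.zeros((h, w), bool)
        r_mask[0::2, 0::2] = True
        g_mask[0::2, 1::2] = True
        g_mask[1::2, 0::2] = True
        b_mask[1::2, 1::2] = True

        # Interpolation kernels: green is sampled twice as densely as red/blue,
        # so it only needs its 4-neighbourhood; red/blue also use the diagonals.
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

        rgb = np.zeros((h, w, 3), float)
        for ch, mask, kernel in ((0, r_mask, k_rb), (1, g_mask, k_g), (2, b_mask, k_rb)):
            plane = np.where(mask, raw, 0.0)            # keep only sampled values
            rgb[..., ch] = convolve(plane, kernel, mode="mirror")
        return rgb

At pixels where a colour was sampled, the kernel passes the value through unchanged; elsewhere it averages the nearest samples of that colour.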

Sorry if the writeup isn't that specific, I mostly work with monochrome CMOS cameras.

https://en.wikipedia.org/wiki/Demosaicing https://en.wikipedia.org/wiki/Bayer_filter

edit: I should also state that I don't know anything about iPhone cameras. It's quite possible, though not typical, that they have a 36 MP sensor producing 12 MP images.

edit 2: I read that the iPhone 12 has 1.7 µm pixels. A 36 MP 4:3 sensor with 1.7 µm pixels would be about 11.8 mm wide, while a 12 MP 4:3 sensor would be just 6.8 mm wide.
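The arithmetic behind those widths, using the same assumptions (4:3 aspect ratio, 1.7 µm pixel pitch):

    # Back-of-the-envelope sensor width for a 4:3 sensor.
    def sensor_width_mm(megapixels, pixel_pitch_um=1.7, aspect=4 / 3):
        pixels_wide = (megapixels * 1e6 * aspect) ** 0.5   # e.g. 12 MP -> 4000 px
        return pixels_wide * pixel_pitch_um / 1000.0        # convert um to mm

    print(f"12 MP: {sensor_width_mm(12):.1f} mm wide")  # ~6.8 mm
    print(f"36 MP: {sensor_width_mm(36):.1f} mm wide")  # ~11.8 mm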


