Hey guys, today in this article we are going to discuss smartphone cameras and the actual reality behind them.
So there are three cameras on the back of the iPhone 13 Pro: the main camera, the ultra-wide and the telephoto.
The one down here at the bottom is the main camera, so if you cover it with your finger, you can see the frame go dark. Makes sense. If you cover the other two, nothing happens. Now, when you hit that 3X button to zoom in, it's supposed to switch to the telephoto camera, right? But sometimes when you zoom to 3X and then cover the main camera, the frame still goes dark. Why?
This is because the iPhone camera thinks it knows better than you, and it usually does. Basically, in certain conditions, especially in lower light, you could get a worse photo by actually switching to the telephoto camera, which has a smaller sensor with a smaller aperture that lets in less light and can be noisier. So sometimes that 3X just crops in on the big main camera and doesn't even tell you, and that's actually going to give you a better photo.
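Just to make that decision concrete, here's a minimal sketch of what that kind of lens-picking logic could look like. To be clear, this is not Apple's actual code; the function name, the lux threshold and the labels are all made up for illustration.

```python
# Rough sketch of the lens-selection idea described above.
# Not Apple's code; the threshold, names and labels are invented.

def pick_camera_for_3x(scene_lux: float, low_light_threshold: float = 100.0) -> str:
    """Decide whether a 3x request uses the real telephoto or a crop of the main sensor."""
    if scene_lux < low_light_threshold:
        # The telephoto has a smaller sensor and aperture, so in dim scenes
        # a 3x crop from the big main sensor is often cleaner and less noisy.
        return "main_camera_cropped_to_3x"
    return "telephoto_3x"

# Example: a dim indoor scene silently stays on the main camera.
print(pick_camera_for_3x(scene_lux=40.0))   # -> main_camera_cropped_to_3x
print(pick_camera_for_3x(scene_lux=800.0))  # -> telephoto_3x
```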

See, smartphone cameras are smart, but something about them lately is they've gone past smart. They're bending reality. I'm in the middle of running the blind smartphone camera test over on Instagram right now; if you haven't already, go vote on those. It's a fascinating experiment every time, but a thought I've had is: maybe it's not just the brightest photo that's going to win every single time. I actually think that, similar to how in this tech bubble we underestimate how many people put cases on their phones (at least I do),
there is also a bit of an underestimation of how many people just want to be able to take a photo and post it with zero edits. Now that's kinda crazy. In the tech world or in the photography world, we see the more neutral photo, the one with more information, as the better photo, because then we can take it, edit it and make it exactly how we want. We want that control. But to most people, if they can just take their phone, point and shoot, and the photo that comes out is perfectly good enough to post with no edits at all, then to them that is a great camera. And of course that means letting the camera do all of the editing for you, because you have the least amount of control over the final look.
Now the danger of giving up all that control is that our photos become a product of someone else's vision, technically, and this is where it gets sort of crazy, because every smartphone company sees things a little differently, right?
We already know a Pixel photo looks different from an iPhone photo, which looks different from Huawei shots, which look different from Xiaomi shots. Every picture is the result of an image processing pipeline that is tuned by people, and that is a reflection of their biases and their skills and what they think we want. Which means every photo we take, even if it's of the same thing, will be slightly different depending on which camera you take it with. Which one is real? Which one is the most accurate? Maybe it doesn't matter.
In 2019 the Huawei P30 Pro came out. It had a pretty solid set of cameras. It was a flagship phone, of course, so people went out and tested its limits, and pretty quickly something sort of fishy came up. When you went outside at night, pointed the new periscope zoom camera at the moon and zoomed all the way in, the camera would recognise the moon and suggest you turn moon mode on. People started doing this and posting their results, and everyone's pictures of the moon looked surprisingly similar. Now maybe I shouldn't be shocked; they were all taking pictures of the same moon. But have you ever tried to take a picture of the moon with a phone? It usually just looks like a blob. It never looks that good. And these all looked really good. Maybe a little too good.
That's when Android Authority published an article with a bunch of samples. They believed that Huawei was using AI to not just recognise that you're taking a picture of the moon, but also to then superimpose a stored image of the moon and merge it onto your photo. Now at first that sounds pretty crazy, but it's also kind of clever, because the moon is tidally locked with the Earth, meaning one rotation takes the same amount of time as one orbit, so one face of the moon is always pointing towards Earth. You only ever see one side of the moon, which means it's always going to look the same, so you only need one stored image of the moon to superimpose over everyone's photos. Huawei denied this, of course, but the seed was definitely planted, and my take at the time, honestly, was: alright, well, you have this AI mode in your camera anyway, and it's already recognising scenes and adjusting and changing things to enhance your photos.
Why not add a picture of the moon in there? But it does bring up a question, a totally fair question, which is: how far is too far? People already seem to want the most finished version of their photos straight out of the camera, with the phone doing the edits and enhancements for them.
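For illustration only, here's a toy sketch of what the alleged trick could look like: once a moon-shaped region has been detected, blend one stored image of the moon's near side over it. This is a guess at the general idea, not Huawei's actual pipeline, and every name and number in it is hypothetical.

```python
# Toy illustration of the alleged "moon mode" trick: blend one stored image
# of the moon's near side over an already detected moon region.
# Purely hypothetical; not Huawei's pipeline.
import numpy as np

def enhance_moon(photo: np.ndarray, stored_moon: np.ndarray,
                 region: tuple, alpha: float = 0.6) -> np.ndarray:
    """Blend stored_moon over the (x, y, w, h) region of photo."""
    x, y, w, h = region
    # Nearest-neighbour resize of the stored texture to fit the detected region.
    rows = np.linspace(0, stored_moon.shape[0] - 1, h).astype(int)
    cols = np.linspace(0, stored_moon.shape[1] - 1, w).astype(int)
    resized = stored_moon[rows][:, cols]

    out = photo.copy()
    patch = out[y:y + h, x:x + w].astype(float)
    out[y:y + h, x:x + w] = (alpha * resized + (1 - alpha) * patch).astype(photo.dtype)
    return out

# Usage with fake data: a dark night shot and a bright stored moon texture.
night_shot = np.zeros((1080, 1920, 3), dtype=np.uint8)
moon_texture = np.full((512, 512, 3), 200, dtype=np.uint8)
result = enhance_moon(night_shot, moon_texture, region=(900, 400, 120, 120))
```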
How far is too far? Xiaomi phones, we already know, can detect a landscape and make the blue sky bluer or crank up the green on the green grass, but some of these phones from Chinese vendors also have very different acceptable levels of body and face adjustment. Take this Xiaomi Mi 11 Ultra.
When you open the selfie camera, it has a beauty filter, but this isn't just face smoothing. It literally lets you move your hairline, change the shape of your chin and your nose. You can slender up your face, change the size of your lips and cheeks, make your eyes bigger or smaller, and put makeup on yourself. And it's all just built into the camera out of the box, and it's treated as totally normal.
It reminds me of when there was a certain commercialised version of this: the Galaxy S9 in the United States had a Bixby Vision feature for trying on makeup, where you could swap between different shades of lipstick, blush and eye shadow, and then Bixby would give you a link to buy the actual retail version of that makeup. But really, the most powerful adjustments are the ones that happen when you don't even know it, that you didn't even ask for, happening in the background.
It's the highest level of computational photography. The Google Pixel 6, for example, is always running the main camera at one shutter speed and the ultra-wide camera at a much faster shutter speed at the same time.

So if you take a photo of a moving person, the phone detects the face, and if it's blurry, it can take a non-blurry copy from the ultra-wide camera and merge it onto your subject to keep just the face crisp and clear.
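Here's a simplified sketch of that idea, just to show the mechanics: two simultaneous captures, a quick sharpness check on the face in the slower-shutter frame, and a swap-in from the faster-shutter frame if it looks soft. The function names, threshold and sharpness metric are placeholders I made up, not Google's actual implementation, and it assumes the two frames are already aligned.

```python
# Simplified sketch of the dual-capture face deblur idea described above.
# Names, threshold and sharpness metric are illustrative, not Google's code.
# Assumes both frames are already registered to each other.
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Crude sharpness score: mean magnitude of local intensity gradients."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def merge_sharp_face(main_frame: np.ndarray, fast_frame: np.ndarray,
                     face_box: tuple, threshold: float = 4.0) -> np.ndarray:
    """If the face in the slow-shutter main frame looks blurry, paste in the
    face crop from the fast-shutter frame so just the face stays crisp."""
    x, y, w, h = face_box
    main_face = main_frame[y:y + h, x:x + w]
    if sharpness(main_face.mean(axis=2)) < threshold:
        out = main_frame.copy()
        out[y:y + h, x:x + w] = fast_frame[y:y + h, x:x + w]
        return out
    return main_frame

# Usage with fake frames: a flat (hence "blurry") face region gets replaced.
main = np.zeros((600, 800, 3), dtype=np.uint8)
fast = np.random.randint(0, 255, (600, 800, 3), np.uint8)
fixed = merge_sharp_face(main, fast, face_box=(300, 200, 100, 100))
```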
All of this happens in the background without you even asking. There's also already a feature in FaceTime on iPhones called Eye Contact that moves your pupils to make it look like you're making eye contact with the camera, even though you're not; you're looking at the screen below the camera. It's pretty eerie and slightly creepy, and it works a little too well, but at least you can turn it off. And I could swear this next feature exists somewhere, I must have been imagining a keynote, but I can't find it anywhere, so I'm going to predict the future and say it will exist at some point in some phone, probably in something like a Pixel first. Imagine you're taking a group selfie. There's a bunch of people with you. You hit the shutter button, and not everyone has their best face, not blinking and smiling, at that exact moment, but at different moments everyone has sort of their ideal face. So the software smartly goes through and merges the best smiling, non-blinking face for everyone in the selfie. It doesn't even tell you; it just does it in the background.
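If a feature like that ever ships, the core merge step might look something like this toy sketch: score each person's face across a short burst and composite the best-scoring crop of each into one frame. Everything here, including the scoring hook, is hypothetical; a real pipeline would lean on proper face and landmark detection.

```python
# Toy sketch of a hypothetical "best face for everyone" merge across a burst.
# The scoring function (eyes open, smiling) is left as a placeholder.
import numpy as np

def best_face_composite(burst, face_boxes_per_frame, score_face):
    """burst: list of aligned frames.
    face_boxes_per_frame[i][p]: (x, y, w, h) of person p in frame i.
    score_face(frame, box) -> float, higher = better (open eyes, big smile)."""
    base = burst[0].copy()
    num_people = len(face_boxes_per_frame[0])
    for p in range(num_people):
        # Pick the frame where this person's face scores highest.
        best_i = max(range(len(burst)),
                     key=lambda i: score_face(burst[i], face_boxes_per_frame[i][p]))
        x, y, w, h = face_boxes_per_frame[best_i][p]
        base[y:y + h, x:x + w] = burst[best_i][y:y + h, x:x + w]
    return base

# Usage with fake data: two frames, two people, brightness as a stand-in score.
frames = [np.random.randint(0, 255, (720, 1280, 3), np.uint8) for _ in range(2)]
boxes = [[(100, 100, 80, 80), (400, 120, 80, 80)]] * 2  # same boxes in both frames
brightness = lambda frame, b: float(frame[b[1]:b[1] + b[3], b[0]:b[0] + b[2]].mean())
group_shot = best_face_composite(frames, boxes, brightness)
```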
We've actually seen versions of this on the way up to this feature. Believe it or not, in 2012 a Nokia Lumia phone had a selfie mode where you hold the phone up for five seconds, and then after the shot you could scroll between five different faces to pick which one you liked best. So it's crazy, but this is the direction smartphone cameras are going: as computational photography gets better and better and merges more and more things in, eventually these cameras are outputting captures of moments in time that never really happened. So it's easy to see a future where smartphone cameras just recognise all kinds of things. AI mode right now is pretty basic: it will see a sunset and make the oranges brighter. But maybe it will start recognising all types of objects. Maybe you're in front of a popular Instagram wall in Santa Monica somewhere, and it notices you're taking a picture in front of it.