libcamera v0.0.5. Plenty of improvements all around
libcamera v0.1.0. This release brings in lots of
updates to the core and to pipeline support, and crucially
takes the next step towards better integration for
applications by moving to a new structure for soname updates.
Now that we have ABI checks in place, the soname will only
be updated when we detect an ABI change, helping applications
continue to link against libcamera for longer.
Sadly, GNU PureOS still ships an old libcamera for the Librem 5.
Dude, it was literally tagged 10 hours ago.
How can other devices be so bad compared to the L5 camera? Check this:
This is a photo of a kitchen cupboard taken with a Samsung S8 at the default settings (auto mode).
A Lenovo Tab10 gives the same result. The photo is far from reality.
Here is the photo as taken by L5:
This image is exactly true to life.
So, although I do not care about Samsung or Lenovo, out of curiosity I wonder how they can be so bad. It seems that they love grays; I hate this. They desaturate colors, and the L5 does a much better job. I have noticed that they fail especially when you photograph an object that is mainly one color; they work better when the object has color differences.
That’s just how white balance works. If there’s no other reference in the picture to average it with, it’s going to balance to the surface you’re presenting it with, making it gray in the process. I’m afraid the Librem 5 is going to do the same thing once its software improves, as currently it only balances to a handful of predefined presets.
At least white balance is something that can be easily and losslessly corrected when developing images out of DNG files.
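To make that point concrete, here is a minimal sketch (with made-up gain values, not libcamera's actual ones) of why rebalancing linear raw data is lossless: white balance is just a per-channel multiplication, so a raw developer can divide out the as-shot gains recorded in the DNG and apply new ones without degrading the data.

```python
# White balance on linear raw data is a per-channel multiplication, so it
# can be redone losslessly from a DNG: divide out the old gains, multiply
# in the new ones. (Illustrative only; real raw processing also involves
# demosaicing, black levels, color matrices, etc.)
old_gains = (2.0, 1.0, 1.5)   # hypothetical as-shot gains (R, G, B)
new_gains = (1.8, 1.0, 1.7)   # gains chosen later in the raw developer

pixel = (0.30, 0.40, 0.20)    # linear sensor values after black-level subtraction
balanced = tuple(v * o for v, o in zip(pixel, old_gains))
rebalanced = tuple(v / o * n for v, o, n in zip(balanced, old_gains, new_gains))

# Rebalancing gives the same result as applying new_gains directly to the
# raw pixel, i.e. nothing was lost in the first pass:
direct = tuple(v * n for v, n in zip(pixel, new_gains))
print(all(abs(a - b) < 1e-12 for a, b in zip(rebalanced, direct)))
```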
And you call this “an improvement” ? Leave it un-improved
Honestly, I do not know what this “balance” you refer to is, but this cannot be a good choice. This photo was not taken as a torture test for the phones’ software; it was for real use. I had to choose a color for a kitchen being built in another location, and I had to send pictures so that the color could be chosen properly. I did my job with the L5; I could not have done it with the other phones. So maybe some other options could be considered?
For such a use case, you would want to disable automatic white balance and use a specific reference value instead. But yes, in the general case, having automatic white balance that’s not constrained to presets is obviously an improvement.
Clearly I have to learn more about this issue. Is there a simple link to read and learn more?
This is why I love FOSS. People “develop” along with it. It is an educational tool! When something does not work as expected in FOSS, I see it as an opportunity to learn!
I wonder… do Android phones have the possibility you mention, to disable automatic white balance and use presets? Thanks for sharing.
Long story short: your color perception is being adjusted by your brain to the light around you. You don’t perceive objects as changing their colors just because the light that illuminates them becomes a bit warmer or colder. Cameras, however, do, so they need to do corrections (chromatic adaptation) based on lighting that illuminates the scene.
If your scene isn’t diverse enough, the automatic algorithm won’t have enough data to work on, and in the end it’s going to work on the assumption that your door is a neutral (gray) surface that gets its tint from the lighting around it, so the tint ends up being corrected out.
In fact, the only reason why your L5 photo turned out “exactly true” is that your lighting happened to match one of the presets. If it did not, it would most likely end up either too blue or too orange.
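To illustrate the failure mode described above (this is the textbook “gray world” heuristic, not necessarily the exact algorithm any of these phones use): the algorithm scales each channel so the scene average comes out neutral, which is reasonable for diverse scenes and disastrous for a frame filled by one colored surface.

```python
import numpy as np

def gray_world_gains(image):
    """Per-channel white-balance gains under the gray-world assumption:
    the scene's average color is neutral, so each channel mean is scaled
    to match the green channel's mean.
    `image` is an (H, W, 3) array of linear RGB values."""
    means = image.reshape(-1, 3).mean(axis=0)
    return means[1] / means

# A color-diverse scene: the channel means are already close to equal,
# so the gains stay near 1.0 and colors are left mostly alone.
diverse = np.random.default_rng(0).random((64, 64, 3))
print(gray_world_gains(diverse))

# A frame filled by a single warm-colored surface, like the cupboard door:
# the algorithm assumes the surface is gray and "corrects" its color away.
door = np.full((64, 64, 3), [0.8, 0.4, 0.1])
print(gray_world_gains(door))  # ~[0.5, 1.0, 4.0] -- the door becomes gray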
Technical ability: yes. Whether the user can do it? No idea; that’s a matter of camera app functionality and its UI. I haven’t used Android for ages.
Yes. The default camera app on Samsung phones does not offer it, and I don’t have a camera app to recommend, but there are many you can download and try.
In my eyes (after your explanation), it shouldn’t be a problem to automatically lerp between white balance enabled and disabled depending on the input data. Low color variety (like the pictures above) would lerp towards “less or no balancing”, while high color variety would just get balanced.
If you did “less or no balancing”, the whole photo would become green. You always white balance, the question is: to what kind of light?
Details. But in the end, it should just be a matter of how to analyze the scene and what kind of balance is needed to get the best picture in every scene. And it sounds not too complicated to decide whether a scene has many similar colors or not.
…but that information doesn’t give you anything at all. You only know that you don’t know. @antonis ended up being lucky that the preset made the picture closer to reality, but it could very well have been otherwise.
Then let’s train an AI that analyzes colors and shows us perfect results, and that will also kill all the noise.
Okay, good point. I haven’t used the camera much so far (holidays are coming soon), so I didn’t have enough experience of when it fits and when it doesn’t.
I liken it to:
_ + _ = _
There’s not enough information to calculate the total. If more information is given, then you can calculate it accurately.
What the presets do is effectively turn that into a multiple-choice question instead of fill-in-the-blank. You’re more likely to have a lucky guess with 4 options instead of infinite options, but it’s still just luck at the end of the day.
Manual controls allow you to bring in the outside information of what your eyes see to calculate.
Perhaps an easy way to imagine it is to consider a close-up photo of a white door in orange light, and an orange door in neutral light. All the camera sees is a huge orange surface, so what is it then? Is it white, or is it orange?
You can analyze the content of the photo, in which case you’ll end up with the assumption that it’s a white door in orange light.
Or… you can use predefined presets of common lighting conditions, in which case you’re more likely to end up with an orange photo.
You may be familiar with a different expression of this phenomenon though
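The door ambiguity above can be sketched with a few made-up numbers: the sensor records reflectance times illuminant per channel, so two very different scenes can produce exactly the same raw values.

```python
# Made-up reflectance and illuminant values, chosen only for illustration.
white_door    = (0.8, 0.8, 0.8)    # neutral surface
orange_light  = (1.0, 0.5, 0.25)   # warm illuminant
orange_door   = (0.8, 0.4, 0.2)    # orange surface
neutral_light = (1.0, 1.0, 1.0)    # neutral illuminant

# The sensor sees reflectance * illuminant per channel.
seen_a = tuple(r * l for r, l in zip(white_door, orange_light))
seen_b = tuple(r * l for r, l in zip(orange_door, neutral_light))
print(seen_a == seen_b)  # True: the pixels alone cannot tell the scenes apart
```

Both scenes land on the same raw RGB triple, which is exactly why the algorithm has to guess (or be told) what the light was.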
And then why does one not program many presets instead of relying on automation? Wouldn’t that be better?
How is that going to help? All you’re doing is making it closer to the fully automatic variant. The preset needs to be chosen in the exact same way anyway; it’s just a technical quirk of our app that, instead of going through a continuous scale, it only jumps between five or six discrete values.
Then I have not understood what the presets do. I have to study the Wikipedia link you gave me and get back. I thought that by adding more presets you could rule out the possibility of not matching any preset, but you say now that it would not help.
In any case, this photo was taken in ambient daytime light. There was no artificial light involved.