It’s Millipixels and it’s here: https://source.puri.sm/Librem5/megapixels/
You need to upgrade the kernel before it can work. The new kernel will not work with the old Megapixels.
Is this URL also correct for Millipixels?
This one comes directly from the mentioned link.
sudo apt list megapixels
vs sudo apt list millipixels
I think so. It is confusing that the repo is still called megapixels and there is an issue about renaming the repo: https://source.puri.sm/Librem5/megapixels/-/issues/35
Wish granted.
Could some kind soul shed a bit of light on how to manage the GUI of the app, especially which values to use for gain, exposure, focus, and balance, or point me to some document where they’re explained?
One problem I detected: when the app is started, it consumes a lot of CPU cycles, because it constantly grabs raw frames from the sensor while waiting for the click; the last raw frame is then used for further postprocessing. Why is this *.dng file of ~13 MB also stored in ~/Pictures, while the resulting *.jpg file is only ~1 MB?
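The size difference is expected: the DNG stores the raw sensor data uncompressed, while the JPEG is lossily compressed. A back-of-the-envelope check in Python (the exact sensor mode is an assumption; the Librem 5’s rear sensor is roughly 13 megapixels):

```python
# Rough size of one uncompressed raw frame.
# Resolution and sample depth are assumptions: ~13 MP sensor, 8-bit raw samples.
width, height = 4208, 3120
bits_per_sample = 8
raw_bytes = width * height * bits_per_sample // 8
print(f"{raw_bytes / 1e6:.1f} MB")  # prints "13.1 MB", matching the ~13 MB DNG
```

So the DNG is simply every sensor sample written out as-is; the ~1 MB JPEG is the same image after demosaicing and lossy compression.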
See “[MyL5] arrived and looks good. First impressions” and the following replies for why, and for what you can do about it.
Just saw this in the PureOS repo: digikam. I haven’t used this application yet, but I’m still posting its main description: “While digiKam remains easy to use, it provides professional level features by the dozens. It is fully 16 bit enabled including all available plugins, supports RAW format conversion through libraw, DNG export and ICC color management work flow.”
After this operation, 545 MB of additional disk space will be used.
Do you want to continue? [Y/n] n (I need to ask for a few days off ASAP.)
Very, very nice, the new focus mechanism from v4l-utils on the Librem 5 camera (DEV). Beautiful shots.
I imagine that the raw data from the image sensor is first stored in a DNG file and then it is converted into a lossy JPG image, but I haven’t looked at the code to see how it works. People who want to edit the image or want to store it in a lossless format like TIFF or PNG will appreciate having the original image data.
DNG is a free/open-source format for storing the uncompressed information received from the image sensor. It is similar to the RAW formats used by the majority of camera makers, but each camera maker has its own proprietary RAW format, whereas DNG is vendor-neutral, openly documented, and easy to edit. For more info, see: https://www.adobe.com/creativecloud/file-types/image/raw/dng-file.html
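Since DNG is a TIFF-based container, a file can be cheaply recognized as DNG/TIFF by its first four bytes. A minimal sketch in Python (the function name is mine):

```python
def looks_like_tiff(header: bytes) -> bool:
    """DNG files are TIFF containers: they start with a byte-order mark
    ('II' little-endian or 'MM' big-endian) followed by the magic number 42."""
    return header[:4] in (b"II*\x00", b"MM\x00*")

# Usage with a real capture, e.g.:
# with open("IMG20220125103635.dng", "rb") as f:
#     print(looks_like_tiff(f.read(4)))
```

Identifying the camera that produced it requires parsing the TIFF/EXIF tags, but the container itself is this simple.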
The new focus in the cam is a big step forward. Thanks to all who are involved.
What I’m missing: here is a screenshot of a camera app on another phone where I tapped a certain point of the future image (the thin yellow square) to indicate that this part should be sharp (and not the middle or the background).
Is the photo taken immediately when I tap the shutter, or when the preview goes whiter for a moment about one second later? If at the moment of tapping the shutter, why does it go whiter for a short moment after the tap?
Okay, that will be needed for autofocus, but it won’t land anytime soon.
Somewhere in between those two moments. The whiter version is your actual photo. It doesn’t have exactly the same brightness because the modes at different resolutions are still far from perfect.
You don’t have to imagine. /usr/share/megapixels/postprocess.sh
It isn’t only about lossless v. lossy, but also about postprocessing the raw data from the sensor. You can generally do more CPU-intensive postprocessing on your computer, as compared with your digital camera (or even your smartphone), so some photographers will only transfer the raw file from the digital camera and do the rest on their computer. (As you say, raw files can be in proprietary formats, requiring the manufacturer’s proprietary software to work with, which is exactly what we wouldn’t want.)
The file name of the postprocessing script is actually /usr/share/millipixels/postprocess.sh. I made a small modification to log the start and stop of the script:
tail -1 /tmp/millipixels.log
postprocess.sh /tmp/megapixels.4cEnxH /home/purism/Pictures/IMG20220125103635 20220125:10:36:35.186403931 - 20220125:10:36:46.360204031
i.e. it takes around 11 seconds to run the postprocessing, which also means you have to wait 11 seconds before you can take another picture. This isn’t practical and should be redesigned.
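The duration can be computed directly from the two timestamps in the log line above; a small Python sketch (truncating the nanoseconds to microseconds, since datetime’s %f only parses six fractional digits):

```python
from datetime import datetime

def span_seconds(start: str, end: str) -> float:
    # Timestamps look like 20220125:10:36:35.186403931; keep only the
    # first 24 characters so %f sees six fractional digits.
    fmt = "%Y%m%d:%H:%M:%S.%f"
    t0 = datetime.strptime(start[:24], fmt)
    t1 = datetime.strptime(end[:24], fmt)
    return (t1 - t0).total_seconds()

print(span_seconds("20220125:10:36:35.186403931",
                   "20220125:10:36:46.360204031"))  # prints 11.173801
```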
You can shave some milliseconds off that processing time by replacing the shell scripts in Millipixels with compiled C, C++ or Rust code, and I’m sure that there are other tricks that can be done in the software, but the i.MX 8M Quad lacks an image signal processor and a hardware video/image encoder, so the processing of the photos has to be handled by software running on the 4 Cortex-A53 cores, which aren’t that powerful. Even with good optimization of the software, I doubt that the L5’s camera will ever be very fast. Both the i.MX 8M Plus in Fir and the RK3566 in the PinePhone 2 will have an ISP and a hardware video/image encoder, which should improve the speed of their cameras if there is proper driver support for that hardware.
At some point in the future, we hope that we’ll be able to leverage the GPU to process the pictures.
How we get there seems murky right now, given that libcamera is not that much about processing, and that dcraw, as I’ve heard, is a plate of spaghetti.
On byzantium, perhaps. On amber, I think the path that I gave is correct. So hopefully we have now covered everybody.
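To see which path applies on a given release, one could simply probe both locations (a sketch; the helper name is mine, the paths are the two mentioned in this thread):

```shell
#!/bin/sh
# Print the first existing path among the candidates given as arguments.
find_postprocess() {
    for p in "$@"; do
        if [ -f "$p" ]; then
            echo "$p"
            return 0
        fi
    done
    return 1
}

# byzantium ships millipixels, amber ships megapixels (per this thread):
find_postprocess /usr/share/millipixels/postprocess.sh \
                 /usr/share/megapixels/postprocess.sh \
    || echo "no postprocess.sh found"
```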