De-noising photos in the dark

I had the idea of de-noising L5 pictures taken in the dark, and so I did. The original photo:

Very noisy. Also look around the logo. My de-noising result:

Looks great, doesn’t it? You don’t even need to view the picture at full size to see the difference. It’s like a completely different camera. And here’s the best part: I only used the JPEG files to improve the image quality, no RAW files, no darktable. So it could be improved even further.

But there is a downside. The technique required 32 pictures of the same object, and the quality only improves with the square root of the image count, which means further improvements need far more pictures. So I thought about it the other way around: what is the minimum number of pictures at which I no longer see an improvement from adding one single further image? The answer was 7, so I took 8 pictures to create this:

You can still see a quality difference compared to 32 images, but you can’t see a difference between 8 and 9 images. You need many more to make further improvement visible.
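
To illustrate the square-root behaviour, here is a quick synthetic check in Python (NumPy assumed; the signal and noise numbers are made up, not measured from my photos):

import numpy as np

rng = np.random.default_rng(0)
signal, sigma = 128.0, 20.0  # made-up true pixel value and per-shot noise

for n in (1, 2, 4, 8, 16, 32):
    # average n noisy exposures of the same scene, pixel-wise
    shots = signal + rng.normal(0.0, sigma, size=(n, 100_000))
    rest = (shots.mean(axis=0) - signal).std()
    print(f"{n:2d} images: noise {rest:5.2f} (theory {sigma / n**0.5:5.2f})")

Doubling the image count only shaves the noise by a factor of about 1.4, which is why going past 8 pictures pays off so slowly.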

If you want to try it yourself:
Take all the images (the amount depends on your quality needs), load them all into GIMP or similar software as layers and blend them together. The best quality I got with GIMP this way: set the opacity of each image, counting from the bottom, to 100%/n (a scripted sketch follows below the list). It looks like this:
Image 4 = 25%
Image 3 = 33.3%
Image 2 = 50%
Image 1 = 100%
I could also apply the sharpen filter afterwards, since my focus was not perfect on these photos.
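
If you would rather script the blending than click through GIMP, here is a minimal sketch in Python (Pillow and NumPy assumed; the file names are placeholders):

import numpy as np
from PIL import Image

files = ["img_1.jpg", "img_2.jpg", "img_3.jpg", "img_4.jpg"]  # placeholders

# Blend like the GIMP layer stack: the n-th layer, counting from the
# bottom and starting at 1, is composited at opacity 1/n over the result.
result = np.asarray(Image.open(files[0]), dtype=np.float64)
for n, path in enumerate(files[1:], start=2):
    layer = np.asarray(Image.open(path), dtype=np.float64)
    alpha = 1.0 / n
    result = (1.0 - alpha) * result + alpha * layer

# After the last step every image carries the same weight 1/len(files),
# so this is just the pixel-wise mean of all shots.
Image.fromarray(np.round(result).astype(np.uint8)).save("stacked.jpg")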

This technique also works with small movements. That could even improve the de-noising, but it would also require some alignment work in the image editor, because we want to de-noise the images, not blur them. An automatic tool in Millipixels (or similar) would be awesome in the future.

10 Likes

Very interesting! And doable with ImageMagick, if I’m not mistaken - turned into a script and automated.

Movements might be solvable with Hugin, maybe (and I think astrophotography has some tools for this kind of long-exposure stuff)?

3 Likes

Since nobody else answered:
I’m no expert on photography (not even close) and don’t know much about it, except that a huge amount of knowledge is required to take professional-looking photos. My knowledge is more in computer graphics (also no expert). Many professional photographers probably have great tools for all kinds of things, not just astrophotography.

I read that there is an iOS app in development that does de-noising with up to 32 images without any AI, and at that point I had the idea to try it out myself without reading further. So the method here is my own idea and maybe not even the most efficient one, but still good enough, I guess. That other project had a comparison photo where the quality improved a bit more than mine. I think that’s partly because they moved the camera with their hands, which reduces the artefacts that come from dirt on the protective glass, and maybe they use some other algorithms to improve the image. We could use dos’s GlowUp, for example, to do similar things, and who knows what else.

But still, it’s great to know that we can take better photos in the dark without needing a better camera.

5 Likes

The Nokia 9 did that in hardware to great effect. The problem here is that your scene must be static, so the technique is next to useless most of the time. You could also try experimenting with lower ISO and a longer exposure time.

1 Like

I can’t, since there is no such function in Millipixels yet (and I’m no coder). And a longer exposure time also requires a static scene. Our camera is just not made for dark parties where you want to take a photo of your friends while they’re dancing, etc. So better one more use case than none.

I mean, you can also buy additional hardware, as in the Sony QX10 thread, or USB cameras, or a fully dedicated camera, but then we could just as well ask: “why not remove the camera entirely, since you can buy additional equipment?”

2 Likes

Before opening the camera app:

echo 0x0342 | sudo tee /sys/kernel/debug/s5k3l6/address
echo 0xff | sudo tee /sys/kernel/debug/s5k3l6/add_value

To get it back to normal:

echo 1 | sudo tee /sys/kernel/debug/s5k3l6/clear

Note that it can act funky when the kill switch gets used, as the driver doesn’t seem to correctly unload and reload its debugfs interface.

3 Likes

That is crying out for more explanation. :slight_smile:

2 Likes

Nothing has changed since Librem 5 Photo Processing Tutorial – Purism :stuck_out_tongue:

4 Likes

Hey, @Ick - looking at your numbers, is that logarithmic…? Or some other sequence? An approximate logarithmic opacity percentage graph for 10, 20, 30 and 60 images would look like:

Is that about right? For ten images the percentages would be: 100.0, 59.9, 35.9, 21.5, 12.9, 7.7, 4.6, 2.8, 1.7, 1.0. Can you test how that seems with your images?
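
For reference, those percentages follow exponential decay, tuned so the tenth layer lands at 1% - a quick way to generate them in Python:

# 100 * r**k with r = 0.01 ** (1 / 9), i.e. about 0.6 per step
r = 0.01 ** (1 / 9)
print([round(100 * r ** k, 1) for k in range(10)])
# [100.0, 59.9, 35.9, 21.5, 12.9, 7.7, 4.6, 2.8, 1.7, 1.0]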

And is there no point in using, say, 5 or 7 pics for this - even when using more linear numbers?

2 Likes

I already deleted the images, but I could create new ones. However, when I look at your graph, I have the feeling it doesn’t work as well as my numbers, and I don’t even think a test is necessary. Here is why:
After 10 images, the opacity of your blue graph is already at 1%, while my sequence is at 10% and is still at 1.67% even after 60 images. So your numbers tell me that more than about 5 images bring no improvement anymore.

On the other hand, your first values are higher than 50%. Let’s just combine 2 images: why should the second image be more important than the first? After blending at 60% opacity, the second image makes up 60% of the new image and the first only 40%; you would get the mirror image of that with 40% opacity (first image 60%, second image 40%). With the green and yellow lines it gets even worse at this point: for green, I need around 10 images before the opacity drops to 50%, so the 10th picture wipes out 50% of the combined information of the previous 9. That doesn’t sound right to me.

As I said, my formula was y = 1/n, where n is the image count. This math makes sense:
1 image = 100% opacity = each image carries 100% of the information
2 images = 50% opacity = each image carries 50% of the information
3 images = 33.3% opacity = each image carries 33.3% of the information
4 images and so on …
To verify the 33.3%: after step 2, the first image holds 50% of the information. On step 3, the third image is blended in at 33.3% opacity, so images one and two together keep 66.7%; multiplied by their equal 50% shares, that leaves 33.3% of the information for the first image and 33.3% for the second.
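
A tiny script to check that by tracking each image’s share of the final result (no actual pixels needed):

# weights[i] is the share of image i+1 in the current blend
weights = []
for n in range(1, 9):  # stack 8 images
    alpha = 1.0 / n
    weights = [w * (1.0 - alpha) for w in weights]  # old layers keep 1 - 1/n
    weights.append(alpha)                           # the new layer gets 1/n
    print(n, [f"{w:.3f}" for w in weights])

Every image ends up with the identical weight 1/n, e.g. 0.125 each for 8 images.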

Does it make sense?

2 Likes

It does, and because of the logarithmic values I was wondering myself how it was supposed to work with them (it wasn’t). For some reason I didn’t register the 1/n.

But I still don’t get why the 8-image limit. Was there no improvement with fewer images?

1 Like

There is no limit; you can use anything from 2 to 2,000,000 images. Of course, the biggest quality improvement of any single step comes from adding the second image. 8 is just kind of a sweet spot: 7 images is the point up to which I could still see an improvement with each step, and 8 is meant as “there may be photos where you can even see an improvement over this step”. But 8 is also a number computers like very much (2³ = 8), and even if that makes little sense here, it’s a number I tend to prefer for computation tasks.

So really, just take the amount you want.

Edit:
There is a technical limit, though. For image 34 we have 2.9% opacity, and for image 35 we also have 2.9%; you can see that rounding becomes an issue. There is a point beyond which we get no further improvement. I’m not sure exactly where it is, but I would not expect much from more than 32 images. Maybe with (double-precision) floating-point images we could go further. But still, you would sit there forever taking that many photos; I don’t think it makes much sense to put so much effort into it.
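
A quick check of where opacities collide, assuming the layer opacity can only be set with one decimal place, reproduces exactly that:

# find the first two image counts that round to the same layer opacity
seen = {}
for n in range(2, 64):
    opacity = round(100.0 / n, 1)  # the value you can actually set as opacity
    if opacity in seen:
        print(f"images {seen[opacity]} and {n} both round to {opacity}%")
        break
    seen[opacity] = n
# -> images 34 and 35 both round to 2.9%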

Edit 2:
Maybe we should combine it with exposure time to get further improvements.

2 Likes

FWIW, this looks like it’s just doing image stacking manually, and it can already be automated on mobile Linux. It’s how post-processing was done on the PinePhone, as that sensor is really noisy unless it was super bright outside. I could have sworn Millipixels had that too, as an option that’s off by default, but it looks like it was actually removed - I’m assuming due to performance, especially since it was based on the original postprocessd system. If you want to automate the stacking process, you can take a look at my Python post-processing setup, GitHub - luigi311/megapixels_postprocess, though I’m not sure how easy it is to enable in Millipixels. There is also a tool someone created for image stacking in Rust: GitHub - eadf/libstacker.rs: Image stacker based on OpenCV-rust.

As you noticed, though, there is a point of diminishing returns, and as others have said, things in motion can cause issues - as can an unsteady hand. You have to take this into account by making sure the frames are lined up. I use a method called ECC (enhanced correlation coefficient) that lines up multiple images and is quick enough to use on mobile devices; I also speed up the process by shrinking the images, matching on the shrunk copies, and then scaling the transform up to apply it to the full-size images.
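
A rough sketch of that approach with OpenCV in Python - not my actual script, and the file names are placeholders - estimating the translation on shrunk frames and scaling it back up:

import cv2
import numpy as np

def align_and_stack(paths, scale=0.25):
    # Align every frame to the first one with ECC, then average them all.
    frames = [cv2.imread(p) for p in paths]
    gray = lambda f: cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
    ref_small = cv2.resize(gray(frames[0]), None, fx=scale, fy=scale)
    h, w = frames[0].shape[:2]
    acc = frames[0].astype(np.float64)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)

    for frame in frames[1:]:
        small = cv2.resize(gray(frame), None, fx=scale, fy=scale)
        warp = np.eye(2, 3, dtype=np.float32)
        # estimate the shift on the small copies, it is much faster there
        cv2.findTransformECC(ref_small, small, warp,
                             cv2.MOTION_TRANSLATION, criteria, None, 5)
        warp[:, 2] /= scale  # scale the translation back to full size
        aligned = cv2.warpAffine(frame, warp, (w, h),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned.astype(np.float64)

    return np.round(acc / len(frames)).astype(np.uint8)

cv2.imwrite("stacked.jpg", align_and_stack([f"shot_{i}.jpg" for i in range(8)]))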

Most phones also use this method and benefit from the natural sway of your hand when holding the phone: they take a burst of photos, stack them, and average out all the issues. That also seems to be how HDR is done, but I think that requires some more smarts and, like you mentioned, adjusting the exposure for each still.

3 Likes