To avoid derailing the original topic further, I've created a new thread.
@reC, I agree with almost everything you wrote there.
Boy, would I love to be able to hack my Pentax DSLR. FOSS firmware etc.!
Just one thing I disagree with, and it got me thinking:
Yuck! Please don’t say that.
AI is 90% marketing blah; the other 10% is scary.
Machine learning might be used to optimize the post-processing algorithms, but I assure you, you can get a very long way using "classic" algorithms to combine and enhance the data from several sensors, and we have sufficiently smart FOSS people to do that without a billion-dollar budget.
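To illustrate what I mean by "classic": plain frame averaging already buys you a lot. Here's a minimal sketch with numpy on synthetic data (the scene, noise level, and frame count are all made up for the demo; this is not any camera's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demo: stack several noisy exposures of the same scene.
# Averaging N independent frames reduces sensor noise by about sqrt(N),
# a purely "classic" technique with no learned components.
truth = rng.uniform(0.0, 1.0, size=(32, 32))           # ideal scene
frames = [truth + rng.normal(0.0, 0.1, truth.shape)    # noisy exposures
          for _ in range(16)]

stacked = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - truth)
noise_stacked = np.std(stacked - truth)
print(f"single-frame noise: {noise_single:.3f}")
print(f"stacked noise:      {noise_stacked:.3f}")
```

No hallucinated detail anywhere: every output pixel is a plain average of measured data.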
And everything that goes beyond that is something I don't even want in a camera. I don't want some neural network to hallucinate details into my sensor data, no matter how much better than reality it might look.
BTW, the code the team around Dr. Katie Bouman and Andrew Chael created to image a black hole is released under the GPL. So, if you really want to go to the limit and beyond, just adapt that code for the intended purpose.
(And yes, I know that the electromagnetic waves they processed were not in the visual spectrum. The code doesn't care.)
While we don't have multiple sensors spread over a rotating globe, we do have one image sensor plus a gyro sensor. And a shaky hand.
Sounds like a real fun project to achieve both stabilization and image enhancement this way.
My aforementioned Pentax can already do something similar: it shifts its sensor by one pixel, takes a second picture, and combines the two. Et voilà.
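The core idea behind that pixel-shift trick (and behind gyro-assisted stacking) is "shift and add": if you know the per-frame offset, whether from a deliberate sensor shift or from gyro data, you can move each frame back onto a common grid and average. Here's a toy sketch in numpy, assuming synthetic data and known integer-pixel shifts with wrap-around (`np.roll`) for simplicity; real pipelines estimate subpixel shifts and handle edges properly:

```python
import numpy as np

rng = np.random.default_rng(1)

scene = rng.uniform(0.0, 1.0, size=(16, 16))
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # known per-frame offsets in pixels

# Simulate shifted, noisy captures of the same scene.
frames = [np.roll(scene, s, axis=(0, 1)) + rng.normal(0.0, 0.05, scene.shape)
          for s in shifts]

# Undo each known shift, then average: classic, no neural network involved.
aligned = [np.roll(f, (-s[0], -s[1]), axis=(0, 1)) for f, s in zip(frames, shifts)]
combined = np.mean(aligned, axis=0)

err_single = np.std(aligned[0] - scene)
err_combined = np.std(combined - scene)
print(f"single-frame error: {err_single:.3f}")
print(f"combined error:     {err_combined:.3f}")
```

Swap the hard-coded shift list for offsets integrated from the gyro and you have the skeleton of handheld stabilization plus noise reduction in one pass.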