What is the state of video recording on Librem?

It’s been nice to really see this project make so much progress.

I was wondering what the current state of recording video is? I couldn’t find any demos.

Specifically, I am interested in whether the Librem 5 has good audio/video sync and frame rate. I’ve had issues recording video on my Linux laptop where I get bad sync because the processor is not prioritizing the video. Will the Librem have “real-time” capture similar to Android devices?

4 Likes

Currently, I believe they have only gotten phone calls partly working, and even those aren’t great yet, while the camera doesn’t work at all, so this still seems a long way off. Speaking as someone who is practically never optimistic, I think video will have good audio/video sync, terrible camera quality, a low but not horrific frame rate, poor audio quality, and possibly bad echo on video calls.

If you are not satisfied with the advice here on the forums, or you would like a demo, try emailing the Purism team and simply asking for one. If you think an official demo might be biased, try contacting L5 owners and asking them to record a demo video once camera support is added in a software update. If no L5 owner here wants to do that, look up people who have done testing with the L5 on YouTube and ask them there or on their other social media.

edit: it is info@puri.sm for general questions, and other addresses are listed in this thread: How to properly send emails to Purism

2 Likes

The cameras are not yet supported in the Librem 5. Watch these bug reports to see when they are supported: Support front phone camera and Support rear phone camera.

If I remember correctly, there was a Linux kernel commit to support the front camera, but I can’t find it, so either I’m not searching in the right place or I’m misremembering.

5 Likes

What you are probably seeing is the hardware video encoding provided by the Snapdragon, Exynos or MediaTek Helio. Unfortunately, the i.MX 8M Quad in the Librem 5 doesn’t offer hardware video encoding, nor does it have an image signal processor. It is hard to say what it will do without testing, but I wouldn’t expect anything special.

The i.MX 8M Plus in the Fir batch will do better, since it has an image signal processor, and there is a sample application that records 4K at 30 fps.

3 Likes

One more thing on the image signal processor (ISP) in the Fir batch. The i.MX 8M Plus fact sheet says “Dual Camera ISP (2x HD/1x 12MP) HDR, dewarp” and “Up to 375 MP/s”. Usually “2x HD” just means two 720p video streams, but the fact that they got 4K video at 30fps to work tells me that the ISP is more powerful than what NXP is listing in its spec sheet.

Another question is whether we will have free/open drivers to use the hardware encoders for H.264 and H.265 1080p video in the Plus. It is taking quite a while to get support in mainline Linux for the Hantro H1/H2 video decoders in the i.MX 8M Quad, so I wouldn’t be surprised if we have the same problem with the hardware video encoders in the Plus.

What we know is that video encoding on the Librem 5 Evergreen will have to be done in software. Software video encoding is usually done on the CPU, and more CPU cores help, but the four Cortex-A53 cores in the i.MX 8M Quad aren’t very powerful. If some of the encoding work can be offloaded to the GPU, that will definitely help, since the GPU in the i.MX 8M Quad is decent. Either way, software video encoding on Evergreen is going to use up a lot of battery and generate a lot of heat.

Hopefully we will get an option for raw video recording. It would take up a lot of space and depend on how fast data can be written to the eMMC or uSD, but it might allow for higher-resolution video, because the encoding can be done later. It looks like we are getting Samsung image sensors, but we still don’t know the particular model. Once we know the model, we can investigate whether its mainline Linux driver provides an option for raw video recording.
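For a sense of scale, here is a quick back-of-the-envelope data-rate calculation. The sensor mode (1080p), 10-bit packed Bayer format, and 30 fps are all assumptions for illustration, since the actual sensor model isn’t known:

```python
# Rough data-rate estimate for raw (unencoded) Bayer video.
# All figures are assumptions for illustration; the actual
# Librem 5 sensor mode and storage speed may differ.

width, height = 1920, 1080      # hypothetical 1080p sensor mode
bits_per_pixel = 10             # 10-bit packed Bayer
fps = 30

bytes_per_frame = width * height * bits_per_pixel / 8
rate_mb_s = bytes_per_frame * fps / 1e6
print(f"{rate_mb_s:.0f} MB/s")                       # ~78 MB/s of raw data

# One minute of footage:
print(f"{rate_mb_s * 60 / 1000:.1f} GB per minute")  # ~4.7 GB
```

That rate is near the sustained sequential write limit of typical eMMC parts, and far above what a uSD card manages, which is why raw recording depends so heavily on storage speed.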

4 Likes

Wow, thank you for the well researched responses. I’m pretty comfortable with Linux, but this hardware topic is way more complex than I imagined. It sounds like there are multiple ways to achieve the same result (battery life aside); hopefully a good solution develops. I would love to drop some money on one of these devices, but I guess I’ll have to wait until after another hardware update.

1 Like

OK, since this thread covers the relevant topic: a small detail, but one that hasn’t been mentioned here before (it may not be available to everyone — I use byz backports):

L5 now has “new” feature: video recording! :movie_camera:

Just noticed that Millipixels (0.22.0-1pureos3) has at some point added a little “Record” button next to the normal camera shutter button. Shame on you if you’ve noticed this and not notified the rest of us who didn’t know to look for it. How timely for the Oscars :wink:

You get videos with it, from both front and back. Apparently the front (selfie) camera gives H.264 MP4 at 612x816, mirrored left-right, and the main back camera resolution is 390x526. So it’s not using the full capabilities yet. And if you want sound, remember to select “Handset - Built-in-audio” in the sound settings. Unfortunately there’s no simple way to change the default recording destination folder from Video to somewhere on the SD card.

Progress! :partying_face:

3 Likes

Yeah sorry, I’ve had video for two or three years

3 Likes

Don’t feel too bad. I just found it surprising that it was never mentioned anywhere

2 Likes

I guess it happened because most people got their device in 2023, when the camera was already fully functional (that was also the year auto focus etc. got implemented).

2 Likes

Just to be clear: yes, it’s been there for a long while (screengrab evidence), but it’s never been mentioned or griped about. This brings to mind two things. Apparently video recording capability is not something that’s used or missed much (either not at all, or because it’s a “work in progress”), maybe? And it’s been like this (the low resolution, for instance) for quite a while, and the limited features haven’t drawn any comments either. I can only remember that every few years there are wishes for video conferences and video calls, etc.

2 Likes

And this is WIP → the API for video calls is not there yet. And the video quality is also not that good (I had lags when trying it in 2023).

2 Likes

I can only speak for myself but I sort of assumed that, with the early stage of development of still images, it wasn’t worth testing out video. Maybe that was a bad assumption on my part. For sure still images are much more important to me than video.

2 Likes

Interesting musings by Pavel Machek:

Librem 5 camera/kernel can do three possible resolutions, ~1024x768 @ ~24fps, ~2048x… @ ~31 fps and ~4096x… @ ~15fps. Debayering is actually easier and better quality if we downscale at the same time, and that allows best framerate, so we do that (2048x… resolution).

ARM has problems with cache coherency w.r.t. DMA, and kernel solution is to simply disable cache on DMAbufs for userspace, which means accessing video data is 10x slower than it should be on the CPU. Which means debayering on GPU is attractive, and that’s what we do. (gold.frag). GPU can do more image signal processing functions easily, too, so we do some of that.
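The combined downscale-and-debayer step described above can be sketched on the CPU with NumPy. This is a toy illustration assuming an RGGB mosaic layout, not the actual gold.frag GPU shader:

```python
import numpy as np

def debayer_downscale(raw):
    """Demosaic an RGGB Bayer frame by collapsing each 2x2 cell
    into one RGB pixel, halving resolution in the process.
    Downscaling while debayering avoids interpolation artifacts."""
    r = raw[0::2, 0::2]                            # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average the two green sites
    b = raw[1::2, 1::2]                            # blue sites
    return np.dstack([r, g, b])

raw = np.arange(16, dtype=np.float32).reshape(4, 4)
rgb = debayer_downscale(raw)
print(rgb.shape)   # (2, 2, 3): half-resolution RGB
```

On the phone the same arithmetic runs per-pixel on the GPU, which sidesteps the uncached-DMA-buffer slowdown on the CPU side.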

Unfortunately, we hit the same uncached memory problem at the GPU output. So we use separate thread to copy. All this unfortunately does not fit on one core, so we need two threads, one controlling GPU debayer on frame n+1, while the other one copies video data from frame n. (heart.c). We save resulting RGBA data to ramdisk. This all costs maybe 80% of one core.

From there, Python scripts can pick them up: ucam.py displaying the viewfinder and mpegize.py handling the video encoding via gstreamer. There’s basically 0% cpu left, but I can encode ~1024x… video. Unfortunately that’s without audio and with viewfinder at 1fps. Plus, combination of C + Python is great for prototyping, but may not be that great for performance.

Code is here: icam · master · tui / Tui · GitLab .

At this point I’d like viewfinder functionality merged into the rest of the GPU processing. Ideally, I’d like to have a bitmap with GUI elements, combine it with the scaled RGBA data, and render it to the screen. I know SDL and Gtk; SDL looked like the better match, but I could not get SDL and GPU debayering to work in a single process (template SDL code is here sdl/main.c · master · tui / debayer-gpu · GitLab ).

If you can integrate main.c and heart.c, that would be welcome. If you have example code that combines SDL with processing on GPU, that would be nice, too. If you know someone who can do GPU/SDL, boost would not be bad, I guess.

1 Like

When it comes to the big cam, currently we can stream 1052x780 at 120 FPS, 2104x1560 at 60 FPS and 4208x3120 at either 20 FPS in 8-bit mode or 15 FPS in 10-bit mode. Once we get the camera to work with 4 MIPI lanes instead of 2 (as it is right now), it will be able to stream at 30 FPS in full resolution (it’s currently limited by the available bandwidth on two lanes). It’s not working in this configuration yet and I don’t know why.
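A rough bandwidth check is consistent with that explanation. The per-lane link rate of ~1.5 Gbit/s is an assumption for illustration (the actual rate isn’t stated here), and protocol overhead is ignored:

```python
# Raw sensor bandwidth vs. MIPI CSI-2 lane capacity (rough check).
# The ~1.5 Gbit/s per-lane figure is an assumption, and CSI-2
# packet overhead is ignored.

def gbit_s(w, h, fps, bits):
    return w * h * fps * bits / 1e9

lane = 1.5  # assumed Gbit/s per lane

full_10bit_30fps = gbit_s(4208, 3120, 30, 10)   # ~3.9 Gbit/s
full_10bit_15fps = gbit_s(4208, 3120, 15, 10)   # ~2.0 Gbit/s

print(full_10bit_30fps <= 2 * lane)  # False: doesn't fit on 2 lanes
print(full_10bit_30fps <= 4 * lane)  # True: fits on 4 lanes
print(full_10bit_15fps <= 2 * lane)  # True: hence 15 FPS today
```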

That’s about the raw data from the sensor. Since we don’t have a hardware ISP, we need to process it manually to get an RGB image. We can use the GPU for that.

I have a shader that implements demosaicing (by taking 2x2 blocks, optionally averaging across 4x4 blocks), vignetting correction, highlight clipping, applies color calibration matrix, white balance and filmic curve, corrects lens distortion and applies slight denoising and sharpening. It is currently able to output 526x360 stream at about 35 FPS, regardless of input frame size. To reach higher resolutions and/or framerates it will have to be optimized; GPU clock could potentially be boosted too, as it can work at 1GHz (under higher voltage though), but we keep it at 800MHz.
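The order of operations in such a shader can be sketched in NumPy. This is a simplified illustration with made-up calibration values (white-balance gains, color matrix), not the actual shader or real Librem 5 calibration data:

```python
import numpy as np

def process(rgb, wb_gains, ccm):
    """Simplified ISP chain: white balance, color correction matrix,
    then a smoothstep-style 'filmic' tone curve. Calibration values
    are placeholders, not real camera calibration data."""
    x = np.clip(rgb * wb_gains, 0.0, 1.0)   # per-channel white balance
    x = np.clip(x @ ccm.T, 0.0, 1.0)        # color correction matrix
    return x * x * (3.0 - 2.0 * x)          # smoothstep tone curve

wb = np.array([1.8, 1.0, 1.5])              # made-up channel gains
ccm = np.array([[ 1.6, -0.4, -0.2],         # rows sum to 1.0 so
                [-0.3,  1.5, -0.2],         # grays stay neutral
                [-0.1, -0.5,  1.6]])
pixel = np.array([[0.2, 0.3, 0.1]])
print(process(pixel, wb, ccm))
```

The real shader also does demosaicing, vignetting correction, lens-distortion correction, denoising and sharpening in the same pass, which is what makes a single GPU traversal attractive.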

(no autofocus here though)

To implement 3A you need to gather some statistics. Traversing through 2104x1560 raw data, with subsampling to 1/16th of pixels, to get channel averages, sharpness, center-weighted brightness and to count pixels that are too bright or too dark eats about 15% of CPU with NEON (and we have 4 cores, so max is 400%) - so it’s not a big deal.
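The subsampled statistics pass can be sketched like this. It is a NumPy toy, not the NEON code; subsampling to 1/16th of pixels is taken to mean stepping every 4th pixel in both dimensions:

```python
import numpy as np

def stats(raw, step=4):
    """Gather simple 3A statistics from a subsampled frame:
    mean brightness plus the fraction of clipped-dark and
    clipped-bright pixels. Toy version of a per-frame stats pass."""
    sub = raw[::step, ::step]          # 1/16 of pixels for step=4
    return {
        "mean": float(sub.mean()),
        "too_dark": float((sub < 0.02).mean()),
        "too_bright": float((sub > 0.98).mean()),
    }

frame = np.random.default_rng(0).random((1560, 2104))
s = stats(frame)
print(s["mean"])   # ~0.5 for uniform random data
```

Exposure and white-balance loops then only need these few scalars per frame, which is why the CPU cost stays modest even at full sensor resolution.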

Then there’s encoding. I can encode that 526x360 stream coming out of the GPU at 30 FPS in real time using x264 in ultrafast speed preset. The whole endeavor then eats about 150% of CPU. FWIW, recording a 30 minute video with screen off ate about 15% of battery, so it should be possible to record for above 3 hours on battery (provided that there’s enough disk space available to store the file, it gets big fast if you want acceptable quality on ultrafast preset).
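The battery estimate above checks out arithmetically:

```python
# 30 minutes of recording consumed ~15% of the battery,
# so a full charge should last roughly:
minutes_per_full_charge = 30 / 0.15
print(minutes_per_full_charge / 60)   # ~3.3 hours
```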

The next big step will be to get it to output 1052x780 30 FPS video in real time. That would be the optimal resolution for playing it back on our 720x1440 screen. Right now my pipeline can reach 10 FPS at this resolution (17 FPS without encoding - the difference may come from GStreamer being fed with uncached buffers). We need to be careful with DRAM bandwidth utilization though, as erratum ERR050384 is quite nasty, and the big cam is connected to MIPI CSI2 which is the one that’s more affected.

8 Likes

Two weeks ago I tried to record something and noticed that the audio track was desynchronized from the video track: not just offset at the start, but also drifting in recording speed. Are there plans to fix that (or is it already fixed with the new pipeline)? And does your approach fix the frame issues where 1-2 seconds in between are corrupted when recording a bit longer?

1 Like

I just want to say thanks :pray: to this Fancy Cat for all the help with camera development for the Librem 5.

1 Like

The way Millipixels currently records video is… let’s just say, not great.

When you record video in any way that could pass as reasonably sensible, this problem simply doesn’t happen :wink:

2 Likes

Libcamera is planning a GPU-based ISP, but I’m not sure whether it’s possible on GLES 3 or GLES 3.1.

I’ve been playing with the new Milli version since yesterday. Milli makes the L5 very hot and drains the battery shamelessly.

EDIT: I would like to change the name from Milli to Lumina.
Lumina is a Spanish name meaning “Mastering the Light” or “Light in the Darkness”, but also freedom (libre).

1 Like

Do you have this code? I’m asking because I looked inside Millipixels and libcamera and found that this part is really unoptimized: just CPU routines with huge loops. There is an experimental branch in libcamera to get a GPU-based ISP, but I believe a NEON-based one could also be in demand.

2 Likes