Librem 15v3 + Qubes 4.0 - the good, the bad and the ugly

I got my Librem about three weeks ago and thought I would summarize my experience for the Qubes users here, now that the latest release has been marked stable. Despite the price, I'm pleased overall with the hardware and my setup. Below is my summary of everything from hardware/firmware to OS interaction.

Hardware: Librem 15v3, TPM version + HyperX Kingston DDR4 16GB RAM + a SATA Samsung SSD
BIOS/ME firmware: latest coreboot 4.7-Purism-4, flashed with kakaroto's script (initially with ver. 4.7-Purism-3)
Xen: 4.8.3-series current-testing latest
Linux Kernel (dom0): 4.9.56-21

Initial testing was done on an old 3.2 installation, after which a fresh 4.0-stable install was made.

The Good:

  • the laptop comes with some nice packaging and mini instructions for things like battery care
  • the laptop disassembly is easy, provided one is careful with the screwdriver
  • battery life seems decent to me on all the systems I have tested so far (Qubes 3.2, Qubes 4.0, Linux Mint, PureOS), including during suspend; some people seem to have issues though, so maybe they hit a bad batch of batteries
  • the latest firmware fixes the loud fan of ver. Purism-3; now it only revs up when appropriately under load
  • cooling in general seems right now; the system shows no temperature spikes or unexpected shutdowns
  • no general power issues; suspend has always behaved correctly
  • VT-d/SLAT and TPM function as expected
  • USB, including passthrough and sys-usb, works as expected, but no-strict-reset has to be set on the controller (easy to do from the settings menu; see the CLI sketch after this list) and there is also a regression on 4.14+ kernels for some USB devices (usbip issues, not mass storage) - loading a 4.9 VM kernel in sys-usb resolves the issue (https://github.com/QubesOS/qubes-issues/issues/3628)
  • HDMI works correctly, although 4.4 is the last kernel to pick up modelines right; 4.9+ kernels need manual Xorg settings for hi-res monitors (see the xrandr sketch after this list)
  • the dual-band WiFi card performs well, even with some distance to the router and several walls in between
  • Touchpad (incl. gestures), Camera, Mic, Bluetooth (over USB, proprietary firmware), Headphone Jack, USB-C, SD/MMC, Killswitches, FN keys all work without a hitch
  • HVM and PVH modes work without any issues, and Qubes in general stays responsive under load (video, multiple VMs, disk backups, etc.)
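
For anyone who prefers the command line, here is a minimal sketch of the two USB-related tweaks mentioned above, run from dom0. The controller address 00_14.0 and the kernel version string are examples only; check the qvm-pci output and the kernels actually installed under /var/lib/qubes/vm-kernels on your system.

    # list PCI devices to find the USB controller (something like dom0:00_14.0)
    qvm-pci

    # attach the controller to sys-usb with strict reset disabled
    # (the CLI equivalent of the checkbox in the VM settings GUI)
    qvm-pci attach --persistent --option no-strict-reset=true sys-usb dom0:00_14.0

    # pin sys-usb to a 4.9 VM kernel to dodge the 4.14+ usbip regression
    qvm-prefs sys-usb kernel 4.9.56-21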
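
For the HDMI point, this is roughly what the manual Xorg setup looks like in dom0 (a sketch only: the 2560x1440 resolution and the HDMI-1 output name are assumptions, so check plain xrandr output for the real name and copy the modeline exactly as cvt prints it):

    # generate CVT timings for the desired mode
    cvt 2560 1440 60

    # register the modeline printed by cvt and assign it to the HDMI output
    xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
    xrandr --addmode HDMI-1 "2560x1440_60.00"
    xrandr --output HDMI-1 --mode "2560x1440_60.00"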

The Bad:

  • the PureOS installer that shipped on the Biwin SSD was broken - computers are my job, but normal people would be stuck here, which makes for a bad customer experience
  • the hwclock system sync (qvm-sync-clock uses it to set VM clocks) is broken with Linux 4.14/4.15;
    4.9 and 4.4 behave as expected, with 4.9 being much better due to Intel graphics improvements (a quick check is sketched after this list)
  • SATA is limited to 3 Gbps instead of 6 Gbps because of firmware issues (see kakaroto's posts), which is a bit on the slow side for larger backups even with TRIM; NVMe is an option, though, if that limits you
  • the Ethernet device (it only shows up once it is connected) does not work correctly - tested with a USB-C adapter exposing an RJ45 port on both 4.9 and 4.14 kernels; the errors look like bad USB high-speed driver interactions that will likely get fixed in the future, but if you need this now, expect trouble (https://github.com/QubesOS/qubes-issues/issues/2594)
  • EDIT: I am also occasionally seeing the CPU fan stuck at high speed, which I did not catch originally since the issue is only reproducible now and then after suspend/resume. Another suspend/resume cycle fixes the annoyance, but it's clearly firmware-related
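
To see whether the clock sync is actually working on a given kernel, something like the following can help (a sketch, run from dom0; replace my-appvm with a running VM of yours):

    # compare the hardware clock with the system clock in dom0
    sudo hwclock --show
    date -u

    # trigger a manual sync and check the time inside a VM afterwards
    qvm-sync-clock
    qvm-run --pass-io my-appvm 'date -u'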

The Ugly (not a must, but could be improved):

  • disassembly instructions are quite bare-bones; if you haven't done this kind of thing much, watch a few short guides first so you don't damage some of the connectors (e.g. for the battery)
  • no LED indicator on the laptop means that, to be sure it actually suspends, you'll want to trigger suspend manually and then close the lid, lest it cooks itself from bad ventilation in your bag
  • Qubes 3.2 and 4.0 in general have a slow boot-up time (proportional to the number of boot-time VMs) and an even slower shutdown due to some LUKS/LVM sync issues, on both SSDs tested across two laptops, so it's likely an OS bug I'm hitting
  • PureOS ISOs really should get GPG-signed checksums, with the fingerprint/key available outside the puri.sm servers - or at least be available from more than one location, which would be a good fail-over in case another server outage happens (see the verification sketch after this list)
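
For context, this is the kind of verification a signed checksum file would make possible (a sketch with hypothetical file names; the key fingerprint itself still has to come from a trusted, independent channel):

    # import the signing key obtained out-of-band and inspect its fingerprint
    gpg --import pureos-signing-key.asc
    gpg --fingerprint

    # verify the signature on the checksum file, then the ISO against it
    gpg --verify SHA256SUMS.sig SHA256SUMS
    sha256sum -c SHA256SUMS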

Not tested:

  • NVMe M.2 device

I also commend Purism for building this nice forum, which encourages interaction and gets frequent responses from staff - good job!

Eagerly awaiting Heads support,
Happy customer

9 Likes

I've tested a Samsung Evo 960 NVMe. There are some performance issues: sequential write speed is limited to one third of the expected speed, while read speed seems to be OK.

So, combined with the coreboot issue affecting SATA SSDs, it is not great fun.

1 Like

Did you test the NVMe device in another laptop as well? It would be good to know whether it's Librem- (coreboot?) or Qubes-related (because of the many virtualization layers) and open an issue.

1 Like

Unfortunately I have no other laptop with an NGFF slot, nor a desktop adapter, available. But I could test this for you soon with a notebook and Samsung's ugly Magician software. I guess the NVMe SSD works fine.

I was using PureOS, installed on the SATA SSD, as a reference system and ran the benchmark with the GNOME disk utility. I tested the disk and partition throughput: it shows around 2.7 GB/s average read and a 470 MB/s write rate.

I also used dd on a partition of the NVMe device:

dd if=/dev/zero of=test.img bs=2G count=1 oflag=dsync
2133098496 bytes (2,1 GB, 2,0 GiB) copied, 3,01033 s, 709 MB/s
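
One caveat with that dd invocation (and the same pattern later in this thread): with bs=2G and count=1, dd can do a short read/write and copy less than the full 2 GiB in one go, which is why the byte count above comes in below 2 GiB. A sketch of a run that avoids this by using many smaller blocks and syncing once at the end (note that conv=fsync measures something slightly different from oflag=dsync, which syncs after every block):

    # 32 x 64 MiB = 2 GiB, flushed to disk once at the end
    dd if=/dev/zero of=test.img bs=64M count=32 conv=fsync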

Additionally, I tested this with iozone on the partition:

⟫ iozone -i 0 -t 2 -s 2G -b output.xls
        Iozone: Performance Test of File I/O
                Version $Revision: 3.429 $
                Compiled for 64 bit mode.
                Build: linux-AMD64 

    File size set to 2097152 kB
    Command line used: iozone -i 0 -t 2 -s 2G -b output.xls
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 2 processes
    Each process writes a 2097152 kByte file in 4 kByte records

    Children see throughput for  2 initial writers  = 1124722.81 kB/sec
    Parent sees throughput for  2 initial writers   =  583152.25 kB/sec
    Min throughput per process                      =  556726.50 kB/sec 
    Max throughput per process                      =  567996.31 kB/sec
    Avg throughput per process                      =  562361.41 kB/sec
    Min xfer                                        = 2064668.00 kB

    Children see throughput for  2 rewriters        = 2076728.38 kB/sec
    Parent sees throughput for  2 rewriters         = 1122575.63 kB/sec
    Min throughput per process                      = 1034576.25 kB/sec 
    Max throughput per process                      = 1042152.12 kB/sec
    Avg throughput per process                      = 1038364.19 kB/sec
    Min xfer                                        = 2081976.00 kB

"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 kBytes "
"Output is in kBytes/sec"

"  Initial write " 1124722.81 

"        Rewrite " 2076728.38 

The results are a bit different, but I guess that's not an issue with Qubes. It looks more like an issue with coreboot and/or the Linux kernel and NVMe devices. I will try this again on a laptop with a proprietary UEFI and a Linux live system.

But nevertheless, Qubes runs very smoothly with some tweaks (because of the fan noise). Only the disk performance isn't as expected. There is also still some electronic chirping with an NVMe device, and the touchpad is sometimes audible too.

https://forums.puri.sm/t/received-librem13v2-quite-happy-except-for-electronic-chirping/2609

1 Like

I've tested this with a USB 3.0 to RJ45 adapter (AX88179 Gigabit Ethernet), on a USB 3.0 port and also via the USB-C connector. It looks like both are working now.

Have you found any issues with suspend/resume? I'm having a really hard time getting my Librem 13v2 to reliably get network back on resume.

@quban: Which Qubes release?

Yes, there have been some issues with suspend/resume, but I have only had them with a Realtek chip. I'm not sure if this also happens with the WiFi card, but I guess it's only an issue with USB devices.

For now, suspend and resume with the ASIX chipset works fine. I will keep an eye on it and report if there are any errors.

BTW: these cards also run on Unix.

Can you mention me when you post the results? I am interested in your tests of the SSD performance.

Yes, for sure. I'm also interested. I will try this with different OSes, including Qubes. It will take some time, but as soon as I have the results, I will post them here.

Librem 13v2 running Qubes 4.0 (rc4 upgraded to latest… I'm considering a full reinstall) with stock coreboot 4.6.

A rework is underway.

1 Like

@quban: Yes, better to use the latest Qubes 4.0 release. If you aren't seeing any high battery drain, then you can maybe switch to coreboot 4.7; but I guess that's a coreboot issue.

Yummy - thanks for the detailed report! I am also eagerly awaiting Heads support in Qubes.

The VPN template (AppVM) in Qubes 4.0 is still being refined. I have had great success with tasket's 'Qubes-vpn-support' project on GitHub. It is a great help for those of us who are not quite (yet) command-line jockeys.

I got a Librem 15 last fall, but waited until Qubes 4.0 was final before installing.

Today I finally got around to trying it out, but it failed miserably. The installer complained that I was trying to install on unsupported hardware (what?!), and when I booted into Qubes, I had no networking VMs and the error logs complained about a missing IOMMU(?).

Is there some trick to installing Qubes correctly on the Librem 15? I have Qubes running on a System76 Meerkat as well and never had problems there.

@mathsguy: That sounds like you need the latest coreboot version with IOMMU support; otherwise Qubes 4.0 won't run.

https://puri.sm/posts/qubes4-fully-working-on-librem-laptops/
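
After flashing, a quick way to confirm that the IOMMU is actually visible (a sketch to run in dom0; the exact wording of the Xen message varies between versions):

    # Xen reports the IOMMU status in its boot log
    sudo xl dmesg | grep -i -e iommu -e 'i/o virt'

    # the Qubes hardware report also lists HVM/IOMMU/SLAT support
    qubes-hcl-report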

1 Like

@thib: Here are the results of testing the NVMe device (Samsung SSD 960 EVO 500 GB) on an HP Elite Desktop system (i5) from 2013:

The Samsung Magician benchmark under Windows reports: sequential read 3,204 MB/s and write 1,889 MB/s.
Random IOPS: 253,906 (read) and 307,128 (write).

CrystalDiskMark:

(screenshot of the CrystalDiskMark results)

GNOME disk utility (PureOS):

Ubuntu 16.04 (with 100 MiB samples):

hdparm on ntfs partition:

user@debian:/media/user/Volume$ sudo hdparm -t /dev/nvme0n1p5
/dev/nvme0n1p5:
Timing buffered disk reads: 8192 MB in 3.00 seconds = 2730.31 MB/sec

dd (write) on ext4 partition:

user@debian:/media/user/test$ dd if=/dev/zero of=laptop.bin bs=4G count=1 oflag=direct
2+0 records in
2+0 records out
4294967296 bytes (4.3 GB, **4.0 GiB**) copied, 3.28796 s, **1.3 GB/s**

user@debian:/media/user/test$ dd if=/dev/zero of=laptop.bin bs=2G count=1 oflag=direct
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, **2.0 GiB**) copied, 2.15646 s, **996 MB/s**

iozone on ntfs partition:

user@debian:~/Downloads/iozone3_471/src/current$ ./iozone -s2g -i 0 -i 1 -r2048 -S2048 -t 2 -F /media/user/Volume/test /media/user/Volume/test1

Children see throughput for  2 initial writers 	=  234656.82 kB/sec
Parent sees throughput for  2 initial writers 	=  233724.29 kB/sec
Min throughput per process 			=  117017.51 kB/sec 
Max throughput per process 			=  117639.31 kB/sec
Avg throughput per process 			=  117328.41 kB/sec
Min xfer 					= 2086912.00 kB

Children see throughput for  2 rewriters 	=  286435.89 kB/sec
Parent sees throughput for  2 rewriters 	=  282095.72 kB/sec
Min throughput per process 			=  137359.89 kB/sec 
Max throughput per process 			=  149076.00 kB/sec
Avg throughput per process 			=  143217.95 kB/sec
Min xfer 					= 1933312.00 kB

Children see throughput for  2 readers 		= 10696373.00 kB/sec
Parent sees throughput for  2 readers 		= 10683803.68 kB/sec
Min throughput per process 			= 4928438.50 kB/sec 
Max throughput per process 			= 5767934.50 kB/sec
Avg throughput per process 			= 5348186.50 kB/sec
Min xfer 					= 1792000.00 kB

Children see throughput for 2 re-readers 	= 11676473.00 kB/sec
Parent sees throughput for 2 re-readers 	= 11662806.05 kB/sec
Min throughput per process 			= 5838203.50 kB/sec 
Max throughput per process 			= 5838269.50 kB/sec
Avg throughput per process 			= 5838236.50 kB/sec
Min xfer 					= 2097152.00 kB

iozone on ext4 partition:

user@debian:~/$ iozone -s2g -i 0 -i 1 -r2048 -S2048 -t 2 -F /media/user/test/test /media/user/test/test1

Children see throughput for  2 initial writers 	= 1920047.16 kB/sec
Parent sees throughput for  2 initial writers 	= 1079551.11 kB/sec
Min throughput per process 			=  444537.78 kB/sec 
Max throughput per process 			= 1475509.38 kB/sec
Avg throughput per process 			=  960023.58 kB/sec
Min xfer 					=  632832.00 kB

Children see throughput for  2 rewriters 	= 3784152.88 kB/sec
Parent sees throughput for  2 rewriters 	= 2859785.90 kB/sec
Min throughput per process 			= 1830468.00 kB/sec 
Max throughput per process 			= 1953684.88 kB/sec
Avg throughput per process 			= 1892076.44 kB/sec
Min xfer 					= 1966080.00 kB

Children see throughput for  2 readers 		= 9971123.50 kB/sec
Parent sees throughput for  2 readers 		= 9824818.40 kB/sec
Min throughput per process 			= 4644144.50 kB/sec 
Max throughput per process 			= 5326979.00 kB/sec
Avg throughput per process 			= 4985561.75 kB/sec
Min xfer 					= 1828864.00 kB

Children see throughput for 2 re-readers 	= 11586412.50 kB/sec
Parent sees throughput for 2 re-readers 	= 11399766.63 kB/sec
Min throughput per process 			= 5793158.50 kB/sec 
Max throughput per process 			= 5793254.00 kB/sec
Avg throughput per process 			= 5793206.25 kB/sec
Min xfer 					= 2097152.00 kB    

The results are slightly different, but it seems to be significantly faster than on the Librem 13v3 with the latest coreboot version.

4 Likes

@mathsguy, see what @amanita said; you essentially just need to run a script to upgrade coreboot/libreboot (I forget which it actually is, see amanita's link) in order to enable e.g. networking.

I'm now up to coreboot 4.7 and a fresh install of Qubes 4.0 on my Librem 13v2.

After listing ath9k in sys-net's /rw/config/suspend-module-blacklist, the wireless now comes back from resume 80-90% of the time, up from about 30% on my previous installation. Does anyone else see the wireless not coming back 100% of the time?
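
For anyone trying the same workaround: that file is simply a list of kernel module names, one per line, which sys-net unloads before suspend and reloads on resume. A minimal sketch (ath9k here assumes the Atheros WiFi driver in these Librems):

    # append the module name to the blacklist inside sys-net (one name per line)
    echo ath9k >> /rw/config/suspend-module-blacklist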

It worked fine for me using a Fedora 26 VM, but then I had to do the same thing you did when switching to Fedora 28.
Seems like a driver/userland problem.

For anyone reading this thread who does not already know about that script, it is here.