Librem 15v3 + Qubes 4.0 - the good, the bad and the ugly

@quban: Which Qubes release?

Yes, there have been some issues with suspend/resume, but I have only had them with a Realtek chip. I’m not sure whether this also happens with the WiFi card, but I guess it’s only an issue with USB devices.

For now, suspend and resume with the ASIX chipset work fine. I will keep an eye on it and report any errors.

BTW: These cards also run on Unix.

Can you mention me when you post the results? I am interested in your SSD performance tests.

Yes, for sure. I’m also interested. I will try this with different operating systems, including Qubes. It will take some time, but as soon as I have the results, I will post them here.

Librem 13v2 running Qubes 4.0 (rc4 upgraded to latest… am considering full reinstall) with stock coreboot 4.6

A rework is underway.

@quban: Yes, better to use the latest Qubes 4.0 release. If you don’t have any high battery drain, then you can maybe switch to coreboot 4.7, but I guess that’s a coreboot issue.

Yummy - Thanks for the detailed report! I am also eagerly awaiting Heads support in Qubes.

The VPN template (AppVM) in Qubes 4.0 is still being refined. I have had great success with tasket’s GitHub ‘Qubes-vpn-support’ project. It is a great assistant to those of us who are not quite (yet) command line jockeys.

I got a Librem 15 last fall, but waited until Qubes 4.0 was final before installing.

Today I finally got around to trying it out, but it failed miserably. The installer complained that I was trying to install on unsupported hardware (what?!), and when I booted into Qubes, I had no networking VMs and the error logs complained about a missing IOMMU(?).

Is there some trick to installing Qubes correctly on the librem 15? I have qubes running on a system76 meerkat as well and never had problems there.

@mathsguy: That sounds like you need the latest coreboot version with IOMMU support; otherwise Qubes 4.0 won’t run.

https://puri.sm/posts/qubes4-fully-working-on-librem-laptops/
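
If you want to double-check from dom0 whether the IOMMU is actually available after updating coreboot, something like this should do (just a sketch; the exact wording in the Xen log can differ between versions):

user@dom0:~$ sudo xl dmesg | grep -i -E 'vt-d|iommu'   # Xen's own detection messages from boot
user@dom0:~$ qubes-hcl-report                          # summarizes HVM / I/O MMU support for the HCL

If the IOMMU shows up as missing there, sys-net and sys-usb won’t start, which matches the symptoms above.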

@thib: Here are the results of testing the NVMe device (Samsung SSD 960 EVO 500 GB) in an HP Elite Desktop system (i5) from 2013:

Samsung Magician benchmark reports under Windows: sequential read 3,204 MB/s, sequential write 1,889 MB/s.
Random IOPS: 253,906 (read) and 307,128 (write).

CrystalDiskMark:

Gnome disk-util (PureOS):

Ubuntu 16.04 (with 100 MiB samples):

hdparm on the NTFS partition:

user@debian:/media/user/Volume$ sudo hdparm -t /dev/nvme0n1p5
/dev/nvme0n1p5:
Timing buffered disk reads: 8192 MB in 3.00 seconds = 2730.31 MB/sec

dd (write) on ext4 partition:

user@debian:/media/user/test$ dd if=/dev/zero of=laptop.bin bs=4G count=1 oflag=direct
2+0 records in
2+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 3.28796 s, 1.3 GB/s

user@debian:/media/user/test$ dd if=/dev/zero of=laptop.bin bs=2G count=1 oflag=direct
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 2.15646 s, 996 MB/s
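
For completeness, the read direction can be tested the same way (a sketch; re-reading the file just written, with iflag=direct so the page cache does not distort the number):

user@debian:/media/user/test$ dd if=laptop.bin of=/dev/null bs=1M iflag=direct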

iozone on the NTFS partition:

user@debian:~/Downloads/iozone3_471/src/current$ ./iozone -s2g -i 0 -i 1 -r2048 -S2048 -t 2 -F /media/user/Volume/test /media/user/Volume/test1

Children see throughput for  2 initial writers 	=  234656.82 kB/sec
Parent sees throughput for  2 initial writers 	=  233724.29 kB/sec
Min throughput per process 			=  117017.51 kB/sec 
Max throughput per process 			=  117639.31 kB/sec
Avg throughput per process 			=  117328.41 kB/sec
Min xfer 					= 2086912.00 kB

Children see throughput for  2 rewriters 	=  286435.89 kB/sec
Parent sees throughput for  2 rewriters 	=  282095.72 kB/sec
Min throughput per process 			=  137359.89 kB/sec 
Max throughput per process 			=  149076.00 kB/sec
Avg throughput per process 			=  143217.95 kB/sec
Min xfer 					= 1933312.00 kB

Children see throughput for  2 readers 		= 10696373.00 kB/sec
Parent sees throughput for  2 readers 		= 10683803.68 kB/sec
Min throughput per process 			= 4928438.50 kB/sec 
Max throughput per process 			= 5767934.50 kB/sec
Avg throughput per process 			= 5348186.50 kB/sec
Min xfer 					= 1792000.00 kB

Children see throughput for 2 re-readers 	= 11676473.00 kB/sec
Parent sees throughput for 2 re-readers 	= 11662806.05 kB/sec
Min throughput per process 			= 5838203.50 kB/sec 
Max throughput per process 			= 5838269.50 kB/sec
Avg throughput per process 			= 5838236.50 kB/sec
Min xfer 					= 2097152.00 kB

iozone on ext4 partition:

user@debian:~/$ iozone -s2g -i 0 -i 1 -r2048 -S2048 -t 2 -F /media/user/test/test /media/user/test/test1

Children see throughput for  2 initial writers 	= 1920047.16 kB/sec
Parent sees throughput for  2 initial writers 	= 1079551.11 kB/sec
Min throughput per process 			=  444537.78 kB/sec 
Max throughput per process 			= 1475509.38 kB/sec
Avg throughput per process 			=  960023.58 kB/sec
Min xfer 					=  632832.00 kB

Children see throughput for  2 rewriters 	= 3784152.88 kB/sec
Parent sees throughput for  2 rewriters 	= 2859785.90 kB/sec
Min throughput per process 			= 1830468.00 kB/sec 
Max throughput per process 			= 1953684.88 kB/sec
Avg throughput per process 			= 1892076.44 kB/sec
Min xfer 					= 1966080.00 kB

Children see throughput for  2 readers 		= 9971123.50 kB/sec
Parent sees throughput for  2 readers 		= 9824818.40 kB/sec
Min throughput per process 			= 4644144.50 kB/sec 
Max throughput per process 			= 5326979.00 kB/sec
Avg throughput per process 			= 4985561.75 kB/sec
Min xfer 					= 1828864.00 kB

Children see throughput for 2 re-readers 	= 11586412.50 kB/sec
Parent sees throughput for 2 re-readers 	= 11399766.63 kB/sec
Min throughput per process 			= 5793158.50 kB/sec 
Max throughput per process 			= 5793254.00 kB/sec
Avg throughput per process 			= 5793206.25 kB/sec
Min xfer 					= 2097152.00 kB    

The results are slightly different, but it seems to be significantly faster than on the Librem 13v3 with the latest coreboot version.

@mathsguy, see what @amanita said; you just need to run a script, essentially, to upgrade coreboot/libreboot (I forget which it actually is, see amanita’s link) in order to enable e.g. networking.

I’m now up to coreboot 4.7 and a fresh install of qubes 4.0 on my librem13v2.

After listing ath9k in sys-net’s /rw/config/suspend-module-blacklist, it now comes back from resume 80-90% of the time, up from about 30% on my previous installation. Does anyone else see the wireless not coming back 100% of the time?
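
For anyone who wants to do the same, it is a one-liner inside sys-net (a sketch; /rw persists across sys-net restarts, and the listed modules are unloaded before suspend and loaded again on resume):

user@sys-net:~$ echo ath9k | sudo tee -a /rw/config/suspend-module-blacklist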

It all worked fine for me using a Fedora 26 VM, but then I had to do the same as you did when switching to Fedora 28.
Seems like a driver/userland problem.

For anyone reading this thread who does not already know about that script, it is here.

Hi, thanks for the benchmark! What kind of boot times are you getting for this? Cold boot and VM boot times would be wonderful. I’m interested in how much difference the 960 EVO is making.

If you do post a cold boot time, it would also help to know which VMs are being launched on boot (eg. default sys-firewall, sys-usb & sys-net, sys-whonix ?) as I think those are the main contributors to boot time.
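
In case it helps, one way to see which VMs are set to start on boot (run in dom0; just a sketch looping over qvm-ls and qvm-prefs):

user@dom0:~$ for vm in $(qvm-ls --raw-list); do [ "$(qvm-prefs "$vm" autostart 2>/dev/null)" = "True" ] && echo "$vm"; done   # prints every VM with autostart=True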

@temp-accnt-123:

Hi!

I cannot measure exact values because of disk encryption (I have to enter the passphrase). Additionally, there is a boot and GRUB delay.

So the cold boot time to log in to dom0 is approximately 1 minute. I only have sys-firewall and sys-net activated at boot time. Sometimes it varies a bit (from 55 seconds to 1:07 minutes). VM boot times are about 12 to 15 seconds, normally around 13 seconds.

If you need exact values, I would have to change my setup. Do you know of any scripts or dmesg commands to measure it?

dmesg shows a timestamp of 60.705504 after logging in to the desktop (including entering the passphrase and login password). Hopefully that helps!

You can see the difference when copying to or from the Samsung SSD 960 EVO NVMe, or when creating VMs.

Regards

Thank you!

I won’t need exact values, those are great. I’m surprised to see that despite a massive difference in sequential read speeds between your SSD and mine (mine is ~400 MB/s), our boot times aren’t as different as I had expected (we’re booting with the same VMs and both using disk encryption). When I wrote the previous post I had assumed you were using the fast NVMe SSD as a boot drive for Qubes, but maybe not?

I might as well post my times too (averaged over 3 tests, pretty consistent):
Hardware: Thinkpad 13
From power-on to end of grub timeout: 17 seconds
From power-on to desktop: 1 minute 14 seconds
(note: I have autologin so I don’t get the login form before reaching the desktop)
Opening a terminal in a pretty typical, unopened Fedora 28 VM: 12 seconds
If I’m interpreting the dmesg output correctly, dmesg and systemd-analyze blame are both giving 45.3 seconds.
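
For reference, those commands, run in dom0 (a sketch; the exact breakdown depends on the setup, e.g. whether the LUKS passphrase is entered in the initrd):

user@dom0:~$ systemd-analyze                  # overall kernel vs. userspace share of the boot
user@dom0:~$ systemd-analyze blame            # individual units, slowest first
user@dom0:~$ systemd-analyze critical-chain   # the chain of units that actually gated boot completion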

Again, thank you for the tests! I haven’t seen users posting boot times anywhere else.
Regards

@temp-accnt-123:

You are welcome!

had assumed you were using the fast NVMe SSD as a boot drive for Qubes, but maybe not?

Yes, exactly. I am using the Samsung SSD 960 EVO 500 GB NVMe as the boot device for Qubes. That’s why I posted the sequential read and write values in comparison to the same NVMe device in an HP desktop system. It seems to be significantly slower in the Librem 13, and in addition a Samsung SSD 840 PRO (SATA 3.0) only gets half of the maximum SATA speed (3 Gbit/s instead of 6 Gbit/s).
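
(A quick way to confirm the negotiated SATA rate, as a sketch: the kernel logs it when the link comes up.)

user@debian:~$ sudo dmesg | grep -i 'SATA link up'   # reports e.g. 3.0 Gbps or 6.0 Gbps per port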

That’s still a question of mine for @kakaroto! Maybe it needs some optimization in coreboot or the kernel?

But the NVMe SSD is still clearly faster than a SATA SSD in the Librem 13, especially if you back up VMs or copy data to the partitions. The advantage grows once you read or write more than two GB.

I think faster boot times are impossible at the moment. I will post here if it gets better.

@amanita Sorry, but there’s way too much text in this thread for me to read at the moment (too busy), but at a quick glance…

  • If you’re asking why the SATA drive is limited to 3 Gbps, then read this: https://puri.sm/posts/coreboot-on-the-skylake-librems-part-2/
  • If you’re asking when we’ll fix SATA so it can use 6 Gbps, then the answer is never, unfortunately. Note that months of work went into this and gave us no way to fix it, so it’s stuck at 3 Gbps for now. That is not a big deal, since 6 Gbps is the theoretical maximum and SATA SSDs rarely reach those speeds anyway. Your own SSD 840 Pro SATA, according to this benchmark, has a max write speed of 365 MB/s (2.92 Gbps) and a read speed of 455 MB/s (3.64 Gbps), so the loss is minimal.
  • If you’re asking why your NVMe is slower on the Librem than on some other machine: no idea. It might be the hardware configuration (NVMe is PCI Express, so maybe an x2 vs. x4 link width is what affects it), it might be a kernel driver, or Windows vs. Linux (your other test was on Windows?), or anything like that. I haven’t seen anything about NVMe in the coreboot config, but maybe I can look and find something. Can you give me a summary of the issue? I got lost with all the numbers you posted before. You could also check what PCIe link the NVMe actually negotiated, as sketched below.
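
(A rough sketch for that check; the PCI address of the NVMe controller differs per machine, the first command finds it:)

user@dom0:~$ lspci | grep -i -E 'non-volatile|nvme'                 # note the controller's address, e.g. 01:00.0
user@dom0:~$ sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'    # capable vs. actually negotiated PCIe speed/width

If LnkSta reports a narrower width or a lower GT/s rate than LnkCap, the slot wiring rather than the SSD is the limit.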

If you have a more specific question, please ask it directly without expecting me to be able to read all 24 messages to understand it (sorry!)

Hi! Are you connected through a fiber PPPoE line to the router?

I would be interested in finding a good way to establish a manual PPPoE connection through the GUI under GNOME 3/Wayland/Weston, instead of typing “sudo pppoeconf” and “sudo pon dsl-provider” in the terminal every time I set up and want to connect.
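
If NetworkManager is managing the interface, one option is to create the PPPoE connection once with nmcli; it should then show up in the GNOME network menu like any other connection (a sketch, with the interface name, connection name and credentials as placeholders):

user@pureos:~$ nmcli connection add type pppoe ifname enp1s0 con-name my-dsl pppoe.username 'user@isp.example'
user@pureos:~$ nmcli connection modify my-dsl pppoe.password 'secret'
user@pureos:~$ nmcli connection up my-dsl      # afterwards it can also be brought up/down from the GUI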
