New drive partitioning advice?

Can you help with up to date docs and info?

I might just need a good resource for modern boot-sequence tooling and options. This info space is a mess of possibly-outdated (maybe? (!?)) writing. Please say “go read this link” (with, you know, useful stuff at the other end), because it will really help me.
I’m looking to accomplish the following on an upgraded (1G SSD) drive in a L13v2:

  • fresh PureOS install
  • other OS boots from this drive to play with (PureOS/byzantium, Tails, Qubes, Kali, etc.; I’m booting them from USB now)
  • separate /home, /usr partitions (/usr mostly because database work data needs it) - I really just want my dotfiles to port so I can work efficiently across OSes.
  • encrypted (all the things)
  • PureBoot w/ Librem Key LUKS unlock

currently :
Up-to-date standard OEM LUKS install, coreboot from here (from source)
OEM ISO, ready to go
LIVE ISO, ready to go

I’m still learning, but enthusiastic and (mostly) proficient, with admittedly sizeable knowledge gaps here. I’m frustrated because I want current, forward-ready solutions, and I’m having a hard time locating explanations that go beyond the GUI or the default selections. I’ll learn ALL the history if I have to, but, really?


  1. I do not understand the difference between Live and OEM. Please tell me this. Please don’t say “OEM is the factory install.” My face will go sad.
  2. is swap even a thing on 16G of RAM?
  3. I need to learn the best way to partition the drive for the above - including position, FS, flags, and stuff I don’t know yet. Preferably some info that’s not really old, or for low memory / dinky-drive systems. (MBR is outdated? my current install is MBR?)
  4. is a /boot partition cool or lame? confused. see rant from 3) encrypted or not?
  5. I’m finding there’s more info I need to ignore because it’s the future now, and that’s frustrating. Not interested in info about Windows, BIOS/UEFI, FAT/NTFS/ext#, or the way things were back in the day with 512M Linux systems. What’s the toolset for a modern *nix-only system - mine? I get it: I don’t understand some really basic things here, but 2005 (or no-date) explanations are untrustworthy.

Thanks for reading. I love you. Please help.

  1. Live is a universal distribution pre-set to run from CD/DVD (it may or may not offer the possibility of installing from that CD). Usually it has some sort of pre-packaged runtime/rootfs with an overlay backed by a RAM disk (to be able to modify FS content). It’s also pre-configured to be generic and to contain many, many possible drivers, hardware probes, etc., which can make it a bit bloated.
    OEM, on the other hand, is usually pre-configured for certain hardware (unlike Live). OEM is primarily focused on providing installation media and may or may not also provide Live functionality.

  2. for 16G I’d recommend not setting up any swap at all; the only real reason to add swap is hibernation.

  3. Unless you use some relic or corner-case FS, most modern mkfs utilities have optimized defaults and are able to recognize SSD/HDD and adjust accordingly. Same for mount options. Just review the options set by the installer/mkfs, and then read the man pages for the corresponding commands (e.g. mkfs.ext4, mount) to see what they did and why.

  4. Cool (even though I rarely use it, it has its merits, so I go for it when I need them)

  5. UEFI is actually the contemporary thing, so don’t ignore it. MBR: yes, the past, but still used in many cases (e.g. the Librem 5 phone). exFAT is used widely on removable media, and ext4 is still the default FS for the majority of distributions. I don’t see a point in ignoring all that. Btrfs was the bleeding edge for a while, but it has bled for so long that everyone now considers it more of a bloody edge. Many Linux subsystems from 2005 are still valid (e.g. dm, LVM) and widely used.
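The advice in 3 about reviewing installer/mkfs defaults can be tried safely without touching a real disk: mkfs works fine on a plain file. A minimal sketch (the path and size are just examples):

```shell
# Format a throwaway image with pure defaults, then inspect what they chose.
truncate -s 256M /tmp/demo.img        # sparse 256 MB "disk"
mkfs.ext4 -F -q /tmp/demo.img         # -F: it's a file, not a block device
# Review the decisions the defaults made (features, block size, etc.):
dumpe2fs -h /tmp/demo.img | grep -Ei 'features|block size'
```

The same trick works for other filesystems; compare the output against the man page to see why each feature was enabled.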

Good luck.


Thanks, I get OEM now.

  1. is still pretty dense and I guess I’ll focus there for now

Maybe @jeremiah :heart: can point me to some Purism-specific docs or discussions about multiboot formatting, installer default formatting, why BIOS, the role of MBR in the PureOS boot process, and how PureBoot relates to / complicates all this… I’m really just trying not to have to manage thumb drives to play in other OSes (including /byzantium). It seems bizarrely complex right now, but I just need some focused reading to get on track.

Live and OEM are not mutually exclusive. A live system is basically the distro installed to a portable device (a CD or USB stick, usually). Often a fair amount of effort goes into shrinking the size, and certain steps are taken to account for CDs being read-only. Also, often (but not always) a live system includes an installer that writes something like the live system to another drive. What actually gets written to the other drive is slightly different (again, to account for the RO nature of CDs).

There are also bootable non-live systems, called minimal installers. These are usually little more than Bash, the package installer, disk utilities, and network drivers. They usually can’t launch a graphical shell, or do much beyond install the OS. This saves on space, and can be automated more easily than a live system, but cannot be used for anything useful without installing the system to another device.

Both live and minimal installers can be OEM or non-OEM. An OEM installer is designed to install the system for another person. The idea is you use the OEM installer, maybe do a bit of diagnostic, then shut the computer down, box it and ship it. When the end user first powers it on, the customization steps are still needed, but should be automagically done by the first-start program. This means the end user will be asked for name, username, password, possibly other information. The first-start program will then uninstall itself and usually reboot, leaving the system in the same state a non-OEM installer would be at first boot. The non-OEM installers, meanwhile, usually ask for all that information during the install, and skip the extra boot cycle. Of course, lots of distros only provide a non-OEM live installer and an OEM minimal installer, as the demand for OEM live installers and non-OEM minimal installers is relatively low.

Unless you regularly do things that consume large amounts of RAM, don’t use swap. Performance degrades terribly with swap, often leading to softlocks. If you want hibernation, set your hibernation script up to turn swap on only when hibernating, and off again on resume. That said, if you’re doing molecular dynamics or computational fluid dynamics, swap is a thing at even 128G of memory, but it takes careful setup to make it behave.

Three and Four
Sharing /usr between installs is a poor choice. It’s quite likely to break all sorts of things in possibly subtle ways. Putting /usr/local on its own is acceptable, as that only gets things you install yourself. You should be able to tell any database worth using where to store its files and can put it someplace smart.
That aside, I can give you general advice, but it’s best to figure out exactly what you want and understand how and why it works rather than just blindly following someone else’s advice. Four first. Boot partitions are needed when they are needed, and not otherwise. If you want to UEFI boot, the boot files must be on a FAT32 or FAT16 partition with the boot flag set. They cannot be encrypted, but they can be signed, with the signature verified by the UEFI system. You can put a bootloader like GRUB or systemd-boot on the FAT32 partition and your real boot files (the kernel) on another, encrypted partition, or you can put the kernel and associated files on the boot partition. I usually do the latter, but that is because I usually skip GRUB.
Speaking of which, the linux kernel is a valid boot EFI, which means you can tell your UEFI system to start it directly. I don’t have a librem laptop, so I can’t speak to how to get the public key set up to verify the signature, but that should be pretty straightforward.
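Direct kernel boot is usually registered with efibootmgr (the kernel has to be built with EFISTUB support, which distro kernels generally are). A sketch; the device, partition number, label, file names, and cmdline below are all assumptions to adapt to your layout:

```shell
# Register the kernel itself as a UEFI boot entry (EFISTUB boot, no GRUB).
# All names here are hypothetical examples.
CMD='efibootmgr -c -d /dev/nvme0n1 -p 1 -L "PureOS direct" -l "\vmlinuz.efi" -u "root=/dev/mapper/root_crypt initrd=\initrd.img"'
if [ -d /sys/firmware/efi ] && [ "$(id -u)" -eq 0 ]; then
  eval "$CMD"             # actually creates the boot entry (root, EFI system)
else
  echo "would run: $CMD"  # dry run when not root or not booted via EFI
fi
```

Note the backslashes in the `-l`/`-u` paths: they are paths on the ESP in EFI notation, not Unix paths.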
As for MBR or GPT partition tables: UEFI needs GPT, and I suspect the Librem Key system does too. An MBR is still a decent idea to install; put a simple copy of GRUB at the head of the disk, as it gives you some recovery options for if you slag your boot partition. If you have 4 or fewer partitions, a hybrid MBR/GPT is trivial and safe (I recommend putting the head of the first partition about 4 MB in, so you have plenty of space to write a healthy GRUB install to the front of the device if you want it). That said, if you have an external boot device you can use for recovery, you really don’t need the MBR at all.
And this gets to where I can only tell you what I do and a bit of why, as it really depends on your intended use case.

  1. I do not use a partition for swap. I do have a swapfile, but not a standard one (ask if you want help setting up zram for serious swapping).
  2. I do use a boot partition, vfat, 1GB since I’ve got a huge drive, but anything bigger than 128MB is comfortable (figure 25MB per kernel and you want to be able to have at least 2, preferably 3).
  3. Everything else goes on a single BTRFS partition.

BTRFS is stable and ready for everyday use, mostly*. I put /home in its own subvolume (actually a subvolume per user). My actual root filesystem looks something like

  • /__pureos
  • /__home/user1
  • /__home/user2
  • /__local

I set __pureos as the default subvolume, and it has the usual filesystem layout you’d expect for a Linux system. Depending on the system, /var might get its own subvolume too. __home/userN gets mounted on /home/userN, and __local gets mounted on /usr/local. If I want to play around with different distros, they’d get their own top-level subvolume. If you put your database files in their own subvolume, they can be shared between distros, as can the home folders and __local. kexec can chainboot into them, and you can set which subvolume to load at the same time by passing --append="rootflags=subvol=/__mint" or similar.
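To make that layout concrete, the mounts can be expressed in /etc/fstab; a sketch, where `<fs-uuid>` is a placeholder for your filesystem’s UUID and the subvolume names come from the list above:

```
UUID=<fs-uuid>  /            btrfs  defaults,subvol=/__pureos      0 0
UUID=<fs-uuid>  /home/user1  btrfs  defaults,subvol=/__home/user1  0 0
UUID=<fs-uuid>  /usr/local   btrfs  defaults,subvol=/__local       0 0
```

All three lines point at the same partition; the subvol= option picks which subtree lands where.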

Here’s the first big feature for BTRFS. It’s a copy-on-write filesystem, which means before you start your system upgrades which might break something, you can make a snapshot of the current active distro’s subvolume, and roll back to that snapshot if things go badly. Also, it is incredibly space efficient, as you can compress files which are seldom used, and de-duplicate files between distros.

(*) It’s a little rougher around the edges than EXT4, but the advanced features well offset the slight increase in fiddly recovery if something bad does happen. In the last 4 years or so, the only time I’ve had an issue with it that wasn’t related to a physically failing drive was a strange interaction between KVM, Qemu, iommus, and PCI passthrough. Bottom line is the information online saying it’s not ready is part of that outdated information you complained about.


As there are cases where swap is nice even with 16GB of RAM, e.g. for hibernation, I typically set up swap but tune the swappiness knob so it is only used when absolutely necessary.
For hibernation to work you will need 16GB of swap, though. Swap can also be set up as regular files rather than dedicated partitions, which makes it easier to change the size after installation.
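Setting up a swap file takes three commands; a sketch using a throwaway path and size (a real one would live somewhere like /swap/swapfile and, for hibernation, match your RAM). On btrfs the file additionally needs CoW disabled (chattr +C on an empty file) before filling it:

```shell
# Swap as a regular file instead of a partition.
dd if=/dev/zero of=/tmp/demo.swap bs=1M count=64 status=none  # no sparse files for swap
chmod 600 /tmp/demo.swap      # swap must not be world-readable
mkswap /tmp/demo.swap         # write the swap signature
# then, as root:  swapon /tmp/demo.swap
# resizing later: swapoff, recreate the file at the new size, mkswap, swapon
```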


Counterintuitively, you usually degrade performance less with swappiness set high, because it is when swapping becomes ‘absolutely necessary’ that performance tanks. I’m planning to write a fairly lengthy article on swap, as I’ve recently learned a bunch, but here is the brief version of how to set up swap.

If you want hibernation, create a swapfile at least as large as your main system memory, and leave it off. In your hibernation script, swapoff your other swaps and swapon this swap file during hibernation, and reverse that process on resume. This is not a normal swap, you do not want to use it except for hibernation.

For your real swap, assuming a modern, decently spec’d machine, the performance limiter for swap is the IO speed of your storage device. Reducing the device IO, even at the expense of extra CPU time, will dramatically improve performance. To this end, make sure your kernel is compiled with zram support with backing devices (not zswap or zcache*). zram gives you compressed block devices stored in memory, with an optional backing block device for when the ramdisk fills up. Note that it requires a whole block device, not a file, for the backing store. You can use a file and loopback it to /dev/loopN, and that works just fine (the IO speed loss from the loopback system is more than offset by writing only 30-50% of the total data out this way).

It’s not enough to just set the global swappiness, as that treats all processes equally, including the UI (which we always want to remain responsive). In fact, we often want to keep disk cache for the files we actually are likely to access, and swap the pages from programs we aren’t. To do this, I set vm.swappiness to 100, often systemwide, sometimes only for select cgroups.
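Persisting the systemwide value is one file in the standard sysctl.d layout (the filename is arbitrary):

```
# /etc/sysctl.d/99-swap.conf
vm.swappiness = 100
```

Apply it immediately with `sysctl --system` (as root), or just reboot.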

Which brings us to the subsystem that actually lets linux swap be useful and perform well: cgroups. Specifically the v2 interface that lets you set per process memory rules without interfering with other rules (which means it can integrate with ‘containerized’ systems). If you know what your likely memory hog is going to be, you can put it in its own cgroup, and limit its memory to something reasonable (however much memory it is likely to need at a time in its working set). You can set a soft limit which makes the kernel start swapping idle pages in the background with nearly 0 impact on performance, and it doesn’t affect other processes even when the processes in this cgroup hit the hard limit.
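If the memory hog runs as a systemd service, the cgroup v2 soft and hard limits can be set declaratively in a unit drop-in. A sketch; the service name and all numbers are made up for illustration:

```ini
# /etc/systemd/system/bigjob.service.d/memory.conf  (hypothetical service)
[Service]
MemoryHigh=8G       # soft limit: kernel starts swapping this cgroup's idle pages here
MemoryMax=10G       # hard limit: OOM happens inside this cgroup only
MemorySwapMax=64G   # let it swap generously instead of dying
```

These are standard systemd.resource-control directives; `systemd-run -p MemoryHigh=8G …` does the same thing for a one-off command.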

As for the system setup: on a system with 16G of memory, I globally restrict memory to N G (N=12 is a good choice). I then create a zram device with a maximum used-memory size of (16 - N) G (so 4 in the above case). For a virtual size, I usually specify something large, like 128G. ZRam usually manages a 2:1 or 3:1 compression ratio, so we can expect that 4G of space to hold 8G of pages before touching the backing device. The backing device then needs to be the virtual size / 2, minus those 8G: 128/2 - 8 = 56G. In practice, you could probably use 40G and still get 128G of swapped pages, but that will depend on how compressible your dataset is. With this style of setup, I’ve continued to use my main workstation without noticing any performance degradation while it was swapping about 100G of data out and sitting happy with 8GB free. The CFD program only got to have 2G of memory for scratch space.
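As a quick check, the sizing arithmetic from that setup, with the conservative 2:1 ratio (the formula gives 56G for the backing device):

```shell
# Sizing a zram swap device with a backing store, numbers from the post above.
TOTAL=16                            # GB of physical RAM
LIMIT=12                            # N: global memory restriction
ZRAM_MEM=$((TOTAL - LIMIT))         # RAM the zram device may consume -> 4
VIRT=128                            # advertised (virtual) zram size
RATIO=2                             # conservative compression ratio
IN_RAM=$((ZRAM_MEM * RATIO))        # pages held compressed in RAM -> 8
BACKING=$((VIRT / RATIO - IN_RAM))  # backing device size needed -> 56
echo "zram uses ${ZRAM_MEM}G RAM, holds ~${IN_RAM}G of pages, backing=${BACKING}G"
```

At 3:1 the same formula gives about 128/3 - 12 ≈ 31G, which is why 40G of backing store is plenty if your data compresses well.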

(*) ZCache is deprecated. ZSwap is fine (simpler to setup than zram), except when it runs out of space it evicts the oldest page, which then gets written to disk uncompressed, and can cause nasty races when the pages explode in size.

For an extra-convoluted but quite awesome setup, if you have a graphics card with more dedicated memory than you need (say an 8G card, or even a 4G card if you don’t play games), you can actually use a FUSE filesystem and OpenCL to mount a tmpfs on the graphics card, then loopback a file on that tmpfs and pass it to zram as its backing device, and get a neat extra 8-24G of effective high-speed swap out of it (or more like 40G extra if you’re running a Radeon VII).