Live and OEM are not mutually exclusive. A live system is basically the distro installed to a portable device (a CD or USB stick, usually). Often a fair amount of effort goes into shrinking the size, and certain steps are taken to account for CDs being read-only. Often (but not always) a live system also includes an installer that writes something like the live system to another drive; what actually gets written is slightly different (again, to account for the read-only nature of CDs). There are also bootable non-live systems. These are called minimal installers, and they are usually little more than Bash, the package installer, disk utilities, and network drivers. They usually can't launch a graphical shell or do much beyond install the OS. This saves space and can be automated more easily than a live system, but it can't be used for anything useful without installing the system to another device.
Both live and minimal installers can be OEM or non-OEM. An OEM installer is designed to install the system for another person. The idea is that you use the OEM installer, maybe run a few diagnostics, then shut the computer down, box it, and ship it. When the end user first powers it on, the customization steps are still needed, but should be done automagically by the first-start program. This means the end user will be asked for name, username, password, and possibly other information. The first-start program will then uninstall itself and usually reboot, leaving the system in the same state a non-OEM install would be in at first boot. The non-OEM installers, meanwhile, usually ask for all that information during the install and skip the extra boot cycle. Of course, lots of distros only provide a non-OEM live installer and an OEM minimal installer, as the demand for OEM live installers and non-OEM minimal installers is relatively low.
Unless you regularly do things that consume large amounts of RAM, don't use swap. Performance degrades terribly once you're swapping to disk, often leading to soft locks. If you want hibernation, set your hibernation script up to turn swap on only when hibernating, and off again on resume. That said, if you're doing molecular dynamics or computational fluid dynamics, swap is a thing even at 128 GB of memory, but it takes careful setup to make it behave.
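A minimal sketch of that toggle, assuming a pre-allocated swapfile at /swapfile and systemd (both assumptions; adjust to your setup):

```shell
#!/bin/sh
# Hibernate wrapper: swap exists only for the duration of the hibernate.
# /swapfile is a placeholder path; it must already be created and mkswap'd.
set -e
swapon /swapfile        # give the kernel somewhere to write the image
systemctl hibernate     # suspend-to-disk; execution resumes here on wake
swapoff /swapfile       # back on resume: drop swap so it can't hurt you
```

Note that for suspend-to-disk the kernel also needs `resume=` (and, for a swapfile, `resume_offset=`) on its command line; that part is distro-specific.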
Three and Four
Sharing /usr between installs is a poor choice. It's quite likely to break all sorts of things in possibly subtle ways. Putting /usr/local on its own partition is acceptable, as that only gets things you install yourself. You should be able to tell any database worth using where to store its files, so you can put them someplace smart.
That aside, I can give you general advice, but it's best to figure out exactly what you want and understand how and why it works, rather than just blindly following someone else's advice. Four first. Boot partitions are needed when they are needed, and not otherwise. If you want to UEFI boot, the boot files must be on a FAT32 or FAT16 partition with the boot (ESP) flag set. They cannot be encrypted, but they can be signed, with the signature verified by the UEFI firmware. You can put a shim bootloader like GRUB or systemd-boot on the FAT32 partition and your real boot files (the kernel) on another, encrypted partition, or you can put the kernel and associated files on the boot partition itself. I usually do the latter, but that is because I usually skip GRUB.
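As a sketch, creating such a boot partition with parted might look like this (the /dev/sda device and sizes are placeholders; adjust before running):

```shell
# Carve out a FAT32 EFI System Partition on a GPT disk.
parted /dev/sda -- mklabel gpt                   # WARNING: wipes the table
parted /dev/sda -- mkpart ESP fat32 4MiB 516MiB  # leave a gap at the front
parted /dev/sda -- set 1 esp on                  # the "boot flag" for UEFI
mkfs.vfat -F 32 -n EFI /dev/sda1
```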
Speaking of which, a Linux kernel built with CONFIG_EFI_STUB is itself a valid EFI executable, which means you can tell your UEFI firmware to start it directly. I don't have a Librem laptop, so I can't speak to how to get the public key set up to verify the signature, but that should be pretty straightforward.
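Registering the kernel directly with the firmware is usually done with efibootmgr; a sketch, assuming the kernel and initrd have been copied to the ESP (device, subvolume, and file names are placeholders):

```shell
# Create a firmware boot entry that launches the kernel's EFI stub directly.
efibootmgr --create --disk /dev/sda --part 1 \
    --label "PureOS (direct)" \
    --loader '\vmlinuz.efi' \
    --unicode 'root=/dev/sda2 rootflags=subvol=__pureos rw initrd=\initrd.img'
```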
As for MBR or GPT: UEFI wants GPT, and I suspect the Librem key system does too. An MBR is still a decent idea; put a simple copy of GRUB at the head of the disk, as it gives you some recovery options for if you slag your boot partition. If you have 4 or fewer partitions, a hybrid MBR/GPT is trivial and safe (I recommend starting the first partition about 4 MB in, so you have plenty of space to write a healthy GRUB install to the front of the device if you want it). That said, if you have an external boot device you can use for recovery, you really don't need the MBR at all.
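gdisk can build the hybrid MBR interactively, and grub-install writes the recovery copy; a sketch with placeholder devices:

```shell
# Hybrid MBR: in gdisk, enter the recovery menu and build the hybrid:
#   gdisk /dev/sda  ->  r (recovery/transformation)  ->  h (hybrid MBR)
# Then drop a recovery GRUB into the gap at the head of the disk:
grub-install --target=i386-pc --boot-directory=/boot /dev/sda
```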
And this gets to where I can only tell you what I do and a bit of why, as it really depends on your intended use case.
- I do not use a swap partition. I do have swap, but not a standard swapfile (ask if you want help setting up zram for serious swapping).
- I do use a boot partition: vfat, 1 GB since I've got a huge drive, but anything bigger than 128 MB is comfortable (figure 25 MB per kernel, and you want room for at least 2, preferably 3).
- Everything else goes on a single BTRFS partition.
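For reference, the non-standard swap I mentioned above is zram; a minimal manual setup looks something like this (compressor and size are illustrative):

```shell
# Compressed-RAM swap: fast enough that the usual swap penalties don't apply.
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # pick the compressor
echo 8G   > /sys/block/zram0/disksize         # uncompressed capacity
mkswap /dev/zram0
swapon -p 100 /dev/zram0    # high priority: used before any disk swap
```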
BTRFS is stable and ready for everyday use, mostly*. I put /home in its own subvolume (actually a subvolume per user). My actual root filesystem looks something like this: __pureos is the default subvolume, and it holds the usual filesystem you'd expect on a Linux system. Depending on the system, /var might get its own subvolume too. __home/userN gets mounted on /home/userN, and __local gets mounted on /usr/local. If I want to play around with different distros, they get their own toplevel subvolumes. If you put your database files in their own subvolume, they can be shared between distros, as can the home folders and /usr/local.
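Creating that layout, sketched with placeholder device and user names:

```shell
# Build the subvolume layout on the toplevel of the BTRFS volume.
mount /dev/sda2 /mnt                      # toplevel (subvolid=5) by default
btrfs subvolume create /mnt/__pureos      # becomes the distro root
btrfs subvolume create /mnt/__home
btrfs subvolume create /mnt/__home/user1  # one subvolume per user
btrfs subvolume create /mnt/__local

# Matching /etc/fstab entries (UUID is a placeholder):
# UUID=xxxx  /            btrfs  subvol=__pureos       0  0
# UUID=xxxx  /home/user1  btrfs  subvol=__home/user1   0  0
# UUID=xxxx  /usr/local   btrfs  subvol=__local        0  0
```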
kexec can chainboot into them, and you can pick which subvolume to boot at the same time by passing
--append="rootflags=subvol=__mint" or similar.
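A sketch of that chainboot, with the __mint paths and root device as placeholders:

```shell
# Load the other distro's kernel and jump into it, selecting its subvolume.
mount -o subvol=__mint /dev/sda2 /mnt
kexec -l /mnt/boot/vmlinuz \
      --initrd=/mnt/boot/initrd.img \
      --append="root=/dev/sda2 rootflags=subvol=__mint rw"
kexec -e    # no firmware round-trip: jumps straight into the loaded kernel
```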
Here's the first big feature of BTRFS: it's a copy-on-write filesystem, which means that before you start a system upgrade which might break something, you can take a snapshot of the current active distro's subvolume and roll back to that snapshot if things go badly. It's also incredibly space-efficient, as you can compress seldom-used files and de-duplicate files between distros.
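The snapshot-before-upgrade dance, sketched with the subvolume names from above:

```shell
# Snapshot the live root, upgrade, and roll back by swapping names if needed.
mount -o subvolid=5 /dev/sda2 /mnt                        # volume toplevel
btrfs subvolume snapshot /mnt/__pureos /mnt/__pureos-pre  # cheap CoW copy
# ...run the upgrade; if it goes badly, from a rescue/live boot:
mv /mnt/__pureos /mnt/__pureos-broken
mv /mnt/__pureos-pre /mnt/__pureos   # the snapshot becomes the root; reboot
```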
(*) It's a little rougher around the edges than ext4, but the advanced features well offset the slight increase in fiddly recovery if something bad does happen. In the last 4 years or so, the only issue I've had that wasn't related to a physically failing drive was a strange interaction between KVM, QEMU, IOMMUs, and PCI passthrough. Bottom line: the information online saying it's not ready is part of that outdated information you complained about.