But isn’t the Librem 16 going to be a laptop computer? I don’t think a laptop is designed to dissipate heat efficiently under hours or days of heavy workload.
Will the Librem 16 handle hours or days of heavy workload gracefully without degrading?
I want to see powerful but efficient laptops, as well as powerful desktops and servers with ECC RAM and more drive bays. If a laptop is not efficient, it will generate a lot of heat, which can degrade the chips.
Upon further research, recent Apple computers don’t seem to have separate processors with elevated access, such as Intel ME and AMD PSP. But:
Apple chips are closed-source hardware.
Only macOS runs on Apple computers; it’s practically impossible to run generic operating systems on them. macOS is closed-source software that is likely to contain backdoors, or obvious front doors, even if Apple chips don’t.
If you are a serious purchaser of a Librem Server, you should discuss your requirements directly with Purism.
I totally agree with you.
It would appear that those are the hot-swap bays. It looks like 7 disks in total (1 for boot, 6 for data, and of the 6 data disks, 2 are fixed and 4 are hot-swap).
(Whether you actually need hot-swap depends on how you configure your storage and what your requirements are.)
However, even with 7 disks, there is always going to be some limit on the total amount of storage.
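As a rough illustration of that limit, here is a back-of-the-envelope sketch in Python. The per-disk size (8 TB) and the redundancy layouts are purely assumptions for the example, not anything from the Librem Server spec sheet:

```python
# Rough back-of-the-envelope for usable capacity with 6 data disks.
# Illustrative only; the disk size and RAID layouts are assumptions.
DISK_TB = 8       # assumed size of each data disk, in TB
DATA_DISKS = 6

layouts = {
    "striped / RAID 0 (no redundancy)": DATA_DISKS * DISK_TB,
    "mirrored pairs / RAID 10": DATA_DISKS * DISK_TB / 2,
    "single parity / RAID 5 (or RAID-Z1)": (DATA_DISKS - 1) * DISK_TB,
    "double parity / RAID 6 (or RAID-Z2)": (DATA_DISKS - 2) * DISK_TB,
}

for name, usable in layouts.items():
    print(f"{name}: ~{usable:.0f} TB usable")
```

The point being that the redundancy you choose can easily cost you a third to a half of the raw capacity, which matters when the bay count is fixed.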
About that AI use case: while it’s true that an AI model can be run on a CPU, it will not be anywhere near as fast or efficient as what you can get from a proper GPU. And in this case “proper GPU” often means one of the newer and bigger cards, which do not fit in the Librem Server, nor (as far as I know) do they have proprietary drivers available (and, again as far as I know, the open drivers are not an option for AI). Some newer CPUs have NPU cores (Apple’s chips may have those), but their relative help is not on par with GPUs, especially when compared to price. There is a third option of separate NPU cards, but again there are questions of price, usability and drivers. It’s hard to say whether, and how big, closed blobs they all require, and how much work would be needed to utilize them properly on a Librem, so neither of those seems likely to help anytime soon.
So, for the AI use case, these servers are not the best suited even if the specs were a bit better (but they will do if you optimize your model).
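For what it’s worth, here is a minimal sketch of the CPU-fallback idea, assuming PyTorch and the Hugging Face transformers library are installed; the model name is just a small illustrative choice, not a recommendation:

```python
# Minimal sketch: use a GPU if one is usable, otherwise fall back to CPU.
# Assumes PyTorch and transformers are installed; the model name is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running inference on: {device}")

model_name = "distilgpt2"  # a small model that a CPU-only server can handle
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

inputs = tokenizer("Hello, Librem Server", return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

On a CPU-only box this runs, just slowly; the same pattern with a bigger model is where the lack of a proper GPU really starts to hurt.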
Depends on the modality and the specifics. Text is light and terminal input/output is simple (enough with those mini models). Video… again, it depends on how big the edited material is, how heavy the work is (a simple effect versus several minutes of HQ new content) and how fast you want it… that needs one of the bigger cards. I’m not very familiar with AMD and video editing, but I think there are probably some limitations; at the same time I’m fairly confident there are solutions to use those cards too (just less common and maybe not as reliable or simple). You’d also have the challenge of the GPU being external, which may become a thing too (the Purism server and Mini do not have room for them).
Btw, if anyone is planning on running an AI on a server that is open to online use (i.e. connected to the internet), I highly suggest running the AI model with its tools and data in a separate VM/container, which you can kill and replace if necessary (keep a backup). The VM adds a security layer so that the AI cannot be used against the host server (a VM is much harder to break out of than an AI is to jailbreak).
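As a rough illustration of the kill-and-replace idea, here is a minimal sketch assuming a libvirt/QEMU guest named “ai-box” and a known-good snapshot named “clean-base” (both names are hypothetical, and it assumes the virsh CLI is available on the host):

```python
# Minimal sketch: revert a (hypothetical) "ai-box" VM to a clean snapshot,
# so a compromised or misbehaving AI guest can be thrown away and replaced.
# Assumes libvirt/QEMU and the virsh CLI are installed on the host.
import subprocess

VM_NAME = "ai-box"        # hypothetical VM holding the AI model, tools and data
SNAPSHOT = "clean-base"   # hypothetical snapshot taken while the VM was known-good

def reset_ai_vm():
    # Force the guest off (ignore errors if it is already stopped),
    # roll back to the known-good snapshot, then start it again.
    subprocess.run(["virsh", "destroy", VM_NAME], check=False)
    subprocess.run(["virsh", "snapshot-revert", VM_NAME, SNAPSHOT], check=True)
    # No-op / harmless failure if the snapshot already restored a running state.
    subprocess.run(["virsh", "start", VM_NAME], check=False)

if __name__ == "__main__":
    reset_ai_vm()
```

The same idea works with containers; the point is simply that the AI’s environment is disposable and separate from the host.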
I notice the discrepancy between our methods now. Whenever I perform a Discourse search, I usually sort by “Latest Post” rather than “Relevance”, since later posts quite often have more up-to-date, and thus more relevant, information. However, that filter does not interpret the search query the same way the other one does, so posts displayed under one filter may not be displayed under the other.
Interesting. It looks like the newer version of the Librem Server (not yet available?) is very different from the previous ones. Previously they were rebadged Supermicro (flashed with coreboot) machines and they did have ECC RAM (and, IIRC, they all had Xeon processors). My notes have the mapping between Librem Servers and Supermicro servers as:
the ordering page itself could be out-of-date regarding the CPUs
ECC RAM won’t work with the i3-9100 unless the mobo itself supports ECC (an unknown - see the check sketched below)
on the ordering page you can’t directly order ECC RAM when configuring the RAM. Let’s say that the mobo supports ECC, and let’s say that the order is automatically for ECC RAM when you order the i3-9100 and for non-ECC RAM otherwise; OK, that could work, but it’s too much speculation!
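If you did get the hardware in hand, here is a minimal sketch of checking whether ECC is actually active, assuming a Linux kernel with the EDAC subsystem enabled (absence of entries isn’t absolute proof ECC is off, only that the kernel isn’t reporting it):

```python
# Minimal sketch: check whether the kernel's EDAC subsystem reports any
# memory controllers, which is a reasonable sign that ECC RAM is active.
# Assumes Linux with EDAC support.
import os

EDAC_MC = "/sys/devices/system/edac/mc"

def ecc_controllers():
    if not os.path.isdir(EDAC_MC):
        return []
    return [d for d in os.listdir(EDAC_MC) if d.startswith("mc")]

controllers = ecc_controllers()
if controllers:
    print(f"EDAC reports {len(controllers)} memory controller(s): ECC appears active")
else:
    print("No EDAC memory controllers found: ECC likely inactive (or not reported)")
```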
If you are a serious purchaser of a Librem Server, you should discuss your requirements directly with Purism.
That would also be needed in order to understand the PCIe expansion capabilities, if any.
I demand passive cooling for a box of that size - I’m talking about the Librem Mini (obviously not about a fully-loaded rackmount server).
Purism has not traditionally configured the Librem Mini as fanless, i.e. the v1 and v2 both have a fan. However, I have no information regarding the specs of the v3.
You would need to decide how important fanless is.
If all you are going to do with it is route, say, 1 Gbit/sec between LAN and internet, I wouldn’t expect it to run hot. You might get away with keeping the fan.
The current Librem Mini versions don’t, but I have no information about the v3. You can of course use a USB 3.0 to gigabit ethernet dongle to add a second ethernet port - which is probably adequate for an internet gateway router.
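If you go the dongle route, here is a minimal sketch of confirming that both ports actually negotiated gigabit, assuming Linux (interface names will be whatever your system assigns to the built-in NIC and the USB adapter):

```python
# Minimal sketch: list network interfaces and the link speed each one
# negotiated, to confirm a USB 3.0 gigabit dongle really links at 1000 Mb/s.
# Assumes Linux; interfaces that are down report no speed.
import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    if iface == "lo":
        continue  # skip loopback
    speed_path = os.path.join(SYS_NET, iface, "speed")
    try:
        with open(speed_path) as f:
            speed = f.read().strip()
    except OSError:
        speed = "unknown (link down?)"
    print(f"{iface}: {speed} Mb/s")
```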
If you had hard requirements of two built-in GbE (or better) ports and fanless operation, there are probably better choices available now, but you would be giving up some openness.