In defense of the FSF RYF certification

The Librem 5 is expected, hopefully, to receive the Free Software Foundation (FSF) Respects Your Freedom (RYF) certification.

There are different opinions about the FSF RYF certification criteria: some people think they are silly, while others think they are reasonable.

I think this issue is interesting and important, but it is also a bit difficult to understand. Below is my attempt at describing it; please help out by commenting if you know better.

The tricky part is how the RYF certification handles the “necessary evil” of proprietary software running in some part of a device. As far as I understand it, the RYF certification criteria mean that there can be proprietary software running on the device, provided that it can be considered “part of the hardware”. An example could be a hard drive that internally runs some proprietary software (firmware) on a separate processor inside the hard drive itself. We can imagine connecting such a hard drive to our device and using it without caring about, or even knowing about, the existence of that firmware; it is simply considered part of the hardware component (the hard drive).

Often, when there is firmware running inside some piece of hardware, there is also some way of updating that firmware. Probably all free software enthusiasts will agree that it is best if the firmware is free software: then we can modify and update it as we want, and everyone is happy. However, in some cases the firmware is proprietary, closed-source software, and in that case the FSF RYF certification criteria imply that the device must not allow the user to update that firmware. This, I think, is the basis of most of the criticism of the FSF RYF certification. It means that if you, as the manufacturer of a device, want to get the RYF certification, you must restrict the device in that way, effectively leaving the user of the device with fewer options. (As @lperkins2 wrote in a previous thread, it can in this way be seen as a “restricts your freedom” certification.)

While the above argument against the RYF certification criteria is clear, I would still like to defend the RYF certification. The purpose of the certification is to create incentives for people to use free software more, for everything. In this view, using proprietary software is bad, and to the extent that it is necessary, it makes sense to make it difficult. Having a device that does not allow updating some firmware is bad, yes, but the proper solution is to build the device using components that run only free software; then the problem goes away. So, having the RYF certification defined this way creates more incentive to create components that run only free software. If updating proprietary firmware were allowed by the certification, there would be less pressure towards creating fully free devices in the future.

Another aspect of this is that the way the RYF certification is done means that for an RYF-certified device we can package all software for it and say, “look, here is the complete software for the device, and all of it is free!” That is how it works for the Librem 5, I think, and that is a difference compared to the PinePhone, where the software you install includes some proprietary binary blobs. Although you can argue that those blobs are only sent to separate pieces of hardware and never executed by the main CPU, you are still dealing with proprietary blobs.

Perhaps this in the end boils down to a question similar to that of “copyleft” GPL vs MIT-style open-source licenses. You can say that the GPL is “less free” in some respects, but it promotes free software more strongly and in that way helps build a better world.

What do you think about all this? Is the issue described properly above, did I miss something important, did I even misunderstand everything from the beginning?


(As an aside, good job moving this to a new thread :slight_smile: . You can actually do that from the reply tab in the previous thread, and it will automatically include a link to the new one).

That seems like a fairly accurate summary. There are a few other bits to point out though.

Against RYF

I believe the original idea behind the exception for isolated firmware was the impracticality of building a system without firmware blobs in the drives and the like. The idea is that the SI (system integrator) can pick components which use a standard interface (UVC, for example), without worrying about what the internals are or do (in short, encapsulation). I don’t think it was designed for an SI to follow the letter of the rules by including hardware which isn’t fully encapsulated, but then locking the user out of messing with it.

As I mentioned in the previous thread, with your UVC camera, or your hard drive, or the Intel iGPUs included in the x86 laptops, you can change the firmware, and the FSF wouldn’t object, since the ability to do so is included by the hardware manufacturer, not specifically enabled by the SI.


That said, there are several points in favor of putting the firmware on a separate chip, which you missed. First is the licensing of the OS: if it doesn’t include closed-source, dubiously licensed material, that side-steps a whole pile of headaches. Also, there’s usually pretty limited ability with these devices to check what firmware they are currently running. In the cases where you can get the firmware image back from the device, it’s generally the device firmware itself overseeing that transfer. This means a nefarious actor with temporary write access to the firmware could, in theory, stick a nefarious program into it, and you’d have no way to verify it has been removed (short of using an external chip flasher). Making the flash module unwritable fixes this issue.
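To illustrate the verification problem described above: once you have dumped a firmware image (by whatever means), the basic check is comparing it against a known-good vendor image. This is a minimal sketch; the file paths are hypothetical, and it assumes you have already obtained both images somehow.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def firmware_matches(dumped: str, known_good: str) -> bool:
    """True if the dumped image is byte-identical to the reference image.

    Caveat from the discussion above: if the dump was produced by the
    device's own firmware, a compromised firmware could lie and hand back
    a clean copy, so only a dump taken with an external chip flasher is
    actually trustworthy.
    """
    return sha256_of(dumped) == sha256_of(known_good)
```

The point of the caveat is exactly the issue raised above: the comparison is only as trustworthy as the channel that produced the dump.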

Some final thoughts…

I like that the closed-source blobs have their own dedicated storage and don’t use the main CPU. It makes it harder for someone to screw up somewhere and run something untrusted on the main CPU. At one point in time, BIOS chips, disk drives, and other storage media (still including standard form factor SD cards) had write-protect switches (which couldn’t be actuated or ignored in software). This eliminates the issue of temporary intruders becoming permanent, while still letting the user control their device.

As for the certification itself… Consider that, with the rules as written, and the willingness to exploit loopholes in the rules, you could have perfectly fine hardware choices available, using open source drivers, for all components of the L5. Purism could still choose to use closed source media controllers, closed source modems, closed source everything but the CPU, and get the RYF certification. So that means the certification is not really doing its job. Sure, it does for the CPU, and it does mean a company has to consider the cost of using the open source hardware vs side-loading any needed firmware, but I think that’s fundamentally a broken approach.


i must say, if you didn’t start this thread as you did, at some point somebody else would have …

i mean what did you expect ? :roll_eyes:

in a world where the WEAKEST link is HUMAN corruption - NOT machine corruption - you have to limit the ability to do conscious harm if you want to SAFEGUARD the perceived NATURE of free-software

i mean think about it … what is more important ? and i say “perceived nature” because the true nature of something is not knowable with an outside eye, it’s ONLY knowable with an INSIDE eye … either way i digress …

that being said, the nature of something REMAINS forever the SAME nature so it doesn’t need protection from the outside … BUT when you have the POWER to alter the internal matrix ethos then you should be DAMN GLAD that somebody thought to put RESTRICTIONS in place …

I don’t think it is appropriate for the FSF to create artificial rules only to protect a philosophical ideal, or to protect us from our own potentially less-than-ideal system design choices, in circumstances where allowable exceptions could be made but are not. This is a slippery slope which could put the FSF philosophically on the wrong side of its own stated ideals under some circumstances.

It is acceptable for some code-locked parts of a system to exist as isolated parts of the system and still meet the RYF certification requirements. But each part of the system needs to be evaluated on its own merits, not by some imprecise rule that requires more granularity to be complete and is instead guided by a political philosophy that does not require a full understanding of the overall system under design. Exactly what do the control lines do, and can we adequately block them when needed? If I need to use a control line sometimes but need to block it the rest of the time, what is really going on inside the chip? Are operational codes being transferred between devices within the system at any time? If they are, and if we cannot have access to the firmware in the chip, then block the certification. If no operational codes are transferred into or out of the closed part of the system, and if we have physical control over all control lines between devices within the system, then grant the certification. The device datasheets should disclose this level of information, because it is technically required to create an application using the manufacturer’s device. The current RYF system is more like saying, “Fords are safer than Chevys, so you can only buy a Ford”, when the real question should be why one model is considered safer than others, and whether perhaps both models are safe enough to meet a very specific technical specification.

Small-company board-level designers have no control over what hardware is or isn’t manufactured (what is or isn’t available to them). If an application designer can build me a fully free Linux-based phone that rivals my Note 9 and is RYF compliant, even if only the CPU is open source, that would be a big first step forward for the open source movement. Why let a counter-productive philosophy/policy get in the way of that?

Phone hardware designers need to really know what is going on inside their application, not just kludge together a system from various reference designs from different manufacturers that somehow magically works (privacy implications unknown). If the control and status information of the closed system with respect to the rest of the system is understood by the system designer, and if operational codes are not transferred out of the closed part of the system to the rest of the system, then the closed part of the system is safe. Those seeking the certification should document this, and the certification should then be granted.

what circumstances ? this is only regarding non-free-software …

here it is clear that updating a non-free firmware (a type of software closely related to the Linux kernel modules for the driver, shipped as binary only) by yourself, without auditing the source code and compiling it yourself, could potentially be dangerous …

on the other hand if the firmware is non-free but there are certain hardware isolations in place (as is the case of Purism) then that risk can be mitigated …

if the firmware is free-software then there is no NEED for any additional isolating hardware components to take up space in the chassis, and everyone is happy :slight_smile:

let’s please not repeat the same things over and over …


I think that respect for the freedom of owners/users should be designed to protect owner/user rights, not to protect owners/users from themselves. If I buy an RYF device and it is easy for me to tamper with the firmware, and I end up infecting the device with a virus or spyware, I should have the right to do that to my own device. This whole culture of protecting us from ourselves is not what free software is about.

When I find an exploit for my own cell phone, connect it to a PC by a USB cable, and intentionally upload the exploit to my own phone, I have one goal in mind: to circumvent the lock that was put there to keep me out, and then to root the phone. The exploit wouldn’t be necessary if they let me into my own device willingly. I resent having to find and use an exploit to get full control over a device that I, as the end owner/user, own. Could my phone get a harmful virus in the process? Absolutely yes. But that is my risk to take. There would be far less risk if I didn’t have to locate an exploit from unknown internet sources. Perhaps I can learn something, or reverse-engineer the firmware, if I am not locked out of parts of it. The DMCA allows an exemption that lets you crack your own phone. I just don’t think that any certification should limit owner/end-user rights. Many of us are engineers and might even have bought the phone with the intent, if necessary, to destroy it in the learning process, if that’s what it takes to learn about the hardware. If the RYF certification takes away my freedom to do this, I become unhappy with the FSF and what they represent to me.

in the context of the phones (L5 and PinePhone) the issue is more sensitive than usual because the modem has proprietary firmware. i’ll go as far as speculating that it is militarized/state-policed, and that’s a little bit like poking the bear with a stick …

There is nothing wrong with caution, but there are legal ways to do just about anything. If I were to crack into this modem, my goal would be to examine the RF outputs on a spectrum analyzer first, then examine the binary, if it can be extracted, to see whether it can be decompiled or manually analyzed, and from that to learn how the special function registers are set up and to map them if possible (creating your own datasheet via reverse-engineering). Then I would probably lower the effective radiated power to around 100 mW and see what I can get it to do, using the spectrum analyzer to mimic the original outputs from the OEM firmware. If I wanted to take it above FCC Part 15 power limits, I would find a nearby amateur radio frequency to experiment in, fully legal and licensed. If anyone gets that far, a full and comprehensive open modem driver will probably show up in the open source community shortly afterward. If the code is protected inside the same chip, it’ll be very difficult, and maybe illegal, to do anything with it. That’ll mean game over; try a different modem. But if it boots from an external memory chip, that serial boot stream can be recorded and hacked. If a device datasheet is available, anyone with the appropriate programming skills could start from scratch and write a new driver for it. The reason we have non-IBM PCs is that someone once reverse-engineered the IBM BIOS. Anything is possible.
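As a hedged illustration of the “examine the binary” step: a common first pass over any dumped firmware image is a sliding-window byte-entropy scan, since windows near the 8 bits/byte maximum usually indicate compressed or encrypted data (hard to analyze), while mid-range entropy is typical of machine code and long near-zero runs are padding. This is a generic sketch, not specific to any particular modem or image format.

```python
import math
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def entropy_profile(image: bytes, window: int = 4096):
    """Yield (offset, entropy) for fixed-size windows over a firmware image.

    Rough heuristics: windows near 8.0 bits/byte are likely compressed or
    encrypted; roughly 5 to 6.5 is typical of machine code worth feeding to
    a disassembler; long runs near 0.0 are 0x00/0xFF padding.
    """
    for off in range(0, len(image), window):
        yield off, shannon_entropy(image[off:off + window])
```

A profile like this is usually the first thing run over a recorded boot stream or flash dump, to decide which regions are worth loading into a disassembler at all.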

If you reprogram it, you just need to be responsible for the RF emissions afterward. That is manageable, but probably requires some work to master. If you know the metaphoric rules of the road and follow them, the whole process is completely legal.


Do you mean that for the certification to really do its job, it would need to be more restrictive, it would need to disallow proprietary firmware even in isolated parts?

Do you think there is a way such a certification could be defined that would make it work well, or is the whole idea of this kind of certification doomed to always be crippled by these kinds of problems, no matter how one tries to formulate it?

Easier to address the second question first.

It depends a bit on what you mean by the certification.

If you mean “can you write a fixed set of easily interpreted rules, where anything following the rules Respects Your Freedom and anything not following them does not?”, then the answer is no. There are too many edge cases, like the L5, where the device manufacturer honestly tries to empower the owner of the device but is prevented from going further by technical or legal limitations. That should be recognized. There are also plenty of cases where open source, “copyleft” licensed software arbitrarily and capriciously restricts the user, such as Firefox not allowing you to perform certain actions on DRM-enabled pages.

If you mean “can (and is there any value in having) a respected organization look at the particulars of a device or software project and determine whether the vendor or authors have made a good-faith effort to empower users?”, then the answer is yes. As it stands, most people won’t read the actual requirements of the FSF RYF certification. They’ll take the name of the certification, consider how much they respect the FSF (or don’t), and decide if the certification is worth anything to them. They are relying on the criteria of the RYF certification to be right, which at present I don’t think they are. Obviously, it wouldn’t be an improvement to have no guidelines for the RYF, as that just opens the door to bribes and capriciousness on the part of the reviewers.

I would say the default position is if you want the certification, with minimal fuss: use no black box parts; use only libre licensed software; use only royalty free or unpatented parts. If you deviate from that, you must submit, along with the application, a justification for the deviation, which explains why it’s the best option for the owner or user of the product. To improve regularity of the process, there would be a fairly standard list of exemption reasons. Things like “We want it to run x86 applications at full efficiency, x86 is patented, so we must use a licensed product”. Or “It’s a phone, the whole point is to have a cellular modem. There are no RYF certified cellular modems. This is the one we picked. These are the steps we’ve taken to isolate the device so it can’t compromise the user”.

Here’s the critical piece: the application and all supporting documentation must be made publicly available. The FSF would then review the application, give their initial assessment, and issue a request for comment. Only after the RFC period is over, and only if no one raises valid objections, is an RYF certification issued. This has several benefits. First, the process is open, which builds trust in the value of the certificate itself. Second, it doesn’t require the FSF to be experts in everything: with the open comment system, there can be review from professionals working in whatever field the device targets, who can better identify any glaring issues. Third, there may be solutions to the non-free components which the applicant did not consider; identifying these may lead to the production of a more-free product.

At the end, I would want there to be several tiers of certification, from “this is totally free, do whatever you want with it, including duplicating it”, to “the SI did the best they could, but the product is fundamentally flawed”.