Linux Updates in an Air-Gapped Environment

At work, I have been handed a mostly-finished, air-gapped cluster running virtual networks with Windows and RHEL VM servers and Windows VM desktops. I want to set up an arbitrary number of Linux virtual desktops in this enclave (the engineer users are keen for it). I want to support them, and the mechanics of installing the desktops seem pretty straightforward, but I also need to consider applying security updates/patches. There is WSUS for the Windows desktops, but I’ve never administered Linux before. There has to be an easier way than monitoring repositories for updated packages and manually downloading and schlepping them across the air gap.

Since my Linux experience is limited to internet-connected real hardware, I am hoping that you Brainiacs can point me in the right direction. Thanks in advance for any suggestions. :man_superhero: :woman_superhero:

3 Likes

You could mirror the repository, much like you’d have WSUS download a copy of the patches, then have that mirror be the source in the sources.list for your Linux systems.
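Pointing the clients at it is only a few lines of config; on a Debian-family system that’s a sources.list entry, and on RHEL it’s a .repo file, same idea either way (hostnames and paths below are invented):

```
# Debian-family: /etc/apt/sources.list entry for the internal mirror
deb http://mirror.internal.example/debian bookworm main

# RHEL-family: /etc/yum.repos.d/internal.repo
[internal-baseos]
name=Internal BaseOS mirror
baseurl=http://mirror.internal.example/rhel9/baseos
gpgcheck=1
```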

Either way, that mirror is either bridging the gap (very bad) or being synced from media moved across the gap manually (less bad, but still not necessarily good).

I would plan for manually patching all systems, with controls on that manual process to keep the devices bringing that data across from contaminating the air gap. If you wanted to be extra paranoid about it, on the Linux side only mirror the source code and compile everything yourself.
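On the RHEL side, the compile-it-yourself route usually means rebuilding the signed source RPMs, roughly like this (the package name is just a placeholder):

```bash
# Check the source package's signature before doing anything with it
rpm --checksig foo-1.2-3.el9.src.rpm

# Rebuild it into binary RPMs (output lands under ~/rpmbuild/RPMS)
rpmbuild --rebuild foo-1.2-3.el9.src.rpm
```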

4 Likes

I haven’t done this kind of thing, but I think one way could be to set up your own local package repository inside the air-gapped area, and have all systems there check for and install updates from that repository. That would essentially be a clone of the public repository that Red Hat keeps; you would need to copy it across the air gap on a regular basis to keep it reasonably in sync.

Edit: as @OpojOJirYAlG says, “mirror” is probably the proper name for this.
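I believe the RHEL tooling for pulling down such a copy is reposync; something like this on the connected machine (untested, and the repo IDs are from memory, so check dnf repolist for yours):

```bash
# Pull a local copy of the RHEL repos, newest packages only
dnf reposync --repoid=rhel-9-for-x86_64-baseos-rpms \
             --repoid=rhel-9-for-x86_64-appstream-rpms \
             --download-metadata --newest-only \
             --download-path=/srv/mirror

# If you skipped --download-metadata, regenerate the repo metadata
createrepo_c /srv/mirror/rhel-9-for-x86_64-baseos-rpms
```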

3 Likes

Neither have I, so take with a grain of salt …

Also, none of us knows exactly how high your security requirements are.

That said, how about the following?

```
airgapped - firewall - mirror <---sync--- mirror on <---sync--- local <- internet
cluster     /proxy     for airgapped      removable             mirror
                       cluster            media
```

So you would be using sneakernet to sync the final mirror, and the firewall/proxy serves to limit, as far as possible, the opportunities both for exfiltration and for infiltration of files unrelated to Linux updates via the mirror for the airgapped cluster.

As far as possible, use of removable media would be controlled and sync would be automated (plugging in triggers the sync and no manual use of the drive is permitted), in order to reduce the opportunities for mischief outside the intended activities. Automation could include verifying signatures in the intermediate steps (hence, by definition, no random, unsigned repositories).
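I haven’t built this, but the plug-triggered part could be a udev rule keyed on a known filesystem label that kicks off a sync script. A rough sketch, with the label, paths, and manifest scheme all invented:

```
# /etc/udev/rules.d/99-mirror-sync.rules -- label is an example
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="UPDATE_DRIVE", RUN+="/usr/local/sbin/mirror-sync.sh"
```

```bash
#!/bin/bash
# /usr/local/sbin/mirror-sync.sh -- sketch only, paths invented
set -euo pipefail
mount -o ro /dev/disk/by-label/UPDATE_DRIVE /mnt/updates

# Refuse the whole batch unless the detached signature over the
# manifest verifies against our own trusted signing key
gpg --verify /mnt/updates/manifest.sig /mnt/updates/manifest

rsync -a --delete /mnt/updates/mirror/ /srv/mirror/
umount /mnt/updates
```

(udev kills long-running RUN+= tasks, so in practice you would probably have the rule start a systemd unit that runs the script; the above is just to show the shape of it.)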

I would assume that, in the very highest-security environments, it would be unacceptable to take binary updates at all. All updates would come in as source and be internally reviewed before being approved for use (compilation and release). That would mean no Microsoft Windows, so I assume you aren’t in that situation.

2 Likes

If it’s truly air-gapped, do you really need security patches? If no data or information ever crosses that gap, then why bother?

1 Like

I don’t know how it is in this case, but there are scenarios where it would be important anyway. Maybe it’s a large facility with lots of computers inside the air-gapped area, none of them communicating with the outside world, but all connected to each other, with many different users who physically enter the building and log in to their accounts there. Some of those users may try to exploit security holes, which gives the administrators a good reason to keep things up to date.

5 Likes

Technically, the topic title says “updates”. You may very well need an update if it fixes a show-stopper bug or adds some important new functionality (e.g. support for new hardware).

3 Likes

But the very first post mentions “security updates,” thus my question.

3 Likes

Thank you all for the vigorous response. As many of you suggested, I had the thought of an internal mirror, but didn’t really know how that worked. @kieran’s flow chart helped me visualize the process, and the automated plug-in-triggered sync is intriguing.

It is true that our accreditation process does allow binaries to cross the air gap, but they must be tested/certified. In cases where this has not been done or would take too long, we have, on rare occasions, compiled from source after automated (HP Fortify or similar) and manual review of the code. We don’t have the manpower to compile everything, even if we had the source.

As @Gavaudan suggests, security patches in our truly air-gapped environment are superfluous at best, douche-baggery at worst. I’ve made the argument, but if you have ever read Catch-22, you know what I am up against.

@Skalman is correct that we have a lot of users and a lot of (virtual) computers, but when we do allow users on the system, they will not be allowed to move data across the air gap – only a limited number of transfer agents will be allowed to do so, and then we may have to enforce two-person integrity for each transfer. The system will be used for test data analysis, so lots of data will go in, and only a little data will go out (mostly test reports).

I suspect the solution lies in syncing an internet-connected mirror with an internal mirror via sneakernet, as several of you have suggested. The devil is in the details, which I will have to learn. If I had a decent internet connection (I don’t), I would set up a mirror at home to iron things out. To test it at work, I might have to set up a system that does not connect to our enterprise network. There will be a significant regulatory hurdle to overcome there, but if I can do that, it would become the outside mirror for the production system. Did I mention that, for the time being, I am essentially one deep? My head is spinning!

As an aside, I also need a way to maintain configuration control. My engineer/users write a fair amount of code for their own data analysis purposes and have asked for a Git repository. I don’t grok Git, but it occurs to me that if I set up a repository to track the configuration of my server and desktop images, I would soon grok in fullness. Hopefully, I wouldn’t be such an idiot when it came to administering the repository for the engineers!
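From the little reading I’ve done so far, a shared internal Git server can apparently be as simple as a bare repository on a box everyone can SSH to. Something like this, if I understand it correctly (host and paths are my own invention):

```bash
# On the internal server: create a bare repository to share
git init --bare /srv/git/desktop-config.git

# On a workstation: track the image/config files and push them up
git init desktop-config && cd desktop-config
git add kickstart/ scripts/
git commit -m "Initial desktop image configuration"
git remote add origin ssh://gitserver/srv/git/desktop-config.git
git push -u origin master
```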

1 Like

The major pain point here is going to be RHEL… Debian, Arch, and Gentoo (and derivatives) all make compiling from source, and verifying both the integrity of the source and the resulting build artifacts, relatively streamlined. They also let you easily write package control files, so your engineers writing custom code can have it neatly packaged too. And if you have source code analysis tools, you can run them both on the local mirror which supplies the files and on the cluster side at compile time.
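On a Debian-family system, for instance, the fetch-verify-rebuild cycle is short (the package name is just an example, and you need deb-src lines in sources.list):

```bash
# apt checks the source download against the signed repo metadata
apt-get source hello

# Install the build dependencies, then rebuild unsigned binary packages
apt-get build-dep hello
cd hello-*/
dpkg-buildpackage -us -uc
```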

2 Likes

It appears that the forum does not allow multiple posts to be flagged as “solution”, but you all have given me much to think about and work with. Thank you.

2 Likes