
Enterprise Linux Desktop

---

Introduction

The Enterprise Desktop enters a new era with additional requirements originating from the server world, such as immutability, reproducibility, security, sovereignty and availability.

Linux, which dominates the server world, has for years offered an extensive Desktop ecosystem that is now mature enough to serve as a front-running Enterprise Desktop (recent surveys also confirm this). It gives companies the widest choice in adapting the Desktop to their needs, leading to higher productivity for their employees.

An enterprise Desktop installation for any operating system is a sophisticated endeavour that differs greatly from a private Desktop installation.

In this post I investigate the requirements for an Enterprise Desktop. Afterwards I derive an architecture for it, as the Enterprise Desktop depends on servers, e.g. to provide backups or security policies. Then I describe commercial offerings for the Linux Enterprise Desktop. Finally, I conclude with some open source building blocks you can use to design your Enterprise Linux Desktop.

The focus here is on technology aspects, but security also involves organisational and risk management aspects (e.g. BSI IT-Grundschutz).

I do not cover specific applications on the Desktop, such as mail or office suites (e.g. LibreOffice or OpenDesk). I may add a post about them later.

Requirements for Enterprise Linux Desktop

In the following I present some high-level requirements for the Enterprise Linux Desktop. This is not an exhaustive list, but it highlights some important ones. Depending on your enterprise (e.g. a startup working with public data vs. a confidential banking enterprise), a different subset of requirements may apply.

| Area | Requirement | Description |
|------|-------------|-------------|
| Backup and Restore | Restore applications | Restore all applications and configurations exactly to the state the user is used to, in case of damage, loss, theft or malware attack |
| Backup and Restore | Restore data | Restore data to a working version in case of damage, loss, theft or malware attack |
| Backup and Restore | Synchronise user data with remote server | Synchronise user data with a remote server for cases such as theft of the device |
| Confidentiality | Encrypted disk at rest | Encrypt the disk at rest to prevent unauthorised third parties from accessing the data |
| Integrity | Protect device from tampering | Protect the device from unauthorised modifications, such as malware |
| Resiliency | Provide resilient infrastructure | Provide a resilient infrastructure in light of various risks (e.g. natural risks, financial risks, energy and network infrastructure risks) |
| Connectivity | Secure communication | Secure communication to the organisation's network and all services |
| Connectivity | Deny ingress traffic | Deny ingress traffic to prevent network-based attacks |
| Connectivity | Sanitised network egress traffic | Sanitise network egress traffic to prevent connections to malicious websites and servers |
| Identity and Access Management | Central identity directory | Only identities from the central directory can log into the Desktop |
| Identity and Access Management | Device identity | Only allow trusted devices access to the organisation's network |
| Identity and Access Management | User identity | Only allow trusted users access to the organisation's network |
| Identity and Access Management | Confine resources on the Desktop | Confine resources, such as files, devices, network flows and applications, using minimal permissions at user level, even in light of faulty applications |
| Identity and Access Management | Privileged access management | Privileged access to the Desktop is guarded through defined controls |
| Integrity | Tamper-proof Enterprise Desktop | Detect and prevent tampering with the Enterprise Desktop |
| Software Management | Reproducibility: the same Desktop on every device | Deploy the same images and configuration to every Desktop without configuration drift between users |
| Software Management | Self-service app management | Allow users to install curated applications they are permitted to install |
| Software Management | Resilient software distribution | Keep software and app distribution working in case of Internet interruptions |
| Support | Secure remote access management | Secure remote access for the IT Service Desk to support users |
| Test and Quality Assurance | Provide test environments | Provide test environments for the Desktop |
| Test and Quality Assurance | Automated test of updates | Automatically test updates |
| Test and Quality Assurance | Automated test of packages | Automatically test packages |
| Training and Development | Access to courses | Access to courses for using the Desktop |
| Training and Development | Security awareness | Security awareness training on the user's role in securely using the device |
| Training and Development | Offline help | Help is directly available on the Desktop without requiring a network connection |

Note: I do not discuss specific applications, such as Office, or services, such as email, in this post. These will be the scope of future posts.

Not all of the requirements can be fulfilled by technology. For example, it is crucial that users receive proper training for their Desktop - otherwise it will not be accepted.

Architecture for Enterprise Linux Desktop

The architecture of an Enterprise Linux Desktop naturally encompasses more than a private Desktop needs. The requirements are simply different, although some of them may overlap. For example, many people also back up their private Desktop data or keep some of their data in network storage with a cloud provider.

You can decide to give different employee groups in your organisation different types of laptops. Some may only work with web applications, so a very hardened and restricted thin client is sufficient. Others may require more flexibility and have different software installed on their laptop.

We have illustrated an architecture for an Enterprise Linux Desktop in the following figure:

Generic Architecture for Enterprise Linux Desktop

As previously said, you do not need to implement all of this; it really depends on which requirements are relevant to you and on the needs of your different user groups. If you are just starting to think about your Linux Enterprise Desktop, you may also implement features over time according to a roadmap that fits your company's needs.

Let us now go through the components. We start with the organisation side, i.e. what needs to be deployed in the organisation's data center(s), and afterwards we look at the components needed on the client side (e.g. Desktops, mobile devices etc.).

Organisation

I describe here what is needed on the organisation side, i.e. in the data centers of the organisation, to be able to run and manage Enterprise Desktops, such as laptops, tablets and other devices.

It is implicitly highlighted in the generic architecture, but you should take into account the resilience of your infrastructure in case of disasters, such as natural disasters, power outages or network failures.

Connectivity

Connectivity involves connecting the Desktop of the user to the network of the organisation. This can take place when the user is outside the physical network of the organisation (e.g. at home or during travel) or inside it. Both cases require that you protect the network of the organisation and, at the same time, the Desktop of the user from a possibly hostile environment.

For example, if a user connects from outside the physical network of the organisation, you need to allow a secure way into the organisation's network from an environment that is most likely unprotected (e.g. home, hotel etc.). At the same time, you will want to ensure that all network traffic of the user's Desktop is routed through the organisation's network, so that certain protection mechanisms apply (e.g. filtering of Internet traffic so that only allow-listed domains are contacted) and you do not have an open attack point in a network you do not control.

Inside the physical network of an organisation you will want to make sure that only approved devices can connect to your organisation's network. Otherwise users and other people on your premises can easily connect any device and eavesdrop on the network, manipulate it or break it in other ways, making it potentially unusable for your organisation.

Both cases can have similar solutions (e.g. VPNs), but there are also solutions specific to one of the situations (e.g. having a dedicated network for controlling the electricity infrastructure to which the users do not have any access, but only specific devices for managing the electricity infrastructure).
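The full-tunnel case can be sketched with a VPN client configuration. The following is a minimal WireGuard sketch; the keys, addresses and endpoint name are placeholders, not real values:

```ini
# Minimal WireGuard client config (sketch) routing ALL traffic through the
# organisation's gateway; keys, addresses and endpoint are placeholders.
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32, fd00:8::2/128
DNS = 10.8.0.1                      # the organisation's internal resolver

[Peer]
PublicKey = <gateway-public-key>
Endpoint = vpn.example.org:51820
AllowedIPs = 0.0.0.0/0, ::/0        # full tunnel: no traffic bypasses the VPN
PersistentKeepalive = 25
```

Setting AllowedIPs to 0.0.0.0/0 and ::/0 is what forces all Desktop traffic through the organisation's network, so the organisation's egress filtering also applies when the user is on the road.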

Remote Storage

As an organisation you want people using an Enterprise Linux Desktop to have remote storage for the data on their laptop. This serves multiple purposes, such as switching between devices while keeping access to the same data, as well as providing a backup in case a device gets lost or stolen.

Remote storage does not hold all the data of the Enterprise Linux Desktop, but only data that is valuable for the organisation and/or necessary to smoothly move the Enterprise Linux Desktop from one device to another.

Identity and Access Management

Identity and access management is a wide-ranging topic that extends far beyond the Enterprise Linux Desktop. You will want centrally managed authentication and authorisation to ensure that only people belonging to your organisation can sign into the Desktop.

You want to define policies centrally that govern how the device of the Enterprise Linux Desktop is used. These policies need to be synchronised and applied to the Linux Enterprise Desktop.

You want to introduce special mechanisms for privileged access management. For example, critical systems or data should not be accessed directly from the Desktop; instead, extra hardened infrastructure should be provided (e.g. a Desktop on a remote VM with minimal software that is only reachable from private networks).

Security

Security is essential in any organisation. One key aspect is to have sophisticated audit as well as security information and event management (SIEM). You need to get the relevant logs from the Enterprise Linux Desktop and make them available for security analysis (e.g. investigation of malicious behaviour) while ensuring the privacy of the users.

It should be ensured that the logs are generated in a tamper-proof fashion using cryptographic techniques.
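As a sketch of the idea, audit entries can be hash-chained: each entry carries a hash over the previous entry, so modifying any past line invalidates all later hashes. The log format here is made up, and a real deployment would additionally sign the chain with a protected key (e.g. in a TPM):

```shell
#!/bin/sh
# Hash-chained audit log sketch: each line stores sha256(prev_hash|event).
log=$(mktemp)
prev="genesis"
for event in "login user=alice result=ok" "install pkg=vim" "config net=vpn-only"; do
  hash=$(printf '%s|%s' "$prev" "$event" | sha256sum | awk '{print $1}')
  printf '%s %s\n' "$hash" "$event" >> "$log"
  prev=$hash
done

# Verification recomputes the chain; any edited entry breaks every later hash.
verify=$(
  prev="genesis"
  while read -r hash event; do
    want=$(printf '%s|%s' "$prev" "$event" | sha256sum | awk '{print $1}')
    [ "$hash" = "$want" ] || { echo BROKEN; exit 0; }
    prev=$hash
  done < "$log"
  echo OK
)
echo "$verify"   # prints OK for an untampered log
```

An attacker who edits an old entry would have to recompute every later hash, which fails once the chain head is also anchored off-device (e.g. periodically shipped to the central audit facility).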

Cryptography Infrastructure

All components use some sort of cryptographic algorithms all the time. There are certain aspects you want to handle centrally, such as issuing certificates for servers in private networks and for the devices of the users (the Desktops). Other aspects include providing symmetric key operations for encryption-at-rest (mainly for the organisation; the Desktops have their own mechanisms).

This requires integration with many other components in the organisation, which implies some integration effort.

Monitoring

Monitoring encompasses the collection and filtering of logs and metrics from servers in the organisation and from devices. These need to be available for analysis (e.g. dashboards, querying) and to trigger automated actions on certain events (e.g. metrics crossing a threshold).

Monitoring has a lot in common with SIEM and audit logging. However, its scope is usually much wider and focuses on availability measurement according to service level agreements (SLAs) and service level objectives (SLOs).

Keep in mind that the Linux Enterprise Desktop may have personal data that ends up in the logs. You can anonymise this already on the Desktop before it is sent to the central monitoring solution. You must always get the employee's agreement and be transparent about what personal information you log. Usually, personal information is not needed for security or troubleshooting purposes.
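A minimal sketch of on-device pseudonymisation before logs are shipped. The log format and the salt handling are assumptions; a real deployment would distribute and rotate the salt via central device management:

```shell
#!/bin/sh
# Replace user=<name> tokens with a short salted hash: logs stay correlatable
# per user, but the plain username never leaves the device.
anonymise() {
  salt="org-salt"   # assumption: provided by central device management
  while IFS= read -r line; do
    case $line in
      *user=*)
        u=${line#*user=}; u=${u%% *}                      # extract the token
        h=$(printf '%s%s' "$salt" "$u" | sha256sum | cut -c1-12)
        line=$(printf '%s' "$line" | sed "s/user=$u/user=$h/")
        ;;
    esac
    printf '%s\n' "$line"
  done
}

out=$(printf 'login user=alice from=10.0.0.5\n' | anonymise)
echo "$out"
```

The salted hash keeps lines of the same user correlatable for troubleshooting while the identity stays on the device; with a rotating salt, even that correlation expires.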

Asset Management

IT Asset Management or Data Center Asset Management is used to keep track of the IT assets in your organisation, such as servers, devices, cloud resources etc. This is important for making strategic decisions around them (e.g. replacement, decommissioning, security). Additionally, you can identify lost devices that have not yet been reported as lost, and devices belonging to specific business areas or people.

Device Management

Device Management (also known as Unified Endpoint Management or Mobile Device Management) provides various services to manage the hardware underlying the Linux Enterprise Desktop:

- Ensure a consistent configuration of all devices in the organisation
- Install and update software on the device
- Deploy policies enforcing or monitoring behaviour
- Track location, use and ownership
- Remotely wipe lost or stolen devices

While we focus here on the management of the device of the Enterprise Linux Desktop, it can also encompass management of other devices, such as smartphones, ruggedized devices, Internet of Things (IoT) devices and printers.

Backup and Restore

The organisation needs to back up its core assets, such as data, code and configurations, from the servers in the organisation and from the devices running the Linux Enterprise Desktop. The latter may be achieved by backing up the remote storage to which all devices sync critical files (e.g. home folders). You should have a backup policy defining backup frequency and retention (how long you keep backup data). Both depend on individual needs and compliance/legal obligations.

Your backup must be stored in an immutable way, i.e. it should provide no APIs to modify the backup after it has been written; only new backup data can be added. This can prevent certain malware attacks. Very critical backup data with high integrity requirements should be cryptographically signed.
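One way to get such write-once semantics is an append-only backup repository. A hedged sketch using restic with its REST backend follows; the server name, paths and repository layout are placeholders, and you should verify the exact flags of your tooling:

```shell
# On the backup server: rest-server in append-only mode, so clients can add
# snapshots but never delete or rewrite existing backup data.
rest-server --path /srv/restic --append-only --listen :8000

# On the Desktop: back up the synced user data to the append-only repository.
export RESTIC_REPOSITORY=rest:https://backup.example.org:8000/desktop-42
export RESTIC_PASSWORD_FILE=/etc/restic/password
restic backup ~/Documents
```

Pruning old snapshots then happens only on the server side by a separate, tightly controlled process, which keeps deletion rights away from the Desktops.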

Symmetric encryption, supported by your cryptographic infrastructure, should be the standard. You should use a different set of keys for backups than for encrypting your data to ensure separation of duties, e.g. by having different administrators for the backup and for the data, so that malicious intent of one administrator does not affect you.

Similarly, backup and restore should be independent of your main identity and access management solution, because if that is compromised it may affect your backups. Minimal permissions on the backup should be mandatory, and instead of administrators, only automation should access the backup.

You must think about replicating your backup data over several data centers as part of your disaster recovery strategy, as mentioned before. The locations must not depend on the same vendor and must not be exposed to the same risks (e.g. natural risks).

It is crucial that you regularly test the restore procedures to make sure that restoring data from backup works and the data is not corrupted. During these tests you also verify whether the recovery time objective (RTO) and recovery point objective (RPO) of your organisation are met. Make sure that the recovery procedures are accessible to all relevant stakeholders (multiple!) and that the right persons (multiple!) have access.

You should have a catalogue of your backups in your IT asset management solution, so that in an emergency you have an overview of where your backups are, along with their SLAs and priorities.

Remote Support

Your users of the Enterprise Linux Desktop will need support from your IT Service Desk. Users can raise tickets about issues or questions in your IT Service Management (ITSM) system, so that they are tracked, resolved and linked (e.g. tickets with the same underlying problem or major incidents).

You may need to offer different channels for communicating with your users, such as e-mail, phone or messenger. Keep in mind that not all channels can be trusted, and they may be used to trick your IT support into doing things it is not supposed to do. It is key that you give support staff proper instructions on how to verify users.

Some issues can only be resolved by your IT Service Desk by connecting remotely to the user's Enterprise Linux Desktop. This should of course only happen if the user gives explicit consent on their Desktop, and the user must be able to terminate the remote session at any time. Security is crucial: connections need to be encrypted, only authorised Service Desk staff can initiate a session and, as said, the user needs to confirm it. Additionally, sessions should terminate automatically after a short amount of time.

Application and System Repository

The application repository provides applications, such as Desktop and command line applications, to the end user via private networks.

Applications should be deployed on the Enterprise Linux Desktop in a sandbox, without the full permissions of the user running them, but only the minimal subset needed to run the application. This reduces the risk of malware and the damage it can cause.

The system package repository provides system packages that update the operating system on the Linux Desktop.

Both also store any artifact retrieved from the Internet, so that the Enterprise Linux Desktop can fetch applications and system packages through private networks without relying on the Internet. This avoids potentially unreliable Internet connections and ensures that the Enterprise Linux Desktop keeps working even if the original application or package has been deleted or is no longer accessible.

Network Management

Your organisation's infrastructure, such as servers, requires a secure private network to which Linux Enterprise Desktops can connect in a secure manner. If you have the chance to set it up from scratch, definitely build an IPv6-only network: IPv6 is the current standard, mature and established for some time, while IPv4 is a relic of the past that will disappear. You should define and implement fine-granular segmentation of your network. All critical infrastructure that is mandatory to run your operations should sit in a private network not accessible via the Internet, to ensure reliability. This may also be needed for a secure software supply chain as part of software development activities.

Furthermore, you need a solution that only allows authorised devices with Linux Enterprise Desktops onto your network, to make it more difficult for potential attackers to join it.

Additionally, you need a private DNS (Domain Name System) server to provide names for your servers in your private network.

You will need a public DNS resolver, used by the Linux Enterprise Desktop, for centrally resolving public Internet addresses. This resolver should deny-list known malicious domains, and the list needs to be updated regularly.

It is crucial that all outgoing traffic to the Internet is filtered (e.g. using a proxy) with a set of well-defined allow-listed and blocked domains, to avoid data leakage, loading of malware etc. This proxy may also be used to connect via IPv6 to legacy IPv4 hosts on the Internet.
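As a sketch, such an egress policy can additionally be enforced on the Desktop itself with nftables; the proxy and resolver addresses here are placeholders:

```
# Drop all outgoing traffic except to the internal resolver and the proxy.
table inet egress {
  chain output {
    type filter hook output priority 0; policy drop;
    oif "lo" accept
    ct state established,related accept
    ip daddr 10.8.0.1 udp dport 53 accept     # internal DNS resolver only
    ip daddr 10.8.0.10 tcp dport 3128 accept  # web egress only via the proxy
  }
}
```

With a default policy of drop, anything not explicitly allowed, including direct connections to arbitrary Internet hosts, is blocked on the device in addition to the network-side filtering.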

Desktop

The Linux Enterprise Desktop can be on different types of devices, such as Laptops or mobiles. A user can have one or multiple devices provided by your organisation.

I do not recommend following the bring-your-own-device (BYOD) concept, because it carries high security risks. If you want to allow non-organisation devices to connect, give them a dedicated physical network that is completely isolated from your company network and does not allow access to company systems.

In the following I present components that are present on the Linux Enterprise Desktop, i.e. a concrete device.

Immutable Operating System

The Desktop needs an operating system and, as said, it is based on Linux. There are various Linux distributions, but only some are suitable for an Enterprise Desktop, as I will explain later.

One core principle of modern Linux Enterprise Desktops is that they are based on the concept of immutability. This means changes to the system, such as operating system or application updates, are managed in a controlled way. If a change fails, the newest known-working version is automatically chosen at startup. Thanks to modern file system technologies this happens in an instant.

Another interesting feature is that updates can happen without impacting the applications the user is currently running. Users can simply continue to work without noticing that updates are happening. Updates are applied by automatically selecting, at restart, the file system snapshot to which they were applied (without any need for user intervention). Immutable operating systems thus start very fast in a reliable manner. This also means updates can happen more frequently (at least daily), since the user impact is negligible.

There are also other aspects coming into play, such as application sandboxes that run under a minimal subset of the user's permissions. This reduces the blast radius if they show malicious behaviour (they still need to be vetted and selected before being allowed on the Enterprise Linux Desktop).

Encryption-at-Rest

Encryption-at-rest refers to encrypting all permanent storage of the Enterprise Linux Desktop using symmetric encryption algorithms. This can be supported by specific hardware modules, such as trusted platform modules (TPMs). TPMs allow storing the encryption key in a trusted enclave and protecting it with a PIN or alternative techniques, so it is difficult to extract the key without the right access. It is strongly recommended to require a PIN.
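On a systemd-based distribution this can be set up with systemd-cryptenroll. A hedged sketch follows; the device path is a placeholder, and the flags should be checked against your systemd version:

```shell
# Bind the LUKS2 volume to the TPM2 chip and require a PIN at boot.
systemd-cryptenroll --tpm2-device=auto --tpm2-with-pin=yes /dev/nvme0n1p3

# Also enrol a recovery key and store it outside the device (e.g. in the
# organisation's privileged access management), in case the TPM fails.
systemd-cryptenroll --recovery-key /dev/nvme0n1p3
```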

Encryption-at-rest protects you from attackers who physically steal the device to obtain confidential information about your organisation or to access your organisation through the Enterprise Linux Desktop. It also allows properly deleting data in case the device is sold to third parties or thrown away, by applying the concept of crypto-shredding.

One should remember that in all cases encryption-at-rest can reduce the risk of confidentiality disclosure, but can never fully eliminate it (e.g. due to bugs, failure to apply the latest crypto-algorithms, manipulation of the hardware prior to installation etc.).

Encryption-at-rest preserves confidentiality of the data, but not integrity or non-repudiation of it. If you require those then you need to apply other techniques, such as cryptographic signatures.

Audit

The Enterprise Linux Desktop needs to audit important security events, such as logins, installation of software, changes to relevant configuration or the currently installed versions. These audit logs need to be shared via an agent with the central audit facility of the organisation.

Policies

Organisations need to be able to put policies on the Linux Enterprise Desktop to control its usage and limit potential malicious activities.

You can have different types of policies:

| Type | Description | Examples |
|------|-------------|----------|
| Network | Control network usage | Allow network traffic only through the VPN tunnel; all Internet connections must use the proxy |
| Malware | Detect malware behaviour | Executables that use specific vulnerable library functions |
| Mandatory Access Control (MAC) | Restrict access of a subject to objects/resources | A normal user cannot execute anything in the home folder, cannot bind ports < 100, cannot communicate via ssh to Internet hosts |
| Application | Application policies determine what minimal permissions an application has | Access to the Desktop clipboard, access to specific files/folders, access to devices |
| Agents | Policies for non-AI and AI agents acting on the user's behalf | Can access the user's calendar, can access specific folders/files |

Often these are combined, e.g. MAC and network policies, so you can define which application can connect through which port to a service on localhost.
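A MAC policy combined with a network restriction can be sketched as an AppArmor profile; the application name and paths are hypothetical:

```
# Hypothetical profile: the app may only read Documents, write nowhere in
# home, and gets no IPv4/IPv6 network at all.
abi <abi/3.0>,
include <tunables/global>

profile enterprise-viewer /usr/bin/enterprise-viewer {
  include <abstractions/base>

  deny network inet,
  deny network inet6,

  owner @{HOME}/Documents/** r,
  deny @{HOME}/** w,
}
```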

End User Computing Services

The Enterprise Linux Desktop has at least the following end user computing services:

- Endpoint Security Service: scans new files for malware. While this is an option, you should prioritise defining proper policies, granting applications minimal permissions, isolating applications, allowing only approved applications and controlling Internet egress. This also provides better protection against unknown malware.
- Audit Service: collects system-relevant information (e.g. logins, errors) and pushes it to the organisation's audit trail server and SIEM system.
- Remote Support Service: allows Service Desk staff, after approval by the Linux Enterprise Desktop user, to connect remotely to the Desktop for troubleshooting. The user can terminate the connection at any time and is aware that such a connection is taking place.

Note: Application and system package management uses the mechanisms commonly found in Linux Enterprise Desktops. As mentioned in the organisation section, it is crucial that they connect to the organisation-internal repositories and not to the Internet. This ensures higher resilience and allow-listing of approved packages.

Trusted Platform Module

Nowadays all devices have a Trusted Platform Module (TPM) or a similar kind of device. It can provide functionality such as:

- Support for key management for hard disk encryption
- Support for authenticating a device in the organisational network. Note: you always need user authentication anyway! Device authentication is just another line of defence.

TPMs are not perfect and have flaws; they should be seen as complementary. Keep in mind that if the TPM fails, you need a way to restore data from backups outside the device (encrypted with a non-TPM key) and/or from the aforementioned remote storage.

While a TPM may initially protect a stolen or lost device, you should require your users to always use a passphrase (not stored in the TPM) or an external FIDO2 stick to decrypt the device; otherwise even encrypted data might be recoverable. Additionally, consider functionality for remotely wiping stolen or lost devices.

Verified Boot and Measured Boot

Both address a similar set of security threats from different angles: unauthorised software, such as malware, should be detected before it is executed. Neither is perfect and both may have flaws (e.g. in the implementation). However, they are another line of defence against malware, including malware that aims to persist even across complete reinstallations of the operating system.

Often you need to combine these techniques. For example, you can use a hardware cryptography USB stick with support for signing and sign every update yourself (cf. Evil Maid, Anti Evil Maid). Then, during boot, you can verify with your hardware USB stick present that no unauthorised modifications have happened (e.g. Nitrokey PC). One issue, of course, is if you lose your USB stick or it breaks: then you cannot verify any more and need to sign again with another one (ideally reinstalling everything to make sure only valid components are used). You can also reuse these hardware cryptography USB sticks for enhanced multi-factor authentication (see FIDO2 keys).

A good practice is also to check returned laptops for any visible hardware modifications (ideally you also seal screws etc. with special material).

You can find more considerations on the threat model in the Heads Wiki.

One key aspect is that these mechanisms should be cryptographically signed, based on open source, open hardware trusted platform modules, transparent processes and proper implementations. UEFI Secure Boot is a case where this is not happening: it has flaws, and a single company controls who can sign, while having a clear conflict of interest. The risk is clearly that this company can decide to render your device's cryptography useless. With some additional effort, though, you can use your own signing keys with UEFI.

Note: In case your employees travel to another country, you may want to give them a temporary Enterprise Linux Desktop (e.g. a laptop) that is completely cleaned and wiped after return (have a good procedure for this).

Pluggable Authentication Module

Linux Pluggable Authentication Modules (PAM) provide various means for users to authenticate to the Linux Enterprise Desktop and to the organisation's identity and access management system. Often PAM is used for local login, but this is not sufficient for an Enterprise Linux Desktop, as it needs to authenticate with the organisation.

Many new protocols are supported, but also older and outdated ones. Additionally, hardware devices are supported. Usually your chosen identity and access management solution provides a PAM module, or you can easily develop one to integrate it.

You can define multiple modules and alternatives in case a login method does not work.

It can also include components to protect against brute force attacks on local logins.
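A sketch of such a stack in /etc/pam.d/ syntax; the module choices are assumptions (pam_faillock for brute-force lockout, pam_u2f for FIDO2 keys, pam_sss for a central directory via SSSD) and must match your identity solution:

```
auth     required      pam_faillock.so preauth
auth     sufficient    pam_u2f.so cue
auth     sufficient    pam_sss.so
auth     [default=die] pam_faillock.so authfail
account  required      pam_sss.so
```

Because the FIDO2 and directory modules are marked sufficient, either can satisfy authentication, while the faillock entries lock the account after repeated failures.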

Minimal Permissions, Isolation and Application Sandboxes

Nowadays we can have various software installed on the Linux Enterprise Desktop. Some time ago, all of it ran with the full permissions of the user. While this already reduced the impact of malicious or erroneous software, it still provides more privileges than needed. For instance, even if the password database is encrypted, there is no reason to give any software other than the password manager access to it. Sandboxes help to define minimal permissions for what a piece of software can do, even if the user executing it has more. This is not limited to files but extends to all possible permissions, such as network access, access to Desktop functionality like the clipboard, or hardware such as specific USB devices.

Isolating applications also means that they can have different, possibly conflicting, versions of libraries and underlying system functionality. They can bring their own file system with a dedicated set of libraries that is only accessible to the application in its own virtualised file system. Applications can thus be upgraded and downgraded within seconds. Permissions of the application can be changed dynamically based on enterprise policies.
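For Flatpak-based application sandboxes, permissions can be tightened per application and per user; the application ID here is hypothetical:

```shell
# Remove network access, hide the home folder, and allow only read-only
# access to Documents for one (hypothetical) application.
flatpak override --user --unshare=network --nofilesystem=home \
  --filesystem=~/Documents:ro org.example.Viewer

# Inspect the effective permissions afterwards.
flatpak info --show-permissions org.example.Viewer
```

Such overrides could also be deployed centrally via device management, so that enterprise policies, not upstream defaults, decide what an application may touch.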

There are different isolation levels (e.g. a hypervisor has a higher isolation level than a container, which has higher isolation than a process). Here one needs to be pragmatic about what needs to be protected, as high isolation levels may become cumbersome for users. Local system software usually runs under a dedicated non-human rootless account (i.e. without admin privileges and with minimal permissions).

FIDO2 keys

Usernames and passwords have long been the default way of accessing local and remote systems. While this is still the case, it is changing, because they have issues: they may leak and be reused by malicious actors, e.g. as part of a man-in-the-middle attack; they can be weak and easy to guess; and they cause friction because users have to create a different one for every system and keep track of them all.

FIDO2 keys are small hardware devices with cryptographic modules used for authentication. They essentially address all the problems stated above. They are cheap to produce, and open source as well as open hardware variants exist.

There are other alternatives, e.g. using a second device, such as a smartphone, for login. While these may work, they also have flaws (e.g. the need for yet another complex device).

Usually, for the Linux Desktop, you want them to guard access to your identity and access management system, which then decides whether to give the user access to their Linux Enterprise Desktop.

While FIDO2 is a concrete technical standard, it seems to be currently the common ground for secure authentication.

Commercial offerings for Enterprise Linux Desktop

Generally, an organisation will want an integrated solution covering both the organisational management and the specific Linux Enterprise Desktop on devices. This includes support plans for the issues users and administrators may have with the Linux Enterprise Desktop. Alternatively, you can build it yourself, as sketched in the next section. It is crucial that the provider of such a solution financially supports and/or contributes code to open source; otherwise the solution is not sustainable and is risky to use.

There are multiple potential providers; the following is a non-exhaustive list. They may not support all capabilities mentioned before and/or they may have additional capabilities not described here. Nevertheless, all of them support not only the software of the Enterprise Linux Desktop on the device, but also large parts of the aforementioned organisational management.

Provider Name Desktop Organisational Management
SUSE SUSE Linux Enterprise SUSE Linux Enterprise Desktop SUSE Linux Enterprise Server, SUSE Multi-Linux Manager
Red Hat Red Hat Enterprise Linux Red Hat Enterprise Linux Desktop Red Hat Enterprise Linux (various components for organisational management)
Canonical Ubuntu Pro Ubuntu Pro Desktop Ubuntu Pro

As with any other software, you will need a team to plan and roll out the infrastructure sketched above based on these Linux offerings. There are also consultancy companies, such as B1 Systems, which have decades of experience.

Software for Enterprise Linux Desktop

You can also build a Linux Enterprise Desktop and its organisational management using open source software. This can be based on commercial offerings, or you can use free distributions, such as openSUSE Linux. You should think about how you contribute financially and/or with code/documentation to open source if you use it. This ensures that it evolves in your interest; just leveraging open source without giving back is not sustainable and very risky for your organisation (e.g. the software might lose maintenance or not provide the technology that your organisation needs).

I will describe here some components that you can use to implement the generic architecture above. Keep in mind that you have a lot of alternatives and I do not list all of them. Depending on your organisation's size you may choose different alternatives, and you do not need to implement everything mentioned above.

You should assume to have at least two sites (data centers), where you keep your infrastructure redundantly. The design of a site should take into account certain risks and mitigations. For example, you may want to have redundant power supply in case of power failures or you should take into account fire incidents. However, also the choice of sites should be carefully done. For example, you probably do not want to expose them to the same natural disaster risks (e.g. floods or earthquakes).

Usually the IT security agency of your hosting country provides advice (e.g. find here advice of the German Federal Office for Information Security).

In all cases it is crucial to test whether you can continue to use your IT after a disaster (e.g. switch Enterprise Desktops to different sites regularly to check if this would work in case of a disaster).

This means the two sites should not be in the same building and should have a reasonable distance from each other.

We will only briefly touch the topic of disaster recovery here, but it should take a big place in your organisation.

The following diagram illustrates one possible set of software (among many other alternatives):

Example software for Enterprise Linux Desktop

Note this is just an example, it is neither “the best” solution nor a solution that fits to your organisation.

I will present in the following subsections some software and alternatives.

Organisation

Connectivity

You can use various Linux-based software to enable a point-to-site (P2S) connection, where the client is the point and the site is your organisation's network. This is usually done through an encrypted tunnel through the Internet (which is considered untrustworthy, hence you need to encrypt all traffic).

This is different from a site-to-site (S2S) connection, where you want to connect two different sites (e.g. data centers) of your organisation together.

Popular software for this is WireGuard or OpenVPN. The former is usually more suitable and commonly used for site-to-site connections. While it also supports point-to-site to some extent (Remote Access “Client”), it quickly becomes complex to manage many different “clients”, as you need to manage the public/private keys for each of them (or have a pre-shared secret).
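To make the site-to-site case concrete, a minimal WireGuard configuration on the gateway of one site could look roughly like this. All names, keys and address ranges are placeholders (the IPv6 prefixes are from the documentation range):

```ini
# /etc/wireguard/wg0.conf on the gateway of site A (illustrative)
[Interface]
Address = 2001:db8:a::1/64       ; tunnel address of site A's gateway
PrivateKey = <private key of site A>
ListenPort = 51820

[Peer]
; gateway of site B
PublicKey = <public key of site B>
Endpoint = vpn.site-b.example:51820
; route the whole network of site B through the tunnel
AllowedIPs = 2001:db8:b::/64
PersistentKeepalive = 25
```

Site B mirrors this with the roles reversed; the pain point mentioned above is exactly this pairwise key management, which stays manageable for a handful of sites but not for thousands of roaming clients.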

OpenVPN is suitable for point-to-site connections and supports various means of authentication, usually integrated with the organisation's identity and access management solution (see the following sections).

Remote Storage

Remote storage has traditionally been provided using storage appliances that are connected via protocols such as the Network File System. Today this is no longer the case, as these appliances cannot meet the security and performance requirements of organisations. They are also only suitable for storing files and do not handle file labelling or other types of data (e.g. calendars, mails etc.) well.

Nowadays organisations use cloud storage, usually based on HTTP. An example technology for this is Nextcloud. These solutions require dedicated clients on the Enterprise Linux Desktops.

Enterprises should integrate them in their monitoring to be able to quickly act if synchronisation does not work any more.

A remote storage will still need a dedicated backup to address data loss due to malware or accidental deletion. While such solutions often support versioning, this only helps in certain scenarios, not all (e.g. loss of a data center or deletion of versions).

Identity and Access Management

Identity and Authentication

Nowadays, you will find many identity and access management solutions. It makes sense here to leverage open source solutions and standards to stay fully in control of this critical part of your infrastructure. Another important aspect is that you leverage open standards, such as OAuth, OpenID Connect (OIDC) and System for Cross-domain Identity Management (SCIM), in their latest versions. Older standards, such as SAML2, LDAP or Kerberos, should be avoided to reduce complexity and management effort.

The authentication to the Linux Desktop itself, though, is realised with an IAM-specific Pluggable Authentication Module (PAM) and can use various protocols supported by your selected IAM solution.

There are many open Identity and Access Management solutions:
* Kanidm
* FreeIPA
* Keycloak
* Many more…

You should select for your organisation one that fits to the size of your organisation and future growth. If you are a small organisation then do not choose the most complex one as you will not need many features and thus just have additional work. You may also choose a solution which is supported by your Linux distribution.

Usually you should expose your identity and access management solution only in your private network and not to the Internet, to reduce your attack surface. However, sometimes you may need to do this to support integration with a Software-as-a-Service (SaaS) offering. For those cases you should provide an additional instance that is completely isolated from the instance for internal purposes, to reduce the blast radius of attacks. Of course, users and roles need to be synchronised between the instances, because it would not be user-friendly for each user to have more than one account.

Your IAM solution should support Two-Factor Authentication (2FA), ideally based on FIDO2 keys, for every type of authentication (including login to your Linux Enterprise Desktop). This makes it significantly more difficult to break into your organisation with stolen or leaked passwords.

Policies for the Enterprise Linux Desktop

You need to deploy policies on the Enterprise Linux Desktop. You can do this by providing them in packages that are installed automatically by the Enterprise Linux Desktop. You should provide these packages in your “Application and System Repository” on your private network, from where your Enterprise Linux Desktops fetch them regularly so that they are deployed automatically.

This provides you great flexibility as you can load any type of policy, such as SELinux policies, firewall rules, device rules etc. It also simplifies detecting non-compliant devices of the Enterprise Linux Desktop.
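As a sketch of this packaging approach, a policy can be shipped as a small RPM package. The spec below is hypothetical (package name, file names and the nftables reload are made-up illustrations) and only shows the idea of installing a firewall rules file and activating it:

```
# example-firewall-policy.spec (illustrative)
Name:           example-firewall-policy
Version:        1.0
Release:        1
Summary:        Company firewall policy for Enterprise Linux Desktops
License:        MIT
BuildArch:      noarch
Source0:        company.nft

%description
Deploys the company nftables rules to managed desktops.

%install
# install the rules file into the nftables drop-in directory
install -D -m 0644 %{SOURCE0} %{buildroot}/etc/nftables.d/company.nft

%files
/etc/nftables.d/company.nft

%post
# activate the new rules; ignore errors during initial image build
systemctl reload nftables.service || true
```

Because the policy is just a versioned package, detecting a non-compliant device reduces to checking the installed package version against the repository.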

If you provide Desktop applications then you should only provide curated Flatpak packages on your “Application and System repository” with customised policies on what they can access.

Privileged Access Management

Privileged Access Management (PAM) is an additional solution that is needed for securing privileged access to servers. It usually requires specially hardened (virtual) Desktops with minimal software and privileges that are used as jumphosts to access servers with higher privileges. Detailed audit logs of any actions/changes made via these jumphosts should be collected and centrally stored in a tamper-proof log. Servers themselves should only accept privileged logins via these hardened jumphosts.

PAM solutions may also provide a secret vault for storing and rotating static credentials where it is not possible to use short-lived tokens (e.g. based on Workload Identity Federation).

All privileged accounts must be managed in the IAM solution like normal user accounts. Privileged accounts must not be used for day-to-day work, but only for administrative purposes. This means people who do administrative activities will have two different accounts: a non-privileged user account (e.g. for their Linux Enterprise Desktop) and a privileged account (which they cannot use to log in to their Enterprise Linux Desktop, but only to the jumphosts and servers).

Note: You should completely disable privileged accounts on Linux Enterprise Desktops used by end users. Nowadays this is no longer needed, as even for developers tools exist where this is not necessary (e.g. distrobox). IT help desk staff should also never need privileged access to Linux Enterprise Desktops to support users. Immutable Linux, remote package installation, Flatpaks, and device management (remote software management) make this unnecessary. It is simply too risky to allow users privileged access as root/administrators, and it is also not needed.

Security

Audit

Auditing aims at capturing critical activities in your organisation that could be used to identify threats (e.g. many invalid login attempts). Examples of critical activities are authentication, access to resources (e.g. object storage, VMs, data), session activities (create/terminate), network connections, changes of permissions, changes of configurations, changes of software and changes of infrastructure. For example, cloud providers usually provide an audit log that captures critical activities across all their APIs.

You should collect your audit activities centrally using standard protocols (e.g. OpenTelemetry). You should store them immutably (i.e. they cannot be deleted or modified), define integrity checks (e.g. using cryptographic signatures) to ensure that they have not been tampered with, have reasonable retention periods (it does not make sense to keep audit logs forever) and define minimal permissions so that only the people who need access have it.
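On a Linux system, several of the activity types listed above can be captured with auditd rules; the excerpt below is illustrative (file names and rule keys are assumptions):

```
# /etc/audit/rules.d/company.rules (illustrative)
# watch changes to the user database and sudo configuration
-w /etc/passwd -p wa -k identity
-w /etc/sudoers -p wa -k privilege-change
# record every command executed with root privileges (64-bit syscalls)
-a always,exit -F arch=b64 -S execve -F euid=0 -k root-commands
```

The `-k` keys tag matching events so that the central collector can filter and correlate them later.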

Security Information and Event Management (SIEM)

A security information and event management (SIEM) system is a key part of monitoring IT-security-relevant events in your organisation and on the devices of your Enterprise Linux Desktop. This requires you to collect events from various sources (e.g. audit logs, log files etc.), filter the relevant ones and correlate them.

There exist a couple of open source and commercial platforms, such as Wazuh (open source) or Splunk (commercial).

Often these platforms require deploying agents and connectors that connect to the original event sources, such as logs, databases, object storages etc. You may also need to build your own connectors depending on what software you want to include. You should collect events using standardised interfaces, such as OpenTelemetry.

Additionally, you may want to filter what is sent to your SIEM software, as too many events become unmanageable and/or too expensive. This can be done using custom platforms for stream processing (e.g. based on Apache Flink) or any available or custom-built tool.

Most SIEM solutions build on search solutions, such as OpenSearch, Elastic or Apache Solr, to do real-time analytics. You can also use these to build a SIEM solution yourself.

Velociraptor is a specialised platform for endpoint monitoring (e.g. Linux Enterprise Desktop).

Most of these tools have their own rule language to define correlation and detection rules, e.g. Wazuh XML, OpenSearch Correlation Rules, Solr Streaming Expressions (scheduled using Daemons), Sigma rules, Velociraptor Query Language (VQL) or Splunk SPL.
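Of these, Sigma rules are the most tool-neutral way to express a detection. The following illustrative rule flags repeated failed SSH logins; the field names and the aggregation syntax depend on your log source mapping and Sigma version, so treat it as a sketch:

```yaml
title: Multiple failed SSH logins
status: experimental
logsource:
  product: linux
  service: sshd
detection:
  selection:
    # matches sshd log lines reporting a failed password attempt
    message|contains: 'Failed password'
  # fire when more than 5 failures come from the same source address
  condition: selection | count() by src_ip > 5
level: medium
```

A converter then translates such rules into the native query language of your SIEM backend.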

It is very beneficial to have some rules already predefined so that you do not have to come up with rules from scratch. You may find predefined rules in existing solutions or on various open source repositories. However, make sure the rules work and they cover all risks that you want to cover.

Cryptography Infrastructure

Hardware Security Module (HSM)

A hardware security module (HSM) provides functionality for managing keys and for encrypting/decrypting/signing/verifying signatures of (small amounts of) data. It is usually an additional device in your server, but can also come in other form factors. HSMs provide additional protection of key material, and some of them also provide functionality to securely synchronise sensitive key material across different data centers, which is important: if you lose the key, you lose the data encrypted with it. They are highly specialised for cryptographic tasks, which also means the attack surface is significantly reduced.

Usually, the keys never leave the hardware; instead, the hardware provides APIs where one can ask it to encrypt/decrypt data. These are only small amounts of data, because it would be too slow to process very large datasets. Hence, when you need to encrypt large data volumes on object or block storages, the HSM usually only encrypts/decrypts a “data key” that is used for encrypting/decrypting the real data, and this data key is stored in encrypted form alongside the encrypted data. The data itself is then encrypted using a software solution.
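This “data key” pattern (envelope encryption) can be sketched with plain openssl. Here a locally generated RSA key pair stands in for the key-encryption key that would really live inside the HSM, and all file names are illustrative:

```shell
# Simulate the HSM-held key-encryption key (KEK) with a local RSA key pair.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out kek.pem 2>/dev/null
openssl pkey -in kek.pem -pubout -out kek.pub

echo "confidential payroll data" > data.txt

# 1. Generate a random data key for the bulk data.
openssl rand -hex 32 > datakey.hex

# 2. Encrypt the bulk data locally (in software) with the data key.
openssl enc -aes-256-cbc -pbkdf2 -pass file:datakey.hex -in data.txt -out data.enc

# 3. Wrap the data key with the KEK. Only this small operation would be
#    delegated to the HSM; the wrapped key is stored next to the data.
openssl pkeyutl -encrypt -pubin -inkey kek.pub -in datakey.hex -out datakey.wrapped
rm datakey.hex

# Restore: unwrap the data key via the KEK, then decrypt the bulk data.
openssl pkeyutl -decrypt -inkey kek.pem -in datakey.wrapped -out datakey.unwrapped
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:datakey.unwrapped -in data.enc -out data.dec
```

Note how the slow asymmetric operation only ever touches the 32-byte data key, while arbitrarily large payloads are handled by fast symmetric encryption in software.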

However, they are only one building block in a cryptographic infrastructure and one must take into account the whole security architecture.

Ideally, HSM modules are based on open hardware and open source (e.g. Nitrokey NetHSM). They come in different sizes, ranging from a single chip to server systems that replicate key material across different data center sites.

Note: Devices that run the Enterprise Linux Desktop usually come with their own HSM-like module, there often called a “trusted platform module” (TPM), which has similar functionality, but only for the individual device.

Key Management Service

As mentioned before, an HSM is just a building block of the cryptographic infrastructure. You need to offer cryptographic operations to other applications/services (e.g. object storage) in your organisation. Key Management Services (KMS) provide these functionalities behind simple HTTP APIs and manage high availability, multi-tenancy as well as disaster recovery aspects in the background. Their main use case is supporting encryption-at-rest.

An example of such a service is Cosmian.

Private Certification Authority (CA)

Key Management Services provide capabilities for encryption-at-rest. Encryption-in-transit for organisation-internal traffic requires a private certification authority (CA) as part of a Public Key Infrastructure, which signs the public certificates of servers with its own private key using an HSM. This ensures that a server is genuine and that there is no server in the middle capturing and decrypting the traffic between clients, such as the Enterprise Linux Desktop, and servers in the organisation. Certificates can also be used for authorising users and devices to access the organisation's networks. The CA also keeps track of revoked certificates (e.g. because the corresponding private key has been leaked).

Examples for private certification authorities are Cosmian, step-ca or XiPKI.

Monitoring

There are various monitoring solutions on the market. Some encompass the complete lifecycle (e.g. collection, filtering, metrics, alerts, dashboards and actions) and some only parts. Collection on Enterprise Linux Desktops is done usually using agents that write to a log aggregator which indexes the logs and stores them on a storage (due to the large amount of data this is often an object storage) centrally for central monitoring. This is also done on servers or any other form of compute (e.g. “functions”).

You can log events (e.g. “user a logged in”) and metrics (e.g. CPU usage in the last minute).

You should try to enforce standards for structured logging across your application landscape whenever possible (e.g. using JSON Lines, Logstash JSON, Elastic Common Schema (ECS) or Graylog Extended Log Format (GELF)). You may also provide logs directly through OpenTelemetry endpoints. Structured logging facilitates later processing and analysis. Additionally it can improve processing performance significantly.
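As a minimal illustration of structured logging, an application or script simply emits one JSON object per line. The helper function and field names below are made up (loosely following ECS naming conventions):

```shell
# Emit one structured JSON Lines log event per line; downstream
# collectors (e.g. an OpenTelemetry agent) can parse the fields
# directly instead of applying fragile regular expressions.
log_event() {
  printf '{"@timestamp":"%s","log.level":"%s","message":"%s","host.name":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" "$(uname -n)"
}

log_event info "user alice logged in"
```

Because every event carries the same machine-readable fields, filtering (e.g. all `error` events of one host) becomes a cheap field comparison instead of free-text parsing.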

Find here some examples for software for monitoring:

There are different query languages, so you should choose an open standard, such as Loki (LogQL) or Prometheus (PromQL). Additionally, you can configure rules that create alerts (e.g. CPU at 100% over 10 minutes). These are stored in a time series DB, such as Prometheus.
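For illustration, the CPU alert mentioned above could be expressed as a Prometheus alerting rule. The metric and label names are assumptions (they match the common node_exporter naming):

```yaml
groups:
  - name: desktop-alerts
    rules:
      - alert: HighCpuUsage
        # fires when average CPU usage stays above 90% for 10 minutes
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "CPU usage above 90% for 10 minutes on {{ $labels.instance }}"
```

The `for:` clause is what turns a momentary spike into a sustained condition before the alert fires.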

Sometimes you may want to trigger actions on a given alert (e.g. restart a service). You can do this by polling metrics or alerts to trigger any computation (e.g. a script or HTTP request).

You should also monitor your monitoring tool itself, so that you notice when it is down. This can be a dedicated, separate monitoring instance just for your monitoring tool.

Asset Management

IT asset management tools (or Configuration Management Databases (CMDB)) are mainly large databases with a web frontend to track your IT assets (or Configuration Items (CI)). They may also offer additional applications to support your yearly inventory process, e.g. by providing bar codes or QR codes for your assets that can be scanned with an app or another reader.

Some tools also offer software agents to automatically collect asset information and keep it up-to-date.

Often they are combined with IT Service Management (ITSM) solutions.

Examples for asset management solutions are:
* GLPI
* iTop
* Ralph - asset management
* i-doit
* Snipe-IT

Device Management

Device management can cover various functionalities to manage the lifecycle of a device. You should take care that the solution covers the operating systems and devices your users use (e.g. the Linux distribution of your Enterprise Linux Desktop) or specialised devices (e.g. IoT). Main functionalities are deploying a company-vetted and updated image to devices and local/remote wiping of devices (e.g. in case they get lost or stolen). While some solutions also offer management of updates, this is often not needed on Linux Enterprise Desktops, as they usually fetch the latest software from standard private repositories as mentioned. Updates can include firmware, operating system, policy and application updates.

You may deploy agents to report on compliance status of installed software.

Examples for device management software are:
* Opsi
* Relution
* Apptec360
* Baramundi
* SUSE Multi-Linux Manager
* B1 Linux Client Management (LCM)

Operating system images can be deployed locally or remotely using standards such as the Preboot Execution Environment.

While bring-your-own-device may be possible with some of the solutions, it is not recommended as it has additional security risks and complexity leading to higher support costs.

Backup and Restore

Backup and restore is a complex topic in itself and can only be briefly touched upon here. Make sure that you do in-depth research, have your solution audited by different independent experts and collect different experiences of how to do backup. While the devices of the Enterprise Linux Desktop will sync the most important company data and user assets to remote storage in your data center(s), you need backup and restore capabilities for all your server, infrastructure and data assets in your data center(s).

IT IS CRUCIAL THAT YOU DO REGULAR RESTORE TESTING OF YOUR BACKUP - OTHERWISE A BACKUP IS WITHOUT VALUE

Prioritise the testing based on data value and risks.

As said, your software should encrypt your backups by default and integrate with your cryptographic infrastructure. Use different keys for your backups than for encrypting the original data; this enables segregation of duties.

Software will give you only one part of the solution; you need to design your restore procedures and use backup storage software that provides functionality such as immutability of data (e.g. by using Ceph with object versioning and locking). The storage should also be performant, so that in case of a disaster you can restore quickly and do not need weeks to restore.

You must integrate your backup solution into your monitoring to check regularly whether backups run successfully, in ADDITION to regular restore tests. Of course, you also need to back up your monitoring system!

You may need different backup solutions for VM snapshots, object storage, databases or selected files. Solutions for selected files are usually installed as agents on servers. Databases often come with their own backup tools, which are recommended for obtaining a consistent database backup.

Some examples for backup software are:
* Ceph Multi-Site Replication
* Borg
* Postgres Database Backup
* Velero
* Bareos
* Kopia

Remote Support

IT Service Management Ticket System

All interactions of your help desk/support team should be tracked as tickets in a central IT Service Management ticketing system. This enables your organisation to ensure that issues are solved and to identify any blockers. Additionally, it allows work on issues to continue if a help desk member is on vacation or leaves your company.

Your ITSM ticketing system may need to integrate with your IT asset management system to seamlessly track issues related to specific assets, such as Linux Enterprise Desktops. Often you find both functionalities in the same tool.

The following tools are examples for ITSM (and sometimes asset management) software:
* GLPI
* iTop
* Zammad
* OTRS
* Snipe-IT

As said before you may need to provide different communication channels to support your users (e.g. mail, phone or messenger). You need to make sure that in all channels the identity of the user is properly verified to avoid security breaches.

Remote Access

One challenge with remote support is that it is not simply about having a remote desktop server running on the Linux Enterprise Desktop. That would be a huge security issue (especially in company-external networks, but also in general). Additionally, at all times the user of a Linux Enterprise Desktop must give consent before a help desk employee can see the screen of the user's desktop, and at any point in time the user must be able to disconnect the help desk staff. This is important not only for the user's privacy, but also to protect critical company information assets. The user must at all times be sure that only authorised help desk staff can connect to their Linux Enterprise Desktop.

Additionally, all interactions by help desk support staff must be logged in an audit log that is stored and protected centrally in your organisation.

Note: Often it is better to rely on a monitoring system instead of remote access for the IT help desk to understand the issue. This is more secure as remote access could be an entry point for an attacker. Furthermore, by integrating your monitoring system you may also use automation to resolve the issues users face.

Not all software may support all these features. Especially the management of the enterprise setting (e.g. security, explicit approval by users, automatic identification of the user's device etc.) requires a more in-depth comparison, as this is not simply remote desktop software.

Examples for remote support software are:
* RustDesk
* TeamViewer
* AnyDesk

Application and System Repository

Application and system repositories are not necessarily complex applications, but you should take care AND test that they can handle peak loads, such as days when a lot of system and/or application updates affect a wide range of users.

Additionally, you should look for solutions that are able to block specific applications/libraries/system components, because they may be outdated or impose a too high security risk for your organisation.

Often these solutions have additional functionalities, such as scanning for known vulnerabilities or for packages that pose operational risks (e.g. no longer maintained). This does NOT encompass scanning for unknown vulnerabilities; that should be part of testing a package. It also does not help you if you configure your application insecurely.

There are many different package formats for applications and libraries. Additionally, there are many different repositories. While you will need to support multiple formats and possibly different repositories, you should think about supporting only a selected set of trusted repositories. Ideally you have an agreement with them to use them and fund them (either through commercial contracts or donations) to ensure that they implement security measures, so they can be trusted in the future. Ideally, they are operated and controlled within the same region where your business operates (e.g. the European Union), as this facilitates legal aspects and reduces geopolitical risks.

Examples for package formats:
* Flatpak - mainly for desktop and mobile applications; includes a sophisticated desktop/mobile application permission management system
* Open Container Initiative (OCI) (image specification) - default format for software containers in many environments
* RPM Package Manager (RPM) - used by many distributions, such as SUSE, openSUSE and Red Hat
* Deb - used by distributions related to Debian
* Arch Linux Package Management (ALPM)

Only recently have standards emerged to properly describe packages and their dependencies (software bill of materials (SBOM)) to enable secure software supply chain analysis:
* CycloneDX - mainly for security operations and vulnerability management
* Software Package Data Exchange (SPDX) - mainly for procurement, licensing, governance and legal
* Software Identification Tags (SWID) - mainly for software inventory and asset management
* Vulnerability Exploitability eXchange (VEX) - describes software vulnerabilities and their exploitability (often used by security posture management solutions)

Make sure that they are available and cryptographically signed.

The Cryptographic Bill of Materials (CBOM) is an inventory of cryptographic assets, such as cryptographic algorithms and their configuration.

You should refrain from packages that do not come in a package format, i.e. where the installer is only available as a shell script or archive. This is much less secure and does not work well with immutable Linux distributions.

Technologies such as OSTree optimise the download and installation process for packages (e.g. used by Flatpak), as they do not require fetching the full content if only a small update has been made. OSTree also allows, in case of layering, exchanging a layer (e.g. a standard library) with another version (if supported by the application).

The following solutions allow you to create your own repositories and remotes (a remote is comparable to a proxy and cache):
* Pulp
* Forgejo - currently only own packages and no remotes
* JFrog Artifactory
* Sonatype Nexus

You need to configure your package managers to use your private repository and not the Internet. Additionally, you should configure your organisation's Internet egress control so that it does not allow direct access to Internet repositories.
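On an RPM-based Enterprise Linux Desktop this could, for example, be a repository definition pointing at the internal mirror. The URL, repository name and key location below are placeholders:

```ini
; /etc/zypp/repos.d/company-apps.repo (illustrative)
[company-apps]
name=Company Application and System Repository
baseurl=https://repo.internal/company/apps
enabled=1
; always verify package signatures against the company signing key
gpgcheck=1
gpgkey=https://repo.internal/company/keys/signing-key.asc
autorefresh=1
```

Shipping this file itself inside a policy package (see “Policies for the Enterprise Linux Desktop”) keeps the repository configuration under central control as well.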

Network Management

IP Address Management (IPAM)

IP Address Management (IPAM) collects and tracks centrally which servers and devices use which IP addresses (in IPv6 you often assign a /64 address space to an individual server or device). This is not for device authorisation (see the next section), but simply to ensure that two devices or servers do not reuse the same IPv6 address range. Additionally, IPAM provides other functionalities, such as visualising your network.

Example tools supporting IPAM:
* Some of the tools mentioned in the section “Asset Management” support IPAM out of the box or with plugins
* phpIPAM

Device Authorisation

You must control which devices access your on-premise physical network (this is different from the Connectivity scenario, which deals with access from outside). This should not be based on static IPs, but instead on implementing the IEEE 802.1X standard using device certificates issued by your private CA to the device. You must NOT use shared secrets for authentication, as this is less secure in case they leak.

You must activate RadSec to avoid critical authentication information being sent in clear text.

On the client side you can use pam_radius or wpa_supplicant for this for wired and wireless networks.
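A wired 802.1X client configuration with wpa_supplicant could look roughly like the following; all paths, the identity and the certificate locations are placeholders:

```
# /etc/wpa_supplicant/wired-8021x.conf (illustrative)
ap_scan=0
network={
    key_mgmt=IEEE8021X
    eap=TLS
    identity="desktop-0042.corp.internal"
    # CA of the private PKI and the device certificate issued by it
    ca_cert="/etc/pki/company/ca.pem"
    client_cert="/etc/pki/company/device.pem"
    private_key="/etc/pki/company/device.key"
}
```

`ap_scan=0` is what switches wpa_supplicant into wired mode; for Wi-Fi the same EAP-TLS block is reused with an SSID added.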

On the server side you need a RADIUS server to manage the authentication of the device. Example RADIUS servers are:
* FreeRADIUS
* Most of the software mentioned in the section Identity and Authentication has a built-in RADIUS server. If you already use such software, then consider using its RADIUS server first.

Any device without a certificate (i.e. non-company devices) will be denied access, and thus they cannot sniff on your network or manipulate packets.

This may not be feasible with certain types of IoT devices. You should run them in a dedicated virtual network as they are less trusted than properly authenticated devices. There is also a risk that other non-authorised devices may join this dedicated virtual network, so make sure that it is properly isolated from the real company network (ideally also physically).

IPv6 and private networking

Private networking and IPv6 is a complex topic in itself. If you have the chance to set up your network from scratch, then opt for an IPv6-only network with only a few endpoints to translate to legacy IPv4 (e.g. for legacy Internet websites).

As said before you should do micro-segmentation. Always have disconnected development and test networks from the production network.

IPv6 does not have a direct equivalent to IPv4 private networks (RFC 1918). The reason for IPv4 private networks was not security, but simply IPv4 address scarcity. This problem does not exist in IPv6.

There are unique local addresses (ULA) in IPv6, but they are for a different use case (e.g. private household routers that might be offline from the Internet and thus have not received an IPv6 address from their Internet provider).

Instead, you should simply buy a public IPv6 address range and split it into a private part and a public part. Make sure that the private part is not routed to the Internet (only traffic coming from the public part is). This also means that you need to manage your network traffic centrally, which you should do anyway for security reasons. You should assign each Enterprise Linux Desktop an IP address range (/64) and not a single IP address, as this is common in IPv6 and the client may need multiple IP addresses anyway for various connectivity options (e.g. wireless network, wired network etc.).

Private DNS

You will have a couple of servers in your organisation's private network (especially if you implement everything described here). Thus, you need a Domain Name System (DNS) server to provide names for them. This gives them human-understandable addresses and also allows you to specify load balancing for high availability and/or disaster recovery scenarios.

Since recently, you should use the .internal top-level domain for this. Alternatively, you can purchase your own top-level domain that you only use internally. However, this costs more and requires significant maintenance effort. Do not use public top-level domains for your private network (e.g. .de, .eu etc.). If you do this, attackers can buy the domains that you use for internal purposes and use them to attack your network/clients.
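For illustration, a zone file for a company zone under .internal could look like this; all names are placeholders and the addresses come from the IPv6 documentation range:

```
; zone file for corp.internal (illustrative)
$TTL 3600
@       IN SOA  ns1.corp.internal. hostmaster.corp.internal. (
                2024010101 ; serial
                3600       ; refresh
                600        ; retry
                604800     ; expire
                300 )      ; negative caching TTL
        IN NS   ns1.corp.internal.
ns1     IN AAAA 2001:db8:100::53
repo    IN AAAA 2001:db8:100::10   ; application and system repository
idm     IN AAAA 2001:db8:100::20   ; identity and access management
```

Because .internal is reserved for private use, this zone can never collide with a name an attacker could register publicly.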

Example software for managing your private DNS server:

* PowerDNS
* BIND
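A minimal sketch of what such a private zone can look like (BIND zone file syntax; all names and addresses below are examples, using the IPv6 documentation prefix):

```
; sketch of a private "corp.internal" zone
$ORIGIN corp.internal.
$TTL 3600
@        IN SOA  ns1.corp.internal. hostmaster.corp.internal. (
                 2025010101 ; serial
                 7200       ; refresh
                 3600       ; retry
                 1209600    ; expire
                 300 )      ; negative caching TTL
         IN NS   ns1.corp.internal.
ns1      IN AAAA 2001:db8:10::53
backup   IN AAAA 2001:db8:10::10
; two records for one name give simple round-robin load balancing
storage  IN AAAA 2001:db8:10::20
storage  IN AAAA 2001:db8:10::21
```

The duplicated `storage` records illustrate the load-balancing aspect mentioned above; for real high availability you would combine this with health checks or dedicated load balancers.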

Public DNS Resolution

All Linux Enterprise Desktops will need a DNS server to resolve public addresses on the Internet. This DNS server should also filter DNS requests for known malicious or inappropriate domains. These can be based on existing blocklists on the Internet. This measure should be seen as an addition to a filtering proxy (see next section); DNS filtering alone is not sufficient, because malware would still be able to connect using raw IP addresses.

You can use similar software as in the previous section, but for public DNS resolution you usually only need a subset of it (e.g. PowerDNS Recursor). You should definitely run separate and isolated DNS server instances for private networking and for public DNS resolution.
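As an illustration of DNS-level filtering, here is a sketch in Unbound configuration syntax (PowerDNS Recursor offers comparable mechanisms, e.g. via RPZ or Lua; the domain names below are placeholders):

```
# unbound.conf sketch: refuse resolution of known-bad domains
server:
    # entries like these are typically generated from a public blocklist
    local-zone: "malicious.example." always_nxdomain
    local-zone: "tracker.example." always_nxdomain
```

`always_nxdomain` makes the resolver answer NXDOMAIN for the listed zones and all their subdomains, so clients never learn the real addresses.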

Internet egress filtering

It is imperative, and one of the most important measures in an enterprise, to filter outgoing (egress) Internet traffic. This allows you to centrally allow/deny flows to the Internet and block web sites that are not needed or known malicious.

As a prerequisite, you need to forbid any direct Internet traffic via firewall on all servers and desktops. They can only connect through a private web proxy (forwarding proxy). This proxy should by default deny any access to known malicious or harmful sites on the Internet. As a start you can use one of the many public blocklists. If you need to detect malicious traffic, you may need to break TLS encryption in the proxy by deploying a custom certificate (signed by your private certification authority). The public part needs to be deployed on all Linux Enterprise Desktops including their browsers (see “Application and System Repository”).
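Such a default-deny egress firewall on the desktop can be sketched with nftables; the proxy and DNS addresses and the proxy port below are examples:

```
# nftables sketch: drop all direct egress, allow only proxy and internal DNS
table inet egress {
    chain output {
        type filter hook output priority 0; policy drop;
        oifname "lo" accept
        ct state established,related accept
        ip6 daddr 2001:db8:10::8 tcp dport 3128 accept comment "forwarding proxy"
        ip6 daddr 2001:db8:10::53 udp dport 53 accept comment "internal DNS only"
    }
}
```

With a `drop` policy on the output chain, anything not explicitly allowed (including direct connections to arbitrary Internet addresses) is silently discarded.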

Possibly, you should also deny certain ad servers, as they can be used to spy on your organisation and/or the location of specific employees.

Additionally, you should also enable a deny list in your DNS resolver (see previous section).

Furthermore, for servers in your organisation, you should configure a default deny all rule. For each server you should only allow access to the Internet web pages that it needs to access. This is individual and minimal per server. Servers must not connect directly to the Internet for operating system or application updates - they should instead use the private “Application and System Repository” mentioned previously.

This is relatively easy to configure in the proxy, as you can deny all outgoing Internet access by default based on the IP address range of your server landscape.
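A hedged sketch of such rules in Squid configuration syntax (the network range and list files are hypothetical; real deployments will have one allow list per server or server group):

```
# squid.conf sketch: default-deny egress with explicit allow lists
acl localnet src 2001:db8:10::/64
acl blocked  dstdomain "/etc/squid/deny/malware.txt"   # from public blocklists
acl allowed  dstdomain "/etc/squid/allow/sites.txt"    # minimal per-server list

http_access deny blocked
http_access allow localnet allowed
http_access deny all
```

Squid evaluates `http_access` rules in order, so the final `deny all` implements the default-deny stance described above.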

Since you may have many Linux Enterprise Desktops, take care to size the infrastructure correctly, e.g. have enough proxy servers to serve all requests with high availability and provide the bandwidth needed.

Another benefit of a proxy is that, if you have an IPv6-only internal network, the proxy can transparently connect to legacy IPv4 Internet web sites for you without the need to support IPv4 internally. Simply provide IPv4 connectivity only to the proxy server.

Example software for proxying and filtering Internet traffic:

* Squid
* HAProxy
* Apache HTTPD with proxy module

Desktop

Immutable Operating System

Immutable operating systems have emerged in recent years, mainly at the server level (e.g. openSUSE MicroOS, SUSE Linux Micro, RHEL 10 image mode).

Key features are:

* Linux is immutable, which means updates bring the system from one stable state into another stable state. An update cannot break the system. Techniques such as btrfs snapshots enable this. If the system cannot boot successfully, it simply selects the last known working snapshot automatically.
* There is only a minimal set of base packages installed. All other services/applications are deployed as OCI containers, which can be replaced with different versions using Podman (Podman systemd aka Quadlet). It can also automatically fall back to a known working version.
* Deployment of updates happens into a file system snapshot, meaning updates do not impact a running server, and restarting into the new updated version takes a few seconds without longer downtimes.
* Updates are done more frequently (e.g. every 24 hours), as they usually do not cause any trouble.
* You can change configurations of your system and have user data (home folders) - immutable does not mean unchangeable (see first point).
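The container-based deployment mentioned above can be sketched as a Quadlet unit; Podman generates a systemd service from a file like this (the image name and registry are examples):

```
# /etc/containers/systemd/myapp.container (Quadlet sketch)
[Unit]
Description=Example application deployed as an OCI container

[Container]
Image=registry.corp.internal/myapp:1.4
# let podman-auto-update pull and roll back to newer/working images
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, the container runs as the service `myapp.service`, and `podman auto-update` handles the version replacement and rollback behaviour described above.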

The idea has also been brought to desktops. While it is not yet as common as on the server side, it has a lot of benefits:

* You can install updates to the operating system or applications without impacting the user.
* The updated system is available nearly instantly after restart. If it does not work, the system automatically reverts to the last known working version.
* Flatpaks package desktop applications and allow fine-granular permission management (e.g. clipboard access, Internet access, file access etc.), so that an application only has access to what it needs and usually far fewer permissions than the user running it. This can effectively reduce the blast radius of malware.
* You can allow only selected and vetted applications to be installed through the application/system repository.
* You can update the Linux Enterprise Desktop every 24 hours without impacting the user negatively.

Examples of immutable Linux desktops for the Linux Enterprise Desktop:

* Kalpa Desktop - immutable Linux desktop based on openSUSE Tumbleweed and KDE
* Aeon Desktop - immutable Linux desktop based on openSUSE Tumbleweed and GNOME
* Fedora Kinoite - immutable Linux desktop based on Fedora Atomic and KDE
* Fedora Silverblue - immutable Linux desktop based on Fedora Atomic and GNOME

Note: As of 2025, immutable Linux desktops are considered previews, but they are often very stable and expected to be enterprise-ready soon.

There are others, but some have a different focus than enterprise (e.g. Bazzite for Gaming Desktops).

Kairos is an immutable Linux framework that works across Linux distributions.

Encryption-at-Rest

Encryption-at-rest is crucial to protect your critical enterprise data on the device running your Enterprise Linux Desktop.

You can use dm-crypt for full-disk encryption of all filesystems (including temporary ones, such as swap). You should use modern symmetric encryption algorithms (e.g. AES-256) with modern cipher modes (e.g. XTS). Ideally, the CPU of your device has hardware support for the encryption, as this will significantly increase performance.

If you use software-based encryption, a passphrase needs to be defined to decrypt the devices when booting. You can define multiple passphrases (e.g. one selected by the owner and another automatically generated recovery phrase, see further below). You should select a modern key derivation function, such as Argon2id with known secure parameters.
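Putting these choices together, a LUKS2 setup can be sketched as follows (the device path is an example; run this only on an empty partition, as it destroys existing data):

```
# sketch: LUKS2 with AES-XTS and Argon2id key derivation
cryptsetup luksFormat --type luks2 \
  --cipher aes-xts-plain64 --key-size 512 \
  --pbkdf argon2id \
  /dev/nvme0n1p3

# add an automatically generated recovery passphrase to a second key slot
cryptsetup luksAddKey /dev/nvme0n1p3
```

The second command implements the multiple-passphrase idea above: the owner's passphrase and the recovery phrase occupy separate LUKS key slots and can be revoked independently.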

Alternatively you can use a trusted platform module (TPM) - see below. This stores the key in a safe area and you must define a password/pin to access it.

Another alternative is to use one or more FIDO2 keys (see below) for device encryption. You usually need to only attach them during start of the device.

You can use also a combination of these methods (e.g. a FIDO2 key for decryption by user and a complex password recovery phrase for recovery in case of loss of the FIDO2 key).

Independent of the method, you should always require a password/PIN for device encryption; otherwise attackers may be able to decrypt the device too easily if they gain access to it.

As already described in the organisation section “Remote Storage”, you should synchronise company-critical files and user settings with a remote storage running in your data center that is backed up. This can then also serve as a backup for your Enterprise Linux Desktop in case it gets lost or the user forgets the access password or loses the hardware token to access the device.

Additionally, you may store Enterprise Linux Desktop recovery keys centrally so they can be used in exceptional situations to recover the device key. You should have clear procedures for who may access them and how (an “Asset Management” or “Device Management” solution may provide this functionality); e.g. your service desk staff must make sure that it hands the recovery key only to the owner of the device and to no one else.

Policies

As mentioned previously, you can have multiple types of policies. Each type of policy may be enforced and/or monitored through one or more combinations of technologies.

Generally, Linux distributions already come with a set of policies - also for the Linux desktop. I recommend reviewing them and enhancing them based on your needs. Make sure, and test regularly (ideally in an automated fashion), that they are enforced. Some policies only detect issues but do not prevent them. This is still useful, but you must make sure that those detections end up in your SIEM system. You will probably also find a lot of additional policies in various open source repositories or based on best practices, such as the CIS Benchmarks.

However, you need to be clear on what you need. This means you should define a threat model that is relevant to your organisation. For example: can employees install any software? What would be the risk, and how do you mitigate it (e.g. by limiting the permissions of the users so they cannot install any software)?

Additionally, audit tools, such as Lynis, can give you hints if you have missed something.

You can distribute policies simply via a package that is installed on all your Linux Enterprise Desktops and automatically upgraded through the package manager (see section “Application and System Repository”).

Example technologies:

* Networking: nftables, SELinux network policies
* Malware detection: YARA rules
* Mandatory Access Control (MAC) for files, system calls, cgroups and other resources: SELinux
* Application/desktop: Flatpak sandbox permissions. Note: you may need to repackage a Flatpak to include your organisation-specific permissions.
* Agents are often no different from applications, i.e. the same policy languages and tools also apply to them. The point is that you do not forget about them. Additionally, they should run under their own limited user identity in the Linux system and not under the identity of the user. Although not recommended due to significant security risks, you could run them under the user identity if there is no other way, but make sure, using the previously mentioned policy languages and tools, that they have minimal permissions (and carefully assess dangerous permissions, such as network access or writing to remote storage).

Audit

You need to keep an audit log of denied actions on the Linux Enterprise Desktop and regularly transfer it to your organisation's SIEM and logging/monitoring system to be analysed for suspicious activities or problems.

This audit log is often provided by the policy engine described previously, e.g. SELinux audit logs. Usually many audit logs leverage the Linux audit subsystem (see next section).

The audit log itself can be transferred to the SIEM and/or Logging/Monitor system using software agents (clients) mentioned in the corresponding sections.

End User Computing Services

End user computing services support security and troubleshooting. Examples of end user computing services:

* Endpoint security service: We already mentioned above better alternatives to endpoint security (e.g. policies, isolating applications, allowing only approved applications, Internet egress control, stripping macros from documents), but you can scan using YARA rules (e.g. triggered via inotify) and/or use scanners, such as ClamAV. You will find signatures for them in public repositories. However, make sure these are reliable and cover the malware you need to cover. Another powerful tool to detect whether the device has been tampered with is the Advanced Intrusion Detection Environment (AIDE). Velociraptor provides real-time detection of malicious activity.
* Audit service: auditd allows you to specify audit events and log them. SELinux can also leverage this.
* Remote support service: We mentioned in section “Remote Access” various technologies for the server part. These also provide software for the client part, which you can deploy on the Enterprise Linux Desktop so that service desk staff can connect to it upon request. However, you should try to avoid the need for remote access software where possible and rely on logging/alerting as well as automated package installations.
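As an illustration of the audit service, a few auditd rules might look like this (a hedged sketch; which files and syscalls you watch depends on your threat model):

```
# /etc/audit/rules.d/desktop.rules (sketch)
# watch identity and privilege files for writes and attribute changes
-w /etc/passwd -p wa -k identity
-w /etc/sudoers -p wa -k privilege
# log every command executed with root privileges
-a always,exit -F arch=b64 -S execve -F euid=0 -k root-exec
```

The `-k` keys tag matching events so they can be searched (e.g. `ausearch -k privilege`) and correlated in the SIEM after the logs have been shipped there.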

Trusted Platform Module (TPM)

Many devices for the Linux Enterprise Desktop have a trusted platform module (TPM) that is also supported by Linux. It can be used for encryption-at-rest or for storing secrets (e.g. certificates for device authentication). Additionally, it can be used for verified and/or measured boot.

Nevertheless, these TPMs are often not very open and may contain backdoors. They may also not be maintained. It is up to you to decide whether you want to use them. External FIDO2 keys can be based on open source hardware and software; thus, they can be a viable alternative.

Generally, if you use a TPM, you should enforce that any action using the TPM (e.g. decrypting a hard drive) at least requires that the user has entered a passphrase of adequate length once during startup, or uses an external hardware device, such as a FIDO2 key, with a passphrase of adequate length.

Verified Boot and Measured Boot

Unless you have your own private CA and CI/CD pipelines signing your firmware/bootloader, as well as the possibility to deploy custom public keys for verification on your devices, you will rely on existing verified boot (e.g. provided by a third party that you may or may not trust). Additionally, you may deploy a solution for the user that implements measured boot and provides more control over what is verified. Measured boot requires more investment, as it requires providing USB security dongles or an app. You also need to think about what happens if the user loses them. However, it is very useful for computers at high risk (e.g. containing secret data), so you can train the users handling this type of data.

In both cases the software solution should highlight any issue to the user. You may then carefully assess if you allow the users to proceed anyway or if they need to contact your IT support team.

Note: Negative results for verified boot or measured boot imply that someone was able to tamper with the firmware and/or bootloader. While this is not impossible, the risk can be reduced by following other aspects described here (e.g. controlled installation of packages, minimal permissions, SIEM, Internet egress control, temporary devices for travelling to untrusted countries etc.). Once tampering is detected, the user also cannot continue to work with the laptop and IT support needs to provide a new device. Nevertheless, verified and measured boot can still make sense in certain scenarios.

Verified Boot

UEFI Secure Boot is a form of verified boot. However, it is often bound to a single vendor as a trusted third party. This can be problematic, as there are often no other trusted third parties, the vendor has their own interests, and it may not be in your jurisdiction. You can, though, also provide your own keys if you have your own private CA (see above) and CI/CD pipelines for signing the software.

Secure Boot requires that a public key is written to the device (“platform key”) by which the bootloader and other elements, such as drivers, are verified. You can add your own custom public keys if they have been signed by the platform key.

You can also put Secure Boot into a custom (setup) mode, which allows adding your own public keys without relation to a platform key. You should remove any key not added by you afterwards.

Generally, adding your own keys and signing with them will require some effort. In particular, it requires dedicated infrastructure to securely handle the signing private keys and to add the public keys to each device. Furthermore, any updates to bootloaders must be signed. If the private keys need to be revoked, it is difficult to verify whether a revocation is genuine. Also, the validation process of the signatures during boot might be circumvented.

Measured Boot

Measured boot gives the user control to sign the bootloader, other software and configurations locally. You will need the following modules:

* Trusted Platform Module (TPM) built into the device (see above)
* USB security dongle that the user needs to plug in for signature verification (alternatively, an app running on another device)

There are different methods that work with either one of the options or both:

* HMAC-based one-time password (HOTP)
* Time-based one-time password (TOTP, https://en.wikipedia.org/wiki/Time-based_one-time_password)
* OpenPGP smartcard signing

During boot, the hashes of what you want to measure are stored in the TPM. Based on the hashes, a secret code is generated in the TPM using the methods mentioned before, and it is verified with the USB security dongle. If it matches, one can assume that the boot process has not been tampered with.
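The HOTP/TOTP codes used by these methods are simple to compute; here is a minimal sketch of the underlying RFC 4226/RFC 6238 maths (illustrative only, not an implementation of any specific measured-boot tool):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def totp(secret: bytes, period: int = 30, t=None) -> str:
    # RFC 6238: HOTP where the counter is derived from the current time
    t = time.time() if t is None else t
    return hotp(secret, int(t // period))
```

In the measured-boot scenario, the device derives such a code from the TPM-held secret and the boot measurements, and the user checks that the dongle (or app) shows the same code; this is also why a wrong device clock makes TOTP verification fail.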

You must make sure for many of the methods that the clock of the device is precisely set or the verification will fail.

Often it makes sense to stop the boot process if tampering is detected. You may allow trained users also to continue if they understand the risk associated with it. Any tampering attempt should be reported to your enterprise security team.

Every time updates are applied, the secret in the USB security dongle needs to be updated as well.

Example software enabling measured boot:

* Heads
* Coreboot

Pluggable Authentication Module (PAM)

Linux offers a wide range of pluggable authentication modules. The previously described identity and access management (IAM) solutions provide their own. Usually these IAM-provided PAMs also include advanced functionality, such as SELinux integration and offline login (e.g. in case the user is not connected to the organisation's network, but still needs to work with the Linux Enterprise Desktop).
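A sketch of such an IAM-backed setup with SSSD, including cached offline login (the domain, server names and realm are examples; real deployments will differ per IAM solution):

```
# /etc/sssd/sssd.conf sketch
[sssd]
services = nss, pam
domains = corp.internal

[domain/corp.internal]
id_provider = ldap
auth_provider = krb5
ldap_uri = ldap://ldap.corp.internal
krb5_server = kdc.corp.internal
krb5_realm = CORP.INTERNAL
# allow offline login against cached credentials when off the corporate network
cache_credentials = True
```

`cache_credentials = True` is what enables the offline-login behaviour mentioned above: successful logins are cached locally and replayed when the IAM servers are unreachable.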

While there are PAMs for FIDO2 USB keys, smartcards or IR camera-based authentication (e.g. Howdy), they are often only integrated locally and not with the IAM solution. While IAM solutions support these mechanisms in general (e.g. for web applications), using them for Linux authentication requires choosing an IAM solution that supports it and additional configuration (e.g. use of Kerberos and configuration of SSSD).

The Credentials for Linux Project aims at standardising many techniques for authentication on the Linux Desktop. This focuses on Desktop technologies, such as D-Bus, and sandboxed applications (e.g. Flatpaks).

Minimal Permissions, Isolation and Application Sandboxes

Minimal Permissions

Minimal permissions are crucial for a secure Linux Enterprise Desktop. Users must not be able to run activities as a privileged user (e.g. root), to avoid that security measures can be circumvented.

Some users may need to do development (e.g. of enterprise software). It might be tempting to provide them privileged access as they often need to install tools, frameworks and libraries. However, this should be avoided at all costs due to the sheer amount of software supply chain attacks.

The best way would be for them to use web-based development environments for development activities (e.g. web-based IDEs, such as Eclipse Theia, hosted on specialised, security-hardened development services, such as Eclipse Che) instead of desktop or command line tools on the Linux Enterprise Desktop. A further advantage is that reproducible IDE and runtime environments (e.g. Development Containers) can then be leveraged easily.

Another way for development activities would be one or more local VMs on the Linux Enterprise Desktop. However, this may be in certain cases more cumbersome to use and has a higher resource consumption.

Finally, the least recommended way would be to have a dedicated non-privileged local user for development on the Linux Enterprise Desktop. This at least leverages the standard Linux mechanisms for protecting the normal user account from any attacks originating from the non-privileged local development user. Tools, frameworks and libraries can be installed through Distrobox, as this allows cleanly managing different heterogeneous environments (it does, though, not provide any sandboxing or security features).
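The Distrobox workflow itself is only two commands (the container name and image below are examples):

```
# sketch: a per-project toolbox container with Distrobox
distrobox create --name webdev --image registry.fedoraproject.org/fedora:41
distrobox enter webdev
# tools, frameworks and libraries installed inside stay out of the host system
```

Each project can get its own container with its own distribution and tool versions, which is what makes the heterogeneous environments manageable; remember, though, that Distrobox shares the home directory with the host and provides no sandboxing.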

Isolation and Sandboxing

The user of the Linux Enterprise Desktop must work in a secure environment and at the same time have all the functionality needed. Additionally different user groups with different skills have different needs. It does not make sense to provide a complicated security solution that requires high user expertise to users who do not work with confidential/secret data.

For example, Qubes OS provides a sophisticated isolation environment through compartmentalization. This requires a lot of design and configuration. However, it also means that an application, such as a browser that has run some malware, cannot negatively impact other applications running on the Linux Enterprise Desktop. Nevertheless, this requires additional end user training to be effective and efficient. Since applications may need to share information (e.g. via the clipboard), this requires additional considerations.

A more widely applicable approach is the use of Flatpaks. They provide a sophisticated permission system (e.g. access to network, to files, to clipboard, limiting system calls etc.) that can be configured and tested by an administrator. This requires much less understanding of the security concept for the end user compared to Qubes OS and still offers a good degree of protection.
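An administrator can tighten or inspect a Flatpak's permissions without repackaging it, e.g. with `flatpak override` (the application ID below is a placeholder):

```
# sketch: tightening a Flatpak's sandbox system-wide
flatpak override --system --unshare=network org.example.InternalTool
flatpak override --system --nofilesystem=host org.example.InternalTool
# inspect the effective overrides
flatpak override --system --show org.example.InternalTool
```

Overrides like these can be tested centrally and then distributed to all desktops (e.g. via the policy package mentioned in the “Policies” section), so end users never have to deal with permission details themselves.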

Another tool is Landlock that allows to restrict permissions for a set of processes. Island is a sandboxing tool based on Landlock. It provides a policy manager and allows to restrict applications without the need to modify their code.

Protecting from malicious hardware

You should also not forget to limit access to hardware from applications, but, importantly, also to limit the hardware a user can add to a device of the Linux Enterprise Desktop. For example, BadUSB attacks allow malicious third parties to secretly take over the device via USB. Tools such as USBGuard help to permit only allow-listed USB devices. This can also be used to block USB storage sticks, to avoid that employees, by accident or on purpose, put data on unencrypted USB storage keys or USB hard drives.
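A sketch of USBGuard's rule language (the vendor/product ID is an example; generate your baseline from the actually attached devices with `usbguard generate-policy`):

```
# /etc/usbguard/rules.conf sketch
# allow a specific, vetted keyboard/mouse receiver by vendor:product ID
allow id 046d:c52b
# block devices exposing a USB mass-storage interface (class 08)
block with-interface one-of { 08:*:* }
```

USBGuard evaluates rules top-down against each newly attached device, so unknown devices fall through to the configured implicit policy (typically block).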

FIDO2 keys

There is a wide range of open hardware and/or open source FIDO2 keys available (they also support WebAuthn (passkeys)), such as:

* Nitrokey - open hardware and software
* SoloKeys
* YubiKey

Usually the keys can offer further functionality, such as support for measured boot, OpenPGP smartcard, HOTP/TOTP tokens, or a secret store.

Special attention should be paid to where the keys are used by the user. For example, if a key is lost or stolen, you can simply remove it from your identity and access management (IAM) solution and give the user a new key. However, the user may also use it for other things, e.g. as a second factor for external web sites or as an OpenPGP smartcard for signing software and/or documents. In that case a lost key may have a big impact on the user. Thus, you should exclude these uses and communicate this to the users.

Conclusions

The Linux Enterprise Desktop supports a lot of needs in various organisations ranging from small ones to global players. Several organisations and public administrations already have deployed the Linux Enterprise Desktop and more plan to do so as it gives them the most value as well as unique competitive advantages.

Depending on your size and type of your organisation you may only implement a subset of the functionality described here. If you are a large organisation handling confidential/secret data then you may go beyond what is described here. You may also implement functionality in different stages depending on the maturity of your organisation and processes.

If you are a company of a few people, you can start with a standardised immutable Linux distribution without administrative access for employees and the same laptop model for everyone. This means you focus on the desktop part. The complementary organisation part (e.g. organisational network) will come as you grow beyond 10 people or if you have very strict regulatory requirements.

What to implement next can be decided based on risk/threat scenarios (e.g. data loss on devices).

The organisational part described here can be achieved by hosting in your own data center, hosting in another data center managed by third parties (“cloud”), or a hybrid approach. This depends on your sovereignty and security needs. In particular, when you delegate more tasks to a third party, you need to make additional risk considerations, especially if their headquarters are not in the same jurisdiction as your organisation.

This is just an excerpt focusing on Linux Enterprise Desktops. As mentioned before there is more software in an organisation (e.g. mail, office), but this will be part of future posts.