You’re probably familiar with the concept of containers – lightweight packages that bundle an application together with all its dependencies and configuration, so it can be deployed and executed reliably and deterministically across different platforms, environments, and operating systems.
Containers have become very popular in recent years, mostly for deploying and managing applications on Linux servers. You probably know Docker and its related technologies, such as Docker Swarm and Kubernetes – container orchestration systems that let you deploy, manage, and scale containers across Linux servers.
But what about Windows? In this post, we’ll try to shed more light on various technologies related to containers on Windows, including the various types of container isolation technologies, Linux on Windows, and other implications of this emerging Windows tech.
Some time ago, a variant of Docker for Windows called “Docker Toolbox” actually ran a VirtualBox instance with a Linux OS on top of it, allowing Windows developers to test their containers before deploying them to production Linux servers.
More recently, Microsoft brought container technology to run natively on Windows Server and Windows 10. Microsoft and Docker cooperated to build a native Docker for Windows variant, and Docker Swarm and Kubernetes support followed shortly after.
Today, Windows developers can create and run native Windows and Linux containers on their Windows 10 devices, and later deploy and orchestrate those on their Windows servers – or even Linux servers, in the case of Linux containers.
Traditional Linux containers are a light layer of isolation, leveraging kernel features such as namespaces and process isolation to execute software separately from other containers and the host. Although they share the same kernel, containers get their own virtualized view of several OS services, such as the filesystem, preventing one container from tampering with the configuration of another. This lightness and decent security have made containers a great alternative to traditional virtual machines, allowing us to utilize servers more efficiently and focus compute power on our apps instead of the infrastructure beneath. Of course, relying on the kernel for security is not always enough, and some applications might still require the enhanced security of VM-grade isolation.
Container Isolation Technologies
On Windows, there are two types of containers, distinguished by their isolation level:
- Process Isolation – these resemble traditional Linux containers in their lightness and in sharing the kernel with sibling containers and the host. Internally, they’re based on new kernel objects called Silos, Microsoft’s counterpart to Linux namespaces. With Silos, Windows kernel objects such as files, registry keys, and pipes can be isolated into separate logical units – containers.
One main drawback of this type of container is the lack of flexibility in choosing the underlying kernel – the application in the container is tightly coupled to the kernel of the host. Not only would I be unable to run my application locally on top of a Windows Server 2016 kernel if my host runs Windows Server 2019, I definitely couldn’t run it on a Linux kernel.
- Hyper-V Isolation – a newer way of running containers, which provides much greater flexibility and compatibility between the container and the host’s kernel. In this mode, each container runs inside a highly optimized virtual machine, getting an entire kernel to itself. It’s possible to run both Windows and Linux inside the virtual machine, letting developers choose the right OS to power their containerized application.
These containers might seem very similar to traditional virtual machines, but they’re much more optimized for their specific purpose. One example is Linux containers on Windows (or LCOW), where running a Linux container doesn’t spin up a full-blown OS inside the virtual machine, but instead uses a very tiny Linux kernel with “just enough OS” to support native Linux containers. The application then runs inside a traditional process-isolated Linux container, with a minimal performance penalty.
Another great benefit of this type of container is the much-increased security, derived from the hardware-level isolation used to power the virtual machines.
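On a Windows host running Docker, the isolation mode described above can be selected per container with the `--isolation` flag. A minimal sketch – the image tag below (`ltsc2019`) is an assumption and must match a base image available for your host’s kernel version:

```shell
# Process isolation: the container shares the host's Windows kernel.
# Only works when the image's kernel version is compatible with the host's.
docker run --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2019 cmd /c ver

# Hyper-V isolation: the container gets its own kernel inside a
# lightweight utility VM, so a host/image version mismatch is tolerated.
docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2019 cmd /c ver
```

If `--isolation` is omitted, the daemon’s default applies: process isolation on Windows Server, Hyper-V isolation on Windows 10 client SKUs.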
When looking at the security of traditional containers, we can easily conclude that most of it relies on the OS kernel providing the containerization capabilities. Malware that gains access to the kernel can easily tear down the walls between Linux namespaces or Windows Silos, compromising sibling containers or even the host. We must remember that both the Linux and Windows kernels are huge monolithic chunks of code (with 3rd-party system drivers on top of that), performing many different tasks on the system. A larger code base offers attackers more vulnerabilities to exploit.
Virtual machines, on the other hand – and specifically the VMM – usually consist of far less code (on the order of ~100K lines of code exposed to the guest OS), written for a very specific task. Combined with the hardware virtualization security features of the CPU, this makes them much harder to attack. As evidence, hypervisor CVEs are much rarer than critical OS CVEs.
Server Containers vs. Client Containers
When we speak about containers, we usually refer to server and data center application containers. But it turns out there are more use cases for this great technology.
On Windows 10, we can find several usages of containers to drive client applications, with user interaction and maybe even (non-console-based) UI.
One good example is Windows Subsystem for Linux (or WSL) v2, which allows running a full Linux OS inside Windows. This is great news for developers, who can now use the Linux shell and many Linux tools for their work, straight from their Windows command line. Unlike a traditional Linux VM, WSL instances spawn within 2-3 seconds, have a very light memory footprint, and provide great interoperability (you can access your Windows files from Linux and vice versa).
Note that the older WSL v1 emulated Linux system calls on top of the Windows kernel. That was fine for some applications, but many others required system calls that were not supported. Compared to traditional virtual machines, WSL v1 was much faster to “boot” and had a smaller footprint on the system. With the introduction of WSL v2, which is based on LCOW, Microsoft managed to solve the compatibility problems of WSL v1 without compromising performance (and even improved it in some aspects, e.g. file I/O over Ext4 volumes), while keeping the same user experience.
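The WSL version in use is visible (and switchable) from the `wsl` command-line tool. A short sketch – the distribution name “Ubuntu” below is an assumption; substitute whatever `wsl --list` reports on your machine:

```shell
# List installed distributions and the WSL version (1 or 2) each one uses.
wsl --list --verbose

# Convert an existing distribution (assumed here to be named "Ubuntu")
# from WSL 1 to the VM-backed WSL 2.
wsl --set-version Ubuntu 2

# Make WSL 2 the default for newly installed distributions.
wsl --set-default-version 2
```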
Another example is Windows Defender Application Guard (or WDAG), which provides a separate, isolated Microsoft Edge instance for accessing potentially malicious websites, keeping threats inside a VM-based container backed by hardware-level isolation. This container is highly optimized, with a host-assisted kernel scheduler, dynamic memory management, and a virtual GPU, allowing it to run efficiently and reduce the drain on your laptop’s battery. Another notable optimization is file-system deduplication with the host – most of the kernel files appear only once on the physical disk and are mapped (as read-only) into the containers with NTFS reparse points.
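For those who want to try WDAG, it ships as an optional Windows feature that can be enabled from an elevated PowerShell prompt (a sketch; it requires a supported Windows 10 edition with hardware virtualization enabled, and a reboot afterwards):

```shell
# Enable Windows Defender Application Guard (run as Administrator).
Enable-WindowsOptionalFeature -Online -FeatureName Windows-Defender-ApplicationGuard
```

Once enabled, Edge exposes a “New Application Guard window” option that opens the browser inside the isolated container.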
It is clear that Microsoft is continuing its ongoing investment in Windows container technologies, and the technology has matured and expanded into new territories – not just server-based applications, but client-based applications such as Edge. At Hysolate, we were able to leverage the latest Windows container technology to split the desktop into two isolated workspaces that run efficiently side by side, bringing the manageability and predictability of containers to the interactive user environment while keeping the workspaces separated with VM-grade isolation for enhanced security.