
---
title: "LXD: Containers for Human Beings"
subtitle: "Docker's great and all, but I prefer the workflow of interacting with VMs"
date: 2023-08-11T16:30:00-04:00
categories:
  - Technology
tags:
  - Sysadmin
  - Containers
  - VMs
  - Docker
  - LXD
draft: true
rss_only: false
cover: ./cover.png
---

This is a blog post version of a talk I presented at both Ubuntu Summit 2022 and SouthEast LinuxFest 2023. The first was not recorded, but the second was; the recording is on SELF's PeerTube instance. I apologise for the terrible audio, but there's unfortunately nothing I can do about that. If you're already intimately familiar with the core concepts of VMs and containers, I suggest skipping those respective sections. If you're only vaguely familiar with either, I recommend reading them, because I do go a little in-depth.

{{< adm type="warn" >}}

Note: Canonical has decided to pull LXD out from under the Linux Containers entity and instead continue development under the Canonical brand. The majority of the LXD creators and developers have congregated around a fork called Incus. I'll be keeping a close eye on the project and intend to migrate as soon as there's an installable release.

{{< /adm >}}

## The benefits of VMs and containers

  • Isolation: you don't want to allow an attacker to infiltrate your email server through your web application; the two should be completely separate from each other and VMs/containers provide strong isolation guarantees.
  • Flexibility: VMs and containers only use the resources they've been given. If you tell the VM it has 200 MB of RAM, it's going to make do with 200 MB of RAM, and the kernel's OOM killer is going to have a fun time 🤠
  • Portability: once set up and configured, VMs and containers can mostly be treated as black boxes; as long as the surrounding environment of the new host is similar to the previous in terms of communication (proxies, web servers, etc.), they can just be picked up and dropped between various hosts as necessary.
  • Density: applications are usually much lighter than the systems they're running on, so it makes sense to run many applications on one system. VMs and containers facilitate that without sacrificing security.
  • Cleanliness: VMs and containers are applications in black boxes. When you're done with the box, you can just throw it away and most everything related to the application is gone.
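To make the flexibility point concrete: in LXD (covered later in this post), resource caps are plain configuration keys. A minimal sketch, assuming a container named `web` (the name is hypothetical; `limits.memory` and `limits.cpu` are standard LXD instance options):

```shell
# Cap the container "web" at 200 MiB of RAM; the processes inside
# see only that much memory available
lxc config set web limits.memory 200MiB

# Limits can be changed live, without restarting the container
lxc config set web limits.cpu 2
```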

## Virtual machines

As the name suggests, Virtual Machines are all virtual; a hypervisor creates virtual disks for storage, virtual CPUs, virtual NICs, virtual RAM, etc. On top of the virtualised hardware, you have your kernel. This is what facilitates communication between the operating system and the (virtual) hardware. Above that is the operating system and all your applications.

At this point, the stack is quite large; VMs aren't exactly lightweight, and this impacts how densely you can pack the host.

I mentioned a "hypervisor" a minute ago. I've explained what hypervisors in general do, but there are actually two different kinds of hypervisor. They're creatively named Type 1 and Type 2.

### Type 1 hypervisors

These run directly in the host kernel without an intermediary OS. A good example would be KVM, a VM hypervisor that runs in the kernel. Type 1 hypervisors can communicate directly with the host's hardware to allocate RAM, issue instructions to the CPU, etc.

```d2
hk: Host kernel
hk.h: Type 1 hypervisor
hk.h.k1: Guest kernel
hk.h.k2: Guest kernel
hk.h.k3: Guest kernel
hk.h.k1.os1: Guest OS
hk.h.k2.os2: Guest OS
hk.h.k3.os3: Guest OS
hk.h.k1.os1.app1: Many apps
hk.h.k2.os2.app2: Many apps
hk.h.k3.os3.app3: Many apps
```
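Whether a Linux host can use KVM is easy to check from a shell. A quick sketch (the `vmx`/`svm` CPU flags are Intel/AMD specific, so this assumes an x86 host):

```shell
# Hardware virtualisation shows up as CPU flags in /proc/cpuinfo
grep -E -c 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualisation flags"

# When the kvm module is loaded, the hypervisor is reachable via /dev/kvm
test -c /dev/kvm && echo "/dev/kvm present" || echo "KVM unavailable"
```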

### Type 2 hypervisors

These run in userspace as an application, like VirtualBox. Type 2 hypervisors have to first go through the operating system, adding an additional layer to the stack.

```d2
hk: Host kernel
hk.os: Host OS
hk.os.h: Type 2 hypervisor
hk.os.h.k1: Guest kernel
hk.os.h.k2: Guest kernel
hk.os.h.k3: Guest kernel
hk.os.h.k1.os1: Guest OS
hk.os.h.k2.os2: Guest OS
hk.os.h.k3.os3: Guest OS
hk.os.h.k1.os1.app1: Many apps
hk.os.h.k2.os2.app2: Many apps
hk.os.h.k3.os3.app3: Many apps
```

## Containers

As most people know them right now, containers are exclusive to Linux.[^1] This is because they use namespaces and cgroups to achieve isolation.

  • Linux namespaces partition kernel resources like process IDs, hostnames, user IDs, directory hierarchies, network access, etc.
  • Cgroups limit, track, and isolate the hardware resource use of a set of processes
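Both mechanisms are visible from an ordinary shell on any modern Linux system, which makes for a quick sanity check of what the kernel provides:

```shell
# Every process's namespace memberships are exposed under /proc/<pid>/ns;
# two processes in the same namespace share the same inode number here
ls -l /proc/self/ns

# cgroups are exposed as a filesystem; on a cgroup v2 host this lists the
# controllers (cpu, memory, pids, ...) available for limiting processes
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || echo "cgroup v1 host"
```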

### Application containers

```d2
Host kernel.Container runtime.c1: Container
Host kernel.Container runtime.c2: Container
Host kernel.Container runtime.c3: Container

Host kernel.Container runtime.c1.One app
Host kernel.Container runtime.c2.Few apps
Host kernel.Container runtime.c3.Full OS.Many apps
```

### System containers

```d2
hk: Host kernel
hk.c1: Container
hk.c2: Container
hk.c3: Container
hk.c1.os1: Full OS
hk.c2.os2: Full OS
hk.c3.os3: Full OS
hk.c1.os1.app1: Many apps
hk.c2.os2.app2: Many apps
hk.c3.os3.app3: Many apps
```

## When to use VMs

  • Virtualising esoteric hardware
  • Virtualising non-Linux operating systems (Windows, macOS)
  • Completely isolating processes from one another with a decades-old, battle-tested technique

{{< adm type="note" >}} See Drew DeVault's blog post In praise of qemu for a great use of VMs {{< /adm >}}

## When to use application containers

  • Microservices
  • Extremely reproducible builds
    • (NixOS.org would likely be a better fit though)
  • Dead-set on using cloud platforms with extreme scaling capabilities (AWS, GCP, etc.)
  • When the app you want to run is only distributed as a Docker container and the maintainers adamantly refuse to support any other deployment method
    • (Docker does run in LXD 😉)
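On that last point: Docker does run inside an LXD system container once nesting is enabled, so a Docker-only app doesn't force you off LXD. A sketch, assuming a container named `docker-host` (the name is illustrative; `security.nesting` is a standard LXD config key):

```shell
# Allow the container to create its own nested namespaces and cgroups,
# which Docker needs in order to run inside LXD
lxc config set docker-host security.nesting true
lxc restart docker-host

# Then install Docker inside the container as you would on any host
lxc exec docker-host -- apt-get install -y docker.io
```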

## When to use system containers

  • Anything not listed above 👍

## Crash course in LXD

### Installation

{{< adm type="note" >}}

Note: the instructions below say to install LXD using Snap. I personally dislike Snap, but LXD is a Canonical product and they're doing their best to promote it as much as possible. One of the first things the Incus project did was rip out Snap support, so it will eventually be installable as a proper native package.

{{< /adm >}}

  1. Install snap following Canonical's tutorial
    • LXD is natively packaged for Arch and Alpine, but configuration can be a massive headache.
  2. `sudo snap install lxd`
  3. `lxd init`
  4. `lxc image copy images:debian/11 local: --alias deb-11`
  5. `lxc launch deb-11 container-name`
  6. `lxc shell container-name`
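Once the container is up, day-to-day management goes through the same `lxc` tool. A few commands I reach for constantly (`container-name` is whatever you launched above):

```shell
lxc list                                      # show all containers and their IPs
lxc stop container-name                       # cleanly shut the container down
lxc snapshot container-name before-upgrade    # take a point-in-time snapshot
lxc restore container-name before-upgrade     # roll back to that snapshot
lxc delete container-name                     # throw the black box away
```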

### Usage

{install my URL shortener}


[^1]: Docker containers on Windows and macOS actually run in a Linux VM.