diff --git a/content/posts/lxd-containers-for-human-beings.md b/content/posts/lxd-containers-for-human-beings.md
index 276df11..7b75639 100644
--- a/content/posts/lxd-containers-for-human-beings.md
+++ b/content/posts/lxd-containers-for-human-beings.md
@@ -18,7 +18,10 @@ cover: ./cover.png
This is a blog post version of a talk I presented at both Ubuntu Summit 2022
and SouthEast LinuxFest 2023. The first was not recorded, but the second was
and is on [SELF's PeerTube instance.][selfpeertube] I apologise for the terrible audio,
-but there's unfortunately nothing I can do about that.
+but there's unfortunately nothing I can do about that. If you're already
+intimately familiar with the core concepts of VMs or containers, I would suggest
+skipping those respective sections. If you're only vaguely familiar with either,
+I would recommend reading them, because I do go a little bit in-depth.

[selfpeertube]: https://peertube.linuxrocks.online/w/hjiTPHVwGz4hy9n3cUL1mq?start=1m

@@ -26,9 +29,9 @@ but there's unfortunately nothing I can do about that.

**Note:** Canonical has decided to [pull LXD out][lxd] from under the Linux
Containers entity and instead continue development under the Canonical brand.
-The majority of the LXD creators and developers have congregated around
-[Incus.][inc] I'll be keeping a close eye on the project and intend to migrate
-as soon as there's an installable release.
+The majority of the LXD creators and developers have congregated around a fork
+called [Incus.][inc] I'll be keeping a close eye on the project and intend to
+migrate as soon as there's an installable release.

[lxd]: https://linuxcontainers.org/lxd/
[inc]: https://linuxcontainers.org/incus/

@@ -37,32 +40,58 @@ as soon as there's an installable release.

## The benefits of VMs and containers

-- **Isolation:** we don't want an attacker to get into our webserver and be able
-  to gain access to our email server
+- **Isolation:** you don't want to allow an attacker to infiltrate your email
+  server through your web application; the two should be completely separate
+  from each other, and VMs/containers provide strong isolation guarantees.
- **Flexibility:** VMs and containers only use the resources they've been given.
  If you tell the VM it has 200 MBs of RAM, it's going to make do with 200 MBs
  of RAM and the kernel's OOM killer is going to have a fun time 🤠
- **Portability:** once set up and configured, VMs and containers can mostly be
-  treated as black boxes; as long as the surrounding environment is similar to
-  the previous in terms of communication, they can just be picked up and dropped
-  to various machines and hosts as necessary.
+  treated as black boxes; as long as the surrounding environment of the new host
+  is similar to the previous in terms of communication (proxies, web servers,
+  etc.), they can just be picked up and dropped between various hosts as
+  necessary.
- **Density:** applications are usually much lighter than the systems they're
  running on, so it makes sense to run many applications on one system. VMs and
  containers facilitate that without sacrificing security.
-- **Cleanliness:** VMs and containers are black boxes. When you're done with it,
-  you can just throw the box in the trash (delete it) and everything related to
-  that application is gone.
+- **Cleanliness:** VMs and containers are applications in black boxes. When
+  you're done with the box, you can just throw it away and almost everything
+  related to the application is gone.
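+
+As a taste of the flexibility point above, here's roughly what those resource
+caps look like with LXD, which I cover later in this post; `web` is just a
+hypothetical container name:
+
+```sh
+# Cap the container at 200 MiB of RAM and one CPU core; the guest has to
+# make do with whatever it's told it has. "web" is a placeholder name.
+lxc config set web limits.memory 200MiB
+lxc config set web limits.cpu 1
+```
+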
## Virtual machines

-```kroki {type=d2,d2theme=flagship-terrastruct,d2sketch=true}
-title: |md
-  # Virtual machines
-| { near: top-center }
+As the name suggests, virtual machines are all virtual: a hypervisor creates
+virtual disks for storage, virtual CPUs, virtual NICs, virtual RAM, etc. On top
+of the virtualised hardware, you have your kernel. This is what facilitates
+communication between the operating system and the (virtual) hardware. Above
+that sit the operating system and all your applications. At this point, the
+stack is quite large; VMs aren't exactly lightweight, and this impacts how
+densely you can pack the host.
+
+I mentioned a "hypervisor" a minute ago. I've explained what hypervisors do in
+general, but there are actually two different kinds of hypervisor, creatively
+named **Type 1** and **Type 2**.
+
+### Type 1 hypervisors
+
+These run directly in the host kernel without an intermediary OS. A good example
+would be [KVM,][kvm] a **VM** hypervisor that runs in the **K**ernel. Type 1
+hypervisors can communicate directly with the host's hardware to allocate RAM,
+issue instructions to the CPU, etc.
+
+[debian]: https://debian.org
+[kvm]: https://www.linux-kvm.org
+[vb]: https://www.virtualbox.org/
+
+```kroki {type=d2,d2theme=flagship-terrastruct,d2sketch=true}
direction: up

+hk: Host kernel
+hk.1h: Type 1 hypervisor
k1: Guest kernel
k2: Guest kernel
k3: Guest kernel
@@ -73,10 +102,37 @@
app1: Many apps
app2: Many apps
app3: Many apps

-Host kernel -> Hypervisor
-Hypervisor -> k1 -> os1 -> app1
-Hypervisor -> k2 -> os2 -> app2
-Hypervisor -> k3 -> os3 -> app3
+app1 <- os1 <- k1 <- hk
+app2 <- os2 <- k2 <- hk
+app3 <- os3 <- k3 <- hk
+```
+
+### Type 2 hypervisors
+
+These run in userspace as regular applications, like [VirtualBox.][vb] Type 2
+hypervisors have to go through the host operating system first, which adds an
+additional layer to the stack.
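+
+If you're curious whether your own machine can use a Type 1 hypervisor like
+KVM, a couple of stock Linux commands will tell you; this is just a sanity
+check, and nothing later in this post depends on it:
+
+```sh
+# Count the CPU flags advertising hardware virtualisation support
+# (vmx = Intel VT-x, svm = AMD-V); anything above zero means KVM is an option.
+grep -Ec '(vmx|svm)' /proc/cpuinfo
+
+# Check whether the KVM modules are already loaded into the host kernel.
+lsmod | grep kvm
+```
+
+The diagram below shows where a Type 2 hypervisor sits in the stack.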
+
+```kroki {type=d2,d2theme=flagship-terrastruct,d2sketch=true}
+direction: up
+
+hk: Host kernel
+os: Operating system
+os.2h: Type 2 hypervisor
+k1: Guest kernel
+k2: Guest kernel
+k3: Guest kernel
+os1: Guest OS
+os2: Guest OS
+os3: Guest OS
+app1: Many apps
+app2: Many apps
+app3: Many apps
+
+os <- hk
+app1 <- os1 <- k1 <- os
+app2 <- os2 <- k2 <- os
+app3 <- os3 <- k3 <- os
```

## Containers

@@ -88,14 +144,10 @@ title: |md

direction: up

-app1: App
-app2: App
-app3: App
-
Host kernel -> Hypervisor
-Hypervisor -> app1
-Hypervisor -> app2
-Hypervisor -> app3
+Hypervisor -> One app
+Hypervisor -> Few apps
+Hypervisor -> Full OS -> Many apps
```

```kroki {type=d2,d2theme=flagship-terrastruct,d2sketch=true}
@@ -105,9 +157,9 @@ title: |md

direction: up

-os1: Guest OS
-os2: Guest OS
-os3: Guest OS
+os1: Full OS
+os2: Full OS
+os3: Full OS
app1: Many apps
app2: Many apps
app3: Many apps
@@ -117,9 +169,7 @@
Host kernel -> os2 -> app2
Host kernel -> os3 -> app3
```

-## When to use which
-
-### Virtual machines
+## When to use VMs

- Virtualising esoteric hardware
- Virtualising non-Linux operating systems (Windows, macOS)
@@ -129,7 +179,7 @@ Host kernel -> os3 -> app3
{{< adm type="tip" >}}
See Drew DeVault's blog post [_In praise of qemu_](https://earl.run/rmBs) for a great use of VMs
{{< /adm >}}

-### Application containers
+## When to use application containers

- Microservices
- Extremely reproducible builds
@@ -145,10 +195,30 @@ See Drew DeVault's blog post [_In praise of qemu_](https://earl.run/rmBs) for a

## Crash course to LXD

+### Installation
+
+{{< adm type="note" >}}
+
+**Note:** The instructions below say to install LXD using [Snap.][snap] I
+personally dislike Snap, but LXD is a Canonical product and Canonical is doing
+their best to push Snap down everyone's throats ¯\\\_(ツ)\_/¯ One of the first
+things the Incus project did was [rip out Snap support,][rsnap] and I can't wait
+until they have proper `.deb`s 😁
+
+[snap]: https://en.wikipedia.org/wiki/Snap_(software)
+[rsnap]: https://github.com/lxc/incus/compare/9579f65cd0f215ecd847e8c1cea2ebe96c56be4a...3f64077a80e028bb92b491d42037124e9734d4c7
+
+{{< /adm >}}
+
1. Install snap following [Canonical's tutorial](https://earl.run/ZvUK)
-   - LXD is natively packaged for Arch and Alpine, but configuration can be a massive headache.
+   - LXD is natively packaged for Arch and Alpine, but configuration can be a
+     massive headache.
2. `sudo snap install lxd`
3. `lxd init`
4. `lxc image copy images:debian/11 local: --alias deb-11`
5. `lxc launch deb-11 container-name`
6. `lxc shell container-name`
+
+### Usage
+
+{install my URL shortener}
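+
+While I write that up, here's the general shape of working with a container
+day-to-day; the container name, device name, and ports below are illustrative
+placeholders, not part of the eventual walkthrough:
+
+```sh
+# Launch a Debian 11 container from the image alias created during setup;
+# "shortener" is a placeholder name.
+lxc launch deb-11 shortener
+
+# Forward port 80 on the host to port 8080 inside the container with one of
+# LXD's proxy devices ("http" is an arbitrary device name).
+lxc config device add shortener http proxy \
+    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:8080
+
+# Snapshot the container before risky changes, and roll back if something breaks.
+lxc snapshot shortener before-upgrade
+lxc restore shortener before-upgrade
+
+# When you're done with the box, throw it away.
+lxc stop shortener
+lxc delete shortener
+```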