progress on LXD post

This commit is contained in:
Amolith 2023-08-27 14:16:14 -04:00
parent 650eddb495
commit 57f32743f3
Signed by: Amolith
GPG Key ID: 8AE30347CE28D101
1 changed file with 79 additions and 27 deletions


@@ -48,10 +48,10 @@ migrate as soon as there's an installable release.
RAM, it's going to make do with 200 MBs of RAM and the kernel's <abbr
title="Out Of Memory">OOM</abbr> killer is going to have a fun time 🤠
- **Portability:** once set up and configured, VMs and containers can mostly be
treated as closed boxes; as long as the surrounding environment of the new
host is similar to the previous in terms of communication (proxies, web
servers, etc.), they can just be picked up and dropped between various hosts
as necessary.
- **Density:** applications are usually much lighter than the systems they're
running on, so it makes sense to run many applications on one system. VMs and
containers facilitate that without sacrificing security.
@@ -124,19 +124,43 @@ hk.os.h.k3.os3.app3: Many apps
## Containers
VMs use virtualisation to achieve isolation. Containers use **namespaces** and
**cgroups**, technologies pioneered in the Linux kernel. By now, though, there
are [equivalents for Windows] and possibly other platforms.
[equivalents for Windows]: https://learn.microsoft.com/en-us/virtualization/community/team-blog/2017/20170127-introducing-the-host-compute-service-hcs
**[Linux namespaces]** partition kernel resources like process IDs, hostnames,
user IDs, directory hierarchies, network access, etc. This prevents one
collection of processes from seeing or gaining access to data regarding another
collection of processes.
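As a rough sketch, you can see this partitioning first-hand on most Linux systems using util-linux's `unshare` (the hostname below is illustrative; this needs root, or a kernel with unprivileged user namespaces enabled):

```shell
# Start a shell in new UTS (hostname), PID, and mount namespaces
sudo unshare --uts --pid --mount --fork bash

# Inside: changing the hostname only affects this namespace,
# not the host or any other container
hostname isolated-demo

# Remount /proc so ps reflects the new PID namespace; this shell
# now sees itself as PID 1 and cannot see the host's processes
mount -t proc proc /proc
ps -o pid,comm
```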
**[Cgroups]** limit, track, and isolate the hardware resource use of a
collection of processes. If you tell a cgroup that it's only allowed to spawn
500 child processes and someone executes a fork bomb, the fork bomb will expand
until it hits that limit. The kernel will prevent it from spawning further
children and you'll have to resolve the issue the same way you would with VMs:
delete and re-create it, restore from a good backup, etc. You can also limit CPU
use, the number of CPU cores it can access, RAM, disk use, and so on.
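That 500-process cap can be sketched with the cgroup v2 `pids` controller (the group name `demo` is illustrative; assumes cgroup v2 mounted at `/sys/fs/cgroup` and root privileges):

```shell
# Create a new cgroup and cap it at 500 tasks
mkdir /sys/fs/cgroup/demo
echo 500 > /sys/fs/cgroup/demo/pids.max

# Move the current shell into the cgroup; its children inherit membership
echo $$ > /sys/fs/cgroup/demo/cgroup.procs

# A fork bomb run from this shell now stalls at the limit:
# fork() starts failing with EAGAIN instead of exhausting the host
```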
[Linux namespaces]: https://en.wikipedia.org/wiki/Linux_namespaces
[Cgroups]: https://en.wikipedia.org/wiki/Cgroups
### Application containers
The most well-known example of application container tech is probably
[Docker.][docker] The goal here is to run a single application as minimally as
possible inside each container. In the case of a single, statically-linked Go
binary, a minimal Docker container might contain nothing more than the binary.
If it's a Python application, you're more likely to use an [Alpine Linux image]
and add your Python dependencies on top of that. If a database is required, that
goes in a separate container. If you've got a web server to handle TLS
termination and proxy your application, that's a third container. One cohesive
system might require many Docker containers to function as intended.
[docker]: https://docker.com/
[Alpine Linux image]: https://hub.docker.com/_/alpine
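To make that concrete, here is a sketch of the three-container system described above (the image, network, and container names are illustrative, not prescribed):

```shell
# One cohesive system, three single-purpose containers
docker network create demo-net

# 1. The database in its own container
docker run -d --name demo-db --network demo-net postgres:16-alpine

# 2. The application itself, e.g. an image holding one Go binary
docker run -d --name demo-app --network demo-net my-go-app:latest

# 3. A web server for TLS termination, proxying to the app container
docker run -d --name demo-proxy --network demo-net -p 443:443 caddy:2-alpine
```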
```kroki {type=d2,d2theme=flagship-terrastruct,d2sketch=true}
Host kernel.Container runtime.c1: Container
Host kernel.Container runtime.c2: Container
Host kernel.Container runtime.c3.Full OS.Many apps
```

@@ -149,6 +173,21 @@
### System containers
One of the most well-known examples of system container tech is the subject of
this post: LXD! Rather than containing a single application or a very small set
of them, system containers are designed to house entire operating systems, like
[Debian] or [Rocky Linux,][rocky] along with everything required for your
application. Using our examples from above, a single statically-linked Go binary
might run in a full Debian container, just like the Python application might.
The database and web server might go in _that same_ container.
[Debian]: https://www.debian.org/
[rocky]: https://rockylinux.org/
You treat each container more like you would a VM, but you get the performance
benefit of _not_ virtualising everything. Containers are _much_ lighter than any
virtual machine.
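In LXD terms, that workflow might look like this (a sketch; the image alias and container name are illustrative):

```shell
# Launch a full Debian system container
lxc launch images:debian/12 mysystem

# Get a shell inside it, much like SSHing into a VM
lxc exec mysystem -- bash

# From here, install the app, database, and web server together,
# exactly as you would on a VM or a bare-metal Debian install:
#   apt install postgresql nginx ...
```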
```kroki {type=d2,d2theme=flagship-terrastruct,d2sketch=true}
hk: Host kernel
hk.c1: Container
hk.c2.os2.app2: Many apps
hk.c3.os3.app3: Many apps
```
## When to use which
{{< adm type="warn" >}}
**Warning:** this is my personal opinion. Please evaluate each technology and
determine for yourself whether it's a suitable fit for your environment.
{{< /adm >}}
### VMs
As far as I'm aware, VMs are your only option when you want to work with
esoteric hardware or hardware you don't physically have on-hand. It's also your
only option when you want to work with foreign operating systems: running Linux
on Windows, Windows on Linux, or OpenBSD on a Mac all require virtualisation.
Another reason to stick with VMs is for compliance purposes. Containers are
still very new and some regulatory bodies require virtualisation because it's a
decades-old and battle-tested isolation technique.
{{< adm type="note" >}}
See Drew DeVault's blog post [_In praise of qemu_][qemu] for a great use of VMs
[qemu]: https://drewdevault.com/2022/09/02/2022-09-02-In-praise-of-qemu.html
{{< /adm >}}
### Application containers
Application containers are particularly popular for [microservices] and
[reproducible builds,][repb] though I personally think [NixOS] is a better fit
for the latter. App containers are also your only option if you want to use
cloud platforms with extreme scaling capabilities like Google Cloud's App Engine
standard environment or AWS's Fargate.
[microservices]: https://en.wikipedia.org/wiki/Microservices
[repb]: https://en.wikipedia.org/wiki/Reproducible_builds
[NixOS]: https://nixos.org/
- When the app you want to run is _only_ distributed as a Docker container and
the maintainers adamantly refuse to support any other deployment method
- (Docker does run in LXD 😉)
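For that last case, a sketch of running Docker inside an LXD system container (the container name is illustrative; `security.nesting` is the relevant LXD config key):

```shell
# Launch a system container with nesting enabled so Docker can run inside it
lxc launch images:debian/12 dockerhost -c security.nesting=true

# Then install and use Docker inside the container as usual
lxc exec dockerhost -- bash
```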
### System containers
- Anything not listed above 👍
## Crash course to LXD