make progress on lxd post

This commit is contained in:
Amolith 2023-08-28 19:49:44 -04:00
parent 9f7474b810
commit c9d3043feb
Signed by: Amolith
GPG Key ID: 8AE30347CE28D101
1 changed file with 126 additions and 23 deletions


@ -11,6 +11,7 @@ tags:
- Docker
- LXD
draft: true
toc: true
rss_only: false
cover: ./cover.png
---
@ -203,25 +204,30 @@ hk.c3.os3.app3: Many apps
## When to use which
These are personal opinions. Please evaluate each technology and determine for
yourself whether it's a suitable fit for your environment.
### VMs
As far as I'm aware, VMs are your only option when you want to work with
esoteric hardware or hardware you don't physically have on-hand. You can tell
your VM that it's running with RAM that's 20 years old, a still-in-development
RISC-V CPU, and a 420p monitor. That's not possible with containers. VMs are
also your only option when you want to work with foreign operating systems:
running Linux on Windows, Windows on Linux, or OpenBSD on a Mac all require
virtualisation. Another reason to stick with VMs is for compliance purposes.
Containers are still very new and some regulatory bodies require virtualisation
because it's a decades-old and battle-tested isolation technique.
{{< adm type="note" >}}
See Drew DeVault's blog post [_In praise of qemu_][qemu] for a great use of VMs.
[qemu]: https://drewdevault.com/2022/09/02/2022-09-02-In-praise-of-qemu.html
{{< /adm >}}
### Application containers
Application containers are particularly popular for [microservices] and
[reproducible builds,][repb] though I personally think [NixOS] is a better fit
for the latter. App containers are also your only option if you want to use
@ -232,11 +238,21 @@ standard environment or AWS's Fargate.
[repb]: https://en.wikipedia.org/wiki/Reproducible_builds
[NixOS]: https://nixos.org/
Application containers also tend to be necessary when the application you want
to self-host is _only_ distributed as a Docker image and the maintainers
adamantly refuse to support any other deployment method. This is a _massive_ pet
peeve of mine; yes, Docker can make running self-hosted applications easier for
inexperienced individuals,[^1] but an application orchestration system _does not_
fit in every single environment. By refusing to provide proper "manual"
deployment instructions, maintainers of these projects alienate an entire class
of potential users and it pisses me off.
Just document your shit.
### System containers
Personally, I use system containers for everything else. I prefer the simplicity
of being able to shell into a system and work with it almost exactly as I would
a full virtual machine.
## Crash course to LXD
@ -246,9 +262,9 @@ standard environment or AWS's Fargate.
**Note:** the instructions below say to install LXD using [Snap.][snap] I
personally dislike Snap, but LXD is a Canonical product and they're doing their
best to promote it as much as possible. One of the first things the Incus
project did was [rip out Snap support,][rsnap] so it will eventually be
installable as a proper native package.
[snap]: https://en.wikipedia.org/wiki/Snap_(software)
[rsnap]: https://github.com/lxc/incus/compare/9579f65cd0f215ecd847e8c1cea2ebe96c56be4a...3f64077a80e028bb92b491d42037124e9734d4c7
@ -260,12 +276,99 @@ proper native package.
massive headache.
2. `sudo snap install lxd`
3. `lxd init`
- Defaults are fine for the most part; you may want to increase the size of
the storage pool.
4. `lxc launch images:debian/12 container-name`
5. `lxc shell container-name`
### Usage
As an example of how to use LXD in a real situation, we'll set up [my URL
shortener.][earl] You'll need a VPS with LXD installed and a (sub)domain pointed
to the VPS.
Run `lxc launch images:debian/12 earl` followed by `lxc shell earl` and `apt
install curl`. Also `apt install` a text editor, like `vim` or `nano` depending
on what you're comfortable with. Head to the **Installation** section of [earl's
SourceHut page][earl] and expand the **List of latest binaries**. Copy the link
to the binary appropriate for your platform, head back to your terminal, type
`curl -LO`, and paste the link you copied. This will download the binary to your
system. Run `mv <filename> earl` to rename it, `chmod +x earl` to make it
executable, then `./earl` to execute it. It will create a file called
`config.yaml` that you need to edit before proceeding. Change the `accessToken`
to something else and replace the `listen` value, `127.0.0.1`, with `0.0.0.0`.
This exposes the application to the host system so we can reverse proxy it.
[earl]: https://earl.run/source
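Inside the container, the download-and-run sequence from the paragraph above
looks roughly like this. `<binary-url>` and `<filename>` stand in for the link
you copied and the name of the file it downloads, which depend on your platform:

```shell
apt install curl vim       # curl for the download, an editor for the config
curl -LO <binary-url>      # fetch the binary you copied the link to
mv <filename> earl         # rename it to something short
chmod +x earl              # make it executable
./earl                     # first run generates config.yaml; edit it to
                           # change accessToken and set listen to 0.0.0.0
```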
The next step is daemonising it so it runs as soon as the system boots. Create a
file at `/etc/systemd/system/earl.service` and paste the following code
snippet into it.
```ini
[Unit]
Description=personal link shortener
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/root/
ExecStart=/root/earl -c config.yaml
[Install]
WantedBy=multi-user.target
```
Save, then run `systemctl daemon-reload` followed by `systemctl enable --now
earl`. You should be able to `curl localhost:8275` and see some HTML.
Now we need a reverse proxy on the host. Exit the container with `exit` or
`Ctrl+D`, and if you have a preferred webserver, install it. If you don't have a
preferred webserver yet, I recommend [installing Caddy.][caddy] All that's left
is running `lxc list`, making note of the `earl` container's `IPv4` address, and
reverse proxying it. If you're using Caddy, edit `/etc/caddy/Caddyfile` and
replace everything that's there with the following.
[caddy]: https://caddyserver.com/docs/install
```text
<(sub)domain> {
encode zstd gzip
reverse_proxy <container IP address>:8275
}
```
Run `systemctl restart caddy` and head to whatever domain or subdomain you
entered. You should see the home page with just the text `earl` on it. If you go
to `/login`, you'll be able to enter whatever access token you set earlier and
log in.
### Executing a fork bomb
I've seen some people say that executing a fork bomb inside a container is
equivalent to executing it on the host: it will blow up the whole system and
render every application and container you're running inoperable.
That's partially true because LXD _by default_ doesn't put a limit on how many
processes a particular container can spawn. You can limit that number yourself
by running
```text
lxc profile set default limits.processes <num-processes>
```
Any container you create under the `default` profile will have a total process
limit of `<num-processes>`. I can't tell you what a good process limit is
though; you'll need to do some testing and experimentation on your own.
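If you'd rather cap a single container instead of everything under the profile,
`lxc config set` takes the same key. The container name `earl` and the value
`500` here are illustrative:

```shell
# Cap one container at 500 processes and confirm the setting took
lxc config set earl limits.processes 500
lxc config get earl limits.processes
```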
Note that this doesn't _save_ you from fork bombs; all it does is prevent an
affected container from affecting _other_ containers. If someone executes a fork
bomb in a container, it'll be the same as if they executed it in a virtual
machine; assuming it's a one-off, you'll need to fix it by rebooting the
container. If it was set to run at startup, you'll need to recreate the
container, restore from a backup, revert to a snapshot, etc.
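Snapshots make that kind of recovery cheap, so taking one before exposing a
container to the world is a good habit. The snapshot name is arbitrary:

```shell
lxc snapshot earl clean-state     # take a snapshot named clean-state
lxc restore earl clean-state      # roll the container back to it
lxc delete earl/clean-state       # remove the snapshot once it's unneeded
```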
[^1]:
Until they need to do _anything_ more complex than pull a newer image. Then
it's twice as painful as the "manual" method might have been.