device and give it access to your network. Store it in a safe,
preferably encrypted location.

```bash
nebula-cert ca -name "nebula.example.com"
```
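Before moving on, you can sanity-check the new CA; `nebula-cert print` reads a
certificate and prints its details (name, validity window, and so on).

```bash
# inspect the CA certificate we just generated
nebula-cert print -path ca.crt
```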
I'll explain why we used a Fully-Qualified Domain Name (FQDN) as the
CA's name in a bit.

Now that we have the CA's `.crt` and `.key` files, we can create and sign
keys and certificates for the lighthouse.

```bash
nebula-cert sign -name "buyvm.lh.nebula.example.com" -ip "192.168.100.1/24"
```
Here, we're using a FQDN for the same reason as we did in the CA.

Next, grab the latest [release](https://github.com/slackhq/nebula/releases)
for your platform, extract it, and make the binary
executable, then move it to `/usr/local/bin` (or some other location
fitting for your platform).

```bash
wget https://github.com/slackhq/nebula/releases/download/vX.X.X/nebula-PLATFORM-ARCH.tar.gz
tar -xvf nebula-*
chmod +x nebula
mv nebula /usr/local/bin/
rm nebula-*
```
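To confirm the binary actually runs, ask it for its version (I believe the
`-version` flag is present in all recent releases; `nebula -h` lists what
your build supports):

```bash
nebula -version
```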
Now we need a place to store our config file, keys, and certificates.

```bash
mkdir /etc/nebula/
```
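Since this directory will hold private keys, it's not a bad idea to lock its
permissions down so only root can read it:

```bash
# optional hardening: restrict the directory to root
chmod 700 /etc/nebula/
```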
The next step is copying the config, keys, and certificates to the
VPS. Make sure `rsync` is
installed on the VPS before attempting to run the commands though;
you'll get an error otherwise.
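On a Debian-based system, installing it might look like this (an assumption;
use your distro's package manager):

```bash
sudo apt install rsync
```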
```bash
rsync -avmzz ca.crt user@example.com:
rsync -avmzz config.yml user@example.com:
rsync -avmzz buyvm.lh.* user@example.com:
```
SSH back into the server and move everything to `/etc/nebula/`.

```bash
mv ca.crt /etc/nebula/
mv config.yml /etc/nebula/
mv buyvm.lh* /etc/nebula/
```
Edit the config file and ensure the `pki:` section looks something like
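the following (a sketch on my part; I'm assuming the cert and key kept the
names generated earlier, so point these at wherever your files actually live):

```yaml
pki:
  # the CA that every cert on the network chains up to
  ca: /etc/nebula/ca.crt
  # this host's own certificate and private key
  cert: /etc/nebula/buyvm.lh.nebula.example.com.crt
  key: /etc/nebula/buyvm.lh.nebula.example.com.key
```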
Run the following command to make sure everything works properly.

```bash
nebula -config /etc/nebula/config.yml
```
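Nebula also ships a `-test` flag that validates the config and exits non-zero
on problems, which is handy before daemonizing (check `nebula -h` if your
build predates it):

```bash
# validate the config without bringing the interface up
nebula -test -config /etc/nebula/config.yml
```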
The last step is daemonizing Nebula so it runs every time the server
boots. The unit below is for systemd; if
you're using something else, check [the examples directory](https://github.com/slackhq/nebula/tree/master/examples)
for more options.
```text
[Unit]
Description=nebula
Wants=basic.target
After=basic.target network.target
Before=sshd.service

[Service]
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target
```
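Drop the unit into place (I'm assuming `/etc/systemd/system/nebula.service`;
adjust the path if your distro does things differently), then reload systemd
and enable the service at boot:

```bash
systemctl daemon-reload
systemctl enable --now nebula.service
```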
We're almost done! The following command creates a cert and key for a node with the IP
address `192.168.100.2`. The resulting files would go on the _remote_ node,
not yours. Replace `HOST` and `USER` with fitting values.

```bash
nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.2/24"
```
The following command will create a _similar_ cert/key but it will be part
of the `support` group; nodes in that group
will be able to VNC and SSH into other nodes. Your nodes need to be in
the `support` group so you'll have access to the others.

```bash
nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.2/24" -groups "support"
```
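Group membership only matters once the firewall references it. As a sketch,
the inbound rules on a node might allow SSH and VNC from `support` members
like this (ports 22 and 5900 are my assumptions for SSH and VNC):

```yaml
firewall:
  inbound:
    # allow SSH from nodes holding a "support" group certificate
    - port: 22
      proto: tcp
      groups:
        - support
    # allow VNC as well (5900 is the conventional VNC port)
    - port: 5900
      proto: tcp
      groups:
        - support
```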
On to the config now. This tells the node that it is _not_ a lighthouse,
and where to find the node that _is_.
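A sketch of the relevant bits, assuming the lighthouse's Nebula IP from
earlier and Nebula's default port of `4242` (swap the placeholder for your
VPS's public IP):

```yaml
static_host_map:
  # Nebula IP of the lighthouse -> its real, routable address
  "192.168.100.1": ["<vps-public-ip>:4242"]

lighthouse:
  am_lighthouse: false
  hosts:
    # Nebula IP(s) of the lighthouse(s) this node should query
    - "192.168.100.1"
```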
Run the command below and let `x11vnc`
start up, make sure it's running correctly, press `Ctrl` + `C`, then add the
command to the DE's startup applications!

```bash
x11vnc --loop -usepw -listen <nebula-ip> -display :0
```
`--loop` tells `x11vnc` to restart once you disconnect from the session.
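`-usepw` makes it require the VNC password stored in `~/.vnc/passwd`; if you
haven't set one yet, create it first with `x11vnc`'s built-in helper:

```bash
# prompts for a password and saves it to ~/.vnc/passwd
x11vnc -storepasswd
```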
We also want systemd to wait for
Nebula to start up and connect before it tells SSH to start; edit
`sshd.service` and add the following line to its `[Unit]`
section, above `[Service]`.
```text
After=nebula.service
```
Even now, there's still a bit of a hiccup. Systemd won't start SSH until
Nebula is up, but the Nebula interface can take a few moments to come online,
causing SSH to crash. To fix _this_, add the following line directly below
`[Service]`.

```text
ExecStartPre=/usr/bin/sleep 30
```
If the `sleep` executable is stored in a different location, make sure you
use the correct path (`command -v sleep` will tell you), then run
`systemctl daemon-reload` and `systemctl
restart sshd`. You should be able to connect to the remote node from your
node using the following command.

```bash
ssh USER@<nebula-ip>
```
If you want to make the command a little simpler so you don't have to
remember the IP every time, create `~/.ssh/config` on your node and add
these lines to it.

```text
Host USER
    Hostname <nebula-ip>
    User USER
```
Now you can just run `ssh USER` to get in. If you duplicate the above