Attempt to streamline the installation steps and README a little

This commit is contained in:
Scott Wallace 2022-04-11 09:38:23 +01:00
parent df58a6b847
commit 78a45629dd
Signed by: scott
GPG key ID: AA742FDC5AFE2A72
3 changed files with 17 additions and 39 deletions


@@ -46,24 +46,22 @@ Distributed & E2EE self-hosting. The goal is to have nodes voluntarily join the
The underlying hardware type shouldn't be a constraint, within reason.
## Lighthouse installation
1. Clone the repo.
2. Create a directory to hold the config and certificates.
3. Copy `lighthouse-config.yaml` as `config.yaml` in the new directory.
4. Update the `docker-compose-lighthouse.yaml` to bind mount the newly created directory to `/etc/nebula`; check and set a value for the `/storage` bind mount.
5. Run the container with `docker-compose up -d`. This will create two files, `host.key` and `host.csr`.
6. Send the contents of the `host.csr` file to a cluster admin to sign (an example signing command follows this list).
7. The returned, signed certificate should be placed alongside the `host.csr` file and named `host.crt`.
8. Start the container again; it should find the config and certificates and connect to the existing cluster.
9. Update the `static_host_map` entry in the repo's `node-config.yaml` with the new Lighthouse's mesh and public IP addresses, and encourage node admins to update their nodes' config files from the repo.
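For reference, the signing in steps 6 and 7 happens on a cluster admin's machine, not on the new host. A minimal sketch, assuming the cluster uses the standard `nebula-cert` tooling, that `host.csr` is a Nebula public key, and that the CA paths, certificate name, and mesh IP below are placeholders:
```shell
# Run by a cluster admin on the machine holding the CA material; the paths,
# name, and mesh IP are placeholders chosen by the admin for the new host.
nebula-cert sign \
  -ca-crt /path/to/ca.crt -ca-key /path/to/ca.key \
  -name "new-lighthouse" -ip "192.168.100.1/24" \
  -in-pub host.csr -out-crt host.crt
# Return host.crt to the requester; the CA key never leaves the admin's machine.
```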
## Node installation
## Installation
1. Clone the repo.
2. Create a directory to hold the config and certificates.
3. Copy `node-config.yaml` as `config.yaml` in the new directory.
4. Update the `docker-compose-node.yaml` to bind mount the newly created directory to `/etc/nebula`; check and set a value for the `/storage` bind mount.
5. Run the container with `docker-compose up -d`. This will create two files, `host.key` and `host.csr`.
6. Send the contents of the `host.csr` file to a cluster admin to sign.
7. The returned, signed certificate should be placed alongside the `host.csr` file and named `host.crt`.
8. Start the container again; it should find the config and certificates and connect to the existing cluster.
2. Create two directories: one for the Nebula config and certificates, the other for the SeaweedFS config and certificates.
3. Create `config.yaml` in the Nebula config directory.
   1. Use `config-node.yaml` as the template for a normal cluster node.
   2. Use `config-lighthouse.yaml` as the template for a Lighthouse.
4. Update the `docker-compose.yaml` volume values to bind-mount both the Nebula and SeaweedFS config directories; check and set a value for the `/storage` bind mount.
   1. Set the `LIGHTHOUSE` environment variable to `true` for a Lighthouse.
5. Decrypt and un-tar the contents of the `seaweed-conf.enc` file into the SeaweedFS config directory.
   ```shell
   openssl enc -aes-256-cbc -iter 30 -d -salt -in seaweed-conf.enc | (cd /path/to/infranet/config/seaweedfs && tar xvz)
   ```
   Ask a cluster admin or member for the password.
6. Run the container with `docker-compose up -d` (a combined command sketch for these steps follows the list). This will create two files in the Nebula config directory: `host.key` and `host.csr`.
7. Send the contents of the `host.csr` file to a cluster admin to sign.
8. The returned, signed certificate should be placed alongside the `host.csr` file and named `host.crt`.
9. Start the container again; it should find the config and certificates and connect to the existing cluster.
   1. For a Lighthouse: open a pull request updating the `static_host_map` entry in the repo's `node-config.yaml` with the Lighthouse's Nebula mesh and public IP addresses, and encourage node admins to update their nodes' config files from the repo.
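Taken together, the steps above amount to something like the following sketch (hedged: the repository URL, directory paths, chosen config template, and the `infranet` service name are placeholders or assumptions, not fixed by this README):
```shell
# Placeholders throughout: adjust the repo URL, paths, and config template to your host.
git clone https://example.com/infranet.git && cd infranet
mkdir -p /path/to/infranet/config/nebula /path/to/infranet/config/seaweedfs

# Step 3: pick the template matching the host's role (config-lighthouse.yaml for a Lighthouse).
cp config-node.yaml /path/to/infranet/config/nebula/config.yaml

# Step 5: decrypt the SeaweedFS configuration (ask a cluster admin for the password).
openssl enc -aes-256-cbc -iter 30 -d -salt -in seaweed-conf.enc \
  | (cd /path/to/infranet/config/seaweedfs && tar xvz)

# Steps 6-9: the first run generates host.key and host.csr; once the signed
# host.crt is placed next to them, start the container again.
docker-compose up -d
docker-compose restart infranet
```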


@@ -1,20 +0,0 @@
---
version: "3"
services:
  infranet:
    container_name: infranet
    image: dcr.wallace.sh/scott/infranet:latest
    volumes:
      - /path/to/infranet/config/nebula:/etc/nebula
      - /path/to/infranet/config/seaweedfs:/etc/seaweedfs
      - /path/to/infranet/filestore:/storage
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    environment:
      - TZ=UTC
      - LIGHTHOUSE=true
    ports:
      - 4242:4242/udp
    restart: unless-stopped
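For context, a compose file of this shape can be sanity-checked with the standard docker-compose workflow (the path and the `infranet` service name simply follow the example above):
```shell
cd /path/to/infranet             # placeholder path, matching the volumes above
docker-compose config            # validate the file and print the resolved configuration
docker-compose logs -f infranet  # after `docker-compose up -d`, watch host.key / host.csr being generated
```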