Over the last few days I have been working on moving some of my on-prem services to the cloud. I've made a lot of changes and want to document everything here, mostly for my own reference — and this is my blog!

I moved my SWAG server to a DigitalOcean droplet. I did this because every time I move stuff around in my home office I end up knocking things offline, and I hate that.

SWAG

Domains: https://sisto.xyz https://joshsisto.com https://sisto.blog

docker-compose.yaml

---
version: "2.1"
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - URL=sisto.xyz
      - VALIDATION=http
      - SUBDOMAINS=vpn,monitor,portfolio,notes #optional
      - CERTPROVIDER= #optional
      - DNSPLUGIN=cloudflare #optional
      - PROPAGATION= #optional
      - DUCKDNSTOKEN= #optional
      - EMAIL= #josh@joshsisto.com #optional
      - ONLY_SUBDOMAINS=false #optional
      - EXTRA_DOMAINS=joshsisto.com,notes.joshsisto.com,blog.joshsisto.com,sisto.blog #optional
      - STAGING=false #optional
    volumes:
      - /path/to/appdata/config:/config
    ports:
      - 443:443
      - 80:80 #optional
    restart: unless-stopped
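With the compose file in place, the stack can be brought up and certificate issuance watched in the logs. This is the standard compose workflow (the container name `swag` comes from the file above; on newer Docker installs `docker compose` replaces `docker-compose`):

```shell
# Start SWAG in the background
docker-compose up -d

# Follow the logs to confirm Let's Encrypt certs were issued
# for the URL, SUBDOMAINS, and EXTRA_DOMAINS listed above
docker logs -f swag
```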

sisto.xyz

I have sisto.xyz running as the main website on the SWAG instance. I thought I was just going to play around with it, but now it is in production. Instead of using the default /path/to/appdata/config/www/index.html, I changed the root path in the /path/to/appdata/config/nginx/site-confs/default file.

I updated the root path to point to where I copied my site.

# main server block
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    root /config/startbootstrap-the-big-picture/dist;

This required me to copy my site data to /path/to/appdata/config/.
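The copy might look something like this — the source path is a guess on my part, but the destination matches the `root` directive above:

```shell
# Copy the site into SWAG's config volume so nginx inside the container
# can serve it from /config/startbootstrap-the-big-picture/dist
cp -r ~/startbootstrap-the-big-picture /path/to/appdata/config/
```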

joshsisto.com

For the joshsisto.com domain I needed to create a second www folder. I used the following command to copy it over:

cp -r /path/to/appdata/config/www /path/to/appdata/config/www2

www2 is where I will update index.html to control the joshsisto.com website. I also need to copy the default file for the domain:

cp -r /path/to/appdata/config/nginx/site-confs/default /path/to/appdata/config/nginx/site-confs/default2

default2

server {
    listen 80;
    listen [::]:80;
    server_name joshsisto.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    root /config/www2;
    server_name joshsisto.com;

    # all ssl related config moved to ssl.conf
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
#        proxy_pass http://10.1.1.241:8080;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    root /config/www2;
    index index.html index.htm index.php;

    server_name notes.joshsisto.com;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
#        auth_basic "Restricted";
#        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://10.48.0.5:8999;
    }
}
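After adding a new site conf it's worth validating the syntax before reloading. Something along these lines should work against the SWAG container (container name from the compose file above):

```shell
# Check the combined nginx config for errors, then reload without downtime
docker exec swag nginx -t && docker exec swag nginx -s reload
```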

sisto.blog

For sisto.blog I copied over the www folder and default file again. Here is default3:

server {
    listen 80;
    listen [::]:80;
    server_name sisto.blog;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    root /config/www3;
    server_name sisto.blog;

    # all ssl related config moved to ssl.conf
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass http://10.48.0.5:4000;
    }
}

Some of these services are still running at my house because I am only paying for a $5 droplet, which gets me 1 core and 1 GB of RAM, and I don't want them exposed directly to the internet. Previously I used Cloudflare to proxy traffic, effectively masking my public IP address. This time I used SSH tunneling to connect from the server to my home services. I created port forwards on my firewall that only allow access from my DigitalOcean server.
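On a Linux-based firewall the source restriction might look something like this — a sketch with placeholder addresses, where `<Droplet_IP>` is the DigitalOcean server and 22223 is one of the forwarded SSH ports:

```shell
# Accept SSH on the forwarded port only from the droplet's public IP,
# and drop the same port for everyone else
iptables -A INPUT -p tcp -s <Droplet_IP> --dport 22223 -j ACCEPT
iptables -A INPUT -p tcp --dport 22223 -j DROP
```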

I use the following script to connect to my home servers. Make sure to replace <Home_IP> with the public IP you are trying to connect to.

#!/bin/bash
# Re-establish each SSH tunnel if it is not already running.
# -f: background after auth, -N: no remote command, -T: no pty.

start_tunnel() {
    # pgrep -f matches against the full command line (and, unlike
    # piping ps through grep, never matches itself), so the tunnel
    # is only started when no identical ssh process already exists.
    if ! pgrep -f "$1" > /dev/null; then
        $1
    fi
}

start_tunnel "ssh -fNT -p 22224 pi@<Home_IP> -L 10.48.0.5:5000:192.168.88.27:5000"
start_tunnel "ssh -fNT -p 22223 ubuntu@<Home_IP> -L 10.48.0.5:8999:10.1.1.241:8999"
start_tunnel "ssh -fNT -p 22222 ubuntu@<Home_IP> -L 10.48.0.5:7878:10.1.1.242:80"
start_tunnel "ssh -fNT -p 22225 ubuntu@<Home_IP> -L 10.48.0.5:4000:10.1.1.235:4000"

I run this script as a cron job every minute so the tunnels automatically reconnect if a session is dropped:

* * * * * /home/josh/autossh.sh
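To confirm the tunnels are actually up on the droplet, each forwarded port should show as an ssh process listening on 10.48.0.5:

```shell
# List listening TCP sockets; each tunnel binds one local port
# (5000, 8999, 7878, 4000) on the 10.48.0.5 address
ss -tlnp | grep 10.48.0.5
```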