I rent a VPS from the excellent OVH and use it to host my wikis and run my own VPN to avoid the pernicious Snoopers' Charter.

Web Hosting

I wanted to host this wiki and a second one, sheffieldboulder.uk, on my VPS. This requires installing and configuring a web server as well as the DokuWiki software. In the past I've used lighttpd as a web server but struggled to set it up to serve multiple domains, so instead I opted for nginx.

Installation

Install the following software…

snippet.bash
pacman -Syu dokuwiki certbot-nginx nginx php-fpm certbot

nginx

TLS/SSL LetsEncrypt

It is good practice to use HTTPS everywhere, and that requires that servers are configured to do so. You can self-certify using OpenSSL or use LetsEncrypt; I've opted for the latter (the former is documented under Arch Linux Wiki : nginx - TLS/SSL if you would rather use that method). The [Arch Linux Wiki : LetsEncrypt](https://wiki.archlinux.org/index.php/Let%E2%80%99s_Encrypt) page is the primary source of information.

Configuration

Initially you will not be able to run your nginx server over https as you don't have a certificate, so in the first instance you must define a server on port 80 (http) in order to obtain one. The Arch Linux Wiki : LetsEncrypt provides code for such a server, which should be placed within the http { } block of /etc/nginx/nginx.conf…

snippet.bash
#user http;
#user html;
worker_processes  2;
 
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
 
#pid        logs/nginx.pid;
 
 
events {
    worker_connections  1024;
}
 
http {
  server {
    listen 80;
    listen [::]:80;
    server_name domain.tld;
    root /usr/share/nginx/html;
    location / {
      index index.htm index.html;
    }
 
    # ACME challenge
    location ^~ /.well-known/acme-challenge/ {
      default_type "text/plain";
      root /var/lib/letsencrypt;
    }
  }
}

Create the directory that will be used for the ACME challenge (I found this slightly confusing as the Wiki says this should be the publicly readable web-root, yet I went with /var/lib/letsencrypt and it seems to have worked)…

snippet.bash
mkdir /var/lib/letsencrypt

Obtain your certificate…

snippet.bash
certbot certonly --email your@email.com --webroot -w /var/lib/letsencrypt/ -d my-domain.org

Your certificates reside in /etc/letsencrypt/live/my-domain.org/ as symbolic links to the ../../archive directory (presumably so its easy to update in the future).
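LetsEncrypt certificates only last 90 days, so renewal is worth automating. A minimal sketch using a systemd timer (the certbot-renew unit names are my own invention; --deploy-hook is a standard certbot flag that runs its command only after a successful renewal):

```ini
# /etc/systemd/system/certbot-renew.service  (hypothetical unit name)
[Unit]
Description=Renew LetsEncrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --deploy-hook "systemctl reload nginx"

# /etc/systemd/system/certbot-renew.timer
[Unit]
Description=Run certbot renew twice daily

[Timer]
OnCalendar=*-*-* 00,12:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now certbot-renew.timer.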

Switch to TLS/SSL

You must now configure nginx to use the certificates and, ideally, redirect http requests on port 80 to https on port 443. Modify your /etc/nginx/nginx.conf

snippet.bash
#user http;
#user html;
worker_processes  2;
 
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
 
#pid        logs/nginx.pid;
 
 
events {
    worker_connections  1024;
}
 
 
http {
    include       mime.types;
    default_type  application/octet-stream;
 
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
 
    #access_log  logs/access.log  main;
 
    sendfile        on;
    #tcp_nopush     on;
 
    #keepalive_timeout  0;
    keepalive_timeout  65;
 
    #gzip  on;
 
    # SSL
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s; # Google DNS servers; change if you wish
    resolver_timeout 5s;
 
    # Redirect to HTTPS
    server {
        listen 80;
        server_name localhost;
        return 301 https://$host$request_uri; # $host preserves the requested domain ($server_name would redirect to "localhost" here)
    }
    # my-domain.org
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name  my-domain.org;
        root   /usr/share/nginx/html/my-domain;
        index doku.php;
        #Remember to comment the below out when you're installing DokuWiki, and uncomment it when you're done.
        # location ~ /(data/|conf/|bin/|inc/|install.php) { deny all; } # secure Dokuwiki
 
        # LetsEncrypt : ACME challenge
        # location ^~ /.well-known/acme-challenge/ {
        #     default_type "text/plain";
        #     root /var/lib/letsencrypt;
        # }
 
        # LetsEncrypt : Certificates
        ssl_certificate /etc/letsencrypt/live/my-domain.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/my-domain.org/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/my-domain.org/chain.pem;
    }
}

You should now have a working server that uses TLS/SSL. Enable and start both php-fpm and the nginx server and visit your IP address…

snippet.bash
systemctl enable --now php-fpm.service
systemctl enable --now nginx.service

It should resolve to https://xxx.xxx.xxx.xxx/index.html with a green lock to indicate it's secure. If all is good you are ready to forge ahead with installing DokuWiki.

DokuWiki

You've got a certificate and nginx is now up and running, but you want to deploy DokuWiki so that you and others can create content. You've installed it above (if you've been following these instructions; if not, pacman -Syu dokuwiki), and by default the DokuWiki software is installed at /usr/share/webapps/dokuwiki/. If like me you want to deploy multiple wikis, you need to copy these files to a separate directory and adjust some symbolic links. I opted to place everything under the nginx directory /usr/share/nginx/html…

snippet.bash
mkdir /usr/share/nginx/html/my-domain
'cp' -r /usr/share/webapps/dokuwiki/. /usr/share/nginx/html/my-domain/.

The conf directory is a symbolic link to /etc/webapps/dokuwiki, whilst the data, lib/plugins and lib/tpl directories are symbolic links to the same directories under /var/lib/dokuwiki. As I intend to run multiple wikis, I needed to tweak this to allow two DokuWiki installations in different locations: copy the files as installed into per-wiki directories and adjust the symbolic links appropriately…

snippet.bash
mkdir /etc/webapps/dokuwiki/my-domain /var/lib/dokuwiki/my-domain
mv /etc/webapps/dokuwiki/*.php* /etc/webapps/dokuwiki/*.conf /etc/webapps/dokuwiki/my-domain/.
rm /usr/share/nginx/html/my-domain/conf
ln -s /etc/webapps/dokuwiki/my-domain /usr/share/nginx/html/my-domain/conf
mv    /var/lib/dokuwiki/tpl /var/lib/dokuwiki/my-domain/.
rm    /usr/share/nginx/html/my-domain/tpl
ln -s /var/lib/dokuwiki/my-domain/tpl /usr/share/nginx/html/my-domain/tpl
mv     /var/lib/dokuwiki/data /var/lib/dokuwiki/my-domain/.
rm     /usr/share/nginx/html/my-domain/data
ln -s /var/lib/dokuwiki/my-domain/data /usr/share/nginx/html/my-domain/data
mv    /var/lib/dokuwiki/plugins /var/lib/dokuwiki/my-domain/.
rm    /usr/share/nginx/html/my-domain/plugins
ln -s /var/lib/dokuwiki/my-domain/plugins /usr/share/nginx/html/my-domain/plugins
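The symlink dance above is easy to fumble if you add further wikis, so it can be wrapped in a small script. This is a hedged sketch of my own devising (setup_wiki is not part of any package); it mirrors the per-wiki layout used above, and takes an optional prefix so you can dry-run it against a scratch directory:

```shell
# setup_wiki NAME PREFIX - hypothetical helper mirroring the steps above for
# one DokuWiki instance. Pass PREFIX="" for the real filesystem, or a scratch
# directory to dry-run it. Move any existing conf/data files in beforehand.
setup_wiki() {
  name="$1"
  prefix="$2"
  web="$prefix/usr/share/nginx/html/$name"
  conf="$prefix/etc/webapps/dokuwiki/$name"
  lib="$prefix/var/lib/dokuwiki/$name"

  mkdir -p "$web" "$conf" "$lib/data" "$lib/plugins" "$lib/tpl"
  for d in conf data plugins tpl; do
    rm -rf "$web/$d"                    # drop the stock symlink (or copied dir)
  done
  ln -s "$conf" "$web/conf"             # per-wiki configuration
  for d in data plugins tpl; do
    ln -s "$lib/$d" "$web/$d"           # per-wiki state under /var/lib
  done
}

# e.g. setup_wiki my-domain ""
```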

A new server definition is now required that ensures requests for .php files are handled by php-fpm (NB - this is required for every server you define). The /etc/nginx/nginx.conf should now look like…

snippet.bash
#user http;
#user html;
worker_processes  2;
 
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
 
#pid        logs/nginx.pid;
 
 
events {
    worker_connections  1024;
}
 
 
http {
    include       mime.types;
    default_type  application/octet-stream;
 
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
 
    #access_log  logs/access.log  main;
 
    sendfile        on;
    #tcp_nopush     on;
 
    #keepalive_timeout  0;
    keepalive_timeout  65;
 
    #gzip  on;
 
    # SSL
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s; # Google DNS servers; change if you wish
    resolver_timeout 5s;
 
    # Redirect to HTTPS
    server {
        listen 80;
        server_name localhost;
        return 301 https://$host$request_uri; # $host preserves the requested domain ($server_name would redirect to "localhost" here)
    }
    # my-domain.org
    server {
        # listen       80;
        # listen [::]:80;
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name  my-domain.org;
        root   /usr/share/nginx/html/my-domain;
        index doku.php;
        #Remember to comment the below out when you're installing DokuWiki, and uncomment it when you're done.
        # location ~ /(data/|conf/|bin/|inc/|install.php) { deny all; } # secure Dokuwiki
 
        # LetsEncrypt : ACME challenge
        # location ^~ /.well-known/acme-challenge/ {
        #     default_type "text/plain";
        #     root /var/lib/letsencrypt;
        # }
 
        # LetsEncrypt : Certificates
        ssl_certificate /etc/letsencrypt/live/my-domain.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/my-domain.org/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/my-domain.org/chain.pem;
 
 
        location ~ /\.ht { deny all; } # also secure the Apache .htaccess files (note the space after ~, which nginx requires)
        # Have requests for php handled by php-fpm
        location @dokuwiki {
            #rewrites "doku.php/" out of the URLs if you set the userewrite setting to .htaccess in dokuwiki config page
            rewrite ^/_media/(.*) /lib/exe/fetch.php?media=$1 last;
            rewrite ^/_detail/(.*) /lib/exe/detail.php?media=$1 last;
            rewrite ^/_export/([^/]+)/(.*) /doku.php?do=export_$1&id=$2 last;
            rewrite ^/(.*) /doku.php?id=$1&$args last;
        }
 
        location / { try_files $uri $uri/ @dokuwiki; }
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
        #charset koi8-r;
 
        #access_log  logs/host.access.log  main;
 
        #error_page  404              /404.html;
 
        # redirect server error pages to the static page /50x.html
        #
        # error_page   500 502 503 504  /50x.html;
        # location = /50x.html {
        #     root   /usr/share/nginx/html;
        # }
    }
}

If this is a fresh install then, assuming you have purchased your domain and sorted out DNS entries so it points at your IP address, you can open https://my-domain.org/install.php, follow the instructions for completing a fresh install, and then start creating pages. I migrated pages, themes and plugins from a previous DokuWiki installation so already had pages.

To add a second DokuWiki with a separate domain, repeat all of these steps (including LetsEncrypt certification) using the domain of your second site and adjusting the directories that conf/pages/plugins/tpl are located in (e.g. replace all my-domain with my-other-domain).
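Since each extra wiki needs a near-identical server block, I find it less error-prone to generate the repetitive part. A hedged sketch of my own (gen_server_block is a made-up helper, trimmed to the domain-specific lines; the ACME, security and php-fpm location blocks from the full example above still need adding):

```shell
# gen_server_block DOMAIN - hypothetical generator that prints the
# domain-specific part of a per-wiki nginx server block to stdout,
# mirroring the paths used in the configuration above.
gen_server_block() {
  domain="$1"
  cat <<EOF
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name  $domain;
        root   /usr/share/nginx/html/$domain;
        index doku.php;

        ssl_certificate /etc/letsencrypt/live/$domain/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/$domain/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/$domain/chain.pem;
    }
EOF
}

# e.g. gen_server_block my-other-domain.org, then paste into /etc/nginx/nginx.conf
```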

Obviously you need to restart the nginx server each time you modify /etc/nginx/nginx.conf for the changes to take effect and be accessible.

snippet.bash
systemctl restart nginx.service

FreshRSS

I like RSS and was disappointed when Google removed their RSS News Feeds. Since then I've floundered looking for an aggregator I liked, but recently stumbled across FreshRSS so thought I'd give it a whirl as it's self-hosted.

User www

If you want to serve up random pages/files from a user's ~/www/ directory it's pretty straightforward. Add the following inside a server { } block in /etc/nginx/nginx.conf…

snippet.bash
location ~ ^/~(.+?)(/.*)?$ {
    alias /home/$1/www$2;
    autoindex on;
}
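For this to work the nginx worker (user http on Arch) must be able to traverse the user's home directory and read the files. A hedged sketch of the filesystem side (prepare_userdir, the alice example and the sample page are all my own invention):

```shell
# prepare_userdir HOME_DIR - hypothetical helper: create ~/www with a test
# page and open the minimum permissions the nginx worker needs.
prepare_userdir() {
  home_dir="$1"
  mkdir -p "$home_dir/www"
  echo '<h1>It works</h1>' > "$home_dir/www/index.html"
  chmod o+x "$home_dir"                 # traverse permission only, not read
  chmod -R o+rX "$home_dir/www"         # pages world-readable, dirs traversable
}

# e.g. prepare_userdir /home/alice, then browse to https://my-domain.org/~alice/
```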

Virtual Private Network

OpenVPN

I wanted to have my own VPN rather than paying someone else for the service so needed to install and configure OpenVPN. There are a lot of configuration options and the simplest solution was to use the modified OpenVPN Install script.

Wireguard

A newer, faster and more secure approach is to use Wireguard, which works at the kernel level to provide a VPN connection. The entry in the Arch Linux Wiki has a Specific Use Case : VPN Server which I followed to get this working. After exchanging public and pre-shared keys between my server/VPS and my phone I could connect to the VPN running on my VPS, but the IP address shown when visiting whatismyipaddress.com was still that of my ISP rather than that of my VPS, as is the case when connected via OpenVPN. Thus some troubleshooting was required.

snippet.bash
[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = #####
DNS = 1.1.1.1
 
# note - substitute eth0 in the following lines to match the Internet-facing interface
# if the server is behind a router and receives traffic via NAT, these iptables rules are not needed
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
 
[Peer]
PublicKey = #####
PresharedKey = #####
AllowedIPs = 10.200.200.2/32

Once this is configured you can start the interface with wg-quick up wg0, but to make the service available on boot you must enable it.

snippet.bash
systemctl enable wg-quick@wg0.service

TODO - Check the above works and starts the service on boot.

Secure DNS

At some point I'll go through BetterWayElectronics/secure-wireguard-implementation: A guide on implementing a secure Wireguard server on OVH with DNSCrypt, Port Knocking & an SSH-Honeypot (https://github.com/BetterWayElectronics/secure-wireguard-implementation).

Wirt

Wirt (https://wirt.network/) looks like a handy configuration tool.

Backups

OVH discontinued support for Arch Linux, but my existing install persists. I therefore take manual snapshots periodically (as I'm too cheap to purchase automated backups). This requires that qemu-guest-agent is installed and running (see here). It is by default, but should it ever need installing manually under Arch Linux the steps are…

snippet.bash
pacman -Syu qemu-guest-agent
ls -l /dev/virtio-ports/org.qemu.guest_agent.0
lrwxrwxrwx 1 root root 11 Aug 23 09:38 /dev/virtio-ports/org.qemu.guest_agent.0 -> ../vport2p1
systemctl enable --now qemu-guest-agent.service

You can then use the Dashboard on the OVH Panel to create a snapshot, although you can only have one snapshot at a time and so are required to delete old ones before taking another.

Links

Web Hosting

Virtual Private Network

OpenVPN

Wireguard

Testing Security

linux/arch/vps.txt · Last modified: 2021/11/06 08:36 by admin
CC Attribution-Share Alike 4.0 International