Arch Linux is a lightweight, rolling-release distribution with good package management (via `pacman`) and excellent documentation. I've started using the Arch Linux ARM (ALARM) version on Raspberry Pis and the x86_64 version on a Virtual Private Server, and more recently switched from Gentoo to Arch Linux on my Asus UX21E as it was struggling to compile larger packages (and I couldn't get distcc working).
Generic issues are documented below.
Initial Setup
Any fresh install can be made more secure with a few housekeeping tasks such as disabling root login, changing the port the SSH daemon listens on, adding a user account, etc.
SSH
Even if you are not running Arch on a VPS, chances are you will want to connect to your system remotely, and a few simple changes to the SSH daemon can improve security and reduce the chances of getting hacked. Modify the following lines in /etc/ssh/sshd_config to prevent root from logging in remotely and to change the port the SSH daemon runs on (remember to make a note of the port).
- snippet.bash
PermitRootLogin no
Port 8946
As you've changed the port, it is sensible to set the port that is used on any computer you log in from using the ~/.ssh/config file. If your server has IP address 123.134.145.156 then an entry should look like the following (assuming you are using SSH keys and keychain to assist with your login)…
- snippet.bash
Host arch
    Hostname 123.134.145.156
    Port 8946
    User you
    IdentityFile ~/.ssh/id_ed25519
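With that entry in place the alias alone is enough to connect, and scp picks up the same settings from ~/.ssh/config (host alias, port, user and key as above)…

- snippet.bash

```shell
# Connect using the Host alias defined in ~/.ssh/config
ssh arch

# scp honours the same host name, port, user and key
scp backup.tar.gz arch:~/
```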
etckeeper
etckeeper uses Git to version control the system configuration files that reside under /etc/, and you can optionally set up a repository online should you need to restore your system. Once you've got everything up and running, deploying it is as simple as…
- snippet.bash
etckeeper init
Edit /etc/.git/config
to reflect the repository that you set up on GitLab (recommended as it can be private) or GitHub. Enable a hook for pushing after commits by adding the following to /etc/etckeeper/etckeeper.conf
- snippet.bash
PUSH_REMOTE="origin"
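A typical workflow after editing anything under /etc is then a sketch of the usual Git cycle, which etckeeper wraps…

- snippet.bash

```shell
# See what has changed under /etc since the last commit
etckeeper vcs status
etckeeper vcs diff

# Commit (and, with PUSH_REMOTE set, push) the changes
etckeeper commit "Change SSH port in sshd_config"
```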
Package Management
Installation
Pretty straightforward using [[https://wiki.archlinux.org/index.php/pacman|pacman]]
Install
- snippet.bash
pacman -Syu [package_name]
Search
- snippet.bash
pacman -Qs [keyword] # Search package database
pacman -Ss [keyword] # Search sync database
pacman -Fs [keyword] # Search file database
Uninstall
- snippet.bash
pacman -Rs [package_name]
Masking Packages
Occasionally you may need to mask packages. This is done by adding the package to IgnorePkg in /etc/pacman.conf. Multiple entries should be space separated, glob patterns are accepted, and it's also possible to use IgnoreGroup. See ArchLinux Wiki : Pacman.
- snippet.bash
IgnorePkg linux-firmware liburing
Database Locks
If Pacman complains about…
- snippet.bash
:: Synchronising package databases...
error: failed to update core (unable to lock database)
error: failed to update extra (unable to lock database)
error: failed to update community (unable to lock database)
error: failed to synchronize all databases
Then, provided no other instance of Pacman is running, you can…
- snippet.bash
rm /var/lib/pacman/db.lck
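To be safe you can make the removal conditional on no pacman process actually running (a small sketch using pgrep)…

- snippet.bash

```shell
# Only remove the lock if no pacman instance is running
pgrep -x pacman || rm /var/lib/pacman/db.lck
```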
Cleaning Up
Over time downloaded packages accumulate in /var/cache/pacman/pkg/ and need cleaning up. This can be done manually…
- snippet.bash
rm /var/cache/pacman/pkg/*
…or using pacman
which will retain currently installed versions and only remove old ones…
- snippet.bash
pacman -Sc
An alternative is to use paccache
(part of the pacman-contrib
package). The default retains the last three versions…
- snippet.bash
paccache -r
You can clean out all uninstalled packages with…
- snippet.bash
paccache -ruk0
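It's worth checking what you'd gain first; paccache has a dry-run mode, so a before-and-after check looks something like…

- snippet.bash

```shell
du -sh /var/cache/pacman/pkg/   # current cache size
paccache -d                     # dry run: report what would be removed
paccache -r                     # actually remove old versions
du -sh /var/cache/pacman/pkg/   # size after cleaning
```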
Downgrade
Arch keeps packages in /var/cache/pacman/pkg
and these can be used to downgrade packages.
- snippet.bash
pacman -U file:///var/cache/pacman/pkg/<package>.pkg.tar.xz
If you limit the number of historical packages retained in order to save space then the Arch Linux Archive can be used to obtain older packages (for ArchLinuxARM there are tardis and alaa).
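Packages can also be installed straight from the Arch Linux Archive by URL; the version and filename below are illustrative only (browse archive.archlinux.org for the real path)…

- snippet.bash

```shell
# Install a specific historical kernel package directly from the archive
# (example filename only, check archive.archlinux.org for the actual package path)
pacman -U https://archive.archlinux.org/packages/l/linux/linux-5.9.14.arch1-1-x86_64.pkg.tar.zst
```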
Extra
There are various ways in which using pacman can be streamlined; one major one is to use hooks, which are small scripts that execute certain commands after certain events.
Cleaning Cache with a hook.
Install community/pacman-contrib package, then place the following in /etc/pacman.d/hooks/clean-cache.hook
- snippet.bash
[Trigger]
Operation = Remove
Operation = Install
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Removing unnecessary cached files (keeping the latest two)…
When = PostTransaction
Exec = /usr/bin/paccache -rvk2
File belongs to which package
- snippet.bash
pacman -Qo <file>
Corrupt PGP signatures
Occasionally things get out of sync and a package's PGP signature won't match. To get back on track you can generally just update the signatures with…
- snippet.bash
pacman -S archlinux-keyring
Explicitly installed packages
You can list the explicitly installed packages that you installed yourself with…
- snippet.bash
pacman -Qet
Save them to a file without the version with…
- snippet.bash
pacman -Qet | awk '{ print $1 }' > installed_packages.txt
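The saved list can later be used to reinstall everything in one go; pacman reads package names from stdin when given `-`…

- snippet.bash

```shell
# Reinstall every package in the list, skipping any already present
pacman -S --needed - < installed_packages.txt
```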
AUR
The Arch User Repository contains packages that are outside the official package repository and are maintained by users. Installing is easy, navigate to the package page and copy the Git URL then as a normal user…
- snippet.bash
cd && mkdir -p aur && cd aur
git clone https://aur.archlinux.org/[package].git
cd [package]
# The -c flag should clean up after a successful install, keeping the directory tidy and small
makepkg -sric
Install pre-built package
There are various AUR helpers available that facilitate using packages from the Arch User Repository. I opted to give pikaur a whirl…
- snippet.bash
cd ~/aur
git clone https://aur.archlinux.org/pikaur.git
cd pikaur
makepkg -sric
To update packages installed by AUR use pikaur -Syu
as you would pacman -Syu
.
Keys
If you encounter the situation where the PGP signatures for packages are corrupted and you can't install anything, the following resets things and allows you to sync and install again…
- snippet.bash
rm -R /etc/pacman.d/gnupg/
rm -R /root/.gnupg/ # only if the directory exists
gpg --refresh-keys
pacman-key --init && pacman-key --populate
pacman-key --refresh-keys
On one occasion I still encountered the following error message…
- snippet.bash
error: GPGME error: No data
error: failed to synchronize all databases (invalid or corrupted database (PGP signature))
…and found that removing the existing synced databases solved things…
- snippet.bash
rm -rf /var/lib/pacman/sync/*
File already exists
If you encounter Pacman reporting that a file already exists then you should read pacman error: FILENAME exists in filesystem / Newbie Corner / Arch Linux Forums.
You can attempt to fix this by installing the individual package and using the --overwrite
option, as long as you are sure you know what package owns the file (you can check this first).
- snippet.bash
pacman -Qo /path/to/file/that/exists
# This overwrites ALL files that are part of the package if any already exist on the system
pacman -S some_package --overwrite '*'
systemd
Arch Linux uses systemd by default, whilst Gentoo uses OpenRC, so I have noted down some useful commands for adding/starting/stopping/checking services here.
Add a service and start it
- snippet.bash
systemctl enable --now [name].service
Check the Status
- snippet.bash
systemctl status [name].service
Restart
After changing the configuration of a service you have to first daemon-reload and then restart the service…
- snippet.bash
systemctl daemon-reload
systemctl restart [name].service
Listing Enabled Units
List enabled unit files with…
- snippet.bash
systemctl list-unit-files --state=enabled
Enabling and Disabling Units
Check and enable units that are started on boot with…
- snippet.bash
systemctl is-enabled [module name]
systemctl enable [module name]
systemctl enable --now [module name]
systemctl disable [module name]
User Services
Users can have their own services run by systemd; simply enable them as the user using…
- snippet.bash
systemctl --user enable [service]
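By default user services only run while the user has a session open; to have them start at boot and survive logout, enable lingering for that account (the user name here is a placeholder)…

- snippet.bash

```shell
# Let user services for "you" run without an active login session
loginctl enable-linger you
```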
journald logs
By default systemd logs to /var/log/journal/
and over time this can fill up. You can control how much space is used overall and by individual files (lower size means more log rotation) by editing /etc/systemd/journald.conf
and setting the various parameters such as SystemMaxUse
, SystemKeepFree
, SystemMaxFileSize
, SystemMaxFiles
and their Runtime
equivalents (see man journald.conf
for full details). I've opted to set some limits on my VPS and Pis…
- snippet.bash
SystemMaxUse=250M
SystemMaxFileSize=50M
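These limits only apply going forward; existing journals can be trimmed immediately to match (the size mirrors the config above)…

- snippet.bash

```shell
# Shrink archived journal files down to the configured cap right away
journalctl --vacuum-size=250M
```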
Autorestarting crashed applications
Really you should work out why the service is crashing, but it is possible to restart services automatically. Add the following to the <service-name>.service unit (from Auto-restart a crashed service in systemd)…
- snippet.bash
[Unit]
...
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
...
Restart=on-failure
RestartSec=5s
netctl
Troubleshooting
I encountered a problem whereby networking on one of my Raspberry Pis failed and it would not connect on booting. After some digging around I found a few mistakes had been made, thanks to a very useful old thread.
- Firstly, you should not enable any networking under systemctl if you are using netctl. Check what is enabled with systemctl --type=service; if there are any dhcpcd@[if].service entries then disable them (see also the third point).
- Secondly, I had not included DNS=('192.168.1.1') in my /etc/netctl/wired configuration.
- Thirdly, as suggested in the thread, there were perhaps some things kicking around in /etc/systemd/system/multi-user.target.wants/ that should be removed.
After changing all of these, things were smoother, but I still found the wlan0 interface was being started on boot even when I didn't have the USB dongle plugged in, which was annoying as it added a 90 second delay. It was because I'd not removed /etc/systemd/system/multi-user.target.wants/dhcpcd\@wlan0.service; I should have paid closer attention to the third point above!
Services
ntp
Network Time Protocol is used to synchronise clocks across the internet, useful for when you use services that rely on accurate dates/times. Install and enable on boot with…
- snippet.bash
pacman -Syu ntp
systemctl enable --now ntpd.service
fcron
I use fcron as my systems aren't necessarily up all the time. The Cron - ArchWiki page is useful but doesn't tell you how to allow or deny users, the Cron - Gentoo Wiki page does though. You add all
to /etc/fcron/fcron.deny
and then add explicit users by name to /etc/fcron/fcron.allow
. You then need to enable the fcron service with systemctl
(see next section).
- snippet.bash
systemctl enable --now fcron.service # Enable and start the service all in one
OpenVPN
OBSOLETE - Use WireGuard instead.
I decided to install and run my own VPN service on my VPS rather than pay for a separate VPN service. The instructions on OpenVPN - ArchWiki are very useful.
Installation
- snippet.bash
pacman -Syu openvpn
Configuration
You need to create configuration files and a simple method of doing so is to use the openvpn-install-advanced script which makes configuration a fair bit easier by asking you key questions.
I'm lazy and opted to share certificates between my devices, so make sure you include the duplicate-cn option (see Multiple OpenVPN Clients Sharing the Same Certificate), otherwise if you have multiple clients connected at the same time one will keep on getting disconnected.
Server
My configuration file (/etc/openvpn/server/server.conf
) looks like…
- snippet.bash
port 7934
proto udp
dev tun
user nobody
group nobody
persist-key
persist-tun
keepalive 10 120
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 1.1.1.1"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
#push "dhcp-option DNS 87.98.175.85"
#push "dhcp-option DNS 130.255.73.90"
#push "dhcp-option DNS 5.135.183.146"
#push "dhcp-option DNS 172.104.136.243"
push "redirect-gateway def1 bypass-dhcp"
crl-verify crl.pem
ca ca.crt
cert server.crt
key server.key
tls-auth tls-auth.key 0
dh dh.pem
auth SHA256
cipher AES-128-CBC
tls-server
tls-version-min 1.2
tls-cipher TLS-DHE-RSA-WITH-AES-128-GCM-SHA256
status /var/log/openvpn-status.log
log /var/log/openvpn.log
verb 3
duplicate-cn
Client
Whilst a client config looks like…
- snippet.bash
client
proto udp
remote ###.###.###.### 7934
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
auth SHA256
auth-nocache
cipher AES-128-CBC
tls-client
tls-version-min 1.2
tls-cipher TLS-DHE-RSA-WITH-AES-128-GCM-SHA256
setenv opt block-outside-dns
verb 3
<ca>
-----BEGIN CERTIFICATE-----
## REDACTED
-----END CERTIFICATE-----
</ca>
<cert>
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2 (0x2)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=ChangeMe
        Validity
            Not Before: Jan 8 22:18:42 2018 GMT
            Not After : Jan 6 22:18:42 2028 GMT
        Subject: CN=client
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    ## REDACTED
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Subject Key Identifier:
                20:9C:F1:8A:FE:D9:3E:10:33:08:A0:D4:1C:F0:D7:0F:D7:1B:D7:69
            X509v3 Authority Key Identifier:
                keyid:5B:70:28:DB:70:28:59:82:75:1B:37:2C:31:A6:9B:8D:A6:BC:35:27
                DirName:/CN=ChangeMe
                serial:E2:4B:2D:62:A3:1B:41:15
            X509v3 Extended Key Usage:
                TLS Web Client Authentication
            X509v3 Key Usage:
                Digital Signature
    Signature Algorithm: sha256WithRSAEncryption
        ## REDACTED
-----BEGIN CERTIFICATE-----
## REDACTED
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
## REDACTED
-----END PRIVATE KEY-----
</key>
key-direction 1
<tls-auth>
# 2048 bit OpenVPN static key
-----BEGIN OpenVPN Static key V1-----
## REDACTED
-----END OpenVPN Static key V1-----
</tls-auth>
nginx
I've opted to use nginx for serving up web-pages (this wiki and sheffieldboulder.uk) and wanted to be a good netizen, so enabled HTTPS using the free certificate service provided by LetsEncrypt.
Install
- snippet.bash
pacman -Syu dokuwiki nginx certbot
Configuration
nginx
Extensive documentation exists and a more detailed walk-through is here. My overall configuration file is below (with some obfuscation).
- snippet.bash
#user http;
#user html;
worker_processes 2;

ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s; # Google DNS Servers change if you wish
resolver_timeout 5s;

# Redirect to HTTPS
server {
    listen 80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

# my-domain.org
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # Domain name and root and redirect requests for index to doku.php
    server_name my-domain.org;
    root /usr/share/nginx/html/my-domain;
    index doku.php;

    # Remember to comment the below out when you're installing DokuWiki, and uncomment it when you're done.
    location ~ /(data/|conf/|bin/|inc/|install.php) { deny all; } # secure Dokuwiki

    # LetsEncrypt : ACME challenge
    # location ^~ /.well-known/acme-challenge/ {
    #     default_type "text/plain";
    #     root /var/lib/letsencrypt;
    # }

    # LetsEncrypt : Certificates
    ssl_certificate /etc/letsencrypt/live/my-domain.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-domain.org/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/my-domain.org/chain.pem;

    # Define server to work with dokuwiki
    location ~ ^/\.ht { deny all; } # also secure the Apache .htaccess files

    location @dokuwiki {
        # rewrites "doku.php/" out of the URLs if you set the userewrite setting to .htaccess in dokuwiki config page
        rewrite ^/_media/(.*) /lib/exe/fetch.php?media=$1 last;
        rewrite ^/_detail/(.*) /lib/exe/detail.php?media=$1 last;
        rewrite ^/_export/([^/]+)/(.*) /doku.php?do=export_$1&id=$2 last;
        rewrite ^/(.*) /doku.php?id=$1&$args last;
    }

    location / {
        try_files $uri $uri/ @dokuwiki;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}
LetsEncrypt/Certbot
Initialisation
On first run simply run certbot; it will detect what sites you have and ask which you want to obtain certificates for.
- snippet.bash
# certbot
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: neils-snaps.co.uk
2: kimura.no-ip.info
3: sheffieldboulder.uk
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):
Renewal
Certificates don't last forever and need renewing. The first time I had to do this I needed some help. The second time I thought I knew what I was doing but still ran into problems and only succeeded through trial and error. I edited /etc/nginx/nginx.conf
and for each server {...}
definition I…
- Disabled server { listen 80; ... }, which redirected requests on port 80 to 443.
- Disabled listen 443 ... and enabled listen 80 under each server configuration.
I could then run certbot to successfully renew my certificates after a --dry-run (if successful, remove this option and re-run).
- snippet.bash
certbot --renew-by-default renew --dry-run
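To avoid doing this dance every few months, renewal can be automated with a systemd timer. Arch's certbot package doesn't ship one, so the unit names and schedule below are my own sketch…

- snippet.bash

```ini
# /etc/systemd/system/certbot-renew.service
[Unit]
Description=Renew LetsEncrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet

# /etc/systemd/system/certbot-renew.timer
[Unit]
Description=Daily certbot renewal check

[Timer]
OnCalendar=daily
RandomizedDelaySec=12h
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now certbot-renew.timer.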
I then reversed all of the above disable/enables and restarted the nginx.service
(systemctl restart nginx.service
). I opted to leave comments in-line in /etc/nginx/nginx.conf
to remind me what I needed to enable/disable.
Securing Your Site
You can test the security of your site at Mozilla Infosec and follow the recommendations to add various security features such as TLS and headers. On the basis of using this site I added the following custom headers under each server {...} definition (so that they apply to each location defined within).
- snippet.bash
server {
    ...
    #####################################
    ##       Normal server config      ##
    #####################################
    # Comment the following lines when renewing certificates (reverse once done!)
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    #####################################
    ##        Security Headers         ##
    #####################################
    # See https://infosec.mozilla.org/guidelines/web_security
    add_header Content-Security-Policy "default-src 'self' always; frame-ancestors 'none';";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload;";
    add_header X-Content-Type-Options "nosniff";
    add_header Referrer-Policy "no-referrer";
    ...
}
DokuWiki
Theme Customisation
I use the arctic theme and decided to customise the style to be Dark Solarized as I find it easier on my eyes. I've not got everything tweaked just yet, but for now the following is a vast improvement on the default white theme (for me at least).
- snippet.php
[stylesheets]
layout.css        = screen
design.css        = screen
style.css         = screen
media.css         = screen
_mediaoptions.css = screen
_admin.css        = screen
_linkwiz.css      = screen
_subscription.css = screen
_mediamanager.css = screen
_tabs.css         = screen
_fileuploader.css = screen
_skiplinks.css    = screen
rtl.css           = rtl
print.css         = print
arctic_layout.css = screen
arctic_design.css = screen
arctic_media.css  = screen
arctic_rtl.css    = rtl
arctic_print.css  = print

; This section is used to configure some placeholder values used in
; the stylesheets. Changing this file is the simplest method to
; give your wiki a new look.
[replacements]

; arctic template LAYOUT
__wiki_width__      = "90%"
__wiki_full_width__ = "100%"
__header_height__   = "5em"
__body_margin__     = "1.5em"
__page_padding__    = "0.5em;"
__footer_padding__  = "2em"

; arctic template FONT-SIZES AND FONT-COLORS
__font_size__      = "0.8125em"
__line_height__    = "150%"
__pagename_color__ = "#d33682"
__logo_color__     = "#d33682"
__headline_color__ = "#d33682"

; arctic template LAYOUT-COLORS
__body_background__   = "#002b36"
__header_background__ = "#436976"
__footer_background__ = "#436976"
__form_border__       = "#c3c3c3"

;--------------------------------------------------------------------------
;------ guaranteed dokuwiki color placeholders that every plugin can use

; main text and background colors
__text__       = "#2aa198"
__background__ = "#002b36"
; alternative text and background colors
__text_alt__       = "#d33682"
__background_alt__ = "#dee7ec"
; neutral text and background colors
__text_neu__       = "#d33682"
__background_neu__ = "#f5f5f5"
; border color
__border__ = "#ccc"
;--------------------------------------------------------------------------

; other text and background colors
__text_other__       = "#859900"
__background_other__ = "#f7f9fa"

; these are used for links
__extern__   = "#436976"
__existing__ = "#56b04f"
__missing__  = "#ed5353"
__visited__  = "#436976"

; highlighting search snippets
__highlight__ = "#ff9"

;--------------------------------------------------------------------------
;------ for keeping old templates and plugins compatible to the old pattern
;       (to be deleted at the next or after next release)
__white__      = "#fff"
__lightgray__  = "#f5f5f5"
__mediumgray__ = "#ccc"
__darkgray__   = "#666"
__black__      = "#000"

; these are the shades of blue
__lighter__ = "#f7f9fa"
__light__   = "#eef3f8"
__medium__  = "#dee7ec"
__dark__    = "#8cacbb"
__darker__  = "#638c9c"

; setup vim: ts=2 sw=2:
Troubleshooting
Suddenly my other DokuWiki install sheffieldboulder.uk stopped showing images; I found a couple of threads here and here with possible solutions.
It turns out the underlying cause was the Content Security Policy I'd added. For now I've disabled these headers, but a few things needed modifying: allowing img-src for external sites (I lazily went for the catch-all * but should really work out the URL for the OSM sources). It seems frame-ancestors 'none' doesn't help either; using a wild card didn't solve this. OSM also requires that a valid referrer be specified under the Referrer Policy, and setting Referrer-Policy "origin" didn't work.
- snippet.bash
# add_header Content-Security-Policy "default-src 'self' always; frame-ancestors 'none'; img-src *";
# add_header Referrer-Policy "origin";
WordPress
I needed to host a WordPress site in order to host, and with a bit of luck sell, some of my pictures. There are instructions, but they only cover configuration with Apache and I'm already using Nginx as a web-server.
Domain
Register a new domain for your site at your preferred Domain Registration service, my new site is neils-snaps.co.uk.
Installation
WordPress uses an SQL database in the background; I don't quite understand the MySQL vs MariaDB issues but went with the latter…
- snippet.bash
pacman -Syu wordpress mariadb
Configuration
MariaDB/MySQL
Follow the Arch Linux MariaDB Installation instructions.
- snippet.bash
mariadb-install-db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
systemctl enable --now mariadb.service
mysql -u root -p

MariaDB> CREATE DATABASE wordpress;
MariaDB> GRANT ALL PRIVILEGES ON wordpress.* TO "wp-user"@"localhost" IDENTIFIED BY "choose_db_password";
MariaDB> FLUSH PRIVILEGES;
MariaDB> EXIT
PHP
You need to enable the mysqli extension in PHP; uncomment the following line in /etc/php/php.ini
- snippet.bash
extension=mysqli
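You can confirm the extension is picked up without restarting anything…

- snippet.bash

```shell
# Should print "mysqli" if the extension loaded correctly
php -m | grep mysqli
```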
Nginx
There is a recipe here, but as I already have fastcgi/php-fpm up and running for DokuWiki I didn't need the upstream {...} section, and instead copied the location ~ \.php$ {...} block from my DokuWiki sites into the new server {...} definition, which looks like…
- snippet.php
###################################################################
##                        (Wordpress)                            ##
###################################################################
server {
    ## Your website name goes here.
    server_name neils-snaps.co.uk;
    ## Your only path reference.
    # root /var/www/wordpress;
    root /usr/share/webapps/wordpress;
    ## This should be in your http block and if it is, it's not needed here.
    index index.php;

    #####################################
    ##        Security Headers         ##
    #####################################
    # Add Content Security Policy (see https://lollyrock.com/posts/content-security-policy/)
    # but currently using a stripped form james.alssopp@gmail.com suggested and
    # an option to block X-frame (see https://infosec.mozilla.org/guidelines/web_security#x-frame-options)
    add_header Content-Security-Policy "default-src 'self' always; frame-ancestors 'none';";
    # Add Strict Transport Security (see https://infosec.mozilla.org/guidelines/web_security#http-strict-transport-security)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload;";
    add_header X-Content-Type-Options "nosniff";
    add_header Referrer-Policy "no-referrer";

    # Comment the below out when you're installing DokuWiki, and uncomment it when you're done.
    location ~ /(data/|conf/|bin/|inc/|install.php) { deny all; } # secure Dokuwiki
    location ~ ^/\.ht { deny all; } # also secure the Apache .htaccess files

    location @dokuwiki {
        # rewrites "doku.php/" out of the URLs if you set the userewrite setting to .htaccess in dokuwiki config page
        rewrite ^/_media/(.*) /lib/exe/fetch.php?media=$1 last;
        rewrite ^/_detail/(.*) /lib/exe/detail.php?media=$1 last;
        rewrite ^/_export/([^/]+)/(.*) /doku.php?do=export_$1&id=$2 last;
        rewrite ^/(.*) /doku.php?id=$1&$args last;
    }

    #####################################
    ##        Set upload limits        ##
    #####################################
    client_max_body_size 10M;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location / {
        # This is cool because no php is touched for static content.
        # include the "?$args" part so non-default permalinks doesn't break when using query string
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
Once this is done you have to restart nginx and php-fpm services for the additions and changes to take effect…
- snippet.bash
systemctl restart php-fpm.service
systemctl restart nginx.service
Ownership/Permissions
The ownership of the wp-content directory should be that of the process running Nginx, which on most Arch systems will by default be http. Permissions should be 755 on the directory and shouldn't need changing.
- snippet.bash
chown -R http:http /usr/share/webapps/wordpress/wp-content
It is also vital that you modify the configuration of php-fpm.service to allow writing to /usr, which is by default read-only; for details see here…
- snippet.bash
systemctl edit php-fpm.service
…and add the following lines (note the comments about where additions should be made)…
- snippet.bash
[Service]
ReadWritePaths=/usr/share/webapps/wordpress
Alternatively you can add the above directly to /etc/systemd/system/php-fpm.service/override.conf
. Remember to restart the php-fpm.service
(systemctl restart php-fpm.service
).
File Size Limits
To increase the size limit for file uploads you need to modify /etc/nginx/nginx.conf and add the following line under the server {...} definition…
- snippet.php
client_max_body_size 10M;
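Nginx isn't the only limit in play; PHP enforces its own upload caps, so you may also need to raise these in /etc/php/php.ini to match (values here mirror the nginx setting above)…

- snippet.bash

```ini
upload_max_filesize = 10M
post_max_size = 10M
```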
Images in Search Results
Very quickly a number of people pointed out to me that images were not included in the search results. The solution was to use the Ajax Search Lite plugin, which you can install through the configuration page or, as I do, by SSHing into my VPS, cd-ing to /path/wordpress/is/installed/wp-content/plugins/, then using wget and unzip to extract…
- snippet.bash
cd /usr/share/webapps/wordpress/wp-content/plugins/
wget https://downloads.wordpress.org/plugin/ajax-search-lite.4.9.3.zip
unzip ajax-search-lite.4.9.3.zip
However, you need gd and/or ImageMagick (which is worth having around anyway) installed and enabled…
- snippet.bash
pacman -Syu php-gd imagemagick
Then you need to enable Ajax to override your theme's default search/tag/content search results, and enable and configure the image that you want to show (whether that's “Featured Image” or “Primary Content” etc.).
Plugins
I installed the following plugins…
Themes
robots.txt
This site has a simple exposition of how the robots.txt
works. Under Nginx you have to specify this under each virtual host (thread).
I've gone with the following for now…
- snippet.bash
User-agent: *
Disallow: /images/
Disallow: /cgi-bin/

User-agent: Googlebot-Image
Disallow: /
This is currently only on neils-snaps.co.uk; I will add it for this blog and sheffieldboulder.uk at a later date.
Radicale
Radicale is a calendar server for those who want to divest themselves of Google Calendar. Installation on Arch Linux is simple…
- snippet.bash
pacman -Syu radicale
And the configuration instructions are straight-forward.
- snippet.bash
useradd --system --home-dir / --shell /sbin/nologin radicale
mkdir -p /var/lib/radicale/collections && chown -R radicale:radicale /var/lib/radicale/collections
chmod -R o= /var/lib/radicale/collections
To have the service run and managed by systemd
create the file /etc/systemd/system/radicale.service
- snippet.bash
[Unit]
Description=A simple CalDAV (calendar) and CardDAV (contact) server
After=network.target
Requires=network.target

[Service]
ExecStart=/usr/bin/env python3 -m radicale
Restart=on-failure
User=radicale
# Deny other users access to the calendar data
UMask=0027
# Optional security settings
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
NoNewPrivileges=true
ReadWritePaths=/var/lib/radicale/collections

[Install]
WantedBy=multi-user.target
…and it can be configured via /etc/radicale/config
, at a bare minimum you'll want to add the IP address and port to run on
- snippet.bash
[server]
hosts = 192.168.1.1:4287

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt
To enable and start under systemd
- snippet.bash
systemctl enable radicale
systemctl start radicale
systemctl status radicale
journalctl --unit radicale.service
nginx
Nginx can be used as a reverse proxy for accessing Radicale, see here for example configuration. Under an existing server{...}
config add the following lines…
- snippet.bash
#####################################################################
##          Radicale (https://radicale.org/proxy/)                 ##
#####################################################################
location /radicale/ {
    proxy_pass http://localhost:5232/;
    proxy_set_header X-Script-Name /radicale;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Remote-User $remote_user;
    auth_basic "Radicale - Password Required";
    auth_basic_user_file /etc/radicale/htpasswd;
}
Now generate SSL certificates for the localhost and reverse proxy, you can accept defaults when asked questions or add details if you want…
- snippet.bash
cd /etc/radicale/
openssl req -x509 -newkey rsa:4096 -keyout server_key.pem -out server_cert.pem -nodes -days 9999
openssl req -x509 -newkey rsa:4096 -keyout client_key.pem -out client_cert.pem -nodes -days 9999
Now modify /etc/radicale/config
to use the certificates…
- snippet.bash
[server]
ssl = True
certificate = /etc/radicale/server_cert.pem
key = /etc/radicale/server_key.pem
certificate_authority = /etc/radicale/client_cert.pem
Finally configure /etc/nginx/nginx.conf
to use the SSL credentials…
- snippet.bash
**TODO**
Users
To make this secure you now need to generate user passwords using htpasswd. This is part of apache-tools, which requires installing from the AUR…
- snippet.bash
git clone https://aur.archlinux.org/apache-tools.git
cd apache-tools
makepkg -sric
You then add users using…
- snippet.bash
htpasswd -B -c /etc/radicale/users user1
Now you can go to the WebUI and log in to create calendars/tasks.
DAVx5
Troubleshooting
Collected here are details of how I resolved various general errors I've encountered using Arch Linux. The list is far from complete; it's highly dependent on me having the inclination to write things up after solving them, or being diligent enough to document them as I'm resolving the issue.
error: /boot/vmlinuz-linux not found
First `chroot` into the installed system from a live environment…
- snippet.bash
mount /dev/sda1 /mnt
mount /dev/sda2 /mnt/home
mount /dev/sda4 /mnt/efi
cd /mnt
mount -t proc /proc proc/
mount -t sysfs /sys sys/
mount --rbind /dev dev/
mount --rbind /run run/
mount --rbind /sys/firmware/efi/efivars sys/firmware/efi/efivars/
cp /etc/resolv.conf /mnt/etc/resolv.conf
chroot /mnt /bin/bash
I then reinstalled the kernel and mkinitcpio…
- snippet.bash
pacman -S linux
pacman -S mkinitcpio
…as suggested towards the end of the thread and rebooted, to no avail. Next I grabbed the last 5.9.14 and the latest 5.10.4 kernel and kernel headers packages from archive.archlinux.org and had a whirl installing the 5.10.4 kernel and headers as described under Downgrading the kernel.
This didn't work the second time I encountered the problem. That time it had something to do with the EFI partition being corrupted, as I went through recreating it (after mistakenly mounting it as /mnt/boot under chrooted environments, old habits die hard!). That got me further and the system booted, but hung at the next step regardless of whether I selected the EFI entry from the BIOS (i.e. GRUB) or the Boot partition (i.e. arch) as the primary partition (although in reality I've no idea if it was failing one and loading the next!).
Broken GRUB - error symbol 'grub_malloc' not found
- snippet.bash
cryptsetup luksOpen /dev/nvme0n1p2 enc_part1
mkdir /mnt/arch && mount /dev/mapper/enc_part1 /mnt/arch
mount /dev/nvme0n1p1 /mnt/arch/boot
arch-chroot /mnt/arch
grub-install --target=x86_64-efi --efi-directory=/boot
Turns out specifying the --target=x86_64-efi
is kind of important!