:: linux, meta

By: Hazel Levine

This post is going to be odd: unlike all my others, it’s structured more like a live account, and will be continually updated as I work.

For a while, I’ve been running this website, and its related services, off of a Linode VPS running Debian with 2GB of RAM. I also recently completed my computer vision project for AP Capstone Research, so I have a free Raspberry Pi 4 to work with. With that in mind, I’ve begun moving my services to a locally hosted server, since I already have all of the components. This shouldn’t impact anybody, because everything I have is private anyway…

The server is:

  • A Raspberry Pi 4 with 2GB of RAM (possibly 4GB in the future, not difficult to migrate) running Alpine Linux
  • A generic 32GB SD card with just enough Alpine to load from…
  • A 64GB USB 3.0 SSD with Alpine installed on it
  • ~~A 4TB USB 3.0 mechanical hard drive as a storage medium (possibly Nextcloud in the future)~~
  • A 2TB WD My Passport

This setup as a whole cost me a grand total of around ~~15~~ 70 US dollars, solely because I didn’t have a ~~power supply or cable for the 4TB drive~~ drive for Nextcloud.


Installing Alpine

I decided to use Alpine rather than a more traditional distribution like Raspbian because of the limitations Raspbian imposes on the Pi. Notably, Raspbian is distributed as a single armhf image, even though every model other than the Pi 1, the Pi Zero, and the original Pi 2 (which is armv7) is aarch64. This means Raspbian doesn’t actually take full advantage of the later Pis’ resources. Also, I have experience with Alpine, and it’s an extremely small distro for an… extremely small computer.

Alpine for the Raspberry Pi comes by default as diskless, and loads itself into RAM upon boot. To save the Alpine install, you’re expected to use lbu (Local BackUp, presumably). This is great for embedded applications, because if power gets pulled, it can’t corrupt the filesystem. For a home server, especially one with routine backups and one running file-based services (namely, Git), I’m more concerned about losing data that’s been committed than FS corruption from pulling the plug.
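For completeness, the diskless workflow I’m skipping looks roughly like this; lbu tracks /etc by default, and anything else has to be opted in:

```sh
# opt a path outside /etc into the overlay
lbu include /root/.ssh

# write the current overlay back to the boot media (-d drops old backups)
lbu commit -d
```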

In addition, the Raspberry Pi 4’s bootrom can’t currently boot directly from USB (unlike some earlier Raspberry Pi models). Thus, I had to keep using the SD card as the boot medium, but specify the root as /dev/sda on the kernel command line.
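Concretely, that’s an edit to cmdline.txt on the SD card’s boot partition. A sketch (whether root ends up being the whole disk or a partition like /dev/sda1, and the exact module list, depend on the install):

```
modules=sd-mod,usb-storage,ext4 console=tty1 root=/dev/sda rootfstype=ext4 rootwait
```

rootwait matters here: USB storage can take a moment to enumerate, and without it the kernel may panic before the root device appears.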

To install Alpine permanently, I followed this guide from the Alpine wiki. The only real difference was that I changed /dev/mmcblk0p2 to /dev/sda (my USB SSD) throughout. I didn’t experience any issues with this approach.


Setting up the network

I have very little experience with networking as a whole, so this was a learning experience for me. Because I’m presumably living in a dorm (if IU’s campus opens in the fall), I won’t have the ability to port forward. To get around this, I bought the cheapest Linode VPS available and set up a WireGuard VPN. I’d previously used the Streisand VPN “generator”, but it’s not designed for custom configuration (no documentation exists on how to get a shell), so I had to replace it with my own setup.

The design is to use the WireGuard Linode as a router and port forward through it to my (notably more powerful, albeit ARM) Pi. This also means that by simply not forwarding certain ports, services like SSH can’t be reached from the outside at all: they stay open on the Pi, but are only accessible over the VPN.

I’m using Ubuntu’s ufw firewall, and discovered the existence of /etc/ufw/before.rules, which allows you to specify arbitrary iptables rules. In my case, that’s just setting up a NAT, like this:
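A sketch of the relevant section of /etc/ufw/before.rules; the interface name and WireGuard subnet are assumptions for illustration:

```
# NAT table rules (this goes above the *filter section)
*nat
:POSTROUTING ACCEPT [0:0]

# masquerade VPN traffic out of the public interface
# (eth0 and 10.0.0.0/24 are placeholders for your setup)
-A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

# each table's ruleset must end with its own COMMIT
COMMIT
```

For forwarding to actually work, ufw also needs net/ipv4/ip_forward=1 uncommented in /etc/ufw/sysctl.conf and DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw.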


No port forwarding has been done yet because I don’t have anything I need to be running.


Playing with repos

I didn’t mention this earlier, but ufw, the firewall I like to use, isn’t part of the Alpine 3.11 repos — it’s in testing. To install packages selectively from testing and edge, which I had to do numerous times, I set up repository pinning as described on the Alpine Wiki:

@edge http://nl.alpinelinux.org/alpine/edge/main
@edgecommunity http://nl.alpinelinux.org/alpine/edge/community
@testing http://nl.alpinelinux.org/alpine/edge/testing

Then I ran apk add ufw@testing and all was well.

Finalizing the network setup

Throughout the beginning of the day, I had weird issues where I couldn’t curl aster.qtp2t.club or curl the IP address directly, but could access the machine over the VPN just fine. It turned out that I had to set PersistentKeepalive=25 in both the Pi’s and my laptop’s WireGuard configuration, since I’m under a NAT and routing all traffic. I also had to update my firewall rules to enable port forwarding, like this:

# NAT table rules

# Port forwarding
-A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination
-A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination

# Forward traffic through eth0

# Okay
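The PersistentKeepalive tweak itself is just one line in each side’s [Peer] section; the keys, endpoint, and allowed IPs below are placeholders:

```ini
[Peer]
PublicKey = <other side's public key>
Endpoint = <linode's public IP>:51820
AllowedIPs = 0.0.0.0/0
# NAT mappings expire when idle; this pins the tunnel open
PersistentKeepalive = 25
```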

As is clear from the firewall rules, I also moved my WireGuard to the subnet. Ports are forwarded by just redirecting traffic over the VPN. This has two major consequences:

  • DHCP is fine, since WireGuard is roaming
  • If I want things to be internal, just not forwarding them is enough — WireGuard routes traffic, so if I don’t forward SSH (for example) it’s only accessible via my home network and VPN.

After doing this, aster.qtp2t.club was live with a 404 (default nginx config). I then moved the Pi physically to my basement, and plugged it in directly to the router to ensure connectivity.

I then started to move services… after spending around an hour copying dotfiles and making cool ASCII art for the MOTD.

Moving nginx over

Before doing anything, I used scp to copy all the files I knew I’d need (my nginx config, everything in /var/www, and various databases and configs) onto a USB stick, then plugged it into the Pi. I then changed Alpine’s default config to use /etc/nginx/sites-available and sites-enabled, since that’s just what I’m used to. I copied everything formerly in /var/www to the new /var/www, and temporarily reconfigured nginx to not use SSL. After doing this, I was serving the https://qtp2t.club homepage.
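Mimicking Debian’s layout on Alpine is just a matter of making the two directories and pointing the http block’s include at them; a sketch, assuming an otherwise stock nginx.conf:

```nginx
http {
    # …rest of the stock config…

    # swap the default include for Debian-style vhost directories
    include /etc/nginx/sites-enabled/*;
}
```

Sites then live in /etc/nginx/sites-available and get symlinked into /etc/nginx/sites-enabled with ln -s, same as on Debian.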

I then re-enabled my blog and Lemniscation’s fan translation page, and changed the DNS records. I then used the EFF’s Certbot to automatically set up HTTPS, un-mangled my configs (since Certbot loves to put whitespace everywhere and make things borderline unreadable), and deployed it.

Sure enough, it more or less just worked, and you’re now reading this post off of the new box.

Moving Gitea over

This was generally more involved. I’d backed up the repos for Gitea, as well as its configuration and database, but everything living in different places on the new system complicated the move. Since I’m using SQLite, copying the database itself was straightforward.

I also had to pull gitea from Alpine’s edgecommunity repository, since I’d been running v1.11 and Alpine 3.11 was packaging v1.10 — I wanted to avoid any potential conflicts in moving my configuration over.

Notably, the differences from a binary install on Debian were:

  • Alpine’s user upon installing the package is gitea, when I was formerly executing it as git
  • Alpine puts repositories in /var/lib/gitea/git by default, where they were formerly in /home/git/gitea-repositories
  • Alpine puts logs by default in /var/log/gitea, where they were in /var/lib/gitea/log (the former makes more sense…)
  • STATIC_ROOT_PATH should be set to /usr/share/webapps/gitea/, or Gitea can’t render any pages (due to no templates)
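Translated into app.ini terms, the differences above amount to something like this (section and key names are from Gitea’s configuration docs; treat it as a sketch, not my literal config):

```ini
; Alpine's package runs Gitea as this user
RUN_USER = gitea

[repository]
ROOT = /var/lib/gitea/git

[log]
ROOT_PATH = /var/log/gitea

[server]
STATIC_ROOT_PATH = /usr/share/webapps/gitea/
```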

After accounting for all of these changes, Gitea was up and running without any hassle; as before, I changed nginx, changed DNS, waited, ran Certbot, and it’s up.

Realizing the drive was dead

Yeah. It was a Seagate drive I had lying around, so I wasn’t expecting much, but still…


Fixing Gitea

So apparently the way Gitea keeps its repositories wired up to the web interface is via a series of pre-receive and post-receive hooks. This is fine, but when Gitea was installed on the old box, the binary lived in /usr/local/bin/gitea; it’s now in /usr/bin/gitea.

To fix this, I ran the command (in /var/lib/gitea/git):

rg -l0 usr/local/bin/gitea | xargs -0 sed -i 's|usr/local/bin/gitea|usr/bin/gitea|g'

rg automatically ignores Git objects, which is why this is safe.

Moving Bitwarden over

I do run an instance of bitwardenrs for my own usage. This was the only service on the former VPS that ran under podman. I’m not one for containerization, usually, but it’s just simpler with containers here.

I copied the former /bw-data/ to /bw-data/ on the new box. Instead of the bitwardenrs/server:latest image, I had to use bitwardenrs/server:aarch64 (for obvious reasons). I ended up using Docker instead of podman, since podman is tightly integrated with systemd and Alpine uses OpenRC, for better or for worse. Additionally, podman is in Alpine’s testing repo, and I’m trying to limit the number of packages installed from there.

After enabling the Docker OpenRC service, putting up the container on port 8080 (with -e ROCKET_PORT=8080), and jamming to Car Seat Headrest’s Monomania, Bitwarden just worked. Unfortunately, no further testing could happen locally because bitwarden_rs mandates HTTPS, so I took a shot in the dark and updated the DNS anyway. I later had to change all the references to qtp2t.club to localhost, inexplicably, and then it worked.
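For the record, the invocation amounts to something like the following; the container name and restart policy are my own additions, and /bw-data/ maps onto the image’s /data/ volume:

```sh
docker run -d --name bitwarden \
  --restart unless-stopped \
  -e ROCKET_PORT=8080 \
  -p 8080:8080 \
  -v /bw-data/:/data/ \
  bitwardenrs/server:aarch64
```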

Moving Linx over

Thankfully, the binaries compiled for Linx are actually static against musl already. They even provide ARM64 builds, so I can just use the official binaries!

The unfun part about working with this was writing an OpenRC service for Linx. Honestly, I’d take systemd over OpenRC any day — it’s just that Alpine happened to be the best server OS I found for a Pi 4, so I have to deal with it. Writing OpenRC services is generally unfun, but this one was simple.


#!/sbin/openrc-run

name="linx server"
# binary location is an assumption; adjust to wherever you put it
command="/usr/local/bin/linx-server"
command_args="-config /etc/linx.ini"
command_background=true
pidfile="/run/linx-server.pid"

depend() {
	need net
}

…still better than sysvinit
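Assuming the script is saved as /etc/init.d/linx and marked executable, wiring it up is the usual OpenRC two-step:

```sh
rc-update add linx default
rc-service linx start
```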

Anyway, by now you know the drill… set up nginx, change DNS, wait, run Certbot, and bam. That’s the last of the web services!