Server Management for the Faint of Heart, Featuring Caddy

Tips to overcome the setup hump and start enjoying Caddy.

Introduction

Caddy is an awesome piece of software that puts SSL cert management on cruise control, provides approachable yet flexible reverse proxying, and offers a powerful, configurable HTTP server with some extra goodies for static files. And while providing all that value, Caddy still makes server management sooo much easier and more intuitive! If you can’t tell, I’m a fan of Caddy. I recently set up a server with Caddy, so this article features the tips and fixes I picked up along the way. Let’s dive in!

Getting Started

If you’re new to Caddy, I highly recommend Caddy’s own recommendation 🤭: start with the official Getting Started guide, then work through the tutorials in the order the docs lay them out.

This is one of the rare occasions when the official guides are as clear as any others you’re likely to find. When you’re done with those guides, you can then consult the reference for the API or Caddyfile, depending on what you want to do and how you want to do it with Caddy. Let’s explore an example.

Use Case: Reverse Proxies Made Easy 🤌🏾

One of the reasons I love Caddy is the ease it brings to setting up a reverse proxy. For instance, putting reverse_proxy :9000 in a Caddyfile is enough to route all traffic to the application running on port 9000. You’ll often need something more complicated than that, yet the simplicity remains. To illustrate, imagine that you own the domain yourdomain.tld. You’ve built a helpful web application and you want it live on the internet at sub.yourdomain.tld. What will you do with the main yourdomain.tld site? Maybe it’s too much effort to worry about that right now. You do know, however, that you don’t want to redirect elsewhere. So if anyone chooses to visit yourdomain.tld, you decide to just show a very boring placeholder message in plain text: “Welcome to yourdomain.tld! Full website coming soon.” How to do this? 🤔

You could (1) capture all the traffic heading to the main domain and any subdomains you want to use, (2) use your helpful web app to serve the requests intended for sub.yourdomain.tld, and (3) show your boring message for all the remaining traffic. In code, this means the following config in a Caddyfile will serve your needs:

*.yourdomain.tld, yourdomain.tld {
    @sub host sub.yourdomain.tld
    route {
        reverse_proxy @sub localhost:port
        respond "Welcome to yourdomain.tld! Full website coming soon."
    }
}

  • *.yourdomain.tld, yourdomain.tld: matches incoming requests for all the domains and subdomains that should be handled by the directives inside this site block.

  • @sub host sub.yourdomain.tld: further matches the subset of requests where the hostname is sub.yourdomain.tld, and assigns them the shorthand name @sub.

  • reverse_proxy @sub localhost:port: routes all traffic matched by @sub to the application running on localhost:port.

  • respond "Welcome to ...": responds with the specified plain text to all other requests not matched by @sub.

  • route: allows you to override the default order in which directives are handled. By default, Caddy gives the respond directive higher priority than the reverse_proxy directive, so without route, requests to sub.yourdomain.tld would get the plain text response instead of being served by your helpful web app. That’s not the behavior you want here, so you use route to enforce the order shown above.
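
Once that Caddyfile is saved, you can sanity-check it and (if Caddy is already running) apply it with Caddy’s built-in subcommands. A quick sketch, assuming the conventional /etc/caddy/Caddyfile location:

# Check the Caddyfile for syntax and structural errors
caddy validate --config /etc/caddy/Caddyfile

# Apply the new config to a running Caddy instance without downtime
caddy reload --config /etc/caddy/Caddyfile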

So, 7 lines of code. Not bad. Let’s briefly consider doing the same thing using perhaps the most popular reverse proxy solution, NGINX.

I’ll be honest: I’ve never gotten proficient with NGINX. That’s mostly because it can be rather verbose and unwieldy, which increases the likelihood that I’ll do the wrong thing. So, I hope you’ll forgive me for consulting ChatGPT to generate an NGINX config that’s equivalent to the Caddyfile config above. Hallucinations 👻 are not welcome here, so I ran the chatbot’s answer through this validator to identify and fix any obvious problems. Note that NGINX needs separate server blocks for sub.yourdomain.tld and the catch-all domains, because proxy_pass isn’t allowed inside an if block at the server level. Without further ado, here’s the final snippet:

server {
    listen 80;
    listen [::]:80;
    server_name *.yourdomain.tld yourdomain.tld;

    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name sub.yourdomain.tld;

    # SSL configuration (a wildcard cert covering *.yourdomain.tld is assumed)
    ssl_certificate /path/to/your/certificate.crt;
    ssl_certificate_key /path/to/your/private.key;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Proxy requests for sub.yourdomain.tld to the app
    location / {
        proxy_pass http://localhost:port;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name *.yourdomain.tld yourdomain.tld;

    # SSL configuration
    ssl_certificate /path/to/your/certificate.crt;
    ssl_certificate_key /path/to/your/private.key;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Default response for yourdomain.tld and other subdomains
    location / {
        default_type text/plain;
        return 200 "Welcome to yourdomain.tld! Full website coming soon.";
    }
}

😳😲😱

Bear in mind that Caddy provides automatic SSL out of the box, so all of those extra NGINX SSL settings exist just to match what the Caddy config gets without a single extra line.

Well... easy choice for me. I prefer the Caddy way.

Going Live

As you may know, it’s pretty easy to do the wrong thing when manually setting up your live environment. So, I highly recommend reading through DigitalOcean’s guide to deploying Caddy for a live website, combined with the following troubleshooting notes.

Memory Requirements

Do you want to build the Caddy binary yourself? I might have some bad news for you: you need about 2GB of RAM for the build process to succeed. If you’re using a small DO droplet like the one I used, that’s definitely bad news because you don’t have that much RAM. Well then… is there good news? Yep 🙂‍↕️ you can build the binary on your own computer using Go’s cross-compilation support, as mentioned here. Then copy the binary to your server using rsync -avz /path/to/binary_file user@destination:/path/to/destination, where user@destination is your SSH username and server IP address.
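
As a rough sketch, assuming you build with xcaddy (Caddy’s custom-build tool, which passes GOOS/GOARCH through to the Go toolchain) and your droplet is a typical 64-bit (amd64) Linux box:

# On your own machine: cross-compile Caddy for a 64-bit Linux server
GOOS=linux GOARCH=amd64 xcaddy build

# Copy the resulting binary to the server
rsync -avz ./caddy user@destination:/path/to/destination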

Does building from source sound too stressful for you? Fear not — you can still download a prebuilt binary and start serving right away!

Uniform Firewalls

At the step for configuring the firewall on your instance (i.e. sudo ufw allow …), avoid using ufw if you previously configured your firewall rules somewhere else, such as in the DO dashboard. Instead, go back to that place and add 2 inbound rules for ports 80 and 443, which correspond to HTTP and HTTPS respectively.
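
For reference, if your droplet’s firewall is managed with ufw on the instance itself rather than in a cloud dashboard, that step boils down to a couple of commands. A sketch, assuming ufw is installed and enabled:

# Allow inbound HTTP and HTTPS traffic, then confirm the rules
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status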

DNS Awareness

In your domain host’s DNS settings, you need to add records for every subdomain you want Caddy to handle; CNAME aliases pointing at your main domain work well. This step is critical: without it, the subdomain traffic will never even reach your server instance for Caddy to handle as you desire. Setting up only the main A record for your domain is not sufficient.
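
As a sketch (203.0.113.10 is a placeholder IP, and your DNS host’s UI will present these as form fields rather than a zone file), the records might look like:

; A record pointing the main domain at your server
yourdomain.tld.        A      203.0.113.10

; CNAME alias so sub.yourdomain.tld resolves to the same server
sub.yourdomain.tld.    CNAME  yourdomain.tld.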

Location, Location, Location!

Does your systemd setup fail when you try to enable the Caddy service with systemctl? If you’re using a Caddyfile, check whether you saved the file at the location referenced by the ExecStart and ExecReload lines of the stock caddy.service file. At the time of writing, both lines default to /etc/caddy/Caddyfile, so if your Caddyfile lives elsewhere, you need to edit the caddy.service file on your system to point to the correct location.
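
A quick way to check is to print the unit file systemd is actually using. A sketch (the exact flags in your unit may differ, so trust what your own system prints):

# Show the unit file systemd is actually using
systemctl cat caddy

# The relevant lines typically look like:
#   ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
#   ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force

# If your Caddyfile lives elsewhere, update those paths and reload systemd
sudo systemctl edit --full caddy
sudo systemctl daemon-reload
sudo systemctl restart caddy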

Logging 🪵

You probably don’t need me to tell you that you should set up logging so you can track what Caddy is doing. Caddy’s logging philosophy is quite powerful, but you may not be familiar with it. Thankfully they have an explainer you can read through. Setting up logging is easy enough though:

  • create the folder /var/log/caddy (use sudo if necessary)

  • create the access.log file inside that folder

  • give Caddy full control of the folder: sudo chown -R caddy:caddy /var/log/caddy

    • this ensures Caddy can write to the log file without permission issues
  • add a log directive to your Caddyfile and specify access.log as the output file, like this:

      log {
          output file /var/log/caddy/access.log
      }
    

Logging setup complete ✅.
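
For quick reference, the non-Caddyfile steps above boil down to a few shell commands (a sketch; adjust if the caddy service on your system runs as a different user):

# Create the log directory and file, then hand ownership to the caddy user
sudo mkdir -p /var/log/caddy
sudo touch /var/log/caddy/access.log
sudo chown -R caddy:caddy /var/log/caddy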

TLS Configuration 🤝

Depending on your cloud host provider, when starting Caddy you may get an error like the following:

parsing caddyfile tokens for 'tls': getting module named 'dns.providers.digitalocean': module not registered: dns.providers.digitalocean

If so, you need to confirm that your Caddy binary actually contains the necessary DNS provider module for the tls directive. To check, run this command: caddy list-modules | grep -i <name-of-your-tls-plugin-provider>. If that command finds nothing, the binary doesn’t include the plugin. This happened to me when I downloaded a prebuilt Caddy binary from Caddy’s downloads page using curl on the server instance; for some reason, that command didn’t fetch the correct binary in that environment. My solution: download the binary to my computer via the browser, then copy the file to my server using the rsync command mentioned earlier. Easy peasy 😊. Of course, if you built the binary yourself, you instead need to rebuild it with the necessary plugin included.
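
If you do end up rebuilding, xcaddy makes it a one-liner. A sketch using the DigitalOcean DNS plugin; swap in the module for your own provider:

# Build Caddy with the DigitalOcean DNS provider module compiled in
xcaddy build --with github.com/caddy-dns/digitalocean

# Confirm the module is now present
./caddy list-modules | grep -i digitalocean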

Other Cloud Providers

If you’re deploying to a server on a cloud provider other than DigitalOcean, the setup is mostly the same, provided your server is running Ubuntu Linux. However, behind the scenes DO uses some juju for authentication and cert management, so the automatic TLS step must plug into that juju when hosting on DO. What does this mean for you? Simply put, you may be able to skip the automatic TLS step entirely if Caddy’s default TLS setup works smoothly with your provider. If not, you will need an appropriate TLS plugin for that step. Don’t get too worried though. The TLS config steps may still be very similar, because you generally want to achieve these 4 things:

  • build (or download) Caddy with the TLS plugin for your cloud host

  • get an auth token with permission to interact with your cloud host’s SSL juju

    • an account token with general read/write access should work, though you may want to limit the token’s scope to fit your needs
  • set that token as an environment variable for the caddy start command

    • just like in the DigitalOcean example, you can update the Environment key in your caddy.service file with your token, in the form Environment=CLOUD_HOST_AUTH_TOKEN=your_token_here
  • use that environment variable for the tls directive in your Caddyfile, like this:

      tls {
          dns <tls_plugin_name> {env.CLOUD_HOST_AUTH_TOKEN}
      }
    

You can then restart the caddy service and voila! Automatically renewing HTTPS!
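
Concretely, wiring up the token and restarting might look like this (a sketch; CLOUD_HOST_AUTH_TOKEN is a placeholder name, and a systemd drop-in is just one convenient way to set it):

# Add the token to the caddy service's environment via a drop-in override
sudo systemctl edit caddy
#   in the editor that opens, add:
#   [Service]
#   Environment=CLOUD_HOST_AUTH_TOKEN=your_token_here

# Apply the change and restart the service
sudo systemctl daemon-reload
sudo systemctl restart caddy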

Conclusion

Don’t you just love it when powerful software remains as friendly to use as it is capable? With Caddy you can derive great value right from the moment you start your local server, up until you’re configuring load balancers for a busy web app. There’s a range of use cases to consider, and an entire other half of Caddy (the API) that I didn’t even discuss! I urge you to explore the documentation to see if any of it can work for you.


Questions? Feedback? Nice words, or mean ones? Feel free to reach out to @CodeWithOz on all the socials, or on LinkedIn.