I have a few servers running some services using a custom domain I bought some time ago.
Each server runs its own instance of caddy acting as a reverse proxy.
Only one of those servers can actually do the DNS challenge to generate the certificates, so I was manually copying the certificates to every other caddy instance that needed them, using the tls directive for that domain to load the files.
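That manual setup looked something like this on each server (the domain and paths are just examples):

```
service.my.domain {
	# use the manually copied cert and key instead of issuing one
	tls /etc/caddy/certs/my.domain.crt /etc/caddy/certs/my.domain.key
	reverse_proxy localhost:8000
}
```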
Just found out there are two ways to automate this: shared storage and on-demand certificates.
So here's what I did to make it work with each one, hope someone finds it useful.
Shared storage
This one is in theory straightforward: you just mount a folder which all caddy instances will use as their storage.
I went the sshfs route, so I created a user and added ACLs to allow the local caddy user and the new remote user to write to the storage.
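Each caddy instance then points its storage at that mount in the global options (the path is just an example):

```
{
	# all instances read and write the same certificate storage
	storage file_system {
		root /mnt/caddy-storage
	}
}
```

Instances pointing at the same storage will share the certificates in it.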
On-demand certificates
This one requires a separate service, since caddy can't properly serve the combined file (certificate chain plus private key) that the get_certificate directive needs.
We could run a service directly on the main caddy instance which reads the key and crt files and combines them, but I chose to serve the raw files from the main instance and combine them on the server that needs them.
So, in my main caddy instance I have something along these lines (the domain, tailscale IP, and storage path are placeholders):
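```
certs.my.domain {
	# drop anything not coming from my tailscale address
	@blocked not remote_ip 100.100.100.100
	respond @blocked 403

	# /ask endpoint: caddy treats a 2xx response as "allowed"
	# (exact match only; see the note about subdomains below)
	handle /ask {
		@allowed query domain=my.domain
		respond @allowed 200
		respond 403
	}

	# serve the raw crt/key files out of caddy's storage
	handle {
		root * /var/lib/caddy/.local/share/caddy/certificates
		file_server
	}
}
```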
I restrict the access to my tailscale IP and include the /ask endpoint required by the on-demand configuration.
Then, on the server which will use the certs, I run a service for caddy to make the http request to; something along these lines (the address, port, and file paths are placeholders):
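```
package main

// Sketch of the combining service. The main instance address and the
// file paths below are placeholders for your own storage layout.

import (
	"fmt"
	"io"
	"net/http"
	"strings"

	"github.com/labstack/echo/v4"
)

const (
	// raw files served by the main caddy instance over tailscale
	crtURL = "http://100.100.100.100/wildcard_.my.domain/wildcard_.my.domain.crt"
	keyURL = "http://100.100.100.100/wildcard_.my.domain/wildcard_.my.domain.key"
)

// fetch downloads one file from the main instance.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	e := echo.New()

	// get_certificate http expects a single PEM blob:
	// the certificate chain first, then the private key
	e.GET("/cert", func(c echo.Context) error {
		crt, err := fetch(crtURL)
		if err != nil {
			return c.String(http.StatusBadGateway, err.Error())
		}
		key, err := fetch(keyURL)
		if err != nil {
			return c.String(http.StatusBadGateway, err.Error())
		}
		return c.Blob(http.StatusOK, "application/x-pem-file", append(crt, key...))
	})

	// /ask that accepts the apex and any subdomain, since caddy
	// asks for each concrete subdomain rather than *.my.domain
	e.GET("/ask", func(c echo.Context) error {
		d := c.QueryParam("domain")
		if d == "my.domain" || strings.HasSuffix(d, ".my.domain") {
			return c.NoContent(http.StatusOK)
		}
		return c.NoContent(http.StatusForbidden)
	})

	e.Logger.Fatal(e.Start("127.0.0.1:9000"))
}
```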
This also includes another way to handle the /ask endpoint: wildcard certificates are not asked for with a literal *, caddy actually asks for each subdomain individually, so the example above can't handle something like domain=*.my.domain; the service just matches the domain suffix instead.
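With that service running, the caddy config on the consuming server points both the certificate lookup and the /ask check at it (addresses and port are again placeholders):

```
{
	# checked before each on-demand certificate is obtained
	on_demand_tls {
		ask http://127.0.0.1:9000/ask
	}
}

https:// {
	tls {
		on_demand
		# fetch the combined cert+key from the local service
		get_certificate http http://127.0.0.1:9000/cert
	}
	reverse_proxy localhost:8000
}
```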
Mostly a nitpick, but for that little helper I would have stuck to the stdlib and not pulled in a dependency like echo.
Otherwise: nice idea. I did something similar but since caddy runs directly on my host, I added permissions for the other services that need the cert and then pointed them directly at it.