I can't route to the service over the public internet because there is a firewall/VPN in the way. I want to know if there is anything that can be done to get containerized Traefik (non-Swarm mode, just plain docker-compose) to route to the local webserver (running on port 8080, which is blocked from public access).
An example of the setup can be found in the open-source repo Dean Kayton / research_db on GitLab (this contains the working Traefik configuration, without the local service that can't be containerized).
Edit:
I did some tinkering on a local Vagrant dev VM and got it to work. Here is the config located at /config/dynamic.yml:
```yaml
tls:
  options:
    default:
      minVersion: VersionTLS12
    mintls13:
      minVersion: VersionTLS13
  stores:
    default:
      defaultCertificate:
        certFile: /certs/local.crt
        keyFile: /certs/local.key

http:
  routers:
    galaxy:
      rule: Host(`galaxy.mylocalserver.uct.lan`)
      tls: true
      entrypoints:
        - websecure
      service: galaxy
  services:
    galaxy:
      loadBalancer:
        servers:
          - url: http://mylocalserver.lan:8080
```
To have Traefik read the above config, I added the following flags to the start command:

```
--providers.file.directory=/config --providers.file.watch=true
```
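For reference, here is how those flags and mounts might fit into a docker-compose service. This is only a sketch: the image tag, the entrypoint definition, and the volume paths are my assumptions, not taken from the repo.

```yaml
services:
  traefik:
    image: traefik:v2.10  # assumed version
    command:
      - --entrypoints.websecure.address=:443  # assumed entrypoint definition
      - --providers.file.directory=/config
      - --providers.file.watch=true
    ports:
      - "443:443"
    volumes:
      - ./config:/config:ro  # contains dynamic.yml
      - ./certs:/certs:ro    # contains local.crt / local.key
```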
The other two steps I took (I'm still undecided whether they were necessary): I had to bind the "galaxy" service to 0.0.0.0, and I had to tell Vagrant to forward port 8080 from the guest to the host.
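For the Vagrant side, that guest-to-host forward would look something like this in the Vagrantfile (a sketch; the port numbers are the ones from this post):

```ruby
Vagrant.configure("2") do |config|
  # Forward guest port 8080 (where the "galaxy" service listens on 0.0.0.0)
  # to port 8080 on the host machine.
  config.vm.network "forwarded_port", guest: 8080, host: 8080
end
```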
I am unsure which route Traefik takes in the above scenario to reach my local service. Is it resolving the VM's public IP and entering through the host port, which is forwarded to the guest and then on to the local service? Is there a way to avoid publishing/forwarding any ports at all?
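One possibility that might remove the need for the Vagrant forward: on Docker 20.10+ you can map `host.docker.internal` to the special `host-gateway` value, so the Traefik container can reach services listening on the Docker host directly over the bridge network. A sketch (whether this also removes the need for the 0.0.0.0 bind is something I'd still have to test):

```yaml
services:
  traefik:
    # ...rest of the Traefik service definition...
    extra_hosts:
      # host-gateway resolves to the Docker host's IP when the container starts
      - "host.docker.internal:host-gateway"
```

The loadBalancer url in dynamic.yml would then point at http://host.docker.internal:8080 instead of the host's name.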
I will do some further experiments to try to work out the answer for myself.