In a previous post, we discussed creating a basic Nomad cluster in the Vultr cloud. Here, we will use that cluster to deploy a load-balanced sample web app using Nomad's service discovery capability and its native integration with the Traefik load balancer. The source code is available here for reference.
Traefik acts as an API gateway for your online services, ensuring that incoming requests are routed to the appropriate parts of your system. It can perform host-based or path-based routing, allowing a single server to run a wide variety of services under different domain names.
What makes Traefik stand out is its ability to automatically detect and configure services without any extra effort on your part. It’s like having a helpful assistant that knows exactly where everything belongs. Traefik obtains this information by connecting to Nomad services, a process we will walk through in this article.
To run the service, all we need is a Nomad job file and a Nomad cluster. A Nomad job file is a configuration file that describes the application to run, its networking, and its service discovery information.
Here is what the job file looks like:
job "traefik" {
datacenters = ["dc1"]
type = "system"
group "traefik" {
network {
mode = "host"
port "http"{
static = 80
}
port "admin"{
static = 8080
}
}
task "server" {
driver = "docker"
config {
image = "traefik:2.11"
ports = ["admin", "http"]
args = [
"--api.dashboard=true",
"--api.insecure=true", # not for production
"--entrypoints.web.address=:${NOMAD_PORT_http}",
"--entrypoints.traefik.address=:${NOMAD_PORT_admin}",
"--providers.nomad=true",
"--providers.nomad.endpoint.address=http://<nomad server ip>:4646"
]
}
resources {
cpu = 100 # Mhz
memory = 100 # MB
}
}
service {
name = "traefik-http"
provider = "nomad"
port = "http"
}
}
}
The main components of a job file are the job, group, and task stanzas. A job file can have multiple groups, and each group can have multiple tasks. The group contains networking information such as the network mode, the exposed ports, and the service info. If you are interested in Nomad networking, this article is a great source of information. The task runs a Docker container via its driver and config block, as sketched below.
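To make the nesting concrete, here is a minimal sketch of that hierarchy (the names are placeholders, not part of our deployment):

job "example" {
  datacenters = ["dc1"]     # datacenters eligible to run the job
  group "example-group" {   # tasks scheduled together on one client
    count = 1               # number of instances of this group
    task "example-task" {   # the actual workload
      driver = "docker"
      config {
        image = "traefik/whoami" # any container image works here
      }
    }
  }
}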
To route traffic successfully, the Traefik proxy needs to know the IP address and port of the applications running in the cluster. The arguments in the config block tell Traefik how to obtain these details. Nomad offers a native service discovery option, and in this case Traefik takes advantage of that service discovery information to retrieve application details. Before running the job, it is important to change the endpoint address to your Nomad server address, here: providers.nomad.endpoint.address=http://<nomad server ip>:4646. More configuration options are available in the Traefik documentation.
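Once jobs with service blocks are running, you can check exactly what Traefik will see by querying Nomad's native service catalog yourself. A quick sanity check could look like this (using the same placeholder address as above):

# list all services registered with Nomad's native service discovery
curl http://<nomad server ip>:4646/v1/services

# equivalent check with the Nomad CLI
nomad service list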
Running the Traefik job file in the cluster
Let us try to run the job file in a Nomad cluster. In my previous blog post, I created a Nomad cluster in the Vultr cloud, and I will use the same cluster to deploy this job file. The cluster consists of one server and three clients. Both Traefik and the sample web app will be deployed in the cluster, and we will observe Traefik distributing traffic effectively with minimal configuration.
There are two ways to run a job: manually deploying the job file from the Nomad UI, or running it with the Nomad CLI. In this case, triggering the job manually from the UI is suitable. Before triggering the job, I will update the job file with the Nomad server endpoint address; here, the endpoint is the address of the Nomad server load balancer we created earlier.
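For reference, the CLI route is a dry-run plan followed by a run; a sketch, assuming the job file is saved locally as traefik.nomad.hcl:

# point the CLI at the cluster
export NOMAD_ADDR=http://<nomad server ip>:4646

# dry run: shows what the scheduler would change without applying it
nomad job plan traefik.nomad.hcl

# submit the job to the cluster
nomad job run traefik.nomad.hcl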
Traefik job file in the Nomad UI:
Planning:
Job run result:
Once Traefik starts running, it will be accessible on port 80 of the Vultr load balancer we created. Attaching the load balancer to the VMs is done with the help of Terraform.
The 404 error is expected, as there are no services running yet. Now we can be confident that Traefik is up and waiting to route requests.
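A quick check from any machine illustrates this (with <load balancer ip> standing in for the Vultr load balancer address):

curl -i http://<load balancer ip>/
# expect an HTTP/1.1 404 Not Found until a service registers a matching route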
Running the sample web app in the cluster
A sample web app is deployed alongside Traefik; its function is to return the IP of the machine it is running on, which makes it ideal for verifying that the proxy works as expected. The job file for the web app is available here.
job "webapp" {
datacenters = ["dc1"]
type = "service"
group "webapp" {
count = 1
network {
mode = "host"
port "http" {
to = 80
}
}
service {
name = "webapp"
port = "http"
provider = "nomad"
tags = [
"traefik.enable=true",
"traefik.http.routers.webapp.rule=Path(`/`)",
]
}
task "server" {
env {
WHOAMI_PORT_NUMBER = "${NOMAD_PORT_http}"
}
driver = "docker"
config {
image = "traefik/whoami"
ports = ["http"]
}
}
}
}
This is a simple job configuration for the sample web app, and the important thing to note is the tags.
The Traefik proxy finds the services to load balance using these tags, and based on the tag information it can do host-based or path-based routing. With this approach, the proxy learns about the web app without any changes to its own configuration and without restarts, unlike proxies such as NGINX. This is a great advantage of using Traefik for load balancing.
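For instance, if we wanted host-based routing instead of path-based, only the rule tag would change. A hypothetical variant (webapp.example.com is a placeholder domain):

tags = [
  "traefik.enable=true",
  # match on the Host header instead of the request path
  "traefik.http.routers.webapp.rule=Host(`webapp.example.com`)",
]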
Deploying the job:
Jobs running successfully:
Now we have both the web app and Traefik running in the Nomad cluster. Doing a curl on the client load balancer gives us back a response from one of the clients, with Traefik routing the request.
Each time we curl the client load balancer, we get back a different remote address, indicating that the proxy is load balancing between the three instances of the web app deployed.
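To reproduce this, scale the webapp group up and curl in a loop; a sketch, assuming the job file above and the same placeholder addresses:

# run three instances of the webapp group
nomad job scale webapp 3

# each request may be answered by a different instance
for i in 1 2 3; do curl -s http://<load balancer ip>/ | grep Hostname; done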
In this blog post, we explored the seamless integration of HashiCorp Nomad and Traefik for deploying a load-balanced web application in the Vultr cloud. By utilizing Nomad’s service discovery and Traefik’s dynamic routing capabilities, we demonstrated the straightforward setup and management of a scalable cluster environment. In upcoming posts, we will deploy more services and dig deeper into the Nomad ecosystem.
Related blog posts: