Deploying Nomad on OpenBSD and Linux
OpenBSD
Nomad supports syslog natively, which is useful since OpenBSD's rc_bg=YES parameter does not redirect logs.
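For reference, a minimal rc.d script along these lines will run the agent in the background; this is a sketch that assumes nomad is installed as /usr/local/bin/nomad (a package may ship its own script):

```shell
#!/bin/ksh
#
# /etc/rc.d/nomad -- hypothetical rc.d script sketch.
# rc_bg=YES runs the daemon in the background, but does not capture
# its output, which is why enable_syslog is set in nomad.hcl.

daemon="/usr/local/bin/nomad agent"
daemon_flags="-config /etc/nomad/nomad.hcl"

. /etc/rc.d/rc.subr

rc_bg=YES
rc_reload=NO

rc_cmd $1
```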
As far as I can tell, the raw_exec plugin is the only execution mechanism supported on OpenBSD, but it is not enabled by default.
# /etc/nomad/nomad.hcl
data_dir = "/tmp/nomad"
enable_syslog = true

plugin "raw_exec" {
  config {
    enabled = true
  }
}
Server
# /etc/nomad/nomad.hcl
server {
  enabled = true
  bootstrap_expect = 1
}
Clients
These are installed on the worker nodes.
# /etc/nomad/nomad.hcl
client {
  enabled = true
  servers = ["172.16.0.1:4647"]
}
$ nomad node status
ID        DC   Name             Class   Drain  Eligibility  Status
689aee1d  dc1  db2.eradman.com  <none>  false  eligible     ready
7af99546  dc1  db1.eradman.com  <none>  false  eligible     ready
Execution Test
job "test" {
  type = "service"
  region = "global"
  datacenters = ["dc1"]

  group "maint" {
    task "test_task" {
      driver = "raw_exec"
      config {
        command = "sleep"
        args = ["100"]
      }
    }
  }
}
Start using
rcctl start nomad
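With the daemon running on both server and client, the job above can be submitted and inspected; the filename test.nomad is an assumption here:

```shell
$ nomad job run test.nomad
$ nomad job status test
```

Once an allocation has been placed, `nomad alloc status <alloc-id>` shows where and how the task is running.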
Linux
The following is a test configuration for running three control nodes and a Docker container. It assumes three servers, 10.10.0.[50-52], for fault-tolerant scheduling, and another three, 10.10.0.[53-55], for running jobs.
# etc/nomad.hcl
log_level = "DEBUG"
leave_on_terminate = true
leave_on_interrupt = true
data_dir = "/tmp/nomad"
datacenter = "nyc"

consul { }
Server
# etc/nomad-server.hcl
server {
  enabled = true
  bootstrap_expect = 3

  server_join {
    retry_join = [ "10.10.0.50" ]
    retry_max = 3
    retry_interval = "15s"
  }
}
Start using
nomad agent -config etc/nomad-server.hcl -bind=$(hostname -i)
Client
# etc/nomad-client.hcl
client {
  enabled = true
  servers = ["10.10.0.50:4647", "10.10.0.51:4647", "10.10.0.52:4647"]
}
Start using
nomad agent -config etc/nomad-client.hcl -bind=$(hostname -i)
Docker Test
# node-hello.nomad
job "node-hello" {
  type = "service"
  region = "global"
  datacenters = ["nyc"]

  group "test" {
    count = 2

    restart {
      attempts = 3
      delay = "5s"
      interval = "10m"
      mode = "fail"
    }

    network {
      #mode = "bridge"
      port "http" {
        to = 3000
        static = 3000
      }
    }

    task "frontend" {
      driver = "docker"
      config {
        image = "eradman/node-hello-app"
        ports = ["http"]
      }
      resources {
        cpu = 500    # MHz
        memory = 128 # MB
      }
    }
  }
}
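Before submitting, a dry run shows what the scheduler would do, which is handy for catching resource or placement problems ahead of time:

```shell
$ nomad job plan node-hello.nomad
```

The plan output also prints a `nomad job run -check-index` command that only applies the job if it has not changed since the plan was made.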
Example Status
$ nomad server members
Name                            Address     Port  Status  Leader  Protocol  Build  Datacenter  Region
fe-dev1.eradman.com.com.global  10.10.0.85  4648  alive   false   2         1.0.1  nyc         global
fe-dev2.eradman.com.com.global  10.10.0.86  4648  alive   false   2         1.0.1  nyc         global
fe-dev3.eradman.com.com.global  10.10.0.87  4648  alive   true    2         1.0.1  nyc         global
$ nomad job run node-hello.nomad
==> Monitoring evaluation "3dcc87a0"
    Evaluation triggered by job "node-hello"
==> Monitoring evaluation "3dcc87a0"
    Evaluation within deployment: "a39ba7e9"
    Allocation "44769c4c" created: node "414534d5", group "test"
    Allocation "e8ef03c1" created: node "7fb15111", group "test"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "3dcc87a0" finished with status "complete"
$ nomad job status
ID          Type     Priority  Status   Submit Date
node-hello  service  50        running  2020-12-30T13:03:28-05:00
$ nomad node status -allocs
ID        DC   Name                     Class   Drain  Eligibility  Status  Running Allocs
724317d7  nyc  fe-dev6.eradman.com.com  <none>  false  eligible     ready   1
414534d5  nyc  fe-dev5.eradman.com.com  <none>  false  eligible     ready   1
7fb15111  nyc  fe-dev4.eradman.com.com  <none>  false  eligible     ready   0
$ nomad node eligibility -disable 724317d7
$ nomad node drain -enable -yes 724317d7
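The inverse flags restore a node after maintenance: stopping the drain, then marking the node eligible again so new allocations can be placed on it:

```shell
$ nomad node drain -disable 724317d7
$ nomad node eligibility -enable 724317d7
```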