OpenBSD VPS Installation
Some virtual private server providers support OpenBSD as an install target. This is a wonderful start, but only a start, because the image applied will probably not have a partition scheme and mount options that make sense for a given security and operational strategy.
Fortunately, a hosting provider such as Vultr can deliver an OpenBSD instance that can reinstall itself using an autoinstall answers file and disklabel fetched over HTTP.
# /etc/fstab
/dev/sd0a /          ffs rw 1 1
/dev/sd0b none       swap sw
/dev/sd0d /usr       ffs rw,nodev 1 2
/dev/sd0e /usr/local ffs rw,wxallowed,nodev 1 2
/dev/sd0f /tmp       ffs rw,nodev,nosuid 1 2
/dev/sd0g /home      ffs rw,nodev,nosuid 1 2
/dev/sd0h /var       ffs rw,nodev,nosuid 1 2
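The mount options are the point of this layout: everything outside / is mounted nodev, and user-writable filesystems add nosuid. A small, hypothetical sanity check that the policy holds, run against a local copy of the table:

```shell
# Assumption: checking a saved copy of the fstab, not a live system
cat > fstab.sample <<'EOF'
/dev/sd0a / ffs rw 1 1
/dev/sd0b none swap sw
/dev/sd0d /usr ffs rw,nodev 1 2
/dev/sd0e /usr/local ffs rw,wxallowed,nodev 1 2
/dev/sd0f /tmp ffs rw,nodev,nosuid 1 2
/dev/sd0g /home ffs rw,nodev,nosuid 1 2
/dev/sd0h /var ffs rw,nodev,nosuid 1 2
EOF
# list any ffs filesystem other than / that does not carry nodev
missing=$(awk '$3 == "ffs" && $2 != "/" && $4 !~ /nodev/ { print $2 }' fstab.sample)
echo "partitions missing nodev: ${missing:-none}"
```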
Using a pre-built cloud image as a trampoline also provides the opportunity to
exclude some base filesets or include other capabilities using
siteXX.tgz
.
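For example, a minimal siteXX.tgz can be assembled with tar. The file names and contents below are illustrative, not from the original setup: the archive unpacks relative to / on the installed system, and an executable install.site runs at the end of the install.

```shell
# Sketch: build a site76.tgz fileset from a local overlay directory
mkdir -p site/etc
echo 'permit persist :wheel' > site/etc/doas.conf

# install.site is run in the target system at the end of the install
cat > site/install.site <<'EOF'
#!/bin/sh
echo "site fileset applied" >> /var/log/messages
EOF
chmod +x site/install.site

tar -C site -czf site76.tgz .
tar -ztf site76.tgz
```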
Declaring Cloud Resources
Infrastructure configuration systems such as Terraform are not strictly necessary when there is a well-documented REST interface, but such a tool can save the very repetitive work of driving the API by hand
- listing existing resources
- creating missing resources
- memorizing the IDs of resources
- updating existing resources
- removing resources that are no longer defined
Vultr offers a resource called a startup script, which allows a shell script to be run when the instance first boots. Define the contents of the auto-install script using a heredoc
# .tf
locals {
  auto_install_script = <<-EOF
    # ...
  EOF
}

resource "vultr_startup_script" "auto_reinstall" {
  name   = "auto_reinstall"
  script = base64encode(local.auto_install_script)
}

resource "vultr_instance" "svc1" {
  plan             = "vc2-1c-1gb"
  region           = "ord"
  label            = "svc1"
  os_id            = "2464" # OpenBSD 7.6
  hostname         = "svc1.eradman.com"
  backups          = "disabled"
  ddos_protection  = false
  activation_email = false
  ssh_key_ids      = ["06f97132-e980-4c13-b9e7-d2e8f228f194"] # root
  script_id        = vultr_startup_script.auto_reinstall.id
}
The OS ID for the OpenBSD installation can be found in the Vultr web UI or using their API
$ curl -s "https://api.vultr.com/v2/os" | flattenjs | grep -B1 'OpenBSD 7.6'
{os,39,id} 2464
{os,39,name} OpenBSD 7.6 x64
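flattenjs is a local helper; if it is not on hand, the same id can be pulled from a saved copy of the /v2/os response with standard tools. A sketch against a trimmed, hypothetical sample of the payload:

```shell
# Assumption: a pretty-printed excerpt of the /v2/os response saved locally
cat > os.json <<'EOF'
{
  "os": [
    {
      "id": 2464,
      "name": "OpenBSD 7.6 x64"
    }
  ]
}
EOF
# grab the "id" line immediately preceding the matching "name" line
os_id=$(grep -B1 '"name": "OpenBSD 7.6' os.json | sed -n 's/.*"id": \([0-9]*\).*/\1/p')
echo "os_id=$os_id"
```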
Spin up the new infrastructure using
terraform apply
.
Changing the
os_id
will result in a destroy/create operation, which will assign a new IP.
To work around this, change the OS in the Vultr web user interface, then
change the terraform plan to match.
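One way to codify that workaround, using the standard Terraform lifecycle meta-argument rather than anything Vultr-specific (my suggestion, not from the original workflow), is to have Terraform ignore subsequent os_id drift:

```hcl
resource "vultr_instance" "svc1" {
  # ... existing arguments ...
  lifecycle {
    ignore_changes = [os_id]
  }
}
```

With ignore_changes in place, terraform plan will not propose a destroy/create when the recorded os_id differs from the configuration.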
OpenBSD Install Templates
There are two configuration files we will want to host on our install mirror. The first is the disklabel containing our partition layout
/           1G
swap        512M
/usr        4G
/usr/local  5G
/tmp        2G-4G  20%
/home       2G-4G  20%
/var        5G-*   40%
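As an aside, if I read the autopartitioning template format correctly, the trailing percentages steer how space is divided among the auto-sized partitions, so it is worth checking they do not over-commit the disk. A quick, hypothetical sanity check against a local copy of the template:

```shell
# Assumption: template saved locally as svc1.disklabel
cat > svc1.disklabel <<'EOF'
/           1G
swap        512M
/usr        4G
/usr/local  5G
/tmp        2G-4G 20%
/home       2G-4G 20%
/var        5G-*  40%
EOF
# sum the percentage column across auto-sized partitions
total=$(awk '/%/ { sub("%", "", $NF); sum += $NF } END { print sum + 0 }' svc1.disklabel)
echo "percent allocated: $total"
```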
The second is a script that will emit a completed autoinstall(8) configuration
#!/bin/sh -eu
cat <<EOF
System hostname = $(hostname)
Password for root = ${root_password}
Network interfaces = vio0
IPv4 address for vio0 = autoconf
Do you expect to run the X Window System = no
Setup a user = ${admin_user}
Password for user ${admin_user} = ${admin_password}
Public ssh key for user = ${pub_key}
Which disk is the root disk = sd0
What timezone are you in = US/Eastern
Unable to connect using https. Use http instead = yes
Location of sets = http
Server = ${mirror}
Set name(s) = -all bsd* base* etc* man*
URL to autopartitioning template for disklabel = http://${mirror}/install/$(hostname -s).disklabel
EOF
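To preview what the installer will see, the template can be evaluated by hand. The sketch below uses a trimmed copy of the script and placeholder hashes, not the real values:

```shell
# Trimmed, hypothetical copy of the answers template
cat > svc1.auto_install <<'TEMPLATE'
#!/bin/sh -eu
cat <<EOF
Password for root = ${root_password}
Setup a user = ${admin_user}
Password for user ${admin_user} = ${admin_password}
EOF
TEMPLATE

# placeholder hashes, stand-ins for output of encrypt(1)
export root_password='$2b$08$placeholder'
export admin_user='admin'
export admin_password='$2b$08$placeholder'

# -u makes the render fail fast if any answer variable is unset
out=$(sh -eu svc1.auto_install)
echo "$out"
```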
Building a Self-Installer at First Boot
The trick to automating an install without PXE is to build a new ramdisk
to boot from that contains the
auto_install.conf
answer file
# auto_install_script
cd /tmp
mkdir -p mnt
for mirror in "relay1.sidecomment.io:8080" "svc1.sidecomment.io:8080"
do
    ftp -o auto_install.sh "http://$mirror/install/$(hostname -s).auto_install"
    ftp -o bsd.rd.gz "http://$mirror/pub/OpenBSD/7.6/amd64/bsd.rd"
    [ -f bsd.rd.gz ] && [ -f auto_install.sh ] && break
done

gzip -d bsd.rd.gz
rdsetroot -x bsd.rd disk.fs
vnconfig vnd0 disk.fs
mount /dev/vnd0a mnt

root_password='$2b$08$v2y8L...5DYQllk.8ji'
admin_user='admin'
admin_password='$2b$08$0MMjh...2ZN2VOZSQWC'
pub_key='ssh-ed25519 AAAAC3NzaC1...g3Aqre admin@localhost'
export root_password admin_user admin_password pub_key mirror

sh -eu auto_install.sh > mnt/auto_install.conf
umount mnt
vnconfig -u vnd0
rdsetroot bsd.rd disk.fs
gzip -c9n bsd.rd > /bsd.rd.new
chmod 700 /bsd.rd.new
echo "boot bsd.rd.new" > /etc/boot.conf
shutdown -r now
The steps this configuration script takes are
- Fetch the autoinstall(8) template and the latest release of bsd.rd
- Unpack bsd.rd and mount it with vnconfig(8)
- Set password hashes generated by encrypt(1) and the public key from ssh-keygen(1)
- Evaluate the autoinstall template and install it to the ramdisk root
- Update the boot image with rdsetroot(8), compress it with gzip(1)
- Copy the new .rd image to /, set it as the default boot file
- Reboot
The Vultr REST API and support for startup scripts make automating builds seamless, but if the hosting service does not have a similar feature, copy the auto-install script to the target host and start it by hand.
Local Terraform Build
Upstream provider builds are typically not refreshed when a new release of OpenBSD ships. If the upstream provider does not work, the Vultr provider can be built locally
cd /tmp
RELEASE="2.21.0"
DST="$HOME/.terraform.d/plugins/local/vultr/vultr/$RELEASE/openbsd_amd64"
mkdir -p $DST
ftp https://github.com/vultr/terraform-provider-vultr/archive/refs/tags/v$RELEASE.tar.gz
tar -zxf v$RELEASE.tar.gz
cd terraform-provider-vultr-$RELEASE
go build
mv terraform-provider-vultr $DST/terraform-provider-vultr_v$RELEASE
Update
providers.tf
to source this module locally
terraform {
  required_providers {
    vultr = {
      source  = "local/vultr/vultr"
      version = "2.21.0"
    }
  }
}