iSCSI Volume Claims
Persistent Volumes provide a means of connecting a service with storage. Local SSDs will, of course, be uncontested in terms of latency, but iSCSI allows a container to be scheduled on any compute node.
| | Local Disk | iSCSI | NFSv3 |
|---|---|---|---|
| Volume Claim Required | yes | yes | no |
| Authenticated | – | yes | no |
| Schedule on any Node | no | yes | yes |
| Concurrent Access | yes | no | yes |
| Native Linux File System | yes | yes | no |
| Uses Buffer Cache | yes | yes | no |
Target
Using ZFS, create a zvol for each target/LUN:
```shell
zfs create -o canmount=off zpool2/iscsi
zfs create -V 100G -o volmode=dev zpool2/iscsi/t0-lun0
zfs create -V 100G -o volmode=dev zpool2/iscsi/t0-lun1
```
Following the FreeBSD Network Services guide, configure the target in /etc/ctl.conf:
```
# /etc/ctl.conf
auth-group ag0 {
    chap postgresql ************
}
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 192.168.0.9
}
target iqn.2026-04.lan.fs1:target0 {
    auth-group ag0
    portal-group pg0
    lun 0 {
        path /dev/zvol/zpool2/iscsi/t0-lun0
        blocksize 4096
        size 100G
    }
    lun 1 {
        path /dev/zvol/zpool2/iscsi/t0-lun1
        blocksize 4096
        size 100G
    }
}
```
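ctld reads this file at startup. Enable and start the service (`service ctld reload` picks up later edits):

```shell
sysrc ctld_enable="YES"   # persist across reboots
service ctld start        # start the CAM Target Layer daemon
```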
Initiator
Kubernetes needs the initiator service running on each node:
```shell
dnf -qy install iscsi-initiator-utils
systemctl enable --now iscsid
```
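For logging in by hand with CHAP (outside Kubernetes, which injects the credentials per-session from the iscsi-chap Secret shown below), the initiator reads its credentials from /etc/iscsi/iscsid.conf:

```
# /etc/iscsi/iscsid.conf — only needed for manual logins;
# Kubernetes supplies these from the Secret instead.
node.session.auth.authmethod = CHAP
node.session.auth.username = postgresql
node.session.auth.password = ************
```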
The volume claim expects a device formatted with a file system and no GPT or MBR partition table (e.g. `mkfs.ext4 /dev/sda`). Since the iSCSI device is connected on the Kubernetes node, it appears as a local device:
```
$ mount | grep sda
/dev/sda on /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/192.168.0.9:3260-iqn.2026-04.lan.fs1:target0-lun-0 type ext4 (rw,relatime,stripe=4)
/dev/sda on /var/lib/kubelet/pods/bb8a3569-ceb8-4ee3-a6d6-308194f07ff8/volumes/kubernetes.io~iscsi/iscsi0 type ext4 (rw,relatime,stripe=4)
```
Volume Claims
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.168.0.9:3260
    iqn: iqn.2026-04.lan.fs1:target0
    lun: 0
    fsType: ext4
    readOnly: false
    chapAuthSession: true
    secretRef:
      name: postgresql-data
  claimRef:
    namespace: default
    name: pg2-pvc
  storageClassName: fs-vdev
```
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg2-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: fs-vdev
```
Logically, a PersistentVolumeClaim would reference a PersistentVolume, but in fact it is the other way around: the PersistentVolume's claimRef binds it to a specific claim. Without a claimRef, volumes may be bound to the wrong application.
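After applying both manifests, the pre-bound pair can be verified; a quick sketch (the file names are illustrative):

```shell
kubectl apply -f pv-iscsi0.yaml -f pvc-pg2.yaml   # the two manifests above
kubectl get pv iscsi0                             # STATUS should read Bound
kubectl get pvc pg2-pvc                           # bound to volume iscsi0
```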
Authentication is provided using a specialized Secret type:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-data
type: "kubernetes.io/iscsi-chap"
stringData:
  node.session.auth.username: postgresql
  node.session.auth.password: ************
```
Testing
Initial connection test without authentication:
```shell
t="iqn.2026-04.lan.fs1:target0"
p="fs1.lan:3260"
iscsiadm -m node --targetname $t --portal $p -o new      # add connection
iscsiadm -m node -L all                                  # log in
iscsiadm -m node                                         # list
iscsiadm -m node --targetname $t --portal $p -u          # log out
iscsiadm -m node --targetname $t --portal $p -o delete   # remove connection
```
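Instead of adding the node record by hand, the target can also be discovered from the portal; the portal-group above sets discovery-auth-group no-authentication, so discovery needs no credentials:

```shell
iscsiadm -m discovery -t sendtargets -p fs1.lan:3260
```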
Example Application
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: pg2
spec:
  clusterIP: None
  ports:
    - name: postgresql
      protocol: TCP
      port: 5432
      targetPort: 5432
  selector:
    app: pg2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pg2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pg2
  template:
    metadata:
      labels:
        app: pg2
    spec:
      containers:
        - name: pg2
          image: docker.io/postgres:18-alpine
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: pg2-data
              mountPath: "/var/lib/postgresql"
      volumes:
        - name: pg2-data
          persistentVolumeClaim:
            claimName: pg2-pvc
```
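Once the pod is running, a quick end-to-end check confirms the database is writing to the iSCSI-backed volume (assumes the image's default postgres superuser):

```shell
kubectl exec deploy/pg2 -- psql -U postgres -c 'SHOW data_directory'
```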