Kubernetes
k3s cluster architecture, namespaces, workload configuration, and security hardening
Cluster Overview
All HanseNexus workloads run on a single-node k3s v1.34 cluster hosted on a Hetzner CPX52 instance.
| Property | Value |
|---|---|
| Distribution | k3s v1.34 |
| Nodes | 1 (single-node) |
| Server | hn-k3s (Hetzner CPX52) |
| Public IP | 91.99.1.144 |
| Tailscale IP | 100.90.51.49 |
| CNI | Flannel (embedded) |
| Ingress Controller | Traefik |
| Certificate Manager | cert-manager |
Namespaces
| Namespace | Purpose | Workloads |
|---|---|---|
| hn-apps | Production frontends | archus, bgs-service, calnexus, elbe-akustik, lexilink, nexus-lms, planex, portfolio, qript |
| convex | Convex backends | Per-app StatefulSets for archus, bgs-service, calnexus, elbe-akustik, lexilink, nexus-lms, planex |
| hn-staging | Staging environment | lexilink (frontend + Convex) |
| hn-preview | PR preview deployments | lexilink (frontend + Convex) |
| harbor | Container registry | Harbor (registry.hansenexus.dev) |
| signoz | Observability platform | SigNoz + OpenTelemetry Collector |
| op-system | Secret management | 1Password Connect Operator |
| mcp-system | MCP server access | ServiceAccount for Kubernetes MCP server |
| cert-manager | TLS certificate management | cert-manager controller + webhook |
Kustomize Structure
All Kubernetes manifests are managed via Kustomize in the k8s/ directory at the monorepo root.
k8s/
├── kustomization.yaml # Root: includes base/, apps/, convex/, secrets/
├── base/
│ ├── kustomization.yaml
│ ├── namespace.yaml # hn-apps namespace
│ └── signoz/ # OpenTelemetry collector DaemonSet + RBAC
├── apps/<app>/ # Per-app manifests (hn-apps namespace)
│ ├── kustomization.yaml
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── ingress.yaml
│ ├── serviceaccount.yaml
│ ├── pdb.yaml # Multi-replica apps only
│ └── onepassworditem.yaml
├── convex/<app>/ # Per-app Convex instances (convex namespace)
│ ├── statefulset.yaml
│ ├── services.yaml
│ ├── ingress.yaml
│ ├── dashboard.yaml
│ ├── onepassworditem.yaml
│ └── kustomization.yaml
├── overlays/
│ ├── staging/ # hn-staging namespace
│ └── preview/ # hn-preview namespace
├── rbac/
│ ├── mcp-server/ # MCP server ServiceAccount + RBAC
│ └── ci-deploy/ # CI/CD deploy ServiceAccount + scoped Role
└── secrets/ # Shared auth-secret across namespaces
Overlays use standalone manifests (full resource definitions), not Kustomize patches. This keeps each environment self-contained and avoids patch merge complexity.
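As an illustration, a staging overlay kustomization under this convention simply lists complete manifests. The file names below are assumptions for the sketch, not the actual contents of k8s/overlays/staging/:

```yaml
# Sketch of a self-contained overlay; file names are illustrative.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml    # full hn-staging Namespace definition
  - deployment.yaml   # complete Deployment manifest, not a patch
  - service.yaml
  - ingress.yaml
```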
App Deployments
Each app in hn-apps runs as a standard Kubernetes Deployment with the following configuration:
Rolling Update Strategy
All deployments enforce zero-downtime updates:
```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
```
At least one old pod stays ready until the new pod passes its readiness probe.
Health Probes
Every app exposes GET /api/health on port 3000, returning { status: "ok", timestamp: <epoch_ms> }.
- Liveness probe: starts after 15s, checks every 30s; the container is restarted after 3 consecutive failures
- Readiness probe: starts after 5s, checks every 10s; the pod is removed from Service endpoints after 3 consecutive failures
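Expressed as probe configuration, those timings would look roughly like this (field values follow the bullets above; the actual manifests may differ in detail):

```yaml
livenessProbe:
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 30
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```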
Image Pull Policy
All containers use imagePullPolicy: Always so the node always pulls the referenced image rather than relying on its local cache; this keeps mutable tags such as latest fresh and is equally correct for immutable SHA tags.
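In the container spec this is a single field; a minimal sketch (the image reference is illustrative):

```yaml
containers:
  - name: <app>
    image: registry.hansenexus.dev/<app>:latest  # illustrative image reference
    imagePullPolicy: Always
```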
Per-App ServiceAccounts
Each app runs with a dedicated ServiceAccount with automountServiceAccountToken: false. Next.js apps have no need for Kubernetes API access, so the token is not mounted.
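A sketch of such a ServiceAccount (the name placeholder stands in for the per-app value):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <app>
  namespace: hn-apps
# No Next.js app talks to the Kubernetes API, so the token is never mounted.
automountServiceAccountToken: false
```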
Pod Disruption Budgets
Multi-replica apps (those with replicas >= 2) have PodDisruptionBudgets to ensure availability during voluntary disruptions such as node drains or cluster upgrades:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: <app>
```
Check which apps have PDBs: kubectl get pdb -n hn-apps.
Pod Anti-Affinity
Multi-replica apps include preferred pod anti-affinity rules to spread pods across nodes. On the current single-node cluster this has no effect, but ensures automatic spread if the cluster scales:
```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: [<app>]
          topologyKey: kubernetes.io/hostname
```
Convex StatefulSets
Each Convex-enabled app runs its own Convex backend as a StatefulSet in the convex namespace.
Update strategy: OnDelete — this is intentional. Convex is a data-stateful workload and upgrades should be manually controlled. To update a Convex backend:
- Delete the pod: kubectl delete pod <convex-pod> -n convex
- The StatefulSet controller recreates it with the new image
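The corresponding StatefulSet fragment is minimal; a sketch:

```yaml
spec:
  updateStrategy:
    type: OnDelete  # pods are only replaced when deleted manually
```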
Each Convex instance includes:
- A StatefulSet for the backend process
- ClusterIP services for backend and site traffic
- An Ingress for external API and site access
- A dashboard Deployment with BasicAuth middleware
- A OnePasswordItem for the admin key secret
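Tying these together, a per-app kustomization.yaml under k8s/convex/<app>/ would reference the manifests listed in the directory tree above; a sketch:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: convex
resources:
  - statefulset.yaml
  - services.yaml
  - ingress.yaml
  - dashboard.yaml
  - onepassworditem.yaml
```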
Ingress and TLS
All external traffic is handled by Traefik (embedded in k3s) with TLS certificates managed by cert-manager.
- Ingress resources define tls blocks with secretName references
- cert-manager automatically provisions and renews Let’s Encrypt certificates
- Convex dashboard Ingresses use Traefik BasicAuth middleware (convex-dashboard-auth)
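A sketch of what such an Ingress could look like; the issuer name, hostname, and service details are assumptions rather than values copied from the repo:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <app>
  namespace: hn-apps
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # issuer name is an assumption
spec:
  rules:
    - host: <app>.hansenexus.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <app>
                port:
                  number: 3000
  tls:
    - hosts:
        - <app>.hansenexus.dev
      secretName: <app>-tls  # cert-manager stores the issued certificate here
```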
DNS Records
All *.hansenexus.dev domains point to the cluster public IP 91.99.1.144. The exception is lexilink.app, which has its own DNS configuration.
RBAC
CI/CD Deploy
A dedicated ci-deploy ServiceAccount in the hn-apps namespace has minimal permissions:
- get, patch on Deployments (for kubectl set image)
- get, list, watch on Pods and ReplicaSets (for kubectl rollout status)
Defined in k8s/rbac/ci-deploy/.
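A sketch of the corresponding Role rules; the manifest in k8s/rbac/ci-deploy/ is authoritative and the resource name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deploy
  namespace: hn-apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"]          # kubectl set image
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # kubectl rollout status
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]
```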
MCP Server
A scoped mcp-server ServiceAccount in the mcp-system namespace provides read access for the Kubernetes MCP server running on hn-hub. Defined in k8s/rbac/mcp-server/.
Network Policies
Current status: Not enforced. The k3s cluster uses embedded Flannel as CNI, which does not enforce NetworkPolicies. All pods can communicate with all other pods across namespaces.
Migrating to Calico or Cilium is required to enable network segmentation. This is tracked as future work.
Monitoring Integration
The SigNoz OpenTelemetry Collector runs as a DaemonSet (defined in k8s/base/signoz/). App pods are auto-instrumented via the OTel Operator annotation:
instrumentation.opentelemetry.io/inject-nodejs: "true"
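In a Deployment, the annotation sits on the pod template metadata; a minimal sketch:

```yaml
spec:
  template:
    metadata:
      annotations:
        # OTel Operator injects the Node.js auto-instrumentation into matching pods
        instrumentation.opentelemetry.io/inject-nodejs: "true"
```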
See the Monitoring page for details.