# Kubernetes Deployment
Run MAPS Messaging on Kubernetes with a pinned image tag and production‑grade settings.
Use a specific version tag (e.g., `4.0.1`) or parameterize the tag via Helm or Kustomize. Avoid `:latest` in production.
## Prerequisites

- Kubernetes 1.23+ and `kubectl` configured
- A StorageClass available for PersistentVolumeClaims
- Optional: Prometheus stack (Operator or scraper), Ingress controller
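To sanity-check the cluster side of these prerequisites, commands along these lines can help (the ingress/Prometheus pod name patterns below are assumptions and vary by installation):

```bash
# Confirm kubectl can reach the cluster and report the server version
kubectl version

# List StorageClasses; one should be marked "(default)" or be referenced explicitly in the PVC
kubectl get storageclass

# Optional: look for an ingress controller and a Prometheus stack (names vary by install)
kubectl get pods -A | grep -Ei 'ingress|prometheus'
```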
## Quick Start (Deployment + Service + PVC)
Minimal example that exposes MQTT (1883), AMQP (5672), and the REST API (8080). Add or remove ports as needed.

> Replace `{MAPS_VERSION}` with your release (e.g., `4.0.1`).
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: maps-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maps-messaging
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: maps-messaging
  template:
    metadata:
      labels:
        app: maps-messaging
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        fsGroup: 10001
      containers:
        - name: maps-messaging
          image: mapsmessaging/maps-messaging:{MAPS_VERSION}
          imagePullPolicy: IfNotPresent
          ports:
            - name: mqtt
              containerPort: 1883
            - name: amqp
              containerPort: 5672
            - name: http
              containerPort: 8080
            # Optional: Prometheus JMX exporter (if enabled as -javaagent)
            # - name: metrics
            #   containerPort: 9404
          env:
            # Example envs; keep minimal, add only what's used in your setup
            - name: SCHEMA_VALIDATION
              value: "true"
          volumeMounts:
            - name: maps-config
              mountPath: /opt/maps/config
              readOnly: true
            - name: maps-schemas
              mountPath: /opt/maps/schemas
              readOnly: true
            - name: maps-data
              mountPath: /data
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /api/v1/ping
              port: http
            initialDelaySeconds: 30
            periodSeconds: 20
            timeoutSeconds: 3
            failureThreshold: 3
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "1000m"
              memory: "2Gi"
      volumes:
        - name: maps-config
          configMap:
            name: maps-config
            optional: true
        - name: maps-schemas
          configMap:
            name: maps-schemas
            optional: true
        - name: maps-data
          persistentVolumeClaim:
            claimName: maps-data
---
apiVersion: v1
kind: Service
metadata:
  name: maps-messaging
  labels:
    app: maps-messaging
  # If using Prometheus without the Operator, you can annotate for scraping:
  # annotations:
  #   prometheus.io/scrape: "true"
  #   prometheus.io/port: "9404"
spec:
  type: ClusterIP
  selector:
    app: maps-messaging
  ports:
    - name: mqtt
      port: 1883
      targetPort: mqtt
    - name: amqp
      port: 5672
      targetPort: amqp
    - name: http
      port: 8080
      targetPort: http
    # - name: metrics
    #   port: 9404
    #   targetPort: metrics
```
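If you save the manifests above to a file (the name `maps-messaging.yaml` below is just an example), apply them and watch the rollout:

```bash
kubectl apply -f maps-messaging.yaml

# Wait for the rollout to complete and confirm the pods are Ready
kubectl rollout status deployment/maps-messaging
kubectl get pods -l app=maps-messaging
```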
Expose externally with `Service.type: LoadBalancer`, or via an Ingress for the REST API (8080). Expose only the ports you actually need.
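A minimal Ingress sketch for the REST API, assuming an NGINX ingress controller and the placeholder hostname `maps.example.com`; adjust the class name, host, paths, and TLS for your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: maps-messaging
spec:
  ingressClassName: nginx            # assumption: NGINX ingress controller is installed
  rules:
    - host: maps.example.com         # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: maps-messaging # the ClusterIP Service defined above
                port:
                  name: http         # routes to port 8080
```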
## Configuration & Data

- Config: mount configuration into `/opt/maps/config` and schemas into `/opt/maps/schemas` using ConfigMaps (as shown). To populate them, create the `maps-config` and `maps-schemas` ConfigMaps from your files:

  ```bash
  kubectl create configmap maps-config --from-file=./conf/
  kubectl create configmap maps-schemas --from-file=./schemas/
  ```

- Data: persistent runtime data is stored at `/data`. Size your PVC accordingly.
## Health Probes

- Readiness: `GET /health` (HTTP 200 indicates ready). The body returns `OK | Warning | Error`. Kubernetes HTTP probes do not inspect the body; if you require body-based logic, use a sidecar or an exec probe (see the sketch below).
- Liveness: `GET /api/v1/ping` should return `{"status":"Success"}` with HTTP 200.
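As a sketch only, an exec-based readiness probe that does inspect the `/health` body might look like this; it assumes `curl` and `grep` are available inside the container image and that the body contains `OK` when the node is healthy:

```yaml
readinessProbe:
  exec:
    command:
      - sh
      - -c
      # Succeed only when the /health body reports OK; adjust if Warning should still count as ready.
      - 'curl -fsS http://localhost:8080/health | grep -q "OK"'
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 3
```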
## Scaling & Availability

- Replicas: start with 3 for HA. Adjust based on throughput and persistence mode. Note that the Quick Start PVC uses `ReadWriteOnce`, which can only be attached to one node at a time; for per-pod persistent storage across nodes, consider a StatefulSet with `volumeClaimTemplates` or a `ReadWriteMany` StorageClass.
- Disruption control: add a PodDisruptionBudget to keep at least 2 pods available during voluntary disruptions:

  ```yaml
  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: maps-pdb
  spec:
    minAvailable: 2
    selector:
      matchLabels:
        app: maps-messaging
  ```

- Spread: use pod anti-affinity or topology spread constraints to avoid co-locating all replicas on the same node or zone (see the sketch below).
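A hedged sketch of a topology spread constraint to place under the Deployment's pod `spec:`; this spreads replicas across nodes, and swapping `topologyKey` to `topology.kubernetes.io/zone` spreads them across zones instead:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname   # spread across nodes
    whenUnsatisfiable: ScheduleAnyway     # use DoNotSchedule for a hard constraint
    labelSelector:
      matchLabels:
        app: maps-messaging
```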
## Observability (Prometheus & Jolokia)

- Prometheus: if you've enabled the JMX Exporter (`-javaagent`), expose port 9404 and either:
  - add `prometheus.io/*` annotations to the Service, or
  - create a `ServiceMonitor` (Prometheus Operator), as sketched below.
- Jolokia: optional port 8778 if you use JMX over HTTP.

See the dedicated guide: Prometheus (JMX Exporter)
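A minimal `ServiceMonitor` sketch for the Prometheus Operator, assuming the metrics port (9404) is uncommented in the Service above and that the `release: prometheus` label matches your Prometheus instance's serviceMonitorSelector (adjust to your install):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: maps-messaging
  labels:
    release: prometheus        # assumption: must match your Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: maps-messaging      # selects the Service defined above
  endpoints:
    - port: metrics            # the named metrics port (9404)
      interval: 30s
```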
## Security

- Run as non-root (UID/GID set above).
- Read-only root filesystem where possible; ensure `/data` is writable.
- NetworkPolicies (optional) to restrict inbound and outbound traffic (see the sketch below).
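A hedged NetworkPolicy sketch that limits inbound traffic to the three exposed ports; the rule below has no `from` selector, so any in-cluster source is allowed on those ports, and egress is left unrestricted. Tighten it with namespace or pod selectors for your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: maps-messaging
spec:
  podSelector:
    matchLabels:
      app: maps-messaging
  policyTypes:
    - Ingress
  ingress:
    # No "from" clause: any in-cluster source may reach these ports; add
    # namespaceSelector/podSelector entries here to restrict sources further.
    - ports:
        - protocol: TCP
          port: 1883   # MQTT
        - protocol: TCP
          port: 5672   # AMQP
        - protocol: TCP
          port: 8080   # REST API
```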