Common Deployment Patterns
This guide shows practical examples of deploying different types of applications with m2p, based on real-world configurations.
Muppy Managed Services
Important: Muppy provides managed, high-availability versions of the services below - use them instead of deploying these services as containers:
PostgreSQL (Managed)
- Never deploy PostgreSQL in containers - it is bad practice: performance suffers, and stateful database clusters are hard to operate correctly
- Use Muppy's managed HA PostgreSQL cluster, accessible via the standard `PG*` environment variables
- Connection settings are automatically injected into your application environment
Vault (Managed)
- Managed secrets and configuration through Muppy's Vault integration
- Define `vault_objects` in your config instead of managing secrets manually
- Supports both `configmap` and `secret` types
Multi-Service Web Application
Based on the XtraHo stack - a complete application with multiple services and managed vault integration:
mpy_meta_config:
parts:
# Main application service
xtraho:
dashboard_name: xTraho
type: server
deployment:
docker_image: nginxdemos/hello
replica_count: 1
ports:
- name: http
port: 80
routes:
- name: LO
pathPrefix: PathPrefix(`/`)
port_name: http
middlewares:
ipwhitelist: true
basicauth: true
toctoc: false
# Use managed vault for config and secrets
envFrom:
- configMapRef:
name: xtraho-service-env
- secretRef:
name: xtraho-creds
debug_mode_available: basic
# Authentication service with subdomain
keyclk:
dashboard_name: Keycloak
type: server
deployment:
docker_image: quay.io/keycloak/keycloak:26.0.7
args: ["start-dev"]
env_vars:
- name: KC_BOOTSTRAP_ADMIN_USERNAME
value: "admin"
- name: KC_BOOTSTRAP_ADMIN_PASSWORD
value: "changeme"
ports:
- name: http
port: 9080
host_prefix: "keycloak-" # Creates keycloak-myapp.example.com
routes:
- name: gui
pathPrefix: PathPrefix(`/`)
port_name: http
middlewares:
ipwhitelist: true
basicauth: true
volumes:
- name: keycloak-realm-data
mountPath: /opt/keycloak/data/import
# Service discovery with custom subdomain
consul:
dashboard_name: Consul
type: server
deployment:
docker_image: docker.io/bitnami/consul:1.20.1
command: [consul]
args: [agent, -dev, -ui, -client, 0.0.0.0, -log-level=INFO]
replica_count: 1
ports:
- name: http
port: 8500
host_prefix: "consul-" # Creates consul-myapp.example.com
routes:
- name: gui
pathPrefix: PathPrefix(`/`)
port_name: http
middlewares:
ipwhitelist: true
# Managed vault objects instead of manual secrets
vault_objects:
- name: xtraho-service-env
type: configmap
- name: xtraho-creds
type: secret
- name: keycloak-realm-data
scope: instance
type: configmap
Production Muppy Application
Based on Muppy's own deployment configuration - shows advanced patterns:
mpy_meta_config:
parts:
# Web interface with PostgreSQL integration
gui:
type: server
dashboard_name: Muppy
deployment:
args:
- --max-cron-threads=0
- --workers=4
env_vars:
- name: INOUK_SESSION_STORE
value: postgresql
- name: INOUK_SESSION_STORE_DBNAME
value: '@{ obj.dash_pg_database }@' # Uses managed PostgreSQL
- name: INOUK_SESSION_STORE_SAMEDB
value: 'TRUE'
envFrom:
- name: muppy-run-env
secretRef:
resources:
requests:
cpu: 0.2
memory: 640M
limits:
cpu: 0.5
memory: 1280M
ports:
- name: odoo
port: 8069
- name: gevent
port: 8072
- name: codr
port: 8765
debug_mode: true
routes:
- name: gui
pathPrefix: PathPrefix(`/`)
port_name: odoo
middlewares:
ipwhitelist: true
- name: longpolling
pathPrefix: PathPrefix(`/websocket`) || PathPrefix(`/longpolling`)
port_name: gevent
# Advanced debug mode with VSCode Server
debug_mode_available: codr
debug_config:
debug_fqdn_prefix: codr-
command: [/usr/bin/code-server]
args: [--bind-addr=0.0.0.0:8765, --auth=password]
resources:
requests:
cpu: 0.5
memory: 768M
limits:
cpu: 2
memory: 1536M
# Background workers
workers:
type: worker
dashboard_name: Workers
deployment:
args:
- --workers=1
- --no-http
- --limit-time-cpu=31536000
- --max-cron-threads=4
env_vars:
- name: INOUK_SESSION_STORE
value: postgresql # Uses managed PostgreSQL
envFrom:
- name: muppy-run-env
secretRef:
resources:
limits:
cpu: 400m
memory: 1024M
requests:
cpu: 200m
memory: 512M
debug_mode_available: basic
# Network service with special permissions
tscale:
type: server
dashboard_name: Tailscale
deployment:
docker_image: tailscale/tailscale:stable
envFrom:
- name: tailscale-params
secretRef:
securityContext:
capabilities:
add:
- NET_ADMIN
drop:
- ALL
allowPrivilegeEscalation: false
privileged: false
volumes:
- name: tailscale-data
mountPath: "/var/lib/tailscale"
volumesDefinition:
- name: tailscale-data
description: Tailscale state data
claim:
size: 1Mi
# Advanced vault configuration with templating
vault_objects:
- name: muppy-run-env
scope: qualifier
type: secret
default_values:
cfgmap_use_renderer: true
cfgmap_type: envfile
cfgmap_file: |-
IKB_SMTP=@{ vault_model.get('muppy-signup-smtp').server }@
IKB_SMTP_PORT=587
IKB_SMTP_SSL=ON
IKB_SMTP_USER=@{ vault_model.get('muppy-signup-smtp').user }@
IKB_SMTP_PASSWORD=@{ vault_model.get('muppy-signup-smtp').password }@
Simple Application with Managed PostgreSQL
Instead of deploying a PostgreSQL container, use Muppy's managed service:
mpy_meta_config:
parts:
api:
type: server
deployment:
docker_image: myapp/api:latest
replica_count: 2
env_vars:
# PostgreSQL connection automatically provided by Muppy
# Standard PG* environment variables are injected
- name: APP_ENV
value: production
- name: REDIS_URL
value: "redis://@{ obj.key }@-redis-svc:6379"
# Deploy Redis since it's not managed by Muppy
redis:
type: server
deployment:
docker_image: redis:7-alpine
replica_count: 1
# No PostgreSQL volumesDefinition needed - it's managed!
Background Processing with Queues
Worker services that process jobs from Redis queues:
mpy_meta_config:
parts:
worker:
type: worker # No external network access
deployment:
docker_image: myapp/worker:latest
replica_count: 3
env_vars:
- name: REDIS_URL
value: "redis://@{ obj.key }@-redis-svc:6379"
- name: WORKER_CONCURRENCY
value: "4"
scheduler:
type: worker
deployment:
docker_image: myapp/scheduler:latest
replica_count: 1
redis:
type: server
deployment:
docker_image: redis:7-alpine
replica_count: 1
volumes:
- name: redis-data
mountPath: /data
volumesDefinition:
- name: redis-data
claim:
size: 5Gi
Multi-Container Application (MongoDB Example)
When you need databases not managed by Muppy (like MongoDB):
mpy_meta_config:
parts:
backend:
type: server
deployment:
docker_image: ghcr.io/bluewave-labs/checkmate-backend-mono:latest
replica_count: 1
env_vars:
- name: MONGODB_URI
value: "mongodb://@{ obj.key }@-mongodb-svc:27017/checkmate"
# MongoDB is not managed by Muppy, so deploy as container
mongodb:
type: server
deployment:
docker_image: ghcr.io/bluewave-labs/checkmate-mongo:latest
replica_count: 1
volumes:
- name: mongo-data
mountPath: /data/db
# Redis cache
redis:
type: server
deployment:
docker_image: redis:7-alpine
replica_count: 1
volumesDefinition:
- name: mongo-data
claim:
size: 10Gi
accessModes: ReadWriteOnce
Key Patterns and Best Practices
Use Managed Services
- ✅ PostgreSQL: Use Muppy's managed HA cluster (automatic `PG*` env vars)
- ✅ Vault: Use `vault_objects` for secrets and config
- ⚠️ Redis/MongoDB: Deploy as containers only when needed
Networking
- Use `host_prefix` for multiple HTTP services under one package
- Define appropriate `middlewares` for security
- Use template variables for internal communication (e.g. `"redis://@{ obj.key }@-redis-svc:6379"`)
- Use HTTP parts to create additional routes to server parts (see pattern below)
Resource Management
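The `resources` block from the Muppy example above is the pattern to follow: request what the service needs at steady state, and cap what it may burst to. A sketch - the values here are placeholders to tune per workload:

```yaml
deployment:
  resources:
    requests:       # guaranteed baseline
      cpu: 200m
      memory: 512M
    limits:         # hard ceiling
      cpu: 500m
      memory: 1024M
```

Setting requests well below limits lets bursty services share nodes efficiently while still protecting neighbors from runaway consumption.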
Debug Modes
- `basic`: Shell access for troubleshooting
- `codr`: Full VSCode Server with web IDE
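As the production Muppy example above illustrates, the two modes are declared per part. A sketch following that example (part names are illustrative):

```yaml
workers:
  debug_mode_available: basic   # shell access only

gui:
  debug_mode_available: codr    # web IDE
  debug_config:
    debug_fqdn_prefix: codr-    # served on codr-<app>.example.com
    command: [/usr/bin/code-server]
    args: [--bind-addr=0.0.0.0:8765, --auth=password]
```

The `codr` mode typically pairs with a larger `resources` block in `debug_config`, since a web IDE needs more CPU and memory than the service itself.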
Security Context
securityContext:
capabilities:
add: ["NET_ADMIN"]
drop: ["ALL"]
allowPrivilegeEscalation: false
privileged: false
Advanced Routing Patterns
Multiple Subdomains for One Container
You can expose different ports of a single container on separate subdomains with different security policies.
Use Case: A REST API with Prometheus metrics exposed on a secured subdomain.
mpy_meta_config:
parts:
api:
type: server
deployment:
docker_image: "mycompany/api:v2"
env_vars:
- name: PORT
value: "8000"
- name: METRICS_PORT
value: "9090"
ports:
- name: http
port: 8000 # REST API endpoint
- name: metrics
port: 9090 # Prometheus /metrics endpoint
routes:
- name: api
pathPrefix: PathPrefix(`/`)
port_name: http
# No middleware - public API
metrics:
type: http
host_prefix: "metrics-"
routes:
- name: prometheus
pathPrefix: PathPrefix(`/`)
service_name: "@{ obj.key }@-api-svc" # Routes to api's service
port_name: metrics # References api's metrics port
middlewares:
ipwhitelist: true # Secured - only from monitoring IPs
Results:
- myapp.example.com → api:8000 (public REST API)
- metrics-myapp.example.com → api:9090 (secured Prometheus metrics)
Why This Works:
- Single container, multiple ports: Modern apps often expose multiple ports (main app + metrics/health/admin)
- HTTP parts for routing only: The `metrics` part creates IngressRoutes without deploying containers
- Template variables: `"@{ obj.key }@-api-svc"` works in quoted strings within `mpy_meta_config`
- Granular security: Different subdomains can have different middleware (public vs IP-restricted)
Benefits:
- ✅ Single deployment to manage
- ✅ Separate security policies per endpoint
- ✅ Clean subdomain organization
- ✅ Easy visibility in the Muppy dashboard (separate "parts" for routing)
Common Applications:
- Metrics: Prometheus /metrics on secured subdomain
- Admin panels: Management UI with basicAuth + IP whitelist
- Health checks: /health endpoints for internal monitoring
- Debug endpoints: Profile, trace, or debug routes restricted to dev team IPs
Best Practice: Follow the single-process-per-container principle. A well-designed app exposes multiple ports from a single process for different purposes (e.g. port 8000 for the API, port 9090 for metrics) rather than running multiple processes in one container.
See also: Routing to Internal Service Parts
Vault Objects with Templates
vault_objects:
- name: app-config
type: configmap
default_values:
cfgmap_use_renderer: true
cfgmap_type: envfile
cfgmap_file: |-
API_KEY=@{ vault_model.get('external-api').key }@
DB_HOST=@{ obj.dash_pg_hostname }@
Migration from Container Databases
If you have existing configs with PostgreSQL containers:
Before (bad practice):
postgres:
type: server
deployment:
docker_image: postgres:15
env_vars:
- name: POSTGRES_DB
value: myapp
volumes:
- name: postgres-data
mountPath: /var/lib/postgresql/data
After (using managed PostgreSQL):
# No PostgreSQL part needed!
# Just use the automatic PG* environment variables
# that Muppy injects into all applications
Your application will automatically receive:
- `PGHOST`, `PGPORT`, `PGDATABASE`, `PGUSER`, `PGPASSWORD`
- High availability, automated backups, performance optimization
- No storage management or container overhead
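When an application needs the database name or host explicitly - for example for a session store, as in the production Muppy configuration above - template variables can reference the managed cluster. A sketch using the object attributes seen in the examples above (`dash_pg_database`, `dash_pg_hostname`):

```yaml
env_vars:
  # Rendered by Muppy at deploy time against the managed PostgreSQL cluster
  - name: SESSION_DB
    value: '@{ obj.dash_pg_database }@'
  - name: DB_HOST
    value: '@{ obj.dash_pg_hostname }@'
```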