Parts Configuration Reference
This is the complete reference for configuring parts in mpy_meta_config. Use this as the authoritative guide for writing configurations.
Configuration Structure
```yaml
mpy_meta_config:
  parts:
    {part_code}:              # ≤ 8 characters, unique identifier
      type: server            # server | worker | cronjob | http
      # Part-specific configuration sections...

  # Optional global sections
  volumesDefinition: [...]    # Shared volume definitions
  vault_objects: [...]        # Vault object declarations
  user_manifests: [...]       # Custom Kubernetes manifests
```
Part Types Overview
| Type | Purpose | Use Cases | Key Sections |
|---|---|---|---|
| `server` | Web applications with HTTP endpoints | Web apps, APIs, frontends | `deployment`, `ports`, `routes` |
| `worker` | Background processing services | Job processors, message consumers | `deployment` only |
| `cronjob` | Scheduled tasks | Cleanup jobs, reports, backups | `cronjob` |
| `http` | Route existing services | Expose existing K8s services | `routes` only |
Server Parts
Server parts are long-running processes that expose network ports and handle HTTP traffic.
Basic Structure
```yaml
parts:
  {part_code}:
    type: server              # Optional, default type

    deployment:               # Required - container configuration
      # Container settings...

    ports:                    # Required if exposing network ports
      # Network port definitions...

    host_prefix: "api-"       # FQDN prefix for multi-service apps
    expose_publicly: true     # (Since muppy 14.88) triggers Public DNS Records generation

    routes:                   # Optional - HTTP routing rules
      # Traffic routing configuration...

    # Optional sections
    volumes: [...]            # Volume mounts
    envFrom: [...]            # ConfigMap/Secret references
    securityContext: {...}    # Pod security settings
    debug_mode_available: ... # Debug configuration
    dashboard_name: "..."     # Dashboard display name
```
Host Prefix for Multi-Service Routing
The host_prefix attribute allows you to create unique subdomains for parts when deploying multiple services under a single Installed Package.
How It Works
- Each Installed Package receives one base FQDN (e.g., `myapp.example.com`)
- This FQDN is defined in `package_release.main_fqdn`
- Parts can define `host_prefix` to create subdomains by prepending a prefix to the base FQDN
Example: Basic Usage
```yaml
parts:
  api:
    host_prefix: "api-"       # Results in: api-myapp.example.com
    routes:
      - name: main
        pathPrefix: PathPrefix(`/`)
        port_name: http
```
With base FQDN myapp.example.com, this creates api-myapp.example.com.
Example: Multiple Services
```yaml
parts:
  api:
    host_prefix: "api-"       # Creates: api-example.com
    routes: [...]

  admin:
    host_prefix: "admin-"     # Creates: admin-example.com
    routes: [...]

  frontend:
    # No host_prefix          # Uses base: example.com
    routes: [...]
```
Location
The host_prefix is defined at the part level and applies to all routes within that part.
Note: `host_prefix` cannot be overridden at the route level. All routes within a part share the same FQDN prefix.
Backward Compatibility
Note: The snake_case `host_prefix` is the current standard. The camelCase `hostPrefix` is still supported for backward compatibility.

- `host_prefix` (snake_case) - recommended ✅
- `hostPrefix` (camelCase) - supported for legacy configs
- If both are present, `host_prefix` takes priority
Deployment Section
The deployment section configures the container runtime:
```yaml
deployment:
  # Container image (optional - can be set in values.docker_image)
  docker_image: "myregistry/myapp:v1.2.3"

  # Command and arguments
  command: ["/app/start"]     # Optional - overrides Docker ENTRYPOINT
  args: ["--port=8080"]       # Optional - container arguments

  # Environment variables
  env_vars:
    - name: LOG_LEVEL
      value: "info"
    - name: DATABASE_URL
      value: "postgresql://..."

  # Resource requirements
  resources:
    requests:
      cpu: "100m"             # CPU request (millicores)
      memory: "128Mi"         # Memory request
    limits:
      cpu: "500m"             # CPU limit
      memory: "512Mi"         # Memory limit

  # Replica count (can be overridden by dashboard)
  replica_count: 2
```
Important: The deployment section must contain at least one property, even if empty (e.g., resources: {}).
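For instance, a part that takes all of its other settings from defaults can satisfy this rule with a single empty property (a minimal sketch, not a complete part definition):

```yaml
deployment:
  resources: {}   # At least one property is required, even if empty
```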
Ports Section
Define network ports the container exposes:
```yaml
ports:
  - name: http                # Port identifier for routing
    port: 8080                # Container port number
  - name: metrics
    port: 9090
  - name: grpc
    port: 50051
```
Routes Section
Configure HTTP traffic routing through Traefik:
```yaml
expose_publicly: true         # [OPTIONAL - since muppy 14.88] triggers Public DNS Records generation. Default = false
host_prefix: "crm-"           # [OPTIONAL] Creates subdomain crm-{main-fqdn} for all routes in this part
routes:
  - name: main                      # Route identifier
    pathPrefix: PathPrefix(`/`)     # Traefik path matching
    port_name: "http"               # References port name above
    middlewares:                    # Available middleware (not activated by default)
      ipwhitelist: true             # IP filtering capability
      basicauth: true               # Basic authentication capability
      toctoc: true                  # TocToc authentication capability
    sticky:                         # Session stickiness (optional)
      cookie:
        httpOnly: true
        name: SESSIONID
        secure: true
        sameSite: none
  - name: api
    pathPrefix: PathPrefix(`/api`) || PathPrefix(`/v1`)   # Multiple path prefixes
    port_name: "http"
    middlewares:
      ipwhitelist: true
```
Complete Server Example
```yaml
parts:
  webapp:
    type: server
    host_prefix: "app-"       # Creates app-{main-fqdn} routing

    deployment:
      docker_image: "myapp:latest"
      args: ["--web-server"]
      env_vars:
        - name: PORT
          value: "8080"
        - name: ENV
          value: "production"
      resources:
        requests:
          cpu: "200m"
          memory: "256Mi"
        limits:
          cpu: "1000m"
          memory: "1Gi"
      replica_count: 3

    ports:
      - name: http
        port: 8080
      - name: metrics
        port: 9090

    expose_publicly: true     # [OPTIONAL - since muppy 14.88] triggers Public DNS Records generation. Default = false
    routes:
      - name: web
        pathPrefix: PathPrefix(`/`)
        port_name: "http"
        middlewares:
          ipwhitelist: true
          basicauth: true
      - name: metrics
        pathPrefix: PathPrefix(`/metrics`)
        port_name: "metrics"
        middlewares:
          ipwhitelist: true

    debug_mode_available: basic
    debug_config:
      args: ["sleep", "infinity"]

    dashboard_name: "Web Application"
```
Worker Parts
Worker parts are background services that don't expose network ports.
Basic Structure
```yaml
parts:
  {part_code}:
    type: worker

    deployment:               # Required - same as server deployment
      # Container configuration...

    # Optional sections (same as server, except no ports/routes)
    volumes: [...]
    envFrom: [...]
    securityContext: {...}
    debug_mode_available: ...
    dashboard_name: "..."
```
Worker Example
```yaml
parts:
  processor:
    type: worker

    deployment:
      docker_image: "myapp:latest"
      args: ["--worker", "--queues=high,normal"]
      env_vars:
        - name: WORKER_CONCURRENCY
          value: "4"
        - name: REDIS_URL
          value: "redis://redis-service:6379"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "1Gi"
      replica_count: 2

    volumes:
      - name: temp-storage
        mountPath: "/tmp/processing"
        emptyDir: {}

    debug_mode_available: basic
    debug_config:
      args: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "50m"
          memory: "128Mi"

    dashboard_name: "Background Processor"
```
CronJob Parts
CronJob parts run scheduled tasks using Kubernetes CronJobs.
Basic Structure
```yaml
parts:
  {part_code}:
    type: cronjob             # Required for cronjobs

    cronjob:                  # Required - cronjob configuration
      # Scheduling and execution settings...

    # Optional sections
    volumes: [...]
    envFrom: [...]
    securityContext: {...}
    debug_mode_available: basic   # Only 'basic' mode supported
    dashboard_name: "..."
```
CronJob Section
```yaml
cronjob:
  # Scheduling
  suspend: false              # Whether job is suspended (can be controlled by dashboard)
  schedule: "0 2 * * *"       # Cron expression (daily at 2 AM)

  # History management
  successfulJobsHistoryLimit: 3   # Keep last 3 successful jobs
  failedJobsHistoryLimit: 5       # Keep last 5 failed jobs

  # Execution policy
  concurrencyPolicy: Forbid   # Forbid | Allow | Replace
  restartPolicy: OnFailure    # OnFailure | Never

  # Container settings (same as deployment section)
  docker_image: "myapp:latest"    # Optional - can be set globally
  command: ["/app/cleanup"]       # Optional
  args: ["--mode=daily"]          # Optional
  env_vars:
    - name: CLEANUP_DAYS
      value: "30"
  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "300m"
      memory: "256Mi"
```
CronJob Example
```yaml
parts:
  cleanup:
    type: cronjob

    cronjob:
      schedule: "0 3 * * 0"   # Weekly on Sunday at 3 AM
      suspend: false
      successfulJobsHistoryLimit: 2
      failedJobsHistoryLimit: 3
      concurrencyPolicy: Forbid
      restartPolicy: OnFailure

      docker_image: "myapp:cleanup"
      command: ["/usr/local/bin/cleanup.sh"]
      args: ["--verbose", "--dry-run=false"]
      env_vars:
        - name: RETENTION_DAYS
          value: "90"
        - name: LOG_LEVEL
          value: "info"
      resources:
        requests:
          cpu: "50m"
          memory: "64Mi"
        limits:
          cpu: "200m"
          memory: "128Mi"

    volumes:
      - name: cleanup-logs
        mountPath: "/var/log/cleanup"
        emptyDir: {}

    debug_mode_available: basic
    debug_config:
      command: ["sleep"]
      args: ["infinity"]

    dashboard_name: "Weekly Cleanup Job"
```
HTTP Parts
HTTP parts create routing rules for existing Kubernetes services without deploying new containers.
Basic Structure
```yaml
parts:
  {part_code}:
    type: http

    host_prefix: "dashboard-"   # [OPTIONAL] Creates subdomain dashboard-{main-fqdn} for all routes
    expose_publicly: false      # [OPTIONAL - since muppy 14.88] triggers Public DNS Records generation. Default = false

    routes:                     # Required - routing configuration
      # Route definitions...

    # Optional
    dashboard_name: "..."       # Dashboard display name
```
HTTP Routes
```yaml
expose_publicly: false        # [OPTIONAL - since muppy 14.88] triggers Public DNS Records generation. Default = false
host_prefix: "dashboard-"     # [OPTIONAL] Part-level: creates subdomain dashboard-{main-fqdn}
routes:
  - name: external-service
    pathPrefix: PathPrefix(`/external`)
    service:                          # External service reference
      name: "external-api-service"    # K8s service name
      port: 8080                      # Service port
      namespace: "other-namespace"    # Optional, defaults to current namespace
    middlewares:
      ipwhitelist: true
      basicauth: true
```
HTTP Example
```yaml
parts:
  dbadmin:
    type: http

    expose_publicly: true     # [OPTIONAL - since muppy 14.88] triggers Public DNS Records generation. Default = false
    host_prefix: "admin-"     # [OPTIONAL] Creates subdomain admin-{main-fqdn}
    routes:
      - name: database-ui
        pathPrefix: PathPrefix(`/dbadmin`)
        service:
          name: "postgresql-admin"
          port: 8080
        middlewares:
          ipwhitelist: true
          basicauth: true
      - name: monitoring
        pathPrefix: PathPrefix(`/grafana`)
        service:
          name: "grafana"
          port: 3000
          namespace: "monitoring"
        middlewares:
          ipwhitelist: true

    dashboard_name: "Database Administration"
```
Routing to Internal Service Parts
HTTP parts can create additional routes to services from other parts within the same package. This is useful for exposing different ports with different security policies or subdomains.
Use Case: Expose an API on a public subdomain and its metrics endpoint on a separate, secured subdomain.
Pattern:
```yaml
parts:
  api:
    type: server
    deployment:
      docker_image: "myapi:latest"
    ports:
      - name: http
        port: 8000            # Main API endpoint
      - name: metrics
        port: 9090            # Prometheus metrics (same process)
    routes:
      - name: api
        pathPrefix: PathPrefix(`/`)
        port_name: http
        # Public access - no middleware restrictions

  metrics:
    type: http
    host_prefix: "metrics-"
    # NO ports section - HTTP parts only create routing rules
    routes:
      - name: prometheus
        pathPrefix: PathPrefix(`/`)
        service_name: "@{ obj.key }@-api-svc"   # Routes to api's service
        port_name: metrics    # Must match port name in api
        middlewares:
          ipwhitelist: true   # Secured access
```
Result:
- myapp.example.com → api:8000 (public API)
- metrics-myapp.example.com → api:9090 (secured metrics)
How It Works:
- The `api` part deploys a single-process container that listens on multiple ports
  - Port 8000: REST API
  - Port 9090: `/metrics` endpoint (typical for apps with Prometheus instrumentation)
- The `metrics` HTTP part creates an additional IngressRoute
  - Routes to the same pod as `api` (via `service_name`)
  - Uses a different subdomain (via `host_prefix`)
  - Applies different security (via `middlewares`)
Key Points:
- HTTP parts should NOT define `ports`
  - They only create IngressRoutes, not Services or Deployments
  - If `ports` is defined, an empty Service is created with no backend
- Use template variables for service names
  - Pattern: `service_name: "@{ obj.key }@-{part-code}-svc"`
  - Works because template variables are allowed in quoted strings
  - The package key is injected by Muppy at render time
- Port name must exist in target service
  - `port_name` must match a port name from the target part's `ports` list
  - Uses Kubernetes service port names, not port numbers
Additional Use Cases:
- Admin panels with stricter authentication
- Health check endpoints on internal-only routes
- Debug endpoints accessible only from specific IPs
Note: This pattern also works incidentally with multi-process containers (e.g., a dev container running both an app and code-server), though single-process containers following the "one process per container" principle are the recommended practice.
See also:
- Multi-Service Deployments
- Advanced Routing Patterns
Common Configuration Sections
Environment Variables
Direct environment variables:
```yaml
deployment:
  env_vars:
    - name: NODE_ENV
      value: "production"
    - name: PORT
      value: "8080"
    - name: FEATURE_FLAG_X
      value: "true"
```
From ConfigMaps and Secrets:
```yaml
envFrom:
  - configMapRef:
      name: app-config        # References vault object
  - secretRef:
      name: app-secrets       # References vault object
```
Volumes
Persistent volume:
```yaml
volumes:
  - name: app-data            # Must match volumesDefinition name
    mountPath: "/data"
    readOnly: false           # Optional, default false
```
EmptyDir volume:
```yaml
volumes:
  - name: temp-storage
    mountPath: "/tmp"
    emptyDir: {}              # Basic emptyDir
  - name: memory-storage
    mountPath: "/memory"
    emptyDir:
      medium: "Memory"        # In-memory storage
```
ConfigMap as file:
```yaml
volumes:
  - name: app-config          # ConfigMap vault object
    mountPath: "/etc/app/"
    subPath: "app.conf"       # Mount as single file
```
Secret as files:
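A minimal sketch mirroring the ConfigMap example above; the `app-secrets` vault object name and the mount path are illustrative assumptions:

```yaml
volumes:
  - name: app-secrets         # Secret vault object (illustrative name)
    mountPath: "/etc/app/secrets"
    readOnly: true            # Secrets are typically mounted read-only
```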
Security Context
```yaml
securityContext:
  fsGroup: 1001               # File system group
  runAsUser: 1001             # Run as specific user
  runAsGroup: 1001            # Run as specific group
  runAsNonRoot: true          # Require non-root user
```
Debug Configuration
Basic debug mode (shell access):
```yaml
debug_mode_available: basic
debug_config:
  command: ["sleep"]          # Override container command
  args: ["infinity"]          # Keep container running
  env_vars:                   # Debug-specific environment
    - name: DEBUG
      value: "true"
  resources:                  # Debug-specific resources
    requests:
      cpu: "50m"
      memory: "64Mi"
```
Advanced debug mode (VSCode Server):
```yaml
debug_mode_available: codr
debug_config:
  debug_fqdn_prefix: "code-"  # Creates code-{main-fqdn} routing
  args: [
    "/usr/bin/code-server",
    "--bind-addr=0.0.0.0:8765",
    "--auth=password"
  ]
  env_vars:
    - name: PASSWORD
      value: "debug123"       # Better to set via dashboard
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "2000m"
      memory: "2Gi"
```
Kubernetes Probes
Readiness probe:
```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 3
  successThreshold: 1
  failureThreshold: 3
```
Liveness probe:
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
```
Startup probe:
```yaml
startupProbe:
  httpGet:
    path: /startup
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 30        # Allow 150s for startup (30 * 5s)
```
Disabled probe:
Global Configuration Sections
Volume Definitions
Define shared volumes used by parts:
```yaml
volumesDefinition:
  - name: app-data
    description: "Application data storage"
    claim:
      storageClassName: "fast-ssd"    # Optional, uses default if not set
      accessModes:
        - ReadWriteOnce               # RWO | ROX | RWX | ReadWriteOncePod
      size: "10Gi"
  - name: shared-cache
    description: "Shared cache volume"
    claim:
      accessModes:
        - ReadWriteMany               # Multi-node access
      size: "5Gi"
```
Vault Objects
Declare ConfigMaps and Secrets managed by Muppy Vault:
```yaml
vault_objects:
  - name: app-config
    scope: qualifier          # qualifier | instance
    type: configmap           # configmap | secret
  - name: app-secrets
    scope: qualifier
    type: secret
  - name: instance-config
    scope: instance           # Unique per deployment instance
    type: configmap
```
Naming Convention:
- Qualifier scope: `{app-code}-{qualifier}-{type}-{name}`
- Instance scope: `{app-code}-{qualifier}-{instance}-{type}-{name}`
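For example, with a hypothetical app code `shop`, qualifier `prod`, and instance `eu1` (illustrative values only), the declarations above would resolve to names like `shop-prod-configmap-app-config` (qualifier scope) and `shop-prod-eu1-configmap-instance-config` (instance scope).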
Complete Working Examples
Full Web Application with Worker
```yaml
mpy_meta_config:
  # Vault object declarations
  vault_objects:
    - name: app-config
      type: configmap
    - name: app-secrets
      type: secret

  # Volume definitions
  volumesDefinition:
    - name: app-uploads
      description: "User uploaded files"
      claim:
        size: "20Gi"
        accessModes: [ReadWriteOnce]

  # Application parts
  parts:
    web:
      type: server
      deployment:
        docker_image: "mycompany/webapp:v2.1.0"
        args: ["--mode=web"]
        env_vars:
          - name: PORT
            value: "8080"
          - name: WORKER_QUEUE
            value: "redis://redis:6379/0"
        resources:
          requests:
            cpu: "200m"
            memory: "512Mi"
          limits:
            cpu: "1000m"
            memory: "1Gi"
        replica_count: 3

      ports:
        - name: http
          port: 8080

      routes:
        - name: main
          pathPrefix: PathPrefix(`/`)
          port_name: http
          middlewares:
            ipwhitelist: true
            basicauth: true

      volumes:
        - name: app-uploads
          mountPath: "/app/uploads"

      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets

      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5

      debug_mode_available: codr
      debug_config:
        debug_fqdn_prefix: "dev-"
        args: [
          "/usr/bin/code-server",
          "--bind-addr=0.0.0.0:8765",
          "--auth=password"
        ]
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"

      dashboard_name: "Web Frontend"

    worker:
      type: worker
      deployment:
        docker_image: "mycompany/webapp:v2.1.0"
        args: ["--mode=worker", "--concurrency=4"]
        env_vars:
          - name: WORKER_QUEUE
            value: "redis://redis:6379/0"
        resources:
          requests:
            cpu: "100m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
        replica_count: 2

      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets

      debug_mode_available: basic
      debug_config:
        args: ["sleep", "infinity"]

      dashboard_name: "Background Workers"

    cleanup:
      type: cronjob
      cronjob:
        schedule: "0 2 * * *"   # Daily at 2 AM
        suspend: false
        docker_image: "mycompany/webapp:v2.1.0"
        args: ["--mode=cleanup", "--days=30"]
        resources:
          requests:
            cpu: "50m"
            memory: "128Mi"
          limits:
            cpu: "200m"
            memory: "256Mi"

      envFrom:
        - secretRef:
            name: app-secrets

      dashboard_name: "Daily Cleanup"
```
Quick Reference
Part Type Decision Matrix
- Need HTTP endpoints? → `server`
- Background processing only? → `worker`
- Scheduled execution? → `cronjob`
- Route existing service? → `http`
Required Sections by Type
- `server`: `deployment` + `ports` (if routing) + `routes` (if external access)
- `worker`: `deployment` only
- `cronjob`: `cronjob` only
- `http`: `routes` only
Common Gotchas
- Part codes must be ≤ 8 characters
- Deployment section needs at least one property
- `mpy_meta_config` cannot use Helm templating
- Volumes persist after chart deletion (use `helm.sh/resource-policy: keep`)
- Debug mode requires dashboard configuration
- CronJobs only support `basic` debug mode
Resource Units
- CPU: `"100m"` (millicores), `"0.1"` (cores), `"2"` (cores)
- Memory: `"128Mi"`, `"1Gi"`, `"512M"`, `"2G"`
- Storage: `"10Gi"`, `"100Mi"`, `"1Ti"`
Access Modes
- `ReadWriteOnce` (RWO): Single-node read-write
- `ReadOnlyMany` (ROX): Multi-node read-only
- `ReadWriteMany` (RWX): Multi-node read-write
- `ReadWriteOncePod`: Single-pod read-write (K8s 1.22+)
Troubleshooting
host_prefix Issues
Both host_prefix and hostPrefix Defined
If both naming conventions are present in your configuration, host_prefix (snake_case) takes priority:
```yaml
parts:
  mypart:
    host_prefix: "new-"       # ✅ This will be used
    hostPrefix: "old-"        # ❌ This will be ignored
```
Resolution: Use only host_prefix (snake_case) - it's the current standard for m2p.
Prefix Not Applied to Routes
If your routes aren't using the expected subdomain:
Check:
1. ✅ You have defined host_prefix at the part level (not route level)
2. ✅ The prefix ends with a hyphen (e.g., "api-" not "api")
3. ✅ You clicked "Sync 'Parts Config.'" in the Muppy GUI after changing the configuration
4. ✅ The part has routes defined
5. ✅ expose_publicly: true is set if you need public DNS records
Example:
```yaml
parts:
  api:
    host_prefix: "api-"       # ✅ Correct: at part level, with trailing hyphen
    routes:
      - name: main
        pathPrefix: PathPrefix(`/`)
        port_name: http
```
Invalid Characters in Prefix
DNS names have character restrictions. Ensure your prefix:
- Contains only lowercase letters, numbers, and hyphens
- Starts with a letter or number
- Ends with a hyphen (connects to the base FQDN)
- Keeps the total FQDN length (prefix + base) under 253 characters
```yaml
# ✅ Valid
host_prefix: "api-"
host_prefix: "admin-panel-"
host_prefix: "v2-"
host_prefix: "app1-"

# ❌ Invalid
host_prefix: "API-"           # No uppercase letters
host_prefix: "_private-"      # No underscores
host_prefix: "admin"          # Missing trailing hyphen
host_prefix: "-api-"          # Cannot start with hyphen
```
This reference covers all configuration options available in mpy-metapackage. For advanced features like user manifests and complex integrations, see Advanced Features.