Advanced Features
This guide covers advanced features and integration patterns for mpy-metapackage power users.
User Manifests
Available from version 2.3.17
User manifests allow you to inject custom Kubernetes resources directly through mpy-metapackage, bypassing Helm templating limitations and enabling rapid prototyping of complex configurations.
Basic Usage
```yaml
mpy_meta_config:
  user_manifests:
    - description: "Custom TCP ingress for SSH access"
      definition: |
        apiVersion: traefik.containo.us/v1alpha1
        kind: IngressRouteTCP
        metadata:
          name: "{{ $.Values.package_release.key }}-ssh-route"
          namespace: "{{ $.Release.Namespace }}"
          labels:
            app.kubernetes.io/component: "user-ssh-route"
        spec:
          entryPoints:
            - websecure
          routes:
            - match: HostSNI(`ssh-{{ $.Values.package_release.main_fqdn }}`)
              services:
                - name: "{{ $.Values.package_release.key }}-ssh-svc"
                  port: 2222
          tls:
            passthrough: true
```
Automatic Labels and Annotations
Every user manifest receives standard Kubernetes labels:
```yaml
metadata:
  labels:
    app.kubernetes.io/name: "{{ $.Values.package_release.app_name }}"
    app.kubernetes.io/instance: "{{ $.Release.Name }}"
    app.kubernetes.io/version: "{{ $.Values.package_release.app_version }}"
    app.kubernetes.io/managed-by: "Helm"
    helm.sh/chart: "mpy-metapackage-{{ $.Chart.Version }}"
    muppy.io/package-release: "{{ $.Values.package_release.key }}"
  annotations:
    m2p_user_manifest: "true"
    m2p_injection_timestamp: "{{ now | date \"2006-01-02T15:04:05Z07:00\" }}"
    m2p_metapackage_version: "{{ $.Chart.Version }}"
```
Namespace Handling
If no namespace is specified in the manifest, the chart applies a fallback namespace. Do not rely on this behavior: always specify the namespace explicitly.
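A minimal `metadata` fragment following this advice, using the release-namespace pattern seen throughout this guide (the resource name `-my-resource` is just a placeholder):

```yaml
metadata:
  name: "{{ $.Values.package_release.key }}-my-resource"
  namespace: "{{ $.Release.Namespace }}"
```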
Advanced Examples
Custom Service Monitor for Prometheus:
```yaml
user_manifests:
  - description: "Prometheus ServiceMonitor for custom metrics"
    definition: |
      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: "{{ $.Values.package_release.key }}-metrics"
        namespace: "{{ $.Release.Namespace }}"
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/instance: "{{ $.Release.Name }}"
        endpoints:
          - port: metrics
            path: /metrics
            interval: 30s
```
Network Policy:
```yaml
user_manifests:
  - description: "Restrict network access to database"
    definition: |
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: "{{ $.Values.package_release.key }}-netpol"
        namespace: "{{ $.Release.Namespace }}"
      spec:
        podSelector:
          matchLabels:
            app.kubernetes.io/instance: "{{ $.Release.Name }}"
        policyTypes:
          - Ingress
          - Egress
        ingress:
          - from:
              - podSelector:
                  matchLabels:
                    app.kubernetes.io/instance: "{{ $.Release.Name }}"
        egress:
          - to:
              - namespaceSelector:
                  matchLabels:
                    name: database-namespace
```
Best Practices
- Always validate manifest syntax before deployment, for example with `helm template` piped into `kubectl apply --dry-run=client`.
- Use meaningful descriptions for documentation and debugging.
- Specify namespaces explicitly to avoid fallback behavior.
- Test with `kubectl get` to verify that the injected resources were actually deployed.
Advanced Volume Management
Volume Types and Use Cases
Persistent Volumes - Data that must survive pod restarts:
```yaml
volumesDefinition:
  - name: database-data
    description: "PostgreSQL data directory"
    claim:
      storageClassName: "fast-ssd"
      accessModes: [ReadWriteOnce]
      size: "100Gi"
```
Shared Volumes - Data shared between multiple pods:
```yaml
volumesDefinition:
  - name: shared-assets
    description: "Static assets shared by web pods"
    claim:
      storageClassName: "shared-storage"
      accessModes: [ReadWriteMany]   # Multi-pod access
      size: "10Gi"
```
Memory Volumes - High-performance temporary storage:
```yaml
parts:
  webapp:
    volumes:
      - name: memory-cache
        mountPath: "/tmp/cache"
        emptyDir:
          medium: "Memory"   # RAM-based storage
          sizeLimit: "1Gi"   # Optional size limit
```
Advanced Storage Classes
Performance tiers:
```yaml
volumesDefinition:
  # High-performance SSD storage
  - name: high-perf-data
    claim:
      storageClassName: "premium-ssd"
      size: "50Gi"
  # Cost-effective standard storage
  - name: backup-data
    claim:
      storageClassName: "standard-hdd"
      size: "500Gi"
```
Regional storage:
```yaml
# Zone-specific storage
volumesDefinition:
  - name: zone-local-data
    claim:
      storageClassName: "zone-ssd"
      size: "20Gi"
```
Volume Mounting Patterns
Configuration files from ConfigMaps:
```yaml
parts:
  webapp:
    volumes:
      # Mount entire ConfigMap as a directory
      - name: app-config
        mountPath: "/etc/app/config"
      # Mount a single file from a ConfigMap
      - name: nginx-config
        mountPath: "/etc/nginx/nginx.conf"
        subPath: "nginx.conf"
```
Secrets as individual files:
```yaml
volumes:
  # Each key in the secret becomes a separate file
  - name: ssl-certificates
    mountPath: "/etc/ssl/certs"
    readOnly: true
# A secret with keys tls.crt, tls.key, ca.crt creates:
#   /etc/ssl/certs/tls.crt, /etc/ssl/certs/tls.key, /etc/ssl/certs/ca.crt
```
Multi-container volume sharing:
```yaml
parts:
  webapp:
    volumes:
      - name: shared-uploads
        mountPath: "/app/uploads"
  processor:
    volumes:
      - name: shared-uploads   # Same volume, different path
        mountPath: "/data/incoming"
```
Vault Object Integration
Vault Object Scopes
Qualifier Scope - Shared across all instances of an application:
```yaml
vault_objects:
  - name: app-config
    scope: qualifier   # Default scope
    type: configmap
# Generated name: myapp-dev-configmap-app-config
```
Instance Scope - Unique per deployment instance:
```yaml
vault_objects:
  - name: instance-secrets
    scope: instance
    type: secret
# Generated name: myapp-dev-instance1-secret-instance-secrets
```
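The naming pattern implied by the generated-name comments above can be sketched as a small helper. This function and its exact scheme are inferred from the two examples, not part of the mpy-metapackage API:

```python
def vault_object_name(app, qualifier, obj_type, name,
                      scope="qualifier", instance=None):
    """Illustrative only: reproduce the generated-name pattern
    shown in the qualifier/instance scope examples."""
    if scope == "instance":
        if instance is None:
            raise ValueError("instance scope requires an instance key")
        # Instance scope inserts the instance key after the qualifier
        return f"{app}-{qualifier}-{instance}-{obj_type}-{name}"
    # Qualifier scope is shared across all instances
    return f"{app}-{qualifier}-{obj_type}-{name}"

# Qualifier scope, shared across instances
print(vault_object_name("myapp", "dev", "configmap", "app-config"))
# → myapp-dev-configmap-app-config

# Instance scope, unique per deployment instance
print(vault_object_name("myapp", "dev", "secret", "instance-secrets",
                        scope="instance", instance="instance1"))
# → myapp-dev-instance1-secret-instance-secrets
```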
ConfigMap Types and Usage
ENV File ConfigMaps:
In Muppy Vault, create a ConfigMap with content:

```ini
DATABASE_URL=postgresql://localhost:5432/myapp
REDIS_URL=redis://localhost:6379/0
LOG_LEVEL=info
```

Then reference it in the parts configuration:

```yaml
envFrom:
  - configMapRef:
      name: app-env-config
```
Config File ConfigMaps:
In Muppy Vault, create a ConfigMap with content:

```nginx
server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://backend;
    }
}
```

Then mount it in the parts configuration:

```yaml
volumes:
  - name: nginx-config
    mountPath: "/etc/nginx/conf.d/default.conf"
    subPath: "default.conf"
```
Secret Management Patterns
Database credentials:
```yaml
vault_objects:
  - name: database-creds
    type: secret
    scope: qualifier
```

Usage in parts:

```yaml
envFrom:
  - secretRef:
      name: database-creds
```
TLS certificates:
```yaml
vault_objects:
  - name: tls-certificates
    type: secret
    scope: instance
```

Usage in parts:

```yaml
volumes:
  - name: tls-certificates
    mountPath: "/etc/ssl/private"
    readOnly: true
```
Dynamic Configuration with Rendering
Vault objects support Jinja templating with the same context as Helm values:
Config file with dynamic values:
```yaml
# In ConfigMap vault object:
database:
  host: "{{ obj.database_host }}"
  port: {{ obj.database_port }}
  name: "{{ obj.database_name }}"
app:
  base_url: "https://{{ obj.fqdn_hostname }}"
  debug: {{ obj.debug_mode | lower }}
```
Environment file with computed values:
```ini
# In ENV file vault object:
DATABASE_URL=postgresql://{{ obj.db_user }}:{{ obj.db_password }}@{{ obj.db_host }}:{{ obj.db_port }}/{{ obj.db_name }}
APP_URL=https://{{ obj.fqdn_hostname }}
WORKER_CONCURRENCY={{ obj.get_dashboard('worker').replica_count * 2 }}
```
Advanced Debugging
Debug Mode Types
Basic Debug Mode - Shell access:
```yaml
debug_mode_available: basic
debug_config:
  command: ["/bin/bash"]
  args: ["-c", "sleep infinity"]
  env_vars:
    - name: DEBUG
      value: "true"
    - name: PS1
      value: "debug:\\w\\$ "
  resources:
    requests:
      cpu: "50m"
      memory: "128Mi"
```
Advanced Debug Mode - VSCode Server:
```yaml
debug_mode_available: codr
debug_config:
  debug_fqdn_prefix: "code-"
  command: ["/usr/bin/code-server"]
  args:
    - "--bind-addr=0.0.0.0:8765"
    - "--auth=password"
    - "--disable-telemetry"
    - "--disable-update-check"
  env_vars:
    - name: PASSWORD
      value: "debug-session-{{ now | date \"20060102\" }}"
  resources:
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "2000m"
      memory: "4Gi"
```
Debug Session Management
Temporary debug containers:
```yaml
debug_config:
  # Use a different image for debugging
  docker_image: "myapp:debug"
  command: ["/debug/start.sh"]
  args: ["--wait-for-debugger"]
  # Mount additional debug tools
  volumes:
    - name: debug-tools
      mountPath: "/debug"
      emptyDir: {}
  # Different resource allocation for debugging
  resources:
    requests:
      cpu: "200m"
      memory: "512Mi"
    limits:
      cpu: "1000m"
      memory: "2Gi"
```
Debug Route Configuration
Debug routes are automatically created for codr mode:
```yaml
# Automatically generated route for debug_mode_available: codr
routes:
  - name: codr
    pathPrefix: PathPrefix(`/`)
    port: codr          # Special port for debug access
    debug_mode: true    # Only active in debug mode
    middlewares:
      ipwhitelist: true # IP filtering is recommended for security
```
Advanced Networking
Complex Routing Patterns
Path-based microservices routing:
```yaml
parts:
  api-v1:
    routes:
      - name: api-v1
        pathPrefix: PathPrefix(`/api/v1`)
        port_name: http
        middlewares:
          ipwhitelist: true
  api-v2:
    routes:
      - name: api-v2
        pathPrefix: PathPrefix(`/api/v2`)
        port_name: http
        middlewares:
          ipwhitelist: true
  frontend:
    routes:
      - name: frontend
        pathPrefix: PathPrefix(`/`)
        port_name: http
        # No IP restrictions for the public frontend
```
Host-based routing with prefixes:
```yaml
parts:
  admin-api:
    host_prefix: "admin-"   # admin-myapp.example.com
    routes:
      - name: admin
        pathPrefix: PathPrefix(`/`)
        port_name: http
        middlewares:
          basicauth: true
          ipwhitelist: true
  public-api:
    host_prefix: "api-"     # api-myapp.example.com
    routes:
      - name: public
        pathPrefix: PathPrefix(`/`)
        port_name: http
```
Session Stickiness
Cookie-based stickiness:
```yaml
routes:
  - name: webapp
    pathPrefix: PathPrefix(`/`)
    port_name: http
    sticky:
      cookie:
        name: "SESSIONID"
        httpOnly: true
        secure: true
        sameSite: "strict"
        maxAge: 3600   # 1 hour
        path: "/"
```
Advanced stickiness configuration:
```yaml
sticky:
  cookie:
    name: "APP_SESSION"
    httpOnly: true
    secure: true
    sameSite: "none"   # For cross-origin requests
    maxAge: 86400      # 24 hours
    path: "/app"       # Restrict the cookie to a specific path
```
Middleware Combinations
Security-focused routing:
```yaml
routes:
  - name: admin
    pathPrefix: PathPrefix(`/admin`)
    port_name: http
    middlewares:
      ipwhitelist: true   # IP restrictions
      basicauth: true     # Username/password
      toctoc: true        # Advanced authentication
```
Performance-focused routing:
```yaml
routes:
  - name: api
    pathPrefix: PathPrefix(`/api`)
    port_name: http
    sticky:              # Session stickiness for performance
      cookie:
        name: "API_SESSION"
        maxAge: 300      # 5 minutes
```
Integration Patterns
Multi-Environment Configurations
Environment-specific parts:
```yaml
# values-dev.yaml
mpy_meta_config:
  parts:
    webapp:
      deployment:
        replica_count: 1
        resources:
          requests:
            cpu: "100m"
            memory: "256Mi"
```

```yaml
# values-prod.yaml
mpy_meta_config:
  parts:
    webapp:
      deployment:
        replica_count: 5
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
```
External Service Integration
Database proxy pattern:
```yaml
parts:
  dbproxy:
    type: http
    routes:
      - name: database-admin
        pathPrefix: PathPrefix(`/dbadmin`)
        service:
          name: "postgresql-admin"
          port: 8080
          namespace: "database"
        middlewares:
          ipwhitelist: true
          basicauth: true
```
Message queue integration:
```yaml
parts:
  worker:
    type: worker
    deployment:
      env_vars:
        - name: RABBITMQ_URL
          value: "amqp://rabbitmq.messaging:5672"
        - name: QUEUE_NAME
          value: "processing-tasks"
```
Monitoring Integration
Custom metrics exposure:
```yaml
parts:
  webapp:
    ports:
      - name: http
        port: 8080
      - name: metrics
        port: 9090
    routes:
      - name: app
        pathPrefix: PathPrefix(`/`)
        port_name: http
      - name: metrics
        pathPrefix: PathPrefix(`/metrics`)
        port_name: metrics
        middlewares:
          ipwhitelist: true   # Restrict metrics access

# Use with user_manifests for a ServiceMonitor
user_manifests:
  - description: "Prometheus ServiceMonitor"
    definition: |
      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: "{{ $.Values.package_release.key }}-metrics"
        namespace: "{{ $.Release.Namespace }}"
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/instance: "{{ $.Release.Name }}"
        endpoints:
          - port: metrics
            path: /metrics
```
Troubleshooting Advanced Scenarios
Template Debugging
Complex template validation:
```bash
# Render templates with debug info
helm template my-app . \
  --debug \
  --set-file mpy_meta_config=mpy_meta_config.yaml \
  -f values.yaml > debug-output.yaml

# Validate Kubernetes resources
kubectl apply --dry-run=client -f debug-output.yaml
```
User manifest validation:
```bash
# Extract only user manifests
helm template my-app . -f values.yaml | \
  grep -A 50 "m2p_user_manifest.*true" > user-manifests.yaml
kubectl apply --dry-run=client -f user-manifests.yaml
```
Volume Issues
Check volume status:
```bash
# List PVCs and their status
kubectl get pvc -n namespace

# Check volume events
kubectl describe pvc volume-name -n namespace

# Check storage class availability
kubectl get storageclass
```
Volume permission issues:
```yaml
# Fix with a security context
securityContext:
  fsGroup: 1001      # Match volume ownership
  runAsUser: 1001
  runAsGroup: 1001
```
Vault Object Issues
Debug vault object resolution:
```bash
# Check generated ConfigMaps/Secrets
kubectl get configmap -n namespace | grep package-key
kubectl get secret -n namespace | grep package-key

# Inspect vault object content
kubectl describe configmap vault-object-name -n namespace
```
Common vault object problems:
1. Wrong scope: check qualifier vs instance naming.
2. Missing objects: ensure the vault objects are created in Muppy.
3. Encoding issues: verify base64 encoding for secrets.
Networking Issues
Debug Traefik routing:
```bash
# Check IngressRoute status
kubectl get ingressroute -n namespace

# Check middleware status
kubectl get middleware -n namespace

# Traefik dashboard for route inspection
kubectl port-forward -n traefik svc/traefik 8080:8080
# Then visit http://localhost:8080
```
Common networking problems:
1. "Middleware not found": normal log noise; check the Traefik dashboard.
2. Wrong service names: verify service name generation.
3. Port mismatches: ensure port names match between parts and routes.
Performance Optimization
Resource Optimization
Right-sizing resources:
```yaml
# CPU-intensive workload
deployment:
  resources:
    requests:
      cpu: "1000m"     # Guarantee a full CPU
      memory: "512Mi"
    limits:
      cpu: "2000m"     # Allow burst to 2 CPUs
      memory: "1Gi"
```

```yaml
# Memory-intensive workload
deployment:
  resources:
    requests:
      cpu: "100m"
      memory: "2Gi"    # Guarantee 2 GiB RAM
    limits:
      cpu: "500m"
      memory: "4Gi"    # Allow up to 4 GiB
```
Horizontal Pod Autoscaler integration:
```yaml
user_manifests:
  - description: "HPA for webapp"
    definition: |
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: "{{ $.Values.package_release.key }}-webapp-hpa"
        namespace: "{{ $.Release.Namespace }}"
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: "{{ $.Values.package_release.key }}-webapp-deployment"
        minReplicas: 2
        maxReplicas: 10
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70
```
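As a mental model for how an HPA like this reacts, the standard Kubernetes scaling formula is `ceil(currentReplicas * currentMetricValue / targetMetricValue)`, clamped to the configured replica bounds. A small sketch (the function is illustrative; the real controller also applies stabilization windows and tolerances):

```python
import math

def desired_replicas(current_replicas, current_utilization,
                     target_utilization=70, min_replicas=2, max_replicas=10):
    """Core HPA formula: ceil(current * currentMetric / targetMetric),
    clamped to [minReplicas, maxReplicas]. Defaults match the HPA above."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# At 140% average CPU with 4 replicas, the HPA doubles the deployment
print(desired_replicas(4, 140))   # → 8

# Low load never drops the deployment below minReplicas
print(desired_replicas(3, 10))    # → 2
```

Note the real controller waits out a stabilization window before scaling down, so the clamp to `minReplicas` is not instantaneous in practice.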
Storage Optimization
Storage class selection:
```yaml
volumesDefinition:
  # Hot data - fast access required
  - name: active-data
    claim:
      storageClassName: "premium-ssd"
      size: "10Gi"
  # Warm data - occasional access
  - name: archive-data
    claim:
      storageClassName: "standard-ssd"
      size: "100Gi"
  # Cold data - infrequent access
  - name: backup-data
    claim:
      storageClassName: "standard-hdd"
      size: "1Ti"
```
This covers the advanced features and integration patterns available in mpy-metapackage. These features enable sophisticated deployment architectures while maintaining the simplicity of configuration-driven deployment.