Muppy Platform Integration Guide
This guide covers how to use mpy-metapackage (m2p) within the Muppy platform, including GUI workflows, template variables, and platform-specific features.
Overview
The Muppy platform provides a web-based interface for managing m2p deployments. Each deployment becomes an "Installed Package" in Muppy with its own FQDN, dashboard controls, and networking configuration.
Key Concepts
Installed Packages
- Each m2p deployment creates one "Installed Package" in Muppy
- Receives a single base FQDN (e.g., my-app.lair.ovh)
- Contains multiple "parts" (services) that can share or extend this FQDN
- Managed through the Muppy web interface
Package States
- Draft: Configuration being edited, not yet deployed
- Deploying: Actively being deployed to Kubernetes
- Running: Successfully deployed and operational
- Failed: Deployment encountered errors
Working with the Muppy GUI
Package Editing Workflow
When editing an Installed Package in Muppy:
- Click Edit on the package to enter edit mode
- Modify Configuration in the Parts Config tab
- Click "Sync 'Parts Config.'" - This critical step triggers Muppy to:
- Parse the YAML configuration to understand the parts structure
- Generate dashboard entries for each part with their resource limits
- Introspect routes defined in parts to create network mappings
- Prepare the configuration for deployment
- Review Changes in Dashboard and Network tabs
- Save to apply changes
GUI Tabs Explained
Parts Config Tab
- Contains the mpy_meta_config YAML configuration
- Where you define all parts, volumes, and vault objects
- Must sync after changes for other tabs to update
Dashboard Tab
- Shows controls for each part after sync
- Displays resource allocations (CPU/memory)
- Provides debug mode controls (if configured)
- Empty before Parts Config sync
Network Tab
- Displays routing configuration after sync
- Shows generated FQDNs for each part
- Lists path prefixes and middleware settings
- Empty before Parts Config sync
Vault Objects Tab
- Shows vault-managed ConfigMaps and Secrets
- Populated based on the vault_objects configuration
- Not automatically changed by Parts Config sync
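The vault_objects schema is not shown in this guide; as a purely illustrative sketch (every field name below is an assumption, not the actual schema), a vault-managed Secret entry might sit alongside the parts like this:

```yaml
mpy_meta_config:
  vault_objects:
    # Hypothetical entry - field names are illustrative only
    - name: db-credentials
      kind: secret
  parts:
    api:
      type: server
```

Consult your m2p schema reference for the real field names before using this pattern.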
Template Variables in Parts Config
Template variables allow dynamic values in your configuration, but with specific constraints:
✅ Allowed: Inside Quoted Strings
deployment:
env_vars:
FRONTEND_URL: "https://@{ obj.fqdn_hostname }@"
API_BASE: "https://@{ obj.fqdn_hostname }@/api"
❌ Not Allowed: In Keys or Unquoted Values
# Wrong - template in key
@{ obj.name }@:
type: server
# Wrong - unquoted value
deployment:
replicas: @{ obj.replicas }@
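A corrected version of the second example keeps the key literal and moves the template inside a quoted string (this assumes the target field accepts a templated string value, which may not hold for every field):

```yaml
# Correct - literal key, template only inside a quoted string
deployment:
  env_vars:
    REPLICA_HINT: "@{ obj.replicas }@"  # obj.replicas as in the example above
```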
Available Template Variables
Common variables available in Parts Config:
- @{ obj.fqdn_hostname }@ - The package's FQDN
- @{ obj.k8s_cluster_id.provider }@ - Kubernetes cluster provider
- Additional variables injected by Muppy based on package configuration
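For instance, both documented variables can appear together inside quoted env var values (the env var names here are illustrative):

```yaml
deployment:
  env_vars:
    PUBLIC_URL: "https://@{ obj.fqdn_hostname }@"
    CLUSTER_PROVIDER: "@{ obj.k8s_cluster_id.provider }@"
```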
Multi-Service Deployments with host_prefix
When deploying multiple services that need HTTP(S) exposure, m2p uses the host_prefix attribute:
How It Works
- Each Installed Package receives one base FQDN (e.g., my-app.lair.ovh)
- This FQDN is injected as package_release.main_fqdn
- Parts can define host_prefix to create unique subdomains by prepending a prefix
Note: The snake_case host_prefix is the current standard for m2p. The camelCase hostPrefix is still supported for backward compatibility, but host_prefix takes priority if both are specified.
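The precedence rule can be illustrated with a part that (incorrectly) sets both attributes; this is a sketch assuming a base FQDN of my-app.lair.ovh:

```yaml
admin:
  type: server
  host_prefix: "admin-"   # wins: part is served at admin-my-app.lair.ovh
  hostPrefix: "legacy-"   # ignored because host_prefix is also set
```

In practice, set only host_prefix and remove any leftover hostPrefix attributes during migration.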
Example Configuration
mpy_meta_config:
parts:
# Main application - uses base FQDN
api:
type: server
# No host_prefix - accessible at: my-app.lair.ovh
routes:
- name: api
pathPrefix: PathPrefix(`/`)
port_name: http
# Admin panel - uses subdomain
admin:
type: server
host_prefix: "admin-" # Accessible at: admin-my-app.lair.ovh
routes:
- name: admin
pathPrefix: PathPrefix(`/`)
port_name: http
# Monitoring dashboard - uses subdomain
grafana:
type: server
host_prefix: "metrics-" # Accessible at: metrics-my-app.lair.ovh
routes:
- name: grafana
pathPrefix: PathPrefix(`/`)
port_name: http
Common Workflows
Creating a New Package
- Navigate to Muppy dashboard
- Click "New Package"
- Select mpy-metapackage as the chart
- Configure basic settings (name, namespace, FQDN)
- Add Parts Config in the configuration tab
- Click "Sync 'Parts Config.'" to validate
- Deploy the package
Updating an Existing Package
- Find package in Muppy dashboard
- Click "Edit"
- Navigate to "Parts Config" tab
- Make changes to mpy_meta_config
- Important: Click "Sync 'Parts Config.'"
- Verify changes in Dashboard and Network tabs
- Click "Save" to apply
Debugging Failed Deployments
- Check package status and error messages
- Review Parts Config for syntax errors
- Verify all template variables are properly quoted
- Check Dashboard tab for resource constraints
- Use debug mode if available (see the Dashboard tab controls)
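How debug mode is wired depends on how the part was configured; one common, hedged pattern is toggling a debug flag through an environment variable (the variable name below is an assumption about your application, not an m2p convention):

```yaml
api:
  type: server
  deployment:
    env_vars:
      APP_DEBUG: "true"  # hypothetical flag read by your application
```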
Best Practices
Configuration Management
- Always sync Parts Config after changes
- Use template variables for environment-specific values
- Keep part codes ≤ 8 characters
- Test configurations in development first
Multi-Service Patterns
- Use clear, consistent host_prefix naming
- Document which services use which subdomains
- Consider path-based routing vs subdomain routing
- Plan for service discovery between parts
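Because each part's subdomain is derived from the base FQDN, a sibling service can be referenced by composing its known host_prefix with the injected FQDN variable. A sketch reusing the grafana part from the earlier example:

```yaml
api:
  type: server
  deployment:
    env_vars:
      # "metrics-" matches the grafana part's host_prefix in the example above
      GRAFANA_URL: "https://metrics-@{ obj.fqdn_hostname }@"
```

Keeping these prefixes documented in one place prevents broken cross-service URLs when a host_prefix is renamed.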
Security Considerations
- Use vault objects for sensitive data
- Configure appropriate middleware (IP filtering, auth)
- Avoid hardcoding secrets in Parts Config
- Use template variables for environment-specific URLs
Troubleshooting
Parts Config Won't Sync
- Check YAML syntax validity
- Ensure part codes are ≤ 8 characters
- Verify deployment sections have at least one property
- Look for unquoted template variables
Dashboard Not Showing Parts
- Ensure you clicked "Sync 'Parts Config.'"
- Check for errors in the sync process
- Verify parts have proper type definitions
Network Routes Not Appearing
- Sync Parts Config first
- Ensure routes are properly defined in parts
- Check port definitions match route references
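A route's port_name must resolve to a port defined on the part. The sketch below assumes ports are declared under the deployment section with a name field; check your schema reference if the layout differs:

```yaml
api:
  type: server
  deployment:
    ports:
      - name: http          # assumed port declaration layout
        containerPort: 8080
  routes:
    - name: api
      pathPrefix: PathPrefix(`/`)
      port_name: http       # must match the port name defined above
```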
Failed Deployment Analysis
When a Helm installation fails, use this systematic approach to diagnose issues:
1. Refresh Package Status
- Click "Update Info" button to refresh the latest state
- Package state will change from "Draft" to "Error" if installation failed
- Check "Running State" field for overall health status
- State changes may take 10-30 seconds to reflect in UI
2. Access Config Journal
- Navigate to "Config Journal" tab
- Look for entries with "Error" state
- Each entry shows: Code, User, Timestamp, Operation, Type, Package Version, State
- Recent entries appear at the top
3. Analyze Error Details
- Click on the error entry row to open detailed analysis dialog
- Review "Console Output" section for specific Kubernetes error messages
- Check "Helm Command" and "Helm Exit Code" for execution details
- Examine "Templates", "Values", and "Diff analysis" tabs for configuration issues
4. Common Error Patterns
Missing Image Field:
- Cause: Image field not properly configured in Parts Config
- Solution: Ensure the image is specified under the deployment section
Environment Variable Issues:
- Cause: Environment variables incorrectly placed in the configuration
- Solution: Move env vars to the deployment.env_vars array structure
Volume Mount Errors:
- Cause: Volume mount references a non-existent volume definition
- Solution: Ensure the volume is defined in the volumes section with a matching name
Multiple Validation Failures:
- Cause: Multiple configuration issues preventing deployment
- Solution: Address each error listed in the console output sequentially
5. Package State Transitions
Understanding state changes helps track deployment progress:
- Draft → Error: Installation attempt failed (check Config Journal for details)
- Draft → Applied: Successful installation and deployment
- Applied → Failed: Runtime failure after successful initial deployment
- Error → Applied: Fixed configuration and successful retry installation
- Any State → Deleted: Package removal via helm delete
6. Troubleshooting Workflow
- Initial Deployment: Click "Helm Install" → Wait 10+ seconds → Click "Update Info"
- Check Status: Review package state (Draft/Error/Applied) and Running State
- Error Analysis: Open Config Journal → Click error entry → Review console output
- Fix Configuration: Edit Parts Config based on specific errors found
- Retry Deployment: Sync Parts Config → Try "Helm Install" again
- Verify Success: Confirm state changes to "Applied" and Running State shows success
Integration with CI/CD
When automating deployments:
1. Use the Muppy API or CLI for package management
2. Always sync Parts Config after updates
3. Validate configuration before deployment
4. Monitor package status after deployment
Next Steps
- Review Parts Configuration Reference for detailed configuration options
- See Advanced Features for complex scenarios
- Check Usage Guide for general deployment patterns
This guide covers Muppy platform integration as of mpy-metapackage version 2.3.17.