
Muppy Platform Integration Guide

This guide covers how to use mpy-metapackage (m2p) within the Muppy platform, including GUI workflows, template variables, and platform-specific features.

Overview

The Muppy platform provides a web-based interface for managing m2p deployments. Each deployment becomes an "Installed Package" in Muppy with its own FQDN, dashboard controls, and networking configuration.

Key Concepts

Installed Packages

  • Each m2p deployment creates one "Installed Package" in Muppy
  • Receives a single base FQDN (e.g., my-app.lair.ovh)
  • Contains multiple "parts" (services) that can share or extend this FQDN
  • Managed through the Muppy web interface

Package States

  • Draft: Configuration being edited, not yet deployed
  • Deploying: Actively being deployed to Kubernetes
  • Running: Successfully deployed and operational
  • Failed: Deployment encountered errors

Working with the Muppy GUI

Package Editing Workflow

When editing an Installed Package in Muppy:

  1. Click Edit on the package to enter edit mode
  2. Modify Configuration in the Parts Config tab
  3. Click "Sync 'Parts Config.'" - this critical step triggers Muppy to:
    • Parse the YAML configuration to understand the parts structure
    • Generate dashboard entries for each part with their resource limits
    • Introspect routes defined in parts to create network mappings
    • Prepare the configuration for deployment
  4. Review Changes in the Dashboard and Network tabs
  5. Save to apply changes

GUI Tabs Explained

Parts Config Tab

  • Contains the mpy_meta_config YAML configuration
  • Where you define all parts, volumes, and vault objects
  • Must sync after changes for other tabs to update

Dashboard Tab

  • Shows controls for each part after sync
  • Displays resource allocations (CPU/memory)
  • Provides debug mode controls (if configured)
  • Empty before Parts Config sync

Network Tab

  • Displays routing configuration after sync
  • Shows generated FQDNs for each part
  • Lists path prefixes and middleware settings
  • Empty before Parts Config sync

Vault Objects Tab

  • Shows vault-managed ConfigMaps and Secrets
  • Populated based on vault_objects configuration
  • Not automatically changed by Parts Config sync

Template Variables in Parts Config

Template variables allow dynamic values in your configuration, but with specific constraints:

✅ Allowed: Inside Quoted Strings

deployment:
  env_vars:
    FRONTEND_URL: "https://@{ obj.fqdn_hostname }@"
    API_BASE: "https://@{ obj.fqdn_hostname }@/api"

❌ Not Allowed: In Keys or Unquoted Values

# Wrong - template in key
@{ obj.name }@:
  type: server

# Wrong - unquoted value
deployment:
  replicas: @{ obj.replicas }@

Available Template Variables

Common variables available in Parts Config:

  • @{ obj.fqdn_hostname }@ - The package's FQDN
  • @{ obj.k8s_cluster_id.provider }@ - Kubernetes cluster provider
  • Additional variables injected by Muppy based on package configuration
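
Both named variables can appear together in quoted string values, following the quoting rule above (the env var names PUBLIC_URL and CLUSTER_PROVIDER are illustrative):

deployment:
  env_vars:
    PUBLIC_URL: "https://@{ obj.fqdn_hostname }@"
    CLUSTER_PROVIDER: "@{ obj.k8s_cluster_id.provider }@"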

Multi-Service Deployments with host_prefix

When deploying multiple services that need HTTP(S) exposure, m2p uses the host_prefix attribute:

How It Works

  • Each Installed Package receives one base FQDN (e.g., my-app.lair.ovh)
  • This FQDN is injected as package_release.main_fqdn
  • Parts can define host_prefix to create unique subdomains by prepending a prefix

Note: The snake_case host_prefix is the current standard for m2p. The camelCase hostPrefix is still supported for backward compatibility, but host_prefix takes priority if both are specified.

Example Configuration

mpy_meta_config:
  parts:
    # Main application - uses base FQDN
    api:
      type: server
      # No host_prefix - accessible at: my-app.lair.ovh
      routes:
        - name: api
          pathPrefix: PathPrefix(`/`)
          port_name: http

    # Admin panel - uses subdomain
    admin:
      type: server
      host_prefix: "admin-"  # Accessible at: admin-my-app.lair.ovh
      routes:
        - name: admin
          pathPrefix: PathPrefix(`/`)
          port_name: http

    # Monitoring dashboard - uses subdomain
    grafana:
      type: server
      host_prefix: "metrics-"  # Accessible at: metrics-my-app.lair.ovh
      routes:
        - name: grafana
          pathPrefix: PathPrefix(`/`)
          port_name: http

Common Workflows

Creating a New Package

  1. Navigate to Muppy dashboard
  2. Click "New Package"
  3. Select mpy-metapackage as the chart
  4. Configure basic settings (name, namespace, FQDN)
  5. Add Parts Config in the configuration tab
  6. Click "Sync 'Parts Config.'" to validate
  7. Deploy the package

Updating an Existing Package

  1. Find package in Muppy dashboard
  2. Click "Edit"
  3. Navigate to "Parts Config" tab
  4. Make changes to mpy_meta_config
  5. Important: Click "Sync 'Parts Config.'"
  6. Verify changes in Dashboard and Network tabs
  7. Click "Save" to apply

Debugging Failed Deployments

  1. Check package status and error messages
  2. Review Parts Config for syntax errors
  3. Verify all template variables are properly quoted
  4. Check Dashboard tab for resource constraints
  5. Use debug mode if available:
    parts:
      myapp:
        debug_mode_available: basic
    

Best Practices

Configuration Management

  • Always sync Parts Config after changes
  • Use template variables for environment-specific values
  • Keep part codes ≤ 8 characters
  • Test configurations in development first

Multi-Service Patterns

  • Use clear, consistent host_prefix naming
  • Document which services use which subdomains
  • Consider path-based routing vs subdomain routing (see the sketch after this list)
  • Plan for service discovery between parts
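
As a sketch of the two routing patterns, reusing the route syntax from the example configuration above (the part and route names here are illustrative):

mpy_meta_config:
  parts:
    # Subdomain routing - served at admin-my-app.lair.ovh
    admin:
      type: server
      host_prefix: "admin-"
      routes:
        - name: admin
          pathPrefix: PathPrefix(`/`)
          port_name: http

    # Path-based routing - served at my-app.lair.ovh/admin
    api:
      type: server
      routes:
        - name: admin-path
          pathPrefix: PathPrefix(`/admin`)
          port_name: http

Subdomains isolate services more cleanly (separate hostnames, cookies, and certificates), while path-based routing keeps everything under one FQDN; which fits best depends on how the parts discover and call each other.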

Security Considerations

  • Use vault objects for sensitive data
  • Configure appropriate middleware (IP filtering, auth)
  • Avoid hardcoding secrets in Parts Config
  • Use template variables for environment-specific URLs

Troubleshooting

Parts Config Won't Sync

  • Check YAML syntax validity
  • Ensure part codes are ≤ 8 characters
  • Verify deployment sections have at least one property
  • Look for unquoted template variables
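
A minimal part that satisfies these checks might look like the following sketch (the image key under deployment is an assumption, consistent with the error patterns described under Failed Deployment Analysis below):

mpy_meta_config:
  parts:
    myapp:        # part code is 8 characters or fewer
      type: server
      deployment:
        image: myrepo/myapp:1.0   # at least one property; key name assumed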

Dashboard Not Showing Parts

  • Ensure you clicked "Sync 'Parts Config.'"
  • Check for errors in the sync process
  • Verify parts have proper type definitions

Network Routes Not Appearing

  • Sync Parts Config first
  • Ensure routes are properly defined in parts
  • Check port definitions match route references (see the sketch below)
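
The exact port-definition schema is not shown in this guide, so the ports layout below is hypothetical; the point is that a route's port_name must match the name of a defined port:

parts:
  api:
    type: server
    deployment:
      ports:              # hypothetical layout
        - name: http      # this name...
          port: 8080
    routes:
      - name: api
        pathPrefix: PathPrefix(`/`)
        port_name: http   # ...must match the route's port_name reference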

Failed Deployment Analysis

When a Helm installation fails, use this systematic approach to diagnose issues:

1. Refresh Package Status

  • Click "Update Info" button to refresh the latest state
  • Package state will change from "Draft" to "Error" if installation failed
  • Check "Running State" field for overall health status
  • State changes may take 10-30 seconds to reflect in the UI

2. Access Config Journal

  • Navigate to "Config Journal" tab
  • Look for entries with "Error" state
  • Each entry shows: Code, User, Timestamp, Operation, Type, Package Version, State
  • Recent entries appear at the top

3. Analyze Error Details

  • Click on the error entry row to open detailed analysis dialog
  • Review "Console Output" section for specific Kubernetes error messages
  • Check "Helm Command" and "Helm Exit Code" for execution details
  • Examine "Templates", "Values", and "Diff analysis" tabs for configuration issues

4. Common Error Patterns

Missing Image Field:

spec.template.spec.containers[0].image: Required value
  • Cause: Image field not properly configured in Parts Config
  • Solution: Ensure image is specified under the deployment section
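
Assuming image is the key name (per the solution above; the image reference itself is a placeholder):

parts:
  myapp:
    type: server
    deployment:
      image: myrepo/myapp:1.2.3   # placeholder image reference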

Environment Variable Issues:

unknown field "spec.template.spec.containers[0].CLIENT_HOST"
  • Cause: Environment variables incorrectly placed in the configuration
  • Solution: Move env vars to the deployment.env_vars structure
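
A before/after sketch, using the env_vars mapping form shown in the template-variables section (CLIENT_HOST comes from the error message above; its value here is a placeholder):

# Wrong - env var placed directly on the part
parts:
  myapp:
    type: server
    CLIENT_HOST: "client.example.com"

# Correct - under deployment.env_vars
parts:
  myapp:
    type: server
    deployment:
      env_vars:
        CLIENT_HOST: "client.example.com"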

Volume Mount Errors:

spec.template.spec.containers[0].volumeMounts[0].name: Not found: "volume-name"
  • Cause: Volume mount references a non-existent volume definition
  • Solution: Ensure the volume is defined in the volumes section with a matching name
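
The volume schema is not detailed in this guide, so the keys below are hypothetical; what matters is that the mount references a volume defined with the same name:

parts:
  myapp:
    type: server
    volumes:                    # hypothetical layout
      - name: volume-name
        size: 1Gi
    deployment:
      volume_mounts:            # hypothetical key
        - name: volume-name     # must match the volume definition above
          mount_path: /data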

Multiple Validation Failures:

Error: INSTALLATION FAILED: 3 errors occurred
  • Cause: Multiple configuration issues preventing deployment
  • Solution: Address each error listed in the console output sequentially

5. Package State Transitions

Understanding state changes helps track deployment progress:

  • Draft → Error: Installation attempt failed (check Config Journal for details)
  • Draft → Applied: Successful installation and deployment
  • Applied → Failed: Runtime failure after successful initial deployment
  • Error → Applied: Fixed configuration and successful retry installation
  • Any State → Deleted: Package removal via helm delete

6. Troubleshooting Workflow

  1. Initial Deployment: Click "Helm Install" → Wait 10+ seconds → Click "Update Info"
  2. Check Status: Review package state (Draft/Error/Applied) and Running State
  3. Error Analysis: Open Config Journal → Click error entry → Review console output
  4. Fix Configuration: Edit Parts Config based on specific errors found
  5. Retry Deployment: Sync Parts Config → Try "Helm Install" again
  6. Verify Success: Confirm state changes to "Applied" and Running State shows success

Integration with CI/CD

When automating deployments:

  1. Use the Muppy API or CLI for package management
  2. Always sync Parts Config after updates
  3. Validate the configuration before deployment
  4. Monitor package status after deployment

This guide covers Muppy platform integration as of mpy-metapackage version 2.3.17