
Introduce new optional "relay" backend docker image that can be self-hosted to relay AI requests #928

@Logende

Description


AI backend / relay

Create a small self-hosted backend relay for MetaConfigurator (MC) that forwards requests to a user-configured LLM endpoint using a user-configured API key. This solves browser CORS issues for providers that cannot be called directly from the frontend, while keeping provider keys out of the browser.

Scope

  • Minimal Flask app.py + simple Dockerfile.
  • Config via environment variables only.
  • Intended for self-hosting by individuals, teams, or companies.
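The "simple Dockerfile" in the scope could look like the following sketch (base image, port, and dependency choices are assumptions, not a final decision):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install only what the relay needs (assumed dependencies).
RUN pip install --no-cache-dir flask requests gunicorn

COPY app.py .

EXPOSE 8000

# Run behind a production WSGI server; all config comes from environment variables.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```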

Required config

  • UPSTREAM_BASE_URL
  • UPSTREAM_API_KEY
  • MC_RELAY_USERNAME
  • MC_RELAY_PASSWORD
  • Optional: ALLOWED_ORIGIN, timeout, extra upstream headers.
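Since config is environment-variables-only, the relay can fail fast on startup when a required variable is missing. A minimal sketch (the optional variable names `ALLOWED_ORIGIN` and `UPSTREAM_TIMEOUT` and their defaults are assumptions):

```python
import os

# Required variables from the issue's config list.
REQUIRED_VARS = (
    "UPSTREAM_BASE_URL",
    "UPSTREAM_API_KEY",
    "MC_RELAY_USERNAME",
    "MC_RELAY_PASSWORD",
)

def load_config(env=None) -> dict:
    """Read relay config from environment variables, failing fast if any
    required value is missing. `env` defaults to os.environ (a mapping
    parameter makes the function testable)."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return {
        **{name: env[name] for name in REQUIRED_VARS},
        # Optional settings with defaults (names are assumptions, not final).
        "ALLOWED_ORIGIN": env.get("ALLOWED_ORIGIN", ""),
        "UPSTREAM_TIMEOUT": float(env.get("UPSTREAM_TIMEOUT", "60")),
    }
```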

Required features

  • HTTP Basic Auth for MC -> relay access.
  • CORS support for the configured MC origin.
  • Proxy OpenAI-compatible endpoints, at minimum:
    • GET /health
    • GET /v1/models
    • POST /v1/chat/completions
  • Pass upstream responses through with minimal transformation.
  • Support streaming if possible.

Security requirements

  • No provider key in frontend/browser.
  • No secrets in logs.
  • Deploy behind HTTPS.
  • Restrict CORS to configured origin, not *.

Goal

Add support for custom/self-hosted and CORS-blocked LLM providers via a simple relay that can live inside the MetaConfigurator repo and be easy to run with Docker.

Also document how to set up and use this relay.
