By "adding a remote model" we mean a model that's not running in your Kubernetes cluster. This could be a model running on a private cloud service or other providers.
You basically have two options, depending on whether or not your provider already exposes an OpenAI-compatible `chat/completions` API.

### Provider with an OpenAI-compatible `chat/completions` API

Great news: add the URL for your provider and any API keys directly into the models section of the user interface.
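As a quick sanity check before wiring the provider into the UI, you can confirm that its endpoint really speaks the OpenAI `chat/completions` protocol. The sketch below uses the official `openai` Python client; the base URL, model name, and `PROVIDER_API_KEY` environment variable are placeholders for your provider's actual values, not names defined by this project.

```python
# Minimal sketch: call a remote provider's OpenAI-compatible chat/completions
# endpoint. All URLs, model names, and key names here are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # your provider's endpoint (placeholder)
    api_key=os.environ["PROVIDER_API_KEY"],          # the same key you would paste into the UI
)

response = client.chat.completions.create(
    model="example-model",  # placeholder model name from your provider
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```

If this call returns a normal chat completion, the same URL and API key should work when entered in the models section of the UI.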
### Provider without an OpenAI-compatible `chat/completions` API (Work in Progress)

Luckily, the folks at LiteLLM have you covered: LiteLLM can connect to just about any provider and expose an OpenAI-compatible `chat/completions` API endpoint in front of your model.
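Once a LiteLLM proxy is running in front of your provider (see the LiteLLM documentation for the proxy configuration), it behaves like any other OpenAI-compatible endpoint. The sketch below is a minimal illustration of that idea; the proxy address, key, and model name are placeholders, and the proxy itself is assumed to have been set up separately.

```python
# Minimal sketch: talk to a LiteLLM proxy exactly as if it were an
# OpenAI-compatible provider. Address, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # wherever your LiteLLM proxy is listening (placeholder)
    api_key="sk-litellm-placeholder",     # the key configured for the proxy (placeholder)
)

response = client.chat.completions.create(
    model="my-remote-model",  # the model name registered with the proxy (placeholder)
    messages=[{"role": "user", "content": "Hello from behind the proxy!"}],
)
print(response.choices[0].message.content)
```

The practical upshot is that this option reduces to the first one: you point the models section of the user interface at the LiteLLM proxy's URL instead of at the provider directly.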