Self-hosted LLM
A Self-Hosted LLM (Large Language Model) allows organizations to deploy and manage powerful language models on their own infrastructure. This setup provides full control over data, security, and customization of the AI models, making it an ideal solution for enterprises that require privacy, scalability, and tailored functionality.
Unlike cloud-hosted alternatives, a self-hosted LLM ensures that all data remains within the organization's network, which is crucial for industries with strict data compliance requirements.
Key Features of Self-hosted LLM:
- Data Privacy and Security: By hosting the LLM on your own servers, you maintain complete control over your data, ensuring that sensitive information is not exposed to third-party cloud providers.
- Customization: Self-hosted LLMs can be customized to meet specific business needs, including fine-tuning models with proprietary data, integrating with internal tools, and adjusting model parameters.
- Scalability: Organizations can scale their LLM deployment according to their needs, whether for small projects or large-scale enterprise applications, by leveraging their existing infrastructure.
- Cost Efficiency: By utilizing existing hardware and avoiding recurring cloud service fees, organizations can reduce costs while maintaining high performance.
- Compliance and Control: Self-hosting ensures adherence to industry-specific regulations and data governance policies, which is critical for sectors like healthcare, finance, and government.
- Performance Optimization: Organizations can optimize the performance of their LLM deployment by leveraging dedicated resources and tuning the infrastructure to meet specific computational requirements.
- Integration: Seamlessly integrate the LLM with existing systems, databases, and workflows, ensuring smooth operation within the organization’s tech ecosystem.
- Offline Capability: Self-hosted LLMs can function without an internet connection, making them ideal for environments where connectivity is limited or data security is a priority.
Actions:
- Send prompt: Sends a text prompt to the model's completions endpoint and returns the generated completion.
- Send chat prompt: Sends a message-based conversation to the model's chat endpoint and returns the assistant's reply. Both request shapes are sketched below.
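For reference, the two actions correspond to the two standard endpoints of an OpenAI-compatible API. The following is a minimal sketch of what such requests look like; the base URL, model name, and key are placeholder assumptions, and Workflow Automation issues these requests on your behalf:

```python
import requests  # assumes the `requests` package is installed

BASE_URL = "https://llm.example.com/v1"  # placeholder OpenAI-compatible endpoint
API_KEY = "sk-..."                       # placeholder secret key
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# "Send prompt": a plain text completion against /completions.
completion = requests.post(
    f"{BASE_URL}/completions",
    headers=HEADERS,
    json={"model": "my-model", "prompt": "Summarize our refund policy.", "max_tokens": 200},
    timeout=30,
)
print(completion.json()["choices"][0]["text"])

# "Send chat prompt": a message-based request against /chat/completions.
chat = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=HEADERS,
    json={
        "model": "my-model",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize our refund policy."},
        ],
    },
    timeout=30,
)
print(chat.json()["choices"][0]["message"]["content"])
```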
Which model can I use with Workflow Automation?
Any OpenAI-compatible model can be used, as long as the credentials (URL, Secret Key, and V1 Models) match. The endpoint must also be reachable from Workflow Automation; the exact accessibility requirement depends on your plan (see below).
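A quick way to check that a URL and Secret Key point at an OpenAI-compatible service is to call its model-listing endpoint, which is likely what the V1 Models credential refers to. A minimal sketch, assuming a placeholder URL and key:

```python
import requests  # assumes the `requests` package is installed

BASE_URL = "https://llm.example.com/v1"  # placeholder endpoint URL
API_KEY = "sk-..."                       # placeholder secret key

# An OpenAI-compatible service lists its models under GET /models.
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])
```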
How does it work with the different Workflow Automation plans?
- For the Basic plan, the endpoint must be publicly accessible.
- For the Pro plan, the endpoint must be accessible via VPN or publicly.
- For the Self-hosted plan, the endpoint must be accessible from the deployment environment.
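Whichever plan you are on, you can confirm reachability with a short probe run from the environment that needs access (your own machine for a public endpoint, a VPN-connected host, or the deployment environment itself). A minimal standard-library sketch with a placeholder URL:

```python
import urllib.error
import urllib.request

BASE_URL = "https://llm.example.com/v1"  # placeholder endpoint URL

# Probe the endpoint; run this from the environment that must reach it.
try:
    with urllib.request.urlopen(f"{BASE_URL}/models", timeout=5) as resp:
        print(f"Reachable (HTTP {resp.status})")
except urllib.error.HTTPError as err:
    # An HTTP error such as 401 still proves the host is reachable.
    print(f"Reachable (HTTP {err.code})")
except (urllib.error.URLError, TimeoutError) as err:
    print(f"Not reachable: {err}")
```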
Further information
- Read more detailed information on completions endpoints here.
- Read more detailed information on chat endpoints here.
Connect with Self-hosted LLM:
1. Log in to your openrouter.ai account.

   INFO: Alternatively, you can use any other AI platform that provides access to OpenAI-compatible large language models through a unified API.

2. Click the User icon in the top right corner and navigate to Keys.
3. Click the Create Key button. A pop-up opens.
4. Enter a Name for your API Key. Optionally, you may enter a Credit limit. Click the Create button.
   Your new API Key is revealed.

   WARNING: Make sure to copy and save your API Key now in a secure location for later use, since you will not be able to see it again.

5. Close the pop-up window to access the API Key overview page.
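Optionally, you can verify the key works before entering it in Workflow Automation by sending a minimal request to openrouter.ai's OpenAI-compatible API. This is an illustrative sketch; the model ID is a placeholder for any model available on your account, and the call consumes a small amount of credit:

```python
import requests  # assumes the `requests` package is installed

API_KEY = "sk-or-..."  # placeholder: the API Key you saved above

# A tiny chat completion against openrouter.ai's OpenAI-compatible API.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "openai/gpt-4o-mini",  # placeholder model ID
        "messages": [{"role": "user", "content": "Reply with: ok"}],
        "max_tokens": 5,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```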
6. Go to Workflow Automation, navigate to My Apps, and add a new app connection, choosing Self-hosted LLM from the list.
7. Enter a Screen Name.
8. Paste your LLM provider's API URL into the API URL field (for openrouter.ai, this is https://openrouter.ai/api/v1).
9. Paste the API Key you saved earlier into the API Key field.
10. Click the Submit button.
Your Self-hosted LLM connection is now established.
Start using your new Self-hosted LLM connection with Workflow Automation.