
Integration


Last updated 5 months ago

Model Providers

Epsilla integrates with large language model (LLM) and embedding model providers, letting users tap into state-of-the-art AI models for a wide range of applications. It connects seamlessly to providers such as OpenAI and Anthropic, supporting models like GPT-4, Claude, and Mistral, as well as embedding solutions from Jina AI, Voyage AI, and others.

Users can enable these integrations by entering their API keys, giving them access to the provider's models for tasks such as text generation and embedding.

Once the API key is validated, the models are ready for immediate use across the Epsilla platform.

If the key validation fails, users can quickly update or correct their credentials, ensuring smooth and secure model access for their business needs.
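As a rough illustration of the kind of check a key goes through before it is accepted, the sketch below does a client-side format pre-check. The prefix table is an assumption based on the key formats these providers commonly use (OpenAI keys typically start with `sk-`, Anthropic keys with `sk-ant-`); it is not Epsilla's actual validation logic, which verifies the key against the provider's API.

```python
# Hypothetical client-side sanity check before submitting a provider API key.
# The prefixes below are assumptions about common key formats, not Epsilla's
# real validation, which confirms the key with the provider itself.

KNOWN_PREFIXES = {
    "openai": "sk-",
    "anthropic": "sk-ant-",
}

def looks_valid(provider: str, api_key: str) -> bool:
    """Cheap pre-check: non-empty key carrying the provider's usual prefix."""
    prefix = KNOWN_PREFIXES.get(provider)
    return bool(api_key) and (prefix is None or api_key.startswith(prefix))

print(looks_valid("openai", "sk-abc123"))   # True
print(looks_valid("anthropic", "sk-xyz"))   # False: missing "sk-ant-" prefix
```

A pre-check like this only catches obvious typos early; the authoritative validation is the round trip Epsilla makes to the provider when you save the key.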

By integrating your own LLM API key into Epsilla, all usage will be directly linked to your own LLM account, meaning that any costs or message usage will be handled by your LLM provider. This setup bypasses Epsilla's monthly message limits, allowing you full control and flexibility over your LLM usage and billing.

OpenAI's API key billing system includes multiple tiers, model access options, and rate limits, which can make the integration confusing. If you encounter any issues during the API key validation process or have questions about your integration, feel free to reach out to us for assistance.

Web Search Providers

Web search providers enable your chat agent to retrieve real-time information from the internet. By integrating with services like Tavily, you can use their API to enhance the chat agent's capabilities with up-to-date web search functionality. Simply input and validate your API key to activate this feature, ensuring seamless access to internet-based knowledge for your AI agent.
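To make the integration concrete, here is a minimal sketch of how a Tavily-style search request could be assembled. The endpoint URL and the `api_key`, `query`, and `max_results` fields follow Tavily's public REST API as commonly documented, but treat them as assumptions and confirm against Tavily's own docs; the key value is a placeholder.

```python
import json

# Tavily's search endpoint as commonly documented (an assumption here;
# verify against Tavily's API reference before use).
TAVILY_ENDPOINT = "https://api.tavily.com/search"

def build_search_request(api_key: str, query: str, max_results: int = 5) -> str:
    """Build the JSON body for a Tavily-style web search request."""
    payload = {
        "api_key": api_key,       # your Tavily key, e.g. "tvly-..."
        "query": query,           # the question your agent wants answered
        "max_results": max_results,
    }
    return json.dumps(payload)

# Placeholder key for illustration only.
body = build_search_request("tvly-demo-key", "latest Epsilla release")
print(body)
```

In practice you would POST this body to `TAVILY_ENDPOINT` with an HTTP client and pass the returned results to your chat agent as context; inside Epsilla, entering and validating the key in the Integration page handles all of this for you.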