Embedding


Last updated 7 months ago

The Embedding setting lets you select a specific embedding model for representing your text as vectors, improving search quality.

Choose Embedding Model

Choose an embedding model from the dropdown to use for embedding data chunks.

The default epsilla/text-embedding-3-small and epsilla/text-embedding-3-large models leverage Epsilla's built-in OpenAI integration and do not require any additional configuration on your side. However, if you choose another embedding model, you must first add an integration for the respective vendor.
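Whichever model you select, the knowledge base embeds each data chunk into a vector at ingestion time, and queries are later matched against those vectors by similarity. A minimal sketch of that retrieval step, using a hypothetical keyword-counting `embed` stub in place of a real embedding model:

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for a real embedding model such as
    # epsilla/text-embedding-3-small: maps text to a fixed-size vector.
    # Here we just count a few keyword families for illustration.
    topics = ["price", "refund", "shipping"]
    return [float(text.lower().count(t)) for t in topics]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Embed data chunks once, at ingestion time.
chunks = [
    "Our refund policy allows returns within 30 days.",
    "Shipping takes 3-5 business days.",
]
index = [(c, embed(c)) for c in chunks]

# At query time, embed the question and pick the most similar chunk.
query_vec = embed("How do I get a refund?")
best = max(index, key=lambda item: cosine(query_vec, item[1]))
print(best[0])  # the refund-policy chunk
```

A real model replaces the keyword counts with dense vectors of hundreds or thousands of dimensions, but the ingestion-then-similarity flow is the same.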

Which Embedding Model Fits My Needs Best

When selecting an embedding model for your specific use case, it's important to recognize that there is no one-size-fits-all model. The ideal model depends on factors such as the nature of your use case, cost considerations, and performance requirements. OpenAI offers large and small embedding models: the large embedding model excels in general-purpose scenarios where higher quality is crucial, while the smaller model is a more cost-efficient option for use cases with tighter budget constraints. JinaAI provides embedding models well-suited for multilingual applications, making them an excellent choice for use cases involving diverse languages. For more specialized needs, VoyageAI models are optimized for vertical domains, such as financial or legal contexts, offering domain-specific insights and improved accuracy in these areas.

Read more about embedding.

For an overview of different models' performance across various benchmarks, you can refer to the MTEB leaderboard. However, it's important not to rely blindly on these results, as the datasets used in MTEB may not fully represent your specific use case and could be biased. Always evaluate models in the context of your own data and requirements.
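One lightweight way to go beyond leaderboard numbers is to score candidate models on a small labeled sample of your own queries and chunks, for example hit-rate@1 (is the top retrieved chunk the expected one?). A sketch with a hypothetical `embed_fn` interface; in practice you would plug in calls to whichever vendor models you have integrated:

```python
import math

def cosine(a, b):
    # Cosine similarity; 0.0 if either vector is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hit_rate_at_1(embed_fn, labeled_pairs, chunks):
    """Fraction of queries whose top-1 retrieved chunk is the expected one."""
    chunk_vecs = {c: embed_fn(c) for c in chunks}
    hits = 0
    for query, expected_chunk in labeled_pairs:
        qv = embed_fn(query)
        top = max(chunks, key=lambda c: cosine(qv, chunk_vecs[c]))
        hits += top == expected_chunk
    return hits / len(labeled_pairs)

# Toy evaluation with a keyword-counting stand-in for a real model.
def toy_embed(text):
    vocab = ["invoice", "password", "delivery"]
    return [float(text.lower().count(w)) for w in vocab]

chunks = [
    "Reset your password from the account settings page.",
    "Invoices are emailed at the start of each month.",
    "Delivery usually takes two business days.",
]
labeled = [
    ("Where do I find my invoice?", chunks[1]),
    ("I forgot my password.", chunks[0]),
]
print(hit_rate_at_1(toy_embed, labeled, chunks))
```

Even a few dozen labeled pairs drawn from your own knowledge base usually reveal more about model fit than a generic benchmark score.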
