Embeddings
Embeddings (vectors) are numerical representations of complex data such as text or images in a format that machines can process. Embedding models convert text, images, audio, and video into vectors of real numbers. This process captures semantic meaning, allowing algorithms to understand content similarity and context. The technique is pivotal in applications ranging from Retrieval Augmented Generation (RAG) to recommendation systems to language translation, as it enables computers to 'understand' and work with human language.
In embedding models, the mathematical premise is that the closer two vectors are in high-dimensional space, the more semantically similar they are. Vector databases leverage this property for semantic similarity search, using algorithms such as nearest neighbor search. These algorithms compute the distances between vectors, interpreting smaller distances as higher similarity. This approach lets applications find closely related items (texts, images, audio, video) based on their embedded vector representations, making it possible to search and analyze data by meaning and context rather than by literal matches.
Starting in v0.3, Epsilla can automatically embed your documents and questions within the vector database, which significantly simplifies the end-to-end semantic similarity search workflow.
When creating tables, you can define indices to let Epsilla automatically create embeddings for the STRING fields:
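(A minimal sketch using the Python client; the connection settings, database path, table name, and fields are illustrative.)

```python
from pyepsilla import vectordb

# Connect to a locally running Epsilla instance (default port 8888)
client = vectordb.Client(host="localhost", port="8888")
client.load_db(db_name="MyDB", db_path="/tmp/epsilla")
client.use_db(db_name="MyDB")

# The index on the STRING field "Doc" tells Epsilla to embed that field automatically
client.create_table(
    table_name="MyTable",
    table_fields=[
        {"name": "ID", "dataType": "INT", "primaryKey": True},
        {"name": "Doc", "dataType": "STRING"},
    ],
    indices=[
        {"name": "Index", "field": "Doc", "model": "BAAI/bge-small-en-v1.5"},
    ],
)
```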
You can omit the model when defining indices, and Epsilla uses BAAI/bge-small-en-v1.5 by default.
Then you can insert records in their raw format and let Epsilla handle the embedding:
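(Continuing the sketch above; the record contents are illustrative.)

```python
client.insert(
    table_name="MyTable",
    records=[
        {"ID": 1, "Doc": "Berlin is the capital of Germany."},
        {"ID": 2, "Doc": "San Francisco is known for the Golden Gate Bridge."},
    ],
)
```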
After inserting records, you can query the table with natural language questions:
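(Continuing the sketch; the question is embedded with the same model as the indexed field.)

```python
status_code, response = client.query(
    table_name="MyTable",
    query_text="What is the capital of Germany?",
    limit=1,
)
print(response)
```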
Output
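(Illustrative response for the sketch above; the exact response shape may differ by version.)

```json
{
  "statusCode": 200,
  "result": [
    {"ID": 1, "Doc": "Berlin is the capital of Germany."}
  ]
}
```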
Here is the list of built-in embedding models Epsilla supports:
BAAI/bge-small-en
BAAI/bge-small-en-v1.5
BAAI/bge-small-zh-v1.5
BAAI/bge-base-en
BAAI/bge-base-en-v1.5
sentence-transformers/all-MiniLM-L6-v2
BAAI/bge-small-en-v1.5 and sentence-transformers/all-MiniLM-L6-v2 are enabled by default. You can enable other models via the EMBEDDING_MODELS environment variable of the docker run command (a comma-separated string):
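(The image name and port mapping below follow the standard Epsilla Docker setup; adjust them to your deployment.)

```bash
docker run --pull=always -d -p 8888:8888 \
  -e EMBEDDING_MODELS="BAAI/bge-base-en-v1.5,BAAI/bge-small-zh-v1.5" \
  epsilla/vectordb
```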
When using these built-in embedding models, the embedding is computed locally on your machine with no outbound network traffic. Embedding runs on the CPU, so make sure you have enough CPU power and memory to handle the models before enabling them.
Epsilla supports these OpenAI embedding models:

Name | Dimensions | Supports Dimension Reduction
---|---|---
openai/text-embedding-3-large | 3072 | Yes
openai/text-embedding-3-small | 1536 | Yes
openai/text-embedding-ada-002 | 1536 | No
When using OpenAI embedding on Docker, make sure to provide the X-OpenAI-API-Key header when connecting to the vector database. If you are using Epsilla Cloud, add the OpenAI integration instead of passing the header. Then reference the chosen OpenAI model when defining the index.
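A minimal sketch with the Python client, assuming it accepts custom headers at connection time; the API key and table layout are placeholders:

```python
from pyepsilla import vectordb

# Pass the OpenAI API key as a header when connecting (Docker deployment)
client = vectordb.Client(
    host="localhost",
    port="8888",
    headers={"X-OpenAI-API-Key": "YOUR_OPENAI_API_KEY"},
)

# Reference an OpenAI model in the index definition to embed the "Doc" field
client.create_table(
    table_name="MyTable",
    table_fields=[
        {"name": "ID", "dataType": "INT", "primaryKey": True},
        {"name": "Doc", "dataType": "STRING"},
    ],
    indices=[
        {"name": "Index", "field": "Doc", "model": "openai/text-embedding-3-small"},
    ],
)
```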
Epsilla supports these JinaAI embedding models (learn more about Jina AI embedding at https://jina.ai/embeddings/):

Name | Dimensions
---|---
jinaai/jina-embeddings-v2-base-en | 768
jinaai/jina-embeddings-v2-base-de | 768
jinaai/jina-embeddings-v2-base-zh | 768
jinaai/jina-embeddings-v2-base-code | 768
jinaai/jina-embeddings-v2-small-en | 512
When using Jina AI embedding on Docker, make sure to provide the X-JinaAI-API-Key header when connecting to the vector database. If you are using Epsilla Cloud, add the JinaAI integration instead of passing the header. Then reference the chosen Jina AI model when defining the index.
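The setup mirrors the OpenAI sketch above; only the header and the model name change (placeholder key):

```python
from pyepsilla import vectordb

# Connect with the Jina AI key header, then use a jinaai/* model in the index
client = vectordb.Client(host="localhost", port="8888",
                         headers={"X-JinaAI-API-Key": "YOUR_JINAAI_API_KEY"})
jina_index = {"name": "Index", "field": "Doc",
              "model": "jinaai/jina-embeddings-v2-base-en"}  # pass via indices=[...] in create_table
```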
Epsilla supports these VoyageAI embedding models (learn more about Voyage AI embedding at https://www.voyageai.com/):

Name | Dimensions
---|---
voyageai/voyage-large-2 | 1536
voyageai/voyage-code-2 | 1536
voyageai/voyage-2 | 1024
voyageai/voyage-02 | 1024
voyageai/voyage-law-2 | 1024
voyageai/voyage-finance-2 | 1024
voyageai/voyage-multilingual-2 | 1024
voyageai/voyage-lite-02-instruct | 1024
voyageai/voyage-3 | 1024
voyageai/voyage-3-lite | 512
When using Voyage AI embedding on Docker, make sure to provide the X-VoyageAI-API-Key header when connecting to the vector database. If you are using Epsilla Cloud, add the VoyageAI integration instead of passing the header. Then reference the chosen Voyage AI model when defining the index.
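Again the same pattern as the OpenAI sketch; only the header and model name differ (placeholder key):

```python
from pyepsilla import vectordb

# Connect with the Voyage AI key header, then use a voyageai/* model in the index
client = vectordb.Client(host="localhost", port="8888",
                         headers={"X-VoyageAI-API-Key": "YOUR_VOYAGEAI_API_KEY"})
voyage_index = {"name": "Index", "field": "Doc",
                "model": "voyageai/voyage-3"}  # pass via indices=[...] in create_table
```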
Epsilla supports these Mixedbread AI embedding models (learn more about Mixedbread AI embedding at https://www.mixedbread.ai/docs/models/embeddings#models):

Name | Dimensions
---|---
mixedbreadai/UAE-Large-V1 | 1024
mixedbreadai/bge-large-en-v1.5 | 1024
mixedbreadai/gte-large | 1024
mixedbreadai/e5-large-v2 | 1024
mixedbreadai/multilingual-e5-large | 1024
mixedbreadai/multilingual-e5-base | 768
mixedbreadai/gte-large-zh | 1024
When using Mixedbread AI embedding on Docker, make sure to provide the X-MixedbreadAI-API-Key header when connecting to the vector database. If you are using Epsilla Cloud, add the Mixedbread AI integration instead of passing the header. Then reference the chosen Mixedbread AI model when defining the index.
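Same pattern again; only the header and model name differ (placeholder key):

```python
from pyepsilla import vectordb

# Connect with the Mixedbread AI key header, then use a mixedbreadai/* model in the index
client = vectordb.Client(host="localhost", port="8888",
                         headers={"X-MixedbreadAI-API-Key": "YOUR_MIXEDBREADAI_API_KEY"})
mxbai_index = {"name": "Index", "field": "Doc",
               "model": "mixedbreadai/UAE-Large-V1"}  # pass via indices=[...] in create_table
```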
Epsilla supports these Nomic AI embedding models (learn more about Nomic AI embedding at https://docs.nomic.ai/reference/endpoints/nomic-embed-text):

Name | Dimensions
---|---
nomicai/nomic-embed-text-v1.5 | 768
nomicai/nomic-embed-text-v1 | 768
When using Nomic AI embedding on Docker, make sure to provide the X-NOMIC-API-Key header when connecting to the vector database. If you are using Epsilla Cloud, add the Nomic AI integration instead of passing the header. Then reference the chosen Nomic AI model when defining the index.
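Same pattern; only the header and model name differ (placeholder key):

```python
from pyepsilla import vectordb

# Connect with the Nomic AI key header, then use a nomicai/* model in the index
client = vectordb.Client(host="localhost", port="8888",
                         headers={"X-NOMIC-API-Key": "YOUR_NOMIC_API_KEY"})
nomic_index = {"name": "Index", "field": "Doc",
               "model": "nomicai/nomic-embed-text-v1.5"}  # pass via indices=[...] in create_table
```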
Epsilla supports these Mistral AI embedding models (learn more about Mistral AI embedding at https://docs.mistral.ai/guides/embeddings/):

Name | Dimensions
---|---
mistralai/mistral-embed | 1024
When using Mistral AI embedding on Docker, make sure to provide the X-MistralAI-API-Key header when connecting to the vector database. If you are using Epsilla Cloud, add the Mistral AI integration instead of passing the header. Then reference the Mistral AI model when defining the index.
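Same pattern; only the header and model name differ (placeholder key):

```python
from pyepsilla import vectordb

# Connect with the Mistral AI key header, then use mistralai/mistral-embed in the index
client = vectordb.Client(host="localhost", port="8888",
                         headers={"X-MistralAI-API-Key": "YOUR_MISTRALAI_API_KEY"})
mistral_index = {"name": "Index", "field": "Doc",
                 "model": "mistralai/mistral-embed"}  # pass via indices=[...] in create_table
```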