Dense vector vs. sparse vector
This page provides an overview of dense and sparse vectors in the context of Epsilla vector database, outlining their use cases, advantages, and disadvantages.
Dense vectors are numerical representations of semantic meaning, typically generated by embedding models. Most or all of their elements are non-zero, and they are particularly effective for semantic search: they return the most similar results under a chosen distance metric even when there is no exact match.
Dense vectors enable complex semantic queries and are most useful when understanding context or meaning matters more than keyword matches. They are widely used in:

1. Natural language processing, where dense vectors represent word embeddings and each dimension captures a syntactic or semantic property.
2. Image and video analysis, where dense vectors are well suited to representing pixel-based data.
3. Recommendation systems, where dense vectors represent user or item profiles in a feature space for content-based filtering.
Compared with sparse vectors, dense vectors have higher computational and storage requirements, especially for large datasets.
Epsilla represents a dense vector as an array of float32 numbers.
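As a minimal sketch of that representation, the snippet below builds a dense float32 vector and attaches it to a record. The field names (`ID`, `Doc`, `Embedding`) and the random values are purely illustrative, not a fixed Epsilla schema; in practice the vector would come from an embedding model.

```python
import numpy as np

# A dense vector is a fixed-length array of float32 values.
# Real embedding models typically produce hundreds of dimensions;
# we use 8 random values here purely for illustration.
dim = 8
dense_vector = np.random.rand(dim).astype(np.float32)

# A record pairing the vector with its source text.
# Field names are hypothetical, chosen only for this example.
record = {
    "ID": 1,
    "Doc": "an example document",
    "Embedding": dense_vector.tolist(),
}

print(len(record["Embedding"]))  # 8
print(dense_vector.dtype)        # float32
```

Note that most or all of the 8 elements are non-zero, which is exactly what distinguishes a dense vector from a sparse one.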
Sparse vectors, characterized by a large number of dimensions with few non-zero values, are ideal for keyword-based searches. Each vector represents a document where dimensions are words from a dictionary, and values indicate the importance of these words in the document. This representation is effective in situations where precise keyword matches and their frequency are critical.
Compared with dense vectors, sparse vectors are less effective at capturing context or semantic meaning.
Epsilla represents sparse values as a dictionary of two arrays: indices and values. The elements of indices have type uint32; the elements of values have type float32.
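The following sketch builds that indices/values dictionary from raw word counts. The small vocabulary and the use of plain term frequencies as values are assumptions made for illustration; real systems would use a large dictionary and a weighting scheme such as BM25.

```python
# Toy vocabulary mapping words to dimension indices.
# A real dictionary would have tens of thousands of entries.
vocabulary = {"epsilla": 0, "vector": 1, "database": 2, "sparse": 3, "dense": 4}

def to_sparse(text: str) -> dict:
    """Return a sparse vector in the {'indices': [...], 'values': [...]} form,
    with indices as non-negative ints (uint32 range) and values as floats.
    Values here are raw term frequencies, purely for illustration."""
    counts = {}
    for word in text.lower().split():
        if word in vocabulary:
            idx = vocabulary[word]
            counts[idx] = counts.get(idx, 0.0) + 1.0
    indices = sorted(counts)
    return {"indices": indices, "values": [counts[i] for i in indices]}

sparse = to_sparse("Epsilla is a vector database for vector search")
print(sparse)  # {'indices': [0, 1, 2], 'values': [1.0, 2.0, 1.0]}
```

Only the three vocabulary words that actually appear get entries; every other dimension is implicitly zero, which is what keeps the representation compact even when the dictionary is huge.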
Well-known sparse vector generation algorithms include BM25 and related keyword-weighting schemes. Algorithms like BM25 compute a text document's relevance from keyword matches and their distribution across the corpus. They are very efficient for keyword-based search, and especially effective for data with large but sparsely populated feature spaces.
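To make the scoring idea concrete, here is a textbook sketch of the Okapi BM25 formula (not Epsilla's internal implementation): each query term contributes an IDF weight scaled by a saturating function of its frequency in the document, normalized by document length. The corpus, query, and parameter defaults below are illustrative.

```python
import math

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each tokenized document in `corpus` against `query` terms
    using the standard Okapi BM25 formula. `corpus` is a list of
    token lists; `query` is a list of terms."""
    n_docs = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / n_docs
    scores = []
    for doc in corpus:
        score = 0.0
        for term in query:
            tf = doc.count(term)                      # term frequency in this doc
            df = sum(1 for d in corpus if term in d)  # docs containing the term
            idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
            # Saturating tf weight with length normalization.
            score += idf * tf * (k1 + 1) / (
                tf + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(score)
    return scores

corpus = [
    "sparse vectors suit keyword search".split(),
    "dense vectors capture semantic meaning".split(),
]
scores = bm25_scores(["keyword", "search"], corpus)
print(scores)  # first document scores higher: it contains both query terms
```

Because the second document contains neither query term, its score is zero: BM25 rewards exact keyword matches, which is precisely why it excels at keyword search but cannot surface semantically related documents the way dense vectors can.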