Dense vector vs. sparse vector

This page provides an overview of dense and sparse vectors in the context of Epsilla vector database, outlining their use cases, advantages, and disadvantages.

Dense Vectors

Dense vectors are numerical representations of semantic meaning, typically generated by embedding models like OpenAI/text-embedding-ada-002, sentence-transformers/all-MiniLM-L6-v2, etc. These vectors, where most or all elements are non-zero, are particularly effective for semantic search, returning the most similar results according to specific distance metrics even in the absence of exact matches.
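To illustrate how similarity is measured, cosine similarity (one common distance metric) can be computed over two tiny made-up embeddings with NumPy. Real embedding models produce hundreds or thousands of dimensions; the 4-dimensional vectors here are for illustration only:

```python
import numpy as np

# Two hypothetical 4-dimensional dense embeddings (a real model like
# text-embedding-ada-002 produces 1536 dimensions).
doc = np.array([-0.074163, 0.238575, -0.141831, 0.117338])
query = np.array([-0.070000, 0.240000, -0.140000, 0.120000])

# Cosine similarity: values near 1.0 mean the vectors point in nearly
# the same direction, i.e. the texts they encode are semantically close.
cosine = np.dot(doc, query) / (np.linalg.norm(doc) * np.linalg.norm(query))
print(cosine)
```

Because similarity is computed over directions in the embedding space rather than exact tokens, a query can match relevant documents even with no overlapping keywords.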

Dense vectors enable complex semantic queries, which is useful in scenarios where understanding context or meaning matters more than exact keyword matches. They are widely used in (1) natural language processing, where dense vectors represent word embeddings in which each dimension captures a syntactic or semantic property; (2) image and video analysis, where dense vectors are well suited to representing pixel-based data; (3) recommendation systems, where dense vectors represent user or item profiles in a feature space for content-based filtering.

Compared with sparse vectors, dense vectors have higher computational and storage requirements, because every dimension must be stored and processed. This cost grows quickly for large datasets.
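A rough back-of-the-envelope sketch makes the storage gap concrete. The byte sizes assume float32 values and uint32 indices (the types Epsilla uses for these fields), and the 65,535-dimension figure matches the sparse example later on this page:

```python
# Rough storage comparison for one vector in a 65,535-dimension
# feature space with only 4 non-zero entries.
dimensions = 65535
non_zero = 4

dense_bytes = dimensions * 4        # dense: every float32 is stored
sparse_bytes = non_zero * (4 + 4)   # sparse: one uint32 index + one float32 value per non-zero entry

print(dense_bytes, sparse_bytes)
```

In this sketch the dense layout needs roughly 256 KB per vector while the sparse layout needs 32 bytes, which is why sparse representations are preferred for large, mostly-empty feature spaces.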

Define a dense vector field in an Epsilla table:

status_code, response = db.create_table(
    table_name="MyTable",
    table_fields=[
        {"name": "Embedding", "dataType": "VECTOR_FLOAT", "dimensions": 1536}
    ]
)

Insert a dense vector record:

Epsilla represents a dense vector as an array of float32 numbers.

status_code, response = db.insert(
  table_name="MyTable",
  records=[
    {"Embedding": [-0.074163,0.238575,-0.141831,0.117338, ...]},
  ]
)

Query a dense vector field:

status_code, response = db.query(
  table_name="MyTable",
  query_field="Embedding",
  query_vector=[-0.074163,0.238575,-0.141831,0.117338, ...],
  limit=2
)

Sparse Vectors

Sparse vectors, characterized by a large number of dimensions with few non-zero values, are ideal for keyword-based searches. Each vector represents a document, where the dimensions correspond to words from a dictionary and the values indicate the importance of those words in the document. This representation is effective in situations where precise keyword matches and their frequency are critical.

Well-known sparse vector generation algorithms include BM25, SPLADE, etc. Algorithms like BM25 compute text-document relevance based on keyword matches and their distribution. They are very efficient for keyword-based search, and especially effective for data with large but sparsely populated feature spaces.
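For illustration, here is a minimal BM25 scorer over a tiny made-up corpus, using the standard formula with the common default parameters k1=1.5 and b=0.75:

```python
import math

# Made-up corpus of tokenized documents, for illustration only.
corpus = [
    "epsilla is a vector database".split(),
    "sparse vectors suit keyword search".split(),
    "dense vectors capture semantic meaning".split(),
]
k1, b = 1.5, 0.75
N = len(corpus)
avgdl = sum(len(d) for d in corpus) / N  # average document length

def bm25(query_terms, doc):
    """Score one document against a list of query terms."""
    score = 0.0
    for term in query_terms:
        df = sum(term in d for d in corpus)  # document frequency of the term
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        f = doc.count(term)                  # term frequency in this document
        # Term frequency is saturated by k1 and normalized by document length via b.
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

scores = [bm25(["sparse", "vectors"], d) for d in corpus]
print(scores)
```

The document containing both query terms scores highest; a document matching no terms scores zero, reflecting BM25's reliance on exact keyword overlap.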

Compared with dense vectors, sparse vectors are less effective in capturing the context or semantic meaning.

Define a sparse vector field in an Epsilla table:

status_code, response = db.create_table(
    table_name="MyTable",
    table_fields=[
        {
            "name": "Embedding",
            "dataType": "SPARSE_VECTOR_FLOAT",
            "dimensions": 65535,
            "metricType": "DOT_PRODUCT"
        }
    ]
)

Insert a sparse vector record:

Epsilla represents sparse values as a dictionary of two arrays: indices and values. The elements of indices have type uint32; the elements of values have type float32.

status_code, response = db.insert(
  table_name="MyTable",
  records=[
    {
      "Embedding": {
        "indices": [32, 103, 2345, 10384],
        "values": [0.074163, 0.238575, 0.141831, 0.117338]
      }
    }
  ]
)

Query a sparse vector field:

status_code, response = db.query(
  table_name="MyTable",
  query_field="Embedding",
  query_vector={
    "indices": [32, 103, 2345, 10384],
    "values": [0.074163, 0.238575, 0.141831, 0.117338]
  },
  limit=2
)
