
Atlas Vector Search Overview

On this page

  • What is Vector Search?
  • Key Concepts
  • Atlas Vector Search Indexes
  • Atlas Vector Search Queries
  • Use Cases
  • AI Integrations

You can use Atlas Vector Search to perform vector search on your data in Atlas. When you define an Atlas Vector Search index on your collection, you can seamlessly index vector data along with your other data and then perform vector search queries against the indexed fields.

Atlas Vector Search enables various search use cases, including semantic, hybrid, and generative search. By storing vector embeddings alongside your other data in Atlas, you can filter semantic search queries on other fields in your collection and combine semantic search with full-text search. In addition, you can leverage Atlas Vector Search in AI applications and integrate it with popular AI frameworks and services.

Atlas Vector Search is supported on Atlas clusters running MongoDB version 6.0.11, 7.0.2, or later.

Get Started with Atlas Vector Search

Note

For optimal performance, we recommend deploying separate search nodes for workload isolation. Search nodes support concurrent query execution to improve individual query latency. To learn more, see Review Deployment Options.

What is Vector Search?

Vector search is a search method that returns results based on your data's semantic, or underlying, meaning. Unlike traditional full-text search, which finds direct keyword matches, vector search finds vectors that are close to your search query in multi-dimensional space. The closer the vectors are to your query, the more similar they are in meaning.

By interpreting the meaning of your search query and data, vector search allows you to consider the searcher's intent and search context to retrieve more relevant results.

For example, if you searched for the term "red fruit," full-text search returns only data that explicitly contains these keywords. However, semantic search might return data that is similar in meaning, such as fruits that are red in color like apples or strawberries.

Key Concepts

vector

A vector is an array of numbers that represents your data in multiple dimensions. Vectors can represent any type of unstructured data, such as text, images, audio, and video. Semantic similarity is determined by measuring the distance between vectors.

Specifically, Atlas Vector Search uses dense vectors, which are high-dimensional vectors in which most or all elements are nonzero. Compared to sparse vectors, dense vectors pack more information into each dimension, which enables Atlas Vector Search to capture more complex semantic relationships.
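For illustration, the following minimal sketch shows that a vector is just an array of numbers and that similarity is measured by comparing distance or angle between vectors. The 3-dimensional vectors and their values are purely hypothetical; real embeddings typically have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Return the cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

apple = [0.9, 0.1, 0.3]        # hypothetical embedding for "apple"
strawberry = [0.8, 0.2, 0.35]  # hypothetical embedding for "strawberry"
truck = [0.1, 0.9, 0.7]        # hypothetical embedding for "truck"

print(cosine_similarity(apple, strawberry))  # higher score: closer in meaning
print(cosine_similarity(apple, truck))       # lower score: further apart in meaning
```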

vector embeddings

Vector embeddings are vectors that capture the semantic meaning of your data; the process of converting data into embeddings is sometimes called vectorization. You create these embeddings by passing your data through an embedding model, and you store them as a field in your Atlas collection.

Atlas Vector Search determines semantic similarity by identifying the vector embeddings that are closest in distance to your query vector. To learn more, see Atlas Vector Search Queries.
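As a sketch of this workflow, the example below generates an embedding and stores it as a field in a collection. It assumes the open-source sentence-transformers library as the embedding model; the connection string, database, collection, and field names are placeholders, not a prescribed schema.

```python
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

client = MongoClient("<your-atlas-connection-string>")
collection = client["sample_db"]["movies"]       # placeholder namespace
model = SentenceTransformer("all-MiniLM-L6-v2")  # one example embedding model

text = "A young farm boy joins a rebellion against a galactic empire."
embedding = model.encode(text).tolist()          # convert to a plain list of floats

# Store the embedding as a field alongside the rest of the document.
collection.insert_one({"plot": text, "plot_embedding": embedding})
```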

embedding model

Embedding models are machine learning models, trained on large corpora of data, that convert complex data into vector embeddings that encapsulate its semantic meaning.

These embeddings allow Atlas Vector Search to better understand relationships in your data and perform tasks like semantic search and retrieval. Depending on your data and task, different embedding models offer varying advantages.

Atlas Vector Search Indexes

To perform vector search on your data in Atlas, you must create an Atlas Vector Search index. Atlas Vector Search indexes are separate from your other database indexes and are used to efficiently retrieve documents that contain vector embeddings at query time. In your Atlas Vector Search index definition, you index the fields in your collection that contain your embeddings to enable vector search against those fields. Atlas Vector Search supports embeddings with up to 4096 dimensions.

You can also pre-filter your data by indexing any boolean, string, or numeric fields in your collection that you want to filter your Atlas Vector Search queries on. Filtering your data narrows the scope of your search and ensures that certain vector embeddings aren't considered for comparison.

To learn how to index fields for Atlas Vector Search, see How to Index Fields for Vector Search.
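The following sketch defines an Atlas Vector Search index programmatically with PyMongo (4.7 or later assumed). It reuses the placeholder field names from the earlier example: a 384-dimension "plot_embedding" vector field and a "genre" field indexed for pre-filtering.

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("<your-atlas-connection-string>")
collection = client["sample_db"]["movies"]

search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "plot_embedding",  # field that stores the embeddings
                "numDimensions": 384,      # must match the embedding model's output
                "similarity": "cosine"     # or "euclidean" / "dotProduct"
            },
            {
                "type": "filter",
                "path": "genre"            # field used to pre-filter queries
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=search_index_model)
```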

Atlas Vector Search supports approximate nearest neighbor (ANN) search with the Hierarchical Navigable Small Worlds (HNSW) algorithm. ANN optimizes for speed by approximating the most similar vectors in multi-dimensional space without scanning every vector. This approach is particularly useful for retrieving data from large vector datasets.

Atlas Vector Search Queries

Atlas Vector Search queries consist of aggregation pipeline stages where the $vectorSearch stage is the first stage in the pipeline. The process for a basic Atlas Vector Search query is as follows:

  1. You specify the query vector, which is the vector embedding that represents your search query.

  2. Atlas Vector Search uses ANN search to find vector embeddings in your data that are closest to the query vector.

  3. Atlas Vector Search returns the documents that contain the most similar vectors.

To customize your vector search query, you can pre-filter your data on fields that you've indexed by using an MQL match expression with supported comparison or aggregation operators, or you can add aggregation stages to further process and organize your results.
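The sketch below runs a basic $vectorSearch query with PyMongo, reusing the placeholder index name, fields, and embedding model from the earlier examples. The query text is embedded with the same model that produced the stored embeddings, and a pre-filter and a $project stage show how results can be narrowed and shaped.

```python
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

client = MongoClient("<your-atlas-connection-string>")
collection = client["sample_db"]["movies"]
model = SentenceTransformer("all-MiniLM-L6-v2")

query_vector = model.encode("space adventure about a rebellion").tolist()

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",       # name of the Atlas Vector Search index
            "path": "plot_embedding",      # indexed vector field
            "queryVector": query_vector,
            "numCandidates": 100,          # number of ANN candidates to consider
            "limit": 5,                    # number of documents to return
            "filter": {"genre": "Sci-Fi"}  # optional pre-filter on an indexed field
        }
    },
    {
        # Additional stages can shape the results, for example projecting the
        # similarity score returned by Atlas Vector Search.
        "$project": {"plot": 1, "score": {"$meta": "vectorSearchScore"}}
    },
]

for doc in collection.aggregate(pipeline):
    print(doc)
```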

To learn how to create and run Atlas Vector Search queries, see Run Vector Search Queries.

Atlas Vector Search supports both approximate nearest neighbor (ANN) search, described above, and exact nearest neighbor (ENN) search, which compares the query vector against every indexed vector and is useful for smaller datasets or highly selective filters.

Use Cases

By using Atlas as a vector database, you can use Atlas Vector Search to build natural language processing (NLP), machine learning (ML), and generative AI applications.

Specifically, you can implement Retrieval-Augmented Generation (RAG) by storing data in Atlas, using Atlas Vector Search to retrieve relevant documents from your data, and leveraging an LLM to answer questions on your data. You can implement RAG locally or by integrating Atlas Vector Search with popular frameworks and services. To learn more, see AI Key Concepts.
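As a high-level sketch of this pattern under the same placeholder setup as the earlier examples, the code below retrieves the most relevant documents with Atlas Vector Search and assembles them into a prompt for an LLM. The ask_llm() helper is hypothetical; substitute the client for whichever chat model you use.

```python
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

client = MongoClient("<your-atlas-connection-string>")
collection = client["sample_db"]["movies"]
model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k most semantically relevant plot texts for the question."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "plot_embedding",
                "queryVector": model.encode(question).tolist(),
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"_id": 0, "plot": 1}},
    ]
    return [doc["plot"] for doc in collection.aggregate(pipeline)]

question = "Which movies involve a rebellion in space?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = ask_llm(prompt)  # hypothetical call to the LLM of your choice
```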

AI Integrations

You can use Atlas Vector Search with popular chat and embedding models from AI providers such as OpenAI, AWS, and Google. MongoDB and partners also provide specific product integrations to help you leverage Atlas Vector Search in your generative AI and AI-powered applications. These integrations include built-in tools and libraries that enable you to build applications and implement RAG from start to finish.

For example, by integrating Atlas Vector Search with open-source frameworks such as LangChain and LlamaIndex, you can build applications that use popular LLMs to answer questions about your data.

To learn more and get started, see Integrate Vector Search with AI Technologies.
