Approximate Nearest Neighbor Search with Annoy

As we navigate deeper into the realm of machine learning and natural language processing, the need for efficient search mechanisms becomes paramount. Imagine sifting through millions of document embeddings or images to find the ones most similar to a given input. Traditional exact search methods can be computationally intensive and slow.

Enter the world of Approximate Nearest Neighbor (ANN) algorithms, where speed and scalability are at the forefront.

The Need for Approximate Nearest Neighbor Algorithms

While exact nearest neighbor search can guarantee the most similar items, it often does so at the cost of performance, especially with large datasets in high-dimensional spaces. This is where approximate methods shine:

  • Speed: ANN algorithms can find neighbors incredibly fast, making them suitable for real-time applications.
  • Scalability: They can handle vast datasets, as they don't need to scan every item in the dataset.
  • Reduced Computational Overhead: By allowing small compromises on accuracy, they significantly reduce the computation required.

However, with various ANN algorithms available, why should one consider Annoy?

Introduction to Annoy

Annoy, which stands for "Approximate Nearest Neighbors Oh Yeah", is a library that strikes a practical balance between speed and accuracy. What sets it apart are features that cater to modern distributed computing needs.

Salient Features of Annoy:

  • Static Files as Indexes: Annoy serves queries from a static index file, which it memory-maps on load. A single index file can therefore be shared across different processes, decoupling index creation from index loading.

  • Minimal Memory Footprint: In scenarios with millions or even billions of vectors, memory usage becomes critical. Annoy ensures the indexes are compact.

  • Built for Production: The architecture of Annoy is such that you can easily pass around and distribute static index files, making it apt for production environments, Hadoop jobs, and more.

  • Real-world Usage: Annoy was developed at Spotify, where it powers music recommendations. After matrix factorization, every user and item is represented as a vector, and Annoy efficiently finds similar users or items even across millions of tracks in high-dimensional space.

The snippet below builds an Annoy index over text chunks stored in a SQLite database, encoding each chunk with a Sentence Transformer model:

import sqlite3
from annoy import AnnoyIndex
from sentence_transformers import SentenceTransformer

# Load the SentenceTransformer model for encoding text chunks
model = SentenceTransformer("all-MiniLM-L6-v2")

# Dimensions for the vector embeddings (adjust based on model's output)
VEC_INDEX_DIM = 384

# Initialize the Annoy index for storing and searching vectors
vec_index = AnnoyIndex(VEC_INDEX_DIM, 'angular')

# Connect to the SQLite database
conn = sqlite3.connect('chunks.db')
cursor = conn.cursor()

# Fetch the first 100 rows of chunk IDs and text
cursor.execute("SELECT chunk_id, chunk_text FROM pdf_chunks LIMIT 100")
rows = cursor.fetchall()

# For each row in the batch, encode the text chunk and add it to the Annoy index
for chunk_id, chunk_text in rows:
    embedding = model.encode(str(chunk_text))
    vec_index.add_item(chunk_id, embedding)

# Close the SQLite database connection
conn.close()

# Build the Annoy index with 100 trees (more trees improve accuracy
# at the cost of build time and index size)
vec_index.build(100)

# Save the built index to a file for future use
vec_index.save("./vecindex.ann")
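The script above indexes a fixed batch of 100 rows. For larger tables, a common pattern is to page through the results with LIMIT/OFFSET. Here is a minimal sketch of that loop, using an in-memory database with the same assumed `pdf_chunks` schema in place of `chunks.db`:

```python
import sqlite3

# In-memory stand-in for chunks.db, using the same assumed schema
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pdf_chunks (chunk_id INTEGER PRIMARY KEY, chunk_text TEXT)")
con.executemany("INSERT INTO pdf_chunks VALUES (?, ?)",
                [(i, f"chunk {i}") for i in range(250)])

BATCH_SIZE = 100
offset = 0
processed = 0
while True:
    rows = con.execute(
        "SELECT chunk_id, chunk_text FROM pdf_chunks "
        "ORDER BY chunk_id LIMIT ? OFFSET ?",
        (BATCH_SIZE, offset),
    ).fetchall()
    if not rows:
        break
    for chunk_id, chunk_text in rows:
        processed += 1  # here you would encode the text and call add_item
    offset += BATCH_SIZE
con.close()
print(processed)  # 250: every row visited exactly once
```

The `ORDER BY` clause matters: without a stable ordering, SQLite makes no guarantee that successive OFFSET windows cover each row exactly once.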

Querying an Annoy Vector Index with Text Embeddings

Vector search is a powerful way to find items with similar characteristics. In the realm of text processing, these characteristics are usually captured as embeddings – high-dimensional vectors that encapsulate semantic information. By querying a vector index like Annoy with a text's embedding, you can quickly find texts that are semantically similar.

Here's how to harness this capability:

from annoy import AnnoyIndex
import sqlite3
from sentence_transformers import SentenceTransformer

# Load the SentenceTransformer model.
# We'll use this model to convert our query text into a vector embedding.
model = SentenceTransformer('all-MiniLM-L6-v2')
VEC_INDEX_DIM = 384

u = AnnoyIndex(VEC_INDEX_DIM, 'angular')
u.load("/kaggle/working/vecindex.ann")

# Generate the embedding for the query text using the same
# SentenceTransformer model
text = "zero-shot prompting"
embedding = model.encode([text])
input_vec = embedding[0]

# Retrieve the IDs of the 10 most similar text chunks.
# search_k=-1 uses the default search budget (roughly n_trees * n).
chunk_ids = u.get_nns_by_vector(input_vec, 10, search_k=-1, include_distances=False)
print(chunk_ids)

# Retrieve the actual text chunks from the SQLite database.
# First, establish a connection:
con = sqlite3.connect("/kaggle/working/chunks.db")
cur = con.cursor()

# Then, fetch the matching chunks by ID with a parameterized query:
placeholders = ','.join('?' for _ in chunk_ids)
cur.execute(
    f"SELECT chunk_id, chunk_text FROM pdf_chunks WHERE chunk_id IN ({placeholders})",
    chunk_ids,
)
res = cur.fetchall()

for chunk_id, chunk_text in res:
    print(chunk_text)
    print("----------")

By following this process, you can easily search for and retrieve texts semantically similar to a given input, even from a vast dataset, in a matter of milliseconds!

Conclusion

The age of big data necessitates tools and techniques that can handle the volume, velocity, and variety of information.

While traditional search algorithms have their place, when it comes to large datasets in high-dimensional spaces, approximate methods like Annoy emerge as the frontrunners.

Whether you're recommending songs, finding similar images, or sifting through document embeddings, Annoy offers a scalable and efficient solution, ensuring your applications remain swift and responsive.