Qdrant MCP Server
Official Qdrant vector database MCP server. Acts as a semantic memory layer on top of Qdrant: store information with metadata, retrieve via similarity search. Two tools, a very small surface area, exceptionally maintained by the Qdrant team. Pushed two days ago with 2 commits in the last 30 days; 6 releases shipped, 0.8.x line stable. The simplest path to durable agent memory backed by a real vector database. Configurable embedding provider (fastembed by default, with sentence-transformers/all-MiniLM-L6-v2 as the default model). Supports both remote Qdrant clusters via QDRANT_URL and local databases via QDRANT_LOCAL_PATH. Read-only mode available via QDRANT_READ_ONLY for query-only deployments. FastMCP-based, distributed via PyPI as mcp-server-qdrant.
INSTALL THIS SERVER
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": [
        "mcp-server-qdrant"
      ],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "agent-memory",
        "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
      }
    }
  }
}
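For a local, query-only deployment, the same entry can point at an on-disk database instead of a cluster. This is a sketch assuming the env vars described above (QDRANT_LOCAL_PATH, QDRANT_READ_ONLY); the path and the "true" value are illustrative, and QDRANT_LOCAL_PATH replaces QDRANT_URL rather than combining with it:

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": [
        "mcp-server-qdrant"
      ],
      "env": {
        "QDRANT_LOCAL_PATH": "/path/to/qdrant-db",
        "QDRANT_READ_ONLY": "true",
        "COLLECTION_NAME": "agent-memory",
        "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
      }
    }
  }
}
```

With this variant, vector data never leaves the host and the LLM can only query, not write.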
2 TOOLS AVAILABLE
OUR ASSESSMENT
- Official Qdrant org publication.
- Tiny tool surface (qdrant-store, qdrant-find) keeps the agent tool selection clean.
- Configurable embedding model via EMBEDDING_PROVIDER and EMBEDDING_MODEL.
- Read-only mode via QDRANT_READ_ONLY for query-only deployments.
- Supports remote and local Qdrant via QDRANT_URL or QDRANT_LOCAL_PATH.
- FastMCP-based with all FastMCP environment variables available.
- Apache-2.0 license.
- 6 releases shipped; 1,373 stars and 267 forks.
- Two tools only; teams wanting collection management or batched operations need the Qdrant client SDK.
- Latest release v0.8.1 from December 2025; release cadence is slower than commit cadence.
- Embedding provider currently locked to fastembed family.
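For a sense of what the two-tool surface looks like on the wire, here is a hedged sketch of an MCP tools/call request against qdrant-store. The argument names (information, metadata) and the payload values are assumptions based on typical usage of this server, not verified against its published schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "qdrant-store",
    "arguments": {
      "information": "The staging cluster runs Qdrant behind nginx on port 6333.",
      "metadata": { "topic": "infrastructure" }
    }
  }
}
```

A qdrant-find call would take the same shape with a natural-language query argument, returning the stored entries closest in embedding space.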
Data is stored in plaintext within Qdrant (encryption at rest is a Qdrant configuration concern). For multi-tenant deployments, scope each agent to a dedicated COLLECTION_NAME. QDRANT_API_KEY authenticates against the Qdrant cluster; rotate it on credential exposure. For local-only operation use QDRANT_LOCAL_PATH so vector data stays on the host.
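The per-tenant scoping above can be sketched as separate server entries with distinct collections. Server names, collection names, the internal URL, and the API-key placeholders are illustrative, not prescribed by the project:

```json
{
  "mcpServers": {
    "qdrant-agent-a": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "https://qdrant.internal:6333",
        "QDRANT_API_KEY": "<agent-a-api-key>",
        "COLLECTION_NAME": "agent-a-memory"
      }
    },
    "qdrant-agent-b": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "https://qdrant.internal:6333",
        "QDRANT_API_KEY": "<agent-b-api-key>",
        "COLLECTION_NAME": "agent-b-memory"
      }
    }
  }
}
```

Each agent then reads and writes only its own collection, so one tenant's memory never surfaces in another's similarity searches.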
Teams already running Qdrant for production vector search, agents needing durable queryable memory across sessions, and read-only knowledge bases where the LLM operates in query-only mode.
TECHNICAL DETAILS
ADOPTION METRICS
1,373 stars and 267 forks confirm it as the canonical Qdrant integration path. The Smithery badge in the README amplifies discovery.
Second-ranked in the ai-ml category. A cleaner Tier 1 trade-off than Pinecone for evaluation: more stars, more forks, freshly maintained.
SOURCES & VERIFICATION
We don't take any single directory's word for it. Before scoring, we cross-reference 5 public MCP sources, install the server ourselves against the clients we cover, and record when we last re-verified.
The same server, 5 different lenses. We reconcile these signals into our editorial score, which is why our number sometimes diverges from a directory-aggregate star count.
| Source | Their rating | Their star count | Their downloads | Last synced |
|---|---|---|---|---|
| AutomationSwitch (this page) | 4.5 (editorial) | 1,373 | — | APR 29, 2026 |
| PulseMCP | — unrated | unavailable | unavailable | APR 29, 2026 |
| Smithery | — unrated | unavailable | unavailable | APR 29, 2026 |
| Glama | — unrated | unavailable | unavailable | APR 29, 2026 |
| MCP.so | — unrated | unavailable | unavailable | APR 29, 2026 |
| Official MCP Registry | — unrated | unavailable | unavailable | APR 29, 2026 |
// Counts are directory-reported; we don't adjust them. Discrepancies usually come from different snapshot times or star-caching.
OTHER AI / ML MCP SERVERS
Codebase Memory MCP
High-performance code intelligence MCP server for AI coding agents. Indexes a codebase into a queryable knowledge graph in milliseconds, with 14 MCP tools spanning structural search, call-chain tracing, impact analysis, dead-code detection, and Cypher queries. Single static C binary, 66 languages via tree-sitter, zero runtime dependencies.
ElevenLabs MCP
Official ElevenLabs MCP server. Wraps the full ElevenLabs API surface: text-to-speech, voice cloning, speech-to-text, dubbing, sound effect generation, audio isolation, voice design. MIT-licensed, distributed via PyPI as elevenlabs-mcp. Free tier with 10,000 credits per month.
Pinecone Developer MCP
Official Pinecone MCP for developer workflows. Lets coding assistants search Pinecone documentation, list and configure indexes, generate code informed by index data, and upsert/query records during dev iteration. Apache-2.0, npm-distributed as @pinecone-database/mcp.
DISCUSS YOUR MCP REQUIREMENTS
Whether you're evaluating a server, scoping an internal deployment, or working out whether MCP is the right fit at all, start the conversation and we will point you at the right piece of the ecosystem.