BASEL.FM

Personal dispatches & reflections

I Built a Qdrant Exporter in an Afternoon

Qdrant is a vector database. Multiple projects in my stack use it, each with different embeddings, different collections, and different read/write patterns. It's a shared surface, and like any shared surface, you want to see what's happening on it.

I was building a Grafana dashboard for a deployment project. Every other service had a Prometheus exporter. Qdrant didn't. There's no official one.

So I built it.


Starting with Research

Before writing any code I opened Claude and started asking questions. What does Qdrant expose by default? What endpoints exist? What data is already there?

Turns out Qdrant ships with built-in HTTP endpoints that return operational data. Collection statistics, cluster health, segment info, telemetry. The raw data is there. It's just not in Prometheus format, and each piece lives on a different endpoint with its own JSON shape. You can't point Prometheus at Qdrant directly.

That's the gap. A Prometheus exporter sits in between, polls those endpoints, and serves everything in a format Prometheus understands. Claude helped me map out which endpoints had what, so I knew exactly what I was working with before writing a single line.
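As a rough sketch of that mapping, here are the kinds of endpoints involved. The paths follow Qdrant's REST API, but verify them against your version; the URL and port are assumptions for a local default setup:

```python
import json
import urllib.request

QDRANT_URL = "http://localhost:6333"  # Qdrant's default HTTP port; adjust for your setup

# Endpoints that expose operational data, each with its own JSON shape:
ENDPOINTS = {
    "collections": "/collections",  # list of collections
    "cluster": "/cluster",          # node and cluster status
    "telemetry": "/telemetry",      # runtime telemetry
}

def fetch(path: str) -> dict:
    """GET one Qdrant endpoint and parse its JSON body."""
    with urllib.request.urlopen(f"{QDRANT_URL}{path}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the URLs an exporter would poll (no live instance needed here).
    for name, path in ENDPOINTS.items():
        print(name, "->", f"{QDRANT_URL}{path}")
```

An exporter loops over a map like this on every scrape, normalizes each response, and serves the combined result on one endpoint.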


Designing What I Actually Needed

Before touching code I took pen and paper and listed the metrics I actually cared about.

Not everything Qdrant exposes. The things I'd want to see on a real dashboard:

  • Vectors per collection
  • Points count vs. indexed count ratio
  • Segment counts
  • Memory usage per collection
  • Cluster node health
  • Payload indexes

The question for each metric was simple: what does this tell me, and what action does it prompt? If a metric doesn't change my behavior, it doesn't go on the dashboard. That kept the list honest.


Building It with Claude Code

Once I had the metrics list, I opened Claude Code and described the whole thing. The Qdrant endpoints I'd found, the metrics I'd written down, Python as the backend, Prometheus format for the output. I described what I wanted the exporter to do, step by step.

Claude Code built it. It handled the endpoint polling, the JSON parsing, the metric registration, all of it. I stayed in the loop reviewing and testing, but the actual construction happened fast. That's the part that would have taken me most of the afternoon on my own. With Claude Code, it was done in under an hour.

I tested it locally against a running Qdrant instance. The data was there. The format was right. Prometheus could scrape it.
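The core of that work is translating each JSON response into Prometheus exposition format. A minimal sketch of the idea, with metric names and a trimmed sample payload that are illustrative rather than the exporter's actual output:

```python
def collection_metrics(name: str, info: dict) -> list[str]:
    """Map one collection-info payload (the shape returned by Qdrant's
    GET /collections/{name}, trimmed here) to Prometheus exposition lines."""
    r = info["result"]
    label = f'collection="{name}"'
    return [
        f'qdrant_collection_points{{{label}}} {r["points_count"]}',
        f'qdrant_collection_indexed_vectors{{{label}}} {r["indexed_vectors_count"]}',
        f'qdrant_collection_segments{{{label}}} {r["segments_count"]}',
    ]

# Sample payload with only the fields this sketch cares about:
sample = {"result": {"points_count": 1200,
                     "indexed_vectors_count": 1100,
                     "segments_count": 4}}

for line in collection_metrics("docs", sample):
    print(line)
```

Each collection becomes a label value rather than its own metric, which is what lets Grafana group and filter per collection later.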


Making It Plug and Play

A script that runs on my machine isn't a product. I wanted something anyone could use in five minutes: pull an image, set two environment variables, and be done.

I asked Claude Code to containerize the exporter into a Docker image with everything included. Configuration via environment variables so nothing is hardcoded. The whole setup is on Docker Hub.

The idea was simple. You point it at your Qdrant instance, point Prometheus at the exporter, and the metrics start flowing. No changes to Qdrant. No changes to your application. It just slots in.
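In concrete terms, the five-minute setup looks roughly like this. The image name, port, and variable names below are placeholders, not the published ones:

```shell
# Placeholders: substitute the real image name and your Qdrant endpoint.
docker pull <dockerhub-user>/qdrant-exporter:latest

docker run -d --name qdrant-exporter \
  -e QDRANT_URL="http://qdrant:6333" \
  -p 9187:9187 \
  <dockerhub-user>/qdrant-exporter:latest
```

From there, Prometheus scrapes the exporter's port like any other target.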


The Grafana Dashboard

A Prometheus exporter without a dashboard is half a product. I described to Claude what I wanted: good layout, good colors, each metric displayed in the right way.

That last part mattered. Vector count over time belongs in a time series chart. Indexed vs. total vectors is a ratio — it belongs as a stat panel with color thresholds so you can see at a glance if indexing is falling behind. Segment count per collection reads cleaner as a bar chart. Not every metric is a graph, and putting everything in a line chart is the fastest way to make a dashboard nobody looks at.
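For instance, that indexed-vs-total stat panel can be driven by a ratio expression like this. The metric names are illustrative, not necessarily what the exporter emits:

```promql
# Fraction of points already indexed, per collection (1.0 = caught up):
sum by (collection) (qdrant_collection_indexed_vectors)
  /
sum by (collection) (qdrant_collection_points)
```

With thresholds at, say, green above 0.95 and red below 0.8, a lagging index is visible without reading a single number.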

Claude generated the full Grafana JSON. I imported it, reviewed it, tweaked a few things, and published it.

The result is the Qdrant Observatory dashboard on Grafana. Import it, point it at your Prometheus datasource, done.


CI/CD So It Stays Maintained

The last thing I wanted to sort out was the release process. Manually building and pushing a Docker image every time I made a change is the kind of friction that quietly kills side projects.

I described the setup to Claude Code: a GitHub Actions pipeline that builds the image on every tagged release and pushes it to both Docker Hub and the GitHub Container Registry automatically. Two registries because different environments pull from different places, and keeping both in sync costs nothing once the pipeline is wired up.

One push to a new tag and both registries update. That's the right amount of automation for this kind of tool.
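The skeleton of such a pipeline looks roughly like this. Action versions and the `<user>` placeholder are assumptions, not the project's actual workflow file:

```yaml
# Sketch: tag-triggered build pushed to Docker Hub and GHCR.
name: release
on:
  push:
    tags: ["v*"]
jobs:
  build-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: |
            <user>/qdrant-exporter:${{ github.ref_name }}
            ghcr.io/<user>/qdrant-exporter:${{ github.ref_name }}
```

Listing both registries in one `tags` block is what keeps them in sync for free.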


What This Speed Actually Feels Like

There's something worth naming here. An afternoon to go from "this tool doesn't exist" to a published Docker image, a live Grafana dashboard, and an automated release pipeline. That's not normal. That used to be a weekend project at best, more realistically a full week if you had other things going on.

With the right AI tools and the right way of using them, it genuinely feels like a superpower. Not because the AI does the thinking; it doesn't. You still have to know what you want, design the metrics, make the calls about how to present data. But the distance between an idea and a working thing collapses. The construction part, which used to be most of the time, becomes fast. The thinking part, which is actually where the value is, gets more of the time.

The result: within one week of publishing, the Qdrant Exporter had over 500 downloads across Docker Hub and GHCR combined. People were pulling it. Using it. That's not viral, but it's real, and it happened fast because the tool exists and works and does one thing well.

The lesson isn't "use AI to build fast." The lesson is: build something that's genuinely useful, keep it simple, and make it easy for people to start. The speed just means you can actually finish the thing instead of abandoning it halfway through.


Wrapping Up

The whole thing took an afternoon. Research with Claude, metric design on paper, exporter built with Claude Code, Docker image, Grafana dashboard, CI/CD. None of the individual steps were complicated. What made it fast was having AI handle the construction while I stayed focused on what the tool should actually do.

The full source is on GitHub. The image is on Docker Hub. The project site has the full setup walkthrough.

If you're running Qdrant and wanted this, now it exists.


Have questions or ran into something I didn't cover? Feel free to reach out.