You probably don't need another database.

Do you really need separate systems for caching, queues, search, documents, and vector embeddings when you already have Postgres?

The Gist

It started with a gist and a lively Hacker News thread. The premise was simple: Postgres isn't the best at everything, but it's good enough for most things. In practice, most teams run too many microservices and databases. It's all premature optimization: more operational overhead, more maintenance burden, more monitoring complexity, higher costs, harder tracing, and longer debugging sessions.

The Typical Pattern

You need caching, so you add Redis. Full-text search? Bolt on Elasticsearch. Background jobs? Another Redis instance, or Sidekiq on top of it. Documents with flexible schemas? Default to MongoDB. Analytics? Snowflake. Events? Reach for Kafka. Before long, your "simple" application talks to seven different data stores and microservices, each with its own deployment, backup strategy, failure modes, and 3 AM pages when they stop talking to each other. Each system adds operational surface area: monitoring, alerting, failover testing, security patching, version upgrades. And don't forget that you'll be maintaining all of it forever.

The "Best Tool for the Job"

[Diagram: the application connected to Redis, Postgres, Elastic, MongoDB, Snowflake, Kafka, Pinecone, Sidekiq, and InfluxDB]

Multiple systems to operate and monitor

With Postgres

[Diagram: the application connected to PostgreSQL alone]

One database. One backup strategy. One set of failure modes.

Postgres Is Enough

Before reaching for another database, check whether Postgres already covers the need; two of these patterns are sketched below the table:

You need...           You reach for...             But Postgres has...
Caching               Redis, Memcached             UNLOGGED tables, materialized views
Job queues            Redis + Sidekiq, RabbitMQ    SKIP LOCKED, pgmq, pgflow
Full-text search      Elasticsearch, Algolia       tsvector, pg_trgm, ParadeDB
Document store        MongoDB, CouchDB             JSONB, FerretDB
Vector search / AI    Pinecone, Weaviate           pgvector, pgvectorscale
Time-series data      InfluxDB, TimescaleDB        TimescaleDB, pg_partman
Analytics / OLAP      Snowflake, BigQuery          pg_analytics, DuckDB integration
Graph database        Neo4j, Neptune               Apache AGE, recursive CTEs
Geospatial            Specialized GIS systems      PostGIS
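
To make a couple of these rows concrete: the job-queue row maps to the well-known FOR UPDATE SKIP LOCKED pattern, where workers claim rows from an ordinary table instead of popping a Redis list. A minimal sketch, with illustrative table and column names:

```sql
-- A minimal job queue: an ordinary table plus row locks.
CREATE TABLE jobs (
  id         bigserial PRIMARY KEY,
  payload    jsonb       NOT NULL,
  status     text        NOT NULL DEFAULT 'pending',
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Each worker claims one pending job. SKIP LOCKED makes concurrent
-- workers skip rows another worker has already locked, instead of waiting.
WITH next_job AS (
  SELECT id
  FROM jobs
  WHERE status = 'pending'
  ORDER BY created_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET status = 'running'
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.id, jobs.payload;
```

The full-text row needs nothing beyond core Postgres either: a generated tsvector column plus a GIN index gives indexed, ranked search. Again a sketch, with illustrative names:

```sql
-- Full-text search: a generated tsvector column, indexed with GIN.
CREATE TABLE articles (
  id     bigserial PRIMARY KEY,
  title  text NOT NULL,
  body   text NOT NULL,
  search tsvector GENERATED ALWAYS AS (
    to_tsvector('english', title || ' ' || body)
  ) STORED
);

CREATE INDEX articles_search_idx ON articles USING gin (search);

-- Ranked search for articles matching both terms.
SELECT id, title, ts_rank(search, query) AS rank
FROM articles, to_tsquery('english', 'postgres & caching') AS query
WHERE search @@ query
ORDER BY rank DESC;
```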

"But Specialized Databases Are Better!"

Are they? At Google scale, maybe. But are you at Google scale? The reality is that many specialized databases rely on the same underlying algorithms that Postgres extensions now implement. ParadeDB's full-text search uses the same BM25 ranking as Elasticsearch. pgvector uses HNSW indexing, like the dedicated vector databases. For most applications the performance difference is negligible, while the operational complexity explodes.
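
For instance, the pgvector version of a nearest-neighbour search looks roughly like this, assuming the extension is installed (table name, vector dimension, and the $1 query parameter are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Embeddings live next to the rest of the row, in the same database.
CREATE TABLE documents (
  id        bigserial PRIMARY KEY,
  body      text NOT NULL,
  embedding vector(1536) NOT NULL  -- dimension depends on your embedding model
);

-- HNSW: the same approximate-nearest-neighbour index family that
-- dedicated vector databases are built around.
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- <=> is pgvector's cosine-distance operator; $1 is the query embedding.
SELECT id, body
FROM documents
ORDER BY embedding <=> $1
LIMIT 10;
```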

Notion chose Postgres. Netflix runs on Postgres. Even Instagram trusts Postgres. If companies serving millions of users trust "boring" technology, your startup can probably handle ten thousand users without a seven-database architecture.

When You Actually Need Something Else

This isn't about dogma. Sometimes you genuinely need specialized infrastructure. But the bar should be high: add it only after you've pushed Postgres to its limits, documented why it fell short, and accepted the operational cost of the alternative. Until then, every system you add is a bet that the benefit outweighs years of maintenance, monitoring, and debugging.