You probably don't need another database.

Do you really need separate systems for caching, queues, search, documents, and vector embeddings when you already have Postgres?

The Gist

It started with a gist and a lively Hacker News thread. The premise was simple: Postgres isn't the best at everything, but it's good enough for most things. In practice, most teams are running too many microservices and databases, and most of it is premature optimization. What it buys you is more operational overhead, more maintenance burden, more monitoring complexity, higher costs, harder tracing, and longer debugging sessions.

The Typical Pattern

You need caching, so you add Redis. Full-text search? Bolt on Elasticsearch. Background jobs? Another Redis, or maybe Sidekiq. Documents with flexible schemas? Default to MongoDB. Analytics? Snowflake. Events? Reach for Kafka. Before long, your "simple" application talks to seven different data stores and microservices, each with its own deployment, backup strategy, failure modes, and 3 AM pages when they stop talking to each other. Each system adds operational surface area: monitoring, alerting, failover testing, security patching, version upgrades.

The "Webscale™" Stack

Application → Redis, Postgres, Elastic, MongoDB, Snowflake, Kafka, Pinecone, Sidekiq, InfluxDB

Nine systems to operate and monitor.

With Postgres

Application → PostgreSQL

One database. One backup strategy. One set of failure modes.

"But Postgres Isn't Webscale™!"

We hear this argument all the time. But what percentage of software projects ever actually reach so-called "webscale"? About 0.3%? For your stealth startup or SaaS, should you really be burning your innovation tokens on multiple microservices and databases instead of on the actual problem at hand?

If companies serving millions of users, like Notion, Netflix, and Instagram, trust "boring" technology, your startup can probably get by without a seven-database architecture. And if you ever do reach webscale and tap out Postgres's capabilities, you can bring in the additional pieces then, when they're truly needed.

Maybe Postgres Is Enough

Before reaching for another database, see whether what Postgres already offers can do the job:

You need...           You reach for...            But Postgres has...
Caching               Redis, Memcached            UNLOGGED tables, materialized views
Job queues            Redis + Sidekiq, RabbitMQ   SKIP LOCKED, pgmq, pgflow
Full-text search      Elasticsearch, Algolia      tsvector, pg_trgm, ParadeDB
Document store        MongoDB, CouchDB            JSONB, FerretDB
Vector search / AI    Pinecone, Weaviate          pgvector, pgvectorscale
Time-series data      InfluxDB, TimescaleDB       TimescaleDB, pg_partman
Analytics / OLAP      Snowflake, BigQuery         pg_analytics, DuckDB integration
Graph database        Neo4j, Neptune              Apache AGE, recursive CTEs
Geospatial            Specialized GIS systems     PostGIS
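To make the first few rows concrete, here is a minimal sketch of the caching, job-queue, and full-text-search patterns in plain Postgres. Table and column names are illustrative, not from any particular project; the sketch assumes Postgres 12+ for generated columns.

```sql
-- Caching: an UNLOGGED table skips write-ahead logging, trading
-- crash durability (the table is truncated on recovery) for speed.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb,
    expires_at timestamptz
);

-- Job queue: FOR UPDATE SKIP LOCKED lets many workers dequeue
-- concurrently without blocking on each other's locked rows.
CREATE TABLE jobs (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload jsonb,
    done    boolean NOT NULL DEFAULT false
);

-- Each worker atomically claims and completes one pending job:
WITH next_job AS (
    SELECT id FROM jobs
    WHERE NOT done
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
UPDATE jobs SET done = true
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.id, jobs.payload;

-- Full-text search: a generated tsvector column plus a GIN index.
CREATE TABLE articles (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body text,
    tsv  tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
);
CREATE INDEX ON articles USING GIN (tsv);

SELECT id
FROM articles
WHERE tsv @@ websearch_to_tsquery('english', 'boring technology');
```

None of this matches a dedicated system feature-for-feature, but it ships with the database you already run, and it's covered by the backup and monitoring you already have.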

When You Actually Need Something Else

This isn't about dogma. Sometimes you genuinely need specialized infrastructure. But the bar should be high: only after pushing Postgres to its limits, documenting why it was insufficient, and accepting the operational cost of the alternative. Until then, every system you add is a bet that the benefit outweighs years of maintenance, monitoring, and debugging.