PostgreSQL 17 JSON vs MongoDB: Benchmark Reality Check
The Lead
On January 26, 2026, MongoDB published a benchmark comparing PostgreSQL JSONB and MongoDB BSON under an update-heavy workload. The setup was concrete: 256 concurrent users, 30 minutes, and roughly 13 million existing documents. PostgreSQL ran on AWS RDS m5.xlarge; MongoDB ran on Atlas M40 with auto-scale up to M50. MongoDB reported steadier throughput and lower tail-latency behavior as the test progressed.
That result is plausible. It is also narrower than many readers will infer.
PostgreSQL 17, released on September 26, 2024, materially improved its JSON story with SQL/JSON query functions, JSON_TABLE, and broad query-performance work, while retaining its core strength: JSON does not live in a separate universe. It lives inside a relational engine with joins, constraints, mature indexing, and ACID semantics across structured and semi-structured data in one system.
The real question is not “Which database is faster for JSON?” It is “Faster for which operation, under which storage model, with which indexing strategy, and at what operational cost?” The 2026 benchmark becomes useful as a case study in architectural tradeoffs — not as a verdict.
Bottom Line
The 2026 benchmark proves one thing: MongoDB leads on sustained partial-document updates at high concurrency. It does not prove PostgreSQL 17 is a worse JSON database overall — especially for mixed relational and document workloads.
PostgreSQL 17 vs MongoDB — Side-by-Side
This table covers the dimensions engineers actually care about when choosing between the two. The 2026 benchmark touched only one row of it.
| Dimension | PostgreSQL 17 (JSONB) | MongoDB (BSON) | Edge |
|---|---|---|---|
| Primary abstraction | Row + JSONB column | Document as first-class unit | MongoDB for pure-doc workloads |
| Partial field update | jsonb_set() — triggers row rewrite + WAL | $set — WiredTiger field-level mutation | MongoDB |
| Update-heavy benchmark (2026) | Slower, degrades over time | Steadier throughput, lower p99 | MongoDB |
| Read / filter performance | Strong — planner optimizes JSON paths | Strong — native document scans | Tie / workload-dependent |
| Joins | Native SQL — hash, merge, nested loop | $lookup in aggregation pipeline only | PostgreSQL |
| SQL / analytics | Full SQL + JSON_TABLE (PG17), window functions, CTEs | MQL / aggregation framework — no SQL | PostgreSQL |
| Transactions (ACID) | Full ACID, cross-table, multi-statement | Multi-document ACID since v4.0 | PostgreSQL (more mature) |
| JSON indexing | GIN, jsonb_path_ops, expression indexes | Compound, wildcard, Atlas Search (Lucene) | Tie — different strengths |
| Schema flexibility | Hybrid — relational + JSONB columns | Fully schemaless by default | MongoDB for free-form docs |
| Mixed relational + JSON | Native — one engine for all data | Requires separate RDBMS or denormalization | PostgreSQL |
| Horizontal scaling | Via Citus extension or distributed services (Aurora, Neon) | Native sharding built in | MongoDB |
| Operational complexity | One system — no cross-DB consistency work | Separate from RDBMS — dual failover, dual observability | PostgreSQL for unified stacks |
| License | PostgreSQL License — true open source | SSPL — source-available, restrictions apply | PostgreSQL |
| Managed services | AWS RDS, Aurora, Supabase, Neon, Cloud SQL | MongoDB Atlas (primary), Cosmos DB (API compat) | Both well-supported |
Architecture & Implementation
The gap starts with how each engine treats a “document.” In PostgreSQL, a JSON payload is stored as jsonb — a decomposed binary representation. PostgreSQL’s own documentation notes that jsonb is slower to ingest than raw json because of conversion overhead, but faster to process because reparsing is avoided and indexing is supported. That design is flexible and performant for query workloads, but it still lives inside PostgreSQL’s row-and-version model.
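That ingest-time normalization is easy to see in any psql session. A minimal illustration of the json/jsonb tradeoff described above:

```sql
-- jsonb parses and normalizes on ingest: whitespace is dropped and
-- duplicate keys collapse to the last value
SELECT '{"a": 1, "a": 2}'::jsonb;   -- {"a": 2}

-- json stores the text verbatim and reparses it on every access
SELECT '{"a": 1, "a": 2}'::json;    -- {"a": 1, "a": 2}
```

The conversion cost you pay once on jsonb ingest is what buys cheap repeated processing and index support later.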
That last part matters. An UPDATE locks the entire row, and MVCC rewrites the entire row version. If your application repeatedly mutates a small field inside a large indexed jsonb document, the API call may look surgical, but the storage consequences include:
- A new row version written to the heap
- Index maintenance across all relevant GIN indexes
- WAL record for the full new row
- Dead tuple accumulation requiring eventual autovacuum
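The dead-tuple side of that list is directly observable. A sketch of how to watch it during an update-heavy run, assuming the `events` table used in the examples below:

```sql
-- Observe the storage-side cost of "surgical" jsonb updates
SELECT relname,
       n_tup_upd,        -- cumulative row-version rewrites
       n_dead_tup,       -- dead tuples awaiting autovacuum
       last_autovacuum   -- when autovacuum last reclaimed them
FROM pg_stat_user_tables
WHERE relname = 'events';
```

If `n_dead_tup` climbs faster than autovacuum reclaims it during a sustained benchmark, you are watching the degradation curve the 2026 test reported.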
The API surface is document-friendly; the storage engine is still relational. PostgreSQL 17 makes that environment much more capable — JSON_TABLE lets teams project JSON into relational form inside the optimizer, and jsonb_path_ops indexes are often smaller and faster than generic GIN for containment-heavy patterns.
```sql
-- PostgreSQL 17: targeted JSON update
UPDATE events
SET payload = jsonb_set(payload, '{status}', '"processed"'::jsonb)
WHERE payload @> '{"type":"checkout"}'::jsonb;

-- More surgical: promote hot fields to real columns
ALTER TABLE events ADD COLUMN status text GENERATED ALWAYS AS (payload->>'status') STORED;
CREATE INDEX ON events(status);
```
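JSON_TABLE, mentioned above, is the other half of the PostgreSQL 17 story: it projects JSON into relational rows inside the planner, where joins and aggregates apply. A sketch against the same `events` table, assuming a hypothetical payload shape with an `items` array:

```sql
-- PostgreSQL 17 JSON_TABLE: turn a JSON array into relational rows
-- (assumed payload shape: {"type": ..., "items": [{"sku": ..., "qty": ...}]})
SELECT jt.sku, SUM(jt.qty) AS total_qty
FROM events,
     JSON_TABLE(payload, '$.items[*]'
       COLUMNS (
         sku text PATH '$.sku',
         qty int  PATH '$.qty'
       )) AS jt
WHERE payload @> '{"type":"checkout"}'::jsonb
GROUP BY jt.sku;
```

The result set is ordinary rows, so it can be joined against relational tables in the same query — the capability MongoDB approximates with $lookup.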
MongoDB starts from the opposite direction. The document is the primary abstraction, not an embedded type inside a row. Under WiredTiger, write operations use document-level concurrency control with optimistic conflict handling. The $set operator updates a field path directly — the hot path is natively aligned with workloads that mutate subdocuments frequently.
```javascript
// MongoDB: targeted field update — no full-doc rewrite overhead
db.events.updateOne(
  { type: "checkout" },
  { $set: { status: "processed" } }
)
```
Both systems support nested value updates, JSON indexing, and transactions. But PostgreSQL 17 optimizes for “JSON inside a general-purpose database,” while MongoDB optimizes for “the document as the primary unit of modeling, mutation, and concurrency.” Those are not equivalent premises — and most benchmark comparisons conflate them.
Benchmarks & Metrics
The 2026 MongoDB benchmark is valuable: it is real, recent, and explicit about its scenario. It is also limited in ways senior engineers should not ignore.
What the benchmark likely proves
- MongoDB has a structural advantage on repeated partial updates to large document-shaped records
- PostgreSQL JSONB can degrade over time in that workload as row rewrites, index churn, and dead tuple accumulation compound
- The longer the test runs, the more storage-engine behavior dominates over API similarity
What the benchmark does not prove
- It does not show a normalized cost comparison — Atlas auto-scaled from M40 to M50 mid-run while PostgreSQL stayed on m5.xlarge
- It does not establish superiority for reads, analytics, joins, or mixed transactional workloads
- It does not quantify how much PostgreSQL’s result came from schema shape, document size, index design, autovacuum tuning, or table bloat
- It does not test the effect of replacing generic GIN indexes with jsonb_path_ops or expression indexes on the hot field
- It does not test the hybrid model where frequently-mutated fields are promoted to real columns
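The jsonb_path_ops variant from that list is a one-line change worth testing before drawing conclusions. A sketch, assuming the benchmark's containment-style filter and the `events` table from earlier:

```sql
-- Containment-optimized GIN index: typically smaller and faster than
-- default GIN for @> probes, at the cost of key-existence (?) support
CREATE INDEX events_payload_path_ops
    ON events USING GIN (payload jsonb_path_ops);

-- Verify the planner uses it for the benchmark-style filter
EXPLAIN SELECT 1 FROM events
WHERE payload @> '{"type":"checkout"}'::jsonb;
```

Whether this closes part of the gap is an empirical question — which is exactly why an untested index strategy should not be read as a ceiling on PostgreSQL's performance.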
Metrics a credible benchmark must publish
If you want a benchmark to survive architecture review, throughput numbers alone are not enough. At minimum, collect and publish:
- p50, p95, and p99 latency sampled over time (not just averaged across the run)
- Write amplification — WAL volume on PostgreSQL vs oplog/journal on MongoDB
- Index size growth and GIN/WiredTiger cache-hit ratio over time
- CPU saturation and checkpoint/vacuum event timing
- Storage consumed per million updates
- Effective dollar cost of the achieved throughput on both platforms
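On the PostgreSQL side, the write-amplification number from that list can be captured without external tooling. A sketch (PostgreSQL 14+): snapshot WAL counters before and after the run and subtract:

```sql
-- Snapshot before the run, again after, and diff the two:
-- the delta in wal_bytes is total WAL generated by the workload
SELECT wal_records,  -- WAL records written
       wal_fpi,      -- full-page images (inflate after checkpoints)
       wal_bytes     -- total WAL volume in bytes
FROM pg_stat_wal;
```

Dividing the `wal_bytes` delta by updates performed gives WAL bytes per update — the direct counterpart to MongoDB's oplog/journal volume.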
There is also a PostgreSQL-specific modeling point. PostgreSQL’s JSON docs recommend keeping documents to a manageable size because whole-row locking can raise contention. A benchmark that stores mutation-heavy state in a single large jsonb blob may be measuring a modeling anti-pattern as much as a database limitation. Promoting hot keys to relational columns — and keeping colder flexible attributes in jsonb — can shift outcomes materially.
When to Choose Each
Choose MongoDB when:
- The application’s dominant unit is a mutable document — not a row with a JSON column
- Partial document updates are the hot path, not an occasional operation
- The access pattern is shallow and path-oriented (update field X in document Y)
- Your team has no existing relational data to bridge
- Horizontal sharding is required from day one
- Document API ergonomics matter more than SQL composability
Choose PostgreSQL 17 when:
- JSON is important but not the only data — you also have relational entities, foreign keys, and constraints
- You need analytical SQL, window functions, CTEs, or JOIN-heavy reporting
- Transactional consistency across multiple data types in one system is required
- You want one database to absorb multiple access styles without a migration in 18 months
- Operational simplicity (one failover plan, one observability stack, one backup system) is a priority
- License terms matter — PostgreSQL License is true open source; MongoDB SSPL is not
The hidden strategic question is organizational maturity. MongoDB often rewards teams certain that the document model is their long-term primary abstraction. PostgreSQL rewards teams that expect requirements to drift, prefer unified infrastructure, and want SQL available for the analytics queries that inevitably arrive.
Road Ahead
The next credible comparison should be broader and stricter. It should run three benchmark families — update-heavy, read-heavy, and mixed relational-document — and publish results for each separately. It should normalize hardware, disable auto-scaling unless both sides have it, and publish index definitions, document sizes, storage growth, and total cost.
Most importantly, benchmark the model you would actually ship. If you would never put a rapidly mutating status field inside a giant jsonb blob in production, do not benchmark PostgreSQL that way and call it representative.
The real 2026 story is not that one database embarrassed the other. It is that architecture keeps showing up in the benchmark graphs. PostgreSQL 17 narrowed the ergonomic gap for JSON and expanded SQL/JSON capability. MongoDB still owns a meaningful advantage where fine-grained document mutation is the center of gravity. Senior teams should treat that as useful signal — not brand validation.
Primary references: PostgreSQL 17 release notes · PostgreSQL 17 JSON types · PostgreSQL 17 JSON functions · MongoDB’s 2026 update-heavy benchmark · MongoDB $set docs · WiredTiger docs