Performance Improvements — Mar 4, 2026
Summary
This document covers every performance-related change made from commit 75ac55fabf162c8659e14b3212712b526697b8c3 (Mar 4, 2026) to the current HEAD of the dev branch. The work was done in a concentrated burst across a few days and can be grouped into six phases:
- Parallelizing sequential database queries (`Promise.all`)
- Adding `.lean()` to Mongoose queries
- Adding MongoDB compound indexes
- Tuning the MongoDB connection pool
- Introducing Redis caching and batched data loading
- Slimming down API response payloads (lightweight mappers/DTOs)
Improvements List (Date + Name)
| Date | Improvement Name |
|---|---|
| 2026-03-04 | Parallelize mapper/database queries with Promise.all |
| 2026-03-04 | Add .lean() to Mongoose read queries |
| 2026-03-04 | Add compound MongoDB indexes |
| 2026-03-04 | Start MongoDB connection pool tuning |
| 2026-03-06 | Slim ticket validation and event payload flow |
| 2026-03-09 | Add Redis cache layer for event and participant preview |
| 2026-03-09 | Add batched tier data loading (getEventTierBatchData) |
| 2026-03-09 | Split tiers into dedicated endpoint GET /event/:url/tiers |
| 2026-03-09 | Add Redis-based event views batching job |
| 2026-03-09 | Add k6 stress test and finalize pool tuning |
Motivation / Purpose
- Problem: The app-api was suffering from slow response times on critical endpoints, especially `GET /event/:url` (the event detail page) and `GET /event/:id/participants/preview`. These endpoints are hit on every page load by every user, and they were executing many sequential database queries, each waiting for the previous one to finish.
- Who it affects: All end-users of `app-console` (the public-facing app), and indirectly `hub-console` and `bko-console`.
- Goal: Reduce p95 latency on the event detail page and related endpoints, and prepare the infrastructure to handle high-concurrency scenarios (up to 3000 concurrent users).
Phase 1: Parallelizing Sequential Database Queries
Commits: ad4214af, 0af8147d
What changed
The eventMapper, cartMapper, and userMapper functions were executing database queries one after another in sequence. Each query had to wait for the previous one to complete before starting, even though the queries were completely independent of each other.
Decision: Use Promise.all to run independent queries in parallel
eventMapper (apps/app-api/src/event/eventMapper.ts)
Before — 7 sequential queries:
```typescript
const producerResponse = await ProducerModel.findOne({ ... });
// wait...
const subscriptionResponse = await SubscriptionModel.findOne({ ... });
// wait...
const participantsCount = await ParticipantModel.countDocuments({ ... });
// wait...
const tiersAvailable = await eventTierGetAvailables(event, tierNanoIds);
// wait...
const isParticipant = await ParticipantModel.exists({ ... });
// wait...
const isMainEvent = await EventModel.exists({ ... });
// wait...
const lastOrder = await OrderModel.findOne({ ... });
```
After — All 7 queries run simultaneously:
```typescript
const [
  producerResponse,
  subscriptionResponse,
  participantsCount,
  tiersAvailable,
  isParticipant,
  isMainEvent,
  lastOrder,
] = await Promise.all([
  ProducerModel.findOne({ ... }).lean(),
  user ? SubscriptionModel.findOne({ ... }).lean() : Promise.resolve(null),
  ParticipantModel.countDocuments({ ... }),
  eventTierGetAvailables(event, tierNanoIds),
  user ? ParticipantModel.exists({ ... }) : Promise.resolve(null),
  EventModel.exists({ ... }),
  OrderModel.findOne({ ... }).sort({ createdAt: -1 }).limit(1).lean(),
]);
```
Why: If each query takes ~50ms, running 7 sequentially takes ~350ms, while running them in parallel takes only as long as the slowest one (~50ms). This alone could cut the event mapper time by roughly 7x.
Additional optimization: The getBadges() function was async because it queried OrderModel for the last order internally. Since the last order is now fetched in the top-level Promise.all, getBadges() became a synchronous function — no more unnecessary async overhead.
cartMapper (apps/app-api/src/cart/cartMapper.ts)
Same pattern applied. The cart mapper had 5 sequential queries for user, event, producer plan, order, and product. These were consolidated into a single Promise.all block. Then a second Promise.all was added to run the dependent mappers (event, user, order, product, producerPlan, coupon, tickets) in parallel as well.
Before: ~5 sequential DB queries + ~5 sequential mapper calls = ~10 serial awaits. After: 2 parallel blocks (one for raw data, one for mapped data).
userMapper (apps/app-api/src/user/userMapper.ts)
Similar refactoring — sequential queries were grouped into Promise.all.
Decision: Pass subscriptionResponse to eventTierMapper
Previously, each tier mapper independently queried SubscriptionModel to check if the user had an active subscription. Since the event mapper already fetches this once, the subscription response is now passed as a parameter to eventTierMapper, avoiding N redundant queries (one per tier).
Phase 2: Adding .lean() to Mongoose Queries
Commits: ad4214af, 0af8147d
What changed
`.lean()` was added to virtually every `findOne`, `find`, and `findOneAndUpdate` query that didn't need Mongoose document methods.
Why
By default, Mongoose wraps every query result in a full Mongoose Document instance — with change tracking, getters/setters, validation, and .save() methods. This has significant memory and CPU overhead. .lean() returns plain JavaScript objects instead, which are:
- Faster to create (no prototype chain setup)
- Use less memory (no internal Mongoose state tracking)
- Faster to serialize (no getter interception)
This is especially impactful when mapping large result sets (e.g., event tiers, cart tickets).
Phase 3: Adding MongoDB Compound Indexes
Commit: d029afbd
What changed
Compound indexes were added to the most frequently queried collections:
| Collection | Index Fields | Purpose |
|---|---|---|
| Cart | { eventId: 1, orderId: 1, substitutedAt: 1, removedAt: 1, createdAt: -1 } | Cart lookups by event, filtering active carts |
| Order | { eventId: 1, status: 1, removedAt: 1, createdAt: -1 } | Finding recent paid orders for badges |
| Subscription | { producerId: 1, userId: 1, status: 1, removedAt: 1 } | Checking active subscriptions per user/producer |
| Ticket | Compound index on eventId, tierId, removedAt | Counting tickets per tier |
| Event | Single-field indexes on producerId and mainEventId | Producer lookups and main event checks |
Why
Without indexes, MongoDB performs collection scans — reading every document in the collection to find matches. With compound indexes, MongoDB can use an index scan that jumps directly to matching documents. For collections with hundreds of thousands of documents (tickets, orders, carts), this reduces query time from seconds to milliseconds.
The index field order follows the ESR rule (Equality → Sort → Range): equality filters first, then sort fields, then range filters.
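For reference, the compound key patterns above can be written out as the objects passed to Mongoose's `schema.index()`. This is a sketch of the shape only, not the exact source:

```typescript
// Key patterns from the table above, as they would be passed to schema.index().
// Per the ESR rule, equality-filtered fields come first and the sort field
// (createdAt: -1) comes last.
const cartIndex = { eventId: 1, orderId: 1, substitutedAt: 1, removedAt: 1, createdAt: -1 };
const orderIndex = { eventId: 1, status: 1, removedAt: 1, createdAt: -1 };
const subscriptionIndex = { producerId: 1, userId: 1, status: 1, removedAt: 1 };
```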
writeConcern fix
The writeConcern object had a typo: W: 'majority' (uppercase W) was corrected to w: 'majority' (lowercase w). The uppercase version was silently ignored by MongoDB, meaning writes were using the default write concern instead of majority — a correctness issue, not just performance.
Phase 4: Tuning the MongoDB Connection Pool
Commits: d029afbd, bae9e74b, f7d4d414, 420a43a9, 0de18561, 65221c19, be677e8c
What changed
The MongoDB connection was previously created with only { dbName: 'nittio' } — using all Mongoose defaults. A defaultOptions object was introduced and iteratively tuned:
| Parameter | Default (Mongoose) | Final Value | Purpose |
|---|---|---|---|
| `maxPoolSize` | 100 | 3000 | Max simultaneous connections to MongoDB |
| `minPoolSize` | 0 | 100 | Pre-warmed connections always ready |
| `socketTimeoutMS` | 0 (infinite) | 45000 | Kill stuck sockets after 45s |
| `connectTimeoutMS` | 30000 | 10000 | Fail fast on initial connection |
| `serverSelectionTimeoutMS` | 30000 | 15000 | Fail fast if no server available |
| `maxIdleTimeMS` | 0 (infinite) | 120000 | Close idle connections after 2min |
| `waitQueueTimeoutMS` | — | 10000 | Fail fast if pool is exhausted |
Evolution of pool size
The pool size was tuned iteratively during load testing:
- `d029afbd`: `maxPoolSize: 50`, `minPoolSize: 10` — initial conservative values
- `bae9e74b`: Reverted (was testing)
- `f7d4d414`: `maxPoolSize: 200`, `minPoolSize: 20` — increased after seeing pool exhaustion
- `420a43a9`: `maxPoolSize: 600`, `minPoolSize: 30` — still seeing timeouts under load
- `0de18561`: `maxPoolSize: 1000`, `minPoolSize: 50` — approaching production needs
- `65221c19`: `maxPoolSize: 2000`, `minPoolSize: 50` — stress test showed need for more
- `be677e8c`: `maxPoolSize: 3000`, `minPoolSize: 100` — final value for 32GB/32vCPU infra
Why
When the app receives many concurrent requests, each request needs a MongoDB connection from the pool. If maxPoolSize is too low, requests queue up waiting for a free connection — adding latency. The waitQueueTimeoutMS ensures that if the pool is fully exhausted, requests fail fast with a clear error instead of hanging indefinitely.
The minPoolSize ensures there are always pre-warmed connections ready, avoiding the overhead of establishing new TCP connections + TLS handshakes during traffic spikes.
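Putting the final values together, the options object passed to `mongoose.connect()` can be sketched as follows (values come from the tuning table above; the exact shape of `connectMongo.ts` may differ):

```typescript
// Final pool settings from the tuning table — a sketch, not the exact file.
const defaultOptions = {
  dbName: 'nittio',
  maxPoolSize: 3000,          // upper bound on simultaneous connections
  minPoolSize: 100,           // keep pre-warmed connections ready for spikes
  socketTimeoutMS: 45_000,    // kill stuck sockets after 45s
  connectTimeoutMS: 10_000,   // fail fast on the initial connection
  serverSelectionTimeoutMS: 15_000,
  maxIdleTimeMS: 120_000,     // close idle connections after 2 minutes
  waitQueueTimeoutMS: 10_000, // error out instead of hanging when the pool is exhausted
  writeConcern: { w: 'majority' }, // lowercase `w` (see the writeConcern fix above)
};
```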
Compression (attempted and reverted)
In d029afbd, compressors: ['zstd', 'snappy'] was added to compress data between the app and MongoDB. This was reverted in bae9e74b, likely because the CPU overhead of compression wasn't worth it for the payload sizes in this app, or because the MongoDB server didn't support those compressors.
Phase 5: Redis Caching and Batched Data Loading
Commits: e2a83e25, 74088051, cdb381f0, be677e8c
5.1 — Redis Cache Utilities
File: packages/redis/src/redisCache.ts
A new set of Redis cache utilities was created:
- `redisGet<T>(key)` — Get and JSON-parse a cached value
- `redisSet(key, value, ttlSeconds)` — JSON-stringify and set with TTL
- `redisDel(key)` — Delete a cache key
- `redisIncr(key)` — Atomic increment (for counters)
All operations are wrapped in try/catch and silently fail — the cache is a best-effort optimization, never a hard dependency. If Redis is down, the app falls back to querying MongoDB directly.
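A minimal sketch of these utilities, with an in-memory `Map` standing in for the actual Redis client so the fail-silent, TTL-based behavior is visible. The function names follow the list above; the store itself is a stand-in:

```typescript
// In-memory stand-in for a Redis client (the real utilities use an actual client).
const store = new Map<string, { value: string; expiresAt: number }>();

async function redisSet(key: string, value: unknown, ttlSeconds: number): Promise<void> {
  try {
    store.set(key, { value: JSON.stringify(value), expiresAt: Date.now() + ttlSeconds * 1000 });
  } catch {
    // Best-effort cache: swallow errors, never block the request path.
  }
}

async function redisGet<T>(key: string): Promise<T | null> {
  try {
    const entry = store.get(key);
    if (!entry || entry.expiresAt < Date.now()) return null; // missing or expired
    return JSON.parse(entry.value) as T;
  } catch {
    return null; // Treat any failure as a cache miss.
  }
}

async function redisDel(key: string): Promise<void> {
  try { store.delete(key); } catch { /* ignore */ }
}

async function redisIncr(key: string): Promise<number> {
  // Sketch of an atomic counter; real Redis INCR is atomic server-side.
  const current = ((await redisGet<number>(key)) ?? 0) + 1;
  await redisSet(key, current, 3600);
  return current;
}
```

If Redis is unreachable, every path degrades to a cache miss and the caller falls back to MongoDB, matching the best-effort design described above.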
5.2 — Event Response Caching (60s TTL)
File: apps/app-api/src/event/handleEventGet.ts
The full event response is now cached in Redis with key event:response:{url} and a 60-second TTL. On cache hit, the handler returns immediately without touching MongoDB at all.
```
Request → Check Redis → HIT  → Return cached response (0 DB queries)
                      → MISS → Query MongoDB → Map → Cache in Redis → Return
```
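The hit/miss flow amounts to a small cache-aside helper. In this sketch, `loadEventFromMongo` is a hypothetical stand-in for the query-and-map step, and a plain `Map` stands in for Redis:

```typescript
// Plain-Map stand-in for Redis; the real handler uses the redisGet/redisSet
// utilities with key `event:response:{url}` and a 60s TTL.
const cache = new Map<string, { value: string; expiresAt: number }>();
let dbCalls = 0; // counts how often the "MongoDB" loader actually runs

// Hypothetical loader standing in for the query-and-map step on a cache miss.
async function loadEventFromMongo(url: string): Promise<{ url: string; title: string }> {
  dbCalls += 1;
  return { url, title: 'Example event' };
}

// Cache-aside: HIT returns immediately, MISS loads, caches, then returns.
async function getOrLoad<T>(key: string, ttlSeconds: number, load: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return JSON.parse(hit.value) as T; // HIT: 0 DB queries
  const fresh = await load(); // MISS: query MongoDB and map
  cache.set(key, { value: JSON.stringify(fresh), expiresAt: Date.now() + ttlSeconds * 1000 });
  return fresh;
}
```

On a second call within the TTL the loader is never invoked: the cached response is returned directly, which is the 0-DB-query fast path in the diagram above.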
5.3 — Participants Preview Caching (60s TTL)
File: apps/app-api/src/event/participant/handleEventParticipantsPreviewGet.ts
The participants preview endpoint (which runs a heavy aggregation pipeline with $lookup, $sample, etc.) now caches its result in Redis with key event:participants:preview:{eventId} and a 60-second TTL.
Additionally, the full eventParticipantMapper was replaced with inline projection — the aggregation pipeline already returns the exact shape needed (_id, user.photos, user.name), so there's no need to run a mapper that would trigger additional queries.
5.4 — isMainEvent Caching (300s TTL)
File: packages/event/src/cache/getCachedIsMainEvent.ts
The check EventModel.exists({ mainEventId: eventId }) determines whether an event is a "main event" (has sub-events). This value rarely changes, so it's cached for 5 minutes with key event:isMain:{eventId}.
5.5 — Event Views Counter via Redis + Background Job
Before: Every GET /event/:url request ran EventModel.findOneAndUpdate({ $inc: { views: 1 } }) — a write operation on every read request. This caused write lock contention on the Event collection under high traffic.
After: Views are counted with redisIncr('event:views:{eventId}') — an atomic in-memory counter. A background job (jobFlushEventViews) periodically reads all event:views:* keys from Redis and flushes them to MongoDB in batch using $inc. This converts thousands of individual writes into a single batched write.
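The counter-plus-flush idea can be sketched with an in-memory stand-in for the Redis `event:views:*` keys. The real job issues a single batched `$inc` write (e.g. via `bulkWrite`); here we only build the update payload:

```typescript
// In-memory sketch: `viewCounters` stands in for the Redis `event:views:*` keys.
const viewCounters = new Map<string, number>();

// Request path: an atomic increment instead of a MongoDB write on every read.
function countView(eventId: string): void {
  viewCounters.set(eventId, (viewCounters.get(eventId) ?? 0) + 1);
}

// Background job: drain all counters into one batched update payload.
function flushEventViews(): Array<{ eventId: string; inc: number }> {
  const ops: Array<{ eventId: string; inc: number }> = [];
  for (const [eventId, count] of viewCounters) {
    if (count > 0) ops.push({ eventId, inc: count });
  }
  viewCounters.clear(); // counters reset once flushed
  return ops;
}
```

Thousands of individual `findOneAndUpdate` writes per flush interval collapse into one batch, which removes the write-on-every-read contention on the Event collection.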
5.6 — Batched Tier Data Loading
File: apps/app-api/src/event/tier/getEventTierBatchData.ts
Previously, each tier mapper independently queried for its section, ticket count, reserved carts, and last ticket. For an event with 10 tiers, this meant 40 independent queries (4 per tier).
The new getEventTierBatchData function runs 5 aggregation queries that fetch data for all tiers at once:
- `TicketModel.aggregate` — Groups ticket counts by `tierId` → `Map<tierId, count>`
- `CartModel.find` — Gets all reserved carts for the event (for availability check)
- `CartModel.find` — Gets all reserved carts for the event (for mapper display)
- `TicketModel.aggregate` — Gets the last ticket per tier (for "hot" badge)
- `SectionModel.find` — Gets all sections for the event's tiers
The results are stored in Map objects and passed to each tier mapper via a preloaded parameter, so the mapper can skip its own queries entirely.
Before: 4N queries (N = number of tiers). After: 5 queries total, regardless of tier count.
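The first of those aggregations can be sketched as a pure function: one pass over an event's tickets yields the per-tier count `Map`. In production this grouping happens inside a `$group` stage of `TicketModel.aggregate`, not in Node; `TicketRow` is a simplified stand-in for the real documents:

```typescript
// Simplified ticket shape for the sketch.
interface TicketRow { tierId: string; removedAt: Date | null; }

// One pass over all tickets replaces one countDocuments call per tier.
function buildTicketCountsMap(tickets: TicketRow[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const t of tickets) {
    if (t.removedAt !== null) continue; // skip removed tickets
    counts.set(t.tierId, (counts.get(t.tierId) ?? 0) + 1);
  }
  return counts;
}
```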
5.7 — eventTierGetAvailables with preloaded data
File: packages/event/src/tier/eventTierGetAvailables.ts
The tier availability function was refactored to accept an optional PreloadedTierAvailablesData parameter. When provided, it uses the preloaded ticketCountsMap and reservedCarts instead of querying the database. This avoids redundant queries when the data has already been fetched by getEventTierBatchData.
5.8 — New dedicated GET /event/:url/tiers endpoint
File: apps/app-api/src/event/tier/handleEventTiersGet.ts
Tiers were extracted from the event response into their own dedicated endpoint. This is a key architectural decision:
- The event detail page now loads in two phases: first the event metadata (cached, fast), then the tiers (always fresh, with real-time availability).
- Tiers change frequently (sold out, new carts reserved) and shouldn't be cached with the event.
- The tiers endpoint uses `getEventTierBatchData` for batched loading.
5.9 — cartLightMapper (created then later removed)
Commits: e2a83e25 (created), later superseded
A lightweight cart mapper was created that skips heavy sub-mappers (like the full event mapper) and instead builds the response inline with minimal data. This was an intermediate step that was later refined.
5.10 — eventLightMapper (created then removed)
Commits: 74088051 (created), 0a3966a6 (removed)
A lightweight event mapper was created that skipped the producerMapper call and instead built the producer object inline. This was an experimental approach that was later removed in favor of the cleaner caching + batching strategy in cdb381f0.
Phase 6: Slimming Down API Response Payloads
Commits: bd64b031, c99e5069, f3f0ab5d, 338032ff, cdb381f0
6.1 — Remove photos array from producer response
The producerZodRead schema was returning a photos array (with fullsize, thumbnail, and download URLs) in every event response. Since event listings don't display producer photos, this field was removed from the producer schema used in event responses. This reduces payload size significantly when events are listed.
6.2 — Slim producer events endpoint
File: apps/app-api/src/producer/handleProducerEventsGet.ts
The GET /producer/:username/events endpoint was returning the full event object (with all tiers, custom fields, installments, etc.) for each event in the catalog. A new lightweight response200DataZod schema was created that returns only what the catalog card needs:
`_id`, `title`, `url`, `flyers`, `place`, `startAt`, `badges`
A local eventMapper function builds this minimal object directly, avoiding the full eventMapper with all its database queries. The badge computation was extracted into a reusable getEventBadges() function in packages/event/src/badge/getEventBadges.ts.
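A sketch of what such a minimal mapper looks like. `EventDoc` and `toEventCardDto` are simplified stand-ins, with field names taken from the schema list above:

```typescript
// Simplified stand-in for the full event document.
interface EventDoc {
  _id: string;
  title: string;
  url: string;
  flyers: string[];
  place: string;
  startAt: Date;
  tiers?: unknown[];        // heavy fields the catalog never shows
  customFields?: unknown[]; // get dropped by the mapper
}

// Return only the fields the catalog card renders.
function toEventCardDto(event: EventDoc, badges: string[]) {
  return {
    _id: event._id,
    title: event.title,
    url: event.url,
    flyers: event.flyers,
    place: event.place,
    startAt: event.startAt,
    badges, // computed by getEventBadges() in the real code
  };
}
```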
6.3 — Slim participants preview endpoint
The participants preview endpoint was returning the full event participant object (with full event, full user, interactions, etc.) when it only needs _id, user.photos, and user.name. The response schema and mapper were replaced with a minimal inline projection.
6.4 — Slim ticket validation endpoint
File: apps/hub-api/src/lobby/handleLobbyEventTicketValidatePost.ts
The ticket validation endpoint (used by door validators at events) was calling the full ticketMapper which triggered multiple database queries. It was replaced with an inline object construction that only returns what the validator UI needs: _id, name, tier.name, tier.section.name, confirmedBy, confirmedAt. The section is fetched with a single SectionModel.findOne instead of the full mapper chain.
6.5 — Make flyers required in event schema
The flyers field was marked as .optional() in the event Zod schema, which meant the frontend had to handle undefined cases. Since every event should have flyers, the field was made required, simplifying frontend code and ensuring data consistency.
Flowchart: Event Detail Page Load (After Optimization)
Flowchart: Connection Pool and Caching Architecture
Phase 7: k6 Stress Test
Commit: be677e8c
File: apps/app-api/k6/stress-test.js
A k6 load test was created to validate all the performance improvements under realistic conditions:
- Stages: Ramp from 0 → 100 → 500 → 1000 → 2000 → 3000 concurrent users, hold at 3000 for 2 minutes, then ramp down.
- Endpoints tested: `GET /event/:slug`, `GET /event/:id/participants/preview`, `GET /user/me`, `GET /notifications`
- Thresholds: p95 < 2000ms overall, p95 < 1500ms for the event endpoint, error rate < 10%
- Authentication: Supports both authenticated and unauthenticated flows via the `SESSION_COOKIE` env var.
This test was the feedback loop that drove the connection pool tuning — each pool size increase was validated against this test.
Complete Commit Timeline
| Commit | Date | Description |
|---|---|---|
| `ad4214af` | Mar 4, 12:26 | Parallelize eventMapper, eventTierMapper, producerMapper queries with Promise.all |
| `93d3d750` | Mar 4, 12:29 | Cache-bust test (added/removed comment) |
| `1f95c3fb` | Mar 4, 12:31 | Revert cache-bust comment |
| `bd64b031` | Mar 4, 12:34 | Add producer features to schema, remove photos from producer response |
| `53ff85f2` | Mar 4, 12:38 | Temporarily disable producer photos component |
| `0af8147d` | Mar 4, 12:48 | Parallelize cartMapper and userMapper, add .lean() throughout |
| `d029afbd` | Mar 4, 12:51 | Add compound indexes (Cart, Order, Subscription, Ticket, Event), tune connection pool, fix writeConcern |
| `bae9e74b` | Mar 4, 12:57 | Revert connection pool changes (testing) |
| `f7d4d414` | Mar 4, 13:16 | Re-add connection pool: maxPoolSize=200, minPoolSize=20 |
| `420a43a9` | Mar 4, 13:17 | Increase pool: maxPoolSize=600, minPoolSize=30 |
| `0de18561` | Mar 4, 13:21 | Increase pool: maxPoolSize=1000, minPoolSize=50 |
| `65221c19` | Mar 4, 13:30 | Increase pool: maxPoolSize=2000 |
| `e2a83e25` | Mar 4, 14:00 | Create cartLightMapper, orderCartMapper — lightweight cart mapping |
| `74088051` | Mar 4, 14:06 | Create eventLightMapper — lightweight event mapping (experimental) |
| `0a3966a6` | Mar 6, 16:36 | Remove eventLightMapper (superseded by caching strategy) |
| `f3f0ab5d` | Mar 6, 17:42 | Slim ticket validation endpoint — inline response instead of full mapper |
| `cdb381f0` | Mar 9, 15:33 | Major refactor: Redis caching, batched tier loading, event views job, separate tiers endpoint, getEventBadges extraction |
| `be677e8c` | Mar 9, 19:50 | k6 stress test, final pool tuning (3000/100), Redis caching for participants preview |
| `c99e5069` | Mar 9, 19:52 | Slim producer events endpoint — return only catalog-needed fields |
| `338032ff` | Mar 9, 19:59 | Make flyers required in event schema |
Impact Summary
| Optimization | Estimated Impact |
|---|---|
| Promise.all in eventMapper | ~6x faster mapper (7 queries parallel vs sequential) |
| Promise.all in cartMapper | ~5x faster mapper |
| .lean() on all queries | ~30-50% less memory per query, faster serialization |
| Compound indexes | Orders of magnitude faster for filtered queries on large collections |
| Redis event cache (60s) | 0 DB queries on cache hit — sub-millisecond response |
| Redis participants preview cache (60s) | Avoids heavy aggregation pipeline on every load |
| Redis isMainEvent cache (300s) | Avoids a query that rarely changes |
| Batched tier data loading | 5 queries instead of 4N (N = tier count) |
| Event views via Redis + batch job | Eliminates write-on-every-read contention |
| Slim API payloads | Smaller JSON responses = less bandwidth, faster parsing |
| Connection pool tuning (3000 max) | Handles 3000 concurrent users without pool exhaustion |
| Separate tiers endpoint | Event loads fast (cached), tiers load fresh (real-time) |
References
- `apps/app-api/src/event/eventMapper.ts` — Main event mapper
- `apps/app-api/src/event/handleEventGet.ts` — Event GET handler with Redis cache
- `apps/app-api/src/event/tier/handleEventTiersGet.ts` — New tiers endpoint
- `apps/app-api/src/event/tier/getEventTierBatchData.ts` — Batched tier data loader
- `apps/app-api/src/event/tier/eventTierMapper.ts` — Tier mapper with preloaded data support
- `apps/app-api/src/cart/cartMapper.ts` — Parallelized cart mapper
- `apps/app-api/src/cart/cartLightMapper.ts` — Lightweight cart mapper
- `apps/app-api/src/producer/handleProducerEventsGet.ts` — Slim producer events endpoint
- `apps/hub-api/src/lobby/handleLobbyEventTicketValidatePost.ts` — Slim ticket validation
- `packages/event/src/badge/getEventBadges.ts` — Extracted badge computation
- `packages/event/src/cache/getCachedIsMainEvent.ts` — Cached isMainEvent check
- `packages/event/src/tier/eventTierGetAvailables.ts` — Tier availability with preloaded data
- `packages/event/src/jobs/jobFlushEventViews.ts` — Background job for flushing view counters
- `packages/mongo/src/connectMongo.ts` — Connection pool configuration
- `packages/redis/src/redisCache.ts` — Redis cache utilities
- `apps/app-api/k6/stress-test.js` — k6 load test