End-to-end request/response payloads
Getting end-to-end request/response payloads (not just browser-side, but also from internal service calls) is critical when resolving technical issues and complex bugs. However, most tools only collect browser-side request/response payloads, and teams have to manually instrument their systems to collect the rest.
Multiplayer eliminates this cumbersome manual process, automatically providing you with request/response content and headers from deep within your system.
Why you need full stack request/response data
Modern software is complex. A single user action might pass through many layers: Mobile app → API gateway → Authentication service → Business logic service → Database → Cache → Third-party service
Request/response information tells you two things (an example follows the list):
- Headers: Metadata about the communication between services - like authentication tokens, content type, tracking IDs.
- Content/Body: The actual data being sent between services - like user information, search results, order details, etc.
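For instance, a single captured exchange between two internal services might look roughly like this (a hypothetical illustration; the service, field names, and values are ours, not a Multiplayer schema):

```typescript
// Hypothetical captured exchange between two internal services.
// Field names are illustrative only, not an actual Multiplayer schema.
const capturedExchange = {
  service: "pricing-service",
  request: {
    method: "POST",
    path: "/v1/orders/total",
    headers: {
      "content-type": "application/json", // content type
      "x-request-id": "req_7f3a",         // tracking ID
      authorization: "Bearer ***",        // auth token (masked)
    },
    body: { orderId: "ord_123", coupon: "SAVE10" },
  },
  response: {
    status: 200,
    headers: { "content-type": "application/json" },
    body: { orderId: "ord_123", total: 89.99 },
  },
};
```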
Most tools monitor only what goes in at the top layer and what comes out at the bottom layer; and if they do trace through the layers in between, they don't capture full bodies.
However, bugs often happen in the middle layers where the data has been transformed, enriched, or corrupted.
For example: a user reports their order total is wrong. You check the mobile app (looks fine) and the database (looks fine). But somewhere in the middle, a discount service applied a coupon twice. Without seeing the request/response at that specific layer, you're searching blindly.
Limits of current approaches / tools
You can collect request/response information manually from deep layers of your system, but it's time-consuming and usually requires a series of steps:
- Know where to look: you must guess which layer is causing the problem before instrumenting it
- Instrument each service individually: add logging code (or an SDK from a monitoring service) to manually capture messages or exceptions, and set tags, user information, and extra data at specific points
- Deploy the changes: each service needs redeployment with instrumentation code
- Wait for the bug to happen again
- Parse through noise: logs from all services mixed together, making it hard to trace a single request's journey
- Remove the logging code afterward (or leave it cluttering the system)
This can take hours or even days for a single investigation. If you guess wrong about where the problem is, you start over.
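To make the cost concrete, here is a minimal sketch of the kind of throwaway instrumentation step 2 usually means: hand-written, Express-style logging added to a single suspect service (the endpoint, header names, and function name are hypothetical).

```typescript
// Hypothetical example of hand-written debug logging added to ONE suspect
// service. Every other service in the request path needs its own version.
import type { NextFunction, Request, Response } from "express";

export function logOrderPayloads(req: Request, res: Response, next: NextFunction) {
  // You have to guess which endpoint is involved before the bug recurs
  if (req.path.startsWith("/v1/orders")) {
    console.log(
      JSON.stringify({
        requestId: req.headers["x-request-id"], // only useful if every service forwards this ID
        user: req.headers["x-user-id"],
        body: req.body,                         // careful: may contain PII or credentials
      })
    );
  }
  next();
}
```

Multiply this by every service in the request path, plus a redeployment for each, and it's easy to see where the hours go.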
Why APM tools fall short
APM tools (e.g. Datadog, New Relic) don’t automatically collect this information.
These tools are built for performance monitoring at scale, and they primarily capture performance metrics (response times, throughput), error rates and stack traces, and infrastructure health.
When it comes to capturing request/response content and headers from deep within your system, you typically encounter these limitations:
- Need for explicit, manual configuration/instrumentation per service
- Sampled data rather than full capture (the specific request causing your bug might not be captured)
- Performance overhead of capturing/storing large payloads
- Privacy/security concerns (PII, credentials in bodies)
- Storage costs at scale
Why error monitoring tools fall short
Error monitoring tools (e.g., Sentry, Bugsnag, Rollbar) only capture data when exceptions are thrown. This means they miss the most challenging bugs, for example:
- Silent data corruption: When data is transformed incorrectly but doesn't trigger an exception (wrong calculations, missing fields, doubled values)
- Upstream causes: They capture the symptom (the service that threw the error) but not the root cause (the upstream service where data was corrupted)
- Successful-but-wrong requests: When a request completes with a 200 status but incorrect data, nothing is captured
- Cross-service issues: Correlating an error in one service with data from other services requires manual work matching IDs across systems
You see where something broke, but not why the data became incorrect as it flowed through your system.
How Multiplayer captures request/response data
Multiplayer automatically captures request/response content and headers from deep within your system and includes them in your session replays, without manual instrumentation or code changes for each investigation.
Examples of data captured
Multiplayer captures data across your entire stack:
- Internal service-to-service communications in your microservices architecture
- Database queries executed by middle-tier services
- Data transformations in background workers
- Messages flowing through queues (Kafka, RabbitMQ, etc.)
- Third-party API calls made by your services
- Async/background job execution
For a full list of all the data captured by a full stack session replay, please review: All data captured
What makes Multiplayer unique
| Challenge | Traditional Tools | Multiplayer |
|---|---|---|
| Manual setup | Requires explicit configuration per service | Automatic deep instrumentation. No code changes per issue |
| Cross-service correlation | Requires manual trace ID matching | Automatic layer-by-layer visibility as data flows through your system |
| Payload visibility | Typically limited to headers or requires opt-in | Complete request/response capture: headers AND body content |
| Sampling | Data sampled to reduce costs (it might miss your bug) | Full, unsampled capture for recorded sessions (sampling configurable if needed) |
| Exception-only | Only captures when errors are thrown | Captures all requests |
| Data retention/cost | Always-on recording with limited control over what's captured | Targeted session capture. Only record what you need, keeping costs contained (see available recording modes) |
| Privacy/PII | Manual configuration required to redact sensitive data | User inputs masked by default; customizable masking for frontend and backend |
How it works
Multiplayer uses industry-standard OpenTelemetry instrumentation and the OpenTelemetry Protocol (OTLP) to trace requests across your services (see the sketch after this list):
- Distributed tracing: OpenTelemetry propagators pass trace and span IDs between services (via HTTP headers, gRPC metadata, Kafka headers, etc.) to correlate requests across service boundaries
- Automatic context propagation: If a service doesn't have trace context, Multiplayer automatically generates trace IDs
- Fan-out pattern support: Handles complex scenarios where one request triggers multiple downstream calls
- Full capture for sessions: When recording a session replay, all traced requests are captured without sampling (though you can configure sampling per service/endpoint if needed)
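For illustration, here is roughly what that context propagation looks like at the code level, using the OpenTelemetry JavaScript API on an outbound HTTP call. This is a minimal sketch: Multiplayer's libraries and the OpenTelemetry SDK wire this up for you, and the service name and URL are hypothetical.

```typescript
// Minimal sketch of W3C trace-context propagation with the OpenTelemetry JS API.
// With a configured SDK (which registers the W3C propagator), the injected
// `traceparent` header lets the downstream service continue the same trace.
import { context, propagation, trace } from "@opentelemetry/api";

const tracer = trace.getTracer("checkout-service"); // hypothetical service name

export async function callDiscountService(orderId: string): Promise<unknown> {
  return tracer.startActiveSpan("discount-service:apply", async (span) => {
    try {
      const headers: Record<string, string> = {};
      // Inject the active trace context (trace ID + span ID) into the outgoing headers
      propagation.inject(context.active(), headers);

      const res = await fetch(`https://discounts.internal/apply/${orderId}`, { headers });
      return await res.json();
    } finally {
      span.end();
    }
  });
}
```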
Collection options
Multiplayer offers two approaches for capturing request/response data (a conceptual sketch of option 1 follows the list):
1. In-service code capture libraries (Easiest to get started)
- Capture, serialize, and mask request/response content directly within your service code
- No additional infrastructure components needed
- Ideal for new projects or getting started quickly
2. Multiplayer Proxy (Best for scale)
- Handles data capture outside your services
- Reduces performance impact on application code
- Ideal for large-scale applications or when you can't modify service code
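As a rough mental model of option 1, in-service capture amounts to intercepting requests and responses inside your process, masking sensitive fields, and attaching the payloads to the active trace. The sketch below illustrates that idea as generic Express middleware; it is not the Multiplayer library API, and the attribute keys and helper names are hypothetical.

```typescript
// Conceptual sketch of in-service payload capture and masking (NOT the
// Multiplayer library API; attribute keys and names are hypothetical).
import type { NextFunction, Request, Response } from "express";
import { trace } from "@opentelemetry/api";

const SENSITIVE_KEYS = ["password", "authorization", "token"];

// Recursively replace sensitive values before anything is recorded
function mask(payload: unknown): unknown {
  if (payload === null || typeof payload !== "object") return payload;
  if (Array.isArray(payload)) return payload.map(mask);
  const result: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload as Record<string, unknown>)) {
    result[key] = SENSITIVE_KEYS.includes(key.toLowerCase()) ? "***" : mask(value);
  }
  return result;
}

export function captureHttpPayloads(req: Request, res: Response, next: NextFunction) {
  const span = trace.getActiveSpan();
  // Attach masked request headers and body to the current trace span
  span?.setAttribute("http.request.headers", JSON.stringify(mask(req.headers)));
  span?.setAttribute("http.request.body", JSON.stringify(mask(req.body)));

  // Wrap res.json so the response body is captured when the handler replies
  const originalJson = res.json.bind(res);
  res.json = ((body: unknown) => {
    span?.setAttribute("http.response.body", JSON.stringify(mask(body)));
    return originalJson(body);
  }) as typeof res.json;

  next();
}
```

Option 2 moves this same capture-and-mask step out of your application process and into the Multiplayer Proxy, which is why it has less performance impact on application code.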
Collect request/response data with Multiplayer
To correctly configure Multiplayer in-service code capture, make sure to:
- Log in to your Multiplayer account. If you don't already have one, start a free trial at go.multiplayer.app
- Complete the Client setup in STEP 1 of the configuration steps
- Route traces and logs to Multiplayer using one of the two options in STEP 2 of the configuration steps (a generic export sketch follows this list)
- Select the solution you'll use to capture request/response data: in-service code capture libraries or the Multiplayer Proxy
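For orientation, the snippet below is a generic sketch of pointing the OpenTelemetry Node SDK at an OTLP endpoint. The endpoint URL, API key header, and service name are placeholders, not Multiplayer values; use the exact settings and option given in STEP 2 of the configuration steps.

```typescript
// Generic sketch: export traces over OTLP with the OpenTelemetry Node SDK.
// The URL, header, and service name below are placeholders.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  serviceName: "checkout-service", // hypothetical service name
  traceExporter: new OTLPTraceExporter({
    url: "https://<your-otlp-endpoint>/v1/traces",       // placeholder
    headers: { authorization: "Bearer <your-api-key>" }, // placeholder
  }),
});

sdk.start();
```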
Next steps
👀 If this is the first time you’ve heard about us, you may want to see full stack session recordings in action. You can do that in our free sandbox: sandbox.multiplayer.app
🚀 If you’re ready to trial Multiplayer with your own app, you can follow the Multiplayer configuration steps. You can start a free plan at any time: go.multiplayer.app
📌 If you have any questions shoot us an email or join us on Discord! 💜