Is there an existing issue for this?
- [x] I have checked for existing issues https://github.com/getsentry/sentry-javascript/issues
- [x] I have reviewed the documentation https://docs.sentry.io/
- [x] I am using the latest SDK release https://github.com/getsentry/sentry-javascript/releases
How do you use Sentry?
Sentry Saas (sentry.io)
Which SDK are you using?
@sentry/node
SDK Version
9.11.0
Framework Version
No response
Link to Sentry event
No response
Reproduction Example/SDK Setup
No response
Steps to Reproduce
Node: 18.20.6
Config
const Sentry = require("@sentry/node");
//...
Sentry.init({
  dsn: sentryDsn,
  environment: stage,
  release: APP_VERSION,
  // When enabled, stack traces are automatically attached to all messages logged. This defaults to false.
  attachStacktrace: false,
  // The total amount of breadcrumbs that should be captured. This defaults to 100.
  maxBreadcrumbs: 100,
  debug: true,
  defaultIntegrations: false,
  includeLocalVariables: true,
  integrations: [
    // Capture console logs as breadcrumbs. (default)
    Sentry.consoleIntegration(),
    // Deduplicate certain events to avoid receiving duplicate errors. (default)
    // Sentry.dedupeIntegration(),
    // Add local variables to exception frames. (default)
    Sentry.localVariablesIntegration({ captureAllExceptions: true }),
    // Allows the SDK to provide original functions and method names, even when
    // those functions or methods are wrapped by our error or breadcrumb handlers. (default)
    Sentry.functionToStringIntegration(),
    // Registers handlers to capture global uncaught exceptions. (default)
    Sentry.onUncaughtExceptionIntegration(),
    // Registers handlers to capture global unhandled promise rejections. (default)
    Sentry.onUnhandledRejectionIntegration(),
    // Extracts all non-native attributes from the error object and attaches them to the event as extra data.
    Sentry.extraErrorDataIntegration(),
    // Allows you to configure linked errors. (default)
    Sentry.linkedErrorsIntegration({ limit: 10, key: "cause" }),
    // Capture spans & breadcrumbs for http requests. (default)
    Sentry.httpIntegration(),
    // Capture spans & breadcrumbs for node fetch requests. (default)
    Sentry.nativeNodeFetchIntegration(),
    // Enable Express.js middleware tracing.
    Sentry.expressIntegration(),
  ],
  // Data to be set to the initial scope
  initialScope: {
    tags: {
      "cu.gitRev": GIT_REV,
      "cu.gitTag": GIT_TAG,
      "cu.appVersion": APP_VERSION,
      "cu.releaseDate": RELEASE_DATE,
    },
  },
  // Performance Monitoring
  tracesSampleRate: 1.0, // Capture 100% of the transactions
  // Set sampling rate for profiling - this is relative to tracesSampleRate
  profilesSampleRate: 1.0,
});
// ..
const app = express();
// use middlewares
// use routes
Sentry.setupExpressErrorHandler(app);

app.use((err, req, res, next) => {
  logger.info("Error handler start ", err);
  logger.info(`Sentry: ${res.sentry}`);

  if (res.headersSent) {
    return next(err);
  }

  const isStatusError = err instanceof StatusError;
  const status = typeof err.status === "number" ? err.status : 500;
  const message = isStatusError ? err.message : "Server error. Please retry.";

  if (status >= 500) {
    console.error(err);
  }

  if (err instanceof ValidationError) {
    res.status(err.statusCode).json(err);
    return res.end();
  }

  res.status(status).json({
    ...(isStatusError ? err : {}),
    message: message,
  });
  return res.end();
});

const handler = serverless(app, { binary: ["application/pdf"] });
// ...
Sample testing endpoints passed to the Express.js router:
r.get("/api/status", (req, res) => res.status(200).json({ status: "api-v2 up" }));
r.get("/api/throw", async () => {
const someVariable = "contains some value";
console.log("test log from Console");
logger.info(`variable value: ${someVariable}`);
throw new Error(`Sentry test! ${moment().utc().format()}`);
});
r.post("/api/throw2", async (req, res) => {
const { msg } = req.body;
throw new Error(`${msg} ${moment().utc().format()}`);
});
Expected Result
All errors are reported to Sentry.
Actual Result
Not all errors are reported to Sentry.
I set debug: true in the options and noticed the following:
2025-04-07T10:31:37.648000+00:00 2025/04/07/[$LATEST]17f2b086d6674e16a9ca0fc0dc3509e3 2025-04-07T10:31:37.648Z 1212eb8a-f7c5-4239-841d-2ea330a15875 DEBUG Sentry Logger [debug]: @opentelemetry_sentry-patched/instrumentation-http outgoingRequest on request error() Error: socket hang up
at connResetException (node:internal/errors:720:14)
at TLSSocket.socketOnEnd (node:_http_client:525:23)
at TLSSocket.emit (node:events:529:35)
at TLSSocket.emit (node:domain:489:12)
at endReadableNT (node:internal/streams/readable:1400:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: 'ECONNRESET'
}
2025-04-07T10:31:37.687000+00:00 2025/04/07/[$LATEST]17f2b086d6674e16a9ca0fc0dc3509e3 2025-04-07T10:31:37.687Z 1212eb8a-f7c5-4239-841d-2ea330a15875 INFO Sentry Logger [log]: Recording outcome: "network_error:error"
2025-04-07T10:31:37.705000+00:00 2025/04/07/[$LATEST]17f2b086d6674e16a9ca0fc0dc3509e3 2025-04-07T10:31:37.705Z 1212eb8a-f7c5-4239-841d-2ea330a15875 ERROR Sentry Logger [error]: Error while sending envelope: Error: socket hang up
at connResetException (node:internal/errors:720:14)
at TLSSocket.socketOnEnd (node:_http_client:525:23)
at TLSSocket.emit (node:events:529:35)
at TLSSocket.emit (node:domain:489:12)
at endReadableNT (node:internal/streams/readable:1400:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: 'ECONNRESET'
}
For a scenario where I call /api/throw, then /api/throw2, and then /api/status (roughly the call sequence sketched below): the first error got through, but the second error was not reported.
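The test sequence looks roughly like this (the base URL is a placeholder for the deployed endpoint; the snippet only illustrates the order of calls):

// Placeholder base URL for the deployed API.
const base = "https://<api-gateway-url>";

(async () => {
  await fetch(`${base}/api/throw`); // first error - this one reaches Sentry
  await fetch(`${base}/api/throw2`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ msg: "Sentry test" }),
  }); // second error - this one is not reported
  await fetch(`${base}/api/status`); // plain 200 response
})();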
While testing today on a deployed Lambda, Sentry seems to drop more events than it delivers. And it is not that the events are rejected - they never reach the Sentry server at all. The project is still low volume: in the stats I see 16 errors in total and all 16 were accepted, but more than 16 errors should have been reported.
These logs come from SDK 9.11.0, but I tested with v8 and I got similar results.
The code runs on AWS Lambda.
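For what it's worth, here is a minimal sketch of how the handler could be wrapped to flush pending events before the Lambda execution environment is frozen (the export shape and the 2000 ms timeout are assumptions, not the code currently deployed, and I have not verified that this changes the behaviour):

// Sketch only - not the code currently deployed.
const Sentry = require("@sentry/node");
const serverless = require("serverless-http");

const wrapped = serverless(app, { binary: ["application/pdf"] });

module.exports.handler = async (event, context) => {
  try {
    return await wrapped(event, context);
  } finally {
    // Sentry.flush(timeoutMs) waits for queued envelopes to be sent before returning.
    await Sentry.flush(2000);
  }
};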