Closed
Currently, Powertools doesn't track when a metric was added; it uses the time of flushing as the timestamp in the EMF metadata. This produces inaccurate timestamps whenever a function annotated with `log_metrics` runs longer than the minimum metric resolution in CloudWatch Metrics.
Here is sample code to reproduce this problem:
```python
#!/usr/bin/env python3
import time

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="Test", service="Test")


@metrics.log_metrics
def handler(event, context):
    metrics.add_metric(name="TestMetric1", unit=MetricUnit.Count, value=1)
    time.sleep(65)
    metrics.add_metric(name="TestMetric2", unit=MetricUnit.Count, value=1)


handler(None, None)
```
This code produces the following output:
```json
{
  "_aws": {
    "Timestamp": 1600326158339,
    "CloudWatchMetrics": [
      {
        "Namespace": "Test",
        "Dimensions": [
          [
            "service"
          ]
        ],
        "Metrics": [
          {
            "Name": "TestMetric1",
            "Unit": "Count"
          },
          {
            "Name": "TestMetric2",
            "Unit": "Count"
          }
        ]
      }
    ]
  },
  "service": "Test",
  "TestMetric1": 1.0,
  "TestMetric2": 1.0
}
```
Instead of a single log record with a single timestamp, I'd expect to get two distinct log records with timestamps 65 seconds apart.
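To illustrate the behavior I'd expect, here is a minimal sketch (not the Powertools API; `TimestampedMetrics` is a hypothetical class I made up for this issue) that captures a timestamp when each metric is added and emits one EMF record per distinct timestamp at flush time:

```python
# Hypothetical sketch: timestamp metrics at add time, not at flush time,
# and group them into one EMF record per distinct timestamp.
import time


class TimestampedMetrics:
    def __init__(self, namespace, service):
        self.namespace = namespace
        self.service = service
        self._metrics = []  # list of (name, unit, value, timestamp_ms)

    def add_metric(self, name, unit, value):
        # Capture the timestamp when the metric is added.
        self._metrics.append((name, unit, value, int(time.time() * 1000)))

    def flush(self):
        # Group metrics by capture timestamp so each EMF record carries
        # the time the metric was actually recorded.
        groups = {}
        for name, unit, value, ts in self._metrics:
            groups.setdefault(ts, []).append((name, unit, value))
        records = []
        for ts, items in sorted(groups.items()):
            record = {
                "_aws": {
                    "Timestamp": ts,
                    "CloudWatchMetrics": [{
                        "Namespace": self.namespace,
                        "Dimensions": [["service"]],
                        "Metrics": [{"Name": n, "Unit": u} for n, u, _ in items],
                    }],
                },
                "service": self.service,
            }
            for n, _, v in items:
                record[n] = float(v)
            records.append(record)
        self._metrics.clear()
        return records
```

With this approach, the two `add_metric` calls separated by `time.sleep(65)` would end up in two records whose `Timestamp` values differ by roughly 65 seconds.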
I have already reported the same problem for aws-embedded-metrics-python: awslabs/aws-embedded-metrics-python#53