Description
Using Python 3.8 and aws-lambda-powertools==0.9.1
It could be that I'm not using metrics as their design intended; however, I would like to use them in the following pattern so that metrics can be consumed by classes shared across my project, which are imported into my lambdas as needed (keeping my lambda handlers down to a few lines of code), e.g.:
```python
from aws_lambda_powertools.metrics import Metrics
from mynamespace.mypackage import MyClass

metrics = Metrics()

@metrics.log_metrics
def handler(request, context):
    # passing metrics to my class that does all the work for the lambda:
    return MyClass(metrics).do_stuff()
```
Adding metrics to MyClass might look something like this (using randint purely for demo purposes):
```python
import random

from aws_lambda_powertools.metrics import MetricUnit


class MyClass:
    def __init__(self, metrics):
        self.metrics = metrics

    def do_stuff(self):
        success = random.randint(0, 1)
        if success:
            self.metrics.add_metric(name="Success", unit=MetricUnit.Count, value=1)
            self.metrics.add_dimension(name="service", value="MyClass")
            return True
        else:
            self.metrics.add_metric(name="Failure", unit=MetricUnit.Count, value=1)
            self.metrics.add_dimension(name="service", value="MyClass")
            return False
```
Following this pattern, the desired metrics are created correctly on the first execution, but they are not flushed between executions, so a subsequent (warm) execution outputs both a 'Success' AND a 'Failure' metric from a single run. The print output in the logs looks like this:
```json
{
    "Success": 1,
    "Failure": 1,
    "_aws": {
        "Timestamp": 1590714776621,
        "CloudWatchMetrics": [
            {
                "Namespace": "richard",
                "Dimensions": [
                    [
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "Success",
                        "Unit": "Count"
                    },
                    {
                        "Name": "Failure",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "service": "MyClass"
}
```
Aren't metrics supposed to be flushed by the log_metrics decorator on each handler execution? Would love to get your thoughts on whether this is a bug or I'm not using metrics as intended.
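For what it's worth, here is a minimal, purely illustrative sketch (a hypothetical `FakeMetrics` stand-in, not the powertools source) showing how a module-level metric buffer that a decorator serializes but never clears reproduces exactly this accumulation across warm invocations:

```python
import functools
import json


class FakeMetrics:
    """Hypothetical stand-in for Metrics: a shared in-memory metric buffer."""

    def __init__(self):
        self.metric_set = {}

    def add_metric(self, name, value):
        self.metric_set[name] = value

    def log_metrics(self, handler):
        @functools.wraps(handler)
        def wrapper(event, context):
            result = handler(event, context)
            # Serialize WITHOUT clearing metric_set -- the suspected behaviour.
            print(json.dumps(self.metric_set))
            return result
        return wrapper


metrics = FakeMetrics()  # module scope, so it survives warm invocations


@metrics.log_metrics
def handler(event, context):
    metrics.add_metric(event["outcome"], 1)
    return True


handler({"outcome": "Success"}, None)  # prints {"Success": 1}
handler({"outcome": "Failure"}, None)  # prints {"Success": 1, "Failure": 1}
```

Because the Lambda runtime reuses the module between warm starts, anything not cleared after serialization leaks into the next invocation's output, which matches the log shown above.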