docs/core/metrics.md
57 additions & 84 deletions
@@ -197,19 +197,17 @@ This decorator also **validates**, **serializes**, and **flushes** all your metrics

#### Raising SchemaValidationError on empty metrics

-If you want to ensure that at least one metric is emitted, you can pass `raise_on_empty_metrics` to the **log_metrics** decorator:
+If you want to ensure at least one metric is always emitted, you can pass `raise_on_empty_metrics` to the **log_metrics** decorator:

-=== "app.py"
-
-    ```python hl_lines="5"
-    from aws_lambda_powertools.metrics import Metrics
+```python hl_lines="5" title="Raising SchemaValidationError exception if no metrics are added"
+from aws_lambda_powertools.metrics import Metrics

-    metrics = Metrics()
+metrics = Metrics()

-    @metrics.log_metrics(raise_on_empty_metrics=True)
-    def lambda_handler(evt, ctx):
-        ...
-    ```
+@metrics.log_metrics(raise_on_empty_metrics=True)
+def lambda_handler(evt, ctx):
+    ...
+```

???+ tip "Suppressing warning messages on empty metrics"
    If you expect your function to execute without publishing metrics every time, you can suppress the warning with **`warnings.filterwarnings("ignore", "No metrics to publish*")`**.
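
As an aside to the tip above (not part of this diff): a minimal sketch of where that filter could live. The namespace, service, metric name, and handler logic are illustrative assumptions; only the `warnings.filterwarnings("ignore", "No metrics to publish*")` call comes from the documentation itself.

```python
# Sketch only: namespace, service, metric name, and handler logic are assumptions.
import warnings

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

# Suppress the warning emitted when a handler finishes without publishing metrics
warnings.filterwarnings("ignore", "No metrics to publish*")

metrics = Metrics(namespace="ExampleApplication", service="booking")


@metrics.log_metrics  # raise_on_empty_metrics left at its default (False)
def lambda_handler(event, context):
    if event.get("order_confirmed"):  # metric only emitted on this path
        metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
```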
@@ -218,36 +216,32 @@ If you want to ensure that at least one metric is emitted, you can pass `raise_on_empty_metrics` to the **log_metrics** decorator:

When using multiple middlewares, use `log_metrics` as your **last decorator** wrapping all subsequent ones to prevent early Metric validations when code hasn't been run yet.

-=== "nested_middlewares.py"
+```python hl_lines="7-8" title="Example with multiple decorators"
+from aws_lambda_powertools import Metrics, Tracer
+from aws_lambda_powertools.metrics import MetricUnit

-    ```python hl_lines="7-8"
-    from aws_lambda_powertools import Metrics, Tracer
-    from aws_lambda_powertools.metrics import MetricUnit
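
The hunk above cuts the new example off after its imports. A minimal sketch of how a handler with both decorators typically looks, with `log_metrics` applied last so it wraps the tracer; the service, namespace, and metric name are assumptions, not taken from this diff.

```python
# Sketch only: service, namespace, and metric name are assumptions.
from aws_lambda_powertools import Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit

tracer = Tracer(service="booking")
metrics = Metrics(namespace="ExampleApplication", service="booking")


@metrics.log_metrics  # listed first, so it wraps everything below it
@tracer.capture_lambda_handler
def lambda_handler(event, context):
    metrics.add_metric(name="BookingConfirmation", unit=MetricUnit.Count, value=1)
```

Because `@metrics.log_metrics` is the outermost wrapper, metric validation and flushing only happen after the inner decorators and the handler body have run.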
@@ -367,41 +357,24 @@ If you prefer not to use `log_metrics` because you might want to encapsulate additional logic

Use `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation.

If you prefer setting environment variables for specific tests, and are using Pytest, you can use the [monkeypatch](https://docs.pytest.org/en/latest/monkeypatch.html) fixture:

-
-=== "pytest_env_var.py"
-
-    ```python
-    def test_namespace_env_var(monkeypatch):
-        # Set POWERTOOLS_METRICS_NAMESPACE before initializing Metrics

`Metrics` keeps metrics in memory across multiple instances. If you need to test this behaviour, you can use the following Pytest fixture to ensure metrics are reset, including cold start:

-=== "pytest_metrics_reset_fixture.py"
-
-    ```python
-    @pytest.fixture(scope="function", autouse=True)
-    def reset_metric_set():
-        # Clear out every metric data prior to every test
-        metrics = Metrics()
-        metrics.clear_metrics()
-        metrics_global.is_cold_start = True  # ensure each test has cold start
-        metrics.clear_default_dimensions()  # remove persisted default dimensions, if any
-        yield
-    ```
+```python title="Clearing metrics between tests"
+@pytest.fixture(scope="function", autouse=True)
+def reset_metric_set():
+    # Clear out every metric data prior to every test
+    metrics = Metrics()
+    metrics.clear_metrics()
+    metrics_global.is_cold_start = True  # ensure each test has cold start
+    metrics.clear_default_dimensions()  # remove persisted default dimensions, if any
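
The removed `pytest_env_var.py` example earlier in this hunk is cut off after its first comment. A minimal sketch of how such a monkeypatch-based test could look, assuming `serialize_metric_set()` is available for inspecting the EMF payload and using illustrative namespace, dimension, and metric values:

```python
# Sketch only: namespace, dimension, and metric values are assumptions;
# serialize_metric_set() is used to build and validate the EMF payload in-process.
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit


def test_namespace_env_var(monkeypatch):
    # Set POWERTOOLS_METRICS_NAMESPACE before initializing Metrics
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "ServerlessAirline")

    metrics = Metrics()
    metrics.add_dimension(name="environment", value="test")
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)

    # Without the namespace env var, serialization would fail schema validation
    output = metrics.serialize_metric_set()
    assert output["_aws"]["CloudWatchMetrics"][0]["Namespace"] == "ServerlessAirline"
```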