Commit 7f6d94d

docs(metrics): update single code blocks to title
1 parent 08e6da7 commit 7f6d94d


docs/core/metrics.md

Lines changed: 57 additions & 84 deletions
@@ -197,19 +197,17 @@ This decorator also **validates**, **serializes**, and **flushes** all your metr
 
 #### Raising SchemaValidationError on empty metrics
 
-If you want to ensure that at least one metric is emitted, you can pass `raise_on_empty_metrics` to the **log_metrics** decorator:
+If you want to ensure at least one metric is always emitted, you can pass `raise_on_empty_metrics` to the **log_metrics** decorator:
 
-=== "app.py"
-
-    ```python hl_lines="5"
-    from aws_lambda_powertools.metrics import Metrics
+```python hl_lines="5" title="Raising SchemaValidationError exception if no metrics are added"
+from aws_lambda_powertools.metrics import Metrics
 
-    metrics = Metrics()
+metrics = Metrics()
 
-    @metrics.log_metrics(raise_on_empty_metrics=True)
-    def lambda_handler(evt, ctx):
-        ...
-    ```
+@metrics.log_metrics(raise_on_empty_metrics=True)
+def lambda_handler(evt, ctx):
+    ...
+```
 
 ???+ tip "Suppressing warning messages on empty metrics"
     If you expect your function to execute without publishing metrics every time, you can suppress the warning with **`warnings.filterwarnings("ignore", "No metrics to publish*")`**.
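The tip above only names the `warnings` call. As a rough sketch (not part of this diff), assuming a handler module that may legitimately finish without adding any metric, the suppression could sit alongside the decorator like this:

```python
# Illustrative sketch only: module-level suppression of the empty-metrics warning.
import warnings

from aws_lambda_powertools.metrics import Metrics

# Matches the warning emitted when log_metrics flushes an empty metric set
warnings.filterwarnings("ignore", "No metrics to publish*")

metrics = Metrics(namespace="ExampleApplication", service="booking")

@metrics.log_metrics  # raise_on_empty_metrics defaults to False
def lambda_handler(evt, ctx):
    ...  # may return without calling metrics.add_metric
```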
@@ -218,36 +216,32 @@ If you want to ensure that at least one metric is emitted, you can pass `raise_o
 
 When using multiple middlewares, use `log_metrics` as your **last decorator** wrapping all subsequent ones to prevent early Metric validations when code hasn't been run yet.
 
-=== "nested_middlewares.py"
+```python hl_lines="7-8" title="Example with multiple decorators"
+from aws_lambda_powertools import Metrics, Tracer
+from aws_lambda_powertools.metrics import MetricUnit
 
-    ```python hl_lines="7-8"
-    from aws_lambda_powertools import Metrics, Tracer
-    from aws_lambda_powertools.metrics import MetricUnit
+tracer = Tracer(service="booking")
+metrics = Metrics(namespace="ExampleApplication", service="booking")
 
-    tracer = Tracer(service="booking")
-    metrics = Metrics(namespace="ExampleApplication", service="booking")
-
-    @metrics.log_metrics
-    @tracer.capture_lambda_handler
-    def lambda_handler(evt, ctx):
-        metrics.add_metric(name="BookingConfirmation", unit=MetricUnit.Count, value=1)
-    ```
+@metrics.log_metrics
+@tracer.capture_lambda_handler
+def lambda_handler(evt, ctx):
+    metrics.add_metric(name="BookingConfirmation", unit=MetricUnit.Count, value=1)
+```
 
 ### Capturing cold start metric
 
 You can optionally capture cold start metrics with `log_metrics` decorator via `capture_cold_start_metric` param.
 
-=== "app.py"
+```python hl_lines="5" title="Generating function cold start metric"
+from aws_lambda_powertools import Metrics
 
-    ```python hl_lines="5"
-    from aws_lambda_powertools import Metrics
+metrics = Metrics(service="ExampleService")
 
-    metrics = Metrics(service="ExampleService")
-
-    @metrics.log_metrics(capture_cold_start_metric=True)
-    def lambda_handler(evt, ctx):
-        ...
-    ```
+@metrics.log_metrics(capture_cold_start_metric=True)
+def lambda_handler(evt, ctx):
+    ...
+```
 
 If it's a cold start invocation, this feature will:
 
@@ -320,18 +314,16 @@ CloudWatch EMF uses the same dimensions across all your metrics. Use `single_met
 
 **unique metric = (metric_name + dimension_name + dimension_value)**
 
-=== "single_metric.py"
+```python hl_lines="6-7" title="Generating an EMF blob with a single metric"
+from aws_lambda_powertools import single_metric
+from aws_lambda_powertools.metrics import MetricUnit
 
-    ```python hl_lines="6-7"
-    from aws_lambda_powertools import single_metric
-    from aws_lambda_powertools.metrics import MetricUnit
 
-
-    def lambda_handler(evt, ctx):
-        with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace="ExampleApplication") as metric:
-            metric.add_dimension(name="function_context", value="$LATEST")
-            ...
-    ```
+def lambda_handler(evt, ctx):
+    with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace="ExampleApplication") as metric:
+        metric.add_dimension(name="function_context", value="$LATEST")
+        ...
+```
 
 ### Flushing metrics manually
 
@@ -340,21 +332,19 @@ If you prefer not to use `log_metrics` because you might want to encapsulate add
 ???+ warning
     Metrics, dimensions and namespace validation still applies
 
-=== "manual_metric_serialization.py"
+```python hl_lines="9-11" title="Manually flushing and clearing metrics from memory"
+import json
+from aws_lambda_powertools import Metrics
+from aws_lambda_powertools.metrics import MetricUnit
 
-    ```python hl_lines="9-11"
-    import json
-    from aws_lambda_powertools import Metrics
-    from aws_lambda_powertools.metrics import MetricUnit
-
-    metrics = Metrics(namespace="ExampleApplication", service="booking")
+metrics = Metrics(namespace="ExampleApplication", service="booking")
 
-    def lambda_handler(evt, ctx):
-        metrics.add_metric(name="ColdStart", unit=MetricUnit.Count, value=1)
-        your_metrics_object = metrics.serialize_metric_set()
-        metrics.clear_metrics()
-        print(json.dumps(your_metrics_object))
-    ```
+def lambda_handler(evt, ctx):
+    metrics.add_metric(name="ColdStart", unit=MetricUnit.Count, value=1)
+    your_metrics_object = metrics.serialize_metric_set()
+    metrics.clear_metrics()
+    print(json.dumps(your_metrics_object))
+```
 
 ## Testing your code
 
@@ -367,41 +357,24 @@ If you prefer not to use `log_metrics` because you might want to encapsulate add
 
 Use `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation.
 
-=== "test runner"
-
-    ```bash
-    POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest
-    ```
-
-If you prefer setting environment variable for specific tests, and are using Pytest, you can use [monkeypatch](https://docs.pytest.org/en/latest/monkeypatch.html) fixture:
-
-=== "pytest_env_var.py"
-
-    ```python
-    def test_namespace_env_var(monkeypatch):
-        # Set POWERTOOLS_METRICS_NAMESPACE before initializating Metrics
-        monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", namespace)
-
-        metrics = Metrics()
-        ...
-    ```
+```bash title="Injecting dummy Metric Namespace before running tests"
+POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest
+```
 
 ### Clearing metrics
 
 `Metrics` keep metrics in memory across multiple instances. If you need to test this behaviour, you can use the following Pytest fixture to ensure metrics are reset incl. cold start:
 
-=== "pytest_metrics_reset_fixture.py"
-
-    ```python
-    @pytest.fixture(scope="function", autouse=True)
-    def reset_metric_set():
-        # Clear out every metric data prior to every test
-        metrics = Metrics()
-        metrics.clear_metrics()
-        metrics_global.is_cold_start = True # ensure each test has cold start
-        metrics.clear_default_dimensions() # remove persisted default dimensions, if any
-        yield
-    ```
+```python title="Clearing metrics between tests"
+@pytest.fixture(scope="function", autouse=True)
+def reset_metric_set():
+    # Clear out every metric data prior to every test
+    metrics = Metrics()
+    metrics.clear_metrics()
+    metrics_global.is_cold_start = True # ensure each test has cold start
+    metrics.clear_default_dimensions() # remove persisted default dimensions, if any
+    yield
+```
 
 ### Functional testing
 
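The hunk ends at the `### Functional testing` heading, so its body is not shown in this diff. As a hedged illustration of the idea only, assuming pytest's `capsys` fixture and a `lambda_handler` decorated with `@metrics.log_metrics` (neither appears in the diff above), a functional test can capture the EMF blob flushed to stdout and assert on it:

```python
# Sketch only: capture the EMF JSON that log_metrics prints to stdout and assert on it.
# `lambda_handler` and `lambda_context` are assumed to be defined/fixtured elsewhere.
import json

def test_log_metrics(capsys, lambda_context):
    lambda_handler({}, lambda_context)

    output = capsys.readouterr().out.strip()
    metrics_output = json.loads(output)

    # EMF metadata lives under the _aws key
    assert "ExampleApplication" in metrics_output["_aws"]["CloudWatchMetrics"][0]["Namespace"]
```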