Source code can be formatted using `make fmt`.
This formats not only the Elixir code, but also the code under [`native/`](./native/).

### Consensus spec tests

You can run all of them with:

```shell
make spec-test
```

Or run only those of a specific config with:

```shell
make spec-test-config-`config`

# Some examples
make spec-test-config-mainnet
make spec-test-config-minimal
make spec-test-config-general
```

Or those of a single runner, in all configs, with:

```shell
make spec-test-runner-`runner`

# Some examples
make spec-test-runner-ssz_static
make spec-test-runner-bls
make spec-test-runner-operations
```

The complete list of test runners can be found [here](https://github.com/ethereum/consensus-specs/tree/dev/tests/formats).

If you want to specify both a config and a runner:

```shell
make spec-test-mainnet-operations
make spec-test-minimal-epoch_processing
make spec-test-general-bls
```

More advanced filtering (e.g. by fork or handler) will be re-added, but if you only want to run a specific test, you can always do that manually with:

```shell
mix test --no-start test/generated/<config>/<fork>/<runner>.exs:<line_of_your_testcase>
```

You can put a `*` in any directory (e.g. the config) you don't want to filter by, although that won't work when also specifying the line of the test case.
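For instance, following the `test/generated/<config>/<fork>/<runner>.exs` path pattern above, a sketch of running one runner across every config and fork could look like this (illustrative only; the actual generated paths depend on the vendored spec-test version):

```shell
# "*" matches any config and any fork directory; the shell expands the globs
mix test --no-start test/generated/*/*/operations.exs
```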

> [!NOTE]
> We specify the `--no-start` flag to stop *ExUnit* from starting the application, to reduce resource consumption.

### Docker

The repo includes a `Dockerfile` for the consensus client. It can be built with:

```shell
docker build -t consensus .
```

Then you run it with `docker run`, adding CLI flags as needed:

```shell
docker run consensus --checkpoint-sync <url> --network <network> ...
```

## Testing Environment with Kurtosis

To test the node locally, we can simulate other nodes and start from genesis using [`Kurtosis`](https://docs.kurtosis.com/) and the Lambda Class fork of [`ethereum-package`](https://github.com/lambdaclass/ethereum-package.git).

```shell
make kurtosis.clean
make kurtosis.purge
```

## Live Metrics

When running the node, metrics are available at [`http://localhost:9568/metrics`](http://localhost:9568/metrics) in Prometheus format.
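A quick way to check the endpoint from a terminal (assuming the node is running locally) is:

```shell
# Print the first few exposed metric lines in Prometheus text format
curl -s http://localhost:9568/metrics | head -n 5
```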

### Grafana

A docker-compose is available at [`metrics/`](./metrics) with a Grafana-Prometheus setup preloaded with dashboards that expose the data.
To run it, install [Docker Compose](https://docs.docker.com/compose/) and execute:

```shell
make grafana-up
```

After that, open [`http://localhost:3000/`](http://localhost:3000/) in a browser.
The default username and password are both `admin`.

To stop the containers, run `make grafana-down`. To clean up the metrics data, run `make grafana-clean`.

## Benchmarks

Several benchmarks are provided in the `/bench` directory. They are all standard Elixir scripts, so they can be run as such. For example:

```bash
mix run bench/byte_reversal.exs
```

Some of the benchmarks require a state or blocks to be available in the db. The easiest way to get them is to run `make checkpoint-sync`, which downloads an anchor state and block for mainnet and starts optimistic sync. If the benchmark requires additional blocks, wait until the first chunk is downloaded and block processing has executed at least once.
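Concretely, the preparation step above is just the Makefile target mentioned in the text:

```shell
# Download an anchor state and block for mainnet and start optimistic sync,
# so the state/block-dependent benchmarks have data in the db
make checkpoint-sync
```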

Some benchmarks need to be executed with `--mode db` so that the application doesn't replace the store. The flag is added at the end, like so:

```bash
mix run <script> --mode db
```

A quick summary of the available benchmarks:

- `deposit_tree`: measures the time of saving and loading the "execution chain" state, mainly to test how much it costs to save and load a realistic deposit tree. Uses Benchee. The conclusion: the cost is very low (on the order of μs).
- `byte_reversal`: compares three different methods for byte reversal as a bitlist/bitvector operation. It concludes that using numbers as the internal representation for those types would be the most efficient, so if we ever need to improve them, that would be a good starting point.
- `shuffling_bench`: compares different methods for shuffling: shuffling a list in one go vs. computing each shuffle one by one. Shuffling the full list proved to be 10x faster.
- `block_processing`: builds a fork-choice store with an anchor block and state, then uses the next available block to apply the `on_block`, `on_attestation` and `on_attester_slashing` handlers, running them 30 times. To run this, at least 2 blocks and a state must be available in the db. It also requires you to manually set the slot to the beginning of an epoch. Try the slot that appeared when you ran checkpoint sync (you'll see something like `[Checkpoint sync] Received beacon state and block slot=9597856` in the logs).
- `multiple_block_processing`: _currently under revision_. Similar to `block_processing`, but over a range of slots, so the state transition is performed multiple times. By performing more than one state transition, it exercises the caches and gives a more average-case measurement.
- `SSZ benchmarks`: compare our own library with the Rust NIF SSZ library. To run either of these two benchmarks, you first need to have a `BeaconState` in the database.
  - `encode_decode_bench`: compares the libraries at encoding and decoding a `Checkpoint` and a `BeaconState` container.
  - `hash_tree_root_bench`: compares the libraries at computing the hash tree root of a `BeaconState` and a packed list of numbers.

## Profiling

### QCachegrind

To install [QCachegrind](https://github.com/KDE/kcachegrind) via [Homebrew](https://formulae.brew.sh/formula/qcachegrind), run:

```sh
brew install qcachegrind
```

To build a qcachegrind profile, run the following inside iex:

```elixir
LambdaEthereumConsensus.Profile.build()
```

Options and details are in the `Profile` package. After the profile trace is generated, you can open it in qcachegrind with:

```shell
qcachegrind callgrind.out.<trace_name>
```

The traces can also be grouped by function instead of process before viewing them in qcachegrind.

### etop

Another useful tool for quickly diagnosing processes that take too much CPU is `:etop`, similar to the UNIX `top` command. It is included with Erlang by default, and enabled through the `:observer` extra application in `mix.exs`. You can run it with:

```elixir
:etop.start()
```

In particular, the `reds` metric stands for `reductions`, which can roughly be interpreted as the number of calls a function got.
This can be used to identify infinite loops or busy waits.

Also of note is the `:sort` option, which allows sorting the list by, for example, message queue size:

```elixir
:etop.start(sort: :msg_q)
```

_Note: if you want to use the `:observer` GUI and not just `etop`, you'll probably need `:wx` set in your extra applications as well. An easy way to do this is to set the `EXTRA_APPLICATIONS` environment variable to `WX` (`export EXTRA_APPLICATIONS=WX`) before starting the node._

### eFlambè

When optimizing code, it can be useful to have a graphical way to find bottlenecks in the system.
In that case, you can use [eFlambè](https://github.com/Stratus3D/eflambe) to generate flamegraphs of specific functions.
The following code will capture information from 10 calls to `Handlers.on_block/2`, dumping it in different files named \<timestamp\>-eflambe-output.bggg.
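A sketch of that capture, assuming eFlambè's `:eflambe.capture/2` API (an MFA tuple plus the number of calls to trace) and that `Handlers` is aliased to the module defining `on_block/2` in this codebase:

```elixir
# Sketch (assumed API): trace the next 10 calls to Handlers.on_block/2.
# Each traced call is dumped to a <timestamp>-eflambe-output.bggg file.
:eflambe.capture({Handlers, :on_block, 2}, 10)
```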