perf: no deep cloning #72

Closed · wants to merge 7 commits

Conversation

@jeswr (Contributor) commented Oct 28, 2023

An alternative to #71 which removes the need for any deep cloning.

All unit & spec tests in the jsonld-streaming-parser pass with these changes.

I have also run a (small) benchmark of these changes. As shown below, the time taken to parse a context decreases significantly.

Before:
Parse a context that has not been cached; and without caching in place x 78.17 ops/sec ±0.73% (78 runs sampled)
Parse a context object that has not been cached x 2,280 ops/sec ±0.65% (91 runs sampled)

After:
Parse a context that has not been cached; and without caching in place x 123 ops/sec ±0.78% (84 runs sampled)
Parse a context object that has not been cached x 5,417 ops/sec ±0.70% (87 runs sampled)
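The speedup comes from avoiding deep copies of the parsed context. A minimal sketch of the general idea (not the actual jsonld-context-parser code; the function and property names here are illustrative): instead of deep-cloning a parent context before applying overrides, a child context can share the parent's term definitions by reference and only allocate a new top-level object, which is safe as long as shared entries are treated as immutable.

```javascript
// Hypothetical illustration of the optimization: deep-cloning a parent
// context on every parse vs. sharing its term definitions by reference.

// Deep clone: every nested term definition is copied on each call,
// costing time proportional to the full size of the context.
function extendContextDeep(parent, overrides) {
  const clone = JSON.parse(JSON.stringify(parent));
  return Object.assign(clone, overrides);
}

// Shallow merge: term definitions are shared by reference; only the
// top-level object is new. Cost is proportional to the number of
// top-level keys, not the depth of each term definition.
function extendContextShallow(parent, overrides) {
  return { ...parent, ...overrides };
}

const parent = {
  name: { '@id': 'http://schema.org/name' },
  image: { '@id': 'http://schema.org/image', '@type': '@id' },
};

const child = extendContextShallow(parent, {
  homepage: { '@id': 'http://xmlns.com/foaf/0.1/homepage', '@type': '@id' },
});

console.log(child.name === parent.name); // true: shared, not cloned
```

The shallow variant trades copying cost for an immutability discipline: any code that mutates a term definition in place would now corrupt every context sharing it, which is why such changes need the full unit and spec test suites to pass.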

@coveralls commented Oct 28, 2023

Pull Request Test Coverage Report for Build 6679792446

  • 31 of 31 (100.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 100.0%

Totals:
  • Change from base Build 6492290900: 0.0%
  • Covered Lines: 548
  • Relevant Lines: 548

💛 - Coveralls

@jeswr (Contributor, Author) commented Oct 29, 2023

Closing in favor of #73

@jeswr jeswr closed this Oct 29, 2023
rubensworks pushed a commit that referenced this pull request Nov 7, 2023
This significantly improves parsing performance.

I have also run a (small) benchmark of these changes. As shown below, the time taken to parse a context decreases significantly (in particular, it is now 20x faster to parse a VC context).

Before:
Parse a context that has not been cached; and without caching in place x 78.17 ops/sec ±0.73% (78 runs sampled)
Parse a context object that has not been cached x 2,280 ops/sec ±0.65% (91 runs sampled)

In #72:
Parse a context that has not been cached; and without caching in place x 123 ops/sec ±0.78% (84 runs sampled)
Parse a context object that has not been cached x 5,417 ops/sec ±0.70% (87 runs sampled)

After (in this PR):
Parse a context that has not been cached; and without caching in place x 1,714 ops/sec ±0.78% (84 runs sampled)
Parse a context object that has not been cached x 8,902 ops/sec ±0.70% (87 runs sampled)