
Create benchmark for fluent.runtime (fixes #85) #87


Merged: 1 commit merged into projectfluent:master on Jan 29, 2019

Conversation

@Pike (Contributor) commented on Jan 27, 2019

I haven't figured out how to actually run this on Travis.

I'd love it if these could be extra jobs, maybe just on py2.7, py3.7, pypy2, and pypy3?

That reminds me: we're not testing py3.7 at all yet, and we need to. Which is partly #70.

This has a benchmark for imports, one for bundle generation, and
one for a pseudo template.

The data here should inform downstream consumers what to cache.
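
For context, here is a rough sketch (not the code added in this PR) of what the bundle-generation case could look like as a pytest-benchmark test; FluentBundle, add_messages, and format are the fluent.runtime names as I recall them and may not match what the benchmark here actually uses.

```python
# Hypothetical sketch, not the benchmark in this PR: time how long it takes
# to build a FluentBundle from FTL source. This is the kind of number that
# tells downstream consumers whether caching the bundle is worthwhile.
from fluent.runtime import FluentBundle

FTL_SOURCE = "hello = Hello, { $name }!\n"

def make_bundle():
    bundle = FluentBundle(["en"], use_isolating=False)
    bundle.add_messages(FTL_SOURCE)  # parse the FTL and register its messages
    return bundle

def test_bundle_generation(benchmark):
    # pytest-benchmark calls make_bundle repeatedly and reports the timings.
    bundle = benchmark(make_bundle)
    value, errors = bundle.format("hello", {"name": "World"})
    assert errors == []
```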
@Pike (Contributor, Author) commented on Jan 28, 2019

Luke, I guess you're the best to review this?

I intentionally didn't try to solve for the case that your benchmarks served, i.e., comparing fluent in a hot loop to gettext in a hot loop.

I'd rather try to see what we intend to look at, and then see if there are equivalents in gettext.

Also, we should run these in automation, but I'm not yet sure how best to do that.

@spookylukey (Collaborator) commented:

This all looks good to me, and I was able to run the benchmarks following your instructions.

Regarding running in automation, it is relatively easy to add things to the Travis matrix without adding all combinations - https://docs.travis-ci.com/user/customizing-the-build/#explicitly-including-jobs
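
For reference, an explicit include along those lines might look roughly like this in .travis.yml; the TOXENV=benchmark environment is hypothetical and not something this repository defines.

```yaml
# Sketch only: add benchmark-only jobs without expanding the whole build matrix.
matrix:
  include:
    - python: "3.7"
      dist: xenial           # py3.7 needs the xenial image on Travis
      env: TOXENV=benchmark  # hypothetical tox environment for the benchmarks
    - python: "pypy3"
      env: TOXENV=benchmark
```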

But I have no idea what should be added - we presumably want something that will track the performance of these things over time? Is it even possible to do that reliably using cloud services? I found an article on benchmarking in the cloud which suggests not.

I found this issue - travis-ci/travis-ci#352 - but it didn't turn up anything immediately useful.

@Pike merged commit d8d8a1d into projectfluent:master on Jan 29, 2019
@Pike deleted the benchmark branch on Jan 29, 2019 at 10:31
@Pike (Contributor, Author) commented on Jan 29, 2019

Thanks for the review.

For automation, I think I want to wait on #70 and then add running the benchmarks as single jobs. Once that works, we can change the automation on PRs to run the benchmark on both master and PR, and compare the two. I think running the benchmark twice on the same machine is the only plausible way to get numbers to compare.
