
Remove unneeded macro witchery #23249


Merged
merged 1 commit on Mar 10, 2015

Conversation

@tbu- (Contributor) commented Mar 10, 2015

No description provided.

@rust-highfive (Contributor)

r? @alexcrichton

(rust_highfive has picked a reviewer for you, use r? to override)

@huonw (Member) commented Mar 10, 2015

Does this have the same performance as the old version? (It certainly looks like it should, but that doesn't mean that it actually does.)

@tbu- (Contributor, Author) commented Mar 10, 2015

@huonw I don't know, but it should. The functions are marked #[inline], so the optimizer should inline them at each call site, producing the same code the macros used to expand in place.
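To make the tradeoff concrete, here is a hypothetical sketch of the pattern under discussion: small #[inline] helper functions standing in for decoding macros. The helpers are modeled loosely on the UTF-8 helpers in core::str; the names and bodies below are illustrative, not the PR's exact code.

```rust
// Hypothetical sketch of the macro-vs-#[inline]-function tradeoff.
// Names are modeled on core::str helpers; bodies are illustrative only.

/// Mask out the leading-ones prefix of a UTF-8 first byte,
/// given the total width of the encoded character in bytes.
#[inline]
fn utf8_first_byte(byte: u8, width: u32) -> u32 {
    (byte & (0x7F >> width)) as u32
}

/// Shift in a continuation byte's 6 payload bits.
#[inline]
fn utf8_acc_cont_byte(ch: u32, byte: u8) -> u32 {
    (ch << 6) | (byte & 0x3F) as u32
}

fn main() {
    // Decode "é" (U+00E9), encoded in UTF-8 as the 2-byte sequence [0xC3, 0xA9].
    let init = utf8_first_byte(0xC3, 2);
    let ch = utf8_acc_cont_byte(init, 0xA9);
    assert_eq!(char::from_u32(ch), Some('é'));
    println!("decoded U+{:04X}", ch);
}
```

With #[inline], the definitions are made available for cross-crate inlining, so in an optimized build each call should compile down to the same instructions the macro expansion produced.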

@huonw (Member) commented Mar 10, 2015

Yes, as I said, that's the theory, but in practice the optimizer may not cooperate.

@huonw (Member) commented Mar 10, 2015

It'd be good to at least run a (possibly out-of-tree) microbenchmark of, say, for c in s.chars() { black_box(c) } on one or two strings. (The str tests in libcollections contain some strings with non-ASCII data if you need examples.)
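A minimal out-of-tree version of the suggested microbenchmark can be sketched on modern Rust using the stable std::hint::black_box and std::time instead of the unstable test::Bencher harness of the era. The strings below are placeholders, loosely following the ASCII/non-ASCII mix suggested:

```rust
// Hand-rolled timing sketch of `for c in s.chars() { black_box(c) }`,
// assuming stable `std::hint::black_box` (Rust 1.66+); iteration count
// and sample strings are arbitrary choices for this sketch.
use std::hint::black_box;
use std::time::Instant;

fn main() {
    // One ASCII and one non-ASCII sample, as suggested.
    let samples = ["the quick brown fox", "ประเทศไทย中华Việt Nam"];
    for s in samples {
        let iters = 1_000_000u32;
        let start = Instant::now();
        for _ in 0..iters {
            // black_box keeps the optimizer from deleting the loop body.
            for c in black_box(s).chars() {
                black_box(c);
            }
        }
        let per_iter = start.elapsed().as_nanos() as f64 / iters as f64;
        println!("{:>8.1} ns/iter for {:?}", per_iter, s);
    }
}
```

Without black_box, an optimized build can reduce the whole loop to nothing and report meaninglessly fast numbers.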

@tbu- (Contributor, Author) commented Mar 10, 2015

I'm consistently getting worse results for the new solution...

running 2 tests
test bench::bench_new ... bench:      5439 ns/iter (+/- 163) = 11 MB/s
test bench::bench_old ... bench:      5307 ns/iter (+/- 30) = 12 MB/s

@bluss (Member) commented Mar 10, 2015

I don't think 12 MB/s is a reasonable number anyway; we need to examine that benchmark.

@tbu- (Contributor, Author) commented Mar 10, 2015

With #[inline(always)] the results are a bit closer together:

running 2 tests
test bench::bench_new ... bench:      5392 ns/iter (+/- 46) = 12 MB/s
test bench::bench_old ... bench:      5315 ns/iter (+/- 21) = 12 MB/s

The benchmark can be found here: https://gist.github.com/tbu-/c49e5a5ec4a4426b6d75
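For reference, the #[inline(always)] variant tried above differs from #[inline] in that it asks the backend to inline the callee at every call site rather than merely hinting that inlining is profitable. A hypothetical illustration (the helper name and body are made up for this sketch, loosely modeled on core::str):

```rust
// Sketch of #[inline(always)]: unlike the #[inline] hint, this requests
// inlining at every call site. Helper is illustrative, not the PR's code.
#[inline(always)]
fn utf8_is_cont_byte(byte: u8) -> bool {
    // UTF-8 continuation bytes have the bit pattern 10xxxxxx.
    (byte & 0xC0) == 0x80
}

fn main() {
    assert!(utf8_is_cont_byte(0xA9));  // 0b10101001: continuation byte
    assert!(!utf8_is_cont_byte(0xC3)); // 0b11000011: leading byte
    println!("ok");
}
```

Forcing inlining can close small gaps like the one measured here, at the cost of code size; it is usually reserved for trivial one-liners.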

@alexcrichton (Member)

@tbu- are you perhaps forgetting to optimize the benchmark you gave? I'm getting some pretty wildly different results here:

$ rustc bench.rs --test -O       
$ ./bench --bench 
running 2 tests
test bench::bench_new ... bench:        49 ns/iter (+/- 1) = 1326 MB/s
test bench::bench_old ... bench:        49 ns/iter (+/- 2) = 1326 MB/s

test result: ok. 0 passed; 0 failed; 0 ignored; 2 measured

@tbu- (Contributor, Author) commented Mar 10, 2015

@alexcrichton Indeed. I'd assumed a plain rustc --test build would behave exactly as cargo bench does, which compiles with optimizations enabled.

So this seems safe to merge.

@alexcrichton (Member)

@bors: r+ fb297d1

@bors (Collaborator) commented Mar 10, 2015

⌛ Testing commit fb297d1 with merge 6048ba8...

@huonw (Member) commented Mar 10, 2015

@tbu- thanks for checking that! :)

@bors (Collaborator) commented Mar 10, 2015

bors merged commit fb297d1 into rust-lang:master on Mar 10, 2015