
Replace legacy __sync_fetch_and_add with __atomic_fetch_add #5


Merged 1 commit into sysprog21:master on Jun 11, 2021

Conversation

henrybear327 (Contributor) commented Jun 11, 2021

(__ATOMIC_SEQ_CST was picked to enforce total ordering with all other __ATOMIC_SEQ_CST operations.)


Signed-off-by: Chun-Hung Tseng <henrybear327@gmail.com>
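
For context only (not code from this PR), a minimal sketch of what the swap looks like; the counter variable is illustrative:

#include <stdio.h>

int main(void)
{
    int counter = 0;

    /* Legacy GCC __sync builtin: always a full barrier, with no way to
     * choose a weaker memory order. */
    __sync_fetch_and_add(&counter, 1);

    /* Newer GCC/Clang __atomic builtin: the memory order is explicit.
     * __ATOMIC_SEQ_CST keeps the total ordering the legacy call implied. */
    __atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);

    printf("counter = %d\n", counter); /* prints 2 */
    return 0;
}

Functionally both calls perform the same atomic increment; the __atomic form simply states the ordering explicitly, which is why __ATOMIC_SEQ_CST is the drop-in choice here.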
jserv merged commit b9ca2d3 into sysprog21:master Jun 11, 2021

jserv (Contributor) commented Jun 11, 2021

Thank you @henrybear327 for contributing! I have amended the git commit message.

henrybear327 (Contributor, Author)

Thank you, @jserv!

linD026 added a commit to linD026/concurrent-programs that referenced this pull request Apr 22, 2023
Currently, reader_func counts the number of grace periods based on the
value of dut, which is updated by mthpc_rcu_replace_pointer. However,
the value of dut actually represents how many times we have updated the
value, not the number of grace periods.

Also, the original method might count incorrectly if one reader updates
gp_idx while other readers, which observed the same dut value as
prev_count, still rely on the old gp_idx to increment the counter.

To fix the problem, instead of relying on the dut value to advance
gp_idx, we increment gp_idx manually on the write side. Then we can
easily determine the grace period on the read side.

For the dut value, we simply check that the old count is not greater
than the newest one.

Additionally, since synchronize_rcu is quite slow, readers generally
pass through the critical section during the first grace period. To
generate more realistic output, we add a delay on the read side before
entering the critical section.

Before:

100 reader(s), 5 update run(s), 6 grace period(s)
[grace period #0]  100 reader(s)
[grace period #1]    0 reader(s)
[grace period #2]    0 reader(s)
[grace period #3]    0 reader(s)
[grace period #4]    0 reader(s)
[grace period #5]    0 reader(s)

After adding the delay:

100 reader(s), 5 update run(s), 6 grace period(s)
[grace period #0]   76 reader(s)
[grace period #1]    0 reader(s)
[grace period #2]    1 reader(s)
[grace period #3]    0 reader(s)
[grace period #4]    3 reader(s)
[grace period #5]   20 reader(s)
linD026 added a commit to linD026/concurrent-programs that referenced this pull request Apr 22, 2023
Currently, reader_func counts the number of grace periods based on the
value of dut, which is updated by rcu_assign_pointer. However, the
value of dut actually represents how many times we have updated the
value, not the number of grace periods.

Also, the original method might count incorrectly if one reader updates
gp_idx while other readers, which observed the same dut value as
prev_count, still rely on the old gp_idx to increment the counter.

To fix the problem, instead of relying on the dut value to advance
gp_idx, we increment gp_idx manually on the write side. Then we can
easily determine the grace period on the read side.

For the dut value, we simply check that the old count is not greater
than the newest one.

Additionally, since synchronize_rcu is quite slow, readers generally
pass through the critical section during the first grace period. To
generate more realistic output, we add a delay on the read side before
entering the critical section.

Before:

100 reader(s), 5 update run(s), 6 grace period(s)
[grace period #0]  100 reader(s)
[grace period #1]    0 reader(s)
[grace period #2]    0 reader(s)
[grace period #3]    0 reader(s)
[grace period #4]    0 reader(s)
[grace period #5]    0 reader(s)

After adding the delay:

100 reader(s), 5 update run(s), 6 grace period(s)
[grace period #0]   76 reader(s)
[grace period #1]    0 reader(s)
[grace period #2]    1 reader(s)
[grace period #3]    0 reader(s)
[grace period #4]    3 reader(s)
[grace period #5]   20 reader(s)
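
To make the counting scheme above concrete, here is a minimal, hedged sketch using C11 atomics and pthreads. It is not the repository's code: a usleep stands in for synchronize_rcu, the RCU read-side markers are only indicated in comments, and names such as gp_idx, grace_periods, reader and updater are illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define N_READERS 8
#define N_UPDATES 3

/* The write side bumps gp_idx once per grace period; readers only sample it. */
static atomic_uint gp_idx;
static atomic_uint grace_periods[N_UPDATES + 1];

static void *reader(void *arg)
{
    (void) arg;
    usleep(1000); /* delay before entering the critical section */

    /* rcu_read_lock() would go here in the real program */
    unsigned int idx = atomic_load(&gp_idx);
    atomic_fetch_add(&grace_periods[idx], 1);
    /* rcu_read_unlock() */

    return NULL;
}

static void *updater(void *arg)
{
    (void) arg;
    for (int i = 0; i < N_UPDATES; i++) {
        /* rcu_assign_pointer(dut, new) + synchronize_rcu() would go here;
         * the usleep stands in for the (slow) synchronize_rcu call. */
        usleep(500);
        atomic_fetch_add(&gp_idx, 1); /* one grace period has elapsed */
    }
    return NULL;
}

int main(void)
{
    pthread_t r[N_READERS], u;

    pthread_create(&u, NULL, updater, NULL);
    for (int i = 0; i < N_READERS; i++)
        pthread_create(&r[i], NULL, reader, NULL);

    for (int i = 0; i < N_READERS; i++)
        pthread_join(r[i], NULL);
    pthread_join(u, NULL);

    for (int i = 0; i <= N_UPDATES; i++)
        printf("[grace period #%d] %4u reader(s)\n", i,
               atomic_load(&grace_periods[i]));
    return 0;
}

Because each reader only samples gp_idx, which the updater alone advances, two readers that observe the same dut value can no longer race to bump the index, which is the miscounting the commit message describes.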