Commit 027ecee

Fix link
1 parent 6973ca6 commit 027ecee

File tree

1 file changed: +1 -1 lines changed

_posts/2024-08-07-flexattention.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -150,7 +150,7 @@ def soft_cap(score, b, h, q_idx, kv_idx):
     return score
 ```
 
-Note that we also automatically generate the backwards pass from the forwards pass here. Also, although this implementation is semantically correct, we likely want to use a tanh approximation in this case for performance reasons. See [attention-gym](https://github.com/pytorch-labs/attention-gym/blob/738268eae279c48dc8c4d1c6f40b3cfaec648831/attn\_gym/mods/softcapping.py\#L1) for more details.
+Note that we also automatically generate the backwards pass from the forwards pass here. Also, although this implementation is semantically correct, we likely want to use a tanh approximation in this case for performance reasons. See [attention-gym](https://github.com/pytorch-labs/attention-gym/blob/main/attn_gym/mods/softcapping.py) for more details.
 
 ### Causal Mask
 
````
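For context on the paragraph being edited above: a tanh soft-capping `score_mod` for FlexAttention can be sketched as below. This is a minimal illustration, not the attention-gym implementation the link points to; the cap value of 30.0 and the use of plain `torch.tanh` (rather than the faster approximation the post recommends) are assumptions made here for clarity.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

SOFT_CAP = 30.0  # illustrative cap value; an assumption, not taken from the post

def soft_cap(score, b, h, q_idx, kv_idx):
    # Squash each raw attention score into (-SOFT_CAP, SOFT_CAP) before softmax.
    return SOFT_CAP * torch.tanh(score / SOFT_CAP)

# Hypothetical usage with [batch, heads, seq_len, head_dim] tensors.
q = k = v = torch.randn(1, 8, 128, 64)
out = flex_attention(q, k, v, score_mod=soft_cap)
```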