Commit c6a85b1

Merge pull request #110 from yanboliang/uplink

Update README link

2 parents: 60a6df3 + b21ada3

File tree: 1 file changed (+1, −1 lines)


README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -29,7 +29,7 @@ We also supported [Mixtral 8x7B](https://mistral.ai/news/mixtral-of-experts/) wh
 
 Note that the benchmarks run on an 8xA100-80GB, power limited to 330W with a hybrid cube mesh topology. Note that all benchmarks are run at *batch size=1*, making the reported tokens/s numbers equivalent to "tokens/s/user". In addition, they are run with a very small prompt length (just 5 tokens).
 
-For more details about Mixtral 8x7B, please check [this page](./mixtral-moe) or this [note](https://chilli.substack.com/p/short-supporting-mixtral-in-gpt-fast).
+For more details about Mixtral 8x7B, please check [this page](./mixtral-moe) or this [note](https://thonking.substack.com/p/short-supporting-mixtral-in-gpt-fast).
 
 ## Community
```

0 commit comments