README.md: 2 additions & 2 deletions
@@ -313,7 +313,7 @@ For more information about the Core ML implementation please refer to PR [#566](
 
 ## NVIDIA GPU support via cuBLAS
 
-With NVIDIA cards, the Encoder processing can be offloaded to the GPU to a large extend through cuBLAS.
+With NVIDIA cards the Encoder processing can to a large extent be offloaded to the GPU through cuBLAS.
 
 First, make sure you have installed `cuda`: https://developer.nvidia.com/cuda-downloads
 
 Now build `whisper.cpp` with cuBLAS support:
@@ -325,7 +325,7 @@ WHISPER_CUBLAS=1 make -j
 
 ## OpenCL GPU support via CLBlast
 
-For cards and integrated GPUs that support OpenCL, the Encoder processing can be largely offloaded to the GPU through CLBlast. This is especially useful for users with AMD APU's or low end devices for up to ~2x speedup.
+For cards and integrated GPUs that support OpenCL, the Encoder processing can be largely offloaded to the GPU through CLBlast. This is especially useful for users with AMD APUs or low end devices for up to ~2x speedup.
 
 First, make sure you have installed `CLBlast` for your OS or Distribution: https://github.com/CNugteren/CLBlast
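The build steps touched by this diff can be sketched as a short shell session. Only the `WHISPER_CUBLAS=1 make -j` invocation appears in the diff itself; the repository URL and the CLBlast build flag are assumptions filled in for illustration, not shown in this commit.

```shell
# Sketch of the two GPU-accelerated build paths described in the README.
# Assumes a local checkout of whisper.cpp with its standard Makefile.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp

# NVIDIA GPUs: offload the Encoder through cuBLAS
# (requires the CUDA toolkit from https://developer.nvidia.com/cuda-downloads)
WHISPER_CUBLAS=1 make -j

# OpenCL-capable GPUs/APUs: offload through CLBlast instead
# (flag name assumed by analogy with the cuBLAS flag; install CLBlast first)
# WHISPER_CLBLAST=1 make -j
```

Either path produces a GPU-enabled build of the same binaries; the flags are mutually independent build-time switches, so a plain `make -j` still yields the CPU-only build.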