---
layout: post
title: "Announcing the Portable SIMD Project Group"
author: Jubilee and Lokathor
description: "Announcing the Portable SIMD Project Group"
team: the library team <https://www.rust-lang.org/governance/teams/library>
---
We're announcing the start of the _Portable SIMD Project Group_ within the Libs team. This group is dedicated to making a portable SIMD API available to stable Rust users.

The Portable SIMD Project Group is led by [@calebzulawski](https://github.com/calebzulawski), [@Lokathor](https://github.com/Lokathor), and [@workingjubilee](https://github.com/workingjubilee).

## What are project groups?
Rust uses [project groups](https://rust-lang.github.io/rfcs/2856-project-groups.html) to help coordinate work. They're a place for people to get involved in helping shape the parts of Rust that matter to them.

## What is SIMD?
SIMD stands for Single Instruction, Multiple Data. It lets the CPU apply a single instruction to a "vector" of data. The vector is a single extra-wide CPU register made of multiple "lanes" of the same data type. You can think of it as being *similar* to an array. Instead of processing each lane individually, all lanes have the same operation applied *simultaneously*. This lets you transform data much faster than with standard code. Not every problem can be accelerated with "vectorized" code, but for multimedia and list-processing applications there can be significant gains.
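
As a rough sketch of the difference, here is ordinary scalar Rust code that adds four `f32` values one lane at a time; with SIMD, a single 128-bit instruction can perform all four additions at once:

```rust
// Scalar version: four separate additions, one per loop iteration.
fn add_scalar(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0; 4];
    for i in 0..4 {
        out[i] = a[i] + b[i];
    }
    out
}

// A SIMD addition instead computes the whole result in one operation:
// [a0, a1, a2, a3] + [b0, b1, b2, b3] == [a0+b0, a1+b1, a2+b2, a3+b3]
```
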
## Why do you need to make it portable?

Different chip vendors offer different SIMD instructions. Some of these are available in Rust's [`std::arch`](https://doc.rust-lang.org/core/arch/index.html) module. You *can* build vectorized functions using that, but at the cost of maintaining a different version for each CPU you want to support. You can also *not* write vectorized operations and hope that LLVM's optimizations will "auto-vectorize" your code. However, the auto-vectorizer is easily confused and can fail to optimize "obvious" vector tasks.
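
As an illustration (a minimal sketch, assuming an `x86_64` target, where SSE is part of the baseline instruction set), the same four-lane addition written against `std::arch` is tied to one vendor's intrinsics; a separate version would be needed for NEON on ARM, and so on:

```rust
#[cfg(target_arch = "x86_64")]
fn add_sse(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::x86_64::{_mm_add_ps, _mm_loadu_ps, _mm_storeu_ps};
    let mut out = [0.0f32; 4];
    // SAFETY: SSE is available on every x86_64 CPU, and the pointers
    // point to four valid f32 values each.
    unsafe {
        let va = _mm_loadu_ps(a.as_ptr());
        let vb = _mm_loadu_ps(b.as_ptr());
        _mm_storeu_ps(out.as_mut_ptr(), _mm_add_ps(va, vb));
    }
    out
}
```
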
The portable SIMD API will enable writing SIMD code just once using a high-level API. By explicitly communicating your intent, you let the compiler generate the best possible final code. This is still only a best-effort process. If your target doesn't support a desired operation in SIMD, the compiler will fall back to using scalar code, processing one lane at a time. The details of what's available depend on the build target.
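
To give a feel for the goal, here is a toy sketch of "write once" vector code. The `F32x4` type below is purely illustrative and defined inline; it is not the real API, which has yet to be designed. The idea is that a portable vector type would let the compiler map a single addition to whatever SIMD the target offers, or to scalar fallback code:

```rust
use std::ops::Add;

// A toy stand-in for a portable four-lane vector type. A real portable
// SIMD type would map `+` to a single hardware SIMD instruction where
// the target supports it, and to scalar code where it doesn't.
#[derive(Clone, Copy, Debug)]
struct F32x4([f32; 4]);

impl Add for F32x4 {
    type Output = F32x4;
    fn add(self, rhs: F32x4) -> F32x4 {
        let mut out = [0.0; 4];
        for i in 0..4 {
            out[i] = self.0[i] + rhs.0[i];
        }
        F32x4(out)
    }
}

// The same source compiles on every target, with no vendor intrinsics.
fn add_portable(a: F32x4, b: F32x4) -> F32x4 {
    a + b
}
```
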
We intend to release the Portable SIMD API as `std::simd`. We will cover as many use cases as we can, but it might still be appropriate for you to use `std::arch` directly. For that reason the `std::simd` types will also be easily convertible to `std::arch` types where needed.

## How can I get involved?

Everyone can get involved! No previous experience necessary. If you'd like to help make portable SIMD a reality you can visit our [GitHub repository](https://github.com/rust-lang/project-portable-simd) or reach out on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/257879-project-portable-simd) and say hi! :wave: