---
layout: post
title: "Announcing the Portable SIMD Project Group"
author: Jubilee and Lokathor
description: "Announcing the Portable SIMD Project Group"
team: the library team <https://www.rust-lang.org/governance/teams/library>
---

We're announcing the start of the _Portable SIMD Project Group_ within the Libs team. This group is dedicated to making a portable SIMD API available to stable Rust users.

The Portable SIMD Project Group is led by [@calebzulawski](https://github.com/calebzulawski), [@Lokathor](https://github.com/Lokathor), and [@workingjubilee](https://github.com/workingjubilee).

## What are project groups?

Rust uses [project groups](https://rust-lang.github.io/rfcs/2856-project-groups.html) to help coordinate work.
They're a place for people to get involved in helping shape the parts of Rust that matter to them.

## What is SIMD?

SIMD stands for Single Instruction, Multiple Data.
It lets the CPU apply a single instruction to a "vector" of data.
The vector is a single extra-wide CPU register made of multiple "lanes" of the same data type.
You can think of it as being *similar* to an array.
Instead of processing each lane individually, all lanes have the same operation applied *simultaneously*.
This lets you transform data much faster than with standard code.
Not every problem can be accelerated with "vectorized" code, but for multimedia and list-processing applications there can be significant gains.
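The lane-wise model above can be sketched in plain Rust. This scalar loop only illustrates the semantics; actual SIMD hardware performs all four additions in a single instruction:

```rust
// A scalar sketch of the SIMD idea: a 4-lane `f32` "vector" is
// conceptually four values operated on together, lane by lane.
fn add_lanes(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0; 4];
    for lane in 0..4 {
        out[lane] = a[lane] + b[lane];
    }
    out
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [10.0, 20.0, 30.0, 40.0];
    // Every lane is added "at once" in the SIMD model.
    println!("{:?}", add_lanes(a, b)); // [11.0, 22.0, 33.0, 44.0]
}
```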

## Why do you need to make it portable?

Different chip vendors offer different SIMD instructions.
Some of these are available in Rust's [`std::arch`](https://doc.rust-lang.org/core/arch/index.html) module.
You *can* build vectorized functions using that, but at the cost of maintaining a different version for each CPU you want to support.
You can also *not* write vectorized operations and hope that LLVM's optimizations will "auto-vectorize" your code.
However, the auto-vectorizer is easily confused and can fail to optimize "obvious" vector tasks.
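As one example of the kind of loop the auto-vectorizer usually *does* handle, consider a simple elementwise operation over a slice. Whether it is actually vectorized still depends on the optimization level, the target, and small details of how the code is written, which is exactly the fragility described above:

```rust
// A classic auto-vectorization candidate: with optimizations enabled,
// LLVM will often rewrite this elementwise loop to use SIMD
// instructions, but nothing in the source guarantees it.
fn scale(values: &mut [f32], factor: f32) {
    for v in values.iter_mut() {
        *v *= factor;
    }
}

fn main() {
    let mut data = [1.0f32, 2.0, 3.0, 4.0];
    scale(&mut data, 2.0);
    println!("{:?}", data); // [2.0, 4.0, 6.0, 8.0]
}
```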

The portable SIMD API will enable writing SIMD code just once using a high-level API.
By explicitly communicating your intent to the compiler, it's better able to generate the best possible final code.
This is still only a best-effort process.
If your target doesn't support a desired operation in SIMD, the compiler will fall back to using scalar code, processing one lane at a time.
The details of what's available depend on the build target.
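To illustrate the scalar fallback, here is a hypothetical sketch. The `F32x4` type and its `mul` method below are illustrative names only, not the actual `std::simd` API; the point is that code written against a portable vector type can always be lowered to a one-lane-at-a-time loop when no matching SIMD instruction exists:

```rust
// Hypothetical portable 4-lane vector type (illustrative, not `std::simd`).
#[derive(Debug, PartialEq)]
struct F32x4([f32; 4]);

impl F32x4 {
    // The scalar fallback for lane-wise multiplication: when the target
    // has no suitable SIMD instruction, each lane is processed in turn.
    fn mul(self, rhs: F32x4) -> F32x4 {
        let mut out = [0.0; 4];
        for lane in 0..4 {
            out[lane] = self.0[lane] * rhs.0[lane];
        }
        F32x4(out)
    }
}

fn main() {
    let x = F32x4([1.0, 2.0, 3.0, 4.0]);
    let y = F32x4([2.0, 2.0, 2.0, 2.0]);
    println!("{:?}", x.mul(y)); // F32x4([2.0, 4.0, 6.0, 8.0])
}
```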

We intend to release the Portable SIMD API as `std::simd`.
We will cover as many use cases as we can, but it might still be appropriate for you to use `std::arch` directly.
For that reason the `std::simd` types will also be easily convertible to `std::arch` types where needed.

## How can I get involved?

Everyone can get involved!
No previous experience necessary.
If you'd like to help make portable SIMD a reality you can visit our [GitHub repository](https://github.com/rust-lang/project-portable-simd) or reach out on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/257879-project-portable-simd) and say hi! :wave: