Commit 991ceda

Add ExecuTorch Alpha blog post and other page updates (pytorch#1615)

* Add ExecuTorch Alpha blog post
* Updated PyTorch Edge page
* Updated PyTorch ExecuTorch page
* Updated GetStarted page to include ExecuTorch

Signed-off-by: Chris Abraham <cjyabraham@gmail.com>

1 parent 6f79fc4 commit 991ceda

File tree: 4 files changed, +73 −25 lines

_get_started/mobile.md (8 additions, 8 deletions)

@@ -1,21 +1,21 @@
 ---
 layout: get_started
-title: Mobile
-permalink: /get-started/mobile/
+title: ExecuTorch
+permalink: /get-started/executorch/
 background-class: get-started-background
 body-class: get-started
 order: 5
 published: true
 ---
 
-## Get Started with PyTorch Mobile
+## Get Started with PyTorch ExecuTorch
 
-As of PyTorch 1.3, PyTorch supports an end-to-end workflow from Python to deployment on iOS and Android.
-This is an early, experimental release that we will be building on in several areas over the coming months.
+<p>
+  <a href="https://pytorch.org/executorch/stable/index.html" class="btn btn-lg with-right-arrow">
+    ExecuTorch Documentation
+  </a>
+</p>
 
-Get started on [Android]({{ site.baseurl }}/mobile/android)
-
-Get started on [iOS]({{ site.baseurl }}/mobile/ios)
 
 <script page-id="mobile" src="{{ site.baseurl }}/assets/menu-tab-selection.js"></script>
 <script src="{{ site.baseurl }}/assets/get-started-sidebar.js"></script>

_posts/2024-04-30-executorch-alpha.md (51 additions)

@@ -0,0 +1,51 @@
+---
+layout: blog_detail
+title: "ExecuTorch Alpha: Taking LLMs and AI to the Edge with Our Community and Partners"
+---
+
+We are excited to announce the release of [ExecuTorch alpha](https://github.com/pytorch/executorch), focused on deploying large language models (LLMs) and large ML models to the edge, stabilizing the API surface, and improving our installation process. It has been an exciting few months since [our 0.1 (preview) release](https://pytorch.org/blog/pytorch-edge/), built in collaboration with our partners at Arm, Apple, and Qualcomm Technologies, Inc.
+
+In this post we’ll discuss our full support for Meta’s Llama 2, early support for Meta’s Llama 3, broad model support in ExecuTorch, and highlight the important work our partners have done to move us forward.
+
+## Large Language Models on Mobile
+
+Mobile devices are highly constrained for compute, memory, and power. To bring LLMs to these devices, we heavily leverage quantization and other techniques to pack these models appropriately.
+
+ExecuTorch alpha supports 4-bit post-training quantization using GPTQ. We've provided broad device support on CPU by landing dynamic shape support and new dtypes in XNNPACK. We've also made significant improvements in export and lowering, reduced memory overhead, and improved runtime performance. This enables running Llama 2 7B efficiently on the iPhone 15 Pro, iPhone 15 Pro Max, the Samsung Galaxy S22, S23, and S24 phones, and other edge devices. [Early support](https://github.com/pytorch/executorch/releases/tag/v0.2.0) for [Llama 3 8B](https://ai.meta.com/blog/meta-llama-3/) is also included. We are continually improving tokens/sec on various edge devices, and you can visit GitHub for the [latest performance numbers](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/README.md).
+
+We're working closely with our partners at Apple, Arm, and Qualcomm Technologies to delegate to GPU and NPU for performance through the Core ML, MPS, TOSA, and Qualcomm AI Stack backends respectively.
+
+## Supported Models
+
+We remain committed to supporting an ever-expanding list of models with ExecuTorch. Since the preview release, we have significantly expanded our tested models across NLP, vision, and speech, with full details [in our release notes](https://github.com/pytorch/executorch/releases/tag/v0.2.0). Although support for on-device LLMs is early, we anticipate that most traditional models will function seamlessly out of the box, with delegation to XNNPACK, Core ML, MPS, TOSA, and HTP for performance. If you encounter any problems, please open [a GitHub issue](https://github.com/pytorch/executorch/issues) with us.
+
+## Productivity
+
+Deploying performant models tuned for specific platforms often requires deep visibility into on-device runtime data to determine the right changes to make in the original PyTorch model. With ExecuTorch alpha, we provide a powerful SDK with observability throughout the process, from model authoring to deployment, including delegate and hardware-level information.
+
+The ExecuTorch SDK was enhanced to include better debugging and profiling tools. Because ExecuTorch is built on PyTorch, its debugging capabilities include the ability to map operator nodes back to the original Python source code for more efficient anomaly resolution and performance tuning of both delegated and non-delegated model instances. You can learn more about the ExecuTorch SDK [here](https://github.com/pytorch/executorch/blob/main/examples/sdk/README.md).
+
+## Partnerships
+
+ExecuTorch has only been possible because of strong collaborations with Arm, Apple, and Qualcomm Technologies. The collaboration behind the initial launch of ExecuTorch continues as we support LLMs and large AI models on the edge for PyTorch. As we’ve seen with this early work for ExecuTorch alpha, there are unique challenges with these larger models, and we’re excited to develop in the open.
+
+We also want to highlight the great partnership with Google on [XNNPACK](https://github.com/google/XNNPACK) for CPU performance. The teams continue to work together, upstreaming our changes and coordinating across the TensorFlow and PyTorch teams, to make sure we can all support generative AI models on the edge with SOTA performance.
+
+Lastly, our hardware partner MediaTek has been enabling the Llama collection of models with ExecuTorch on their SoCs. We'll have more to share in the future.
+
+## Alpha and Production Usage
+
+With our alpha release, we have production-tested ExecuTorch. Meta is using ExecuTorch for hand tracking on Meta Quest 3 and for a variety of models on Ray-Ban Meta Smart Glasses. In addition, we have begun rolling out ExecuTorch with Instagram and are integrating with other Meta products. We are excited to see how ExecuTorch can be used for other edge experiences.
+
+## Community
+
+We are excited to see various efforts in the community to adopt or contribute to ExecuTorch. For instance, Unity recently [shared their work](https://schedule.gdconf.com/session/unity-developer-summit-drive-better-gameplay-experiences-on-user-devices-with-ai-presented-by-unity/903634) at the Game Developers Conference ([GDC](https://gdconf.com/)) on leveraging ExecuTorch and Edge IR to run PyTorch models with their neural network inference library Sentis. Leveraging ExecuTorch's hackability and extensibility, Unity introduced their own custom backend that serializes ExecuTorch’s Edge Dialect IR into Sentis’ native serialized format, enabling developers to begin using PyTorch models easily in their games and apps.
+
+We’ve been building and innovating with ExecuTorch in the open. Our north star is to empower the community to deploy any ML model on edge devices painlessly and efficiently. Whether you are a hobbyist or this is your day job, we’d love for you to [jump in and bring your ML models to the edge](https://pytorch.org/executorch/stable/getting-started-setup.html). We are looking for your help to:
+
+1. Use ExecuTorch to [run your LLMs locally](https://github.com/pytorch/executorch/blob/main/docs/source/llm/getting-started.md) on various deployment targets and share your feedback
+2. Expand our supported models, including filing bug reports
+3. Expand our quantization schemes
+4. Help us build out delegates to GPU and NPU
+
+To all individual contributors and early adopters of ExecuTorch, a big thank you as well. We can’t wait to have more of you [join us](https://github.com/pytorch/executorch)!
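The blog post above mentions 4-bit post-training quantization as the key technique for fitting LLM weights into mobile memory budgets. As a rough, hypothetical illustration of the underlying idea (group-wise 4-bit quantization with per-group scales; this is plain round-to-nearest, not ExecuTorch's actual GPTQ implementation, and the group size of 32 is an illustrative assumption):

```python
def quantize_4bit(weights, group_size=32):
    """Symmetric group-wise 4-bit quantization sketch: each group of
    values shares one float scale; values are stored as signed 4-bit
    integers in [-8, 7]."""
    quantized, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        absmax = max(abs(w) for w in group) or 1.0
        scale = absmax / 7.0  # map the largest magnitude onto +/-7
        scales.append(scale)
        quantized.append(
            [max(-8, min(7, round(w / scale))) for w in group]
        )
    return quantized, scales

def dequantize_4bit(quantized, scales):
    """Recover approximate float weights from the 4-bit integers and
    their per-group scales."""
    return [q * s for group, s in zip(quantized, scales) for q in group]

# Toy example: eight weights, two groups of four.
weights = [0.12, -0.5, 0.33, 0.07, -0.91, 0.44, 0.02, -0.18]
q, s = quantize_4bit(weights, group_size=4)
restored = dequantize_4bit(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The memory win is what matters on-device: each weight shrinks from 32 bits to 4 bits plus a small per-group scale, at the cost of the reconstruction error bounded by half a quantization step per group. GPTQ improves on naive rounding like this by correcting quantization error using second-order weight statistics.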

edge.html (9 additions, 10 deletions)

@@ -23,11 +23,10 @@ <h1 class="small">PyTorch Edge</h1>
 <div class="row">
 <div class="col-md-10">
 <h2 class="mt-5 mb-2">PyTorch Edge</h2>
-<p>The AI landscape is quickly evolving, with AI models being deployed beyond server to edge devices such as mobile phones, wearables, AR/VR/MR and embedded devices. PyTorch Edge extends PyTorch's research-to-production stack to these edge devices and paves the way for building innovative, privacy-aware experiences with superior productivity, portability, and performance, optimized for these diverse hardware platforms. </p>
-<h2 class="mt-5 mb-2">PyTorch on Edge - From PyTorch Mobile to ExecuTorch</h2>
-<p>In 2019, we announced <a href="/mobile/home/">PyTorch Mobile</a> powered by <a href="https://pytorch.org/docs/stable/jit.html">TorchScript</a> to address the ever-growing need for edge devices to execute AI models. To advance our PyTorch Edge offerings even further, we developed <a href="/executorch-overview">ExecuTorch</a>. ExecuTorch facilitates PyTorch inference on edge devices while supporting portability across hardware platforms with lower runtime and framework tax. ExecuTorch was developed collaboratively between industry leaders including Meta, Arm, Apple, and Qualcomm. </p>
-<p>PyTorch Mobile allowed users to stay in the PyTorch ecosystem from training to model deployment. However, the lack of consistent PyTorch semantics used across these and the focus on TorchScript inhibited the developer experience and slowed down research to production. PyTorch Mobile also didn’t provide well-defined entry points for third-party integration and optimizations, which we’ve addressed with ExecuTorch. </p>
-<p>We’ve renewed our commitment to on-device AI with <a href="/executorch-overview">ExecuTorch</a>. This extends our ecosystem in a much more “in the spirit of PyTorch” way, with productivity, hackability, and extensibility as critical components. We look forward to supporting edge and embedded applications with low latency, strong privacy, and innovation on the edge. </p>
+<p>The AI landscape is quickly evolving, with AI models being deployed beyond server to edge devices such as mobile phones, wearables, AR/VR/MR and embedded devices. PyTorch Edge extends PyTorch's research-to-production stack to these edge devices and paves the way for building innovative, privacy-aware experiences with superior productivity, portability, and performance, optimized for these diverse hardware platforms.</p>
+<h2 class="mt-5 mb-2">Introducing ExecuTorch</h2>
+<p>To advance our PyTorch Edge offering, we developed <a href="https://pytorch.org/executorch-overview">ExecuTorch</a>, our new runtime for edge devices. ExecuTorch facilitates PyTorch inference on edge devices while supporting portability across hardware platforms with lower runtime and framework tax. ExecuTorch was developed collaboratively between industry leaders including Meta, Arm, Apple, and Qualcomm.</p>
+<p>With ExecuTorch, we’ve renewed our commitment to on-device AI. This extends our ecosystem in a much more “in the spirit of PyTorch” way, with productivity, hackability, and extensibility as critical components. We look forward to supporting edge and embedded applications with low latency, strong privacy, and innovation on the edge.</p>
 </div>
 </div>
 </div>
@@ -41,15 +40,15 @@ <h2>Learn more about PyTorch Edge</h2>
 </div>
 <div class="row content">
 <div class="col-md-4 text-center">
-<p class="lead">New on-device inference</p>
-<a href="/executorch-overview" class="btn btn-lg mb-4 with-right-arrow">
+<p class="lead">What’s New in ExecuTorch</p>
+<a href="https://github.com/pytorch/executorch" class="btn btn-lg mb-4 with-right-arrow">
 ExecuTorch
 </a>
 </div>
 <div class="col-md-4 text-center">
-<p class="lead">Legacy PyTorch Mobile runtime</p>
-<a href="/mobile/home" class="btn btn-lg with-right-arrow">
-PyTorch Mobile
+<p class="lead">Try ExecuTorch</p>
+<a href="https://pytorch.org/executorch/stable/index.html" class="btn btn-lg with-right-arrow">
+ExecuTorch Documentation
 </a>
 </div>
 </div>

executorch.html (5 additions, 7 deletions)

@@ -22,10 +22,8 @@ <h1 class="small">ExecuTorch</h1>
 <div class="container mb-5">
 <div class="row">
 <div class="col-md-10">
-<p class="mt-4"><strong>IMPORTANT NOTE: This is a preview version of Executorch and should be used for testing and evaluation purposes only. It is not recommended for use in production settings. We welcome any feedback, suggestions, and bug reports from the community to help us improve the technology.</strong></p>
-
 <h2 class="mt-5 mb-2" id="what-is-executorch">What is ExecuTorch?</h2>
-<p>ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, embedded devices and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of PyTorch models to edge devices. Key value propositions of ExecuTorch are:</p>
+<p>ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, embedded devices and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of various PyTorch models (vision, speech, Generative AI, and more) to edge devices. Key value propositions of ExecuTorch are:</p>
 
 <div class="container">
 <div class="row mt-3">
@@ -56,10 +54,10 @@ <h2 class="mt-5 mb-2" id="what-is-executorch">What is ExecuTorch?</h2>
 
 <h2 class="mt-5 mb-2" id="explore-executorch">Explore ExecuTorch</h2>
 
-<p>We are excited to see how the community leverages our all new on-device AI stack. You can learn more about <a href="https://pytorch.org/executorch/stable/getting-started-architecture">key components</a> of ExecuTorch and its architecture, <a href="https://pytorch.org/executorch/stable/intro-how-it-works">how it works</a>, and explore <a href="/executorch">documentation page</a> and <a href="https://pytorch.org/executorch/stable/#tutorials-and-examples:~:text=Getting%20Started-,Tutorials%20and%20Examples,-Docs">detailed tutorials</a>.</p>
+<p>ExecuTorch is currently powering various experiences across AR, VR and Family of Apps (FOA) products and services at Meta. We are excited to see how the community leverages our all new on-device AI stack. You can learn more about <a href="https://pytorch.org/executorch/stable/getting-started-architecture">key components</a> of ExecuTorch and its architecture, <a href="https://pytorch.org/executorch/stable/intro-how-it-works">how it works</a>, and explore <a href="https://pytorch.org/executorch">documentation pages</a> and <a href="https://pytorch.org/executorch/stable/#tutorials-and-examples:~:text=Getting%20Started-,Tutorials%20and%20Examples,-Docs">detailed tutorials</a>.</p>
 
 <p>
-<a href="/executorch" class="btn btn-lg with-right-arrow">
+<a href="https://pytorch.org/executorch/stable/index.html" class="btn btn-lg with-right-arrow">
 ExecuTorch Documentation
 </a>
 </p>
@@ -68,9 +66,9 @@ <h2 class="mt-5 mb-2" id="why-executorch">Why ExecuTorch?</h2>
 
 <p>Supporting on-device AI presents unique challenges with diverse hardware, critical power requirements, low/no internet connectivity, and realtime processing needs. These constraints have historically prevented or slowed down the creation of scalable and performant on-device AI solutions. We designed ExecuTorch, backed by our industry leaders like Meta, Arm, Apple, and Qualcomm, to be highly portable and provide superior developer productivity without losing on performance.</p>
 
-<h2 class="mt-5 mb-2" id="how-is-executorch-different-from-pytorch-mobile-lite-interpreter">How is ExecuTorch Different from <a href="/mobile/home/">PyTorch Mobile (Lite Interpreter)</a>?</h2>
+<h2 class="mt-5 mb-2" id="executorch-alpha-release">ExecuTorch Alpha Release</h2>
 
-<p>PyTorch Mobile uses TorchScript to allow PyTorch models to run on devices with limited resources. ExecuTorch has a significantly smaller memory size and a dynamic memory footprint resulting in superior performance compared to PyTorch Mobile. Also ExecuTorch does not rely on TorchScript, and instead leverages PyTorch 2.0 compiler and export functionality for on-device execution of PyTorch models.</p>
+<p>ExecuTorch was initially introduced to the community at the 2023 <a href="https://pytorch.org/blog/pytorch-conference-2023/">PyTorch Conference</a>. With our most recent alpha release, we further expanded ExecuTorch’s capabilities across multiple dimensions. First, we enabled support for the deployment of large language models (LLMs) on various edge devices. Second, with ExecuTorch alpha, we have further stabilized the API surface. Lastly, we have significantly improved the developer experience by simplifying the installation flow as well as improving observability and developer productivity via the <a href="https://github.com/pytorch/executorch/blob/main/examples/sdk/README.md">ExecuTorch SDK</a>. The ExecuTorch alpha release also provides early support for the recently announced Llama 3 8B along with demonstrations on how to run this model on an iPhone 15 Pro and a Samsung Galaxy S24 mobile phone.</p>
 
 </div>
 </div>
