Remove PyTorch Enterprise #1115

Merged: 4 commits, Sep 8, 2022
9 changes: 0 additions & 9 deletions _get_started/installation/azure.md
@@ -9,15 +9,6 @@ Azure [provides](https://azure.microsoft.com/en-us/services/machine-learning-ser
* dedicated, pre-built [machine learning virtual machines](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/){:target="_blank"}, complete with PyTorch.
* bare Linux and Windows virtual machines for you to do a custom install of PyTorch.

- ## PyTorch Enterprise on Azure
- {: #pytorch-enterprise-on-azure}
-
- Microsoft is one of the founding members and the inaugural participant of the [PyTorch Enterprise Support Program](https://pytorch.org/enterprise-support-program). Microsoft offers PyTorch Enterprise on Azure as part of Microsoft [Premier](https://www.microsoft.com/en-us/msservices/premier-support) and [Unified](https://www.microsoft.com/en-us/msservices/unified-support-solutions?activetab=pivot1:primaryr4) Support. The PyTorch Enterprise support service includes long-term support for selected versions of PyTorch for up to two years, prioritized troubleshooting, and the latest integrations with [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning/) and other PyTorch add-ons, including ONNX Runtime for faster inference.
-
- To learn more and get started with PyTorch Enterprise on Microsoft Azure, [visit here](https://azure.microsoft.com/en-us/develop/pytorch/).
-
- For documentation, [visit here](https://docs.microsoft.com/en-us/azure/pytorch-enterprise/).
-
## Azure Primer
{: #microsoft-azure-primer}

3 changes: 1 addition & 2 deletions _includes/quick-start-module.js
@@ -259,14 +259,13 @@ $("[data-toggle='cloud-dropdown']").on("click", function(e) {

function commandMessage(key) {
  var object = {{ installMatrix }};
- var lts_notice = "<div class='alert-secondary'><b>Note</b>: Additional support for these binaries may be provided by <a href='/enterprise-support-program' style='font-size:100%'>PyTorch Enterprise Support Program Participants</a>.</div>";

  if (!object.hasOwnProperty(key)) {
    $("#command").html(
      "<pre> # Follow instructions at this URL: https://github.com/pytorch/pytorch#from-source </pre>"
    );
  } else if (key.indexOf("lts") == 0 && key.indexOf('rocm') < 0) {
-   $("#command").html("<pre>" + object[key] + "</pre>" + lts_notice);
+   $("#command").html("<pre>" + object[key] + "</pre>");
  } else {
    $("#command").html("<pre>" + object[key] + "</pre>");
  }
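Note: with lts_notice removed, the `lts` branch and the trailing `else` branch of commandMessage render identical markup, so a follow-up could collapse the conditional entirely. A minimal sketch of that cleanup (not part of this PR; `{{ installMatrix }}` is Jekyll templating, as in the original file):

```javascript
function commandMessage(key) {
  var object = {{ installMatrix }};

  if (!object.hasOwnProperty(key)) {
    // No prebuilt command for this selection; point users at the
    // from-source build instructions instead.
    $("#command").html(
      "<pre> # Follow instructions at this URL: https://github.com/pytorch/pytorch#from-source </pre>"
    );
  } else {
    // LTS and non-LTS keys now render the same way, so one branch suffices.
    $("#command").html("<pre>" + object[key] + "</pre>");
  }
}
```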
3 changes: 1 addition & 2 deletions _includes/quick_start_cloud_options.html
@@ -45,8 +45,7 @@
<div class="cloud-option-row">
<div class="cloud-option" data-toggle="cloud-dropdown">
<div class="cloud-option-body microsoft-azure" id="microsoft-azure">
<p>Microsoft Azure -</p>
<span>PyTorch Enterprise Program</span>
<p>Microsoft Azure</p>
</div>

<ul>
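Note: the data-toggle="cloud-dropdown" attribute in this markup is what the click handler in quick-start-module.js (see the hunk above) binds to. A hypothetical sketch of what such a handler typically does, assuming jQuery and the nesting shown in this diff; the site's actual handler may differ:

```javascript
// Hypothetical illustration only, not the site's actual code: expand or
// collapse the <ul> of platform links nested inside the clicked option.
$("[data-toggle='cloud-dropdown']").on("click", function (e) {
  e.preventDefault();
  $(this).children("ul").toggle();
});
```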
2 changes: 0 additions & 2 deletions _includes/quick_start_local.html
@@ -4,8 +4,6 @@
package manager since it installs all dependencies. You can also
<a href="{{ site.baseurl }}/get-started/previous-versions">install previous versions of PyTorch</a>. Note that LibTorch is only available for C++.
</p>
- <p>Additional support or warranty for some PyTorch Stable and LTS binaries is available through the <a href="/enterprise-support-program">PyTorch Enterprise Support Program</a>.
- </p>

<div class="row">
<div class="col-md-3 headings">
2 changes: 1 addition & 1 deletion _posts/2021-8-3-pytorch-profiler-1.9-released.md
@@ -207,7 +207,7 @@ For how to optimize batch size performance, check out the step-by-step tutorial
## What’s Next for the PyTorch Profiler?
You just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by running ```pip install torch-tb-profiler``` to optimize your PyTorch model.

- Look out for an advanced version of this tutorial in the future. If you want tailored enterprise-grade support for this, check out [PyTorch Enterprise on Azure](https://azure.microsoft.com/en-us/develop/pytorch/). We are also thrilled to continue to bring state-of-the-art tools to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue [here](https://github.com/pytorch/kineto/issues).
+ Look out for an advanced version of this tutorial in the future. We are also thrilled to continue to bring state-of-the-art tools to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue [here](https://github.com/pytorch/kineto/issues).

For new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org.

@@ -261,7 +261,7 @@ Our journey in deploying the report generation models reflects the above discuss

### A maturing ecosystem

- Is it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs; they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues. Finally, for those of us who require enterprise-level support, Microsoft now offers Premier Support for use of PyTorch on Azure.
+ Is it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs; they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues.

Where to go from here? For those that require the flexibility of serving PyTorch code directly, without going through the extra step of exporting self-contained models, it is worth pointing out that the TorchServe project now provides a way of bundling the code together with parameter checkpoints into a single servable archive, greatly reducing the risk of code and parameters running apart. To us, however, exporting models to TorchScript has proven beneficial. It provides a clear interface between modeling and deployment teams, and TorchScript further reduces the latency when serving models on GPU via its just-in-time compilation engine.

7 changes: 0 additions & 7 deletions _resources/enterprise.md

This file was deleted.

3 changes: 1 addition & 2 deletions assets/quick-start-module.js

Large diffs are not rendered by default.

103 changes: 0 additions & 103 deletions enterprise/enterprise_landing.html

This file was deleted.