Deploying large models, like Stable Diffusion, can be challenging and time-consuming.

In this recipe, we will show how you can streamline the deployment of a PyTorch Stable Diffusion
model by leveraging Vertex AI.

PyTorch is the framework used by Stability AI on Stable
Diffusion v1.5. Vertex AI is a fully-managed machine learning platform with tools and
infrastructure designed to help ML practitioners accelerate and scale ML in production with
the benefit of open-source frameworks like PyTorch.

In four steps you can deploy a PyTorch Stable Diffusion model (v1.5).

Deploying your Stable Diffusion model on a Vertex AI Endpoint can be done in four steps:
Let’s have a look at each step in more detail. You can follow and implement the steps using the
`Notebook example <https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/vertex_endpoints/torchserve/dreambooth_stablediffusion.ipynb>`__.

NOTE: Please keep in mind that this recipe requires billable Vertex AI resources, as explained in more detail in the notebook example.

Create a custom TorchServe handler
----------------------------------

TorchServe is an easy and flexible tool for serving PyTorch models. The model deployed to Vertex AI
uses TorchServe to handle requests and return responses from the model.

You must create a custom TorchServe handler to include in the model artifacts uploaded to Vertex AI. Include the handler file in the
directory with the other model artifacts, like this: `model_artifacts/handler.py`.
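As a rough sketch of what `handler.py` might contain, the class below mirrors the preprocess/inference/postprocess flow of a TorchServe handler. It is a hypothetical, stubbed-out illustration: a real handler would subclass `ts.torch_handler.base_handler.BaseHandler` and load the actual Stable Diffusion pipeline in `initialize()`; the request shape shown (`{"instances": [{"prompt": ...}]}`) follows the Vertex AI prediction format.

```python
import base64
import json


class StableDiffusionHandler:
    """Hypothetical sketch of a custom TorchServe handler.

    The model call is stubbed out so the request/response flow is
    visible without loading any weights.
    """

    def initialize(self, context):
        # In a real handler: load the Stable Diffusion pipeline from
        # context.system_properties["model_dir"], e.g. with diffusers.
        self.initialized = True

    def preprocess(self, requests):
        # Vertex AI sends a JSON body: {"instances": [{"prompt": "..."}]}
        prompts = []
        for req in requests:
            body = req.get("body") or req.get("data")
            if isinstance(body, (bytes, bytearray)):
                body = json.loads(body)
            prompts.extend(inst["prompt"] for inst in body["instances"])
        return prompts

    def inference(self, prompts):
        # Stub: a real handler would run the diffusion pipeline here
        # and return one generated image per prompt.
        return [b"<png bytes for: %s>" % p.encode() for p in prompts]

    def postprocess(self, images):
        # Base64-encode the images so they survive the JSON response.
        return [{"predictions": [base64.b64encode(i).decode() for i in images]}]

    def handle(self, requests, context):
        return self.postprocess(self.inference(self.preprocess(requests)))
```

The four methods map onto the hooks TorchServe invokes per request; only `handle()` is called directly by the serving runtime.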
After creating the handler file, you must package the handler as a model archiver (MAR) file.
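Packaging is done with the `torch-model-archiver` CLI (`pip install torch-model-archiver`). The invocation below is a sketch: the model name and paths are placeholders, not the ones used in the notebook example.

```shell
# Package the custom handler into a MAR file; the output lands in
# model_artifacts/stable_diffusion.mar. Names and paths are illustrative.
torch-model-archiver \
  --model-name stable_diffusion \
  --version 1.0 \
  --handler model_artifacts/handler.py \
  --export-path model_artifacts/
```

In practice you would also pass `--extra-files` (and possibly `--serialized-file`) so the pipeline weights and any auxiliary files ship inside the archive.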
From the Vertex AI Model Registry, you have an overview of your models
so you can better organize, track, and train new versions. For this you can use the
`Vertex AI SDK <https://cloud.google.com/vertex-ai/docs/python-sdk/use-vertex-ai-python-sdk>`__
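A minimal sketch of the registry upload with the SDK (`pip install google-cloud-aiplatform`) might look like the function below. The project, region, bucket, and serving container image are placeholder assumptions, not values from this recipe; the call itself (`aiplatform.Model.upload`) is the SDK's documented entry point.

```python
def upload_model(bucket_uri: str, display_name: str = "stable-diffusion-v1-5"):
    """Upload the packaged model to the Vertex AI Model Registry (sketch).

    bucket_uri: GCS directory holding the MAR file, e.g.
    "gs://my-bucket/model_artifacts" (placeholder).
    """
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform

    # Project and region are placeholders; use your own values.
    aiplatform.init(project="my-project", location="us-central1")
    return aiplatform.Model.upload(
        display_name=display_name,
        artifact_uri=bucket_uri,
        # Prebuilt PyTorch serving container (assumption; pick the image
        # matching your framework version from the Vertex AI docs).
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/pytorch-gpu.1-12:latest"
        ),
    )
```

The returned `Model` object is what you later deploy to a Vertex AI Endpoint.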