Commit 8ade614

feat(infr): add region selector step (#4987)
1 parent 503dcf9 commit 8ade614

File tree

7 files changed, +59 -48 lines changed


pages/managed-inference/how-to/create-deployment.mdx

Lines changed: 6 additions & 5 deletions

```diff
@@ -17,8 +17,9 @@ dates:
 - [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization

 1. Click the **AI** section of the [Scaleway console](https://console.scaleway.com/), and select **Managed Inference** from the side menu to access the Managed Inference dashboard.
-2. Click **Deploy a model** to launch the model deployment wizard.
-3. Provide the necessary information:
+2. From the drop-down menu, select the geographical region where you want to create your deployment.
+3. Click **Deploy a model** to launch the model deployment wizard.
+4. Provide the necessary information:
     - Select the desired model and quantization to use for your deployment [from the available options](/managed-inference/reference-content/).
     <Message type="important">
     Scaleway Managed Inference allows you to deploy various AI models, either from the Scaleway catalog or by importing a custom model. For detailed information about supported models, visit our [Supported models in Managed Inference](/managed-inference/reference-content/supported-models/) documentation.
@@ -28,12 +29,12 @@ dates:
     </Message>
     - Choose the geographical **region** for the deployment.
     - Specify the GPU Instance type to be used with your deployment.
-4. Enter a **name** for the deployment, and optional tags.
-5. Configure the **network connectivity** settings for the deployment:
+5. Enter a **name** for the deployment, and optional tags.
+6. Configure the **network connectivity** settings for the deployment:
     - Attach to a **Private Network** for secure communication and restricted availability. Choose an existing Private Network from the drop-down list, or create a new one.
     - Set up **Public connectivity** to access resources via the public internet. Authentication by API key is enabled by default.
     <Message type="important">
     - Enabling both private and public connectivity will result in two distinct endpoints (public and private) for your deployment.
     - Deployments must have at least one endpoint, either public or private.
     </Message>
-6. Click **Deploy model** to launch the deployment process. Once the model is ready, it will be listed among your deployments.
+7. Click **Deploy model** to launch the deployment process. Once the model is ready, it will be listed among your deployments.
```
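The region step added above corresponds to the region path parameter used by Scaleway's regional APIs. A minimal sketch of building such a URL; the `/inference/v1beta1/regions/{region}/deployments` route and the region list are illustrative assumptions, not confirmed by this commit:

```python
# Sketch: build a regional Managed Inference API URL matching the new
# region selector. Route and region list are assumptions for illustration;
# check the Managed Inference API reference for the actual values.
BASE_URL = "https://api.scaleway.com"
REGIONS = {"fr-par", "nl-ams", "pl-waw"}  # commonly documented Scaleway regions

def deployments_url(region: str) -> str:
    """Return the deployments endpoint for a given region."""
    if region not in REGIONS:
        raise ValueError(f"unknown region: {region}")
    return f"{BASE_URL}/inference/v1beta1/regions/{region}/deployments"

print(deployments_url("fr-par"))
```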

pages/managed-inference/how-to/delete-deployment.mdx

Lines changed: 5 additions & 4 deletions

```diff
@@ -22,10 +22,11 @@ Once you have finished your inference tasks you can delete your deployment. This
 - [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization

 1. Click **Managed Inference** in the **AI** section of the [Scaleway console](https://console.scaleway.com) side menu. A list of your deployments displays.
-2. Choose a deployment either by clicking its name or selecting **More info** from the drop-down menu represented by the icon <Icon name="more" /> to access the deployment dashboard.
-3. Click the **Settings** tab of your deployment to display additional settings.
-4. Click **Delete deployment**.
-5. Type **DELETE** to confirm and click **Delete deployment** to delete your deployment.
+2. From the drop-down menu, select the geographical region you want to manage.
+3. Choose a deployment either by clicking its name or selecting **More info** from the drop-down menu represented by the icon <Icon name="more" /> to access the deployment dashboard.
+4. Click the **Settings** tab of your deployment to display additional settings.
+5. Click **Delete deployment**.
+6. Type **DELETE** to confirm and click **Delete deployment** to delete your deployment.

     <Message type="important">
     Deleting a deployment is a permanent action and will erase all its associated data.
```

pages/managed-inference/how-to/import-custom-model.mdx

Lines changed: 9 additions & 8 deletions

```diff
@@ -24,25 +24,26 @@ Scaleway provides a selection of common models for deployment from the Scaleway
 - [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) to perform actions in your Organization.

 1. Click **Managed Inference** in the **AI** section of the side menu in the [Scaleway console](https://console.scaleway.com/) to access the dashboard.
-2. Click **Deploy a model** to launch the model deployment wizard.
-3. In the **Choose a model** section, select **Custom model**. If you have no model yet, click **Import a model** to start the model import wizard.
-4. Choose an upload source:
+2. From the drop-down menu, select the geographical region you want to manage.
+3. Click **Deploy a model** to launch the model deployment wizard.
+4. In the **Choose a model** section, select **Custom model**. If you have no model yet, click **Import a model** to start the model import wizard.
+5. Choose an upload source:
     - **Hugging Face**: Pull the model from Hugging Face.
     - **Object Storage**: This feature is coming soon.
-5. Enter your Hugging Face access token, which must have READ access to the repository.
+6. Enter your Hugging Face access token, which must have READ access to the repository.
     <Message type="note">
     [Learn how to generate a Hugging Face access token](https://huggingface.co/docs/hub/security-tokens).
     </Message>
-6. Enter the name of the Hugging Face repository to pull the model from.
+7. Enter the name of the Hugging Face repository to pull the model from.
     <Message type="note">
     Ensure you have access to gated models if applicable. Refer to the [Hugging Face documentation](https://huggingface.co/docs/hub/en/models-gated) for details.
     </Message>
-7. Choose a name for your model. The name must be unique within your Organization and Project and cannot be changed later.
-8. Click **Verify import** to check your Hugging Face credentials and ensure model compatibility.
+8. Choose a name for your model. The name must be unique within your Organization and Project and cannot be changed later.
+9. Click **Verify import** to check your Hugging Face credentials and ensure model compatibility.
     <Message type="tip">
     For detailed information about supported models, visit our [Supported models in Managed Inference](/managed-inference/reference-content/supported-models/) documentation.
     </Message>
-9. Review the summary of your import, which includes:
+10. Review the summary of your import, which includes:
     - Context size by node type.
     - Quantization options.
     - Estimated cost.
```
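The repository-name step above can be pre-checked client-side before starting an import. A minimal sketch, assuming a simplified `namespace/name` shape; this helper is hypothetical and not part of Scaleway or Hugging Face tooling:

```python
import re

# Sketch: sanity-check that a repository identifier has the expected
# namespace/name shape before attempting an import. The pattern is a
# deliberate simplification of Hugging Face's actual naming rules.
REPO_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def looks_like_hf_repo(repo_id: str) -> bool:
    """Return True if repo_id looks like 'namespace/name'."""
    return bool(REPO_RE.fullmatch(repo_id))

print(looks_like_hf_repo("mistralai/Mistral-7B-Instruct-v0.3"))
```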

pages/managed-inference/how-to/manage-allowed-ips.mdx

Lines changed: 5 additions & 4 deletions

```diff
@@ -28,13 +28,14 @@ Allowed IPs restrict the IPs allowed to access your Managed Inference endpoints.
 ## How to allow an IP address to connect to a deployment

 1. Click **Managed Inference** in the **AI** section of the [Scaleway console](https://console.scaleway.com) side menu. A list of your deployments displays.
-2. Click a deployment name or <Icon name="more" /> > **More info** to access the deployment dashboard.
-3. Click the **Security** tab and navigate to the **Allowed IPs** section. A list of your allowed IP addresses displays.
-4. Click **Add allowed IP**. The IP can be a single IP or an IP block.
+2. From the drop-down menu, select the geographical region you want to manage.
+3. Click a deployment name or <Icon name="more" /> > **More info** to access the deployment dashboard.
+4. Click the **Security** tab and navigate to the **Allowed IPs** section. A list of your allowed IP addresses displays.
+5. Click **Add allowed IP**. The IP can be a single IP or an IP block.
     <Message type="note">
     The IP must be specified in CIDR format, i.e. `198.51.100.135/32` for a single IP or `198.51.100.0/24` for an IP block.
     </Message>
-5. Enter a single IP address or a subnetwork.
+6. Enter a single IP address or a subnetwork.
     <Message type="note">
     To restore initial settings and allow connections from all IPs, delete all allowed IPs from the list.
     </Message>
```
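The CIDR note in the diff above can be sketched in code: Python's standard `ipaddress` module normalizes user input into the CIDR form the console expects, turning a bare IP into a `/32` block.

```python
import ipaddress

# Sketch: normalize an IP or subnet into CIDR notation, e.g. a bare
# single IP becomes a /32 block, matching the format the console expects.
def to_cidr(value: str) -> str:
    net = ipaddress.ip_network(value, strict=False)
    return str(net)

print(to_cidr("198.51.100.135"))   # single IP -> /32 block
print(to_cidr("198.51.100.0/24"))  # already a block, unchanged
```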

pages/managed-inference/how-to/managed-inference-with-private-network.mdx

Lines changed: 10 additions & 8 deletions

```diff
@@ -29,25 +29,27 @@ Using a Private Network for communications between your Instances hosting your a
 ### Attaching a Private Network during deployment setup

 1. Click **Managed Inference** in the **AI** section of the [Scaleway console](https://console.scaleway.com) side menu. A list of your deployments displays.
-2. Navigate to the **Deployments** section and click **Create New Deployment**. The setup wizard displays.
-3. During the [setup process](/managed-inference/how-to/create-deployment/), you access the **Networking** section.
-4. You will be asked to **attach a Private Network**. Two options are available:
+2. From the drop-down menu, select the geographical region you want to manage.
+3. Navigate to the **Deployments** section and click **Create New Deployment**. The setup wizard displays.
+4. During the [setup process](/managed-inference/how-to/create-deployment/), you access the **Networking** section.
+5. You will be asked to **attach a Private Network**. Two options are available:
     - **Attach an existing Private Network**: Select from the list of available networks.
     - **Add a new Private Network**: Choose this option if you need to create a new network.
-5. **Confirm your selection** and complete the deployment setup process.
+6. **Confirm your selection** and complete the deployment setup process.

 ### Attaching a Private Network to an existing deployment

 1. Click **Managed Inference** in the **AI** section of the [Scaleway console](https://console.scaleway.com) side menu. A list of your deployments displays.
-2. Click a deployment name or <Icon name="more" /> > **More info** to access the deployment dashboard.
-3. Go to the **Overview** tab and locate the **Endpoints** section.
-4. Click **Attach Private Network**. Two options are available:
+2. From the drop-down menu, select the geographical region you want to manage.
+3. Click a deployment name or <Icon name="more" /> > **More info** to access the deployment dashboard.
+4. Go to the **Overview** tab and locate the **Endpoints** section.
+5. Click **Attach Private Network**. Two options are available:
     - **Attach an existing Private Network**: Select from the list of available networks.
     - **Add a new Private Network**: Choose this option if you need to create a new network.
     <Message type="tip">
     Alternatively, you can access the **Security tab** and attach a network from the **Private Network** section.
     </Message>
-5. **Save your changes** to apply the new network configuration.
+6. **Save your changes** to apply the new network configuration.

 ### Verifying the Private Network connection
```

pages/managed-inference/how-to/monitor-deployment.mdx

Lines changed: 6 additions & 5 deletions

```diff
@@ -24,8 +24,9 @@ This documentation page shows you how to monitor your Managed Inference deployme
 ## How to monitor your LLM dashboard

 1. Click **Managed Inference** in the **AI** section of the [Scaleway console](https://console.scaleway.com) side menu. A list of your deployments displays.
-2. Click a deployment name or <Icon name="more" /> > **More info** to access the deployment dashboard.
-3. Click the **Monitoring** tab of your deployment. The Cockpit overview displays.
-4. Click **Open Grafana metrics dashboard** to open your Cockpit's Grafana interface.
-5. Authenticate with your [Grafana credentials](/cockpit/how-to/retrieve-grafana-credentials/). The Grafana dashboard displays.
-6. Select your Managed Inference dashboard from the [list of your preconfigured dashboards](/cockpit/how-to/access-grafana-and-managed-dashboards/) to visualize your metrics.
+2. From the drop-down menu, select the geographical region you want to manage.
+3. Click a deployment name or <Icon name="more" /> > **More info** to access the deployment dashboard.
+4. Click the **Monitoring** tab of your deployment. The Cockpit overview displays.
+5. Click **Open Grafana metrics dashboard** to open your Cockpit's Grafana interface.
+6. Authenticate with your [Grafana credentials](/cockpit/how-to/retrieve-grafana-credentials/). The Grafana dashboard displays.
+7. Select your Managed Inference dashboard from the [list of your preconfigured dashboards](/cockpit/how-to/access-grafana-and-managed-dashboards/) to visualize your metrics.
```

pages/managed-inference/quickstart.mdx

Lines changed: 18 additions & 14 deletions

```diff
@@ -32,8 +32,9 @@ Here are some of the key features of Scaleway Managed Inference:
 ## How to create a Managed Inference deployment

 1. Navigate to the **AI** section of the [Scaleway console](https://console.scaleway.com/), and select **Managed Inference** from the side menu to access the Managed Inference dashboard.
-2. Click **Create deployment** to launch the deployment creation wizard.
-3. Provide the necessary information:
+2. From the drop-down menu, select the geographical region where you want to create your deployment.
+3. Click **Create deployment** to launch the deployment creation wizard.
+4. Provide the necessary information:
     - Select the desired model and the quantization to use for your deployment [from the available options](/managed-inference/reference-content/).
     <Message type="important">
     Scaleway Managed Inference allows you to deploy various AI models, either from the Scaleway catalog or by importing a custom model. For detailed information about supported models, visit our [Supported models in Managed Inference](/managed-inference/reference-content/supported-models/) documentation.
@@ -43,24 +44,25 @@ Here are some of the key features of Scaleway Managed Inference:
     </Message>
     - Choose the geographical **region** for the deployment.
     - Specify the GPU Instance type to be used with your deployment.
-4. Enter a **name** for the deployment, along with optional tags to aid in organization.
-5. Configure the **network** settings for the deployment:
+5. Enter a **name** for the deployment, along with optional tags to aid in organization.
+6. Configure the **network** settings for the deployment:
     - Enable **Private Network** for secure communication and restricted availability within Private Networks. Choose an existing Private Network from the drop-down list, or create a new one.
     - Enable **Public Network** to access resources via the public Internet. API key protection is enabled by default.
     <Message type="important">
     - Enabling both private and public networks will result in two distinct endpoints (public and private) for your deployment.
     - Deployments must have at least one endpoint, either public or private.
     </Message>
-6. Click **Create deployment** to launch the deployment process. Once the deployment is ready, it will be listed among your deployments.
+7. Click **Create deployment** to launch the deployment process. Once the deployment is ready, it will be listed among your deployments.

 ## How to access a Managed Inference deployment

 Managed Inference deployments have authentication enabled by default. As such, your endpoints expect a secret key generated with Scaleway's Identity and Access Management service (IAM) for authentication.

 1. Click **Managed Inference** in the **AI** section of the side menu. The Managed Inference dashboard displays.
-2. Click <Icon name="more" /> next to the deployment you want to edit. The deployment dashboard displays.
-3. Click **Generate key** in the **Deployment connection** section of the dashboard. The token creation wizard displays.
-4. Fill in the [required information for API key creation](/iam/how-to/create-api-keys/) and click **Generate API key**.
+2. From the drop-down menu, select the geographical region where you want to manage.
+3. Click <Icon name="more" /> next to the deployment you want to edit. The deployment dashboard displays.
+4. Click **Generate key** in the **Deployment connection** section of the dashboard. The token creation wizard displays.
+5. Fill in the [required information for API key creation](/iam/how-to/create-api-keys/) and click **Generate API key**.

     <Message type="tip">
     You have full control over authentication from the **Security** tab of your deployment. Authentication is enabled by default.
@@ -69,8 +71,9 @@ Managed Inference deployments have authentication enabled by default. As such, y
 ## How to interact with Managed Inference

 1. Click **Managed Inference** in the **AI** section of the side menu. The Managed Inference dashboard displays.
-2. Click <Icon name="more" /> next to the deployment you want to edit. The deployment dashboard displays.
-3. Click the **Inference** tab. Code examples in various environments display. Copy and paste them into your code editor or terminal.
+2. From the drop-down menu, select the geographical region where you want to manage.
+3. Click <Icon name="more" /> next to the deployment you want to edit. The deployment dashboard displays.
+4. Click the **Inference** tab. Code examples in various environments display. Copy and paste them into your code editor or terminal.

     <Message type="note">
     Prompt structure may vary from one model to another. Refer to the specific instructions for use in our [dedicated documentation](/managed-inference/reference-content/).
@@ -79,10 +82,11 @@ Managed Inference deployments have authentication enabled by default. As such, y
 ## How to delete a deployment

 1. Click **Managed Inference** in the **AI** section of the [Scaleway console](https://console.scaleway.com) side menu. A list of your deployments displays.
-2. Choose a deployment either by clicking its name or selecting **More info** from the drop-down menu represented by the icon <Icon name="more" /> to access the deployment dashboard.
-3. Click the **Settings** tab of your deployment to display additional settings.
-4. Click **Delete deployment**.
-5. Type **DELETE** to confirm and click **Delete deployment** to delete your deployment.
+2. From the drop-down menu, select the geographical region where you want to create your deployment.
+3. Choose a deployment either by clicking its name or selecting **More info** from the drop-down menu represented by the icon <Icon name="more" /> to access the deployment dashboard.
+4. Click the **Settings** tab of your deployment to display additional settings.
+5. Click **Delete deployment**.
+6. Type **DELETE** to confirm and click **Delete deployment** to delete your deployment.

     <Message type="important">
     Deleting a deployment is a permanent action, and will erase all its associated configuration and resources.
```
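Once a key is generated, requests to a deployment endpoint authenticate with a Bearer header, as described in the access section of the quickstart diff above. A minimal sketch of building such a request; the endpoint URL, key, and OpenAI-style chat-completions path are placeholders, so verify the real values against your deployment's Inference tab:

```python
import json
import urllib.request

# Placeholders: substitute your deployment endpoint and generated IAM secret key.
API_KEY = "SCW_SECRET_KEY"
ENDPOINT = "https://example-deployment.example/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated chat request."""
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",  # IAM secret key as Bearer token
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Hello")
print(req.get_header("Authorization"))
```

Sending the request is then a single `urllib.request.urlopen(req)` call from a machine allowed to reach the endpoint.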

0 commit comments
