Commit 8094a6c

SamyOubouaziz authored, with co-authors RoRoJ and jcirinosclwy
docs(dwc): add doc on persistent volume MTA-5885 (#4978)
* docs(dwc): add doc on persistent volume MTA-5885
* docs(dwc): update concept
* Update pages/data-lab/concepts.mdx
  Co-authored-by: Rowena Jones <36301604+RoRoJ@users.noreply.github.com>
* Update pages/data-lab/concepts.mdx
  Co-authored-by: Jessica <113192637+jcirinosclwy@users.noreply.github.com>

---------

Co-authored-by: Rowena Jones <36301604+RoRoJ@users.noreply.github.com>
Co-authored-by: Jessica <113192637+jcirinosclwy@users.noreply.github.com>
1 parent 04945e4 commit 8094a6c

File tree

2 files changed: +15 −3 lines changed

pages/data-lab/concepts.mdx

Lines changed: 11 additions & 2 deletions
```diff
@@ -12,6 +12,10 @@ categories:
 - managed-services
 ---
 
+## Apache Spark cluster
+
+An Apache Spark cluster is an orchestrated set of machines over which distributed/Big data calculus is processed. In the case of Scaleway Data Lab, the Apache Spark cluster is a Kubernetes cluster, with Apache Spark installed in each pod. For more details, check out the [Apache Spark documentation](https://spark.apache.org/documentation.html).
+
 ## Data Lab
 
 A Data Lab is a project setup that combines a Notebook and an Apache Spark Cluster for data analysis and experimentation. It comes with the required infrastructure and tools to allow data scientists, analysts, and researchers to explore data, create models, and gain insights.
@@ -40,14 +44,19 @@ Lighter is a technology that enables SparkMagic commands to be readable and exec
 
 A notebook for an Apache Spark cluster is an interactive, web-based tool that allows users to write and execute code, visualize data, and share results in a collaborative environment. It connects to an Apache Spark cluster to run large-scale data processing tasks directly from the notebook interface, making it easier to develop and test data workflows.
 
-## Apache Spark Cluster
+## Persistent volume
+
+A Persistent Volume (PV) is a cluster-wide storage resource that ensures data persistence beyond the lifecycle of individual pods. Persistent volumes abstract the underlying storage details, allowing administrators to use various storage solutions.
 
-An Apache Spark cluster is an orchestrated set of machines over which the distributed/Big data calculus is going to be processed. In the case of this project, the Apache Spark cluster is a Kubernetes cluster, upon which Apache Spark has been installed in every pod deployed. For more details, check out the [Apache Spark documentation](https://spark.apache.org/documentation.html).
+Apache Spark® executors require storage space for various operations, particularly to shuffle data during wide operations such as sorting, grouping, and aggregation. Wide operations are transformations that require data from different partitions to be combined, often resulting in data movement across the cluster. During the map phase, executors write data to shuffle storage, which is then read by reducers.
+
+A properly sized PV ensures smooth execution of your workload.
 
 ## SparkMagic
 
 SparkMagic is a set of tools that allows you to interact with Apache Spark clusters through Jupyter notebooks. It provides magic commands for running Spark jobs, querying data, and managing Spark sessions directly within the notebook interface, facilitating seamless integration and execution of Spark tasks. For more details, check out the [SparkMagic repository](https://github.com/jupyter-incubator/sparkmagic).
 
 
 ## Transaction
+
 An SQL transaction is a sequence of one or more SQL operations (such as queries, inserts, updates, or deletions) executed as a single unit of work. These transactions ensure data integrity and consistency, following the ACID properties: Atomicity, Consistency, Isolation, and Durability, meaning all operations within a transaction either complete successfully or none of them take effect. An SQL transaction can be rolled back in case of an error.
```
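The shuffle behavior described in the new "Persistent volume" section can be sketched as a toy model in plain Python (this is an illustration of the map/shuffle/reduce pattern, not Spark's actual implementation): each map-side partition hash-partitions its records into per-reducer buckets written to shuffle storage, and each reducer then reads its bucket from every map output and combines values per key.

```python
from collections import defaultdict

def shuffle_for_group_by(partitions, num_reducers):
    """Toy model of the shuffle behind a wide operation such as groupBy.

    Map phase: each input partition hash-partitions its (key, value)
    records into per-reducer buckets -- in a real Spark cluster these
    buckets are written to shuffle storage (e.g. a persistent volume).
    Reduce phase: each reducer reads its bucket from every map output
    and sums the values per key.
    """
    # Map phase: one dict of buckets per input partition.
    shuffle_files = []
    for partition in partitions:
        buckets = defaultdict(list)
        for key, value in partition:
            buckets[hash(key) % num_reducers].append((key, value))
        shuffle_files.append(buckets)

    # Reduce phase: reducer r reads bucket r from every map output.
    results = {}
    for r in range(num_reducers):
        for buckets in shuffle_files:
            for key, value in buckets[r]:
                results[key] = results.get(key, 0) + value
    return results

data = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
print(shuffle_for_group_by(data, num_reducers=2))  # {"a": 4, "b": 2, "c": 4}
```

The per-partition buckets are exactly the intermediate data the documentation says executors write to shuffle storage, which is why the persistent volume must be sized for the shuffle footprint, not just the input.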

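The "Transaction" concept added above can be demonstrated with Python's standard-library `sqlite3` module (any ACID-compliant SQL engine behaves the same way; the account table is a made-up example): when a statement inside the transaction fails, every change made so far is rolled back.

```python
import sqlite3

# In-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: both UPDATEs commit, or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src),
            )
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst),
            )
            # Simulate a failure mid-transaction: the first UPDATE is undone too.
            if amount > 100:
                raise ValueError("transfer limit exceeded")
    except ValueError:
        pass  # transaction rolled back; balances unchanged

transfer(conn, "alice", "bob", 30)   # succeeds: alice 70, bob 80
transfer(conn, "alice", "bob", 999)  # fails: balances stay 70 / 80
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

The second call shows atomicity: although the debit UPDATE executed, the raised error rolls the whole unit of work back, matching the ACID behavior described in the concept.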
pages/data-lab/how-to/create-data-lab.mdx

Lines changed: 4 additions & 1 deletion
```diff
@@ -34,7 +34,10 @@ Data Lab for Apache Spark™ is a product designed to assist data scientists and
   <Message type="note">
     Provisioning zero worker nodes lets you retain and access your cluster and notebook configurations, but will not allow you to run calculations.
   </Message>
-- Optionally, choose an Object Storage bucket in the desired region to store the data source and results.
+- Activate the [persistent volume](/data-lab/concepts/#persistent-volume) if required, then enter a volume size according to your needs.
+  <Message type="note">
+    Persistent volume usage depends on your workload, and only the actual usage will be billed, within the limit defined. A minimum of 1 GB is required to run the notebook.
+  </Message>
 - Enter a name for your Data Lab.
 - Optionally, add a description and/or tags for your Data Lab.
 - Verify the estimated cost.
```
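As a rough illustration of the sizing step above, here is a hypothetical helper (not a Scaleway tool; the name `suggest_pv_size_gb` and the `headroom` factor for data skew are assumptions for this sketch) that rounds an estimated shuffle footprint up to whole gigabytes, respecting the documented 1 GB minimum:

```python
import math

MIN_PV_GB = 1  # minimum required to run the notebook, per the doc above

def suggest_pv_size_gb(estimated_shuffle_bytes, headroom=1.5):
    """Hypothetical sizing helper: apply a headroom factor for skew,
    convert bytes to GB, round up, and enforce the 1 GB floor."""
    gb = estimated_shuffle_bytes * headroom / 1024**3
    return max(MIN_PV_GB, math.ceil(gb))

print(suggest_pv_size_gb(0))             # 1  (floor applies)
print(suggest_pv_size_gb(10 * 1024**3))  # 15 (10 GB shuffle, 1.5x headroom)
```

Since only actual usage within the defined limit is billed, over-provisioning the limit slightly is cheaper than a failed job that runs out of shuffle space.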
