NSFS deployment with export of accounts (UID, GID configuration)
This is a WIP feature.
Download the operator binary:
curl URL > noobaa
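After downloading, you will most likely need to make the binary executable and place it on your PATH (a minimal sketch, assuming a Linux host and that the file was saved to the current directory):
chmod +x noobaa
sudo mv noobaa /usr/local/bin/
noobaa version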
Use the CLI to install to the noobaa namespace:
noobaa install -n noobaa --operator-image='noobaa/noobaa-operator:master-20210419' --noobaa-image='noobaa/noobaa-core:master-20210419'
I also suggest updating the current namespace to noobaa so you don’t need to add “-n noobaa” to all kubectl / noobaa commands:
kubectl config set-context --current --namespace noobaa
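Once the install completes, you can optionally verify that the system came up healthy:
noobaa status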
Create a PVC (that will be fulfilled by your underlying CSI driver) for the endpoint pod to mount, and use its FS path.
Here is an example, assuming the filesystem to expose is mounted at /nsfs on the node.
We will create a local PV that represents the mounted filesystem on the node at /nsfs.
Download and create the YAMLs attached below -
kubectl create -f nsfs-local-class.yaml
kubectl create -f nsfs-local-pv.yaml
kubectl create -f nsfs-local-pvc.yaml
nsfs-local-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nsfs-local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
nsfs-local-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nsfs-vol
spec:
  storageClassName: nsfs-local
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /nsfs/
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: Exists
nsfs-local-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nsfs-vol
spec:
  storageClassName: nsfs-local
  resources:
    requests:
      storage: 1Ti
  accessModes:
    - ReadWriteMany
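At this point you can optionally check the PV and PVC. Because the storage class uses WaitForFirstConsumer, the claim is expected to stay Pending until the endpoint pod mounts it in the next step:
kubectl get pv nsfs-vol
kubectl get pvc nsfs-vol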
Update the noobaa-endpoint deployment to mount the volume -
kubectl patch deployment noobaa-endpoint --patch '{
  "spec": { "template": { "spec": {
    "volumes": [{
      "name": "nsfs",
      "persistentVolumeClaim": { "claimName": "nsfs-vol" }
    }],
    "containers": [{
      "name": "endpoint",
      "volumeMounts": [{ "name": "nsfs", "mountPath": "/nsfs" }]
    }]
  }}}
}'
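After patching, you can wait for the endpoint pods to roll out and confirm that the mount is visible inside the container (a quick check, assuming the deployment and container names used above):
kubectl rollout status deployment/noobaa-endpoint
kubectl exec deploy/noobaa-endpoint -c endpoint -- ls -la /nsfs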
Create a namespace resource:
noobaa api pool_api create_namespace_resource '{
  "name": "fs1",
  "nsfs_config": {
    "fs_backend": "GPFS",
    "fs_root_path": "/nsfs/nsfs"
  }
}'
Supported backends: CEPH_FS, GPFS, NFSv4. The backend configuration allows optimizing flows for the underlying FS.
Set up the ACLs/permissions of the mounted FS path for the UIDs and GIDs that will be used to access it.
Here is an example, run locally on the node, that grants full access in order to support any UID, GID:
mkdir -p /nsfs/nsfs
chmod -R 777 /nsfs
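If you prefer not to open the path to everyone, a more restrictive alternative (a sketch, assuming UID/GID 1000 are the values you will later configure on the account) is to give ownership to the specific UID and GID instead:
chown -R 1000:1000 /nsfs/nsfs
chmod -R 770 /nsfs/nsfs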
Create namespace bucket:
noobaa api bucket_api create_bucket '{
  "name": "nsfs",
  "namespace": {
    "write_resource": { "resource": "fs1", "path": "jenia/" },
    "read_resources": [ { "resource": "fs1", "path": "jenia/" } ]
  }
}'
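You can read the bucket back to verify its namespace configuration, for example:
noobaa api bucket_api read_bucket '{
  "name": "nsfs"
}'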
These are the available parameters for this call:
{
  // Bucket name as a string
  name: { $ref: 'common_api#/definitions/bucket_name' },
  // Tiering policy name as a string if we'd like to attach instead of creating a new one
  // Not relevant to NSFS
  tiering: { $ref: 'common_api#/definitions/tiering_name' },
  // Chunk deduplication and coding configurations
  // Not relevant to NSFS
  chunk_split_config: { $ref: 'common_api#/definitions/chunk_split_config' },
  chunk_coder_config: { $ref: 'common_api#/definitions/chunk_coder_config' },
  // Bucket tagging as a string
  tag: { type: 'string' },
  // Object lock configurations for the bucket
  object_lock_configuration: { $ref: '#/definitions/object_lock_configuration' },
  // Namespace configurations (here we configure the NSFS resources and their paths)
  namespace: { $ref: '#/definitions/namespace_bucket_config' },
  // Bucket lock enabled boolean
  lock_enabled: {
    type: 'boolean'
  },
  // Bucket claim info for OBC flow
  bucket_claim: { $ref: '#/definitions/bucket_claim' },
}
This is the structure of the namespace configuration properties:
namespace_bucket_config: {
  type: 'object',
  required: ['write_resource', 'read_resources'],
  properties: {
    // Write resource for the namespace
    write_resource: {
      $ref: '#/definitions/namespace_resource_config'
    },
    // Read resources for the namespace
    read_resources: {
      type: 'array',
      items: {
        $ref: '#/definitions/namespace_resource_config'
      },
    },
    // Caching configuration for namespace_cache
    caching: {
      $ref: 'common_api#/definitions/bucket_cache_config'
    }
  }
}
This is the structure of the namespace resource configuration properties:
namespace_resource_config: {
  type: 'object',
  required: ['resource'],
  properties: {
    // Namespace resource name as a string to be used
    resource: { type: 'string' },
    // Path as a string within the namespace resource to be used (exported path/bucket in the NSFS case)
    path: { type: 'string' }
  }
}
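For example, the same schema allows a bucket that writes to one path but reads from several. A hypothetical sketch (the bucket name and paths below are illustrative only):
noobaa api bucket_api create_bucket '{
  "name": "nsfs-multi",
  "namespace": {
    "write_resource": { "resource": "fs1", "path": "teamA/" },
    "read_resources": [
      { "resource": "fs1", "path": "teamA/" },
      { "resource": "fs1", "path": "shared/" }
    ]
  }
}'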
Create an FS account:
noobaa api account_api create_account '{
  "email": "jenia@noobaa.io",
  "name": "jenia",
  "has_login": false,
  "s3_access": true,
  "allowed_buckets": {
    "full_permission": true
  },
  "nsfs_account_config": {
    "uid": *INSERT_UID*,
    "gid": *INSERT_GID*
  }
}'
This should return a response with the credentials to use:
INFO[0001] ✅ RPC: account.create_account() Response OK: took 205.7ms
access_keys:
- access_key: *NOOBAA_ACCOUNT_ACCESS_KEY*
  secret_key: *NOOBAA_ACCOUNT_SECRET_KEY*
You can also run the list accounts command to see the configured NSFS accounts (alongside all other accounts in the system):
noobaa api account_api list_accounts
If you are interested in a particular account, you can read it directly:
noobaa api account_api read_account '{
"email": "jenia@noobaa.io"
}'
Configure the S3 client application and access the FS via S3 from the endpoint
Application S3 config:
AWS_ACCESS_KEY_ID=NOOBAA_ACCOUNT_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=NOOBAA_ACCOUNT_SECRET_KEY
S3_ENDPOINT=s3.noobaa.svc (or the nodePort address from noobaa status)
BUCKET_NAME=nsfs
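For example, with the AWS CLI this configuration could look roughly as follows (a sketch; the endpoint assumes the in-cluster service DNS name, and --no-verify-ssl is only needed when the endpoint serves the default self-signed certificate):
export AWS_ACCESS_KEY_ID=NOOBAA_ACCOUNT_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=NOOBAA_ACCOUNT_SECRET_KEY
aws --endpoint-url https://s3.noobaa.svc --no-verify-ssl s3 ls s3://nsfs
aws --endpoint-url https://s3.noobaa.svc --no-verify-ssl s3 cp ./hello.txt s3://nsfs/hello.txt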