Storage Client Guide

This guide covers three common ways to access NetActuate storage services: Ceph RBD block storage on Ubuntu, S3-compatible object storage with s3cmd, and Kubernetes persistent volumes with the Ceph CSI driver.


Ceph RBD on Ubuntu

Ceph RBD (RADOS Block Device) provides block storage that you can map and mount as a local disk on your VM.

Prerequisites

  • Ubuntu VM deployed on NetActuate
  • Ceph monitor addresses (provided by NetActuate)
  • Ceph user ID and key (provided by NetActuate)
  • Pool name and image/volume name

Install Ceph client tools

sudo apt update
sudo apt install ceph ceph-common -y

Configure authentication

Create the keyring file:

sudo tee /etc/ceph/ceph.client.<YOUR_USER_ID>.keyring <<'EOF'
[client.<YOUR_USER_ID>]
key = <YOUR_SECRET_KEY>
EOF
sudo chmod 600 /etc/ceph/ceph.client.<YOUR_USER_ID>.keyring

Create the Ceph configuration file:

sudo tee /etc/ceph/ceph.conf <<'EOF'
[global]
mon_host = <MON1>,<MON2>,<MON3>
EOF

Map the RBD image

sudo rbd map <POOL_NAME>/<IMAGE_NAME> \
--id <YOUR_USER_ID> \
--keyring /etc/ceph/ceph.client.<YOUR_USER_ID>.keyring

Format the device (first time only)

Formatting destroys any data already on the image, so run this only on a new, empty image:

sudo mkfs.ext4 /dev/rbd/<POOL_NAME>/<IMAGE_NAME>

Mount the device

sudo mkdir -p /mnt/rbd
sudo mount /dev/rbd/<POOL_NAME>/<IMAGE_NAME> /mnt/rbd

Persist across reboots

An fstab entry alone is not enough: the /dev/rbd/... device only exists after the image is mapped. Register the image in /etc/ceph/rbdmap and enable the rbdmap service, which maps listed images at boot and then mounts any fstab entries on RBD devices that carry the noauto option:

echo "<POOL_NAME>/<IMAGE_NAME> id=<YOUR_USER_ID>,keyring=/etc/ceph/ceph.client.<YOUR_USER_ID>.keyring" | sudo tee -a /etc/ceph/rbdmap
sudo systemctl enable rbdmap.service
echo "/dev/rbd/<POOL_NAME>/<IMAGE_NAME> /mnt/rbd ext4 noauto 0 0" | sudo tee -a /etc/fstab

Verify

df -h /mnt/rbd

s3cmd Object Storage

s3cmd is a command-line tool for working with S3-compatible object storage. Use it to manage buckets and objects on NetActuate S3 storage.

Prerequisites

  • Access Key and Secret Key (from the NetActuate portal)
  • Object Store Endpoint (from the NetActuate portal)

Install s3cmd

sudo apt update
sudo apt install s3cmd -y

Configure s3cmd

s3cmd --configure

Enter the following when prompted, and accept the defaults for the remaining questions:

Access Key: <your-access-key>
Secret Key: <your-secret-key>
S3 Endpoint: <endpoint:port>
DNS-style bucket+hostname:port template: <endpoint:port>
Use HTTPS protocol: Yes

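Alternatively, s3cmd can be configured non-interactively by writing ~/.s3cfg yourself, which is convenient for automation. The sketch below is a minimal example; the placeholder values and the path-style host_bucket setting are assumptions to adapt to your endpoint:

```shell
# Write a minimal s3cmd config non-interactively.
# All <...> values are placeholders from the NetActuate portal.
# host_bucket here assumes path-style bucket addressing; use
# %(bucket)s.<endpoint:port> instead if your endpoint expects
# DNS-style bucket names.
cat > ~/.s3cfg <<'EOF'
[default]
access_key = <your-access-key>
secret_key = <your-secret-key>
host_base = <endpoint:port>
host_bucket = <endpoint:port>
use_https = True
EOF

# Keep credentials private.
chmod 600 ~/.s3cfg
```

Run `s3cmd ls` afterwards to confirm the credentials and endpoint are accepted.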
Common operations

# List buckets
s3cmd ls

# Create a bucket
s3cmd mb s3://mybucket

# Upload a file
s3cmd put file.txt s3://mybucket/

# Download a file
s3cmd get s3://mybucket/file.txt

# Make a file public
s3cmd setacl s3://mybucket/file.txt --acl-public

# Sync a folder
s3cmd sync ./local-folder/ s3://mybucket/

Public access

Objects are private by default. To make objects public:

  • Use s3cmd setacl --acl-public on individual objects.
  • Or disable Private in the portal, which applies a bucket policy allowing public read access.

Kubernetes CSI Driver

The Ceph CSI RBD driver allows Kubernetes pods to use Ceph block storage as persistent volumes. This section provides a working pattern for deploying the driver and provisioning tenant-scoped storage.

Prerequisites

  • A running Kubernetes cluster on NetActuate
  • kubectl configured with cluster access
  • Ceph cluster ID, monitor addresses, user ID, and key (provided by NetActuate)

Overview

The deployment includes:

  1. CSI driver namespace (ceph-csi-rbd) with ConfigMaps, RBAC, and the CSI controller and node plugin
  2. Tenant objects in your workload namespace: Secret, StorageClass, PVC, and an optional test Pod

Deploy the CSI driver and tenant storage

Save the full manifest (see Storage - RBD (Ceph) for the complete template), replace all placeholders with your actual values, and apply:

kubectl apply -f ceph-csi-rbd-tenant.yaml
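For orientation, the tenant-facing portion of such a manifest (Secret, StorageClass, PVC) looks roughly like the sketch below. This is not the NetActuate template: the object names (csi-rbd-secret, ceph-rbd-block, rbd-pvc), the 10Gi size, and the parameter set are illustrative assumptions, and the driver deployment itself (namespace, RBAC, controller, node plugin) is omitted. Always prefer the complete template from Storage - RBD (Ceph).

```shell
# Hypothetical tenant-side objects only; replace every <...> placeholder.
# The real NetActuate template also deploys the CSI driver itself.
cat > tenant-storage.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: <YOUR_NAMESPACE>
stringData:
  userID: <YOUR_USER_ID>
  userKey: <YOUR_SECRET_KEY>
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-block
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <CLUSTER_ID>
  pool: global-block-pool
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: <YOUR_NAMESPACE>
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: <YOUR_NAMESPACE>
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: <YOUR_NAMESPACE>
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
  namespace: <YOUR_NAMESPACE>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd-block
  resources:
    requests:
      storage: 10Gi
EOF
```

The three secret references point at the same Secret here for simplicity; the provisioner, node-stage, and expand operations may use separate Ceph users in stricter setups.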

Verify CSI pods

kubectl -n ceph-csi-rbd get pods

All pods should show Running status.

Verify PVC binding

kubectl -n <YOUR_NAMESPACE> get pvc

The PVC status should show Bound.

Verify mount in pod

kubectl -n <YOUR_NAMESPACE> exec -it <POD_NAME> -- sh -c "df -h /mnt/rbd && mount | grep rbd"

Key points

  • All block volumes live in the global-block-pool pool
  • Tenant isolation is enforced by the Ceph user and RADOS namespace scoping
  • The StorageClass supports volume expansion (allowVolumeExpansion: true)
  • The reclaim policy is set to Delete by default; change to Retain if you need to preserve data after PVC deletion
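Because the StorageClass allows expansion, growing a volume is just a matter of raising the PVC's storage request and re-applying the manifest; ceph-csi then resizes the RBD image and grows the filesystem online. A hypothetical fragment, assuming an original request of 10Gi:

```yaml
# PersistentVolumeClaim spec fragment: raise the request and re-apply.
spec:
  resources:
    requests:
      storage: 20Gi   # previously 10Gi; requests can grow, never shrink
```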

Need Help?

For guidance on storage setups, connect with a NetActuate infrastructure expert at support@netactuate.com or open a support ticket from the portal: portal.netactuate.com.