Mayastor⚓︎

Links to external documentation

→ OpenEBS Mayastor Documentation

OpenEBS Mayastor is a persistent volume provider for Kubernetes. prokube uses Mayastor as a distributed, redundant storage provider on highly available setups (usually three nodes or more).

FAQ⚓︎

Some PVCs are pending and are not getting bound⚓︎

Possibly the cluster does not have enough storage available.

You can use the kubectl mayastor plugin (pre-installed on all Mayastor-enabled nodes) to check how much disk capacity is available, e.g.:

ansible@master-focal-still-goberian:~$ kubectl mayastor get pools
 ID                                     DISKS           MANAGED  NODE                           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
 pool-on-worker-1-focal-still-goberian  aio:///dev/sdb  true     worker-1-focal-still-goberian  Online  99.9GiB   90GiB      9.9GiB     90GiB
 pool-on-master-focal-still-goberian    aio:///dev/sdb  true     master-focal-still-goberian    Online  99.9GiB   90GiB      9.9GiB     90GiB
 pool-on-worker-0-focal-still-goberian  aio:///dev/sdb  true     worker-0-focal-still-goberian  Online  99.9GiB   90GiB      9.9GiB     90GiB

The diskpool CRs (custom resources) also report capacity and usage, so the kubectl mayastor plugin is not strictly needed:

kubectl get diskpool -n mayastor
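To sum up the free capacity across all pools from the CR output, a small helper like the following can be used. This is a sketch: the `.status.capacity` and `.status.used` field paths are an assumption and may differ between Mayastor versions, so check `kubectl explain diskpool.status` first.

```shell
# summarize_pools: read tab-separated "name<TAB>capacity<TAB>used" rows
# and print the free bytes per pool plus a total.
summarize_pools() {
  awk -F'\t' '{ free = $2 - $3; printf "%s\tfree=%d\n", $1, free; total += free }
              END { printf "total_free\t%d\n", total }'
}

# Pull capacity/used from the diskpool CRs (field paths are an assumption,
# verify them against your Mayastor version) and summarize:
kubectl get diskpool -n mayastor \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\t"}{.status.used}{"\n"}{end}' \
| summarize_pools
```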

I'm having problems with Mayastor backed PVCs⚓︎

On Ubuntu, automatic updates upgrade the kernel but do not install the matching linux-modules-extra package, which is needed for NVMe access. This leads to Mayastor not being able to access hard disks via NVMe. In turn, PVCs cannot be accessed, which leads to error messages such as this:

kubelet  MountVolume.MountDevice failed for volume "pvc-15a0fe75-9961-469e-a3b8-e61b0d4cda6a" : rpc error: code = Internal desc = Failed to stage volume 15a0fe75-9961-469e-a3b8-e61b0d4cda6a: attach failed: NVMe connect failed: /dev/nvme-fabrics, No such file or directory (os error 2)

or

 Warning  FailedAttachVolume  112s (x10 over 17m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-e1ce0497-88a8-4dbe-b311-f7ade8a9838a" : rpc error: code = Internal desc = Operation failed: PreconditionFailed("error in response: status code '412 Precondition Failed', content: 'RestJsonError { details: \"\", message: \"SvcError :: NoOnlineReplicas: No online replicas are available for Volume 'e1ce0497-88a8-4dbe-b311-f7ade8a9838a'\", kind: FailedPrecondition }'")

or

ansible@worker-1-focal-capital-pekingese:~$ kubectl-mayastor get volumes
 ID                                    REPLICAS  TARGET-NODE                       ACCESSIBILITY  STATUS  SIZE   THIN-PROVISIONED  ALLOCATED
 24a14e2f-c881-4a82-9615-2938529e9315  3         master-focal-capital-pekingese    nvmf           Online  10GiB  false             10GiB
 2d04b9fd-7124-4ba1-84ed-febc42a104a7  3         master-focal-capital-pekingese    nvmf           Online  10GiB  false             10GiB
 46387e0f-4cf9-4724-b2ab-da90273b976a  3         master-focal-capital-pekingese    nvmf           Online  50GiB  false             50GiB
 654ce71f-64d8-49d5-b067-18a532751f6d  3         worker-0-focal-capital-pekingese  nvmf           Online  10GiB  false             10GiB
 80f28fe3-599c-4db9-adc1-c9d22982c7ee  3         master-focal-capital-pekingese    nvmf           Online  10GiB  false             10GiB
 8a82833e-212a-4e5d-b333-7774c2e7ad82  3         worker-0-focal-capital-pekingese  nvmf           Online  20GiB  false             20GiB
 c4b34d32-5527-4248-8945-3ef86164cac0  3         master-focal-capital-pekingese    nvmf           Online  10GiB  false             10GiB
 e1ce0497-88a8-4dbe-b311-f7ade8a9838a  3         <none>                            <none>         Online  5GiB   false             0 B

This can be fixed by installing the matching linux-modules-extra package, e.g. sudo apt install linux-modules-extra-5.15.0-1049-gcp. Since even the temporary unavailability of disks can cause issues with Mayastor, automatic kernel upgrades should ideally be disabled.
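As a sketch, the following snippet checks whether the NVMe fabrics module is available for the running kernel and, if not, prints the linux-modules-extra package that would provide it (the package-name pattern matches Ubuntu's convention; verify it for your distribution):

```shell
# Check for the nvme_fabrics kernel module matching the running kernel.
kver="$(uname -r)"
if modinfo nvme_fabrics >/dev/null 2>&1; then
  echo "nvme_fabrics module found for kernel $kver"
else
  # On Ubuntu the module ships in linux-modules-extra-<kernel release>.
  echo "missing nvme_fabrics; try: sudo apt install linux-modules-extra-$kver"
fi
```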

Reclaiming Mayastor drives⚓︎

A Mayastor diskpool on a Mayastor-formatted drive survives a Kubernetes teardown and can be recreated on the node under the same unique name as before. If you want to change the name, or use the drive for other purposes, you need to wipe the partition table and overwrite the drive with zeros.

Wipe partition table⚓︎

First list the existing partition-table signatures:

sudo wipefs /dev/sdb
DEVICE OFFSET       TYPE UUID LABEL
sdb    0x200        gpt
sdb    0xf9fffffe00 gpt
sdb    0x1fe        PMBR

Then wipe them, e.g. by type (WARNING: YOU WILL LOSE ALL DATA!):

sudo wipefs -a -t gpt -f /dev/<replace>

Overwrite the drive with zeros⚓︎

Mayastor accesses drives at a low level and does not delete old data when a drive is reused. In order to release the whole drive, you have to overwrite it with zeros (WARNING: YOU WILL LOSE ALL DATA!):

sudo dd if=/dev/zero of=/dev/<replace> bs=4096 status=progress
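Afterwards, a quick sanity check can confirm the drive reads back as zeros. The helper below is a sketch (not part of Mayastor tooling) and assumes GNU cmp for the `-n` option:

```shell
# usage: check_zeroed <device-or-file> <bytes>
# Compares the first <bytes> of the target against /dev/zero.
check_zeroed() {
  if cmp -s -n "$2" /dev/zero "$1"; then
    echo "first $2 bytes of $1 are zero"
  else
    echo "non-zero data found in first $2 bytes of $1"
  fi
}

# e.g. spot-check the first MiB (run as root to read the raw device):
#   check_zeroed /dev/<replace> 1048576
```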