Troubleshooting OpenEBS - NDM
General guidelines for troubleshooting
- Contact OpenEBS Community for support.
- Search for similar issues added in this troubleshooting section.
- Search for any reported issues on StackOverflow under the OpenEBS tag.
Issues covered in this section:
- Blockdevices are not detected by NDM
- Unable to claim blockdevices by NDM operator
Blockdevices are not detected by NDM
One additional disk is connected to the node, with multiple partitions on the disk. Some of the partitions have a filesystem and are mounted. kubectl get bd -n openebs does not show any blockdevices, even though blockdevice resources should ideally have been created for those partitions.
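For reference, the check looks like this; the empty result shown in the comment is only illustrative of what you would see when no blockdevice resources have been created:

```shell
# List blockdevice resources in the openebs namespace
kubectl get bd -n openebs
# Problematic result in this scenario:
#   No resources found in openebs namespace.
```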
Troubleshooting:
Check the output of lsblk on the node and note the mountpoints of the partitions. By default, NDM excludes partitions mounted at /, /boot and /etc/hosts (which is on the same partition as the kubernetes / docker filesystem), as well as the parent disks of those partitions. In the above example, /dev/sdb is excluded because of the root partitions on that disk, and /dev/sda4 contains the docker filesystem, so /dev/sda is also excluded.
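For illustration, an lsblk output matching this scenario might look like the following; the device names, sizes and mountpoints are assumptions and will differ on a real node:

```shell
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
# NAME     SIZE FSTYPE MOUNTPOINT
# sda      500G
# ├─sda1   100G ext4
# ├─sda2   100G ext4   /mnt/data
# └─sda4   300G ext4   /var/lib/docker   <- docker filesystem => /dev/sda is excluded
# sdb      100G
# ├─sdb1     1G ext4   /boot             <- OS partition => /dev/sdb is excluded
# └─sdb2    99G ext4   /
```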
Resolution:
The ndm-config-map needs to be edited:
- Remove the /etc/hosts entry from the os-disk-exclude-filter.
- Add the corresponding docker filesystem partition (e.g. /dev/sda4) to the exclude section of the path filter, as in the sketch below.
- Restart the NDM daemonset pods.
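As a rough sketch, assuming the default openebs-ndm-config ConfigMap from the operator manifest (the ConfigMap name, filter keys and existing exclude values may differ in your installation), the edited filterconfigs section could look like this:

```yaml
# Excerpt of the node-disk-manager.config key in the NDM ConfigMap
# (for example: kubectl edit cm openebs-ndm-config -n openebs)
filterconfigs:
  - key: os-disk-exclude-filter
    name: os disk exclude filter
    state: true
    exclude: "/,/boot"                # /etc/hosts removed from this list
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/sda4"   # docker filesystem partition added
```

Deleting the daemonset pods is usually enough to restart them, since the DaemonSet recreates them automatically; the label below assumes the default manifest:

```shell
kubectl delete pods -n openebs -l name=openebs-ndm
```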
The blockdevices should now be created for the unused partitions.
Unable to claim blockdevices by NDM operator
BlockDeviceClaims may remain in a Pending state even if blockdevices are available in an Unclaimed and Active state. The main reason is that no blockdevices match the criteria specified in the BlockDeviceClaim. Sometimes, even when the criteria match, the blockdevice may stay in an Unclaimed state.
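To look at the current phase of the claims and the state of the devices, both resources can be listed (assuming they are in the openebs namespace):

```shell
# List blockdeviceclaims and their phase
kubectl get blockdeviceclaims -n openebs
# List blockdevices with their claim state and status
kubectl get blockdevices -n openebs
```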
Troubleshooting:
Check if the blockdevice has either of the following annotations: one set when the device was used by cStor, or one set when the device was used by Local PV.
If the cStor annotation is present, it means the blockdevice was previously used by cStor and was not properly cleaned up. The cStor pool can be from a previous release, or the disk may already contain some ZFS labels. If the Local PV annotation is present, it means the blockdevice was previously used by Local PV and cleanup was not done on the device.
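The annotations on a blockdevice can be inspected with kubectl; <blockdevice-name> below is a placeholder:

```shell
# Show metadata, including annotations, for a specific blockdevice
kubectl describe blockdevice <blockdevice-name> -n openebs
# Or print only the annotations
kubectl get blockdevice <blockdevice-name> -n openebs -o jsonpath='{.metadata.annotations}'
```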
Resolution:
- ssh to the node on which the blockdevice is present.
- If the disk has partitions, run wipefs on all the partitions (see the example below).
- Run wipefs on the disk.
- Restart the NDM pod running on the node.
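A minimal sketch of these steps, assuming the blockdevice maps to /dev/sdb with partitions /dev/sdb1 and /dev/sdb2 (device names are examples only):

```shell
# On the node: wipe filesystem/ZFS signatures from each partition, then the parent disk
sudo wipefs -a /dev/sdb1
sudo wipefs -a /dev/sdb2
sudo wipefs -a /dev/sdb

# From a machine with kubectl access: find and delete the NDM pod on that node;
# the DaemonSet recreates it automatically
kubectl get pods -n openebs -o wide | grep ndm
kubectl delete pod <ndm-pod-name> -n openebs
```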
New blockdevices should get created for those disks, and they can be claimed and used. The older blockdevices will go into an Unknown/Inactive state.