Knowledge Base
# How do I reuse an existing PV after re-creating a Kubernetes StatefulSet and its PVC?

There are cases where you have to delete a StatefulSet and re-install a new one. In the process, you may have to delete the PVCs used by the StatefulSet while retaining the PVs, by ensuring that their "Reclaim Policy" is set to Retain. In this case, the following procedure can be used to re-use an existing PV in your StatefulSet application.
Get the PV name using the following command and use it in Step 2.
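The original command is not preserved in this copy; a typical way to list the PVs and identify the one bound to the old PVC is:

```shell
# List all PVs; note the name of the PV that was bound to the old PVC
kubectl get pv
```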
The following is an example output:
Patch the corresponding PV's reclaim policy from "Delete" to "Retain", so that the PV is retained even after its PVC is deleted. This can be done by using the steps mentioned here.
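For reference, the reclaim policy can be patched with a command of the following form (the PV name is a placeholder):

```shell
# Change the reclaim policy so the PV survives PVC deletion
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```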
Example Output:
Get the PVC name using the following command and note it down. You have to use this same PVC name while creating the new PVC.
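A typical way to list the PVCs in the application's namespace is:

```shell
# Note the PVC name used by the StatefulSet pod
kubectl get pvc
```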
Example Output:
Delete the StatefulSet application and its associated PVCs.
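The exact commands are not shown in this copy; assuming a StatefulSet named mongo and the PVC noted above (both names are illustrative), this would look like:

```shell
# Delete the StatefulSet and the PVC it was using; the PV is retained
kubectl delete sts mongo
kubectl delete pvc mongo-persistent-storage-mongo-0
```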
Create a new PVC YAML named newPVC.yaml with the same configuration. Specify the old PV name as the volumeName under the PVC spec.
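A minimal sketch of such a PVC (the PVC name matches the one referenced later in this procedure; the StorageClass, size, and PV name are placeholders that must match your original configuration):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-persistent-storage-mongo-0   # same name as the old PVC
spec:
  storageClassName: <same-storageclass-as-the-old-pvc>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <same-size-as-the-old-pvc>
  volumeName: <old-pv-name>                # the retained PV from Step 1
```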
Apply the modified PVC YAML using the following command:
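For example:

```shell
kubectl apply -f newPVC.yaml
```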
Example Output:
Get the newly created PVC UID using `kubectl get pvc mongo-persistent-storage-mongo-0 -o yaml`. Update the uid under the claimRef in the PV using the following command. The PVC will get attached to the PV after editing the PV with the correct uid.
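The original command is not shown in this copy; one way to do this (the PV name and UID are placeholders) is:

```shell
# Fetch the UID of the newly created PVC
kubectl get pvc mongo-persistent-storage-mongo-0 -o jsonpath='{.metadata.uid}'

# Point the PV's claimRef at the new PVC by updating its uid
kubectl patch pv <pv-name> -p '{"spec":{"claimRef":{"uid":"<new-pvc-uid>"}}}'
```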
Get the updated PVC status using the following command.
Example Output:
Apply the same StatefulSet application YAML. The pod will come back online by re-using the existing PVC. The application pod status can be checked using the following command.
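For example (add the namespace of your application if it is not the default one):

```shell
kubectl get pods
```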
# How to prevent container logs from exhausting disk space?

Container logs, if left unchecked, can eat into the underlying disk space, causing disk-pressure conditions that lead to eviction of pods running on a given node. This can be prevented by performing log rotation based on file size while specifying a retention count. One recommended way to do this is by configuring the docker logging driver on the individual cluster nodes. Follow the steps below to enable log rotation.
Configure the docker configuration file `/etc/docker/daemon.json` (create one if not already present) with log options similar to the ones shown below (with the desired driver, the size at which logs are rotated, the maximum logfile retention count, and compression, respectively):
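A sketch of such a configuration (the sizes and counts shown are illustrative; adjust them to your environment):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5",
    "compress": "true"
  }
}
```

After updating the file, the docker daemon needs to be restarted (for example, `systemctl restart docker` on systemd-based nodes), as described in the next step.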
Restart the docker daemon on the nodes. This may cause a temporary disruption of the running containers and cause the node to show up as Not Ready until the daemon has restarted successfully. To verify that the newly set log options have taken effect, the following commands can be used:
At a node-level, the docker logging driver in use can be checked via the following command:
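The command from the original is not preserved here; one way to confirm the active logging driver at the node level (an assumption, not necessarily the original command) is:

```shell
# Show the logging driver configured for the docker daemon on this node
docker info --format '{{.LoggingDriver}}'
```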
The LogConfig section of the output must show the desired values:
At the individual container level, the log options in use can be checked via the following command:
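A typical form of this check (the container ID is a placeholder):

```shell
# Show the log driver and options in effect for a specific container
docker inspect --format '{{json .HostConfig.LogConfig}}' <container-id>
```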
The LogConfig section of the output must show the desired values:
To view the current and compressed files, check the contents of the `/var/lib/docker/containers/<container-id>/` directory. The symlinks at `/var/log/containers/<container-name>` refer to the above.
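For example (the container ID is a placeholder):

```shell
# Current log plus rotated and compressed (.gz) files for the container
ls -lh /var/lib/docker/containers/<container-id>/
```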
note

- The steps are common across Linux distributions (tested on CentOS, RHEL, Ubuntu).
- Log rotation via the specified procedure is supported by the docker logging driver types `json-file` (default) and `local`.
- Ensure there are no dockerd CLI flags specifying `--log-opts` (verify via `ps -aux` or the service definition files in `/etc/init.d` or `/etc/systemd/system/docker.service.d`). The docker daemon fails to start if an option is duplicated between the file and the flags, regardless of their values. These log options are applicable only to containers created after the dockerd restart (which is automatically taken care of by the kubelet).
- `kubectl logs` reads the uncompressed files/symlinks at `/var/log/containers` and thereby shows rotated/rolled-over logs. If you would like to read the retained/compressed log content as well, use the `docker logs` command on the nodes. Note that reading from compressed logs can cause a temporary increase in CPU utilization (on account of the decompression performed internally).
- The log-opt `compress: true` is supported from Docker version 18.04.0. The `max-file` and `max-size` opts are supported on earlier releases as well.
# How to create a BlockDeviceClaim for a particular BlockDevice?

There are certain use cases where the user does not want some of the BlockDevices discovered by OpenEBS to be used by any of the storage engines. In such scenarios, users can manually create a BlockDeviceClaim to claim that particular BlockDevice, so that it won't be used by Local PV. The following steps can be used to claim a particular BlockDevice:
Download the BDC CR YAML from the node-disk-manager repository. Provide the BD name of the corresponding BlockDevice, which can be obtained by running `kubectl get bd -n <openebs_installed_namespace>`.
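A sketch of what such a claim might look like (the claim name is illustrative; the exact fields should follow the YAML from the node-disk-manager repository):

```yaml
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: my-blockdevice-claim
  namespace: <openebs_installed_namespace>   # same namespace where OpenEBS is installed
spec:
  blockDeviceName: <bd-name>                 # BD name obtained from 'kubectl get bd'
```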
Apply the modified YAML spec using the following command:
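For example (the filename is a placeholder):

```shell
kubectl apply -f blockdeviceclaim.yaml
```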
note

The BlockDeviceClaim CR should be created in the same namespace where OpenEBS is installed.
Verify whether the BDC is created for the given BD CR using the following command:
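For example:

```shell
kubectl get bdc -n <openebs_installed_namespace>
```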
# How to provision Local PV on K3OS?

K3OS can be installed on any hypervisor. The procedure for deploying K3OS in a VMware environment is provided in the following section. There are 3 steps for provisioning OpenEBS Local PV on K3OS:
- Configure server(master)
- Configure agent(worker)
- Deploying OpenEBS
Detailed information on each step is provided below.
Configure server(master)
Download the ISO file from the latest release and create a virtual machine in VMware. Mount the ISO file into the hypervisor and start the virtual machine.
Select Run k3OS LiveCD or Installation and press <ENTER>. The system will boot up and give you the login prompt.
Log in as the rancher user without providing a password.
Set a password for the rancher user to enable connectivity from other machines by running `sudo passwd rancher`.

Now, install K3OS to disk. This can be done by running the command `sudo os-config`. Choose the option 1. Install to disk. Answer the questions that follow and provide the rancher user password.
As part of the above command execution, you can configure the host as either a server or an agent. Select 1. server to configure the K3s master. While configuring the server, set the cluster secret, which will be used while joining nodes to the server. After successful installation and server reboot, check the cluster status.
Run the following command to get the details of the nodes:
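For example:

```shell
kubectl get nodes
```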
Example output:
Configure agent(worker)
Follow the above steps up to installing K3OS to disk on all the hosts that you want to be part of the K3s cluster.
To configure the Kubernetes agent with K3OS, select the option 2. agent while running the `sudo os-config` command. You need to provide the URL of the server and the secret configured during server configuration. After performing this, the Kubernetes agent will be configured and added to the server.
Check the cluster configuration by listing the nodes using the following command:
Example output:
Installing OpenEBS
Run the following command from the master console to install OpenEBS:
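The exact manifest reference is not preserved in this copy; installing via the OpenEBS operator YAML typically looks like the following (verify the URL against the OpenEBS documentation for your release):

```shell
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```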
Check the OpenEBS components by running the following command:
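For example:

```shell
kubectl get pods -n openebs
```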
note

The openebs-ndm pods are not created successfully. This is due to the lack of udev support in K3OS. More details can be found here.

Now the user can install Local PV on this cluster. Check the StorageClasses created as part of the OpenEBS deployment by running the following command:
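For example:

```shell
kubectl get sc
```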
Example output:
The default StorageClass `openebs-hostpath` can be used to create a Local PV on the path `/var/openebs/local` on your Kubernetes node. You can either use the `openebs-hostpath` StorageClass to create volumes or create a new StorageClass by following the steps mentioned here.

note
OpenEBS local PV will not be bound until the application pod is scheduled as its volumeBindingMode is set to WaitForFirstConsumer. Once the application pod is scheduled on a certain node, OpenEBS Local PV will be bound on that node.
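As an illustration, a PVC that consumes the default hostpath StorageClass might look like the following sketch (the name and requested size are placeholders); it will stay Pending until a pod that uses it is scheduled:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
```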