OpenEBS Release Notes

Most popular & widely deployed Open Source Container Native Storage platform for Stateful Persistent Applications on Kubernetes.

v2.6.0

3 years ago

Release Summary

OpenEBS v2.6 contains some key enhancements and several fixes for the issues reported by the user community across all 9 types of OpenEBS volumes.

Here are some of the key highlights in this release.

New capabilities

  • OpenEBS is introducing a new CSI driver for dynamic provisioning of Jiva volumes. This driver is released as alpha and currently supports the following additional features compared to non-CSI Jiva volumes.

    • Jiva Replicas are backed by OpenEBS host path volumes
    • Auto-remount of volumes that are marked read-only by the iSCSI client due to intermittent network issues
    • Handling of the multi-attach error sometimes seen on on-premises clusters
    • A custom resource for Jiva volumes to help with easy access to the volume status

    For instructions on how to set up and use the Jiva CSI driver, please see https://github.com/openebs/jiva-operator.
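
    A minimal sketch of a StorageClass for the new Jiva CSI driver is shown below. The provisioner and parameter names follow the jiva-operator repository linked above; the StorageClass and policy names are placeholders, and the JivaVolumePolicy referenced here would need to be created separately.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-jiva-csi-sc            # placeholder name
    provisioner: jiva.csi.openebs.io
    allowVolumeExpansion: true
    parameters:
      cas-type: "jiva"
      policy: "example-jivavolumepolicy"   # placeholder JivaVolumePolicy name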

Key Improvements

  • Several bug fixes to the Mayastor volumes along with improvements to the API documentation. See Mayastor release notes.
  • Enhanced the NFS Dynamic Provisioner to support using the Cluster IP for the dynamically provisioned NFS server. It was observed that on some Kubernetes clusters, the kubelet or the node trying to mount the NFS volume was unable to resolve the cluster-local service.
  • ZFS Local PV added support for resizing raw block volumes (see the example after this list).
  • LVM Local PV has been enhanced with additional features and some key bug fixes, including:
    • Raw block volume support
    • Snapshot support
    • Ability to schedule based on the capacity of the volumes provisioned
    • Ensure that LVM volume creation and deletion functions are idempotent
  • NDM partition discovery was updated to fetch the device details from its parent block device.
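
Resizing a ZFS Local PV raw block volume (referenced in the list above) follows the standard Kubernetes volume expansion flow: grow the PVC's storage request, assuming the StorageClass has allowVolumeExpansion enabled. The claim name, namespace, and size below are placeholders.

  kubectl patch pvc zfspv-block-claim -n default --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'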

Key Bug Fixes

Backward Incompatibilities

  • Kubernetes 1.17 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.

    • The CSI components have been upgraded to:
      • k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
      • k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
      • k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from an older version of cStor operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrading: kubectl delete csidriver cstor.csi.openebs.io (see the commands after this list). For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

  • The CRD API version has been updated for the cStor custom resources to v1. If you are upgrading via the helm chart, you might have to make sure that the new CRDs are updated. https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds

  • The e2e pipelines include upgrade testing only from release 1.5 and higher to 2.6. If you are running a release older than 1.5, OpenEBS recommends you upgrade to the latest version as soon as possible.
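
As referenced in the list above, the cStor CSI driver object can be removed and the removal verified with the following commands (a sketch; run this before applying the upgraded operators):

kubectl delete csidriver cstor.csi.openebs.io
kubectl get csidriver    # cstor.csi.openebs.io should no longer be listed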

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Automation of further Day 2 operations, like automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.
  • PSP support has been added to ZFS Local PV and cStor helm charts.
  • Improving the OpenEBS Rawfile Local PV in preparation for its beta release. The current release fixes some issues and adds support for setting resource limits on the sidecar, along with a few other optimizations.
  • Sample Grafana dashboards for managing OpenEBS are being developed here: https://github.com/openebs/charts/tree/gh-pages/grafana-charts

Show your Support

Thank you @coboluxx (IDNT) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @luizcarlosfaria, @Z0Marlin, @iyashu, @dyasny, @hanieh-m, @si458, @Ab-hishek

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.6.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.6.0
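
The helm command above uses Helm v2 syntax (--name). If your cluster uses Helm v3 (3.2 or newer), the equivalent would be along these lines:

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace --version 2.6.0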

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.6 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.6, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.6 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then a cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check is in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855 (a sample quota is sketched below).
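
As a sketch of the resource quota approach mentioned in the last point above, a namespace-scoped ResourceQuota can cap the total capacity requested from a given StorageClass. The namespace and StorageClass names below are placeholders; see the linked issue for the recommended configuration.

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: cstor-volume-quota
    namespace: my-app-namespace          # placeholder namespace running the workloads
  spec:
    hard:
      # caps the total requested capacity of PVCs that use the named StorageClass
      openebs-cstor.storageclass.storage.k8s.io/requests.storage: "100Gi"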

v2.5.0

3 years ago

Release Summary

A warm and happy new year to all our users, contributors, and supporters. :tada: :tada: :tada:.

Keeping up with our tradition of monthly releases, OpenEBS v2.5 is now GA with some key enhancements and several fixes for the issues reported by the user community. Here are some of the key highlights in this release:

New capabilities

  • OpenEBS has support for multiple storage engines, and the feedback from users has shown that users tend to only use a few of these engines on any given cluster depending on the workload requirements. As a way to provide more flexibility for users, we are introducing separate helm charts per engine. With OpenEBS 2.5 the following helm charts are supported.

    • openebs - This is the most widely deployed chart, with support for Jiva, cStor, and Local PV hostpath and device volumes.
    • zfs-localpv - Helm chart for ZFS Local PV CSI driver.
    • cstor-operators - Helm chart for cStor CSPC Pools and CSI driver.
    • dynamic-localpv-provisioner - Helm chart for only installing Local PV hostpath and device provisioners.

    (Special shout out to @sonasingh46, @shubham14bajpai, @prateekpandey14, @xUnholy, @akhilerm for continued efforts in helping to build the above helm charts.)

  • OpenEBS is introducing a new CSI driver for dynamic provisioning of Kubernetes Local Volumes backed by LVM. This driver is released as alpha and currently supports the following features.

    • Create and Delete Persistent Volumes
    • Resize Persistent Volume

    For instructions on how to set up and use the LVM CSI driver, please see https://github.com/openebs/lvm-localpv.
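
    A minimal sketch of an LVM Local PV StorageClass follows. The provisioner and parameter names are taken from the lvm-localpv repository linked above; the StorageClass name and the volume group (which must already exist on the nodes) are placeholders.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-lvmpv                # placeholder name
    provisioner: local.csi.openebs.io
    parameters:
      storage: "lvm"
      volgroup: "lvmvg"                  # placeholder LVM volume group pre-created on the nodes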

Key Improvements

  • Enhanced the ZFS Local PV scheduler to support spreading the volumes across the nodes based on the capacity of the volumes that are already provisioned. After upgrading to this release, capacity-based spreading will be used by default. In the previous releases, the volumes were spread based on the number of volumes provisioned per node. https://github.com/openebs/zfs-localpv/pull/266.

  • Added support to configure image pull secrets for the pods launched by the OpenEBS Local PV Provisioner and cStor (CSPC) operators. The image pull secrets (comma-separated strings) can be passed as an environment variable (OPENEBS_IO_IMAGE_PULL_SECRETS) to the deployments that launch these additional pods (see the snippet after this list). The following deployments need to be updated.

  • Added support to allow users to specify custom node labels for allowedTopologies under the cStor CSI StorageClass. https://github.com/openebs/cstor-csi/pull/135
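
As referenced above, the image pull secrets would be passed by adding an environment variable to the container spec of the relevant provisioner/operator Deployments. A sketch, with a placeholder secret name:

        env:
        - name: OPENEBS_IO_IMAGE_PULL_SECRETS
          value: "my-registry-secret"    # comma-separated list of image pull secret names (placeholder)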

Key Bug Fixes

  • Fixed an issue where a Jiva replica could fail to initialize if the node where the replica pod is scheduled shut down abruptly during replica initialization. https://github.com/openebs/jiva/pull/337.
  • Fixed an issue that was causing Restore (with automatic Target IP configuration enabled) to fail if cStor volumes were created with Target Affinity to the application pod. https://github.com/openebs/velero-plugin/issues/141.
  • Fixed an issue where Jiva and cStor volumes would remain in a pending state on Kubernetes 1.20 and above clusters. Kubernetes 1.20 deprecated the SelfLink option, which causes this failure with older Jiva and cStor provisioners. https://github.com/openebs/openebs/issues/3314
  • Fixed an issue with cStor CSI Volumes that caused Pods using cStor CSI Volumes on unmanaged Kubernetes clusters to remain in a pending state due to a multi-attach error. This was caused by cStor being dependent on the CSI VolumeAttachment object to determine where to attach the volume. In the case of unmanaged Kubernetes clusters, the VolumeAttachment object was not cleared by Kubernetes from the failed node, and hence cStor would assume the volume was still attached to the old node.

Backward Incompatibilities

  • Kubernetes 1.17 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.

    • The CRD version has been upgraded to v1. (Thanks to the efforts from @RealHarshThakur, @prateekpandey14, @akhilerm)
    • The CSI components have been upgraded to:
      • quay.io/k8scsi/csi-node-driver-registrar:v2.1.0
      • quay.io/k8scsi/csi-provisioner:v2.1.0
      • quay.io/k8scsi/snapshot-controller:v4.0.0
      • quay.io/k8scsi/csi-snapshotter:v4.0.0
      • quay.io/k8scsi/csi-resizer:v1.1.0
      • quay.io/k8scsi/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from an older version of cStor Operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrading: kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor Operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Automation of further Day 2 operations, like automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.

Show your Support

Thank you @laimison (Renthopper) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @allenhaozi, @anandprabhakar0507, @Hoverbear, @kaushikp13, @praveengt

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.5.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.5.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.5 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.5, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.5 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then a cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check is in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.4.0

3 years ago

Release Summary

OpenEBS v2.4 is our last monthly release for the year with some key enhancements and several fixes for the issues reported by the user community.

Note: With Kubernetes 1.20, the SelfLink option, which is used by the OpenEBS Jiva and cStor (non-CSI) provisioners, has been removed. This causes the PVCs to remain in a pending state. The workaround and fix for this are being tracked under this issue. A patch release will be made available as soon as the fix has been verified on 1.20 platforms.

Here are some of the key highlights in this release:

New capabilities

  • ZFS Local PV has now graduated to stable, with all the supported features and upgrade tests automated via e2e testing. ZFS Local PV is best suited for distributed workloads that require resilient local volumes that can sustain local disk failures. You can read more about using ZFS Local volumes at https://github.com/openebs/zfs-localpv and check out how ZFS Local PVs are used in production at Optoro. A sample StorageClass is sketched after this list.

  • OpenEBS is introducing a new NFS dynamic provisioner to allow the creation and deletion of NFS volumes using Kernel NFS backed by block storage. This provisioner is being actively developed and released as alpha. This new provisioner allows users to provision OpenEBS RWX volumes where each volume gets its own NFS server instance. In the previous releases, OpenEBS RWX volumes were supported via the Kubernetes NFS Ganesha and External Provisioner - where multiple RWX volumes share the same NFS Ganesha Server. You can read more about the new OpenEBS Dynamic Provisioner at https://github.com/openebs/dynamic-nfs-provisioner.
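
A minimal sketch of a StorageClass for ZFS Local PV is shown below. The provisioner and parameter names follow the zfs-localpv repository; the StorageClass name and the ZFS pool (which must already exist on the nodes) are placeholders.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-zfspv                  # placeholder name
  provisioner: zfs.csi.openebs.io
  parameters:
    poolname: "zfspv-pool"               # placeholder ZFS pool pre-created on the nodes
    fstype: "zfs"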

Key Improvements

  • Added support for specifying a custom node affinity label for OpenEBS Local Hostpath volumes. By default, OpenEBS Local Hostpath volumes use kubernetes.io/hostname for setting the PV Node Affinity. Users can now specify a custom label to use for PV Node Affinity. Custom node affinity can be specified in the Local PV storage class as follows:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-hostpath
      annotations:
        openebs.io/cas-type: local
        cas.openebs.io/config: |
          - name: StorageType
            value: "hostpath"
          - name: NodeAffinityLabel
            value: "openebs.io/custom-node-id"
    provisioner: openebs.io/local
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete
    
    This will help with use cases like:
    • Deployments where kubernetes.io/hostname is not unique across the cluster (Ref: https://github.com/openebs/openebs/issues/2875)
    • Deployments where an existing Kubernetes node running Local volumes is replaced with a new node, and the storage attached to the old node is moved to the new node. Without this feature, the Pods using volumes from the older node would remain in the pending state.
  • Added a configuration option to the Jiva volume provisioner to skip adding replica node affinity. This will help in deployments where replica nodes are frequently replaced with new nodes causing the replica to remain in the pending state. The replica node affinity should be used in cases where replica nodes are not replaced with new nodes or the new node comes back with the same node-affinity label. (Ref: https://github.com/openebs/openebs/issues/3226). The node affinity for jiva volumes can be skipped by specifying the following ENV variable in the OpenEBS Provisioner Deployment.
         - name: OPENEBS_IO_JIVA_PATCH_NODE_AFFINITY
           value: "disabled"
    
  • Enhanced the OpenEBS Velero plugin (cStor) to automatically set the target IP once a cStor volume is restored from a backup. (Ref: https://github.com/openebs/velero-plugin/pull/131). This feature can be enabled by updating the VolumeSnapshotLocation with the configuration option autoSetTargetIP as follows:
    apiVersion: velero.io/v1
    kind: VolumeSnapshotLocation
    metadata:
      ...
    spec:
      config:
        ...
        ...
        autoSetTargetIP: "true"
    
    (Huge thanks to @zlymeda for working on this feature which involved co-ordinating this fix across multiple repositories).
  • Enhanced the OpenEBS Velero plugin to automatically create the target namespace during restore if the target namespace doesn't exist. (Ref: https://github.com/openebs/velero-plugin/issues/137).
  • Enhanced the OpenEBS helm chart to support image pull secrets. https://github.com/openebs/charts/pull/174
  • Enhanced the OpenEBS helm chart to allow specifying resource limits on OpenEBS control plane pods. https://github.com/openebs/charts/issues/151
  • Enhanced the NDM filters to support discovering LVM devices both with /dev/dm-X and /dev/mapper/x patterns. (Ref: https://github.com/openebs/openebs/issues/3310).

Key Bug Fixes

Backward Incompatibilities

  • Velero has updated the configuration for specifying a different node selector during restore. The configuration changes from velero.io/change-pvc-node to velero.io/change-pvc-node-selector. ( Ref: https://github.com/openebs/velero-plugin/pull/139)

Other notable updates

  • OpenEBS ZFS Local PV CI has been updated to include CSI Sanity tests and fixed some minor issues to conform with the CSI test suite. (Ref: https://github.com/openebs/zfs-localpv/pull/232).
  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided.
  • Significant work is underway to make it easier to install only the components that users ultimately decide to use for their workloads. These features will allow users to run different flavors of OpenEBS in K8s clusters optimized for the workloads they intend to run in the cluster. This can be achieved in the current version using a customized helm values file or using a modified Kubernetes manifest file. We have continued to make some significant progress with the help of the community towards supporting individual helm charts for each of the storage engines. The locations of the various helm charts are as follows:
    • Dynamic Local PV ( host path and device)
    • Dynamic Local PV CSI ( ZFS )
    • Dynamic Local PV CSI ( Rawfile )
    • cStor
    • Mayastor
  • Automation of further Day 2 operations, like automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Keeping the OpenEBS generated Kubernetes custom resources in sync with the upstream Kubernetes versions, like moving CRDs from v1beta1 to v1

Show your Support

Thank you @FeynmanZhou (KubeSphere) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @alexppg, @arne-rusek, @Atharex, @bobek, @Mosibi, @mpartel, @nareshdesh, @rahulkrishnanfs, @ssytnikov18, @survivant

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17 or newer is recommended for using cStor CSI drivers.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.4.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.4.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.4 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.4, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.4 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then a cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check is in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.3.0

3 years ago

Release Summary

OpenEBS v2.3 is our Hacktoberfest release, with 40+ new contributors added to the project, and ships with ARM64 support for cStor, Jiva, and Dynamic Local PV. Mayastor is seeing higher adoption rates, resulting in further fixes and enhancements.

Here are some of the key highlights in this release:

New capabilities

  • ARM64 support (declared beta) for OpenEBS Data Engines - cStor, Jiva, Local PV (hostpath and device), ZFS Local PV.

    • A significant improvement in this release is the support for multi-arch container images for amd64 and arm64. The multi-arch images are available on the docker hub and will enable the users to run OpenEBS in the Kubernetes cluster that has a mix of arm64 and amd64 nodes.
    • In addition to ARM64 support, the Local PV (hostpath and device) multi-arch container images include support for arm32 and Power systems.
    • The arch-specific container images like <image name>-amd64:<image-tag> are also made available from Docker Hub and Quay to provide backward compatibility for users running OpenEBS deployments with arch-specific images.
    • To upgrade your volumes to multi-arch containers, make sure you specify the to-image as the multi-arch image available from Docker Hub, or your own copy of it.
    • A special shout-out and many thanks to @xUnholy, @shubham14bajpai, @akhilerm, and @prateekpandey14 for adding the multi-arch support to 27 OpenEBS container images generated from 14+ GitHub repositories, and to @wangzihao3, @radicand, @sgielen, @Pensu, and many more users from our slack community for helping with testing, feedback, and fixes by using the early versions of ARM64 builds in dev and production.
  • Enhanced the cStor Velero Plugin to help with automating the restore from an incremental backup. Restoring an incremental backup involves restoring the full backup (also called the base backup) and the subsequent incremental backups up to the desired incremental backup. With this release, the user can set a new parameter (restoreAllIncrementalSnapshots) in the VolumeSnapshotLocation to automate the restore of the required base and incremental backups. For detailed instructions to try this feature, please refer to this doc.

  • OpenEBS Mayastor is seeing tremendous growth in terms of users trying it out and providing feedback. A lot of work in this release has gone into fixing issues, enhancing the e2e tests and control plane, and adding initial support for snapshots. For further details on enhancements and bug fixes in Mayastor, please refer to Mayastor.

Key Improvements

  • Enhanced Node Disk Manager (NDM) to discover and create Block Device custom resources for device mapper (dm) devices like loopback devices, LUKS-encrypted devices, and LVM devices. Prior to this release, if users had to use dm devices, they would have to manually create the corresponding Block Device CRs.
  • Enhanced the NDM block device tagging feature to reserve a block device from being assigned to the Local PV (device) or cStor data engines. The block device can be reserved by specifying an empty value for the block device tag (see the sketch after this list).
  • Added support to install ZFS Local PV using Kustomize. Also updated the default upgrade strategy for the ZFS CSI driver to run in parallel instead of rolling upgrades.
  • Several enhancements and fixes from the Community towards OpenEBS documentation, build and release scripts from the Hacktoberfest participation.
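
As referenced in the block device tagging item above, reserving a device amounts to labelling its BlockDevice resource. A sketch, assuming the openebs.io/block-device-tag label key used by NDM's tagging feature and a placeholder device name:

kubectl label blockdevice blockdevice-0123456789abcdef -n openebs openebs.io/block-device-tag=""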

Key Bug Fixes

  • Fixed an issue with the upgrade of cStor and Jiva volumes in cases where volumes are provisioned without enabling the monitoring sidecar.
  • Fixed an issue with the upgrade that would always set the image registry as quay.io/openebs when the upgrade job doesn't specify the registry location. The upgrade job will now fall back to the registry that is already configured on the existing pods.

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided.
  • Significant work is underway to make it easier to install only the components that the users finally decide to use for their workloads. These features will allow users to run different flavors of OpenEBS in K8s clusters optimized for the workloads they intend to run in the cluster. This can be achieved in the current version using a customized helm values file or using a modified Kubernetes manifest file.
  • Repositories are being re-factored to help simplify the contributor onboarding process. For instance, with this release, the dynamic-localpv-provisioner has been moved from openebs/maya to its own repository as openebs/dynamic-localpv-provisioner. This refactoring of the source code will also help with the simplified build and faster release process per data engine.
  • Automation of further Day 2 operations, like setting the cStor target IP after the cStor volume has been restored from a backup (thanks to @zlymeda), automatically detecting a node deletion from the cluster, and re-balancing the volume replicas onto the next available node.
  • Keeping the OpenEBS generated Kubernetes custom resources in sync with the upstream Kubernetes versions, like moving CRDs from v1beta1 to v1

Show your Support

Thank you @shock0572 (ExactLab), @yydzhou (ByteDance), @kuja53, and @darioneto for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @filip-lebiecki, @hack3r-0m, @mtzaurus, @niladrih, @Akshay-Nagle, @Aman1440, @AshishMhrzn10, @Hard-Coder05, @ItsJulian, @KaranSinghBisht, @Naveenkhasyap, @Nelias, @Shivam7-1, @ShyamGit01, @Sumindar, @Taranzz25, @archit041198, @aryanrawlani28, @codegagan, @de-sh, @harikrishnajiju, @heygroot, @hnifmaghfur, @iTechsTR, @iamrajiv, @infiniteoverflow, @invidian, @kichloo, @lambda2, @lucasqueiroz, @prakhargurunani, @prakharshreyash15, @rafael-rosseto, @sabbatum, @salonigoyal2309, @sparkingdark, @sudhinm, @trishitapingolia, @vijay5158, @vmr1532.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17 or newer is recommended for using cStor CSI drivers.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.3.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.3.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.3 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.3, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.3 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then a cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check is in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.2.0

3 years ago

Release Summary

OpenEBS v2.2 comes with a critical fix to NDM and several enhancements to cStor, ZFS Local PV and Mayastor. Here are some of the key highlights in this release:

New capabilities

  • OpenEBS ZFS Local PV adds support for Incremental Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc.

  • OpenEBS Mayastor instances now expose a gRPC API which is used to enumerate block disk devices attached to the host node, as an aid to the identification of suitable candidates for inclusion within storage Pools during configuration. This functionality is also accessible within the mayastor-client diagnostic utility. For further details on enhancements and bug fixes in Mayastor, please refer to Mayastor release notes.

Key Improvements

Key Bug Fixes

  • Fixes an issue where NDM could cause data loss by creating a partition table on an uninitialized iSCSI volume. This can happen after a node reboot due to a race condition between the NDM pod initializing and the iSCSI volume initializing, if the iSCSI volume is not fully initialized when NDM probes for device details. This issue has been observed with NDM 0.8.0 released with OpenEBS 2.0 and has been fixed in the OpenEBS 2.1.1 and OpenEBS 2.2.0 (latest) releases.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @didier-durand, @zlymeda, @avats-dev, and many more contributing via Hacktoberfest.

Show your Support

Thank you @danielsand for becoming a public reference and supporter of OpenEBS by sharing their use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17 or newer is recommended for using cStor CSI drivers.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.2.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.2.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.2 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.2, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.2 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then a cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check is in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.1.0

3 years ago

Release Summary

OpenEBS v2.1 is a developer release focused on code, tests and build refactoring along with some critical bug fixes and user enhancements. This release also introduces support for remote Backup and Restore of ZFS Local PV using OpenEBS Velero plugin.

Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS ZFS Local PV adds support for Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc.
  • OpenEBS Mayastor continues its momentum by enhancing support for Rebuild and other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.

Key Improvements:

  • Enhanced the Velero Plugin so that a Backup of one volume and a Restore of another volume can run simultaneously.
  • Added a validation to restrict OpenEBS Namespace deletion if there are pools or volumes configured. The validation is added via Kubernetes admission webhook.
  • Added support to restrict creation of cStor Pools (via CSPC) on Block Devices that are tagged (or reserved).
  • Enhanced NDM to automatically create a block device tag on the discovered device if the device matches a certain path name pattern.

Key Bug Fixes:

  • Fixes an issue where local backup and restore of cStor volumes provisioned via CSI were failing.
  • Fixes an issue where a cStor CSI Volume remount would fail intermittently when the application pod is restarted or after recovering from a network loss between the application pod and the target node.
  • Fixes an issue where BDC cleanup by NDM would cause a panic if the bound BD was manually deleted.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @rohansadale, @AJEETRAI707, @smijolovic, @jlcox1970

Thanks, also to @sonasingh46 for being the 2.1 release coordinator.

Show your Support

Thank you @SeMeKh (Hamravesh) and @tobg (TOBG Services Ltd) for becoming public references and supporters of OpenEBS by sharing their use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17 or newer is recommended for using cStor CSI drivers.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.1.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.1.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.1 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.1, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.1 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then a cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check is in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.0.0

3 years ago

Release Summary

OpenEBS has reached a significant milestone with v2.0 with support for cStor CSI drivers graduating to beta, improved NDM capabilities to manage virtual and partitioned block devices, and much more.

OpenEBS v2.0 includes the following Storage Engines that are currently deployed in production by various organizations:

  • Jiva
  • cStor (CSI Driver available from 2.0 onwards)
  • ZFS Local PV
  • Dynamic Local PV hostpath
  • Dynamic Local PV Block

OpenEBS v2.0 also includes the following Storage Engines, going through alpha testing at a few organizations. Please get in touch with us, if you would like to participate in the alpha testing of these engines.

  • Mayastor
  • Dynamic Local PV - Rawfile

For a change summary since v1.12, please refer to Release 2.0 Change Summary.


Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS cStor provisioning with the new schema and CSI drivers has been declared beta. For detailed instructions on how to get started with the new cStor Operators, please refer to the Quickstart guide. The new version of the cStor schema addresses user feedback in terms of ease of use for cStor provisioning, as well as making it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim (SPC) pools will continue to function as-is and there is support available to migrate from the SPC schema to the new schema. In addition to supporting all the features of SPC based cStor pools, the CSPC (cStor Storage Pool Cluster) enables the following (a sample CSPC is sketched after this list):
    • cStor Pool expansion by adding block devices to CSPC YAML
    • Replace a block device used within cStor pool via editing the CSPC YAML
    • Scale-up or down the cStor volume replicas via editing cStor Volume Config YAML
    • Expand Volume by updating the PVC YAML
  • Significant improvements to NDM in supporting (and better handling) of partitions and virtual block devices across reboots.
  • OpenEBS Mayastor continues its momentum by adding support for Rebuild, NVMe-oF Support, enhanced supportability, and several other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.
  • Continuing the focus on additional integration and e2e tests for all engines, more documentation.
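
A minimal sketch of a CSPC manifest follows, assuming the cstor.openebs.io/v1 schema described in the Quickstart linked above; the node hostname and block device name are placeholders.

  apiVersion: cstor.openebs.io/v1
  kind: CStorPoolCluster
  metadata:
    name: cstor-pool-cluster
    namespace: openebs
  spec:
    pools:
      - nodeSelector:
          kubernetes.io/hostname: "worker-node-1"                 # placeholder node
        dataRaidGroups:
          - blockDevices:
              - blockDeviceName: "blockdevice-0123456789abcdef"   # placeholder block device
        poolConfig:
          dataRaidGroupType: "stripe"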

Key Improvements:

  • Enhanced the Jiva target controller to track the internal snapshots and reclaim the space.
  • Support for enabling/disabling the leader election mechanism, which involves interacting with the kube-apiserver. In deployments where provisioners are configured with a single replica, leader election can be disabled. It is enabled by default. The configuration is controlled via the environment variable "LEADER_ELECTION" in the operator YAML or via the helm value (enableLeaderElection); see the snippet after this list.
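
A sketch of disabling leader election via the environment variable mentioned above, added to the container spec of the relevant provisioner Deployment (the "false" value is an assumption; check the operator YAML or chart for the accepted values):

        env:
        - name: LEADER_ELECTION
          value: "false"    # assumed value to disable leader election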

Key Bug Fixes:

  • Fixes an issue where NDM would fail to wipe the filesystem of the released sparse block device.
  • Fixes an issue with the mounting of XFS cloned volume.
  • Fixes an issue where a PV with fsType: ZFS would fail if the capacity is not a multiple of the record size specified in the StorageClass.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @silentred, @whoan, @sonicaj, @dhoard, @akin-ozer, @alexppg, @FestivalBobcats

Thanks, also to @akhilerm for being the 2.0 release coordinator.

Show your Support

Thank you @nd2014-public (D-Rating), @baskinsy (Stratus5), and @evertmulder (KPN) for becoming public references and supporters of OpenEBS by sharing their use cases on ADOPTERS.md

A very special thanks to @yhrenlee for sharing the story in DoK Community, about how OpenEBS helped Arista with migrating their services to Kubernetes.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17 or newer is recommended for using cStor CSI drivers.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.0.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.0.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.0 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.0, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.0 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC); a minimal example is sketched after this list. Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
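
For the CSPC approach recommended in the first item above, here is a minimal sketch of a striped cStorPoolCluster. The node hostname and block device name are placeholders; replace them with values from kubectl get nodes --show-labels and kubectl get bd -n openebs.

cat <<EOF | kubectl apply -f -
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-cluster
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "worker-node-1"          # placeholder node name
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-example-id"  # placeholder block device
      poolConfig:
        dataRaidGroupType: "stripe"
EOF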

v1.12.0

3 years ago

Release Summary

The theme for OpenEBS v1.12 continues to be polishing the OpenEBS storage engines Mayastor and the cStor CSI Driver, and preparing them for Beta. A lot of the contributors' effort in this release went into evaluating additional CI/CD and testing frameworks.

For a detailed change summary, please refer to Release 1.12 Change Summary.

Before getting into the release summary,


Important Announcement: OpenEBS Community Slack channels have migrated to the Kubernetes Slack Workspace as of Jun 22nd

The OpenEBS channels on Kubernetes Slack are:

More details about this migration can be found here.


Here are some of the key highlights in this release:

Breaking Change/Deprecation

  • Important Note for OpenEBS Helm Users: The repository https://github.com/helm/charts is being deprecated. All the charts are now being moved to Helm Hub or to project-specific repositories. OpenEBS charts have migrated to the openebs/charts repository. Starting with 1.12.0, OpenEBS can be installed via the following helm commands:
    helm repo add openebs https://openebs.github.io/charts
    helm repo update
    helm install --namespace openebs --name openebs openebs/openebs
    

Key Improvements:

  • [Build] Refactor and add multi-arch image generation support on the NDM repo. node-disk-manager#428 (@xUnholy)
  • [Install] Support specifying the webhook validation policy to fail/ignore via ENV (ADMISSION_WEBHOOK_FAILURE_POLICY) on admission server deployment. maya#1726 (@prateekpandey14)
  • [NDM] Enhanced NDM Operator to attach events to BDC CR while processing BDC operations. node-disk-manager#425 (@rahulchheda)
  • [ZFS Local PV] Add support for btrfs as an additional FS Type. zfs-localpv#170 (@pawanpraka1, @mikroskeem)
  • [ZFS Local PV] Add support for a shared mount on ZFS Volume to support RWX use cases; a StorageClass sketch follows this list. zfs-localpv#164 (@pawanpraka1, @stevefan1999-personal)
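
To illustrate the shared mount support noted in the last item above, here is a minimal StorageClass sketch for ZFS Local PV. The pool name zfspv-pool is an assumption and must match a ZFS pool that already exists on your nodes.

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-shared
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # assumed ZFS pool on the node
  fstype: "zfs"            # provisions a ZFS dataset
  shared: "yes"            # allows multiple pods on the same node to mount the volume
EOF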

Key Bug Fixes:

  • [Provisioners] Fixes a panic on maya-apiserver caused by PVC names longer than 63 characters. maya#1720 (@kmova @stuartpb)
  • [Upgrade] Fixes an issue where the upgrade was failing some pre-flight checks when the maya-apiserver was deployed in HA mode. maya#1720 (@shubham14bajpai @utkudarilmaz)
  • [Upgrade] Fixes an issue where the upgrade was failing if the deployment rollout was taking longer than 5 min. maya#1719 (@shubham14bajpai @sgielen)

Alpha and Beta Engine updates

  • OpenEBS Mayastor continues its momentum by adding support for rebuild and NVMe-oF, enhancing supportability, and delivering several other fixes. For detailed instructions on how to get started with Mayastor, please refer to this Quickstart guide.
  • OpenEBS ZFS Local PV has been declared as beta. For detailed instructions on how to get started with ZFS Local PV please refer to the Quick start guide.
  • OpenEBS cStor CSI support is marked as feature-complete and further releases will focus on additional integration and e2e tests. For detailed instructions on getting started with CSI driver for cStor, please refer to the Quick start guide

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @mikroskeem, @stuartpb, @utkudarilmaz

Thanks also to @mittachaitu for being the 1.12 release coordinator.

Announcing new Maintainers/Reviewers

With gratitude and joy, we welcome the following members to the OpenEBS organization as reviewers for their continued contributions and commitment to help the OpenEBS project and community.

  • "Mehran Kholdi",@SeMeKh,Hamravesh #control-plane-maintainers
  • "Michael Fornaro",@xUnholy,Independent-Raspbernetes #control-plane-maintainers
  • "Peeyush Gupta",@Pensu,DigitalOcean #control-plane-maintainers

Check out our full list of maintainers and reviewers here. Our Governance policy is here.

Show your Support

Thank you @dstathos and @mikroskeem for becoming public references and supporters of OpenEBS by sharing your use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/1.12.0/openebs-operator.yaml

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.12.1

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.12 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.12, either one at a time or several at once.
  • Upgrade cStor Pools to 1.12 and their associated volumes, either one at a time or several at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC); a minimal sketch is shown after this list. The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend users to try out the new schema on greenfield clusters to provide feedback. Get started with these instructions.
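
To illustrate the recommended SPC approach from the first item above, here is a minimal sketch of a striped pool. The block device names are placeholders; pick raw, unclaimed devices from kubectl get blockdevices -n openebs, one per node.

cat <<EOF | kubectl apply -f -
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
      # placeholder names; list one raw, unclaimed block device per node
      - blockdevice-placeholder-node1
      - blockdevice-placeholder-node2
      - blockdevice-placeholder-node3
EOF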

v1.11.0

3 years ago

Release Summary

The theme for OpenEBS v1.11 has been polishing the OpenEBS storage engines Mayastor, ZFS Local PV, and the cStor CSI Driver, and preparing them for Beta. This release also includes several supportability enhancements and fixes for the existing engines.

For a detailed change summary, please refer to Release 1.11 Change Summary.

Before getting into the release details,

Important Announcement: OpenEBS Community Slack channels will be migrated to Kubernetes Slack Workspace by Jun 22nd

In the interest of neutral governance, the OpenEBS community support via slack is being migrated from openebs-community slack (a free version of slack with limited support for message retention) to the following OpenEBS channels on Kubernetes Slack owned by CNCF.

The #openebs-users channel will be marked as read-only by June 22nd.

More details about this migration can be found here.

Given that openebs-community slack has been a neutral home for many vendors that are offering free and commercial support/products on top of OpenEBS, the workspace will continue to live on. These vendors are requested to create their own public channels and the information about those channels can be communicated to users via the OpenEBS website by raising an issue/pr to https://github.com/openebs/website.


Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS Mayastor continues its momentum by adding support for rebuild and NVMe-oF, enhancing supportability, and delivering several other fixes. For detailed instructions on how to get started with Mayastor, please refer to this Quickstart guide.
  • OpenEBS ZFS Local PV has been declared as beta. For detailed instructions on how to get started with ZFS Local PV please refer to the Quick start guide.
  • OpenEBS cStor CSI support is marked as feature-complete and further releases will focus on additional integration and e2e tests.

Key Improvements:

  • Enhanced helm charts to make NDM filterconfigs.state configurable. charts#107 (@fukuta-tatsuya-intec)
  • Added configuration to exclude rbd devices from being used for creating Block Devices. charts#111 (@GTB3NW)
  • Added support to display FSType information in Block Devices. node-disk-manager#438 (@harshthakur9030)
  • Add support to mount ZFS datasets using legacy mount property to allow for multiple mounts on a single node. zfs-localpv#151 (@pawanpraka1)
  • Add additional automation tests for validating ZFS Local PV and cStor Backup/Restore. (@w3aman @shashank855)

Key Bug Fixes:

  • Fixes an issue where volumes meant to be filesystem datasets got created as zvols due to incorrect casing of a StorageClass parameter. The fix makes the StorageClass parameters case-insensitive; a StorageClass sketch follows this list. zfs-localpv#144 (@cruwe)
  • Fixes an issue where the read-only option was not being set on ZFS volumes. zfs-localpv#137 (@pawanpraka1)
  • Fixes an issue where incorrect pool name or other parameters in Storage Class would result in stale ZFS Volume CRs being created. zfs-localpv#121 zfs-localpv#145 (@pawanpraka1)
  • Fixes an issue where the user-configured ENV variable MAX_CHAIN_LENGTH was not being read by Jiva. jiva#309 (@payes)
  • Fixes an issue where cStor Pool was being deleted forcefully before the replicas on cStor Pool were deleted. This can cause data loss in situations where SPCs are incorrectly edited by the user, and a cStor Pool deletion is attempted. maya#1710 (@mittachaitu)
  • Fixes an issue where a failure to delete the cStor Pool on the first attempt will leave an orphaned cStor custom resource (CSP) in the cluster. maya#1595 (@mittachaitu)
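
To illustrate the dataset-versus-zvol distinction behind the first fix above, here is a minimal ZFS Local PV StorageClass sketch. The pool name zfspv-pool is an assumption; per the fix, the parameter values are now matched case-insensitively, so fstype: "ZFS" behaves the same as fstype: "zfs".

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-dataset
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # assumed ZFS pool on the node
  fstype: "zfs"            # "zfs" creates a filesystem dataset; ext4/xfs create a zvol
EOF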

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @cruwe, @sgielen, @ShubhamB99, @GTB3NW, @Icedroid, @fukuta-tatsuya-intec, @mtmn, @nrusinko, @radicand, @zadunn, @xUnholy

We are also delighted to have @harshthakur9030, @semekh, @vaniisgh contributing to OpenEBS via the CNCF Community Bridge Program.

Thanks also to @shubham14bajpai for being the 1.11 release coordinator.

Show your Support

Thank you @zadunn (Optoro), @meyskens, @stevefan1999-personal, @darias1986 (DISID) for becoming public references and supporters of OpenEBS by sharing your use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/1.11.0/openebs-operator.yaml

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.11.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.11 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.11, either one at a time or several at once.
  • Upgrade cStor Pools to 1.11 and their associated volumes, either one at a time or several at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend users to try out the new schema on greenfield clusters to provide feedback. Get started with these instructions.

v1.10.0

3 years ago

Release Summary

The theme for OpenEBS v1.10 has been polishing the new OpenEBS storage engines Mayastor and ZFS Local PV, and preparing them for Beta. This release also includes several supportability enhancements and fixes for the existing engines.

For a detailed change summary, please refer to Release 1.10 Change Summary.

Here are some of the key highlights in this release:

New capabilities:

  • The first release of OpenEBS Mayastor, developed using an NVMe-based architecture and targeted at addressing the performance requirements of IO-intensive workloads, is ready for alpha testing. For detailed instructions on how to get started with Mayastor, please refer to this Quickstart guide.
  • Enhancements to OpenEBS ZFS Local PV that includes resolving issues found during scale testing, fully functional CSI driver, and sample Grafana Dashboard for monitoring metrics on ZFS Volumes and Pools. For detailed instructions on how to get started with ZFS Local PV please refer to the Quick start guide.

Key Improvements:

Key Bug Fixes:

Shout outs!

MANY THANKS to everyone helping keep the OpenEBS Community Slack going, and a very special thanks to the following people who joined us on GitHub for this release:

  • As first-time contributors: @AntonioCarlini, @blaisedias, @chriswldenyer, @cjones1024, @filippobosi, @gahag, @GlennBullingham, @jamie-0, @jonathan-teh, @paulyoong, @tiagolobocastro, @tjoshum, @yannis218
  • As users finding issues and testing the fixes: @chornlgscout, @cortopy, @freym, @Icedroid, @ppodolsky, @spencergilbert, @surajssd, @erbiao3k, @sgielen, @vishal-biyani, @willzhang, @xUnholy
  • As contributors to code, tests, and docs: @akhilerm, @gila, @gprasath, @IsAmrish, @jkryl, @kmova, @ksatchit, @mittachaitu, @muratkars, @mynktl, @obeyler, @nsathyaseelan, @pawanpraka1, @payes, @prateekpandey14, @ranjithwingrider, @slalwani97, @somesh2905, @sonasingh46, @utkarshmani1997, @vishnuitta, @w3aman

Show your Support

You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Thank you @aretakisv and @alexjmbarton for adding your OpenEBS usage stories to ADOPTERS.md

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.10.0.yaml

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.10.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.10 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.10, either one at a time or several at once.
  • Upgrade cStor Pools to 1.10 and their associated volumes, either one at a time or several at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact us via:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend users to try out the new schema on greenfield clusters to provide feedback. Get started with these instructions: https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs