OpenEBS Versions

Most popular & widely deployed Open Source Container Native Storage platform for Stateful Persistent Applications on Kubernetes.

v3.2.0

2 years ago

Release Summary

🎉 🎉 🎉 OpenEBS 3.2 is another maintenance release focused on code, test, and build refactoring, along with some critical bug fixes and user enhancements. This release includes fixes for user-reported critical bugs as well as fixes and enhancements to improve E2E test coverage.


Deprecation Notice: The Jiva and cStor out-of-tree external provisioners are now deprecated in favor of the corresponding CSI Drivers. The out-of-tree provisioners for Jiva and cStor will stop working from Kubernetes 1.22 onward, as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend that you plan to migrate your volumes to cStor CSI or Jiva CSI as early as possible.

If you have any questions or need help with the migration, please reach out to us on our Kubernetes Community Slack channel, #openebs.


Upgrade and Backward Incompatibilities

Please review this list prior to deciding to upgrade:

  • Kubernetes 1.18 or higher is recommended, as this release uses Kubernetes features that are not compatible with older releases. Some engines may require an even higher Kubernetes version because the CSI drivers have been upgraded to their latest versions. For example, Kubernetes 1.19.12 or higher is recommended for using Rawfile Local PV.
  • OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like cstor-pool-arm64:x.y.z should be replaced with the corresponding multi-arch image cstor-pool:x.y.z.
  • The non-CSI provisioners for cStor and Jiva are not included by default with the 3.0 helm chart or operator.yaml. You can still continue to use them. The older provisioners are released as v2.12.2 at the moment, and only patch releases (to fix severe security vulnerabilities) will be supported going forward. If you need help deciding whether to upgrade or migrate, please reach out to us on our Kubernetes Community Slack channel, #openebs.
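As an illustration of the multi-arch image change, an arch-specific image reference in your operator.yaml or helm values would be swapped like this (the repository prefix and tag shown here are representative, not taken from this release):

```yaml
# Before: arch-specific image (deprecated)
# image: openebs/cstor-pool-arm64:2.12.2
# After: the corresponding multi-arch image, same tag
image: openebs/cstor-pool:2.12.2
```

The multi-arch manifest lets the container runtime pull the right architecture (amd64, arm64, etc.) automatically, so a single image name works across heterogeneous clusters.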

Component versions

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v3.2.0 is as follows:

Change Summary

A detailed Changelog is available under the component repositories listed above. The focus was on closing out refactoring and maintenance-related activities, along with a few bug fixes required for some components to be declared GA or beta. Here is a quick summary of what has changed since the last release.

  • Data populator (alpha)
  • Added support for logs and log streams for Jiva upgrade jobs
  • Added manager errors for Jiva for better debuggability
  • Fixed a Jiva controller issue where the service did not select controller pods
  • Fixed an NDM issue where a disk partition could not be released after creating an LVM
  • Fixed a concurrent map write issue in cstor-csi
  • Fixed an OpenEBS security vulnerability in a GitHub Action
  • Added node selector and toleration support to the NDM exporter
  • Fixed an XFS quota issue for LVM volumes in dynamic-localpv
  • Made ZFS-LocalPV aware of pool imports
  • Fixed cstor-csi to avoid simultaneous creation and deletion of a resource
  • Enhanced LVM-LocalPV to pick a node with enough free space available when provisioning a volume
  • Honor klog logging options in dynamic-localpv

In-Progress items

Show your Support

Thank you @RytisLT (Rytis Ilciukas) for sharing your OpenEBS Adoption story.

Are you using or evaluating OpenEBS? You can help OpenEBS on its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone keeping the OpenEBS Community going. @AVRahul @Ab-hishek @Abhinandan-Purkait @IsAmrish @Pallavi-PH @ParthS007 @SeMeKh @Z0Marlin @akhilerm @anupriya0703 @avishnu @blaisedias @chriswldenyer @cjones1024 @gila @iyashu @jonathan-teh @kmova @mittachaitu @mtzaurus @mynktl @niladrih @nsathyaseelan @paulyoong @pawanpraka1 @prateekpandey14 @rajaSahil @rakeshPRaghu @satyapriyamishra222 @shovanmaity @shubham14bajpai @tiagolobocastro @vharsh @w3aman

A very special thanks to our first-time contributors to code, tests, and docs: @csschwe, @karanssj4, @reitermarkus, @MukulKolpe, @gozssky, @adamcharnock.

Documentation

https://openebs.io/docs

Install

OpenEBS can be installed via kubectl or helm. Follow the installation instructions here.

Upgrade

The upgrade instructions for various OpenEBS engines are here.

Do not upgrade if you are using the legacy cStor or Jiva provisioners. You must first migrate those volumes to the corresponding CSI Drivers. Please reach out to us for support.

Known Issues

Check our open issues uncovered through e2e and community testing.

Support

If you are having issues setting up or upgrading, you can contact:

v3.1.0

2 years ago

Release Summary

🎉 🎉 🎉 OpenEBS 3.1 is a maintenance release focused on code, test, and build refactoring, along with some critical bug fixes and user enhancements. This release includes fixes for user-reported critical bugs as well as fixes and enhancements to improve E2E test coverage.


Deprecation Notice: The Jiva and cStor out-of-tree external provisioners are now deprecated in favor of the corresponding CSI Drivers. The out-of-tree provisioners for Jiva and cStor will stop working from Kubernetes 1.22 onward, as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend that you plan to migrate your volumes to cStor CSI or Jiva CSI as early as possible.

If you have any questions or need help with the migration, please reach out to us on our Kubernetes Community Slack channel, #openebs.


Upgrade and Backward Incompatibilities

Please review this list prior to deciding to upgrade:

  • Kubernetes 1.18 or higher is recommended, as this release uses Kubernetes features that are not compatible with older releases. Some engines may require an even higher Kubernetes version because the CSI drivers have been upgraded to their latest versions. For example, Kubernetes 1.19.12 or higher is recommended for using Rawfile Local PV.
  • OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like cstor-pool-arm64:x.y.z should be replaced with the corresponding multi-arch image cstor-pool:x.y.z.
  • The non-CSI provisioners for cStor and Jiva are not included by default with the 3.0 helm chart or operator.yaml. You can still continue to use them. The older provisioners are released as v2.12.2 at the moment, and only patch releases (to fix severe security vulnerabilities) will be supported going forward. If you need help deciding whether to upgrade or migrate, please reach out to us on our Kubernetes Community Slack channel, #openebs.

Component versions

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v3.1.0 is as follows:

Change Summary

A detailed Changelog is available under the component repositories listed above. The focus was on closing out refactoring and maintenance-related activities, along with a few bug fixes required for some components to be declared GA or beta. Here is a quick summary of what has changed since the last release.

  • Added an operator to clean up stale PersistentVolumeClaims (alpha)
  • Enhanced the cStor storage engine to make it compatible with kernel ZFS
  • Added support for zstd compression in ZFS-LocalPV
  • Enhanced ZFS-LocalPV to register topologyKeys from the environment
  • Added support for child datasets in the CSI StorageCapacity feature for ZFS-LocalPV
  • Enhanced the NFS Provisioner to support changing the shared filesystem ownership and mode
  • Enhanced openebsctl to support creating a cStor Pool Cluster (CSPC) template
  • Added error propagation for Device Local PV for better debuggability
  • Allowed setting a different OPENEBS_IO_BASE_DIR (and others) in the Helm chart
  • Fixed a crash caused by a concurrent map write in cstor-csi
  • Fixed a Prometheus metrics issue for Jiva volumes
  • Added meta information on blockdevices to enable better disk selection
  • Enhanced the dynamic Device Local PV provisioner to bind a BDC to a BD using node affinity
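The zstd compression support mentioned above is chosen per StorageClass. A minimal sketch, assuming the zfs-localpv StorageClass parameter names (`poolname`, `fstype`, `compression`) and a pool named `zfspv-pool` (both the StorageClass name and the pool name are placeholders, not from this release):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-zstd        # hypothetical name
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"          # assumption: your ZPOOL name
  fstype: "zfs"
  compression: "zstd"             # the new compression option from this release
allowVolumeExpansion: true
```

Volumes provisioned from this class are created as ZFS datasets with the `compression=zstd` property set, trading a little CPU for on-disk space savings.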

Show your Support

Thank you @trathborne (Tom Rathborne) and @jggc (Jean-Gab) for sharing your OpenEBS Adoption story.

Are you using or evaluating OpenEBS? You can help OpenEBS on its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone keeping the OpenEBS Community going. @AVRahul @Ab-hishek @Abhinandan-Purkait @IsAmrish @Pallavi-PH @ParthS007 @SeMeKh @Z0Marlin @akhilerm @anupriya0703 @avishnu @blaisedias @chriswldenyer @cjones1024 @gila @iyashu @jonathan-teh @kmova @mittachaitu @mtzaurus @mynktl @niladrih @nsathyaseelan @paulyoong @pawanpraka1 @prateekpandey14 @rajaSahil @rakeshPRaghu @satyapriyamishra222 @shovanmaity @shubham14bajpai @tiagolobocastro @vharsh @w3aman

A very special thanks to our first-time contributors to code, tests, and docs: @jdkramhoft, @ianroberts, @davidkarlsen, @abhisheksinghbaghel, @vakul-gupta-flp, @shazadbrohi, @jggc.

Documentation

https://openebs.io/docs

Install

OpenEBS can be installed via kubectl or helm. Follow the installation instructions here.

Upgrade

The upgrade instructions for various OpenEBS engines are here.

Do not upgrade if you are using the legacy cStor or Jiva provisioners. You must first migrate those volumes to the corresponding CSI Drivers. Please reach out to us for support.

Known Issues

Check our open issues uncovered through e2e and community testing.

Support

If you are having issues setting up or upgrading, you can contact:

v3.0.0

2 years ago

Release Summary

🎉 🎉 🎉 OpenEBS 3.0 is the culmination of efforts to lay the foundation for easier onboarding and acceptance of community contributions, to make each of the data engine operators ready for future Kubernetes releases, and to make the various data engines easy to manage and troubleshoot. This was achieved via migration to the latest Kubernetes constructs, ease-of-use improvements, bug fixes, and, most importantly, refactoring of the control plane and E2E test suites so that each engine can be enhanced and released independently.


Deprecation Notice: The Jiva and cStor out-of-tree external provisioners will be deprecated by Dec 2021 in favor of the corresponding CSI Drivers. The out-of-tree provisioners for Jiva and cStor will stop working from Kubernetes 1.22 onward, as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend that you plan to migrate your volumes to cStor CSI or Jiva CSI as early as possible.

If you have any questions or need help with the migration, please reach out to us on our Kubernetes Community Slack channel, #openebs.


Upgrade and Backward Incompatibilities

Please review this list prior to deciding to upgrade:

  • Kubernetes 1.18 or higher is recommended, as this release uses Kubernetes features that are not compatible with older releases. Some engines may require an even higher Kubernetes version because the CSI drivers have been upgraded to their latest versions. For example, Kubernetes 1.19.12 or higher is recommended for using Rawfile Local PV.
  • OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like cstor-pool-arm64:x.y.z should be replaced with the corresponding multi-arch image cstor-pool:x.y.z.
  • The non-CSI provisioners for cStor and Jiva are not included by default with the 3.0 helm chart or operator.yaml. You can still continue to use them until Dec 2021. The older provisioners are released as v2.12.2 at the moment, and only patch releases (to fix severe security vulnerabilities) will be supported going forward. If you need help deciding whether to upgrade or migrate, please reach out to us on our Kubernetes Community Slack channel, #openebs.

Component versions

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v3.0.0 is as follows:

Change Summary

A detailed Changelog is available under the component repositories listed above. The focus was on closing out refactoring and maintenance-related activities, along with a few bug fixes required for some components to be declared GA or beta. Here is a quick summary of what has changed since the last release.

  • Added support for pushing the container images to GHCR, in addition to pushing them to DockerHub and Quay.io.
  • Renamed the branches to "develop" or "main" on all the active repositories that are accepting contributions.
  • Updated the CRD references to v1 across all components - even in the deprecated provisioners - so users can continue to use the older provisioners beyond Kubernetes 1.22 as well.
  • Updated the Kubernetes CSI driver side-cars to the latest versions compatible with Kubernetes 1.18 and higher.
  • Enhanced the iSCSI target functionality in Jiva and cStor volumes to only accept connections from one node at a time. A connection from a new node is accepted only after the previous connection is torn down.
  • Enhanced Local PV hostpath with a feature to enforce capacity limits using XFS quotas, for volumes provisioned on an XFS filesystem.
  • Enhanced the Jiva specs (Jiva Volume Policy) to remove unused fields and make most fields with default configuration optional. Also fixed issues around specifying pod affinity and anti-affinity policies on the Jiva replica and target.
  • Enhanced the NDM operator and helm chart to include the NDM exporter.
  • Enhanced NDM to detect filesystem and size changes and update the block device resource (under a feature gate in this release).
  • The dashboard and CLI have enhanced support to display details about cStor, Jiva, ZFS, LVM, and Device Local PV.
  • Enhanced the OpenEBS helm chart to easily enable or disable a data engine of choice. The 3.0 helm chart stops installing the legacy cStor and Jiva provisioners. If you would like to continue using them, you have to set the flag "legacy.enabled=true".
  • The OpenEBS helm chart includes sample Kyverno policies that can be used as an option for PodSecurityPolicies (PSP) replacement.
  • The new revamped website for https://openebs.io is live.
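The XFS quota enforcement for Local PV hostpath mentioned above is configured through the StorageClass. A sketch, assuming the `cas.openebs.io/config` annotation format used by the hostpath provisioner and an `XFSQuota` policy name as documented in the dynamic-localpv-provisioner repository (the StorageClass name and base path are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-xfs       # hypothetical name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local"  # must reside on an XFS filesystem
      - name: XFSQuota               # enforce the PVC's requested capacity
        enabled: "true"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

With the quota enabled, a pod writing beyond the PVC's requested capacity receives ENOSPC instead of silently consuming the whole host disk.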

Show your Support

Thank you @turowicz (Surveily), @WillyRL (Teknologi Anak Rantau Indonesia), @Somsubhra1, and @t3hmrman for sharing your OpenEBS Adoption story.

Are you using or evaluating OpenEBS? You can help OpenEBS on its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone keeping the OpenEBS Community going. @AVRahul @Ab-hishek @Abhinandan-Purkait @IsAmrish @Pallavi-PH @ParthS007 @SeMeKh @Z0Marlin @akhilerm @anupriya0703 @avishnu @blaisedias @chriswldenyer @cjones1024 @gila @iyashu @jonathan-teh @kmova @mittachaitu @mtzaurus @mynktl @niladrih @nsathyaseelan @paulyoong @pawanpraka1 @prateekpandey14 @rajaSahil @rakeshPRaghu @satyapriyamishra222 @shovanmaity @shubham14bajpai @tiagolobocastro @vharsh @w3aman

A very special thanks to our first-time contributors to code, tests, and docs: @burntcarrot, @aamirqs, @sbidoul, @dsavitskiy, @almas33, @liuminjian, @zeenix, @Nivedita-coder, @fengye87, @Abhishek-kumar09, @Amishakumari544, @eripa, @Quarky9, @tathougies, @omeiirr, @g-linville, @rweilg

Documentation

https://openebs.io/docs

Install

OpenEBS can be installed via kubectl or helm. Follow the installation instructions here.

Upgrade

The upgrade instructions for various OpenEBS engines are here.

Do not upgrade if you are using the legacy cStor or Jiva provisioners. You must first migrate those volumes to the corresponding CSI Drivers. Please reach out to us for support.

Known Issues

Check our open issues uncovered through e2e and community testing.

Support

If you are having issues setting up or upgrading, you can contact:

v2.11.0

2 years ago

Release Summary

OpenEBS v2.11 is another maintenance release on the road to 3.0, primarily focused on enhancing the E2E tests, build and release workflows, and documentation. This release also includes enhancements to improve the user experience and fixes for bugs reported by users and E2E tools. There has also been significant progress on the alpha features.


Deprecation Notice: The Jiva and cStor out-of-tree external provisioners will be deprecated by Dec 2021 in favor of the corresponding CSI Drivers. The out-of-tree provisioners for Jiva and cStor will stop working from Kubernetes 1.22 onward, as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend that you plan to migrate your volumes to cStor CSI or Jiva CSI as early as possible.

If you have any questions or need help with the migration, please reach out to us on our Kubernetes Community Slack channel, #openebs.


Key Improvements

  • Enhanced CLI to provide additional information about OpenEBS storage components like:
    • Block devices managed by OpenEBS (kubectl openebs get bd)
    • Jiva Volumes
    • LVM Local PV Volumes
    • ZFS Local PV Volumes
  • Added a new stateful workload dashboard to the Monitoring helm chart to display the CPU, RAM, and filesystem stats of a given Pod. This dashboard currently supports fetching details for LVM Local PV. In addition, new alert rules related to PVC status are supported.
  • Enhanced LVM Local PV snapshot support by allowing users to configure the size reserved for LVM snapshots. By default, the size reserved for a snapshot equals the size of the volume. Where snapshots are created for backup purposes, they may not require the entire space; this feature helps in creating snapshots on VGs that don't have enough free space to reserve full capacity for snapshots.
  • Enhanced the way custom topology keys can be specified for LVM Local PV. Previously, the LVM driver would load the topology keys from node labels and cache them; if someone modified the labels and forgot to restart the driver pods, volume scheduling could be impacted. With this enhancement, users specify the topology key via an ENV variable, so the current key is always known, and changing it requires an ENV modification that forces a restart of all the drivers.
  • NFS Provisioner has been updated with several new features like:
    • Ability to configure the LeaseTime and GraceTime for the NFS server to tune restart times
    • Added a Prometheus metrics endpoint to report volume creation and failure events
    • Added a configuration option that lets users specify the GID to set on the NFS server so that non-root applications can access the NFS share
    • Allow specifying a namespace other than the provisioner namespace for creating NFS volume-related objects
    • Allow specifying the node affinity for the NFS server deployment
  • Rawfile Local PV has been enhanced to support xfs filesystem.
  • Enhanced the Jiva and cStor CSI drivers to handle a split-brain condition that could cause the kubelet to attach a volume on a new node while it was still mounted on a disconnected node. The CSI drivers now allow an iSCSI login connection from only one node at any given time.
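The configurable snapshot reservation for LVM Local PV described above is set on the VolumeSnapshotClass. A minimal sketch, assuming the lvm-localpv driver name and a `snapSize` parameter as documented in the lvm-localpv repository (the class name and the 50% value are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: lvmpv-snapclass           # hypothetical name
driver: local.csi.openebs.io
deletionPolicy: Delete
parameters:
  snapSize: "50%"                 # reserve only half the origin volume's size
```

Reserving less than 100% lets a snapshot succeed on a VG that lacks space for a full-size reservation, at the risk of the snapshot becoming invalid if more than the reserved amount changes on the origin volume.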

Key Bug Fixes

  • Fixed an issue where the Jiva volume replica STS kept crashing due to a change in the cluster domain and failed attempts to access the controller.
  • Fixed an issue in Jiva volumes that was causing log flooding while fetching volume status using the Service DNS. Switched to using the controller IP.
  • Fixed an issue in ZFS Local Volumes that was causing an intermittent crash of the controller pod due to erroneously accessing a variable.
  • Fixed an issue in Device Local PV causing a crash due to a race condition between creating a partition and clearing a partition.
  • Several usability fixes to documentation and helm charts for various engines.

Backward Incompatibilities

  • Kubernetes 1.18 or higher is recommended, as this release uses Kubernetes features that are not compatible with older releases.
  • Kubernetes 1.19.12 or higher is recommended for using Rawfile Local PV.
  • OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like cstor-pool-arm64:x.y.z should be replaced with the corresponding multi-arch image cstor-pool:x.y.z.

Component versions

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v2.11.0 is as follows:

Other notable updates

  • An E2E CI dashboard is being developed to show the status of the pipelines run on the various engines. (https://openebs.ci)
  • The OpenEBS Website and Documentation sites are being redesigned for a new look and feel - in preparation for 3.0 release. Preview link.

Show your Support

Thank you @survivant (Jerabi Inc.) for sharing your OpenEBS Adoption story.

Are you using or evaluating OpenEBS? You can help OpenEBS on its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone keeping the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @RolandMa1986, @hrenard, @huangfangfeng

Documentation

https://docs.openebs.io/

Install

OpenEBS can be installed via kubectl or helm3. Follow the installation instructions here.

Upgrade

The upgrade instructions for the various OpenEBS engines are here.

Known Issues

Check our open issues uncovered through e2e and community testing.

Support

If you are having issues setting up or upgrading, you can contact:

v2.10.0

2 years ago

Release Summary

OpenEBS v2.10 is another maintenance release on the road to 3.0, primarily focused on enhancing the E2E tests, build and release workflows, and documentation. This release also includes enhancements to improve the user experience and fixes for bugs reported by users and E2E tools. There has also been significant progress on the alpha features.


Deprecation Notice: The Jiva and cStor out-of-tree external provisioners will be deprecated by Dec 2021 in favor of the corresponding CSI Drivers. The out-of-tree provisioners for Jiva and cStor will stop working from Kubernetes 1.22 onward, as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend that you plan to migrate your volumes to cStor CSI or Jiva CSI as early as possible.

If you have any questions or need help with the migration, please reach out to us on our Kubernetes Community Slack channel, #openebs.


New Capabilities

  • kubectl plugin for openebs to help manage OpenEBS components. This release includes support for displaying details about:
    • cStor Pools and volumes,
    • Jiva Volumes.
  • OpenEBS monitoring add-on: a set of Grafana dashboards and Prometheus alerts for OpenEBS, packaged as a helm chart. This release includes support for:
    • cStor overview dashboard
    • cStor Pools, Replicas and Volumes Dashboard
    • Jiva Volume Dashboard
    • Alerts for Volume crossing capacity threshold

A very special thanks to @cncf and 2021 LFX Mentees @ParthS007, @rahul799 for contributing to the above features!!

Key Improvements

Key Bug Fixes

  • [Local PV] Fixed an issue in CSI Controllers of LVM Local PV and Device Local PV that could potentially cause stale Volume custom resources to be created in cases where PVC gets deleted prior to completion of the create volume request. https://github.com/openebs/lib-csi/pull/11
  • [Dynamic NFS] Fixed an issue with Liveness probe in the NFS provisioner. https://github.com/openebs/dynamic-nfs-provisioner/pull/41
  • Several fixes to docs were also included in this release.
  • OpenEBS participated in the CNCF BugBash program at KubeCon EU 2021 and received more than 90 PRs. This release includes several PRs from the program that were accepted after the 2.9 release.

Backward Incompatibilities

  • Kubernetes 1.18 or higher is recommended, as this release uses Kubernetes features that are not compatible with older releases.
  • OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like cstor-pool-arm64:x.y.z should be replaced with the corresponding multi-arch image cstor-pool:x.y.z.

Component versions

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v2.10.0 is as follows:

Other notable updates

  • An E2E CI dashboard is being developed to show the status of the pipelines run on the various engines. (https://openebs.ci)
  • The OpenEBS Website and Documentation sites are being redesigned for a new look and feel - in preparation for 3.0 release.

Show your Support

Are you using or evaluating OpenEBS? You can help OpenEBS on its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone keeping the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @Pallavi-PH, @sreeharimohan, @Atharex, @rakeshPRaghu, @Sanjay1611 @pankaj892

Documentation

https://docs.openebs.io/

Install

OpenEBS can be installed via kubectl or helm3. Follow the installation instructions here.

Upgrade

The upgrade instructions for the various OpenEBS engines are here.

Known Issues

Check our open issues uncovered through e2e and community testing.

Support

If you are having issues setting up or upgrading, you can contact:

v2.9.0

2 years ago

Release Summary

OpenEBS v2.9 is another maintenance release on the road to 3.0, primarily focused on enhancing the E2E tests and build/release workflows. This release includes fixes for user-reported critical bugs as well as fixes and enhancements to improve E2E test coverage. There has also been significant progress on the alpha features.

Key Improvements

  • Enhanced ZFS Local PV to use a custom node label called openebs.io/nodeid to set the node affinity for the provisioned volume. By default, the value will be the same as kubernetes.io/hostname. Using a custom label like this will help in quickly migrating the volumes to a new node in cases where a node fails and the user needs to move the underlying disks to a new node in the cluster. After moving the disks, the user can set the openebs.io/nodeid with the value used in the previous node. (https://github.com/openebs/zfs-localpv/issues/304). You can read more about this feature here.
  • Enhanced cStor Velero plugin to allow users to specify a custom timeout for completing snapshot operations. The timeout can be configured via the restApiTimeout field in VolumeSnapshotLocation. See example. (https://github.com/openebs/velero-plugin/issues/148)
  • Added helm chart for LVM Local PV.
  • Enhanced LVM Local PV to allow users to specify a pattern string of volume groups from which LVM Local PV should be provisioned. This feature will help in cases where a node can have multiple volume groups or volume group names across the cluster have to be unique. (https://github.com/openebs/lvm-localpv/pull/28)
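The custom snapshot timeout for the cStor Velero plugin described above goes into the VolumeSnapshotLocation. A sketch, assuming the velero-plugin's `openebs.io/cstor-blockstore` provider name; the location name, namespace value, and the `1m` timeout are illustrative:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: cstor-snaplocation        # hypothetical name
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    namespace: openebs            # assumption: namespace where cStor runs
    restApiTimeout: "1m"          # the new per-operation timeout from this release
```

A longer `restApiTimeout` is useful when large volumes make snapshot REST calls exceed the plugin's default timeout.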

Key Bug Fixes

Backward Incompatibilities

  • Kubernetes 1.18 or higher is recommended, as this release uses Kubernetes features that are not compatible with older releases.
  • OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like cstor-pool-arm64:x.y.z should be replaced with the corresponding multi-arch image cstor-pool:x.y.z.

Component versions

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v2.9.0 is as follows:

Alpha Feature Updates

  • Dynamic NFS Provisioner
    • Multi-arch container images with support for amd64, arm64, arm, and ppc64le
    • Helm chart
  • CSI Driver for Local PV Partitions
    • Support Creation and Deletion of a Local PV - by creating a partition of the requested capacity on local block devices.
  • openebsctl
    • Refactored the CLI to support the APP VERB NOUN format. Example: kubectl openebs describe volume [cstor-pv-name]
    • Added support for kubectl openebs get pools
  • Monitoring-addon
    • Helm chart for setting up Prometheus operator with support for monitoring cStor pools
    • The sample dashboards will be moved to this add-on in the upcoming releases.

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing the Storage TAG review, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes made in uZFS to OpenZFS.
  • Migrated the CI pipelines from Travis to GitHub actions. (https://github.com/openebs/openebs/issues/3352)

Show your Support

Thank you @weizenberg from Lannister Investments LTD for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md.

Are you using or evaluating OpenEBS? You can help OpenEBS on its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone keeping the OpenEBS Community going.

We are excited to welcome our new maintainers Sjors Gielen for the cStor engine and Yashpal for Local PV engines.

A very special thanks to our first-time contributors to code, tests, and docs: @jj-2020, @abhiTamrakar, @ParthS007, @Abhinandan-Purkait, @JanKoehnlein, @soniasingla, @rahulgrover99, @nisarg1499, @asquare14, @rajaSahil, @arcolife, @satyapriyamishra222, @rahul799, @is-ashish

Documentation

https://docs.openebs.io/

Install

OpenEBS can be installed via kubectl or helm3. Follow the installation instructions here.

Upgrade

The upgrade instructions for the various OpenEBS engines are here.

Known Issues

Check our open issues uncovered through e2e and community testing.

Support

If you are having issues setting up or upgrading, you can contact:

v2.8.0

3 years ago

Release Summary

OpenEBS v2.8 is another maintenance release before moving towards 3.0, and includes fixes and enhancements geared towards migrating non-CSI volumes to CSI, along with improvements to E2E. This release also includes some key user-requested bug fixes and enhancements.


Important Announcement: KubeCon + CloudNativeCon Europe 2021 will take place May 4 - 7, 2021! Meet the OpenEBS maintainers and end-users to learn more about OpenEBS Roadmap, implementation details, best practices, and more. RSVP to one of the following events:


Component versions

The latest release versions of each of the engines are as follows:

Key Improvements

  • Updated the Kubernetes resources like CRDs, RBAC, CSIDriver, and Admission Controller used by OpenEBS project to v1, as the corresponding beta or alpha versioned objects will be deprecated in Kubernetes 1.22. This change requires that OpenEBS 2.8 release be used with Kubernetes 1.18 or higher.
  • Jiva CSI driver is promoted to beta. For instructions on how to set up and use the Jiva CSI driver, please see https://github.com/openebs/jiva-operator. Major updates in this release include:
    • Upgrade support for Jiva volumes provisioned via CSI Driver
    • Migration of external-provisioner provisioned Jiva volumes to Jiva CSI Driver.
    • E2e tests for Jiva CSI volumes
  • Enhanced ZFS Local PV to allow users to set up custom finalizers on ZFS volumes. This will provide control to users to plug-in custom volume life-cycle operations. (https://github.com/openebs/zfs-localpv/issues/302)
  • Enhanced ZFS Local PV volume creation with ImmediateBinding to attempt to pick a new node for volume, if the selected node couldn't provision the volume. (https://github.com/openebs/zfs-localpv/pull/270)
  • LVM Local PV is promoted to beta. For instructions on how to set up and use the Local PV LVM CSI driver, please see https://github.com/openebs/lvm-localpv. Major updates in this release include:
    • Enhance the capacity reporting feature by updating lvmetad cache, prior to reporting the current status.
    • E2e tests updated with resiliency tests.
  • OpenEBS Rawfile Local PV is promoted to beta. For instructions on how to set up and use it, please see https://github.com/openebs/rawfile-localpv
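To make the Jiva CSI promotion above concrete, a StorageClass for the beta driver might look like the sketch below. The parameter names follow the jiva-operator project, but the class and policy names (`openebs-jiva-csi-sc`, `example-jivavolumepolicy`) are placeholders; verify the exact parameters against the jiva-operator README for your version.

```yaml
# Sketch of a StorageClass for the Jiva CSI driver (names are placeholders;
# parameters assumed from the jiva-operator project docs).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-csi-sc
provisioner: jiva.csi.openebs.io     # the Jiva CSI driver
allowVolumeExpansion: true           # pairs with the upgrade/resize support noted above
parameters:
  cas-type: "jiva"
  policy: "example-jivavolumepolicy" # placeholder JivaVolumePolicy name
```

PVCs referencing this class are served by the CSI driver rather than the deprecated external provisioner.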

Key Bug Fixes

Backward Incompatibilities

  • Kubernetes 1.18 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.

    • The CSI components have been upgraded to:
      • k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1 (for Mayastor CSI volumes)
      • k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
      • k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from a version of cStor operators older than 2.6 to this version, you will need to manually delete the cStor CSI driver object prior to upgrading. kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

  • The CRD API version has been updated for the cStor custom resources to v1. If you are upgrading via the helm chart, you might have to make sure that the new CRDs are updated. https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds

Other notable updates

  • OpenEBS has applied for becoming a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Working on automating further Day 2 operations like - automatically detecting a node deletion from the cluster, and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.
  • Verify that PSP support is disabled by default, as PSPs are going to be deprecated in future versions of K8s.
  • Sample Grafana dashboards for managing OpenEBS are being developed here: https://github.com/openebs/charts/tree/gh-pages/grafana-charts

Show your Support

Thank you @jayheinlein from Sharecare, Inc. for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

We are excited to welcome Harsh Thakur as maintainer for Local PV engines.

A very special thanks to our first-time contributors to code, tests, and docs: @etherealvisage, @ntdt, @centromere, @watcher00090, @t3hmrman

Getting Started

Prerequisite to install

  • Kubernetes 1.18 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on
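As an illustration of the NDM filter mentioned above, the exclude list lives in the NDM ConfigMap. The snippet below is a sketch modeled on the default shipped filters; the exact paths are assumptions, so adjust them for the devices on your nodes and apply the change before installing OpenEBS.

```yaml
# Excerpt of the NDM ConfigMap: device paths listed under "exclude" are
# skipped during discovery. Paths shown are illustrative defaults.
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
```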

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.8.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace --version 2.8.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.8 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.8, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.8 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Note: The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.7 and higher) to 2.8. If you are running a release older than 1.7, OpenEBS recommends you upgrade to the latest version as soon as possible.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that pools are not accidentally rebuilt when nodes come back with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don't have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.7.0

3 years ago

Release Summary

OpenEBS v2.7 is a maintenance release geared towards preparing for better structuring of the code and improving on the E2e frameworks. This release also includes some key user-requested bug fixes and enhancements.

The latest release versions of each of the engines are as follows:

Here are some of the key highlights in this release.

Key Improvements

Key Bug Fixes

Backward Incompatibilities

  • Kubernetes 1.17 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.

    • The CSI components have been upgraded to:
      • k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1 (for Mayastor CSI volumes)
      • k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
      • k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from a version of cStor operators older than 2.6 to this version, you will need to manually delete the cStor CSI driver object prior to upgrading. kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

  • The CRD API version has been updated for the cStor custom resources to v1. If you are upgrading via the helm chart, you might have to make sure that the new CRDs are updated. https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds

Other notable updates

  • OpenEBS has applied for becoming a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Working on automating further Day 2 operations like - automatically detecting a node deletion from the cluster, and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.
  • PSP support has been added to ZFS Local PV and cStor helm charts.
  • Improving the OpenEBS Rawfile Local PV in preparation for its beta release. In the current release, we fixed some issues, added support for setting resource limits on the sidecar, and made a few other optimizations.
  • Sample Grafana dashboards for managing OpenEBS are being developed here: https://github.com/openebs/charts/tree/gh-pages/grafana-charts

Show your Support

Thank you Armel Soro, Art Win, @ssytnikov18 from Verizon Media, Mike T, and grouchojeff for becoming public references and supporters of OpenEBS by sharing your use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

We are excited to welcome Praveen Kumar G T as maintainer for Local PV engines.

A very special thanks to our first-time contributors to code, tests, and docs: @luizcarlosfaria, @Z0Marlin, @iyashu, @dyasny, @hanieh-m, @si458, @Ab-hishek

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.7.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace --version 2.7.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.7 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.7, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.7 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Note: The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.6 and higher) to 2.7. If you are running a release older than 1.6, OpenEBS recommends you upgrade to the latest version as soon as possible.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that pools are not accidentally rebuilt when nodes come back with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don't have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.6.0

3 years ago

Release Summary

OpenEBS v2.6 contains some key enhancements and several fixes for the issues reported by the user community across all 9 types of OpenEBS volumes.

Here are some of the key highlights in this release.

New capabilities

  • OpenEBS is introducing a new CSI driver for dynamic provisioning of Jiva volumes. This driver is released as alpha and currently supports the following additional features compared to non-CSI Jiva volumes.

    • Jiva Replicas are backed by OpenEBS host path volumes
    • Auto-remount of volumes that are marked read-only by iSCSI client due to intermittent network issues
    • Handle the case of multi-attach error sometimes seen on on-premise clusters
    • A custom resource for Jiva volumes to help with easy access to the volume status

    For instructions on how to set up and use the Jiva CSI driver, please see https://github.com/openebs/jiva-operator.
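A minimal PVC against the alpha driver could look like the sketch below. The StorageClass name is a placeholder; the class itself must reference the `jiva.csi.openebs.io` provisioner as described in the jiva-operator docs.

```yaml
# Hypothetical PVC using a Jiva CSI StorageClass; the replicas are backed
# by OpenEBS hostpath volumes as noted above.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-jiva-csi-pvc
spec:
  storageClassName: openebs-jiva-csi-sc   # placeholder class name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```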

Key Improvements

  • Several bug fixes to the Mayastor volumes along with improvements to the API documentation. See Mayastor release notes.
  • Enhanced the NFS Dynamic Provisioner to support using a Cluster IP for the dynamically provisioned NFS server. It was observed that on some Kubernetes clusters, the kubelet or the node trying to mount the NFS volume was unable to resolve the cluster-local service.
  • ZFS Local PV added support for resizing of the raw block volumes.
  • LVM Local PV is enhanced with additional features and some key bug fixes like:
    • Raw block volume support
    • Snapshot support
    • Ability to schedule based on the capacity of the volumes provisioned
    • Ensure that LVM volume creation and deletion functions are idempotent
  • NDM partition discovery was updated to fetch the device details from its parent block device.
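The raw block volume support added to LVM Local PV above is requested via `volumeMode: Block` on the PVC. A hedged sketch follows; the StorageClass name is a placeholder for whatever LVM class you have defined.

```yaml
# Sketch: raw block PVC for the LVM Local PV driver. The volume is handed
# to the pod as a block device instead of a mounted filesystem.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: lvm-block-pvc
spec:
  storageClassName: openebs-lvmpv   # placeholder LVM StorageClass name
  volumeMode: Block                 # raw block instead of the default Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```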

Key Bug Fixes

Backward Incompatibilities

  • Kubernetes 1.17 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.

    • The CSI components have been upgraded to:
      • k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
      • k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
      • k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from an older version of cStor operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrading. kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

  • The CRD API version has been updated for the cStor custom resources to v1. If you are upgrading via the helm chart, you might have to make sure that the new CRDs are updated. https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds

  • The e2e pipelines include upgrade testing only from 1.5 and higher releases to 2.6. If you are running on release older than 1.5, OpenEBS recommends you upgrade to the latest version as soon as possible.

Other notable updates

  • OpenEBS has applied for becoming a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Automation of further Day 2 operations like - automatically detecting a node deletion from the cluster, and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.
  • PSP support has been added to ZFS Local PV and cStor helm charts.
  • Improving the OpenEBS Rawfile Local PV in preparation for its beta release. In the current release, we fixed some issues, added support for setting resource limits on the sidecar, and made a few other optimizations.
  • Sample Grafana dashboards for managing OpenEBS are being developed here: https://github.com/openebs/charts/tree/gh-pages/grafana-charts

Show your Support

Thank you @coboluxx (IDNT) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @luizcarlosfaria, @Z0Marlin, @iyashu, @dyasny, @hanieh-m, @si458, @Ab-hishek

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.6.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace --version 2.6.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.6 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.6, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.6 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that pools are not accidentally rebuilt when nodes come back with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don't have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.

v2.5.0

3 years ago

Release Summary

A warm and happy new year to all our users, contributors, and supporters. 🎉 🎉 🎉

Keeping up with our tradition of monthly releases, OpenEBS v2.5 is now GA with some key enhancements and several fixes for the issues reported by the user community. Here are some of the key highlights in this release:

New capabilities

  • OpenEBS has support for multiple storage engines, and the feedback from users has shown that users tend to only use a few of these engines on any given cluster depending on the workload requirements. As a way to provide more flexibility for users, we are introducing separate helm charts per engine. With OpenEBS 2.5 the following helm charts are supported.

    • openebs - This is the most widely deployed chart, with support for Jiva, cStor, and Local PV hostpath and device volumes.
    • zfs-localpv - Helm chart for ZFS Local PV CSI driver.
    • cstor-operators - Helm chart for cStor CSPC Pools and CSI driver.
    • dynamic-localpv-provisioner - Helm chart for only installing Local PV hostpath and device provisioners.

    (Special shout out to @sonasingh46, @shubham14bajpai, @prateekpandey14, @xUnholy, @akhilerm for continued efforts in helping to build the above helm charts.)

  • OpenEBS is introducing a new CSI driver for dynamic provisioning of Kubernetes Local Volumes backed by LVM. This driver is released as alpha and currently supports the following features.

    • Create and Delete Persistent Volumes
    • Resize Persistent Volume

    For instructions on how to set up and use the LVM CSI driver, please see https://github.com/openebs/lvm-localpv
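As an example of the per-engine charts introduced above, installing only the ZFS Local PV driver might look like the following sketch. The repository URL is an assumption based on the zfs-localpv project layout; check that chart's README for the authoritative instructions.

```shell
# Hedged sketch: install just the zfs-localpv chart instead of the full
# openebs umbrella chart (Helm 3 syntax; repo URL is an assumption).
helm repo add zfs-localpv https://openebs.github.io/zfs-localpv
helm repo update
helm install zfs-localpv zfs-localpv/zfs-localpv --namespace openebs --create-namespace
```

The other engine-specific charts (cstor-operators, dynamic-localpv-provisioner) follow the same pattern from their respective repositories.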

Key Improvements

  • Enhanced the ZFS Local PV scheduler to support spreading the volumes across the nodes based on the capacity of the volumes that are already provisioned. After upgrading to this release, capacity-based spreading will be used by default. In the previous releases, the volumes were spread based on the number of volumes provisioned per node. https://github.com/openebs/zfs-localpv/pull/266.

  • Added support to configure image pull secrets for the pods launched by the OpenEBS Local PV Provisioner and cStor (CSPC) operators. The image pull secrets (comma-separated strings) can be passed as an environment variable (OPENEBS_IO_IMAGE_PULL_SECRETS) to the deployments that launch these additional pods. The following deployments need to be updated.

  • Added support to allow users to specify custom node labels for allowedTopologies under the cStor CSI StorageClass. https://github.com/openebs/cstor-csi/pull/135
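The environment-variable route described above can be sketched as the deployment excerpt below. The container and secret names are placeholders; the variable name and its comma-separated format come from the release notes themselves.

```yaml
# Excerpt: pass registry pull secrets to the pods launched by a provisioner
# deployment. "my-registry-secret" is a placeholder; use a comma-separated
# list for multiple secrets.
spec:
  template:
    spec:
      containers:
        - name: openebs-localpv-provisioner   # placeholder container name
          env:
            - name: OPENEBS_IO_IMAGE_PULL_SECRETS
              value: "my-registry-secret"
```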

Key Bug Fixes

  • Fixed an issue that could cause Jiva replica to fail to initialize if there was an abrupt shutdown of the node where the replica pod is scheduled during the Jiva replica initialization. https://github.com/openebs/jiva/pull/337.
  • Fixed an issue that was causing Restore (with automatic Target IP configuration enabled) to fail if cStor volumes are created with Target Affinity with application pod. https://github.com/openebs/velero-plugin/issues/141.
  • Fixed an issue where Jiva and cStor volumes would remain in a pending state on Kubernetes 1.20 and above clusters. Kubernetes 1.20 deprecated the SelfLink option, which causes this failure with older Jiva and cStor Provisioners. https://github.com/openebs/openebs/issues/3314
  • Fixed an issue with cStor CSI Volumes that caused Pods using them on unmanaged Kubernetes clusters to remain in a pending state due to a multi-attach error. cStor depends on the CSI VolumeAttachment object to determine where to attach the volume; on unmanaged clusters, the VolumeAttachment object was not cleared by Kubernetes from the failed node, so cStor assumed the volume was still attached to the old node.

Backward Incompatibilities

  • Kubernetes 1.17 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.

    • The CRD version has been upgraded to v1. (Thanks to the efforts from @RealHarshThakur, @prateekpandey14, @akhilerm)
    • The CSI components have been upgraded to:
      • quay.io/k8scsi/csi-node-driver-registrar:v2.1.0
      • quay.io/k8scsi/csi-provisioner:v2.1.0
      • quay.io/k8scsi/snapshot-controller:v4.0.0
      • quay.io/k8scsi/csi-snapshotter:v4.0.0
      • quay.io/k8scsi/csi-resizer:v1.1.0
      • quay.io/k8scsi/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from an older version of cStor Operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrading. kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor Operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

Other notable updates

  • OpenEBS has applied for becoming a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Automation of further Day 2 operations like - automatically detecting a node deletion from the cluster, and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.

Show your Support

Thank you @laimison (Renthopper) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @allenhaozi, @anandprabhakar0507, @Hoverbear, @kaushikp13, @praveengt

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.5.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.5.0
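If you prefer a values file over command-line flags, a minimal override file might look like the following. The `ndm.filters.excludePaths` and `analytics.enabled` keys are assumptions based on the 2.x openebs chart; confirm the exact key names with `helm show values openebs/openebs --version 2.5.0` before use:

```yaml
# values.yaml (sketch): selected overrides for the openebs/openebs chart.
# Key names below are assumptions for the 2.x chart era - verify them
# against the chart's own values before installing.
ndm:
  filters:
    # Comma-separated device paths NDM should ignore during discovery
    excludePaths: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
analytics:
  # Disable anonymous usage reporting if your environment requires it
  enabled: false
```

Pass it to the install with `-f values.yaml` in place of individual `--set` flags.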

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.5 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.5, either one at a time or several at once.
  • Upgrade cStor Pools to 2.5 and their associated volumes, either one at a time or several at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
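The volume and pool upgrades listed above are typically run as Kubernetes Jobs. The sketch below shows the general shape of such a Job; the image name, tag, service account, and argument flags are assumptions recalled from the upgrade guides of this era and must be verified against the detailed steps linked above before running anything:

```yaml
# Sketch of a cStor volume upgrade Job. Image, service account and
# flags are assumptions - confirm them in the official upgrade steps.
apiVersion: batch/v1
kind: Job
metadata:
  name: cstor-volume-upgrade
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: openebs-maya-operator   # assumed service account
      restartPolicy: OnFailure
      containers:
        - name: upgrade
          image: openebs/m-upgrade:2.5.0          # assumed upgrade image/tag
          args:
            - "cstor-volume"
            - "--from-version=2.0.0"
            - "--to-version=2.5.0"
            - "pv-name-placeholder"               # replace with your PV name
```

Run one Job per volume, or pass multiple PV names if the upgrade tool in your release supports batching.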

Support

If you are having issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or is mounted, a cStor Pool will not be created on it. In the current release, manual steps can be followed to clear the filesystem or to use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check is in place to make sure that pools are not unintentionally rebuilt when nodes restart with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don't have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
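One way to guard against the over-provisioning scenario above is a namespace-scoped ResourceQuota on PVC storage requests, using the standard Kubernetes per-StorageClass quota keys. The StorageClass name `openebs-cstor` below is a placeholder; substitute the class your volumes actually use:

```yaml
# Sketch: cap the storage a namespace can request through a
# (placeholder) cstor StorageClass, so pools cannot be over-committed.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cstor-storage-quota
  namespace: default            # namespace whose PVCs you want to cap
spec:
  hard:
    # Total capacity requestable via the placeholder StorageClass
    openebs-cstor.storageclass.storage.k8s.io/requests.storage: 100Gi
    # Optional: also cap the number of PVCs using that class
    openebs-cstor.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
```

Size the `requests.storage` limit below the aggregate physical capacity of the backing cStor pools so writes cannot drive the pools into a read-only state.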