Krustlet Release Notes

Kubernetes Rust Kubelet

v1.0.0-alpha.1

Krustlet v1.0.0-alpha.1 is the first alpha release of 1.0! This marks the start of API and feature stability for Krustlet. For more information on our roadmap to 1.0, please read our recently released blog post. These release notes contain all breaking changes and new features since 0.7 and will be included with the final 1.0 release as well (for those who go straight from 0.7 to the final 1.0).

Backwards compatibility

As we move towards 1.0, backwards compatibility and API stability will now be respected. However, during the alpha/beta/RC period, there may be times when we will need to break the API. To be clear, here are the guarantees for each type of release:

  • Alpha: No backwards compatibility or API guarantees. We do not intend to make any major breaking changes, but we reserve the right to do so should the need arise
  • Beta: Similar to alpha, though we are unlikely to break anything. The one exception would be to fix bugs, security issues, or missing features found during the beta releases
  • RC: Backwards compatibility and API stability guarantees will go into effect. Breaking changes will only be considered for showstopper bugs or security issues
  • 1.0.0+: No breaking changes or changes affecting backwards compatibility will be allowed. The major exception is for security issues, where breaking changes can be allowed based on maintainer consensus. Any other breaking changes will be deferred to a future 2.0 release

Notable Features/Changes

  • krator has now been moved to its own repo so it can continue to evolve as its own project! This also means that the kubelet crate now consumes krator directly from crates.io
  • Krustlet now has support for device plugins
  • Krustlet providers can now implement an optional shutdown method for custom graceful shutdown behavior
  • The WASI provider now supports outgoing HTTP calls!
  • CSI support now has testing and most outstanding TODOs have been addressed
  • oci-distribution now supports DockerHub authentication
  • Although not a feature per se, there are now Windows e2e tests, so all supported OSes are now tested
  • The kubelet crate is now instrumented with the tracing crate

Breaking changes

krustlet-wasi binary

  • Due to our migration to the tracing crate, the log output is significantly different than 0.7.0

Rust API

  • The Ref type has been replaced with the new VolumeRef enum. Each variant wraps the logic for one supported volume type while still allowing access to the underlying type. Any VolumeRef can be mounted or unmounted. This should be a cleaner experience for anyone leveraging this part of the API
  • VolumeType has been removed from the API as part of the VolumeRef work
  • The Provider::ProviderState associated type now requires two additional bounds:

Before:

type ProviderState: 'static + Send + Sync;

After:

type ProviderState: 'static + Send + Sync + PluginSupport + DevicePluginSupport;

These new traits are used to require implementation of plugin registries and device plugin support. A minimal implementation (that does nothing) is below:

struct ProviderState;

impl PluginSupport for ProviderState {}

impl DevicePluginSupport for ProviderState {}

Please note that these traits are also now required for implementors of GenericProvider.

Known Issues/Missing Features

  • Kubernetes networking support. See the "Networking" section for more information
  • Modifying a bare pod's image is not implemented. Nothing will error, but Krustlet will not restart the "container"
  • TLS bootstrapping does not auto-renew certificates when they are close to expiry

Networking

After many discussions, we have decided that CNI and 100% native Kubernetes network support will not be part of the 1.0 release. Networking implementations vary wildly between different Krustlet providers, and it would be difficult and unwise to try to define a common abstraction at this time. For example, wasmCloud connects hosts using a system called a "lattice," which can consist of an arbitrary graph of connected hosts, whereas the CRI provider relies on the CNI spec implementations built into many Kubernetes distros. Not only do we need to understand how more complex networking should work, but the same discussions involve things like sidecars, service meshes, and so on. As a result, we have decided that each provider should handle its own networking while the community works out the best path forward for these more complex interactions.

Our hope is that once we have several different networking examples from various providers, it will be easier to add a common abstraction. Waiting on that information now would only introduce additional delay in releasing Krustlet as an otherwise stable and production-ready (though still bleeding edge) project.

What's next?

Our next anticipated version is 1.0.0-alpha.2, once we finish the remaining tasks in the 1.0 milestone.

Thanks

We want to express a huge thanks to all of those in the community who have helped the project evolve to this point. We appreciate your efforts in making this project a success.

Contributors to 1.0

If we missed you, let us know!

  • @VishnuJin
  • @bacongobbler
  • @cmhamakawa
  • @ereslibre
  • @fibonacci1729
  • @flavio
  • @gdesmott
  • @itowlson
  • @jdolitsky
  • @kate-goldenring
  • @kesselborn
  • @kflansburg
  • @lfrancke
  • @radu-matei
  • @siegfriedweber
  • @soenkeliebau
  • @thomastaylor312
  • @tot0

Installation

Download Krustlet 1.0.0-alpha.1:

Check out our installation docs for information on how to install Krustlet.

v0.7.0


Krustlet v0.7.0 is mainly an SDK-focused release. The biggest under-the-hood change was the update of our underlying task runtime (Tokio) to its first stable release. There are also a few small SDK feature additions. We have also switched out the logging implementation so that it can eventually be used to pass tracing data to another service. Lastly, this release marks the official move of the wasmCloud provider to its own repo! For more details on what isn't implemented yet, see the Known Issues section.

Because this is pre-release software, there are no backwards compatibility guarantees for the Rust API or functionality. However, we will do our best to document any breaking changes in future releases.

Caveats

Please note that this is NOT production-ready software, but it is in a usable state. The WASI standard and wasmtime are still under heavy development, and because of this there are key features (like networking) that are missing; these will appear in the future. However, there is networking support available in the wasmCloud provider.

Notable Features/Changes

  • The wasmCloud provider is no longer released with the Krustlet artifacts. Its code and artifacts are now hosted over at https://github.com/wasmCloud/krustlet-wasmcloud-provider to facilitate better collaboration and maintenance with the wasmCloud maintainers.
  • Krator now supports admission webhooks
  • Logging has been switched to the tracing crate. We will be instrumenting more of our code over the next release
  • You can now add pod conditions using the StatusBuilder

Breaking changes

  • All Krustlet project crates (oci-distribution, kubelet, and krator) are now using the Tokio 1.x runtime. This is a breaking change for SDK consumers still using Tokio 0.2 or 0.3 in their code

Known Issues/Missing Features

  • Kubernetes networking support. See the "What's next?" section for more information
  • Unsupported workloads (such as those dropped automatically onto a node like kube-proxy) can enter an error loop. This is more of a nuisance that will cause some logging noise, but not impact the running of Krustlet. If you have any ideas or feedback, please check out #167
  • Modifying a bare pod's image is not implemented. Nothing will error, but Krustlet will not restart the "container"
  • TLS bootstrapping does not auto-renew certificates when they are close to expiry

What's next?

Our next anticipated version is 1.0.0-alpha.1 (although we will cut a 0.7.1 if necessary). As we move to 1.0, our focus will be on fixing all outstanding bugs and polishing up the CSI support. We are also hoping to streamline the installation and deployment process as much as possible.

After many discussions, we have also decided that CNI and 100% native Kubernetes network support will not be part of the 1.0 release. Networking implementations vary wildly between different Krustlet providers, and it would be difficult and unwise to try to define a common abstraction at this time. For example, wasmCloud connects hosts using a system called a "lattice," which can consist of an arbitrary graph of connected hosts, whereas the CRI provider relies on the CNI spec implementations built into many Kubernetes distros. However, what we will have is basic host port (and probably node port) support for each of the Wasm-based providers. Each provider will be in charge of creating its own networking implementation tailored to its needs. Our hope is that once we have several different networking examples from various providers, it will be easier to add a common abstraction. Waiting on that information now would only introduce additional delay in releasing Krustlet as an otherwise stable and production-ready (though still bleeding edge) project.

Thanks

We want to express a huge thanks to all of those in the community who contributed to this release. There have been quite a few bug fix contributions from the community that have been very helpful. We appreciate your efforts in making this project a success.

Contributors to 0.7

  • @kflansburg
  • @itowlson
  • @bacongobbler
  • @thomastaylor312
  • @jiayihu
  • @vasu1124
  • @matsbror
  • @kesselborn
  • @lfrancke
  • @siegfriedweber
  • @yanganto

Installation

Download Krustlet 0.7.0:

Check out our installation docs for information on how to install Krustlet.

v0.6.0


Krustlet v0.6.0 has several major new features, particularly around alpha level support for Container Storage Interface volumes. There were also some new SDK features (and a new crate!) that necessitated a few more breaking API changes. These changes are explained below. For more details on what isn't implemented yet, see the Known Issues section.

Because this is pre-release software, there are no backwards compatibility guarantees for the Rust API or functionality. However, we will do our best to document any breaking changes in future releases.

Caveats

Please note that this is NOT production-ready software, but it is in a usable state. The WASI standard and wasmtime are still under heavy development, and because of this there are key features (like networking) that are missing; these will appear in the future. However, there is networking support available in waSCC.

Notable Features/Changes

  • Container Storage Interface volume support is now available through PVCs 🎉. Please note that this is ALPHA support and there are still some rough edges (namely around validation of some of the read access modes and other "advanced" configuration options). We plan on continuing to improve this during the 0.7 milestone.
  • We have also broken out the state machine into its own SDK and crate called krator! This crate allows for generic reuse of the state machine logic to write any type of Kubernetes controller. Please read the introduction blog post for more information
  • Lots of new doc updates. Thanks community members for all your help with that!
  • Generic, reusable states. If you are a provider implementor, you're welcome. These are states that generally stay the same across provider implementations (like Error and Backoff states).
  • A secondary state machine was introduced to manage individual containers. These utilize states defined in each of the two providers using Krator's API, a run_to_completion method provided by the Kubelet crate, and are spawned from within the Pod state machines of the providers. This is not required to implement a Provider, but we found that it significantly simplified our implementation.

Common State Implementations

We have implemented many common Pod states so that you can borrow one or more of these state handlers from the kubelet crate rather than rewriting this boilerplate. These states currently include Registered, ImagePull, ImagePullBackoff, VolumeMount, CrashLoopBackoff, and Error.

If you would like to make use of these states, you must implement a few new traits:

  • GenericProviderState for your shared state.
  • GenericPodState for your object state.
  • GenericProvider for your Provider type.

Please refer to either wascc-provider or wasi-provider for examples of how to implement these traits.

Breaking changes

Provider Trait

We had to make several small changes to the Provider trait to allow for generic state support.

The first is the addition of a new required method and associated type that captures the state of the provider itself (e.g. the container store and handles to running pods) that is shared across all pods:

type ProviderState: 'static + Send + Sync;

fn provider_state(&self) -> crate::state::SharedState<Self::ProviderState>;

SharedState<_> is simply a type alias for Arc<tokio::sync::RwLock<_>>, so you can refer to the Tokio documentation to understand its API.
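
For example, here is a minimal sketch of working with the shared state (MyProviderState is a hypothetical provider state type, not part of the crate):

use std::sync::Arc;
use tokio::sync::RwLock;

// SharedState<T> as described above.
type SharedState<T> = Arc<RwLock<T>>;

struct MyProviderState {
    running_pods: u32,
}

async fn example(state: SharedState<MyProviderState>) {
    // Any number of readers can hold the lock concurrently...
    let count = state.read().await.running_pods;
    println!("running pods: {}", count);

    // ...while writes take exclusive access.
    state.write().await.running_pods += 1;
}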

The second is a change to the associated type for the PodState. This must now be something that implements the ObjectState trait from the new krator crate:

type PodState: ObjectState<
    Manifest = Pod,
    Status = PodStatus,
    SharedState = Self::ProviderState,
>;

The last change is the addition of a new method that supports the Kubelet plugin registry (used for registering CSI plugins). This is an optional feature that has a default implementation returning None. If you want to opt in to CSI volumes, you can provide your own implementation of this function:

fn plugin_registry(&self) -> Option<Arc<PluginRegistry>>;

All providers will need to implement these new associated types and methods (with the exception of plugin_registry) upon upgrading to Krustlet 0.6. You can see an example of how these are implemented in the wasi-provider.
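
For providers that do opt in, the implementation typically just hands back a shared reference. A sketch (the plugin_registry field on self is hypothetical, and construction of the PluginRegistry itself is omitted):

fn plugin_registry(&self) -> Option<Arc<PluginRegistry>> {
    // Cloning the Arc shares a single registry across all callers.
    Some(self.plugin_registry.clone())
}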

Prelude

The prelude in kubelet::state has been moved to two separate preludes, kubelet::pod::state and kubelet::container::state, which export the same state machine API types but different status types, etc.

If you are not using the prelude, please be aware that a number of types were moved from kubelet::state to krator::state.

State

The State trait has changed in a few ways:

In the next method, a new argument has been introduced for accessing shared state: shared: SharedState<ProviderState>.

The next method now takes Manifest<Pod> instead of Pod. This wrapper allows access to a live reflection of the Pod rather than a potentially out-of-date copy. At any time you can get a clone of the latest pod by calling pod.latest(). If you want to await updates to the manifest, Manifest implements Stream.
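
As a rough sketch of both access patterns (assuming pod is the mutable Manifest<Pod> handed to your state handler, and using futures::StreamExt for the Stream API):

use futures::StreamExt;

// Clone the most recent version of the pod:
let current_pod = pod.latest();

// Or await the next update to the manifest (Manifest implements Stream):
if let Some(updated_pod) = pod.next().await {
    // React to the updated manifest here.
}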

The status method now returns an arbitrary type which implements krator::ObjectState, rather than serde_json::Value. This allows you to wrap status-patch logic in an arbitrary type, rather than having to write JSON patches in every state handler.

AsyncDrop

The async_drop method implemented for PodState has been moved to a method on the newly introduced ObjectState trait, and the AsyncDrop trait has been removed.

Node Label

We have changed the node label kubernetes.io/os from linux to Provider::ARCH. The reason for this is that a number of vendors appear to use this label as an indication of the types of workloads that can run on the node, and not the host operating system. This is one of the culprits for frequent errors related to DaemonSets like kube-proxy being scheduled on Krustlet nodes. Unfortunately it does not completely eliminate this problem.

Known Issues/Missing Features

  • Kubernetes networking support. The waSCC provider currently exposes the service on one of the node's ports, but there is nothing that updates Services or Endpoints. This is one of the major focuses of 0.7
  • Unsupported workloads (such as those dropped automatically onto a node like kube-proxy) can enter an error loop. This is more of a nuisance that will cause some logging noise, but not impact the running of Krustlet. If you have any ideas or feedback, please check out #167
  • Modifying a bare pod's image is not implemented. Nothing will error, but Krustlet will not restart the "container"
  • TLS bootstrapping does not auto-renew certificates when they are close to expiry

What's next?

Our next anticipated version is 0.7.0 (although we will cut a 0.6.1 if necessary). Our main focus for 0.7 will be implementing networking and improving CSI support. During the next release cycle, we will also be moving out the waSCC provider to its own repo. Although we intended for this to occur during this release, we decided it would be better to do after we made these last changes to the provider trait. Full details will be in the 0.7 release notes.

Thanks

We want to express a huge thanks to all of those in the community who contributed to this release. We appreciate your efforts in making this project a success. As we mentioned before, there were a ton of doc updates from the community and we are very grateful.

Contributors to 0.6

  • @kflansburg
  • @itowlson
  • @bacongobbler
  • @thomastaylor312
  • @DazWilkin
  • @brooksmtownsend
  • @jiayihu
  • @willemneal

Installation

Download Krustlet 0.6.0:

Check out our installation docs for information on how to install Krustlet.

v0.5.0


Krustlet v0.5.0 is mostly an "under the hood" release if you are just a consumer of Krustlet, although there are still a few useful feature additions! However, if you are using the kubelet crate, there have been significant API changes that lay the groundwork for solid features in the future. These changes are explained below. For more details on what isn't implemented yet, see the Known Issues section.

Because this is pre-release software, there are no backwards compatibility guarantees for the Rust API or functionality. However, we will do our best to document any breaking changes in future releases.

Caveats

Please note that this is NOT production-ready software, but it is in a usable state. The WASI standard and wasmtime are still under heavy development, and because of this there are key features (like networking) that are missing; these will appear in the future. However, there is networking support available in waSCC.

Notable Features/Changes

  • An entirely new and shiny state machine API for the Provider trait. This includes a new State trait to be used in constructing a state machine
  • Krustlet now supports Kubernetes 1.19. However, this came with a slight breaking change (see next section)
  • You can now mount individual items from Secrets and ConfigMaps into pods
  • ImagePullSecrets are now supported
  • For those developing on Krustlet, there are now some additional e2e testing tools and justfile targets

Breaking changes

Due to the requirements in Kubernetes 1.19, Krustlet can no longer self-label its node object with a role. Full details about why this change was needed can be found here. In practice, this means your node selectors on Pods meant to be run on a Krustlet node should not rely on the role. It is a little unclear what the best practice is here from the Kubernetes standpoint (automation after node joining or some other sort of reconciliation process), but in our experience we find it best to select on the architecture like so:

nodeSelector:
    beta.kubernetes.io/os: linux
    beta.kubernetes.io/arch: wasm32-wascc

You can see various examples of selectors in our demos.

New Provider API and State Machine

This section details the major changes to the kubelet crate API. If you do not consume this crate or write your own provider, feel free to skip to the next section.

First and foremost, the Provider trait has been hugely slimmed down. The current trait (with new members commented) as of publishing time is:

pub trait Provider: Sized {
    /// The state that is passed between Pod state handlers.
    type PodState: 'static + Send + Sync + AsyncDrop;

    /// The initial state for Pod state machine.
    type InitialState: Default + State<Self::PodState>;

    /// A state to handle early Pod termination.
    type TerminatedState: Default + State<Self::PodState>;

    const ARCH: &'static str;

    async fn node(&self, _builder: &mut Builder) -> anyhow::Result<()> {
        Ok(())
    }

    /// Hook to allow the provider to introduce shared state into Pod state.
    async fn initialize_pod_state(&self, pod: &Pod) -> anyhow::Result<Self::PodState>;

    async fn logs(
        &self,
        namespace: String,
        pod: String,
        container: String,
        sender: Sender,
    ) -> anyhow::Result<()>;

    async fn exec(&self, _pod: Pod, _command: String) -> anyhow::Result<Vec<String>> {
        Err(NotImplementedError.into())
    }

    async fn env_vars(
        container: &Container,
        pod: &Pod,
        client: &kube::Client,
    ) -> HashMap<String, String> {
        ...
    }
}

We have removed all of the separate functions for handling the basic CRUD operations as those are now handled by the new state machine (covered below). A few important details about the new parts of the Provider: The InitialState and TerminatedState allow Provider authors to specify the starting state for a Pod and the state it should jump to if deleted. The PodState and initialize_pod_state are used in conjunction to allow your Provider to expose shared state (often stored within the Provider itself) to all of the pods. A simple example of this is the set of currently assigned ports used in the waSCC provider. This shared state must implement the custom AsyncDrop trait, as Rust currently has no async drop standard.
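
As a rough illustration of this shared-state pattern (all type and field names below are invented for the example, not the actual waSCC code):

use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::Mutex;

struct MyProvider {
    // Ports already assigned to running pods, shared across the whole node.
    assigned_ports: Arc<Mutex<HashSet<u16>>>,
}

struct MyPodState {
    // Each pod's state machine holds a handle to the same shared set.
    assigned_ports: Arc<Mutex<HashSet<u16>>>,
}

// Inside `impl Provider for MyProvider` (MyPodState would also need to
// implement the AsyncDrop trait mentioned above):
async fn initialize_pod_state(&self, _pod: &Pod) -> anyhow::Result<Self::PodState> {
    Ok(MyPodState {
        assigned_ports: self.assigned_ports.clone(),
    })
}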

In this new paradigm, providers are expected to provide a full state machine for the lifecycle of the pod. This new design allows each step of processing to be encapsulated in its own state (e.g. pulling images is one state, starting containers is another). At the heart of this is one new trait, State, and one new enum, Transition, as shown below:

pub trait State<PodState>: Sync + Send + 'static + std::fmt::Debug {
    /// Provider supplies method to be executed when in this state.
    async fn next(
        self: Box<Self>,
        pod_state: &mut PodState,
        pod: &Pod,
    ) -> anyhow::Result<Transition<PodState>>;

    /// Provider supplies JSON status patch to apply when entering this state.
    async fn json_status(
        &self,
        pod_state: &mut PodState,
        pod: &Pod,
    ) -> anyhow::Result<serde_json::Value>;
}
pub enum Transition<PodState> {
    /// Transition to new state.
    Next(StateHolder<PodState>),
    /// This is a terminal node of the state graph.
    Complete(anyhow::Result<()>),
}

Each state in your machine must implement the State trait. The json_status function is called when entering the state to update the status in Kubernetes. The next function is called in an iterative fashion to walk through the state machine. Transitions to other states are done using the Transition::next method, which abstracts away some boxing and other things behind the scenes to construct a Transition::Next.

For additional safety, each state must explicitly mark what its next possible states are using the TransitionTo trait, providing compile time guarantees that you aren't transitioning into a state you aren't expecting.

To tie it all together, here is a super simple implementation of a state:

use kubelet::state::{Transition, State, TransitionTo};
use kubelet::pod::Pod;

#[derive(Debug)]
struct TestState;

impl TransitionTo<TestState> for TestState {}

struct PodState;

#[async_trait::async_trait]
impl State<PodState> for TestState {
    async fn next(
        self: Box<Self>,
        _pod_state: &mut PodState,
        _pod: &Pod,
    ) -> anyhow::Result<Transition<PodState>> {
        Ok(Transition::next(self, TestState))
    }

    async fn json_status(
        &self,
        _pod_state: &mut PodState,
        _pod: &Pod,
    ) -> anyhow::Result<serde_json::Value> {
        Ok(serde_json::json!(null))
    }
}

If you are interested in the technical details and design decisions behind the state machine, there will be a forthcoming blog post on https://deislabs.io (we will update the link here when it releases). Please note that this API still needs some polishing. We don't expect any more major changes like those in this release, but there will be various ergonomic changes to make it easier to use as we continue to iterate.

For real-world examples of a provider implementation, please take a look at the WASI and waSCC providers in Krustlet. See the crate documentation for full details and more examples.

Known Issues/Missing Features

  • Cloud volume mounting support
  • Kubernetes networking support. The waSCC provider currently exposes the service on one of the node's ports, but there is nothing that updates Services or Endpoints
  • Support for all pod phases/conditions (ContainerCreating, CrashLoopBackoff, etc.). However, please note that running and error conditions are supported, so you'll know if your pod is in an erroneous state
  • Unsupported workloads (such as those dropped automatically onto a node like kube-proxy) can enter an error loop. This is more of a nuisance that will cause some logging noise, but not impact the running of Krustlet
  • Modifying a bare pod's image is not implemented. Nothing will error, but Krustlet will not restart the "container"
  • TLS bootstrapping does not auto-renew certificates when they are close to expiry

What's next?

Our next anticipated version is 0.6.0 (although we will cut a 0.5.1 if necessary). You can see a full list of issues planned for 0.6 in the milestone. Our main focus for 0.6 will be around supporting cloud provider volumes and continued API ergonomic enhancements. During the next release cycle, we will also be moving out the waSCC provider to its own repo. Full details will be in the 0.6 release notes.

Thanks

We want to express a huge thanks to all of those in the community who contributed to this release. We appreciate your efforts in making this project a success.

We also would like to call out @kflansburg's herculean efforts with the new state machine API and refactor. He recently joined us as a core maintainer and took the lead on this massive chunk of work. If you see him on the interwebs, please say thanks!

Contributors to 0.5

  • @kflansburg
  • @itowlson
  • @bacongobbler
  • @thomastaylor312
  • @jlegrone
  • @Porges

Installation

Download Krustlet 0.5.0:

Check out our installation docs for information on how to install Krustlet.

v0.4.0


Krustlet v0.4.0 focuses on Windows support, Init Containers, and testing improvements. For more details on what isn't implemented yet, see the Known Issues section.

Because this is pre-release software, there are no backwards compatibility guarantees for the Rust API or functionality. However, we will do our best to document any breaking changes in future releases.

Caveats

Please note that this is NOT production-ready software, but it is in a usable state. The WASI standard and wasmtime are still under heavy development, and because of this there are key features (like networking) that are missing; these will appear in the future. However, there is networking support available in waSCC.

Notable Features/Changes

  • Windows support has been added, including pre-built binaries. There are also build scripts written for PowerShell when building on Windows. Please consult the documentation to understand some of the caveats when running on Windows.
  • We now have full Init Container support in the WASI provider. Init Containers are not supported on waSCC due to it being an entirely different runtime where Init Containers don't make much sense.
  • The waSCC provider now supports automatic port allocation when nodePort isn't specified
  • The e2e test suite was greatly expanded and made more flexible
  • ImagePullPolicy is now respected in both providers

Breaking changes

The oci-distribution crate no longer exports the Client's version and auth methods, as they are only meant to be used internally. The Client's fetch_manifest_digest method now requires &mut self.
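
In practice this just means the client binding must be mutable. Here is a sketch against a recent oci-distribution API (exact signatures may differ between versions):

use oci_distribution::{secrets::RegistryAuth, Client, Reference};

async fn manifest_digest(image: &str) -> anyhow::Result<String> {
    // fetch_manifest_digest now takes &mut self, so the client must be mutable.
    let mut client = Client::default();
    let reference: Reference = image.parse()?;
    let digest = client
        .fetch_manifest_digest(&reference, &RegistryAuth::Anonymous)
        .await?;
    Ok(digest)
}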

Known Issues/Missing Features

  • Cloud volume mounting support
  • Kubernetes networking support. The waSCC provider currently exposes the service on one of the node's ports, but there is nothing that updates Services or Endpoints
  • Support for all pod phases/conditions (ContainerCreating, CrashLoopBackoff, etc.). However, please note that running and error conditions are supported, so you'll know if your pod is in an erroneous state
  • Unsupported workloads (such as those dropped automatically onto a node like kube-proxy) can enter an error loop. This is more of a nuisance that will cause some logging noise, but not impact the running of Krustlet
  • Modifying a bare pod's image is not implemented. Nothing will error, but Krustlet will not restart the "container"
  • TLS bootstrapping does not auto-renew certificates when they are close to expiry

What's next?

Our next anticipated version is 0.5.0 (although we will cut a 0.4.1 if necessary). You can see a full list of issues planned for 0.5 in the milestone. We expect 0.5.0 to contain some major breaking changes, particularly around the Provider API. However, we believe these changes will make it easier for developers to write their own providers and simplify the process for updating pod statuses. These changes will be clearly documented in the 0.5 release notes.

Thanks

We also want to express a huge thanks to all of those in the community who contributed to this release. We appreciate your efforts in making this project a success.

Contributors to 0.4

  • @kflansburg
  • @itowlson
  • @bacongobbler
  • @thomastaylor312
  • @radu-matei

Installation

Download Krustlet 0.4.0:

Check out our installation docs for information on how to install Krustlet.

v0.3.0


Krustlet v0.3.0 is the third release of Krustlet. This release was focused on refactoring, documentation, and TLS bootstrapping. For more details on what isn't implemented yet, see the Known Issues section.

Because this is pre-release software, there are no backwards compatibility guarantees for the Rust API or functionality. However, we will do our best to document any breaking changes in future releases.

Caveats

Please note that this is NOT production-ready software, but it is in a usable state. The WASI standard and wasmtime are still under heavy development, and because of this there are key features (like networking) that are missing; these will appear in the future. However, there is networking support available in waSCC.

Notable Features/Changes

  • TLS bootstrapping support has been added 🎉. Krustlet will now automatically request the proper client certificate credentials and the serving certificates for its API. See the bootstrapping docs for more details.
  • We now have support for using a config file and better documentation of our configuration options.
  • Improved error handling for the Kubelet API
  • The WASI provider now supports passing arguments in the pod spec
  • Graceful shutdown support has been added. This will try to evict all pods before shutting down

Breaking changes

  • We have moved away from using a .pfx certificate bundle for TLS in favor of a separate certificate and key. Because of this, the --pfx-password flag has been removed and the new flags for TLS are --cert-file and --private-key-file
  • The environment variables for configuring the certificate and key file locations have been standardized and renamed with the KRUSTLET_ prefix:
    • TLS_PRIVATE_KEY_FILE => KRUSTLET_PRIVATE_KEY_FILE
    • TLS_CERT_FILE => KRUSTLET_CERT_FILE
  • Many modules in the kubelet crate have shifted around. When upgrading your Providers to kubelet 0.3.0, note the following changes in your import paths:
    • kubelet::Provider -> kubelet::provider::Provider
    • kubelet::NodeBuilder -> kubelet::node::Builder
    • kubelet::Pod -> kubelet::pod::Pod
    • kubelet::module_store::ModuleStore -> kubelet::store::Store
    • kubelet::module_store::FileModuleStore -> kubelet::store::oci::FileStore
    • kubelet::handle::key_from_pod -> kubelet::pod::key_from_pod
    • kubelet::handle::pod_key -> kubelet::pod::pod_key
    • kubelet::handle::PodHandle -> kubelet::pod::Handle
    • kubelet::handle::RuntimeHandle -> kubelet::container::Handle
    • kubelet::status::update_pod_status -> kubelet::pod::update_status
    • kubelet::status::Phase -> kubelet::pod::Phase
    • kubelet::status::Status -> kubelet::pod::Status
    • kubelet::status::ContainerStatus -> kubelet::container::Status
    • kubelet::handle::Stop -> kubelet::handle::StopHandler
    • kubelet::volumes::VolumeRef -> kubelet::volume::Ref
    • kubelet::logs::LogOptions -> kubelet::log::Options
    • kubelet::logs::LogSender -> kubelet::log::Sender

Known Issues/Missing Features

  • Cloud volume mounting support
  • Init containers
  • Only Linux and Darwin 64-bit architectures are supported. We hope to add ARM and other targets in the future. Right now, Windows is doable, but we are trying to improve the developer process and testing before supporting it as a build target
  • Support for all pod phases/conditions (ContainerCreating, CrashLoopBackoff, etc.). However, please note that running and error conditions are supported, so you'll know if your pod is erroring
  • Unsupported workloads (such as those dropped automatically onto a node like kube-proxy) can get into an error loop. This is more of a nuisance that will cause some logging noise, but not impact the running of Krustlet
  • Modifying a bare pod's image is not implemented. Nothing will error, but Krustlet will not restart the "container"
  • TLS bootstrapping does not auto-renew certificates when they are close to expiry

What's next?

Our next anticipated version is 0.4.0 (although we will cut a 0.3.1 if necessary). You can see a full list of issues planned for 0.4 in the milestone.

Thanks

We also want to express a huge thanks to all of those in the community who contributed to this release. We had a whole slew of new contributors and their work has been invaluable in improving quality. We appreciate your efforts in making this project a success.

Contributors to 0.3

  • @bketelsen
  • @kflansburg
  • @DerekStrickland
  • @itowlson
  • @bacongobbler
  • @thomastaylor312

Installation

Download Krustlet 0.3.0:

Check out our installation docs for information on how to install Krustlet.

v0.2.1


This is purely a maintenance release for the Rust crate. There was a missing feature flag on the docs build that was causing it to fail. Otherwise this has the same functionality as 0.2.0. We apologize for the mix-up.

Installation

Download Krustlet 0.2.1:

Check out our installation docs for information on how to install Krustlet

v0.2.0


Krustlet 0.2.0 is the second release of Krustlet. This release was focused on adding features and improving overall ergonomics for running Krustlet. For more details on what isn't implemented yet, see the Known Issues section.

Because this is pre-release software, there are no backwards compatibility guarantees for the Rust API or functionality. However, we will do our best to document any breaking changes in future releases

Caveats

Please note that this is NOT production-ready software, but it is in a usable state. The WASI standard and wasmtime are still under heavy development, and because of this there are key features (like networking) that are missing; these will appear in the future. However, there is networking support available in waSCC.

Using Krustlet as a library

All of the functionality of Krustlet is also available as a Rust crate, and this is the first release that we are publishing to crates.io! We have pushed the oci-distribution crate and the kubelet crate for use in other projects. The kubelet crate can be used by anyone who wants to write a Virtual Kubelet style provider in Rust. We'd love to see some other examples of Providers if anyone implements one!

Please remember that the oci-distribution crate is a partial implementation of the OCI spec that we hope to fully flesh out in the future (as of right now, it is just an implementation of what is needed to pull modules for Krustlet).
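
For the curious, pulling a module looks roughly like this (a sketch based on a more recently published version of the crate; details may differ between versions):

use oci_distribution::{manifest, secrets::RegistryAuth, Client, Reference};

async fn pull_module() -> anyhow::Result<()> {
    let mut client = Client::default();
    let reference: Reference = "webassembly.azurecr.io/hello-wasm:v1".parse()?;
    // Only the Wasm layer media type is requested, which is all Krustlet needs.
    let image = client
        .pull(
            &reference,
            &RegistryAuth::Anonymous,
            vec![manifest::WASM_LAYER_MEDIA_TYPE],
        )
        .await?;
    println!("pulled {} layer(s)", image.layers.len());
    Ok(())
}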

Notable Features/Changes

  • A whole slew of additional docs for running Krustlet with an Inlets tunnel, GKE, and on WSL2
  • waSCC capabilities are now statically compiled into the krustlet-wascc binary, eliminating the need for downloading object files
  • Long running modules in the WASI provider can now be interrupted
  • Volume support for Secrets, ConfigMaps, and HostPath
  • Log streaming and better error handling for Kubelet

Known Issues/Missing Features

  • Cloud volume mounting support
  • Init containers
  • Only Linux and Darwin 64-bit architectures are supported. We hope to add ARM and other targets in the future. Right now, Windows is doable, but we are trying to improve the developer process and testing before supporting it as a build target
  • Support for all pod phases/conditions (ContainerCreating, CrashLoopBackoff, etc.). However, please note that running and error conditions are supported, so you'll know if your pod is erroring
  • Unsupported workloads (such as those dropped automatically onto a node like kube-proxy) can get into an error loop. This is more of a nuisance that will cause some logging noise, but not impact the running of Krustlet
  • We were planning on having TLS bootstrapping available, but ran into some problems getting it to work with external nodes. Because of this, Krustlet does not update the ready heartbeat of the node (though it does update its lease) and has to delete and recreate the node on restart.
  • Modifying a bare pod's image is not implemented. Nothing will error, but Krustlet will not restart the "container"

What's next?

Our next anticipated version is 0.3.0 (although we will cut a 0.2.1 if necessary). You can see a full list of issues planned for 0.3 in the milestone

Thanks

We also want to express a huge thanks to all of those in the community who contributed to this release. We had a whole slew of new contributors and their work has been invaluable in improving quality. We appreciate your efforts in making this project a success

Installation

Download Krustlet 0.2.0:

Check out our installation docs for information on how to install Krustlet

v0.1.0


Krustlet 0.1.0 is the first release of Krustlet. For more details on the project and why it exists, see the blog post. This release contains a minimally functional Kubelet implementation that uses WASI (via wasmtime) or waSCC as a backend provider. You can create a pod, have it run (with status updates), and fetch its logs. For more details on what isn't implemented yet, see the Known Issues section.

Because this is pre-release software, there are no backwards compatibility guarantees for the Rust API or functionality. However, we will do our best to document any breaking changes in future releases

Caveats

Please note that this is not fully production-ready software, but it is definitely in a usable state. The WASI standard and wasmtime are still under heavy development. There are some key features (like networking) that are currently missing but will be made available in future updates. However, there is networking support available in waSCC.

Right now, workloads for the WASI provider should be short lived. There is no current way to safely interrupt a long-running WASM module in wasmtime, though there will be in the future.

Using Krustlet as a library

All of the functionality of Krustlet is also available as a Rust crate, but since this is still experimental we have not published the crates to https://crates.io. If you are interested in using these as published crates, please let us know so we can prioritize it!

To use any of the crates as a dependency, you'll need to add it to your dependencies like so (replacing kubelet with the name of the crate you want):

kubelet = { git = "https://github.com/deislabs/krustlet", tag = "v0.1.0" }

The kubelet crate is a generic Kubelet implementation that can be used to create your own Kubelet or Provider. The oci-distribution crate is a partial implementation of the OCI spec that we hope to fully flesh out in the future (as of right now, it is just an implementation of what is needed to pull images for Krustlet).

Known Issues/Missing Features

  • Volume mounting support
  • Init containers
  • Stopping long running instances
  • Only Linux and Darwin 64-bit architectures are supported. We hope to add ARM and other targets in the future
  • Support for all pod phases/conditions (ContainerCreating, CrashLoopBackoff, etc.). However, please note that running and error conditions are supported, so you'll know if your pod is erroring
  • Unsupported workloads (such as those dropped automatically onto a node like kube-proxy) can get into an error loop. This is more of a nuisance that will cause some logging noise, but not impact the running of Krustlet
  • Status updating: Krustlet does not currently have all of the TLS and user bootstrapping that would allow it to update the status of nodes. Because of this, it does not update the ready heartbeat of the node (though it does update its lease) and it has to delete and recreate the node on restart. This will be improved in version 0.2.
  • "Modify" events for pods are not currently supported. However, this shouldn't impact most workloads if you are using things from the apps/v1 API (e.g. Deployments)
  • If you are using the waSCC provider, you'll need the object files for the logging and http "capabilities." This is only temporary; these capabilities will be compiled into the waSCC provider in 0.2.

What's next?

Our next anticipated version is 0.2.0 (although we will cut a 0.1.1 if necessary). You can see a full list of issues planned for 0.2 in the milestone

Thanks

We also want to express a huge thanks to all of those in the community who contributed to this release. We appreciate your efforts in making this project a success

Installation

Download Krustlet 0.1.0:

Check out our installation docs for information on how to install Krustlet