# NATS Streaming Operator
## v0.4.2

Same as release v0.4.0; the only change is that the manifests now use the `default` namespace instead of `nats-io`.

Image:

`synadia/nats-streaming-operator:0.4.2`

To install:

```sh
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.4.2/default-rbac.yaml
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.4.2/deployment.yaml
```
## v0.4.0

Image:

`synadia/nats-streaming-operator:0.4.0`

To install:

```sh
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.4.0/default-rbac.yaml
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.4.0/deployment.yaml
```
## v0.3.0

Adds `config` to all examples to prevent issues when starting the cluster (https://github.com/nats-io/nats-streaming-operator/pull/76).

Image:

`synadia/nats-streaming-operator:v0.3.0-v1alpha1`

To install:

```sh
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.3.0/default-rbac.yaml
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.3.0/deployment.yaml
```
## v0.2.2

Adds the `ftGroup` option to switch to fault tolerance mode when using a shared filesystem.

```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "stan"
spec:
  natsSvc: "nats"
  size: 3
  image: "nats-streaming:latest"
  config:
    storeDir: "/pv/stan"
    ftGroup: "stan"

  # Define mounts in the Pod Spec
  template:
    spec:
      volumes:
        - name: stan-store-dir
          persistentVolumeClaim:
            claimName: efs
      containers:
        - name: nats-streaming
          volumeMounts:
            - mountPath: /pv
              name: stan-store-dir
```
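The examples in these notes assume that a NATS cluster already exists and that its Service name is what `natsSvc` points at. One way to create it, assuming the NATS Operator is installed, is a `NatsCluster` resource like the following sketch (the name `nats` matches the `natsSvc` used above):

```yaml
---
# Hypothetical NatsCluster managed by the NATS Operator; the
# resulting Service named "nats" is what natsSvc refers to above.
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  size: 3
```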
Image:

`synadia/nats-streaming-operator:v0.2.2-v1alpha1`

To install:

```sh
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.2.2/default-rbac.yaml
kubectl apply -f https://github.com/nats-io/nats-streaming-operator/releases/download/v0.2.2/deployment.yaml
```
Adds the `-m 8222` flag to the container to activate the monitoring endpoint.
It is now possible to attach a prometheus-exporter sidecar to collect metrics as follows:
```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  # Number of nodes in the cluster
  size: 3

  # NATS Streaming Server image to use, by default
  # the operator will use a stable version
  #
  image: "nats-streaming:0.12.2"

  # Service to which NATS Streaming Cluster nodes will connect.
  #
  natsSvc: "example-nats"

  config:
    debug: true
    trace: true
    raftLogging: true

  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7777"
    spec:
      containers:
        # Need to list the first container for the server, the operator
        # will fill in the rest of the parameters.
        - name: "stan"

        # Define the sidecar container and the paths that should be polled.
        - name: "metrics"
          image: "synadia/prometheus-nats-exporter:0.2.0"
          args: ["-varz", "-channelz", "-serverz", "http://localhost:8222"]
          ports:
            - name: "metrics"
              containerPort: 7777
              protocol: TCP
```
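As a sketch of how the pod annotations above are typically consumed (this fragment is not part of the release; it assumes a Prometheus deployment configured for annotation-based pod discovery):

```yaml
# Hypothetical prometheus.yml fragment that scrapes pods annotated
# with prometheus.io/scrape, using the port from prometheus.io/port.
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opt in via the annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Rewrite the scrape address to use the annotated port (7777 above).
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```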
Adds a `template` field to allow customizing the pod spec for a NATS Streaming pod.
```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan-pv"
spec:
  natsSvc: "example-nats"
  config:
    storeDir: "/pv/stan"

  # Define custom service account for pods
  template:
    spec:
      serviceAccountName: "admin"
```
Adds the `storeDir` configuration field, which can be used to point at a persistent volume mount, for example.
```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan-pv"
spec:
  natsSvc: "example-nats"
  config:
    storeDir: "/pv/stan"

  # Define mounts in the Pod Spec
  template:
    spec:
      volumes:
        - name: stan-store-dir
          persistentVolumeClaim:
            claimName: streaming-pvc
      containers:
        - name: nats-streaming
          volumeMounts:
            - mountPath: /pv
              name: stan-store-dir
```
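The example references a claim named `streaming-pvc`, which must exist beforehand. A minimal sketch (the access mode and storage size here are placeholders, not part of the release):

```yaml
---
# Hypothetical claim backing the stan-store-dir volume above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: streaming-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```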
Adds `store` and `configFile` configuration options, which can be used to choose an SQL store as the storage backend and to load DB credentials securely as part of a secret.
```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan-db"
spec:
  natsSvc: "example-nats"

  # Explicitly set that the managed NATS Streaming instance
  # will be using an SQL storage, to ensure that only a single
  # instance is available.
  store: SQL

  # To use the DB store support, the credentials need to be
  # included as a secret on a mounted file.
  configFile: "/etc/stan/config/secret.conf"

  # Define Pod Spec
  template:
    spec:
      volumes:
        - name: stan-secret
          secret:
            secretName: stan-secret
      containers:
        - name: nats-streaming
          volumeMounts:
            - mountPath: /etc/stan/config
              name: stan-secret
              readOnly: true
```
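The mounted `stan-secret` is expected to provide the `secret.conf` file with the SQL credentials. A minimal sketch, assuming the NATS Streaming Server `sql_options` configuration block with a Postgres driver (the user, password, and host below are placeholders):

```yaml
---
# Hypothetical secret providing /etc/stan/config/secret.conf.
apiVersion: v1
kind: Secret
metadata:
  name: stan-secret
type: Opaque
stringData:
  secret.conf: |
    sql_options {
      driver: "postgres"
      source: "user=stan password=changeme host=postgres dbname=stan sslmode=disable"
    }
```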
Adds `debug`, `trace`, and `raftLogging` flags to increase the verbosity of the logs.
```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  # Number of nodes in the cluster
  size: 3

  # NATS Streaming Server image to use, by default
  # the operator will use a stable version
  #
  # image: "nats-streaming:latest"

  # Service to which NATS Streaming Cluster nodes will connect.
  #
  natsSvc: "example-nats"

  config:
    debug: true
    trace: true
    raftLogging: true
```
Changed the pod restart policy to on-failure, though it can still be overridden via the `template`.

Internals were rewritten to use typed clients instead of the Operator SDK, which is currently still alpha.
## First alpha release of the NATS Streaming Operator

```sh
# Install latest version of NATS Operator on nats-io namespace
kubectl -n nats-io apply -f https://raw.githubusercontent.com/nats-io/nats-operator/master/example/deployment-rbac.yaml

# Installing the NATS Streaming Operator on nats-io namespace
kubectl -n nats-io apply -f https://raw.githubusercontent.com/nats-io/nats-streaming-operator/master/deploy/deployment-rbac.yaml
```
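With both operators installed, a cluster can then be created with a minimal `NatsStreamingCluster` resource like the ones shown in the examples above (the NATS service name here is an assumption):

```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
  namespace: "nats-io"
spec:
  # Service of an existing NATS cluster (assumed name).
  natsSvc: "example-nats"
  size: 3
```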