Kubernetes controller for running load tests
Lotus is a Kubernetes controller for running load tests. It schedules and monitors the load test workers, collects and stores the metrics, and reports the test results.
Once installed, Lotus provides the following features:
Checks (like asserts/fails in a conventional test) for easy and flexible CI configuration.

Planned features include: automatically determining the maximum number of users (requests) the target services can handle, by running the load tests with a gradually increasing number of virtual users until one of the checks fails; and determining the resources the target services need in order to handle a given number of users.
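That ramp-up idea boils down to a simple search loop. Below is a rough sketch of it; `findMaxUsers` and its `check` callback are hypothetical names standing in for running one full Lotus test at a given user count, not part of Lotus today:

```go
// Sketch of the proposed ramp-up search: increase the number of virtual
// users step by step until a check fails. findMaxUsers and check are
// hypothetical names, not an existing Lotus API.
package main

import "fmt"

// findMaxUsers returns the highest user count (in increments of step, up
// to limit) for which check still passes, or 0 if even the first step fails.
func findMaxUsers(step, limit int, check func(users int) bool) int {
	last := 0
	for users := step; users <= limit; users += step {
		if !check(users) {
			break
		}
		last = users
	}
	return last
}

func main() {
	// Fake check: pretend the target service handles at most 450 users.
	ok := func(users int) bool { return users <= 450 }
	fmt.Println(findMaxUsers(100, 1000, ok)) // prints 400
}
```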
First, install the Lotus controller on your Kubernetes cluster. Lotus requires a Kubernetes cluster of version >= 1.9.0.
The Lotus controller can be installed either by using the Helm chart or by applying the Kubernetes manifests directly. (Using the Helm chart is recommended.)

```shell
helm install --name lotus ./install/helm
```
See install for more details.
There are two steps to start running a load test:
Theoretically, you can write your scenarios in any language you like; the only thing you need is a metrics exporter for Prometheus. For Golang, I have already prepared some util packages (e.g. metrics, virtualuser) that help you write your scenarios faster and more easily.
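Since the only hard requirement is a Prometheus-compatible metrics endpoint, a scenario in any language just needs to expose one. As a rough illustration, here is a minimal hand-rolled exporter using only the Go standard library; the metric name `lotus_worker_requests_total` is an invented example, not something Lotus defines:

```go
// Minimal sketch of a Prometheus exporter built with only the Go standard
// library. The metric name below is illustrative; real scenarios would
// normally use the lotus metrics package or a Prometheus client library.
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

var requestsTotal atomic.Int64 // requests the scenario has sent so far

// renderMetrics returns the counter in the Prometheus text exposition format.
func renderMetrics() string {
	return fmt.Sprintf(
		"# TYPE lotus_worker_requests_total counter\nlotus_worker_requests_total %d\n",
		requestsTotal.Load(),
	)
}

// serveMetrics exposes /metrics on the given port (the port you would
// declare as metricsPort in the Lotus CRD).
func serveMetrics(port int) error {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, renderMetrics())
	})
	return http.ListenAndServe(fmt.Sprintf(":%d", port), nil)
}

func main() {
	requestsTotal.Add(1) // pretend the scenario sent one request
	fmt.Print(renderMetrics())
	// A real worker would instead block on: serveMetrics(8081)
}
```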
main.go

```go
import "github.com/lotusload/lotus/pkg/metrics"

// Start a metrics server that exposes the collected metrics on port 8081.
m, err := metrics.NewServer(8081)
if err != nil {
    return err
}
defer m.Stop()
go m.Run()
```
For gRPC, use grpcmetrics.ClientHandler as the StatsHandler of your gRPC connection:

```go
conn, err := grpc.Dial(
    target,
    grpc.WithStatsHandler(&grpcmetrics.ClientHandler{}),
)
```
For HTTP, use the Transport from the httpmetrics package:

```go
client := http.Client{
    Transport: &httpmetrics.Transport{},
}
```
```yaml
apiVersion: lotus.lotusload.com/v1beta1
kind: Lotus
metadata:
  name: simple-scenario-12345   # the unique testID
spec:
  worker:
    runTime: 10m                # how long the load test will be run
    replicas: 15                # how many workers should be created
    metricsPort: 8081           # what port number should be used to collect metrics
    containers:
      - name: worker
        image: your-registry/your-worker-image   # the scenario image you published above
        ports:
          - name: metrics
            containerPort: 8081
  checks:                       # checks evaluated while the test is running
    - name: GRPCHighErrorRate
      expr: lotus_grpc_client_failure_percentage > 10
      for: 30s
```
Then apply this file to your Kubernetes cluster. Lotus will handle this test for you.
See crd-configurations.md for all configurable fields. See examples for more examples.
Lotus collects the metrics data and evaluates the checks to build a summary result for each test. Lotus can be configured to upload this summary file to external services (e.g. GCS, Slack) or to log it to stdout. Three formats of the summary file are supported: Text, Markdown, and JSON.
```
TestID: test-scenario-12345
TestStatus: Succeeded
Start: 09:02:59 2018-12-03
End: 09:12:59 2018-12-03
MetricsSummary:
  1. Virtual User
     - Started: 1M
     - Failed: 0
  2. GRPC
     - RPCTotal: 25M
     - FailurePercentage: 2.507
     GroupByMethod:
                              RPCs    Failure%  Latency  SentBytes  RecvBytes
       - helloworld.Hello     12.5M   1.015     105      15         8
       - helloworld.Profile   12.5M   1.415     152      8          256
       - all                  25M     1.207     135      12         245
Grafana: http://localhost:3000/dashboard/db/grpc?from=1543827779598&to=1543828379598
```
To help you fully explore and understand your test, Lotus provides some Grafana dashboards for visualizing the metrics. You can also set up Lotus to persist the time series data to long-term storage (GCS or S3), so it remains accessible after the test is deleted.
After applying the Lotus CRD to your Kubernetes cluster, you can also use the following command to check the status of your test:

```shell
kubectl describe lotus your-lotus-name
```
Your test can be in one of these statuses: Pending, Preparing, Running, Cleaning, FailureCleaning, Failed, Succeeded.
Please check out the /examples directory, which contains some prepared examples.
Refer to FAQ.md for frequently asked questions.
Refer to development.md for how to develop Lotus.
Lotus is released under the MIT license. See the LICENSE file for details.