Sonobuoy Plugins
In addition to querying API objects, Sonobuoy also supports a plugin model. In this model, worker pods are dispatched into the cluster to collect data from each node, and use an aggregation URL to submit their results back to a waiting aggregation pod.
Two main components specify plugin behavior:

- **Plugin Selection**: A section in the main config (`config.json`) that declares which plugins to use in the Sonobuoy run. This config can be generated by Sonobuoy or passed in by the end user. These configs are defined by the end user.
- **Plugin Definition**: A YAML document that defines metadata and a pod to produce a result. This YAML is defined by the plugin developer, and can be taken as a given by the end user.

Plugin definitions are loaded from a search path, which can be overridden with the `PluginSearchPath` value of the Sonobuoy config.
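For example, a `config.json` fragment overriding the search path might look like the following sketch (the directory paths shown are illustrative, not the documented defaults):

```json
{
  "PluginSearchPath": ["./plugins.d", "/etc/sonobuoy/plugins.d"]
}
```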
Writing your own plugin
The plugin definition file
```yaml
---
sonobuoy-config:
  driver: Job        # Job or DaemonSet. Job runs once per run; DaemonSet runs on every node per run.
  plugin-name: e2e   # The name of the plugin
  result-type: e2e   # The name of the "result type." Usually the name of the plugin.
spec:                # A Kubernetes container spec
  env:
  - name: E2E_FOCUS
    value: Pods should be submitted and removed
  image: gcr.io/heptio-images/kube-conformance:latest
  imagePullPolicy: Always
  name: e2e
  volumeMounts:
  - mountPath: /tmp/results
    name: results
    readOnly: false
  - mountPath: /var/log/test
    name: test-volume
extra-volumes:
- name: test-volume
  hostPath:
    # directory location on host
    path: /data
```
A definition file specifies a container that runs the tests. This container can be anything you want, but it must fulfill a contract.
After your container completes its work, it needs to signal to Sonobuoy that it's done. It does this by writing the name of its results file into a "done" file. The default done file is `/tmp/results/done`, and its location can be configured in the Sonobuoy config.

Sonobuoy waits for the `done` file to be present, then transmits the indicated file back to the aggregator. The results file is opaque to Sonobuoy, and is made available in the Sonobuoy results tarball in its original form.
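The completion contract above can be sketched as the final commands of a plugin container. This is a minimal illustration, assuming the default results mount at `/tmp/results`; the results file name and contents are up to the plugin:

```shell
# Minimal sketch of the plugin completion contract, assuming the default
# results mount at /tmp/results.
RESULTS_DIR=/tmp/results
mkdir -p "${RESULTS_DIR}"

# 1. Write the (opaque) results file; its format is entirely up to the plugin.
echo "example plugin output" > "${RESULTS_DIR}/results.txt"

# 2. Signal completion: the done file contains the path of the results file
#    that Sonobuoy should transmit back to the aggregator.
echo -n "${RESULTS_DIR}/results.txt" > "${RESULTS_DIR}/done"
```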
If you need additional mounts besides the default `results` mount that Sonobuoy always provides, you can define them in the `extra-volumes` section of the plugin definition.
Choosing which plugins to run
All of the plugin definition files are mounted as files on the aggregator pod, which runs them. The aggregator loads every plugin it finds, but a separate list controls which plugins actually get run. A `config.json` file, also mounted on the aggregator, sets the configuration options for the aggregator. It has a field `Plugins`, which is an array of plugin names. The default value includes both the e2e and systemd-logs plugins.
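A sketch of the corresponding fragment of `config.json`, with the default plugin list; the object-with-`name` shape is an assumption about the selection format:

```json
{
  "Plugins": [
    { "name": "e2e" },
    { "name": "systemd-logs" }
  ]
}
```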
If you want to prevent one of those plugins from being run, simply remove it from the list. Likewise, if you'd like to run your own custom plugin, you need to add its name to this list (in addition to adding its definition file to the plugin ConfigMap). In either case, use the `sonobuoy gen` flow to edit the YAML before starting the run.
The default Sonobuoy plugins are available in the `examples/plugins.d` directory in this repository.
Here’s the current list:
| Plugin | Overview | Source Code Repository | Env Variables (Config) |
|---|---|---|---|
| systemd-logs | Gather the latest system logs from each node, using systemd's `journalctl` | heptio/sonobuoy-plugin-systemd-logs | (1) |
| e2e | Run Kubernetes end-to-end tests (e.g. conformance) and gather the results | heptio/kube-conformance | |
| | Perform CIS Benchmark scans from each node using Aqua Security's `kube-bench` | | |