Add falco chart
This commit is contained in:
parent
e435dda713
commit
b052831184
21
falco/.helmignore
Normal file
@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
230
falco/BREAKING-CHANGES.md
Normal file
@@ -0,0 +1,230 @@
# Helm chart Breaking Changes

- [4.0.0](#400)
  - [Drivers](#drivers)
  - [K8s Collector](#k8s-collector)
  - [Plugins](#plugins)
- [3.0.0](#300)
  - [Falcoctl](#falcoctl-support)
  - [Rulesfiles](#rulesfiles)
  - [Falco Images](#drop-support-for-falcosecurityfalco-image)
  - [Driver Loader Init Container](#driver-loader-simplified-logic)

## 4.0.0

### Drivers

The `driver` section has been reworked based on the following PR: https://github.com/falcosecurity/falco/pull/2413.
It is an attempt to make the way a driver is configured in Falco uniform.
It also groups the configuration by driver type.
Some of the drivers have been renamed:
* the kernel module has been renamed from `module` to `kmod`;
* the eBPF probe has not been changed: it is still `ebpf`;
* the modern eBPF probe has been renamed from `modern-bpf` to `modern_ebpf`.

The `gvisor` configuration has been moved under the `driver` section, since gVisor is considered a driver on its own.

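The reworked section looks roughly like this (a sketch; the `kind` names come from the list above, while the surrounding keys are illustrative assumptions — check `values.yaml` for the authoritative layout):

```yaml
driver:
  enabled: true
  # One of: kmod, ebpf, modern_ebpf (gvisor also lives under this section)
  kind: kmod
  gvisor:
    enabled: false
```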
### K8s Collector

The old Kubernetes client has been removed in Falco 0.37.0. For more info, check out this issue: https://github.com/falcosecurity/falco/issues/2973#issuecomment-1877803422.
The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) and the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin replace the old implementation.

The following resources, previously needed by Falco to connect to the API server, are no longer required and have been removed from the chart:
* service account;
* cluster role;
* cluster role binding.

When `collectors.kubernetes` is enabled, the chart deploys the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) and configures Falco to load the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin.

By default, `collectors.kubernetes.enabled` is off; for more info, see the following issue: https://github.com/falcosecurity/falco/issues/2995.

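Enabling the collector is then a single switch in `values.yaml` (a minimal sketch using only the `collectors.kubernetes.enabled` key named above):

```yaml
collectors:
  kubernetes:
    # Deploys the k8s-metacollector and loads the k8smeta plugin.
    enabled: true
```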
### Plugins

The Falco docker image no longer ships the plugins: https://github.com/falcosecurity/falco/pull/2997.
For this reason, `resolveDeps` is now enabled in the relevant values files (e.g. `values-k8saudit.yaml`).
When installing `rulesfile` artifacts, `falcoctl` will try to resolve their dependencies and install the required plugins.

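In `values.yaml` terms this corresponds to something like the following sketch (the key path mirrors the falcoctl configuration shown later in this document):

```yaml
falcoctl:
  config:
    artifact:
      install:
        # Resolve rulesfile dependencies and pull the plugins they need.
        resolveDeps: true
```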
## 3.0.0

The new chart deploys new *k8s* resources, and new configuration variables have been added to the `values.yaml` file. People upgrading the chart from `v2.x.y` have to port their configuration variables to the new `values.yaml` file used by the `v3.0.0` chart.

If you still want to use the old values, because you do not want to take advantage of the new and shiny **falcoctl** tool, then just run:

```bash
helm upgrade falco falcosecurity/falco \
    --namespace=falco \
    --reuse-values \
    --set falcoctl.artifact.install.enabled=false \
    --set falcoctl.artifact.follow.enabled=false
```

This way you will upgrade Falco to `v0.34.0`.

**NOTE**: The new version of Falco itself, installed by the chart, does not introduce breaking changes. You can port your previous Falco configuration to the new `values.yaml` by copy-pasting it.


### Falcoctl support

[Falcoctl](https://github.com/falcosecurity/falcoctl) is a new tool born to automate operations when deploying Falco.

Before `v3.0.0` of the charts, *rulesfiles* and *plugins* were shipped bundled in the Falco docker image. This precluded the possibility of updating the *rulesfiles* and *plugins* until a new version of Falco was released. Operators had to manually update the *rulesfiles* or add new *plugins* to Falco. The process was cumbersome and error-prone. Operators had to create their own Falco docker images with the new plugins baked into them, or wait for a new Falco release.

Starting from the `v3.0.0` chart release, we added support for **falcoctl** in the charts. Deploying it alongside Falco makes it possible to:
- *install* artifacts of the Falco ecosystem (i.e. plugins and rules at the moment of writing);
- *follow* those artifacts (only *rulesfile* artifacts are recommended), to keep them up to date with the latest releases of the Falcosecurity organization. This allows, for instance, updating rules that detect new vulnerabilities or security issues without the need to redeploy Falco.

The chart deploys *falcoctl* using an *init container* and/or a *sidecar container*. The former is used to install artifacts and make them available to Falco at start-up time; the latter runs alongside Falco and updates the local artifacts when new updates are detected.

Based on your deployment scenario:

1. Falco without *plugins*, and you just want to upgrade to the new Falco version:

    ```bash
    helm upgrade falco falcosecurity/falco \
        --namespace=falco \
        --reuse-values \
        --set falcoctl.artifact.install.enabled=false \
        --set falcoctl.artifact.follow.enabled=false
    ```

    When upgrading an existing release, *helm* uses the new chart version. Since we added new template files and changed the values schema (added new parameters), we explicitly disable the **falcoctl** tool. By doing so, the command will reuse the existing configuration but will deploy Falco version `0.34.0`.

2. Falco without *plugins*, and you want to automatically get new *falco-rules* as soon as they are released:

    ```bash
    helm upgrade falco falcosecurity/falco \
        --namespace=falco
    ```

    Helm first applies the values coming from the new chart version, then overrides them using the values of the previous release. The outcome is a new release of Falco that:
    * uses the previous configuration;
    * runs Falco version `0.34.0`;
    * uses **falcoctl** to install and automatically update the [*falco-rules*](https://github.com/falcosecurity/rules/);
    * checks for new updates every 6h (default value).

3. Falco with *plugins*, and you just want to upgrade Falco:

    ```bash
    helm upgrade falco falcosecurity/falco \
        --namespace=falco \
        --reuse-values \
        --set falcoctl.artifact.install.enabled=false \
        --set falcoctl.artifact.follow.enabled=false
    ```

    Very similar to scenario `1.`

4. Falco with plugins, and you want to use **falcoctl** to download the plugins' *rulesfiles*:

    * Save the **falcoctl** configuration to a file:

    ```bash
    cat << EOF > ./falcoctl-values.yaml
    ####################
    # falcoctl config  #
    ####################
    falcoctl:
      image:
        # -- The image pull policy.
        pullPolicy: IfNotPresent
        # -- The image registry to pull from.
        registry: docker.io
        # -- The image repository to pull from.
        repository: falcosecurity/falcoctl
        # -- Overrides the image tag whose default is the chart appVersion.
        tag: "main"
      artifact:
        # -- Runs the "falcoctl artifact install" command as an init container. It is used to install artifacts before
        # Falco starts. It provides them to Falco by using an emptyDir volume.
        install:
          enabled: true
          # -- Extra environment variables that will be passed to the falcoctl-artifact-install init container.
          env: {}
          # -- Arguments to pass to the falcoctl-artifact-install init container.
          args: ["--verbose"]
          # -- Resources requests and limits for the falcoctl-artifact-install init container.
          resources: {}
          # -- Security context for the falcoctl init container.
          securityContext: {}
        # -- Runs the "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for
        # updates given a list of artifacts. If an update is found, it downloads and installs it in a shared folder (emptyDir)
        # that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the
        # correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks that an artifact
        # is compatible with the running version of Falco before installing it.
        follow:
          enabled: true
          # -- Extra environment variables that will be passed to the falcoctl-artifact-follow sidecar container.
          env: {}
          # -- Arguments to pass to the falcoctl-artifact-follow sidecar container.
          args: ["--verbose"]
          # -- Resources requests and limits for the falcoctl-artifact-follow sidecar container.
          resources: {}
          # -- Security context for the falcoctl-artifact-follow sidecar container.
          securityContext: {}
      # -- Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers.
      config:
        # -- List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see:
        # https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview
        indexes:
        - name: falcosecurity
          url: https://falcosecurity.github.io/falcoctl/index.yaml
        # -- Configuration used by the artifact commands.
        artifact:
          # -- List of artifact types that falcoctl will handle. If a configured ref resolves to an artifact whose type is
          # not contained in the list, falcoctl will refuse to download and install that artifact.
          allowedTypes:
          - rulesfile
          install:
            # -- Do not resolve the dependencies for artifacts. By default this is true, but for our use case we disable it.
            resolveDeps: false
            # -- List of artifacts to be installed by the falcoctl init container.
            refs: [k8saudit-rules:0.5]
            # -- Directory where the *rulesfiles* are saved. The path is relative to the container, which in this case is an
            # emptyDir mounted also by the Falco pod.
            rulesfilesDir: /rulesfiles
            # -- Same as the one above but for the *plugins*.
            pluginsDir: /plugins
          follow:
            # -- List of artifacts to be followed by the falcoctl sidecar container.
            refs: [k8saudit-rules:0.5]
            # -- Directory where the *rulesfiles* are saved. The path is relative to the container, which in this case is an
            # emptyDir mounted also by the Falco pod.
            rulesfilesDir: /rulesfiles
            # -- Same as the one above but for the *plugins*.
            pluginsDir: /plugins
    EOF
    ```

    * Set `falcoctl.artifact.install.enabled=true` to install the *rulesfiles* of the loaded plugins. Configure **falcoctl** to install the *rulesfiles* of the plugins you are loading with Falco. For example, if you are loading the **k8saudit** plugin, then you need to set `falcoctl.config.artifact.install.refs=[k8saudit-rules:0.5]`. When Falco is deployed, the **falcoctl** init container will download the specified artifacts based on their tag.
    * Set `falcoctl.artifact.follow.enabled=true` to keep the *rulesfiles* of the loaded plugins up to date.
    * Proceed to upgrade your Falco release by running:

    ```bash
    helm upgrade falco falcosecurity/falco \
        --namespace=falco \
        --reuse-values \
        --values=./falcoctl-values.yaml
    ```

5. Falco with **multiple sources** enabled (syscalls + plugins):
    1. Upgrading Falco to the new version:

        ```bash
        helm upgrade falco falcosecurity/falco \
            --namespace=falco \
            --reuse-values \
            --set falcoctl.artifact.install.enabled=false \
            --set falcoctl.artifact.follow.enabled=false
        ```

    2. Upgrading Falco and leveraging **falcoctl** for rules and plugins. Refer to point 4. for the **falcoctl** configuration.

### Rulesfiles

Starting from `v3.0.0`, the chart drops the bundled **rulesfiles**. Previous versions used to create a configmap containing the following **rulesfiles**:
* application_rules.yaml
* aws_cloudtrail_rules.yaml
* falco_rules.local.yaml
* falco_rules.yaml
* k8s_audit_rules.yaml

The reason why we are dropping them is pretty simple: the files are already shipped within the Falco image and do not bring any benefit. On the other hand, we had to manually update those files for each Falco release.

For users out there, do not worry, we have you covered. As said before, the **rulesfiles** are already shipped inside the Falco image. Still, this solution has some drawbacks: users have to wait for the next release of Falco to get the latest version of those **rulesfiles**, or they have to manually update them by using the [custom rules](./README.md#loading-custom-rules).

We came up with a better solution, and that is **falcoctl**. Users can configure the **falcoctl** tool to fetch and install the latest **rulesfiles** as provided by the *falcosecurity* organization. For more info, please check the **falcoctl** section.

**NOTE**: if any user (wrongly) used to customize those files before deploying Falco, please switch to using the [custom rules](./README.md#loading-custom-rules).

### Drop support for `falcosecurity/falco` image

Starting from version `v2.0.0` of the chart, the `falcosecurity/falco-no-driver` image is the default. We were still supporting the `falcosecurity/falco` image in `v2.0.0`, but in `v2.2.0` we broke the chart when using the `falcosecurity/falco` image. For more info, please check out the following issue: https://github.com/falcosecurity/charts/issues/419

#### Driver-loader simplified logic

There is only one switch to **enable/disable** the driver-loader init container: `driver.loader.enabled=true`. This simplification comes as a direct consequence of dropping support for the `falcosecurity/falco` image. For more info: https://github.com/falcosecurity/charts/issues/418
1047
falco/CHANGELOG.md
Normal file
File diff suppressed because it is too large
9
falco/Chart.lock
Normal file
@@ -0,0 +1,9 @@
dependencies:
- name: falcosidekick
  repository: https://falcosecurity.github.io/charts
  version: 0.7.15
- name: k8s-metacollector
  repository: https://falcosecurity.github.io/charts
  version: 0.1.7
digest: sha256:b1aa7f7bdaae7ea209e1be0f7e81b9dae7ec11c2a5ab0f18c2e590f847db3e8a
generated: "2024-03-14T08:54:41.502551723Z"
28
falco/Chart.yaml
Normal file
@@ -0,0 +1,28 @@
apiVersion: v2
appVersion: 0.37.1
dependencies:
- condition: falcosidekick.enabled
  name: falcosidekick
  repository: https://falcosecurity.github.io/charts
  version: 0.7.15
- condition: collectors.kubernetes.enabled
  name: k8s-metacollector
  repository: https://falcosecurity.github.io/charts
  version: 0.1.*
description: Falco
home: https://falco.org
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/falco/horizontal/color/falco-horizontal-color.svg
keywords:
- monitoring
- security
- alerting
- metric
- troubleshooting
- run-time
maintainers:
- email: cncf-falco-dev@lists.cncf.io
  name: The Falco Authors
name: falco
sources:
- https://github.com/falcosecurity/falco
version: 4.2.5
2
falco/OWNERS
Normal file
@@ -0,0 +1,2 @@
emeritus_approvers:
- bencer
589
falco/README.gotmpl
Normal file
@@ -0,0 +1,589 @@
# Falco

[Falco](https://falco.org) is a *Cloud Native Runtime Security* tool designed to detect anomalous activity in your applications. You can use Falco to monitor the runtime security of your Kubernetes applications and internal components.

## Introduction

The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. Based on the configuration in the [values.yaml](./values.yaml) file, the chart will render and install the required k8s objects. Keep in mind that Falco can be deployed in your cluster using a `daemonset` or a `deployment`. See the next sections for more info.

## Attention

Before installing Falco in a Kubernetes cluster, a user should check that the kernel version used in the nodes is supported by the community. Also, before reporting any issue with Falco (missing kernel image, CrashLoopBackOff and similar), make sure to read the [about the driver](#about-the-driver) section and adjust your setup as required.

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with the release name `falco` in namespace `falco` run:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco
```

After a few minutes Falco instances should be running on all your nodes. The status of the Falco pods can be inspected through *kubectl*:
```bash
kubectl get pods -n falco -o wide
```
If everything went smoothly, you should observe an output similar to the following, indicating that all Falco instances are up and running in your cluster:

```bash
NAME          READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
falco-57w7q   1/1     Running   0          3m12s   10.244.0.1   control-plane   <none>           <none>
falco-h4596   1/1     Running   0          3m12s   10.244.1.2   worker-node-1   <none>           <none>
falco-kb55h   1/1     Running   0          3m12s   10.244.2.3   worker-node-2   <none>           <none>
```
The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in [values.yaml](./values.yaml) of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod on each node.
> **Tip**: List Falco releases using `helm list -n falco`; a release is a name used to track a specific deployment.

### Falco, Event Sources and Kubernetes

Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel**, trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in Kubernetes clusters. Since Falco 0.32.0, all the code related to the k8s Audit Logs was removed from Falco and ported to a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). At the time being, Falco supports different event sources coming from **plugins** or **drivers** (system events).

Note that **a Falco instance can handle multiple event sources in parallel**. You can deploy Falco leveraging **drivers** for syscall events while at the same time loading **plugins**. A step-by-step guide on how to deploy Falco with multiple sources can be found [here](https://falco.org/docs/getting-started/third-party/learning/#falco-with-multiple-sources).

#### About Drivers

Falco needs a **driver** to analyze the system workload and pass security events to userspace. The supported drivers are:

* [Kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module)
* [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe)
* [Modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe)

The driver should be installed on the node where Falco is running. The _kernel module_ (default option) and the _eBPF probe_ are installed on the node through an *init container* (i.e. `falco-driver-loader`) that tries to download a prebuilt driver, or builds it on-the-fly as a fallback. The _modern eBPF probe_ doesn't require an init container because it is shipped directly inside the Falco binary. However, the _modern eBPF probe_ requires [recent BPF features](https://falco.org/docs/event-sources/kernel/#modern-ebpf-probe).

##### Pre-built drivers

The [kernel-crawler](https://github.com/falcosecurity/kernel-crawler) automatically discovers kernel versions and flavors. At the time being, it runs weekly. We have a site where users can check the discovered kernel flavors and versions, [for example for Amazon Linux 2](https://falcosecurity.github.io/kernel-crawler/?arch=x86_64&target=AmazonLinux2).

The discovery of a kernel version by the [kernel-crawler](https://falcosecurity.github.io/kernel-crawler/) does not imply that pre-built kernel modules and bpf probes are available. That is because, once the kernel-crawler has discovered new kernel versions, the drivers need to be built by jobs running on our [Driver Build Grid infra](https://github.com/falcosecurity/test-infra#dbg). Please keep in mind that the building process is based on best effort. Users can check the existence of prebuilt modules at the following [link](https://download.falco.org/driver/site/index.html?lib=3.0.1%2Bdriver&target=all&arch=all&kind=all).

##### Building the driver on the fly (fallback)

If a prebuilt driver is not available for your distribution/kernel, you can build the driver yourself, or install the kernel headers on the nodes so that the init container (`falco-driver-loader`) will try to build the driver on the fly.

Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation.

##### Selecting a different driver loader image

Note that since Falco 0.36.0 and Helm chart version 3.7.0, the driver loader image has been updated to be compatible with newer kernels (5.x and above), meaning that if you have an older kernel version and you are trying to build the kernel module, you may experience issues. In that case you can use the `falco-driver-loader-legacy` image to get the previous version of the toolchain. To do so, set the appropriate value, i.e. `--set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy`.

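Put together, an install on an older kernel might look like this (a sketch composed from the flag above; namespace and release name are arbitrary):

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy
```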
#### About Plugins
[Plugins](https://falco.org/docs/plugins/) are used to extend Falco to support new **data sources**. The current **plugin framework** supports *plugins* with the following *capabilities*:

* Event sourcing capability;
* Field extraction capability.

Plugin capabilities are *composable*: we can have a single plugin with both capabilities. Or, on the other hand, we can load two different plugins, each with its own capability, one plugin as a source of events and another as an extractor. A good example of this is the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying them both we have support for the **K8s Audit Logs** in Falco.

Note that **the driver is not required when using plugins**.

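As an illustration only (the exact values and paths below are assumptions; the chart's preset `values-k8saudit.yaml` is the authoritative example), loading the two plugins mentioned above looks roughly like:

```yaml
falco:
  plugins:
    # k8saudit sources the audit events; json extracts fields from them.
    - name: k8saudit
      library_path: libk8saudit.so
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
  # Only plugins listed here are actually loaded at startup.
  load_plugins: [k8saudit, json]
```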
#### About gVisor
gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. For more information please consult the [official docs](https://gvisor.dev/docs/). In version `0.32.1`, Falco first introduced support for gVisor by leveraging the stream of system call information coming from gVisor.
Falco requires the version of [runsc](https://gvisor.dev/docs/user_guide/install/) to be equal to or above `20220704.0`. The following snippet shows the gVisor configuration variables found in [values.yaml](./values.yaml):
```yaml
driver:
  gvisor:
    enabled: true
    runsc:
      path: /home/containerd/usr/local/sbin
      root: /run/containerd/runsc
      config: /run/containerd/runsc/config.toml
```
Falco uses the [runsc](https://gvisor.dev/docs/user_guide/install/) binary to interact with sandboxed containers. The following variables need to be set:
* `runsc.path`: absolute path of the `runsc` binary on the k8s nodes;
* `runsc.root`: absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco, since `runsc` stores there the information about the workloads it handles;
* `runsc.config`: absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence.

If you want to know more about how Falco uses those configuration paths, please have a look at the `falco.gvisor.initContainer` helper in [helpers.tpl](./templates/_helpers.tpl).
A preset `values.yaml` file, [values-gvisor-gke.yaml](./values-gvisor-gke.yaml), is provided and can be used as-is to deploy Falco with gVisor support in a [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods) cluster. It is also a good starting point for custom deployments.

##### Example: running Falco on GKE, with or without gVisor-enabled pods

If you use GKE with k8s version at least `1.24.4-gke.1800` or `1.25.0-gke.200` with gVisor sandboxed pods, you can install a Falco instance to monitor them with, e.g.:

```bash
helm install falco-gvisor falcosecurity/falco \
    --create-namespace \
    --namespace falco-gvisor \
    -f https://raw.githubusercontent.com/falcosecurity/charts/master/charts/falco/values-gvisor-gke.yaml
```

Note that the instance of Falco above will only monitor gVisor sandboxed workloads on gVisor-enabled node pools. If you also need to monitor regular workloads on regular node pools you can use the eBPF driver as usual:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=ebpf
```

The two instances of Falco will operate independently and can be installed, uninstalled or configured as needed. If you were already monitoring your regular node pools with eBPF you don't need to reinstall it.

##### Falco+gVisor additional resources
An exhaustive blog post about Falco and gVisor can be found on the [Falco blog](https://falco.org/blog/intro-gvisor-falco/).
If you need help setting up gVisor in your environment, please have a look at the [gVisor official docs](https://gvisor.dev/docs/user_guide/quick_start/kubernetes/).

### About Falco Artifacts
Historically, **rules files** and **plugins** used to be shipped inside the Falco docker image and/or inside the chart. Starting from version `v3.0.0` of the chart, the [**falcoctl tool**](https://github.com/falcosecurity/falcoctl) can be used to install/update **rules files** and **plugins**. When referring to such objects we will use the term **artifact**. For more info, please check out the following [proposal](https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md).

The default configuration of the chart for new installations is to use the **falcoctl** tool to handle **artifacts**. The chart will deploy two new containers alongside the Falco one:
* `falcoctl-artifact-install`, an init container that makes sure to install the configured **artifacts** before the Falco container starts;
* `falcoctl-artifact-follow`, a sidecar container that periodically checks for new artifacts (currently only *falco-rules*) and downloads them.

For more info on how to enable/disable and configure the **falcoctl** tool, check out the config values [here](./README.md#Configuration) and the [upgrading notes](./BREAKING-CHANGES.md#300).

### Deploying Falco in Kubernetes
After clarifying the different [**event sources**](#falco-event-sources-and-kubernetes) and how they are consumed by Falco using the **drivers** and the **plugins**, let us now discuss how Falco is deployed in Kubernetes.

The chart deploys Falco using a `daemonset` or a `deployment`, depending on the **event sources**.

#### Daemonset
When using the [drivers](#about-the-driver), Falco is deployed as a `daemonset`. By using a `daemonset`, k8s ensures that a Falco instance will be running on each of our nodes, even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster.

**Kernel module**
To run Falco with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) you can use the default values of the helm chart:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco
```

**eBPF probe**

To run Falco with the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=ebpf
```

There are other configurations related to the eBPF probe; for more info please check the [values.yaml](./values.yaml) file. After you have made your changes to the configuration file you just need to run:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace "your-custom-name-space" \
    -f "path-to-custom-values.yaml-file"
```

**Modern eBPF probe**

To run Falco with the [modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe-experimental) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=modern_ebpf
```

#### Deployment
|
||||
In the scenario when Falco is used with **plugins** as data sources, then the best option is to deploy it as a k8s `deployment`. **Plugins** could be of two types, the ones that follow the **push model** or the **pull model**. A plugin that adopts the firs model expects to receive the data from a remote source in a given endpoint. They just expose and endpoint and wait for data to be posted, for example [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such way. On the other hand other plugins that abide by the **pull model** retrieves the data from a given remote service.
|
||||
The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins:
|
||||
|
||||
* need to be reachable when ingesting logs directly from remote services;
|
||||
* need only one active replica, otherwise events will be sent/received to/from different Falco instances;
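
A minimal values fragment selecting the deployment mode, using the same `controller` keys shown in the k8saudit example later in this document:

```yaml
controller:
  kind: deployment
  deployment:
    # One replica is usually enough; more replicas would receive/send
    # duplicate events when used with plugins.
    replicas: 1
```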

## Uninstalling the Chart

To uninstall a Falco release from your Kubernetes cluster always use helm. It will take care of removing all the components deployed by the chart and clean up your environment. The following command will remove a release called `falco` in the namespace `falco`:

```bash
helm uninstall falco --namespace falco
```

## Showing logs generated by Falco container

There are many reasons why we may need to inspect the messages emitted by the Falco container. When deployed in Kubernetes, the Falco logs can be inspected through:
```bash
kubectl logs -n falco falco-pod-name
```
where `falco-pod-name` is the name of the Falco pod running in your cluster.

The command described above will display only the logs emitted by Falco up to the moment you run the command. The `-f` flag comes in handy when we are doing live testing or debugging and we want to see the Falco logs as soon as they are emitted:
```bash
kubectl logs -f -n falco falco-pod-name
```
The `-f (--follow)` flag follows the logs and live-streams them to your terminal. It is really useful when you are debugging a new rule and want to make sure that the rule is triggered when certain actions are performed in the system.

If we need to access the logs of a previous Falco run, we do that by adding the `-p (--previous)` flag:
```bash
kubectl logs -p -n falco falco-pod-name
```
A scenario where we need the `-p (--previous)` flag is when a Falco pod has restarted and we want to check what went wrong.

### Enabling real time logs

By default in Falco the output is buffered. When live-streaming logs we will notice delays between the log output (rules triggering) and the event happening.

In order to enable the logs to be emitted without delays, you need to set `.Values.tty=true` in the [values.yaml](./values.yaml) file.
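
For instance, in a custom values file this is simply:

```yaml
# Attach a TTY to the Falco container so output is flushed as soon as it is
# emitted instead of being buffered.
tty: true
```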

## K8s-metacollector

Starting from Falco `0.37` the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed.
A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it.
The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata
from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.

The Kubernetes resources for which metadata will be collected and sent to Falco are:
* pods;
* namespaces;
* deployments;
* replicationcontrollers;
* replicasets;
* services;

### Plugin

Since the *k8s-metacollector* is standalone and deployed in the cluster as a deployment, Falco instances need to connect to the component
in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in.
The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information
in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the
associated Falco instance is deployed, resulting in node-level granularity.

### Exported Fields: Old and New

The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the
[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still
available in Falco, for compatibility reasons, but most of the fields will return `N/A`. The following fields are still
usable and will return meaningful data when the `container runtime collectors` are enabled:
* k8s.pod.name;
* k8s.pod.id;
* k8s.pod.label;
* k8s.pod.labels;
* k8s.pod.ip;
* k8s.pod.cni.json;
* k8s.pod.namespace.name;

The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new
[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new
`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco.
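
As an illustrative sketch (the rule name and condition are made up for this example; `k8smeta.pod.name` and `k8smeta.ns.name` are taken from the plugin's supported fields, and the plugin must be loaded for them to resolve), a custom rule could enrich its output with the new fields like this:

```yaml
- rule: Shell spawned in a pod (example)
  desc: Illustrative rule decorating a spawned shell with k8smeta fields
  condition: spawned_process and proc.name in (bash, sh)
  output: Shell spawned (pod=%k8smeta.pod.name ns=%k8smeta.ns.name command=%proc.cmdline)
  priority: NOTICE
```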

### Enabling the k8s-metacollector

The following command will deploy Falco + k8s-metacollector + k8smeta:
```bash
helm install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set collectors.kubernetes.enabled=true
```

## Loading custom rules

Falco ships with a nice default ruleset. It is a good starting point, but sooner or later we are going to need to add custom rules which fit our needs.

So the question is: how can we load custom rules in our Falco deployment?

We are going to create a file that contains custom rules so that we can keep it in a Git repository:

```bash
cat custom-rules.yaml
```

And the file looks like this one:

```yaml
customRules:
  rules-traefik.yaml: |-
    - macro: traefik_consider_syscalls
      condition: (evt.num < 0)

    - macro: app_traefik
      condition: container and container.image startswith "traefik"

    # Restricting listening ports to selected set

    - list: traefik_allowed_inbound_ports_tcp
      items: [443, 80, 8080]

    - rule: Unexpected inbound tcp connection traefik
      desc: Detect inbound traffic to traefik using tcp on a port outside of expected set
      condition: inbound and evt.rawres >= 0 and not fd.sport in (traefik_allowed_inbound_ports_tcp) and app_traefik
      output: Inbound network connection to traefik on unexpected port (command=%proc.cmdline pid=%proc.pid connection=%fd.name sport=%fd.sport user=%user.name %container.info image=%container.image)
      priority: NOTICE

    # Restricting spawned processes to selected set

    - list: traefik_allowed_processes
      items: ["traefik"]

    - rule: Unexpected spawned process traefik
      desc: Detect a process started in a traefik container outside of an expected set
      condition: spawned_process and not proc.name in (traefik_allowed_processes) and app_traefik
      output: Unexpected process spawned in traefik container (command=%proc.cmdline pid=%proc.pid user=%user.name %container.info image=%container.image)
      priority: NOTICE
```

The next step is to use the custom-rules.yaml file when installing the Falco Helm chart:

```bash
helm install falco -f custom-rules.yaml falcosecurity/falco
```

And we will see in our logs something like:

```bash
Tue Jun 5 15:08:57 2018: Loading rules from file /etc/falco/rules.d/rules-traefik.yaml:
```

This means that our Falco installation has loaded the rules and is ready to help us.

## Kubernetes Audit Log

The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin. It is entirely up to you to set up the [webhook backend](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#webhook-backend) of the Kubernetes API server to forward the Audit Log events to the Falco listening port.

The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin:
```yaml
# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
  enabled: false

# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
  enabled: false

# -- Deploy Falco as a deployment. One instance of Falco is enough. Anyway the number of replicas is configurable.
controller:
  kind: deployment
  deployment:
    # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
    # For more info check the section on Plugins in the README.md file.
    replicas: 1

falcoctl:
  artifact:
    install:
      # -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects.
      enabled: true
    follow:
      # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feeds such as k8saudit-rules.
      enabled: true
  config:
    artifact:
      install:
        # -- Resolve the dependencies for artifacts.
        resolveDeps: true
        # -- List of artifacts to be installed by the falcoctl init container.
        # Only the rulesfile; the plugin will be installed as a dependency.
        refs: [k8saudit-rules:0.5]
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [k8saudit-rules:0.5]

services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765 # See plugin open_params
        nodePort: 30007
        protocol: TCP

falco:
  rules_file:
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config: ""
      # maxEventBytes: 1048576
      # sslCertificate: /etc/falco/falco.pem
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
  # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
  load_plugins: [k8saudit, json]
```
Here is the explanation of the above configuration:
* disable the drivers by setting `driver.enabled=false`;
* disable the collectors by setting `collectors.enabled=false`;
* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`;
* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
* enable the `falcoctl-artifact-install` init container;
* configure `falcoctl-artifact-install` to install the required plugins;
* enable the `falcoctl-artifact-follow` sidecar container to keep the rulesfiles up to date;
* load the correct ruleset for our plugin in `falco.rules_file`;
* configure the plugins to be loaded, in this case `k8saudit` and `json`;
* and finally add our plugins to `load_plugins` so that Falco loads them.

The configuration can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file, ready to be used:

```bash
#make sure the falco namespace exists
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
-f ./values-k8saudit.yaml
```
After a few minutes a Falco instance should be running on your cluster. The status of the Falco pod can be inspected through *kubectl*:
```bash
kubectl get pods -n falco -o wide
```
If everything went smoothly, you should observe an output similar to the following, indicating that the Falco instance is up and running:

```bash
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE            NOMINATED NODE   READINESS GATES
falco-64484d9579-qckms   1/1     Running   0          101s   10.244.2.2   worker-node-2   <none>           <none>
```

Furthermore, you can check the Falco logs through *kubectl logs*:

```bash
kubectl logs -n falco falco-64484d9579-qckms
```
In the logs you should see something similar to the following, indicating that Falco has loaded the required plugins:
```bash
Fri Jul 8 16:07:24 2022: Falco version 0.32.0 (driver version 39ae7d40496793cf3d3e7890c9bbdc202263836b)
Fri Jul 8 16:07:24 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Fri Jul 8 16:07:24 2022: Loading plugin (k8saudit) from file /usr/share/falco/plugins/libk8saudit.so
Fri Jul 8 16:07:24 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so
Fri Jul 8 16:07:24 2022: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Fri Jul 8 16:07:24 2022: Starting internal webserver, listening on port 8765
```
*Note that the support for the dynamic backend (also known as the `AuditSink` object) has been deprecated in Kubernetes and removed from this chart.*
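
To smoke-test the webhook wiring you can post a synthetic audit event to the service. The snippet below is a sketch: the `EventList` shape follows the `audit.k8s.io/v1` API, and the node IP placeholder and NodePort are assumptions based on the `k8saudit-webhook` service configured above.

```python
import json
import urllib.request


def build_audit_event_list(verb: str, resource: str, username: str) -> dict:
    """Build a minimal audit.k8s.io/v1 EventList payload for testing."""
    return {
        "kind": "EventList",
        "apiVersion": "audit.k8s.io/v1",
        "items": [
            {
                "level": "Metadata",
                "stage": "ResponseComplete",
                "verb": verb,
                "user": {"username": username},
                "objectRef": {"resource": resource},
            }
        ],
    }


def post_event(url: str, payload: dict) -> int:
    """POST the payload to the k8saudit webhook; returns the HTTP status."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

With a reachable node, `post_event("http://<node-ip>:30007/k8s-audit", build_audit_event_list("create", "pods", "system:anonymous"))` should make Falco evaluate the synthetic event against the loaded k8saudit rules.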

### Manual setup with NodePort on kOps

Using `kops edit cluster`, ensure these options are present, then run `kops update cluster` and `kops rolling-update cluster`:
```yaml
spec:
  kubeAPIServer:
    auditLogMaxBackups: 1
    auditLogMaxSize: 10
    auditLogPath: /var/log/k8s-audit.log
    auditPolicyFile: /srv/kubernetes/assets/audit-policy.yaml
    auditWebhookBatchMaxWait: 5s
    auditWebhookConfigFile: /srv/kubernetes/assets/webhook-config.yaml
  fileAssets:
    - content: |
        # content of the webserver CA certificate
        # remove this fileAsset and certificate-authority from webhook-config if using http
      name: audit-ca.pem
      roles:
        - Master
    - content: |
        apiVersion: v1
        kind: Config
        clusters:
          - name: falco
            cluster:
              # remove 'certificate-authority' when using 'http'
              certificate-authority: /srv/kubernetes/assets/audit-ca.pem
              server: https://localhost:32765/k8s-audit
        contexts:
          - context:
              cluster: falco
              user: ""
            name: default-context
        current-context: default-context
        preferences: {}
        users: []
      name: webhook-config.yaml
      roles:
        - Master
    - content: |
        # ... paste audit-policy.yaml here ...
        # https://raw.githubusercontent.com/falcosecurity/plugins/master/plugins/k8saudit/configs/audit-policy.yaml
      name: audit-policy.yaml
      roles:
        - Master
```

## Enabling gRPC

The Falco gRPC server and the Falco gRPC Outputs APIs are not enabled by default.
Moreover, Falco supports running a gRPC server with two main binding types:
- Over a local **Unix socket** with no authentication
- Over the **network** with mandatory mutual TLS authentication (mTLS)

> **Tip**: Once gRPC is enabled, you can deploy [falco-exporter](https://github.com/falcosecurity/falco-exporter) to export metrics to Prometheus.

### gRPC over unix socket (default)

The preferred way to use gRPC is over a Unix socket.

To install Falco with gRPC enabled over a **unix socket**, run:

```shell
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set falco.grpc.enabled=true \
--set falco.grpc_output.enabled=true
```
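
The same settings can be kept in a custom values file instead of `--set` flags; a minimal sketch using the keys from the command above:

```yaml
falco:
  grpc:
    enabled: true
  grpc_output:
    enabled: true
```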

### gRPC over network

The gRPC server over the network can only be used with mutual authentication between the clients and the server using TLS certificates.
How to generate the certificates is [documented here](https://falco.org/docs/grpc/#generate-valid-ca).

To install Falco with gRPC enabled over the **network**, run:

```shell
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set falco.grpc.enabled=true \
--set falco.grpc_output.enabled=true \
--set falco.grpc.unixSocketPath="" \
--set-file certs.server.key=/path/to/server.key \
--set-file certs.server.crt=/path/to/server.crt \
--set-file certs.ca.crt=/path/to/ca.crt
```

## Enable http_output

HTTP output enables Falco to send events through HTTP(S) via the following configuration:

```shell
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set falco.http_output.enabled=true \
--set falco.http_output.url="http://some.url/some/path/" \
--set falco.json_output=true \
--set falco.json_include_output_property=true
```
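
For local testing, a minimal HTTP receiver can stand in for the configured endpoint and collect the JSON events Falco posts. This is a sketch, not part of the chart; the port and path are assumptions that must match `falco.http_output.url`.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received_events = []


class FalcoEventHandler(BaseHTTPRequestHandler):
    """Accept Falco http_output POSTs and store the decoded JSON events."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        received_events.append(event)
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        # Keep the console quiet; events are collected in received_events.
        pass


def serve(port: int = 8080) -> HTTPServer:
    """Create a server bound to all interfaces; call serve_forever() to run it."""
    return HTTPServer(("", port), FalcoEventHandler)
```

Running `serve(8080).serve_forever()` on a host reachable from the cluster and pointing `falco.http_output.url` at it lets you inspect the raw events Falco emits.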

Additionally, you can enable mTLS communication and load HTTP client cryptographic material via:

```shell
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set falco.http_output.enabled=true \
--set falco.http_output.url="https://some.url/some/path/" \
--set falco.json_output=true \
--set falco.json_include_output_property=true \
--set falco.http_output.mtls=true \
--set falco.http_output.client_cert="/etc/falco/certs/client/client.crt" \
--set falco.http_output.client_key="/etc/falco/certs/client/client.key" \
--set falco.http_output.ca_cert="/etc/falco/certs/client/ca.crt" \
--set-file certs.client.key="/path/to/client.key",certs.client.crt="/path/to/client.crt",certs.ca.crt="/path/to/cacert.crt"
```

Alternatively, instead of setting the files directly via `--set-file`, you can mount an existing secret by setting the `certs.existingClientSecret` value.

## Deploy Falcosidekick with Falco

[`Falcosidekick`](https://github.com/falcosecurity/falcosidekick) can be installed with `Falco` by setting `--set falcosidekick.enabled=true`. This setting automatically configures all options of `Falco` for working with `Falcosidekick`.
All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick#configuration).
For example, to enable the deployment of [`Falcosidekick-UI`](https://github.com/falcosecurity/falcosidekick-ui), add `--set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true`.

If you use a proxy in your cluster, the requests between `Falco` and `Falcosidekick` might be captured; to avoid that, use the full FQDN of `Falcosidekick` by setting `--set falcosidekick.fullfqdn=true`.
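
The equivalent settings in a custom values file, as a sketch using the keys mentioned above:

```yaml
falcosidekick:
  enabled: true
  webui:
    enabled: true
  # Set to true if a proxy in the cluster might capture requests
  # between Falco and Falcosidekick.
  fullfqdn: false
```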

## Configuration

The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See [values.yaml](./values.yaml) for the full list.

{{ template "chart.valuesSection" . }}
758
falco/README.md
Normal file
@ -0,0 +1,758 @@

# Falco

[Falco](https://falco.org) is a *Cloud Native Runtime Security* tool designed to detect anomalous activity in your applications. You can use Falco to monitor the runtime security of your Kubernetes applications and internal components.

## Introduction

The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. Based on the configuration in the [values.yaml](./values.yaml) file, the chart will render and install the required k8s objects. Keep in mind that Falco can be deployed in your cluster using a `daemonset` or a `deployment`. See the next sections for more info.

## Attention

Before installing Falco in a Kubernetes cluster, a user should check that the kernel version used in the nodes is supported by the community. Also, before reporting any issue with Falco (missing kernel image, CrashLoopBackOff and similar), make sure to read the [about the driver](#about-the-driver) section and adjust your setup as required.

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with the release name `falco` in namespace `falco` run:

```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco
```

After a few minutes Falco instances should be running on all your nodes. The status of the Falco pods can be inspected through *kubectl*:
```bash
kubectl get pods -n falco -o wide
```
If everything went smoothly, you should observe an output similar to the following, indicating that all Falco instances are up and running in your cluster:

```bash
NAME          READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
falco-57w7q   1/1     Running   0          3m12s   10.244.0.1   control-plane   <none>           <none>
falco-h4596   1/1     Running   0          3m12s   10.244.1.2   worker-node-1   <none>           <none>
falco-kb55h   1/1     Running   0          3m12s   10.244.2.3   worker-node-2   <none>           <none>
```
The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in [values.yaml](./values.yaml) of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod on each node.
> **Tip**: List the Falco releases using `helm list -n falco`; a release is a name used to track a specific deployment.

### Falco, Event Sources and Kubernetes

Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel**, trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in Kubernetes clusters. Since Falco 0.32.0 all the code related to the k8s Audit Logs was removed from Falco and ported into a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). Currently, Falco supports different event sources coming from **plugins** or **drivers** (system events).

Note that **a Falco instance can handle multiple event sources in parallel**. You can deploy Falco leveraging **drivers** for syscall events and at the same time loading **plugins**. A step by step guide on how to deploy Falco with multiple sources can be found [here](https://falco.org/docs/getting-started/third-party/learning/#falco-with-multiple-sources).

#### About Drivers

Falco needs a **driver** to analyze the system workload and pass security events to userspace. The supported drivers are:

* [Kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module)
* [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe)
* [Modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe)

The driver should be installed on the node where Falco is running. The _kernel module_ (default option) and the _eBPF probe_ are installed on the node through an *init container* (i.e. `falco-driver-loader`) that tries to download a prebuilt driver or build it on-the-fly as a fallback. The _Modern eBPF probe_ doesn't require an init container because it is shipped directly inside the Falco binary. However, the _Modern eBPF probe_ requires [recent BPF features](https://falco.org/docs/event-sources/kernel/#modern-ebpf-probe).

##### Pre-built drivers

The [kernel-crawler](https://github.com/falcosecurity/kernel-crawler) automatically discovers kernel versions and flavors. Currently, it runs weekly. We have a site where users can check for the discovered kernel flavors and versions, for [example for Amazon Linux 2](https://falcosecurity.github.io/kernel-crawler/?arch=x86_64&target=AmazonLinux2).

The discovery of a kernel version by the [kernel-crawler](https://falcosecurity.github.io/kernel-crawler/) does not imply that pre-built kernel modules and bpf probes are available. That is because once kernel-crawler has discovered new kernel versions, the drivers need to be built by jobs running on our [Driver Build Grid infra](https://github.com/falcosecurity/test-infra#dbg). Please keep in mind that the building process is based on best effort. Users can check the existence of prebuilt modules at the following [link](https://download.falco.org/driver/site/index.html?lib=3.0.1%2Bdriver&target=all&arch=all&kind=all).

##### Building the driver on the fly (fallback)

If a prebuilt driver is not available for your distribution/kernel, users can build the driver themselves or install the kernel headers on the nodes, and the init container (`falco-driver-loader`) will try to build the driver on the fly.

Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation.

##### Selecting a different driver loader image

Note that since Falco 0.36.0 and Helm chart version 3.7.0, the driver loader image has been updated to be compatible with newer kernels (5.x and above), meaning that if you have an older kernel version and you are trying to build the kernel module, you may experience issues. In that case you can use the `falco-driver-loader-legacy` image to use the previous version of the toolchain. To do so, set the appropriate value, i.e. `--set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy`.
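
The same override expressed in a values file (a sketch, with the key path taken from the `--set` flag above):

```yaml
driver:
  loader:
    initContainer:
      image:
        # Toolchain image compatible with older kernels.
        repository: falcosecurity/falco-driver-loader-legacy
```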

#### About Plugins

[Plugins](https://falco.org/docs/plugins/) are used to extend Falco to support new **data sources**. The current **plugin framework** supports *plugins* with the following *capabilities*:

* Event sourcing capability;
* Field extraction capability;

Plugin capabilities are *composable*: we can have a single plugin with both capabilities, or we can load two different plugins, each with its own capability, one plugin as a source of events and another as an extractor. A good example of this is the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying both of them we have support for the **K8s Audit Logs** in Falco.

Note that **the driver is not required when using plugins**.

#### About gVisor

gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. For more information please consult the [official docs](https://gvisor.dev/docs/). In version `0.32.1`, Falco first introduced support for gVisor by leveraging the stream of system call information coming from gVisor.
Falco requires the version of [runsc](https://gvisor.dev/docs/user_guide/install/) to be equal to or above `20220704.0`. The following snippet shows the gVisor configuration variables found in [values.yaml](./values.yaml):
```yaml
driver:
  gvisor:
    enabled: true
    runsc:
      path: /home/containerd/usr/local/sbin
      root: /run/containerd/runsc
      config: /run/containerd/runsc/config.toml
```
Falco uses the [runsc](https://gvisor.dev/docs/user_guide/install/) binary to interact with sandboxed containers. The following variables need to be set:
* `runsc.path`: absolute path of the `runsc` binary on the k8s nodes;
* `runsc.root`: absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information about the workloads it handles;
* `runsc.config`: absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence.

If you want to know more about how Falco uses those configuration paths, please have a look at the `falco.gvisor.initContainer` helper in [helpers.tpl](./templates/_helpers.tpl).
A preset `values.yaml` file, [values-gvisor-gke.yaml](./values-gvisor-gke.yaml), is provided and can be used as it is to deploy Falco with gVisor support in a [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods) cluster. It is also a good starting point for custom deployments.

##### Example: running Falco on GKE, with or without gVisor-enabled pods

If you use GKE with k8s version at least `1.24.4-gke.1800` or `1.25.0-gke.200` with gVisor sandboxed pods, you can install a Falco instance to monitor them with, e.g.:

```
helm install falco-gvisor falcosecurity/falco \
--create-namespace \
--namespace falco-gvisor \
-f https://raw.githubusercontent.com/falcosecurity/charts/master/charts/falco/values-gvisor-gke.yaml
```

Note that the instance of Falco above will only monitor gVisor sandboxed workloads on gVisor-enabled node pools. If you also need to monitor regular workloads on regular node pools you can use the eBPF driver as usual:

```
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set driver.kind=ebpf
```

The two instances of Falco will operate independently and can be installed, uninstalled or configured as needed. If you were already monitoring your regular node pools with eBPF, you don't need to reinstall it.

##### Falco+gVisor additional resources
An exhaustive blog post about Falco and gVisor can be found on the [Falco blog](https://falco.org/blog/intro-gvisor-falco/).
If you need help on how to set up gVisor in your environment, please have a look at the [gVisor official docs](https://gvisor.dev/docs/user_guide/quick_start/kubernetes/).

### About Falco Artifacts

Historically **rules files** and **plugins** used to be shipped inside the Falco docker image and/or inside the chart. Starting from version `v0.3.0` of the chart, the [**falcoctl tool**](https://github.com/falcosecurity/falcoctl) can be used to install/update **rules files** and **plugins**. When referring to such objects we will use the term **artifact**. For more info please check out the following [proposal](https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md).

The default configuration of the chart for new installations is to use the **falcoctl** tool to handle **artifacts**. The chart will deploy two new containers along with the Falco one:
* `falcoctl-artifact-install`, an init container that makes sure to install the configured **artifacts** before the Falco container starts;
* `falcoctl-artifact-follow`, a sidecar container that periodically checks for new artifacts (currently only *falco-rules*) and downloads them;

For more info on how to enable/disable and configure the **falcoctl** tool, check out the config values [here](./README.md#Configuration) and the [upgrading notes](./BREAKING-CHANGES.md#300).
|
||||
|
||||
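As a sketch, the artifacts handled by the two containers can be pinned in your values file (the `falco-rules:3` ref here is illustrative; any OCI artifact reference supported by falcoctl works):

```yaml
falcoctl:
  config:
    artifact:
      install:
        # Artifacts the init container installs before Falco starts.
        refs: [falco-rules:3]
      follow:
        # Artifacts the sidecar keeps up to date.
        refs: [falco-rules:3]
```
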
### Deploying Falco in Kubernetes

After the clarification of the different [**event sources**](#falco-event-sources-and-kubernetes) and how they are consumed by Falco using the **drivers** and the **plugins**, let us now discuss how Falco is deployed in Kubernetes.

The chart deploys Falco using a `daemonset` or a `deployment` depending on the **event sources**.

#### Daemonset

When using the [drivers](#about-the-driver), Falco is deployed as a `daemonset`. By using a `daemonset`, k8s assures that a Falco instance will be running on each of our nodes, even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster.

**Kernel module**

To run Falco with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) you can use the default values of the helm chart:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco
```

**eBPF probe**

To run Falco with the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=ebpf
```

There are other configurations related to the eBPF probe; for more info please check the [values.yaml](./values.yaml) file. After you have made your changes to the configuration file you just need to run:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace "your-custom-name-space" \
    -f "path-to-custom-values.yaml-file"
```

**Modern eBPF probe**

To run Falco with the [modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe-experimental) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=modern_ebpf
```

#### Deployment

When Falco uses **plugins** as data sources, the best option is to deploy it as a k8s `deployment`. **Plugins** can be of two types: the ones that follow the **push model** and the ones that follow the **pull model**. A plugin that adopts the push model expects to receive the data from a remote source on a given endpoint: it just exposes an endpoint and waits for data to be posted there. For example, the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin expects the data to be sent by the *k8s api server* when configured in such a way. On the other hand, plugins that abide by the **pull model** retrieve the data from a given remote service.

The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins:

* it needs to be reachable when ingesting logs directly from remote services;
* it needs only one active replica, otherwise events will be sent/received to/from different Falco instances.

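In chart terms, switching to a single-replica deployment is a values change (a minimal sketch; the same settings appear in the full k8saudit example in this README):

```yaml
controller:
  kind: deployment
  deployment:
    # One replica is enough; multiple replicas would each send/receive
    # events independently.
    replicas: 1
```
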
## Uninstalling the Chart

To uninstall a Falco release from your Kubernetes cluster, always use helm. It will take care of removing all the components deployed by the chart and cleaning up your environment. The following command will remove a release called `falco` in the namespace `falco`:

```bash
helm uninstall falco --namespace falco
```

## Showing logs generated by Falco container

There are many reasons why we would have to inspect the messages emitted by the Falco container. When deployed in Kubernetes, the Falco logs can be inspected through:

```bash
kubectl logs -n falco falco-pod-name
```

where `falco-pod-name` is the name of the Falco pod running in your cluster.

The command described above will just display the logs emitted by Falco up until the moment you run the command. The `-f (--follow)` flag comes in handy when we are doing live testing or debugging and we want to see the Falco logs as soon as they are emitted:

```bash
kubectl logs -f -n falco falco-pod-name
```

The `-f (--follow)` flag follows the logs and live streams them to your terminal. It is really useful when you are debugging a new rule and want to make sure that the rule is triggered when some actions are performed in the system.

If we need to access the logs of a previous Falco run we can do that by adding the `-p (--previous)` flag:

```bash
kubectl logs -p -n falco falco-pod-name
```

A scenario where we need the `-p (--previous)` flag is when a Falco pod has restarted and we want to check what went wrong.

### Enabling real time logs

By default the Falco output is buffered. When live streaming logs we will notice delays between the event happening (rules triggering) and the log output.

In order for the logs to be emitted without delay, you need to set `.Values.tty=true` in the [values.yaml](./values.yaml) file.

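As a sketch, the equivalent values-file change (everything else keeps its defaults):

```yaml
# Attach a tty to the Falco container so output is flushed line by line.
tty: true
```

You can then apply it with, for example, `helm upgrade --install falco falcosecurity/falco -f your-values.yaml`.
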
## K8s-metacollector

Starting from Falco `0.37` the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed.
A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it.
The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata
from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.

Kubernetes resources for which metadata will be collected and sent to Falco:

* pods;
* namespaces;
* deployments;
* replicationcontrollers;
* replicasets;
* services.

### Plugin

Since the *k8s-metacollector* is standalone, deployed in the cluster as a deployment, Falco instances need to connect to the component
in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in.
The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information
in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the
associated Falco instance is deployed, resulting in node-level granularity.

### Exported Fields: Old and New

The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the
[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still
available in Falco, for compatibility reasons, but most of the fields will return `N/A`. The following fields are still
usable and will return meaningful data when the `container runtime collectors` are enabled:

* k8s.pod.name;
* k8s.pod.id;
* k8s.pod.label;
* k8s.pod.labels;
* k8s.pod.ip;
* k8s.pod.cni.json;
* k8s.pod.namespace.name.

The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new
[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new
`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco.

### Enabling the k8s-metacollector

The following command will deploy Falco + k8s-metacollector + k8smeta:

```bash
helm install falco falcosecurity/falco \
    --namespace falco \
    --create-namespace \
    --set collectors.kubernetes.enabled=true
```

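Equivalently, as a values-file sketch (all other collector options keep their defaults):

```yaml
collectors:
  kubernetes:
    # Deploys the k8s-metacollector and loads the k8smeta plugin.
    enabled: true
```
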
## Loading custom rules

Falco ships with a nice default ruleset. It is a good starting point, but sooner or later we are going to need to add custom rules which fit our needs.

So the question is: how can we load custom rules in our Falco deployment?

We are going to create a file that contains custom rules so that we can keep it in a Git repository.

```bash
cat custom-rules.yaml
```

And the file looks like this one:

```yaml
customRules:
  rules-traefik.yaml: |-
    - macro: traefik_consider_syscalls
      condition: (evt.num < 0)

    - macro: app_traefik
      condition: container and container.image startswith "traefik"

    # Restricting listening ports to selected set

    - list: traefik_allowed_inbound_ports_tcp
      items: [443, 80, 8080]

    - rule: Unexpected inbound tcp connection traefik
      desc: Detect inbound traffic to traefik using tcp on a port outside of expected set
      condition: inbound and evt.rawres >= 0 and not fd.sport in (traefik_allowed_inbound_ports_tcp) and app_traefik
      output: Inbound network connection to traefik on unexpected port (command=%proc.cmdline pid=%proc.pid connection=%fd.name sport=%fd.sport user=%user.name %container.info image=%container.image)
      priority: NOTICE

    # Restricting spawned processes to selected set

    - list: traefik_allowed_processes
      items: ["traefik"]

    - rule: Unexpected spawned process traefik
      desc: Detect a process started in a traefik container outside of an expected set
      condition: spawned_process and not proc.name in (traefik_allowed_processes) and app_traefik
      output: Unexpected process spawned in traefik container (command=%proc.cmdline pid=%proc.pid user=%user.name %container.info image=%container.image)
      priority: NOTICE
```

So the next step is to use the custom-rules.yaml file for installing the Falco Helm chart.

```bash
helm install falco -f custom-rules.yaml falcosecurity/falco
```

And we will see in our logs something like:

```bash
Tue Jun 5 15:08:57 2018: Loading rules from file /etc/falco/rules.d/rules-traefik.yaml:
```

And this means that our Falco installation has loaded the rules and is ready to help us.

## Kubernetes Audit Log

The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin. It is entirely up to you to set up the [webhook backend](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#webhook-backend) of the Kubernetes API server to forward the Audit Log events to the Falco listening port.

The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin:
```yaml
# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
  enabled: false

# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
  enabled: false

# -- Deploy Falco as a deployment. One instance of Falco is enough. Anyway the number of replicas is configurable.
controller:
  kind: deployment
  deployment:
    # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
    # For more info check the section on Plugins in the README.md file.
    replicas: 1

falcoctl:
  artifact:
    install:
      # -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects.
      enabled: true
    follow:
      # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feeds such as the k8saudit-rules rules.
      enabled: true
  config:
    artifact:
      install:
        # -- Resolve the dependencies for artifacts.
        resolveDeps: true
        # -- List of artifacts to be installed by the falcoctl init container.
        # Only the rulesfile; the plugin will be installed as a dependency.
        refs: [k8saudit-rules:0.5]
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [k8saudit-rules:0.5]

services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765 # See plugin open_params
        nodePort: 30007
        protocol: TCP

falco:
  rules_file:
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config: ""
      # maxEventBytes: 1048576
      # sslCertificate: /etc/falco/falco.pem
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
  # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
  load_plugins: [k8saudit, json]
```
Here is the explanation of the above configuration:

* disable the drivers by setting `driver.enabled=false`;
* disable the collectors by setting `collectors.enabled=false`;
* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`;
* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
* enable the `falcoctl-artifact-install` init container;
* configure `falcoctl-artifact-install` to install the required plugins;
* enable the `falcoctl-artifact-follow` sidecar container to keep the rulesfiles up to date;
* load the correct ruleset for our plugin in `falco.rules_file`;
* configure the plugins to be loaded, in this case, the `k8saudit` and `json`;
* and finally add our plugins to `load_plugins` to be loaded by Falco.

The configuration can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file, ready to be used:
```bash
# make sure the falco namespace exists
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    -f ./values-k8saudit.yaml
```
After a few minutes a Falco instance should be running on your cluster. The status of the Falco pod can be inspected through *kubectl*:

```bash
kubectl get pods -n falco -o wide
```

If everything went smoothly, you should observe an output similar to the following, indicating that the Falco instance is up and running:

```bash
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE            NOMINATED NODE   READINESS GATES
falco-64484d9579-qckms   1/1     Running   0          101s   10.244.2.2   worker-node-2   <none>           <none>
```

Furthermore, you can check the Falco logs through *kubectl logs*:

```bash
kubectl logs -n falco falco-64484d9579-qckms
```

In the logs you should have something similar to the following, indicating that Falco has loaded the required plugins:

```bash
Fri Jul 8 16:07:24 2022: Falco version 0.32.0 (driver version 39ae7d40496793cf3d3e7890c9bbdc202263836b)
Fri Jul 8 16:07:24 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Fri Jul 8 16:07:24 2022: Loading plugin (k8saudit) from file /usr/share/falco/plugins/libk8saudit.so
Fri Jul 8 16:07:24 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so
Fri Jul 8 16:07:24 2022: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Fri Jul 8 16:07:24 2022: Starting internal webserver, listening on port 8765
```

*Note that the support for the dynamic backend (also known as the `AuditSink` object) has been deprecated in Kubernetes and removed from this chart.*

### Manual setup with NodePort on kOps

Using `kops edit cluster`, ensure these options are present, then run `kops update cluster` and `kops rolling-update cluster`:

```yaml
spec:
  kubeAPIServer:
    auditLogMaxBackups: 1
    auditLogMaxSize: 10
    auditLogPath: /var/log/k8s-audit.log
    auditPolicyFile: /srv/kubernetes/assets/audit-policy.yaml
    auditWebhookBatchMaxWait: 5s
    auditWebhookConfigFile: /srv/kubernetes/assets/webhook-config.yaml
  fileAssets:
    - content: |
        # content of the webserver CA certificate
        # remove this fileAsset and certificate-authority from webhook-config if using http
      name: audit-ca.pem
      roles:
        - Master
    - content: |
        apiVersion: v1
        kind: Config
        clusters:
          - name: falco
            cluster:
              # remove 'certificate-authority' when using 'http'
              certificate-authority: /srv/kubernetes/assets/audit-ca.pem
              server: https://localhost:32765/k8s-audit
        contexts:
          - context:
              cluster: falco
              user: ""
            name: default-context
        current-context: default-context
        preferences: {}
        users: []
      name: webhook-config.yaml
      roles:
        - Master
    - content: |
        # ... paste audit-policy.yaml here ...
        # https://raw.githubusercontent.com/falcosecurity/plugins/master/plugins/k8saudit/configs/audit-policy.yaml
      name: audit-policy.yaml
      roles:
        - Master
```
## Enabling gRPC

The Falco gRPC server and the Falco gRPC Outputs APIs are not enabled by default.
Moreover, Falco supports running a gRPC server with two main binding types:

- Over a local **Unix socket** with no authentication
- Over the **network** with mandatory mutual TLS authentication (mTLS)

> **Tip**: Once gRPC is enabled, you can deploy [falco-exporter](https://github.com/falcosecurity/falco-exporter) to export metrics to Prometheus.

### gRPC over unix socket (default)

The preferred way to use the gRPC is over a Unix socket.

To install Falco with gRPC enabled over a **unix socket**, run:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.grpc.enabled=true \
    --set falco.grpc_output.enabled=true
```

### gRPC over network

The gRPC server over the network can only be used with mutual authentication between the clients and the server using TLS certificates.
How to generate the certificates is [documented here](https://falco.org/docs/grpc/#generate-valid-ca).

To install Falco with gRPC enabled over the **network**, run:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.grpc.enabled=true \
    --set falco.grpc_output.enabled=true \
    --set falco.grpc.unixSocketPath="" \
    --set-file certs.server.key=/path/to/server.key \
    --set-file certs.server.crt=/path/to/server.crt \
    --set-file certs.ca.crt=/path/to/ca.crt
```

## Enable http_output

HTTP output enables Falco to send events through HTTP(S) via the following configuration:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.http_output.enabled=true \
    --set falco.http_output.url="http://some.url/some/path/" \
    --set falco.json_output=true \
    --set falco.json_include_output_property=true
```

Additionally, you can enable mTLS communication and load the HTTP client cryptographic material via:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.http_output.enabled=true \
    --set falco.http_output.url="https://some.url/some/path/" \
    --set falco.json_output=true \
    --set falco.json_include_output_property=true \
    --set falco.http_output.mtls=true \
    --set falco.http_output.client_cert="/etc/falco/certs/client/client.crt" \
    --set falco.http_output.client_key="/etc/falco/certs/client/client.key" \
    --set falco.http_output.ca_cert="/etc/falco/certs/client/ca.crt" \
    --set-file certs.client.key="/path/to/client.key",certs.client.crt="/path/to/client.crt",certs.ca.crt="/path/to/cacert.crt"
```

Or, instead of directly setting the files via `--set-file`, you can mount an existing volume with the `certs.existingClientSecret` value.

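For example (a sketch; `falco-client-certs` is a hypothetical pre-created secret holding `client.key`, `client.crt` and `ca.crt`):

```yaml
certs:
  # Hypothetical secret name; create it beforehand with the client
  # key, certificate and CA certificate as keys.
  existingClientSecret: falco-client-certs
```
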
## Deploy Falcosidekick with Falco

[`Falcosidekick`](https://github.com/falcosecurity/falcosidekick) can be installed with `Falco` by setting `--set falcosidekick.enabled=true`. This setting automatically configures all options of `Falco` for working with `Falcosidekick`.
All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick#configuration).
For example, to enable the deployment of [`Falcosidekick-UI`](https://github.com/falcosecurity/falcosidekick-ui), add `--set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true`.

If you use a proxy in your cluster, the requests between `Falco` and `Falcosidekick` might be captured; use the full FQDN of `Falcosidekick` by setting `--set falcosidekick.fullfqdn=true` to avoid that.

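The same flags expressed as a values-file sketch:

```yaml
falcosidekick:
  enabled: true
  webui:
    enabled: true
  # Uncomment if a cluster proxy intercepts in-cluster requests:
  # fullfqdn: true
```
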
## Configuration

The following table lists the main configurable parameters of the falco chart v4.2.5 and their default values. See [values.yaml](./values.yaml) for the full list.

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity constraint for pods' scheduling. |
| certs | object | `{"ca":{"crt":""},"client":{"crt":"","key":""},"existingClientSecret":"","existingSecret":"","server":{"crt":"","key":""}}` | certificates used by webserver and grpc server. paste certificate content or use helm with --set-file or use existing secret containing key, crt, ca as well as pem bundle |
| certs.ca.crt | string | `""` | CA certificate used by gRPC, webserver and AuditSink validation. |
| certs.client.crt | string | `""` | Certificate used by http mTLS client. |
| certs.client.key | string | `""` | Key used by http mTLS client. |
| certs.existingSecret | string | `""` | Existing secret containing the following key, crt and ca as well as the bundle pem. |
| certs.server.crt | string | `""` | Certificate used by gRPC and webserver. |
| certs.server.key | string | `""` | Key used by gRPC and webserver. |
| collectors.containerd.enabled | bool | `true` | Enable ContainerD support. |
| collectors.containerd.socket | string | `"/run/containerd/containerd.sock"` | The path of the ContainerD socket. |
| collectors.crio.enabled | bool | `true` | Enable CRI-O support. |
| collectors.crio.socket | string | `"/run/crio/crio.sock"` | The path of the CRI-O socket. |
| collectors.docker.enabled | bool | `true` | Enable Docker support. |
| collectors.docker.socket | string | `"/var/run/docker.sock"` | The path of the Docker daemon socket. |
| collectors.enabled | bool | `true` | Enable/disable all the metadata collectors. |
| collectors.kubernetes | object | `{"collectorHostname":"","collectorPort":"","enabled":false,"pluginRef":"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0"}` | kubernetes holds the configuration for the kubernetes collector. Starting from version 0.37.0 of Falco, the legacy kubernetes client has been removed. A new standalone component named k8s-metacollector and a Falco plugin have been developed to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973 |
| collectors.kubernetes.collectorHostname | string | `""` | collectorHostname is the address of the k8s-metacollector. When not specified it will be set to match the k8s-metacollector service, e.g. falco-k8smetacollector.falco.svc. If for any reason you need to override it, make sure to set here the address of the k8s-metacollector. It is used by the k8smeta plugin to connect to the k8s-metacollector. |
| collectors.kubernetes.collectorPort | string | `""` | collectorPort designates the port on which the k8s-metacollector gRPC service listens. If not specified the value of the port named `broker-grpc` in k8s-metacollector.service.ports is used. The default values is 45000. It is used by the k8smeta plugin to connect to the k8s-metacollector. |
| collectors.kubernetes.enabled | bool | `false` | enabled specifies whether the Kubernetes metadata should be collected using the k8smeta plugin and the k8s-metacollector component. It will deploy the k8s-metacollector external component that fetches Kubernetes metadata and pushes them to Falco instances. For more info see: https://github.com/falcosecurity/k8s-metacollector https://github.com/falcosecurity/charts/tree/master/charts/k8s-metacollector When this option is disabled, Falco falls back to the container annotations to grab the metadata. In such a case, only the ID, name, namespace, labels of the pod will be available. |
| collectors.kubernetes.pluginRef | string | `"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0"` | pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.1.0. |
| containerSecurityContext | object | `{}` | Set securityContext for the Falco container. For more info see the "falco.securityContext" helper in "pod-template.tpl" |
| controller.annotations | object | `{}` | |
| controller.daemonset.updateStrategy.type | string | `"RollingUpdate"` | Perform rolling updates by default in the DaemonSet agent ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/ |
| controller.deployment.replicas | int | `1` | Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. For more info check the section on Plugins in the README.md file. |
| controller.kind | string | `"daemonset"` | |
| customRules | object | `{}` | Third party rules enabled for Falco. More info on the dedicated section in README.md file. |
| driver.ebpf | object | `{"bufSizePreset":4,"dropFailedExit":false,"hostNetwork":false,"leastPrivileged":false,"path":"${HOME}/.falco/falco-bpf.o"}` | Configuration section for ebpf driver. |
| driver.ebpf.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. |
| driver.ebpf.dropFailedExit | bool | `false` | dropFailedExit if set true drops failed system call exit events before pushing them to userspace. |
| driver.ebpf.hostNetwork | bool | `false` | Needed to enable eBPF JIT at runtime for performance reasons. Can be skipped if eBPF JIT is enabled from outside the container |
| driver.ebpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_SYS_ADMIN, CAP_SYS_PTRACE}. On kernel versions >= 5.8 'CAP_PERFMON' and 'CAP_BPF' could replace 'CAP_SYS_ADMIN' but please pay attention to the 'kernel.perf_event_paranoid' value on your system. Usually 'kernel.perf_event_paranoid>2' means that you cannot use 'CAP_PERFMON' and you should fallback to 'CAP_SYS_ADMIN', but the behavior changes across different distros. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-1 |
| driver.ebpf.path | string | `"${HOME}/.falco/falco-bpf.o"` | Path where the eBPF probe is located. It comes in handy when the probe has been installed on the nodes using tools other than the init container deployed with the chart. |
| driver.enabled | bool | `true` | Set it to false if you want to deploy Falco without the drivers. Always set it to false when using Falco with plugins. |
| driver.gvisor | object | `{"runsc":{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}}` | Gvisor configuration. Based on your system you need to set the appropriate values. Please, remember to add pod tolerations and affinities in order to schedule the Falco pods in the gVisor enabled nodes. |
| driver.gvisor.runsc | object | `{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}` | Runsc container runtime configuration. Falco needs to interact with it in order to intercept the activity of the sandboxed pods. |
| driver.gvisor.runsc.config | string | `"/run/containerd/runsc/config.toml"` | Absolute path of the `runsc` configuration file, used by Falco to set its configuration and make aware `gVisor` of its presence. |
| driver.gvisor.runsc.path | string | `"/home/containerd/usr/local/sbin"` | Absolute path of the `runsc` binary in the k8s nodes. |
| driver.gvisor.runsc.root | string | `"/run/containerd/runsc"` | Absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information of the workloads handled by it; |
| driver.kind | string | `"kmod"` | kind tells Falco which driver to use. Available options: kmod (kernel driver), ebpf (eBPF probe), modern_ebpf (modern eBPF probe). |
| driver.kmod | object | `{"bufSizePreset":4,"dropFailedExit":false}` | kmod holds the configuration for the kernel module. |
| driver.kmod.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. |
| driver.kmod.dropFailedExit | bool | `false` | dropFailedExit if set true drops failed system call exit events before pushing them to userspace. |
| driver.loader | object | `{"enabled":true,"initContainer":{"args":[],"env":[],"image":{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falco-driver-loader","tag":""},"resources":{},"securityContext":{}}}` | Configuration for the Falco init container. |
| driver.loader.enabled | bool | `true` | Enable/disable the init container. |
| driver.loader.initContainer.args | list | `[]` | Arguments to pass to the Falco driver loader init container. |
|
||||
| driver.loader.initContainer.env | list | `[]` | Extra environment variables that will be pass onto Falco driver loader init container. |
|
||||
| driver.loader.initContainer.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
|
||||
| driver.loader.initContainer.image.registry | string | `"docker.io"` | The image registry to pull from. |
|
||||
| driver.loader.initContainer.image.repository | string | `"falcosecurity/falco-driver-loader"` | The image repository to pull from. |
|
||||
| driver.loader.initContainer.resources | object | `{}` | Resources requests and limits for the Falco driver loader init container. |
|
||||
| driver.loader.initContainer.securityContext | object | `{}` | Security context for the Falco driver loader init container. Overrides the default security context. If driver.kind == "module" you must at least set `privileged: true`. |
|
||||
| driver.modernEbpf.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. |
| driver.modernEbpf.cpusForEachBuffer | int | `2` | cpusForEachBuffer controls how many CPUs to assign to a single syscall buffer. |
| driver.modernEbpf.dropFailedExit | bool | `false` | dropFailedExit, if set to true, drops failed system call exit events before pushing them to userspace. |
| driver.modernEbpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the modern eBPF driver is enabled (i.e., set the `driver.kind` option to `modern_ebpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_BPF, CAP_PERFMON, CAP_SYS_PTRACE}. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-2 |
| extra.args | list | `[]` | Extra command-line arguments. |
| extra.env | list | `[]` | Extra environment variables that will be passed to the Falco containers. |
| extra.initContainers | list | `[]` | Additional initContainers for Falco pods. |
| falco.base_syscalls | object | `{"custom_set":[],"repair":false}` | - [Suggestions] NOTE: setting `base_syscalls.repair: true` automates the following suggestions for you. These suggestions are subject to change as Falco and its state engine evolve. For execve* events: Some Falco fields for an execve* syscall are retrieved from the associated `clone`, `clone3`, `fork`, `vfork` syscalls when spawning a new process. The `close` syscall is used to purge file descriptors from Falco's internal thread / process cache table and is necessary for rules relating to file descriptors (e.g. open, openat, openat2, socket, connect, accept, accept4 ... and many more) Consider enabling the following syscalls in `base_syscalls.custom_set` for process rules: [clone, clone3, fork, vfork, execve, execveat, close] For networking related events: While you can log `connect` or `accept*` syscalls without the socket syscall, the log will not contain the ip tuples. Additionally, for `listen` and `accept*` syscalls, the `bind` syscall is also necessary. We recommend the following as the minimum set for networking-related rules: [clone, clone3, fork, vfork, execve, execveat, close, socket, bind, getsockopt] Lastly, for tracking the correct `uid`, `gid` or `sid`, `pgid` of a process when the running process opens a file or makes a network connection, consider adding the following to the above recommended syscall sets: ... setresuid, setsid, setuid, setgid, setpgid, setresgid, setsid, capset, chdir, chroot, fchdir ... |
| falco.buffered_outputs | bool | `false` | Enabling buffering for the output queue can offer performance optimization, efficient resource usage, and smoother data flow, resulting in a more reliable output mechanism. By default, buffering is disabled (false). |
| falco.file_output | object | `{"enabled":false,"filename":"./events.txt","keep_alive":false}` | When appending Falco alerts to a file, each new alert will be added to a new line. It's important to note that Falco does not perform log rotation for this file. If the `keep_alive` option is set to `true`, the file will be opened once and continuously written to, else the file will be reopened for each output message. Furthermore, the file will be closed and reopened if Falco receives the SIGUSR1 signal. |
| falco.grpc | object | `{"bind_address":"unix:///run/falco/falco.sock","enabled":false,"threadiness":0}` | gRPC server using a local unix socket |
| falco.grpc.threadiness | int | `0` | When the `threadiness` value is set to 0, Falco will automatically determine the appropriate number of threads based on the number of online cores in the system. |
| falco.grpc_output | object | `{"enabled":false}` | Use gRPC as an output service. gRPC is a modern and high-performance framework for remote procedure calls (RPC). It utilizes protocol buffers for efficient data serialization. The gRPC output in Falco provides a modern and efficient way to integrate with other systems. By default the setting is turned off. Enabling this option stores output events in memory until they are consumed by a gRPC client. Ensure that you have a consumer for the output events or leave it disabled. |
| falco.http_output | object | `{"ca_bundle":"","ca_cert":"","ca_path":"/etc/falco/certs/","client_cert":"/etc/falco/certs/client/client.crt","client_key":"/etc/falco/certs/client/client.key","compress_uploads":false,"echo":false,"enabled":false,"insecure":false,"keep_alive":false,"mtls":false,"url":"","user_agent":"falcosecurity/falco"}` | Send logs to an HTTP endpoint or webhook. |
| falco.http_output.ca_bundle | string | `""` | Path to a specific file that will be used as the CA certificate store. |
| falco.http_output.ca_cert | string | `""` | Path to the CA certificate that can verify the remote server. |
| falco.http_output.ca_path | string | `"/etc/falco/certs/"` | Path to a folder that will be used as the CA certificate store. CA certificates need to be stored as individual PEM files in this directory. |
| falco.http_output.client_cert | string | `"/etc/falco/certs/client/client.crt"` | Path to the client cert. |
| falco.http_output.client_key | string | `"/etc/falco/certs/client/client.key"` | Path to the client key. |
| falco.http_output.compress_uploads | bool | `false` | Whether to compress data sent to the HTTP endpoint. |
| falco.http_output.echo | bool | `false` | Whether to echo server answers to stdout. |
| falco.http_output.insecure | bool | `false` | Tell Falco to not verify the remote server. |
| falco.http_output.keep_alive | bool | `false` | Whether to keep the connection alive. |
| falco.http_output.mtls | bool | `false` | Tell Falco to use mTLS. |
| falco.json_include_output_property | bool | `true` | When using JSON output in Falco, you have the option to include the "output" property itself in the generated JSON output. The "output" property provides additional information about the purpose of the rule. To reduce the logging volume, it is recommended to turn it off if it's not necessary for your use case. |
| falco.json_include_tags_property | bool | `true` | When using JSON output in Falco, you have the option to include the "tags" field of the rules in the generated JSON output. The "tags" field provides additional metadata associated with the rule. To reduce the logging volume, if the tags associated with the rule are not needed for your use case or can be added at a later stage, it is recommended to turn it off. |
| falco.json_output | bool | `false` | When enabled, Falco will output alert messages and rules file loading/validation results in JSON format, making it easier for downstream programs to process and consume the data. By default, this option is disabled. |
| falco.libs_logger | object | `{"enabled":false,"severity":"debug"}` | The `libs_logger` setting in Falco determines the minimum log level to include in the logs related to the functioning of the software of the underlying `libs` library, which Falco utilizes. This setting is independent of the `priority` field of rules and the `log_level` setting that controls Falco's operational logs. It allows you to specify the desired log level for the `libs` library specifically, providing more granular control over the logging behavior of the underlying components used by Falco. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". It is not recommended for production use. |
| falco.load_plugins | list | `[]` | Add here all plugins and their configuration. Please consult the plugins documentation for more info. Remember to add the plugins name in "load_plugins: []" in order to load them in Falco. |
| falco.log_level | string | `"info"` | The `log_level` setting determines the minimum log level to include in Falco's logs related to the functioning of the software. This setting is separate from the `priority` field of rules and specifically controls the log level of Falco's operational logging. By specifying a log level, you can control the verbosity of Falco's operational logs. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". |
| falco.log_stderr | bool | `true` | Send information logs to stderr. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. |
| falco.log_syslog | bool | `true` | Send information logs to syslog. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. |
| falco.metrics | object | `{"convert_memory_to_mb":true,"enabled":false,"include_empty_values":false,"interval":"1h","kernel_event_counters_enabled":true,"libbpf_stats_enabled":true,"output_rule":true,"resource_utilization_enabled":true,"state_counters_enabled":true}` | - [Usage] `enabled`: Disabled by default. `interval`: The stats interval in Falco follows the time duration definitions used by Prometheus. https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations Time durations are specified as a number, followed immediately by one of the following units: ms - millisecond s - second m - minute h - hour d - day - assuming a day has always 24h w - week - assuming a week has always 7d y - year - assuming a year has always 365d Example of a valid time duration: 1h30m20s10ms A minimum interval of 100ms is enforced for metric collection. However, for production environments, we recommend selecting one of the following intervals for optimal monitoring: 15m 30m 1h 4h 6h `output_rule`: To enable seamless metrics and performance monitoring, we recommend emitting metrics as the rule "Falco internal: metrics snapshot". This option is particularly useful when Falco logs are preserved in a data lake. Please note that to use this option, the Falco rules config `priority` must be set to `info` at a minimum. `output_file`: Append stats to a `jsonl` file. Use with caution in production as Falco does not automatically rotate the file. `resource_utilization_enabled`: Emit CPU and memory usage metrics. CPU usage is reported as a percentage of one CPU and can be normalized to the total number of CPUs to determine overall usage. Memory metrics are provided in raw units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) and can be uniformly converted to megabytes (MB) using the `convert_memory_to_mb` functionality. In environments such as Kubernetes when deployed as daemonset, it is crucial to track Falco's container memory usage. To customize the path of the memory metric file, you can create an environment variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. By default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to monitor container memory usage, which aligns with Kubernetes' `container_memory_working_set_bytes` metric. Finally, we emit the overall host CPU and memory usages, along with the total number of processes and open file descriptors (fds) on the host, obtained from the proc file system unrelated to Falco's monitoring. These metrics help assess Falco's usage in relation to the server's workload intensity. `state_counters_enabled`: Emit counters related to Falco's state engine, including added, removed threads or file descriptors (fds), and failed lookup, store, or retrieve actions in relation to Falco's underlying process cache table (threadtable). We also log the number of currently cached containers if applicable. `kernel_event_counters_enabled`: Emit kernel side event and drop counters, as an alternative to `syscall_event_drops`, but with some differences. These counters reflect monotonic values since Falco's start and are exported at a constant stats interval. `libbpf_stats_enabled`: Exposes statistics similar to `bpftool prog show`, providing information such as the number of invocations of each BPF program attached by Falco and the time spent in each program measured in nanoseconds. To enable this feature, the kernel must be >= 5.1, and the kernel configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, or an equivalent statistics feature, is not available for non `*bpf*` drivers. Additionally, please be aware that the current implementation of `libbpf` does not support granularity of statistics at the bpf tail call level. `include_empty_values`: When the option is set to true, fields with an empty numeric value will be included in the output. However, this rule does not apply to high-level fields such as `n_evts` or `n_drops`; they will always be included in the output even if their value is empty. This option can be beneficial for exploring the data schema and ensuring that fields with empty values are included in the output. todo: prometheus export option todo: syscall_counters_enabled option |
| falco.output_timeout | int | `2000` | The `output_timeout` parameter specifies the duration, in milliseconds, to wait before considering the deadline exceeded. By default, the timeout is set to 2000ms (2 seconds), meaning that the consumer of Falco outputs can block the Falco output channel for up to 2 seconds without triggering a timeout error. Falco actively monitors the performance of output channels. With this setting the timeout error can be logged, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. It's important to note that Falco outputs will not be discarded from the output queue. This means that if an output channel becomes blocked indefinitely, it indicates a potential issue that needs to be addressed by the user. |
| falco.outputs | object | `{"max_burst":1000,"rate":0}` | A throttling mechanism, implemented as a token bucket, can be used to control the rate of Falco outputs. Each event source has its own rate limiter, ensuring that alerts from one source do not affect the throttling of others. The following options control the mechanism: - rate: the number of tokens (i.e. right to send a notification) gained per second. When 0, the throttling mechanism is disabled. Defaults to 0. - max_burst: the maximum number of tokens outstanding. Defaults to 1000. For example, setting the rate to 1 allows Falco to send up to 1000 notifications initially, followed by 1 notification per second. The burst capacity is fully restored after 1000 seconds of no activity. Throttling can be useful in various scenarios, such as preventing notification floods, managing system load, controlling event processing, or complying with rate limits imposed by external systems or APIs. It allows for better resource utilization, avoids overwhelming downstream systems, and helps maintain a balanced and controlled flow of notifications. With the default settings, the throttling mechanism is disabled. |
| falco.outputs_queue | object | `{"capacity":0}` | Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter allows you to customize the queue capacity. Please refer to the official documentation: https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html. On a healthy system with optimized Falco rules, the queue should not fill up. If it does, it is most likely happening due to the entire event flow being too slow, indicating that the server is under heavy load. `capacity`: the maximum number of items allowed in the queue is determined by this value. Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded. In other words, when this configuration is set to 0, the number of allowed items is effectively set to the largest possible long value, disabling this setting. In the case of an unbounded queue, if the available memory on the system is consumed, the Falco process would be OOM killed. When using this option and setting the capacity, the current event would be dropped, and the event loop would continue. This behavior mirrors kernel-side event drops when the buffer between kernel space and user space is full. |
| falco.plugins | list | `[{"init_config":null,"library_path":"libk8saudit.so","name":"k8saudit","open_params":"http://:9765/k8s-audit"},{"library_path":"libcloudtrail.so","name":"cloudtrail"},{"init_config":"","library_path":"libjson.so","name":"json"}]` | Customize subsettings for each enabled plugin. These settings will only be applied when the corresponding plugin is enabled using the `load_plugins` option. |
| falco.priority | string | `"debug"` | Any rule with a priority level more severe than or equal to the specified minimum level will be loaded and run by Falco. This allows you to filter and control the rules based on their severity, ensuring that only rules of a certain priority or higher are active and evaluated by Falco. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug" |
| falco.program_output | object | `{"enabled":false,"keep_alive":false,"program":"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"}` | Redirect the output to another program or command. Possible additional things you might want to do with program output: - send to a slack webhook: program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" - logging (alternate method than syslog): program: logger -t falco-test - send over a network connection: program: nc host.example.com 80 If `keep_alive` is set to `true`, the program will be started once and continuously written to, with each output message on its own line. If `keep_alive` is set to `false`, the program will be re-spawned for each output message. Furthermore, the program will be re-spawned if Falco receives the SIGUSR1 signal. |
| falco.rule_matching | string | `"first"` | Strategy used when multiple rules match the same event. With `first`, evaluation stops at the first matching rule. |
| falco.rules_file | list | `["/etc/falco/falco_rules.yaml","/etc/falco/falco_rules.local.yaml","/etc/falco/rules.d"]` | The location of the rules files that will be consumed by Falco. |
| falco.stdout_output | object | `{"enabled":true}` | Redirect logs to standard output. |
| falco.syscall_event_drops | object | `{"actions":["log","alert"],"max_burst":1,"rate":0.03333,"simulate_drops":false,"threshold":0.1}` | For debugging/testing it is possible to simulate drops by setting `simulate_drops: true`. In this case the threshold does not apply. |
| falco.syscall_event_drops.actions | list | `["log","alert"]` | Actions to be taken when system calls were dropped from the circular buffer. |
| falco.syscall_event_drops.max_burst | int | `1` | Max burst of messages emitted. |
| falco.syscall_event_drops.rate | float | `0.03333` | Rate at which log/alert messages are emitted. |
| falco.syscall_event_drops.simulate_drops | bool | `false` | Flag to enable drops for debug purposes. |
| falco.syscall_event_drops.threshold | float | `0.1` | The messages are emitted when the percentage of dropped system calls with respect to the number of events in the last second is greater than the given threshold (a double in the range [0, 1]). |
| falco.syscall_event_timeouts | object | `{"max_consecutives":1000}` | Generates Falco operational logs when `log_level=notice` at minimum. Falco utilizes a shared buffer between the kernel and userspace to receive events, such as system call information, in userspace. However, there may be cases where timeouts occur in the underlying libraries due to issues in reading events or the need to skip a particular event. While it is uncommon for Falco to experience consecutive event timeouts, it has the capability to detect such situations. You can configure the maximum number of consecutive timeouts without an event after which Falco will generate an alert, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. The default value is set to 1000 consecutive timeouts without receiving any events. The mapping of this value to a time interval depends on the CPU frequency. |
| falco.syslog_output | object | `{"enabled":true}` | Send logs to syslog. |
| falco.time_format_iso_8601 | bool | `false` | When enabled, Falco will display log and output messages with times in the ISO 8601 format. By default, times are shown in the local time zone determined by the /etc/localtime configuration. |
| falco.watch_config_files | bool | `true` | Watch config file and rules files for modification. When a file is modified, Falco will propagate new config, by reloading itself. |
| falco.webserver | object | `{"enabled":true,"k8s_healthz_endpoint":"/healthz","listen_port":8765,"ssl_certificate":"/etc/falco/falco.pem","ssl_enabled":false,"threadiness":0}` | Falco supports an embedded webserver that runs within the Falco process, providing a lightweight and efficient way to expose web-based functionalities without the need for an external web server. The following endpoints are exposed: - /healthz: designed to be used for checking the health and availability of the Falco application (the name of the endpoint is configurable). - /versions: responds with a JSON object containing the version numbers of the internal Falco components (similar output as `falco --version -o json_output=true`). Please note that the /versions endpoint is particularly useful for other Falco services, such as `falcoctl`, to retrieve information about a running Falco instance. If you plan to use `falcoctl` locally or with Kubernetes, make sure the Falco webserver is enabled. The behavior of the webserver can be controlled with the following options, which are enabled by default: The `ssl_certificate` option specifies a combined SSL certificate and corresponding key that are contained in a single file. You can generate a key/cert as follows: $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem $ cat certificate.pem key.pem > falco.pem $ sudo cp falco.pem /etc/falco/falco.pem |
| falcoctl.artifact.follow | object | `{"args":["--log-format=json"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for updates given a list of artifacts. If an update is found it downloads and installs it in a shared folder (emptyDir) that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible with the running version of Falco before installing it. |
| falcoctl.artifact.follow.args | list | `["--log-format=json"]` | Arguments to pass to the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.resources | object | `{}` | Resources requests and limits for the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.securityContext | object | `{}` | Security context for the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.install | object | `{"args":["--log-format=json"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before Falco starts. It provides them to Falco by using an emptyDir volume. |
| falcoctl.artifact.install.args | list | `["--log-format=json"]` | Arguments to pass to the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.resources | object | `{}` | Resources requests and limits for the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.securityContext | object | `{}` | Security context for the falcoctl init container. |
| falcoctl.config | object | `{"artifact":{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}},"indexes":[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]}` | Configuration file of the falcoctl tool. It is saved in a configmap and mounted into the falcoctl containers. |
| falcoctl.config.artifact | object | `{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}}` | Configuration used by the artifact commands. |
| falcoctl.config.artifact.allowedTypes | list | `["rulesfile","plugin"]` | List of artifact types that falcoctl will handle. If the configured refs resolve to an artifact whose type is not contained in the list, falcoctl will refuse to download and install that artifact. |
| falcoctl.config.artifact.follow.every | string | `"6h"` | How often the tool checks for new versions of the followed artifacts. |
| falcoctl.config.artifact.follow.falcoversions | string | `"http://localhost:8765/versions"` | HTTP endpoint that serves the api versions of the Falco instance. It is used to check if the new versions are compatible with the running Falco instance. |
| falcoctl.config.artifact.follow.pluginsDir | string | `"/plugins"` | See the fields of the artifact.install section. |
| falcoctl.config.artifact.follow.refs | list | `["falco-rules:3"]` | List of artifacts to be followed by the falcoctl sidecar container. |
| falcoctl.config.artifact.follow.rulesfilesDir | string | `"/rulesfiles"` | See the fields of the artifact.install section. |
| falcoctl.config.artifact.install.pluginsDir | string | `"/plugins"` | Directory where the plugins are saved. Same layout as `rulesfilesDir` below. |
| falcoctl.config.artifact.install.refs | list | `["falco-rules:3"]` | List of artifacts to be installed by the falcoctl init container. |
| falcoctl.config.artifact.install.resolveDeps | bool | `true` | Resolve the dependencies for artifacts. |
| falcoctl.config.artifact.install.rulesfilesDir | string | `"/rulesfiles"` | Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir mounted also by the Falco pod. |
| falcoctl.config.indexes | list | `[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]` | List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see: https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview |
| falcoctl.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
| falcoctl.image.registry | string | `"docker.io"` | The image registry to pull from. |
| falcoctl.image.repository | string | `"falcosecurity/falcoctl"` | The image repository to pull from. |
| falcoctl.image.tag | string | `"0.7.2"` | The image tag to pull. |
| falcosidekick | object | `{"enabled":false,"fullfqdn":false,"listenPort":""}` | For configuration values, see https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml |
| falcosidekick.enabled | bool | `false` | Enable falcosidekick deployment. |
| falcosidekick.fullfqdn | bool | `false` | Enable usage of full FQDN of falcosidekick service (useful when a Proxy is used). |
| falcosidekick.listenPort | string | `""` | Listen port. Default value: 2801 |
| fullnameOverride | string | `""` | Same as nameOverride but for the fullname. |
| healthChecks | object | `{"livenessProbe":{"initialDelaySeconds":60,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}}` | Parameters used to configure the liveness and readiness probes. |
| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | Tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.livenessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. |
| healthChecks.livenessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. |
| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | Tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.readinessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. |
| healthChecks.readinessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. |
| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
| image.registry | string | `"docker.io"` | The image registry to pull from. |
| image.repository | string | `"falcosecurity/falco-no-driver"` | The image repository to pull from. |
| image.tag | string | `""` | The image tag to pull. Overrides the image tag whose default is the chart appVersion. |
| imagePullSecrets | list | `[]` | Secrets containing credentials when pulling from private/secure registries. |
| mounts.enforceProcMount | bool | `false` | By default, `/proc` from the host is only mounted into the Falco pod when `driver.enabled` is set to `true`. This flag allows you to override this behaviour for edge cases where `/proc` is needed but the syscall data source is not enabled at the same time (e.g. for specific plugins). |
| mounts.volumeMounts | list | `[]` | A list of volume mounts you want to add to the Falco containers. |
| mounts.volumes | list | `[]` | A list of volumes you want to add to the Falco pods. |
| nameOverride | string | `""` | Put here the new name if you want to override the release name used for Falco components. |
| namespaceOverride | string | `""` | Override the deployment namespace. |
| nodeSelector | object | `{}` | Selectors used to deploy Falco on a given node/nodes. |
|
||||
| podAnnotations | object | `{}` | Add additional pod annotations |
|
||||
| podLabels | object | `{}` | Add additional pod labels |
|
||||
| podPriorityClassName | string | `nil` | Set pod priorityClassName |
|
||||
| podSecurityContext | object | `{}` | Set securityContext for the pods These security settings are overriden by the ones specified for the specific containers when there is overlap. |
|
||||
| resources.limits | object | `{"cpu":"1000m","memory":"1024Mi"}` | Maximum amount of resources that Falco container could get. If you are enabling more than one source in falco, than consider to increase the cpu limits. |
|
||||
| resources.requests | object | `{"cpu":"100m","memory":"512Mi"}` | Although resources needed are subjective on the actual workload we provide a sane defaults ones. If you have more questions or concerns, please refer to #falco slack channel for more info about it. |
|
||||
| scc.create | bool | `true` | Create OpenShift's Security Context Constraint. |
|
||||
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. |
|
||||
| serviceAccount.create | bool | `false` | Specifies whether a service account should be created. |
|
||||
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |
|
||||
| services | string | `nil` | Network services configuration (scenario requirement) Add here your services to be deployed together with Falco. |
|
||||
| tolerations | list | `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]` | Tolerations to allow Falco to run on Kubernetes masters. |
|
||||
| tty | bool | `false` | Attach the Falco process to a tty inside the container. Needed to flush Falco logs as soon as they are emitted. Set it to "true" when you need the Falco logs to be immediately displayed. |
|
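For reference, the parameters in the table above map directly onto a values override file. A minimal sketch (the concrete values shown here are illustrative examples, not chart defaults or recommendations):

```yaml
# my-values.yaml -- hypothetical override file, applied with:
#   helm install falco falcosecurity/falco -f my-values.yaml
image:
  registry: docker.io
  repository: falcosecurity/falco-no-driver
  tag: ""              # empty tag falls back to the chart appVersion

resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 1000m         # consider raising when enabling multiple sources
    memory: 1024Mi

tty: true              # flush Falco logs as soon as they are emitted
```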
620
falco/charts/falcosidekick/CHANGELOG.md
Normal file
@ -0,0 +1,620 @@
# Change Log

This file documents all notable changes to the Falcosidekick Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).

Before release 0.1.20, the helm chart could be found in the `falcosidekick` [repository](https://github.com/falcosecurity/falcosidekick/tree/master/deploy/helm/falcosidekick).

## 0.7.15

- Fix ServiceMonitor selector labels

## 0.7.14

- Fix duplicate component labels

## 0.7.13

- Fix ServiceMonitor port name and selector labels

## 0.7.12

- Align README values with the values.yaml file

## 0.7.11

- Fix a link in the falcosidekick README to the policy report output documentation

## 0.7.10

- Set Helm recommended labels (`app.kubernetes.io/name`, `app.kubernetes.io/instance`, `app.kubernetes.io/version`, `helm.sh/chart`, `app.kubernetes.io/part-of`, `app.kubernetes.io/managed-by`) using helpers.tpl

## 0.7.9

- noop change to the chart itself. Updated makefile.

## 0.7.8

- Fix the condition for missing cert files

## 0.7.7

- Support extraArgs in the helm chart

## 0.7.6

- Fix the behavior of `AWS IRSA` with a new value `aws.config.useirsa`
- Add a section in the README to describe how to use a subpath for the `Falcosidekick-ui` ingress
- Add a `ServiceMonitor` for prometheus-operator
- Add a `PrometheusRule` for prometheus-operator

## 0.7.5

- noop change just to test the ci

## 0.7.4

- Fix volume mount when `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables are defined.

## 0.7.3

- Allow to set (m)TLS Server cryptographic material via `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables or through the `config.tlsserver.existingSecret` variable.

## 0.7.2

- Fix the wrong key of the secret for the user

## 0.7.1

- Allow to set a password `webui.redis.password` for Redis for `Falcosidekick-UI`
- The user for `Falcosidekick-UI` is now set with an env var from a secret

## 0.7.0

- Support configuration of revisionHistoryLimit of the deployments

## 0.6.3

- Update Falcosidekick to 2.28.0
- Add Mutual TLS Client config
- Add TLS Server config
- Add `bracketreplacer` config
- Add `customseveritymap` to `alertmanager` output
- Add Drop Event config to `alertmanager` output
- Add `customheaders` to `elasticsearch` output
- Add `customheaders` to `loki` output
- Add `customheaders` to `grafana` output
- Add `rolearn` and `externalid` for `aws` outputs
- Add `method` to `webhook` output
- Add `customattributes` to `gcp.pubsub` output
- Add `region` to `pagerduty` output
- Add `topiccreation` and `tls` to `kafka` output
- Add `Grafana OnCall` output
- Add `Redis` output
- Add `Telegram` output
- Add `N8N` output
- Add `OpenObserve` output

## 0.6.2

- Fix interpolation of `SYSLOG_PORT`

## 0.6.1

- Add `webui.allowcors` value for `Falcosidekick-UI`

## 0.6.0

- Change the docker image for the redis pod for falcosidekick-ui

## 0.5.16

- Add `affinity`, `nodeSelector` and `tolerations` values for the Falcosidekick test-connection pod

## 0.5.15

- Set extra labels and annotations for `AlertManager` only if they're not empty

## 0.5.14

- Fix Prometheus extralabels configuration in Falcosidekick

## 0.5.13

- Fix missing quotes in Falcosidekick-UI ttl argument

## 0.5.12

- Fix missing space in Falcosidekick-UI ttl argument

## 0.5.11

- Fix missing space in Falcosidekick-UI arguments

## 0.5.10

- Upgrade Falcosidekick image to 2.27.0
- Upgrade Falcosidekick-UI image to 2.1.0
- Add `Yandex Data Streams` output
- Add `Node-Red` output
- Add `MQTT` output
- Add `Zincsearch` output
- Add `Gotify` output
- Add `Spyderbat` output
- Add `Tekton` output
- Add `TimescaleDB` output
- Add `AWS Security Lake` output
- Add `config.templatedfields` to set templated fields
- Add `config.slack.channel` to override the `Slack` channel
- Add `config.alertmanager.extralabels` and `config.alertmanager.extraannotations` for `AlertManager` output
- Add `config.influxdb.token`, `config.influxdb.organization` and `config.influxdb.precision` for `InfluxDB` output
- Add `config.aws.checkidentity` to disallow STS checks
- Add `config.smtp.authmechanism`, `config.smtp.token`, `config.smtp.identity`, `config.smtp.trace` to manage `SMTP` auth
- Update default doc type for `Elasticsearch`
- Add `config.loki.user`, `config.loki.apikey` to manage auth to Grafana Cloud for `Loki` output
- Add `config.kafka.sasl`, `config.kafka.async`, `config.kafka.compression`, `config.kafka.balancer`, `config.kafka.clientid` to manage auth and communication for `Kafka` output
- Add `config.syslog.format` to manage the format of the `Syslog` payload
- Add `webui.ttl` to set TTL of keys in Falcosidekick-UI
- Add `webui.loglevel` to set the log level in Falcosidekick-UI
- Add `webui.user` to set user:password in Falcosidekick-UI
## 0.5.9

- Fix: remove `namespace` from `clusterrole` and `clusterrolebinding` metadata

## 0.5.8

- Support `storageEnabled` for `redis` to allow ephemeral installs

## 0.5.7

- Remove unused Kafka config values

## 0.5.6

- Fix Syslog's port import in `secrets.yaml`

## 0.5.5

- Add `webui.externalRedis` with `enabled`, `url` and `port` to values to set an external Redis database with RediSearch > v2 for the WebUI
- Add `webui.redis.enabled` option to disable the deployment of the database.
- `webui.redis.enabled` and `webui.externalRedis.enabled` are mutually exclusive

## 0.5.4

- Upgrade image to fix a panic of the `Prometheus` output when `customfields` is set
- Add `extralabels` for `Loki` and `Prometheus` outputs to set fields to use as labels
- Add `expiresafter` for `AlertManager` output

## 0.5.3

- Support full configuration of `securityContext` blocks in falcosidekick and falcosidekick-ui deployments, and redis statefulset.

## 0.5.2

- Update Falcosidekick-UI image (fix wrong redirect to localhost when an ingress is used)

## 0.5.1

- Support `ingressClassName` field in falcosidekick ingresses.

## 0.5.0

### Major Changes

- Add `Policy Report` output
- Add `Syslog` output
- Add `AWS Kinesis` output
- Add `Zoho Cliq` output
- Support IRSA for AWS authentication
- Upgrade Falcosidekick-UI to v2.0.1

### Minor changes

- Allow to set custom Labels for pods

## 0.4.5

- Allow additional service-ui annotations

## 0.4.4

- Fix output after chart installation when ingress is enabled

## 0.4.3

- Support `annotation` block in service

## 0.4.2

- Fix: Added the rule to use the podsecuritypolicy
- Fix: Added `ServiceAccountName` to the UI deployment

## 0.4.1

- Remove duplicate `Fission` keys from secret

## 0.4.0

### Major Changes

- Support Ingress API version `networking.k8s.io/v1`, see `ingress.hosts` and `webui.ingress.hosts` in [values.yaml](values.yaml) for a breaking change in the `path` parameter

## 0.3.17

- Fix: Remove the value for bucket of `Yandex S3`, it enabled the output by default

## 0.3.16

### Major Changes

- Fix: set correct new image 2.24.0

## 0.3.15

### Major Changes

- Add `Fission` output

## 0.3.14

### Major Changes

- Add `Grafana` output
- Add `Yandex Cloud S3` output
- Add `Kafka REST` output

### Minor changes

- Docker image is now available on AWS ECR Public Gallery (`--set image.registry=public.ecr.aws`)

## 0.3.13

### Minor changes

- Enable extra volumes and volumemounts for `falcosidekick` via values

## 0.3.12

- Add AWS configuration field `config.aws.rolearn`

## 0.3.11

### Minor changes

- Make image registries for `falcosidekick` and `falcosidekick-ui` configurable

## 0.3.10

### Minor changes

- Fix table formatting in `README.md`

## 0.3.9

### Fixes

- Add missing `imagePullSecrets` in `falcosidekick/templates/deployment-ui.yaml`

## 0.3.8

### Major Changes

- Add `GCP Cloud Run` output
- Add `GCP Cloud Functions` output
- Add `Wavefront` output
- Allow MutualTLS for some outputs
- Add basic auth for Elasticsearch output

## 0.3.7

### Minor changes

- Fix table formatting in `README.md`
- Fix `config.azure.eventHub` parameter name in `README.md`

## 0.3.6

### Fixes

- Point to the correct name of aadpodidentity

## 0.3.5

### Minor Changes

- Fix link to Falco in the `README.md`

## 0.3.4

### Major Changes

- Bump up version (`v1.0.1`) of image for `falcosidekick-ui`

## 0.3.3

### Minor Changes

- Set default values for `OpenFaaS` output type parameters
- Documentation fixes

## 0.3.2

### Fixes

- Add config checksum annotation to deployment pods to restart pods on config change
- Fix statsd config options in the secret to make them match the docs

## 0.3.1

### Fixes

- Fix for `s3.bucket`, it should be empty

## 0.3.0

### Major Changes

- Add `AWS S3` output
- Add `GCP Storage` output
- Add `RabbitMQ` output
- Add `OpenFaas` output

## 0.2.9

### Major Changes

- Updated falcosidekick-ui default image version to `v0.2.0`

## 0.2.8

### Fixes

- Fixed to specify `kafka.hostPort` instead of `kafka.url`

## 0.2.7

### Fixes

- Fixed missing hyphen in podidentity

## 0.2.6

### Fixes

- Fix repo and tag for `ui` image
## 0.2.5

### Major Changes

- Add `CLOUDEVENTS` output
- Add `WEBUI` output

### Minor Changes

- Add details about syntax for adding `custom_fields`

## 0.2.4

### Minor Changes

- Add `DATADOG_HOST` to secret

## 0.2.3

### Minor Changes

- Allow additional pod annotations
- Remove namespace condition in aad-pod-identity

## 0.2.2

### Major Changes

- Add `Kubeless` output

## 0.2.1

### Major Changes

- Add `PagerDuty` output

## 0.2.0

### Major Changes

- Add option to use an existing secret
- Add option to add extra environment variables
- Add `Stan` output

### Minor Changes

- Use the existing secret resource and add all possible variables there, making the deployment resource simpler to read and less error-prone

## 0.1.37

### Minor Changes

- Fix aws keys not being added to the deployment

## 0.1.36

### Minor Changes

- Fix helm test

## 0.1.35

### Major Changes

- Update image to use release 2.19.1

## 0.1.34

- New outputs can be set: `Kafka`, `AWS CloudWatchLogs`

## 0.1.33

### Minor Changes

- Fixed GCP Pub/Sub values references in `deployment.yaml`

## 0.1.32

### Major Changes

- Support release namespace configuration

## 0.1.31

### Major Changes

- New outputs can be set: `Googlechat`

## 0.1.30

### Major changes

- New output can be set: `GCP PubSub`
- Custom Headers can be set for `Webhook` output
- Fix typo `aipKey` for OpsGenie output

## 0.1.29

- Fix falcosidekick configuration table to use full path of configuration properties in the `README.md`

## 0.1.28

### Major changes

- New output can be set: `AWS SNS`
- Metrics in `prometheus` format can be scraped from the `/metrics` URI

## 0.1.27

### Minor Changes

- Replace extensions apiGroup/apiVersion because of deprecation

## 0.1.26

### Minor Changes

- Allow the creation of a PodSecurityPolicy, disabled by default

## 0.1.25

### Minor Changes

- Allow the configuration of the Pod securityContext, set default runAsUser and fsGroup values

## 0.1.24

### Minor Changes

- Remove duplicated `webhook` block in `values.yaml`

## 0.1.23

- fake release for triggering CI for auto-publishing

## 0.1.22

### Major Changes

- Add `imagePullSecrets`

## 0.1.21

### Minor Changes

- Fix `Azure Identity` case sensitive value

## 0.1.20

### Major Changes

- New outputs can be set: `Azure Event Hubs`, `Discord`

### Minor Changes

- Fix wrong port name in output

## 0.1.17

### Major Changes

- New outputs can be set: `Mattermost`, `Rocketchat`

## 0.1.11

### Major Changes

- Add Pod Security Policy

## 0.1.11

### Minor Changes

- Fix wrong value reference for Elasticsearch output in deployment.yaml

## 0.1.10

### Major Changes

- New output can be set: `DogStatsD`

## 0.1.9

### Major Changes

- New output can be set: `StatsD`

## 0.1.7

### Major Changes

- New output can be set: `Opsgenie`

## 0.1.6

### Major Changes

- New output can be set: `NATS`

## 0.1.5

### Major Changes

- `Falcosidekick` and its chart are now part of the `falcosecurity` organization

## 0.1.4

### Minor Changes

- Use a more recent image with `Golang` 1.14

## 0.1.3

### Major Changes

- New output can be set: `Loki`

## 0.1.2

### Major Changes

- New output can be set: `SMTP`

## 0.1.1

### Major Changes

- New outputs can be set: `AWS Lambda`, `AWS SQS`, `Teams`

## 0.1.0

### Major Changes

- Initial release of Falcosidekick Helm Chart
16
falco/charts/falcosidekick/Chart.yaml
Normal file
@ -0,0 +1,16 @@
apiVersion: v1
appVersion: 2.28.0
description: Connect Falco to your ecosystem
home: https://github.com/falcosecurity/falcosidekick
icon: https://raw.githubusercontent.com/falcosecurity/falcosidekick/master/imgs/falcosidekick_color.png
keywords:
- monitoring
- security
- alerting
maintainers:
- email: cncf-falco-dev@lists.cncf.io
  name: Issif
name: falcosidekick
sources:
- https://github.com/falcosecurity/falcosidekick
version: 0.7.15
187
falco/charts/falcosidekick/README.gotmpl
Normal file
@ -0,0 +1,187 @@
# Falcosidekick

![falcosidekick](https://github.com/falcosecurity/falcosidekick/raw/master/imgs/falcosidekick_color.png)

   

## Description

A simple daemon for connecting [`Falco`](https://github.com/falcosecurity/falco) to your ecosystem. It takes Falco events and
forwards them to different outputs in a fan-out way.

It works as a single endpoint for as many `Falco` instances as you want:

![falcosidekick](https://github.com/falcosecurity/falcosidekick/raw/master/imgs/falcosidekick.png)

## Outputs

`Falcosidekick` manages a large variety of outputs with different purposes.

> **Note**
> Follow the links to get the configuration of each output.

### Chat

- [**Slack**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/slack.md)
- [**Rocketchat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rocketchat.md)
- [**Mattermost**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mattermost.md)
- [**Teams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/teams.md)
- [**Discord**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/discord.md)
- [**Google Chat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/googlechat.md)
- [**Zoho Cliq**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cliq.md)
- [**Telegram**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/telegram.md)

### Metrics / Observability

- [**Datadog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/datadog.md)
- [**Influxdb**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/influxdb.md)
- [**StatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/statsd.md) (for monitoring of `falcosidekick`)
- [**DogStatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dogstatsd.md) (for monitoring of `falcosidekick`)
- [**Prometheus**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/prometheus.md) (for both events and monitoring of `falcosidekick`)
- [**Wavefront**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/wavefront.md)
- [**Spyderbat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/spyderbat.md)
- [**TimescaleDB**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/timescaledb.md)
- [**Dynatrace**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dynatrace.md)

### Alerting

- [**AlertManager**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/alertmanager.md)
- [**Opsgenie**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/opsgenie.md)
- [**PagerDuty**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/pagerduty.md)
- [**Grafana OnCall**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana_oncall.md)

### Logs

- [**Elasticsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/elasticsearch.md)
- [**Loki**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/loki.md)
- [**AWS CloudWatchLogs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_cloudwatch_logs.md)
- [**Grafana**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana.md)
- [**Syslog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/syslog.md)
- [**Zincsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs//zincsearch.md)
- [**OpenObserve**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openobserve.md)

### Object Storage

- [**AWS S3**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_s3.md)
- [**GCP Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_storage.md)
- [**Yandex S3 Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_s3.md)

### FaaS / Serverless

- [**AWS Lambda**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_lambda.md)
- [**GCP Cloud Run**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_run.md)
- [**GCP Cloud Functions**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_functions.md)
- [**Fission**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/fission.md)
- [**KNative (CloudEvents)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cloudevents.md)
- [**Kubeless**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kubeless.md)
- [**OpenFaaS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openfaas.md)
- [**Tekton**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/tekton.md)

### Message queue / Streaming

- [**NATS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nats.md)
- [**STAN (NATS Streaming)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/stan.md)
- [**AWS SQS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sqs.md)
- [**AWS SNS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sns.md)
- [**AWS Kinesis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_kinesis.md)
- [**GCP PubSub**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_pub_sub.md)
- [**Apache Kafka**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafka.md)
- [**Kafka Rest Proxy**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafkarest.md)
- [**RabbitMQ**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rabbitmq.md)
- [**Azure Event Hubs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/azure_event_hub.md)
- [**Yandex Data Streams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_datastreams.md)
- [**MQTT**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mqtt.md)
- [**Gotify**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gotify.md)

### Email

- [**SMTP**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/smtp.md)

### Database

- [**Redis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/redis.md)

### Web

- [**Webhook**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/webhook.md)
- [**Node-RED**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nodered.md)
- [**WebUI (Falcosidekick UI)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md)

### SIEM

- [**AWS Security Lake**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_security_lake.md)

### Workflow

- [**n8n**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/n8n.md)

### Other

- [**Policy Report**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/policy_report.md)

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

### Install Falco + Falcosidekick + Falcosidekick-ui

To install the chart with the release name `falcosidekick`, run:

```bash
helm install falcosidekick falcosecurity/falcosidekick --set webui.enabled=true
```

### With Helm chart of Falco

`Falco`, `Falcosidekick` and `Falcosidekick-ui` can be installed together in one command. All values to configure `Falcosidekick` have to be
prefixed with `falcosidekick.`.

```bash
helm install falco falcosecurity/falco --set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true
```
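Equivalently, the same `falcosidekick.`-prefixed settings can be kept in a values file passed to the Falco chart. A sketch, assuming a Slack output (the webhook URL is a placeholder, and `config.slack.webhookurl` is taken from the Falcosidekick output docs linked above):

```yaml
# values.yaml, applied with: helm install falco falcosecurity/falco -f values.yaml
falcosidekick:
  enabled: true          # deploy Falcosidekick alongside Falco
  webui:
    enabled: true        # also deploy Falcosidekick-UI
  config:
    slack:
      webhookurl: "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"  # placeholder
```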
After a few seconds, Falcosidekick should be running.

> **Tip**: List all releases using `helm list`. A release is a name used to track a specific deployment.

## Minimum Kubernetes version

The minimum Kubernetes version required is 1.17.x.

## Uninstalling the Chart

To uninstall the `falcosidekick` deployment:

```bash
helm uninstall falcosidekick
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the main configurable parameters of the Falcosidekick chart and their default values. See `values.yaml` for the full list.

{{ template "chart.valuesSection" . }}

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.

> **Tip**: You can use the default [values.yaml](values.yaml)

## Metrics

A `prometheus` endpoint can be scraped at `/metrics`.
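For a Prometheus instance not managed by prometheus-operator, a minimal scrape job against the Falcosidekick service could look like the following sketch. The target name and namespace are assumptions for a release named `falcosidekick` installed in the `default` namespace; 2801 is Falcosidekick's default listen port:

```yaml
# prometheus.yml fragment (illustrative, not part of the chart)
scrape_configs:
  - job_name: falcosidekick
    metrics_path: /metrics
    static_configs:
      - targets: ["falcosidekick.default.svc:2801"]
```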
## Access Falcosidekick UI through an Ingress and a subpath

You may want to access the [`WebUI (Falcosidekick UI)`](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md) dashboard not from `/` but from `/subpath` and use an Ingress. Here's an example of annotations to add to the Ingress for the `nginx-ingress controller`:

```yaml
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
```
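Putting those annotations together, a complete Ingress serving the UI under `/subpath` could look like the sketch below. The host, namespace, and ingress class are illustrative; the backend service name and port 2802 (Falcosidekick-UI's default) are assumptions for a release named `falcosidekick`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: falcosidekick-ui
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: falco.example.com          # illustrative host
      http:
        paths:
          - path: /subpath(/|$)(.*)    # captured group $2 is rewritten to /
            pathType: ImplementationSpecific
            backend:
              service:
                name: falcosidekick-ui # assumed service name for this release
                port:
                  number: 2802
```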
655
falco/charts/falcosidekick/README.md
Normal file
@ -0,0 +1,655 @@
|
||||
# Falcosidekick
|
||||
|
||||

|
||||
|
||||
   
|
||||
|
||||
## Description
|
||||
|
||||
A simple daemon for connecting [`Falco`](https://github.com/falcosecurity/falco) to your ecossytem. It takes a `Falco`'s events and
|
||||
forward them to different outputs in a fan-out way.
|
||||
|
||||
It works as a single endpoint for as many as you want `Falco` instances :
|
||||
|
||||


## Outputs

`Falcosidekick` manages a large variety of outputs with different purposes.

> **Note**
> Follow the links to get the configuration of each output.

### Chat

- [**Slack**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/slack.md)
- [**Rocketchat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rocketchat.md)
- [**Mattermost**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mattermost.md)
- [**Teams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/teams.md)
- [**Discord**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/discord.md)
- [**Google Chat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/googlechat.md)
- [**Zoho Cliq**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cliq.md)
- [**Telegram**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/telegram.md)

### Metrics / Observability

- [**Datadog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/datadog.md)
- [**Influxdb**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/influxdb.md)
- [**StatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/statsd.md) (for monitoring of `falcosidekick`)
- [**DogStatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dogstatsd.md) (for monitoring of `falcosidekick`)
- [**Prometheus**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/prometheus.md) (for both events and monitoring of `falcosidekick`)
- [**Wavefront**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/wavefront.md)
- [**Spyderbat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/spyderbat.md)
- [**TimescaleDB**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/timescaledb.md)
- [**Dynatrace**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dynatrace.md)

### Alerting

- [**AlertManager**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/alertmanager.md)
- [**Opsgenie**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/opsgenie.md)
- [**PagerDuty**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/pagerduty.md)
- [**Grafana OnCall**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana_oncall.md)

### Logs

- [**Elasticsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/elasticsearch.md)
- [**Loki**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/loki.md)
- [**AWS CloudWatchLogs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_cloudwatch_logs.md)
- [**Grafana**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana.md)
- [**Syslog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/syslog.md)
- [**Zincsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/zincsearch.md)
- [**OpenObserve**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openobserve.md)

### Object Storage

- [**AWS S3**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_s3.md)
- [**GCP Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_storage.md)
- [**Yandex S3 Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_s3.md)

### FaaS / Serverless

- [**AWS Lambda**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_lambda.md)
- [**GCP Cloud Run**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_run.md)
- [**GCP Cloud Functions**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_functions.md)
- [**Fission**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/fission.md)
- [**KNative (CloudEvents)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cloudevents.md)
- [**Kubeless**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kubeless.md)
- [**OpenFaaS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openfaas.md)
- [**Tekton**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/tekton.md)

### Message queue / Streaming

- [**NATS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nats.md)
- [**STAN (NATS Streaming)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/stan.md)
- [**AWS SQS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sqs.md)
- [**AWS SNS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sns.md)
- [**AWS Kinesis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_kinesis.md)
- [**GCP PubSub**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_pub_sub.md)
- [**Apache Kafka**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafka.md)
- [**Kafka Rest Proxy**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafkarest.md)
- [**RabbitMQ**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rabbitmq.md)
- [**Azure Event Hubs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/azure_event_hub.md)
- [**Yandex Data Streams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_datastreams.md)
- [**MQTT**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mqtt.md)
- [**Gotify**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gotify.md)

### Email

- [**SMTP**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/smtp.md)

### Database

- [**Redis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/redis.md)

### Web

- [**Webhook**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/webhook.md)
- [**Node-RED**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nodered.md)
- [**WebUI (Falcosidekick UI)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md)

### SIEM

- [**AWS Security Lake**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_security_lake.md)

### Workflow

- [**n8n**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/n8n.md)

### Other

- [**Policy Report**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/policy_report.md)

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

### Install Falco + Falcosidekick + Falcosidekick-ui

To install the chart with the release name `falcosidekick`, run:

```bash
helm install falcosidekick falcosecurity/falcosidekick --set webui.enabled=true
```

### With Helm chart of Falco

`Falco`, `Falcosidekick` and `Falcosidekick-ui` can be installed together with a single command. All values used to configure `Falcosidekick` must be prefixed with `falcosidekick.`.

```bash
helm install falco falcosecurity/falco --set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true
```
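
The two `--set` flags above can equivalently be expressed as a values file passed with `-f`. A minimal sketch (the filename is hypothetical):

```yaml
# values-falco.yaml -- equivalent of the two --set flags above
falcosidekick:
  enabled: true
  webui:
    enabled: true
```

Then install with `helm install falco falcosecurity/falco -f values-falco.yaml`.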

After a few seconds, Falcosidekick should be running.

> **Tip**: List all releases using `helm list`; a release is a name used to track a specific deployment.

## Minimum Kubernetes version

The minimum Kubernetes version required is 1.17.x.

## Uninstalling the Chart

To uninstall the `falcosidekick` deployment:

```bash
helm uninstall falcosidekick
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the main configurable parameters of the Falcosidekick chart and their default values. See `values.yaml` for the full list.
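
For example, to enable the AlertManager output using keys from the table below, you could write a values file such as this sketch (the filename and the AlertManager address are placeholders, not part of the chart):

```yaml
# custom-values.yaml -- hypothetical example enabling the AlertManager output
config:
  alertmanager:
    # setting hostport to a non-empty value enables this output
    hostport: "http://alertmanager.monitoring.svc:9093"
    minimumpriority: "warning"
```

and install with `helm install falcosidekick falcosecurity/falcosidekick -f custom-values.yaml`.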

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity for the Sidekick pods |
| config.alertmanager.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.alertmanager.customseveritymap | string | `""` | comma separated list of tuples composed of a ':' separated Falco priority and Alertmanager severity that is used to override the severity label associated with the priority level of the falco event. Example: debug:value_1,critical:value2. Default mapping: emergency:critical,alert:critical,critical:critical,error:warning,warning:warning,notice:information,informational:information,debug:information. |
| config.alertmanager.dropeventdefaultpriority | string | `"critical"` | default priority of dropped events, values are `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug` |
| config.alertmanager.dropeventthresholds | string | `"10000:critical, 1000:critical, 100:critical, 10:warning, 1:warning"` | comma separated list of priority re-evaluation thresholds of dropped events composed of a ':' separated integer threshold and string priority. Example: `10000:critical, 100:warning, 1:informational` |
| config.alertmanager.endpoint | string | `"/api/v1/alerts"` | alertmanager endpoint on which falcosidekick posts alerts, choices are `"/api/v1/alerts"` or `"/api/v2/alerts"`, default is `"/api/v1/alerts"` |
| config.alertmanager.expireafter | string | `""` | if set to a non-zero value, alert expires after that time in seconds (default: 0) |
| config.alertmanager.extraannotations | string | `""` | comma separated list of annotations composed of a ':' separated name and value that is added to the Alerts. Example: my_annotation_1:my_value_1, my_annotation_1:my_value_2 |
| config.alertmanager.extralabels | string | `""` | comma separated list of labels composed of a ':' separated name and value that is added to the Alerts. Example: my_label_1:my_value_1, my_label_1:my_value_2 |
| config.alertmanager.hostport | string | `""` | AlertManager <http://host:port>, if not `empty`, AlertManager is *enabled* |
| config.alertmanager.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.alertmanager.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.aws.accesskeyid | string | `""` | AWS Access Key Id (optional if you use EC2 Instance Profile) |
| config.aws.checkidentity | bool | `true` | check the identity credentials, set to false for local development |
| config.aws.cloudwatchlogs.loggroup | string | `""` | AWS CloudWatch Logs Group name, if not empty, CloudWatch Logs output is *enabled* |
| config.aws.cloudwatchlogs.logstream | string | `""` | AWS CloudWatch Logs Stream name, if empty, Falcosidekick will try to create a log stream |
| config.aws.cloudwatchlogs.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.aws.externalid | string | `""` | External id for the role to assume (optional if you use EC2 Instance Profile) |
| config.aws.kinesis.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.aws.kinesis.streamname | string | `""` | AWS Kinesis Stream Name, if not empty, Kinesis output is *enabled* |
| config.aws.lambda.functionname | string | `""` | AWS Lambda Function Name, if not empty, AWS Lambda output is *enabled* |
| config.aws.lambda.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.aws.region | string | `""` | AWS Region (optional if you use EC2 Instance Profile) |
| config.aws.rolearn | string | `""` | AWS IAM role ARN for falcosidekick service account to associate with (optional if you use EC2 Instance Profile) |
| config.aws.s3.bucket | string | `""` | AWS S3, bucket name |
| config.aws.s3.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.aws.s3.prefix | string | `""` | AWS S3, name of prefix, keys will have format: s3://<bucket>/<prefix>/YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json |
| config.aws.secretaccesskey | string | `""` | AWS Secret Access Key (optional if you use EC2 Instance Profile) |
| config.aws.securitylake.accountid | string | `""` | Account ID |
| config.aws.securitylake.batchsize | int | `1000` | Max number of events by parquet file |
| config.aws.securitylake.bucket | string | `""` | Bucket for AWS SecurityLake data, if not empty, AWS SecurityLake output is enabled |
| config.aws.securitylake.interval | int | `5` | Time in minutes between two puts to S3 (must be between 5 and 60min) |
| config.aws.securitylake.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.aws.securitylake.prefix | string | `""` | Prefix for keys |
| config.aws.securitylake.region | string | `""` | Bucket Region |
| config.aws.sns.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.aws.sns.rawjson | bool | `false` | Send RawJSON from `falco` or parse it to AWS SNS |
| config.aws.sns.topicarn | string | `""` | AWS SNS TopicARN, if not empty, AWS SNS output is *enabled* |
| config.aws.sqs.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.aws.sqs.url | string | `""` | AWS SQS Queue URL, if not empty, AWS SQS output is *enabled* |
| config.aws.useirsa | bool | `true` | Use IRSA, if true, the rolearn value will be used to set the ServiceAccount annotations and not the env var |
| config.azure.eventHub.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.azure.eventHub.name | string | `""` | Name of the Hub, if not empty, EventHub is *enabled* |
| config.azure.eventHub.namespace | string | `""` | Name of the space the Hub is in |
| config.azure.podIdentityClientID | string | `""` | Azure Identity Client ID |
| config.azure.podIdentityName | string | `""` | Azure Identity name |
| config.azure.resourceGroupName | string | `""` | Azure Resource Group name |
| config.azure.subscriptionID | string | `""` | Azure Subscription ID |
| config.bracketreplacer | string | `""` | if not empty, the brackets in keys of Output Fields are replaced |
| config.cliq.icon | string | `""` | Cliq icon (avatar) |
| config.cliq.messageformat | string | `""` | a Go template to format Cliq Text above Attachment, displayed in addition to the output from `cliq.outputformat`. If empty, no Text is displayed before sections. |
| config.cliq.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.cliq.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Cliq), `fields` (only fields are displayed in Cliq) |
| config.cliq.useemoji | bool | `true` | Prefix message text with an emoji |
| config.cliq.webhookurl | string | `""` | Zoho Cliq Channel URL (ex: <https://cliq.zoho.eu/api/v2/channelsbyname/XXXX/message?zapikey=YYYY>), if not empty, Cliq Chat output is *enabled* |
| config.cloudevents.address | string | `""` | CloudEvents consumer http address, if not empty, CloudEvents output is *enabled* |
| config.cloudevents.extension | string | `""` | Extensions to add in the outbound Event, useful for routing |
| config.cloudevents.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.customfields | string | `""` | a list of escaped comma separated custom fields to add to falco events, syntax is "key:value\,key:value" |
| config.datadog.apikey | string | `""` | Datadog API Key, if not `empty`, Datadog output is *enabled* |
| config.datadog.host | string | `""` | Datadog host. Override if you are on the Datadog EU site. Defaults to the American site with "<https://api.datadoghq.com>" |
| config.datadog.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.debug | bool | `false` | DEBUG environment variable |
| config.discord.icon | string | `""` | Discord icon (avatar) |
| config.discord.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.discord.webhookurl | string | `""` | Discord WebhookURL (ex: <https://discord.com/api/webhooks/xxxxxxxxxx>...), if not empty, Discord output is *enabled* |
| config.dogstatsd.forwarder | string | `""` | The address for the DogStatsD forwarder, in the form <http://host:port>, if not empty DogStatsD is *enabled* |
| config.dogstatsd.namespace | string | `"falcosidekick."` | A prefix for all metrics |
| config.dogstatsd.tags | string | `""` | A comma-separated list of tags to add to all metrics |
| config.elasticsearch.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.elasticsearch.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
| config.elasticsearch.hostport | string | `""` | Elasticsearch <http://host:port>, if not `empty`, Elasticsearch is *enabled* |
| config.elasticsearch.index | string | `"falco"` | Elasticsearch index |
| config.elasticsearch.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.elasticsearch.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.elasticsearch.password | string | `""` | use this password to authenticate to Elasticsearch if the password is not empty |
| config.elasticsearch.suffix | string | `"daily"` | |
| config.elasticsearch.type | string | `"_doc"` | Elasticsearch document type |
| config.elasticsearch.username | string | `""` | use this username to authenticate to Elasticsearch if the username is not empty |
| config.existingSecret | string | `""` | Existing secret with configuration |
| config.extraArgs | list | `[]` | Extra command-line arguments |
| config.extraEnv | list | `[]` | Extra environment variables |
| config.fission.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.fission.function | string | `""` | Name of Fission function, if not empty, Fission is enabled |
| config.fission.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.fission.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.fission.routernamespace | string | `"fission"` | Namespace of Fission Router, "fission" (default) |
| config.fission.routerport | int | `80` | Port of service of Fission Router |
| config.fission.routerservice | string | `"router"` | Service of Fission Router, "router" (default) |
| config.gcp.cloudfunctions.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.gcp.cloudfunctions.name | string | `""` | The name of the Cloud Function which is in form `projects/<project_id>/locations/<region>/functions/<function_name>` |
| config.gcp.cloudrun.endpoint | string | `""` | the URL of the Cloud Run function |
| config.gcp.cloudrun.jwt | string | `""` | JWT for the private access to Cloud Run function |
| config.gcp.cloudrun.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.gcp.credentials | string | `""` | Base64 encoded JSON key file for the GCP service account |
| config.gcp.pubsub.customattributes | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
| config.gcp.pubsub.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.gcp.pubsub.projectid | string | `""` | The GCP Project ID containing the Pub/Sub Topic |
| config.gcp.pubsub.topic | string | `""` | Name of the Pub/Sub topic |
| config.gcp.storage.bucket | string | `""` | The name of the bucket |
| config.gcp.storage.minimumpriority | string | `"debug"` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.gcp.storage.prefix | string | `""` | Name of prefix, keys will have format: gs://<bucket>/<prefix>/YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json |
| config.googlechat.messageformat | string | `""` | a Go template to format Google Chat Text above Attachment, displayed in addition to the output from `config.googlechat.outputformat`. If empty, no Text is displayed before Attachment |
| config.googlechat.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.googlechat.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Google chat) |
| config.googlechat.webhookurl | string | `""` | Google Chat Webhook URL (ex: <https://chat.googleapis.com/v1/spaces/XXXXXX/YYYYYY>), if not `empty`, Google Chat output is *enabled* |
| config.gotify.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.gotify.format | string | `"markdown"` | Format of the messages (plaintext, markdown, json) |
| config.gotify.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, Gotify output is enabled |
| config.gotify.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.gotify.token | string | `""` | API Token |
| config.grafana.allfieldsastags | bool | `false` | if true, all custom fields are added as tags (default: false) |
| config.grafana.apikey | string | `""` | API Key to authenticate to Grafana, if not empty, Grafana output is *enabled* |
| config.grafana.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.grafana.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
| config.grafana.dashboardid | string | `""` | annotations are scoped to a specific dashboard. Optional. |
| config.grafana.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, Grafana output is *enabled* |
| config.grafana.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.grafana.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.grafana.panelid | string | `""` | annotations are scoped to a specific panel. Optional. |
| config.grafanaoncall.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.grafanaoncall.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
| config.grafanaoncall.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.grafanaoncall.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.grafanaoncall.webhookurl | string | `""` | if not empty, Grafana OnCall output is enabled |
| config.influxdb.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.influxdb.database | string | `"falco"` | Influxdb database |
| config.influxdb.hostport | string | `""` | Influxdb <http://host:port>, if not `empty`, Influxdb is *enabled* |
| config.influxdb.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.influxdb.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.influxdb.organization | string | `""` | Influxdb organization |
| config.influxdb.password | string | `""` | Password to use if auth is *enabled* in Influxdb |
| config.influxdb.precision | string | `"ns"` | write precision |
| config.influxdb.token | string | `""` | API token to use if auth is enabled in Influxdb (disables user and password) |
| config.influxdb.user | string | `""` | User to use if auth is *enabled* in Influxdb |
| config.kafka.async | bool | `false` | produce messages without blocking |
|
||||
| config.kafka.balancer | string | `"round_robin"` | partition balancing strategy when producing |
|
||||
| config.kafka.clientid | string | `""` | specify a client.id when communicating with the broker for tracing |
|
||||
| config.kafka.compression | string | `"NONE"` | enable message compression using this algorithm, no compression (GZIP|SNAPPY|LZ4|ZSTD|NONE) |
|
||||
| config.kafka.hostport | string | `""` | comma separated list of Apache Kafka bootstrap nodes for establishing the initial connection to the cluster (ex: localhost:9092,localhost:9093). Defaults to port 9092 if no port is specified after the domain, if not empty, Kafka output is *enabled* |
|
||||
| config.kafka.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
|
||||
| config.kafka.password | string | `""` | use this password to authenticate to Kafka via SASL |
|
||||
| config.kafka.requiredacks | string | `"NONE"` | number of acknowledges from partition replicas required before receiving |
|
||||
| config.kafka.sasl | string | `""` | SASL authentication mechanism, if empty, no authentication (PLAIN|SCRAM_SHA256|SCRAM_SHA512) |
|
||||
| config.kafka.tls | bool | `false` | Use TLS for the connections |
|
||||
| config.kafka.topic | string | `""` | Name of the topic, if not empty, Kafka output is enabled |
|
||||
| config.kafka.topiccreation | bool | `false` | auto create the topic if it doesn't exist |
|
||||
| config.kafka.username | string | `""` | use this username to authenticate to Kafka via SASL |
|
||||
| config.kafkarest.address | string | `""` | The full URL to the topic (example "http://kafkarest:8082/topics/test") |
|
||||
| config.kafkarest.checkcert | bool | `true` | check if ssl certificate of the output is valid |
|
||||
| config.kafkarest.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
|
||||
| config.kafkarest.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
|
||||
| config.kafkarest.version | int | `2` | Kafka Rest Proxy API version 2|1 (default: 2) |
|
||||
| config.kubeless.checkcert | bool | `true` | check if ssl certificate of the output is valid |
|
||||
| config.kubeless.function | string | `""` | Name of Kubeless function, if not empty, EventHub is *enabled* |
|
||||
| config.kubeless.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
|
||||
| config.kubeless.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
|
||||
| config.kubeless.namespace | string | `""` | Namespace of Kubeless function (mandatory) |
|
||||
| config.kubeless.port | int | `8080` | Port of service of Kubeless function. Default is `8080` |
|
||||
| config.loki.apikey | string | `""` | API Key for Grafana Logs |
|
||||
| config.loki.checkcert | bool | `true` | check if ssl certificate of the output is valid |
|
||||
| config.loki.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
|
||||
| config.loki.endpoint | string | `"/loki/api/v1/push"` | Loki endpoint URL path, more info: <https://grafana.com/docs/loki/latest/api/#post-apiprompush> |
| config.loki.extralabels | string | `""` | comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields |
| config.loki.hostport | string | `""` | Loki <http://host:port>, if not `empty`, Loki is *enabled* |
| config.loki.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.loki.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.loki.tenant | string | `""` | Loki tenant, if not `empty`, Loki tenant is *enabled* |
| config.loki.user | string | `""` | user for Grafana Logs |
| config.mattermost.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.mattermost.footer | string | `""` | Mattermost Footer |
| config.mattermost.icon | string | `""` | Mattermost icon (avatar) |
| config.mattermost.messageformat | string | `""` | a Go template to format Mattermost Text above Attachment, displayed in addition to the output from `slack.outputformat`. If empty, no Text is displayed before Attachment |
| config.mattermost.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.mattermost.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.mattermost.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Mattermost), `fields` (only fields are displayed in Mattermost) |
| config.mattermost.username | string | `""` | Mattermost username |
| config.mattermost.webhookurl | string | `""` | Mattermost Webhook URL (ex: <https://XXXX/hooks/YYYY>), if not `empty`, Mattermost output is *enabled* |
| config.mqtt.broker | string | `""` | Broker address, can start with tcp:// or ssl://, if not empty, MQTT output is enabled |
| config.mqtt.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.mqtt.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.mqtt.password | string | `""` | Password if the authentication is enabled in the broker |
| config.mqtt.qos | int | `0` | QOS for messages |
| config.mqtt.retained | bool | `false` | If true, messages are retained |
| config.mqtt.topic | string | `"falco/events"` | Topic for messages |
| config.mqtt.user | string | `""` | User if the authentication is enabled in the broker |
| config.mutualtlsclient.cacertfile | string | `""` | CA certification file for server certification for mutual TLS authentication, takes priority over mutualtlsfilespath if not empty |
| config.mutualtlsclient.certfile | string | `""` | client certification file for mutual TLS client certification, takes priority over mutualtlsfilespath if not empty |
| config.mutualtlsclient.keyfile | string | `""` | client key file for mutual TLS client certification, takes priority over mutualtlsfilespath if not empty |
| config.mutualtlsfilespath | string | `"/etc/certs"` | folder which will be used to store client.crt, client.key and ca.crt files for mutual tls for outputs, will be deprecated in the future (default: "/etc/certs") |
| config.n8n.address | string | `""` | N8N address, if not empty, N8N output is enabled |
| config.n8n.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.n8n.headerauthname | string | `""` | Header Auth Key to authenticate with N8N |
| config.n8n.headerauthvalue | string | `""` | Header Auth Value to authenticate with N8N |
| config.n8n.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.n8n.password | string | `""` | Password to authenticate with N8N in basic auth |
| config.n8n.user | string | `""` | Username to authenticate with N8N in basic auth |
| config.nats.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.nats.hostport | string | `""` | NATS "nats://host:port", if not `empty`, NATS is *enabled* |
| config.nats.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.nats.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.nodered.address | string | `""` | Node-RED address, if not empty, Node-RED output is enabled |
| config.nodered.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.nodered.customheaders | string | `""` | Custom headers to add in POST, useful for Authentication, syntax is "key:value\,key:value" |
| config.nodered.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.nodered.password | string | `""` | Password if Basic Auth is enabled for 'http in' node in Node-RED |
| config.nodered.user | string | `""` | User if Basic Auth is enabled for 'http in' node in Node-RED |
| config.openfaas.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.openfaas.functionname | string | `""` | Name of OpenFaaS function, if not empty, OpenFaaS is *enabled* |
| config.openfaas.functionnamespace | string | `"openfaas-fn"` | Namespace of OpenFaaS function, "openfaas-fn" (default) |
| config.openfaas.gatewaynamespace | string | `"openfaas"` | Namespace of OpenFaaS Gateway, "openfaas" (default) |
| config.openfaas.gatewayport | int | `8080` | Port of service of OpenFaaS Gateway Default is `8080` |
| config.openfaas.gatewayservice | string | `"gateway"` | Service of OpenFaaS Gateway, "gateway" (default) |
| config.openfaas.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.openfaas.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.openobserve.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.openobserve.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
| config.openobserve.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, OpenObserve output is enabled |
| config.openobserve.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.openobserve.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.openobserve.organizationname | string | `"default"` | Organization name |
| config.openobserve.password | string | `""` | use this password to authenticate to OpenObserve if the password is not empty |
| config.openobserve.streamname | string | `"falco"` | Stream name |
| config.openobserve.username | string | `""` | use this username to authenticate to OpenObserve if the username is not empty |
| config.opsgenie.apikey | string | `""` | Opsgenie API Key, if not empty, Opsgenie output is *enabled* |
| config.opsgenie.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.opsgenie.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.opsgenie.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.opsgenie.region | string | `""` | region of your domain (`us` or `eu`) |
| config.pagerduty.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.pagerduty.region | string | `"us"` | Pagerduty Region, can be 'us' or 'eu' |
| config.pagerduty.routingkey | string | `""` | Pagerduty Routing Key, if not empty, Pagerduty output is *enabled* |
| config.policyreport.enabled | bool | `false` | if true; policyreport output is *enabled* |
| config.policyreport.kubeconfig | string | `"~/.kube/config"` | Kubeconfig file to use (only if falcosidekick is running outside the cluster) |
| config.policyreport.maxevents | int | `1000` | the max number of events that can be in a policyreport |
| config.policyreport.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.policyreport.prunebypriority | bool | `false` | if true; the events with lowest severity are pruned first, in FIFO order |
| config.prometheus.extralabels | string | `""` | comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields |
| config.rabbitmq.minimumpriority | string | `"debug"` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.rabbitmq.queue | string | `""` | Rabbitmq Queue name |
| config.rabbitmq.url | string | `""` | Rabbitmq URL, if not empty, Rabbitmq output is *enabled* |
| config.redis.address | string | `""` | Redis address, if not empty, Redis output is enabled |
| config.redis.database | int | `0` | Redis database number |
| config.redis.key | string | `"falco"` | Redis storage key name for hashmap, list |
| config.redis.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.redis.password | string | `""` | Password to authenticate with Redis |
| config.redis.storagetype | string | `"list"` | Redis storage type: hashmap or list |
| config.rocketchat.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.rocketchat.icon | string | `""` | Rocketchat icon (avatar) |
| config.rocketchat.messageformat | string | `""` | a Go template to format Rocketchat Text above Attachment, displayed in addition to the output from `slack.outputformat`. If empty, no Text is displayed before Attachment |
| config.rocketchat.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.rocketchat.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.rocketchat.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Rocketchat), `fields` (only fields are displayed in Rocketchat) |
| config.rocketchat.username | string | `""` | Rocketchat username |
| config.rocketchat.webhookurl | string | `""` | Rocketchat Webhook URL (ex: <https://XXXX/hooks/YYYY>), if not `empty`, Rocketchat output is *enabled* |
| config.slack.channel | string | `""` | Slack channel (optional) |
| config.slack.footer | string | `""` | Slack Footer |
| config.slack.icon | string | `""` | Slack icon (avatar) |
| config.slack.messageformat | string | `""` | a Go template to format Slack Text above Attachment, displayed in addition to the output from `slack.outputformat`. If empty, no Text is displayed before Attachment |
| config.slack.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.slack.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Slack), `fields` (only fields are displayed in Slack) |
| config.slack.username | string | `""` | Slack username |
| config.slack.webhookurl | string | `""` | Slack Webhook URL (ex: <https://hooks.slack.com/services/XXXX/YYYY/ZZZZ>), if not `empty`, Slack output is *enabled* |
| config.smtp.authmechanism | string | `"plain"` | SASL Mechanisms : plain, oauthbearer, external, anonymous or "" (disable SASL) |
| config.smtp.from | string | `""` | Sender address (mandatory if SMTP output is *enabled*) |
| config.smtp.hostport | string | `""` | "host:port" address of SMTP server, if not empty, SMTP output is *enabled* |
| config.smtp.identity | string | `""` | identity string for Plain and External Mechanisms |
| config.smtp.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.smtp.outputformat | string | `"html"` | html, text |
| config.smtp.password | string | `""` | password to access SMTP server |
| config.smtp.tls | bool | `true` | use TLS connection (true/false) |
| config.smtp.to | string | `""` | comma-separated list of Recipient addresses, can't be empty (mandatory if SMTP output is *enabled*) |
| config.smtp.token | string | `""` | OAuthBearer token for OAuthBearer Mechanism |
| config.smtp.trace | string | `""` | trace string for Anonymous Mechanism |
| config.smtp.user | string | `""` | user to access SMTP server |
| config.spyderbat.apikey | string | `""` | Spyderbat API key with access to the organization |
| config.spyderbat.apiurl | string | `"https://api.spyderbat.com"` | Spyderbat API url |
| config.spyderbat.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.spyderbat.orguid | string | `""` | Organization to send output to, if not empty, Spyderbat output is enabled |
| config.spyderbat.source | string | `"falcosidekick"` | Spyderbat source ID, max 32 characters |
| config.spyderbat.sourcedescription | string | `""` | Spyderbat source description and display name if not empty, max 256 characters |
| config.stan.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.stan.clientid | string | `""` | Client ID, if not empty, STAN output is *enabled* |
| config.stan.clusterid | string | `""` | Cluster name, if not empty, STAN output is *enabled* |
| config.stan.hostport | string | `""` | Stan nats://{domain or ip}:{port}, if not empty, STAN output is *enabled* |
| config.stan.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.stan.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.statsd.forwarder | string | `""` | The address for the StatsD forwarder, in the form <http://host:port>, if not empty StatsD is *enabled* |
| config.statsd.namespace | string | `"falcosidekick."` | A prefix for all metrics |
| config.syslog.format | string | `"json"` | Syslog payload format. It can be either "json" or "cef" |
| config.syslog.host | string | `""` | Syslog Host, if not empty, Syslog output is *enabled* |
| config.syslog.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.syslog.port | string | `""` | Syslog endpoint port number |
| config.syslog.protocol | string | `"tcp"` | Syslog transport protocol. It can be either "tcp" or "udp" |
| config.teams.activityimage | string | `""` | Teams section image |
| config.teams.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.teams.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Teams), `facts` (only facts are displayed in Teams) |
| config.teams.webhookurl | string | `""` | Teams Webhook URL (ex: <https://outlook.office.com/webhook/XXXXXX/IncomingWebhook/YYYYYY>), if not `empty`, Teams output is *enabled* |
| config.tekton.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.tekton.eventlistener | string | `""` | EventListener address, if not empty, Tekton output is enabled |
| config.tekton.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.telegram.chatid | string | `""` | Telegram identifier of the shared chat |
| config.telegram.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.telegram.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.telegram.token | string | `""` | Telegram bot authentication token |
| config.templatedfields | string | `""` | a list of escaped comma separated Go templated fields to add to falco events, syntax is "key:template\,key:template" |
| config.timescaledb.database | string | `""` | TimescaleDB database used |
| config.timescaledb.host | string | `""` | TimescaleDB host, if not empty, TimescaleDB output is enabled |
| config.timescaledb.hypertablename | string | `"falco_events"` | Hypertable to store data events (default: falco_events) See TimescaleDB setup for more info |
| config.timescaledb.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.timescaledb.password | string | `"postgres"` | Password to authenticate with TimescaleDB |
| config.timescaledb.port | int | `5432` | TimescaleDB port (default: 5432) |
| config.timescaledb.user | string | `"postgres"` | Username to authenticate with TimescaleDB |
| config.tlsserver.cacertfile | string | `"/etc/certs/server/ca.crt"` | CA certification file path for client certification if mutualtls is true |
| config.tlsserver.cacrt | string | `""` | |
| config.tlsserver.certfile | string | `"/etc/certs/server/server.crt"` | server certification file path for TLS Server |
| config.tlsserver.deploy | bool | `false` | if true TLS server will be deployed instead of HTTP |
| config.tlsserver.existingSecret | string | `""` | existing secret with server.crt, server.key and ca.crt files for TLS Server |
| config.tlsserver.keyfile | string | `"/etc/certs/server/server.key"` | server key file path for TLS Server |
| config.tlsserver.mutualtls | bool | `false` | if true mutual TLS server will be deployed instead of TLS, deploy also has to be true |
| config.tlsserver.notlspaths | string | `"/ping"` | a comma separated list of endpoints, if not empty, and tlsserver.deploy is true, a separate http server will be deployed for the specified endpoints (/ping endpoint needs to be notls for Kubernetes to be able to perform the healthchecks) |
| config.tlsserver.notlsport | int | `2810` | port to serve http server serving selected endpoints |
| config.tlsserver.servercrt | string | `""` | server.crt file for TLS Server |
| config.tlsserver.serverkey | string | `""` | server.key file for TLS Server |
| config.wavefront.batchsize | int | `10000` | Wavefront batch size. If empty uses the default 10000. Only used when endpointtype is 'direct' |
| config.wavefront.endpointhost | string | `""` | Wavefront endpoint address (only the host). If not empty, with endpointhost, Wavefront output is *enabled* |
| config.wavefront.endpointmetricport | int | `2878` | Port to send metrics. Only used when endpointtype is 'proxy' |
| config.wavefront.endpointtoken | string | `""` | Wavefront token. Must be used only when endpointtype is 'direct' |
| config.wavefront.endpointtype | string | `""` | Wavefront endpoint type, must be 'direct' or 'proxy'. If not empty, with endpointhost, Wavefront output is *enabled* |
| config.wavefront.flushintervalseconds | int | `1` | Wavefront flush interval in seconds. Defaults to 1 |
| config.wavefront.metricname | string | `"falco.alert"` | Metric to be created in Wavefront. Defaults to falco.alert |
| config.wavefront.minimumpriority | string | `"debug"` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.webhook.address | string | `""` | Webhook address, if not empty, Webhook output is *enabled* |
| config.webhook.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.webhook.customHeaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value\,key:value" |
| config.webhook.method | string | `"POST"` | HTTP method: POST or PUT |
| config.webhook.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.webhook.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
| config.yandex.accesskeyid | string | `""` | yandex access key |
| config.yandex.datastreams.endpoint | string | `""` | yandex data streams endpoint (default: https://yds.serverless.yandexcloud.net) |
| config.yandex.datastreams.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.yandex.datastreams.streamname | string | `""` | stream name in format /${region}/${folder_id}/${ydb_id}/${stream_name} |
| config.yandex.region | string | `""` | yandex storage region (default: ru-central-1) |
| config.yandex.s3.bucket | string | `""` | Yandex storage, bucket name |
| config.yandex.s3.endpoint | string | `""` | yandex storage endpoint (default: https://storage.yandexcloud.net) |
| config.yandex.s3.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.yandex.s3.prefix | string | `""` | name of prefix, keys will have format: s3://<bucket>/<prefix>/YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json |
| config.yandex.secretaccesskey | string | `""` | yandex secret access key |
| config.zincsearch.checkcert | bool | `true` | check if ssl certificate of the output is valid |
| config.zincsearch.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, ZincSearch output is enabled |
| config.zincsearch.index | string | `"falco"` | index |
| config.zincsearch.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
| config.zincsearch.password | string | `""` | use this password to authenticate to ZincSearch |
| config.zincsearch.username | string | `""` | use this username to authenticate to ZincSearch |
| extraVolumeMounts | list | `[]` | Extra volume mounts for sidekick deployment |
| extraVolumes | list | `[]` | Extra volumes for sidekick deployment |
| fullnameOverride | string | `""` | Override the name |
| image | object | `{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falcosidekick","tag":"2.28.0"}` | number of old history to retain to allow rollback (If not set, default Kubernetes value is set to 10) revisionHistoryLimit: 1 |
| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy |
| image.registry | string | `"docker.io"` | The image registry to pull from |
| image.repository | string | `"falcosecurity/falcosidekick"` | The image repository to pull from |
| image.tag | string | `"2.28.0"` | The image tag to pull |
| imagePullSecrets | list | `[]` | Secrets for the registry |
| ingress.annotations | object | `{}` | Ingress annotations |
| ingress.enabled | bool | `false` | Whether to create the ingress |
| ingress.hosts | list | `[{"host":"falcosidekick.local","paths":[{"path":"/"}]}]` | Ingress hosts |
| ingress.tls | list | `[]` | Ingress TLS configuration |
| nameOverride | string | `""` | Override name |
| nodeSelector | object | `{}` | Sidekick nodeSelector field |
| podAnnotations | object | `{}` | additional annotations on the pods |
| podLabels | object | `{}` | additional labels on the pods |
| podSecurityContext | object | `{"fsGroup":1234,"runAsUser":1234}` | Sidekick pod securityContext |
| podSecurityPolicy | object | `{"create":false}` | podSecurityPolicy |
| podSecurityPolicy.create | bool | `false` | Whether to create a podSecurityPolicy |
| priorityClassName | string | `""` | Name of the priority class to be used by the Sidekick pods, priority class needs to be created beforehand |
| prometheusRules.alerts.additionalAlerts | object | `{}` | |
| prometheusRules.alerts.alert.enabled | bool | `true` | enable the high rate rule for the alert events |
| prometheusRules.alerts.alert.rate_interval | string | `"5m"` | rate interval for the high rate rule for the alert events |
| prometheusRules.alerts.alert.threshold | int | `0` | threshold for the high rate rule for the alert events |
| prometheusRules.alerts.critical.enabled | bool | `true` | enable the high rate rule for the critical events |
| prometheusRules.alerts.critical.rate_interval | string | `"5m"` | rate interval for the high rate rule for the critical events |
| prometheusRules.alerts.critical.threshold | int | `0` | threshold for the high rate rule for the critical events |
| prometheusRules.alerts.emergency.enabled | bool | `true` | enable the high rate rule for the emergency events |
| prometheusRules.alerts.emergency.rate_interval | string | `"5m"` | rate interval for the high rate rule for the emergency events |
| prometheusRules.alerts.emergency.threshold | int | `0` | threshold for the high rate rule for the emergency events |
| prometheusRules.alerts.error.enabled | bool | `true` | enable the high rate rule for the error events |
| prometheusRules.alerts.error.rate_interval | string | `"5m"` | rate interval for the high rate rule for the error events |
| prometheusRules.alerts.error.threshold | int | `0` | threshold for the high rate rule for the error events |
| prometheusRules.alerts.output.enabled | bool | `true` | enable the high rate rule for the errors with the outputs |
| prometheusRules.alerts.output.rate_interval | string | `"5m"` | rate interval for the high rate rule for the errors with the outputs |
| prometheusRules.alerts.output.threshold | int | `0` | threshold for the high rate rule for the errors with the outputs |
| prometheusRules.alerts.warning.enabled | bool | `true` | enable the high rate rule for the warning events |
| prometheusRules.alerts.warning.rate_interval | string | `"5m"` | rate interval for the high rate rule for the warning events |
| prometheusRules.alerts.warning.threshold | int | `0` | threshold for the high rate rule for the warning events |
| prometheusRules.enabled | bool | `false` | enable the creation of PrometheusRules for alerting |
| replicaCount | int | `2` | number of running pods |
| resources | object | `{}` | The resources for falcosidekick pods |
| securityContext | object | `{}` | Sidekick container securityContext |
| service.annotations | object | `{}` | Service annotations |
| service.port | int | `2801` | Service port |
| service.type | string | `"ClusterIP"` | Service type |
| serviceMonitor.additionalLabels | object | `{}` | specify Additional labels to be added on the Service Monitor. |
| serviceMonitor.enabled | bool | `false` | enable the deployment of a Service Monitor for the Prometheus Operator. |
| serviceMonitor.interval | string | `""` | specify a user defined interval. When not specified Prometheus default interval is used. |
| serviceMonitor.scrapeTimeout | string | `""` | specify a user defined scrape timeout. When not specified Prometheus default scrape timeout is used. |
| testConnection.affinity | object | `{}` | Affinity for the test connection pod |
| testConnection.nodeSelector | object | `{}` | test connection nodeSelector field |
| testConnection.tolerations | list | `[]` | Tolerations for pod assignment |
| tolerations | list | `[]` | Tolerations for pod assignment |
| webui.affinity | object | `{}` | Affinity for the Web UI pods |
| webui.allowcors | bool | `false` | Allow CORS |
| webui.disableauth | bool | `false` | Disable the basic auth |
| webui.enabled | bool | `false` | enable Falcosidekick-UI |
| webui.existingSecret | string | `""` | Existing secret with configuration |
| webui.externalRedis.enabled | bool | `false` | Enable or disable the usage of an external Redis. Is mutually exclusive with webui.redis.enabled. |
| webui.externalRedis.port | int | `6379` | The port of the external Redis database with RediSearch > v2 |
| webui.externalRedis.url | string | `""` | The URL of the external Redis database with RediSearch > v2 |
| webui.image.pullPolicy | string | `"IfNotPresent"` | The web UI image pull policy |
| webui.image.registry | string | `"docker.io"` | The web UI image registry to pull from |
| webui.image.repository | string | `"falcosecurity/falcosidekick-ui"` | The web UI image repository to pull from |
| webui.image.tag | string | `"2.2.0"` | The web UI image tag to pull |
| webui.ingress.annotations | object | `{}` | Web UI ingress annotations |
| webui.ingress.enabled | bool | `false` | Whether to create the Web UI ingress |
| webui.ingress.hosts | list | `[{"host":"falcosidekick-ui.local","paths":[{"path":"/"}]}]` | Web UI ingress hosts configuration |
| webui.ingress.tls | list | `[]` | Web UI ingress TLS configuration |
| webui.loglevel | string | `"info"` | Log level ("debug", "info", "warning", "error") |
| webui.nodeSelector | object | `{}` | Web UI nodeSelector field |
| webui.podAnnotations | object | `{}` | additional annotations on the web UI pods |
| webui.podLabels | object | `{}` | additional labels on the web UI pods |
| webui.podSecurityContext | object | `{"fsGroup":1234,"runAsUser":1234}` | Web UI pod securityContext |
| webui.priorityClassName | string | `""` | Name of the priority class to be used by the Web UI pods, priority class needs to be created beforehand |
| webui.redis.affinity | object | `{}` | Affinity for the Web UI Redis pods |
| webui.redis.enabled | bool | `true` | Is mutually exclusive with webui.externalRedis.enabled |
| webui.redis.existingSecret | string | `""` | Existing secret with configuration |
| webui.redis.image.pullPolicy | string | `"IfNotPresent"` | The web UI Redis image pull policy |
| webui.redis.image.registry | string | `"docker.io"` | The web UI Redis image registry to pull from |
| webui.redis.image.repository | string | `"redis/redis-stack"` | The web UI Redis image repository to pull from |
| webui.redis.image.tag | string | `"6.2.6-v3"` | The web UI Redis image tag to pull |
| webui.redis.nodeSelector | object | `{}` | Web UI Redis nodeSelector field |
| webui.redis.password | string | `""` | Set a password for Redis |
| webui.redis.podAnnotations | object | `{}` | additional annotations on the pods |
| webui.redis.podLabels | object | `{}` | additional labels on the pods |
| webui.redis.podSecurityContext | object | `{}` | Web UI Redis pod securityContext |
| webui.redis.priorityClassName | string | `""` | Name of the priority class to be used by the Web UI Redis pods, priority class needs to be created beforehand |
| webui.redis.resources | object | `{}` | The resources for the redis pod |
| webui.redis.securityContext | object | `{}` | Web UI Redis container securityContext |
| webui.redis.service.annotations | object | `{}` | The web UI Redis service annotations (use this to set an internal LB, for example) |
| webui.redis.service.port | int | `6379` | The web UI Redis service port for the falcosidekick-ui |
| webui.redis.service.targetPort | int | `6379` | The web UI Redis service targetPort |
| webui.redis.service.type | string | `"ClusterIP"` | The web UI Redis service type (e.g. LoadBalancer) |
| webui.redis.storageClass | string | `""` | Storage class of the PVC for the redis pod |
| webui.redis.storageEnabled | bool | `true` | Enable the PVC for the redis pod |
| webui.redis.storageSize | string | `"1Gi"` | Size of the PVC for the redis pod |
| webui.redis.tolerations | list | `[]` | Tolerations for pod assignment |
| webui.replicaCount | int | `2` | number of running pods |
| webui.resources | object | `{}` | The resources for the web UI pods |
| webui.securityContext | object | `{}` | Web UI container securityContext |
| webui.service.annotations | object | `{}` | The web UI service annotations (use this to set an internal LB, for example) |
| webui.service.nodePort | int | `30282` | The web UI service nodePort |
| webui.service.port | int | `2802` | The web UI service port for the falcosidekick-ui |
| webui.service.targetPort | int | `2802` | The web UI service targetPort |
| webui.service.type | string | `"ClusterIP"` | The web UI service type |
| webui.tolerations | list | `[]` | Tolerations for pod assignment |
| webui.ttl | int | `0` | TTL for keys, the syntax is X<unit>, with <unit>: s, m, d, w (0 for no ttl) |
| webui.user | string | `"admin:admin"` | User in format <login>:<password> |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.

> **Tip**: You can use the default [values.yaml](values.yaml)
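As an alternative to a long chain of `--set` flags, the same overrides can be collected in a values file and passed with `-f`. A minimal sketch; the file name and the values shown are illustrative, not recommendations:

```yaml
# custom-values.yaml -- hypothetical override file for the parameters above
webui:
  enabled: true
  replicaCount: 2
  redis:
    enabled: true
    storageEnabled: true
    storageSize: "1Gi"
```

Passing `-f custom-values.yaml` to `helm install` is equivalent to setting each of these keys individually with `--set`.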
## Metrics

A `prometheus` endpoint can be scraped at `/metrics`.
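If the endpoint is not discovered automatically (for example via a ServiceMonitor), a static Prometheus scrape job could look like the sketch below; the job name and target address are assumptions to adapt to your deployment:

```yaml
scrape_configs:
  - job_name: falcosidekick              # hypothetical job name
    metrics_path: /metrics
    static_configs:
      - targets: ["falcosidekick:2801"]  # assumes the default service name and port
```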
## Access Falcosidekick UI through an Ingress and a subpath

You may want to access the [WebUI (Falcosidekick UI)](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md) dashboard not from `/` but from `/subpath`, using an Ingress. Here's an example of the annotations to add to the Ingress for the `nginx-ingress controller`:

```yaml
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
```
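Putting the two annotations together, a complete Ingress serving the UI under `/subpath` might look like this sketch; the host, Ingress name, and backend service name are placeholders to adapt:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: falcosidekick-ui                  # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: falco.example.com             # placeholder host
      http:
        paths:
          - path: /subpath(/|$)(.*)       # the second capture group becomes $2 in the rewrite
            pathType: ImplementationSpecific
            backend:
              service:
                name: falcosidekick-ui    # assumes the default "-ui" service
                port:
                  name: http
```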
44
falco/charts/falcosidekick/templates/NOTES.txt
Normal file
@ -0,0 +1,44 @@
1. Get the URL for Falcosidekick by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "falcosidekick.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
  You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "falcosidekick.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "falcosidekick.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  kubectl port-forward svc/{{ include "falcosidekick.name" . }} {{ .Values.service.port }}:{{ .Values.service.port }} --namespace {{ .Release.Namespace }}
  echo "Visit http://127.0.0.1:{{ .Values.service.port }} to use your application"
{{- end }}
{{- if .Values.webui.enabled }}
2. Get the URL for Falcosidekick-UI (WebUI) by running these commands:
{{- if .Values.webui.ingress.enabled }}
{{- range $host := .Values.webui.ingress.hosts }}
  http{{ if $.Values.webui.ingress.tls }}s{{ end }}://{{ $host.host }}{{ index .paths 0 }}
{{- end }}
{{- else if contains "NodePort" .Values.webui.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "falcosidekick.fullname" . }}-ui)
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT/ui
{{- else if contains "LoadBalancer" .Values.webui.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
  You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "falcosidekick.fullname" . }}-ui'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "falcosidekick.fullname" . }}-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:{{ .Values.webui.service.port }}
{{- else if contains "ClusterIP" .Values.webui.service.type }}
  kubectl port-forward svc/{{ include "falcosidekick.name" . }}-ui {{ .Values.webui.service.port }}:{{ .Values.webui.service.port }} --namespace {{ .Release.Namespace }}
  echo "Visit http://127.0.0.1:{{ .Values.webui.service.port }}/ui to use your application"
{{- end }}
{{ else }}
2. Try to enable Falcosidekick-UI (WebUI) by adding this argument to your command:
  --set webui.enabled=true
{{- end }}
80
falco/charts/falcosidekick/templates/_helpers.tpl
Normal file
@ -0,0 +1,80 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "falcosidekick.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "falcosidekick.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "falcosidekick.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "falcosidekick.ingress.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" .Capabilities.KubeVersion.Version) -}}
{{- print "networking.k8s.io/v1" -}}
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "extensions/v1beta1" -}}
{{- end -}}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "falcosidekick.labels" -}}
helm.sh/chart: {{ include "falcosidekick.chart" . }}
{{ include "falcosidekick.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/part-of: {{ include "falcosidekick.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "falcosidekick.selectorLabels" -}}
app.kubernetes.io/name: {{ include "falcosidekick.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Return if ingress is stable.
*/}}
{{- define "falcosidekick.ingress.isStable" -}}
{{- eq (include "falcosidekick.ingress.apiVersion" .) "networking.k8s.io/v1" -}}
{{- end -}}

{{/*
Return if ingress supports pathType.
*/}}
{{- define "falcosidekick.ingress.supportsPathType" -}}
{{- or (eq (include "falcosidekick.ingress.isStable" .) "true") (and (eq (include "falcosidekick.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) -}}
{{- end -}}
25
falco/charts/falcosidekick/templates/aadpodidentity.yaml
Normal file
@ -0,0 +1,25 @@
{{- if and .Values.config.azure.podIdentityClientID .Values.config.azure.podIdentityName -}}
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
spec:
  type: 0
  resourceID: /subscriptions/{{ .Values.config.azure.subscriptionID }}/resourcegroups/{{ .Values.config.azure.resourceGroupName }}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{{ .Values.config.azure.podIdentityName }}
  clientID: {{ .Values.config.azure.podIdentityClientID }}
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
  name: {{ include "falcosidekick.fullname" . }}
spec:
  azureIdentity: {{ include "falcosidekick.fullname" . }}
  selector: {{ include "falcosidekick.fullname" . }}
{{- end }}
19
falco/charts/falcosidekick/templates/certs-secret.yaml
Normal file
@ -0,0 +1,19 @@
{{- if and .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "falcosidekick.fullname" . }}-certs
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
type: Opaque
data:
  {{ $key := .Values.config.tlsserver.serverkey }}
  server.key: {{ $key | b64enc | quote }}
  {{ $crt := .Values.config.tlsserver.servercrt }}
  server.crt: {{ $crt | b64enc | quote }}
  falcosidekick.pem: {{ print $key $crt | b64enc | quote }}
  ca.crt: {{ .Values.config.tlsserver.cacrt | b64enc | quote }}
  ca.pem: {{ .Values.config.tlsserver.cacrt | b64enc | quote }}
{{- end }}
19
falco/charts/falcosidekick/templates/clusterrole.yaml
Normal file
@ -0,0 +1,19 @@
{{- if .Values.podSecurityPolicy.create }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "falcosidekick.fullname" . }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - {{ template "falcosidekick.fullname" . }}
    verbs:
      - use
{{- end }}
233
falco/charts/falcosidekick/templates/deployment-ui.yaml
Normal file
@ -0,0 +1,233 @@
{{- if .Values.webui.enabled }}
{{- if and .Values.webui.redis.enabled .Values.webui.externalRedis.enabled }}
{{ fail "Both webui.redis and webui.externalRedis modules are enabled. Please disable one of them." }}
{{- else if and (not .Values.webui.redis.enabled) (not .Values.webui.externalRedis.enabled) }}
{{ fail "Neither the included Redis nor the external Redis is enabled. Please enable one of them." }}
{{- end }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
spec:
  replicas: {{ .Values.webui.replicaCount }}
  {{- if .Values.webui.revisionHistoryLimit }}
  revisionHistoryLimit: {{ .Values.webui.revisionHistoryLimit }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "falcosidekick.selectorLabels" . | nindent 6 }}
      app.kubernetes.io/component: ui
  template:
    metadata:
      labels:
        {{- include "falcosidekick.labels" . | nindent 8 }}
        app.kubernetes.io/component: ui
        {{- if .Values.webui.podLabels }}
{{ toYaml .Values.webui.podLabels | indent 8 }}
        {{- end }}
      {{- if .Values.webui.podAnnotations }}
      annotations:
{{ toYaml .Values.webui.podAnnotations | indent 8 }}
      {{- end }}
    spec:
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      {{- range .Values.imagePullSecrets }}
        - name: {{ . }}
      {{- end }}
      {{- end }}
      serviceAccountName: {{ include "falcosidekick.fullname" . }}-ui
      {{- if .Values.webui.priorityClassName }}
      priorityClassName: "{{ .Values.webui.priorityClassName }}"
      {{- end }}
      {{- if .Values.webui.podSecurityContext }}
      securityContext:
        {{- toYaml .Values.webui.podSecurityContext | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}-ui
          image: "{{ .Values.webui.image.registry }}/{{ .Values.webui.image.repository }}:{{ .Values.webui.image.tag }}"
          imagePullPolicy: {{ .Values.webui.image.pullPolicy }}
          envFrom:
            - secretRef:
                {{- if .Values.webui.existingSecret }}
                name: {{ .Values.webui.existingSecret }}
                {{- else }}
                name: {{ include "falcosidekick.fullname" . }}-ui
                {{- end }}
          args:
            - "-r"
            {{- if .Values.webui.redis.enabled }}
            - {{ include "falcosidekick.fullname" . }}-ui-redis{{ if .Values.webui.redis.fullfqdn }}.{{ .Release.Namespace }}.svc.cluster.local{{ end }}:{{ .Values.webui.redis.service.port | default "6379" }}
            {{- else if .Values.webui.externalRedis.enabled }}
            - "{{ required "External Redis is enabled. Please set the URL to the database." .Values.webui.externalRedis.url }}:{{ .Values.webui.externalRedis.port | default "6379" }}"
            {{- end }}
            {{- if .Values.webui.ttl }}
            - "-t"
            - {{ .Values.webui.ttl | quote }}
            {{- end }}
            {{- if .Values.webui.loglevel }}
            - "-l"
            - {{ .Values.webui.loglevel }}
            {{- end }}
            {{- if .Values.webui.allowcors }}
            - "-x"
            {{- end }}
            {{- if .Values.webui.disableauth }}
            - "-d"
            {{- end }}
          ports:
            - name: http
              containerPort: 2802
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /api/v1/healthz
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /api/v1/healthz
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
          {{- if .Values.webui.securityContext }}
          securityContext:
            {{- toYaml .Values.webui.securityContext | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.webui.resources | nindent 12 }}
      {{- with .Values.webui.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.webui.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.webui.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- if .Values.webui.redis.enabled }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui-redis
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui-redis
spec:
  replicas: 1
  serviceName: {{ include "falcosidekick.fullname" . }}-ui-redis
  selector:
    matchLabels:
      {{- include "falcosidekick.selectorLabels" . | nindent 6 }}
      app.kubernetes.io/component: ui-redis
  template:
    metadata:
      labels:
        {{- include "falcosidekick.labels" . | nindent 8 }}
        app.kubernetes.io/component: ui-redis
        {{- if .Values.webui.redis.podLabels }}
{{ toYaml .Values.webui.redis.podLabels | indent 8 }}
        {{- end }}
      {{- if .Values.webui.redis.podAnnotations }}
      annotations:
{{ toYaml .Values.webui.redis.podAnnotations | indent 8 }}
      {{- end }}
    spec:
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      {{- range .Values.imagePullSecrets }}
        - name: {{ . }}
      {{- end }}
      {{- end }}
      serviceAccountName: {{ include "falcosidekick.fullname" . }}-ui
      {{- if .Values.webui.redis.priorityClassName }}
      priorityClassName: "{{ .Values.webui.redis.priorityClassName }}"
      {{- end }}
      {{- if .Values.webui.redis.podSecurityContext }}
      securityContext:
        {{- toYaml .Values.webui.redis.podSecurityContext | nindent 8 }}
      {{- end }}
      containers:
        - name: redis
          image: "{{ .Values.webui.redis.image.registry }}/{{ .Values.webui.redis.image.repository }}:{{ .Values.webui.redis.image.tag }}"
          imagePullPolicy: {{ .Values.webui.redis.image.pullPolicy }}
          {{- if .Values.webui.redis.password }}
          envFrom:
            - secretRef:
                {{- if .Values.webui.redis.existingSecret }}
                name: {{ .Values.webui.redis.existingSecret }}
                {{- else }}
                name: {{ include "falcosidekick.fullname" . }}-ui-redis
                {{- end }}
          {{- end }}
          args: []
          ports:
            - name: redis
              containerPort: 6379
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 3
          {{- if .Values.webui.redis.securityContext }}
          securityContext:
            {{- toYaml .Values.webui.redis.securityContext | nindent 12 }}
          {{- end }}
          {{- if .Values.webui.redis.storageEnabled }}
          volumeMounts:
            - name: {{ include "falcosidekick.fullname" . }}-ui-redis-data
              mountPath: /data
          {{- end }}
          resources:
            {{- toYaml .Values.webui.redis.resources | nindent 12 }}
      {{- with .Values.webui.redis.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.webui.redis.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.webui.redis.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
  {{- if .Values.webui.redis.storageEnabled }}
  volumeClaimTemplates:
    - metadata:
        name: {{ include "falcosidekick.fullname" . }}-ui-redis-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: {{ .Values.webui.redis.storageSize }}
        {{- if .Values.webui.redis.storageClass }}
        storageClassName: {{ .Values.webui.redis.storageClass }}
        {{- end }}
  {{- end }}
{{- end }}
{{- end }}
175
falco/charts/falcosidekick/templates/deployment.yaml
Normal file
@ -0,0 +1,175 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
spec:
  replicas: {{ .Values.replicaCount }}
  {{- if .Values.revisionHistoryLimit }}
  revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "falcosidekick.selectorLabels" . | nindent 6 }}
      app.kubernetes.io/component: core
  template:
    metadata:
      labels:
        {{- include "falcosidekick.labels" . | nindent 8 }}
        app.kubernetes.io/component: core
        {{- if and .Values.config.azure.podIdentityClientID .Values.config.azure.podIdentityName }}
        aadpodidbinding: {{ include "falcosidekick.fullname" . }}
        {{- end }}
        {{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | indent 8 }}
        {{- end }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
        {{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
        {{- end }}
    spec:
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      {{- range .Values.imagePullSecrets }}
        - name: {{ . }}
      {{- end }}
      {{- end }}
      serviceAccountName: {{ include "falcosidekick.fullname" . }}
      {{- if .Values.priorityClassName }}
      priorityClassName: "{{ .Values.priorityClassName }}"
      {{- end }}
      {{- if .Values.podSecurityContext }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 2801
              protocol: TCP
            {{- if .Values.config.tlsserver.deploy }}
            - name: http-notls
              containerPort: 2810
              protocol: TCP
            {{- end }}
          livenessProbe:
            httpGet:
              path: /ping
              {{- if .Values.config.tlsserver.deploy }}
              port: http-notls
              {{- else }}
              port: http
              {{- end }}
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /ping
              {{- if .Values.config.tlsserver.deploy }}
              port: http-notls
              {{- else }}
              port: http
              {{- end }}
            initialDelaySeconds: 10
            periodSeconds: 5
          {{- if .Values.securityContext }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          {{- end }}
          {{- if .Values.config.extraArgs }}
          args:
            {{ toYaml .Values.config.extraArgs | nindent 12 }}
          {{- end }}
          envFrom:
            - secretRef:
                {{- if .Values.config.existingSecret }}
                name: {{ .Values.config.existingSecret }}
                {{- else }}
                name: {{ include "falcosidekick.fullname" . }}
                {{- end }}
          env:
            - name: DEBUG
              value: {{ .Values.config.debug | quote }}
            - name: CUSTOMFIELDS
              value: {{ .Values.config.customfields | quote }}
            - name: TEMPLATEDFIELDS
              value: {{ .Values.config.templatedfields | quote }}
            - name: BRACKETREPLACER
              value: {{ .Values.config.bracketreplacer | quote }}
            - name: MUTUALTLSFILESPATH
              value: {{ .Values.config.mutualtlsfilespath | quote }}
            - name: MUTUALTLSCLIENT_CERTFILE
              value: {{ .Values.config.mutualtlsclient.certfile | quote }}
            - name: MUTUALTLSCLIENT_KEYFILE
              value: {{ .Values.config.mutualtlsclient.keyfile | quote }}
            - name: MUTUALTLSCLIENT_CACERTFILE
              value: {{ .Values.config.mutualtlsclient.cacertfile | quote }}
            {{- if .Values.config.tlsserver.deploy }}
            - name: TLSSERVER_DEPLOY
              value: {{ .Values.config.tlsserver.deploy | quote }}
            - name: TLSSERVER_CERTFILE
              value: {{ .Values.config.tlsserver.certfile | quote }}
            - name: TLSSERVER_KEYFILE
              value: {{ .Values.config.tlsserver.keyfile | quote }}
            - name: TLSSERVER_CACERTFILE
              value: {{ .Values.config.tlsserver.cacertfile | quote }}
            - name: TLSSERVER_MUTUALTLS
              value: {{ .Values.config.tlsserver.mutualtls | quote }}
            - name: TLSSERVER_NOTLSPORT
              value: {{ .Values.config.tlsserver.notlsport | quote }}
            - name: TLSSERVER_NOTLSPATHS
              value: {{ .Values.config.tlsserver.notlspaths | quote }}
            {{- end }}
            {{- if .Values.config.extraEnv }}
            {{ toYaml .Values.config.extraEnv | nindent 12 }}
            {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if or .Values.extraVolumeMounts (and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt)) }}
          volumeMounts:
            {{- if and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt) }}
            - mountPath: /etc/certs/server
              name: certs-volume
              readOnly: true
            {{- end }}
            {{- if .Values.extraVolumeMounts }}
{{ toYaml .Values.extraVolumeMounts | indent 12 }}
            {{- end }}
          {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- if or .Values.extraVolumes (and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt)) }}
      volumes:
        {{- if and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt) }}
        - name: certs-volume
          secret:
            {{- if .Values.config.tlsserver.existingSecret }}
            secretName: {{ .Values.config.tlsserver.existingSecret }}
            {{- else }}
            secretName: {{ include "falcosidekick.fullname" . }}-certs
            {{- end }}
        {{- end }}
        {{- if .Values.extraVolumes }}
{{ toYaml .Values.extraVolumes | indent 8 }}
        {{- end }}
      {{- end }}
54
falco/charts/falcosidekick/templates/ingress-ui.yaml
Normal file
@ -0,0 +1,54 @@
{{- if and .Values.webui.enabled .Values.webui.ingress.enabled -}}
{{- $fullName := include "falcosidekick.fullname" . -}}
{{- $ingressApiIsStable := eq (include "falcosidekick.ingress.isStable" .) "true" -}}
{{- $ingressSupportsPathType := eq (include "falcosidekick.ingress.supportsPathType" .) "true" -}}
---
apiVersion: {{ include "falcosidekick.ingress.apiVersion" . }}
kind: Ingress
metadata:
  name: {{ $fullName }}-ui
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
  {{- with .Values.webui.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.webui.ingress.ingressClassName }}
  ingressClassName: {{ .Values.webui.ingress.ingressClassName }}
  {{- end }}
  {{- if .Values.webui.ingress.tls }}
  tls:
    {{- range .Values.webui.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.webui.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if $ingressSupportsPathType }}
            pathType: {{ default "ImplementationSpecific" .pathType }}
            {{- end }}
            backend:
              {{- if $ingressApiIsStable }}
              service:
                name: {{ $fullName }}-ui
                port:
                  name: http
              {{- else }}
              serviceName: {{ $fullName }}-ui
              servicePort: http
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}
54
falco/charts/falcosidekick/templates/ingress.yaml
Normal file
@ -0,0 +1,54 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "falcosidekick.fullname" . -}}
{{- $ingressApiIsStable := eq (include "falcosidekick.ingress.isStable" .) "true" -}}
{{- $ingressSupportsPathType := eq (include "falcosidekick.ingress.supportsPathType" .) "true" -}}
---
apiVersion: {{ include "falcosidekick.ingress.apiVersion" . }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.ingressClassName }}
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if $ingressSupportsPathType }}
            pathType: {{ default "ImplementationSpecific" .pathType }}
            {{- end }}
            backend:
              {{- if $ingressApiIsStable }}
              service:
                name: {{ $fullName }}
                port:
                  name: http
              {{- else }}
              serviceName: {{ $fullName }}
              servicePort: http
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}
33
falco/charts/falcosidekick/templates/podsecuritypolicy.yaml
Normal file
@ -0,0 +1,33 @@
{{- if .Values.podSecurityPolicy.create }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: {{ template "falcosidekick.fullname" . }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
    - ALL
  fsGroup:
    ranges:
      - max: 65535
        min: 1
    rule: MustRunAs
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
      - max: 65535
        min: 1
    rule: MustRunAs
  volumes:
    - configMap
    - secret
{{- end }}
92
falco/charts/falcosidekick/templates/prometheusrule.yaml
Normal file
@ -0,0 +1,92 @@
{{- if and .Values.prometheusRules.enabled .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  {{- if .Values.prometheusRules.namespace }}
  namespace: {{ .Values.prometheusRules.namespace }}
  {{- end }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
    {{- if .Values.prometheusRules.additionalLabels }}
    {{- toYaml .Values.prometheusRules.additionalLabels | nindent 4 }}
    {{- end }}
spec:
  groups:
  - name: falcosidekick
    rules:
    {{- if .Values.prometheusRules.enabled }}
    - alert: FalcosidekickAbsent
      expr: absent(up{job="{{- include "falcosidekick.fullname" . }}"})
      for: 10m
      annotations:
        summary: Falcosidekick has disappeared from Prometheus service discovery.
        description: No metrics are being scraped from falcosidekick. No events will trigger any alerts.
      labels:
        severity: critical
    {{- end }}
    {{- if .Values.prometheusRules.alerts.warning.enabled }}
    - alert: FalcoWarningEventsRateHigh
      annotations:
        summary: Falco is experiencing a high rate of warning events
        description: A high rate of warning events is being detected by Falco
      expr: rate(falco_events{priority="4"}[{{ .Values.prometheusRules.alerts.warning.rate_interval }}]) > {{ .Values.prometheusRules.alerts.warning.threshold }}
      for: 15m
      labels:
        severity: warning
    {{- end }}
    {{- if .Values.prometheusRules.alerts.error.enabled }}
    - alert: FalcoErrorEventsRateHigh
      annotations:
        summary: Falco is experiencing a high rate of error events
        description: A high rate of error events is being detected by Falco
      expr: rate(falco_events{priority="3"}[{{ .Values.prometheusRules.alerts.error.rate_interval }}]) > {{ .Values.prometheusRules.alerts.error.threshold }}
      for: 15m
      labels:
        severity: warning
    {{- end }}
    {{- if .Values.prometheusRules.alerts.critical.enabled }}
    - alert: FalcoCriticalEventsRateHigh
      annotations:
        summary: Falco is experiencing a high rate of critical events
        description: A high rate of critical events is being detected by Falco
      expr: rate(falco_events{priority="2"}[{{ .Values.prometheusRules.alerts.critical.rate_interval }}]) > {{ .Values.prometheusRules.alerts.critical.threshold }}
      for: 15m
      labels:
        severity: critical
    {{- end }}
    {{- if .Values.prometheusRules.alerts.alert.enabled }}
    - alert: FalcoAlertEventsRateHigh
      annotations:
        summary: Falco is experiencing a high rate of alert events
        description: A high rate of alert events is being detected by Falco
      expr: rate(falco_events{priority="1"}[{{ .Values.prometheusRules.alerts.alert.rate_interval }}]) > {{ .Values.prometheusRules.alerts.alert.threshold }}
      for: 5m
      labels:
        severity: critical
    {{- end }}
    {{- if .Values.prometheusRules.alerts.emergency.enabled }}
    - alert: FalcoEmergencyEventsRateHigh
      annotations:
        summary: Falco is experiencing a high rate of emergency events
        description: A high rate of emergency events is being detected by Falco
      expr: rate(falco_events{priority="0"}[{{ .Values.prometheusRules.alerts.emergency.rate_interval }}]) > {{ .Values.prometheusRules.alerts.emergency.threshold }}
      for: 1m
      labels:
        severity: critical
    {{- end }}
    {{- if .Values.prometheusRules.alerts.output.enabled }}
    - alert: FalcosidekickOutputErrorsRateHigh
      annotations:
        summary: Falcosidekick is experiencing a high rate of errors for an output
        description: A high rate of errors is being detected for an output
      expr: sum(rate(falcosidekick_output{status="error"}[{{ .Values.prometheusRules.alerts.output.rate_interval }}])) by (destination) > {{ .Values.prometheusRules.alerts.output.threshold }}
      for: 1m
      labels:
        severity: warning
    {{- end }}
    {{- with .Values.prometheusRules.additionalAlerts }}
    {{- . | nindent 4 }}
    {{- end }}
{{- end }}
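The rules above read their thresholds and rate windows from values; a minimal sketch of the assumed values layout (key names `rate_interval` and `threshold` are taken from the template, the numbers are placeholders):

```yaml
prometheusRules:
  enabled: true
  alerts:
    warning:
      enabled: true
      rate_interval: 5m
      threshold: 0
    error:
      enabled: true
      rate_interval: 5m
      threshold: 0
```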
37
falco/charts/falcosidekick/templates/rbac-ui.yaml
Normal file
@ -0,0 +1,37 @@
{{- if .Values.webui.enabled -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
rules: []
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "falcosidekick.fullname" . }}-ui
subjects:
  - kind: ServiceAccount
    name: {{ include "falcosidekick.fullname" . }}-ui
{{- end }}
92
falco/charts/falcosidekick/templates/rbac.yaml
Normal file
@ -0,0 +1,92 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
  {{- if and .Values.config.aws.useirsa .Values.config.aws.rolearn }}
  annotations:
    eks.amazonaws.com/role-arn: {{ .Values.config.aws.rolearn }}
  {{- end }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
  {{- if .Values.podSecurityPolicy.create }}
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - {{ template "falcosidekick.fullname" . }}
    verbs:
      - use
  {{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "falcosidekick.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "falcosidekick.fullname" . }}
{{- if .Values.config.policyreport.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
rules:
  - apiGroups:
      - "wgpolicyk8s.io"
    resources:
      - policyreports
      - clusterpolicyreports
    verbs:
      - get
      - create
      - delete
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "falcosidekick.fullname" . }}
subjects:
  - kind: ServiceAccount
    namespace: {{ .Release.Namespace }}
    name: {{ include "falcosidekick.fullname" . }}
{{- end }}
37
falco/charts/falcosidekick/templates/secrets-ui.yaml
Normal file
@ -0,0 +1,37 @@
{{- if .Values.webui.enabled -}}
{{- if eq .Values.webui.existingSecret "" }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
type: Opaque
data:
  {{- if .Values.webui.user }}
  FALCOSIDEKICK_UI_USER: "{{ .Values.webui.user | b64enc }}"
  {{- end }}
  {{- if .Values.webui.redis.password }}
  FALCOSIDEKICK_UI_REDIS_PASSWORD: "{{ .Values.webui.redis.password | b64enc }}"
  {{- end }}
{{- end }}
{{- if eq .Values.webui.redis.existingSecret "" }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui-redis
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
type: Opaque
data:
  {{- if .Values.webui.redis.password }}
  REDIS_ARGS: "{{ printf "--requirepass %s" .Values.webui.redis.password | b64enc }}"
  {{- end }}
{{- end }}
{{- end }}
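The `printf "--requirepass %s" ... | b64enc` pipeline above can be reproduced outside Helm to check what actually lands in the Secret's `data` field; a small Python sketch (the function name is illustrative, not part of the chart):

```python
import base64

def redis_args_secret_value(password: str) -> str:
    # Mirrors the Helm pipeline: printf "--requirepass %s" <password> | b64enc
    return base64.b64encode(f"--requirepass {password}".encode()).decode()

# Decoding the value recovers the exact redis argument string
encoded = redis_args_secret_value("s3cret")
print(base64.b64decode(encoded).decode())  # --requirepass s3cret
```

Kubernetes base64-decodes `data` entries before exposing them to the container, so the pod sees the plain `--requirepass ...` string.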
451
falco/charts/falcosidekick/templates/secrets.yaml
Normal file
@ -0,0 +1,451 @@
{{- if eq .Values.config.existingSecret "" }}
{{- $fullName := include "falcosidekick.fullname" . -}}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
type: Opaque
data:
  # Slack Output
  SLACK_WEBHOOKURL: "{{ .Values.config.slack.webhookurl | b64enc }}"
  SLACK_CHANNEL: "{{ .Values.config.slack.channel | b64enc }}"
  SLACK_OUTPUTFORMAT: "{{ .Values.config.slack.outputformat | b64enc }}"
  SLACK_FOOTER: "{{ .Values.config.slack.footer | b64enc }}"
  SLACK_ICON: "{{ .Values.config.slack.icon | b64enc }}"
  SLACK_USERNAME: "{{ .Values.config.slack.username | b64enc }}"
  SLACK_MINIMUMPRIORITY: "{{ .Values.config.slack.minimumpriority | b64enc }}"
  SLACK_MESSAGEFORMAT: "{{ .Values.config.slack.messageformat | b64enc }}"

  # RocketChat Output
  ROCKETCHAT_WEBHOOKURL: "{{ .Values.config.rocketchat.webhookurl | b64enc }}"
  ROCKETCHAT_OUTPUTFORMAT: "{{ .Values.config.rocketchat.outputformat | b64enc }}"
  ROCKETCHAT_ICON: "{{ .Values.config.rocketchat.icon | b64enc }}"
  ROCKETCHAT_USERNAME: "{{ .Values.config.rocketchat.username | b64enc }}"
  ROCKETCHAT_MINIMUMPRIORITY: "{{ .Values.config.rocketchat.minimumpriority | b64enc }}"
  ROCKETCHAT_MESSAGEFORMAT: "{{ .Values.config.rocketchat.messageformat | b64enc }}"
  ROCKETCHAT_MUTUALTLS: "{{ .Values.config.rocketchat.mutualtls | printf "%t" | b64enc }}"
  ROCKETCHAT_CHECKCERT: "{{ .Values.config.rocketchat.checkcert | printf "%t" | b64enc }}"

  # Mattermost Output
  MATTERMOST_WEBHOOKURL: "{{ .Values.config.mattermost.webhookurl | b64enc }}"
  MATTERMOST_OUTPUTFORMAT: "{{ .Values.config.mattermost.outputformat | b64enc }}"
  MATTERMOST_FOOTER: "{{ .Values.config.mattermost.footer | b64enc }}"
  MATTERMOST_ICON: "{{ .Values.config.mattermost.icon | b64enc }}"
  MATTERMOST_USERNAME: "{{ .Values.config.mattermost.username | b64enc }}"
  MATTERMOST_MINIMUMPRIORITY: "{{ .Values.config.mattermost.minimumpriority | b64enc }}"
  MATTERMOST_MESSAGEFORMAT: "{{ .Values.config.mattermost.messageformat | b64enc }}"
  MATTERMOST_MUTUALTLS: "{{ .Values.config.mattermost.mutualtls | printf "%t" | b64enc }}"
  MATTERMOST_CHECKCERT: "{{ .Values.config.mattermost.checkcert | printf "%t" | b64enc }}"

  # Teams Output
  TEAMS_WEBHOOKURL: "{{ .Values.config.teams.webhookurl | b64enc }}"
  TEAMS_OUTPUTFORMAT: "{{ .Values.config.teams.outputformat | b64enc }}"
  TEAMS_ACTIVITYIMAGE: "{{ .Values.config.teams.activityimage | b64enc }}"
  TEAMS_MINIMUMPRIORITY: "{{ .Values.config.teams.minimumpriority | b64enc }}"

  # Datadog Output
  DATADOG_APIKEY: "{{ .Values.config.datadog.apikey | b64enc }}"
  DATADOG_HOST: "{{ .Values.config.datadog.host | b64enc }}"
  DATADOG_MINIMUMPRIORITY: "{{ .Values.config.datadog.minimumpriority | b64enc }}"

  # AlertManager Output
  ALERTMANAGER_HOSTPORT: "{{ .Values.config.alertmanager.hostport | b64enc }}"
  ALERTMANAGER_ENDPOINT: "{{ .Values.config.alertmanager.endpoint | b64enc }}"
  ALERTMANAGER_EXPIRESAFTER: "{{ .Values.config.alertmanager.expireafter | b64enc }}"
  {{- if .Values.config.alertmanager.extralabels }}
  ALERTMANAGER_EXTRALABELS: "{{ .Values.config.alertmanager.extralabels | b64enc }}"
  {{- end }}
  {{- if .Values.config.alertmanager.extraannotations }}
  ALERTMANAGER_EXTRAANNOTATIONS: "{{ .Values.config.alertmanager.extraannotations | b64enc }}"
  {{- end }}
  {{- if .Values.config.alertmanager.customseveritymap }}
  ALERTMANAGER_CUSTOMSEVERITYMAP: "{{ .Values.config.alertmanager.customseveritymap | b64enc }}"
  {{- end }}
  {{- if .Values.config.alertmanager.dropeventdefaultpriority }}
  ALERTMANAGER_DROPEVENTDEFAULTPRIORITY: "{{ .Values.config.alertmanager.dropeventdefaultpriority | b64enc }}"
  {{- end }}
  {{- if .Values.config.alertmanager.dropeventthresholds }}
  ALERTMANAGER_DROPEVENTTHRESHOLDS: "{{ .Values.config.alertmanager.dropeventthresholds | b64enc }}"
  {{- end }}
  ALERTMANAGER_MINIMUMPRIORITY: "{{ .Values.config.alertmanager.minimumpriority | b64enc }}"
  ALERTMANAGER_MUTUALTLS: "{{ .Values.config.alertmanager.mutualtls | printf "%t" | b64enc }}"
  ALERTMANAGER_CHECKCERT: "{{ .Values.config.alertmanager.checkcert | printf "%t" | b64enc }}"

  # InfluxDB Output
  INFLUXDB_USER: "{{ .Values.config.influxdb.user | b64enc }}"
  INFLUXDB_PASSWORD: "{{ .Values.config.influxdb.password | b64enc }}"
  INFLUXDB_TOKEN: "{{ .Values.config.influxdb.token | b64enc }}"
  INFLUXDB_HOSTPORT: "{{ .Values.config.influxdb.hostport | b64enc }}"
  INFLUXDB_ORGANIZATION: "{{ .Values.config.influxdb.organization | b64enc }}"
  INFLUXDB_PRECISION: "{{ .Values.config.influxdb.precision | b64enc }}"
  INFLUXDB_MINIMUMPRIORITY: "{{ .Values.config.influxdb.minimumpriority | b64enc }}"
  INFLUXDB_DATABASE: "{{ .Values.config.influxdb.database | b64enc }}"
  INFLUXDB_MUTUALTLS: "{{ .Values.config.influxdb.mutualtls | printf "%t" | b64enc }}"
  INFLUXDB_CHECKCERT: "{{ .Values.config.influxdb.checkcert | printf "%t" | b64enc }}"

  # AWS Output
  AWS_ACCESSKEYID: "{{ .Values.config.aws.accesskeyid | b64enc }}"
  {{- if not .Values.config.aws.useirsa }}
  AWS_ROLEARN: "{{ .Values.config.aws.rolearn | b64enc }}"
  AWS_EXTERNALID: "{{ .Values.config.aws.externalid | b64enc }}"
  {{- end }}
  AWS_SECRETACCESSKEY: "{{ .Values.config.aws.secretaccesskey | b64enc }}"
  AWS_REGION: "{{ .Values.config.aws.region | b64enc }}"
  AWS_CHECKIDENTITY: "{{ .Values.config.aws.checkidentity | printf "%t" | b64enc }}"
  AWS_LAMBDA_FUNCTIONNAME: "{{ .Values.config.aws.lambda.functionname | b64enc }}"
  AWS_LAMBDA_MINIMUMPRIORITY: "{{ .Values.config.aws.lambda.minimumpriority | b64enc }}"
  AWS_CLOUDWATCHLOGS_LOGGROUP: "{{ .Values.config.aws.cloudwatchlogs.loggroup | b64enc }}"
  AWS_CLOUDWATCHLOGS_LOGSTREAM: "{{ .Values.config.aws.cloudwatchlogs.logstream | b64enc }}"
  AWS_CLOUDWATCHLOGS_MINIMUMPRIORITY: "{{ .Values.config.aws.cloudwatchlogs.minimumpriority | b64enc }}"
  AWS_SNS_TOPICARN: "{{ .Values.config.aws.sns.topicarn | b64enc }}"
  AWS_SNS_RAWJSON: "{{ .Values.config.aws.sns.rawjson | printf "%t" | b64enc }}"
  AWS_SNS_MINIMUMPRIORITY: "{{ .Values.config.aws.sns.minimumpriority | b64enc }}"
  AWS_SQS_URL: "{{ .Values.config.aws.sqs.url | b64enc }}"
  AWS_SQS_MINIMUMPRIORITY: "{{ .Values.config.aws.sqs.minimumpriority | b64enc }}"
  AWS_S3_BUCKET: "{{ .Values.config.aws.s3.bucket | b64enc }}"
  AWS_S3_PREFIX: "{{ .Values.config.aws.s3.prefix | b64enc }}"
  AWS_S3_MINIMUMPRIORITY: "{{ .Values.config.aws.s3.minimumpriority | b64enc }}"
  AWS_KINESIS_STREAMNAME: "{{ .Values.config.aws.kinesis.streamname | b64enc }}"
  AWS_KINESIS_MINIMUMPRIORITY: "{{ .Values.config.aws.kinesis.minimumpriority | b64enc }}"
  AWS_SECURITYLAKE_BUCKET: "{{ .Values.config.aws.securitylake.bucket | b64enc }}"
  AWS_SECURITYLAKE_REGION: "{{ .Values.config.aws.securitylake.region | b64enc }}"
  AWS_SECURITYLAKE_PREFIX: "{{ .Values.config.aws.securitylake.prefix | b64enc }}"
  AWS_SECURITYLAKE_ACCOUNTID: "{{ .Values.config.aws.securitylake.accountid | b64enc }}"
  AWS_SECURITYLAKE_INTERVAL: "{{ .Values.config.aws.securitylake.interval | toString | b64enc }}"
  AWS_SECURITYLAKE_BATCHSIZE: "{{ .Values.config.aws.securitylake.batchsize | toString | b64enc }}"
  AWS_SECURITYLAKE_MINIMUMPRIORITY: "{{ .Values.config.aws.securitylake.minimumpriority | b64enc }}"

  # SMTP Output
  SMTP_USER: "{{ .Values.config.smtp.user | b64enc }}"
  SMTP_PASSWORD: "{{ .Values.config.smtp.password | b64enc }}"
  SMTP_AUTHMECHANISM: "{{ .Values.config.smtp.authmechanism | b64enc }}"
  SMTP_TLS: "{{ .Values.config.smtp.tls | printf "%t" | b64enc }}"
  SMTP_HOSTPORT: "{{ .Values.config.smtp.hostport | b64enc }}"
  SMTP_FROM: "{{ .Values.config.smtp.from | b64enc }}"
  SMTP_TO: "{{ .Values.config.smtp.to | b64enc }}"
  SMTP_TOKEN: "{{ .Values.config.smtp.token | b64enc }}"
  SMTP_IDENTITY: "{{ .Values.config.smtp.identity | b64enc }}"
  SMTP_TRACE: "{{ .Values.config.smtp.trace | b64enc }}"
  SMTP_OUTPUTFORMAT: "{{ .Values.config.smtp.outputformat | b64enc }}"
  SMTP_MINIMUMPRIORITY: "{{ .Values.config.smtp.minimumpriority | b64enc }}"

  # OpsGenie Output
  OPSGENIE_APIKEY: "{{ .Values.config.opsgenie.apikey | b64enc }}"
  OPSGENIE_REGION: "{{ .Values.config.opsgenie.region | b64enc }}"
  OPSGENIE_MINIMUMPRIORITY: "{{ .Values.config.opsgenie.minimumpriority | b64enc }}"
  OPSGENIE_MUTUALTLS: "{{ .Values.config.opsgenie.mutualtls | printf "%t" | b64enc }}"
  OPSGENIE_CHECKCERT: "{{ .Values.config.opsgenie.checkcert | printf "%t" | b64enc }}"

  # Discord Output
  DISCORD_WEBHOOKURL: "{{ .Values.config.discord.webhookurl | b64enc }}"
  DISCORD_ICON: "{{ .Values.config.discord.icon | b64enc }}"
  DISCORD_MINIMUMPRIORITY: "{{ .Values.config.discord.minimumpriority | b64enc }}"

  # GCP Output
  GCP_CREDENTIALS: "{{ .Values.config.gcp.credentials | b64enc }}"
  GCP_PUBSUB_PROJECTID: "{{ .Values.config.gcp.pubsub.projectid | b64enc }}"
  GCP_PUBSUB_TOPIC: "{{ .Values.config.gcp.pubsub.topic | b64enc }}"
  GCP_PUBSUB_CUSTOMATTRIBUTES: "{{ .Values.config.gcp.pubsub.customattributes | b64enc }}"
  GCP_PUBSUB_MINIMUMPRIORITY: "{{ .Values.config.gcp.pubsub.minimumpriority | b64enc }}"
  GCP_STORAGE_BUCKET: "{{ .Values.config.gcp.storage.bucket | b64enc }}"
  GCP_STORAGE_PREFIX: "{{ .Values.config.gcp.storage.prefix | b64enc }}"
  GCP_STORAGE_MINIMUMPRIORITY: "{{ .Values.config.gcp.storage.minimumpriority | b64enc }}"
  GCP_CLOUDFUNCTIONS_NAME: "{{ .Values.config.gcp.cloudfunctions.name | b64enc }}"
  GCP_CLOUDFUNCTIONS_MINIMUMPRIORITY: "{{ .Values.config.gcp.cloudfunctions.minimumpriority | b64enc }}"
  GCP_CLOUDRUN_ENDPOINT: "{{ .Values.config.gcp.cloudrun.endpoint | b64enc }}"
  GCP_CLOUDRUN_JWT: "{{ .Values.config.gcp.cloudrun.jwt | b64enc }}"
  GCP_CLOUDRUN_MINIMUMPRIORITY: "{{ .Values.config.gcp.cloudrun.minimumpriority | b64enc }}"

  # GoogleChat Output
  GOOGLECHAT_WEBHOOKURL: "{{ .Values.config.googlechat.webhookurl | b64enc }}"
  GOOGLECHAT_OUTPUTFORMAT: "{{ .Values.config.googlechat.outputformat | b64enc }}"
  GOOGLECHAT_MINIMUMPRIORITY: "{{ .Values.config.googlechat.minimumpriority | b64enc }}"
  GOOGLECHAT_MESSAGEFORMAT: "{{ .Values.config.googlechat.messageformat | b64enc }}"

  # ElasticSearch Output
  ELASTICSEARCH_HOSTPORT: "{{ .Values.config.elasticsearch.hostport | b64enc }}"
  ELASTICSEARCH_INDEX: "{{ .Values.config.elasticsearch.index | b64enc }}"
  ELASTICSEARCH_TYPE: "{{ .Values.config.elasticsearch.type | b64enc }}"
  ELASTICSEARCH_SUFFIX: "{{ .Values.config.elasticsearch.suffix | b64enc }}"
  ELASTICSEARCH_MINIMUMPRIORITY: "{{ .Values.config.elasticsearch.minimumpriority | b64enc }}"
  ELASTICSEARCH_MUTUALTLS: "{{ .Values.config.elasticsearch.mutualtls | printf "%t" | b64enc }}"
  ELASTICSEARCH_CHECKCERT: "{{ .Values.config.elasticsearch.checkcert | printf "%t" | b64enc }}"
  ELASTICSEARCH_USERNAME: "{{ .Values.config.elasticsearch.username | b64enc }}"
  ELASTICSEARCH_PASSWORD: "{{ .Values.config.elasticsearch.password | b64enc }}"
  ELASTICSEARCH_CUSTOMHEADERS: "{{ .Values.config.elasticsearch.customheaders | b64enc }}"

  # Loki Output
  LOKI_HOSTPORT: "{{ .Values.config.loki.hostport | b64enc }}"
  LOKI_ENDPOINT: "{{ .Values.config.loki.endpoint | b64enc }}"
  LOKI_USER: "{{ .Values.config.loki.user | b64enc }}"
  LOKI_APIKEY: "{{ .Values.config.loki.apikey | b64enc }}"
  LOKI_TENANT: "{{ .Values.config.loki.tenant | b64enc }}"
  LOKI_EXTRALABELS: "{{ .Values.config.loki.extralabels | b64enc }}"
  LOKI_CUSTOMHEADERS: "{{ .Values.config.loki.customheaders | b64enc }}"
  LOKI_MINIMUMPRIORITY: "{{ .Values.config.loki.minimumpriority | b64enc }}"
  LOKI_MUTUALTLS: "{{ .Values.config.loki.mutualtls | printf "%t" | b64enc }}"
  LOKI_CHECKCERT: "{{ .Values.config.loki.checkcert | printf "%t" | b64enc }}"

  # Prometheus Output
  PROMETHEUS_EXTRALABELS: "{{ .Values.config.prometheus.extralabels | b64enc }}"

  # Nats Output
  NATS_HOSTPORT: "{{ .Values.config.nats.hostport | b64enc }}"
  NATS_MINIMUMPRIORITY: "{{ .Values.config.nats.minimumpriority | b64enc }}"
  NATS_MUTUALTLS: "{{ .Values.config.nats.mutualtls | printf "%t" | b64enc }}"
  NATS_CHECKCERT: "{{ .Values.config.nats.checkcert | printf "%t" | b64enc }}"

  # Stan Output
  STAN_HOSTPORT: "{{ .Values.config.stan.hostport | b64enc }}"
  STAN_CLUSTERID: "{{ .Values.config.stan.clusterid | b64enc }}"
  STAN_CLIENTID: "{{ .Values.config.stan.clientid | b64enc }}"
  STAN_MINIMUMPRIORITY: "{{ .Values.config.stan.minimumpriority | b64enc }}"
  STAN_MUTUALTLS: "{{ .Values.config.stan.mutualtls | printf "%t" | b64enc }}"
  STAN_CHECKCERT: "{{ .Values.config.stan.checkcert | printf "%t" | b64enc }}"

  # Statsd
  STATSD_FORWARDER: "{{ .Values.config.statsd.forwarder | b64enc }}"
  STATSD_NAMESPACE: "{{ .Values.config.statsd.namespace | b64enc }}"

  # Dogstatsd
  DOGSTATSD_FORWARDER: "{{ .Values.config.dogstatsd.forwarder | b64enc }}"
  DOGSTATSD_NAMESPACE: "{{ .Values.config.dogstatsd.namespace | b64enc }}"
  DOGSTATSD_TAGS: "{{ .Values.config.dogstatsd.tags | b64enc }}"

  # WebHook Output
  WEBHOOK_ADDRESS: "{{ .Values.config.webhook.address | b64enc }}"
  WEBHOOK_METHOD: "{{ .Values.config.webhook.method | b64enc }}"
  WEBHOOK_CUSTOMHEADERS: "{{ .Values.config.webhook.customHeaders | b64enc }}"
  WEBHOOK_MINIMUMPRIORITY: "{{ .Values.config.webhook.minimumpriority | b64enc }}"
  WEBHOOK_MUTUALTLS: "{{ .Values.config.webhook.mutualtls | printf "%t" | b64enc }}"
  WEBHOOK_CHECKCERT: "{{ .Values.config.webhook.checkcert | printf "%t" | b64enc }}"

  # Azure Output
  AZURE_EVENTHUB_NAME: "{{ .Values.config.azure.eventHub.name | b64enc }}"
  AZURE_EVENTHUB_NAMESPACE: "{{ .Values.config.azure.eventHub.namespace | b64enc }}"
  AZURE_EVENTHUB_MINIMUMPRIORITY: "{{ .Values.config.azure.eventHub.minimumpriority | b64enc }}"

  # Kafka Output
  KAFKA_HOSTPORT: "{{ .Values.config.kafka.hostport | b64enc }}"
  KAFKA_TOPIC: "{{ .Values.config.kafka.topic | b64enc }}"
  KAFKA_SASL: "{{ .Values.config.kafka.sasl | b64enc }}"
  KAFKA_TLS: "{{ .Values.config.kafka.tls | printf "%t" | b64enc }}"
  KAFKA_USERNAME: "{{ .Values.config.kafka.username | b64enc }}"
  KAFKA_PASSWORD: "{{ .Values.config.kafka.password | b64enc }}"
  KAFKA_ASYNC: "{{ .Values.config.kafka.async | printf "%t" | b64enc }}"
  KAFKA_REQUIREDACKS: "{{ .Values.config.kafka.requiredacks | b64enc }}"
  KAFKA_COMPRESSION: "{{ .Values.config.kafka.compression | b64enc }}"
  KAFKA_BALANCER: "{{ .Values.config.kafka.balancer | b64enc }}"
  KAFKA_TOPICCREATION: "{{ .Values.config.kafka.topiccreation | printf "%t" | b64enc }}"
  KAFKA_CLIENTID: "{{ .Values.config.kafka.clientid | b64enc }}"
  KAFKA_MINIMUMPRIORITY: "{{ .Values.config.kafka.minimumpriority | b64enc }}"

  # PagerDuty Output
  PAGERDUTY_ROUTINGKEY: "{{ .Values.config.pagerduty.routingkey | b64enc }}"
  PAGERDUTY_REGION: "{{ .Values.config.pagerduty.region | b64enc }}"
  PAGERDUTY_MINIMUMPRIORITY: "{{ .Values.config.pagerduty.minimumpriority | b64enc }}"

  # Kubeless Output
  KUBELESS_FUNCTION: "{{ .Values.config.kubeless.function | b64enc }}"
  KUBELESS_NAMESPACE: "{{ .Values.config.kubeless.namespace | b64enc }}"
  KUBELESS_PORT: "{{ .Values.config.kubeless.port | toString | b64enc }}"
  KUBELESS_MINIMUMPRIORITY: "{{ .Values.config.kubeless.minimumpriority | b64enc }}"
  KUBELESS_MUTUALTLS: "{{ .Values.config.kubeless.mutualtls | printf "%t" | b64enc }}"
  KUBELESS_CHECKCERT: "{{ .Values.config.kubeless.checkcert | printf "%t" | b64enc }}"

  # OpenFaaS
  OPENFAAS_GATEWAYNAMESPACE: "{{ .Values.config.openfaas.gatewaynamespace | b64enc }}"
  OPENFAAS_GATEWAYSERVICE: "{{ .Values.config.openfaas.gatewayservice | b64enc }}"
  OPENFAAS_FUNCTIONNAME: "{{ .Values.config.openfaas.functionname | b64enc }}"
  OPENFAAS_FUNCTIONNAMESPACE: "{{ .Values.config.openfaas.functionnamespace | b64enc }}"
  OPENFAAS_GATEWAYPORT: "{{ .Values.config.openfaas.gatewayport | toString | b64enc }}"
  OPENFAAS_MINIMUMPRIORITY: "{{ .Values.config.openfaas.minimumpriority | b64enc }}"
  OPENFAAS_MUTUALTLS: "{{ .Values.config.openfaas.mutualtls | printf "%t" | b64enc }}"
  OPENFAAS_CHECKCERT: "{{ .Values.config.openfaas.checkcert | printf "%t" | b64enc }}"

  # Cloud Events Output
  CLOUDEVENTS_ADDRESS: "{{ .Values.config.cloudevents.address | b64enc }}"
  CLOUDEVENTS_EXTENSION: "{{ .Values.config.cloudevents.extension | b64enc }}"
  CLOUDEVENTS_MINIMUMPRIORITY: "{{ .Values.config.cloudevents.minimumpriority | b64enc }}"

  # RabbitMQ Output
  RABBITMQ_URL: "{{ .Values.config.rabbitmq.url | b64enc }}"
  RABBITMQ_QUEUE: "{{ .Values.config.rabbitmq.queue | b64enc }}"
  RABBITMQ_MINIMUMPRIORITY: "{{ .Values.config.rabbitmq.minimumpriority | b64enc }}"

  # Wavefront Output
  WAVEFRONT_ENDPOINTTYPE: "{{ .Values.config.wavefront.endpointtype | b64enc }}"
  WAVEFRONT_ENDPOINTHOST: "{{ .Values.config.wavefront.endpointhost | b64enc }}"
  WAVEFRONT_ENDPOINTTOKEN: "{{ .Values.config.wavefront.endpointtoken | b64enc }}"
  WAVEFRONT_ENDPOINTMETRICPORT: "{{ .Values.config.wavefront.endpointmetricport | toString | b64enc }}"
  WAVEFRONT_FLUSHINTERVALSECONDS: "{{ .Values.config.wavefront.flushintervalseconds | toString | b64enc }}"
  WAVEFRONT_BATCHSIZE: "{{ .Values.config.wavefront.batchsize | toString | b64enc }}"
  WAVEFRONT_METRICNAME: "{{ .Values.config.wavefront.metricname | b64enc }}"
  WAVEFRONT_MINIMUMPRIORITY: "{{ .Values.config.wavefront.minimumpriority | b64enc }}"

  # Grafana Output
  GRAFANA_HOSTPORT: "{{ .Values.config.grafana.hostport | b64enc }}"
  GRAFANA_APIKEY: "{{ .Values.config.grafana.apikey | b64enc }}"
  GRAFANA_DASHBOARDID: "{{ .Values.config.grafana.dashboardid | toString | b64enc }}"
  GRAFANA_PANELID: "{{ .Values.config.grafana.panelid | toString | b64enc }}"
  GRAFANA_ALLFIELDSASTAGS: "{{ .Values.config.grafana.allfieldsastags | printf "%t" | b64enc }}"
  GRAFANA_CUSTOMHEADERS: "{{ .Values.config.grafana.customheaders | b64enc }}"
  GRAFANA_MUTUALTLS: "{{ .Values.config.grafana.mutualtls | printf "%t" | b64enc }}"
  GRAFANA_CHECKCERT: "{{ .Values.config.grafana.checkcert | printf "%t" | b64enc }}"
  GRAFANA_MINIMUMPRIORITY: "{{ .Values.config.grafana.minimumpriority | b64enc }}"

  # Grafana On Call Output
  GRAFANAONCALL_WEBHOOKURL: "{{ .Values.config.grafanaoncall.webhookurl | b64enc }}"
  GRAFANAONCALL_CUSTOMHEADERS: "{{ .Values.config.grafanaoncall.customheaders | b64enc }}"
  GRAFANAONCALL_CHECKCERT: "{{ .Values.config.grafanaoncall.checkcert | printf "%t" | b64enc }}"
  GRAFANAONCALL_MUTUALTLS: "{{ .Values.config.grafanaoncall.mutualtls | printf "%t" | b64enc }}"
  GRAFANAONCALL_MINIMUMPRIORITY: "{{ .Values.config.grafanaoncall.minimumpriority | b64enc }}"

  # Fission Output
  FISSION_FUNCTION: "{{ .Values.config.fission.function | b64enc }}"
  FISSION_ROUTERNAMESPACE: "{{ .Values.config.fission.routernamespace | b64enc }}"
  FISSION_ROUTERSERVICE: "{{ .Values.config.fission.routerservice | b64enc }}"
  FISSION_ROUTERPORT: "{{ .Values.config.fission.routerport | toString | b64enc }}"
  FISSION_MINIMUMPRIORITY: "{{ .Values.config.fission.minimumpriority | b64enc }}"
  FISSION_MUTUALTLS: "{{ .Values.config.fission.mutualtls | printf "%t" | b64enc }}"
  FISSION_CHECKCERT: "{{ .Values.config.fission.checkcert | printf "%t" | b64enc }}"

  # Yandex Output
  YANDEX_ACCESSKEYID: "{{ .Values.config.yandex.accesskeyid | b64enc }}"
  YANDEX_SECRETACCESSKEY: "{{ .Values.config.yandex.secretaccesskey | b64enc }}"
  YANDEX_REGION: "{{ .Values.config.yandex.region | b64enc }}"
  YANDEX_S3_ENDPOINT: "{{ .Values.config.yandex.s3.endpoint | b64enc }}"
  YANDEX_S3_BUCKET: "{{ .Values.config.yandex.s3.bucket | b64enc }}"
  YANDEX_S3_PREFIX: "{{ .Values.config.yandex.s3.prefix | b64enc }}"
  YANDEX_S3_MINIMUMPRIORITY: "{{ .Values.config.yandex.s3.minimumpriority | b64enc }}"
  YANDEX_DATASTREAMS_ENDPOINT: "{{ .Values.config.yandex.datastreams.endpoint | b64enc }}"
  YANDEX_DATASTREAMS_STREAMNAME: "{{ .Values.config.yandex.datastreams.streamname | b64enc }}"
  YANDEX_DATASTREAMS_MINIMUMPRIORITY: "{{ .Values.config.yandex.datastreams.minimumpriority | b64enc }}"

  # KafkaRest Output
  KAFKAREST_ADDRESS: "{{ .Values.config.kafkarest.address | b64enc }}"
  KAFKAREST_VERSION: "{{ .Values.config.kafkarest.version | toString | b64enc }}"
  KAFKAREST_MINIMUMPRIORITY: "{{ .Values.config.kafkarest.minimumpriority | b64enc }}"
  KAFKAREST_MUTUALTLS: "{{ .Values.config.kafkarest.mutualtls | printf "%t" | b64enc }}"
  KAFKAREST_CHECKCERT: "{{ .Values.config.kafkarest.checkcert | printf "%t" | b64enc }}"

  # Syslog
  SYSLOG_HOST: "{{ .Values.config.syslog.host | b64enc }}"
  SYSLOG_PORT: "{{ .Values.config.syslog.port | toString | b64enc }}"
  SYSLOG_PROTOCOL: "{{ .Values.config.syslog.protocol | b64enc }}"
  SYSLOG_FORMAT: "{{ .Values.config.syslog.format | b64enc }}"
  SYSLOG_MINIMUMPRIORITY: "{{ .Values.config.syslog.minimumpriority | b64enc }}"

  # Zoho Cliq
  CLIQ_WEBHOOKURL: "{{ .Values.config.cliq.webhookurl | b64enc }}"
  CLIQ_ICON: "{{ .Values.config.cliq.icon | b64enc }}"
  CLIQ_USEEMOJI: "{{ .Values.config.cliq.useemoji | printf "%t" | b64enc }}"
  CLIQ_OUTPUTFORMAT: "{{ .Values.config.cliq.outputformat | b64enc }}"
  CLIQ_MESSAGEFORMAT: "{{ .Values.config.cliq.messageformat | b64enc }}"
  CLIQ_MINIMUMPRIORITY: "{{ .Values.config.cliq.minimumpriority | b64enc }}"

  # Policy Reporter
  POLICYREPORT_ENABLED: "{{ .Values.config.policyreport.enabled | printf "%t" | b64enc }}"
  POLICYREPORT_KUBECONFIG: "{{ .Values.config.policyreport.kubeconfig | b64enc }}"
|
||||
POLICYREPORT_MAXEVENTS: "{{ .Values.config.policyreport.maxevents | toString | b64enc}}"
|
||||
POLICYREPORT_PRUNEBYPRIORITY: "{{ .Values.config.policyreport.prunebypriority | printf "%t" | b64enc}}"
|
||||
POLICYREPORT_MINIMUMPRIORITY : "{{ .Values.config.policyreport.minimumpriority | b64enc}}"
|
||||
|
||||
# Node Red
|
||||
NODERED_ADDRESS: "{{ .Values.config.nodered.address | b64enc}}"
|
||||
NODERED_USER: "{{ .Values.config.nodered.user | b64enc}}"
|
||||
NODERED_PASSWORD: "{{ .Values.config.nodered.password | b64enc}}"
|
||||
NODERED_CUSTOMHEADERS: "{{ .Values.config.nodered.customheaders | b64enc}}"
|
||||
NODERED_CHECKCERT : "{{ .Values.config.nodered.checkcert | printf "%t" | b64enc}}"
|
||||
NODERED_MINIMUMPRIORITY : "{{ .Values.config.nodered.minimumpriority | b64enc}}"
|
||||
|
||||
# MQTT
|
||||
MQTT_BROKER: "{{ .Values.config.mqtt.broker | b64enc}}"
|
||||
MQTT_TOPIC: "{{ .Values.config.mqtt.topic | b64enc}}"
|
||||
MQTT_QOS: "{{ .Values.config.mqtt.qos | toString | b64enc}}"
|
||||
MQTT_RETAINED : "{{ .Values.config.mqtt.retained | printf "%t" | b64enc}}"
|
||||
MQTT_USER: "{{ .Values.config.mqtt.user | b64enc}}"
|
||||
MQTT_PASSWORD: "{{ .Values.config.mqtt.password | b64enc}}"
|
||||
MQTT_CHECKCERT : "{{ .Values.config.mqtt.checkcert | printf "%t" | b64enc}}"
|
||||
MQTT_MINIMUMPRIORITY : "{{ .Values.config.mqtt.minimumpriority | b64enc}}"
|
||||
|
||||
# Zincsearch
|
||||
ZINCSEARCH_HOSTPORT: "{{ .Values.config.zincsearch.hostport | b64enc}}"
|
||||
ZINCSEARCH_INDEX: "{{ .Values.config.zincsearch.index | b64enc}}"
|
||||
ZINCSEARCH_USERNAME: "{{ .Values.config.zincsearch.username | b64enc}}"
|
||||
ZINCSEARCH_PASSWORD: "{{ .Values.config.zincsearch.password | b64enc}}"
|
||||
ZINCSEARCH_CHECKCERT : "{{ .Values.config.zincsearch.checkcert | printf "%t" | b64enc}}"
|
||||
ZINCSEARCH_MINIMUMPRIORITY : "{{ .Values.config.zincsearch.minimumpriority | b64enc}}"
|
||||
|
||||
# Gotify
|
||||
GOTIFY_HOSTPORT: "{{ .Values.config.gotify.hostport | b64enc}}"
|
||||
GOTIFY_TOKEN: "{{ .Values.config.gotify.token | b64enc}}"
|
||||
GOTIFY_FORMAT: "{{ .Values.config.gotify.format | b64enc}}"
|
||||
GOTIFY_CHECKCERT : "{{ .Values.config.gotify.checkcert | printf "%t" | b64enc}}"
|
||||
GOTIFY_MINIMUMPRIORITY : "{{ .Values.config.gotify.minimumpriority | b64enc}}"
|
||||
|
||||
# Tekton
|
||||
TEKTON_EVENTLISTENER: "{{ .Values.config.tekton.eventlistener | b64enc}}"
|
||||
TEKTON_CHECKCERT : "{{ .Values.config.tekton.checkcert | printf "%t" | b64enc}}"
|
||||
TEKTON_MINIMUMPRIORITY : "{{ .Values.config.tekton.minimumpriority | b64enc}}"
|
||||
|
||||
# Spyderbat
|
||||
SPYDERBAT_ORGUID: "{{ .Values.config.spyderbat.orguid | b64enc}}"
|
||||
SPYDERBAT_APIKEY: "{{ .Values.config.spyderbat.apikey | b64enc}}"
|
||||
SPYDERBAT_APIURL: "{{ .Values.config.spyderbat.apiurl | b64enc}}"
|
||||
SPYDERBAT_SOURCE: "{{ .Values.config.spyderbat.source | b64enc}}"
|
||||
SPYDERBAT_SOURCEDESCRIPTION: "{{ .Values.config.spyderbat.sourcedescription | b64enc}}"
|
||||
SPYDERBAT_MINIMUMPRIORITY : "{{ .Values.config.spyderbat.minimumpriority | b64enc}}"
|
||||
|
||||
# TimescaleDB
|
||||
TIMESCALEDB_HOST: "{{ .Values.config.timescaledb.host | b64enc}}"
|
||||
TIMESCALEDB_PORT: "{{ .Values.config.timescaledb.port | toString | b64enc}}"
|
||||
TIMESCALEDB_USER: "{{ .Values.config.timescaledb.user | b64enc}}"
|
||||
TIMESCALEDB_PASSWORD: "{{ .Values.config.timescaledb.password | b64enc}}"
|
||||
TIMESCALEDB_DATABASE: "{{ .Values.config.timescaledb.database | b64enc}}"
|
||||
TIMESCALEDB_HYPERTABLENAME: "{{ .Values.config.timescaledb.hypertablename | b64enc}}"
|
||||
TIMESCALEDB_MINIMUMPRIORITY : "{{ .Values.config.timescaledb.minimumpriority | b64enc}}"
|
||||
|
||||
# Redis Output
|
||||
REDIS_ADDRESS: "{{ .Values.config.redis.address | b64enc}}"
|
||||
REDIS_PASSWORD: "{{ .Values.config.redis.password | b64enc}}"
|
||||
REDIS_DATABASE: "{{ .Values.config.redis.database | toString | b64enc}}"
|
||||
REDIS_KEY: "{{ .Values.config.redis.key | b64enc}}"
|
||||
REDIS_STORAGETYPE: "{{ .Values.config.redis.storagetype | b64enc}}"
|
||||
REDIS_MINIMUMPRIORITY : "{{ .Values.config.redis.minimumpriority | b64enc}}"
|
||||
|
||||
# TELEGRAM Output
|
||||
TELEGRAM_TOKEN: "{{ .Values.config.telegram.token | b64enc}}"
|
||||
TELEGRAM_CHATID: "{{ .Values.config.telegram.chatid | b64enc}}"
|
||||
TELEGRAM_MINIMUMPRIORITY : "{{ .Values.config.telegram.minimumpriority | b64enc}}"
|
||||
TELEGRAM_CHECKCERT : "{{ .Values.config.telegram.checkcert | printf "%t" | b64enc}}"
|
||||
|
||||
# N8N Output
|
||||
N8N_ADDRESS: "{{ .Values.config.n8n.address | b64enc}}"
|
||||
N8N_USER: "{{ .Values.config.n8n.user | b64enc}}"
|
||||
N8N_PASSWORD: "{{ .Values.config.n8n.password | b64enc}}"
|
||||
N8N_MINIMUMPRIORITY : "{{ .Values.config.n8n.minimumpriority | b64enc}}"
|
||||
N8N_CHECKCERT : "{{ .Values.config.n8n.checkcert | printf "%t" | b64enc}}"
|
||||
|
||||
# Open Observe Output
|
||||
OPENOBSERVE_HOSTPORT: "{{ .Values.config.openobserve.hostport | b64enc}}"
|
||||
OPENOBSERVE_USERNAME: "{{ .Values.config.openobserve.username | b64enc}}"
|
||||
OPENOBSERVE_PASSWORD: "{{ .Values.config.openobserve.password | b64enc}}"
|
||||
OPENOBSERVE_CHECKCERT : "{{ .Values.config.openobserve.checkcert | printf "%t" | b64enc}}"
|
||||
OPENOBSERVE_MUTUALTLS : "{{ .Values.config.openobserve.mutualtls | printf "%t" | b64enc}}"
|
||||
OPENOBSERVE_CUSTOMHEADERS : "{{ .Values.config.openobserve.customheaders | b64enc}}"
|
||||
OPENOBSERVE_ORGANIZATIONNAME: "{{ .Values.config.openobserve.organizationname | b64enc}}"
|
||||
OPENOBSERVE_STREAMNAME: "{{ .Values.config.openobserve.streamname | b64enc}}"
|
||||
OPENOBSERVE_MINIMUMPRIORITY : "{{ .Values.config.openobserve.minimumpriority | b64enc}}"
|
||||
|
||||
# WebUI Output
|
||||
{{- if .Values.webui.enabled -}}
|
||||
{{ $weburl := printf "http://%s-ui:2802" (include "falcosidekick.fullname" .) }}
|
||||
WEBUI_URL: "{{ $weburl | b64enc }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
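Every value in the secret template above is coerced to a string (`printf "%t"` for booleans, `toString` for numbers) and then base64-encoded with Sprig's `b64enc`, since Kubernetes Secret `data` fields must be base64. A minimal shell sketch of the same round-trip, using a hypothetical `checkcert=true` value (the variable name is illustrative, not part of the chart):

```shell
#!/bin/sh
# Hypothetical stand-in for .Values.config.grafana.checkcert; in the
# template, Go's `printf "%t"` renders the boolean as the string "true".
checkcert="true"

# b64enc equivalent: base64 over the exact bytes, with no trailing newline.
encoded=$(printf '%s' "$checkcert" | base64)
echo "$encoded"    # dHJ1ZQ==

# Kubernetes decodes Secret data the same way before exposing it:
printf '%s' "$encoded" | base64 -d && echo
```

Note the use of `printf '%s'` rather than `echo`: a stray trailing newline would change the encoded value and end up in the decoded environment variable.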
53
falco/charts/falcosidekick/templates/service-ui.yaml
Normal file
@ -0,0 +1,53 @@
{{- if .Values.webui.enabled -}}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
  {{- with .Values.webui.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.webui.service.type }}
  ports:
    - port: {{ .Values.webui.service.port }}
      {{ if eq .Values.webui.service.type "NodePort" }}
      nodePort: {{ .Values.webui.service.nodePort }}
      {{ end }}
      targetPort: {{ .Values.webui.service.targetPort }}
      protocol: TCP
      name: http
  selector:
    {{- include "falcosidekick.selectorLabels" . | nindent 4 }}
    app.kubernetes.io/component: ui
{{- if .Values.webui.redis.enabled }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "falcosidekick.fullname" . }}-ui-redis
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: ui
  {{- with .Values.webui.redis.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.webui.redis.service.port }}
      targetPort: {{ .Values.webui.redis.service.targetPort }}
      protocol: TCP
      name: redis
  selector:
    {{- include "falcosidekick.selectorLabels" . | nindent 4 }}
    app.kubernetes.io/component: ui-redis
{{- end }}
{{- end }}
30
falco/charts/falcosidekick/templates/service.yaml
Normal file
@ -0,0 +1,30 @@
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
  {{- with .Values.service.annotations }}
  annotations:
    prometheus.io/scrape: "true"
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
    {{- if not (eq .Values.config.tlsserver.notlspaths "") }}
    - port: {{ .Values.config.tlsserver.notlsport }}
      targetPort: http-notls
      protocol: TCP
      name: http-notls
    {{- end }}
  selector:
    {{- include "falcosidekick.selectorLabels" . | nindent 4 }}
    app.kubernetes.io/component: core
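The second ports entry in the service template is gated on `{{- if not (eq .Values.config.tlsserver.notlspaths "") }}`: the plain-HTTP `http-notls` port is rendered only when at least one no-TLS path is configured. The guard reduces to a plain non-empty-string check, sketched here in shell with a hypothetical value (the variable name mirrors the values key but is not part of the chart):

```shell
#!/bin/sh
# Hypothetical stand-in for .Values.config.tlsserver.notlspaths
# (the chart default is the empty string).
notlspaths=""

# Equivalent of `not (eq .Values.config.tlsserver.notlspaths "")`:
if [ -n "$notlspaths" ]; then
  echo "render http-notls port"
else
  echo "skip http-notls port"
fi
```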
26
falco/charts/falcosidekick/templates/servicemonitor.yaml
Normal file
@ -0,0 +1,26 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "falcosidekick.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falcosidekick.labels" . | nindent 4 }}
    app.kubernetes.io/component: core
    {{- range $key, $value := .Values.serviceMonitor.additionalLabels }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
spec:
  endpoints:
    - port: http
      {{- if .Values.serviceMonitor.interval }}
      interval: {{ .Values.serviceMonitor.interval }}
      {{- end }}
      {{- if .Values.serviceMonitor.scrapeTimeout }}
      scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
      {{- end }}
  selector:
    matchLabels:
      {{- include "falcosidekick.selectorLabels" . | nindent 6 }}
      app.kubernetes.io/component: core
{{- end }}
@ -0,0 +1,31 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "falcosidekick.fullname" . }}-test-connection"
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/name: {{ include "falcosidekick.name" . }}
    helm.sh/chart: {{ include "falcosidekick.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: curl
      image: appropriate/curl
      command: ['curl']
      args: ["-X", "POST", '{{ include "falcosidekick.fullname" . }}:{{ .Values.service.port }}/ping']
  restartPolicy: Never
  {{- with .Values.testConnection.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 8 }}
  {{- end }}
  {{- with .Values.testConnection.affinity }}
  affinity:
    {{- toYaml . | nindent 8 }}
  {{- end }}
  {{- with .Values.testConnection.tolerations }}
  tolerations:
    {{- toYaml . | nindent 8 }}
  {{- end }}
1178
falco/charts/falcosidekick/values.yaml
Normal file
File diff suppressed because it is too large
23
falco/charts/k8s-metacollector/.helmignore
Normal file
@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
44
falco/charts/k8s-metacollector/CHANGELOG.md
Normal file
@ -0,0 +1,44 @@
# Change Log

This file documents all notable changes to the `k8s-metacollector` Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).

## v0.1.7

* Lower initial delay seconds for readiness and liveness probes;

## v0.1.6

* Add grafana dashboard;

## v0.1.5

* Fix service monitor indentation;

## v0.1.4

* Lower `interval` and `scrape_timeout` values for service monitor;

## v0.1.3

* Bump application version to 0.1.3

## v0.1.2

### Major Changes

* Update unit tests;

## v0.1.1

### Major Changes

* Add `work in progress` disclaimer;
* Update chart info.

## v0.1.0

### Major Changes

* Initial release of the k8s-metacollector Helm Chart. **Note:** the chart uses the `main` tag, since we haven't released the k8s-metacollector yet.
13
falco/charts/k8s-metacollector/Chart.yaml
Normal file
@ -0,0 +1,13 @@
apiVersion: v2
appVersion: 0.1.0
description: Install k8s-metacollector to fetch and distribute Kubernetes metadata
  to Falco instances.
home: https://github.com/falcosecurity/k8s-metacollector
maintainers:
- email: cncf-falco-dev@lists.cncf.io
  name: The Falco Authors
name: k8s-metacollector
sources:
- https://github.com/falcosecurity/k8s-metacollector
type: application
version: 0.1.7
71
falco/charts/k8s-metacollector/README.gotmpl
Normal file
@ -0,0 +1,71 @@
# k8s-metacollector

[k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) is a self-contained module that can be deployed within a Kubernetes cluster to gather metadata from various Kubernetes resources and transmit the collected metadata to designated subscribers.

## Introduction

This chart installs the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) in a Kubernetes cluster. The main application is deployed as a Kubernetes Deployment with a replica count of 1. For the application to work correctly, the following resources are created:
* ServiceAccount;
* ClusterRole;
* ClusterRoleBinding;
* Service;
* ServiceMonitor (optional);

*Note*: Increasing the number of replicas is not recommended. The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) does not implement memory sharding techniques. Furthermore, events are distributed over `gRPC` using `streams`, which does not work well with the load-balancing mechanisms implemented by Kubernetes.

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with default values and release name `k8s-metacollector`, run:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace
```

After a few seconds, k8s-metacollector should be running in the `metacollector` namespace.

### Enabling ServiceMonitor
Assuming that Prometheus scrapes only the ServiceMonitors that carry a `release` label, the following command installs and labels the ServiceMonitor:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector \
    --create-namespace \
    --namespace metacollector \
    --set serviceMonitor.create=true \
    --set serviceMonitor.labels.release="kube-prometheus-stack"
```

### Deploying the Grafana Dashboard
Setting `grafana.dashboards.enabled=true` deploys the k8s-metacollector's Grafana dashboard in the cluster using a configmap.
Depending on Grafana's configuration, the configmap can be picked up by the Grafana dashboard sidecar.
The following command deploys the k8s-metacollector together with the ServiceMonitor and the Grafana dashboard:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector \
    --create-namespace \
    --namespace metacollector \
    --set serviceMonitor.create=true \
    --set serviceMonitor.labels.release="kube-prometheus-stack" \
    --set grafana.dashboards.enabled=true
```

## Uninstalling the Chart
To uninstall the `k8s-metacollector` release in namespace `metacollector`:
```bash
helm uninstall k8s-metacollector --namespace metacollector
```
The command removes all the Kubernetes resources associated with the chart and deletes the release.

## Configuration

The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See `values.yaml` for the full list.

{{ template "chart.valuesSection" . }}
150
falco/charts/k8s-metacollector/README.md
Normal file
@ -0,0 +1,150 @@
# k8s-metacollector

[k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) is a self-contained module that can be deployed within a Kubernetes cluster to gather metadata from various Kubernetes resources and transmit the collected metadata to designated subscribers.

## Introduction

This chart installs the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) in a Kubernetes cluster. The main application is deployed as a Kubernetes Deployment with a replica count of 1. For the application to work correctly, the following resources are created:
* ServiceAccount;
* ClusterRole;
* ClusterRoleBinding;
* Service;
* ServiceMonitor (optional);

*Note*: Increasing the number of replicas is not recommended. The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) does not implement memory sharding techniques. Furthermore, events are distributed over `gRPC` using `streams`, which does not work well with the load-balancing mechanisms implemented by Kubernetes.

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with default values and release name `k8s-metacollector`, run:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace
```

After a few seconds, k8s-metacollector should be running in the `metacollector` namespace.

### Enabling ServiceMonitor
Assuming that Prometheus scrapes only the ServiceMonitors that carry a `release` label, the following command installs and labels the ServiceMonitor:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector \
    --create-namespace \
    --namespace metacollector \
    --set serviceMonitor.create=true \
    --set serviceMonitor.labels.release="kube-prometheus-stack"
```

### Deploying the Grafana Dashboard
Setting `grafana.dashboards.enabled=true` deploys the k8s-metacollector's Grafana dashboard in the cluster using a configmap.
Depending on Grafana's configuration, the configmap can be picked up by the Grafana dashboard sidecar.
The following command deploys the k8s-metacollector together with the ServiceMonitor and the Grafana dashboard:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector \
    --create-namespace \
    --namespace metacollector \
    --set serviceMonitor.create=true \
    --set serviceMonitor.labels.release="kube-prometheus-stack" \
    --set grafana.dashboards.enabled=true
```

## Uninstalling the Chart
To uninstall the `k8s-metacollector` release in namespace `metacollector`:
```bash
helm uninstall k8s-metacollector --namespace metacollector
```
The command removes all the Kubernetes resources associated with the chart and deletes the release.

## Configuration

The following table lists the main configurable parameters of the k8s-metacollector chart v0.1.7 and their default values. See `values.yaml` for the full list.

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | affinity allows pod placement based on node characteristics, or any other custom labels assigned to nodes. |
| containerSecurityContext | object | `{"capabilities":{"drop":["ALL"]}}` | containerSecurityContext holds the security settings for the container. |
| containerSecurityContext.capabilities | object | `{"drop":["ALL"]}` | capabilities are the fine-grained privileges that can be assigned to processes. |
| containerSecurityContext.capabilities.drop | list | `["ALL"]` | drop drops the given set of privileges. |
| fullnameOverride | string | `""` | fullnameOverride same as nameOverride but for the full name. |
| grafana | object | `{"dashboards":{"configMaps":{"collector":{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}},"enabled":false}}` | grafana contains the configuration related to grafana. |
| grafana.dashboards | object | `{"configMaps":{"collector":{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}},"enabled":false}` | dashboards contains configuration for grafana dashboards. |
| grafana.dashboards.configMaps | object | `{"collector":{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}}` | configmaps to be deployed that contain a grafana dashboard. |
| grafana.dashboards.configMaps.collector | object | `{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}` | collector contains the configuration for collector's dashboard. |
| grafana.dashboards.configMaps.collector.folder | string | `""` | folder where the dashboard is stored by grafana. |
| grafana.dashboards.configMaps.collector.name | string | `"k8s-metacollector-grafana-dashboard"` | name specifies the name for the configmap. |
| grafana.dashboards.configMaps.collector.namespace | string | `""` | namespace specifies the namespace for the configmap. |
| grafana.dashboards.enabled | bool | `false` | enabled specifies whether the dashboards should be deployed. |
| healthChecks | object | `{"livenessProbe":{"httpGet":{"path":"/healthz","port":8081},"initialDelaySeconds":45,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"httpGet":{"path":"/readyz","port":8081},"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}}` | healthChecks contains the configuration for liveness and readiness probes. |
| healthChecks.livenessProbe | object | `{"httpGet":{"path":"/healthz","port":8081},"initialDelaySeconds":45,"periodSeconds":15,"timeoutSeconds":5}` | livenessProbe is a diagnostic mechanism used to determine whether a container within a Pod is still running and healthy. |
| healthChecks.livenessProbe.httpGet | object | `{"path":"/healthz","port":8081}` | httpGet specifies that the liveness probe will make an HTTP GET request to check the health of the container. |
| healthChecks.livenessProbe.httpGet.path | string | `"/healthz"` | path is the specific endpoint on which the HTTP GET request will be made. |
| healthChecks.livenessProbe.httpGet.port | int | `8081` | port is the port on which the container exposes the "/healthz" endpoint. |
| healthChecks.livenessProbe.initialDelaySeconds | int | `45` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.livenessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the liveness probe will be repeated. |
| healthChecks.livenessProbe.timeoutSeconds | int | `5` | timeoutSeconds is the number of seconds after which the probe times out. |
| healthChecks.readinessProbe | object | `{"httpGet":{"path":"/readyz","port":8081},"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}` | readinessProbe is a mechanism used to determine whether a container within a Pod is ready to serve traffic. |
| healthChecks.readinessProbe.httpGet | object | `{"path":"/readyz","port":8081}` | httpGet specifies that the readiness probe will make an HTTP GET request to check whether the container is ready. |
| healthChecks.readinessProbe.httpGet.path | string | `"/readyz"` | path is the specific endpoint on which the HTTP GET request will be made. |
| healthChecks.readinessProbe.httpGet.port | int | `8081` | port is the port on which the container exposes the "/readyz" endpoint. |
| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.readinessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the readiness probe will be repeated. |
| healthChecks.readinessProbe.timeoutSeconds | int | `5` | timeoutSeconds is the number of seconds after which the probe times out. |
| image | object | `{"pullPolicy":"IfNotPresent","pullSecrets":[],"registry":"docker.io","repository":"falcosecurity/k8s-metacollector","tag":""}` | image is the configuration for the k8s-metacollector image. |
| image.pullPolicy | string | `"IfNotPresent"` | pullPolicy is the policy used to determine when a node should attempt to pull the container image. |
| image.pullSecrets | list | `[]` | pullSecrets is a list of secrets containing credentials used when pulling from private/secure registries. |
| image.registry | string | `"docker.io"` | registry is the image registry to pull from. |
| image.repository | string | `"falcosecurity/k8s-metacollector"` | repository is the image repository to pull from. |
| image.tag | string | `""` | tag is the image tag to pull. Overrides the image tag whose default is the chart appVersion. |
| nameOverride | string | `""` | nameOverride is the new name used to override the release name used for k8s-metacollector components. |
| namespaceOverride | string | `""` | namespaceOverride overrides the deployment namespace. It's useful for multi-namespace deployments in combined charts. |
| nodeSelector | object | `{}` | nodeSelector specifies a set of key-value pairs that must match labels assigned to nodes for the Pod to be eligible for scheduling on that node. |
| podAnnotations | object | `{}` | podAnnotations are custom annotations to be added to the pod. |
| podSecurityContext | object | `{"fsGroup":1000,"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000}` | These settings are overridden by the ones specified for the container when there is overlap. |
| podSecurityContext.fsGroup | int | `1000` | fsGroup specifies the group ID (GID) that should be used for the volume mounted within a Pod. |
| podSecurityContext.runAsGroup | int | `1000` | runAsGroup specifies the group ID (GID) that the containers inside the pod should run as. |
| podSecurityContext.runAsNonRoot | bool | `true` | runAsNonRoot when set to true enforces that the specified container runs as a non-root user. |
| podSecurityContext.runAsUser | int | `1000` | runAsUser specifies the user ID (UID) that the containers inside the pod should run as. |
| replicaCount | int | `1` | replicaCount is the number of identical copies of the k8s-metacollector. |
| resources | object | `{}` | resources defines the computing resources (CPU and memory) that are allocated to the containers running within the Pod. |
| service | object | `{"create":true,"ports":{"broker-grpc":{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"},"health-probe":{"port":8081,"protocol":"TCP","targetPort":"health-probe"},"metrics":{"port":8080,"protocol":"TCP","targetPort":"metrics"}},"type":"ClusterIP"}` | service exposes the k8s-metacollector services to be accessed from within the cluster. ref: https://kubernetes.io/docs/concepts/services-networking/service/ |
| service.create | bool | `true` | create specifies whether a service should be created. |
| service.ports | object | `{"broker-grpc":{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"},"health-probe":{"port":8081,"protocol":"TCP","targetPort":"health-probe"},"metrics":{"port":8080,"protocol":"TCP","targetPort":"metrics"}}` | ports denotes all the ports on which the Service will listen. |
| service.ports.broker-grpc | object | `{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"}` | broker-grpc denotes a listening service named "broker-grpc". |
| service.ports.broker-grpc.port | int | `45000` | port is the port on which the Service will listen. |
| service.ports.broker-grpc.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. |
| service.ports.broker-grpc.targetPort | string | `"broker-grpc"` | targetPort is the port on which the Pod is listening. |
| service.ports.health-probe | object | `{"port":8081,"protocol":"TCP","targetPort":"health-probe"}` | health-probe denotes a listening service named "health-probe". |
| service.ports.health-probe.port | int | `8081` | port is the port on which the Service will listen. |
| service.ports.health-probe.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. |
| service.ports.health-probe.targetPort | string | `"health-probe"` | targetPort is the port on which the Pod is listening. |
| service.ports.metrics | object | `{"port":8080,"protocol":"TCP","targetPort":"metrics"}` | metrics denotes a listening service named "metrics". |
| service.ports.metrics.port | int | `8080` | port is the port on which the Service will listen. |
| service.ports.metrics.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. |
| service.ports.metrics.targetPort | string | `"metrics"` | targetPort is the port on which the Pod is listening. |
| service.type | string | `"ClusterIP"` | type denotes the service type. Setting it to "ClusterIP" ensures the services are accessible only from within the cluster. |
| serviceAccount | object | `{"annotations":{},"create":true,"name":""}` | serviceAccount is the configuration for the service account. |
| serviceAccount.annotations | object | `{}` | annotations to add to the service account. |
| serviceAccount.create | bool | `true` | create specifies whether a service account should be created. |
|
||||
| serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the full name template. |
|
||||
| serviceMonitor | object | `{"create":false,"interval":"15s","labels":{},"path":"/metrics","relabelings":[],"scheme":"http","scrapeTimeout":"10s","targetLabels":[],"tlsConfig":{}}` | serviceMonitor holds the configuration for the ServiceMonitor CRD. A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should discover and scrape metrics from the k8s-metacollector service. |
|
||||
| serviceMonitor.create | bool | `false` | create specifies whether a ServiceMonitor CRD should be created for a prometheus operator. https://github.com/coreos/prometheus-operator Enable it only if the ServiceMonitor CRD is installed in your cluster. |
|
||||
| serviceMonitor.interval | string | `"15s"` | interval specifies the time interval at which Prometheus should scrape metrics from the service. |
|
||||
| serviceMonitor.labels | object | `{}` | labels set of labels to be applied to the ServiceMonitor resource. If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right label here in order for the ServiceMonitor to be selected for target discovery. |
|
||||
| serviceMonitor.path | string | `"/metrics"` | path at which the metrics are expose by the k8s-metacollector. |
|
||||
| serviceMonitor.relabelings | list | `[]` | relabelings configures the relabeling rules to apply the target’s metadata labels. |
|
||||
| serviceMonitor.scheme | string | `"http"` | scheme specifies network protocol used by the metrics endpoint. In this case HTTP. |
|
||||
| serviceMonitor.scrapeTimeout | string | `"10s"` | scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for that target. |
|
||||
| serviceMonitor.targetLabels | list | `[]` | targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. |
|
||||
| serviceMonitor.tlsConfig | object | `{}` | tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when scraping metrics from a service. It allows you to define the details of the TLS connection, such as CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support TLS configuration for the metrics endpoint. |
|
||||
| tolerations | list | `[]` | tolerations are applied to pods and allow them to be scheduled on nodes with matching taints. |
|
||||
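The values documented above can be overridden at install time. A minimal sketch of an override file for this sub-chart (the file name `my-values.yaml` and the `release: prometheus` label are hypothetical; the keys come from the table):

```yaml
# my-values.yaml -- hypothetical override file for the k8s-metacollector sub-chart.
replicaCount: 2

service:
  create: true
  type: ClusterIP
  ports:
    metrics:
      port: 9090            # expose the metrics Service on a custom port
      targetPort: metrics
      protocol: TCP

serviceMonitor:
  create: true              # requires the ServiceMonitor CRD to be installed
  interval: 30s
  scrapeTimeout: 10s
  labels:
    release: prometheus     # match your Prometheus serviceMonitorSelector, if any
```

Passed with `-f my-values.yaml`, these keys merge over the chart defaults shown in the table.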
File diff suppressed because it is too large
1
falco/charts/k8s-metacollector/templates/NOTES.txt
Normal file
@ -0,0 +1 @@
121
falco/charts/k8s-metacollector/templates/_helpers.tpl
Normal file
@ -0,0 +1,121 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "k8s-metacollector.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "k8s-metacollector.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "k8s-metacollector.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "k8s-metacollector.namespace" -}}
{{- default .Release.Namespace .Values.namespaceOverride -}}
{{- end }}

{{/*
Common labels
*/}}
{{- define "k8s-metacollector.labels" -}}
helm.sh/chart: {{ include "k8s-metacollector.chart" . }}
{{ include "k8s-metacollector.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "metadata-collector"
{{- end }}

{{/*
Selector labels
*/}}
{{- define "k8s-metacollector.selectorLabels" -}}
app.kubernetes.io/name: {{ include "k8s-metacollector.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Return the proper k8s-metacollector image name
*/}}
{{- define "k8s-metacollector.image" -}}
"
{{- with .Values.image.registry -}}
    {{- . }}/
{{- end -}}
{{- .Values.image.repository }}:
{{- .Values.image.tag | default .Chart.AppVersion -}}
"
{{- end -}}

{{/*
Create the name of the service account to use
*/}}
{{- define "k8s-metacollector.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "k8s-metacollector.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

{{/*
Generate the ports for the service
*/}}
{{- define "k8s-metacollector.servicePorts" -}}
{{- if .Values.service.create }}
{{- with .Values.service.ports }}
{{- range $key, $value := . }}
- name: {{ $key }}
  {{- range $key1, $value1 := $value }}
  {{ $key1 }}: {{ $value1 }}
  {{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Generate the ports for the container
*/}}
{{- define "k8s-metacollector.containerPorts" -}}
{{- if .Values.service.create }}
{{- with .Values.service.ports }}
{{- range $key, $value := . }}
- name: "{{ $key }}"
  {{- range $key1, $value1 := $value }}
  {{- if ne $key1 "targetPort" }}
  {{- if eq $key1 "port" }}
  containerPort: {{ $value1 }}
  {{- else }}
  {{ $key1 }}: {{ $value1 }}
  {{- end }}
  {{- end }}
  {{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
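Given the default `service.ports` values, the `servicePorts` and `containerPorts` helpers above should render roughly the following fragments (illustrative output, not taken from the chart; Go templates iterate map keys in sorted order, and `health-probe` is omitted here for brevity):

```yaml
# k8s-metacollector.servicePorts -- one entry per key of service.ports
- name: broker-grpc
  port: 45000
  protocol: TCP
  targetPort: broker-grpc
- name: metrics
  port: 8080
  protocol: TCP
  targetPort: metrics

# k8s-metacollector.containerPorts -- "targetPort" is skipped, "port" becomes "containerPort"
- name: "metrics"
  containerPort: 8080
  protocol: TCP
```

Deriving the container ports from the Service definition keeps the two in sync without duplicating the port list in values.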
39
falco/charts/k8s-metacollector/templates/clusterrole.yaml
Normal file
@ -0,0 +1,39 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "k8s-metacollector.fullname" . }}
  labels:
    {{- include "k8s-metacollector.labels" . | nindent 4 }}
rules:
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - deployments
      - replicasets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - endpoints
      - namespaces
      - pods
      - replicationcontrollers
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - get
      - list
      - watch
{{- end }}
@ -0,0 +1,16 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "k8s-metacollector.fullname" . }}
  labels:
    {{- include "k8s-metacollector.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "k8s-metacollector.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "k8s-metacollector.serviceAccountName" . }}
    namespace: {{ include "k8s-metacollector.namespace" . }}
{{- end }}
@ -0,0 +1,21 @@
{{- if .Values.grafana.dashboards.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.grafana.dashboards.configMaps.collector.name }}
  {{ if .Values.grafana.dashboards.configMaps.collector.namespace }}
  namespace: {{ .Values.grafana.dashboards.configMaps.collector.namespace }}
  {{- else -}}
  namespace: {{ include "k8s-metacollector.namespace" . }}
  {{- end }}
  labels:
    grafana_dashboard: "1"
  {{- if .Values.grafana.dashboards.configMaps.collector.folder }}
  annotations:
    k8s-sidecar-target-directory: /tmp/dashboards/{{ .Values.grafana.dashboards.configMaps.collector.folder }}
    grafana_dashboard_folder: {{ .Values.grafana.dashboards.configMaps.collector.folder }}
  {{- end }}
data:
  dashboard.json: |-
    {{- .Files.Get "dashboards/k8s-metacollector-dashboard.json" | nindent 4 }}
{{- end -}}
62
falco/charts/k8s-metacollector/templates/deployment.yaml
Normal file
@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "k8s-metacollector.fullname" . }}
  namespace: {{ include "k8s-metacollector.namespace" . }}
  labels:
    {{- include "k8s-metacollector.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "k8s-metacollector.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "k8s-metacollector.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "k8s-metacollector.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.containerSecurityContext | nindent 12 }}
          image: {{ include "k8s-metacollector.image" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            - /meta-collector
          args:
            - run
          ports:
            {{- include "k8s-metacollector.containerPorts" . | indent 12 }}
          {{- with .Values.healthChecks.livenessProbe }}
          livenessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.healthChecks.readinessProbe }}
          readinessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
15
falco/charts/k8s-metacollector/templates/service.yaml
Normal file
@ -0,0 +1,15 @@
{{- if .Values.service.create }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "k8s-metacollector.fullname" . }}
  namespace: {{ include "k8s-metacollector.namespace" . }}
  labels:
    {{- include "k8s-metacollector.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    {{- include "k8s-metacollector.servicePorts" . | indent 4 }}
  selector:
    {{- include "k8s-metacollector.selectorLabels" . | nindent 4 }}
{{- end }}
13
falco/charts/k8s-metacollector/templates/serviceaccount.yaml
Normal file
@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "k8s-metacollector.serviceAccountName" . }}
  namespace: {{ include "k8s-metacollector.namespace" . }}
  labels:
    {{- include "k8s-metacollector.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
47
falco/charts/k8s-metacollector/templates/servicemonitor.yaml
Normal file
@ -0,0 +1,47 @@
{{- if .Values.serviceMonitor.create }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "k8s-metacollector.fullname" . }}
  {{- if .Values.serviceMonitor.namespace }}
  namespace: {{ tpl .Values.serviceMonitor.namespace . }}
  {{- else }}
  namespace: {{ include "k8s-metacollector.namespace" . }}
  {{- end }}
  labels:
    {{- include "k8s-metacollector.labels" . | nindent 4 }}
    {{- with .Values.serviceMonitor.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  endpoints:
    - port: {{ .Values.service.ports.metrics.targetPort }}
      {{- with .Values.serviceMonitor.interval }}
      interval: {{ . }}
      {{- end }}
      {{- with .Values.serviceMonitor.scrapeTimeout }}
      scrapeTimeout: {{ . }}
      {{- end }}
      honorLabels: true
      path: {{ .Values.serviceMonitor.path }}
      scheme: {{ .Values.serviceMonitor.scheme }}
      {{- with .Values.serviceMonitor.tlsConfig }}
      tlsConfig:
        {{- toYaml . | nindent 6 }}
      {{- end }}
      {{- with .Values.serviceMonitor.relabelings }}
      relabelings:
        {{- toYaml . | nindent 6 }}
      {{- end }}
  jobLabel: "{{ .Release.Name }}"
  selector:
    matchLabels:
      {{- include "k8s-metacollector.selectorLabels" . | nindent 6 }}
  namespaceSelector:
    matchNames:
      - {{ include "k8s-metacollector.namespace" . }}
  {{- with .Values.serviceMonitor.targetLabels }}
  targetLabels:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
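For orientation, with `serviceMonitor.create=true` and otherwise default values, the template above should render a resource roughly like the following (an illustrative sketch, not chart output; the release name `falco` and namespace `default` are assumed, and the chart/version labels are omitted):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: falco-k8s-metacollector
  namespace: default
  labels:
    app.kubernetes.io/name: k8s-metacollector
    app.kubernetes.io/instance: falco
spec:
  endpoints:
    - port: metrics          # matches service.ports.metrics.targetPort
      interval: 15s
      scrapeTimeout: 10s
      honorLabels: true
      path: /metrics
      scheme: http
  jobLabel: "falco"
  selector:
    matchLabels:
      app.kubernetes.io/name: k8s-metacollector
      app.kubernetes.io/instance: falco
  namespaceSelector:
    matchNames:
      - default
```

Because the `selector.matchLabels` reuse the chart's selector labels, Prometheus discovers exactly the Service created by `service.yaml`.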
34
falco/charts/k8s-metacollector/tests/unit/chartInfo.go
Normal file
@ -0,0 +1,34 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"gopkg.in/yaml.v3"
)

func chartInfo(t *testing.T, chartPath string) (map[string]interface{}, error) {
	// Get chart info.
	output, err := helm.RunHelmCommandAndGetOutputE(t, &helm.Options{}, "show", "chart", chartPath)
	if err != nil {
		return nil, err
	}
	chartInfo := map[string]interface{}{}
	err = yaml.Unmarshal([]byte(output), &chartInfo)
	return chartInfo, err
}
@ -0,0 +1,222 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"fmt"
	"path/filepath"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

type commonMetaFieldsTest struct {
	suite.Suite
	chartPath   string
	releaseName string
	namespace   string
	templates   []string
}

func TestCommonMetaFields(t *testing.T) {
	t.Parallel()
	// Template files that will be rendered.
	templateFiles := []string{
		"templates/clusterrole.yaml",
		"templates/clusterrolebinding.yaml",
		"templates/deployment.yaml",
		"templates/service.yaml",
		"templates/serviceaccount.yaml",
		"templates/servicemonitor.yaml",
	}

	chartFullPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	suite.Run(t, &commonMetaFieldsTest{
		Suite:       suite.Suite{},
		chartPath:   chartFullPath,
		releaseName: "releasename-test",
		namespace:   "metacollector-test",
		templates:   templateFiles,
	})
}

func (s *commonMetaFieldsTest) TestNameOverride() {
	cInfo, err := chartInfo(s.T(), s.chartPath)
	s.NoError(err)
	chartName, found := cInfo["name"]
	s.True(found)

	testCases := []struct {
		name     string
		values   map[string]string
		expected string
	}{
		{
			"defaultValues, release name does not contain chart name",
			map[string]string{
				"serviceMonitor.create": "true",
			},
			fmt.Sprintf("%s-%s", s.releaseName, chartName),
		},
		{
			"overrideFullName",
			map[string]string{
				"fullnameOverride":      "metadata",
				"serviceMonitor.create": "true",
			},
			"metadata",
		},
		{
			"overrideFullName, longer than 63 chars",
			map[string]string{
				"fullnameOverride":      "aVeryLongNameForTheReleaseThatIsLongerThanSixtyThreeCharsaVeryLongNameForTheReleaseThatIsLongerThanSixtyThreeChars",
				"serviceMonitor.create": "true",
			},
			"aVeryLongNameForTheReleaseThatIsLongerThanSixtyThreeCharsaVeryL",
		},
		{
			"overrideName, not containing release name",
			map[string]string{
				"nameOverride":          "metadata",
				"serviceMonitor.create": "true",
			},
			fmt.Sprintf("%s-metadata", s.releaseName),
		},

		{
			"overrideName, release name contains the name",
			map[string]string{
				"nameOverride":          "releasename",
				"serviceMonitor.create": "true",
			},
			s.releaseName,
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			for _, template := range s.templates {
				// Render the current template.
				output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, []string{template})
				// Unmarshal output to a map.
				var resource unstructured.Unstructured
				helm.UnmarshalK8SYaml(s.T(), output, &resource)

				s.Equal(testCase.expected, resource.GetName(), "should be equal")
			}
		})
	}
}

func (s *commonMetaFieldsTest) TestNamespaceOverride() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected string
	}{
		{
			"defaultValues",
			map[string]string{
				"serviceMonitor.create": "true",
			},
			"default",
		},
		{
			"overrideNamespace",
			map[string]string{
				"namespaceOverride":     "metacollector",
				"serviceMonitor.create": "true",
			},
			"metacollector",
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			for _, template := range s.templates {
				// Render the current template.
				output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, []string{template})
				// Unmarshal output to a map.
				var resource unstructured.Unstructured
				helm.UnmarshalK8SYaml(s.T(), output, &resource)
				if resource.GetKind() == "ClusterRole" || resource.GetKind() == "ClusterRoleBinding" {
					continue
				}
				s.Equal(testCase.expected, resource.GetNamespace(), "should be equal")
			}
		})
	}
}

// TestLabels tests that all rendered resources have the same base set of labels.
func (s *commonMetaFieldsTest) TestLabels() {
	// Get chart info.
	cInfo, err := chartInfo(s.T(), s.chartPath)
	s.NoError(err)
	// Get app version.
	appVersion, found := cInfo["appVersion"]
	s.True(found, "should find app version in chart info")
	appVersion = appVersion.(string)
	// Get chart version.
	chartVersion, found := cInfo["version"]
	s.True(found, "should find chart version in chart info")
	// Get chart name.
	chartName, found := cInfo["name"]
	s.True(found, "should find chart name in chart info")
	chartName = chartName.(string)
	expectedLabels := map[string]string{
		"helm.sh/chart":                fmt.Sprintf("%s-%s", chartName, chartVersion),
		"app.kubernetes.io/name":       chartName.(string),
		"app.kubernetes.io/instance":   s.releaseName,
		"app.kubernetes.io/version":    appVersion.(string),
		"app.kubernetes.io/managed-by": "Helm",
		"app.kubernetes.io/component":  "metadata-collector",
	}

	// Adjust the values to render all components.
	options := helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}}

	for _, template := range s.templates {
		// Render the current template.
		output := helm.RenderTemplate(s.T(), &options, s.chartPath, s.releaseName, []string{template})
		// Unmarshal output to a map.
		var resource unstructured.Unstructured
		helm.UnmarshalK8SYaml(s.T(), output, &resource)
		labels := resource.GetLabels()
		s.Len(labels, len(expectedLabels), "should have the same number of labels")
		for key, value := range labels {
			expectedVal := expectedLabels[key]
			s.Equal(expectedVal, value)
		}
	}
}
@ -0,0 +1,76 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"path/filepath"
	"regexp"
	"strings"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	"k8s.io/utils/strings/slices"
)

const chartPath = "../../"

// Using the default values we want to test that all the expected resources are rendered.
func TestRenderedResourcesWithDefaultValues(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	releaseName := "rendered-resources"

	// Template files that we expect to be rendered.
	templateFiles := []string{
		"clusterrole.yaml",
		"clusterrolebinding.yaml",
		"deployment.yaml",
		"service.yaml",
		"serviceaccount.yaml",
	}

	require.NoError(t, err)

	options := &helm.Options{}

	// Template the chart using the default values.yaml file.
	output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
	require.NoError(t, err)

	// Extract all rendered files from the output.
	pattern := `# Source: k8s-metacollector/templates/([^\n]+)`
	re := regexp.MustCompile(pattern)
	matches := re.FindAllStringSubmatch(output, -1)

	var renderedTemplates []string
	for _, match := range matches {
		// Filter out test templates.
		if !strings.Contains(match[1], "test-") {
			renderedTemplates = append(renderedTemplates, match[1])
		}
	}

	// Assert that the rendered resources are equal to the expected ones.
	require.Equal(t, len(renderedTemplates), len(templateFiles), "should be equal")

	for _, rendered := range renderedTemplates {
		require.True(t, slices.Contains(templateFiles, rendered), "template files should contain all the rendered files")
	}
}
@ -0,0 +1,862 @@
|
||||
// SPDX-License-Identifier: Apache-2.0
|
||||
// Copyright 2024 The Falco Authors
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package unit
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/gruntwork-io/terratest/modules/helm"
|
||||
"github.com/stretchr/testify/require"
|
||||
"github.com/stretchr/testify/suite"
|
||||
appsv1 "k8s.io/api/apps/v1"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
)
|
||||
|
||||
type deploymentTemplateTest struct {
|
||||
suite.Suite
|
||||
chartPath string
|
||||
releaseName string
|
||||
namespace string
|
||||
	templates   []string
}

func TestDeploymentTemplate(t *testing.T) {
	t.Parallel()

	chartFullPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	suite.Run(t, &deploymentTemplateTest{
		Suite:       suite.Suite{},
		chartPath:   chartFullPath,
		releaseName: "k8s-metacollector-test",
		namespace:   "metacollector-test",
		templates:   []string{"templates/deployment.yaml"},
	})
}

func (s *deploymentTemplateTest) TestImage() {
	// Get chart info.
	cInfo, err := chartInfo(s.T(), s.chartPath)
	s.NoError(err)
	// Extract the appVersion.
	appVersion, found := cInfo["appVersion"]
	s.True(found, "should find app version from chart info")

	testCases := []struct {
		name     string
		values   map[string]string
		expected string
	}{
		{
			"defaultValues",
			nil,
			fmt.Sprintf("docker.io/falcosecurity/k8s-metacollector:%s", appVersion),
		},
		{
			"changingImageTag",
			map[string]string{
				"image.tag": "testingTag",
			},
			"docker.io/falcosecurity/k8s-metacollector:testingTag",
		},
		{
			"changingImageRepo",
			map[string]string{
				"image.repository": "falcosecurity/testingRepository",
			},
			fmt.Sprintf("docker.io/falcosecurity/testingRepository:%s", appVersion),
		},
		{
			"changingImageRegistry",
			map[string]string{
				"image.registry": "ghcr.io",
			},
			fmt.Sprintf("ghcr.io/falcosecurity/k8s-metacollector:%s", appVersion),
		},
		{
			"changingAllImageFields",
			map[string]string{
				"image.registry":   "ghcr.io",
				"image.repository": "falcosecurity/testingRepository",
				"image.tag":        "testingTag",
			},
			"ghcr.io/falcosecurity/testingRepository:testingTag",
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			s.Equal(testCase.expected, deployment.Spec.Template.Spec.Containers[0].Image, "should be equal")
		})
	}
}

func (s *deploymentTemplateTest) TestImagePullPolicy() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected string
	}{
		{
			"defaultValues",
			nil,
			"IfNotPresent",
		},
		{
			"changingPullPolicy",
			map[string]string{
				"image.pullPolicy": "Always",
			},
			"Always",
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			s.Equal(testCase.expected, string(deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy), "should be equal")
		})
	}
}

func (s *deploymentTemplateTest) TestImagePullSecrets() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected string
	}{
		{
			"defaultValues",
			nil,
			"",
		},
		{
			"changingPullSecrets",
			map[string]string{
				"image.pullSecrets[0].name": "my-pull-secret",
			},
			"my-pull-secret",
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)
			if testCase.expected == "" {
				s.Nil(deployment.Spec.Template.Spec.ImagePullSecrets, "should be nil")
			} else {
				s.Equal(testCase.expected, deployment.Spec.Template.Spec.ImagePullSecrets[0].Name, "should be equal")
			}
		})
	}
}

func (s *deploymentTemplateTest) TestServiceAccount() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected string
	}{
		{
			"defaultValues",
			nil,
			s.releaseName,
		},
		{
			"changingServiceAccountName",
			map[string]string{
				"serviceAccount.name": "service-account",
			},
			"service-account",
		},
		{
			"disablingServiceAccount",
			map[string]string{
				"serviceAccount.create": "false",
			},
			"default",
		},
		{
			"disablingServiceAccountAndSettingName",
			map[string]string{
				"serviceAccount.create": "false",
				"serviceAccount.name":   "service-account",
			},
			"service-account",
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			s.Equal(testCase.expected, deployment.Spec.Template.Spec.ServiceAccountName, "should be equal")
		})
	}
}

func (s *deploymentTemplateTest) TestPodAnnotations() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected map[string]string
	}{
		{
			"defaultValues",
			nil,
			nil,
		},
		{
			"settingPodAnnotations",
			map[string]string{
				"podAnnotations.my-annotation": "annotationValue",
			},
			map[string]string{
				"my-annotation": "annotationValue",
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			if testCase.expected == nil {
				s.Nil(deployment.Spec.Template.Annotations, "should be nil")
			} else {
				for key, val := range testCase.expected {
					val1 := deployment.Spec.Template.Annotations[key]
					s.Equal(val, val1, "should contain all the added annotations")
				}
			}
		})
	}
}

func (s *deploymentTemplateTest) TestPodSecurityContext() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected func(psc *corev1.PodSecurityContext)
	}{
		{
			"defaultValues",
			nil,
			func(psc *corev1.PodSecurityContext) {
				s.Equal(true, *psc.RunAsNonRoot, "runAsNonRoot should be set to true")
				s.Equal(int64(1000), *psc.RunAsUser, "runAsUser should be set to 1000")
				s.Equal(int64(1000), *psc.FSGroup, "fsGroup should be set to 1000")
				s.Equal(int64(1000), *psc.RunAsGroup, "runAsGroup should be set to 1000")
				s.Nil(psc.SELinuxOptions)
				s.Nil(psc.WindowsOptions)
				s.Nil(psc.SupplementalGroups)
				s.Nil(psc.Sysctls)
				s.Nil(psc.FSGroupChangePolicy)
				s.Nil(psc.SeccompProfile)
			},
		},
		{
			"nullPodSecurityContext",
			map[string]string{
				"podSecurityContext": "null",
			},
			func(psc *corev1.PodSecurityContext) {
				s.Nil(psc, "podSecurityContext should be set to nil")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.SecurityContext)
		})
	}
}

func (s *deploymentTemplateTest) TestContainerSecurityContext() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected func(sc *corev1.SecurityContext)
	}{
		{
			"defaultValues",
			nil,
			func(sc *corev1.SecurityContext) {
				s.Len(sc.Capabilities.Drop, 1, "capabilities in drop should be set to one")
				s.Equal("ALL", string(sc.Capabilities.Drop[0]), "should drop all capabilities")
				s.Nil(sc.Capabilities.Add)
				s.Nil(sc.Privileged)
				s.Nil(sc.SELinuxOptions)
				s.Nil(sc.WindowsOptions)
				s.Nil(sc.RunAsUser)
				s.Nil(sc.RunAsGroup)
				s.Nil(sc.RunAsNonRoot)
				s.Nil(sc.ReadOnlyRootFilesystem)
				s.Nil(sc.AllowPrivilegeEscalation)
				s.Nil(sc.ProcMount)
				s.Nil(sc.SeccompProfile)
			},
		},
		{
			"nullContainerSecurityContext",
			map[string]string{
				"containerSecurityContext": "null",
			},
			func(sc *corev1.SecurityContext) {
				s.Nil(sc, "containerSecurityContext should be set to nil")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.Containers[0].SecurityContext)
		})
	}
}

func (s *deploymentTemplateTest) TestResources() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected func(res corev1.ResourceRequirements)
	}{
		{
			"defaultValues",
			nil,
			func(res corev1.ResourceRequirements) {
				s.Nil(res.Claims)
				s.Nil(res.Requests)
				s.Nil(res.Limits)
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.Containers[0].Resources)
		})
	}
}

func (s *deploymentTemplateTest) TestNodeSelector() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected func(ns map[string]string)
	}{
		{
			"defaultValues",
			nil,
			func(ns map[string]string) {
				s.Nil(ns, "should be nil")
			},
		},
		{
			"Setting nodeSelector",
			map[string]string{
				"nodeSelector.mySelector": "myNode",
			},
			func(ns map[string]string) {
				value, ok := ns["mySelector"]
				s.True(ok, "should find the key")
				s.Equal("myNode", value, "should be equal")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.NodeSelector)
		})
	}
}

func (s *deploymentTemplateTest) TestTolerations() {
	tolerationString := `[
	{
		"key": "key1",
		"operator": "Equal",
		"value": "value1",
		"effect": "NoSchedule"
	}
]`
	var tolerations []corev1.Toleration

	err := json.Unmarshal([]byte(tolerationString), &tolerations)
	s.NoError(err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected func(tol []corev1.Toleration)
	}{
		{
			"defaultValues",
			nil,
			func(tol []corev1.Toleration) {
				s.Nil(tol, "should be nil")
			},
		},
		{
			"Setting tolerations",
			map[string]string{
				"tolerations": tolerationString,
			},
			func(tol []corev1.Toleration) {
				s.Len(tol, 1, "should have only one toleration")
				s.True(reflect.DeepEqual(tol[0], tolerations[0]), "should be equal")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetJsonValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.Tolerations)
		})
	}
}

func (s *deploymentTemplateTest) TestAffinity() {
	affinityString := `{
	"nodeAffinity": {
		"requiredDuringSchedulingIgnoredDuringExecution": {
			"nodeSelectorTerms": [
				{
					"matchExpressions": [
						{
							"key": "disktype",
							"operator": "In",
							"values": [
								"ssd"
							]
						}
					]
				}
			]
		}
	}
}`
	var affinity corev1.Affinity

	err := json.Unmarshal([]byte(affinityString), &affinity)
	s.NoError(err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected func(aff *corev1.Affinity)
	}{
		{
			"defaultValues",
			nil,
			func(aff *corev1.Affinity) {
				s.Nil(aff, "should be nil")
			},
		},
		{
			"Setting affinity",
			map[string]string{
				"affinity": affinityString,
			},
			func(aff *corev1.Affinity) {
				s.True(reflect.DeepEqual(affinity, *aff), "should be equal")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetJsonValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.Affinity)
		})
	}
}

func (s *deploymentTemplateTest) TestLiveness() {
	livenessProbeString := `{
	"httpGet": {
		"path": "/healthz",
		"port": 8081
	},
	"initialDelaySeconds": 45,
	"timeoutSeconds": 5,
	"periodSeconds": 15
}`
	var liveness corev1.Probe

	err := json.Unmarshal([]byte(livenessProbeString), &liveness)
	s.NoError(err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected func(probe *corev1.Probe)
	}{
		{
			"defaultValues",
			nil,
			func(probe *corev1.Probe) {
				s.True(reflect.DeepEqual(*probe, liveness), "should be equal")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetJsonValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.Containers[0].LivenessProbe)
		})
	}
}

func (s *deploymentTemplateTest) TestReadiness() {
	readinessProbeString := `{
	"httpGet": {
		"path": "/readyz",
		"port": 8081
	},
	"initialDelaySeconds": 30,
	"timeoutSeconds": 5,
	"periodSeconds": 15
}`
	var readiness corev1.Probe

	err := json.Unmarshal([]byte(readinessProbeString), &readiness)
	s.NoError(err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected func(probe *corev1.Probe)
	}{
		{
			"defaultValues",
			nil,
			func(probe *corev1.Probe) {
				s.True(reflect.DeepEqual(*probe, readiness), "should be equal")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetJsonValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.Containers[0].ReadinessProbe)
		})
	}
}

func (s *deploymentTemplateTest) TestContainerPorts() {

	newPorts := `{
	"enabled": true,
	"type": "ClusterIP",
	"ports": {
		"metrics": {
			"port": 8080,
			"targetPort": "metrics",
			"protocol": "TCP"
		},
		"health-probe": {
			"port": 8081,
			"targetPort": "health-probe",
			"protocol": "TCP"
		},
		"broker-grpc": {
			"port": 45000,
			"targetPort": "broker-grpc",
			"protocol": "TCP"
		},
		"myNewPort": {
			"port": 1111,
			"targetPort": "myNewPort",
			"protocol": "UDP"
		}
	}
}`
	testCases := []struct {
		name     string
		values   map[string]string
		expected func(p []corev1.ContainerPort)
	}{
		{
			"defaultValues",
			nil,
			func(p []corev1.ContainerPort) {
				portsJSON := `[
	{
		"name": "broker-grpc",
		"containerPort": 45000,
		"protocol": "TCP"
	},
	{
		"name": "health-probe",
		"containerPort": 8081,
		"protocol": "TCP"
	},
	{
		"name": "metrics",
		"containerPort": 8080,
		"protocol": "TCP"
	}
]`
				var ports []corev1.ContainerPort

				err := json.Unmarshal([]byte(portsJSON), &ports)
				s.NoError(err)
				s.True(reflect.DeepEqual(ports, p), "should be equal")
			},
		},
		{
			"addNewPort",
			map[string]string{
				"service": newPorts,
			},
			func(p []corev1.ContainerPort) {
				portsJSON := `[
	{
		"name": "broker-grpc",
		"containerPort": 45000,
		"protocol": "TCP"
	},
	{
		"name": "health-probe",
		"containerPort": 8081,
		"protocol": "TCP"
	},
	{
		"name": "metrics",
		"containerPort": 8080,
		"protocol": "TCP"
	},
	{
		"name": "myNewPort",
		"containerPort": 1111,
		"protocol": "UDP"
	}
]`
				var ports []corev1.ContainerPort

				err := json.Unmarshal([]byte(portsJSON), &ports)
				s.NoError(err)
				s.True(reflect.DeepEqual(ports, p), "should be equal")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetJsonValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			testCase.expected(deployment.Spec.Template.Spec.Containers[0].Ports)
		})
	}
}

func (s *deploymentTemplateTest) TestReplicaCount() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected int32
	}{
		{
			"defaultValues",
			nil,
			1,
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var deployment appsv1.Deployment
			helm.UnmarshalK8SYaml(subT, output, &deployment)

			s.Equal(testCase.expected, *deployment.Spec.Replicas, "should be equal")
		})
	}
}

// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"
	corev1 "k8s.io/api/core/v1"
)

type grafanaDashboardsTemplateTest struct {
	suite.Suite
	chartPath   string
	releaseName string
	namespace   string
	templates   []string
}

func TestGrafanaDashboardsTemplate(t *testing.T) {
	t.Parallel()

	chartFullPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	suite.Run(t, &grafanaDashboardsTemplateTest{
		Suite:       suite.Suite{},
		chartPath:   chartFullPath,
		releaseName: "k8s-metacollector-test",
		namespace:   "metacollector-test",
		templates:   []string{"templates/collector-dashboard-grafana.yaml"},
	})
}

func (g *grafanaDashboardsTemplateTest) TestCreationDefaultValues() {
	// Render the dashboard configmap and check that it has not been rendered.
	_, err := helm.RenderTemplateE(g.T(), &helm.Options{}, g.chartPath, g.releaseName, g.templates, fmt.Sprintf("--namespace=%s", g.namespace))
	g.Error(err, "should error")
	g.Equal("error while running command: exit status 1; Error: could not find template templates/collector-dashboard-grafana.yaml in chart", err.Error())
}

func (g *grafanaDashboardsTemplateTest) TestConfig() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected func(cm *corev1.ConfigMap)
	}{
		{"dashboard enabled",
			map[string]string{
				"grafana.dashboards.enabled": "true",
			},
			func(cm *corev1.ConfigMap) {
				// Check that the name is the expected one.
				g.Equal("k8s-metacollector-grafana-dashboard", cm.Name)
				// Check the namespace.
				g.Equal(g.namespace, cm.Namespace)
				g.Nil(cm.Annotations)
			},
		},
		{"namespace",
			map[string]string{
				"grafana.dashboards.enabled":                        "true",
				"grafana.dashboards.configMaps.collector.namespace": "custom-namespace",
			},
			func(cm *corev1.ConfigMap) {
				// Check that the name is the expected one.
				g.Equal("k8s-metacollector-grafana-dashboard", cm.Name)
				// Check the namespace.
				g.Equal("custom-namespace", cm.Namespace)
				g.Nil(cm.Annotations)
			},
		},
		{"folder",
			map[string]string{
				"grafana.dashboards.enabled":                     "true",
				"grafana.dashboards.configMaps.collector.folder": "custom-folder",
			},
			func(cm *corev1.ConfigMap) {
				// Check that the name is the expected one.
				g.Equal("k8s-metacollector-grafana-dashboard", cm.Name)
				g.NotNil(cm.Annotations)
				g.Len(cm.Annotations, 2)
				// Check sidecar annotation.
				val, ok := cm.Annotations["k8s-sidecar-target-directory"]
				g.True(ok)
				g.Equal("/tmp/dashboards/custom-folder", val)
				// Check grafana annotation.
				val, ok = cm.Annotations["grafana_dashboard_folder"]
				g.True(ok)
				g.Equal("custom-folder", val)
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		g.Run(testCase.name, func() {
			subT := g.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}

			// Render the configmap and unmarshal it.
			output, err := helm.RenderTemplateE(subT, options, g.chartPath, g.releaseName, g.templates, "--namespace="+g.namespace)
			g.NoError(err, "should succeed")
			var cfgMap corev1.ConfigMap
			helm.UnmarshalK8SYaml(subT, output, &cfgMap)

			// Common checks.
			// Check that it contains the right label.
			g.Contains(cfgMap.Labels, "grafana_dashboard")
			// Check that the dashboard is contained in the config map.
			file, err := os.Open("../../dashboards/k8s-metacollector-dashboard.json")
			g.NoError(err)
			content, err := io.ReadAll(file)
			g.NoError(err)
			cfgData, ok := cfgMap.Data["dashboard.json"]
			g.True(ok)
			g.Equal(strings.TrimRight(string(content), "\n"), cfgData)
			testCase.expected(&cfgMap)
		})
	}
}

// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"path/filepath"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"
	corev1 "k8s.io/api/core/v1"
	rbacv1 "k8s.io/api/rbac/v1"
)

// Type used to implement the testing suite for the service account
// and the related resources: clusterrole, clusterrolebinding.
type serviceAccountTemplateTest struct {
	suite.Suite
	chartPath   string
	releaseName string
	namespace   string
	templates   []string
}

func TestServiceAccountTemplate(t *testing.T) {
	t.Parallel()

	chartFullPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	suite.Run(t, &serviceAccountTemplateTest{
		Suite:       suite.Suite{},
		chartPath:   chartFullPath,
		releaseName: "k8s-metacollector-test",
		namespace:   "metacollector-test",
		templates:   []string{"templates/serviceaccount.yaml"},
	})
}

func (s *serviceAccountTemplateTest) TestSVCAccountResourceCreation() {
	testCases := []struct {
		name   string
		values map[string]string
	}{
		{"defaultValues",
			nil,
		},
		{"changeName",
			map[string]string{
				"serviceAccount.name": "TestName",
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}

			// Render the service account and unmarshal it.
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")
			var svcAccount corev1.ServiceAccount
			helm.UnmarshalK8SYaml(subT, output, &svcAccount)

			// Render the clusterRole and unmarshal it.
			output, err = helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, []string{"templates/clusterrole.yaml"})
			s.NoError(err, "should succeed")
			var clusterRole rbacv1.ClusterRole
			helm.UnmarshalK8SYaml(subT, output, &clusterRole)

			output, err = helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, []string{"templates/clusterrolebinding.yaml"})
			s.NoError(err, "should succeed")
			var clusterRoleBinding rbacv1.ClusterRoleBinding
			helm.UnmarshalK8SYaml(subT, output, &clusterRoleBinding)

			// Check that clusterRoleBinding references the right svc account.
			s.Equal(svcAccount.Name, clusterRoleBinding.Subjects[0].Name, "should be the same")
			s.Equal(svcAccount.Namespace, clusterRoleBinding.Subjects[0].Namespace, "should be the same")

			// Check that clusterRoleBinding references the right clusterRole.
			s.Equal(clusterRole.Name, clusterRoleBinding.RoleRef.Name)

			if testCase.values != nil {
				s.Equal("TestName", svcAccount.Name)
			}
		})
	}
}

func (s *serviceAccountTemplateTest) TestSVCAccountResourceNonCreation() {
	options := &helm.Options{SetValues: map[string]string{"serviceAccount.create": "false"}}
	// Render the service account and check that it has not been created.
	_, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
	s.Error(err, "should error")
	s.Equal("error while running command: exit status 1; Error: could not find template templates/serviceaccount.yaml in chart", err.Error())

	// Render the clusterRole and check that it has not been created.
	_, err = helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, []string{"templates/clusterrole.yaml"})
	s.Error(err, "should error")
	s.Equal("error while running command: exit status 1; Error: could not find template templates/clusterrole.yaml in chart", err.Error())

	_, err = helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, []string{"templates/clusterrolebinding.yaml"})
	s.Error(err, "should error")
	s.Equal("error while running command: exit status 1; Error: could not find template templates/clusterrolebinding.yaml in chart", err.Error())
}

func (s *serviceAccountTemplateTest) TestSVCAccountAnnotations() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected map[string]string
	}{
		{
			"defaultValues",
			nil,
			nil,
		},
		{
			"settingSvcAccountAnnotations",
			map[string]string{
				"serviceAccount.annotations.my-annotation": "annotationValue",
			},
			map[string]string{
				"my-annotation": "annotationValue",
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			// Render the service account and unmarshal it.
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")
			var svcAccount corev1.ServiceAccount
			helm.UnmarshalK8SYaml(subT, output, &svcAccount)

			if testCase.expected == nil {
				s.Nil(svcAccount.Annotations, "should be nil")
			} else {
				for key, val := range testCase.expected {
					val1 := svcAccount.Annotations[key]
					s.Equal(val, val1, "should contain all the added annotations")
				}
			}
		})
	}
}

@ -0,0 +1,93 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"encoding/json"
	"path/filepath"
	"reflect"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"
)

type serviceMonitorTemplateTest struct {
	suite.Suite
	chartPath   string
	releaseName string
	namespace   string
	templates   []string
}

func TestServiceMonitorTemplate(t *testing.T) {
	t.Parallel()

	chartFullPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	suite.Run(t, &serviceMonitorTemplateTest{
		Suite:       suite.Suite{},
		chartPath:   chartFullPath,
		releaseName: "k8s-metacollector-test",
		namespace:   "metacollector-test",
		templates:   []string{"templates/servicemonitor.yaml"},
	})
}

func (s *serviceMonitorTemplateTest) TestCreationDefaultValues() {
	// Render the servicemonitor and check that it has not been rendered.
	_, err := helm.RenderTemplateE(s.T(), &helm.Options{}, s.chartPath, s.releaseName, s.templates)
	s.Error(err, "should error")
	s.Equal("error while running command: exit status 1; Error: could not find template templates/servicemonitor.yaml in chart", err.Error())
}

func (s *serviceMonitorTemplateTest) TestEndpoint() {
	defaultEndpointsJSON := `[
	{
		"port": "metrics",
		"interval": "15s",
		"scrapeTimeout": "10s",
		"honorLabels": true,
		"path": "/metrics",
		"scheme": "http"
	}
]`
	var defaultEndpoints []monitoringv1.Endpoint
	err := json.Unmarshal([]byte(defaultEndpointsJSON), &defaultEndpoints)
	s.NoError(err)

	options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}}
	output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates)

	var svcMonitor monitoringv1.ServiceMonitor
	helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor)

	s.Len(svcMonitor.Spec.Endpoints, 1, "should have only one endpoint")
	s.True(reflect.DeepEqual(svcMonitor.Spec.Endpoints[0], defaultEndpoints[0]))
}

func (s *serviceMonitorTemplateTest) TestNamespaceSelector() {
	options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}}
	output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates)

	var svcMonitor monitoringv1.ServiceMonitor
	helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor)
	s.Len(svcMonitor.Spec.NamespaceSelector.MatchNames, 1)
	s.Equal("default", svcMonitor.Spec.NamespaceSelector.MatchNames[0])
}
@ -0,0 +1,220 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"encoding/json"
	"path/filepath"
	"reflect"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"
	corev1 "k8s.io/api/core/v1"
)

type serviceTemplateTest struct {
	suite.Suite
	chartPath   string
	releaseName string
	namespace   string
	templates   []string
}

func TestServiceTemplate(t *testing.T) {
	t.Parallel()

	chartFullPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	suite.Run(t, &serviceTemplateTest{
		Suite:       suite.Suite{},
		chartPath:   chartFullPath,
		releaseName: "test",
		namespace:   "metacollector-test",
		templates:   []string{"templates/service.yaml"},
	})
}

func (s *serviceTemplateTest) TestServiceCreateFalse() {
	options := &helm.Options{SetValues: map[string]string{"service.create": "false"}}
	// Render the service and check that it has not been rendered.
	_, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
	s.Error(err, "should error")
	s.Equal("error while running command: exit status 1; Error: could not find template templates/service.yaml in chart", err.Error())
}

func (s *serviceTemplateTest) TestServiceType() {
	testCases := []struct {
		name     string
		values   map[string]string
		expected string
	}{
		{"defaultValues",
			nil,
			"ClusterIP",
		},
		{"NodePort",
			map[string]string{
				"service.type": "NodePort",
			},
			"NodePort",
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetValues: testCase.values}

			// Render the service and unmarshal it.
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")
			var svc corev1.Service
			helm.UnmarshalK8SYaml(subT, output, &svc)

			s.Equal(testCase.expected, string(svc.Spec.Type))
		})
	}
}

func (s *serviceTemplateTest) TestServicePorts() {
	newPorts := `{
	"enabled": true,
	"type": "ClusterIP",
	"ports": {
		"metrics": {
			"port": 8080,
			"targetPort": "metrics",
			"protocol": "TCP"
		},
		"health-probe": {
			"port": 8081,
			"targetPort": "health-probe",
			"protocol": "TCP"
		},
		"broker-grpc": {
			"port": 45000,
			"targetPort": "broker-grpc",
			"protocol": "TCP"
		},
		"myNewPort": {
			"port": 1111,
			"targetPort": "myNewPort",
			"protocol": "UDP"
		}
	}
}`
	testCases := []struct {
		name     string
		values   map[string]string
		expected func(p []corev1.ServicePort)
	}{
		{
			"defaultValues",
			nil,
			func(p []corev1.ServicePort) {
				portsJSON := `[
				{
					"name": "broker-grpc",
					"port": 45000,
					"protocol": "TCP",
					"targetPort": "broker-grpc"
				},
				{
					"name": "health-probe",
					"port": 8081,
					"protocol": "TCP",
					"targetPort": "health-probe"
				},
				{
					"name": "metrics",
					"port": 8080,
					"protocol": "TCP",
					"targetPort": "metrics"
				}
			]`
				var ports []corev1.ServicePort

				err := json.Unmarshal([]byte(portsJSON), &ports)
				s.NoError(err)
				s.True(reflect.DeepEqual(ports, p), "should be equal")
			},
		},
		{
			"addNewPort",
			map[string]string{
				"service": newPorts,
			},
			func(p []corev1.ServicePort) {
				portsJSON := `[
				{
					"name": "broker-grpc",
					"port": 45000,
					"protocol": "TCP",
					"targetPort": "broker-grpc"
				},
				{
					"name": "health-probe",
					"port": 8081,
					"protocol": "TCP",
					"targetPort": "health-probe"
				},
				{
					"name": "metrics",
					"port": 8080,
					"protocol": "TCP",
					"targetPort": "metrics"
				},
				{
					"name": "myNewPort",
					"port": 1111,
					"protocol": "UDP",
					"targetPort": "myNewPort"
				}
			]`
				var ports []corev1.ServicePort

				err := json.Unmarshal([]byte(portsJSON), &ports)
				s.NoError(err)
				s.True(reflect.DeepEqual(ports, p), "should be equal")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		s.Run(testCase.name, func() {
			subT := s.T()
			subT.Parallel()

			options := &helm.Options{SetJsonValues: testCase.values}
			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
			s.NoError(err, "should succeed")

			var svc corev1.Service
			helm.UnmarshalK8SYaml(subT, output, &svc)

			testCase.expected(svc.Spec.Ports)
		})
	}
}
202
falco/charts/k8s-metacollector/values.yaml
Normal file
@ -0,0 +1,202 @@
# Default values for k8s-metacollector.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# -- replicaCount is the number of identical copies of the k8s-metacollector.
replicaCount: 1

# -- image is the configuration for the k8s-metacollector image.
image:
  # -- pullPolicy is the policy used to determine when a node should attempt to pull the container image.
  pullPolicy: IfNotPresent
  # -- pullSecrets is a list of secrets containing credentials used when pulling from private/secure registries.
  pullSecrets: []
  # -- registry is the image registry to pull from.
  registry: docker.io
  # -- repository is the image repository to pull from.
  repository: falcosecurity/k8s-metacollector
  # -- tag is the image tag to pull. Overrides the image tag whose default is the chart appVersion.
  tag: ""

# -- nameOverride is the new name used to override the release name used for k8s-metacollector components.
nameOverride: ""
# -- fullnameOverride same as nameOverride but for the full name.
fullnameOverride: ""
# -- namespaceOverride overrides the deployment namespace. It's useful for multi-namespace deployments in combined charts.
namespaceOverride: ""


# -- serviceAccount is the configuration for the service account.
serviceAccount:
  # -- create specifies whether a service account should be created.
  create: true
  # -- annotations to add to the service account.
  annotations: {}
  # -- name is the name of the service account to use.
  # -- If not set and create is true, a name is generated using the full name template.
  name: ""

# -- podAnnotations are custom annotations to be added to the pod.
podAnnotations: {}

# -- podSecurityContext holds the security settings for the pod.
# -- These settings are overridden by the ones specified for the container when there is overlap.
podSecurityContext:
  # -- runAsNonRoot when set to true enforces that the specified container runs as a non-root user.
  runAsNonRoot: true
  # -- runAsUser specifies the user ID (UID) that the containers inside the pod should run as.
  runAsUser: 1000
  # -- runAsGroup specifies the group ID (GID) that the containers inside the pod should run as.
  runAsGroup: 1000
  # -- fsGroup specifies the group ID (GID) that should be used for the volume mounted within a Pod.
  fsGroup: 1000

# -- containerSecurityContext holds the security settings for the container.
containerSecurityContext:
  # -- capabilities fine-grained privileges that can be assigned to processes.
  capabilities:
    # -- drop drops the given set of privileges.
    drop:
      - ALL

# -- service exposes the k8s-metacollector services to be accessed from within the cluster.
# ref: https://kubernetes.io/docs/concepts/services-networking/service/
service:
  # -- create specifies whether a service should be created.
  create: true
  # -- type denotes the service type. Setting it to "ClusterIP" ensures that the services are accessible
  # only from within the cluster.
  type: ClusterIP
  # -- ports denotes all the ports on which the Service will listen.
  ports:
    # -- metrics denotes a listening service named "metrics".
    metrics:
      # -- port is the port on which the Service will listen.
      port: 8080
      # -- targetPort is the port on which the Pod is listening.
      targetPort: "metrics"
      # -- protocol specifies the network protocol that the Service should use for the associated port.
      protocol: "TCP"
    # -- health-probe denotes a listening service named "health-probe".
    health-probe:
      # -- port is the port on which the Service will listen.
      port: 8081
      # -- targetPort is the port on which the Pod is listening.
      targetPort: "health-probe"
      # -- protocol specifies the network protocol that the Service should use for the associated port.
      protocol: "TCP"
    # -- broker-grpc denotes a listening service named "broker-grpc".
    broker-grpc:
      # -- port is the port on which the Service will listen.
      port: 45000
      # -- targetPort is the port on which the Pod is listening.
      targetPort: "broker-grpc"
      # -- protocol specifies the network protocol that the Service should use for the associated port.
      protocol: "TCP"

# -- serviceMonitor holds the configuration for the ServiceMonitor CRD.
# A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should
# discover and scrape metrics from the k8s-metacollector service.
serviceMonitor:
  # -- create specifies whether a ServiceMonitor CRD should be created for a prometheus operator.
  # https://github.com/coreos/prometheus-operator
  # Enable it only if the ServiceMonitor CRD is installed in your cluster.
  create: false
  # -- path at which the metrics are exposed by the k8s-metacollector.
  path: /metrics
  # -- labels set of labels to be applied to the ServiceMonitor resource.
  # If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right
  # label here in order for the ServiceMonitor to be selected for target discovery.
  labels: {}
  # -- interval specifies the time interval at which Prometheus should scrape metrics from the service.
  interval: 15s
  # -- scheme specifies the network protocol used by the metrics endpoint. In this case HTTP.
  scheme: http
  # -- tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when
  # scraping metrics from a service. It allows you to define the details of the TLS connection, such as
  # the CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support
  # TLS configuration for the metrics endpoint.
  tlsConfig: {}
  #   insecureSkipVerify: false
  #   caFile: /path/to/ca.crt
  #   certFile: /path/to/client.crt
  #   keyFile: /path/to/client.key
  # -- scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request.
  # If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for
  # that target.
  scrapeTimeout: 10s
  # -- relabelings configures the relabeling rules to apply to the target's metadata labels.
  relabelings: []
  # -- targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics.
  targetLabels: []

# -- resources defines the computing resources (CPU and memory) that are allocated to the containers running within the Pod.
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

# -- nodeSelector specifies a set of key-value pairs that must match labels assigned to nodes
# for the Pod to be eligible for scheduling on that node.
nodeSelector: {}

# -- tolerations are applied to pods and allow them to be scheduled on nodes with matching taints.
tolerations: []

# -- affinity allows pod placement based on node characteristics, or any other custom labels assigned to nodes.
affinity: {}

# -- healthChecks contains the configuration for liveness and readiness probes.
healthChecks:
  # -- livenessProbe is a diagnostic mechanism used to determine whether a container within a Pod is still running and healthy.
  livenessProbe:
    # -- httpGet specifies that the liveness probe will make an HTTP GET request to check the health of the container.
    httpGet:
      # -- path is the specific endpoint on which the HTTP GET request will be made.
      path: /healthz
      # -- port is the port on which the container exposes the "/healthz" endpoint.
      port: 8081
    # -- initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe.
    initialDelaySeconds: 45
    # -- timeoutSeconds is the number of seconds after which the probe times out.
    timeoutSeconds: 5
    # -- periodSeconds specifies the interval at which the liveness probe will be repeated.
    periodSeconds: 15
  # -- readinessProbe is a mechanism used to determine whether a container within a Pod is ready to serve traffic.
  readinessProbe:
    # -- httpGet specifies that the readiness probe will make an HTTP GET request to check whether the container is ready.
    httpGet:
      # -- path is the specific endpoint on which the HTTP GET request will be made.
      path: /readyz
      # -- port is the port on which the container exposes the "/readyz" endpoint.
      port: 8081
    # -- initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe.
    initialDelaySeconds: 30
    # -- timeoutSeconds is the number of seconds after which the probe times out.
    timeoutSeconds: 5
    # -- periodSeconds specifies the interval at which the readiness probe will be repeated.
    periodSeconds: 15

# -- grafana contains the configuration related to grafana.
grafana:
  # -- dashboards contains configuration for grafana dashboards.
  dashboards:
    # -- enabled specifies whether the dashboards should be deployed.
    enabled: false
    # -- configMaps to be deployed that contain a grafana dashboard.
    configMaps:
      # -- collector contains the configuration for collector's dashboard.
      collector:
        # -- name specifies the name for the configmap.
        name: k8s-metacollector-grafana-dashboard
        # -- namespace specifies the namespace for the configmap.
        namespace: ""
        # -- folder where the dashboard is stored by grafana.
        folder: ""
16
falco/ci/ci-values.yaml
Normal file
@ -0,0 +1,16 @@
# CI values for Falco.
# The following values will bypass the installation of the kernel module
# and disable the kernel space driver.

# disable the kernel space driver
driver:
  enabled: false

# make Falco run in userspace only mode
extra:
  args:
    - --userspace

# enforce /proc mounting since Falco still tries to scan it
mounts:
  enforceProcMount: true
46
falco/templates/NOTES.txt
Normal file
@ -0,0 +1,46 @@
{{- if eq .Values.controller.kind "daemonset" }}
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.
{{printf "\n" }}
{{- end}}
{{- if .Values.integrations }}
WARNING: The following integrations have been deprecated and removed
  - gcscc
  - natsOutput
  - snsOutput
  - pubsubOutput
Consider using falcosidekick (https://github.com/falcosecurity/falcosidekick) as a replacement.
{{- else }}
No further action should be required.
{{- end }}
{{printf "\n" }}

{{- if not .Values.falcosidekick.enabled }}
Tip:
You can easily forward Falco events to Slack, Kafka, AWS Lambda and more with falcosidekick.
Full list of outputs: https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick.
You can enable its deployment with `--set falcosidekick.enabled=true` or in your values.yaml.
See: https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml for configuration values.
{{- end}}


{{- if (has .Values.driver.kind (list "module" "modern-bpf")) -}}
{{- println }}
WARNING(drivers):
{{- printf "\nThe driver kind: \"%s\" is an alias and might be removed in the future.\n" .Values.driver.kind -}}
{{- $driver := "" -}}
{{- if eq .Values.driver.kind "module" -}}
{{- $driver = "kmod" -}}
{{- else if eq .Values.driver.kind "modern-bpf" -}}
{{- $driver = "modern_ebpf" -}}
{{- end -}}
{{- printf "Please use \"%s\" instead." $driver}}
{{- end -}}

{{- if and (not (empty .Values.falco.load_plugins)) (or .Values.falcoctl.artifact.follow.enabled .Values.falcoctl.artifact.install.enabled) }}

WARNING:
{{ printf "It seems you are loading the following plugins %v, please make sure to install them by adding the correct reference to falcoctl.config.artifact.install.refs: %v" .Values.falco.load_plugins .Values.falcoctl.config.artifact.install.refs -}}
{{- end }}
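The deprecation warning above applies the same driver-kind alias mapping described in the 4.0.0 breaking changes (`module` → `kmod`, `modern-bpf` → `modern_ebpf`). A minimal sketch of that mapping in Go; the function name is illustrative and not part of the chart:

```go
package main

import "fmt"

// canonicalDriverKind maps the deprecated driver.kind aliases accepted by the
// chart ("module", "modern-bpf") to the canonical names introduced in 4.0.0.
// Unknown or already-canonical kinds are returned unchanged.
func canonicalDriverKind(kind string) string {
	aliases := map[string]string{
		"module":     "kmod",
		"modern-bpf": "modern_ebpf",
	}
	if canonical, ok := aliases[kind]; ok {
		return canonical
	}
	return kind
}

func main() {
	for _, k := range []string{"module", "modern-bpf", "ebpf"} {
		fmt.Printf("%s -> %s\n", k, canonicalDriverKind(k))
	}
}
```

Note that `ebpf` passes through unchanged, matching the breaking-changes note that only the other two kinds were renamed.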
411
falco/templates/_helpers.tpl
Normal file
@ -0,0 +1,411 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "falco.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If the release name contains the chart name it will be used as the full name.
*/}}
{{- define "falco.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "falco.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Allow the release namespace to be overridden
*/}}
{{- define "falco.namespace" -}}
{{- default .Release.Namespace .Values.namespaceOverride -}}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "falco.labels" -}}
helm.sh/chart: {{ include "falco.chart" . }}
{{ include "falco.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "falco.selectorLabels" -}}
app.kubernetes.io/name: {{ include "falco.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Renders a value that contains a template.
Usage:
{{ include "falco.renderTemplate" ( dict "value" .Values.path.to.the.Value "context" $) }}
*/}}
{{- define "falco.renderTemplate" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (.value | toYaml) .context }}
{{- end }}
{{- end -}}

{{/*
Create the name of the service account to use
*/}}
{{- define "falco.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "falco.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

{{/*
Return the proper Falco image name
*/}}
{{- define "falco.image" -}}
{{- with .Values.image.registry -}}
{{- . }}/
{{- end -}}
{{- .Values.image.repository }}:
{{- .Values.image.tag | default .Chart.AppVersion -}}
{{- end -}}

{{/*
Return the proper Falco driver loader image name
*/}}
{{- define "falco.driverLoader.image" -}}
{{- with .Values.driver.loader.initContainer.image.registry -}}
{{- . }}/
{{- end -}}
{{- .Values.driver.loader.initContainer.image.repository }}:
{{- .Values.driver.loader.initContainer.image.tag | default .Chart.AppVersion -}}
{{- end -}}

{{/*
Return the proper Falcoctl image name
*/}}
{{- define "falcoctl.image" -}}
{{ printf "%s/%s:%s" .Values.falcoctl.image.registry .Values.falcoctl.image.repository .Values.falcoctl.image.tag }}
{{- end -}}

{{/*
Extract the unixSocket's directory path
*/}}
{{- define "falco.unixSocketDir" -}}
{{- if and .Values.falco.grpc.enabled .Values.falco.grpc.bind_address (hasPrefix "unix://" .Values.falco.grpc.bind_address) -}}
{{- .Values.falco.grpc.bind_address | trimPrefix "unix://" | dir -}}
{{- end -}}
{{- end -}}
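The `falco.unixSocketDir` helper trims the `unix://` scheme and takes the directory of the remaining path. A minimal Go sketch of the same logic (the function name is illustrative; Sprig's `dir` behaves like `path.Dir`):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// unixSocketDir mirrors the "falco.unixSocketDir" template helper: when the
// gRPC bind_address uses the unix:// scheme, return the directory holding the
// socket file; otherwise return the empty string.
func unixSocketDir(bindAddress string) string {
	const prefix = "unix://"
	if !strings.HasPrefix(bindAddress, prefix) {
		return ""
	}
	return path.Dir(strings.TrimPrefix(bindAddress, prefix))
}

func main() {
	// For "unix:///run/falco/falco.sock" the helper yields "/run/falco".
	fmt.Println(unixSocketDir("unix:///run/falco/falco.sock"))
}
```

The chart uses the resulting directory to mount the socket's parent directory into the pod.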

{{/*
Return the appropriate apiVersion for rbac.
*/}}
{{- define "rbac.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }}
{{- print "rbac.authorization.k8s.io/v1" -}}
{{- else -}}
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
{{- end -}}
{{- end -}}

{{/*
Build http url for falcosidekick.
*/}}
{{- define "falcosidekick.url" -}}
{{- if not .Values.falco.http_output.url -}}
{{- $falcoName := include "falco.fullname" . -}}
{{- $listenPort := .Values.falcosidekick.listenport | default "2801" -}}
{{- if .Values.falcosidekick.fullfqdn -}}
{{- printf "http://%s-falcosidekick.%s.svc.cluster.local:%s" $falcoName .Release.Namespace $listenPort -}}
{{- else -}}
{{- printf "http://%s-falcosidekick:%s" $falcoName $listenPort -}}
{{- end -}}
{{- else -}}
{{- .Values.falco.http_output.url -}}
{{- end -}}
{{- end -}}
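The URL construction in `falcosidekick.url` can be sketched in Go; the function and parameter names are illustrative, not part of the chart:

```go
package main

import "fmt"

// sidekickURL mirrors the "falcosidekick.url" helper: an explicitly configured
// http_output.url wins; otherwise the URL is derived from the release's full
// name, the namespace (only in fullfqdn mode), and the listen port, which
// defaults to "2801".
func sidekickURL(explicitURL, falcoName, namespace, listenPort string, fullFQDN bool) string {
	if explicitURL != "" {
		return explicitURL
	}
	if listenPort == "" {
		listenPort = "2801"
	}
	if fullFQDN {
		return fmt.Sprintf("http://%s-falcosidekick.%s.svc.cluster.local:%s", falcoName, namespace, listenPort)
	}
	return fmt.Sprintf("http://%s-falcosidekick:%s", falcoName, listenPort)
}

func main() {
	fmt.Println(sidekickURL("", "falco", "falco-ns", "", false))
	fmt.Println(sidekickURL("", "falco", "falco-ns", "", true))
}
```

The short-name form relies on in-namespace DNS resolution, while the `fullfqdn` variant produces the cluster-wide service FQDN.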


{{/*
Set appropriate falco configuration if falcosidekick has been configured.
*/}}
{{- define "falco.falcosidekickConfig" -}}
{{- if .Values.falcosidekick.enabled -}}
{{- $_ := set .Values.falco "json_output" true -}}
{{- $_ := set .Values.falco "json_include_output_property" true -}}
{{- $_ := set .Values.falco.http_output "enabled" true -}}
{{- $_ := set .Values.falco.http_output "url" (include "falcosidekick.url" .) -}}
{{- end -}}
{{- end -}}

{{/*
Get the port from .Values.falco.grpc.bind_address.
*/}}
{{- define "grpc.port" -}}
{{- $error := "unable to extract listenPort from .Values.falco.grpc.bind_address. Make sure it is in the correct format" -}}
{{- if and .Values.falco.grpc.enabled .Values.falco.grpc.bind_address (not (hasPrefix "unix://" .Values.falco.grpc.bind_address)) -}}
{{- $tokens := split ":" .Values.falco.grpc.bind_address -}}
{{- if $tokens._1 -}}
{{- $tokens._1 -}}
{{- else -}}
{{- fail $error -}}
{{- end -}}
{{- else -}}
{{- fail $error -}}
{{- end -}}
{{- end -}}
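The `grpc.port` helper splits the bind address on `:` and fails on unix sockets or a missing port. A minimal Go sketch of the same extraction (the function name is illustrative; the template uses Sprig's `split`, which the `strings.Split` call approximates here):

```go
package main

import (
	"fmt"
	"strings"
)

// grpcPort mirrors the "grpc.port" template helper: it extracts the listen
// port from a host:port bind address, and fails for unix:// sockets or when
// no port component is present.
func grpcPort(bindAddress string) (string, error) {
	errFormat := fmt.Errorf("unable to extract listenPort from %q: make sure it is in the correct format", bindAddress)
	if strings.HasPrefix(bindAddress, "unix://") {
		return "", errFormat
	}
	tokens := strings.Split(bindAddress, ":")
	if len(tokens) < 2 || tokens[1] == "" {
		return "", errFormat
	}
	return tokens[1], nil
}

func main() {
	port, err := grpcPort("0.0.0.0:5060")
	fmt.Println(port, err)
}
```

As in the template, an address without a `:port` suffix is treated as an error rather than defaulting.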
|
||||
{{/*
|
||||
Disable the syscall source if some conditions are met.
|
||||
By default the syscall source is always enabled in falco. If no syscall source is enabled, falco
|
||||
exits. Here we check that no producers for syscalls event has been configured, and if true
|
||||
we just disable the sycall source.
|
||||
*/}}
|
||||
{{- define "falco.configSyscallSource" -}}
|
||||
{{- $userspaceDisabled := true -}}
|
||||
{{- $gvisorDisabled := (ne .Values.driver.kind "gvisor") -}}
|
||||
{{- $driverDisabled := (not .Values.driver.enabled) -}}
|
||||
{{- if or (has "-u" .Values.extra.args) (has "--userspace" .Values.extra.args) -}}
|
||||
{{- $userspaceDisabled = false -}}
|
||||
{{- end -}}
|
||||
{{- if and $driverDisabled $userspaceDisabled $gvisorDisabled }}
|
||||
- --disable-source
|
||||
- syscall
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
We need the falco binary in order to generate the configuration for gVisor. This init container
|
||||
is deployed within the Falco pod when gVisor is enabled. The image is the same as the one of Falco we are
|
||||
deploying and the configuration logic is a bash script passed as argument on the fly. This solution should
|
||||
be temporary and will stay here until we move this logic to the falcoctl tool.
|
||||
*/}}
|
||||
{{- define "falco.gvisor.initContainer" -}}
- name: {{ .Chart.Name }}-gvisor-init
  image: {{ include "falco.image" . }}
  imagePullPolicy: {{ .Values.image.pullPolicy }}
  args:
    - /bin/bash
    - -c
    - |
      set -o errexit
      set -o nounset
      set -o pipefail

      root={{ .Values.driver.gvisor.runsc.root }}
      config={{ .Values.driver.gvisor.runsc.config }}

      echo "* Configuring Falco+gVisor integration..."
      # Check if gVisor is configured on the node.
      echo "* Checking for /host${config} file..."
      if [[ -f /host${config} ]]; then
          echo "* Generating the Falco configuration..."
          /usr/bin/falco --gvisor-generate-config=${root}/falco.sock > /host${root}/pod-init.json
          sed -E -i.orig '/"ignore_missing" : true,/d' /host${root}/pod-init.json
          if [[ -z $(grep pod-init-config /host${config}) ]]; then
              echo "* Updating the runsc config file /host${config}..."
              echo "  pod-init-config = \"${root}/pod-init.json\"" >> /host${config}
          fi
          # The endpoint inside the container is different from the one outside:
          # add "/host" to the endpoint path inside the container.
          echo "* Setting the updated Falco configuration to /gvisor-config/pod-init.json..."
          sed 's/"endpoint" : "\/run/"endpoint" : "\/host\/run/' /host${root}/pod-init.json > /gvisor-config/pod-init.json
      else
          echo "* File /host${config} not found."
          echo "* Please make sure that gVisor is configured on the current node and/or that the runsc root and config file paths are correct."
          exit 1
      fi
      echo "* Falco+gVisor correctly configured."
      exit 0
  volumeMounts:
    - mountPath: /host{{ .Values.driver.gvisor.runsc.path }}
      name: runsc-path
      readOnly: true
    - mountPath: /host{{ .Values.driver.gvisor.runsc.root }}
      name: runsc-root
    - mountPath: /host{{ .Values.driver.gvisor.runsc.config }}
      name: runsc-config
    - mountPath: /gvisor-config
      name: falco-gvisor-config
{{- end -}}

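The init container above reads the runsc paths from the values. A hedged sketch of what those values might look like (the paths below are illustrative assumptions for a containerd-based node, not chart defaults; they must match the actual runsc installation on the host):

```yaml
driver:
  kind: gvisor
  gvisor:
    runsc:
      path: /usr/local/bin                       # directory containing the runsc binary
      root: /run/containerd/runsc                # runsc root directory
      config: /run/containerd/runsc/config.toml  # runsc configuration file
```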
{{- define "falcoctl.initContainer" -}}
- name: falcoctl-artifact-install
  image: {{ include "falcoctl.image" . }}
  imagePullPolicy: {{ .Values.falcoctl.image.pullPolicy }}
  args:
    - artifact
    - install
  {{- with .Values.falcoctl.artifact.install.args }}
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.falcoctl.artifact.install.resources }}
  resources:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  securityContext:
  {{- if .Values.falcoctl.artifact.install.securityContext }}
    {{- toYaml .Values.falcoctl.artifact.install.securityContext | nindent 4 }}
  {{- end }}
  volumeMounts:
    - mountPath: {{ .Values.falcoctl.config.artifact.install.pluginsDir }}
      name: plugins-install-dir
    - mountPath: {{ .Values.falcoctl.config.artifact.install.rulesfilesDir }}
      name: rulesfiles-install-dir
    - mountPath: /etc/falcoctl
      name: falcoctl-config-volume
    {{- with .Values.falcoctl.artifact.install.mounts.volumeMounts }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  env:
  {{- if .Values.falcoctl.artifact.install.env }}
    {{- include "falco.renderTemplate" ( dict "value" .Values.falcoctl.artifact.install.env "context" $) | nindent 4 }}
  {{- end }}
{{- end -}}

{{- define "falcoctl.sidecar" -}}
- name: falcoctl-artifact-follow
  image: {{ include "falcoctl.image" . }}
  imagePullPolicy: {{ .Values.falcoctl.image.pullPolicy }}
  args:
    - artifact
    - follow
  {{- with .Values.falcoctl.artifact.follow.args }}
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.falcoctl.artifact.follow.resources }}
  resources:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  securityContext:
  {{- if .Values.falcoctl.artifact.follow.securityContext }}
    {{- toYaml .Values.falcoctl.artifact.follow.securityContext | nindent 4 }}
  {{- end }}
  volumeMounts:
    - mountPath: {{ .Values.falcoctl.config.artifact.follow.pluginsDir }}
      name: plugins-install-dir
    - mountPath: {{ .Values.falcoctl.config.artifact.follow.rulesfilesDir }}
      name: rulesfiles-install-dir
    - mountPath: /etc/falcoctl
      name: falcoctl-config-volume
    {{- with .Values.falcoctl.artifact.follow.mounts.volumeMounts }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  env:
  {{- if .Values.falcoctl.artifact.follow.env }}
    {{- include "falco.renderTemplate" ( dict "value" .Values.falcoctl.artifact.follow.env "context" $) | nindent 4 }}
  {{- end }}
{{- end -}}

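The two helpers above back the chart's two falcoctl modes: a one-shot init container and a long-running sidecar. A hedged values sketch enabling both (keys follow the values referenced by the helpers):

```yaml
falcoctl:
  artifact:
    install:
      enabled: true   # init container: install rulesfiles/plugins before Falco starts
    follow:
      enabled: true   # sidecar: keep the installed artifacts up to date at runtime
```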
{{/*
Build the configuration for the k8smeta plugin and update the relevant variables.

* The configuration that needs to be built up is the init_config section:

    init_config:
      collectorPort: 0
      collectorHostname: ""
      nodeName: ""

  The falco chart exposes this configuration through two variables:
    * collectors.kubernetes.collectorHostname;
    * collectors.kubernetes.collectorPort;
  If those two variables are not set, then we take their values from the k8smetacollector subchart.
  The hostname is built using the name of the service that exposes the collector endpoints, and the
  port is taken directly from the service's port that exposes the gRPC endpoint.
  We reuse the helpers from the k8smetacollector subchart by passing down the variables. The one
  hardcoded value is the chart name for the k8s-metacollector chart.

* The falcoctl configuration is updated to allow plugin artifacts to be installed. The refs in the
  install section are updated by adding the reference for the k8smeta plugin that needs to be installed.

NOTE: It seems that the named templates run during the validation process, and then again during the
render phase. In our case we are setting global variables that persist across the various phases,
so we need to make the helper idempotent.
*/}}
{{- define "k8smeta.configuration" -}}
{{- if and .Values.collectors.kubernetes.enabled .Values.driver.enabled -}}
    {{- $hostname := "" -}}
    {{- if .Values.collectors.kubernetes.collectorHostname -}}
        {{- $hostname = .Values.collectors.kubernetes.collectorHostname -}}
    {{- else -}}
        {{- $collectorContext := (dict "Release" .Release "Values" (index .Values "k8s-metacollector") "Chart" (dict "Name" "k8s-metacollector")) -}}
        {{- $hostname = printf "%s.%s.svc" (include "k8s-metacollector.fullname" $collectorContext) (include "k8s-metacollector.namespace" $collectorContext) -}}
    {{- end -}}
    {{- $hasConfig := false -}}
    {{- range .Values.falco.plugins -}}
        {{- if eq (get . "name") "k8smeta" -}}
            {{- $hasConfig = true -}}
        {{- end -}}
    {{- end -}}
    {{- if not $hasConfig -}}
        {{- $listenPort := default (index .Values "k8s-metacollector" "service" "ports" "broker-grpc" "port") .Values.collectors.kubernetes.collectorPort -}}
        {{- $listenPort = int $listenPort -}}
        {{- $pluginConfig := dict "name" "k8smeta" "library_path" "libk8smeta.so" "init_config" (dict "collectorHostname" $hostname "collectorPort" $listenPort "nodeName" "${FALCO_K8S_NODE_NAME}") -}}
        {{- $newConfig := append .Values.falco.plugins $pluginConfig -}}
        {{- $_ := set .Values.falco "plugins" ($newConfig | uniq) -}}
        {{- $loadedPlugins := append .Values.falco.load_plugins "k8smeta" -}}
        {{- $_ = set .Values.falco "load_plugins" ($loadedPlugins | uniq) -}}
    {{- end -}}
    {{- $_ := set .Values.falcoctl.config.artifact.install "refs" ((append .Values.falcoctl.config.artifact.install.refs .Values.collectors.kubernetes.pluginRef) | uniq)}}
    {{- $_ = set .Values.falcoctl.config.artifact "allowedTypes" ((append .Values.falcoctl.config.artifact.allowedTypes "plugin") | uniq)}}
{{- end -}}
{{- end -}}

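A usage sketch for the helper above (key names as exposed by this chart; the hostname and port shown are illustrative overrides — when unset, the helper derives them from the k8s-metacollector subchart's service):

```yaml
collectors:
  kubernetes:
    enabled: true
    # Optional overrides for the k8smeta plugin's init_config:
    collectorHostname: k8s-metacollector.metacollector.svc
    collectorPort: 45000
```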
{{/*
Based on the user input, this populates the driver configuration in the falco config map.
*/}}
{{- define "falco.engineConfiguration" -}}
{{- if .Values.driver.enabled -}}
    {{- $supportedDrivers := list "kmod" "ebpf" "modern_ebpf" "gvisor" -}}
    {{- $aliasDrivers := list "module" "modern-bpf" -}}
    {{- if and (not (has .Values.driver.kind $supportedDrivers)) (not (has .Values.driver.kind $aliasDrivers)) -}}
        {{- fail (printf "unsupported driver kind: \"%s\". Supported drivers %s, alias %s" .Values.driver.kind $supportedDrivers $aliasDrivers) -}}
    {{- end -}}
    {{- if or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") -}}
        {{- $kmodConfig := dict "kind" "kmod" "kmod" (dict "buf_size_preset" .Values.driver.kmod.bufSizePreset "drop_failed_exit" .Values.driver.kmod.dropFailedExit) -}}
        {{- $_ := set .Values.falco "engine" $kmodConfig -}}
    {{- else if eq .Values.driver.kind "ebpf" -}}
        {{- $ebpfConfig := dict "kind" "ebpf" "ebpf" (dict "buf_size_preset" .Values.driver.ebpf.bufSizePreset "drop_failed_exit" .Values.driver.ebpf.dropFailedExit "probe" .Values.driver.ebpf.path) -}}
        {{- $_ := set .Values.falco "engine" $ebpfConfig -}}
    {{- else if or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf") -}}
        {{- $ebpfConfig := dict "kind" "modern_ebpf" "modern_ebpf" (dict "buf_size_preset" .Values.driver.modernEbpf.bufSizePreset "drop_failed_exit" .Values.driver.modernEbpf.dropFailedExit "cpus_for_each_buffer" .Values.driver.modernEbpf.cpusForEachBuffer) -}}
        {{- $_ := set .Values.falco "engine" $ebpfConfig -}}
    {{- else if eq .Values.driver.kind "gvisor" -}}
        {{- $root := printf "/host%s/k8s.io" .Values.driver.gvisor.runsc.root -}}
        {{- $gvisorConfig := dict "kind" "gvisor" "gvisor" (dict "config" "/gvisor-config/pod-init.json" "root" $root) -}}
        {{- $_ := set .Values.falco "engine" $gvisorConfig -}}
    {{- end -}}
{{- end -}}
{{- end -}}

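With `driver.kind: modern_ebpf`, the helper above sets `.Values.falco.engine`, which ends up in the rendered falco.yaml along these lines (a sketch; the numeric values are illustrative, taken from whatever `driver.modernEbpf.*` the user supplies, not chart defaults):

```yaml
engine:
  kind: modern_ebpf
  modern_ebpf:
    buf_size_preset: 4
    drop_failed_exit: false
    cpus_for_each_buffer: 2
```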
{{/*
It returns "true" if the driver loader has to be enabled, otherwise "false".
*/}}
{{- define "driverLoader.enabled" -}}
{{- if or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf") (eq .Values.driver.kind "gvisor") (not .Values.driver.enabled) (not .Values.driver.loader.enabled) -}}
false
{{- else -}}
true
{{- end -}}
{{- end -}}
19
falco/templates/certs-secret.yaml
Normal file
@ -0,0 +1,19 @@
{{- with .Values.certs }}
{{- if and .server.key .server.crt .ca.crt }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "falco.fullname" $ }}-certs
  namespace: {{ include "falco.namespace" $ }}
  labels:
    {{- include "falco.labels" $ | nindent 4 }}
type: Opaque
data:
  {{ $key := .server.key }}
  server.key: {{ $key | b64enc | quote }}
  {{ $crt := .server.crt }}
  server.crt: {{ $crt | b64enc | quote }}
  falco.pem: {{ print $key $crt | b64enc | quote }}
  ca.crt: {{ .ca.crt | b64enc | quote }}
{{- end }}
{{- end }}
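The secret above is rendered only when all three PEM values are supplied inline. A hedged values sketch (the PEM bodies are placeholders; the chart also accepts a pre-created secret via `certs.existingSecret` instead):

```yaml
certs:
  ca:
    crt: |
      -----BEGIN CERTIFICATE-----
      # ...CA certificate...
  server:
    key: |
      -----BEGIN PRIVATE KEY-----
      # ...server key...
    crt: |
      -----BEGIN CERTIFICATE-----
      # ...server certificate...
```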
18
falco/templates/client-certs-secret.yaml
Normal file
@ -0,0 +1,18 @@
{{- if and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "falco.fullname" . }}-client-certs
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falco.labels" $ | nindent 4 }}
type: Opaque
data:
  {{ $key := .Values.certs.client.key }}
  client.key: {{ $key | b64enc | quote }}
  {{ $crt := .Values.certs.client.crt }}
  client.crt: {{ $crt | b64enc | quote }}
  falcoclient.pem: {{ print $key $crt | b64enc | quote }}
  ca.crt: {{ .Values.certs.ca.crt | b64enc | quote }}
  ca.pem: {{ .Values.certs.ca.crt | b64enc | quote }}
{{- end }}
13
falco/templates/configmap.yaml
Normal file
@ -0,0 +1,13 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "falco.fullname" . }}
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
data:
  falco.yaml: |-
    {{- include "falco.falcosidekickConfig" . }}
    {{- include "k8smeta.configuration" . -}}
    {{- include "falco.engineConfiguration" . -}}
    {{- toYaml .Values.falco | nindent 4 }}
23
falco/templates/daemonset.yaml
Normal file
@ -0,0 +1,23 @@
{{- if eq .Values.controller.kind "daemonset" }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ include "falco.fullname" . }}
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
  {{- if .Values.controller.annotations }}
  annotations:
    {{ toYaml .Values.controller.annotations | nindent 4 }}
  {{- end }}
spec:
  selector:
    matchLabels:
      {{- include "falco.selectorLabels" . | nindent 6 }}
  template:
    {{- include "falco.podTemplate" . | nindent 4 }}
  {{- with .Values.controller.daemonset.updateStrategy }}
  updateStrategy:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
23
falco/templates/deployment.yaml
Normal file
@ -0,0 +1,23 @@
{{- if eq .Values.controller.kind "deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "falco.fullname" . }}
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
  {{- if .Values.controller.annotations }}
  annotations:
    {{ toYaml .Values.controller.annotations | nindent 4 }}
  {{- end }}
spec:
  replicas: {{ .Values.controller.deployment.replicas }}
  {{- if .Values.controller.deployment.revisionHistoryLimit }}
  revisionHistoryLimit: {{ .Values.controller.deployment.revisionHistoryLimit }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "falco.selectorLabels" . | nindent 6 }}
  template:
    {{- include "falco.podTemplate" . | nindent 4 }}
{{- end }}
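The daemonset and deployment manifests are mutually exclusive, selected by `controller.kind`. A hedged values sketch (values shown are illustrative):

```yaml
controller:
  kind: daemonset   # or "deployment"
  daemonset:
    updateStrategy:
      type: RollingUpdate
  deployment:
    replicas: 1     # consulted only when kind is "deployment"
```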
13
falco/templates/falcoctl-configmap.yaml
Normal file
@ -0,0 +1,13 @@
{{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "falco.fullname" . }}-falcoctl
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
data:
  falcoctl.yaml: |-
    {{- include "k8smeta.configuration" . -}}
    {{- toYaml .Values.falcoctl.config | nindent 4 }}
{{- end }}
16
falco/templates/grpc-service.yaml
Normal file
@ -0,0 +1,16 @@
{{- if and .Values.falco.grpc.enabled .Values.falco.grpc.bind_address (not (hasPrefix "unix://" .Values.falco.grpc.bind_address)) }}
kind: Service
apiVersion: v1
metadata:
  name: {{ include "falco.fullname" . }}-grpc
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
spec:
  clusterIP: None
  selector:
    {{- include "falco.selectorLabels" . | nindent 4 }}
  ports:
    - protocol: TCP
      port: {{ include "grpc.port" . }}
{{- end }}
421
falco/templates/pod-template.tpl
Normal file
@ -0,0 +1,421 @@
{{- define "falco.podTemplate" -}}
metadata:
  name: {{ include "falco.fullname" . }}
  labels:
    {{- include "falco.selectorLabels" . | nindent 4 }}
    {{- with .Values.podLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    checksum/rules: {{ include (print $.Template.BasePath "/rules-configmap.yaml") . | sha256sum }}
    {{- if and .Values.certs (not .Values.certs.existingSecret) }}
    checksum/certs: {{ include (print $.Template.BasePath "/certs-secret.yaml") . | sha256sum }}
    {{- end }}
    {{- with .Values.podAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  serviceAccountName: {{ include "falco.serviceAccountName" . }}
  {{- with .Values.podSecurityContext }}
  securityContext:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- if .Values.driver.enabled }}
  {{- if and (eq .Values.driver.kind "ebpf") .Values.driver.ebpf.hostNetwork }}
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  {{- end }}
  {{- end }}
  {{- if .Values.podPriorityClassName }}
  priorityClassName: {{ .Values.podPriorityClassName }}
  {{- end }}
  {{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.affinity }}
  affinity:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.tolerations }}
  tolerations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.imagePullSecrets }}
  imagePullSecrets:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- if eq .Values.driver.kind "gvisor" }}
  hostNetwork: true
  hostPID: true
  {{- end }}
  containers:
    - name: {{ .Chart.Name }}
      image: {{ include "falco.image" . }}
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      resources:
        {{- toYaml .Values.resources | nindent 8 }}
      securityContext:
        {{- include "falco.securityContext" . | nindent 8 }}
      args:
        - /usr/bin/falco
        {{- include "falco.configSyscallSource" . | indent 8 }}
        {{- with .Values.collectors }}
        {{- if .enabled }}
        {{- if .containerd.enabled }}
        - --cri
        - /run/containerd/containerd.sock
        {{- end }}
        {{- if .crio.enabled }}
        - --cri
        - /run/crio/crio.sock
        {{- end }}
        - -pk
        {{- end }}
        {{- end }}
        {{- with .Values.extra.args }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      env:
        - name: FALCO_K8S_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        {{- if .Values.extra.env }}
        {{- include "falco.renderTemplate" ( dict "value" .Values.extra.env "context" $) | nindent 8 }}
        {{- end }}
      tty: {{ .Values.tty }}
      {{- if .Values.falco.webserver.enabled }}
      livenessProbe:
        initialDelaySeconds: {{ .Values.healthChecks.livenessProbe.initialDelaySeconds }}
        timeoutSeconds: {{ .Values.healthChecks.livenessProbe.timeoutSeconds }}
        periodSeconds: {{ .Values.healthChecks.livenessProbe.periodSeconds }}
        httpGet:
          path: {{ .Values.falco.webserver.k8s_healthz_endpoint }}
          port: {{ .Values.falco.webserver.listen_port }}
          {{- if .Values.falco.webserver.ssl_enabled }}
          scheme: HTTPS
          {{- end }}
      readinessProbe:
        initialDelaySeconds: {{ .Values.healthChecks.readinessProbe.initialDelaySeconds }}
        timeoutSeconds: {{ .Values.healthChecks.readinessProbe.timeoutSeconds }}
        periodSeconds: {{ .Values.healthChecks.readinessProbe.periodSeconds }}
        httpGet:
          path: {{ .Values.falco.webserver.k8s_healthz_endpoint }}
          port: {{ .Values.falco.webserver.listen_port }}
          {{- if .Values.falco.webserver.ssl_enabled }}
          scheme: HTTPS
          {{- end }}
      {{- end }}
      volumeMounts:
        {{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }}
        {{- if has "rulesfile" .Values.falcoctl.config.artifact.allowedTypes }}
        - mountPath: /etc/falco
          name: rulesfiles-install-dir
        {{- end }}
        {{- if has "plugin" .Values.falcoctl.config.artifact.allowedTypes }}
        - mountPath: /usr/share/falco/plugins
          name: plugins-install-dir
        {{- end }}
        {{- end }}
        - mountPath: /root/.falco
          name: root-falco-fs
        {{- if or .Values.driver.enabled .Values.mounts.enforceProcMount }}
        - mountPath: /host/proc
          name: proc-fs
        {{- end }}
        {{- if and .Values.driver.enabled (not .Values.driver.loader.enabled) }}
          readOnly: true
        - mountPath: /host/boot
          name: boot-fs
          readOnly: true
        - mountPath: /host/lib/modules
          name: lib-modules
        - mountPath: /host/usr
          name: usr-fs
          readOnly: true
        {{- end }}
        {{- if .Values.driver.enabled }}
        - mountPath: /host/etc
          name: etc-fs
          readOnly: true
        {{- end -}}
        {{- if and .Values.driver.enabled (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module")) }}
        - mountPath: /host/dev
          name: dev-fs
          readOnly: true
        - name: sys-fs
          mountPath: /sys/module/falco
        {{- end }}
        {{- if and .Values.driver.enabled (and (eq .Values.driver.kind "ebpf") (contains "falco-no-driver" .Values.image.repository)) }}
        - name: debugfs
          mountPath: /sys/kernel/debug
        {{- end }}
        {{- with .Values.collectors }}
        {{- if .enabled }}
        {{- if .docker.enabled }}
        - mountPath: /host/var/run/docker.sock
          name: docker-socket
        {{- end }}
        {{- if .containerd.enabled }}
        - mountPath: /host/run/containerd/containerd.sock
          name: containerd-socket
        {{- end }}
        {{- if .crio.enabled }}
        - mountPath: /host/run/crio/crio.sock
          name: crio-socket
        {{- end }}
        {{- end }}
        {{- end }}
        - mountPath: /etc/falco/falco.yaml
          name: falco-yaml
          subPath: falco.yaml
        {{- if .Values.customRules }}
        - mountPath: /etc/falco/rules.d
          name: rules-volume
        {{- end }}
        {{- if or .Values.certs.existingSecret (and .Values.certs.server.key .Values.certs.server.crt .Values.certs.ca.crt) }}
        - mountPath: /etc/falco/certs
          name: certs-volume
          readOnly: true
        {{- end }}
        {{- if or .Values.certs.existingSecret (and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt) }}
        - mountPath: /etc/falco/certs/client
          name: client-certs-volume
          readOnly: true
        {{- end }}
        {{- include "falco.unixSocketVolumeMount" . | nindent 8 -}}
        {{- with .Values.mounts.volumeMounts }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- if eq .Values.driver.kind "gvisor" }}
        - mountPath: /usr/local/bin/runsc
          name: runsc-path
          readOnly: true
        - mountPath: /host{{ .Values.driver.gvisor.runsc.root }}
          name: runsc-root
        - mountPath: /host{{ .Values.driver.gvisor.runsc.config }}
          name: runsc-config
        - mountPath: /gvisor-config
          name: falco-gvisor-config
        {{- end }}
    {{- if .Values.falcoctl.artifact.follow.enabled }}
    {{- include "falcoctl.sidecar" . | nindent 4 }}
    {{- end }}
  initContainers:
    {{- with .Values.extra.initContainers }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- if eq .Values.driver.kind "gvisor" }}
    {{- include "falco.gvisor.initContainer" . | nindent 4 }}
    {{- end }}
    {{- if eq (include "driverLoader.enabled" .) "true" }}
    {{- include "falco.driverLoader.initContainer" . | nindent 4 }}
    {{- end }}
    {{- if .Values.falcoctl.artifact.install.enabled }}
    {{- include "falcoctl.initContainer" . | nindent 4 }}
    {{- end }}
  volumes:
    {{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }}
    - name: plugins-install-dir
      emptyDir: {}
    - name: rulesfiles-install-dir
      emptyDir: {}
    {{- end }}
    - name: root-falco-fs
      emptyDir: {}
    {{- if .Values.driver.enabled }}
    - name: boot-fs
      hostPath:
        path: /boot
    - name: lib-modules
      hostPath:
        path: /lib/modules
    - name: usr-fs
      hostPath:
        path: /usr
    - name: etc-fs
      hostPath:
        path: /etc
    {{- end }}
    {{- if and .Values.driver.enabled (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module")) }}
    - name: dev-fs
      hostPath:
        path: /dev
    - name: sys-fs
      hostPath:
        path: /sys/module/falco
    {{- end }}
    {{- if and .Values.driver.enabled (and (eq .Values.driver.kind "ebpf") (contains "falco-no-driver" .Values.image.repository)) }}
    - name: debugfs
      hostPath:
        path: /sys/kernel/debug
    {{- end }}
    {{- with .Values.collectors }}
    {{- if .enabled }}
    {{- if .docker.enabled }}
    - name: docker-socket
      hostPath:
        path: {{ .docker.socket }}
    {{- end }}
    {{- if .containerd.enabled }}
    - name: containerd-socket
      hostPath:
        path: {{ .containerd.socket }}
    {{- end }}
    {{- if .crio.enabled }}
    - name: crio-socket
      hostPath:
        path: {{ .crio.socket }}
    {{- end }}
    {{- end }}
    {{- end }}
    {{- if or .Values.driver.enabled .Values.mounts.enforceProcMount }}
    - name: proc-fs
      hostPath:
        path: /proc
    {{- end }}
    {{- if eq .Values.driver.kind "gvisor" }}
    - name: runsc-path
      hostPath:
        path: {{ .Values.driver.gvisor.runsc.path }}/runsc
        type: File
    - name: runsc-root
      hostPath:
        path: {{ .Values.driver.gvisor.runsc.root }}
    - name: runsc-config
      hostPath:
        path: {{ .Values.driver.gvisor.runsc.config }}
        type: File
    - name: falco-gvisor-config
      emptyDir: {}
    {{- end }}
    - name: falcoctl-config-volume
      configMap:
        name: {{ include "falco.fullname" . }}-falcoctl
        items:
          - key: falcoctl.yaml
            path: falcoctl.yaml
    - name: falco-yaml
      configMap:
        name: {{ include "falco.fullname" . }}
        items:
          - key: falco.yaml
            path: falco.yaml
    {{- if .Values.customRules }}
    - name: rules-volume
      configMap:
        name: {{ include "falco.fullname" . }}-rules
    {{- end }}
    {{- if or .Values.certs.existingSecret (and .Values.certs.server.key .Values.certs.server.crt .Values.certs.ca.crt) }}
    - name: certs-volume
      secret:
        {{- if .Values.certs.existingSecret }}
        secretName: {{ .Values.certs.existingSecret }}
        {{- else }}
        secretName: {{ include "falco.fullname" . }}-certs
        {{- end }}
    {{- end }}
    {{- if or .Values.certs.existingSecret (and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt) }}
    - name: client-certs-volume
      secret:
        {{- if .Values.certs.existingClientSecret }}
        secretName: {{ .Values.certs.existingClientSecret }}
        {{- else }}
        secretName: {{ include "falco.fullname" . }}-client-certs
        {{- end }}
    {{- end }}
    {{- include "falco.unixSocketVolume" . | nindent 4 -}}
    {{- with .Values.mounts.volumes }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
{{- end -}}

{{- define "falco.driverLoader.initContainer" -}}
- name: {{ .Chart.Name }}-driver-loader
  image: {{ include "falco.driverLoader.image" . }}
  imagePullPolicy: {{ .Values.driver.loader.initContainer.image.pullPolicy }}
  args:
  {{- with .Values.driver.loader.initContainer.args }}
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- if eq .Values.driver.kind "ebpf" }}
    - ebpf
  {{- end }}
  {{- with .Values.driver.loader.initContainer.resources }}
  resources:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  securityContext:
  {{- if .Values.driver.loader.initContainer.securityContext }}
    {{- toYaml .Values.driver.loader.initContainer.securityContext | nindent 4 }}
  {{- else if (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module")) }}
    privileged: true
  {{- end }}
  volumeMounts:
    - mountPath: /root/.falco
      name: root-falco-fs
    - mountPath: /host/proc
      name: proc-fs
      readOnly: true
    - mountPath: /host/boot
      name: boot-fs
      readOnly: true
    - mountPath: /host/lib/modules
      name: lib-modules
    - mountPath: /host/usr
      name: usr-fs
      readOnly: true
    - mountPath: /host/etc
      name: etc-fs
      readOnly: true
  env:
  {{- if .Values.driver.loader.initContainer.env }}
    {{- include "falco.renderTemplate" ( dict "value" .Values.driver.loader.initContainer.env "context" $) | nindent 4 }}
  {{- end }}
{{- end -}}

{{- define "falco.securityContext" -}}
{{- $securityContext := dict -}}
{{- if .Values.driver.enabled -}}
    {{- if (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module")) -}}
        {{- $securityContext := set $securityContext "privileged" true -}}
    {{- end -}}
    {{- if eq .Values.driver.kind "ebpf" -}}
        {{- if .Values.driver.ebpf.leastPrivileged -}}
            {{- $securityContext := set $securityContext "capabilities" (dict "add" (list "SYS_ADMIN" "SYS_RESOURCE" "SYS_PTRACE")) -}}
        {{- else -}}
            {{- $securityContext := set $securityContext "privileged" true -}}
        {{- end -}}
    {{- end -}}
    {{- if (or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf")) -}}
        {{- if .Values.driver.modernEbpf.leastPrivileged -}}
            {{- $securityContext := set $securityContext "capabilities" (dict "add" (list "BPF" "SYS_RESOURCE" "PERFMON" "SYS_PTRACE")) -}}
        {{- else -}}
            {{- $securityContext := set $securityContext "privileged" true -}}
        {{- end -}}
    {{- end -}}
{{- end -}}
{{- if not (empty (.Values.containerSecurityContext)) -}}
    {{- toYaml .Values.containerSecurityContext }}
{{- else -}}
    {{- toYaml $securityContext }}
{{- end -}}
{{- end -}}

{{- define "falco.unixSocketVolumeMount" -}}
{{- if and .Values.falco.grpc.enabled .Values.falco.grpc.bind_address (hasPrefix "unix://" .Values.falco.grpc.bind_address) }}
- mountPath: {{ include "falco.unixSocketDir" . }}
  name: grpc-socket-dir
{{- end }}
{{- end -}}

{{- define "falco.unixSocketVolume" -}}
{{- if and .Values.falco.grpc.enabled .Values.falco.grpc.bind_address (hasPrefix "unix://" .Values.falco.grpc.bind_address) }}
- name: grpc-socket-dir
  hostPath:
    path: {{ include "falco.unixSocketDir" . }}
{{- end }}
{{- end -}}
14
falco/templates/rules-configmap.yaml
Normal file
@ -0,0 +1,14 @@
{{- if .Values.customRules }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "falco.fullname" . }}-rules
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
data:
  {{- range $file, $content := .Values.customRules }}
  {{ $file }}: |-
{{ $content | indent 4 }}
  {{- end }}
{{- end }}
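A hedged values sketch showing how each `customRules` entry becomes a file in the rules ConfigMap (the rule itself is illustrative, not part of the chart):

```yaml
customRules:
  custom-rules.yaml: |-
    - rule: Shell spawned in container
      desc: Detect a shell started inside a container
      condition: container.id != host and proc.name = bash
      output: Shell spawned (user=%user.name container=%container.id)
      priority: WARNING
```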
43
falco/templates/securitycontextconstraints.yaml
Normal file
@ -0,0 +1,43 @@
{{- if and .Values.scc.create (.Capabilities.APIVersions.Has "security.openshift.io/v1") }}
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: |
      This provides the minimum requirements for Falco to run in OpenShift.
  name: {{ include "falco.serviceAccountName" . }}
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: true
allowHostPID: true
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: []
allowedUnsafeSysctls: []
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
priority: 0
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
  - '*'
supplementalGroups:
  type: RunAsAny
users:
  - system:serviceaccount:{{ include "falco.namespace" . }}:{{ include "falco.serviceAccountName" . }}
volumes:
  - hostPath
  - emptyDir
  - secret
  - configMap
{{- end }}
14
falco/templates/serviceaccount.yaml
Normal file
@ -0,0 +1,14 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "falco.serviceAccountName" . }}
  namespace: {{ include "falco.namespace" . }}
  labels:
    {{- include "falco.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
18
falco/templates/services.yaml
Normal file
@ -0,0 +1,18 @@
{{- with $dot := . }}
{{- range $service := $dot.Values.services }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "falco.fullname" $dot }}-{{ $service.name }}
  namespace: {{ include "falco.namespace" $dot }}
  labels:
    {{- include "falco.labels" $dot | nindent 4 }}
spec:
  {{- with $service }}
  {{- omit . "name" "selector" | toYaml | nindent 2 }}
  {{- end}}
  selector:
    {{- include "falco.selectorLabels" $dot | nindent 4 }}
{{- end }}
{{- end }}
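As an illustration, each entry under `.Values.services` renders as its own Service; everything except `name` and `selector` is copied verbatim into `spec`. A hypothetical values fragment:

```yaml
services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765
        nodePort: 30007
        protocol: TCP
```

This would produce a NodePort Service named `<release>-falco-k8saudit-webhook` whose selector targets the Falco pods.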
22
falco/tests/unit/consts.go
Normal file
@ -0,0 +1,22 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

const (
	releaseName                  = "rendered-resources"
	patternK8sMetacollectorFiles = `# Source: falco/charts/k8s-metacollector/templates/([^\n]+)`
	k8sMetaPluginName            = "k8smeta"
)
17
falco/tests/unit/doc.go
Normal file
@ -0,0 +1,17 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package unit contains the unit tests for the Falco chart.
package unit
302
falco/tests/unit/driverConfig_test.go
Normal file
@ -0,0 +1,302 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"fmt"
	"path/filepath"
	"strings"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	corev1 "k8s.io/api/core/v1"
)
func TestDriverConfigInFalcoConfig(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected func(t *testing.T, config any)
	}{
		{
			"defaultValues",
			nil,
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
				require.NoError(t, err)
				require.Equal(t, "kmod", kind)
				require.Equal(t, float64(4), bufSizePreset)
				require.False(t, dropFailedExit)
			},
		},
		{
			"kind=kmod",
			map[string]string{
				"driver.kind": "kmod",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
				require.NoError(t, err)
				require.Equal(t, "kmod", kind)
				require.Equal(t, float64(4), bufSizePreset)
				require.False(t, dropFailedExit)
			},
		},
		{
			"kind=module(alias)",
			map[string]string{
				"driver.kind": "module",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
				require.NoError(t, err)
				require.Equal(t, "kmod", kind)
				require.Equal(t, float64(4), bufSizePreset)
				require.False(t, dropFailedExit)
			},
		},
		{
			"kmod=config",
			map[string]string{
				"driver.kmod.bufSizePreset":  "6",
				"driver.kmod.dropFailedExit": "true",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
				require.NoError(t, err)
				require.Equal(t, "kmod", kind)
				require.Equal(t, float64(6), bufSizePreset)
				require.True(t, dropFailedExit)
			},
		},
		{
			"kind=ebpf",
			map[string]string{
				"driver.kind":                "ebpf",
				"driver.ebpf.bufSizePreset":  "6",
				"driver.ebpf.dropFailedExit": "true",
				"driver.ebpf.path":           "testing/Path/ebpf",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
				require.NoError(t, err)
				require.Equal(t, "ebpf", kind)
				require.Equal(t, "testing/Path/ebpf", path)
				require.Equal(t, float64(6), bufSizePreset)
				require.True(t, dropFailedExit)
			},
		},
		{
			"ebpf=config",
			map[string]string{
				"driver.kind": "ebpf",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
				require.NoError(t, err)
				require.Equal(t, "ebpf", kind)
				require.Equal(t, "${HOME}/.falco/falco-bpf.o", path)
				require.Equal(t, float64(4), bufSizePreset)
				require.False(t, dropFailedExit)
			},
		},
		{
			"kind=modern_ebpf",
			map[string]string{
				"driver.kind": "modern_ebpf",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
				require.NoError(t, err)
				require.Equal(t, "modern_ebpf", kind)
				require.Equal(t, float64(4), bufSizePreset)
				require.Equal(t, float64(2), cpusForEachBuffer)
				require.False(t, dropFailedExit)
			},
		},
		{
			"kind=modern-bpf(alias)",
			map[string]string{
				"driver.kind": "modern-bpf",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
				require.NoError(t, err)
				require.Equal(t, "modern_ebpf", kind)
				require.Equal(t, float64(4), bufSizePreset)
				require.Equal(t, float64(2), cpusForEachBuffer)
				require.False(t, dropFailedExit)
			},
		},
		{
			"modernEbpf=config",
			map[string]string{
				"driver.kind":                         "modern-bpf",
				"driver.modernEbpf.bufSizePreset":     "6",
				"driver.modernEbpf.dropFailedExit":    "true",
				"driver.modernEbpf.cpusForEachBuffer": "8",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
				require.NoError(t, err)
				require.Equal(t, "modern_ebpf", kind)
				require.Equal(t, float64(6), bufSizePreset)
				require.Equal(t, float64(8), cpusForEachBuffer)
				require.True(t, dropFailedExit)
			},
		},
		{
			"kind=gvisor",
			map[string]string{
				"driver.kind": "gvisor",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, config, root, err := getGvisorConfig(config)
				require.NoError(t, err)
				require.Equal(t, "gvisor", kind)
				require.Equal(t, "/gvisor-config/pod-init.json", config)
				require.Equal(t, "/host/run/containerd/runsc/k8s.io", root)
			},
		},
		{
			"gvisor=config",
			map[string]string{
				"driver.kind":              "gvisor",
				"driver.gvisor.runsc.root": "/my/root/test",
			},
			func(t *testing.T, config any) {
				require.Len(t, config, 2, "should have only two items")
				kind, config, root, err := getGvisorConfig(config)
				require.NoError(t, err)
				require.Equal(t, "gvisor", kind)
				require.Equal(t, "/gvisor-config/pod-init.json", config)
				require.Equal(t, "/host/my/root/test/k8s.io", root)
			},
		},
	}
	for _, testCase := range testCases {
		testCase := testCase

		t.Run(testCase.name, func(t *testing.T) {
			t.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})

			var cm corev1.ConfigMap
			helm.UnmarshalK8SYaml(t, output, &cm)
			var config map[string]interface{}

			helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
			engine := config["engine"]
			testCase.expected(t, engine)
		})
	}
}
func TestDriverConfigWithUnsupportedDriver(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	values := map[string]string{
		"driver.kind": "notExisting",
	}
	options := &helm.Options{SetValues: values}
	_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
	require.Error(t, err)
	require.True(t, strings.Contains(err.Error(), "unsupported driver kind: \"notExisting\". Supported drivers [kmod ebpf modern_ebpf gvisor], alias [module modern-bpf]"))
}
func getKmodConfig(config interface{}) (kind string, bufSizePreset float64, dropFailedExit bool, err error) {
	configMap, ok := config.(map[string]interface{})
	if !ok {
		err = fmt.Errorf("can't assert type of config")
		return
	}

	kind = configMap["kind"].(string)
	kmod := configMap["kmod"].(map[string]interface{})
	bufSizePreset = kmod["buf_size_preset"].(float64)
	dropFailedExit = kmod["drop_failed_exit"].(bool)

	return
}

func getEbpfConfig(config interface{}) (kind, path string, bufSizePreset float64, dropFailedExit bool, err error) {
	configMap, ok := config.(map[string]interface{})
	if !ok {
		err = fmt.Errorf("can't assert type of config")
		return
	}

	kind = configMap["kind"].(string)
	ebpf := configMap["ebpf"].(map[string]interface{})
	bufSizePreset = ebpf["buf_size_preset"].(float64)
	dropFailedExit = ebpf["drop_failed_exit"].(bool)
	path = ebpf["probe"].(string)

	return
}

func getModernEbpfConfig(config interface{}) (kind string, bufSizePreset, cpusForEachBuffer float64, dropFailedExit bool, err error) {
	configMap, ok := config.(map[string]interface{})
	if !ok {
		err = fmt.Errorf("can't assert type of config")
		return
	}

	kind = configMap["kind"].(string)
	modernEbpf := configMap["modern_ebpf"].(map[string]interface{})
	bufSizePreset = modernEbpf["buf_size_preset"].(float64)
	dropFailedExit = modernEbpf["drop_failed_exit"].(bool)
	cpusForEachBuffer = modernEbpf["cpus_for_each_buffer"].(float64)

	return
}

func getGvisorConfig(cfg interface{}) (kind, config, root string, err error) {
	configMap, ok := cfg.(map[string]interface{})
	if !ok {
		err = fmt.Errorf("can't assert type of config")
		return
	}

	kind = configMap["kind"].(string)
	gvisor := configMap["gvisor"].(map[string]interface{})
	config = gvisor["config"].(string)
	root = gvisor["root"].(string)

	return
}
131
falco/tests/unit/driverLoader_test.go
Normal file
@ -0,0 +1,131 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"path/filepath"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	appsv1 "k8s.io/api/apps/v1"
)
// TestDriverLoaderEnabled tests the helper that enables the driver loader based on the configuration.
func TestDriverLoaderEnabled(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected bool
	}{
		{
			"defaultValues",
			nil,
			true,
		},
		{
			"driver.kind=modern-bpf",
			map[string]string{
				"driver.kind": "modern-bpf",
			},
			false,
		},
		{
			"driver.kind=modern_ebpf",
			map[string]string{
				"driver.kind": "modern_ebpf",
			},
			false,
		},
		{
			"driver.kind=gvisor",
			map[string]string{
				"driver.kind": "gvisor",
			},
			false,
		},
		{
			"driver.disabled",
			map[string]string{
				"driver.enabled": "false",
			},
			false,
		},
		{
			"driver.loader.disabled",
			map[string]string{
				"driver.loader.enabled": "false",
			},
			false,
		},
		{
			"driver.kind=kmod",
			map[string]string{
				"driver.kind": "kmod",
			},
			true,
		},
		{
			"driver.kind=module",
			map[string]string{
				"driver.kind": "module",
			},
			true,
		},
		{
			"driver.kind=ebpf",
			map[string]string{
				"driver.kind": "ebpf",
			},
			true,
		},
		{
			"driver.kind=kmod&driver.loader.disabled",
			map[string]string{
				"driver.kind":           "kmod",
				"driver.loader.enabled": "false",
			},
			false,
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		t.Run(testCase.name, func(t *testing.T) {
			t.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/daemonset.yaml"})

			var ds appsv1.DaemonSet
			helm.UnmarshalK8SYaml(t, output, &ds)
			found := false
			for i := range ds.Spec.Template.Spec.InitContainers {
				if ds.Spec.Template.Spec.InitContainers[i].Name == "falco-driver-loader" {
					found = true
				}
			}

			require.Equal(t, testCase.expected, found)
		})
	}
}
520
falco/tests/unit/k8smetacollectorDependency_test.go
Normal file
@ -0,0 +1,520 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package unit

import (
	"encoding/json"
	"fmt"
	"path/filepath"
	"regexp"
	"slices"
	"strings"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	corev1 "k8s.io/api/core/v1"
)

const chartPath = "../../"
// Using the default values we want to test that all the expected resources for the k8s-metacollector are rendered.
func TestRenderedResourcesWithDefaultValues(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	options := &helm.Options{}
	// Template the chart using the default values.yaml file.
	output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
	require.NoError(t, err)

	// Extract all rendered files from the output.
	re := regexp.MustCompile(patternK8sMetacollectorFiles)
	matches := re.FindAllStringSubmatch(output, -1)
	require.Len(t, matches, 0)
}
func TestRenderedResourcesWhenNotEnabled(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	// Template files that we expect to be rendered.
	templateFiles := []string{
		"clusterrole.yaml",
		"clusterrolebinding.yaml",
		"deployment.yaml",
		"service.yaml",
		"serviceaccount.yaml",
	}

	options := &helm.Options{SetValues: map[string]string{
		"collectors.kubernetes.enabled": "true",
	}}

	// Template the chart using the default values.yaml file.
	output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
	require.NoError(t, err)

	// Extract all rendered files from the output.
	re := regexp.MustCompile(patternK8sMetacollectorFiles)
	matches := re.FindAllStringSubmatch(output, -1)

	var renderedTemplates []string
	for _, match := range matches {
		// Filter out test templates.
		if !strings.Contains(match[1], "test-") {
			renderedTemplates = append(renderedTemplates, match[1])
		}
	}

	// Assert that the rendered resources are equal to the expected ones.
	require.Equal(t, len(renderedTemplates), len(templateFiles), "should be equal")

	for _, rendered := range renderedTemplates {
		require.True(t, slices.Contains(templateFiles, rendered), "template files should contain all the rendered files")
	}
}
func TestPluginConfigurationInFalcoConfig(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected func(t *testing.T, config any)
	}{
		{
			"defaultValues",
			nil,
			func(t *testing.T, config any) {
				plugin := config.(map[string]interface{})
				// Get init config.
				initConfig, ok := plugin["init_config"]
				require.True(t, ok)
				initConfigMap := initConfig.(map[string]interface{})
				// Check that the collector port is correctly set.
				port := initConfigMap["collectorPort"]
				require.Equal(t, float64(45000), port.(float64))
				// Check that the collector nodeName is correctly set.
				nodeName := initConfigMap["nodeName"]
				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
				// Check that the collector hostname is correctly set.
				hostName := initConfigMap["collectorHostname"]
				require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", releaseName), hostName.(string))

				// Check that the library path is set.
				libPath := plugin["library_path"]
				require.Equal(t, "libk8smeta.so", libPath)
			},
		},
		{
			"overrideK8s-metacollectorNamespace",
			map[string]string{
				"k8s-metacollector.namespaceOverride": "test",
			},
			func(t *testing.T, config any) {
				plugin := config.(map[string]interface{})
				// Get init config.
				initConfig, ok := plugin["init_config"]
				require.True(t, ok)
				initConfigMap := initConfig.(map[string]interface{})
				// Check that the collector port is correctly set.
				port := initConfigMap["collectorPort"]
				require.Equal(t, float64(45000), port.(float64))
				// Check that the collector nodeName is correctly set.
				nodeName := initConfigMap["nodeName"]
				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
				// Check that the collector hostname is correctly set.
				hostName := initConfigMap["collectorHostname"]
				require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.test.svc", releaseName), hostName.(string))

				// Check that the library path is set.
				libPath := plugin["library_path"]
				require.Equal(t, "libk8smeta.so", libPath)
			},
		},
		{
			"overrideK8s-metacollectorName",
			map[string]string{
				"k8s-metacollector.fullnameOverride": "collector",
			},
			func(t *testing.T, config any) {
				plugin := config.(map[string]interface{})
				// Get init config.
				initConfig, ok := plugin["init_config"]
				require.True(t, ok)
				initConfigMap := initConfig.(map[string]interface{})
				// Check that the collector port is correctly set.
				port := initConfigMap["collectorPort"]
				require.Equal(t, float64(45000), port.(float64))
				// Check that the collector nodeName is correctly set.
				nodeName := initConfigMap["nodeName"]
				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
				// Check that the collector hostname is correctly set.
				hostName := initConfigMap["collectorHostname"]
				require.Equal(t, "collector.default.svc", hostName.(string))

				// Check that the library path is set.
				libPath := plugin["library_path"]
				require.Equal(t, "libk8smeta.so", libPath)
			},
		},
		{
			"overrideK8s-metacollectorNamespaceAndName",
			map[string]string{
				"k8s-metacollector.namespaceOverride": "test",
				"k8s-metacollector.fullnameOverride":  "collector",
			},
			func(t *testing.T, config any) {
				plugin := config.(map[string]interface{})
				// Get init config.
				initConfig, ok := plugin["init_config"]
				require.True(t, ok)
				initConfigMap := initConfig.(map[string]interface{})
				// Check that the collector port is correctly set.
				port := initConfigMap["collectorPort"]
				require.Equal(t, float64(45000), port.(float64))
				// Check that the collector nodeName is correctly set.
				nodeName := initConfigMap["nodeName"]
				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
				// Check that the collector hostname is correctly set.
				hostName := initConfigMap["collectorHostname"]
				require.Equal(t, "collector.test.svc", hostName.(string))

				// Check that the library path is set.
				libPath := plugin["library_path"]
				require.Equal(t, "libk8smeta.so", libPath)
			},
		},
		{
			"set CollectorHostname",
			map[string]string{
				"collectors.kubernetes.collectorHostname": "test",
			},
			func(t *testing.T, config any) {
				plugin := config.(map[string]interface{})
				// Get init config.
				initConfig, ok := plugin["init_config"]
				require.True(t, ok)
				initConfigMap := initConfig.(map[string]interface{})
				// Check that the collector port is correctly set.
				port := initConfigMap["collectorPort"]
				require.Equal(t, float64(45000), port.(float64))
				// Check that the collector nodeName is correctly set.
				nodeName := initConfigMap["nodeName"]
				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
				// Check that the collector hostname is correctly set.
				hostName := initConfigMap["collectorHostname"]
				require.Equal(t, "test", hostName.(string))

				// Check that the library path is set.
				libPath := plugin["library_path"]
				require.Equal(t, "libk8smeta.so", libPath)
			},
		},
		{
			"set CollectorHostname and namespace name",
			map[string]string{
				"collectors.kubernetes.collectorHostname": "test-with-override",
				"k8s-metacollector.namespaceOverride":     "test",
				"k8s-metacollector.fullnameOverride":      "collector",
			},
			func(t *testing.T, config any) {
				plugin := config.(map[string]interface{})
				// Get init config.
				initConfig, ok := plugin["init_config"]
				require.True(t, ok)
				initConfigMap := initConfig.(map[string]interface{})
				// Check that the collector port is correctly set.
				port := initConfigMap["collectorPort"]
				require.Equal(t, float64(45000), port.(float64))
				// Check that the collector nodeName is correctly set.
				nodeName := initConfigMap["nodeName"]
				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
				// Check that the collector hostname is correctly set.
				hostName := initConfigMap["collectorHostname"]
				require.Equal(t, "test-with-override", hostName.(string))

				// Check that the library path is set.
				libPath := plugin["library_path"]
				require.Equal(t, "libk8smeta.so", libPath)
			},
		},
		{
			"set collectorPort",
			map[string]string{
				"collectors.kubernetes.collectorPort": "8888",
			},
			func(t *testing.T, config any) {
				plugin := config.(map[string]interface{})
				// Get init config.
				initConfig, ok := plugin["init_config"]
				require.True(t, ok)
				initConfigMap := initConfig.(map[string]interface{})
				// Check that the collector port is correctly set.
				port := initConfigMap["collectorPort"]
				require.Equal(t, float64(8888), port.(float64))
				// Check that the collector nodeName is correctly set.
				nodeName := initConfigMap["nodeName"]
				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
				// Check that the collector hostname is correctly set.
				hostName := initConfigMap["collectorHostname"]
				require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", releaseName), hostName.(string))

				// Check that the library path is set.
				libPath := plugin["library_path"]
				require.Equal(t, "libk8smeta.so", libPath)
			},
		},
		{
			"driver disabled",
			map[string]string{
				"driver.enabled": "false",
			},
			func(t *testing.T, config any) {
				require.Nil(t, config)
			},
		},
	}
	for _, testCase := range testCases {
		testCase := testCase

		t.Run(testCase.name, func(t *testing.T) {
			t.Parallel()

			// Enable the collector.
			if testCase.values != nil {
				testCase.values["collectors.kubernetes.enabled"] = "true"
			} else {
				testCase.values = map[string]string{"collectors.kubernetes.enabled": "true"}
			}

			options := &helm.Options{SetValues: testCase.values}
			output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})

			var cm corev1.ConfigMap
			helm.UnmarshalK8SYaml(t, output, &cm)
			var config map[string]interface{}

			helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
			plugins := config["plugins"]
			pluginsArray := plugins.([]interface{})
			found := false
			// Find the k8smeta plugin configuration.
			for _, plugin := range pluginsArray {
				if name, ok := plugin.(map[string]interface{})["name"]; ok && name == k8sMetaPluginName {
					testCase.expected(t, plugin)
					found = true
				}
			}
			if found {
				// Check that the plugin has been added to the ones that need to be loaded.
				loadplugins := config["load_plugins"]
				require.True(t, slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName))
			} else {
				testCase.expected(t, nil)
				loadplugins := config["load_plugins"]
				require.True(t, !slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName))
			}
		})
	}
}
// Test that the helper does not overwrite user's configuration.
func TestPluginConfigurationUniqueEntries(t *testing.T) {
	t.Parallel()

	pluginsJSON := `[
 {
  "init_config": null,
  "library_path": "libk8saudit.so",
  "name": "k8saudit",
  "open_params": "http://:9765/k8s-audit"
 },
 {
  "library_path": "libcloudtrail.so",
  "name": "cloudtrail"
 },
 {
  "init_config": "",
  "library_path": "libjson.so",
  "name": "json"
 },
 {
  "init_config": {
   "collectorHostname": "rendered-resources-k8s-metacollector.default.svc",
   "collectorPort": 45000,
   "nodeName": "${FALCO_K8S_NODE_NAME}"
  },
  "library_path": "libk8smeta.so",
  "name": "k8smeta"
 }
]`

	loadPluginsJSON := `[
 "k8smeta",
 "k8saudit"
]`
	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	options := &helm.Options{SetJsonValues: map[string]string{
		"falco.plugins":      pluginsJSON,
		"falco.load_plugins": loadPluginsJSON,
	}, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}}
	output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})

	var cm corev1.ConfigMap
	helm.UnmarshalK8SYaml(t, output, &cm)
	var config map[string]interface{}

	helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
	plugins := config["plugins"]

	out, err := json.MarshalIndent(plugins, "", " ")
	require.NoError(t, err)
	require.Equal(t, pluginsJSON, string(out))
	pluginsArray := plugins.([]interface{})
	// Find the k8smeta plugin configuration.
	numConfigK8smeta := 0
	for _, plugin := range pluginsArray {
		if name, ok := plugin.(map[string]interface{})["name"]; ok && name == k8sMetaPluginName {
			numConfigK8smeta++
		}
	}

	require.Equal(t, 1, numConfigK8smeta)

	// Check that the plugin has been added to the ones that need to be loaded.
	loadplugins := config["load_plugins"]
	require.Len(t, loadplugins.([]interface{}), 2)
	require.True(t, slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName))
}

// Test the artifact references rendered in the falcoctl configuration.
func TestFalcoctlRefs(t *testing.T) {
	t.Parallel()

	pluginsJSON := `[
 {
  "init_config": null,
  "library_path": "libk8saudit.so",
  "name": "k8saudit",
  "open_params": "http://:9765/k8s-audit"
 },
 {
  "library_path": "libcloudtrail.so",
  "name": "cloudtrail"
 },
 {
  "init_config": "",
  "library_path": "libjson.so",
  "name": "json"
 },
 {
  "init_config": {
   "collectorHostname": "rendered-resources-k8s-metacollector.default.svc",
   "collectorPort": 45000,
   "nodeName": "${FALCO_K8S_NODE_NAME}"
  },
  "library_path": "libk8smeta.so",
  "name": "k8smeta"
 }
]`

	testFunc := func(t *testing.T, config any) {
		// Get artifact configuration map.
		configMap := config.(map[string]interface{})
		artifactConfig := (configMap["artifact"]).(map[string]interface{})
		// Test allowed types.
		allowedTypes := artifactConfig["allowedTypes"]
		require.Len(t, allowedTypes, 2)
		require.True(t, slices.Contains(allowedTypes.([]interface{}), "plugin"))
		require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
		// Test plugin reference.
		refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
		require.Len(t, refs, 2)
		require.True(t, slices.Contains(refs, "falco-rules:3"))
		require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0"))
	}

	testCases := []struct {
		name       string
		valuesJSON map[string]string
		expected   func(t *testing.T, config any)
	}{
		{
			"defaultValues",
			nil,
			testFunc,
		},
		{
			"setPluginConfiguration",
			map[string]string{
				"falco.plugins": pluginsJSON,
			},
			testFunc,
		},
		{
			"driver disabled",
			map[string]string{
				"driver.enabled": "false",
			},
			func(t *testing.T, config any) {
				// Get artifact configuration map.
				configMap := config.(map[string]interface{})
				artifactConfig := (configMap["artifact"]).(map[string]interface{})
				// Test plugin reference.
				refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
				require.True(t, !slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0"))
			},
		},
	}

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	for _, testCase := range testCases {
		testCase := testCase

		t.Run(testCase.name, func(t *testing.T) {
			t.Parallel()

			options := &helm.Options{SetJsonValues: testCase.valuesJSON, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}}
			output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/falcoctl-configmap.yaml"})

			var cm corev1.ConfigMap
			helm.UnmarshalK8SYaml(t, output, &cm)
			var config map[string]interface{}
			helm.UnmarshalK8SYaml(t, cm.Data["falcoctl.yaml"], &config)
			testCase.expected(t, config)
		})
	}
}
59
falco/tests/unit/serviceAccount_test.go
Normal file
@ -0,0 +1,59 @@
package unit

import (
	"path/filepath"
	"strings"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	corev1 "k8s.io/api/core/v1"
)

func TestServiceAccount(t *testing.T) {
	t.Parallel()

	helmChartPath, err := filepath.Abs(chartPath)
	require.NoError(t, err)

	testCases := []struct {
		name     string
		values   map[string]string
		expected func(t *testing.T, sa *corev1.ServiceAccount)
	}{
		{
			"defaultValues",
			nil,
			func(t *testing.T, sa *corev1.ServiceAccount) {
				require.Equal(t, sa.Name, "")
			},
		},
		{
			"serviceAccount.create=true",
			map[string]string{
				"serviceAccount.create": "true",
			},
			func(t *testing.T, sa *corev1.ServiceAccount) {
				require.Equal(t, sa.Name, "rendered-resources-falco")
			},
		},
	}

	for _, testCase := range testCases {
		testCase := testCase

		t.Run(testCase.name, func(t *testing.T) {
			t.Parallel()

			options := &helm.Options{SetValues: testCase.values}
			output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
			if err != nil {
				require.True(t, strings.Contains(err.Error(), "Error: could not find template templates/serviceaccount.yaml in chart"))
			}

			var sa corev1.ServiceAccount
			helm.UnmarshalK8SYaml(t, output, &sa)

			testCase.expected(t, &sa)
		})
	}
}
63
falco/values-gvisor-gke.yaml
Normal file
@ -0,0 +1,63 @@
# Default values to deploy Falco on GKE with gVisor.

# Affinity constraint for pods' scheduling.
# Needed to deploy Falco on the gVisor-enabled nodes.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: sandbox.gke.io/runtime
              operator: In
              values:
                - gvisor

# Tolerations to allow Falco to run on Kubernetes 1.6 masters.
# Adds the necessary tolerations to allow Falco pods to be scheduled on the gVisor-enabled nodes.
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: sandbox.gke.io/runtime
    operator: Equal
    value: gvisor

# Enable gVisor and set the appropriate paths.
driver:
  enabled: true
  kind: gvisor
  gvisor:
    runsc:
      path: /home/containerd/usr/local/sbin
      root: /run/containerd/runsc
      config: /run/containerd/runsc/config.toml

# Enable the containerd collector to enrich the syscall events with metadata.
collectors:
  enabled: true
  containerd:
    enabled: true
    socket: /run/containerd/containerd.sock

falcoctl:
  artifact:
    install:
      # -- Enable the init container. We do not recommend installing plugins for security reasons since they are executable objects.
      # We install only "rulesfiles".
      enabled: true
    follow:
      # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feeds such as the k8saudit-rules rules.
      enabled: true
  config:
    artifact:
      install:
        # -- List of artifacts to be installed by the falcoctl init container.
        # We do not recommend installing (or following) plugins for security reasons since they are executable objects.
        refs: [falco-rules:3]
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        # We do not recommend installing (or following) plugins for security reasons since they are executable objects.
        refs: [falco-rules:3]

# Set this to true to force Falco to output the logs as soon as they are emitted.
tty: false
59
falco/values-k8saudit.yaml
Normal file
@ -0,0 +1,59 @@
# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
  enabled: false

# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
  enabled: false

# -- Deploy Falco as a deployment. One instance of Falco is enough. The number of replicas is configurable anyway.
controller:
  kind: deployment
  deployment:
    # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
    # For more info check the section on Plugins in the README.md file.
    replicas: 1

falcoctl:
  artifact:
    install:
      # -- Enable the init container.
      enabled: true
    follow:
      # -- Enable the sidecar container.
      enabled: true
  config:
    artifact:
      install:
        # -- List of artifacts to be installed by the falcoctl init container.
        refs: [k8saudit-rules:0.7]
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [k8saudit-rules:0.7]

services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765 # See plugin open_params
        nodePort: 30007
        protocol: TCP

falco:
  rules_file:
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config: ""
      # maxEventBytes: 1048576
      # sslCertificate: /etc/falco/falco.pem
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
  # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
  load_plugins: [k8saudit, json]
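The `k8saudit-webhook` NodePort above only receives events if the kube-apiserver is configured to ship its audit log to that endpoint. A minimal audit webhook kubeconfig for the apiserver's `--audit-webhook-config-file` flag might look like the sketch below; the `NODE_IP` placeholder is an assumption (any node reachable on the NodePort), while the port (30007) and the `/k8s-audit` path come from the service and `open_params` above:

```yaml
# Hypothetical kube-apiserver audit-webhook kubeconfig.
# NODE_IP is a placeholder for a node reachable on the k8saudit-webhook NodePort.
apiVersion: v1
kind: Config
clusters:
  - name: falco
    cluster:
      server: http://NODE_IP:30007/k8s-audit
contexts:
  - name: default
    context:
      cluster: falco
      user: ""
current-context: default
users: []
```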
62
falco/values-syscall-k8saudit.yaml
Normal file
@ -0,0 +1,62 @@
# Enable the driver, and choose between the kernel module or the ebpf probe.
# Default value: kernel module.
driver:
  enabled: true
  kind: module

# Enable the collectors used to enrich the events with metadata.
# Check the values.yaml file for fine-grained options.
collectors:
  enabled: true

# We set the controller to daemonset since we have the syscalls source enabled.
# It will ensure that every node on our cluster will be monitored by Falco.
# Please note that the api-server will use the "k8saudit-webhook" service to send
# audit logs to the falco instances. That means that when we have multiple instances of Falco
# we can not predict to which instance the audit logs will be sent. When testing, please check all
# the Falco instances to make sure that at least one of them has received the audit logs.
controller:
  kind: daemonset

falcoctl:
  artifact:
    install:
      # -- Enable the init container.
      enabled: true
    follow:
      # -- Enable the sidecar container.
      enabled: true
  config:
    artifact:
      install:
        # -- List of artifacts to be installed by the falcoctl init container.
        refs: [falco-rules:3, k8saudit-rules:0.7]
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [falco-rules:3, k8saudit-rules:0.7]

services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765 # See plugin open_params
        nodePort: 30007
        protocol: TCP

falco:
  rules_file:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config: ""
      # maxEventBytes: 1048576
      # sslCertificate: /etc/falco/falco.pem
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
  load_plugins: [k8saudit, json]
1298
falco/values.home.yaml
Normal file
File diff suppressed because it is too large
1298
falco/values.yaml
Normal file
File diff suppressed because it is too large
5
kube-prometheus-stack/.editorconfig
Normal file
@ -0,0 +1,5 @@
root = true

[files/dashboards/*.json]
indent_size = 2
indent_style = space
29
kube-prometheus-stack/.helmignore
Normal file
@ -0,0 +1,29 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# helm/charts
OWNERS
hack/
ci/
kube-prometheus-*.tgz

unittests/
files/dashboards/
12
kube-prometheus-stack/CONTRIBUTING.md
Normal file
@ -0,0 +1,12 @@
# Contributing Guidelines

## How to contribute to this chart

1. Fork this repository, develop and test your chart.
1. Bump the chart version for every change.
1. Ensure the PR title has the prefix `[kube-prometheus-stack]`.
1. When making changes to rules or dashboards, see the README.md section on how to sync data from upstream repositories.
1. The `hack/minikube` folder has scripts to set up minikube and the components of this chart so that all components can be scraped. You can use this configuration when validating your changes.
1. Check for changes to RBAC rules.
1. Check for changes in CRD specs.
1. The PR must pass the linter (`helm lint`).
18
kube-prometheus-stack/Chart.lock
Normal file
@ -0,0 +1,18 @@
dependencies:
- name: crds
  repository: ""
  version: 0.0.0
- name: kube-state-metrics
  repository: https://prometheus-community.github.io/helm-charts
  version: 5.16.4
- name: prometheus-node-exporter
  repository: https://prometheus-community.github.io/helm-charts
  version: 4.31.0
- name: grafana
  repository: https://grafana.github.io/helm-charts
  version: 7.3.7
- name: prometheus-windows-exporter
  repository: https://prometheus-community.github.io/helm-charts
  version: 0.3.1
digest: sha256:f359d9feb38d8859523056ddd2a078aa4880bf467219bf27972c87138e112ca7
generated: "2024-03-14T22:04:16.515476846Z"
65
kube-prometheus-stack/Chart.yaml
Normal file
@ -0,0 +1,65 @@
annotations:
  artifacthub.io/license: Apache-2.0
  artifacthub.io/links: |
    - name: Chart Source
      url: https://github.com/prometheus-community/helm-charts
    - name: Upstream Project
      url: https://github.com/prometheus-operator/kube-prometheus
  artifacthub.io/operator: "true"
apiVersion: v2
appVersion: v0.72.0
dependencies:
- condition: crds.enabled
  name: crds
  repository: ""
  version: 0.0.0
- condition: kubeStateMetrics.enabled
  name: kube-state-metrics
  repository: https://prometheus-community.github.io/helm-charts
  version: 5.16.*
- condition: nodeExporter.enabled
  name: prometheus-node-exporter
  repository: https://prometheus-community.github.io/helm-charts
  version: 4.31.*
- condition: grafana.enabled
  name: grafana
  repository: https://grafana.github.io/helm-charts
  version: 7.3.*
- condition: windowsMonitoring.enabled
  name: prometheus-windows-exporter
  repository: https://prometheus-community.github.io/helm-charts
  version: 0.3.*
description: kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards,
  and Prometheus rules combined with documentation and scripts to provide easy to
  operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus
  Operator.
home: https://github.com/prometheus-operator/kube-prometheus
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
keywords:
- operator
- prometheus
- kube-prometheus
kubeVersion: '>=1.19.0-0'
maintainers:
- email: andrew@quadcorps.co.uk
  name: andrewgkew
- email: gianrubio@gmail.com
  name: gianrubio
- email: github.gkarthiks@gmail.com
  name: gkarthiks
- email: kube-prometheus-stack@sisti.pt
  name: GMartinez-Sisti
- email: github@jkroepke.de
  name: jkroepke
- email: scott@r6by.com
  name: scottrigby
- email: miroslav.hadzhiev@gmail.com
  name: Xtigyro
- email: quentin.bisson@gmail.com
  name: QuentinBisson
name: kube-prometheus-stack
sources:
- https://github.com/prometheus-community/helm-charts
- https://github.com/prometheus-operator/kube-prometheus
type: application
version: 57.0.3
1048
kube-prometheus-stack/README.md
Normal file
File diff suppressed because it is too large
3
kube-prometheus-stack/charts/crds/Chart.yaml
Normal file
@ -0,0 +1,3 @@
apiVersion: v2
name: crds
version: 0.0.0
3
kube-prometheus-stack/charts/crds/README.md
Normal file
@ -0,0 +1,3 @@
# crds subchart

See: [https://github.com/prometheus-community/helm-charts/issues/3548](https://github.com/prometheus-community/helm-charts/issues/3548)
5722
kube-prometheus-stack/charts/crds/crds/crd-alertmanagerconfigs.yaml
Normal file
File diff suppressed because it is too large
7752
kube-prometheus-stack/charts/crds/crds/crd-alertmanagers.yaml
Normal file
File diff suppressed because it is too large
742
kube-prometheus-stack/charts/crds/crds/crd-podmonitors.yaml
Normal file
@ -0,0 +1,742 @@
|
||||
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
|
||||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
controller-gen.kubebuilder.io/version: v0.13.0
|
||||
operator.prometheus.io/version: 0.72.0
|
||||
name: podmonitors.monitoring.coreos.com
|
||||
spec:
|
||||
group: monitoring.coreos.com
|
||||
names:
|
||||
categories:
|
||||
- prometheus-operator
|
||||
kind: PodMonitor
|
||||
listKind: PodMonitorList
|
||||
plural: podmonitors
|
||||
shortNames:
|
||||
- pmon
|
||||
singular: podmonitor
|
||||
scope: Namespaced
|
||||
versions:
|
||||
- name: v1
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
description: PodMonitor defines monitoring for a set of pods.
|
||||
properties:
|
||||
apiVersion:
|
||||
description: 'APIVersion defines the versioned schema of this representation
|
||||
of an object. Servers should convert recognized schemas to the latest
|
||||
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
|
||||
type: string
|
||||
kind:
|
||||
description: 'Kind is a string value representing the REST resource this
|
||||
object represents. Servers may infer this from the endpoint the client
|
||||
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
|
||||
type: string
|
||||
metadata:
|
||||
type: object
|
||||
spec:
|
||||
description: Specification of desired Pod selection for target discovery
|
||||
by Prometheus.
|
||||
properties:
|
||||
attachMetadata:
|
||||
description: "`attachMetadata` defines additional metadata which is
|
||||
added to the discovered targets. \n It requires Prometheus >= v2.37.0."
|
||||
properties:
|
||||
node:
|
||||
description: When set to true, Prometheus must have the `get`
|
||||
permission on the `Nodes` objects.
|
||||
type: boolean
|
||||
type: object
|
||||
jobLabel:
|
||||
description: "The label to use to retrieve the job name from. `jobLabel`
|
||||
selects the label from the associated Kubernetes `Pod` object which
|
||||
will be used as the `job` label for all metrics. \n For example
|
||||
if `jobLabel` is set to `foo` and the Kubernetes `Pod` object is
|
||||
labeled with `foo: bar`, then Prometheus adds the `job=\"bar\"`
|
||||
label to all ingested metrics. \n If the value of this field is
|
||||
empty, the `job` label of the metrics defaults to the namespace
|
||||
and name of the PodMonitor object (e.g. `<namespace>/<name>`)."
|
||||
type: string
|
||||
keepDroppedTargets:
|
||||
description: "Per-scrape limit on the number of targets dropped by
|
||||
relabeling that will be kept in memory. 0 means no limit. \n It
|
||||
requires Prometheus >= v2.47.0."
|
||||
format: int64
|
||||
type: integer
|
||||
labelLimit:
|
||||
description: "Per-scrape limit on number of labels that will be accepted
|
||||
for a sample. \n It requires Prometheus >= v2.27.0."
|
||||
format: int64
|
||||
type: integer
|
||||
labelNameLengthLimit:
|
||||
description: "Per-scrape limit on length of labels name that will
|
||||
be accepted for a sample. \n It requires Prometheus >= v2.27.0."
|
||||
format: int64
|
||||
type: integer
|
||||
labelValueLengthLimit:
|
||||
description: "Per-scrape limit on length of labels value that will
|
||||
be accepted for a sample. \n It requires Prometheus >= v2.27.0."
|
||||
format: int64
|
||||
type: integer
|
||||
namespaceSelector:
|
||||
description: Selector to select which namespaces the Kubernetes `Pods`
|
||||
objects are discovered from.
|
||||
properties:
|
||||
any:
|
||||
description: Boolean describing whether all namespaces are selected
|
||||
in contrast to a list restricting them.
|
||||
type: boolean
|
||||
matchNames:
|
||||
description: List of namespace names to select from.
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
type: object
|
||||
podMetricsEndpoints:
|
||||
description: List of endpoints part of this PodMonitor.
|
||||
items:
|
||||
description: PodMetricsEndpoint defines an endpoint serving Prometheus
|
||||
metrics to be scraped by Prometheus.
|
||||
properties:
|
||||
authorization:
|
||||
description: "`authorization` configures the Authorization header
|
||||
credentials to use when scraping the target. \n Cannot be
|
||||
set at the same time as `basicAuth`, or `oauth2`."
|
||||
properties:
|
||||
credentials:
|
||||
description: Selects a key of a Secret in the namespace
|
||||
that contains the credentials for authentication.
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must
|
||||
be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
type:
|
||||
description: "Defines the authentication type. The value
|
||||
is case-insensitive. \n \"Basic\" is not a supported value.
|
||||
\n Default: \"Bearer\""
|
||||
type: string
|
||||
type: object
|
||||
basicAuth:
|
||||
description: "`basicAuth` configures the Basic Authentication
|
||||
credentials to use when scraping the target. \n Cannot be
|
||||
set at the same time as `authorization`, or `oauth2`."
|
||||
properties:
|
||||
password:
|
||||
description: '`password` specifies a key of a Secret containing
|
||||
the password for authentication.'
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must
|
||||
be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
username:
|
||||
description: '`username` specifies a key of a Secret containing
|
||||
the username for authentication.'
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must
|
||||
be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
type: object
|
||||
bearerTokenSecret:
|
||||
description: "`bearerTokenSecret` specifies a key of a Secret
|
||||
containing the bearer token for scraping targets. The secret
|
||||
needs to be in the same namespace as the PodMonitor object
|
||||
and readable by the Prometheus Operator. \n Deprecated: use
|
||||
`authorization` instead."
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must
|
||||
be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
enableHttp2:
|
||||
description: '`enableHttp2` can be used to disable HTTP2 when
|
||||
scraping the target.'
|
||||
type: boolean
|
||||
filterRunning:
|
||||
description: "When true, the pods which are not running (e.g.
|
||||
either in Failed or Succeeded state) are dropped during the
|
||||
target discovery. \n If unset, the filtering is enabled. \n
|
||||
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase"
|
||||
type: boolean
|
||||
followRedirects:
|
||||
description: '`followRedirects` defines whether the scrape requests
|
||||
should follow HTTP 3xx redirects.'
|
||||
type: boolean
|
||||
honorLabels:
|
||||
description: When true, `honorLabels` preserves the metric's
|
||||
labels when they collide with the target's labels.
|
||||
type: boolean
|
||||
honorTimestamps:
|
||||
                      description: '`honorTimestamps` controls whether Prometheus preserves the timestamps when exposed by the target.'
                      type: boolean
                    interval:
                      description: "Interval at which Prometheus scrapes the metrics from the target. \n If empty, Prometheus uses the global scrape interval."
                      pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                      type: string
                    metricRelabelings:
                      description: '`metricRelabelings` configures the relabeling rules to apply to the samples before ingestion.'
                      items:
                        description: "RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                        properties:
                          action:
                            default: replace
                            description: "Action to perform based on the regex matching. \n `Uppercase` and `Lowercase` actions require Prometheus >= v2.36.0. `DropEqual` and `KeepEqual` actions require Prometheus >= v2.41.0. \n Default: \"Replace\""
                            enum:
                            - replace
                            - Replace
                            - keep
                            - Keep
                            - drop
                            - Drop
                            - hashmod
                            - HashMod
                            - labelmap
                            - LabelMap
                            - labeldrop
                            - LabelDrop
                            - labelkeep
                            - LabelKeep
                            - lowercase
                            - Lowercase
                            - uppercase
                            - Uppercase
                            - keepequal
                            - KeepEqual
                            - dropequal
                            - DropEqual
                            type: string
                          modulus:
                            description: "Modulus to take of the hash of the source label values. \n Only applicable when the action is `HashMod`."
                            format: int64
                            type: integer
                          regex:
                            description: Regular expression against which the extracted value is matched.
                            type: string
                          replacement:
                            description: "Replacement value against which a Replace action is performed if the regular expression matches. \n Regex capture groups are available."
                            type: string
                          separator:
                            description: Separator is the string between concatenated SourceLabels.
                            type: string
                          sourceLabels:
                            description: The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression.
                            items:
                              description: LabelName is a valid Prometheus label name which may only contain ASCII letters, numbers, as well as underscores.
                              pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
                              type: string
                            type: array
                          targetLabel:
                            description: "Label to which the resulting string is written in a replacement. \n It is mandatory for `Replace`, `HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and `DropEqual` actions. \n Regex capture groups are available."
                            type: string
                        type: object
                      type: array
                    oauth2:
                      description: "`oauth2` configures the OAuth2 settings to use when scraping the target. \n It requires Prometheus >= 2.27.0. \n Cannot be set at the same time as `authorization`, or `basicAuth`."
                      properties:
                        clientId:
                          description: '`clientId` specifies a key of a Secret or ConfigMap containing the OAuth2 client''s ID.'
                          properties:
                            configMap:
                              description: ConfigMap containing data to use for the targets.
                              properties:
                                key:
                                  description: The key to select.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                                  type: string
                                optional:
                                  description: Specify whether the ConfigMap or its key must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                            secret:
                              description: Secret containing data to use for the targets.
                              properties:
                                key:
                                  description: The key of the secret to select from. Must be a valid secret key.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                                  type: string
                                optional:
                                  description: Specify whether the Secret or its key must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                          type: object
                        clientSecret:
                          description: '`clientSecret` specifies a key of a Secret containing the OAuth2 client''s secret.'
                          properties:
                            key:
                              description: The key of the secret to select from. Must be a valid secret key.
                              type: string
                            name:
                              description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                              type: string
                            optional:
                              description: Specify whether the Secret or its key must be defined
                              type: boolean
                          required:
                          - key
                          type: object
                          x-kubernetes-map-type: atomic
                        endpointParams:
                          additionalProperties:
                            type: string
                          description: '`endpointParams` configures the HTTP parameters to append to the token URL.'
                          type: object
                        scopes:
                          description: '`scopes` defines the OAuth2 scopes used for the token request.'
                          items:
                            type: string
                          type: array
                        tokenUrl:
                          description: '`tokenURL` configures the URL to fetch the token from.'
                          minLength: 1
                          type: string
                      required:
                      - clientId
                      - clientSecret
                      - tokenUrl
                      type: object
                    params:
                      additionalProperties:
                        items:
                          type: string
                        type: array
                      description: '`params` define optional HTTP URL parameters.'
                      type: object
                    path:
                      description: "HTTP path from which to scrape for metrics. \n If empty, Prometheus uses the default value (e.g. `/metrics`)."
                      type: string
                    port:
                      description: "Name of the Pod port which this endpoint refers to. \n It takes precedence over `targetPort`."
                      type: string
                    proxyUrl:
                      description: '`proxyURL` configures the HTTP Proxy URL (e.g. "http://proxyserver:2195") to go through when scraping the target.'
                      type: string
                    relabelings:
                      description: "`relabelings` configures the relabeling rules to apply the target's metadata labels. \n The Operator automatically adds relabelings for a few standard Kubernetes fields. \n The original scrape job's name is available via the `__tmp_prometheus_job_name` label. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                      items:
                        description: "RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                        properties:
                          action:
                            default: replace
                            description: "Action to perform based on the regex matching. \n `Uppercase` and `Lowercase` actions require Prometheus >= v2.36.0. `DropEqual` and `KeepEqual` actions require Prometheus >= v2.41.0. \n Default: \"Replace\""
                            enum:
                            - replace
                            - Replace
                            - keep
                            - Keep
                            - drop
                            - Drop
                            - hashmod
                            - HashMod
                            - labelmap
                            - LabelMap
                            - labeldrop
                            - LabelDrop
                            - labelkeep
                            - LabelKeep
                            - lowercase
                            - Lowercase
                            - uppercase
                            - Uppercase
                            - keepequal
                            - KeepEqual
                            - dropequal
                            - DropEqual
                            type: string
                          modulus:
                            description: "Modulus to take of the hash of the source label values. \n Only applicable when the action is `HashMod`."
                            format: int64
                            type: integer
                          regex:
                            description: Regular expression against which the extracted value is matched.
                            type: string
                          replacement:
                            description: "Replacement value against which a Replace action is performed if the regular expression matches. \n Regex capture groups are available."
                            type: string
                          separator:
                            description: Separator is the string between concatenated SourceLabels.
                            type: string
                          sourceLabels:
                            description: The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression.
                            items:
                              description: LabelName is a valid Prometheus label name which may only contain ASCII letters, numbers, as well as underscores.
                              pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
                              type: string
                            type: array
                          targetLabel:
                            description: "Label to which the resulting string is written in a replacement. \n It is mandatory for `Replace`, `HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and `DropEqual` actions. \n Regex capture groups are available."
                            type: string
                        type: object
                      type: array
                    scheme:
                      description: "HTTP scheme to use for scraping. \n `http` and `https` are the expected values unless you rewrite the `__scheme__` label via relabeling. \n If empty, Prometheus uses the default value `http`."
                      enum:
                      - http
                      - https
                      type: string
                    scrapeTimeout:
                      description: "Timeout after which Prometheus considers the scrape to be failed. \n If empty, Prometheus uses the global scrape timeout unless it is less than the target's scrape interval value in which the latter is used."
                      pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                      type: string
                    targetPort:
                      anyOf:
                      - type: integer
                      - type: string
                      description: "Name or number of the target port of the `Pod` object behind the Service, the port must be specified with container port property. \n Deprecated: use 'port' instead."
                      x-kubernetes-int-or-string: true
                    tlsConfig:
                      description: TLS configuration to use when scraping the target.
                      properties:
                        ca:
                          description: Certificate authority used when verifying server certificates.
                          properties:
                            configMap:
                              description: ConfigMap containing data to use for the targets.
                              properties:
                                key:
                                  description: The key to select.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                                  type: string
                                optional:
                                  description: Specify whether the ConfigMap or its key must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                            secret:
                              description: Secret containing data to use for the targets.
                              properties:
                                key:
                                  description: The key of the secret to select from. Must be a valid secret key.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                                  type: string
                                optional:
                                  description: Specify whether the Secret or its key must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                          type: object
                        cert:
                          description: Client certificate to present when doing client-authentication.
                          properties:
                            configMap:
                              description: ConfigMap containing data to use for the targets.
                              properties:
                                key:
                                  description: The key to select.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                                  type: string
                                optional:
                                  description: Specify whether the ConfigMap or its key must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                            secret:
                              description: Secret containing data to use for the targets.
                              properties:
                                key:
                                  description: The key of the secret to select from. Must be a valid secret key.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                                  type: string
                                optional:
                                  description: Specify whether the Secret or its key must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                          type: object
                        insecureSkipVerify:
                          description: Disable target certificate validation.
                          type: boolean
                        keySecret:
                          description: Secret containing the client key file for the targets.
                          properties:
                            key:
                              description: The key of the secret to select from. Must be a valid secret key.
                              type: string
                            name:
                              description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                              type: string
                            optional:
                              description: Specify whether the Secret or its key must be defined
                              type: boolean
                          required:
                          - key
                          type: object
                          x-kubernetes-map-type: atomic
                        serverName:
                          description: Used to verify the hostname for the targets.
                          type: string
                      type: object
                    trackTimestampsStaleness:
                      description: "`trackTimestampsStaleness` defines whether Prometheus tracks staleness of the metrics that have an explicit timestamp present in scraped data. Has no effect if `honorTimestamps` is false. \n It requires Prometheus >= v2.48.0."
                      type: boolean
                  type: object
                type: array
              podTargetLabels:
                description: '`podTargetLabels` defines the labels which are transferred from the associated Kubernetes `Pod` object onto the ingested metrics.'
                items:
                  type: string
                type: array
              sampleLimit:
                description: '`sampleLimit` defines a per-scrape limit on the number of scraped samples that will be accepted.'
                format: int64
                type: integer
              scrapeClass:
                description: The scrape class to apply.
                minLength: 1
                type: string
              scrapeProtocols:
                description: "`scrapeProtocols` defines the protocols to negotiate during a scrape. It tells clients the protocols supported by Prometheus in order of preference (from most to least preferred). \n If unset, Prometheus uses its default value. \n It requires Prometheus >= v2.49.0."
                items:
                  description: 'ScrapeProtocol represents a protocol used by Prometheus for scraping metrics. Supported values are: * `OpenMetricsText0.0.1` * `OpenMetricsText1.0.0` * `PrometheusProto` * `PrometheusText0.0.4`'
                  enum:
                  - PrometheusProto
                  - OpenMetricsText0.0.1
                  - OpenMetricsText1.0.0
                  - PrometheusText0.0.4
                  type: string
                type: array
                x-kubernetes-list-type: set
              selector:
                description: Label selector to select the Kubernetes `Pod` objects.
                properties:
                  matchExpressions:
                    description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
                    items:
                      description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
                      properties:
                        key:
                          description: key is the label key that the selector applies to.
                          type: string
                        operator:
                          description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
                          type: string
                        values:
                          description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
                          items:
                            type: string
                          type: array
                      required:
                      - key
                      - operator
                      type: object
                    type: array
                  matchLabels:
                    additionalProperties:
                      type: string
                    description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
                    type: object
                type: object
                x-kubernetes-map-type: atomic
              targetLimit:
                description: '`targetLimit` defines a limit on the number of scraped targets that will be accepted.'
                format: int64
                type: integer
            required:
            - selector
            type: object
        required:
        - spec
        type: object
    served: true
    storage: true
759
kube-prometheus-stack/charts/crds/crds/crd-probes.yaml
Normal file
@ -0,0 +1,759 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.13.0
    operator.prometheus.io/version: 0.72.0
  name: probes.monitoring.coreos.com
spec:
  group: monitoring.coreos.com
  names:
    categories:
    - prometheus-operator
    kind: Probe
    listKind: ProbeList
    plural: probes
    shortNames:
    - prb
    singular: probe
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: Probe defines monitoring for a set of static targets or ingresses.
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: Specification of desired Ingress selection for target discovery by Prometheus.
            properties:
              authorization:
                description: Authorization section for this endpoint
                properties:
                  credentials:
                    description: Selects a key of a Secret in the namespace that contains the credentials for authentication.
                    properties:
                      key:
                        description: The key of the secret to select from. Must be a valid secret key.
                        type: string
                      name:
                        description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                        type: string
                      optional:
                        description: Specify whether the Secret or its key must be defined
                        type: boolean
                    required:
                    - key
                    type: object
                    x-kubernetes-map-type: atomic
                  type:
                    description: "Defines the authentication type. The value is case-insensitive. \n \"Basic\" is not a supported value. \n Default: \"Bearer\""
                    type: string
                type: object
              basicAuth:
                description: 'BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint'
                properties:
                  password:
                    description: '`password` specifies a key of a Secret containing the password for authentication.'
                    properties:
                      key:
                        description: The key of the secret to select from. Must be a valid secret key.
                        type: string
                      name:
                        description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                        type: string
                      optional:
                        description: Specify whether the Secret or its key must be defined
                        type: boolean
                    required:
                    - key
                    type: object
                    x-kubernetes-map-type: atomic
                  username:
                    description: '`username` specifies a key of a Secret containing the username for authentication.'
                    properties:
                      key:
                        description: The key of the secret to select from. Must be a valid secret key.
                        type: string
                      name:
                        description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                        type: string
                      optional:
                        description: Specify whether the Secret or its key must be defined
                        type: boolean
                    required:
                    - key
                    type: object
                    x-kubernetes-map-type: atomic
                type: object
              bearerTokenSecret:
                description: Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the probe and accessible by the Prometheus Operator.
                properties:
                  key:
                    description: The key of the secret to select from. Must be a valid secret key.
                    type: string
                  name:
                    description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                    type: string
                  optional:
                    description: Specify whether the Secret or its key must be defined
                    type: boolean
                required:
                - key
                type: object
                x-kubernetes-map-type: atomic
              interval:
                description: Interval at which targets are probed using the configured prober. If not specified Prometheus' global scrape interval is used.
                pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                type: string
              jobName:
                description: The job name assigned to scraped metrics by default.
                type: string
              keepDroppedTargets:
                description: "Per-scrape limit on the number of targets dropped by relabeling that will be kept in memory. 0 means no limit. \n It requires Prometheus >= v2.47.0."
                format: int64
                type: integer
              labelLimit:
                description: Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer.
                format: int64
                type: integer
              labelNameLengthLimit:
                description: Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer.
                format: int64
                type: integer
              labelValueLengthLimit:
                description: Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer.
                format: int64
                type: integer
              metricRelabelings:
                description: MetricRelabelConfigs to apply to samples before ingestion.
                items:
                  description: "RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                  properties:
                    action:
                      default: replace
                      description: "Action to perform based on the regex matching. \n `Uppercase` and `Lowercase` actions require Prometheus >= v2.36.0. `DropEqual` and `KeepEqual` actions require Prometheus >= v2.41.0. \n Default: \"Replace\""
                      enum:
                      - replace
                      - Replace
                      - keep
                      - Keep
                      - drop
                      - Drop
                      - hashmod
                      - HashMod
                      - labelmap
                      - LabelMap
                      - labeldrop
                      - LabelDrop
                      - labelkeep
                      - LabelKeep
                      - lowercase
                      - Lowercase
                      - uppercase
                      - Uppercase
                      - keepequal
                      - KeepEqual
                      - dropequal
                      - DropEqual
                      type: string
                    modulus:
                      description: "Modulus to take of the hash of the source label values. \n Only applicable when the action is `HashMod`."
                      format: int64
                      type: integer
                    regex:
                      description: Regular expression against which the extracted value is matched.
                      type: string
                    replacement:
                      description: "Replacement value against which a Replace action is performed if the regular expression matches. \n Regex capture groups are available."
                      type: string
                    separator:
                      description: Separator is the string between concatenated SourceLabels.
                      type: string
                    sourceLabels:
                      description: The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression.
                      items:
                        description: LabelName is a valid Prometheus label name which may only contain ASCII letters, numbers, as well as underscores.
                        pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
                        type: string
                      type: array
                    targetLabel:
                      description: "Label to which the resulting string is written in a replacement. \n It is mandatory for `Replace`, `HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and `DropEqual` actions. \n Regex capture groups are available."
                      type: string
                  type: object
                type: array
              module:
                description: 'The module to use for probing specifying how to probe the target. Example module configuring in the blackbox exporter: https://github.com/prometheus/blackbox_exporter/blob/master/example.yml'
                type: string
              oauth2:
                description: OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer.
                properties:
                  clientId:
                    description: '`clientId` specifies a key of a Secret or ConfigMap containing the OAuth2 client''s ID.'
                    properties:
                      configMap:
                        description: ConfigMap containing data to use for the targets.
                        properties:
                          key:
                            description: The key to select.
                            type: string
                          name:
                            description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                            type: string
                          optional:
                            description: Specify whether the ConfigMap or its key must be defined
                            type: boolean
                        required:
                        - key
                        type: object
                        x-kubernetes-map-type: atomic
                      secret:
                        description: Secret containing data to use for the targets.
                        properties:
                          key:
                            description: The key of the secret to select from. Must be a valid secret key.
                            type: string
                          name:
                            description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                            type: string
                          optional:
                            description: Specify whether the Secret or its key must be defined
                            type: boolean
                        required:
                        - key
                        type: object
                        x-kubernetes-map-type: atomic
                    type: object
                  clientSecret:
                    description: '`clientSecret` specifies a key of a Secret containing the OAuth2 client''s secret.'
                    properties:
                      key:
                        description: The key of the secret to select from. Must be a valid secret key.
                        type: string
                      name:
                        description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
                        type: string
                      optional:
                        description: Specify whether the Secret or its key must be defined
                        type: boolean
                    required:
                    - key
                    type: object
                    x-kubernetes-map-type: atomic
                  endpointParams:
                    additionalProperties:
                      type: string
                    description: '`endpointParams` configures the HTTP parameters to append to the token URL.'
                    type: object
                  scopes:
                    description: '`scopes` defines the OAuth2 scopes used for the token request.'
                    items:
                      type: string
                    type: array
                  tokenUrl:
                    description: '`tokenURL` configures the URL to fetch the token from.'
                    minLength: 1
                    type: string
                required:
                - clientId
                - clientSecret
                - tokenUrl
                type: object
              prober:
                description: Specification for the prober to use for probing targets. The prober.URL parameter is required. Targets cannot be probed if left empty.
                properties:
                  path:
                    default: /probe
                    description: Path to collect metrics from. Defaults to `/probe`.
                    type: string
                  proxyUrl:
                    description: Optional ProxyURL.
                    type: string
                  scheme:
                    description: HTTP scheme to use for scraping. `http` and `https` are the expected values unless you rewrite the `__scheme__` label via relabeling. If empty, Prometheus uses the default value `http`.
                    enum:
                    - http
                    - https
                    type: string
                  url:
                    description: Mandatory URL of the prober.
                    type: string
                required:
                - url
                type: object
              sampleLimit:
                description: SampleLimit defines per-scrape limit on number of scraped samples that will be accepted.
                format: int64
                type: integer
              scrapeClass:
                description: The scrape class to apply.
                minLength: 1
                type: string
              scrapeProtocols:
                description: "`scrapeProtocols` defines the protocols to negotiate during a scrape. It tells clients the protocols supported by Prometheus in order of preference (from most to least preferred). \n If unset, Prometheus uses its default value. \n It requires Prometheus >= v2.49.0."
                items:
                  description: 'ScrapeProtocol represents a protocol used by Prometheus for scraping metrics. Supported values are: * `OpenMetricsText0.0.1` * `OpenMetricsText1.0.0` * `PrometheusProto` * `PrometheusText0.0.4`'
                  enum:
                  - PrometheusProto
                  - OpenMetricsText0.0.1
                  - OpenMetricsText1.0.0
                  - PrometheusText0.0.4
                  type: string
                type: array
                x-kubernetes-list-type: set
              scrapeTimeout:
                description: Timeout for scraping metrics from the Prometheus exporter. If not specified, the Prometheus global scrape timeout is used.
                pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                type: string
              targetLimit:
                description: TargetLimit defines a limit on the number of scraped targets that will be accepted.
                format: int64
                type: integer
              targets:
                description: Targets defines a set of static or dynamically discovered targets to probe.
                properties:
                  ingress:
                    description: ingress defines the Ingress objects to probe and the relabeling configuration. If `staticConfig` is also defined, `staticConfig` takes precedence.
                    properties:
                      namespaceSelector:
                        description: From which namespaces to select Ingress objects.
                        properties:
                          any:
                            description: Boolean describing whether all namespaces are selected in contrast to a list restricting them.
                            type: boolean
                          matchNames:
                            description: List of namespace names to select from.
                            items:
                              type: string
                            type: array
                        type: object
                      relabelingConfigs:
                        description: 'RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the `__tmp_prometheus_ingress_address` label. It can be used to customize the probed URL. The original scrape job''s name is available via the `__tmp_prometheus_job_name` label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config'
                        items:
                          description: "RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                          properties:
                            action:
                              default: replace
                              description: "Action to perform based on the regex matching. \n `Uppercase` and `Lowercase` actions require Prometheus >= v2.36.0. `DropEqual` and `KeepEqual` actions require Prometheus >= v2.41.0. \n Default: \"Replace\""
                              enum:
                              - replace
                              - Replace
                              - keep
                              - Keep
                              - drop
                              - Drop
                              - hashmod
                              - HashMod
                              - labelmap
                              - LabelMap
                              - labeldrop
                              - LabelDrop
                              - labelkeep
                              - LabelKeep
                              - lowercase
                              - Lowercase
                              - uppercase
                              - Uppercase
                              - keepequal
                              - KeepEqual
                              - dropequal
                              - DropEqual
                              type: string
                            modulus:
                              description: "Modulus to take of the hash of the source label values. \n Only applicable when the action is `HashMod`."
                              format: int64
                              type: integer
                            regex:
                              description: Regular expression against which the extracted value is matched.
                              type: string
                            replacement:
                              description: "Replacement value against which a Replace action is performed if the regular expression matches. \n Regex capture groups are available."
                              type: string
                            separator:
                              description: Separator is the string between concatenated SourceLabels.
                              type: string
                            sourceLabels:
                              description: The source labels select values from existing
|
||||
labels. Their content is concatenated using the configured
|
||||
Separator and matched against the configured regular
|
||||
expression.
|
||||
items:
|
||||
description: LabelName is a valid Prometheus label
|
||||
name which may only contain ASCII letters, numbers,
|
||||
as well as underscores.
|
||||
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
|
||||
type: string
|
||||
type: array
|
||||
targetLabel:
|
||||
description: "Label to which the resulting string is
|
||||
written in a replacement. \n It is mandatory for `Replace`,
|
||||
`HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and
|
||||
`DropEqual` actions. \n Regex capture groups are available."
|
||||
type: string
|
||||
type: object
|
||||
type: array
|
||||
selector:
|
||||
description: Selector to select the Ingress objects.
|
||||
properties:
|
||||
matchExpressions:
|
||||
description: matchExpressions is a list of label selector
|
||||
requirements. The requirements are ANDed.
|
||||
items:
|
||||
description: A label selector requirement is a selector
|
||||
that contains values, a key, and an operator that
|
||||
relates the key and values.
|
||||
properties:
|
||||
key:
|
||||
description: key is the label key that the selector
|
||||
applies to.
|
||||
type: string
|
||||
operator:
|
||||
description: operator represents a key's relationship
|
||||
to a set of values. Valid operators are In, NotIn,
|
||||
Exists and DoesNotExist.
|
||||
type: string
|
||||
values:
|
||||
description: values is an array of string values.
|
||||
If the operator is In or NotIn, the values array
|
||||
must be non-empty. If the operator is Exists or
|
||||
DoesNotExist, the values array must be empty.
|
||||
This array is replaced during a strategic merge
|
||||
patch.
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
required:
|
||||
- key
|
||||
- operator
|
||||
type: object
|
||||
type: array
|
||||
matchLabels:
|
||||
additionalProperties:
|
||||
type: string
|
||||
description: matchLabels is a map of {key,value} pairs.
|
||||
A single {key,value} in the matchLabels map is equivalent
|
||||
to an element of matchExpressions, whose key field is
|
||||
"key", the operator is "In", and the values array contains
|
||||
only "value". The requirements are ANDed.
|
||||
type: object
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
type: object
|
||||
staticConfig:
|
||||
description: 'staticConfig defines the static list of targets
|
||||
to probe and the relabeling configuration. If `ingress` is also
|
||||
defined, `staticConfig` takes precedence. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config.'
|
||||
properties:
|
||||
labels:
|
||||
additionalProperties:
|
||||
type: string
|
||||
description: Labels assigned to all metrics scraped from the
|
||||
targets.
|
||||
type: object
|
||||
relabelingConfigs:
|
||||
description: 'RelabelConfigs to apply to the label set of
|
||||
the targets before it gets scraped. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config'
|
||||
items:
|
||||
description: "RelabelConfig allows dynamic rewriting of
|
||||
the label set for targets, alerts, scraped samples and
|
||||
remote write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
|
||||
properties:
|
||||
action:
|
||||
default: replace
|
||||
description: "Action to perform based on the regex matching.
|
||||
\n `Uppercase` and `Lowercase` actions require Prometheus
|
||||
>= v2.36.0. `DropEqual` and `KeepEqual` actions require
|
||||
Prometheus >= v2.41.0. \n Default: \"Replace\""
|
||||
enum:
|
||||
- replace
|
||||
- Replace
|
||||
- keep
|
||||
- Keep
|
||||
- drop
|
||||
- Drop
|
||||
- hashmod
|
||||
- HashMod
|
||||
- labelmap
|
||||
- LabelMap
|
||||
- labeldrop
|
||||
- LabelDrop
|
||||
- labelkeep
|
||||
- LabelKeep
|
||||
- lowercase
|
||||
- Lowercase
|
||||
- uppercase
|
||||
- Uppercase
|
||||
- keepequal
|
||||
- KeepEqual
|
||||
- dropequal
|
||||
- DropEqual
|
||||
type: string
|
||||
modulus:
|
||||
description: "Modulus to take of the hash of the source
|
||||
label values. \n Only applicable when the action is
|
||||
`HashMod`."
|
||||
format: int64
|
||||
type: integer
|
||||
regex:
|
||||
description: Regular expression against which the extracted
|
||||
value is matched.
|
||||
type: string
|
||||
replacement:
|
||||
description: "Replacement value against which a Replace
|
||||
action is performed if the regular expression matches.
|
||||
\n Regex capture groups are available."
|
||||
type: string
|
||||
separator:
|
||||
description: Separator is the string between concatenated
|
||||
SourceLabels.
|
||||
type: string
|
||||
sourceLabels:
|
||||
description: The source labels select values from existing
|
||||
labels. Their content is concatenated using the configured
|
||||
Separator and matched against the configured regular
|
||||
expression.
|
||||
items:
|
||||
description: LabelName is a valid Prometheus label
|
||||
name which may only contain ASCII letters, numbers,
|
||||
as well as underscores.
|
||||
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
|
||||
type: string
|
||||
type: array
|
||||
targetLabel:
|
||||
description: "Label to which the resulting string is
|
||||
written in a replacement. \n It is mandatory for `Replace`,
|
||||
`HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and
|
||||
`DropEqual` actions. \n Regex capture groups are available."
|
||||
type: string
|
||||
type: object
|
||||
type: array
|
||||
static:
|
||||
description: The list of hosts to probe.
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
type: object
|
||||
type: object
|
||||
tlsConfig:
|
||||
description: TLS configuration to use when scraping the endpoint.
|
||||
properties:
|
||||
ca:
|
||||
description: Certificate authority used when verifying server
|
||||
certificates.
|
||||
properties:
|
||||
configMap:
|
||||
description: ConfigMap containing data to use for the targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key to select.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the ConfigMap or its key
|
||||
must be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
secret:
|
||||
description: Secret containing data to use for the targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must
|
||||
be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
type: object
|
||||
cert:
|
||||
description: Client certificate to present when doing client-authentication.
|
||||
properties:
|
||||
configMap:
|
||||
description: ConfigMap containing data to use for the targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key to select.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the ConfigMap or its key
|
||||
must be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
secret:
|
||||
description: Secret containing data to use for the targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must
|
||||
be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
type: object
|
||||
insecureSkipVerify:
|
||||
description: Disable target certificate validation.
|
||||
type: boolean
|
||||
keySecret:
|
||||
description: Secret containing the client key file for the targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must be
|
||||
a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must be
|
||||
defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
serverName:
|
||||
description: Used to verify the hostname for the targets.
|
||||
type: string
|
||||
type: object
|
||||
type: object
|
||||
required:
|
||||
- spec
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
||||
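The `targets.staticConfig` branch of the schema above is the one most commonly exercised. A minimal `Probe` object using it might look like the sketch below; the name, namespace, prober address, and label names are hypothetical (the `prober` and `module` fields sit earlier in this same schema, outside the excerpt):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: example-static-probe      # hypothetical name
  namespace: monitoring           # hypothetical namespace
spec:
  prober:
    url: blackbox-exporter.monitoring.svc:9115   # assumed blackbox-exporter service
  module: http_2xx
  scrapeTimeout: 10s
  targets:
    staticConfig:
      # `static` is the plain list of hosts to probe
      static:
      - https://example.com
      # `labels` are attached to every scraped sample
      labels:
        env: demo
      # `relabelingConfigs` follows the RelabelConfig schema above
      relabelingConfigs:
      - action: replace
        sourceLabels: [__param_target]
        targetLabel: probed_url
```

Per the schema, `staticConfig` takes precedence if `ingress` is also set.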
9041	kube-prometheus-stack/charts/crds/crds/crd-prometheusagents.yaml	Normal file
File diff suppressed because it is too large

10444	kube-prometheus-stack/charts/crds/crds/crd-prometheuses.yaml	Normal file
File diff suppressed because it is too large

131	kube-prometheus-stack/charts/crds/crds/crd-prometheusrules.yaml	Normal file
@ -0,0 +1,131 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.13.0
    operator.prometheus.io/version: 0.72.0
  name: prometheusrules.monitoring.coreos.com
spec:
  group: monitoring.coreos.com
  names:
    categories:
    - prometheus-operator
    kind: PrometheusRule
    listKind: PrometheusRuleList
    plural: prometheusrules
    shortNames:
    - promrule
    singular: prometheusrule
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: PrometheusRule defines recording and alerting rules for a Prometheus
          instance
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: Specification of desired alerting rule definitions for Prometheus.
            properties:
              groups:
                description: Content of Prometheus rule file
                items:
                  description: RuleGroup is a list of sequentially evaluated recording
                    and alerting rules.
                  properties:
                    interval:
                      description: Interval determines how often rules in the group
                        are evaluated.
                      pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                      type: string
                    limit:
                      description: Limit the number of alerts an alerting rule and
                        series a recording rule can produce. Limit is supported starting
                        with Prometheus >= 2.31 and Thanos Ruler >= 0.24.
                      type: integer
                    name:
                      description: Name of the rule group.
                      minLength: 1
                      type: string
                    partial_response_strategy:
                      description: 'PartialResponseStrategy is only used by ThanosRuler
                        and will be ignored by Prometheus instances. More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response'
                      pattern: ^(?i)(abort|warn)?$
                      type: string
                    rules:
                      description: List of alerting and recording rules.
                      items:
                        description: 'Rule describes an alerting or recording rule
                          See Prometheus documentation: [alerting](https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
                          or [recording](https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules)
                          rule'
                        properties:
                          alert:
                            description: Name of the alert. Must be a valid label
                              value. Only one of `record` and `alert` must be set.
                            type: string
                          annotations:
                            additionalProperties:
                              type: string
                            description: Annotations to add to each alert. Only valid
                              for alerting rules.
                            type: object
                          expr:
                            anyOf:
                            - type: integer
                            - type: string
                            description: PromQL expression to evaluate.
                            x-kubernetes-int-or-string: true
                          for:
                            description: Alerts are considered firing once they have
                              been returned for this long.
                            pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                            type: string
                          keep_firing_for:
                            description: KeepFiringFor defines how long an alert will
                              continue firing after the condition that triggered it
                              has cleared.
                            minLength: 1
                            pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                            type: string
                          labels:
                            additionalProperties:
                              type: string
                            description: Labels to add or overwrite.
                            type: object
                          record:
                            description: Name of the time series to output to. Must
                              be a valid metric name. Only one of `record` and `alert`
                              must be set.
                            type: string
                        required:
                        - expr
                        type: object
                      type: array
                  required:
                  - name
                  type: object
                type: array
                x-kubernetes-list-map-keys:
                - name
                x-kubernetes-list-type: map
            type: object
        required:
        - spec
        type: object
    served: true
    storage: true
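A small manifest that validates against this CRD shows how the pieces fit together: each group needs a `name`, each rule needs an `expr`, and exactly one of `record` or `alert` is set per rule. All names, metrics, and thresholds below are hypothetical:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules       # hypothetical name
  namespace: monitoring     # hypothetical namespace
spec:
  groups:
  - name: example.rules     # `name` is required per group
    interval: 30s
    rules:
    # recording rule: `record` + `expr`
    - record: job:http_requests:rate5m
      expr: sum by (job) (rate(http_requests_total[5m]))
    # alerting rule: `alert` + `expr`, optional `for`, labels, annotations
    - alert: HighRequestRate
      expr: job:http_requests:rate5m > 100
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: Request rate is unusually high.
```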
2483	kube-prometheus-stack/charts/crds/crds/crd-scrapeconfigs.yaml	Normal file
File diff suppressed because it is too large

765	kube-prometheus-stack/charts/crds/crds/crd-servicemonitors.yaml	Normal file
@ -0,0 +1,765 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.13.0
    operator.prometheus.io/version: 0.72.0
  name: servicemonitors.monitoring.coreos.com
spec:
  group: monitoring.coreos.com
  names:
    categories:
    - prometheus-operator
    kind: ServiceMonitor
    listKind: ServiceMonitorList
    plural: servicemonitors
    shortNames:
    - smon
    singular: servicemonitor
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: ServiceMonitor defines monitoring for a set of services.
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: Specification of desired Service selection for target discovery
              by Prometheus.
            properties:
              attachMetadata:
                description: "`attachMetadata` defines additional metadata which is
                  added to the discovered targets. \n It requires Prometheus >= v2.37.0."
                properties:
                  node:
                    description: When set to true, Prometheus must have the `get`
                      permission on the `Nodes` objects.
                    type: boolean
                type: object
              endpoints:
                description: List of endpoints part of this ServiceMonitor.
                items:
                  description: Endpoint defines an endpoint serving Prometheus metrics
                    to be scraped by Prometheus.
                  properties:
                    authorization:
                      description: "`authorization` configures the Authorization header
                        credentials to use when scraping the target. \n Cannot be
                        set at the same time as `basicAuth`, or `oauth2`."
                      properties:
                        credentials:
                          description: Selects a key of a Secret in the namespace
                            that contains the credentials for authentication.
                          properties:
                            key:
                              description: The key of the secret to select from. Must
                                be a valid secret key.
                              type: string
                            name:
                              description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                TODO: Add other useful fields. apiVersion, kind, uid?'
                              type: string
                            optional:
                              description: Specify whether the Secret or its key must
                                be defined
                              type: boolean
                          required:
                          - key
                          type: object
                          x-kubernetes-map-type: atomic
                        type:
                          description: "Defines the authentication type. The value
                            is case-insensitive. \n \"Basic\" is not a supported value.
                            \n Default: \"Bearer\""
                          type: string
                      type: object
                    basicAuth:
                      description: "`basicAuth` configures the Basic Authentication
                        credentials to use when scraping the target. \n Cannot be
                        set at the same time as `authorization`, or `oauth2`."
                      properties:
                        password:
                          description: '`password` specifies a key of a Secret containing
                            the password for authentication.'
                          properties:
                            key:
                              description: The key of the secret to select from. Must
                                be a valid secret key.
                              type: string
                            name:
                              description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                TODO: Add other useful fields. apiVersion, kind, uid?'
                              type: string
                            optional:
                              description: Specify whether the Secret or its key must
                                be defined
                              type: boolean
                          required:
                          - key
                          type: object
                          x-kubernetes-map-type: atomic
                        username:
                          description: '`username` specifies a key of a Secret containing
                            the username for authentication.'
                          properties:
                            key:
                              description: The key of the secret to select from. Must
                                be a valid secret key.
                              type: string
                            name:
                              description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                TODO: Add other useful fields. apiVersion, kind, uid?'
                              type: string
                            optional:
                              description: Specify whether the Secret or its key must
                                be defined
                              type: boolean
                          required:
                          - key
                          type: object
                          x-kubernetes-map-type: atomic
                      type: object
                    bearerTokenFile:
                      description: "File to read bearer token for scraping the target.
                        \n Deprecated: use `authorization` instead."
                      type: string
                    bearerTokenSecret:
                      description: "`bearerTokenSecret` specifies a key of a Secret
                        containing the bearer token for scraping targets. The secret
                        needs to be in the same namespace as the ServiceMonitor object
                        and readable by the Prometheus Operator. \n Deprecated: use
                        `authorization` instead."
                      properties:
                        key:
                          description: The key of the secret to select from. Must
                            be a valid secret key.
                          type: string
                        name:
                          description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                            TODO: Add other useful fields. apiVersion, kind, uid?'
                          type: string
                        optional:
                          description: Specify whether the Secret or its key must
                            be defined
                          type: boolean
                      required:
                      - key
                      type: object
                      x-kubernetes-map-type: atomic
                    enableHttp2:
                      description: '`enableHttp2` can be used to disable HTTP2 when
                        scraping the target.'
                      type: boolean
                    filterRunning:
                      description: "When true, the pods which are not running (e.g.
                        either in Failed or Succeeded state) are dropped during the
                        target discovery. \n If unset, the filtering is enabled. \n
                        More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase"
                      type: boolean
                    followRedirects:
                      description: '`followRedirects` defines whether the scrape requests
                        should follow HTTP 3xx redirects.'
                      type: boolean
                    honorLabels:
                      description: When true, `honorLabels` preserves the metric's
                        labels when they collide with the target's labels.
                      type: boolean
                    honorTimestamps:
                      description: '`honorTimestamps` controls whether Prometheus
                        preserves the timestamps when exposed by the target.'
                      type: boolean
                    interval:
                      description: "Interval at which Prometheus scrapes the metrics
                        from the target. \n If empty, Prometheus uses the global scrape
                        interval."
                      pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
                      type: string
                    metricRelabelings:
                      description: '`metricRelabelings` configures the relabeling
                        rules to apply to the samples before ingestion.'
                      items:
                        description: "RelabelConfig allows dynamic rewriting of the
                          label set for targets, alerts, scraped samples and remote
                          write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                        properties:
                          action:
                            default: replace
                            description: "Action to perform based on the regex matching.
                              \n `Uppercase` and `Lowercase` actions require Prometheus
                              >= v2.36.0. `DropEqual` and `KeepEqual` actions require
                              Prometheus >= v2.41.0. \n Default: \"Replace\""
                            enum:
                            - replace
                            - Replace
                            - keep
                            - Keep
                            - drop
                            - Drop
                            - hashmod
                            - HashMod
                            - labelmap
                            - LabelMap
                            - labeldrop
                            - LabelDrop
                            - labelkeep
                            - LabelKeep
                            - lowercase
                            - Lowercase
                            - uppercase
                            - Uppercase
                            - keepequal
                            - KeepEqual
                            - dropequal
                            - DropEqual
                            type: string
                          modulus:
                            description: "Modulus to take of the hash of the source
                              label values. \n Only applicable when the action is
                              `HashMod`."
                            format: int64
                            type: integer
                          regex:
                            description: Regular expression against which the extracted
                              value is matched.
                            type: string
                          replacement:
                            description: "Replacement value against which a Replace
                              action is performed if the regular expression matches.
                              \n Regex capture groups are available."
                            type: string
                          separator:
                            description: Separator is the string between concatenated
                              SourceLabels.
                            type: string
                          sourceLabels:
                            description: The source labels select values from existing
                              labels. Their content is concatenated using the configured
                              Separator and matched against the configured regular
                              expression.
                            items:
                              description: LabelName is a valid Prometheus label name
                                which may only contain ASCII letters, numbers, as
                                well as underscores.
                              pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
                              type: string
                            type: array
                          targetLabel:
                            description: "Label to which the resulting string is written
                              in a replacement. \n It is mandatory for `Replace`,
                              `HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and
                              `DropEqual` actions. \n Regex capture groups are available."
                            type: string
                        type: object
                      type: array
                    oauth2:
                      description: "`oauth2` configures the OAuth2 settings to use
                        when scraping the target. \n It requires Prometheus >= 2.27.0.
                        \n Cannot be set at the same time as `authorization`, or `basicAuth`."
                      properties:
                        clientId:
                          description: '`clientId` specifies a key of a Secret or
                            ConfigMap containing the OAuth2 client''s ID.'
                          properties:
                            configMap:
                              description: ConfigMap containing data to use for the
                                targets.
                              properties:
                                key:
                                  description: The key to select.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                    TODO: Add other useful fields. apiVersion, kind,
                                    uid?'
                                  type: string
                                optional:
                                  description: Specify whether the ConfigMap or its
                                    key must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                            secret:
                              description: Secret containing data to use for the targets.
                              properties:
                                key:
                                  description: The key of the secret to select from. Must
                                    be a valid secret key.
                                  type: string
                                name:
                                  description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                    TODO: Add other useful fields. apiVersion, kind,
                                    uid?'
                                  type: string
                                optional:
                                  description: Specify whether the Secret or its key
                                    must be defined
                                  type: boolean
                              required:
                              - key
                              type: object
                              x-kubernetes-map-type: atomic
                          type: object
                        clientSecret:
                          description: '`clientSecret` specifies a key of a Secret
                            containing the OAuth2 client''s secret.'
                          properties:
                            key:
                              description: The key of the secret to select from. Must
                                be a valid secret key.
                              type: string
                            name:
                              description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                TODO: Add other useful fields. apiVersion, kind, uid?'
                              type: string
                            optional:
                              description: Specify whether the Secret or its key must
                                be defined
                              type: boolean
                          required:
                          - key
                          type: object
                          x-kubernetes-map-type: atomic
                        endpointParams:
                          additionalProperties:
                            type: string
                          description: '`endpointParams` configures the HTTP parameters
                            to append to the token URL.'
                          type: object
                        scopes:
                          description: '`scopes` defines the OAuth2 scopes used for
                            the token request.'
                          items:
                            type: string
                          type: array
                        tokenUrl:
                          description: '`tokenURL` configures the URL to fetch the
                            token from.'
                          minLength: 1
                          type: string
                      required:
                      - clientId
                      - clientSecret
                      - tokenUrl
                      type: object
                    params:
                      additionalProperties:
                        items:
                          type: string
                        type: array
                      description: params define optional HTTP URL parameters.
                      type: object
                    path:
                      description: "HTTP path from which to scrape for metrics. \n
                        If empty, Prometheus uses the default value (e.g. `/metrics`)."
                      type: string
                    port:
                      description: "Name of the Service port which this endpoint refers
                        to. \n It takes precedence over `targetPort`."
                      type: string
                    proxyUrl:
                      description: '`proxyURL` configures the HTTP Proxy URL (e.g.
                        "http://proxyserver:2195") to go through when scraping the
                        target.'
                      type: string
                    relabelings:
                      description: "`relabelings` configures the relabeling rules
                        to apply the target's metadata labels. \n The Operator automatically
                        adds relabelings for a few standard Kubernetes fields. \n
                        The original scrape job's name is available via the `__tmp_prometheus_job_name`
                        label. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                      items:
                        description: "RelabelConfig allows dynamic rewriting of the
                          label set for targets, alerts, scraped samples and remote
                          write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
                        properties:
                          action:
                            default: replace
                            description: "Action to perform based on the regex matching.
                              \n `Uppercase` and `Lowercase` actions require Prometheus
                              >= v2.36.0. `DropEqual` and `KeepEqual` actions require
                              Prometheus >= v2.41.0. \n Default: \"Replace\""
                            enum:
                            - replace
                            - Replace
                            - keep
                            - Keep
                            - drop
                            - Drop
                            - hashmod
                            - HashMod
                            - labelmap
                            - LabelMap
                            - labeldrop
                            - LabelDrop
                            - labelkeep
                            - LabelKeep
                            - lowercase
                            - Lowercase
                            - uppercase
                            - Uppercase
                            - keepequal
                            - KeepEqual
                            - dropequal
                            - DropEqual
                            type: string
                          modulus:
                            description: "Modulus to take of the hash of the source
                              label values. \n Only applicable when the action is
                              `HashMod`."
                            format: int64
                            type: integer
                          regex:
                            description: Regular expression against which the extracted
                              value is matched.
                            type: string
                          replacement:
                            description: "Replacement value against which a Replace
                              action is performed if the regular expression matches.
                              \n Regex capture groups are available."
                            type: string
                          separator:
                            description: Separator is the string between concatenated
                              SourceLabels.
                            type: string
                          sourceLabels:
                            description: The source labels select values from existing
                              labels. Their content is concatenated using the configured
                              Separator and matched against the configured regular
|
||||
items:
|
||||
description: LabelName is a valid Prometheus label name
|
||||
which may only contain ASCII letters, numbers, as
|
||||
well as underscores.
|
||||
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
|
||||
type: string
|
||||
type: array
|
||||
targetLabel:
|
||||
description: "Label to which the resulting string is written
|
||||
in a replacement. \n It is mandatory for `Replace`,
|
||||
`HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and
|
||||
`DropEqual` actions. \n Regex capture groups are available."
|
||||
type: string
|
||||
type: object
|
||||
type: array
|
||||
scheme:
|
||||
description: "HTTP scheme to use for scraping. \n `http` and
|
||||
`https` are the expected values unless you rewrite the `__scheme__`
|
||||
label via relabeling. \n If empty, Prometheus uses the default
|
||||
value `http`."
|
||||
enum:
|
||||
- http
|
||||
- https
|
||||
type: string
|
||||
scrapeTimeout:
|
||||
description: "Timeout after which Prometheus considers the scrape
|
||||
to be failed. \n If empty, Prometheus uses the global scrape
|
||||
timeout unless it is less than the target's scrape interval
|
||||
value in which the latter is used."
|
||||
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
|
||||
type: string
|
||||
targetPort:
|
||||
anyOf:
|
||||
- type: integer
|
||||
- type: string
|
||||
description: Name or number of the target port of the `Pod`
|
||||
object behind the Service. The port must be specified with
|
||||
the container's port property.
|
||||
x-kubernetes-int-or-string: true
|
||||
tlsConfig:
|
||||
description: TLS configuration to use when scraping the target.
|
||||
properties:
|
||||
ca:
|
||||
description: Certificate authority used when verifying server
|
||||
certificates.
|
||||
properties:
|
||||
configMap:
|
||||
description: ConfigMap containing data to use for the
|
||||
targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key to select.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind,
|
||||
uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the ConfigMap or its
|
||||
key must be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
secret:
|
||||
description: Secret containing data to use for the targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind,
|
||||
uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key
|
||||
must be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
type: object
|
||||
caFile:
|
||||
description: Path to the CA cert in the Prometheus container
|
||||
to use for the targets.
|
||||
type: string
|
||||
cert:
|
||||
description: Client certificate to present when doing client-authentication.
|
||||
properties:
|
||||
configMap:
|
||||
description: ConfigMap containing data to use for the
|
||||
targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key to select.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind,
|
||||
uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the ConfigMap or its
|
||||
key must be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
secret:
|
||||
description: Secret containing data to use for the targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind,
|
||||
uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key
|
||||
must be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
type: object
|
||||
certFile:
|
||||
description: Path to the client cert file in the Prometheus
|
||||
container for the targets.
|
||||
type: string
|
||||
insecureSkipVerify:
|
||||
description: Disable target certificate validation.
|
||||
type: boolean
|
||||
keyFile:
|
||||
description: Path to the client key file in the Prometheus
|
||||
container for the targets.
|
||||
type: string
|
||||
keySecret:
|
||||
description: Secret containing the client key file for the
|
||||
targets.
|
||||
properties:
|
||||
key:
|
||||
description: The key of the secret to select from. Must
|
||||
be a valid secret key.
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
|
||||
TODO: Add other useful fields. apiVersion, kind, uid?'
|
||||
type: string
|
||||
optional:
|
||||
description: Specify whether the Secret or its key must
|
||||
be defined
|
||||
type: boolean
|
||||
required:
|
||||
- key
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
serverName:
|
||||
description: Used to verify the hostname for the targets.
|
||||
type: string
|
||||
type: object
|
||||
trackTimestampsStaleness:
|
||||
description: "`trackTimestampsStaleness` defines whether Prometheus
|
||||
tracks staleness of the metrics that have an explicit timestamp
|
||||
present in scraped data. Has no effect if `honorTimestamps`
|
||||
is false. \n It requires Prometheus >= v2.48.0."
|
||||
type: boolean
|
||||
type: object
|
||||
type: array
|
||||
jobLabel:
|
||||
description: "`jobLabel` selects the label from the associated Kubernetes
|
||||
`Service` object which will be used as the `job` label for all metrics.
|
||||
\n For example if `jobLabel` is set to `foo` and the Kubernetes
|
||||
`Service` object is labeled with `foo: bar`, then Prometheus adds
|
||||
the `job=\"bar\"` label to all ingested metrics. \n If the value
|
||||
of this field is empty or if the label doesn't exist for the given
|
||||
Service, the `job` label of the metrics defaults to the name of
|
||||
the associated Kubernetes `Service`."
|
||||
type: string
|
||||
keepDroppedTargets:
|
||||
description: "Per-scrape limit on the number of targets dropped by
|
||||
relabeling that will be kept in memory. 0 means no limit. \n It
|
||||
requires Prometheus >= v2.47.0."
|
||||
format: int64
|
||||
type: integer
|
||||
labelLimit:
|
||||
description: "Per-scrape limit on number of labels that will be accepted
|
||||
for a sample. \n It requires Prometheus >= v2.27.0."
|
||||
format: int64
|
||||
type: integer
|
||||
labelNameLengthLimit:
|
||||
description: "Per-scrape limit on length of labels name that will
|
||||
be accepted for a sample. \n It requires Prometheus >= v2.27.0."
|
||||
format: int64
|
||||
type: integer
|
||||
labelValueLengthLimit:
|
||||
description: "Per-scrape limit on length of labels value that will
|
||||
be accepted for a sample. \n It requires Prometheus >= v2.27.0."
|
||||
format: int64
|
||||
type: integer
|
||||
namespaceSelector:
|
||||
description: Selector to select which namespaces the Kubernetes `Endpoints`
|
||||
objects are discovered from.
|
||||
properties:
|
||||
any:
|
||||
description: Boolean describing whether all namespaces are selected
|
||||
in contrast to a list restricting them.
|
||||
type: boolean
|
||||
matchNames:
|
||||
description: List of namespace names to select from.
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
type: object
|
||||
podTargetLabels:
|
||||
description: '`podTargetLabels` defines the labels which are transferred
|
||||
from the associated Kubernetes `Pod` object onto the ingested metrics.'
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
sampleLimit:
|
||||
description: '`sampleLimit` defines a per-scrape limit on the number
|
||||
of scraped samples that will be accepted.'
|
||||
format: int64
|
||||
type: integer
|
||||
scrapeClass:
|
||||
description: The scrape class to apply.
|
||||
minLength: 1
|
||||
type: string
|
||||
scrapeProtocols:
|
||||
description: "`scrapeProtocols` defines the protocols to negotiate
|
||||
during a scrape. It tells clients the protocols supported by Prometheus
|
||||
in order of preference (from most to least preferred). \n If unset,
|
||||
Prometheus uses its default value. \n It requires Prometheus >=
|
||||
v2.49.0."
|
||||
items:
|
||||
description: 'ScrapeProtocol represents a protocol used by Prometheus
|
||||
for scraping metrics. Supported values are: * `OpenMetricsText0.0.1`
|
||||
* `OpenMetricsText1.0.0` * `PrometheusProto` * `PrometheusText0.0.4`'
|
||||
enum:
|
||||
- PrometheusProto
|
||||
- OpenMetricsText0.0.1
|
||||
- OpenMetricsText1.0.0
|
||||
- PrometheusText0.0.4
|
||||
type: string
|
||||
type: array
|
||||
x-kubernetes-list-type: set
|
||||
selector:
|
||||
description: Label selector to select the Kubernetes `Endpoints` objects.
|
||||
properties:
|
||||
matchExpressions:
|
||||
description: matchExpressions is a list of label selector requirements.
|
||||
The requirements are ANDed.
|
||||
items:
|
||||
description: A label selector requirement is a selector that
|
||||
contains values, a key, and an operator that relates the key
|
||||
and values.
|
||||
properties:
|
||||
key:
|
||||
description: key is the label key that the selector applies
|
||||
to.
|
||||
type: string
|
||||
operator:
|
||||
description: operator represents a key's relationship to
|
||||
a set of values. Valid operators are In, NotIn, Exists
|
||||
and DoesNotExist.
|
||||
type: string
|
||||
values:
|
||||
description: values is an array of string values. If the
|
||||
operator is In or NotIn, the values array must be non-empty.
|
||||
If the operator is Exists or DoesNotExist, the values
|
||||
array must be empty. This array is replaced during a strategic
|
||||
merge patch.
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
required:
|
||||
- key
|
||||
- operator
|
||||
type: object
|
||||
type: array
|
||||
matchLabels:
|
||||
additionalProperties:
|
||||
type: string
|
||||
description: matchLabels is a map of {key,value} pairs. A single
|
||||
{key,value} in the matchLabels map is equivalent to an element
|
||||
of matchExpressions, whose key field is "key", the operator
|
||||
is "In", and the values array contains only "value". The requirements
|
||||
are ANDed.
|
||||
type: object
|
||||
type: object
|
||||
x-kubernetes-map-type: atomic
|
||||
targetLabels:
|
||||
description: '`targetLabels` defines the labels which are transferred
|
||||
from the associated Kubernetes `Service` object onto the ingested
|
||||
metrics.'
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
targetLimit:
|
||||
description: '`targetLimit` defines a limit on the number of scraped
|
||||
targets that will be accepted.'
|
||||
format: int64
|
||||
type: integer
|
||||
required:
|
||||
- selector
|
||||
type: object
|
||||
required:
|
||||
- spec
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
||||
7240
kube-prometheus-stack/charts/crds/crds/crd-thanosrulers.yaml
Normal file
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff