# Falco

[Falco](https://falco.org) is a *Cloud Native Runtime Security* tool designed to detect anomalous activity in your applications. You can use Falco to monitor the runtime security of your Kubernetes applications and internal components.

## Introduction

The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. Based on the configuration in the [values.yaml](./values.yaml) file, the chart will render and install the required k8s objects. Keep in mind that Falco can be deployed in your cluster using a `daemonset` or a `deployment`. See the next sections for more info.

## Attention

Before installing Falco in a Kubernetes cluster, a user should check that the kernel version used in the nodes is supported by the community. Also, before reporting any issue with Falco (missing kernel image, CrashLoopBackOff and similar), make sure to read the [about drivers](#about-drivers) section and adjust your setup as required.
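
For example, you can get a quick view of the kernel versions running on your nodes with *kubectl* before installing the chart (this is only a convenience check; driver availability still needs to be verified as described below):

```bash
kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion,OS:.status.nodeInfo.osImage
```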
## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with the release name `falco` in namespace `falco` run:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco
```

After a few minutes Falco instances should be running on all your nodes. The status of the Falco pods can be inspected through *kubectl*:

```bash
kubectl get pods -n falco -o wide
```

If everything went smoothly, you should observe an output similar to the following, indicating that all Falco instances are up and running in your cluster:

```bash
NAME          READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
falco-57w7q   1/1     Running   0          3m12s   10.244.0.1   control-plane   <none>           <none>
falco-h4596   1/1     Running   0          3m12s   10.244.1.2   worker-node-1   <none>           <none>
falco-kb55h   1/1     Running   0          3m12s   10.244.2.3   worker-node-2   <none>           <none>
```

The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in [values.yaml](./values.yaml) of our Helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod on each node.

> **Tip**: List the Falco releases using `helm list -n falco`; a release is a name used to track a specific deployment.

### Falco, Event Sources and Kubernetes

Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel** trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in Kubernetes clusters. Since Falco 0.32.0 all the code related to the k8s Audit Logs was removed from Falco and ported to a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). Currently, Falco supports different event sources coming from **plugins** or **drivers** (system events).

Note that **a Falco instance can handle multiple event sources in parallel**. You can deploy Falco leveraging **drivers** for syscall events and at the same time loading **plugins**. A step-by-step guide on how to deploy Falco with multiple sources can be found [here](https://falco.org/docs/getting-started/third-party/learning/#falco-with-multiple-sources).

#### About Drivers

Falco needs a **driver** to analyze the system workload and pass security events to userspace. The supported drivers are:

* [Kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module)
* [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe)
* [Modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe)

The driver should be installed on the node where Falco is running. The _kernel module_ (default option) and the _eBPF probe_ are installed on the node through an *init container* (i.e. `falco-driver-loader`) that tries to download a prebuilt driver or build it on-the-fly as a fallback. The _Modern eBPF probe_ doesn't require an init container because it is shipped directly into the Falco binary. However, the _Modern eBPF probe_ requires [recent BPF features](https://falco.org/docs/event-sources/kernel/#modern-ebpf-probe).

##### Pre-built drivers

The [kernel-crawler](https://github.com/falcosecurity/kernel-crawler) automatically discovers kernel versions and flavors. Currently, it runs weekly. We have a site where users can check for the discovered kernel flavors and versions, [example for Amazon Linux 2](https://falcosecurity.github.io/kernel-crawler/?arch=x86_64&target=AmazonLinux2).

The discovery of a kernel version by the [kernel-crawler](https://falcosecurity.github.io/kernel-crawler/) does not imply that pre-built kernel modules and bpf probes are available. That is because once kernel-crawler has discovered new kernel versions, the drivers need to be built by jobs running on our [Driver Build Grid infra](https://github.com/falcosecurity/test-infra#dbg). Please keep in mind that the building process is best effort. Users can check the existence of prebuilt drivers at the following [link](https://download.falco.org/driver/site/index.html?lib=3.0.1%2Bdriver&target=all&arch=all&kind=all).

##### Building the driver on the fly (fallback)

If a prebuilt driver is not available for your distribution/kernel, users can build the driver by themselves or install the kernel headers on the nodes, and the init container (`falco-driver-loader`) will try to build the driver on the fly.

Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation.
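
As a rough sketch, assuming Debian/Ubuntu or RHEL-like nodes, the headers matching the running kernel can usually be installed with the node's package manager (exact package names vary by distribution):

```bash
# Debian/Ubuntu nodes
apt-get install -y linux-headers-$(uname -r)

# RHEL/CentOS/Fedora-like nodes
yum install -y kernel-devel-$(uname -r)
```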
##### Selecting a different driver loader image

Note that since Falco 0.36.0 and Helm chart version 3.7.0 the driver loader image has been updated to be compatible with newer kernels (5.x and above). This means that if you have an older kernel version and you are trying to build the kernel module, you may experience issues. In that case you can use the `falco-driver-loader-legacy` image to get the previous version of the toolchain. To do so, set the appropriate value, i.e. `--set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy`.
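
For example, a full install command selecting the legacy driver loader image might look like this (only the repository value differs from a default install):

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy
```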
#### About Plugins

[Plugins](https://falco.org/docs/plugins/) are used to extend Falco to support new **data sources**. The current **plugin framework** supports *plugins* with the following *capabilities*:

* Event sourcing capability;
* Field extraction capability;

Plugin capabilities are *composable*: we can have a single plugin with both capabilities, or we can load two different plugins each with its own capability, one plugin as a source of events and another as an extractor. A good example of this is the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying them both we have support for the **K8s Audit Logs** in Falco.

Note that **the driver is not required when using plugins**.

#### About gVisor

gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. For more information please consult the [official docs](https://gvisor.dev/docs/). In version `0.32.1`, Falco first introduced support for gVisor by leveraging the stream of system call information coming from gVisor.

Falco requires the version of [runsc](https://gvisor.dev/docs/user_guide/install/) to be equal to or above `20220704.0`. The following snippet shows the gVisor configuration variables found in [values.yaml](./values.yaml):

```yaml
driver:
  gvisor:
    enabled: true
    runsc:
      path: /home/containerd/usr/local/sbin
      root: /run/containerd/runsc
      config: /run/containerd/runsc/config.toml
```

Falco uses the [runsc](https://gvisor.dev/docs/user_guide/install/) binary to interact with sandboxed containers. The following variables need to be set:

* `runsc.path`: absolute path of the `runsc` binary in the k8s nodes;
* `runsc.root`: absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information of the workloads handled by it;
* `runsc.config`: absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence.

If you want to know more about how Falco uses those configuration paths please have a look at the `falco.gvisor.initContainer` helper in [helpers.tpl](./templates/_helpers.tpl).

A preset `values.yaml` file, [values-gvisor-gke.yaml](./values-gvisor-gke.yaml), is provided and can be used as it is to deploy Falco with gVisor support in a [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods) cluster. It is also a good starting point for custom deployments.

##### Example: running Falco on GKE, with or without gVisor-enabled pods

If you use GKE with k8s version at least `1.24.4-gke.1800` or `1.25.0-gke.200` with gVisor sandboxed pods, you can install a Falco instance to monitor them with, e.g.:

```bash
helm install falco-gvisor falcosecurity/falco \
    --create-namespace \
    --namespace falco-gvisor \
    -f https://raw.githubusercontent.com/falcosecurity/charts/master/charts/falco/values-gvisor-gke.yaml
```

Note that the instance of Falco above will only monitor gVisor sandboxed workloads on gVisor-enabled node pools. If you also need to monitor regular workloads on regular node pools you can use the eBPF driver as usual:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=ebpf
```

The two instances of Falco will operate independently and can be installed, uninstalled or configured as needed. If you were already monitoring your regular node pools with eBPF you don't need to reinstall it.

##### Falco+gVisor additional resources

An exhaustive blog post about Falco and gVisor can be found on the [Falco blog](https://falco.org/blog/intro-gvisor-falco/).

If you need help on how to set up gVisor in your environment please have a look at the [gVisor official docs](https://gvisor.dev/docs/user_guide/quick_start/kubernetes/).

### About Falco Artifacts

Historically **rules files** and **plugins** used to be shipped inside the Falco docker image and/or inside the chart. Starting from version `v0.3.0` of the chart, the [**falcoctl tool**](https://github.com/falcosecurity/falcoctl) can be used to install/update **rules files** and **plugins**. When referring to such objects we will use the term **artifact**. For more info please check out the following [proposal](https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md).

The default configuration of the chart for new installations is to use the **falcoctl** tool to handle **artifacts**. The chart will deploy two new containers along the Falco one:

* `falcoctl-artifact-install`, an init container that makes sure to install the configured **artifacts** before the Falco container starts;
* `falcoctl-artifact-follow`, a sidecar container that periodically checks for new artifacts (currently only *falco-rules*) and downloads them.

For more info on how to enable/disable and configure the **falcoctl** tool check out the config values [here](./README.md#Configuration) and the [upgrading notes](./BREAKING-CHANGES.md#300).
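
As an illustrative sketch, the artifact references used by those containers can be overridden at install time; the version tag below is hypothetical, adjust it to the artifact you actually want:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set "falcoctl.config.artifact.install.refs={falco-rules:3}" \
    --set "falcoctl.config.artifact.follow.refs={falco-rules:3}"
```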
### Deploying Falco in Kubernetes

After the clarification of the different [**event sources**](#falco-event-sources-and-kubernetes) and how they are consumed by Falco using the **drivers** and the **plugins**, let us now discuss how Falco is deployed in Kubernetes.

The chart deploys Falco using a `daemonset` or a `deployment` depending on the **event sources**.

#### Daemonset

When using the [drivers](#about-drivers), Falco is deployed as a `daemonset`. By using a `daemonset`, k8s assures that a Falco instance will be running on each of our nodes even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster.

**Kernel module**

To run Falco with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) you can use the default values of the helm chart:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco
```

**eBPF probe**

To run Falco with the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=ebpf
```

There are other configurations related to the eBPF probe; for more info please check the [values.yaml](./values.yaml) file. After you have made your changes to the configuration file you just need to run:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace "your-custom-name-space" \
    -f "path-to-custom-values.yaml-file"
```
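
For instance, a minimal custom values file might just select the eBPF driver; the file name below is only an example, and any other eBPF-related settings from [values.yaml](./values.yaml) can be added alongside:

```bash
cat << EOF > custom-values.yaml
driver:
  kind: ebpf
EOF

helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    -f custom-values.yaml
```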
**Modern eBPF probe**

To run Falco with the [modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.kind=modern_ebpf
```

#### Deployment

In the scenario when Falco is used with **plugins** as data sources, the best option is to deploy it as a k8s `deployment`. **Plugins** can be of two types: the ones that follow the **push model** and the ones that follow the **pull model**. A plugin that adopts the first model expects to receive the data from a remote source on a given endpoint: it just exposes an endpoint and waits for data to be posted there. For example, [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured to do so. On the other hand, plugins that abide by the **pull model** retrieve the data from a given remote service.

The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins (see the example right after this list):

* it needs to be reachable when ingesting logs directly from remote services;
* it needs only one active replica, otherwise events will be sent/received to/from different Falco instances.

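
A minimal sketch of such an install is shown below; the plugin-specific configuration (see the [Kubernetes Audit Log](#kubernetes-audit-log) section for a complete example) still has to be provided through a values file or additional `--set` flags:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set driver.enabled=false \
    --set collectors.enabled=false \
    --set controller.kind=deployment
```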
## Uninstalling the Chart

To uninstall a Falco release from your Kubernetes cluster always use helm. It will take care of removing all the components deployed by the chart and cleaning up your environment. The following command will remove a release called `falco` in namespace `falco`:

```bash
helm uninstall falco --namespace falco
```

## Showing logs generated by Falco container

There are many reasons why we would have to inspect the messages emitted by the Falco container. When deployed in Kubernetes the Falco logs can be inspected through:

```bash
kubectl logs -n falco falco-pod-name
```

where `falco-pod-name` is the name of the Falco pod running in your cluster.

The command described above will just display the logs emitted by Falco up to the moment you run the command. The `-f` flag comes in handy when we are doing live testing or debugging and we want to have the Falco logs as soon as they are emitted. The following command:

```bash
kubectl logs -f -n falco falco-pod-name
```

The `-f (--follow)` flag follows the logs and live streams them to your terminal. It is really useful when you are debugging a new rule and want to make sure that the rule is triggered when some actions are performed in the system.

If we need to access the logs of a previous Falco run we can do that by adding the `-p (--previous)` flag:

```bash
kubectl logs -p -n falco falco-pod-name
```

A scenario where we need the `-p (--previous)` flag is when we have a restart of a Falco pod and want to check what went wrong.

### Enabling real time logs

By default in Falco the output is buffered. When live streaming logs we will notice delays between the logs output (rules triggering) and the event happening.

In order to enable the logs to be emitted without delays you need to set `.Values.tty=true` in the [values.yaml](./values.yaml) file.
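
For example, the same setting can be applied directly at install or upgrade time:

```bash
helm upgrade --install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set tty=true
```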
## K8s-metacollector

Starting from Falco `0.37` the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed.
A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it.
The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata
from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.

Kubernetes resources for which metadata will be collected and sent to Falco:

* pods;
* namespaces;
* deployments;
* replicationcontrollers;
* replicasets;
* services.

### Plugin

Since the *k8s-metacollector* is standalone, deployed in the cluster as a deployment, Falco instances need to connect to the component
in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in.
The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information
in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the
associated Falco instance is deployed, resulting in node-level granularity.

### Exported Fields: Old and New

The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the
[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still
available in Falco, for compatibility reasons, but most of the fields will return `N/A`. The following fields are still
usable and will return meaningful data when the `container runtime collectors` are enabled:

* k8s.pod.name;
* k8s.pod.id;
* k8s.pod.label;
* k8s.pod.labels;
* k8s.pod.ip;
* k8s.pod.cni.json;
* k8s.pod.namespace.name.

The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new
[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new
`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco.

### Enabling the k8s-metacollector

The following command will deploy Falco + k8s-metacollector + k8smeta:

```bash
helm install falco falcosecurity/falco \
    --namespace falco \
    --create-namespace \
    --set collectors.kubernetes.enabled=true
```

## Loading custom rules

Falco ships with a nice default ruleset. It is a good starting point but sooner or later we are going to need to add custom rules which fit our needs.

So the question is: How can we load custom rules in our Falco deployment?

We are going to create a file that contains custom rules so that we can keep it in a Git repository.

```bash
cat custom-rules.yaml
```

And the file looks like this one:

```yaml
customRules:
  rules-traefik.yaml: |-
    - macro: traefik_consider_syscalls
      condition: (evt.num < 0)

    - macro: app_traefik
      condition: container and container.image startswith "traefik"

    # Restricting listening ports to selected set

    - list: traefik_allowed_inbound_ports_tcp
      items: [443, 80, 8080]

    - rule: Unexpected inbound tcp connection traefik
      desc: Detect inbound traffic to traefik using tcp on a port outside of expected set
      condition: inbound and evt.rawres >= 0 and not fd.sport in (traefik_allowed_inbound_ports_tcp) and app_traefik
      output: Inbound network connection to traefik on unexpected port (command=%proc.cmdline pid=%proc.pid connection=%fd.name sport=%fd.sport user=%user.name %container.info image=%container.image)
      priority: NOTICE

    # Restricting spawned processes to selected set

    - list: traefik_allowed_processes
      items: ["traefik"]

    - rule: Unexpected spawned process traefik
      desc: Detect a process started in a traefik container outside of an expected set
      condition: spawned_process and not proc.name in (traefik_allowed_processes) and app_traefik
      output: Unexpected process spawned in traefik container (command=%proc.cmdline pid=%proc.pid user=%user.name %container.info image=%container.image)
      priority: NOTICE
```

So the next step is to use the `custom-rules.yaml` file for installing the Falco Helm chart.

```bash
helm install falco -f custom-rules.yaml falcosecurity/falco
```

And we will see in our logs something like:

```bash
Tue Jun 5 15:08:57 2018: Loading rules from file /etc/falco/rules.d/rules-traefik.yaml:
```

And this means that our Falco installation has loaded the rules and is ready to help us.

## Kubernetes Audit Log

The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin. It is entirely up to you to set up the [webhook backend](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#webhook-backend) of the Kubernetes API server to forward the Audit Log events to the Falco listening port.

The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin:

```yaml
# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
  enabled: false

# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
  enabled: false

# -- Deploy Falco as a deployment. One instance of Falco is enough. Anyway, the number of replicas is configurable.
controller:
  kind: deployment
  deployment:
    # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
    # For more info check the section on Plugins in the README.md file.
    replicas: 1

falcoctl:
  artifact:
    install:
      # -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects.
      enabled: true
    follow:
      # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feeds such as the k8saudit-rules rules.
      enabled: true
  config:
    artifact:
      install:
        # -- Resolve the dependencies for artifacts.
        resolveDeps: true
        # -- List of artifacts to be installed by the falcoctl init container.
        # Only the rulesfile; the plugin will be installed as a dependency.
        refs: [k8saudit-rules:0.5]
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [k8saudit-rules:0.5]

services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765 # See plugin open_params
        nodePort: 30007
        protocol: TCP

falco:
  rules_file:
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config:
        ""
        # maxEventBytes: 1048576
        # sslCertificate: /etc/falco/falco.pem
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
  # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
  load_plugins: [k8saudit, json]
```

Here is the explanation of the above configuration:

* disable the drivers by setting `driver.enabled=false`;
* disable the collectors by setting `collectors.enabled=false`;
* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`;
* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
* enable the `falcoctl-artifact-install` init container;
* configure `falcoctl-artifact-install` to install the required artifacts;
* enable the `falcoctl-artifact-follow` sidecar container;
* load the correct ruleset for our plugin in `falco.rules_file`;
* configure the plugins to be loaded, in this case `k8saudit` and `json`;
* and finally add our plugins to `load_plugins` so they are loaded by Falco.

The configuration can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file, ready to be used:

```bash
# make sure the falco namespace exists
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    -f ./values-k8saudit.yaml
```

After a few minutes a Falco instance should be running on your cluster. The status of the Falco pod can be inspected through *kubectl*:

```bash
kubectl get pods -n falco -o wide
```

If everything went smoothly, you should observe an output similar to the following, indicating that the Falco instance is up and running:

```bash
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE            NOMINATED NODE   READINESS GATES
falco-64484d9579-qckms   1/1     Running   0          101s   10.244.2.2   worker-node-2   <none>           <none>
```

Furthermore you can check the Falco logs through *kubectl logs*:

```bash
kubectl logs -n falco falco-64484d9579-qckms
```

In the logs you should have something similar to the following, indicating that Falco has loaded the required plugins:

```bash
Fri Jul 8 16:07:24 2022: Falco version 0.32.0 (driver version 39ae7d40496793cf3d3e7890c9bbdc202263836b)
Fri Jul 8 16:07:24 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Fri Jul 8 16:07:24 2022: Loading plugin (k8saudit) from file /usr/share/falco/plugins/libk8saudit.so
Fri Jul 8 16:07:24 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so
Fri Jul 8 16:07:24 2022: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Fri Jul 8 16:07:24 2022: Starting internal webserver, listening on port 8765
```

*Note that the support for the dynamic backend (also known as the `AuditSink` object) has been deprecated in Kubernetes and removed from this chart.*

### Manual setup with NodePort on kOps

Using `kops edit cluster`, ensure these options are present, then run `kops update cluster` and `kops rolling-update cluster`:

```yaml
spec:
  kubeAPIServer:
    auditLogMaxBackups: 1
    auditLogMaxSize: 10
    auditLogPath: /var/log/k8s-audit.log
    auditPolicyFile: /srv/kubernetes/assets/audit-policy.yaml
    auditWebhookBatchMaxWait: 5s
    auditWebhookConfigFile: /srv/kubernetes/assets/webhook-config.yaml
  fileAssets:
    - content: |
        # content of the webserver CA certificate
        # remove this fileAsset and certificate-authority from webhook-config if using http
      name: audit-ca.pem
      roles:
        - Master
    - content: |
        apiVersion: v1
        kind: Config
        clusters:
          - name: falco
            cluster:
              # remove 'certificate-authority' when using 'http'
              certificate-authority: /srv/kubernetes/assets/audit-ca.pem
              server: https://localhost:32765/k8s-audit
        contexts:
          - context:
              cluster: falco
              user: ""
            name: default-context
        current-context: default-context
        preferences: {}
        users: []
      name: webhook-config.yaml
      roles:
        - Master
    - content: |
        # ... paste audit-policy.yaml here ...
        # https://raw.githubusercontent.com/falcosecurity/plugins/master/plugins/k8saudit/configs/audit-policy.yaml
      name: audit-policy.yaml
      roles:
        - Master
```

## Enabling gRPC

The Falco gRPC server and the Falco gRPC Outputs APIs are not enabled by default.
Moreover, Falco supports running a gRPC server with two main binding types:

- Over a local **Unix socket** with no authentication
- Over the **network** with mandatory mutual TLS authentication (mTLS)

> **Tip**: Once gRPC is enabled, you can deploy [falco-exporter](https://github.com/falcosecurity/falco-exporter) to export metrics to Prometheus.

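
For instance, assuming the `falcosecurity` repository is already added, a minimal falco-exporter installation into the same namespace might look like the following (check the falco-exporter chart documentation for its configuration options):

```bash
helm install falco-exporter falcosecurity/falco-exporter \
    --namespace falco
```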
### gRPC over unix socket (default)

The preferred way to use the gRPC is over a Unix socket.

To install Falco with gRPC enabled over a **unix socket**, you have to:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.grpc.enabled=true \
    --set falco.grpc_output.enabled=true
```

### gRPC over network

The gRPC server over the network can only be used with mutual authentication between the clients and the server using TLS certificates.
How to generate the certificates is [documented here](https://falco.org/docs/grpc/#generate-valid-ca).

To install Falco with gRPC enabled over the **network**, you have to:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.grpc.enabled=true \
    --set falco.grpc_output.enabled=true \
    --set falco.grpc.unixSocketPath="" \
    --set-file certs.server.key=/path/to/server.key \
    --set-file certs.server.crt=/path/to/server.crt \
    --set-file certs.ca.crt=/path/to/ca.crt
```

## Enable http_output

HTTP output enables Falco to send events through HTTP(S) via the following configuration:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.http_output.enabled=true \
    --set falco.http_output.url="http://some.url/some/path/" \
    --set falco.json_output=true \
    --set falco.json_include_output_property=true
```

Additionally, you can enable mTLS communication and load HTTP client cryptographic material via:

```shell
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falco.http_output.enabled=true \
    --set falco.http_output.url="https://some.url/some/path/" \
    --set falco.json_output=true \
    --set falco.json_include_output_property=true \
    --set falco.http_output.mtls=true \
    --set falco.http_output.client_cert="/etc/falco/certs/client/client.crt" \
    --set falco.http_output.client_key="/etc/falco/certs/client/client.key" \
    --set falco.http_output.ca_cert="/etc/falco/certs/client/ca.crt" \
    --set-file certs.client.key="/path/to/client.key",certs.client.crt="/path/to/client.crt",certs.ca.crt="/path/to/cacert.crt"
```

Or, instead of directly setting the files via `--set-file`, you can mount an existing secret by setting the `certs.existingClientSecret` value.
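
A hypothetical sketch of that flow is shown below; the secret name and the file keys are placeholders, and the `falco.http_output.*` flags from the previous example still apply:

```bash
# Create a secret holding the client certificate material
# (the secret name and file keys below are placeholders).
kubectl create secret generic falco-client-certs \
    --namespace falco \
    --from-file=client.key=/path/to/client.key \
    --from-file=client.crt=/path/to/client.crt \
    --from-file=ca.crt=/path/to/cacert.crt

# Reference the secret instead of the --set-file flags.
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set certs.existingClientSecret=falco-client-certs
```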
## Deploy Falcosidekick with Falco

[`Falcosidekick`](https://github.com/falcosecurity/falcosidekick) can be installed with `Falco` by setting `--set falcosidekick.enabled=true`. This setting automatically configures all options of `Falco` for working with `Falcosidekick`.
All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick#configuration).
For example, to enable the deployment of [`Falcosidekick-UI`](https://github.com/falcosecurity/falcosidekick-ui), add `--set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true`.

If you use a Proxy in your cluster, the requests between `Falco` and `Falcosidekick` might be captured. To avoid that, use the full FQDN of `Falcosidekick` by setting `--set falcosidekick.fullfqdn=true`.
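
Putting it together, a typical install with Falcosidekick and its UI enabled looks like:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    --set falcosidekick.enabled=true \
    --set falcosidekick.webui.enabled=true
```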
## Configuration

The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See [values.yaml](./values.yaml) for the full list.

{{ template "chart.valuesSection" . }}