# Deploy with Flux
This guide shows how to deploy and manage virtual clusters using Flux, a GitOps tool for Kubernetes. Flux implements GitOps principles by continuously ensuring that your Kubernetes clusters match the desired state defined in a Git repository.
## Prerequisites

- Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Run `kubectl auth can-i create clusterrole -A` to verify that your current kube-context has administrative privileges.

  To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using `kubectl config` commands or authenticating through your cloud provider's CLI tools.

- `helm`: Helm v3.10 is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.

- `kubectl`: the Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
Additionally, you'll need:

- A Kubernetes cluster with Flux controllers installed
- The `flux` CLI tool installed on your machine (see the Flux Installation Guide)
- The `vcluster` CLI tool installed on your machine
- Basic understanding of GitOps principles
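As a quick sanity check before proceeding, you can confirm the CLIs are available and the Flux controllers are healthy. These are the standard verification commands from the Flux and vCluster CLIs:

```bash
# Verify the Flux CLI and the controllers running in the cluster
flux check

# Verify the vcluster CLI is installed
vcluster version

# Confirm your current kube-context has admin privileges
kubectl auth can-i create clusterrole -A
```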
## Architecture options
When integrating Flux with virtual clusters, it's important to consider the architectural pattern that best fits your GitOps strategy. The relationship between Flux and virtual clusters can be configured in multiple ways, each with its own trade-offs.
Unlike ArgoCD, which treats other clusters as first-class objects, Flux manages workloads on other clusters through KubeConfig references in HelmRelease and Kustomization resources. This difference in design influences how you structure your GitOps workflows when working with virtual clusters.
The standalone approach of deploying Flux in each virtual cluster may not scale well for large numbers of virtual clusters, especially for ephemeral environments like PR preview environments. A hub and spoke model or a Flux instance per host cluster approach is generally more resource-efficient for these scenarios, as it reduces the overhead of running multiple Flux controllers.
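In practice, targeting a virtual cluster from Flux is just a matter of pointing a HelmRelease (or Kustomization) at a Secret containing a KubeConfig. The following minimal sketch illustrates the pattern; the Secret name is a placeholder, and complete examples follow later in this guide:

```yaml
# Sketch: how a Flux HelmRelease targets another cluster via a KubeConfig Secret
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-app
  namespace: vcluster-namespace
spec:
  kubeConfig:
    secretRef:
      name: vcluster-flux-kubeconfig # placeholder; created by exportKubeConfig below
      key: config
  # ...chart, values, and interval as usual
```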
### 1. Flux instance per host cluster
When Flux runs on each host cluster, it can manage the virtual clusters within that environment. This approach is recommended if you already use Flux for each traditional cluster and want to maintain a similar management pattern.
- One Flux instance per host cluster
- Each Flux instance manages multiple virtual clusters on that host
- Virtual cluster KubeConfig Secret management is simplified since Secrets are local to the cluster
- Clear separation of responsibilities by host cluster
- Recommended if you already use a Flux instance per traditional cluster
- Provides better resource utilization since the Flux controllers are shared
### 2. Hub and spoke model
With this approach, a central Flux instance manages multiple virtual clusters across different host clusters. This is a good option if you already use a single Flux instance with multiple Kubernetes clusters or if you want centralized control of all virtual environments.
- One central Flux instance manages multiple virtual clusters across different hosts
- Works well with existing hub and spoke Flux setups
- Requires secure KubeConfig Secret management between clusters
- More efficient for large numbers of virtual clusters
- Provides a single control point for all virtual cluster management
- Can simplify GitOps workflows by having a single source of truth
### 3. Flux inside virtual clusters
While possible, running Flux inside every virtual cluster adds resource overhead and management complexity. This approach might be suitable when virtual clusters need complete isolation and independent GitOps workflows.
- Each virtual cluster runs its own Flux instance
- Provides complete isolation between environments
- Teams can manage their own GitOps workflows independently
- Increased resource overhead (each vCluster needs its own Flux controllers)
- More complex to manage at scale
- Suitable for environments where strict isolation is required
## Enable KubeConfig export
To enable Flux to deploy to virtual clusters, you need to create a KubeConfig Secret that Flux can reference.
```yaml
exportKubeConfig:
  # Set a meaningful context name
  context: default
  # Use a server URL that is accessible from the Flux controllers
  server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
  # Skip TLS verification when Flux connects to the vCluster
  insecure: true
  # Specify the Secret where the KubeConfig will be stored
  secret:
    name: vcluster-flux-kubeconfig
syncer:
  extraArgs:
    # Add a TLS SAN for the server URL to ensure certificate validity
    - --tls-san=vcluster-name.vcluster-namespace.svc.cluster.local
```
This configuration:

- Exports the virtual cluster KubeConfig as a Secret in the host namespace
- Makes the Secret available for Flux to use with the `spec.kubeConfig` field
- Uses a server URL that is accessible from the Flux controllers (replace `vcluster-name` and `vcluster-namespace` with your actual values)
- Sets `insecure: true` to automatically skip TLS certificate verification
- Adds a TLS SAN (Subject Alternative Name) that matches the server URL, which helps prevent certificate verification errors
The vCluster `exportKubeConfig` configuration creates a Secret with the KubeConfig data stored under the key `config`. When referring to this Secret in Flux resources, you must specify this key in the `secretRef.key` field, as shown in the examples below.
```yaml
# In a Flux HelmRelease
spec:
  kubeConfig:
    secretRef:
      name: vcluster-flux-kubeconfig
      key: config # Must match the key used in the vCluster-generated Secret
```
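After the virtual cluster starts, you can confirm the Secret was created and inspect the exported server URL (substitute your actual names; the `config` key is what Flux will read):

```bash
# Confirm the exported KubeConfig Secret exists in the host namespace
kubectl get secret vcluster-flux-kubeconfig -n vcluster-namespace

# Inspect the server URL that Flux will connect to
kubectl get secret vcluster-flux-kubeconfig -n vcluster-namespace \
  -o jsonpath='{.data.config}' | base64 -d | grep server
```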
When using vCluster with Flux, proper TLS certificate configuration is essential:

- Set `exportKubeConfig.insecure: true` in your vCluster configuration
- Configure proper TLS SANs with the `--tls-san` flag in the vCluster configuration
- Ensure the server URL matches the certificate's SAN
```yaml
# In your vCluster configuration
syncer:
  extraArgs:
    - --tls-san=vcluster-name.vcluster-namespace.svc.cluster.local
exportKubeConfig:
  server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
  insecure: true
```
See the Troubleshooting section for solutions to certificate issues.
## Deploy virtual clusters with Flux
**Git Repository: Create the vCluster Helm repository definition**

First, create a source for the vCluster Helm charts in your Git repository:

`clusters/sources/vcluster-repository.yaml`:

```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: vcluster
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.loft.sh
```

**Git Repository: Define your vCluster configuration**
Create a vCluster configuration file in your Git repository:
`clusters/production/vcluster-demo.yaml`:

```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: vcluster-demo
  namespace: vcluster-demo
spec:
  interval: 10m
  chart:
    spec:
      chart: vcluster
      version: "0.15.x"
      sourceRef:
        kind: HelmRepository
        name: vcluster
        namespace: flux-system
  values:
    # Configure TLS SAN for the certificate
    syncer:
      extraArgs:
        - --tls-san=vcluster-demo.vcluster-demo.svc.cluster.local
    exportKubeConfig:
      # Set a meaningful context name
      context: default
      # Use a server URL that matches the TLS SAN
      server: https://vcluster-demo.vcluster-demo.svc.cluster.local:443
      # Skip TLS verification when Flux connects to the vCluster
      insecure: true
      # Specify the Secret where the KubeConfig will be stored
      secret:
        name: vcluster-flux-kubeconfig
    sync:
      toHost:
        ingresses:
          enabled: true
    controlPlane:
      coredns:
        enabled: true
        embedded: true
      backingStore:
        etcd:
          embedded:
            enabled: true
```

You can include any standard vCluster configuration in the `values` section.

**Kubernetes Cluster: Apply the vCluster namespace**
Before applying the HelmRelease, ensure the namespace exists:

```bash
kubectl create namespace vcluster-demo
```

**Git Repository: Commit and push your changes**

```bash
git add clusters/
git commit -m "Add vCluster demo configuration"
git push
```

Flux detects the changes and deploys the vCluster according to your configuration.
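Rather than waiting for the next sync interval, you can trigger reconciliation manually and watch the release from the host cluster. The source name below assumes the default `flux-system` GitRepository created by `flux bootstrap`:

```bash
# Trigger an immediate sync of the Git source
flux reconcile source git flux-system

# Watch the vCluster HelmRelease until it reports Ready
flux get helmreleases -n vcluster-demo --watch
```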
## Deploy applications to virtual clusters
Once your vCluster is running, you can use Flux to deploy applications directly to the vCluster.
**Git Repository: Create a Helm repository source**

`vcluster-apps/sources/podinfo-repository.yaml`:

```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: vcluster-demo
spec:
  interval: 1h
  url: https://stefanprodan.github.io/podinfo
```

**Git Repository: Create a HelmRelease targeting the vCluster**
`vcluster-apps/apps/podinfo-app.yaml`:

```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: vcluster-demo
spec:
  chart:
    spec:
      chart: podinfo
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: podinfo
      version: '*'
  interval: 30m
  kubeConfig:
    secretRef:
      name: vcluster-flux-kubeconfig
      key: config
    # Skip TLS verification for the target cluster
    # Available in Flux v0.40.0 and later
    skipTLSVerify: true
  releaseName: podinfo
  targetNamespace: podinfo
  install:
    createNamespace: true
  values:
    ui:
      message: "Deployed by Flux to virtual cluster"
    ingress:
      enabled: true
      hosts:
        - host: podinfo.example.com
          paths:
            - path: /
              pathType: Prefix
```

**Handling TLS certificate verification**
The `kubeConfig` section references the Secret created by the vCluster using the `exportKubeConfig` setting. There are several approaches to handling TLS certificate verification:

- Recommended approach (Flux v0.40.0+): Use `skipTLSVerify: true` in the `kubeConfig` section as shown above, which tells Flux to skip certificate verification when connecting to the virtual cluster.
- Alternative approach: Configure both the TLS SAN and `insecure: true` in your vCluster configuration, as in the example above.
- If you still encounter certificate errors: Use a modified Secret created with the solution in the troubleshooting section:

```yaml
kubeConfig:
  secretRef:
    name: vcluster-flux-kubeconfig-modified # Use the modified Secret
    key: config
```
**Git Repository: Commit and push your changes**

```bash
git add vcluster-apps/
git commit -m "Add podinfo application for vCluster demo"
git push
```

**Virtual Cluster: Verify deployment**

Once Flux reconciles the changes, you can connect to your vCluster and verify that the application is deployed:

```bash
vcluster connect vcluster-demo -n vcluster-demo

# Check the deployment in the virtual cluster
kubectl get namespace podinfo
kubectl get pods -n podinfo
kubectl get ingress -n podinfo
```
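You can also check the release status from the host cluster, where the Flux controllers run:

```bash
# On the host cluster: confirm Flux reconciled the release into the vCluster
flux get helmreleases -n vcluster-demo
```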
## Manage multiple virtual clusters
When managing multiple virtual clusters with Flux, you can use Kustomize to organize your configurations.
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sources/vcluster-repository.yaml
  - development/vcluster-dev.yaml
  - staging/vcluster-staging.yaml
  - production/vcluster-prod.yaml
```
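To have Flux apply this kustomization, you can point a Flux `Kustomization` resource at the directory in your Git repository. This is a sketch; the GitRepository name and path are assumptions based on the layout above:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: vclusters
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters # assumed repository layout
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system # assumes the default bootstrap source
```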
## Bootstrap review environments with pre-installed Flux
A common scenario is having Flux already installed on the host cluster and wanting to leverage it for ephemeral review environments with virtual clusters. This approach follows GitOps principles while avoiding the need to install Flux separately for each environment.
### Use existing Flux for review environments
When Flux is already installed on your host cluster, you can create a GitOps workflow for review environments that:
- Uses the existing Flux installation on the host cluster
- Deploys virtual clusters for each review environment
- Leverages Flux to deploy applications to these virtual clusters
- Reduces overhead and speeds up environment bootstrapping
**Git Repository: Create a structure for review environments**

```
├── clusters/
│   ├── sources/
│   │   └── vcluster-repository.yaml   # HelmRepository for vCluster
│   └── reviews/
│       ├── review-env-template.yaml   # Template for new review environments
│       └── pr-123/                    # Directory for a specific PR review
│           └── vcluster.yaml          # vCluster definition for PR-123
└── apps/
    ├── sources/
    │   └── app-repository.yaml        # Application source repositories
    └── reviews/
        └── pr-123/                    # Apps for PR-123 environment
            └── deployment.yaml        # Application deployment targeting the PR-123 vCluster
```

**Git Repository: Create a template for review environments**
`clusters/reviews/review-env-template.yaml`:

```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: vcluster-${PR_NUMBER}
  namespace: review-${PR_NUMBER}
spec:
  interval: 10m
  chart:
    spec:
      chart: vcluster
      version: "0.15.x"
      sourceRef:
        kind: HelmRepository
        name: vcluster
        namespace: flux-system
  values:
    sync:
      toHost:
        ingresses:
          enabled: true
    exportKubeConfig:
      context: default
      server: https://kubernetes.default.svc.cluster.local:443
      secret:
        name: vcluster-${PR_NUMBER}-kubeconfig
```

**CI Pipeline: Create a CI workflow that generates environments**
CI workflow (conceptual):

```yaml
steps:
  - name: Checkout code
    uses: actions/checkout@v2
  - name: Create PR-specific vCluster config
    run: |
      export PR_NUMBER=${GITHUB_REF#refs/pull/}
      export PR_NUMBER=${PR_NUMBER%/merge}
      mkdir -p clusters/reviews/pr-${PR_NUMBER}
      # Generate the vCluster config from the template
      cat clusters/reviews/review-env-template.yaml | \
        sed "s/\${PR_NUMBER}/$PR_NUMBER/g" > \
        clusters/reviews/pr-${PR_NUMBER}/vcluster.yaml
  - name: Commit and push to GitOps repo
    run: |
      git add clusters/reviews/pr-${PR_NUMBER}
      git commit -m "Add review environment for PR #${PR_NUMBER}"
      git push
```

With this approach, your CI/CD pipeline creates the necessary configuration in your GitOps repository, and Flux (already running on the host cluster) automatically provisions the vCluster and deploys applications to it.
**Host Cluster: Existing Flux detects and applies changes**

The Flux controllers already running on your host cluster will:
- Detect the new vCluster configuration
- Create the required namespace
- Deploy the vCluster using the Helm chart
- Create the KubeConfig Secret
- Use the exported KubeConfig to deploy apps to the vCluster
This entire process follows GitOps principles, with your Git repository as the source of truth and Flux handling the reconciliation, all without requiring manual intervention or imperative commands.
For production environments, consider implementing automatic cleanup of review environments when PRs are closed or merged. This can be done by adding another CI workflow step that removes the corresponding directory from your GitOps repository.
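For example, a cleanup job could look like the following conceptual GitHub Actions sketch, which mirrors the creation workflow above (the directory path and commit flow are assumptions based on the earlier layout):

```yaml
# Conceptual: remove the review environment config when the PR closes
on:
  pull_request:
    types: [closed]

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Remove review environment config
        run: |
          git rm -r clusters/reviews/pr-${{ github.event.number }}
          git commit -m "Remove review environment for PR #${{ github.event.number }}"
          git push
```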
This pattern allows you to leverage an existing Flux installation rather than deploying Flux separately for each review environment, which significantly reduces overhead and bootstrap time.
For organizations managing a large number of virtual clusters, especially for dynamic ephemeral environments, vCluster Platform provides additional capabilities for virtual cluster lifecycle management and integrates well with GitOps workflows. It includes features for automatic creation of KubeConfig Secrets, management of access control, and simplified bootstrapping of virtual clusters with Flux.
## Troubleshoot
**Host Cluster**

- Verify the virtual cluster KubeConfig Secret exists with the correct format
- Check the Flux controller logs for errors
- Ensure Flux has the necessary permissions to access the Secret

```bash
kubectl logs -n flux-system deployment/source-controller
kubectl logs -n flux-system deployment/helm-controller
```

**Virtual Cluster**

- Verify that resources are being created in the virtual cluster
- Check that the `exportKubeConfig` setting is properly configured
- Ensure the server URL is reachable from the Flux controllers

```bash
kubectl get secret -n vcluster-namespace vcluster-flux-kubeconfig -o yaml
```
### Common Issues
#### TLS certificate verification errors
If you see TLS certificate verification errors in Flux controller logs like:
```
tls: failed to verify certificate: x509: certificate signed by unknown authority
```
This is a common issue when Flux attempts to connect to a vCluster, because the vCluster generates a self-signed certificate. Follow these solutions in order:
##### Solution 1: properly configure vCluster certificate SANs
The most reliable approach is to configure proper TLS SANs when deploying vCluster:
```yaml
syncer:
  extraArgs:
    - --tls-san=vcluster-name.vcluster-namespace.svc.cluster.local
exportKubeConfig:
  server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
  insecure: true
  secret:
    name: vcluster-flux-kubeconfig
```
This ensures the certificate includes the correct SAN for the service DNS name.
##### Solution 2: use a modified KubeConfig Secret
If you're still encountering issues, create a modified KubeConfig Secret with TLS verification disabled:
```bash
# Set your environment variables
NAMESPACE="vcluster-namespace"
VCLUSTER_NAME="vcluster-name"
KUBECONFIG_SECRET="vcluster-flux-kubeconfig"

# Create a temporary directory
TMPDIR=$(mktemp -d)
cd "$TMPDIR"

# Extract the original KubeConfig
kubectl get secret -n $NAMESPACE $KUBECONFIG_SECRET -o jsonpath='{.data.config}' | base64 -d > original-kubeconfig.yaml

# Extract the client certificate and key (the base64 values sit on the same line as the field names)
CLIENT_CERT=$(grep "client-certificate-data:" original-kubeconfig.yaml | awk '{print $2}')
CLIENT_KEY=$(grep "client-key-data:" original-kubeconfig.yaml | awk '{print $2}')

# Create a KubeConfig without certificate-authority-data and with insecure-skip-tls-verify enabled
cat > modified-kubeconfig.yaml << EOF
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://${VCLUSTER_NAME}.${NAMESPACE}.svc.cluster.local:443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: ${CLIENT_CERT}
    client-key-data: ${CLIENT_KEY}
EOF

# Create the modified Secret
kubectl create secret generic ${KUBECONFIG_SECRET}-modified -n $NAMESPACE --from-file=config=modified-kubeconfig.yaml

# Clean up
cd - && rm -rf "$TMPDIR"
```
Then update your Flux resource to use this Secret:
```yaml
spec:
  kubeConfig:
    secretRef:
      name: vcluster-flux-kubeconfig-modified
      key: config
```
##### Solution 3: use Flux's built-in TLS verification options
For newer versions of Flux (v0.40.0+), you can use Flux's native TLS verification options in your HelmRelease or Kustomization resources:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: vcluster-demo
spec:
  # Other fields...
  kubeConfig:
    secretRef:
      name: vcluster-flux-kubeconfig
      key: config
    # Skip TLS verification for the target cluster
    skipTLSVerify: true
```

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-deployment
  namespace: vcluster-demo
spec:
  # Other fields...
  kubeConfig:
    secretRef:
      name: vcluster-flux-kubeconfig
      key: config
    # Skip TLS verification for the target cluster
    skipTLSVerify: true
```
This approach has the advantage of not requiring you to modify the KubeConfig Secret manually while still resolving TLS certificate verification issues.
#### Connection refused errors
If you see "connection refused" errors in the Flux controller logs, it may indicate:
- The virtual cluster's API server is not accessible from Flux
- Network policies are blocking the communication
- The virtual cluster is not running or healthy
- The server URL in the KubeConfig is not correctly configured
You might see errors in the Flux controller logs like:
```
connect: connection refused
```
To troubleshoot:

1. Check if the virtual cluster is running and ready:

   ```bash
   kubectl get pods -n <vcluster-namespace>
   ```

2. Verify the server URL in your `exportKubeConfig` setting:

   ```bash
   kubectl get secret -n <vcluster-namespace> <kubeconfig-secret-name> -o jsonpath='{.data.config}' | base64 -d | grep server
   ```

3. Ensure the server URL is accessible from the Flux controllers. Using the service DNS name is generally more reliable:

   ```
   server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
   ```
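If the URL looks correct but connections still fail, you can test reachability directly from a throwaway pod on the host cluster. This is a sketch; the service hostname is a placeholder for your actual vCluster service:

```bash
# Test connectivity to the vCluster API server from inside the host cluster;
# even an HTTP 401/403 response proves the endpoint is reachable
kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never --command -- \
  curl -k https://vcluster-name.vcluster-namespace.svc.cluster.local:443/healthz
```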