February 7, 2026 | 14 min read

GitOps: ArgoCD + FluxCD, Better Together

Why we use two GitOps tools on every managed Kubernetes cluster - FluxCD for platform operations and ArgoCD for customer workloads. How FluxCD's native Helm Releases, post-renderers, and Kustomization model give us the day 2 operations we need at scale.

By Jan Fuhrer

"ArgoCD or FluxCD?" is one of the most common questions we get from customers evaluating our managed Kubernetes platform. Our answer: we use both. On every cluster.

This is a deliberate architecture choice. FluxCD manages the platform layer. ArgoCD gives customers a self-service UI for their workloads. Each tool does what it is best at, and they stay out of each other's way.

We have been running this dual-GitOps setup across 50+ managed Kubernetes clusters since 2022. This post explains why the split exists, what makes FluxCD essential for day 2 operations at our scale, and how the two tools complement each other.

Our architecture: FluxCD for platform, ArgoCD for customers

FluxCD (managed by Natron, Git source: natron-internal/platform-config):

Cilium CNI
cert-manager
Prometheus + Grafana + Loki
Alertmanager
Kyverno policies
Velero backups
External Secrets Operator
Blackbox Exporter
Ingress Controller

ArgoCD (managed by the customer, Git source: customer-org/application-deployments):

Application deployments
Helm releases & Kustomize
Environment promotion (dev/staging/prod)
Deployment status & sync state
Rollbacks & manual syncs
Team-scoped projects & RBAC

FluxCD: the platform layer

FluxCD manages everything Natron is responsible for: Cilium CNI, cert-manager, the observability stack (Prometheus, Grafana, Loki, Alertmanager), Velero backups, Kyverno, External Secrets Operator, Ingress controllers, and more.

FluxCD reconciles from our internal Git repository. Customers never see this repo. When we push a Cilium upgrade, FluxCD detects the change and reconciles silently. No UI, no manual sync, no notification to the customer.
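The wiring behind this is ordinary Flux: a GitRepository source pointing at the internal repo and a Kustomization that reconciles from it. A minimal sketch, assuming hypothetical repo URL, paths, and secret names (the real structure is not published in this post):

```yaml
# Sketch of a platform bootstrap source. The URL, branch, path, and
# secret name are illustrative placeholders, not Natron's real config.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@example.com/natron-internal/platform-config
  ref:
    branch: main
  secretRef:
    name: platform-config-deploy-key   # read-only deploy key
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/example-cluster
  prune: true   # remove resources deleted from Git
```

Because the deploy key is read-only and scoped to this repo, the customer has no path to the platform definitions, and FluxCD needs no inbound access at all.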

ArgoCD: the application layer

ArgoCD manages everything the customer deploys. Their microservices, APIs, web applications, workers, CronJobs. Each team gets a scoped ArgoCD project with RBAC and sees only their own applications.

Developers need a UI to see sync status, view diffs, and trigger rollbacks. ArgoCD's AppProject model gives each team an isolated, self-service view. This is the right tool for application delivery.
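A team-scoped AppProject looks roughly like this. This is a hedged sketch: the project name, repo URL, namespace glob, and SSO group are assumptions for illustration, not our actual configuration:

```yaml
# Hypothetical AppProject scoping one team to its own repo and namespaces.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-payments
  namespace: argocd
spec:
  description: Payments team applications
  sourceRepos:
    - https://github.com/customer-org/application-deployments
  destinations:
    - namespace: payments-*               # only this team's namespaces
      server: https://kubernetes.default.svc
  clusterResourceWhitelist: []            # no cluster-scoped resources for app teams
  roles:
    - name: developer
      policies:
        - p, proj:team-payments:developer, applications, sync, team-payments/*, allow
      groups:
        - payments-team                   # mapped from the customer's SSO
```

The empty `clusterResourceWhitelist` is the important line: application teams can deploy namespaced workloads but cannot touch CRDs, cluster roles, or anything platform-level.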

Why FluxCD is essential for managed services

This is the part most ArgoCD-vs-FluxCD comparisons miss. For a managed services provider operating 50+ clusters with dozens of platform services per cluster, FluxCD has operational advantages that fundamentally shape how we work.

Native Helm Releases

FluxCD creates actual Helm Release objects in the cluster. When you run helm list, you see every release FluxCD manages. When you run helm get values <release>, you see the active configuration. Standard Helm tooling works because FluxCD speaks native Helm.

ArgoCD does not work this way. It renders Helm charts and applies the resulting manifests, but it does not create Helm Releases. The cluster has no record of chart versions or active values in Helm's native format. Troubleshooting means going through the ArgoCD UI or API instead of standard helm commands.

For day 2 operations, this distinction goes beyond convenience. Because FluxCD manages real Helm Releases, we can decouple a service from FluxCD at any time and take over manually. This matters during migrations: if we need to move a service to a different chart, restructure how it is deployed, or hand it off to a customer's own tooling, we suspend the FluxCD HelmRelease and the native Helm Release remains in the cluster, fully functional, with its history intact. We can then helm upgrade it manually, migrate it to a different management approach, or let another tool adopt it. There is no proprietary state to untangle.
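The decoupling itself is one field. Setting `spec.suspend` on the HelmRelease (in Git, or temporarily via `flux suspend helmrelease example-service`) stops reconciliation while the native Helm Release keeps running; the resource name here mirrors the example used later in this post:

```yaml
# With suspend set, FluxCD leaves the release alone. The native Helm
# Release stays in the cluster with its history, so standard
# `helm upgrade` / `helm rollback` work against it directly.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: example-service
spec:
  suspend: true
  chart:
    spec:
      chart: example
      version: "2.x"
```

Removing the field (or running `flux resume helmrelease example-service`) hands control back to FluxCD, which reconciles the release to whatever Git declares.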

With ArgoCD-managed resources, this kind of decoupling is harder. The manifests exist in the cluster, but there is no Helm Release to hand off. You are either in ArgoCD or you are re-creating the deployment from scratch.

Post-renderers: ship fixes without waiting for upstream

Many Helm charts we deploy do not support every configuration option we need. Security contexts, host aliases, additional pod labels, pod annotations for monitoring, network policy adjustments. These are edge cases that matter when you are hardening and monitoring every service to the same standard across all clusters.

We contribute these improvements back to upstream chart maintainers. But contributions take time to review, merge, and release. We cannot wait weeks for an upstream merge when a customer cluster needs a security fix now.

FluxCD's HelmRelease resource supports post-renderers. A post-renderer takes the fully rendered Kubernetes manifests from a Helm chart and applies Kustomize patches before they reach the cluster. This lets us inject security contexts, add pod labels for our monitoring stack, set host aliases, or patch any field in any resource the chart produces.

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: example-service
spec:
  chart:
    spec:
      chart: example
      version: "2.x"
  postRenderers:
    - kustomize:
        patches:
          - target:
              kind: Deployment
            patch: |
              - op: add
                path: /spec/template/spec/securityContext
                value:
                  runAsNonRoot: true
                  fsGroup: 65534
          - target:
              kind: Deployment
            patch: |
              - op: add
                path: /spec/template/metadata/labels/monitoring
                value: "enabled"

This is how we maintain a consistent security and observability baseline across every managed service, regardless of what the upstream Helm chart supports today. When our upstream contribution gets merged, we remove the post-renderer patch. Clean and reversible.

Multi-cluster scaling with Kustomizations

Managing platform services across 50+ clusters means dealing with a matrix of configurations: different cloud providers, different Kubernetes versions, different customer requirements, shared baselines.

FluxCD's Kustomization resource maps directly to this problem. We structure our platform repo with a base layer of service definitions and per-cluster overlays that patch in specific values. Kustomize's strategic merge patches and JSON patches let us express "this cluster is like the baseline, except for these three differences" without duplicating entire HelmRelease definitions.
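A per-cluster overlay then becomes a short kustomization.yaml. The layout and the patched field below are hypothetical, but they show the shape: inherit the shared base, override exactly one value on one HelmRelease:

```yaml
# clusters/example-cluster/kustomization.yaml (illustrative layout):
# everything comes from the base except the fields patched here.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/platform
patches:
  - target:
      kind: HelmRelease
      name: ingress-nginx        # hypothetical release name
    patch: |
      - op: replace
        path: /spec/values/controller/replicaCount
        value: 4                 # this cluster needs more ingress capacity
```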

Combined with FluxCD's dependency ordering (Kustomization A depends on Kustomization B), we can express complex rollout sequences: CRDs before operators, operators before their custom resources, network policies before workloads.
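Dependency ordering is declared with `dependsOn` on the Kustomization itself. A sketch with illustrative layer names:

```yaml
# The operators layer will not reconcile until the crds layer is Ready,
# so CRDs always exist before the controllers that serve them.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: operators
  namespace: flux-system
spec:
  dependsOn:
    - name: crds            # gate on the CRD layer
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./operators
  prune: true
```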

ArgoCD has ApplicationSets for multi-cluster targeting, but the patching and merging model is not as granular. When you need to override a specific field in a specific HelmRelease on a specific cluster, Kustomize overlays in FluxCD handle this natively. This is the difference between "deploy the same thing everywhere" and "deploy a consistent baseline with controlled variations", which is what managed services actually require.

Day 2 operations in practice

Scenario: A Helm chart upgrade breaks a CRD. FluxCD detects the failed helm upgrade, marks the HelmRelease as failed, and keeps the previous revision running. The platform engineer sees the failure in kubectl get helmrelease -A, checks helm history <release>, and identifies the breaking change. The fix goes into Git and FluxCD reconciles. If the situation requires manual intervention, we suspend the HelmRelease and work directly with the native Helm Release using standard tooling.

Scenario: Upstream chart lacks a required security context. We add a post-renderer patch to the HelmRelease, push to Git, FluxCD applies it within 60 seconds. The patch is version-controlled, reviewable, and scoped to exactly the fields we need to change. We open a PR upstream with the proper chart change. When it is merged and released, we bump the chart version and remove the post-renderer.

Scenario: New cluster onboarding. We create a cluster overlay directory, reference the shared base Kustomizations, add cluster-specific patches (cloud provider config, node selectors, resource limits), and push. FluxCD bootstraps the entire platform stack in the correct dependency order. A new cluster goes from empty to production-ready through a single Git commit.

Why the split, not just one tool?

Customer Applications (ArgoCD, customer team): microservices, APIs, web apps, workers, CronJobs.

Platform Services (FluxCD, Natron): observability, security, networking, backups, policies.

Cluster Infrastructure (FluxCD, Natron): CNI, CSI, node config, cluster addons, CRDs.

The responsibility boundary runs between the application layer and everything below it. Natron manages via FluxCD: cluster upgrades, CNI + networking, the observability stack, backup + restore, TLS certificates, the policy engine, secret sync, and node management. The customer manages via ArgoCD: app deployments, release promotion, config & secrets, scaling decisions, feature flags, CI pipelines, app monitoring, and rollbacks.

FluxCD manages ArgoCD. ArgoCD is a platform component. With FluxCD managing it, ArgoCD upgrades are a version bump in a HelmRelease, not a self-referential reconciliation. This alone justified the split.

Different audiences. Platform engineers work through Git and kubectl. Application developers want a UI with sync status and rollback buttons. One tool cannot serve both workflows without becoming overly complex.

Different blast radii. A bad platform change affects every workload. A bad application change affects one team. Different tools mean independent failure domains: if ArgoCD goes down, the platform stays healthy. If FluxCD goes down, applications keep running.

Different change cadences. Platform components change weekly or monthly. Applications change daily or hourly. Separate reconciliation loops mean neither blocks the other.

How we set this up

When Natron updates the platform: the platform repo is updated, FluxCD detects the change, and the cluster reconciles silently.

When a customer deploys an application: the app repo is updated, ArgoCD shows the diff, and the team syncs via the UI.

On every managed cluster, the bootstrap works like this:

  1. FluxCD is installed first. It bootstraps from our internal platform Git repo. All platform services are defined as FluxCD Kustomizations and HelmReleases.
  2. ArgoCD is one of those platform services. FluxCD installs and manages ArgoCD, including its versions, configuration, and RBAC.
  3. ArgoCD connects to customer repos. We create scoped AppProjects per team. The customer's CI/CD pipeline pushes manifests to their repo, and ArgoCD syncs from there.

The two tools never manage the same resources. FluxCD owns platform namespaces (flux-system, monitoring, cert-manager, kyverno). ArgoCD owns customer application namespaces. Kyverno policies enforce this boundary.
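One way to express such a boundary in Kyverno, sketched under assumptions: the namespace list matches the ones named above, but the tracking label (`app.kubernetes.io/instance`, ArgoCD's default) and the policy itself are illustrative, not our exact rules:

```yaml
# Hypothetical guardrail: reject ArgoCD-tracked resources that target
# platform namespaces. ArgoCD labels managed resources with
# app.kubernetes.io/instance by default; any such resource in a
# platform namespace is denied at admission.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: protect-platform-namespaces
spec:
  validationFailureAction: Enforce
  rules:
    - name: block-argocd-in-platform
      match:
        any:
          - resources:
              kinds: ["*"]
              namespaces:
                - flux-system
                - monitoring
                - cert-manager
                - kyverno
              selector:
                matchLabels:
                  app.kubernetes.io/instance: "*"
      validate:
        message: "ArgoCD-managed resources are not allowed in platform namespaces."
        deny: {}
```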

| Aspect | FluxCD | ArgoCD |
|---|---|---|
| Primary use case | Infrastructure & platform automation | Application delivery & promotion |
| UI | CLI-only (operators do not need a UI) | Full web UI (developers need visibility) |
| Reconciliation | Pull-based, event-driven, silent | Pull-based with sync status dashboard |
| Multi-tenancy | Namespace-scoped Kustomizations | AppProjects with RBAC per team |
| Helm support | HelmRelease CRD, native Helm Releases | Renders charts to manifests (no Helm Release objects) |
| Drift detection | Automatic correction, no notification needed | Visual diff in UI, manual or auto sync |
| Access model | No UI to secure, cluster-internal only | SSO/OIDC, team-scoped dashboards |

Explore further

This GitOps architecture is part of our broader managed Kubernetes platform. For multi-tenant setups, see how we handle tenant onboarding.

If you are thinking about your GitOps strategy for a growing platform, schedule a call. We will walk through your current setup and see where the boundaries should be.

About the author

Jan Fuhrer

Platform Engineer and Architect at Natron Tech, designing GitOps workflows and platform automation for managed Kubernetes across Switzerland.

The best interface between two teams is a Git repository, not a shared dashboard.
