This document is out of date. For current documentation, please see: ArgoCD Developer’s Guide
Summary
The goal of this document is to provide an introduction to GitOps, an emerging application delivery pattern that has been garnering considerable support, especially in the Kubernetes community.
Some key principles of GitOps:
- Git is the single source of truth for all application and infrastructure code.
- Everything is declarative and everything is in Git.
- Containers are the unit of deployment, chosen for their immutability.
- Pipelines are triggered by Git pull-requests.
- Deployments are orchestrated by the cluster (Kubernetes), not the CI system (Gitlab CI).
- Deployments are merely a cluster-driven reconciliation of current state (the cluster) with desired state (Git); see the manifest sketch below.
- CI & CD are distinct processes that are firewalled from one another.
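To make “desired state in Git” concrete, here is a minimal sketch of the kind of declarative Kubernetes manifest that would live in Git; the application name, namespace, and registry path are hypothetical:

```yaml
# Desired state, versioned in Git; the cluster continuously reconciles toward this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: apps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.1.0  # immutable, versioned artifact
```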
The sections below diagram the workflows for Continuous Integration (CI) and Continuous Delivery (CD), respectively. They’re separated to emphasize the “firewall” between the two.
Continuous Integration (CI) Pipeline
[Diagram: Continuous Integration (CI) pipeline]
The CI pipeline is triggered by a Git push, which is familiar and comfortable territory for developers and, in many organizations, already well established. The value-add here is to treat CI and CD as two distinct, autonomous workflows.
The CI workflow concludes with a tested software installation artifact, delivered securely to an artifact repository. Unfortunately, many solutions confuse matters by letting the CI system orchestrate (push) the deployment, something I and other GitOps proponents consider a security anti-pattern because it gives the build system privileged cluster access. By decoupling CI from CD, we make the system not only more secure but also easier to reason about.
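To illustrate that boundary, here is a minimal `.gitlab-ci.yml` sketch in which the pipeline deliberately ends at the artifact push. The stage names and test command are placeholders, and it assumes a runner with Docker available:

```yaml
# .gitlab-ci.yml (sketch): CI stops at the artifact; deployment is pulled from the cluster side.
stages:
  - test
  - publish

unit-test:
  stage: test
  script:
    - make test  # placeholder for the app's actual test entrypoint

publish-image:
  stage: publish
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Note what is absent: no `kubectl`, no `helm install`, no kubeconfig. That is the firewall.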
Next, we’ll move on to CD, where most of the work usually resides.
Continuous Delivery (CD) Pipeline
[Diagram: Continuous Delivery (CD) pipeline]
With CI focusing on the application code and resulting artifact, CD is free to focus on the runtime environment. In our case, this will be AWS Elastic Kubernetes Service (EKS), but I’ve built similar systems for non-containerized workflows using VMware vSphere with Chef Infra handling the orchestration. Again, the focus is on the workflow, not the technology.
More recently, in Kubernetes-centric environments, I’ve leveraged Weaveworks Flux, a Kubernetes operator whose responsibility is to detect any divergence between desired state (Git) and actual state (the Kubernetes cluster) and reconcile the two, as the orchestrator. This solution lets us tailor the upgrade policy to the requirements of each environment. Some example upgrade triggers:
- Manually update the Kubernetes deployment manifest with the new version (e.g. `image: myapp:1.1.1`) and `git commit && git push` to trigger action (sketched below).
- Tag the deployment artifact with the appropriate environment tag (e.g. `environment: dev`) and let metadata drive the process.
- Preconfigure environments to always pull the latest artifact automatically, as is often desired in dev/test environments.
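For the first trigger, a sketch of what the manual bump looks like in practice; this is an excerpt of a hypothetical deployment manifest in the config repo, not a complete file:

```yaml
# Excerpt of deployment.yaml in the config repo.
# Bump the tag, then `git commit && git push`; the in-cluster operator
# detects the drift from the running state and rolls out the new version.
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.1.1  # was :1.1.0
```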
The diagram below depicts a single view of the entire solution.
[Diagram: end-to-end CI/CD solution]
Application Delivery Model (RFC)
NOTE: These are mostly brainstorming notes at this point, as Adam and I sort out a delivery pipeline proposal.
Components
- Gitlab code storage
- Gitlab artifact storage
  - helm charts
  - container images
- Gitlab CI engine (`.gitlab-ci.yml`)
- Helm package manager
- Application code repos (Gitlab)
- (TBD) Application config repos (Gitlab)
- ArgoCD engine
- EKS Cluster(s)
Git Code Storage
The ArgoCD best practices document makes a compelling case for keeping app code and config separate.
I’m leaning toward a 1:1 app code/config pattern:
- application code repository (e.g. `winterlight`)
- application config repository (e.g. `winterlight-config`)
As to what goes where:
Code Repository
- source code
- tests
- Dockerfile
CI builds the app, runs unit tests, creates the Docker image, tests the image, and pushes the container image to the Gitlab artifact repository.
Config Repository
- Helm (Kubernetes manifests, metadata, per-environment values files (?))
- ArgoCD manifest?
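If we do adopt per-environment values files, the config repo might carry something like the following; the directory layout and keys are assumptions, not a settled convention:

```yaml
# winterlight-config: helm/winterlight/values-dev.yaml (hypothetical layout)
# Per-environment overrides, merged over the chart's default values.yaml at install time.
image:
  repository: registry.example.com/winterlight
  tag: latest      # dev tracks the latest artifact automatically
replicaCount: 1
```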
EKS
Propose running two EKS clusters to start:
- `ace-test` - for DevOps testing purposes
- `ace-prod` - for production deployments, promoted from `ace-test`
We’ll deploy applications to an `apps` Kubernetes namespace.
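If we provision with eksctl (a tooling assumption; we haven’t settled on this), the `ace-test` cluster definition might look roughly like the sketch below. The region and node sizing are placeholders:

```yaml
# eksctl ClusterConfig sketch for ace-test; region and sizing are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ace-test
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2
```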
Helm
We will use Helm to package up our applications. We need to decide on the following:
- Helm deployment strategy
- Helm packaging/versioning strategy
- Helm promotion strategy
CodeFresh published a best practices doc that provides options.
We’ll want to see how ArgoCD and Helm work together here. More in the ArgoCD section.
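One thing worth pinning down early: Helm distinguishes the chart’s own version from the version of the application it deploys, which bears directly on the chart/image coordination question below. A minimal `Chart.yaml` sketch (the `apiVersion: v2` format assumes Helm 3):

```yaml
# Chart.yaml: the chart version and the app version move independently.
apiVersion: v2
name: winterlight
description: Helm chart for the winterlight application
version: 0.3.0      # chart packaging version; bump on template/values changes
appVersion: "1.1.1" # container image version the chart deploys by default
```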
ArgoCD
Tentative plan is to run ArgoCD on each EKS cluster. We’ve also considered running a single ArgoCD system on a third K8s cluster (`eks-mgmt`), but Todd thinks this abandons the desired GitOps pull deployment pattern.
We’ll therefore probably start with ArgoCD running on each cluster in an `argocd` namespace.
Events
- DevOps updates application code and creates PR
- DevOps updates application code on master
- DevOps updates application config (helm) and creates PR
- DevOps updates application config (helm) on master
- ArgoCD detects a Helm chart version bump
- ArgoCD detects an application container version bump
Questions
- Does every container image update require a Helm chart update? How to coordinate that if they’re stored separately?
- Can we employ per-environment Helm values files? Where would we keep them? Can we configure ArgoCD to reference them when performing an install (e.g. `helm install -f myvals.yaml ./mychart`)?
- Can ArgoCD be configured to poll our (helm/container) artifact repositories and auto-install the latest version? What about semantic versioning constraints (`~> 1.1`)? If it can, will it write the version bump back to the config repository so we have tracking (like Flux does)?
- ArgoCD applications are driven by manifests. Where do we store these? With the app code? With the app config code? Separately? (A sketch of such a manifest follows this list.)
- If we separate app config and app code into separate repos:
- Dockerfile goes with app
- Where does helm chart go? See question above about helm/app updates
- Where does ArgoCD manifest go?
- Do we have an app config monorepo, or do we go 1:1 as with app code (e.g. `winterlight` and `winterlight-config`)?
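To ground the manifest questions above, here is a sketch of an ArgoCD `Application` pointing at the config repo and referencing a per-environment values file. The repo URL, paths, and names are hypothetical, and whether to auto-sync is exactly the policy question raised above:

```yaml
# Hypothetical ArgoCD Application for the dev environment.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: winterlight-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/ace/winterlight-config.git
    targetRevision: master
    path: helm/winterlight
    helm:
      valueFiles:
        - values-dev.yaml   # per-environment overrides (see Helm question above)
  destination:
    server: https://kubernetes.default.svc  # the cluster ArgoCD runs in
    namespace: apps
  syncPolicy:
    automated: {}  # auto-sync on drift; roughly the Flux-style reconciliation we want
```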