Starting stack on AWS

Infrastructure

  1. VPC - a VPC in the region you picked, spanning three AZs. Alongside the VPC, public and private subnets are provisioned, plus dedicated subnets for any databases you may later provision. These are associated with security groups, ingress and egress rules, public IPs, and Elastic IPs (EIPs).
  2. EKS - a Kubernetes cluster with a single managed node group that scales up and down within the limits you specify. The nodes (EC2 instances) are provisioned through an Auto Scaling Group (ASG), which can run on-demand instances or, for cost savings in non-critical environments, Spot instances. By default, EBS is used for storage (the default is going to change to EFS for cross-AZ access).
  3. Load balancer - a single NLB is used per cluster, provisioned through the ingress-nginx controller. This provides maximum flexibility to expose services on different endpoints and ports (e.g., two different services on https://app.example.com and tcp://app.example.com:5432). A sketch of the full stack follows this list.
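
As a rough sketch, the stack above could be expressed with the AWS CDK in TypeScript along these lines. This is an illustration under assumptions, not Argonaut's actual provisioning code: the stack name, Kubernetes version, instance type, scaling limits, and the prod/postgres TCP route are all placeholders.

```ts
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import { Construct } from 'constructs';

export class StarterStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // 1. VPC across 3 AZs: public subnets, private subnets with egress,
    //    and isolated subnets reserved for databases.
    const vpc = new ec2.Vpc(this, 'Vpc', {
      maxAzs: 3,
      subnetConfiguration: [
        { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
        { name: 'private', subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
        { name: 'database', subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
      ],
    });

    // 2. EKS cluster named after the environment, with one managed
    //    node group backed by an ASG.
    const cluster = new eks.Cluster(this, 'Cluster', {
      vpc,
      clusterName: 'prod',
      version: eks.KubernetesVersion.V1_27,
      defaultCapacity: 0, // the node group is added explicitly below
    });

    cluster.addNodegroupCapacity('default', {
      minSize: 1,
      desiredSize: 2,
      maxSize: 5, // the ASG scales within these limits
      instanceTypes: [new ec2.InstanceType('t3.large')],
      // SPOT trades availability for cost in non-critical environments.
      capacityType: eks.CapacityType.ON_DEMAND,
    });

    // 3. A single NLB per cluster, created by the ingress-nginx
    //    controller's Service of type LoadBalancer.
    cluster.addHelmChart('IngressNginx', {
      chart: 'ingress-nginx',
      repository: 'https://kubernetes.github.io/ingress-nginx',
      namespace: 'tools',
      values: {
        controller: {
          service: {
            annotations: {
              'service.beta.kubernetes.io/aws-load-balancer-type': 'nlb',
            },
          },
        },
        // Route a raw TCP port through the same NLB, e.g. Postgres on
        // tcp://app.example.com:5432 ("<namespace>/<service>:<port>").
        tcp: { '5432': 'prod/postgres:5432' },
      },
    });
  }
}
```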

Services

These services are provisioned by default:

  1. Prometheus for metric collection.
  2. cert-manager for provisioning Let's Encrypt TLS certificates (see the sketch after this list).
  3. cluster-autoscaler to scale the ASGs up and down with load, within the configured limits.
  4. node-exporter and event-exporter to surface node-level metrics and Kubernetes event diagnostics.
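
Continuing the CDK sketch above, cert-manager could be installed and pointed at Let's Encrypt roughly like this. The issuer name and contact email are placeholders, and the chart and namespace choices are assumptions rather than Argonaut's exact setup:

```ts
// Install cert-manager into the tools namespace (installCRDs pulls in
// the Certificate/Issuer CRDs the chart needs).
cluster.addHelmChart('CertManager', {
  chart: 'cert-manager',
  repository: 'https://charts.jetstack.io',
  namespace: 'tools',
  values: { installCRDs: true },
});

// A ClusterIssuer that solves ACME HTTP-01 challenges through the
// nginx ingress class, so any Ingress can request a certificate.
cluster.addManifest('LetsEncryptIssuer', {
  apiVersion: 'cert-manager.io/v1',
  kind: 'ClusterIssuer',
  metadata: { name: 'letsencrypt-prod' },
  spec: {
    acme: {
      server: 'https://acme-v02.api.letsencrypt.org/directory',
      email: 'ops@example.com', // placeholder contact address
      privateKeySecretRef: { name: 'letsencrypt-prod' },
      solvers: [{ http01: { ingress: { class: 'nginx' } } }],
    },
  },
});
```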

Context

Each functioning Argonaut environment consists of at least one Kubernetes cluster. Take an example environment named prod: its cluster carries the same name, prod, in the chosen region. The cluster has two namespaces (sketched below):

  1. tools, for third-party tools like Prometheus, Grafana, cluster-autoscaler, etc.
  2. prod, where all applications are deployed.
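
A minimal sketch of that layout, continuing the CDK example from earlier (the namespace names come from this section; how Argonaut actually creates them is not specified here):

```ts
// The two namespaces every environment starts with.
for (const name of ['tools', 'prod']) {
  cluster.addManifest(`${name}-namespace`, {
    apiVersion: 'v1',
    kind: 'Namespace',
    metadata: { name },
  });
}
```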

Each application deployed through Argonaut is essentially a Helm chart that is automatically configured according to the requirements in its service descriptor.
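
As an illustration of that mapping, a hypothetical service descriptor could be rendered into Helm values and released into the prod namespace, continuing the CDK cluster from above. The descriptor fields, chart name, and repository are invented for the example; Argonaut's actual chart and schema may differ:

```ts
// A hypothetical service descriptor for an app named "api".
const descriptor = {
  name: 'api',
  image: 'registry.example.com/api:1.4.2', // placeholder image
  port: 8080,
  replicas: 2,
  host: 'api.example.com', // placeholder hostname
};

// Render the descriptor into chart values and release it into the
// environment's application namespace ("prod").
cluster.addHelmChart(`${descriptor.name}-app`, {
  chart: 'app', // a generic application chart (assumed)
  repository: 'https://charts.example.com', // placeholder repo
  release: descriptor.name,
  namespace: 'prod',
  values: {
    image: descriptor.image,
    replicaCount: descriptor.replicas,
    service: { port: descriptor.port },
    ingress: { enabled: true, host: descriptor.host },
  },
});
```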