Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be un-published in regular garbage-collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible after the AMI has been un-published.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically, with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
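    As a sketch of what disabling looks like, one documented approach is setting SERVER=disabled in /etc/flatcar/update.conf; check the update strategies documentation before relying on this. A Butane config could ship that file:

```yaml
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/flatcar/update.conf
      mode: 0644
      contents:
        inline: |
          SERVER=disabled
```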

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3493.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-02249cea912acc475 Launch Stack
    HVM (arm64) ami-0e1ab2ca7245825b5 Launch Stack
    ap-east-1 HVM (amd64) ami-03712d20d7cf937c6 Launch Stack
    HVM (arm64) ami-06cb52bacbeb1f61e Launch Stack
    ap-northeast-1 HVM (amd64) ami-077d5923b2b488e1a Launch Stack
    HVM (arm64) ami-0bbac0b95247a2bf7 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0bb8be1e4de0292dd Launch Stack
    HVM (arm64) ami-027eb161207271736 Launch Stack
    ap-south-1 HVM (amd64) ami-0098e6fa7fe671565 Launch Stack
    HVM (arm64) ami-01aaa082a7b03e139 Launch Stack
    ap-southeast-1 HVM (amd64) ami-015e86aa8b5f4d124 Launch Stack
    HVM (arm64) ami-0f395fa6cad33b760 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0f1c5c999ebd2bc9a Launch Stack
    HVM (arm64) ami-0e4ec90e059a3a022 Launch Stack
    ap-southeast-3 HVM (amd64) ami-08d478cf9bd3e54bc Launch Stack
    HVM (arm64) ami-0c9aae99a7fb420ea Launch Stack
    ca-central-1 HVM (amd64) ami-021848f664ef1a023 Launch Stack
    HVM (arm64) ami-02159de83fce571e0 Launch Stack
    eu-central-1 HVM (amd64) ami-0863242bd9c3b80cc Launch Stack
    HVM (arm64) ami-03d1c1d938007db36 Launch Stack
    eu-north-1 HVM (amd64) ami-0da4ef78be3cec17f Launch Stack
    HVM (arm64) ami-001a469c02c451821 Launch Stack
    eu-south-1 HVM (amd64) ami-0a509419b7f8e368b Launch Stack
    HVM (arm64) ami-0700c47cbb9bf93bf Launch Stack
    eu-west-1 HVM (amd64) ami-05646a9c305ee7d50 Launch Stack
    HVM (arm64) ami-099e8792dc8441e5c Launch Stack
    eu-west-2 HVM (amd64) ami-00004abfd815ee26f Launch Stack
    HVM (arm64) ami-0c2953eca7a7b3f50 Launch Stack
    eu-west-3 HVM (amd64) ami-03d870488150eb3f4 Launch Stack
    HVM (arm64) ami-0f7fa1e8c95317eca Launch Stack
    me-south-1 HVM (amd64) ami-02edde594ff5fc2b3 Launch Stack
    HVM (arm64) ami-062d7c9e0f35afd8d Launch Stack
    sa-east-1 HVM (amd64) ami-082c9cd42e46d163c Launch Stack
    HVM (arm64) ami-024944ec7fe69bfd5 Launch Stack
    us-east-1 HVM (amd64) ami-03b4fb8db27e403a3 Launch Stack
    HVM (arm64) ami-0edc7d3a1e2d6b9d9 Launch Stack
    us-east-2 HVM (amd64) ami-0c4a7b59b2b435ebf Launch Stack
    HVM (arm64) ami-0403e4c14111790f3 Launch Stack
    us-west-1 HVM (amd64) ami-045cca654430d2ea6 Launch Stack
    HVM (arm64) ami-0cdeb31626706d235 Launch Stack
    us-west-2 HVM (amd64) ami-06caba2a4eba64e5f Launch Stack
    HVM (arm64) ami-0e4d400e3b30cfc1c Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3446.1.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0b31bf26150cad61b Launch Stack
    HVM (arm64) ami-0f192b20a06e6b5fe Launch Stack
    ap-east-1 HVM (amd64) ami-035f0d254c6ff3210 Launch Stack
    HVM (arm64) ami-0a288044783adf282 Launch Stack
    ap-northeast-1 HVM (amd64) ami-027192ba6cf793043 Launch Stack
    HVM (arm64) ami-091426b1f5d7405fa Launch Stack
    ap-northeast-2 HVM (amd64) ami-0127a1302abb98151 Launch Stack
    HVM (arm64) ami-0e894e902039ac2dd Launch Stack
    ap-south-1 HVM (amd64) ami-00f1a48351f4b3b96 Launch Stack
    HVM (arm64) ami-0cd8e38ed78b38431 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0c49e1e72a3cc0fe9 Launch Stack
    HVM (arm64) ami-00ba86a40665f282f Launch Stack
    ap-southeast-2 HVM (amd64) ami-0fe46c40342ed0bfc Launch Stack
    HVM (arm64) ami-03f2e437ad1ee7ddf Launch Stack
    ap-southeast-3 HVM (amd64) ami-04a4957fbc5019b6c Launch Stack
    HVM (arm64) ami-06b10faacfb8f0b26 Launch Stack
    ca-central-1 HVM (amd64) ami-05a7aa59863ae6f73 Launch Stack
    HVM (arm64) ami-07cf2906b6e64459c Launch Stack
    eu-central-1 HVM (amd64) ami-088201e5514c464bd Launch Stack
    HVM (arm64) ami-0a88905372b0b7719 Launch Stack
    eu-north-1 HVM (amd64) ami-07de32333d12a8fe5 Launch Stack
    HVM (arm64) ami-05566009b812b250c Launch Stack
    eu-south-1 HVM (amd64) ami-032737efbbd22d343 Launch Stack
    HVM (arm64) ami-0b959d27cc8ed2685 Launch Stack
    eu-west-1 HVM (amd64) ami-0f9035ac217f89c45 Launch Stack
    HVM (arm64) ami-00402259bd7943bab Launch Stack
    eu-west-2 HVM (amd64) ami-0fde4f61f85bb21d9 Launch Stack
    HVM (arm64) ami-03d0caadcbd0ab87f Launch Stack
    eu-west-3 HVM (amd64) ami-094ee4c569344949a Launch Stack
    HVM (arm64) ami-00c3344444a99b80f Launch Stack
    me-south-1 HVM (amd64) ami-025c3ed147ae30d9f Launch Stack
    HVM (arm64) ami-0e90aa6b4ff9b9af1 Launch Stack
    sa-east-1 HVM (amd64) ami-0ef0f5276e25c6277 Launch Stack
    HVM (arm64) ami-064ea1853801c69a7 Launch Stack
    us-east-1 HVM (amd64) ami-08c197adfdf25f439 Launch Stack
    HVM (arm64) ami-03c6374dac9f3babd Launch Stack
    us-east-2 HVM (amd64) ami-0acc385b11427d3de Launch Stack
    HVM (arm64) ami-057df76bf42528f18 Launch Stack
    us-west-1 HVM (amd64) ami-073f1bba02f84d439 Launch Stack
    HVM (arm64) ami-003917edf7139658e Launch Stack
    us-west-2 HVM (amd64) ami-0a35505586f5bbeaa Launch Stack
    HVM (arm64) ami-0a575ff35d8d00f12 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3374.2.3.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-017537ef6849de44d Launch Stack
    HVM (arm64) ami-01ff8783dfb0db541 Launch Stack
    ap-east-1 HVM (amd64) ami-0a141c6f27004b1f3 Launch Stack
    HVM (arm64) ami-04ee6c0833abdd50e Launch Stack
    ap-northeast-1 HVM (amd64) ami-0c2fa14b82e234a34 Launch Stack
    HVM (arm64) ami-024c9fa53656dc523 Launch Stack
    ap-northeast-2 HVM (amd64) ami-029661abb9255012a Launch Stack
    HVM (arm64) ami-0a7b6c53a103ceaf9 Launch Stack
    ap-south-1 HVM (amd64) ami-064cfad0364a8bd99 Launch Stack
    HVM (arm64) ami-058c9c3d3ad9ec9dc Launch Stack
    ap-southeast-1 HVM (amd64) ami-04e82d913e66f3cda Launch Stack
    HVM (arm64) ami-00b99f91620087b20 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0670fd4a6985bc105 Launch Stack
    HVM (arm64) ami-0af1bf7f291ce914b Launch Stack
    ap-southeast-3 HVM (amd64) ami-0702ca435f3cfbccc Launch Stack
    HVM (arm64) ami-0c3b70099b04bf10d Launch Stack
    ca-central-1 HVM (amd64) ami-0f0133f14aee38efd Launch Stack
    HVM (arm64) ami-00ef75087c8c671c0 Launch Stack
    eu-central-1 HVM (amd64) ami-0f87be0cd3fde765c Launch Stack
    HVM (arm64) ami-0e0d8151bf00a8fba Launch Stack
    eu-north-1 HVM (amd64) ami-02e0d6b648212772a Launch Stack
    HVM (arm64) ami-03d569246b1e09bb1 Launch Stack
    eu-south-1 HVM (amd64) ami-0e7a4d2074f0de2af Launch Stack
    HVM (arm64) ami-00e4e4db9d70de2ea Launch Stack
    eu-west-1 HVM (amd64) ami-0560452bb3634d3b3 Launch Stack
    HVM (arm64) ami-069ea0976f33515d5 Launch Stack
    eu-west-2 HVM (amd64) ami-0be3c28fb62cf569d Launch Stack
    HVM (arm64) ami-0e610756bb4b04028 Launch Stack
    eu-west-3 HVM (amd64) ami-030c7f405dae0221c Launch Stack
    HVM (arm64) ami-02117c3bc1ecf378c Launch Stack
    me-south-1 HVM (amd64) ami-013d464d537122592 Launch Stack
    HVM (arm64) ami-0846f22bba36238c1 Launch Stack
    sa-east-1 HVM (amd64) ami-0a142d881018b6993 Launch Stack
    HVM (arm64) ami-01da7f3926f10f110 Launch Stack
    us-east-1 HVM (amd64) ami-0e9246e69476cae15 Launch Stack
    HVM (arm64) ami-0cd9c13c6f6f4b5ba Launch Stack
    us-east-2 HVM (amd64) ami-05221e4bd41c2aca2 Launch Stack
    HVM (arm64) ami-02729d8e37dc5db99 Launch Stack
    us-west-1 HVM (amd64) ami-087ba0102ab3671e9 Launch Stack
    HVM (arm64) ami-00afdfd96eb71252e Launch Stack
    us-west-2 HVM (amd64) ami-085f581307a3a975d Launch Stack
    HVM (arm64) ami-08b56f1eef819d6f4 Launch Stack

    AWS China AMIs maintained by Giant Swarm

    The following AMIs are not part of the official Flatcar Container Linux release process and may lag behind (query version).

    View as json feed: amd64
    EC2 Region AMI Type AMI ID CloudFormation

    CloudFormation will launch a cluster of Flatcar Container Linux machines with a security and autoscaling group.

    Container Linux Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            [Service]
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            [Install]
            WantedBy=multi-user.target

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
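    Butane’s flatcar variant 1.0.0 produces an Ignition v3.3.0 config. As a hedged sanity check before pasting the result into “User Data” (a stand-in ignition.json is created here so the snippet is self-contained):

```shell
# Create a stand-in ignition.json; in practice this is the transpiler's output.
cat > ignition.json <<'EOF'
{"ignition": {"version": "3.3.0"}}
EOF
# Confirm it is valid JSON and report the Ignition spec version.
python3 -c 'import json; print(json.load(open("ignition.json"))["ignition"]["version"])'
```

    If the JSON is malformed, the check fails loudly instead of EC2 silently ignoring the user data.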

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            [Install]
            WantedBy=local-fs.target

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Container Linux Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add an SSH key(s) via the AWS console or add keys/passwords via your Container Linux Config in order to log in.
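    As a sketch of the Butane route (the key below is a placeholder; substitute your own public key):

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... you@example.com
```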

    To connect to an instance after it’s created, run:

    ssh core@<ip address>

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-03b4fb8db27e403a3 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and pass the same “User Data” to each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-03b4fb8db27e403a3 (amd64), Beta ami-08c197adfdf25f439 (amd64), or Stable ami-0e9246e69476cae15 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys in your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Known issues


    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply

    Start with a main.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }

    provider "aws" {
      region = var.aws_region
    }

    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id

      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }

    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }

    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }

    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }

    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]

      filter {
        name   = "architecture"
        values = ["x86_64"]
      }

      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }

      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }

    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name

      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]

      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }

    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }

    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")

      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }

    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }

    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }

    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }

    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }

    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }

    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... you@example.com"]

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary, since we already register it as an EC2 key pair):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
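    Note the two escape levels above: Terraform replaces ${name} when rendering the template, while $${hostname} survives rendering as a literal ${hostname} for the shell to expand. A rough sketch of what the rendered line does (the hostname is a stand-in value here):

```shell
# After Terraform rendering for machine "mynode", the script line becomes:
#   echo My name is mynode and the hostname is ${hostname}
# The shell then expands ${hostname}; we use a stand-in value here.
name="mynode"                 # substituted by Terraform at render time
hostname="myhost.example"     # on the machine this comes from hostname(1)
echo "My name is ${name} and the hostname is ${hostname}"
```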

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply

    Log in via ssh core@<ip address> with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.