Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or the user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage-collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) is no longer possible after the AMI has been unpublished.
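
    If you pin an autoscaling group to a specific AMI, it can be worth checking periodically whether that image is still published. As a minimal sketch with the AWS CLI (the AMI ID shown is the current us-east-1 Alpha image; substitute your own), an empty result means the AMI has been garbage-collected:

    aws ec2 describe-images --region us-east-1 \
      --filters Name=image-id,Values=ami-0fda9386ac559775a \
      --query 'Images[].ImageId' --output text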

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically, with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
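
    On a booted machine you can check which channel and version it follows. As a sketch (paths as used by the Flatcar update mechanism):

    grep GROUP /etc/flatcar/update.conf
    grep -E '^(VERSION|PRETTY_NAME)=' /etc/os-release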

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3227.0.0.

    View as json feed: amd64 arm64
    EC2 Region     | AMI Type    | AMI ID                | CloudFormation
    af-south-1     | HVM (amd64) | ami-039bcc02dfa08c00a | Launch Stack
    af-south-1     | HVM (arm64) | ami-094668f9a17c38862 | Launch Stack
    ap-east-1      | HVM (amd64) | ami-0974d30dab7f920e5 | Launch Stack
    ap-east-1      | HVM (arm64) | ami-07938a84edf8cdad4 | Launch Stack
    ap-northeast-1 | HVM (amd64) | ami-08c96c4f69b0817c7 | Launch Stack
    ap-northeast-1 | HVM (arm64) | ami-03c9189705360eac6 | Launch Stack
    ap-northeast-2 | HVM (amd64) | ami-0a29fbabd98b18435 | Launch Stack
    ap-northeast-2 | HVM (arm64) | ami-0385afce8934648a0 | Launch Stack
    ap-south-1     | HVM (amd64) | ami-0fce19f28a22a3f8d | Launch Stack
    ap-south-1     | HVM (arm64) | ami-01341baa482c37c46 | Launch Stack
    ap-southeast-1 | HVM (amd64) | ami-0b0f807f5d7962726 | Launch Stack
    ap-southeast-1 | HVM (arm64) | ami-0b4272496be5f76b4 | Launch Stack
    ap-southeast-2 | HVM (amd64) | ami-0344d9b713ba6676a | Launch Stack
    ap-southeast-2 | HVM (arm64) | ami-0bea0795a170d6f05 | Launch Stack
    ap-southeast-3 | HVM (amd64) | ami-0a343c8d1700e6fe4 | Launch Stack
    ap-southeast-3 | HVM (arm64) | ami-004b58ff4ee2643f7 | Launch Stack
    ca-central-1   | HVM (amd64) | ami-06e9c3c66cc7a7312 | Launch Stack
    ca-central-1   | HVM (arm64) | ami-0c3e4e9388db9c89b | Launch Stack
    eu-central-1   | HVM (amd64) | ami-0e6881d1a1891de2d | Launch Stack
    eu-central-1   | HVM (arm64) | ami-0309e6ba42ca8bc79 | Launch Stack
    eu-north-1     | HVM (amd64) | ami-0d88240c3476d23ee | Launch Stack
    eu-north-1     | HVM (arm64) | ami-08a2ef6844124d233 | Launch Stack
    eu-south-1     | HVM (amd64) | ami-011228b172853b89f | Launch Stack
    eu-south-1     | HVM (arm64) | ami-0e263e64b02151557 | Launch Stack
    eu-west-1      | HVM (amd64) | ami-0a25f908bfb3fbf7c | Launch Stack
    eu-west-1      | HVM (arm64) | ami-0a97af0605918559d | Launch Stack
    eu-west-2      | HVM (amd64) | ami-07006b9e8ef8903e1 | Launch Stack
    eu-west-2      | HVM (arm64) | ami-0d488acb5eae668a6 | Launch Stack
    eu-west-3      | HVM (amd64) | ami-0c5db06b5398ba053 | Launch Stack
    eu-west-3      | HVM (arm64) | ami-09b15892526fa57e3 | Launch Stack
    me-south-1     | HVM (amd64) | ami-03ba6f1ce42c2cf4b | Launch Stack
    me-south-1     | HVM (arm64) | ami-08bb0935ba1325019 | Launch Stack
    sa-east-1      | HVM (amd64) | ami-0e8ab0326c95fccf6 | Launch Stack
    sa-east-1      | HVM (arm64) | ami-0aad5ae1f51144caa | Launch Stack
    us-east-1      | HVM (amd64) | ami-0fda9386ac559775a | Launch Stack
    us-east-1      | HVM (arm64) | ami-0d50cd6cd23aa5803 | Launch Stack
    us-east-2      | HVM (amd64) | ami-0af86c8c4c0604478 | Launch Stack
    us-east-2      | HVM (arm64) | ami-0ca32a562f8c23ac6 | Launch Stack
    us-west-1      | HVM (amd64) | ami-0187883a67023766b | Launch Stack
    us-west-1      | HVM (arm64) | ami-04d0f6f017d8183c2 | Launch Stack
    us-west-2      | HVM (amd64) | ami-0d57ea2bc5175bcc1 | Launch Stack
    us-west-2      | HVM (arm64) | ami-07230f711b3d8b4c6 | Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3185.1.1.

    View as json feed: amd64 arm64
    EC2 Region     | AMI Type    | AMI ID                | CloudFormation
    af-south-1     | HVM (amd64) | ami-011ac425c44b6294c | Launch Stack
    af-south-1     | HVM (arm64) | ami-086696bf7da9a40b6 | Launch Stack
    ap-east-1      | HVM (amd64) | ami-07fc891d5166dca59 | Launch Stack
    ap-east-1      | HVM (arm64) | ami-0f3a92eb2188afd22 | Launch Stack
    ap-northeast-1 | HVM (amd64) | ami-0e7a4cba068e07274 | Launch Stack
    ap-northeast-1 | HVM (arm64) | ami-058e72fd3d55a5d52 | Launch Stack
    ap-northeast-2 | HVM (amd64) | ami-01f3e1997ed0310cd | Launch Stack
    ap-northeast-2 | HVM (arm64) | ami-0b7d888adb403b8cf | Launch Stack
    ap-south-1     | HVM (amd64) | ami-007a8c62962834e09 | Launch Stack
    ap-south-1     | HVM (arm64) | ami-032300bccb9c47d74 | Launch Stack
    ap-southeast-1 | HVM (amd64) | ami-0eddb7103456df7bf | Launch Stack
    ap-southeast-1 | HVM (arm64) | ami-0aa392ba7f72bf020 | Launch Stack
    ap-southeast-2 | HVM (amd64) | ami-06168d11c7102ff56 | Launch Stack
    ap-southeast-2 | HVM (arm64) | ami-02c64414005d525d0 | Launch Stack
    ap-southeast-3 | HVM (amd64) | ami-0743d8e87f6c967b7 | Launch Stack
    ap-southeast-3 | HVM (arm64) | ami-0e97a2cc10344ce9c | Launch Stack
    ca-central-1   | HVM (amd64) | ami-0e84121f89a660e85 | Launch Stack
    ca-central-1   | HVM (arm64) | ami-01dd8d0f1be1aff3f | Launch Stack
    eu-central-1   | HVM (amd64) | ami-0ddedc1ba59245a6a | Launch Stack
    eu-central-1   | HVM (arm64) | ami-0e182fb3e3fb0fc0b | Launch Stack
    eu-north-1     | HVM (amd64) | ami-0b8a1ed41ef58c5fa | Launch Stack
    eu-north-1     | HVM (arm64) | ami-066ddf39c441239c4 | Launch Stack
    eu-south-1     | HVM (amd64) | ami-0dcd0a0e173b61ca9 | Launch Stack
    eu-south-1     | HVM (arm64) | ami-039e10697264b13ab | Launch Stack
    eu-west-1      | HVM (amd64) | ami-04c4e64f8bfb21527 | Launch Stack
    eu-west-1      | HVM (arm64) | ami-0ecc241cfb639edc0 | Launch Stack
    eu-west-2      | HVM (amd64) | ami-0bc4d5ee70f799a50 | Launch Stack
    eu-west-2      | HVM (arm64) | ami-0228245a3a93229e4 | Launch Stack
    eu-west-3      | HVM (amd64) | ami-029eb9e626930b105 | Launch Stack
    eu-west-3      | HVM (arm64) | ami-054febbe1a08c3396 | Launch Stack
    me-south-1     | HVM (amd64) | ami-007e652ce4ada89dc | Launch Stack
    me-south-1     | HVM (arm64) | ami-00f9c3420e8f6e704 | Launch Stack
    sa-east-1      | HVM (amd64) | ami-0931ff6c5b1fc5c04 | Launch Stack
    sa-east-1      | HVM (arm64) | ami-0c909ecf04a79cc97 | Launch Stack
    us-east-1      | HVM (amd64) | ami-0b708274c015e6e1c | Launch Stack
    us-east-1      | HVM (arm64) | ami-0cf55affe02219efb | Launch Stack
    us-east-2      | HVM (amd64) | ami-0108c944cc5b5cc4b | Launch Stack
    us-east-2      | HVM (arm64) | ami-047809bdcbff60b6d | Launch Stack
    us-west-1      | HVM (amd64) | ami-09a563d7871b99500 | Launch Stack
    us-west-1      | HVM (arm64) | ami-0b8931bda83fd337d | Launch Stack
    us-west-2      | HVM (amd64) | ami-00ca30d6a02a2ca36 | Launch Stack
    us-west-2      | HVM (arm64) | ami-0d157b92bc8f2977c | Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3139.2.1.

    View as json feed: amd64 arm64
    EC2 Region     | AMI Type    | AMI ID                | CloudFormation
    af-south-1     | HVM (amd64) | ami-0e3a965953108ee28 | Launch Stack
    af-south-1     | HVM (arm64) | ami-0bbe1d5a11b6e386b | Launch Stack
    ap-east-1      | HVM (amd64) | ami-060df2cfba2522ef2 | Launch Stack
    ap-east-1      | HVM (arm64) | ami-09c3ab3b15323396a | Launch Stack
    ap-northeast-1 | HVM (amd64) | ami-0476632ad84c7ede7 | Launch Stack
    ap-northeast-1 | HVM (arm64) | ami-034dfecd325da9ce6 | Launch Stack
    ap-northeast-2 | HVM (amd64) | ami-08ded244139500ed7 | Launch Stack
    ap-northeast-2 | HVM (arm64) | ami-0282ff20c12facbd3 | Launch Stack
    ap-south-1     | HVM (amd64) | ami-0813fe8860eb455cb | Launch Stack
    ap-south-1     | HVM (arm64) | ami-0d9038258ab09e571 | Launch Stack
    ap-southeast-1 | HVM (amd64) | ami-0b943be4cc1ad82b2 | Launch Stack
    ap-southeast-1 | HVM (arm64) | ami-07c1033b9e98d5b97 | Launch Stack
    ap-southeast-2 | HVM (amd64) | ami-02f9f79b4fc6e6c21 | Launch Stack
    ap-southeast-2 | HVM (arm64) | ami-0882f4faef1990301 | Launch Stack
    ap-southeast-3 | HVM (amd64) | ami-018c69adc8d424594 | Launch Stack
    ap-southeast-3 | HVM (arm64) | ami-074ae8bd0e401026a | Launch Stack
    ca-central-1   | HVM (amd64) | ami-052123e33beafe7c6 | Launch Stack
    ca-central-1   | HVM (arm64) | ami-0778bb02d2fde18f7 | Launch Stack
    eu-central-1   | HVM (amd64) | ami-0e5273ed1d5096355 | Launch Stack
    eu-central-1   | HVM (arm64) | ami-07d1a98b050bf659b | Launch Stack
    eu-north-1     | HVM (amd64) | ami-083c47d47c2c04a08 | Launch Stack
    eu-north-1     | HVM (arm64) | ami-0e8b882ccf21165ec | Launch Stack
    eu-south-1     | HVM (amd64) | ami-0d8ffeb9fb4e5a90a | Launch Stack
    eu-south-1     | HVM (arm64) | ami-012b406c16b73629a | Launch Stack
    eu-west-1      | HVM (amd64) | ami-0c86b15ecbf1ecab6 | Launch Stack
    eu-west-1      | HVM (arm64) | ami-0b2cdd92b116274f4 | Launch Stack
    eu-west-2      | HVM (amd64) | ami-09a968bee3de823ea | Launch Stack
    eu-west-2      | HVM (arm64) | ami-056a4c48a306ec98e | Launch Stack
    eu-west-3      | HVM (amd64) | ami-015ee6262a264a8ac | Launch Stack
    eu-west-3      | HVM (arm64) | ami-0d37dc3ad85ecbcd0 | Launch Stack
    me-south-1     | HVM (amd64) | ami-0297a0640cd14c674 | Launch Stack
    me-south-1     | HVM (arm64) | ami-09dc85c607ca9bf9b | Launch Stack
    sa-east-1      | HVM (amd64) | ami-0247db41a971df6a0 | Launch Stack
    sa-east-1      | HVM (arm64) | ami-04a4ff38651bd9062 | Launch Stack
    us-east-1      | HVM (amd64) | ami-0e7ab9b30f0aaa519 | Launch Stack
    us-east-1      | HVM (arm64) | ami-010e0daffe1b1b8d4 | Launch Stack
    us-east-2      | HVM (amd64) | ami-0eb890879478a511e | Launch Stack
    us-east-2      | HVM (arm64) | ami-0e91dd5df2c7b674d | Launch Stack
    us-west-1      | HVM (amd64) | ami-086106e4348fd8354 | Launch Stack
    us-west-1      | HVM (arm64) | ami-043f90bf3358d4cb2 | Launch Stack
    us-west-2      | HVM (amd64) | ami-092a5a8ef0f00a134 | Launch Stack
    us-west-2      | HVM (arm64) | ami-0da161ad3ac9617b6 | Launch Stack

    AWS China AMIs maintained by Giant Swarm

    The following AMIs are not part of the official Flatcar Container Linux release process and may lag behind (query the version to compare).

    View as json feed: amd64
    EC2 Region | AMI Type | AMI ID | CloudFormation

    CloudFormation will launch a cluster of Flatcar Container Linux machines with a security group and an autoscaling group.

    Container Linux Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Container Linux Configs (CLC). These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
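
    For example, with the AWS CLI the config can be passed as user data when launching an instance. A minimal sketch (the instance type, key pair, and security group ID are placeholders; the AMI is the current us-east-1 Alpha image):

    aws ec2 run-instances --region us-east-1 \
      --image-id ami-0fda9386ac559775a \
      --instance-type t3.medium \
      --key-name my-key \
      --security-group-ids sg-0123456789abcdef0 \
      --user-data file://ignition.json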

    As an example, this CLC YAML config will start an NGINX Docker container:

    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i ghcr.io/flatcar-linux/ct:latest -platform ec2 > ignition.json
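
    To sanity-check the transpiled output, you can pretty-print it with jq (assuming jq is installed on your workstation):

    jq . ignition.json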
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Container Linux Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    storage:
      filesystems:
        - mount:
            device: /dev/xvdb
            format: ext4
            wipe_filesystem: true
    
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/xvdb
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
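
    Since device naming differs between instance types, it is worth confirming the device name on a booted instance before hard-coding it. lsblk, part of the standard util-linux tools shipped with Flatcar, lists the attached block devices:

    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT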
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Container Linux Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.
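
    A sketch of the equivalent AWS CLI call (the AMI, security group, and Ignition file are the ones used for the original cluster; --count controls how many instances to add):

    aws ec2 run-instances --region us-east-1 \
      --image-id ami-0fda9386ac559775a \
      --count 2 \
      --security-group-ids sg-0123456789abcdef0 \
      --user-data file://ignition.json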

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add an SSH key via the AWS console or add keys/passwords via your Container Linux Config in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0fda9386ac559775a (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” for each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another. The console steps follow; an equivalent AWS CLI sketch appears after the list.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
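
    For reference, here is a minimal AWS CLI sketch of the same setup (it assumes a default VPC; the group ID is captured in SG and reused for the self-referencing etcd rules):

    SG=$(aws ec2 create-security-group --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --query GroupId --output text)
    aws ec2 authorize-security-group-ingress --group-id "$SG" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$SG" \
        --ip-permissions "IpProtocol=tcp,FromPort=${port},ToPort=${port},UserIdGroupPairs=[{GroupId=${SG}}]"
    done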

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group. A quick way to verify the cluster is shown after these steps.

    • Open the quick launch wizard to boot: Alpha ami-0fda9386ac559775a (amd64), Beta ami-0b708274c015e6e1c (amd64), or Stable ami-0e7ab9b30f0aaa519 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to the ones in your User Data, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
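
    Once the three instances are up, you can verify that they formed a cluster from any node, provided your User Data configured etcd (this guide’s example config only starts NGINX, so adapt accordingly):

    ssh core@<ip address>
    etcdctl member list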

    Installation from a VMDK image

    One possible installation method is to import the generated VMDK Flatcar image as a snapshot. The image file will be at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
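
    A sketch of those steps with the AWS CLI (the bucket name is a placeholder; poll the import task until it reports completed):

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/flatcar.vmdk
    aws ec2 import-snapshot --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar.vmdk}"
    aws ec2 describe-import-snapshot-tasks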

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
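
    The same registration can be scripted; a sketch where the snapshot ID is a placeholder and the settings mirror the console values above:

    aws ec2 register-image --name flatcar-custom \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"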

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Known issues

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@mail.net"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable
              # substitution works when using Terraform templates for
              # Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@<IP address> with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
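
    If you need the addresses again later, the ip-addresses output defined above can be re-printed at any time:

    terraform output ip-addresses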

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.
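
    To preview the replacement before applying it, use the standard Terraform workflow:

    terraform plan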

    You can find this Terraform module in the repository for Flatcar Terraform examples.