Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage-collection sweeps. Note that this does not impact existing AWS instances that use those releases; however, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible once the AMI has been unpublished.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3346.0.0.

    View as JSON feed: amd64, arm64

    EC2 Region        AMI Type      AMI ID                  CloudFormation
    af-south-1        HVM (amd64)   ami-046f4209f7f389e48   Launch Stack
    af-south-1        HVM (arm64)   ami-0d06cdc5d47530141   Launch Stack
    ap-east-1         HVM (amd64)   ami-0e744a43b75827fdd   Launch Stack
    ap-east-1         HVM (arm64)   ami-06913506d8665f323   Launch Stack
    ap-northeast-1    HVM (amd64)   ami-0702b7713fed74111   Launch Stack
    ap-northeast-1    HVM (arm64)   ami-032efda378c89e519   Launch Stack
    ap-northeast-2    HVM (amd64)   ami-0fbd933ee146e068a   Launch Stack
    ap-northeast-2    HVM (arm64)   ami-010fbb5815a984ed7   Launch Stack
    ap-south-1        HVM (amd64)   ami-0c2cc150411193be5   Launch Stack
    ap-south-1        HVM (arm64)   ami-073dc27383c53c70e   Launch Stack
    ap-southeast-1    HVM (amd64)   ami-08b8c5729811b7519   Launch Stack
    ap-southeast-1    HVM (arm64)   ami-00f1fc7d74c2f553a   Launch Stack
    ap-southeast-2    HVM (amd64)   ami-0f88bdcc5ed3d4867   Launch Stack
    ap-southeast-2    HVM (arm64)   ami-09d006d36c509b0b0   Launch Stack
    ap-southeast-3    HVM (amd64)   ami-0a27f6b4f3a8124b5   Launch Stack
    ap-southeast-3    HVM (arm64)   ami-0f1c9fd0f1e5d560d   Launch Stack
    ca-central-1      HVM (amd64)   ami-0b1f41a0fd9ab47b8   Launch Stack
    ca-central-1      HVM (arm64)   ami-0c70f7f9499a694e7   Launch Stack
    eu-central-1      HVM (amd64)   ami-012ba1e17444f8bfe   Launch Stack
    eu-central-1      HVM (arm64)   ami-01aa37b144482b7f8   Launch Stack
    eu-north-1        HVM (amd64)   ami-03b1b0ac962a5becb   Launch Stack
    eu-north-1        HVM (arm64)   ami-0e0a3000d6e620b51   Launch Stack
    eu-south-1        HVM (amd64)   ami-00ff02d8d52fc250e   Launch Stack
    eu-south-1        HVM (arm64)   ami-0622593b8a5a473b9   Launch Stack
    eu-west-1         HVM (amd64)   ami-0dd78bb5e73e985b9   Launch Stack
    eu-west-1         HVM (arm64)   ami-030d4311d4ec2d6b4   Launch Stack
    eu-west-2         HVM (amd64)   ami-0319da473a28fc0d8   Launch Stack
    eu-west-2         HVM (arm64)   ami-0694effd54bde97fe   Launch Stack
    eu-west-3         HVM (amd64)   ami-0db34447b958665a0   Launch Stack
    eu-west-3         HVM (arm64)   ami-01c35a72520b74115   Launch Stack
    me-south-1        HVM (amd64)   ami-0bd7d253630abe83f   Launch Stack
    me-south-1        HVM (arm64)   ami-0fa78ff633f8cfc16   Launch Stack
    sa-east-1         HVM (amd64)   ami-0f666678de700feb8   Launch Stack
    sa-east-1         HVM (arm64)   ami-0049743d9f45af91d   Launch Stack
    us-east-1         HVM (amd64)   ami-0f9bc9d916f914b94   Launch Stack
    us-east-1         HVM (arm64)   ami-0b6d542d82e3dec00   Launch Stack
    us-east-2         HVM (amd64)   ami-0845f08f37b3361a4   Launch Stack
    us-east-2         HVM (arm64)   ami-02daae0441033a99f   Launch Stack
    us-west-1         HVM (amd64)   ami-0ff7d15093b2a4e9f   Launch Stack
    us-west-1         HVM (arm64)   ami-04f6dd770c7766169   Launch Stack
    us-west-2         HVM (amd64)   ami-06f96a9bafedbf9c5   Launch Stack
    us-west-2         HVM (arm64)   ami-0e33567e32aed461d   Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3277.1.2.

    View as JSON feed: amd64, arm64

    EC2 Region        AMI Type      AMI ID                  CloudFormation
    af-south-1        HVM (amd64)   ami-0ba71096e9fdc447b   Launch Stack
    af-south-1        HVM (arm64)   ami-021093dfbc517cb1c   Launch Stack
    ap-east-1         HVM (amd64)   ami-09e50e5b8eef3c72b   Launch Stack
    ap-east-1         HVM (arm64)   ami-0bde68be4c036a9dc   Launch Stack
    ap-northeast-1    HVM (amd64)   ami-022edad34604764bd   Launch Stack
    ap-northeast-1    HVM (arm64)   ami-095827518cc79163c   Launch Stack
    ap-northeast-2    HVM (amd64)   ami-0e6459d508cfff9b7   Launch Stack
    ap-northeast-2    HVM (arm64)   ami-0cb7956e0ec9205a9   Launch Stack
    ap-south-1        HVM (amd64)   ami-0a3b0e17be6fd44a1   Launch Stack
    ap-south-1        HVM (arm64)   ami-0bd2617feed3dc979   Launch Stack
    ap-southeast-1    HVM (amd64)   ami-01c2514dc6e460d38   Launch Stack
    ap-southeast-1    HVM (arm64)   ami-0d8c01e1a5f0e626a   Launch Stack
    ap-southeast-2    HVM (amd64)   ami-022a2abb9280634b7   Launch Stack
    ap-southeast-2    HVM (arm64)   ami-083d11c1661042311   Launch Stack
    ap-southeast-3    HVM (amd64)   ami-07c638b44a5e13603   Launch Stack
    ap-southeast-3    HVM (arm64)   ami-0244aa2fe5a9b8392   Launch Stack
    ca-central-1      HVM (amd64)   ami-0175983d1dde277e9   Launch Stack
    ca-central-1      HVM (arm64)   ami-06cc24dac34e2c95e   Launch Stack
    eu-central-1      HVM (amd64)   ami-0f774eb67dfe1affb   Launch Stack
    eu-central-1      HVM (arm64)   ami-0a7fcc19e7f414a8e   Launch Stack
    eu-north-1        HVM (amd64)   ami-0d62b69603d01e7cd   Launch Stack
    eu-north-1        HVM (arm64)   ami-0fbed7dd8d0b2c54f   Launch Stack
    eu-south-1        HVM (amd64)   ami-005b56ac5a65beb28   Launch Stack
    eu-south-1        HVM (arm64)   ami-0cfff71d5e4126449   Launch Stack
    eu-west-1         HVM (amd64)   ami-0735c8c53466f28b5   Launch Stack
    eu-west-1         HVM (arm64)   ami-0917a73c783b4b53f   Launch Stack
    eu-west-2         HVM (amd64)   ami-0bb08a219cea37644   Launch Stack
    eu-west-2         HVM (arm64)   ami-078a11e0f15206dc5   Launch Stack
    eu-west-3         HVM (amd64)   ami-0328dd93bf2b055c7   Launch Stack
    eu-west-3         HVM (arm64)   ami-0e105e169c2a7d58d   Launch Stack
    me-south-1        HVM (amd64)   ami-0ac47ecf69bf0988c   Launch Stack
    me-south-1        HVM (arm64)   ami-03fc095c92ad55d64   Launch Stack
    sa-east-1         HVM (amd64)   ami-0b85601782aafac4a   Launch Stack
    sa-east-1         HVM (arm64)   ami-083e4452d03d91452   Launch Stack
    us-east-1         HVM (amd64)   ami-00fcc1a5d8de7d67c   Launch Stack
    us-east-1         HVM (arm64)   ami-0c8ad668cb394ad6d   Launch Stack
    us-east-2         HVM (amd64)   ami-0a5762e675bdfe8cd   Launch Stack
    us-east-2         HVM (arm64)   ami-061dc391a0dadefec   Launch Stack
    us-west-1         HVM (amd64)   ami-0d8467fa164ed2b67   Launch Stack
    us-west-1         HVM (arm64)   ami-04d1261d847e4e2f5   Launch Stack
    us-west-2         HVM (amd64)   ami-00d9d57495a252cc7   Launch Stack
    us-west-2         HVM (arm64)   ami-0a7538bde5ed2ac84   Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3227.2.2.

    View as JSON feed: amd64, arm64

    EC2 Region        AMI Type      AMI ID                  CloudFormation
    af-south-1        HVM (amd64)   ami-0e773e8f2439ebf6f   Launch Stack
    af-south-1        HVM (arm64)   ami-00af19495394b3270   Launch Stack
    ap-east-1         HVM (amd64)   ami-0d959823c8d0e084a   Launch Stack
    ap-east-1         HVM (arm64)   ami-0a26efff1037a9534   Launch Stack
    ap-northeast-1    HVM (amd64)   ami-05a56cbc5c26152af   Launch Stack
    ap-northeast-1    HVM (arm64)   ami-0ef71fa167235657e   Launch Stack
    ap-northeast-2    HVM (amd64)   ami-05990c0e59d170ac8   Launch Stack
    ap-northeast-2    HVM (arm64)   ami-01b8b57e123537b27   Launch Stack
    ap-south-1        HVM (amd64)   ami-0cb01325d5f3bae59   Launch Stack
    ap-south-1        HVM (arm64)   ami-0d8ac74d55f74e3f6   Launch Stack
    ap-southeast-1    HVM (amd64)   ami-01050588d2f69c050   Launch Stack
    ap-southeast-1    HVM (arm64)   ami-03c94cc7eac9819a7   Launch Stack
    ap-southeast-2    HVM (amd64)   ami-0859e0da16f7b1730   Launch Stack
    ap-southeast-2    HVM (arm64)   ami-0147d6d126b77afe5   Launch Stack
    ap-southeast-3    HVM (amd64)   ami-0f61ab189cc5138fa   Launch Stack
    ap-southeast-3    HVM (arm64)   ami-0abfeb703b08ec2ae   Launch Stack
    ca-central-1      HVM (amd64)   ami-0c11312278c86e2f1   Launch Stack
    ca-central-1      HVM (arm64)   ami-06281e7098640ae49   Launch Stack
    eu-central-1      HVM (amd64)   ami-0a3664b2cbabfb9d0   Launch Stack
    eu-central-1      HVM (arm64)   ami-0548633e7dcba4438   Launch Stack
    eu-north-1        HVM (amd64)   ami-000a9bd9926dc62f5   Launch Stack
    eu-north-1        HVM (arm64)   ami-011cfacb78895dc18   Launch Stack
    eu-south-1        HVM (amd64)   ami-098edf3f0ee216819   Launch Stack
    eu-south-1        HVM (arm64)   ami-0e9e87e35f3b848bb   Launch Stack
    eu-west-1         HVM (amd64)   ami-0b394509ae792efc0   Launch Stack
    eu-west-1         HVM (arm64)   ami-0f38531b2a838e4c0   Launch Stack
    eu-west-2         HVM (amd64)   ami-06c34f68f072eecf7   Launch Stack
    eu-west-2         HVM (arm64)   ami-0d193fcbdbd56675b   Launch Stack
    eu-west-3         HVM (amd64)   ami-0dffceb0d574aef0c   Launch Stack
    eu-west-3         HVM (arm64)   ami-00778c5bf3658eeab   Launch Stack
    me-south-1        HVM (amd64)   ami-0193603ec0299b39d   Launch Stack
    me-south-1        HVM (arm64)   ami-0bcedd8333e790b90   Launch Stack
    sa-east-1         HVM (amd64)   ami-089997930067b7999   Launch Stack
    sa-east-1         HVM (arm64)   ami-0dee97799457bff67   Launch Stack
    us-east-1         HVM (amd64)   ami-0e12c4fb31633888a   Launch Stack
    us-east-1         HVM (arm64)   ami-075b3d61d6f212b58   Launch Stack
    us-east-2         HVM (amd64)   ami-0b5bb801dc949bb7f   Launch Stack
    us-east-2         HVM (arm64)   ami-0ac9e8080b714e2c1   Launch Stack
    us-west-1         HVM (amd64)   ami-04291b5b766035ca4   Launch Stack
    us-west-1         HVM (arm64)   ami-0f78f9325cb860d59   Launch Stack
    us-west-2         HVM (amd64)   ami-0a74156d2cf5281c6   Launch Stack
    us-west-2         HVM (arm64)   ami-02ea825969d1f499b   Launch Stack

    AWS China AMIs maintained by Giant Swarm

    The following AMIs are not part of the official Flatcar Container Linux release process and may lag behind (query the current version before use).

    View as JSON feed: amd64

    CloudFormation will launch a cluster of Flatcar Container Linux machines with a security group and an autoscaling group.

    Container Linux Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Container Linux Configs (CLC). These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this CLC YAML config will start an NGINX Docker container:

    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i ghcr.io/flatcar/ct:latest -platform ec2 > ignition.json
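
    The resulting ignition.json can then be passed as instance user data when booting, whether through the console or the EC2 API. As a rough AWS CLI sketch (the instance type, key pair name, and security group are placeholder assumptions; the AMI ID is the us-east-1 Alpha amd64 image from the table above):

    # Launch an instance with the Ignition config as user data (names are placeholders)
    aws ec2 run-instances \
      --image-id ami-0f9bc9d916f914b94 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-groups flatcar-testing \
      --user-data file://ignition.json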
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Container Linux Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    storage:
      filesystems:
        - mount:
            device: /dev/xvdb
            format: ext4
            wipe_filesystem: true
            label: ephemeral
    
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
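
    Once the instance is up, you can confirm the filesystem and mount from an SSH session with the standard util-linux tools shipped in Flatcar:

    lsblk -f                    # lists block devices with filesystem type and label
    findmnt /media/ephemeral    # shows the mount created by the unit above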

    Adding more machines

    To add more instances to the cluster, just launch more with the same Container Linux Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add an SSH key (or keys) via the AWS console, or add keys/passwords via your Container Linux Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0f9bc9d916f914b94 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step console instructions follow, and an AWS CLI equivalent is sketched after the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
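
    If you prefer the command line, here is a rough AWS CLI equivalent of the console steps above (a sketch assuming a default VPC; the group name matches the console example):

    # Create the security group (run once)
    aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances"

    # Allow SSH from anywhere
    aws ec2 authorize-security-group-ingress \
      --group-name flatcar-testing --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Allow the etcd ports between members of the group itself
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress \
        --group-name flatcar-testing --protocol tcp --port "$port" \
        --source-group flatcar-testing
    done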

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0f9bc9d916f914b94 (amd64), Beta ami-00fcc1a5d8de7d67c (amd64), or Stable ami-0e12c4fb31633888a (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys in your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file will be at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.
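
    As a sketch of that flow with the AWS CLI (the bucket name, key, and import task ID are placeholders; the import runs asynchronously, so poll the task until it completes):

    # Decompress and upload the image to S3 (bucket/key are placeholders)
    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/flatcar.vmdk

    # Start the snapshot import and note the returned ImportTaskId
    aws ec2 import-snapshot \
      --description "Flatcar" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar.vmdk}"

    # Poll until the task status is "completed" (task ID is a placeholder)
    aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0123456789abcdef0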

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and create an AMI from it. To make it work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
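
    The same registration can also be done from the CLI; a sketch, with the AMI name and snapshot ID as placeholders:

    # Register an AMI from the imported snapshot
    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"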

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Known issues

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@mail.net"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@<ip address> with the printed IP address (you may want to add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
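
    To print the addresses again later, query the output declared in outputs.tf:

    terraform output ip-addresses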

    When you make a change to cl/machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.