ISSUE - Adding EBS Volume forces EC2 Replacement #401

Closed as not planned

@HeikoMR

Description

Hello,

We deployed an EC2 instance via this module and initially configured one additional EBS volume.
Now we need to add a second non-root volume, but that forces a replacement of the EC2 instance.

We tried defining it in the ebs_block_device list, adding it via a separate aws_ebs_volume resource plus an aws_volume_attachment, and attaching it manually. All three variants lead to Terraform wanting to recreate the instance.
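For reference, the separate-resource variant we tried looked roughly like this (resource names, the size, the availability zone, and the device path below are placeholders, not our exact configuration):

```hcl
# Sketch of the separate-resource variant; all names and values are placeholders.
resource "aws_ebs_volume" "extra" {
  availability_zone = "eu-central-1a" # same AZ as the instance
  size              = var.ebs_volume_size2
  type              = "gp3"
  encrypted         = true
  kms_key_id        = var.kms_key_arn
}

resource "aws_volume_attachment" "extra" {
  device_name = "/dev/sdg"
  volume_id   = aws_ebs_volume.extra.id
  instance_id = module.xyz.id
}
```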

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. [x] Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. [x] Re-initialize the project root to pull down modules: terraform init
  3. [x] Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 5.6.1

  • Terraform version: v1.7.0 and v1.9.5

  • Provider version(s): AWS 5.64.0

Reproduction Code [Required]

module "xyz" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "5.6.1"

  name                    = var.name
  instance_type           = var.instance_type
  ami                     = var.ami
  key_name                = aws_key_pair.ec2.key_name
  vpc_security_group_ids  = concat([module.xyz_sg.security_group_id], var.additional_security_group_ids)
  disable_api_termination = var.disable_api_termination

  root_block_device = [{
    encrypted   = true
    kms_key_id  = var.kms_key_arn
    volume_type = "gp3"
    volume_size = var.root_volume_size
  }]

  ebs_block_device = [
    {
      device_name = "/dev/sdf"
      volume_type = "gp3"
      volume_size = var.ebs_volume_size1
      kms_key_id  = var.kms_key_arn
    },
    {
      device_name = "/dev/sdg"
      volume_type = "gp3"
      volume_size = var.ebs_volume_size2
      kms_key_id  = var.kms_key_arn
    }
  ]

  tags = {
    Terraform = "true"
    Backup    = "true"
  }
}

Steps to reproduce the behavior:

  1. Launch an EC2 instance with a single non-root EBS volume.
  2. Add another non-root EBS volume, either inside the module's ebs_block_device list, as a new resource outside the module, or manually.
  3. Run terraform plan.

Expected behavior

The second non-root EBS volume should simply be created and attached to the instance without recreating the whole instance.

Actual behavior

Terraform wants to recreate the whole instance because of the EBS volumes. We were able to reproduce this in both of our environments (dev/prod).

Terminal Output Screenshot(s)

  # module.xyz.module.xyz.aws_instance.this[0] must be replaced
-/+ resource "aws_instance" "this" {
      ~ arn                                  = "arn:aws:ec2:eu-central-1:xyz:instance/i-xyz" -> (known after apply)
      ~ associate_public_ip_address          = true -> (known after apply)
      ~ availability_zone                    = "eu-central-1a" -> (known after apply)
      ~ cpu_core_count                       = 1 -> (known after apply)
      ~ cpu_threads_per_core                 = 2 -> (known after apply)
      ~ disable_api_stop                     = false -> (known after apply)
      ~ ebs_optimized                        = false -> (known after apply)
      - hibernation                          = false -> null
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      ~ id                                   = "i-xyz" -> (known after apply)
      ~ instance_initiated_shutdown_behavior = "stop" -> (known after apply)
      + instance_lifecycle                   = (known after apply)
      ~ instance_state                       = "running" -> (known after apply)
      ~ ipv6_address_count                   = 0 -> (known after apply)
      ~ ipv6_addresses                       = [] -> (known after apply)
      ~ monitoring                           = false -> (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      ~ placement_partition_number           = 0 -> (known after apply)
      ~ primary_network_interface_id         = "eni-xyz" -> (known after apply)
      ~ private_dns                          = "xyz.eu-central-1.compute.internal" -> (known after apply)
      ~ private_ip                           = "xyz" -> (known after apply)
      ~ public_dns                           = "xyz.eu-central-1.compute.amazonaws.com" -> (known after apply)
      ~ public_ip                            = "xyz" -> (known after apply)
      ~ secondary_private_ips                = [] -> (known after apply)
      ~ security_groups                      = [] -> (known after apply)
      + spot_instance_request_id             = (known after apply)
      ~ subnet_id                            = "subnet-xyz" -> (known after apply)
        tags                                 = {
            "Backup"    = "true"
            "Name"      = "xyz"
            "Terraform" = "true"
        }
      ~ tenancy                              = "default" -> (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
        # (10 unchanged attributes hidden)

      - capacity_reservation_specification {
          - capacity_reservation_preference = "open" -> null
        }

      - cpu_options {
          - core_count       = 1 -> null
          - threads_per_core = 2 -> null
        }

      ~ credit_specification {
          - cpu_credits = "unlimited" -> null
        }

      - ebs_block_device { # forces replacement
          - delete_on_termination = true -> null
          - device_name           = "/dev/sdf" -> null
          - encrypted             = true -> null
          - iops                  = 3000 -> null
          - kms_key_id            = "arn:aws:kms:eu-central-1:xyz:key/xyz" -> null
          - tags                  = {} -> null
          - tags_all              = {} -> null
          - throughput            = 125 -> null
          - volume_id             = "vol-xyz" -> null
          - volume_size           = 400 -> null
          - volume_type           = "gp3" -> null
        }
      + ebs_block_device { # forces replacement
          + delete_on_termination = true
          + device_name           = "/dev/sdf"
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = "arn:aws:kms:eu-central-1:xyz:key/xyz"
          + snapshot_id           = (known after apply)
          + tags_all              = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 400
          + volume_type           = "gp3"
        }
      + ebs_block_device { # forces replacement
          + delete_on_termination = true
          + device_name           = "/dev/sdh"
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = "arn:aws:kms:eu-central-1:xyz:key/xyz"
          + snapshot_id           = (known after apply)
          + tags_all              = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 400
          + volume_type           = "gp3"
        }

      ~ enclave_options {
          ~ enabled = false -> (known after apply)
        }

      - maintenance_options {
          - auto_recovery = "default" -> null
        }

      ~ metadata_options {
          ~ instance_metadata_tags      = "disabled" -> (known after apply)
            # (4 unchanged attributes hidden)
        }

      - private_dns_name_options {
          - enable_resource_name_dns_a_record    = false -> null
          - enable_resource_name_dns_aaaa_record = false -> null
          - hostname_type                        = "ip-name" -> null
        }

      ~ root_block_device {
          ~ device_name           = "/dev/sda1" -> (known after apply)
          ~ iops                  = 3000 -> (known after apply)
          - tags                  = {} -> null
          ~ tags_all              = {} -> (known after apply)
          ~ throughput            = 125 -> (known after apply)
          ~ volume_id             = "vol-xyz" -> (known after apply)
            # (5 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

Additional context

Is there any workaround for this if there is no fix?
Our instance is already in production use.
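In case it helps others: the AWS provider documentation notes that ebs_block_device on aws_instance cannot be modified in place, and suggests separate aws_ebs_volume / aws_volume_attachment resources instead. A rough migration sketch (untested against this module; all IDs, the AZ, and the device name are placeholders) would be to drop the volume from ebs_block_device entirely and adopt the existing volume with Terraform 1.5+ import blocks:

```hcl
# Placeholders throughout: vol-xyz and i-xyz are not real IDs.
import {
  to = aws_ebs_volume.data
  id = "vol-xyz"
}

resource "aws_ebs_volume" "data" {
  availability_zone = "eu-central-1a"
  size              = 400
  type              = "gp3"
  encrypted         = true
  kms_key_id        = var.kms_key_arn
}

import {
  to = aws_volume_attachment.data
  # Import ID format: device_name:volume_id:instance_id
  id = "/dev/sdf:vol-xyz:i-xyz"
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.data.id
  instance_id = module.xyz.id
}
```

After the import, terraform plan should show no replacement, provided the volume no longer appears in the instance's ebs_block_device list.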

Thanks in advance for your help
