[Question Title]: Terraform AWS EKS - Unable to mount EFS volume to Fargate Pod
[Posted]: 2021-01-22 18:30:29
[Question]:

I have been working on this for nearly 5 days straight now and cannot get it to work. According to the AWS documentation, I should be able to mount an EFS volume to a pod deployed to a Fargate node in Kubernetes (EKS).

I am doing everything 100% through Terraform. At this point I am lost, and my eyes are practically bleeding from the amount of horrible documentation I have read. Any guidance anyone can offer to help me get this working would be appreciated!

Here is what I have done so far:

  1. Set up the EKS CSI driver, storage class, and role bindings (not really sure why the role bindings are needed)
resource "kubernetes_csi_driver" "efs" {
  metadata {
    name = "efs.csi.aws.com"
  }

  spec {
    attach_required        = false
    volume_lifecycle_modes = [
      "Persistent"
    ]
  }
}

resource "kubernetes_storage_class" "efs" {
  metadata {
    name = "efs-sc"
  }
  storage_provisioner = kubernetes_csi_driver.efs.metadata[0].name
  reclaim_policy      = "Retain"
}

resource "kubernetes_cluster_role_binding" "efs_pre" {
  metadata {
    name = "efs_role_pre"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "ServiceAccount"
    name      = "default"
    namespace = "pre"
  }
}

resource "kubernetes_cluster_role_binding" "efs_live" {
  metadata {
    name = "efs_role_live"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "ServiceAccount"
    name      = "default"
    namespace = "live"
  }
}
  2. Set up the EFS volume with a policy and security groups
module "vpc" {
  source    = "../../read_only_data/vpc"
  stackname = var.vpc_stackname
}
resource "aws_efs_file_system" "efs_data" {
    creation_token = "xva-${var.environment}-pv-efsdata-${var.side}"

    # encrypted   = true
    # kms_key_id  = ""

    performance_mode = "generalPurpose" #maxIO
    throughput_mode  = "bursting"
    
    lifecycle_policy {
        transition_to_ia = "AFTER_30_DAYS"
    }
}

data "aws_efs_file_system" "efs_data" {
  file_system_id = aws_efs_file_system.efs_data.id
}

resource "aws_efs_access_point" "efs_data" {
  file_system_id = aws_efs_file_system.efs_data.id
}

/* Policy that does the following:
- Prevent root access by default
- Enforce read-only access by default
- Enforce in-transit encryption for all clients
*/
resource "aws_efs_file_system_policy" "efs_data" {
  file_system_id = aws_efs_file_system.efs_data.id

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "elasticfilesystem:ClientMount",
            "Resource": aws_efs_file_system.efs_data.arn
        },
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "*",
            "Resource": aws_efs_file_system.efs_data.arn,
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
  })
}

# Security Groups for this volume
resource "aws_security_group" "allow_eks_cluster" {
  name        = "xva-${var.environment}-efsdata-${var.side}"
  description = "This will allow the cluster ${data.terraform_remote_state.cluster.outputs.eks_cluster_name} to access this volume and use it."
  vpc_id      = module.vpc.vpc_id

  ingress {
    description = "NFS For EKS Cluster ${data.terraform_remote_state.cluster.outputs.eks_cluster_name}"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    security_groups = [
      data.terraform_remote_state.cluster.outputs.eks_cluster_sg_id
    ]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_tls"
  }
}

# Mount to the subnets that will be using this efs volume
# Also attach sg's to restrict access to this volume
resource "aws_efs_mount_target" "efs_data-app01" {
  file_system_id = aws_efs_file_system.efs_data.id
  subnet_id      = module.vpc.private_app_subnet_01

  security_groups = [
    aws_security_group.allow_eks_cluster.id
  ]
}

resource "aws_efs_mount_target" "efs_data-app02" {
  file_system_id = aws_efs_file_system.efs_data.id
  subnet_id      = module.vpc.private_app_subnet_02

  security_groups = [
    aws_security_group.allow_eks_cluster.id
  ]
}
  3. Create a persistent volume in Kubernetes that references the EFS volume
data "terraform_remote_state" "csi" {
  backend = "s3"
  config = {
    bucket  = "xva-${var.account_type}-terraform-${var.region_code}"
    key     = "${var.environment}/efs/driver/terraform.tfstate"
    region  = var.region
    profile = var.profile
  }
}
resource "kubernetes_persistent_volume" "efs_data" {
  metadata {
    name = "pv-efsdata"

    labels = {
        app = "example"
    }
  }

  spec {
    access_modes = ["ReadOnlyMany"]

    capacity = {
      storage = "25Gi"
    }

    volume_mode                      = "Filesystem"
    persistent_volume_reclaim_policy = "Retain"
    storage_class_name               = data.terraform_remote_state.csi.outputs.storage_name

    persistent_volume_source {
      csi {
        driver        = data.terraform_remote_state.csi.outputs.csi_name
        volume_handle = aws_efs_file_system.efs_data.id
        read_only    = true
      }
    }
  }
}
  4. Then create the deployment for Fargate, with a pod that mounts the EFS volume
data "terraform_remote_state" "efs_data_volume" {
  backend = "s3"
  config = {
    bucket  = "xva-${var.account_type}-terraform-${var.region_code}"
    key     = "${var.environment}/efs/volume/terraform.tfstate"
    region  = var.region
    profile = var.profile
  }
}
resource "kubernetes_persistent_volume_claim" "efs_data" {
  metadata {
    name      = "pv-efsdata-claim-${var.side}"
    namespace = var.side
  }

  spec {
    access_modes       = ["ReadOnlyMany"]
    storage_class_name =  data.terraform_remote_state.csi.outputs.storage_name
    resources {
      requests = {
        storage = "25Gi"
      }
    }
    volume_name = data.terraform_remote_state.efs_data_volume.outputs.volume_name
  }
}

resource "kubernetes_deployment" "example" {
  timeouts {
    create = "3m"
    update = "4m"
    delete = "2m"
  }

  metadata {
    name      = "deployment-example"
    namespace = var.side

    labels = {
      app      = "example"
      platform = "fargate"
      subnet   = "app"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "example"
      }
    }

    template {
      metadata {
        labels = {
          app      = "example"
          platform = "fargate"
          subnet   = "app"
        }
      }

      spec {
        volume {
          name = "efs-data-volume"
          
          persistent_volume_claim {
            claim_name = kubernetes_persistent_volume_claim.efs_data.metadata[0].name
            read_only  = true
          }
        }

        container {
          image = "${var.nexus_docker_endpoint}/example:${var.docker_tag}"
          name  = "example"

          env {
            name  = "environment"
            value = var.environment
          }
          env {
            name = "dockertag"
            value = var.docker_tag
          }

          volume_mount {
            name = "efs-data-volume"
            read_only = true
            mount_path = "/appconf/"
          }

          # liveness_probe {
          #   http_get {
          #     path = "/health"
          #     port = 443
          #   }

          #   initial_delay_seconds = 3
          #   period_seconds        = 3
          # }

          port {
            container_port = 443
          }
        }
      }
    }
  }
}

It can see the persistent volume in Kubernetes, I can see that it has been claimed, and I can even see it attempting to mount the volume in the pod logs. However, when describing the pod, I inevitably always see the following error:

Volumes:
  efs-data-volume:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   pv-efsdata-claim-pre
    ReadOnly:    true
...
...
Events:
  Type     Reason       Age                  From                                                       Message
  ----     ------       ----                 ----                                                       -------
  Warning  FailedMount  11m (x629 over 23h)  kubelet, <redacted-fargate-endpoint>  Unable to attach or mount volumes: unmounted volumes=[efs-data-volume], unattached volumes=[efs-data-volume]: timed out waiting for the condition
  Warning  FailedMount  47s (x714 over 23h)  kubelet, <redacted-fargate-endpoint>  MountVolume.SetUp failed for volume "pv-efsdata" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = InvalidArgument desc = Volume capability not supported

[Comments]:

  • The first correction was to change `ReadOnlyMany` to something else. Apparently Fargate and EFS do not support read-only mounts, which is silly. I still have issues mounting the volume (with the yamls above); the most interesting warning is: `Output: Failed to start amazon-efs-mount-watchdog, unrecognized init system "supervisord"`

Tags: kubernetes terraform amazon-eks aws-fargate amazon-efs


[Solution 1]:

I finally did it. I have successfully mounted an EFS volume to a Fargate pod (after almost 6 days)! I was able to get the direction I needed from this closed GitHub issue: https://github.com/aws/containers-roadmap/issues/826

In the end, I used this module to build my EKS cluster: https://registry.terraform.io/modules/cloudposse/eks-cluster/aws/0.29.0?tab=outputs

If you use its "security_group_id" output, what you get is the "Additional Security Group". In my experience that one is completely useless on AWS; I am not sure why it even exists when you cannot do anything with it. The security group I actually needed was the "Cluster Security Group". Once I added the "Cluster Security Group" ID to the port 2049 ingress rule on the security group attached to the EFS mount targets, BAM! I successfully mounted the EFS volume to the deployed pod.
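In Terraform, one way to get at the cluster security group (rather than the additional one) is the `aws_eks_cluster` data source, which exposes it under `vpc_config[0].cluster_security_group_id`. A minimal sketch, assuming the cluster name is available from remote state as in the question; `efs_nfs_from_cluster` and the referenced security group names are just illustrative:

```hcl
# Look up the EKS cluster to obtain its *cluster* security group
# (not the "additional" security group the module outputs).
data "aws_eks_cluster" "this" {
  name = data.terraform_remote_state.cluster.outputs.eks_cluster_name
}

# Allow NFS (2049/tcp) from the cluster security group into the
# security group attached to the EFS mount targets.
resource "aws_security_group_rule" "efs_nfs_from_cluster" {
  type                     = "ingress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = aws_security_group.allow_eks_cluster.id
  source_security_group_id = data.aws_eks_cluster.this.vpc_config[0].cluster_security_group_id
}
```

This keeps the rule as a separate `aws_security_group_rule` resource, so it can be added without rewriting the inline `ingress` block on the existing security group.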

The other important change was switching the persistent volume's access mode to ReadWriteMany, since Fargate apparently does not support ReadOnlyMany.
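For reference, a trimmed sketch of the corrected persistent volume, with the remote-state lookups from the question replaced by literal values for readability (the storage class and driver names match the ones defined in step 1):

```hcl
resource "kubernetes_persistent_volume" "efs_data" {
  metadata {
    name = "pv-efsdata"
  }

  spec {
    # ReadWriteMany instead of ReadOnlyMany: Fargate + EFS rejects
    # read-only access modes with "Volume capability not supported".
    access_modes = ["ReadWriteMany"]

    capacity = {
      storage = "25Gi"
    }

    persistent_volume_reclaim_policy = "Retain"
    storage_class_name               = "efs-sc"

    persistent_volume_source {
      csi {
        driver        = "efs.csi.aws.com"
        volume_handle = aws_efs_file_system.efs_data.id
      }
    }
  }
}
```

The matching PVC needs the same `access_modes = ["ReadWriteMany"]`, since a claim only binds to a volume whose access modes satisfy it.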

[Discussion]:
