【Question Title】: Files are not archived in Terraform before being uploaded to GCP
【Posted】: 2023-05-13 13:05:02
【Question Description】:

Despite the depends_on directive, the zip does not appear to be created before Terraform tries to put it into the bucket. Judging by the pipeline output, the files are simply not archived before the upload to the bucket is attempted. Both source files (index.js and package.json) exist.

resource "google_storage_bucket" "cloud-functions" {
  project       = var.project-1-id
  name          = "${var.project-1-id}-cloud-functions"
  location      = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name       = "start_instance.zip"
  bucket     = google_storage_bucket.cloud-functions.name
  source     = "${path.module}/start_instance.zip"
  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }
}
Pipeline output:

 Terraform has been successfully initialized!
 $ terraform apply -input=false "planfile"
 google_storage_bucket_object.stop_instance: Creating...
 google_storage_bucket_object.start_instance: Creating...
 Error: open ./start_instance.zip: no such file or directory
   on cloud_functions.tf line 41, in resource "google_storage_bucket_object" "start_instance":
   41: resource "google_storage_bucket_object" "start_instance" {

Logs:

 2020-11-18T13:02:56.796Z [DEBUG] plugin.terraform-provider-google_v3.40.0_x5: 2020/11/18 13:02:56 [WARN] Failed to read source file "./start_instance.zip". Cannot compute md5 hash for it.
 2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.stop_instance, but we are tolerating it because it is using the legacy plugin SDK.
     The following problems may be the cause of any confusing errors from downstream operations:
       - .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
 2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.start_instance, but we are tolerating it because it is using the legacy plugin SDK.
     The following problems may be the cause of any confusing errors from downstream operations:
       - .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)

【Question Discussion】:

  • Please set the variable TF_LOG to DEBUG and capture the output at TF_LOG_PATH (see terraform.io/docs/internals/debugging.html); a minimal sketch is shown after this list.
  • @Iñigo I got as far as [WARN] ReferenceTransformer: reference not found: "data.archive_file.start_instance#destroy"
  • I have added some log output above.
  • What is your Terraform version? Version 0.11.11 has a known issue.
  • Please review the Terraform Official Documentation. Also see this GitHub issue post, which may help you.
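
For reference, capturing such a debug log comes down to two environment variables (a sketch, assuming a POSIX shell; the log file path is arbitrary):

 # Enable verbose core/provider logging and redirect it to a file
 export TF_LOG=DEBUG
 export TF_LOG_PATH=./terraform-debug.log
 terraform plan -input=false -out=planfile
 terraform apply -input=false planfile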

Tags: google-cloud-platform terraform bucket


【Solution 1】:

I had the same problem with a GitLab CI/CD pipeline. After some digging, and based on this discussion, I found that in this setup the plan and apply stages run in different containers, and the archiving step is executed during the plan stage.

One workaround is to use a null_resource as a dummy trigger and force the archive_file to depend on it, so that it is executed during the apply stage.

resource "null_resource" "dummy_trigger" {
  # timestamp() yields a new value on every run, so this resource is
  # replaced on each apply and anything depending on it is deferred
  # to the apply stage.
  triggers = {
    timestamp = timestamp()
  }
}

resource "google_storage_bucket" "cloud-functions" {
  project       = var.project-1-id
  name          = "${var.project-1-id}-cloud-functions"
  location      = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name       = "start_instance.zip"
  bucket     = google_storage_bucket.cloud-functions.name
  source     = "${path.module}/start_instance.zip"
  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }
  
  depends_on = [
    null_resource.dummy_trigger,
  ]
}
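
Note that timestamp() changes on every run, so the dummy trigger is replaced on each apply; that pending change is what defers the archive_file data source read, and hence the creation of the zip, to the apply stage. Alternatively, since the root cause is that plan and apply run in different containers, running both stages in the same CI job sidesteps the problem entirely. A minimal sketch, assuming the job can run the two commands back to back on one filesystem:

 # Plan and apply in the same job/container, so the zip written by
 # data.archive_file during plan is still on disk at apply time.
 terraform plan -input=false -out=planfile
 terraform apply -input=false planfile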

【Discussion】:
