Why does Terraform force replacement of the Aurora global database?

Stack Overflow user
Asked 2020-11-18 12:13:33
1 answer · 1.2K views · 0 followers · 0 votes

Terraform and Terraform provider versions

https://releases.hashicorp.com/terraform/0.13.5/terraform_0.13.5_linux_amd64.zip

  • hashicorp/aws v3.15.0 installed

Affected resources

  • aws_rds_cluster
  • aws_rds_cluster_instance

Terraform configuration files

    # inside ./modules/rds/main.tf

    terraform {
      required_providers {
        aws = {
          source = "hashicorp/aws"
        }
      }
      required_version = "~> 0.13"
    }
    
    provider "aws" {
      alias = "primary"
    }
    
    provider "aws" {
      alias = "dr"
    }
    
    locals {
      region_tags      = ["primary", "dr"]
      db_name          = "${var.project_name}-${var.stage}-db"
      db_cluster_0     = "${local.db_name}-cluster-${local.region_tags[0]}"
      db_cluster_1     = "${local.db_name}-cluster-${local.region_tags[1]}"
      db_instance_name = "${local.db_name}-instance"
    }
    
    resource "aws_rds_global_cluster" "global_db" {
      global_cluster_identifier = "${var.project_name}-${var.stage}"
      database_name             = "${var.project_name}${var.stage}db"
      engine                    = "aurora-mysql"
      engine_version            = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
      // force_destroy             = true
    }
    
    resource "aws_rds_cluster" "primary_cluster" {
      depends_on         = [aws_rds_global_cluster.global_db]
      provider           = aws.primary
      cluster_identifier = "${local.db_name}-cluster-${local.region_tags[0]}"
    
      # the database name does not allow dashes:
      database_name = "${var.project_name}${var.stage}db"
    
      # The engine and engine_version must be repeated in aws_rds_global_cluster,
      # aws_rds_cluster, and aws_rds_cluster_instance to 
      # avoid "Value for engine should match" error
      engine                    = "aurora-mysql"
      engine_version            = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
      engine_mode               = "global"
      global_cluster_identifier = aws_rds_global_cluster.global_db.id
    
      # backtrack and multi-master not supported by Aurora Global.
    
      master_username         = var.username
      master_password         = var.password
      backup_retention_period = 5
      preferred_backup_window = "07:00-09:00"
      db_subnet_group_name    = aws_db_subnet_group.primary.id
    
      # We must have these values, because destroying or rolling back requires them
      skip_final_snapshot       = true
      final_snapshot_identifier = "ci-aurora-cluster-backup"
    
      tags = {
        Name      = local.db_cluster_0
        Stage     = var.stage
        CreatedBy = var.created_by
      }
    }
    
    resource "aws_rds_cluster_instance" "primary" {
      depends_on           = [aws_rds_global_cluster.global_db]
      provider             = aws.primary
      cluster_identifier   = aws_rds_cluster.primary_cluster.id
      engine               = "aurora-mysql"
      engine_version       = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
      instance_class       = "db.${var.instance_class}.${var.instance_size}"
      db_subnet_group_name = aws_db_subnet_group.primary.id
    
      tags = {
        Name      = local.db_instance_name
        Stage     = var.stage
        CreatedBy = var.created_by
      }
    }
    
    resource "aws_rds_cluster" "dr_cluster" {
      depends_on         = [aws_rds_cluster_instance.primary, aws_rds_global_cluster.global_db]
      provider           = aws.dr
      cluster_identifier = "${local.db_name}-cluster-${local.region_tags[1]}"
    
      # the database name must not be specified on secondary (replica) clusters
    
      # The engine and engine_version must be repeated in aws_rds_global_cluster,
      # aws_rds_cluster, and aws_rds_cluster_instance to 
      # avoid "Value for engine should match" error
      engine                    = "aurora-mysql"
      engine_version            = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
      engine_mode               = "global"
      global_cluster_identifier = aws_rds_global_cluster.global_db.id
    
      # backtrack and multi-master not supported by Aurora Global.
    
      # cannot specify username/password in cross-region replication cluster:
      backup_retention_period = 5
      preferred_backup_window = "07:00-09:00"
      db_subnet_group_name    = aws_db_subnet_group.dr.id
    
      # We must have these values, because destroying or rolling back requires them
      skip_final_snapshot       = true
      final_snapshot_identifier = "ci-aurora-cluster-backup"
    
      tags = {
        Name      = local.db_cluster_1
        Stage     = var.stage
        CreatedBy = var.created_by
      }
    }
    
    resource "aws_rds_cluster_instance" "dr_instance" {
      depends_on           = [aws_rds_cluster_instance.primary, aws_rds_global_cluster.global_db]
      provider             = aws.dr
      cluster_identifier   = aws_rds_cluster.dr_cluster.id
      engine               = "aurora-mysql"
      engine_version       = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
      instance_class       = "db.${var.instance_class}.${var.instance_size}"
      db_subnet_group_name = aws_db_subnet_group.dr.id
    
      tags = {
        Name      = local.db_instance_name
        Stage     = var.stage
        CreatedBy = var.created_by
      }
    }
    
    resource "aws_db_subnet_group" "primary" {
      name       = "${local.db_name}-subnetgroup"
      subnet_ids = var.subnet_ids
      provider   = aws.primary
    
      tags = {
        Name      = "primary_subnet_group"
        Stage     = var.stage
        CreatedBy = var.created_by
      }
    }
    
    resource "aws_db_subnet_group" "dr" {
      provider   = aws.dr
      name       = "${local.db_name}-subnetgroup"
      subnet_ids = var.dr_subnet_ids
    
      tags = {
        Name      = "dr_subnet_group"
        Stage     = var.stage
        CreatedBy = var.created_by
      }
    }
    
    resource "aws_rds_cluster_parameter_group" "default" {
      name        = "rds-cluster-pg"
      family      = "aurora-mysql${var.mysql_version}"
      description = "RDS default cluster parameter group"
      parameter {
        name  = "character_set_server"
        value = "utf8"
      }
      parameter {
        name  = "character_set_client"
        value = "utf8"
      }
      parameter {
        name         = "aurora_parallel_query"
        value        = "ON"
        apply_method = "pending-reboot"
      }
    }

In ./modules/sns/main.tf, this is the resource I added before running terraform apply from the ./modules directory:

    resource "aws_sns_topic" "foo_topic" {
      name = "foo-${var.stage}-${var.topic_name}"
      tags = {
        Name      = "foo-${var.stage}-${var.topic_name}"
        Stage     = var.stage
        CreatedBy = var.created_by
        CreatedOn = timestamp()
      }
    }

./modules/main.tf

    terraform {
      backend "s3" {
        bucket = "terraform-remote-state-s3-bucket-unique-name"
        key    = "terraform.tfstate"
        region = "us-east-2"
        dynamodb_table = "TerraformLockTable"
      }
    }

    provider "aws" {
      alias  = "primary"
      region = var.region
    }

    provider "aws" {
      alias  = "dr"
      region = var.dr_region
    }


    module "vpc" {
      stage  = var.stage
      source = "./vpc"
      providers = {
        aws = aws.primary
      }
    }
    module "dr_vpc" {
      stage  = var.stage
      source = "./vpc"
      providers = {
        aws = aws.dr
      }
    }

    module "vpc_security_group" {
      source = "./vpc_security_group"
      vpc_id = module.vpc.vpc_id
      providers = {
        aws = aws.primary
      }
    }


    module "rds" {
      source        = "./rds"
      stage         = var.stage
      created_by    = var.created_by
      vpc_id        = module.vpc.vpc_id
      subnet_ids    = [module.vpc.subnet_a_id, module.vpc.subnet_b_id, module.vpc.subnet_c_id]
      dr_subnet_ids = [module.dr_vpc.subnet_a_id, module.dr_vpc.subnet_b_id, module.dr_vpc.subnet_c_id]
      region        = var.region
      username      = var.rds_username
      password      = var.rds_password

      providers = {
        aws.primary = aws.primary
        aws.dr      = aws.dr
      }
    }

    module "sns_start" {
      stage      = var.stage
      source     = "./sns"
      topic_name = "start"
      created_by = var.created_by
    }

./modules/variables.tf

variable "region" {
  default = "us-east-2"
}

variable "dr_region" {
  default = "us-west-2"
}
variable "service" {
  type        = string
  default     = "foo-back"
  description = "service to match what serverless framework deploys"
}

variable "stage" {
  type        = string
  default     = "sandbox"
  description = "The stage to deploy: sandbox, dev, qa, uat, or prod"

  validation {
    condition     = can(regex("^(sandbox|dev|qa|uat|prod)$", var.stage)) # anchored, so e.g. "production" is rejected
    error_message = "The stage value must be a valid stage: sandbox, dev, qa, uat, or prod."
  }
}

variable "created_by" {
  description = "Company or vendor name followed by the username part of the email address"
}

variable "rds_username" {
  description = "Username for rds"
}

variable "rds_password" {
  description = "Password for rds"
}

./modules/sns/main.tf

resource "aws_sns_topic" "foo_topic" {
  name = "foo-${var.stage}-${var.topic_name}"
  tags = {
    Name      = "foo-${var.stage}-${var.topic_name}"
    Stage     = var.stage
    CreatedBy = var.created_by
    CreatedOn = timestamp()
  }
}

./modules/sns/output.tf

output "sns_topic_arn" {
  value = aws_sns_topic.foo_topic.arn
}

Debug output

Both outputs have had keys, names, account IDs, etc. redacted:

The state before running terraform apply, along with the terraform plan output: https://gist.github.com/ystoneman/5c842769c28e1ae5969f9aaff1556b37

Expected behavior

Everything in ./modules/main.tf had already been created; the only addition is the SNS module, so only the SNS module should be created.

Actual behavior

Instead, the RDS resources are also affected: terraform "claims" that engine_mode has changed from provisioned to global, even though the console says it is already global:

The plan output also notes that cluster_identifier is only known after apply and therefore forces replacement. However, I believe the cluster_identifier is required so that the aws_rds_cluster knows it belongs to the aws_rds_global_cluster, and the aws_rds_cluster_instance knows which aws_rds_cluster it belongs to.

Steps to reproduce

  • comment out module "sns_start"

  • cd ./modules

  • terraform apply (the state file I included was captured after this step)

  • uncomment module "sns_start"

  • terraform apply (this is where the debug output comes from)

Important factoids

This problem occurs whether I run it from my Mac or in AWS CodeBuild.

References

AWS Terraform tried to destroy and rebuild RDS cluster also seems to touch on this, but it is not specific to global clusters, and you do need the identifiers so that instances and clusters know what they belong to.


1 Answer

Stack Overflow user

Accepted answer

Answered 2020-11-18 20:41:02

You appear to be using an outdated version of the aws provider, and the engine_mode you specified is incorrect. There is a bug ticket about this: https://github.com/hashicorp/terraform-provider-aws/issues/16088

It was fixed in version 3.15.0, which you can pin like this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.15.0"
    }
  }
  required_version = "~> 0.13"
}

Additionally, you should remove the engine_mode attribute from your Terraform configuration entirely.
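Putting both fixes together, the primary cluster from the question would look roughly like this. This is a trimmed sketch, not a drop-in replacement: the variable and resource names are taken from the question's config, and every argument not shown stays exactly as it was.

```hcl
# Sketch: the question's primary cluster with engine_mode dropped.
# With aws provider >= 3.15.0, an Aurora global cluster member uses the
# default "provisioned" engine mode; no engine_mode argument is needed.
resource "aws_rds_cluster" "primary_cluster" {
  depends_on         = [aws_rds_global_cluster.global_db]
  provider           = aws.primary
  cluster_identifier = "${local.db_name}-cluster-${local.region_tags[0]}"
  database_name      = "${var.project_name}${var.stage}db"

  engine         = "aurora-mysql"
  engine_version = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
  # engine_mode = "global"  <- removed; this is what triggered the
  # provisioned -> global diff and the forced replacement
  global_cluster_identifier = aws_rds_global_cluster.global_db.id

  # ... remaining arguments unchanged from the question ...
}
```

After bumping the version constraint, run `terraform init -upgrade` so Terraform actually downloads the newer provider allowed by the new constraint before you plan again.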

2 votes
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/64892899
