Migrating Terraform State Files to Workspaces in an AWS S3 Bucket

Just as I did with GCP a few weeks ago, I needed to circle back and migrate my state files to a cloud storage bucket. This was done mainly to centralize the storage location and thus lower the chance of state file loss or corruption.

Previously, I’d been separating the state files using the -state parameter, with a different input file and state file for each environment, like this:

terraform apply -var-file=env1.tfvars -state=env1.tfstate
terraform apply -var-file=env2.tfvars -state=env2.tfstate
terraform apply -var-file=env3.tfvars -state=env3.tfstate

To instead store the state files in an AWS S3 bucket, create a backend.tf file with this content:

terraform {
  backend "s3" {
    bucket               = "my-bucket-name"
    workspace_key_prefix = "tf-state"
    key                  = "terraform.tfstate"
    region               = "us-west-1"
  }
}

This will use a bucket named ‘my-bucket-name’ in AWS region us-west-1. Each workspace will store its state file in tf-state/<WORKSPACE_NAME>/terraform.tfstate

Note: if workspace_key_prefix is not specified, the directory ‘env:’ will be created and used.
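
For example, with the default prefix, the env1 workspace’s state would land at s3://my-bucket-name/env:/env1/terraform.tfstate instead of tf-state/env1/terraform.tfstate.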

Since the backend has changed, I have to run this:

terraform init -reconfigure

I then have to copy the local state files to the locations the workspaces will be using. This is most easily done with the AWS CLI, which creates the intermediate key prefixes (the “sub-directories”) automatically if they don’t exist.

aws s3 cp env1.tfstate s3://my-bucket-name/tf-state/env1/terraform.tfstate
aws s3 cp env2.tfstate s3://my-bucket-name/tf-state/env2/terraform.tfstate
aws s3 cp env3.tfstate s3://my-bucket-name/tf-state/env3/terraform.tfstate
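
With more environments, a small shell loop does the same thing (assuming the local files are all named <env>.tfstate, as above):

for env in env1 env2 env3; do
    aws s3 cp "${env}.tfstate" "s3://my-bucket-name/tf-state/${env}/terraform.tfstate"
done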

I then create a workspace for each state file:

$ terraform workspace new env1
Created and switched to workspace "env1"!
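
terraform workspace list confirms the workspaces as they’re created; the asterisk marks the currently selected one:

$ terraform workspace list
  default
* env1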

Now I’m ready to run the applies and verify that the state matches the input:

$ terraform apply -var-file=env1.tfvars

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

$ terraform workspace new env2
Created and switched to workspace "env2"!

$ terraform apply -var-file=env2.tfvars

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
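
The same two steps repeat for env3:

$ terraform workspace new env3
$ terraform apply -var-file=env3.tfvars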

Doing it in the opposite order

An alternate way to do this migration is to enable workspaces first, then migrate the backend to S3.

$ terraform workspace new env1
Created and switched to workspace "env1"!

$ mv env1.tfstate terraform.tfstate.d/env1/terraform.tfstate

$ terraform apply -var-file=env1.tfvars

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Then create the backend.tf file and run terraform init -reconfigure. You’ll then be prompted to move the state files to S3:

$ terraform init -reconfigure
Initializing modules...

Initializing the backend...
Do you want to migrate all workspaces to "s3"?

Enter a value: yes

$ terraform apply -var-file=env1.tfvars

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Either way, the state files have to be individually migrated to the storage bucket.
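
A recursive listing of the bucket is a quick way to confirm every workspace’s state file landed where the backend expects it:

aws s3 ls --recursive s3://my-bucket-name/tf-state/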


AWS or GCP IPSec Tunnels with BGP routing on a FortiGate running software version 6.x

To use BGP routing on an AWS or GCP VPN connection, the tunnel interface needs to have its IP address assigned as a /32 and then the remote IP specified:

config system interface
    edit "GCP"
        set vdom "root"
        set ip 169.254.0.2 255.255.255.255
        set type tunnel
        set remote-ip 169.254.0.1 255.255.255.255
        set interface "wan1"
    next
end
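
Before moving on to BGP, it’s worth confirming the tunnel actually passes traffic. Pinging the peer’s inner address, sourced from the tunnel’s local address, is a quick check (standard FortiOS CLI commands):

execute ping-options source 169.254.0.2
execute ping 169.254.0.1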

BGP can be configured in the GUI under Network -> BGP in most cases, but the CLI has additional options. Here’s an example config for a local ASN of 65000 peering with 169.254.0.1 in ASN 64512, announcing the 192.168.1.0/24 prefix:

config router bgp
    set as 65000
    set router-id 192.168.1.254
    set keepalive-timer 10
    set holdtime-timer 30
    set scan-time 15
    config neighbor
       edit "169.254.0.1"
           set remote-as 64512
       next
    end
    config network
        edit 1
            set prefix 192.168.1.0 255.255.255.0
        next
    end
end
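
Once the tunnel is up, the BGP session can be checked from the CLI (these get commands are standard in FortiOS 6.x; substitute your own peer IP):

get router info bgp summary
get router info bgp neighbors 169.254.0.1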


“No Vlan association for STP Interface Member 1.0” when upgrading F5 BigIP in AWS from 13.1.1 to 13.1.3.2

After upgrading several of our BigIP-VEs in AWS from 13.1.1 to 13.1.3.2 without issue, I had big problems with a pair this afternoon. The first one took forever to boot up, and when it did, it was complaining about an incomplete configuration.

I’ve seen this before and know it usually means the upgrade couldn’t migrate a certain section of the configuration file, and rather than ignoring that section, it refuses to load anything at all. This is what was showing up in /var/log/ltm and on the console:

May 4 17:47:53 bigip warning mcpd[18134]: 01070932:4: Pending local Interface from cluster.: 1.0, configuration ignored
May 4 17:47:53 bigip warning mcpd[18134]: 01070932:4: Pending Interface: 1.0, configuration ignored
May 4 17:47:53 bigip err mcpd[18134]: 01070523:3: No Vlan association for STP Interface Member 1.0.
May 4 17:47:53 bigip emerg load_config_files: "/usr/bin/tmsh -n -g load sys config partitions all base " - failed. -- 01070523:3: No Vlan association for STP Interface Member 1.0. Error: failed to reset strict operations; disconnecting from mcpd. Will reconnect on next command.

The problem was the original 13.1.1 configuration file had this:

net interface 1.0 {
    media-fixed 10000T-FD
}

This is really an error from the get-go, since interface 1.0 is the eth0 / management interface and shouldn’t be in the “net interface” section.

My workaround was to reset to the factory config, then re-create the Self IPs and re-sync the cluster. Alternately, the configuration file could be modified to simply remove the offending lines.
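
If you go the config-editing route, the stanza lives in /config/bigip_base.conf on my systems. After deleting the "net interface 1.0 { ... }" block, reload with the same command the boot process failed on:

tmsh load sys config partitions all base
tmsh load sys config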

Since I did not see this configuration on any other F5 BigIP-VEs, I suspect F5 mistakenly inserted it into the 13.1.1-0.0.4 AMI that I’d launched last summer.