
How I Tamed Terraform: Moving State from Git to Cloudflare R2

Published on 2026-04-10

When I first started working on the project’s infrastructure, storing terraform.tfstate locally (or, I confess, dropping it into Git) seemed like a reasonable solution. But as soon as colleagues joined the work, I quickly realized: local state is a powder keg.

One person forgot to do a git pull, another applied changes from their machine, and suddenly I was spending half a day “resolving” infrastructure conflicts. At some point it became clear: I needed remote state.

For the backend I chose Cloudflare R2. The reason is simple: it’s S3-compatible storage with zero egress fees, and it took me literally a few minutes to set up.
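Creating the bucket itself is a one-liner if you already use Wrangler (the bucket name below is an assumption, substitute your own):

```shell
# Sketch: create the R2 bucket that will hold the Terraform state.
# Assumes wrangler is installed and authenticated against your Cloudflare account.
wrangler r2 bucket create my-infrastructure-state
```

The access keys come from an R2 API token (created under R2 in the Cloudflare dashboard); that token later plays the role of the AWS credentials Terraform expects.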


Why I abandoned local state

The main problem with a local file is the lack of synchronization and control.

Risk of overwriting someone else’s changes

If two people run terraform apply at the same time, Terraform has no way to prevent the race without a lock: whoever writes the state last wins, silently overwriting the other person’s changes.

Secrets in the clear

Passwords, tokens, and keys end up in terraform.tfstate, often in plaintext. Keeping that file in Git is frankly a bad idea.
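For illustration, here is roughly what that looks like inside a state file (a contrived excerpt, not from my real state):

```json
{
  "resources": [{
    "type": "postgresql_role",
    "instances": [{
      "attributes": {
        "name": "app",
        "password": "s3cr3t-in-plain-text"
      }
    }]
  }]
}
```

Anyone with read access to the repository gets the password for free.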

Inability to do proper CI/CD

Without remote state, each Terraform run in CI sees its “own” world, which makes automation pointless.


How I set this up

Since R2 supports the S3 API, I used the standard s3 backend. But because this is not AWS, I had to disable a number of Terraform checks.

I moved the configuration into a separate file backend.hcl so as not to clutter the main code:

bucket                      = "my-infrastructure-state"
key                         = "prod/main.tfstate"
region                      = "auto"

# Important for R2
use_path_style              = true

# Disable AWS-specific checks
skip_credentials_validation = true
skip_metadata_api_check     = true
skip_region_validation      = true
skip_requesting_account_id  = true

# Sometimes required depending on the Terraform version
skip_s3_checksum            = true

endpoints = {
  s3 = "https://<YOUR_ACCOUNT_ID>.r2.cloudflarestorage.com"
}
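Note that backend.hcl deliberately contains no credentials. The s3 backend reads them from the standard AWS environment variables, so I export the R2 API token’s key pair before running init (the values below are placeholders):

```shell
# R2 API token credentials, passed to Terraform via the standard AWS variables.
# The values are placeholders: substitute your own token's key pair.
export AWS_ACCESS_KEY_ID="<R2_ACCESS_KEY_ID>"
export AWS_SECRET_ACCESS_KEY="<R2_SECRET_ACCESS_KEY>"
```

This also keeps the credentials out of both Git and the state configuration itself.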

In the main Terraform code I left a minimal block:

terraform {
  backend "s3" {}
}

How I addressed state locking

The classic approach for the S3 backend is to use DynamoDB for locks. But in the case of Cloudflare R2 I don’t have that option, because it’s not AWS.

I tried using:

use_lockfile = true

This parameter appeared in newer versions of Terraform (starting with 1.10) and allows using a file in the bucket as a locking mechanism.

But in practice I realized an important thing:

it’s not a full replacement for DynamoDB locking.

What this means in practice

  • locking works on a best-effort basis
  • there is no guarantee of full atomicity
  • theoretically races are still possible

How I cope with it

I didn’t rely only on the lockfile and adopted some simple rules for myself:

  • I don’t run terraform apply in parallel with others
  • I try to move apply into CI/CD
  • I treat locking in R2 as an additional protection, not a guarantee

If I were running critical infrastructure, I would implement a strict CI-only pipeline with an external locking mechanism.
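As a sketch of what that could look like: GitHub Actions can serialize applies with a concurrency group, which effectively puts an external queue-based lock in front of Terraform (the workflow names, steps, and secret names here are assumptions, not my actual pipeline):

```yaml
name: terraform-apply
on:
  push:
    branches: [main]

# Runs in this group are serialized; a new push waits for the running apply.
concurrency:
  group: terraform-prod
  cancel-in-progress: false

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -backend-config=backend.hcl
      - run: terraform apply -auto-approve
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
```

With apply confined to one pipeline, the lockfile in R2 only has to cover the rare manual run.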


How I migrated

The migration took one command:

terraform init \
  -backend-config=backend.hcl \
  -migrate-state

Terraform itself:

  1. found the local terraform.tfstate
  2. asked for confirmation
  3. moved it to R2

After that I deleted the local file and added it to .gitignore.
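The .gitignore additions are minimal:

```
# Terraform state now lives in R2, never in Git
terraform.tfstate
terraform.tfstate.backup

# Local provider/plugin cache
.terraform/
```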


What I got as a result

Peace of mind

Now the state is not stored on my machine.

I also enabled versioning on the R2 bucket to be able to roll back.


Team collaboration

Even without perfect locking:

  • I have a single source of truth
  • questions like “who has the up-to-date state” disappeared

Security

  • secrets are no longer in Git
  • access is controlled through Cloudflare API tokens
  • permissions can be restricted

Practical takeaways

If I set up a Terraform backend on R2 again, I will definitely:

  • enable versioning
  • separate environments (prod/stage)
  • avoid running apply from different machines
  • move apply to CI/CD whenever possible
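Separating environments is mostly a matter of pointing each one at its own state key. A sketch, assuming a stage.hcl next to the prod config (the file names are my own convention):

```hcl
# stage.hcl: identical to the prod backend config except for the key
bucket = "my-infrastructure-state"
key    = "stage/main.tfstate"
region = "auto"
# ...same skip_* flags and R2 endpoint as in backend.hcl...
```

Then terraform init -backend-config=stage.hcl in the stage working directory picks it up, and prod and stage can no longer stomp on each other’s state.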

Conclusion

If I continue to store state locally or in Git — I’m knowingly creating problems for myself.

Cloudflare R2 gave me a quick way to move to remote state without AWS. Yes, it has limitations, especially around locking, but it’s still a huge step forward.

For me it’s a case where a small infrastructure change actually saved time, nerves, and eliminated a class of problems that used to seem “normal”.
