<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[PrasitSTK | AWS, AI & Cloud Engineering]]></title><description><![CDATA[Hey there! 👋🏼 I'm Prasit, aka "O". As a Solution Architect with a background in DevOps, I'm writing about building innovative solutions. Join me as we explore the exciting world of tech together! 🚀]]></description><link>https://prasitstk.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1769771863463/b5ae217c-21db-4c37-a8d9-c2d12ca414e1.png</url><title>PrasitSTK | AWS, AI &amp; Cloud Engineering</title><link>https://prasitstk.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 17 May 2026 02:04:24 GMT</lastBuildDate><atom:link href="https://prasitstk.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Store Terraform state remotely on S3 bucket]]></title><description><![CDATA[In my previous post, I mentioned a bit about the better way to store your Terraform state file. Instead of keeping it on your own local computer, it's better to keep it remotely. 
This is especially us]]></description><link>https://prasitstk.com/store-terraform-state-remotely-on-s3-bucket</link><guid isPermaLink="true">https://prasitstk.com/store-terraform-state-remotely-on-s3-bucket</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[DynamoDB]]></category><category><![CDATA[terraform-state]]></category><category><![CDATA[terraform remote backend]]></category><dc:creator><![CDATA[Prasit (O) Sutthikamolsakul]]></dc:creator><pubDate>Sat, 30 Mar 2024 13:27:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/629d42ff14a2e6466765cfb2/6aec3f1f-787f-4546-8606-f9fb7adb14c7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my previous post, I mentioned a bit about the better way to store your Terraform state file. Instead of keeping it on your own local computer, it's better to keep it remotely. This is especially useful when working with others on different computers.</p>
<p>In this blog post, I'll show you how to set up your Terraform remote state storage using Amazon S3 bucket and DynamoDB table within your existing Terraform project. These will help manage your Terraform state remotely alongside other project resources.</p>
<p>Let's dive into each step together! 🤿</p>
<h1>Overview 🌐</h1>
<p>This is a chicken-and-egg situation: we use Terraform to create the very backend resources in which its own state will be stored. To make this work, you have to use a two-step process:</p>
<ol>
<li><p>Write Terraform code to create an S3 bucket and a DynamoDB table, and deploy that code with a local state file (<code>local</code> backend).</p>
</li>
<li><p>Go back to the Terraform code, add an <code>s3</code> remote backend configuration that points to the newly created S3 bucket and DynamoDB table, and run <code>terraform init</code> again to move your local state file into the S3 bucket.</p>
</li>
</ol>
<p>Later, if you want to destroy the entire stack, including the S3 bucket and DynamoDB table that back the remote state, you need to run the two-step process in reverse:</p>
<ol>
<li><p>Go to the Terraform code, remove the backend configuration, and run <code>terraform init -migrate-state</code> to copy the Terraform state back to your local disk.</p>
</li>
<li><p>Run <code>terraform destroy</code> to delete the S3 bucket and DynamoDB table along with other resources from the entire stack.</p>
</li>
</ol>
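<p>The full round trip, in command form, looks roughly like this (a sketch only; the edits to the backend configuration happen between the commands):</p>
<pre><code class="language-bash"># Step 1: bootstrap with a local state file
terraform init
terraform apply                  # creates the S3 bucket and DynamoDB table

# Step 2: add the backend "s3" block to the code, then migrate the state
terraform init                   # prompts to copy the local state into S3

# ...work as usual...

# Reverse step 1: remove the backend "s3" block, then
terraform init -migrate-state    # copies the state back to terraform.tfstate

# Reverse step 2:
terraform destroy
</code></pre>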
<p>With this roadmap in mind, let's begin our exploration of each step in detail.</p>
<blockquote>
<p>NOTE: I have already developed Terraform configurations <a href="https://github.com/prasitstk/terraform-aws-practices-s3-backend">here</a> to illustrate the steps outlined above in detail. You can visit my repository, create a codespace based on it, and conduct your workshop there.</p>
</blockquote>
<h1>Migration 🚚</h1>
<p>In this section, I will explain how to modify the existing Terraform code project to transition the Terraform state information from the local state file to the S3 bucket, which will also be defined within the same project. Here's how:</p>
<h2>Step 1: Define &amp; Apply additional resources for backend</h2>
<p>First, in the Terraform code, I define an S3 bucket as a remote location to store the Terraform state file. Here are the resource blocks that I configure for this S3 bucket in <a href="https://github.com/prasitstk/terraform-aws-practices-s3-backend">my repository</a>:</p>
<pre><code class="language-hcl">########################
# Secure state storage #
########################

resource "aws_s3_bucket" "backend_bucket" {
  bucket = "terraform-backend-&lt;random-string&gt;"

  # After migrating the remote state back to a local file, the state object still exists in the backend bucket.
  # With `force_destroy = true`, `terraform destroy` can delete the backend bucket even if that state object remains.
  force_destroy = true

  tags = {
    Name = "S3 Remote Terraform State Store"
  }
}

resource "aws_s3_bucket_versioning" "backend_bucket_versioning" {
  bucket = aws_s3_bucket.backend_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "backend_bucket_lifecycle" {
  bucket = aws_s3_bucket.backend_bucket.id

  rule {
    id = "noncurrent-version-expiration-rule"

    noncurrent_version_expiration {
      # NOTE: The number of days an object is noncurrent before Amazon S3 can perform the associated action.
      noncurrent_days = 7

      # NOTE: The number of noncurrent versions Amazon S3 will retain.
      # newer_noncurrent_versions = 10
    }

    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "backend_bucket_sse" {
  bucket = aws_s3_bucket.backend_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256" # NOTE: Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3)
    }
  }
}

resource "aws_s3_bucket_public_access_block" "backend_bucket_public_access_block" {
  bucket = aws_s3_bucket.backend_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
</code></pre>
<p>I don't just define the S3 bucket with its default properties; I also enhance its security with additional layers of protection. Each aspect of this S3 bucket can be described as follows:</p>
<ul>
<li><p><code>resource "aws_s3_bucket" "backend_bucket"</code></p>
<ul>
<li><p><code>bucket = "terraform-backend-&lt;random-string&gt;"</code>: This specifies the name of the S3 bucket. Note that S3 bucket names must be <em>globally unique</em> among all AWS customers, so please choose the <code>&lt;random-string&gt;</code> wisely.</p>
</li>
<li><p><code>force_destroy = true</code>: This ensures that even if the remote state file still exists, running <code>terraform destroy</code> will remove the backend bucket.</p>
</li>
</ul>
</li>
<li><p><code>resource "aws_s3_bucket_versioning" "backend_bucket_versioning"</code>: This enables versioning on the S3 bucket, ensuring that every update to a file in the bucket creates a new version. This allows you to view older versions of the remote state file and revert to them if needed, serving as a useful fallback mechanism.</p>
</li>
<li><p><code>resource "aws_s3_bucket_lifecycle_configuration" "backend_bucket_lifecycle"</code>: This prevents too many noncurrent versions of the remote state file from being retained.</p>
</li>
<li><p><code>resource "aws_s3_bucket_server_side_encryption_configuration" "backend_bucket_sse"</code>: This ensures that your state files, along with any contained secrets, are always encrypted on disk when stored in the S3 bucket.</p>
</li>
<li><p><code>resource "aws_s3_bucket_public_access_block" "backend_bucket_public_access_block"</code>: This will block all public access to the S3 bucket. Since Terraform state files may contain sensitive data and secrets, adding this extra layer of protection ensures that no one on your team can accidentally make this S3 bucket public.</p>
</li>
</ul>
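<p>Since the bucket name must be globally unique, here is one simple way to generate a suitable <code>&lt;random-string&gt;</code> suffix (an illustrative snippet, not part of the repository):</p>
<pre><code class="language-bash"># Generate a 12-character hex suffix and print a candidate bucket name
SUFFIX=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n' | cut -c1-12)
echo "terraform-backend-${SUFFIX}"
</code></pre>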
<p>If you're collaborating on a project with multiple team members and using the S3 bucket as a remote state backend, a DynamoDB table provides the locking mechanism. It contains a single attribute named <code>LockID</code>, which records whether an operation on the state file is currently in progress.</p>
<p>During Terraform operations (such as <em>plan</em>, <em>apply</em>, or <em>destroy</em>), the state file is locked for the duration of the operation. If another developer attempts to run an operation concurrently, the request is denied until the current operation completes and the lock entry is released from the DynamoDB table.</p>
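<p>If a Terraform process crashes and leaves a stale lock behind, you can inspect the lock item and release it manually (a sketch; <code>&lt;LOCK_ID&gt;</code> is a placeholder for the ID Terraform reports in its lock error message):</p>
<pre><code class="language-bash"># Show any lock items currently held in the table
aws dynamodb scan --table-name terraform-backend-state-locking

# Release a stale lock (use with care: only when no operation is actually running)
terraform force-unlock &lt;LOCK_ID&gt;
</code></pre>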
<p>Here's how we define the DynamoDB table for the locking purpose:</p>
<pre><code class="language-hcl">#######################
# State locking table #
#######################

resource "aws_dynamodb_table" "backend_state_lock_tbl" {
  name = "terraform-backend-state-locking" # NOTE: You can change the table name as desired.

  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S" # NOTE: String type
  }

  tags = {
    "Name" = "DynamoDB Terraform State Lock Table"
  }
}
</code></pre>
<blockquote>
<p>NOTE: Setting <code>billing_mode = "PAY_PER_REQUEST"</code> means that we opt for the on-demand capacity mode in the DynamoDB table, where you pay for the actual request usage. This is ideal for scenarios where the table is primarily used for testing purposes.</p>
</blockquote>
<p>Let's begin by creating the remote backend resources on AWS and saving the state locally. Initialize the project (still on the default <code>local</code> backend), then plan and apply:</p>
<pre><code class="language-bash">terraform init
terraform plan -out /tmp/tfplan
terraform apply /tmp/tfplan
</code></pre>
<p>You should notice that a <code>terraform.tfstate</code> file is created, containing the state information corresponding to the newly created backend resources. Proceed to the next step to transfer its contents to the remote storage.</p>
<h2>Step 2: Configure backend &amp; Reinitiate Terraform project</h2>
<p>Next, add a backend configuration for the S3 bucket to your Terraform code. This configuration is specific to Terraform and is placed within a <code>terraform</code> block. Here's how you can structure it:</p>
<pre><code class="language-hcl">terraform {

  # ... &lt;Other blocks&gt; ...

  backend "s3" {
    bucket         = "terraform-backend-&lt;random-string&gt;"
    region         = "ap-southeast-1"
    key            = "path/to/remote_terraform.tfstate" # NOTE: Change the object key for the state file as desired.
    # NOTE: `encrypt = true` as a second layer in addition to `backend_bucket_sse` to ensure that the state file is always encrypted on the S3 bucket.
    encrypt        = true
    dynamodb_table = "terraform-backend-state-locking"
  }
}
</code></pre>
<blockquote>
<p>NOTE: Ensure to replace <code>&lt;random-string&gt;</code> with the same name suffix of the S3 bucket you created earlier.</p>
</blockquote>
<p>To instruct Terraform to store your state file in this S3 bucket, you'll need to run the <code>terraform init</code> command again. This command not only downloads provider code but also configures your Terraform backend. Since the <code>init</code> command is idempotent, it's safe to run it multiple times.</p>
<p>Additionally, you need credentials to migrate the state file to the S3 bucket. If you haven't set your AWS access keys in the environment variables <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code>, you can supply them through a partial backend configuration in a separate file, such as <code>backend.tfvars</code>:</p>
<pre><code class="language-hcl">access_key = "&lt;aws-account-access-key&gt;"
secret_key = "&lt;aws-account-secret-key&gt;"
</code></pre>
<p>Then, you'll need to add the <code>-backend-config=backend.tfvars</code> option to the <code>terraform init</code> command so that Terraform has the necessary permissions to migrate the state file:</p>
<pre><code class="language-bash">terraform init -backend-config=backend.tfvars
</code></pre>
<p>After running the above command, Terraform will recognize that you already have a state file locally and prompt you to copy it to the new S3 backend bucket. Confirm by typing <strong>"yes"</strong>, and your Terraform state will be stored in the S3 bucket. You can verify this by navigating to the bucket in the Amazon S3 web console.</p>
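<p>You can also verify from the command line (a sketch; replace the placeholders with your actual bucket name and state file key):</p>
<pre><code class="language-bash"># The state object should now exist in the bucket:
aws s3 ls "s3://terraform-backend-&lt;random-string&gt;/path/to/"

# With versioning enabled, each state update creates a new version:
aws s3api list-object-versions \
  --bucket "terraform-backend-&lt;random-string&gt;" \
  --prefix "path/to/remote_terraform.tfstate"
</code></pre>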
<h1>Clean Up <strong>🧹</strong></h1>
<p>As mentioned earlier, to delete all resources, including the backend resources, you'll also need to perform two separate tasks:</p>
<h2>Step 1: Remove backend configuration &amp; Reinitiate Terraform project</h2>
<p>Begin by deleting the entire <code>backend "s3"</code> block inside the <code>terraform</code> block from your code. Then, copy the Terraform state information back to your local disk by executing the following command:</p>
<pre><code class="language-bash">terraform init -migrate-state
</code></pre>
<p>You'll notice that the state information is now restored to the <code>terraform.tfstate</code> file.</p>
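<p>To confirm that the migration back succeeded, you can list the resources now tracked in the restored local state file:</p>
<pre><code class="language-bash">terraform state list   # reads the restored terraform.tfstate
</code></pre>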
<h2>Step 2: Destroy all resources</h2>
<p>To proceed with resource deletion on AWS, simply execute the Terraform CLI:</p>
<pre><code class="language-bash">terraform destroy
</code></pre>
<h1>Conclusion <strong>🏁</strong></h1>
<p>In summary, this blog post has shown you how to set up a remote Terraform state backend using Amazon S3 bucket and DynamoDB table in your existing project. With this setup, you can manage your infrastructure more efficiently and work better with your team.</p>
<p>Happy Terraforming! 🤗</p>
<h1>References <strong>🔗</strong></h1>
<ul>
<li><p><a href="https://github.com/prasitstk/terraform-aws-practices-s3-backend">github.com / prasitstk / terraform-aws-practices-s3-backend</a></p>
</li>
<li><p><a href="https://developer.hashicorp.com/terraform/language/settings/backends/s3">Terraform / Backend / S3</a></p>
</li>
<li><p><a href="https://developer.hashicorp.com/terraform/language/settings/backends/configuration#partial-configuration">Terraform / Backend Configuration / Partial Configuration</a></p>
</li>
<li><p><a href="https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa#01db">How to manage Terraform state</a></p>
</li>
<li><p><a href="https://technology.doximity.com/articles/terraform-s3-backend-best-practices">Terraform S3 Backend Best Practices</a></p>
</li>
<li><p><a href="https://spacelift.io/blog/terraform-s3-backend">How to Manage Terraform S3 Backend – Best Practices</a></p>
</li>
<li><p><a href="https://medium.com/all-things-devops/how-to-store-terraform-state-on-s3-be9cd0070590">How to Store Terraform State on S3</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Copy/Sync data between two S3 buckets with Terraform]]></title><description><![CDATA[Last week, a friend asked me how to easily copy files from one AWS account to another. After some searching, I found a helpful pattern from AWS (here) that explains the steps. I think it's a common ta]]></description><link>https://prasitstk.com/copysync-data-between-two-s3-buckets-with-terraform</link><guid isPermaLink="true">https://prasitstk.com/copysync-data-between-two-s3-buckets-with-terraform</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[aws cli]]></category><category><![CDATA[S3 CLI]]></category><dc:creator><![CDATA[Prasit (O) Sutthikamolsakul]]></dc:creator><pubDate>Fri, 29 Mar 2024 03:39:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/629d42ff14a2e6466765cfb2/6a82e9c1-1e76-48af-b23f-98615c05ad10.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, a friend asked me how to easily copy files from one AWS account to another. After some searching, I found a helpful pattern from AWS (<a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-data-from-an-s3-bucket-to-another-account-and-region-by-using-the-aws-cli.html">here</a>) that explains the steps. I think it's a common task that many AWS users might need to do. So, I decided to create a simple way to do it using Terraform to create AWS resources for this task. I've shared my solution on GitHub in case it can help others too (<a href="https://github.com/prasitstk/terraform-aws-patterns-copy-data-s3-bucket-across-accounts">here</a>).</p>
<p>In this blog post, I'll walk you through the process and show you how to use my Terraform code from my solution to simplify the resource definition for this task.</p>
<p>Let's get started and simplify data migration! 🚀</p>
<h1>Architecture 🕍</h1>
<p>The objective of this AWS pattern is to create multi-account resources that allow us to migrate objects or files from an Amazon S3 bucket in an AWS account to another S3 bucket in a different account or region. Let's see the architecture of this pattern:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711270177548/5ee7656a-cb78-4a46-b226-2482829ec7f7.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>The source S3 bucket</strong> allows <em>GetObject</em> permission through an attached resource policy, typically <em>an S3 bucket policy</em>.</p>
</li>
<li><p>In the target account, <strong>an IAM user (temp-user)</strong> needs to assume <strong>an IAM role</strong> granting <em>PutObject</em> permission for <strong>the target S3 bucket</strong> and <em>GetObject</em> permission for <strong>the source S3 bucket</strong>.</p>
</li>
<li><p>Finally, execute <em>copy</em> or <em>sync</em> commands on behalf of <strong>temp-user</strong> to transfer data from <strong>the source S3 bucket</strong> to <strong>the target S3 bucket</strong>.</p>
</li>
</ul>
<h1>Prerequisites 📋</h1>
<p>There are a few things we need before we demonstrate how to utilize my Terraform code to support copying or syncing S3 objects across accounts:</p>
<ul>
<li><p>An active <strong>source</strong> AWS account with <strong>an IAM user (assumed to be named <em>src-tf-developer</em>)</strong> that has the permissions required for the <strong>Terraform CLI</strong> to create/update/delete resources.</p>
</li>
<li><p>An active <strong>target</strong> AWS account with <strong>an IAM user (assumed to be named <em>tgt-tf-developer</em>)</strong> that has the permissions required for the <strong>Terraform CLI</strong> to create/update/delete resources.</p>
</li>
<li><p>Access keys of those IAM Users from both accounts above.</p>
</li>
<li><p>An S3 bucket of the <strong>source</strong> AWS account in a particular region <em><strong>with some objects to be copied</strong></em>.</p>
</li>
<li><p>An <em><strong>empty</strong></em> S3 bucket of the <strong>target</strong> AWS account in a particular region.</p>
</li>
<li><p>Terraform (version &gt;= 0.13)</p>
</li>
</ul>
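<p>Before going further, it's worth sanity-checking that both sets of access keys work (a quick sketch; the profile names are hypothetical and would be set up beforehand with <code>aws configure --profile ...</code>):</p>
<pre><code class="language-bash"># Each command should print the account ID and user ARN of the matching account
aws sts get-caller-identity --profile src-tf-developer
aws sts get-caller-identity --profile tgt-tf-developer
</code></pre>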
<h1>Terraform Environment Setup 🛠️</h1>
<p>I will demonstrate how to set up the environment for our Terraform project based on my repository <a href="https://github.com/prasitstk/terraform-aws-patterns-copy-data-s3-bucket-across-accounts">here</a>. First, navigate to the repository, click the <strong>"Code"</strong> button, and then click <strong>"Create codespace on main"</strong> to start your own codespace for this repository:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711272379625/af854942-e594-4e9d-baec-e355ec451cf4.png" alt="" style="display:block;margin:0 auto" />

<p>After the codespace initializes successfully, create <code>terraform.tfvars</code> <em>(ignored by git)</em> in the root directory of the project to define the Terraform variables as follows:</p>
<pre><code class="language-hcl"># Source AWS account configuration
# - Source AWS account region code: "ap-southeast-1" (= Singapore), for example.
# - AWS access key ID and secret key of src-tf-developer in the source account.
# - Source S3 bucket name, "src-bucket.prasitio.com", for example.
src_aws_region     = "&lt;source-aws-account-region&gt;"
src_aws_access_key = "&lt;source-aws-account-access-key&gt;"
src_aws_secret_key = "&lt;source-aws-account-secret-key&gt;"
src_s3_bucket_name = "&lt;source-aws-s3-bucket-name&gt;"

# Target AWS account configuration
# - Target AWS account region code: "ap-southeast-1" (= Singapore), for example.
# - AWS access key ID and secret key of tgt-tf-developer in the target account.
# - Target S3 bucket name, "tgt-bucket.prasitio.com", for example.
tgt_aws_region     = "&lt;target-aws-account-region&gt;"
tgt_aws_access_key = "&lt;target-aws-account-access-key&gt;"
tgt_aws_secret_key = "&lt;target-aws-account-secret-key&gt;"
tgt_s3_bucket_name = "&lt;target-aws-s3-bucket-name&gt;"
</code></pre>
<blockquote>
<p>NOTE: This repository configures a devcontainer that makes the AWS CLI, Terraform CLI, and the necessary VS Code extensions available out of the box, so there's no need to install any of them yourself.</p>
</blockquote>
<h1>Infrastructure Deployment 🏗️</h1>
<p>Initialize the Terraform project in the current working directory and download/install the AWS provider:</p>
<pre><code class="language-bash">terraform init
</code></pre>
<p>Then, run the following command to create an execution plan and write it into a temporary folder. This will let you preview the infrastructure changes that Terraform plans to make to your infrastructure before applying the actual changes:</p>
<pre><code class="language-bash">terraform plan -out /tmp/tfplan
</code></pre>
<p>Run the following command to apply the actual infrastructure changes based on the exported plan from the previous step onto both source and target AWS accounts:</p>
<pre><code class="language-bash">terraform apply /tmp/tfplan
</code></pre>
<p>Note that by default <code>terraform apply</code> automatically creates a state file called <code>terraform.tfstate</code>, which contains information about all resources created or modified by Terraform. Keeping this state file locally is not a best practice when you collaborate with others on the same Terraform project. In a future blog post, I will cover the best practice of keeping the Terraform state file remotely in an S3 bucket. Stay tuned!</p>
<h1>Explanation 💬</h1>
<p>Before we perform the <em>copy</em> or <em>sync</em> task on the source S3 objects, let me explain what's inside my Terraform project, which resources Terraform creates and references, and why they are needed for our task.</p>
<h2>Providers</h2>
<p>Terraform relies on plugins called <em>providers</em> to interact with cloud platforms, in this case AWS. In this project, I declare the AWS provider to be installed in <code>providers.tf</code>:</p>
<pre><code class="language-hcl">terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 5.0"
    }
  }
}
</code></pre>
<p>In <code>providers.tf</code>, I also configure the installed AWS provider separately for the source and target AWS accounts. I use the <code>alias</code> meta-argument to give each configuration an extra name. Data source and resource blocks need to reference these aliases with <code>provider = aws.&lt;alias&gt;</code> to indicate which AWS account they operate on. Here are the provider configurations for each account in <code>providers.tf</code>:</p>
<pre><code class="language-hcl">provider "aws" {
  alias      = "source"
  region     = var.src_aws_region
  access_key = var.src_aws_access_key
  secret_key = var.src_aws_secret_key
}

provider "aws" {
  alias      = "target"
  region     = var.tgt_aws_region
  access_key = var.tgt_aws_access_key
  secret_key = var.tgt_aws_secret_key
}
</code></pre>
<h2>Input variables</h2>
<p><em>Terraform input variables</em> are used to assign dynamic values to resource attributes in a Terraform module. With <em>Input variables</em>, you can customize modules without altering your Terraform code.</p>
<p>In my project, I define the <em>input variables</em> of the root module in the <code>variables.tf</code> file and assign their values in <code>terraform.tfvars</code>, as mentioned in the previous section. Here is how the variables are defined:</p>
<pre><code class="language-hcl">variable "src_aws_region" {
  type        = string
  description = "AWS Region of your source AWS account"
}

variable "src_aws_access_key" {
  type        = string
  description = "AWS Access Key of your source AWS account"
}

variable "src_aws_secret_key" {
  type        = string
  description = "AWS Secret Key of your source AWS account"
}

variable "src_s3_bucket_name" {
  type        = string
  description = "S3 bucket name of your source AWS account"
}

variable "tgt_aws_region" {
  type        = string
  description = "AWS Region of your target AWS account"
}

variable "tgt_aws_access_key" {
  type        = string
  description = "AWS Access Key of your target AWS account"
}

variable "tgt_aws_secret_key" {
  type        = string
  description = "AWS Secret Key of your target AWS account"
}

variable "tgt_s3_bucket_name" {
  type        = string
  description = "S3 bucket name of your target AWS account"
}
</code></pre>
<blockquote>
<p>NOTE: Terraform variables can have data types other than string. You can find more information <a href="https://developer.hashicorp.com/terraform/language/expressions/types">here</a>.</p>
</blockquote>
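<p>As an illustration (not part of the repository), a variable can also be a map or list, with an optional default:</p>
<pre><code class="language-hcl">variable "extra_tags" {
  type        = map(string)
  description = "Additional tags applied to created resources"
  default = {
    Project = "s3-copy-sync"
  }
}
</code></pre>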
<h2>Data sources</h2>
<p><em>Data sources</em> provide information about entities that are not managed by the current Terraform configuration. In this project, the data sources are the source and target S3 buckets, because I want to address the general problem of copying and syncing S3 objects across <em>existing</em> buckets, not newly created ones. The resources defined in <code>main.tf</code> then associate with both buckets solely to support the copy and sync operation.</p>
<p>Here is how we define those data sources in <code>data.tf</code>:</p>
<pre><code class="language-hcl">###############
# Datasources #
###############

#----------------#
# Source account #
#----------------#

data "aws_s3_bucket" "src_s3_bucket" {
  provider = aws.source
  bucket   = var.src_s3_bucket_name
}

#----------------#
# Target account #
#----------------#

data "aws_s3_bucket" "tgt_s3_bucket" {
  provider = aws.target
  bucket   = var.tgt_s3_bucket_name
}
</code></pre>
<p>A <code>data</code> block defines how we query information about a particular resource in a cloud provider. You can read more about fetching information about an S3 bucket from a particular account <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket">here</a>.</p>
<h2>Resources</h2>
<p>As I mentioned before, <code>main.tf</code> defines all of the AWS resources to be created on both the source and target AWS accounts solely for the copy and sync operation. All of them are security resources that restrict the AWS CLI's permissions to just those tasks. The sections below describe each resource:</p>
<h3>Source account :: S3 Bucket Policy</h3>
<pre><code class="language-hcl">resource "aws_s3_bucket_policy" "app_bucket_policy" {
  provider = aws.source
  bucket   = data.aws_s3_bucket.src_s3_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        "Sid" : "DelegateS3Access",
        "Effect" : "Allow",
        "Principal" : {
          "AWS" : aws_iam_role.s3_migration_role.arn
        },
        "Action" : [
          "s3:ListBucket",
          "s3:GetObject",
          "s3:GetObjectTagging",
          "s3:GetObjectVersion",
          "s3:GetObjectVersionTagging"
        ],
        "Resource" : [
          "arn:aws:s3:::${data.aws_s3_bucket.src_s3_bucket.id}/*",
          "arn:aws:s3:::${data.aws_s3_bucket.src_s3_bucket.id}"
        ]
      }
    ]
  })
}
</code></pre>
<p>This <code>resource</code> block defines an S3 bucket policy to be attached to the source S3 bucket in the source AWS account. The policy allows the IAM role <code>aws_iam_role.s3_migration_role</code> from the target account to list and get S3 objects from the source bucket.</p>
<h3>Target account :: IAM User</h3>
<pre><code class="language-hcl">resource "aws_iam_user" "temp_user" {
  provider = aws.target
  name     = "temp-user"
  path     = "/"
}

resource "aws_iam_access_key" "temp_user_access_key" {
  provider = aws.target
  user     = aws_iam_user.temp_user.name
}
</code></pre>
<p>The first <code>resource</code> block defines an IAM user on the target account named <em><strong>temp-user</strong></em>. The second <code>resource</code> block generates <em><strong>temp-user</strong></em>'s access key ID and secret key, which the AWS CLI will use to perform the <em>copy</em> or <em>sync</em> task later.</p>
<h3>Target account :: IAM Policy</h3>
<pre><code class="language-hcl">resource "aws_iam_policy" "s3_migration_policy" {
  provider = aws.target
  name     = "S3MigrationPolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        "Effect" : "Allow",
        "Action" : [
          "s3:ListBucket",
          "s3:GetObject",
          "s3:GetObjectTagging",
          "s3:GetObjectVersion",
          "s3:GetObjectVersionTagging"
        ],
        "Resource" : [
          "arn:aws:s3:::${data.aws_s3_bucket.src_s3_bucket.id}",
          "arn:aws:s3:::${data.aws_s3_bucket.src_s3_bucket.id}/*"
        ]
      },
      {
        "Effect" : "Allow",
        "Action" : [
          "s3:ListBucket",
          "s3:PutObject",
          "s3:PutObjectAcl",
          "s3:PutObjectTagging",
          "s3:GetObjectTagging",
          "s3:GetObjectVersion",
          "s3:GetObjectVersionTagging"
        ],
        "Resource" : [
          "arn:aws:s3:::${data.aws_s3_bucket.tgt_s3_bucket.id}",
          "arn:aws:s3:::${data.aws_s3_bucket.tgt_s3_bucket.id}/*"
        ]
      }
    ]
  })
}
</code></pre>
<p>This <code>resource</code> block defines the IAM policy <em><strong>S3MigrationPolicy</strong></em>, which allows any attached principal to get S3 objects from the source bucket and put S3 objects into the target bucket. For our task, we attach it to the IAM role <em><strong>S3MigrationRole</strong></em> defined below.</p>
<h3>Target account :: IAM Role</h3>
<pre><code class="language-hcl">resource "aws_iam_role" "s3_migration_role" {
  provider = aws.target
  name     = "S3MigrationRole"
  path     = "/"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        "Sid" : "AllowTempUserToAssumeRole",
        "Effect" : "Allow",
        "Principal" : {
          "AWS" : aws_iam_user.temp_user.arn
        },
        "Action" : "sts:AssumeRole"
      }
    ]
  })

  managed_policy_arns = [
    aws_iam_policy.s3_migration_policy.arn
  ]
}
</code></pre>
<p>The last <code>resource</code> block defines an IAM role, <em><strong>S3MigrationRole</strong></em>, that attaches the IAM policy <em><strong>S3MigrationPolicy</strong></em> and allows only <em><strong>temp-user</strong></em> to assume the role to perform the <em>copy</em> or <em>sync</em> operation with the AWS CLI.</p>
<h2>Outputs</h2>
<p>In the last file of our Terraform setup, <code>outputs.tf</code>, we define output variables to display crucial information in the console after successfully applying the root module. These outputs are tailored specifically for the <em>copy</em> or <em>sync</em> tasks executed through the AWS CLI:</p>
<pre><code class="language-yaml">output "tgt_temp_user_access_key_id" {
  value = aws_iam_access_key.temp_user_access_key.id
}

output "tgt_temp_user_access_key_secret" {
  value = aws_iam_access_key.temp_user_access_key.secret
  sensitive = true
}

output "tgt_s3_migration_role_arn" {
  value = aws_iam_role.s3_migration_role.arn
}

output "src_aws_region" {
  value = var.src_aws_region
}

output "src_s3_bucket_name" {
  value = data.aws_s3_bucket.src_s3_bucket.id
}

output "tgt_aws_region" {
  value = var.tgt_aws_region
}

output "tgt_s3_bucket_name" {
  value = data.aws_s3_bucket.tgt_s3_bucket.id
}
</code></pre>
<p>Note the <code>sensitive = true</code> attribute on the <code>tgt_temp_user_access_key_secret</code> output. This is important because Terraform treats this generated secret key as sensitive data and requires us to confirm our intent to expose it by adding <code>sensitive = true</code> to the <code>output</code> block; otherwise, the following error occurs:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711680342902/5a663ecd-94dd-41e6-ae91-90c1c9293ab6.png" alt="" style="display:block;margin:0 auto" />

<p>To view all outputs, you can use the command <code>terraform output</code>:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711680547050/cdf35ae6-0319-423b-baa5-bed54713cc9f.png" alt="" style="display:block;margin:0 auto" />

<p>Although <code>tgt_temp_user_access_key_secret</code> remains masked in this list, you can display a sensitive value explicitly by running <code>terraform output tgt_temp_user_access_key_secret</code>.</p>
<p>For the next steps, most commands will rely on the <code>terraform output &lt;output-variable-name&gt;</code> command to assign each output variable to its corresponding environment variable. We also use the <code>-raw</code> flag to print values in a format suitable for assigning environment variables, without quotes or a newline character.</p>
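<p>To see concretely what <code>-raw</code> changes, here is a quick local sketch with a dummy key ID (no Terraform required); without the flag, the quotes around the value would end up inside your environment variable:</p>
<pre><code class="language-bash"># Simulated outputs (dummy value): what `terraform output name` prints vs. -raw
quoted='"AKIAEXAMPLE"'   # without -raw: value wrapped in quotes
raw='AKIAEXAMPLE'        # with -raw: bare value, no quotes, no trailing newline
echo "quoted: [$quoted]"
echo "raw:    [$raw]"
</code></pre>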
<h1>Copy or Sync 🔄</h1>
<p>Run the following commands to set up environment variables for AWS CLI with AWS access keys of the newly created <strong><em>temp-user</em> IAM User</strong> of the <strong>target</strong> account:</p>
<pre><code class="language-bash"># Set up AWS access keys of temp-user
export AWS_ACCESS_KEY_ID=$(terraform output -raw tgt_temp_user_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(terraform output -raw tgt_temp_user_access_key_secret)
</code></pre>
<p>Assume <em>S3MigrationRole</em> as <em><strong>temp-user</strong></em> and replace the AWS CLI environment variables with the temporary credentials returned by the <code>aws sts assume-role</code> command:</p>
<pre><code class="language-bash"># Assume S3MigrationRole by temp-user and set up temporary access keys
export TARGET_S3_MIGRATION_ROLE_ARN=$(terraform output -raw tgt_s3_migration_role_arn)
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"  \
  $(aws sts assume-role \
    --role-arn "$TARGET_S3_MIGRATION_ROLE_ARN" \
    --role-session-name AWSCLI-Session \
    --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
    --output text))
</code></pre>
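<p>If the <code>export $(printf ...)</code> one-liner looks opaque, you can sanity-check the pattern locally with dummy values (the credentials below are placeholders; no AWS call is involved): <code>printf</code> fills in the three <code>VAR=value</code> words positionally, and <code>export</code> then receives them as three separate assignments.</p>
<pre><code class="language-bash"># Same pattern with dummy credentials instead of the assume-role output
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
  AKIAEXAMPLE wJalrEXAMPLEKEY IQoJbEXAMPLETOKEN)
echo "$AWS_SESSION_TOKEN"
</code></pre>
<p>This relies on word splitting of the unquoted <code>$(...)</code>, which is safe here because STS credentials never contain whitespace.</p>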
<p>Verify that the AWS CLI is now operating under the assumed <em>S3MigrationRole</em> role:</p>
<pre><code class="language-bash">aws sts get-caller-identity
</code></pre>
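<p>If the role assumption succeeded, the reported ARN references <em>S3MigrationRole</em> rather than <em>temp-user</em>. The response looks roughly like this (the account ID and role ID below are placeholders):</p>
<pre><code class="language-json">{
    "UserId": "AROAEXAMPLEROLEID:AWSCLI-Session",
    "Account": "111122223333",
    "Arn": "arn:aws:sts::111122223333:assumed-role/S3MigrationRole/AWSCLI-Session"
}
</code></pre>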
<p>Before running <em>copy</em> or <em>sync</em> command, set up the following variables:</p>
<pre><code class="language-bash">export SRC_BUCKET_NAME=$(terraform output -raw src_s3_bucket_name)
export SRC_REGION=$(terraform output -raw src_aws_region)
export TGT_BUCKET_NAME=$(terraform output -raw tgt_s3_bucket_name)
export TGT_REGION=$(terraform output -raw tgt_aws_region)
</code></pre>
<p>Run the following command to <em>copy</em> all objects from a folder in the <strong>source</strong> S3 bucket into another folder on the <strong>target</strong> S3 bucket:</p>
<pre><code class="language-bash">aws s3 cp s3://$SRC_BUCKET_NAME/src-folder-path/ \
  s3://$TGT_BUCKET_NAME/tgt-folder-path/ \
  --recursive --source-region $SRC_REGION --region $TGT_REGION
</code></pre>
<p>Check that all of the objects in the specified folder have been copied into the <strong>target</strong> S3 bucket folder path.</p>
<p>Or, you can <em>sync</em> all of the objects in the <strong>source</strong> S3 bucket with the <strong>target</strong> S3 bucket:</p>
<pre><code class="language-bash">aws s3 sync "s3://$SRC_BUCKET_NAME/" \
  "s3://$TGT_BUCKET_NAME/" \
  --source-region $SRC_REGION --region $TGT_REGION
</code></pre>
<p>Then, verify whether all objects from the <strong>source</strong> S3 bucket are copied into the <strong>target</strong> S3 bucket.</p>
<h1>Clean Up <strong>🧹</strong></h1>
<p>To clean up the whole infrastructure by deleting all of the created resources from both the source and target AWS accounts (excluding the source and target S3 buckets and the objects inside them), run the following command:</p>
<pre><code class="language-bash">terraform destroy
</code></pre>
<p>If <code>-auto-approve</code> is set, the destroy confirmation prompt is skipped:</p>
<pre><code class="language-bash">terraform destroy -auto-approve
</code></pre>
<h1><strong>Conclusion 🏁</strong></h1>
<p>This blog post showed an easy way to <em>copy</em> or <em>sync</em> data between AWS S3 buckets across accounts using Terraform. I've outlined the necessary steps, from setting up prerequisites to executing the migration task. Terraform makes it simple to provision the temporary IAM user, role, and policy the migration needs, and by combining the AWS CLI with Terraform outputs you can <em>copy</em> or <em>sync</em> your data smoothly. Following the cleanup steps is also important to keep things tidy and save costs.</p>
<p>By following this guide, you can automate S3 data migration, reducing manual work and error risks. Terraform's declarative approach allows easy adaptation and scaling to fit your needs, giving you the power to manage data migration tasks smoothly across your AWS setups.</p>
<p>Stay tuned for more helpful tips in our next blog post! 😎</p>
<h1><strong>References 🔗</strong></h1>
<h2>Main Resources</h2>
<ul>
<li><p><a href="https://github.com/prasitstk/terraform-aws-patterns-copy-data-s3-bucket-across-accounts">github.com / prasitstk / terraform-aws-patterns-copy-data-s3-bucket-across-accounts</a></p>
</li>
<li><p><a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-data-from-an-s3-bucket-to-another-account-and-region-by-using-the-aws-cli.html">AWS Prescriptive Guidance Patterns: Copy data from an S3 bucket to another account and Region by using the AWS CLI</a></p>
</li>
</ul>
<h2>AWS</h2>
<ul>
<li><p><a href="https://stackoverflow.com/questions/63241009/aws-sts-assume-role-in-one-command">AWS sts assume role in one command</a></p>
</li>
<li><p><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html">Creating an S3 bucket (Amazon S3 documentation)</a></p>
</li>
<li><p><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html">Amazon S3 bucket policies and user policies (Amazon S3 documentation)</a></p>
</li>
<li><p><a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html?icmpid=docs_iam_console">IAM identities (users, groups, and roles) (IAM documentation)</a></p>
</li>
<li><p><a href="https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html">cp command (AWS CLI documentation)</a></p>
</li>
<li><p><a href="https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html">sync command (AWS CLI documentation)</a></p>
</li>
</ul>
<h2>Terraform</h2>
<ul>
<li><p><a href="https://developer.hashicorp.com/terraform/language/expressions/types">Terraform - Types and Values</a></p>
</li>
<li><p><a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket">aws_s3_bucket | Data Sources | hashicorp/aws</a></p>
</li>
<li><p><a href="https://developer.hashicorp.com/terraform/language/providers">Terraform - Providers</a></p>
</li>
<li><p><a href="https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations">Terraform - Providers - alias: Multiple Provider Configurations</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Streamline Terraform-AWS Development with Dev Containers in GitHub Codespaces]]></title><description><![CDATA[In my previous post, I explained how to set up your Codespace for Terraform on AWS manually. If you're planning to work with multiple repositories using the same setup, you can use "Dev containers".
"]]></description><link>https://prasitstk.com/streamline-terraform-aws-development-with-dev-containers-in-github-codespaces</link><guid isPermaLink="true">https://prasitstk.com/streamline-terraform-aws-development-with-dev-containers-in-github-codespaces</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[github-codespaces]]></category><category><![CDATA[codespaces]]></category><category><![CDATA[devcontainer]]></category><dc:creator><![CDATA[Prasit (O) Sutthikamolsakul]]></dc:creator><pubDate>Sat, 16 Mar 2024 03:36:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/629d42ff14a2e6466765cfb2/ec6a01b5-2173-4d44-85ed-a1b725a6fc19.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In <a href="https://prasitstk.hashnode.dev/basic-setup-terraform-for-aws-on-github-codespaces">my previous post</a>, I explained how to set up your Codespace for Terraform on AWS manually. If you're planning to work with multiple repositories using the same setup, you can use <strong>"Dev containers"</strong>.</p>
<p><strong>"Dev containers"</strong> (Development containers) are Docker containers tailored to provide comprehensive development environments. When working in a Codespace, you're essentially using a <strong>dev container</strong> on a virtual machine. These containers can be configured for specific repositories, ensuring Codespaces offer a customized environment with all necessary tools and runtimes for the project.</p>
<p>In this post, I'll leverage Dev containers to create a basic development environment in Codespace for a Terraform-AWS repository. This setup includes:</p>
<ul>
<li><p>AWS access keys for AWS CLI</p>
</li>
<li><p>Terraform VS Code extension</p>
</li>
<li><p>AWS CLI</p>
</li>
<li><p>Terraform CLI</p>
</li>
</ul>
<p>First, I'll cover setting up AWS access keys and the Terraform VS Code extension for a dev container. Then, I'll detail three methods for installing AWS CLI and Terraform CLI:</p>
<ol>
<li><p>"postCreateCommand" property of Dev container</p>
</li>
<li><p>Dev containers features</p>
</li>
<li><p>Custom Dockerfile <em><strong>[Recommended 👍]</strong></em></p>
</li>
</ol>
<p>By following these steps, you'll be ready to apply it to your real-world Terraform for AWS projects on Codespace.</p>
<p>Let's get started! 🚀</p>
<h1>Define AWS access keys on Dev Container 🔑</h1>
<p><strong>AWS access keys</strong> are credentials comprising an <strong>access key ID</strong> and <strong>secret access key</strong>. We utilize them to grant access for <strong>AWS CLI</strong> to manage AWS resources on our AWS account from our Codespace environment.</p>
<p>To begin, log in to your GitHub account and navigate to the Terraform-AWS repository where you wish to set up a dev container. Next, click the <strong>"Code"</strong> button, choose the <strong>"Codespaces"</strong> tab, and select the ellipsis icon <strong>"..."</strong>. From the dropdown menu, opt for <strong>"Configure dev container"</strong> as illustrated below:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710413670939/4d2ed5e7-406b-4c5e-a40b-8a80e749bd4a.png" alt="Select menu to configure dev container on your repository" style="display:block;margin:0 auto" />

<p>After that, GitHub creates a new <code>.devcontainer</code> directory containing a new file, <code>devcontainer.json</code>. The configuration files for a dev container live in the <code>.devcontainer</code> directory of your repository, with <code>devcontainer.json</code> as the primary one. Define the <strong>"recommended"</strong> personal secrets by adding a <code>"secrets"</code> property to this JSON file as follows:</p>
<pre><code class="language-json">{
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  "features": {
  },
  // ADDED //
  "secrets": {
    "AWS_ACCESS_KEY_ID": {
      "description": "AWS access key associated with an IAM user or role.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    },
    "AWS_SECRET_ACCESS_KEY": { 
      "description": "the secret key associated with the access key (AWS_ACCESS_KEY_ID). This is essentially the password for that access key.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    }
    ///////////
  }
}
</code></pre>
<p>Proceed by clicking the <strong>"Commit changes..."</strong> button, entering the commit message, and confirming the changes by clicking <strong>"Commit changes"</strong>:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710448754845/8a70baac-ed32-438b-b1ae-0dbf30524161.png" alt="Commit change to create devcontainer.json" style="display:block;margin:0 auto" />

<p>It's worth noting that AWS access keys are not directly defined in the Dev container configuration files but are <strong>"recommended"</strong> instead. Users must create these secrets in their personal Codespaces settings. If not, the <code>"secrets"</code> property will prompt them to do so when using the advanced options method to create a Codespace as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710449447970/6241ebab-e049-416e-b364-bddf4c085385.png" alt="Navigate to the menu of the advanced options method to create a Codespace" style="display:block;margin:0 auto" />

<p>The names of the recommended secrets are listed on the page below only when the dev container configuration on the selected branch specifies them.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710449933094/8fcd9b6d-d507-4573-9f93-9a297f0e17f6.png" alt="Recommended secrets on personal settings" style="display:block;margin:0 auto" />

<p>By selecting <strong>"Associate with repository?"</strong> for certain secrets and clicking <strong>"Create codespace"</strong>, these secrets will be added to <strong>your personal Codespaces settings</strong>, along with creating a new codespace. Navigate to your GitHub account's personal settings page, select <strong>"Codespaces"</strong> from the left-hand menu, and you'll find the two secrets automatically created under <strong>"Codespaces secrets"</strong>:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710450643306/84cbb66e-614d-4113-bb36-6adbf13054d3.png" alt="New Codespaces secrets on personal settings" style="display:block;margin:0 auto" />

<p>Update each secret by pasting your AWS access keys and clicking <strong>"Save changes."</strong> These secrets won't be available as environment variables until you click the <strong>"Reload to apply"</strong> button on the dialog box that appears in your codespace:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710451695409/9a2d95c5-645b-44f1-adfe-1828b72d0076.png" alt="Reload to apply new codespace secrets to be available" style="display:block;margin:0 auto" />

<p>Once AWS access keys from your Codespaces secrets are confirmed, verify their presence using the <code>echo</code> command:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710451842789/2b0703f3-63b1-421f-a7aa-c6752a487519.png" alt="echo command to verify AWS access keys existence" style="display:block;margin:0 auto" />

<p>Your AWS access keys should now appear in the VS Code Terminal on Codespace. With AWS CLI installed, it will utilize these keys to manage your AWS resources accordingly.</p>
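<p>As a variant of the check in the screenshot, you can use shell parameter expansion so the secret value itself is never echoed in full:</p>
<pre><code class="language-bash"># Report whether the Codespaces secrets are present without printing the secret
echo "AWS_ACCESS_KEY_ID is ${AWS_ACCESS_KEY_ID:-(not set)}"
echo "AWS_SECRET_ACCESS_KEY is ${AWS_SECRET_ACCESS_KEY:+set}"
</code></pre>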
<h1>Define Terraform VS Code extension on Dev Container 🧩</h1>
<p>On <a href="https://prasitstk.hashnode.dev/basic-setup-terraform-for-aws-on-github-codespaces#heading-step-8-install-the-terraform-extension-for-vs-code">the step 8 in my previous post</a>, I demonstrated manual installation of the <strong>Terraform extension for VS Code</strong>. However, you can streamline this process by defining it directly in the <code>devcontainer.json</code> configuration file. Here's how:</p>
<p>Add the <code>"customizations"</code> property to your <code>devcontainer.json</code> file:</p>
<pre><code class="language-json">{
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  "features": {
  },
  "secrets": {
    "AWS_ACCESS_KEY_ID": {
      "description": "AWS access key associated with an IAM user or role.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    },
    "AWS_SECRET_ACCESS_KEY": { 
      "description": "the secret key associated with the access key (AWS_ACCESS_KEY_ID). This is essentially the password for that access key.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    }
  },
  // ADDED //
  "customizations": {
    "vscode": {
      "extensions": ["hashicorp.terraform"]
    }
  }
  ///////////
}
</code></pre>
<p>After adding the property, you'll see a dialog appear. Click <strong>"Rebuild now"</strong> to proceed:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710454390451/cd42fcc7-6e8d-4a9b-9f03-673c693c7ed9.png" alt="press &quot;Rebuild Now&quot; button to rebuild devcontainer" style="display:block;margin:0 auto" />

<p>Alternatively, you can select <strong>"Rebuild Container"</strong> from the menu as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710454679750/e6d7b5b8-2650-4ac0-be69-8acc80c8f2c3.png" alt="Navigate to &quot;Rebuild Container&quot; menu" style="display:block;margin:0 auto" />

<p>Once completed, the <strong>Terraform VS Code Extension</strong> will be installed automatically here:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710454793311/dd74ad27-b41c-4e40-9d03-bca44c16864d.png" alt="Terraform VS Code extension buttons on VS Code" style="display:block;margin:0 auto" />

<p>Note that to find the extension ID for inclusion in the <code>"extensions"</code> array, visit the extension's marketplace page. You can find this information under the <strong>"More Info"</strong> section:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710455136587/a52c71a6-2535-4191-9a90-f05d09b178de.png" alt="Navigate to the extension ID of the Terraform VS Code extension" style="display:block;margin:0 auto" />

<h1>Install AWS &amp; Terraform CLIs on Dev Container 🛠</h1>
<h2>Method#1: "postCreateCommand" property of Dev container</h2>
<p>You can add a new <code>"postCreateCommand"</code> property to the <code>devcontainer.json</code> file to run commands or scripts after the container is created. Start by creating a shell script file <code>.devcontainer/post-create.sh</code> to install AWS and Terraform CLIs:</p>
<pre><code class="language-bash">#!/usr/bin/env bash

# Install Terraform CLI
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update &amp;&amp; sudo apt install -y terraform

# Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
rm -f awscliv2.zip
rm -rf aws
</code></pre>
<p>Next, include <code>"postCreateCommand"</code> in <code>.devcontainer/devcontainer.json</code> to execute the shell script after container setup automatically:</p>
<pre><code class="language-json">{
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  "features": {
  },
  "secrets": {
    "AWS_ACCESS_KEY_ID": {
      "description": "AWS access key associated with an IAM user or role.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    },
    "AWS_SECRET_ACCESS_KEY": { 
      "description": "the secret key associated with the access key (AWS_ACCESS_KEY_ID). This is essentially the password for that access key.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    }
  },
  "customizations": {
    "vscode": {
      "extensions": ["hashicorp.terraform"]
    }
  },
  // ADDED //
  "postCreateCommand": "bash .devcontainer/post-create.sh"
  ///////////
}
</code></pre>
<p>Once added, rebuild the container by clicking "<strong>Rebuild now"</strong> or selecting <strong>"Rebuild Container"</strong> from the menu.</p>
<p>Note that with this method, after the codespace is rebuilt, you <strong><mark>need to wait for a while</mark></strong> until both the <code>awscliv2.zip</code> file and the <code>aws/</code> directory disappear, which indicates that the script has finished. This can be inconvenient, so I do not recommend this method; however, it demonstrates how to work with <code>"postCreateCommand"</code> in the Dev container configuration, which should be useful for other use cases.</p>
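<p>As an optional tweak (not part of the original setup), you can make that wait less opaque by teeing the script's output to a log file, which you can then follow from the terminal with <code>tail -f /tmp/post-create.log</code>:</p>
<pre><code class="language-json">{
  "postCreateCommand": "bash .devcontainer/post-create.sh | tee /tmp/post-create.log"
}
</code></pre>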
<p>To verify installation, run <code>aws --version</code> and <code>terraform -version</code> commands:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710470076179/d5bc2a84-dc55-4478-b854-087b788170f2.png" alt="Verify AWS and Terraform CLIs installation by showing their version." style="display:block;margin:0 auto" />

<p>With this setup, you can now utilize Terraform to manage AWS resources. Let's explore the second method by using <strong>Dev container features</strong>.</p>
<h2>Method#2: Dev container features</h2>
<p>Dev container offers a built-in method for adding additional software via the <code>"features"</code> property in the <code>devcontainer.json</code> file. This enables the installation of various tools to support your development, either from a predefined set of <a href="https://containers.dev/features">Features</a> or <a href="https://code.visualstudio.com/blogs/2022/09/15/dev-container-features">custom ones</a>. To make AWS and Terraform CLIs available, define features in the <code>devcontainer.json</code> file:</p>
<pre><code class="language-json">{
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  // UPDATED //
  "features": {
    "ghcr.io/devcontainers/features/aws-cli:1": {},
    "ghcr.io/devcontainers/features/terraform:1": {}
  },
  /////////////
  "secrets": {
    "AWS_ACCESS_KEY_ID": {
      "description": "AWS access key associated with an IAM user or role.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    },
    "AWS_SECRET_ACCESS_KEY": { 
      "description": "the secret key associated with the access key (AWS_ACCESS_KEY_ID). This is essentially the password for that access key.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    }
  },
  "customizations": {
    "vscode": {
      "extensions": ["hashicorp.terraform"]
    }
  }
}
</code></pre>
<p>Once added, rebuild the container by clicking <strong>"Rebuild now"</strong> or selecting <strong>"Rebuild Container"</strong> from the menu.</p>
<p>With this method, not only is software installed, but also VS Code extensions supporting the software. However, this can result in <strong><mark>excessive extensions being installed, potentially consuming more memory or CPU than anticipated and impacting development efficiency.</mark></strong> You can view all installed extensions as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710484174412/9732a440-8247-479d-99fc-b0b9f8b97f66.png" alt="Show VS Code installed after add features to devcontainer." style="display:block;margin:0 auto" />

<p>Again, to verify installation, run <code>aws --version</code> and <code>terraform -version</code> commands:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710470076179/d5bc2a84-dc55-4478-b854-087b788170f2.png" alt="Verify AWS and Terraform CLIs installation by showing their version." style="display:block;margin:0 auto" />

<p>This setup also allows you to utilize Terraform for managing AWS resources. Now, let's explore the third method, which I recommend 😉.</p>
<h2>Method#3: Custom Dockerfile <em>[Recommended 👍]</em></h2>
<p>Instead of starting with an existing image, this method involves creating a custom image using a <code>Dockerfile</code>. This file extends the image by executing additional shell commands to install AWS and Terraform CLIs during the container image building process. First, create a <code>Dockerfile</code> in the <code>.devcontainer</code> directory:</p>
<pre><code class="language-dockerfile">FROM mcr.microsoft.com/devcontainers/universal:2

# Install Terraform CLI
RUN wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg \
    &amp;&amp; echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list \
    &amp;&amp; sudo apt update &amp;&amp; sudo apt install -y terraform

# Install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    &amp;&amp; unzip awscliv2.zip \
    &amp;&amp; sudo ./aws/install \
    &amp;&amp; rm -f awscliv2.zip \
    &amp;&amp; rm -rf aws
</code></pre>
<p>In the <code>devcontainer.json</code> file, replace <code>"image"</code> property with <code>"build"</code> property:</p>
<pre><code class="language-json">{
  // REPLACE //
  //"image": "mcr.microsoft.com/devcontainers/universal:2",
  "build": {
    "dockerfile": "Dockerfile"
  },
  /////////////
  "features": {
  },
  "secrets": {
    "AWS_ACCESS_KEY_ID": {
      "description": "AWS access key associated with an IAM user or role.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    },
    "AWS_SECRET_ACCESS_KEY": { 
      "description": "the secret key associated with the access key (AWS_ACCESS_KEY_ID). This is essentially the password for that access key.",
      "documentationUrl": "https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list"
    }
  },
  "customizations": {
    "vscode": {
      "extensions": ["hashicorp.terraform"]
    }
  }
}
</code></pre>
<p>Rebuild the container by clicking <strong>"Rebuild now"</strong> or selecting <strong>"Rebuild Container"</strong> from the menu.</p>
<p>Again, to verify installation, run <code>aws --version</code> and <code>terraform -version</code> commands:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710470076179/d5bc2a84-dc55-4478-b854-087b788170f2.png" alt="Verify AWS and Terraform CLIs installation by showing their version." style="display:block;margin:0 auto" />

<p>This method enables you to utilize Terraform for managing AWS resources efficiently. I recommend this method for the following reasons:</p>
<ol>
<li><p>Unlike Method #1, there is <mark>no need for a wait time</mark> after the container is running, as AWS and Terraform CLIs are already installed in the custom image.</p>
</li>
<li><p>Unlike Method #2, <mark>no excessive extensions are installed</mark>, ensuring a streamlined development environment.</p>
</li>
</ol>
<h1>Bonus: Make the selected method a GitHub template 🎁</h1>
<p>After choosing your preferred method, it's time to commit changes:</p>
<pre><code class="language-bash">git add .
git commit -m "Add Dev container configuration"
git push origin main
</code></pre>
<p>Now, anyone wanting to create a new codespace from your Terraform-AWS repository won't need to manually set up everything. They can simply click to create it from your repository, and voilà! Everything they need is ready!</p>
<p>Alternatively, if you plan to create multiple Terraform-AWS repositories for future projects, you can make your repository a template:</p>
<ul>
<li><p>Navigate to the main page of the repository.</p>
</li>
<li><p>Under your repository name, click the <strong>"Settings"</strong> tab.</p>
</li>
<li><p>Select <strong>"Template repository"</strong>.</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710495220480/c9b0129d-dd93-4c4c-ad67-ae4bce255a02.png" alt="Navigate to where to make the repository as a template one." style="display:block;margin:0 auto" />

<p>Now, every time you create a new repository, you can use this repository as a template:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710495491744/ef832dfb-3bb7-4658-b6d4-b55012727212.png" alt="Show how to use the newly created template to create a new repository." style="display:block;margin:0 auto" />

<h1>Conclusion <strong>🏁</strong></h1>
<p>Congratulations! 🎉 You now have your Dev container configuration for your Codespace setup without any manual tasks. What's even better is that you also have your pre-defined template for your future Terraform-AWS projects. I hope you find this post useful for your work and enjoy joining me on my upcoming blog posts! 😎</p>
<h1>References <strong>🔗</strong></h1>
<h2>AWS</h2>
<ul>
<li><a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html">Environment variables to configure the AWS CLI</a></li>
</ul>
<h2>Dev containers</h2>
<ul>
<li><p><a href="https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers">Introduction to dev containers</a></p>
</li>
<li><p><a href="https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers#using-the-default-dev-container-configuration">Using the default dev container configuration</a></p>
</li>
<li><p><a href="https://code.visualstudio.com/docs/devcontainers/create-dev-container">Create a Dev Container</a></p>
</li>
<li><p><a href="https://containers.dev/implementors/json_reference/">Dev Container metadata reference</a></p>
</li>
<li><p><a href="https://containers.dev/supporting#editors">Dev containers &gt; Editors &gt; Visual Studio Code</a></p>
</li>
<li><p><a href="https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/configuring-dev-containers/specifying-recommended-secrets-for-a-repository">Specifying recommended secrets for a repository</a></p>
</li>
<li><p><a href="https://containers.dev/implementors/features/">Dev Container Features reference</a></p>
</li>
<li><p><a href="https://containers.dev/features">Available Dev Container Features</a></p>
</li>
<li><p><a href="https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/configuring-dev-containers/adding-features-to-a-devcontainer-file?tool=webui">Adding features to a devcontainer.json file</a></p>
</li>
<li><p><a href="https://code.visualstudio.com/blogs/2022/09/15/dev-container-features">Custom Dev Container Features</a></p>
</li>
<li><p><a href="https://containers.dev/guide/dockerfile#dockerfile">Dev containers &gt; Using Images, Dockerfiles, and Docker Compose</a></p>
</li>
</ul>
<h2>GitHub</h2>
<ul>
<li><p><a href="https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/configuring-dev-containers/specifying-recommended-secrets-for-a-repository">Specifying recommended secrets for a repository</a></p>
</li>
<li><p><a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-template-repository">Creating a template repository</a></p>
</li>
<li><p><a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-repository-from-a-template">Creating a repository from a template</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Basic Setup Terraform for AWS on GitHub Codespaces]]></title><description><![CDATA[In this post, I'll start my journey with Terraform on AWS as part of my decision to create a portfolio showcasing my experiences. With nearly three years of Terraform expertise under my belt, I've rea]]></description><link>https://prasitstk.com/basic-setup-terraform-for-aws-on-github-codespaces</link><guid isPermaLink="true">https://prasitstk.com/basic-setup-terraform-for-aws-on-github-codespaces</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[github-codespaces]]></category><category><![CDATA[codespaces]]></category><dc:creator><![CDATA[Prasit (O) Sutthikamolsakul]]></dc:creator><pubDate>Fri, 08 Mar 2024 16:37:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/629d42ff14a2e6466765cfb2/7af0916d-b2e7-4b43-a03a-c111d032283b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, I'll start my journey with Terraform on AWS as part of my decision to create a portfolio showcasing my experiences. With nearly three years of Terraform expertise under my belt, I've realized the importance of flexibility in development. Whether I'm on my iPad Air or laptop, being able to develop Terraform code from any machine is invaluable. This led me to explore setting up a basic Terraform environment in GitHub Codespaces, starting with a manual approach to understand its foundations before diving into more advanced setups.</p>
<p>Embarking on this journey, we'll cover the basic concepts of AWS, Terraform, and GitHub Codespaces. We'll start by setting up our GitHub Codespaces environment, then use Terraform to create a simple piece of AWS infrastructure. Finally, we'll explore a one-step cleanup process for these resources.</p>
<p>Join me in simplifying infrastructure management with Terraform and GitHub Codespaces, wherever creativity takes you 😉.</p>
<h1>Exploring AWS, Terraform, and GitHub Codespaces 🔍</h1>
<h2>What is AWS?</h2>
<p><strong>AWS</strong> (Amazon Web Services) is Amazon's cloud computing platform, offering a wide range of services such as computing power, storage, networking, databases, machine learning, AI, and more. It allows businesses to grow without costly upfront infrastructure investments, offering flexibility and cost-effectiveness. You can explore the full range of benefits AWS offers businesses by clicking on this <a href="https://aws.amazon.com/application-hosting/benefits/">link</a>.</p>
<h2>What is Terraform?</h2>
<p><strong>Terraform</strong>, created by <strong>HashiCorp</strong>, is an open-source tool for managing infrastructure as code. It lets users define and provision resources using a simple configuration language called <strong>HCL (HashiCorp Configuration Language)</strong>. With Terraform, you can manage your infrastructure through code, enabling version control, collaboration, and automation. It supports multiple cloud providers like AWS, Azure, and Google Cloud, making it adaptable to various environments.</p>
<h2>What is GitHub Codespaces?</h2>
<p><strong>GitHub Codespaces</strong> offers cloud-hosted development environments for coding from anywhere via any web browser, seamlessly integrating with GitHub repositories. It eliminates the need for local setups, enabling developers to work on projects from any device with internet access, including my iPad Air 😎.</p>
<h1>Get Started 🚀</h1>
<h2>Step 1: Create a Codespace on a branch in a GitHub repository.</h2>
<p>Open your browser, go to <a href="https://github.com/login">https://github.com/login</a> and login to your GitHub account.</p>
<p>Browse to the repository you'd like to create the Codespace from. Press the "<strong>Code"</strong> button, select the "<strong>Codespaces"</strong> tab, and press the "<strong>Create codespace on main"</strong> button to let GitHub initiate a Codespace VS Code environment based on your <strong>main</strong> branch.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709887459288/facb3afd-4fe6-4eef-a5cf-91533c777c48.png" alt="Create codespace on main branch of a GitHub repository." style="display:block;margin:0 auto" />

<p>Within a few seconds your GitHub Codespace should be launched in the new browser tab as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709887844471/2ff88bd5-9cdb-4fc6-b644-d71d592254f0.png" alt="Codespace VS Code UI from a GitHub repository" style="display:block;margin:0 auto" />

<h2>Step 2: Install Terraform CLI in the Codespace</h2>
<p>First, check the OS release of the Codespace by typing the command <code>cat /etc/os-release</code> on the "<strong>Terminal"</strong> tab as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709889782482/ec69ead5-1493-4eee-a1e6-81874b65ea3f.png" alt="Show OS release on Terminal tab of the Codespace UI." style="display:block;margin:0 auto" />

<p>For me, it shows "<strong>Ubuntu 20.04.6 LTS"</strong>. I check this to make sure I select the correct Terraform installation commands from the installation page <a href="https://developer.hashicorp.com/terraform/install#linux">here</a>:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709889483371/308299b4-38aa-4d39-ada9-62ac715f0a60.png" alt="Part of web page source of commands to install Terraform CLI" style="display:block;margin:0 auto" />

<p>Based on the OS release of my Codespace, I copy the commands from the "<strong>Ubuntu/Debian"</strong> tab, open the <strong>"Terminal"</strong> tab in my Codespace VS Code, and run them all to install the <strong>Terraform CLI</strong> (Command Line Interface):</p>
<pre><code class="language-bash"># Installation commands for Ubuntu/Debian
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update &amp;&amp; sudo apt install terraform
</code></pre>
<p>To verify that the <strong>Terraform CLI</strong> was installed successfully, type <code>terraform -version</code> on the "<strong>Terminal"</strong> tab. It should display the version of the installed <strong>Terraform CLI</strong> as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709889657330/d597d685-fa00-425c-b676-87e2efc3a8ff.png" alt="Show Terraform CLI version on Terminal tab of the Codespace UI." style="display:block;margin:0 auto" />

<h2>Step 3: Install AWS CLI</h2>
<p>Next, install the <strong>AWS CLI</strong> (Command Line Interface) so that you can configure and verify the AWS credentials Terraform will use to manage AWS resources. Based on the installation commands from <a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">this AWS page</a> in the "<strong>Linux"</strong> section, run the following commands on the Terminal tab:</p>
<pre><code class="language-bash"># Installation commands for Linux x86 (64-bit)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# ADDED: Clean up after installation
rm -f awscliv2.zip
rm -rf aws
</code></pre>
<p>After the installation is complete, verify the <strong>AWS CLI</strong> by typing the command <code>aws --version</code> on the "<strong>Terminal"</strong> tab. It should show the version of the installed <strong>AWS CLI</strong> as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709890322232/52439a55-bdde-4999-acbb-93ae5e6d7184.png" alt="Show AWS CLI version on Terminal tab of the Codespace UI." style="display:block;margin:0 auto" />

<h2>Step 4: Sign up for a new AWS account if not available yet</h2>
<p>If you already have your own AWS account, skip this step.</p>
<p>If not, create your new AWS account by following the steps <a href="https://repost.aws/knowledge-center/create-and-activate-aws-account">here</a>.</p>
<h2>Step 5: Create a new IAM User for Terraform development</h2>
<p>Sign in to your AWS account, then type "<strong>IAM"</strong> on the search text box at the top to navigate to "<strong>IAM Dashboard"</strong>:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709891540290/be067310-e2a3-4061-9013-d55db27a2422.png" alt="Navigate to IAM Dashboard from AWS console." style="display:block;margin:0 auto" />

<p>On the left-hand side menu, under the "<strong>Access management"</strong>, click on the "<strong>Users"</strong> menu link.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709891721884/52c2e37b-fbad-4233-905c-40e86d308e4b.png" alt="Navigate IAM Users web to create a new IAM User. " style="display:block;margin:0 auto" />

<p>On the "<strong>IAM Users"</strong> page, click the "<strong>Create user"</strong> button, then provide the information of the new IAM User as follows:</p>
<ul>
<li><p>User name: <strong>terraform-developer</strong></p>
</li>
<li><p>Attach policies directly &gt; Select <strong>AdministratorAccess</strong></p>
</li>
</ul>
<blockquote>
<p>BEST PRACTICE 👍</p>
<p>Note that using the <strong>AdministratorAccess</strong> policy for an IAM user managing AWS resources with Terraform isn't usually recommended. This policy grants full access to all AWS services and resources, which is risky if the user's credentials are compromised.</p>
<p>Instead, it's better to stick to <strong>the principle of least privilege</strong>. Only give IAM users the permissions they really need, like creating or changing certain resources. This makes things safer and reduces the chance of someone getting into your AWS resources without permission.</p>
</blockquote>
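<p>As a rough illustration of least privilege for this tutorial (which only creates and destroys a VPC), the IAM policy could be limited to the relevant EC2 actions. The policy below is my own sketch, not an officially vetted minimal set; the exact actions your configuration needs may differ, so adjust it to what your Terraform code actually does:</p>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TerraformVpcTutorial",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVpc",
        "ec2:DeleteVpc",
        "ec2:DescribeVpcs",
        "ec2:DescribeVpcAttribute",
        "ec2:ModifyVpcAttribute",
        "ec2:CreateTags",
        "ec2:DeleteTags"
      ],
      "Resource": "*"
    }
  ]
}
</code></pre>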
<h2>Step 6: Create access keys for the IAM User</h2>
<p>From "<strong>IAM Users"</strong> page, select <strong>terraform-developer</strong> IAM User we've just created, scroll down until you see the "<strong>Access keys"</strong> box, then press "<strong>Create access key"</strong> button as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709893392509/98a9ed16-9a2c-481f-913e-58e9fa91206c.png" alt="Show the button to create access key." style="display:block;margin:0 auto" />

<p>Select the "<strong>Use case"</strong> as "<strong>Command Line Interface (CLI)"</strong>, acknowledge the warning, and click the "<strong>Next"</strong> button as follows:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709893571214/4e2de751-c39a-4392-85c8-18197c567144.png" alt="Select a Use case for the access key to be created." style="display:block;margin:0 auto" />

<p>Optionally set the description tag, then click "<strong>Create access key"</strong> button below.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709893652904/c1e3586f-7881-49f1-822d-f98164be7575.png" alt="Show optional description tag field and the button to confirm to creating access key." style="display:block;margin:0 auto" />

<p>Your access key and secret access key have now been generated. Note that once you click the "<strong>Done"</strong> button, you won't be able to access this "<strong>Retrieve access keys"</strong> page again, so be sure to copy or download the keys and store them securely.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709894097849/0a872d47-b51a-408c-9450-21f5915c684d.png" alt="Show the result page after the access key and its secret access key are created." style="display:block;margin:0 auto" />

<h2>Step 7: Configure AWS CLI</h2>
<p>To enable the Terraform CLI to manage AWS resources, configure your AWS credentials using the access keys generated earlier. Set the <strong>AWS_ACCESS_KEY_ID</strong> and <strong>AWS_SECRET_ACCESS_KEY</strong> environment variables in your GitHub Codespace Terminal session. To do this, navigate to your repository "<strong>Settings</strong> &gt; <strong>Secrets and variables</strong> &gt; <strong>Codespaces"</strong>, and create new repository secrets with the following details:</p>
<ul>
<li><p><strong>AWS_ACCESS_KEY_ID</strong> = <em>&lt;Access key&gt;</em></p>
</li>
<li><p><strong>AWS_SECRET_ACCESS_KEY</strong> = <em>&lt;Secret access key&gt;</em></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709899391491/866860c6-08e0-4082-a0e5-59dd2d07ffae.png" alt="Show a GitHub web page to add repository secrets." style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709899538368/a8d53af3-1dbf-4c50-b5f5-790b3b3a9153.png" alt="Show list of all repository secrets of AWS access keys." style="display:block;margin:0 auto" />

<p>After configuring these secrets, your Codespace will automatically detect them. You'll receive a notification in your Codespace stating, "Your codespace secrets have changed. Reload to apply." Simply click the "<strong>Reload to apply"</strong> button for the changes to take effect.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709900827368/4ca3267c-67bf-4385-8041-ae88871ce849.png" alt="Show the button to reload the Codespace to make new secrets effective in Terminal tab." style="display:block;margin:0 auto" />

<p>After the Codespace reloads, access keys for <strong>AWS CLI</strong> are configured. <strong>Terraform CLI</strong> is now effectively authorized to manage AWS resources using the <strong>terraform-developer</strong> IAM User. We're all set to define and create our AWS infrastructure.</p>
<blockquote>
<p>IMPORTANT 🔐</p>
<p>Note that AWS credentials or any secrets stored in the <strong>"Repository secrets"</strong> setting are shared with all GitHub collaborators. For privacy, store them in your personal <strong>"Codespaces secrets"</strong> setting instead.</p>
<p>Reference: <a href="https://docs.github.com/en/codespaces/managing-your-codespaces/managing-your-account-specific-secrets-for-github-codespaces">Managing your account-specific secrets for GitHub Codespaces</a></p>
</blockquote>
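<p>Optionally, you can sanity-check from the "<strong>Terminal"</strong> tab that both variables are visible to the shell without printing their values. This snippet is my own addition, not part of the official setup:</p>
<pre><code class="language-bash"># Report whether each credential variable is set, without echoing its value
missing=0
for name in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  eval "value=\${$name:-}"
  if [ -n "$value" ]; then
    echo "$name: set"
  else
    echo "$name: missing"
    missing=$((missing+1))
  fi
done
echo "missing variables: $missing"
</code></pre>
<p>If any variable is reported missing, re-check the repository secrets and reload the Codespace.</p>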
<h2>Step 8: Install the Terraform Extension for VS Code</h2>
<p>On your Codespace, install the <strong>Terraform extension for VS Code</strong> by:</p>
<ol>
<li><p>Select <strong>"Extensions"</strong> menu on the left-hand side.</p>
</li>
<li><p>Search for the <strong>"terraform"</strong> on the marketplace.</p>
</li>
<li><p>Press <strong>"Install"</strong> button on the "<strong>HashiCorp Terraform"</strong> extension.</p>
</li>
</ol>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710026725047/85d67690-61fb-4fef-9a09-88ba115f4c31.png" alt="Show steps on UI to install Terraform VS Code extension in Codespace UI." style="display:block;margin:0 auto" />

<h2>Step 9: Define AWS infrastructure in Terraform configuration file</h2>
<p>In the root directory of the Codespace, create a file named <code>main.tf</code> to define our AWS infrastructure using <strong>HCL (HashiCorp Configuration Language)</strong>.</p>
<p>In <code>main.tf</code>, add the following code to create a basic <strong>Amazon VPC (Virtual Private Cloud)</strong> virtual network. Remember, creating a VPC incurs no cost, but it's recommended to clean it up afterward.</p>
<pre><code class="language-hcl">terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 5.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-1"
}

resource "aws_vpc" "example_vpc" {
  cidr_block = "10.0.0.0/16"
}
</code></pre>
<blockquote>
<p>The Terraform code provided above is from the example on the AWS Provider page linked <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs">here</a>.</p>
</blockquote>
<p>We can explain each block in the code as follows:</p>
<pre><code class="language-hcl">terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 5.0"
    }
  }
}
</code></pre>
<ul>
<li>The <code>terraform</code> block specifies the required providers. A provider is a Terraform plugin that enables Terraform to interact with a specific service or technology platform, such as AWS or Azure. This block maps the local provider name <code>aws</code> to the source address <code>"hashicorp/aws"</code> and the version constraint <code>"~&gt; 5.0"</code>.</li>
</ul>
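<p>As an optional aside (my own addition, with an example constraint value), the same <code>terraform</code> block can also pin the version of the Terraform CLI itself via <code>required_version</code>, which helps collaborators avoid surprises from mismatched CLI versions:</p>
<pre><code class="language-hcl">terraform {
  # Optional: also constrain the Terraform CLI version (example value)
  required_version = "&gt;= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 5.0"
    }
  }
}
</code></pre>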
<pre><code class="language-hcl">provider "aws" {
  region = "ap-southeast-1"
}
</code></pre>
<ul>
<li>The <code>provider "aws"</code> block configures the AWS Provider with a default region of Singapore (<code>"ap-southeast-1"</code>). Note that the region in this block is optional. You can instead define it with either the <code>AWS_REGION</code> or <code>AWS_DEFAULT_REGION</code> environment variable set to <code>ap-southeast-1</code>, for example through the <strong>Secrets and variables</strong> setting from step 7.</li>
</ul>
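<p>For example, if you'd rather not hard-code the region, you could export it in the Terminal session instead (or add it as a Codespaces secret). This is a small illustrative snippet of my own:</p>
<pre><code class="language-bash"># Let Terraform and the AWS CLI pick the region up from the environment
export AWS_REGION=ap-southeast-1
echo "Default region: $AWS_REGION"
</code></pre>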
<pre><code class="language-hcl">resource "aws_vpc" "example_vpc" {
  cidr_block = "10.0.0.0/16"
}
</code></pre>
<ul>
<li>This <code>resource</code> block defines a VPC resource (of resource type <code>"aws_vpc"</code>) named <code>"example_vpc"</code> with an IPv4 CIDR block of <code>"10.0.0.0/16"</code>.</li>
</ul>
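<p>As a small optional extension of the example above (my own sketch, not taken from the provider documentation's example), you could tag the VPC so it's easy to spot in the console, and add an <code>output</code> block so <code>terraform apply</code> prints the new VPC's ID:</p>
<pre><code class="language-hcl">resource "aws_vpc" "example_vpc" {
  cidr_block = "10.0.0.0/16"

  # Tags make the resource easy to identify in the AWS console
  tags = {
    Name = "example-vpc"
  }
}

# Printed at the end of "terraform apply"
output "example_vpc_id" {
  value = aws_vpc.example_vpc.id
}
</code></pre>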
<h2>Step 10: Create AWS infrastructure</h2>
<p>Now we have our configuration file defining our AWS infrastructure. Next, we'll create the specified AWS resources in our AWS account on behalf of the <strong>terraform-developer</strong> IAM User. Here are the commands to:</p>
<ul>
<li>Initialize the Terraform project in the root directory to download the AWS provider plugin and set up the working directory by:</li>
</ul>
<pre><code class="language-bash">terraform init
</code></pre>
<ul>
<li>Preview the actions that Terraform will take before applying any changes to the infrastructure by:</li>
</ul>
<pre><code class="language-bash">terraform plan
</code></pre>
<ul>
<li>After you're satisfied with the changes proposed by the <code>terraform plan</code> command, apply those changes to your AWS account by:</li>
</ul>
<pre><code class="language-bash">terraform apply
</code></pre>
<blockquote>
<p>Note that Terraform will prompt you to confirm applying the changes. Simply type <code>yes</code> and press <strong>Enter</strong> to proceed.</p>
</blockquote>
<h1>Verify the result ✅</h1>
<ul>
<li><p>Sign in to your AWS account.</p>
</li>
<li><p>On the AWS console, type <strong>"VPC"</strong> on the top search bar and click <strong>"VPC"</strong> link as follows:</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709913923022/e5552924-0d07-4f4c-bcd5-aa4bbfd28e27.png" alt="Navigate to VPC dashboard from AWS console." style="display:block;margin:0 auto" />

<ul>
<li>Click the "<strong>Your VPCs"</strong> menu; you should then see the newly created VPC whose IPv4 CIDR is "10.0.0.0/16", as specified in the Terraform code:</li>
</ul>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710025859843/cf5f6e17-8c04-4e51-b136-77aa1a1886c2.png" alt="Show the newly created VPC from Terraform configuration code." style="display:block;margin:0 auto" />

<p>If you see the new VPC on your VPC console, congratulations 🎉🎉🎉! You’ve successfully applied and managed your infrastructure as code using Terraform for AWS on GitHub Codespaces, even on the go 🚶🏻.</p>
<h1>Clean up 🧹</h1>
<p>After experimenting with Terraform, it's crucial to clean up your resources to avoid unexpected costs. To destroy the resources created earlier, run:</p>
<pre><code class="language-bash">terraform destroy
</code></pre>
<blockquote>
<p>Terraform will request confirmation before destroying the resources. Type <code>yes</code> and press <strong>Enter</strong> to proceed.</p>
</blockquote>
<h1>Conclusion 🏁</h1>
<p>In this guide, we've navigated the setup of Terraform for AWS on GitHub Codespaces, enabling flexible infrastructure management from any device. By following simple steps, we established our environment, created AWS resources, and verified our setup.</p>
<p>As we finish up, remember that your learning with Terraform and AWS doesn't stop here. Now that you have the basics, you can start trying out more advanced setups and improvements to better manage your infrastructure alongside me 😁.</p>
<p>Keep coding and exploring! 🚀</p>
<h1>References 🔗</h1>
<h2>Terraform</h2>
<ul>
<li><p><a href="https://developer.hashicorp.com/terraform/install#linux">Install | Terraform | HashiCorp Developer | Linux</a></p>
</li>
<li><p><a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs">Docs overview | hashicorp/aws | Terraform | Terraform Registry</a></p>
</li>
</ul>
<h2>AWS</h2>
<ul>
<li><p><a href="https://aws.amazon.com/application-hosting/benefits/">AWS | Benefits at a Glance</a></p>
</li>
<li><p><a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">AWS CLI | Install or update to the latest version of the AWS CLI</a></p>
</li>
<li><p><a href="https://repost.aws/knowledge-center/create-and-activate-aws-account">AWS Knowledge Center | How do I create and activate a new AWS account?</a></p>
</li>
</ul>
<h2>GitHub Codespaces</h2>
<ul>
<li><a href="https://docs.github.com/en/codespaces/managing-your-codespaces/managing-your-account-specific-secrets-for-github-codespaces">Managing your account-specific secrets for GitHub Codespaces</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🌏 Hello World!]]></title><description><![CDATA[Hello World and welcome to my very first blog! I'm Prasit Sutthikamolsakul, a cloud engineer and architect based in Bangkok, Thailand. I'm excited to share my knowledge and experiences with you through this platform. This is my first time blogging, a...]]></description><link>https://prasitstk.com/hello-world</link><guid isPermaLink="true">https://prasitstk.com/hello-world</guid><category><![CDATA[Welcome]]></category><category><![CDATA[First Blog]]></category><category><![CDATA[Hello World]]></category><dc:creator><![CDATA[Prasit (O) Sutthikamolsakul]]></dc:creator><pubDate>Sat, 02 Mar 2024 04:40:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/75nbwHfDsnY/upload/8ecb21b5032aba8fdb727643374736b2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello World and welcome to my very first blog! I'm Prasit Sutthikamolsakul, a cloud engineer and architect based in Bangkok, Thailand. I'm excited to share my knowledge and experiences with you through this platform. This is my first time blogging, and I'm thrilled to start this journey with you.</p>
<h3 id="heading-topics-to-be-covered">Topics to be covered</h3>
<p>In this blog, I'll be covering a wide range of topics, including:</p>
<ul>
<li><p>Personal learning experiences</p>
</li>
<li><p>Cloud and software engineering tutorials and tips</p>
</li>
<li><p>Productivity and self-improvement</p>
</li>
<li><p>Technology news and updates</p>
</li>
<li><p>And much more!</p>
</li>
</ul>
<h3 id="heading-get-in-touch">Get in Touch</h3>
<p>I'm eager to hear from you! Whether you have questions, suggestions, or just want to say hello, feel free to leave a comment on any of my posts or connect with me on social media. I'm looking forward to engaging with fellow bloggers and readers.</p>
<p>Thank you for joining me on this adventure! I hope you find value and inspiration in my posts. Let's learn and grow together.</p>
]]></content:encoded></item></channel></rss>