16 Commits
v0.0.1 ... main

Author SHA1 Message Date
sidpalas
9949d314f3 Updates from bitrot livestream (#16)
- Remove minor version pin for postgres versions (i.e. 12.5 -> 12)
- Update s3 config to use separate versioning and encryption terraform resources
- Use bucket_prefix instead of bucket for bucket naming to avoid name conflicts

Stream: https://youtu.be/KWwKPYuOGBw
2023-01-13 12:43:41 -05:00
Lachlan Mulcahy
7a3fdaca72 03-basics: Fix deprecation warnings and postgres version error (#12)
This change updates the s3 bucket resource syntax to use the newer
resource types for specifying versioning and encryption configs.
We also enable auto_minor_version_upgrade for the RDS instance and
switch to only asking for major version 12.

This will just use the default/latest RDS PostgreSQL v12 minor
version. Upside, the specific engine_version provided here will take
longer before it becomes invalid. Minor downside, we are saying it's
OK for this RDS instance to undergo minor version upgrades, which,
while fine for a toy example like this, is often not great in prod.
2022-09-06 19:53:38 -04:00
sidpalas
4012eec6cd Update README with link and thumbnail 2022-02-16 12:53:11 -08:00
sid palas
75922067d8 disable staging deploy 2022-02-13 12:23:58 -08:00
sid palas
0fdc55db8e update domain 2022-02-13 12:00:39 -08:00
sidpalas
d276ae7b6e [feature] Add workflow step to run terratest test (#5) 2021-07-05 19:24:17 -07:00
sid palas
a35c1c0632 Fix relative source path for web_app module 2021-07-05 15:45:19 -07:00
sid palas
becbd33b93 Disable github action push event trigger 2021-06-27 22:09:11 -07:00
sid palas
3c0fe8a7f4 Update readmes for modules 1-3 2021-06-27 22:07:31 -07:00
sid palas
f21f709b51 Updates during testing of TF 1.0.1 2021-06-27 13:41:03 -07:00
sid palas
4afa2070b3 [docs] Add top level readme 2021-05-27 20:08:59 -07:00
sid palas
d551e98de5 [bugfix] update condition to use event_name==release 2021-05-27 17:39:54 -07:00
sid palas
4199e89b67 [Bugfix] remove quotes from tag regex 2021-05-27 17:29:53 -07:00
sid palas
b46e7de9b4 use published release rather than tags 2021-05-27 17:25:08 -07:00
sid palas
5fd2d96596 Merge branch 'main' of https://github.com/sidpalas/devops-directive-terraform-course into main 2021-05-27 17:21:47 -07:00
sid palas
30b623bfac update tag filter 2021-05-27 17:21:32 -07:00
27 changed files with 297 additions and 225 deletions

View File

@@ -1,11 +1,12 @@
name: "Terraform"
on:
push:
branches:
- main
tags:
- v\d+\.\d+\.\d+$
# Uncomment to enable staging deploy from main
# push:
# branches:
# - main
release:
types: [published]
pull_request:
jobs:
@@ -25,7 +26,8 @@ jobs:
- name: Setup Terraform
uses: hashicorp/setup-terraform@v1
with:
terraform_version: 0.15.4
terraform_version: 1.0.1
terraform_wrapper: false
- name: Terraform Format
id: fmt
@@ -38,6 +40,7 @@ jobs:
- name: Terraform Plan
id: plan
if: github.event_name == 'pull_request'
# Route 53 zone must already exist for this to succeed!
run: terraform plan -var db_pass=${{secrets.DB_PASS }} -no-color
continue-on-error: true
@@ -72,16 +75,26 @@ jobs:
if: steps.plan.outcome == 'failure'
run: exit 1
- uses: actions/setup-go@v2
with:
go-version: '^1.15.5'
- name: Terratest Execution
if: github.event_name == 'pull_request'
working-directory: 08-testing/tests/terratest
run: |
go test . -v -timeout 10m
- name: Check tag
id: check-tag
run: |
if [[ ${{ github.ref }} =~ "^refs\/tags\/v[0-9]+\.[0-9]+\.[0-9]+$" ]]; then echo ::set-output name=environment::production
if [[ ${{ github.ref }} =~ ^refs\/tags\/v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then echo ::set-output name=environment::production
elif [[ ${{ github.ref }} == 'refs/heads/main' ]]; then echo ::set-output name=environment::staging
else echo ::set-output name=environment::unknown
fi
- name: Terraform Apply Global
if: github.event_name == 'push'
if: github.event_name == 'push' || github.event_name == 'release'
working-directory: 07-managing-multiple-environments/file-structure/global
run: |
terraform init
@@ -92,7 +105,7 @@ jobs:
run: terraform apply -var db_pass=${{secrets.DB_PASS }} -auto-approve
- name: Terraform Apply Production
if: steps.check-tag.outputs.environment == 'production' && github.event_name == 'push'
if: steps.check-tag.outputs.environment == 'production' && github.event_name == 'release'
working-directory: 07-managing-multiple-environments/file-structure/production
run: |
terraform init

View File

@@ -0,0 +1,3 @@
# 01 - Evolution of Cloud + Infrastructure as Code
This module doesn't have any corresponding code.

View File

@@ -1,8 +0,0 @@
## Install Terraform
1) install terraform
## AWS Account Setup
2) create non-root AWS user
3) Add AmazonEC2FullAccess
4) Save Access key + secret key (or use AWS CLI `aws configure` -- https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)

02-overview/README.md Normal file
View File

@@ -0,0 +1,22 @@
## 02 - Overview + Setup
## Install Terraform
Official installation instructions from HashiCorp: https://learn.hashicorp.com/tutorials/terraform/install-cli
## AWS Account Setup
AWS Terraform provider documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication
1) create non-root AWS user
2) Add the necessary IAM roles (e.g. AmazonEC2FullAccess)
3) Save Access key + secret key (or use AWS CLI `aws configure` -- https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
## Hello World
`./main.tf` contains a minimal configuration to provision an EC2 instance.
1) `aws configure`
2) `terraform init`
3) `terraform plan`
4) `terraform apply`
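A minimal `main.tf` along these lines is enough to provision a single EC2 instance (the AMI ID below is a placeholder, not necessarily the one used in this repo; look up a current AMI for your region):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder -- substitute a current AMI for your region
  instance_type = "t2.micro"
}
```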

View File

@@ -1,4 +1,45 @@
1) Create account credentials
## 03 - Basics
## Remote Backends
Remote backends store Terraform state in a remote location, enabling secure collaboration.
### Terraform Cloud
https://www.terraform.io/cloud
`./terraform-cloud-backend/main.tf`
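The referenced `./terraform-cloud-backend/main.tf` would use a `remote` backend block along these lines (the organization and workspace names here are placeholders):

```hcl
terraform {
  backend "remote" {
    # placeholder values -- use your own organization/workspace
    organization = "your-org-name"

    workspaces {
      name = "terraform-course"
    }
  }
}
```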
### AWS S3 + Dynamo DB
Steps to initialize backend in AWS and manage it with Terraform:
1) Use config from `./aws-backend/` (init, plan, apply) to provision s3 bucket and dynamoDB table with local state
2) Uncomment the remote backend configuration
3) Reinitialize with `terraform init`:
```
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
```
Now the S3 bucket and DynamoDB table are managed by Terraform and can be used as the state backend!
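The remote backend configuration uncommented in step 2 is the `s3` block also shown in `./aws-backend/main.tf` (the bucket and table names are this repo's examples; replace with your own):

```hcl
terraform {
  backend "s3" {
    bucket         = "devops-directive-tf-state" # REPLACE WITH YOUR BUCKET NAME
    key            = "03-basics/import-bootstrap/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locking"
    encrypt        = true
  }
}
```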
## Web-App
Generic web application architecture including:
- EC2 instances
- S3 bucket
- RDS instance
- Load balancer
- Route 53 DNS config
This example will be refined and improved in later modules.
## Architecture
![](./web-app/architecture.png)

View File

@@ -1,31 +0,0 @@
Steps to initialize backend in AWS and manage it with Terraform:
1) Use config from `bootstrap` (init, plan, apply) to provision s3 bucket and dynamoDB table with local state
2) Copy the state file into `import-bootstrap`: `cp terraform.tfstate ../import-bootstrap`
3) Initialize within `import-bootstrap` using `terraform init`
4) Uncomment the s3 backend block:
```
backend "s3" {
bucket = "devops-directive-tf-state"
key = "tf-infra/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-state-locking"
encrypt = true
}
```
5) Reinitialize with `terraform init`:
```
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
```
Now the S3 bucket and dynamoDB table are managed by Terraform and are able to be used as the state backend!

View File

@@ -1,47 +0,0 @@
terraform {
# THIS BACKEND CONFIG GETS UNCOMMENTED IN IMPORT-BOOTSTRAP
# backend "s3" {
# bucket = "devops-directive-tf-state"
# key = "03-basics/import-bootstrap/terraform.tfstate"
# region = "us-east-1"
# dynamodb_table = "terraform-state-locking"
# encrypt = true
# }
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "devops-directive-tf-state"
force_destroy = true
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-state-locking"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}

View File

@@ -1,47 +0,0 @@
terraform {
### UNCOMMENT THIS AFTER INITIALIZING ###
# backend "s3" {
# bucket = "devops-directive-tf-state"
# key = "03-basics/import-bootstrap/terraform.tfstate"
# region = "us-east-1"
# dynamodb_table = "terraform-state-locking"
# encrypt = true
# }
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "devops-directive-tf-state"
force_destroy = true
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-state-locking"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}

View File

@@ -0,0 +1,56 @@
terraform {
#############################################################
## AFTER RUNNING TERRAFORM APPLY (WITH LOCAL BACKEND)
## YOU WILL UNCOMMENT THIS CODE THEN RERUN TERRAFORM INIT
## TO SWITCH FROM LOCAL BACKEND TO REMOTE AWS BACKEND
#############################################################
# backend "s3" {
# bucket = "devops-directive-tf-state" # REPLACE WITH YOUR BUCKET NAME
# key = "03-basics/import-bootstrap/terraform.tfstate"
# region = "us-east-1"
# dynamodb_table = "terraform-state-locking"
# encrypt = true
# }
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "devops-directive-tf-state" # REPLACE WITH YOUR BUCKET NAME
force_destroy = true
}
resource "aws_s3_bucket_versioning" "terraform_bucket_versioning" {
bucket = aws_s3_bucket.terraform_state.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state_crypto_conf" {
bucket = aws_s3_bucket.terraform_state.bucket
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-state-locking"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}

View File

@@ -1,8 +0,0 @@
## Terraform Cloud Account Setup
1) Create an account at terraform.io
2) Run `terraform login`
3) Set any necessary credentials for whichever cloud services you are using, because with the Terraform Cloud backend the plan/apply steps actually run remotely:
```
# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY
```

View File

@@ -44,17 +44,22 @@ resource "aws_instance" "instance_2" {
}
resource "aws_s3_bucket" "bucket" {
bucket = "devops-directive-web-app-data"
bucket_prefix = "devops-directive-web-app-data"
force_destroy = true
versioning {
enabled = true
}
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
bucket = aws_s3_bucket.bucket.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_crypto_conf" {
bucket = aws_s3_bucket.bucket.bucket
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
@@ -182,12 +187,12 @@ resource "aws_lb" "load_balancer" {
}
resource "aws_route53_zone" "primary" {
name = "mysuperawesomesite.com"
name = "devopsdeployed.com"
}
resource "aws_route53_record" "root" {
zone_id = aws_route53_zone.primary.zone_id
name = "mysuperawesomesite.com"
name = "devopsdeployed.com"
type = "A"
alias {
@@ -198,13 +203,18 @@ resource "aws_route53_record" "root" {
}
resource "aws_db_instance" "db_instance" {
allocated_storage = 20
storage_type = "standard"
engine = "postgres"
engine_version = "12.5"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
skip_final_snapshot = true
allocated_storage = 20
# This allows any minor version within the major engine_version
# defined below, but will also result in allowing AWS to auto
# upgrade the minor version of your DB. This may be too risky
# in a real production environment.
auto_minor_version_upgrade = true
storage_type = "standard"
engine = "postgres"
engine_version = "12"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
skip_final_snapshot = true
}

View File

@@ -37,7 +37,7 @@ resource "aws_db_instance" "db_instance" {
allocated_storage = 20
storage_type = "gp2"
engine = "postgres"
engine_version = "12.4"
engine_version = "12"
instance_class = "db.t2.micro"
name = "mydb"
username = var.db_user

View File

@@ -45,17 +45,22 @@ resource "aws_instance" "instance_2" {
}
resource "aws_s3_bucket" "bucket" {
bucket = var.bucket_name
bucket_prefix = var.bucket_prefix
force_destroy = true
versioning {
enabled = true
}
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
bucket = aws_s3_bucket.bucket.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_crypto_conf" {
bucket = aws_s3_bucket.bucket.bucket
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
@@ -202,7 +207,7 @@ resource "aws_db_instance" "db_instance" {
allocated_storage = 20
storage_type = "standard"
engine = "postgres"
engine_version = "12.5"
engine_version = "12"
instance_class = "db.t2.micro"
name = var.db_name
username = var.db_user

View File

@@ -1,5 +1,5 @@
bucket_name = "devops-directive-web-app-data"
domain = "mysuperawesomesite.com"
db_name = "mydb"
db_user = "foo"
# db_pass = "foobarbaz"
bucket_prefix = "devops-directive-web-app-data"
domain = "devopsdeployed.com"
db_name = "mydb"
db_user = "foo"
# db_pass = "foobarbaz"

View File

@@ -22,8 +22,8 @@ variable "instance_type" {
# S3 Variables
variable "bucket_name" {
description = "name of s3 bucket for app data"
variable "bucket_prefix" {
description = "prefix of s3 bucket for app data"
type = string
}

View File

@@ -2,7 +2,7 @@ resource "aws_db_instance" "db_instance" {
allocated_storage = 20
storage_type = "standard"
engine = "postgres"
engine_version = "12.5"
engine_version = "12"
instance_class = "db.t2.micro"
name = var.db_name
username = var.db_user

View File

@@ -1,15 +1,20 @@
resource "aws_s3_bucket" "bucket" {
bucket = var.bucket_name
bucket_prefix = var.bucket_prefix
force_destroy = true
versioning {
enabled = true
}
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
bucket = aws_s3_bucket.bucket.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_crypto_conf" {
bucket = aws_s3_bucket.bucket.bucket
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}

View File

@@ -34,8 +34,8 @@ variable "instance_type" {
# S3 Variables
variable "bucket_name" {
description = "name of s3 bucket for app data"
variable "bucket_prefix" {
description = "prefix of s3 bucket for app data"
type = string
}

View File

@@ -21,8 +21,14 @@ provider "aws" {
region = "us-east-1"
}
variable "db_pass" {
description = "password for database"
variable "db_pass_1" {
description = "password for database #1"
type = string
sensitive = true
}
variable "db_pass_2" {
description = "password for database #2"
type = string
sensitive = true
}
@@ -31,28 +37,28 @@ module "web_app_1" {
source = "../web-app-module"
# Input Variables
bucket_name = "web-app-1-devops-directive-web-app-data"
domain = "mysuperawesomesite.com"
bucket_prefix = "web-app-1-data"
domain = "devopsdeployed.com"
app_name = "web-app-1"
environment_name = "production"
instance_type = "t2.small"
instance_type = "t2.micro"
create_dns_zone = true
db_name = "webapp1db"
db_user = "foo"
db_pass = var.db_pass
db_pass = var.db_pass_1
}
module "web_app_2" {
source = "../web-app-module"
# Input Variables
bucket_name = "web-app-2-devops-directive-web-app-data"
domain = "myothersuperawesomesite.com"
bucket_prefix = "web-app-2-data"
domain = "anotherdevopsdeployed.com"
app_name = "web-app-2"
environment_name = "production"
instance_type = "t2.small"
instance_type = "t2.micro"
create_dns_zone = true
db_name = "webapp2db"
db_user = "foo"
db_pass = var.db_pass
db_user = "bar"
db_pass = var.db_pass_2
}

View File

@@ -1,4 +1,4 @@
- Note about using separate AWS projects (avoids prefix issues, improved IAM control)
- Note about using separate AWS accounts (avoids prefix issues, improved IAM control)
- Cover this in advanced section?
```

View File

@@ -23,5 +23,5 @@ provider "aws" {
# Route53 zone is shared across staging and production
resource "aws_route53_zone" "primary" {
name = "mysuperawesomesite.com"
}
name = "devopsdeployed.com"
}

View File

@@ -35,10 +35,10 @@ module "web_app" {
source = "../../../06-organization-and-modules/web-app-module"
# Input Variables
bucket_name = "devops-directive-web-app-data-${local.environment_name}"
domain = "mysuperawesomesite.com"
bucket_prefix = "web-app-data-${local.environment_name}"
domain = "devopsdeployed.com"
environment_name = local.environment_name
instance_type = "t2.small"
instance_type = "t2.micro"
create_dns_zone = false
db_name = "${local.environment_name}mydb"
db_user = "foo"

View File

@@ -35,8 +35,8 @@ module "web_app" {
source = "../../../06-organization-and-modules/web-app-module"
# Input Variables
bucket_name = "devops-directive-web-app-data-${local.environment_name}"
domain = "mysuperawesomesite.com"
bucket_prefix = "web-app-data-${local.environment_name}"
domain = "devopsdeployed.com"
environment_name = local.environment_name
instance_type = "t2.micro"
create_dns_zone = false

View File

@@ -32,13 +32,13 @@ locals {
}
module "web_app" {
source = "../../05-organization-and-modules/web-app-module"
source = "../../06-organization-and-modules/web-app-module"
# Input Variables
bucket_name = "devops-directive-web-app-data-${local.environment_name}"
domain = "mysuperawesomesite.com"
bucket_prefix = "web-app-data-${local.environment_name}"
domain = "devopsdeployed.com"
environment_name = local.environment_name
instance_type = "t2.small"
instance_type = "t2.micro"
create_dns_zone = terraform.workspace == "production" ? true : false
db_name = "${local.environment_name}mydb"
db_user = "foo"

View File

@@ -0,0 +1,7 @@
How to run this test:
Download dependencies, then run the tests:
```
go mod download
go test -v --timeout 10m
```

View File

@@ -23,7 +23,7 @@ func TestTerraformHelloWorldExample(t *testing.T) {
instanceURL := terraform.Output(t, terraformOptions, "url")
tlsConfig := tls.Config{}
maxRetries := 30
timeBetweenRetries := 5 * time.Second
timeBetweenRetries := 10 * time.Second
http_helper.HttpGetWithRetryWithCustomValidation(
t, instanceURL, &tlsConfig, maxRetries, timeBetweenRetries, validate,

README.md Normal file
View File

@@ -0,0 +1,45 @@
# DevOps Directive Terraform Course
This is the companion repo to: [Complete Terraform Course - From BEGINNER to PRO! (Learn Infrastructure as Code)](https://www.youtube.com/watch?v=7xngnjfIlK4)
[![thumbnail](https://user-images.githubusercontent.com/1320389/154354937-98533608-2f42-44c1-8110-87f7e3f45085.jpeg)](https://www.youtube.com/watch?v=7xngnjfIlK4)
## 01 - Evolution of Cloud + Infrastructure as Code
High level overview of the evolution of cloud computing and infrastructure as code.
This module does not have any corresponding code.
## 02 - Overview + Setup
Terraform overview and setup instructions.
Includes basic `hello world` terraform config to provision a single AWS EC2 instance.
## 03 - Basics
Covers the main usage patterns, sets up remote backends (where the Terraform state is stored) using Terraform Cloud and AWS, and provides a naive implementation of a web application architecture.
## 04 - Variables and Outputs
Introduces variables and outputs, which enable Terraform configurations to be flexible and composable. Refactors the web application to use these features.
## 05 - Language Features
Describes additional features of the HashiCorp Configuration Language (HCL).
## 06 - Organization and Modules
Demonstrates how to structure Terraform code into reusable modules and how to instantiate/configure them.
## 07 - Managing Multiple Environments
Shows two methods for managing multiple environments (e.g. dev/staging/production) with Terraform.
## 08 - Testing
Explains different types of testing (manual + automated) for Terraform modules and configurations.
## 09 - Developer Workflows + CI/CD
Covers how teams can work together with Terraform and how to set up CI/CD pipelines to keep infrastructure environments up to date.