Mirror of https://github.com/sidpalas/devops-directive-terraform-course.git (synced 2025-12-11 13:21:14 +00:00)
Compare commits
14 Commits
- 9949d314f3
- 7a3fdaca72
- 4012eec6cd
- 75922067d8
- 0fdc55db8e
- d276ae7b6e
- a35c1c0632
- becbd33b93
- 3c0fe8a7f4
- f21f709b51
- 4afa2070b3
- d551e98de5
- 4199e89b67
- b46e7de9b4
`.github/workflows/terraform.yml` (vendored, 31 lines changed)

@@ -1,11 +1,12 @@
 name: "Terraform"

 on:
-  push:
-    branches:
-      - main
-    tags:
-      - v[0-9]+\.[0-9]+\.[0-9]+$
+  # Uncomment to enable staging deploy from main
+  # push:
+  #   branches:
+  #     - main
+  release:
+    types: [published]
   pull_request:

 jobs:

@@ -25,7 +26,8 @@ jobs:
       - name: Setup Terraform
         uses: hashicorp/setup-terraform@v1
         with:
-          terraform_version: 0.15.4
+          terraform_version: 1.0.1
+          terraform_wrapper: false

       - name: Terraform Format
         id: fmt

@@ -38,6 +40,7 @@ jobs:
       - name: Terraform Plan
         id: plan
         if: github.event_name == 'pull_request'
+        # Route 53 zone must already exist for this to succeed!
         run: terraform plan -var db_pass=${{secrets.DB_PASS }} -no-color
         continue-on-error: true

@@ -72,16 +75,26 @@ jobs:
         if: steps.plan.outcome == 'failure'
         run: exit 1

+      - uses: actions/setup-go@v2
+        with:
+          go-version: '^1.15.5'
+
+      - name: Terratest Execution
+        if: github.event_name == 'pull_request'
+        working-directory: 08-testing/tests/terratest
+        run: |
+          go test . -v -timeout 10m
+
       - name: Check tag
         id: check-tag
         run: |
-          if [[ ${{ github.ref }} =~ "^refs\/tags\/v[0-9]+\.[0-9]+\.[0-9]+$" ]]; then echo ::set-output name=environment::production
+          if [[ ${{ github.ref }} =~ ^refs\/tags\/v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then echo ::set-output name=environment::production
           elif [[ ${{ github.ref }} == 'refs/heads/main' ]]; then echo ::set-output name=environment::staging
           else echo ::set-output name=environment::unknown
           fi

       - name: Terraform Apply Global
-        if: github.event_name == 'push'
+        if: github.event_name == 'push' || github.event_name == 'release'
         working-directory: 07-managing-multiple-environments/file-structure/global
         run: |
           terraform init

@@ -92,7 +105,7 @@ jobs:
         run: terraform apply -var db_pass=${{secrets.DB_PASS }} -auto-approve

       - name: Terraform Apply Production
-        if: steps.check-tag.outputs.environment == 'production' && github.event_name == 'push'
+        if: steps.check-tag.outputs.environment == 'production' && github.event_name == 'release'
         working-directory: 07-managing-multiple-environments/file-structure/production
         run: |
           terraform init
`01-cloud-and-iac/README.md` (new file, 3 lines)

@@ -0,0 +1,3 @@
# 01 - Evolution of Cloud + Infrastructure as Code

This module doesn't have any corresponding code.

@@ -1,8 +0,0 @@
## Install Terraform
1) install terraform

## AWS Account Setup
2) create non-root AWS user
3) Add AmazonEC2FullAccess
4) Save Access key + secret key (or use AWS CLI `aws configure` -- https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
`02-overview/README.md` (new file, 22 lines)

@@ -0,0 +1,22 @@
## 02 - Overview + Setup

## Install Terraform

Official installation instructions from HashiCorp: https://learn.hashicorp.com/tutorials/terraform/install-cli

## AWS Account Setup

AWS Terraform provider documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication

1) create non-root AWS user
2) Add the necessary IAM roles (e.g. AmazonEC2FullAccess)
3) Save Access key + secret key (or use AWS CLI `aws configure` -- https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)

## Hello World

`./main.tf` contains minimal configuration to provision an EC2 instance.

1) `aws configure`
2) `terraform init`
3) `terraform plan`
4) `terraform apply`
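For orientation, a minimal sketch of the kind of configuration `./main.tf` holds; the AMI ID below is a placeholder assumption, not necessarily the one used in the course:

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-011899242bb902164" # placeholder: an Ubuntu AMI in us-east-1
  instance_type = "t2.micro"
}
```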
@@ -1,4 +1,45 @@
-1) Create account credentials
## 03 - Basics

## Remote Backends

Remote backends enable storage of Terraform state in a remote location, enabling secure collaboration.

### Terraform Cloud

https://www.terraform.io/cloud

`./terraform-cloud-backend/main.tf`
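A sketch of the kind of backend block `./terraform-cloud-backend/main.tf` would use; the organization and workspace names are assumptions to be replaced with your own:

```
terraform {
  backend "remote" {
    organization = "example-org" # assumption: your Terraform Cloud organization

    workspaces {
      name = "terraform-course" # assumption: your workspace name
    }
  }
}
```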
### AWS S3 + DynamoDB

Steps to initialize the backend in AWS and manage it with Terraform:

1) Use the config from `./aws-backend/` (init, plan, apply) to provision the S3 bucket and DynamoDB table with local state
2) Uncomment the remote backend configuration (reproduced after this section for reference)
3) Reinitialize with `terraform init`:

```
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.

Enter a value: yes
```

Now the S3 bucket and DynamoDB table are managed by Terraform and are able to be used as the state backend!
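The backend block uncommented in step 2 matches the one defined (commented out) in `./aws-backend/main.tf`:

```
terraform {
  backend "s3" {
    bucket         = "devops-directive-tf-state" # REPLACE WITH YOUR BUCKET NAME
    key            = "03-basics/import-bootstrap/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locking"
    encrypt        = true
  }
}
```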
## Web-App

Generic web application architecture including:
- EC2 instances
- S3 bucket
- RDS instance
- Load balancer
- Route 53 DNS config

This example will be refined and improved in later modules.

## Architecture

@@ -1,31 +0,0 @@
Steps to initialize backend in AWS and manage it with Terraform:

1) Use config from `bootstrap` (init, plan, apply) to provision s3 bucket and dynamoDB table with local state
2) copy state file into import-bootstrap
    1) cp terraform.tfstate ../import-bootstrap
3) Initialize within `import-bootstrap` using `terraform init`
4) Uncomment out s3 backend provider:

```
backend "s3" {
  bucket         = "devops-directive-tf-state"
  key            = "tf-infra/terraform.tfstate"
  region         = "us-east-1"
  dynamodb_table = "terraform-state-locking"
  encrypt        = true
}
```

4) Reinitialize with `terraform init`:

```
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.

Enter a value: yes
```

Now the S3 bucket and dynamoDB table are managed by Terraform and are able to be used as the state backend!
@@ -1,47 +0,0 @@
terraform {
  # THIS BACKEND CONFIG GETS UNCOMMENTED IN IMPORT-BOOTSTRAP
  # backend "s3" {
  #   bucket         = "devops-directive-tf-state"
  #   key            = "03-basics/import-bootstrap/terraform.tfstate"
  #   region         = "us-east-1"
  #   dynamodb_table = "terraform-state-locking"
  #   encrypt        = true
  # }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket        = "devops-directive-tf-state"
  force_destroy = true
  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locking"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
@@ -1,47 +0,0 @@
terraform {
  ### UNCOMMENT THIS AFTER INITIALIZNG ###
  # backend "s3" {
  #   bucket         = "devops-directive-tf-state"
  #   key            = "03-basics/import-bootstrap/terraform.tfstate"
  #   region         = "us-east-1"
  #   dynamodb_table = "terraform-state-locking"
  #   encrypt        = true
  # }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket        = "devops-directive-tf-state"
  force_destroy = true
  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locking"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
`03-basics/aws-backend/main.tf` (new file, 56 lines)

@@ -0,0 +1,56 @@
terraform {
  #############################################################
  ## AFTER RUNNING TERRAFORM APPLY (WITH LOCAL BACKEND)
  ## YOU WILL UNCOMMENT THIS CODE THEN RERUN TERRAFORM INIT
  ## TO SWITCH FROM LOCAL BACKEND TO REMOTE AWS BACKEND
  #############################################################
  # backend "s3" {
  #   bucket         = "devops-directive-tf-state" # REPLACE WITH YOUR BUCKET NAME
  #   key            = "03-basics/import-bootstrap/terraform.tfstate"
  #   region         = "us-east-1"
  #   dynamodb_table = "terraform-state-locking"
  #   encrypt        = true
  # }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket        = "devops-directive-tf-state" # REPLACE WITH YOUR BUCKET NAME
  force_destroy = true
}

resource "aws_s3_bucket_versioning" "terraform_bucket_versioning" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state_crypto_conf" {
  bucket = aws_s3_bucket.terraform_state.bucket
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locking"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
@@ -1,8 +0,0 @@
## Terraform Cloud Account Setup
1) Create account at terraform.io
2) Use terraform login
3) Set any necessary credentials for whichever cloud services you are using because with terraform cloud backend, the plan/apply are actually run remotely.
```
# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY
```
@@ -44,17 +44,22 @@ resource "aws_instance" "instance_2" {
 }

 resource "aws_s3_bucket" "bucket" {
-  bucket        = "devops-directive-web-app-data"
+  bucket_prefix = "devops-directive-web-app-data"
   force_destroy = true
-  versioning {
-    enabled = true
-  }
 }

-  server_side_encryption_configuration {
-    rule {
-      apply_server_side_encryption_by_default {
-        sse_algorithm = "AES256"
-      }
+resource "aws_s3_bucket_versioning" "bucket_versioning" {
+  bucket = aws_s3_bucket.bucket.id
+  versioning_configuration {
+    status = "Enabled"
+  }
+}

+resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_crypto_conf" {
+  bucket = aws_s3_bucket.bucket.bucket
+  rule {
+    apply_server_side_encryption_by_default {
+      sse_algorithm = "AES256"
+    }
+  }
+}
@@ -182,12 +187,12 @@ resource "aws_lb" "load_balancer" {
 }

 resource "aws_route53_zone" "primary" {
-  name = "mysuperawesomesite.com"
+  name = "devopsdeployed.com"
 }

 resource "aws_route53_record" "root" {
   zone_id = aws_route53_zone.primary.zone_id
-  name    = "mysuperawesomesite.com"
+  name    = "devopsdeployed.com"
   type    = "A"

   alias {
@@ -198,13 +203,18 @@ resource "aws_route53_record" "root" {
 }

 resource "aws_db_instance" "db_instance" {
-  allocated_storage   = 20
-  storage_type        = "standard"
-  engine              = "postgres"
-  engine_version      = "12.5"
-  instance_class      = "db.t2.micro"
-  name                = "mydb"
-  username            = "foo"
-  password            = "foobarbaz"
-  skip_final_snapshot = true
+  allocated_storage = 20
+  # This allows any minor version within the major engine_version
+  # defined below, but will also result in allowing AWS to auto
+  # upgrade the minor version of your DB. This may be too risky
+  # in a real production environment.
+  auto_minor_version_upgrade = true
+  storage_type               = "standard"
+  engine                     = "postgres"
+  engine_version             = "12"
+  instance_class             = "db.t2.micro"
+  name                       = "mydb"
+  username                   = "foo"
+  password                   = "foobarbaz"
+  skip_final_snapshot        = true
 }
@@ -37,7 +37,7 @@ resource "aws_db_instance" "db_instance" {
   allocated_storage = 20
   storage_type      = "gp2"
   engine            = "postgres"
-  engine_version    = "12.4"
+  engine_version    = "12"
   instance_class    = "db.t2.micro"
   name              = "mydb"
   username          = var.db_user
@@ -45,17 +45,22 @@ resource "aws_instance" "instance_2" {
 }

 resource "aws_s3_bucket" "bucket" {
-  bucket        = var.bucket_name
+  bucket_prefix = var.bucket_prefix
   force_destroy = true
-  versioning {
-    enabled = true
-  }
 }

-  server_side_encryption_configuration {
-    rule {
-      apply_server_side_encryption_by_default {
-        sse_algorithm = "AES256"
-      }
+resource "aws_s3_bucket_versioning" "bucket_versioning" {
+  bucket = aws_s3_bucket.bucket.id
+  versioning_configuration {
+    status = "Enabled"
+  }
+}

+resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_crypto_conf" {
+  bucket = aws_s3_bucket.bucket.bucket
+  rule {
+    apply_server_side_encryption_by_default {
+      sse_algorithm = "AES256"
+    }
+  }
+}
@@ -202,7 +207,7 @@ resource "aws_db_instance" "db_instance" {
   allocated_storage = 20
   storage_type      = "standard"
   engine            = "postgres"
-  engine_version    = "12.5"
+  engine_version    = "12"
   instance_class    = "db.t2.micro"
   name              = var.db_name
   username          = var.db_user
@@ -1,5 +1,5 @@
-bucket_name = "devops-directive-web-app-data"
-domain      = "mysuperawesomesite.com"
-db_name     = "mydb"
-db_user     = "foo"
-# db_pass = "foobarbaz"
+bucket_prefix = "devops-directive-web-app-data"
+domain        = "devopsdeployed.com"
+db_name       = "mydb"
+db_user       = "foo"
+# db_pass = "foobarbaz"
@@ -22,8 +22,8 @@ variable "instance_type" {

 # S3 Variables

-variable "bucket_name" {
-  description = "name of s3 bucket for app data"
+variable "bucket_prefix" {
+  description = "prefix of s3 bucket for app data"
   type        = string
 }
@@ -2,7 +2,7 @@ resource "aws_db_instance" "db_instance" {
   allocated_storage = 20
   storage_type      = "standard"
   engine            = "postgres"
-  engine_version    = "12.5"
+  engine_version    = "12"
   instance_class    = "db.t2.micro"
   name              = var.db_name
   username          = var.db_user
@@ -1,15 +1,20 @@
 resource "aws_s3_bucket" "bucket" {
-  bucket        = var.bucket_name
+  bucket_prefix = var.bucket_prefix
   force_destroy = true
-  versioning {
-    enabled = true
-  }
 }

-  server_side_encryption_configuration {
-    rule {
-      apply_server_side_encryption_by_default {
-        sse_algorithm = "AES256"
-      }
+resource "aws_s3_bucket_versioning" "bucket_versioning" {
+  bucket = aws_s3_bucket.bucket.id
+  versioning_configuration {
+    status = "Enabled"
+  }
+}

+resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_crypto_conf" {
+  bucket = aws_s3_bucket.bucket.bucket
+  rule {
+    apply_server_side_encryption_by_default {
+      sse_algorithm = "AES256"
+    }
+  }
+}
@@ -34,8 +34,8 @@ variable "instance_type" {

 # S3 Variables

-variable "bucket_name" {
-  description = "name of s3 bucket for app data"
+variable "bucket_prefix" {
+  description = "prefix of s3 bucket for app data"
   type        = string
 }
@@ -21,8 +21,14 @@ provider "aws" {
   region = "us-east-1"
 }

-variable "db_pass" {
-  description = "password for database"
+variable "db_pass_1" {
+  description = "password for database #1"
   type        = string
   sensitive   = true
 }

+variable "db_pass_2" {
+  description = "password for database #2"
+  type        = string
+  sensitive   = true
+}
@@ -31,28 +37,28 @@ module "web_app_1" {
   source = "../web-app-module"

   # Input Variables
-  bucket_name      = "web-app-1-devops-directive-web-app-data"
-  domain           = "mysuperawesomesite.com"
+  bucket_prefix    = "web-app-1-data"
+  domain           = "devopsdeployed.com"
   app_name         = "web-app-1"
   environment_name = "production"
-  instance_type    = "t2.small"
+  instance_type    = "t2.micro"
   create_dns_zone  = true
   db_name          = "webapp1db"
   db_user          = "foo"
-  db_pass          = var.db_pass
+  db_pass          = var.db_pass_1
 }

 module "web_app_2" {
   source = "../web-app-module"

   # Input Variables
-  bucket_name      = "web-app-2-devops-directive-web-app-data"
-  domain           = "myothersuperawesomesite.com"
+  bucket_prefix    = "web-app-2-data"
+  domain           = "anotherdevopsdeployed.com"
   app_name         = "web-app-2"
   environment_name = "production"
-  instance_type    = "t2.small"
+  instance_type    = "t2.micro"
   create_dns_zone  = true
   db_name          = "webapp2db"
-  db_user          = "foo"
-  db_pass          = var.db_pass
+  db_user          = "bar"
+  db_pass          = var.db_pass_2
 }
@@ -1,4 +1,4 @@
-- Note about using separate AWS projects (avoids prefix issues, improved IAM control)
+- Note about using separate AWS accounts (avoids prefix issues, improved IAM control)
 - Cover this in advanced section?

 ```
@@ -23,5 +23,5 @@ provider "aws" {

 # Route53 zone is shared across staging and production
 resource "aws_route53_zone" "primary" {
-  name = "mysuperawesomesite.com"
-}
+  name = "devopsdeployed.com"
+}
@@ -35,10 +35,10 @@ module "web_app" {
   source = "../../../06-organization-and-modules/web-app-module"

   # Input Variables
-  bucket_name      = "devops-directive-web-app-data-${local.environment_name}"
-  domain           = "mysuperawesomesite.com"
+  bucket_prefix    = "web-app-data-${local.environment_name}"
+  domain           = "devopsdeployed.com"
   environment_name = local.environment_name
-  instance_type    = "t2.small"
+  instance_type    = "t2.micro"
   create_dns_zone  = false
   db_name          = "${local.environment_name}mydb"
   db_user          = "foo"
@@ -35,8 +35,8 @@ module "web_app" {
   source = "../../../06-organization-and-modules/web-app-module"

   # Input Variables
-  bucket_name      = "devops-directive-web-app-data-${local.environment_name}"
-  domain           = "mysuperawesomesite.com"
+  bucket_prefix    = "web-app-data-${local.environment_name}"
+  domain           = "devopsdeployed.com"
   environment_name = local.environment_name
   instance_type    = "t2.micro"
   create_dns_zone  = false
@@ -32,13 +32,13 @@ locals {
 }

 module "web_app" {
-  source = "../../05-organization-and-modules/web-app-module"
+  source = "../../06-organization-and-modules/web-app-module"

   # Input Variables
-  bucket_name      = "devops-directive-web-app-data-${local.environment_name}"
-  domain           = "mysuperawesomesite.com"
+  bucket_prefix    = "web-app-data-${local.environment_name}"
+  domain           = "devopsdeployed.com"
   environment_name = local.environment_name
-  instance_type    = "t2.small"
+  instance_type    = "t2.micro"
   create_dns_zone  = terraform.workspace == "production" ? true : false
   db_name          = "${local.environment_name}mydb"
   db_user          = "foo"
`08-testing/tests/terratest/README.md` (new file, 7 lines)

@@ -0,0 +1,7 @@
How to run this test?

Download the dependencies, then run the tests:
```
go mod download
go test -v --timeout 10m
```
@@ -23,7 +23,7 @@ func TestTerraformHelloWorldExample(t *testing.T) {
	instanceURL := terraform.Output(t, terraformOptions, "url")
	tlsConfig := tls.Config{}
	maxRetries := 30
-	timeBetweenRetries := 5 * time.Second
+	timeBetweenRetries := 10 * time.Second

	http_helper.HttpGetWithRetryWithCustomValidation(
		t, instanceURL, &tlsConfig, maxRetries, timeBetweenRetries, validate,
`README.md` (new file, 45 lines)

@@ -0,0 +1,45 @@
# DevOps Directive Terraform Course

This is the companion repo to: [Complete Terraform Course - From BEGINNER to PRO! (Learn Infrastructure as Code)](https://www.youtube.com/watch?v=7xngnjfIlK4)

[](https://www.youtube.com/watch?v=7xngnjfIlK4)

## 01 - Evolution of Cloud + Infrastructure as Code

High level overview of the evolution of cloud computing and infrastructure as code.

This module does not have any corresponding code.

## 02 - Overview + Setup

Terraform overview and setup instructions.

Includes a basic `hello world` Terraform config to provision a single AWS EC2 instance.

## 03 - Basics

Covers the main usage pattern, sets up remote backends (where the Terraform state is stored) using Terraform Cloud and AWS, and provides a naive implementation of a web application architecture.

## 04 - Variables and Outputs

Introduces variables and outputs, which enable Terraform configurations to be flexible and composable. Refactors the web application to use these features.
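A minimal sketch of what a variable and an output look like; the names and default below are illustrative only, not the module's actual definitions:

```
variable "instance_type" {
  description = "EC2 instance type for the web app" # illustrative variable
  type        = string
  default     = "t2.micro"
}

output "instance_ip_addr" {
  value = aws_instance.example.public_ip # illustrative output referencing a provisioned instance
}
```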
## 05 - Language Features

Describes additional features of the HashiCorp Configuration Language (HCL).

## 06 - Organization and Modules

Demonstrates how to structure Terraform code into reusable modules and how to instantiate and configure them.

## 07 - Managing Multiple Environments

Shows two methods for managing multiple environments (e.g. dev/staging/production) with Terraform.

## 08 - Testing

Explains different types of testing (manual + automated) for Terraform modules and configurations.

## 09 - Developer Workflows + CI/CD

Covers how teams can work together with Terraform and how to set up CI/CD pipelines to keep infrastructure environments up to date.