This repository declares the Gigit cloud infrastructure as code using Terraform.
- Terraform v1.2.6: see the official guide on how to install Terraform.
- AWS credentials for accessing the Terraform state (hosted in an S3 bucket)
- gomplate; install it with your local package manager. On macOS:
brew install gomplate
- GNU Make (usually available on any system by default). Optional: you can run the commands from the Makefile directly in your terminal.
- Create a dedicated git repository for your project's infrastructure.
It is a good idea to keep the state of your current infrastructure in git. Because Terraform is declarative, you can revert your infrastructure to any point in its change history.
Let's assume you have created the repository for your project's infrastructure and are working from it:
git clone ssh@my_project_infrastructure
cd my_project_infrastructure
- Copy two files to your repository root:
curl https://raw.githubusercontent.com/MadAppGang/infrastructure/main/project/Makefile -o Makefile
curl https://raw.githubusercontent.com/MadAppGang/infrastructure/main/project/dev.yaml -o dev.yaml
Do not clone this repository; you don't need it! We assume you are in the empty repository of your project's infrastructure, or in a subfolder inside your project.
As a result you will have Makefile and dev.yaml in your repository root.
- Init new data:
make init
- Edit the dev.yaml file and generate your Terraform data:
make dev
or
gomplate -c vars=dev.yaml -f ./infrastructure/env/main.tmpl -o ./env/dev/main.tf
If you are setting up a new AWS account, you need to create the state bucket first:
export AWS_PROFILE=projectdev
aws s3 mb s3://instagram-terraform-state-dev
- Init Terraform:
Ensure you are using the proper AWS_PROFILE first.
export AWS_PROFILE=projectdev
make devplan
or
cd env/dev
terraform init
- Apply the plan when you're happy with it:
make devapply
or
terraform apply
- After that, commit this repository; ideally you won't need it any more.
You can find examples of Dockerfiles and GitHub Actions for different tech stacks in the receipts folder.
Whenever you publish a new image to ECR (using a GitHub Action or manually), the watcher in the cloud will redeploy your infrastructure.
In production you need to send a special command to AWS EventBridge; only explicit deploys to prod are allowed. If you want to automate it, add this to your GitHub Action or other CI.
- Whenever you make a change to your configuration (dev.yaml or prod.yaml), you need to regenerate and apply it:
make dev
make devplan
.........
terraform change output here
ensure Terraform performs what you expect
.......
make devapply
- Update your infrastructure.
You can check the infrastructure version by typing make version. If a new version with features you need is available, you need to update your reference architecture files:
make update
make dev
make devapply
Don't upgrade if you don't have to. We try to keep backward compatibility, but it is not guaranteed.
| command | description |
| --- | --- |
| clean | remove all the data |
| update | apply new version of infrastructure |
| version | show current infrastructure version |
| dev | generate dev terraform env |
| prod | generate prod terraform env |
| devplan | show dev terraform plan |
| prodplan | show prod terraform plan |
| devapply | apply dev terraform plan |
| prodapply | apply prod terraform plan |
The backend and every task use env variables from AWS Parameter Store (SSM), one parameter per value.
When you need to populate initial values from a JSON file, please use
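The helper referenced above is not named here; as an illustrative sketch of the idea (file name and parameter prefix are hypothetical), a flat JSON object can be mapped to one SSM parameter per value:

```python
import json


def json_to_parameters(values: dict, prefix: str) -> list:
    """Map a flat JSON object to (parameter_name, value) pairs,
    one Parameter Store entry per value."""
    return [(f"{prefix}/{key}", str(value)) for key, value in values.items()]


def push_to_ssm(path: str, prefix: str) -> None:
    # Not invoked here; requires boto3 and valid AWS credentials.
    import boto3

    ssm = boto3.client("ssm")
    with open(path) as f:
        values = json.load(f)
    for name, value in json_to_parameters(values, prefix):
        ssm.put_parameter(Name=name, Value=value, Type="SecureString", Overwrite=True)
```

The prefix convention (e.g. `/myproject/dev`) is an assumption; match whatever naming your tasks read their parameters from.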
github_subject is a string that grants GitHub access to the AWS infrastructure.
More details can be found in the official GitHub docs.
repo:OWNER/REPOSITORY:environment:NAME
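For illustration only, the subject string can be assembled from the pattern above; owner, repository, and environment name are your own values:

```python
def github_subject(owner: str, repo: str, environment: str) -> str:
    # Matches the pattern shown above: repo:OWNER/REPOSITORY:environment:NAME
    return f"repo:{owner}/{repo}:environment:{environment}"
```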
All services by default should respond with status 200 to a GET request at /health/live. If a service does not respond with status 200, the application load balancer will consider it unhealthy and redeploy it.
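As a minimal stdlib-only sketch (not the repository's own code), a service satisfying this health check only needs to answer 200 on that path:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health/live":
            # The ALB health check only needs a 200 status here.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()


def serve(port: int = 8080) -> None:
    # Blocks forever; port 8080 is an assumption, use your service's port.
    HTTPServer(("", port), HealthHandler).serve_forever()
```

In a real service you would add this route to your existing web framework rather than run a separate server.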
You can use Amazon ECS Exec to execute commands remotely in a terminal.
To do so, you need to install the AWS Session Manager plugin on your machine.
For Macs with Apple silicon (M-series):
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac_arm64/session-manager-plugin.pkg" -o "session-manager-plugin.pkg"
sudo installer -pkg session-manager-plugin.pkg -target /
sudo ln -s /usr/local/sessionmanagerplugin/bin/session-manager-plugin /usr/local/bin/session-manager-plugin
After that you can verify the installation by running session-manager-plugin.
With Session Manager you can log in to a container, execute a command in a container, or do port forwarding.
You can use a useful script to help you work with ECS Exec.
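As a hedged sketch of such a helper (cluster, task, and container names below are hypothetical placeholders), the ECS Exec invocation can be assembled like this:

```python
import subprocess


def ecs_exec_args(cluster: str, task: str, container: str, command: str = "/bin/sh") -> list:
    """Build the `aws ecs execute-command` invocation for an
    interactive session in a running container."""
    return [
        "aws", "ecs", "execute-command",
        "--cluster", cluster,
        "--task", task,
        "--container", container,
        "--interactive",
        "--command", command,
    ]


def run_exec(cluster: str, task: str, container: str) -> None:
    # Not invoked here; requires the AWS CLI and the Session Manager plugin.
    subprocess.run(ecs_exec_args(cluster, task, container), check=True)
```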
You can test events by sending them to the event bus using the CLI. The same command is used to trigger a deploy from a GitHub Action.
aws events put-events --entries "Source=github,Detail=\"{}\",DetailType=TESTING,EventBusName=default"
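If your CI step runs Python instead of the AWS CLI, the equivalent event can be built and sent with boto3; the entry below mirrors the CLI command above (the `send` helper is an assumption and requires AWS credentials):

```python
import json


def deploy_event_entry(source: str = "github", detail_type: str = "TESTING",
                       bus: str = "default", detail: dict = None) -> dict:
    """Build one EventBridge PutEvents entry matching the CLI example."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail or {}),
        "EventBusName": bus,
    }


def send(entry: dict) -> None:
    # Not invoked here; requires boto3 and valid AWS credentials.
    import boto3

    boto3.client("events").put_events(Entries=[entry])
```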