Prerequisites

- AWS Account
- Terraform Cloud Account
- Preconfigured access in ~/.terraformrc. Get the token from https://app.terraform.io by going to Settings → Teams → Team API Token. Generate a new token and create the file ~/.terraformrc:

credentials "app.terraform.io" {
  token = "iz5o8MNxgBBPwQ...."
}
Step 1: Setup

- Check out the repository senofi/openidl-devops.
- Create a new folder under openidl-devops/aws-infrastructure/environments/ by copying the sample folder openidl-devops/aws-infrastructure/environments/sample-env (see the sketch below).
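A minimal sketch of this step, assuming the repository is hosted on GitHub under senofi/openidl-devops and using dev-env as a placeholder environment folder name:

# Clone the DevOps repository (GitHub URL assumed)
git clone https://github.com/senofi/openidl-devops.git
cd openidl-devops/aws-infrastructure/environments
# Copy the sample environment into a new environment folder
cp -r sample-env dev-env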
Step 2: Create IAM User & Role

- Pull the AWS credentials from the AWS Console for the AWS account you have access to. The user needs IAM permissions to create roles and other users.
- Go to openidl-devops/aws-infrastructure/environments/<env-folder> as copied in the previous step.
- Configure openidl-devops/aws-infrastructure/environments/<env-folder>/org-vars.yaml: fill in the iam AWS access and secret keys, and configure the org ID and the environment ID (dev, test or prod).
- Go to <env-folder>/iam and run terragrunt plan. After a review, apply the changes with terragrunt apply (see below).

The script creates the IAM user and role used by the Terraform automation in the following steps.
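The terragrunt commands for this step, run from the copied environment folder:

cd <env-folder>/iam
terragrunt plan
# Review the plan output, then apply the changes
terragrunt apply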
Step 3: Create Ops Kubernetes Cluster

- Manually register a new key pair in AWS by going to EC2 → Key pairs. Create a new key with the name awx-target. Keep the private key in the environments folder or anywhere on the file system you prefer.
- Go to the Terraform Cloud workspace that was just created in the previous step and open the States tab. Open the top state in the list, find the outputs, and copy the access_key and secret_key values; they will be used in the next step.
- Go to <env-folder>/k8s-cluster and run terragrunt plan. This first run should fail, but it should have created a new workspace in Terraform Cloud, e.g. devnet-d3-k8s-cluster.
- Make sure the AWS variables are set in org-vars.yaml under the terraform: property (see the sketch below):
  - aws_access_key = terraform user's access key ID
  - aws_secret_key = terraform user's secret access key
  - region = us-east-2 or any other region you prefer
  - aws_role_arn = terraform role ARN
  - aws_external_id = terraform
- Run terragrunt plan again. Review, and if things look ok, run terragrunt apply. Acknowledge the run with yes at the prompt.

The script creates the ops Kubernetes cluster and its supporting resources (including the RDS PostgreSQL instance and the AWX target EC2 instance referenced in the steps below).
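A sketch of the relevant org-vars.yaml section, assuming the property names above map one-to-one onto YAML keys (check the sample-env copy for the exact layout):

terraform:
  aws_access_key: AKIA...   # terraform user's access key ID
  aws_secret_key: r3AB...   # terraform user's secret access key
  region: us-east-2
  aws_role_arn: arn:aws:iam::<aws-account-number>:role/tf_automation
  aws_external_id: terraform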
Step 4: Import the Kubernetes Cluster Connection Config

Make sure you have an AWS profile set in your ~/.aws/config and ~/.aws/credentials.

In ~/.aws/config:

[profile tf-user]
region = us-east-2
external_id = terraform

[profile tf-role]
external_id = terraform
source_profile = tf-user
role_arn = arn:aws:iam::<aws-account-number>:role/tf_automation
region = us-east-2

In ~/.aws/credentials:

[tf-user]
aws_access_key_id = AKI...
aws_secret_access_key = r3AB...
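To verify that the role profile works before touching the cluster, one option is:

# Confirm the tf-role profile can assume the tf_automation role
aws sts get-caller-identity --profile tf-role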
Find the name of the Kubernetes cluster and update the local kubeconfig with it:
export AWS_PROFILE=tf-role
aws eks update-kubeconfig --name ops-k8s
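A quick check that the kubeconfig import worked:

# Should list the worker nodes of the ops cluster
kubectl get nodes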
Step 5: Install Nginx

Install the Nginx Ingress controller (the helm repo add line registers the chart repository the install command pulls from):

kubectl create ns ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install -n ingress-nginx lb ingress-nginx/ingress-nginx
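To find the load balancer provisioned for the ingress controller (useful for the optional DNS step below):

# The EXTERNAL-IP column shows the load balancer DNS name
kubectl get svc -n ingress-nginx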
Step 6: Install Jenkins

Use the Helm chart to install Jenkins onto the Kubernetes cluster created above:
cd <devops-repo>/jenkins
kubectl create ns jenkins
helm repo add jenkins https://charts.jenkins.io
helm upgrade --install -n jenkins jenkins jenkins/jenkins --values values.yaml
Wait for Jenkins to start up. To view the Jenkins admin password:
kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
Set up a cloud-provisioned Jenkins node as defined in the Kubernetes plugin config in Jenkins.
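Until the optional DNS record from step 8 is in place, one way to reach the Jenkins UI is a port-forward; this assumes the chart's default service name jenkins listening on port 8080:

kubectl -n jenkins port-forward svc/jenkins 8080:8080
# Then open http://localhost:8080 and log in as admin with the password printed above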
Step 7: Install Ansible Tower (AWX)

Create the AWX DB by connecting to the RDS PostgreSQL instance created via Terraform. First create an SSH tunnel: look up the RDS DB DNS name and the public DNS of the AWX target EC2 instance, and substitute them into the command line template:
ssh -i <env-folder>/awx-target.pem -N -L 5432:ops-tools-db.<instance-id>.us-east-2.rds.amazonaws.com:5432 ubuntu@<awx-target-ec2>.us-east-2.compute.amazonaws.com -vv
Connect with DBeaver (or another PostgreSQL client) on localhost port 5432 and run the following SQL after replacing <pass> with an actual password (as defined under environments/<env>/org-vars.yaml):
create database awx;
create user awxuser with encrypted password '<pass>';
grant all privileges on database awx to awxuser;
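If you prefer a CLI over DBeaver, the same can be done with psql over the tunnel (the master user name comes from your org-vars.yaml and is shown here as a placeholder):

psql -h localhost -p 5432 -U <master-user> -d postgres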
Configure the Kustomize script awx-custom.yaml by replacing the DB settings in the awx-operator folder under the openidl-devops Git repository. Install AWX with the Kustomize command:
helm repo add awx-operator https://ansible.github.io/awx-operator/
cd awx-operator
kustomize build . | kubectl apply -f -
Watch for the script failing; if it does, run it again (a timing issue caused by the creation of the AWX RBAC resources).
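To watch the operator bring AWX up (the awx namespace is an assumption here; use whatever namespace awx-custom.yaml targets):

kubectl get pods -n awx -w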
Step 8: Update DNS Record (optional)

- Go to the AWS Account → Route53.
- Create a new Hosted Zone (e.g. d1.test.senofi.net).
- Under the new hosted zone, create a new record of type A with an Alias for the Kubernetes cluster (e.g. ops.d1.test.senofi.net) pointing to the Classic Load Balancer.

AWX and Jenkins should now be available via http://ops.d1.test.senofi.net/ and http://ops.d1.test.senofi.net/jenkins respectively.
Step 9: Terraform Cloud Workspaces

We need to maintain two workspaces: one for the Fabric Kubernetes cluster and one for the openIDL applications. To create the workspaces, use the tool located in senofi/openidl-devops:

- Go to openidl-devops/aws-infrastructure/environments/<env-folder>/terraform-cloud and run terragrunt plan. If everything looks ok, execute terragrunt apply. This should create two workspaces and a variable set in Terraform Cloud.
- Create a new KMS key (symmetric, encrypt/decrypt) in the AWS console. The name is not important, but use a meaningful name that associates it with this environment. Use it to populate the corresponding property in the next step.
- Go to openidl-devops/automation/terraform-cloud and update configuration.properties. Make sure that the varset name matches the variable set created above.
- Create the SSH keys:
ssh-keygen -t rsa -f app_eks_worker_nodes_ssh_key.pem
ssh-keygen -t rsa -f blk_eks_worker_nodes_ssh_key.pem
ssh-keygen -t rsa -f bastion_ssh_key.pem
Populate the variable set by executing the following commands in openidl-devops/automation/terraform-cloud:
pip install -r requirements.txt
python populate-variable-set.py
Copy the contents of the public keys and populate them in the Terraform Cloud UI under Variable Sets → <the newly created varset>.
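To print the public halves of the keys for pasting into the variable set:

cat app_eks_worker_nodes_ssh_key.pem.pub
cat blk_eks_worker_nodes_ssh_key.pem.pub
cat bastion_ssh_key.pem.pub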
Step 10: Configure Jenkins

Set the Jenkins node label 'openidl' in the Kubernetes Cloud by going to Manage Jenkins → Manage Nodes and Clouds → Configure Clouds. Make sure that under Pod Template details the labels field contains the value 'openidl'. Also, remove the prepopulated 'sleep' command if it is set on the pod template.

Create the Terraform Job:

- Terraform Token Secret: log in to Jenkins and go to Manage Jenkins → Manage Credentials → Stores scoped to Jenkins (Jenkins) → Global Credentials (unrestricted) → Add credentials. Choose Kind "Secret text", enter the Terraform Cloud token in the "secret" field, and give the secret a unique ID since it will be referenced in the pipeline code.
- Git Credentials: add a new credential for the Git repository.
- Terraform Job: go to Jenkins → New Item and use a name such as Terraform Job. Select the job type PIPELINE and proceed. Select Definition "Pipeline script from SCM", select SCM "Git", and key in the infrastructure code repository (openidl-gitops) URL. Select the Git credential created above, specify the relevant branch "refs/heads/<branch-name>", and set the script path to jenkins-jobs/jenkinsfile-tf.
Step 11: Run Terraform Job

- Run the Jenkins Terraform Job and open the console log for the job. Once the job asks for an input, accept it and choose the apply option.
- The job runs a second plan in the Kubernetes workspace in Terraform Cloud. When asked, accept and apply the changes.
- Go to the AWS Console and find EKS (Elastic Kubernetes Service). Choose the blk cluster and go to Add-Ons. Find the EBS plugin and add it to the list. The add-on ensures that volumes can be created in Kubernetes.
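One way to confirm the EBS CSI driver is running on the blk cluster (the label selector assumes the standard aws-ebs-csi-driver add-on labels):

# After pointing kubectl at the blk cluster
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver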