Prerequisites
AWS Account
Terraform Cloud Account
Preconfigured access in ~/.terraformrc. Get the token from https://app.terraform.io by going to Settings → Teams → Team API Token. Generate a new token and create the file ~/.terraformrc:
credentials "app.terraform.io" {
token = "iz5o8MNxgBBPwQ...."
}
Steps
1. Setup
Check out the repository senofi/openidl-devops.
Create a new folder under openidl-devops/aws-infrastructure/environments/ by copying the sample folder openidl-devops/aws-infrastructure/environments/sample-env, as shown in the sketch below.
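For example, the checkout and copy can be done as follows (a sketch; the HTTPS clone URL and the <env-folder> name are assumptions, use whatever access method and environment name you prefer):
# clone the DevOps repository and copy the sample environment
git clone https://github.com/senofi/openidl-devops.git
cd openidl-devops/aws-infrastructure/environments
cp -r sample-env <env-folder>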
2. Create IAM User & Role
Pull the AWS credentials from the AWS Console for the AWS account you have access to. The user needs IAM permissions to create roles and other users.
Go to openidl-devops/aws-infrastructure/environments/<env-folder> as copied in the previous step.
Configure openidl-devops/aws-infrastructure/environments/<env-folder>/org-vars.yaml: fill in the IAM AWS access and secret keys, and set the org ID and the environment ID (dev, test or prod).
Go to <env-folder>/iam and run terragrunt plan. After a review, apply the changes with terragrunt apply (see the command summary below).
The script creates:
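As a command-line summary (a sketch; run from your environment folder after org-vars.yaml is filled in):
cd <env-folder>/iam
terragrunt plan    # review the proposed IAM user and role changes
terragrunt apply   # confirm when prompted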
3. Create Ops Kubernetes Cluster
Register a new key pair manually in AWS by going to EC2 → Key pairs. Create a new key with the name awx-target.
Keep the private key in the environments folder or anywhere on the file system you prefer.
Go to the Terraform Cloud workspace created in the previous step and open the States tab. Open the top state in the list, find the outputs, and copy the access_key and secret_key values; they will be used in the next step.
Go to <env-folder>/k8s-cluster and run terragrunt plan. This first run is expected to fail, but it creates a new workspace in Terraform Cloud (e.g. devnet-d3-k8s-cluster).
Make sure the AWS variables are set in org-vars.yaml under the terraform: property:
aws_access_key = Terraform user's access key ID
aws_secret_key = Terraform user's secret access key
region = us-east-2 or any other region you prefer
aws_role_arn = Terraform role ARN
aws_external_id = terraform
Run terragrunt plan again. Review and, if things look OK, run terragrunt apply. Acknowledge the run with yes at the prompt (see the command summary below).
The script creates:
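Summarized as commands, the flow in this step looks roughly like this (a sketch; the workspace and variable values depend on your environment folder and org/environment IDs):
cd <env-folder>/k8s-cluster
terragrunt plan    # expected to fail on the first run, but it creates the Terraform Cloud workspace
# fill in the terraform: AWS variables in org-vars.yaml with the copied access_key/secret_key, then:
terragrunt plan
terragrunt apply   # acknowledge the run with 'yes'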
4. Import the Kubernetes Cluster connection config
Make sure you have an AWS profile set in your ~/.aws/config and ~/.aws/credentials.
~/.aws/config:
[profile tf-user]
region = us-east-2
external_id = terraform
[profile tf-role]
external_id = terraform
source_profile = tf-user
role_arn = arn:aws:iam::<aws-account-number>:role/tf_automation
region = us-east-2
~/.aws/credentials:
[tf-user]
aws_access_key_id = AKI...
aws_secret_access_key = r3AB...
Find the name of the Kubernetes cluster and update the local kubeconfig with it:
export AWS_PROFILE=tf-role
aws eks update-kubeconfig --name ops-k8s
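A quick way to confirm the kubeconfig update worked is to list the cluster nodes (assuming kubectl is installed locally):
kubectl get nodes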
5. Install Nginx
Install the Nginx Ingress controller:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx   # if the repo is not already added locally
kubectl create ns ingress-nginx
helm install -n ingress-nginx lb ingress-nginx/ingress-nginx
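Before moving on, it may help to confirm the controller came up and was assigned an external address by AWS (the EXTERNAL-IP column should show an ELB hostname after a minute or two):
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx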
6. Install Jenkins
Use the Helm chart to install Jenkins onto the Kubernetes cluster created above:
cd <devops-repo>/jenkins
kubectl create ns jenkins
helm repo add jenkins https://charts.jenkins.io
helm upgrade --install -n jenkins jenkins jenkins/jenkins --values values.yaml
Wait for Jenkins to start up. To view the Jenkins admin password:
kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
Set up a cloud-provisioned Jenkins node as defined in the Kubernetes plugin configuration in Jenkins.
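If the ingress is not in place yet, one quick way to reach the Jenkins UI (for example to configure that node) is a port-forward; this assumes the chart's default service name and port, which values.yaml may override:
kubectl -n jenkins port-forward svc/jenkins 8080:8080
# then open http://localhost:8080 and log in with the admin password printed above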
7. Install Ansible Tower (AWX)
Create the AWX DB by connecting to the RDS PostgreSQL instance created via Terraform. Create an SSH tunnel: look up the RDS DB DNS name and the public DNS of the EC2 instance that is the AWX target, and replace them in the command line template:
ssh -i <env-folder>/awx-target.pem -N -L 5432:ops-tools-db.<instance-id>.us-east-2.rds.amazonaws.com:5432 ubuntu@<awx-target-ec2>.us-east-2.compute.amazonaws.com -vv
Connect with DBeaver (or another PostgreSQL client) on localhost port 5432 and run the following SQL after replacing <pass> with an actual password (as defined under environments/<env>/org-vars.yaml):
create database awx;
create user awxuser with encrypted password '<pass>';
grant all privileges on database awx to awxuser;
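The same SQL can also be run through the tunnel with a command-line client instead of DBeaver (a sketch; it assumes psql is installed locally, and the master username placeholder is whatever was configured for the RDS instance):
psql -h localhost -p 5432 -U <rds-master-user> -d postgres
# paste the create database / create user / grant statements at the psql prompt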
Configure the Kustomize script awx-custom.yaml by replacing the DB settings; the file is in the awx-operator folder of the openidl-devops Git repository.
Install AWX with the Kustomize command:
helm repo add awx-operator https://ansible.github.io/awx-operator/
cd awx-operator
kustomize build . | kubectl apply -f -
If the script fails, run it again; this is a timing issue caused by the creation of the AWX RBAC resources.
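To watch the operator bring AWX up (and to see whether a re-run is needed), something like the following can help; the awx namespace is an assumption, adjust it to whatever namespace awx-custom.yaml targets:
kubectl get pods -n awx --watch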
8. Update DNS record (optional)
Go to the AWS account → Route53 and create a new Hosted Zone (e.g. d1.test.senofi.net). Under the new hosted zone, create a new record of type A with an Alias for the Kubernetes cluster (e.g. ops.d1.test.senofi.net) pointing to the Classic Load Balancer (see the lookup sketch below).
Now Jenkins and AWX should be available via http://ops.d1.test.senofi.net/ and http://ops.d1.test.senofi.net/jenkins.
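To find which Classic Load Balancer hostname the Alias record should point to, one option is to read it off the ingress controller's Service; this assumes the Helm release name lb from step 5, which by default yields a Service named lb-ingress-nginx-controller:
kubectl get svc -n ingress-nginx lb-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'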