Completed DynamoDB + DAX Benchmarker with a nice TUI to boot
@@ -0,0 +1,322 @@

# Benchmarking Ansible Automation

This folder houses all the [Ansible](https://www.ansible.com/) roles used to automate the configuration of your local
environment and to deploy the necessary DynamoDB and DAX components to AWS. AWS deployments leverage
[AWS CDK](https://aws.amazon.com/cdk/) to automate the provisioning of AWS resources. For more information,
navigate to the [CDK directory](../cdk/README.md).

To see how to run the different plays and their corresponding commands without digging into how it all works together,
skip down to the [Plays](#plays) section below.

Note that if no `ssh_key_name` is provided, the default value is `$USER-dax-pair`.
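If you want to confirm which key name a run will use, the default can be reproduced in the shell. This is a minimal sketch; `SSH_KEY_NAME` here is a hypothetical override variable for illustration only, not one read by the playbooks:

```shell
# Default key-pair name used when no ssh_key_name is supplied.
# SSH_KEY_NAME is a hypothetical override for illustration only.
KEY_NAME="${SSH_KEY_NAME:-${USER}-dax-pair}"
echo "${KEY_NAME}"
```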

## Prerequisites

* You must be logged into the AWS CLI prior to running the CDK. Ensure you're logged into your target AWS account by running
  `aws sts get-caller-identity`.
* Install pip (assuming python3 is already installed): `sudo apt-get install python3-pip`
* Install the most recent versions of Ansible and jmespath from pip: `pip3 install --user ansible jmespath`
* Export the local bin path: `export PATH=~/.local/bin:$PATH`
* Install curl: `sudo apt-get install curl`
* Install the required Ansible dependencies using Ansible Galaxy: `ansible-galaxy install -r requirements.yml`

## Initializing the Stack

To initialize the stack (including the local Elastic Stack), run the `deploy_benchmarker.yml` playbook with the `init` tag:

```shell
ansible-playbook -i inventories/local \
  --tags init \
  --ask-become-pass \
  deploy_benchmarker.yml
```

## Deploying the Stack

To deploy the entire benchmarking stack all at once, local and AWS, use the following command:

```shell
ansible-playbook -i inventories/local \
  -e vpc_id={{ vpc_id_to_deploy_into }} \
  deploy_benchmarker.yml
```

The same prerequisites apply to the CDK, with the necessary environment or CDK parameters as defined in the
[CDK Parameters](../cdk/README.md#cdk-arguments) section of the CDK README. Ansible will only resolve the following variables
for you; all other variables must be supplied by the user at runtime:

* `localIp`
* `awsAccount`

## Running the benchmarkers

To run the benchmarkers, run the following command:

```shell
ansible-playbook -i inventories/local \
  -e dax_endpoint={{ the_dax_endpoint_uri }} \
  run_benchmarkers.yml
```

### Ansible Command Breakdown

Let's analyze how an Ansible command is formed:

```shell
ansible-playbook -i inventories/local \
  -e vpc_id={{ vpc_id_to_deploy_into }} \
  --ask-become-pass \
  deploy_benchmarker.yml
```

`ansible-playbook` is the program that runs our playbook, `deploy_benchmarker.yml`. [Playbooks](https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html)
are the main "blueprints" of automation tasks that Ansible uses.

`-i inventories/local` tells Ansible that we want to use the hosts and variables associated
with the `local` environment. So later in the playbook and
[roles](https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html), when we're
using variables and hosts, we're pulling the corresponding values for this environment. More
information about inventories in Ansible can be found
[here](https://docs.ansible.com/ansible/2.3/intro_inventory.html). Inventories are a good place
to start learning about Ansible if you're confused by what's happening in this module.

[This file](./inventories/local/host_vars/localhost.yml) is where you'd put variables that should persist between runs of this application.
By default, they are only provided for you if you follow the steps in the main repository script.
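
For illustration, a populated `localhost.yml` might look like the following. All values here are hypothetical placeholders; the keys mirror the host vars shipped in this repo:

```yaml
# Hypothetical values for illustration only
user_name: alice
ssh_key_name: alice-dax-pair
profile_id: my-aws-profile
aws_region: us-east-1
vpc_id: vpc-1234567890
dax_endpoint: daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com
```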

`-e vpc_id={{ vpc_id_to_deploy_into }}` sets an extra variable for the playbook to use (fun fact: `-e` is an alias
for `--extra-vars`). This variable is not defined by default in your [local host vars](./inventories/local/host_vars/localhost.yml) because
we don't know which VPC you want to deploy the stack into. If you're running this using the main TUI script in the root
of this repo, then this is handled graphically for you. The value is saved on the first run of the CDK deployment, so you do not have to specify
the `vpc_id` on subsequent runs. Otherwise, if you wish to change the VPC ID for any reason (including prior to an initial run) and
you wish to run this Ansible playbook manually, you can add it to your host vars file.

`--ask-become-pass` tells Ansible to prompt you for your sudo password so it can run installs and other configuration tasks on your behalf.

`deploy_benchmarker.yml` is the name of the playbook that we want Ansible to run.

## Using Tags to Control What is Deployed

Each part of the `deploy_benchmarker.yml` playbook has
[tags](https://docs.ansible.com/ansible/latest/user_guide/playbooks_tags.html) associated with it.
These tags allow us to tell Ansible which part(s) of the playbook we want to run. In other words, tags
allow us to tell Ansible which parts of the overall benchmarking deployment pipeline we want to run.

The `deploy_benchmarker.yml` playbook (and a couple of its roles) has the following tags in it:

* `init`
* `init_elk`
* `stop_elk`
* `prerequisites`
* `elk`
* `cdk`
* `run`
* `deploy`
* `destroy`
* `destroy_key_pair`
* `upload`
* `dynamodb`
* `dax`
* `crud`
* `read-only`

To view all these tags and their associated plays from the `ansible` CLI, run

```shell
ansible-playbook deploy_benchmarker.yml --list-tags
```

Using these tags, we can specify that we only want to run specific parts of the benchmarking deployment pipeline that's
defined in the `deploy_benchmarker.yml` playbook.

For example, if we only wanted to start the ELK (Elasticsearch-Logstash-Kibana) stack, we would run this:

```shell
ansible-playbook -i inventories/local --tags elk deploy_benchmarker.yml
```

Likewise, if we wanted to stop the ELK stack, we'd run this:

```shell
ansible-playbook -i inventories/local --tags stop_elk deploy_benchmarker.yml
```

Note the `--tags` argument. This tells Ansible to only run tasks or roles that have the
`elk` or `stop_elk` tag on them.

We can also specify multiple arguments for `--tags` if we wish; for example, if we wanted to spin up the local
Elastic Stack (synonymous with ELK stack) and deploy the CDK, we'd run the following:

```shell
ansible-playbook -i inventories/local -e vpc_id=vpc-1234567890 --tags 'elk,cdk' deploy_benchmarker.yml
```

## Plays

The following plays can be run from these playbooks using tags, with the following commands:

#### Initialize Your Local Environment and Elastic Stack

A sudo password is required to install applications, so we tell Ansible to prompt us for it at the start:

```shell
ansible-playbook -i inventories/local --tags init deploy_benchmarker.yml --ask-become-pass
```

#### Deploy CDK and Run the Benchmarkers on the Bastion Host

This assumes you already know the VPC ID to deploy into, have already created an SSH key pair, and have the key pair
locally in your `~/.ssh` directory with a `.pem` extension.

If you did not do this manually, it was done for you automatically and the created pair is under `~/.ssh/$USER-dax-pair.pem`.

You can either specify the `vpc_id` argument directly via `-e` in the command, or you can hard-code
it in your [host_vars](./inventories/local/host_vars/localhost.yml). You must also already be logged into the AWS CLI for
your target environment, or specify a `profile_id` either in your `host_vars` or via `-e`, along with an `aws_region`. If you're not
already logged into AWS, your `profile_id` must be configured to be picked up automatically from your `~/.aws/config` or
`~/.aws/credentials` files, with no additional login steps, in order to deploy to AWS.

```shell
ansible-playbook -i inventories/local -e vpc_id=vpc-1234567890 --tags deploy deploy_benchmarker.yml
```

#### Shut Down Your Local Elastic Stack

```shell
ansible-playbook -i inventories/local --tags stop_elk deploy_benchmarker.yml
```

#### Wipe Away Everything

Once more, this assumes you either have the DAX
endpoint and the VPC ID hardcoded in your [host vars](./inventories/local/host_vars/localhost.yml), or you provide them via `-e`.

If you've already run a CDK deploy via Ansible, then you should not need to specify anything.

**Note:** For safety purposes, this will _not_ wipe away the `ssh_key_name` key from your `~/.ssh` directory. If you specified
a pre-existing key to use for this deployment, it will not be touched. If you did not specify a key name, the automatically
generated key `$USER-dax-pair` will be left in your `~/.ssh` directory. If you wish to delete this pair from your local machine
and remove it from AWS, also specify the `destroy_key_pair` tag in the command below.

You can either specify the `vpc_id` argument directly via `-e` in the command, or you can hard-code
it in your [host_vars](./inventories/local/host_vars/localhost.yml). You must also already be logged into the AWS CLI for
your target environment, or specify a `profile_id` either in your `host_vars` or via `-e`, along with an `aws_region`. If you're not
already logged into AWS, your `profile_id` must be configured to be picked up automatically from your `~/.aws/config` or
`~/.aws/credentials` files, with no additional login steps, in order to deploy to AWS.

**Destroy Everything, But Leave the ssh_key_name Key-Pair Alone:**

```shell
ansible-playbook -i inventories/local -e vpc_id=vpc-1234567890 --tags destroy deploy_benchmarker.yml
```

**Destroy Everything, Including the ssh_key_name Key-Pair:**

```shell
ansible-playbook -i inventories/local -e vpc_id=vpc-1234567890 --tags 'destroy,destroy_key_pair' deploy_benchmarker.yml
```

### Additional Plays You Can Run

#### Only Install Prerequisites for Local Machine

A sudo password is required to install applications, so we tell Ansible to prompt us for it at the start:

```shell
ansible-playbook -i inventories/local --tags prerequisites deploy_benchmarker.yml --ask-become-pass
```

#### Start Your Local Elastic Stack

```shell
ansible-playbook -i inventories/local --tags elk deploy_benchmarker.yml
```

#### Just Deploy the CDK

This assumes you already know the VPC ID to deploy into, have already created an SSH key pair, and have the key pair
locally in your `~/.ssh` directory with a `.pem` extension. If you did not do this manually, it was done for you automatically
and the created pair is under `~/.ssh/$USER-dax-pair.pem`. You can either specify the `vpc_id`
argument directly via `-e` in the command, or you can hard-code it in your [host_vars](./inventories/local/host_vars/localhost.yml).

If you've already run a CDK deploy via Ansible, then you should not need to specify anything.

You must also already be logged into the AWS CLI for your target environment, or specify a `profile_id` either in your
`host_vars` or via `-e`, along with an `aws_region`. If you're not already logged into AWS, your `profile_id` must be
configured to be picked up automatically from your `~/.aws/config` or `~/.aws/credentials` files, with no additional
login steps, in order to deploy to AWS.

```shell
ansible-playbook -i inventories/local --tags cdk deploy_benchmarker.yml
```

#### Only Upload the Benchmarkers to the Bastion Host

```shell
ansible-playbook -i inventories/local --tags upload deploy_benchmarker.yml
```

#### Run All Benchmarkers and Scenarios

This assumes the CDK is already deployed and an EC2 instance already exists. This also assumes you either have the DAX
endpoint and the VPC ID hardcoded in your [host vars](./inventories/local/host_vars/localhost.yml), or you provide them via `-e`.
If you've already run a CDK deploy via Ansible, then you should not need to specify anything.

Additionally, you must already be logged into the AWS CLI for
your target environment, or specify a `profile_id` either in your `host_vars` or via `-e`, along with an `aws_region`. If you're not
already logged into AWS, your `profile_id` must be configured to be picked up automatically from your `~/.aws/config` or
`~/.aws/credentials` files, with no additional login steps, in order to deploy to AWS:

```shell
ansible-playbook -i inventories/local --tags run run_benchmarkers.yml
```

#### Only Run the DynamoDB/DAX Benchmarker

This assumes the CDK is already deployed and an EC2 instance already exists. This also assumes you either have the DAX
endpoint and the VPC ID hardcoded in your [host vars](./inventories/local/host_vars/localhost.yml), or you provide them via `-e`.
If you've already run a CDK deploy via Ansible, then you should not need to specify anything.

Additionally, you must already be logged into the AWS CLI for
your target environment, or specify a `profile_id` either in your `host_vars` or via `-e`, along with an `aws_region`. If you're not
already logged into AWS, your `profile_id` must be configured to be picked up automatically from your `~/.aws/config` or
`~/.aws/credentials` files, with no additional login steps, in order to deploy to AWS:

```shell
ansible-playbook -i inventories/local --tags dynamodb deploy_benchmarker.yml
```

or

```shell
ansible-playbook -i inventories/local --tags dax deploy_benchmarker.yml
```

Note the difference in tags: `dynamodb` and `dax`.

#### Only Run the Benchmarkers in CRUD/READONLY Mode

This assumes the CDK is already deployed and an EC2 instance already exists. This also assumes you either have the DAX
endpoint and the VPC ID hardcoded in your [host vars](./inventories/local/host_vars/localhost.yml), or you provide them via `-e`.
If you've already run a CDK deploy via Ansible, then you should not need to specify anything.

Additionally, you must already be logged into the AWS CLI for
your target environment, or specify a `profile_id` either in your `host_vars` or via `-e`, along with an `aws_region`. If you're not
already logged into AWS, your `profile_id` must be configured to be picked up automatically from your `~/.aws/config` or
`~/.aws/credentials` files, with no additional login steps, in order to deploy to AWS:

**CRUD:**

```shell
ansible-playbook -i inventories/local --tags crud deploy_benchmarker.yml
```

**read-only:**

```shell
ansible-playbook -i inventories/local --tags read-only deploy_benchmarker.yml
```

## Supported Variables

The following variables can be specified via the `-e` argument when running the `deploy_benchmarker.yml`
playbook:

| Variable Name     | Description                                                                                                                                    | Required? |
|-------------------|------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
| `profile_id`      | The name of the AWS CLI profile you wish to deploy with; <br>Defaults to using the `AWS_PROFILE` environment variable                          |           |
| `vpc_id`          | The ID of the VPC in the AWS account you're deploying to where you want the CDK components created; <br>Only required on the first run         | *         |
| `local_ip`        | The public IP of your local machine; <br>Defaults to the response from `curl -s -L checkip.amazonaws.com`                                      |           |
| `ssh_key_name`    | The name of the SSH key-pair that will be used when creating the EC2 instance to allow you SSH access to it; <br>Defaults to `$USER-dax-pair`  |           |
| `aws_account`     | The account ID of the AWS account you're deploying into; <br>Defaults to the result of `aws sts get-caller-identity \| jq -r .Account`         |           |
| `base_table_name` | The base name to use when creating the DynamoDB table; <br>Defaults to `high-velocity-table`                                                   |           |
| `cdk_action`      | The action to perform when deploying the CDK; <br>Defaults to `deploy`                                                                         |           |
| `duration`        | How long to run each simulation for; <br>Defaults to 1800 seconds                                                                              |           |
| `benchmarker`     | Which benchmarker to run (i.e. `dynamodb` or `dax`)                                                                                            |           |
| `dax_endpoint`    | The DAX URI used to reach the DAX cluster; <br>Only required when running the benchmarkers without an initial CDK deploy                       | *         |
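
As an illustration, several of these variables can be combined into a single invocation. This sketch assembles the command as a string so the pieces are easy to see; every value is a hypothetical placeholder:

```shell
# Hypothetical example: a deploy command that overrides several of the
# supported variables from the table above (placeholder values throughout).
CMD="ansible-playbook -i inventories/local"
CMD="$CMD -e profile_id=my-profile -e aws_region=us-east-1"
CMD="$CMD -e vpc_id=vpc-1234567890 -e duration=600"
CMD="$CMD --tags deploy deploy_benchmarker.yml"
echo "$CMD"
```

Running the echoed command directly (instead of building it as a string) works just as well; the string form is only for illustration.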

## Run Order

When first running from scratch, you'll want to run with the `init` tag first to initialize the Elastic Stack and install the prerequisites, then run again without any tags to actually
deploy everything and run the benchmarkers. If you only want to run the benchmarkers, run the `run_benchmarkers.yml` playbook, or specify the `run` tag.

## Troubleshooting

You can generally get more information about your problem by adding `-vvv` to the end of your
`ansible-playbook` command. The more `v`'s you add, the more verbose the output and the more information
you will get. For example:

```shell
ansible-playbook -i inventories/local -e cdk_action=destroy --tags 'elk,cdk' deploy_benchmarker.yml -vvv
```
@@ -0,0 +1,41 @@
- name: Deploy the benchmarking components
  connection: local
  hosts: local
  gather_facts: yes
  roles:
    - { role: install_prerequisites, tags: [ never, prerequisites, init ] }
    - { role: configure_elastic_stack, tags: elk }
    - { role: deploy_cdk, tags: [ cdk, deploy ] }
    - { role: destroy, tags: [ never, destroy ], cdk_action: destroy }
  tasks:
    - name: Populate the DynamoDB table with random data
      shell:
        chdir: ../scripts
        cmd: ./randomly-generate-high-velocity-data.sh -i 5000
      tags: deploy

    - name: Build the benchmarkers using the Makefile
      shell:
        chdir: ../
        cmd: make build
      tags: deploy

- name: Upload the benchmarkers to the bastion host
  hosts: bastion
  gather_facts: yes
  vars:
    ssh_key_name: "{{ hostvars['localhost']['ssh_key_name'] }}"
    ansible_ssh_private_key_file: "~/.ssh/{{ ssh_key_name }}.pem"
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
  remote_user: ec2-user
  tags: [ upload, deploy ]
  tasks:
    - copy:
        src: "../{{ item }}"
        dest: .
        mode: 0777
      loop:
        - dynamodb-benchmarker
        - dax-benchmarker

- import_playbook: run_benchmarkers.yml
@@ -0,0 +1,8 @@
user_name: "{{ lookup('env', 'USER') }}"
ssh_key_name: "{{ lookup('env', 'USER') }}-dax-pair"
profile_id: "{{ lookup('env', 'AWS_PROFILE') }}"
aws_region: "{{ lookup('env', 'AWS_REGION') }}"
stack_name: "{{ user_name }}-dax-benchmark-stack"
vpc_id:
base_table_name:
dax_endpoint:
@@ -0,0 +1,3 @@
local:
  hosts:
    localhost:
@@ -0,0 +1,4 @@
---
collections:
  - name: community.general
  - name: amazon.aws
File diff suppressed because one or more lines are too long
@@ -0,0 +1,32 @@
- name: Clone the docker-elk repo
  git:
    repo: https://github.com/deviantony/docker-elk.git
    dest: ../../docker-elk
  ignore_errors: yes

- name: Build the docker-elk stack just in case a pre-existing version of Elasticsearch needs its nodes upgraded
  shell:
    chdir: ../../docker-elk
    cmd: docker compose build

- name: Start the docker-elk setup container
  shell:
    chdir: ../../docker-elk
    cmd: docker-compose up setup

- name: Start the docker-elk stack
  shell:
    chdir: ../../docker-elk
    cmd: docker compose up -d

- name: Wait 20 seconds for the ELK stack to start
  pause:
    seconds: 20

- name: Import the benchmarking dashboards into Kibana
  shell:
    cmd: >
      curl -X POST http://localhost:5601/api/saved_objects/_import?overwrite=true
      -H 'kbn-xsrf: true'
      -u 'elastic:changeme'
      --form file=@roles/configure_elastic_stack/files/benchmarker-dashboards.ndjson
@@ -0,0 +1,8 @@
- { import_tasks: init_elk_stack.yml, tags: [ never, init, init_elk ] }
- { import_tasks: stop_elk_stack.yml, tags: [ never, stop_elk ] }

- name: Start the docker-elk stack
  shell:
    chdir: ../../docker-elk
    cmd: docker compose up -d
  tags: deploy
@@ -0,0 +1,4 @@
- name: Stop the docker-elk stack
  shell:
    chdir: ../../docker-elk
    cmd: docker compose down
@@ -0,0 +1,119 @@
- name: Check if a key-pair following the specified format already exists
  stat:
    path: "{{ ansible_env.HOME }}/.ssh/{{ ssh_key_name }}.pem"
  register: key_pair
  changed_when: no
  when: "'destroy' not in ansible_run_tags"

- block:
    - name: Create a new key-pair
      ec2_key:
        name: "{{ ssh_key_name }}"
      register: aws_key_pair

    - name: Create the new pem file
      file:
        path: "{{ ansible_env.HOME }}/.ssh/{{ ssh_key_name }}.pem"
        state: touch
        mode: '0400'

    - name: Add the generated key-pair to the new file
      blockinfile:
        path: "{{ ansible_env.HOME }}/.ssh/{{ ssh_key_name }}.pem"
        block: "{{ aws_key_pair.key.private_key }}"

  when:
    - "'destroy' not in ansible_run_tags"
    - not key_pair.stat.exists

- name: Fetch the current system's public IP
  shell:
    cmd: curl -s -L checkip.amazonaws.com
  register: public_ip_resp

- name: Fetch the current AWS account ID
  shell:
    cmd: aws sts get-caller-identity | jq -r .Account
  register: aws_account_resp

- name: Install CDK dependencies
  npm:
    ci: yes
    path: ../cdk

- name: Bootstrapping the AWS environment
  shell:
    chdir: ../cdk
    cmd: >
      npm run build && yes | npm run cdk bootstrap --
      --no-color --require-approval never
      --profile {{ profile_id | default("personal") }}
      -c vpcId={{ vpc_id }}
      -c localIp={{ public_ip_resp.stdout }}
      -c sshKeyName={{ ssh_key_name }}
      -c awsAccount={{ aws_account_resp.stdout }}
      -c baseTableName={{ base_table_name | default('') }}

- name: Deploying Benchmarking CDK
  shell:
    chdir: ../cdk
    cmd: >
      npm run build && yes | npm run cdk {{ cdk_action | default("deploy") }} --
      --no-color --require-approval never
      --profile {{ profile_id | default("personal") }}
      -c vpcId={{ vpc_id }}
      -c localIp={{ public_ip_resp.stdout }}
      -c sshKeyName={{ ssh_key_name }}
      -c awsAccount={{ aws_account_resp.stdout }}
      -c baseTableName={{ base_table_name | default('') }}
  register: cdk_response

- name: Benchmarking CDK deployment summary
  debug:
    msg: "{{ cdk_response.stderr_lines }}"

- block:
    - name: Fetch the benchmark stack outputs
      cloudformation_info:
        stack_name: "{{ stack_name }}"
      register: benchmark_stack

    - name: Extracting the bastion host IP
      set_fact:
        bastion_host_ip: "{{ benchmark_stack.cloudformation[stack_name].stack_outputs['InstancePublicIp'] }}"

    - name: Extracting DAX endpoint
      set_fact:
        dax_endpoint: "{{ benchmark_stack.cloudformation[stack_name].stack_outputs['DaxEndpoint'] }}"

    - name: Setting the dax_endpoint variable in the host vars if it doesn't exist already
      lineinfile:
        path: inventories/local/host_vars/localhost.yml
        line: "dax_endpoint: {{ dax_endpoint }}"
        regexp: '^dax_endpoint:'

    - name: Setting the vpc_id variable in the host vars if it doesn't exist already
      lineinfile:
        path: inventories/local/host_vars/localhost.yml
        line: "vpc_id: {{ vpc_id }}"
        regexp: '^vpc_id:'

    - block:
        - name: Setting the bastion host IP if it doesn't exist in the inventory
          lineinfile:
            path: inventories/local/hosts.yml
            line: |
              bastion:
                hosts:
                  {{ bastion_host_ip }}:
            regexp: 'bastion:\n\s*hosts:\n\s*(?:\d{1,3}\.){3}\d{1,3}:'
            insertafter: EOF

        - name: Add the bastion host to the bastion group
          add_host:
            name: "{{ bastion_host_ip }}"
            groups: bastion

      when:
        - "'bastion' not in groups"
        - "'bastion' not in group_names"

  when: "'destroy' not in ansible_run_tags"
@@ -0,0 +1,54 @@
- name: Wipe away local Elastic Stack
  shell:
    chdir: ../../docker-elk
    cmd: docker compose down -v
  ignore_errors: yes

- name: Wipe away the ELK directory
  file:
    path: ../../docker-elk
    state: absent
  ignore_errors: yes

- name: Run CDK Destroy
  import_role:
    name: deploy_cdk

- name: Delete the key-pair from AWS
  ec2_key:
    name: "{{ ssh_key_name }}"
    state: absent
  ignore_errors: yes
  tags: [ never, destroy_key_pair ]

- name: Delete the key pair from your local machine
  file:
    path: "{{ ansible_env.HOME }}/.ssh/{{ ssh_key_name }}.pem"
    state: absent
  ignore_errors: yes
  tags: [ never, destroy_key_pair ]

- name: Remove the bastion host from the bastion host group
  replace:
    path: inventories/local/hosts.yml
    replace: ''
    regexp: '^bastion:\n\s*hosts:\n\s*(?:\d{1,3}\.){3}\d{1,3}:'

- name: Reset the dax_endpoint variable in the host vars
  lineinfile:
    path: inventories/local/host_vars/localhost.yml
    line: 'dax_endpoint:'
    regexp: '^dax_endpoint:'

- name: Reset the vpc_id variable in the host vars
  lineinfile:
    path: inventories/local/host_vars/localhost.yml
    line: 'vpc_id:'
    regexp: '^vpc_id:'

- name: Clean the repository using the Makefile
  shell:
    chdir: ../
    cmd: make clean
@@ -0,0 +1,22 @@
- name: Add Docker's official GPG key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    keyring: /etc/apt/keyrings/docker.gpg

- name: Set up docker APT repository
  apt_repository:
    repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"

- name: Install the required APT dependencies
  apt:
    update_cache: yes
    name:
      - docker-ce
      - docker-ce-cli
      - docker-compose
      - containerd.io
      - docker-compose-plugin
      - jq
      - unzip
      - curl
      - git
@@ -0,0 +1,26 @@
- name: Check if AWS CLI is installed
  shell:
    cmd: hash aws 2> /dev/null
  ignore_errors: yes
  changed_when: no
  register: awscli_installation_status

- block:
    - name: Download the AWS CLI from AWS
      unarchive:
        src: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
        dest: "{{ ansible_env.HOME }}/Downloads"
        group: "{{ user_name }}"
        owner: "{{ user_name }}"
        remote_src: yes

    - name: Install the AWS CLI
      shell:
        cmd: "{{ ansible_env.HOME }}/Downloads/aws/install"

    - name: Cleanup downloaded AWS installation files
      file:
        path: "{{ ansible_env.HOME }}/Downloads/aws/"
        state: absent

  when: awscli_installation_status.rc | int != 0
@@ -0,0 +1,15 @@
- name: Check if Go is installed
  shell:
    cmd: command -v go 2> /dev/null
  ignore_errors: yes
  changed_when: no
  register: go_installation_status

- name: Install Go 1.20
  unarchive:
    src: https://go.dev/dl/go1.20.5.linux-amd64.tar.gz
    dest: /usr/local
    creates: /usr/local/go
    remote_src: yes
  become: yes
  when: go_installation_status.rc | int != 0
@@ -0,0 +1,25 @@
- { import_tasks: aws_cli.yml, become: yes }
- import_tasks: rust.yml
- import_tasks: go.yml
- import_tasks: node.yml
- { import_tasks: apt.yml, become: yes }

- name: Install CDK
  npm:
    name: "{{ item }}"
    global: yes
  loop:
    - aws-cdk
    - typescript

- name: Check if golangci-lint is installed
  shell:
    cmd: command -v golangci-lint 2> /dev/null
  ignore_errors: yes
  changed_when: no
  register: golangci_lint_installation_status

- name: Install golangci-lint
  shell:
    cmd: curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b /usr/local/bin v1.53.3
  when: golangci_lint_installation_status.rc | int != 0
@@ -0,0 +1,34 @@
- name: Check if node is installed
  shell:
    cmd: hash node 2> /dev/null
  ignore_errors: yes
  changed_when: no
  register: node_installation_status

- block:
    - name: Install nvm
      shell: >
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
      args:
        creates: "{{ ansible_env.HOME }}/.nvm/nvm.sh"

    - name: Install Node.JS
      shell:
        cmd: |
          export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
          [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
          nvm install node

    - name: Add NVM exports to bashrc
      lineinfile:
        path: "{{ ansible_env.HOME }}/.bashrc"
        line: 'export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"'
        regexp: '^export NVM_DIR=.+'

    - name: Add NVM script to bashrc
      lineinfile:
        path: "{{ ansible_env.HOME }}/.bashrc"
        line: '[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"'
        regexp: '\[ -s |\$NVM_DIR/nvm\.sh \].+'

  when: node_installation_status.rc | int != 0
@@ -0,0 +1,11 @@
- name: Check if rustup is installed
  shell:
    cmd: command -v rustup 2> /dev/null
  ignore_errors: yes
  changed_when: no
  register: rustup_installation_status

- name: Install Rust via Rustup
  shell: >
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
  when: rustup_installation_status.rc | int != 0
@@ -0,0 +1,110 @@
- name: Get AWS Credentials
  connection: local
  hosts: local
  gather_facts: yes
  tags: [ run, deploy ]
  tasks:
    - name: Ensure the required variables are defined
      assert:
        that:
          - aws_region is defined
          - profile_id is defined
          - dax_endpoint is defined

    - name: Get the environment variables to set on the bastion host for the current AWS profile
      shell:
        cmd: aws configure export-credentials
      register: aws_creds

    - name: Register the credentials as a fact for the benchmarker play to read
      set_fact:
        aws_credentials: "{{ aws_creds.stdout | from_json }}"
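`aws configure export-credentials` emits JSON in the process-credential shape, and the `json_query` filters below pick individual keys out of it. A hedged sketch with a hard-coded sample (the values are fake placeholders), using `sed` so it runs without the AWS CLI or `jq` installed:

```shell
# Sample of the JSON shape export-credentials prints (placeholder values),
# and a field extraction equivalent to json_query('AccessKeyId').
creds='{"Version": 1, "AccessKeyId": "AKIAEXAMPLE", "SecretAccessKey": "wJalrEXAMPLE", "SessionToken": "FwoGEXAMPLE", "Expiration": "2024-01-01T00:00:00Z"}'
printf '%s\n' "$creds" | sed -n 's/.*"AccessKeyId": *"\([^"]*\)".*/\1/p'   # prints AKIAEXAMPLE
```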

- name: Run the benchmarkers
  hosts: bastion
  gather_facts: no
  vars:
    ssh_key_name: "{{ hostvars['localhost']['ssh_key_name'] }}"
    ansible_ssh_private_key_file: "~/.ssh/{{ ssh_key_name }}.pem"
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no -R 9200:localhost:9200'
  tags: [ run, deploy ]
  remote_user: ec2-user
  tasks:
    - name: Run the DynamoDB benchmarker in CRUD mode
      shell:
        cmd: >
          export AWS_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('AccessKeyId') }}";
          export AWS_SECRET_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SecretAccessKey') }}";
          export AWS_SESSION_TOKEN="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SessionToken') }}";
          export AWS_CREDENTIAL_EXPIRATION="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('Expiration') }}";
          export AWS_REGION="{{ hostvars['localhost']['aws_region'] }}";
          ./dynamodb-benchmarker -d "{{ duration | default(1800) | int }}" -t "{{ hostvars['localhost']['user_name'] }}"-high-velocity-table
        executable: /bin/bash
      tags:
        - dynamodb
        - crud

    - name: Run the DynamoDB benchmarker in read-only mode
      shell:
        cmd: >
          export AWS_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('AccessKeyId') }}";
          export AWS_SECRET_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SecretAccessKey') }}";
          export AWS_SESSION_TOKEN="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SessionToken') }}";
          export AWS_CREDENTIAL_EXPIRATION="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('Expiration') }}";
          export AWS_REGION="{{ hostvars['localhost']['aws_region'] }}";
          ./dynamodb-benchmarker -d "{{ duration | default(1800) | int }}" -t "{{ hostvars['localhost']['user_name'] }}"-high-velocity-table -r
        executable: /bin/bash
      tags:
        - dynamodb
        - read-only

    - name: Run the DAX benchmarker in CRUD mode
      shell:
        cmd: >
          export AWS_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('AccessKeyId') }}";
          export AWS_SECRET_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SecretAccessKey') }}";
          export AWS_SESSION_TOKEN="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SessionToken') }}";
          export AWS_CREDENTIAL_EXPIRATION="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('Expiration') }}";
          export AWS_REGION="{{ hostvars['localhost']['aws_region'] }}";
          export DAX_ENDPOINT="{{ hostvars['localhost']['dax_endpoint'] }}";
          unset cmd;
          basecmd='./dax-benchmarker -c 100
          -d 115
          -t "{{ hostvars['localhost']['user_name'] }}"-high-velocity-table
          -e "{{ hostvars['localhost']['dax_endpoint'] }}"';
          for i in $(seq 1 9); do
          cmd+="$basecmd & ";
          done;
          cmd+="$basecmd";
          timeout -s SIGINT "{{ duration | default(1800) | int }}" bash -c "while :; do $cmd; done"
        executable: /bin/bash
      ignore_errors: yes
      tags:
        - dax
        - crud
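The shell fragment above builds a command string holding ten copies of the benchmarker, the first nine backgrounded with `&` and the last in the foreground, then restarts the whole batch in a loop until `timeout` delivers SIGINT. A sketch of that assembly, with `echo run` standing in for `./dax-benchmarker` (and POSIX `cmd="$cmd…"` concatenation in place of bash's `+=`):

```shell
# Build the fan-out command string: nine backgrounded copies plus one
# foreground copy, exactly as the task's seq loop does.
basecmd='echo run'
cmd=''
for i in $(seq 1 9); do
  cmd="$cmd$basecmd & "
done
cmd="$cmd$basecmd"
# cmd now contains ten copies of $basecmd
echo "$cmd" | grep -o 'echo run' | wc -l   # prints 10
```

The foreground copy is what makes the `while :` wrapper wait for each batch to finish before relaunching the next one.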

    - name: Run the DAX benchmarker in read-only mode
      shell:
        cmd: >
          export AWS_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('AccessKeyId') }}";
          export AWS_SECRET_ACCESS_KEY="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SecretAccessKey') }}";
          export AWS_SESSION_TOKEN="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('SessionToken') }}";
          export AWS_CREDENTIAL_EXPIRATION="{{ hostvars['localhost']['aws_credentials'] | community.general.json_query('Expiration') }}";
          export AWS_REGION="{{ hostvars['localhost']['aws_region'] }}";
          export DAX_ENDPOINT="{{ hostvars['localhost']['dax_endpoint'] }}";
          unset cmd;
          basecmd='./dax-benchmarker -c 100
          -d 115
          -r
          -t "{{ hostvars['localhost']['user_name'] }}"-high-velocity-table
          -e "{{ hostvars['localhost']['dax_endpoint'] }}"';
          for i in $(seq 1 9); do
          cmd+="$basecmd & ";
          done;
          cmd+="$basecmd";
          timeout -s SIGINT "{{ duration | default(1800) | int }}" bash -c "while :; do $cmd; done"
        executable: /bin/bash
      ignore_errors: yes
      tags:
        - dax
        - read-only