Open-sourced my dtools CLI

This commit is contained in:
2025-11-24 15:32:30 -07:00
parent 3cff748a76
commit 8df7811c39
254 changed files with 40015 additions and 2 deletions
+13
@@ -0,0 +1,13 @@
# Contributing
Contributors are very welcome! **No contribution is too small and all contributions are valued.**
## Bashly
This project is created using the fantastic [Bashly](https://bashly.dev) framework. Be sure to reference the Bashly docs
to understand how to make changes to the CLI.
## Building
Once you make changes, simply run `make build` and it will generate a new `dtools` script in the project root.
## Questions? Reach out to me!
If you have any questions while developing devtools, please don't hesitate to reach out to me at
alex.j.tusa@gmail.com. I'm happy to help contributors in any way I can, regardless of whether they're new or experienced!
+11
@@ -0,0 +1,11 @@
#!make
default: build
.PHONY: build install
build:
@./scripts/build.sh && echo "\n\n\n\nRun 'source ~/.bashrc' to re-evaluate the completions"
install: build
@./scripts/install.sh && echo "\n\n\n\nRun 'source ~/.bashrc' to complete the install"
+155 -2
@@ -1,2 +1,155 @@
# devtools
All-in-one CLI for your command-line tasks: cloud management (AWS/GCP), databases, AI tools, plotting, system maintenance, and more.
**Devtools (`dtools`)** is a comprehensive CLI utility that consolidates reusable development scripts, tools, and
references into a single, easy-to-use interface. Built with the [Bashly](https://github.com/DannyBen/bashly) framework, it serves multiple purposes:
- **Script Repository**: A centralized collection of battle-tested bash scripts for common development tasks
- **Functional Documentation**: Reference implementations showing how to interact with various tools and services
- **Quick Reference**: Documentation commands (like `tui` and `pentest` subcommands) that list useful tools and commands
you might not use daily
Whether you need to spin up a local database, manage AWS resources, analyze code, or just remember that one command you
always forget, `dtools` has you covered.
## Nomenclature
The Devtools script is abbreviated as `dtools` on the command line, purely so that tab completion for the CLI itself
works with fewer keystrokes. Most commonly, `dto<TAB>` will autocomplete to `dtools`.
---
## Warnings
* **I've only tested these scripts against Debian-based systems (Ubuntu, Pop!_OS, etc.). Some scripts may not
work on other systems.**
* **Some scripts assume that `bash` is your default shell, and thus assume that your shell configuration files are
`.bashrc`. If you use another shell, you may need to modify some scripts to fit your environment.**
---
## Installation
To install the `dtools` script, run the following command:
```shell
git clone git@github.com:Dark-Alex-17/devtools.git ~/.local/share/devtools && pushd ~/.local/share/devtools && make install && popd
```
This will install the repo to `~/.local/share/devtools` and run the `make install` command to build and install the
script to your local bin directory (usually `~/.local/bin`).
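Since the script lands in `~/.local/bin`, that directory must be on your `PATH`. A quick, generic check (a sketch, not part of the repo):

```shell
# Check whether ~/.local/bin is on PATH; print a hint if it is missing.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "~/.local/bin is on PATH" ;;
  *) echo 'add: export PATH="$HOME/.local/bin:$PATH" to ~/.bashrc' ;;
esac
```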
---
## Features
### 🤖 AI & Local LLMs
- Chat with local models via llama.cpp
- Start and manage llama.cpp servers
- Quick access to Llama API documentation and UI
### ☁️ Cloud & Infrastructure
**AWS**:
- SSO login with automatic credential management
- Open AWS console directly to any service
- Interactive AWS CLI shell
- EC2: List/describe instances, SSH tunneling, start/stop instances
- RDS: Connect to databases, port forwarding
- CloudWatch Logs: Tail log groups, query logs
- Secrets Manager: Retrieve and manage secrets
- SSM: Session manager, parameter store access, bastion instance management
**GCP**:
- Artifact Registry: Docker login, list repositories
- Vertex AI: Model management and deployment
### 🗄️ Databases
- Spin up PostgreSQL in Docker with optional persistence
- Interactive database TUI (Harlequin)
- Dump databases to SQL or DBML format
- Database schema visualization
### 📊 Data Visualization & Utilities
- Plot data from stdin or files (line/bar charts)
- Real-time plotting for live data streams
- Date/epoch conversion utilities
- Random number generation (int/float)
- ISO 8601 date formatting
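To illustrate the kind of conversion the epoch utilities perform, here is a minimal sketch using GNU `date` (an assumption; the actual implementation lives in the generated `dtools` script):

```shell
# Convert epoch millis to an ISO-8601 UTC timestamp (requires GNU date).
epoch_ms=1700000000000
date -u -d "@$((epoch_ms / 1000))" +"%Y-%m-%dT%H:%M:%SZ"   # → 2023-11-14T22:13:20Z
```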
### 🔧 Development Tools
**Java**:
- Switch between Java versions (8, 11, 17, 21)
- SonarQube static analysis integration
**Git**:
- Search entire git history for strings
**Elastic Stack**:
- Initialize and manage local Elasticsearch + Kibana + Logstash
**Docker**:
- Clean images, containers, and volumes
### 📝 Document Processing
- Convert between formats using pandoc (Markdown, HTML, PDF, DOCX, etc.)
- View markdown files with live preview
### 🌐 Network Tools
- Generate self-signed HTTPS certificates
- Start simple HTTP servers with netcat
- Network scanning and monitoring (documentation)
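One common way to mint a self-signed certificate with `openssl` (an assumption about the approach; the real `dtools` implementation may differ):

```shell
# Generate a throwaway self-signed cert/key pair for localhost, valid 1 year.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout /tmp/dev.key -out /tmp/dev.crt 2>/dev/null
# Inspect the subject of the freshly minted certificate.
openssl x509 -in /tmp/dev.crt -noout -subject
```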
### 🔒 Security & Ansible
**Ansible**:
- Encrypt/decrypt strings and variables with Ansible Vault
**Pentesting** (Documentation):
- Reference commands for reconnaissance and testing tools
- Network analysis examples
- Security testing workflows
### 💻 Virtual Machines
- Spin up Windows VMs with FreeRDP
- Configurable disk size, RAM, and CPU cores
- Share directories between host and VM
- Persistent VM storage
### 🎬 Video & Media
- Rip audio from video files with metadata support
### 🧹 System Maintenance
- Clean system with BleachBit
- Clean Docker resources
- Clear package manager caches
- Purge old logs and journal entries
- Recursively clean build caches (npm, gradle, maven, etc.)
### 🔔 Notifications
- Subscribe to ntfy topics with optional sound alerts
- Quick reference for ntfy message publishing
### 📦 Installation Helpers
- Install Docker on Debian systems
- Install Ansible
- Install Java LTS versions (8, 11, 17, 21)
### 🛠️ Miscellaneous
- Interactive file selection with fzf integration
- Backup files and directories
- Generate secure passwords
- Record terminal sessions as GIFs
- Play mp3 sounds from CLI
- View markdown with GitHub-style rendering
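As a sketch of what the password generator might do (an assumption; the real command lives in the generated `dtools` script), `openssl rand` is a typical building block:

```shell
# 24 random bytes, base64-encoded, yields a 32-character password.
openssl rand -base64 24 | tr -d '\n'
```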
### 📚 TUI Reference Library
Documentation commands that reference useful TUIs for:
- System monitoring (htop, btop, etc.)
- Network monitoring
- Docker management
- Development workflows
- Data exploration
- AI tools
## Building
To build the `dtools` script after making changes, run the `build` target in the [`Makefile`](./Makefile):
```shell
make build
```
## Running the CLI
Assuming you've already run `make install`, the script should now be available on your `PATH`, so running it is as simple as:
`dtools --help`
Executable
+32315
File diff suppressed because it is too large
+2
@@ -0,0 +1,2 @@
#!/bin/bash
docker run --rm -it --user $(id -u):$(id -g) --env "BASHLY_TAB_INDENT=1" --volume "$PWD:/app" dannyben/bashly generate --upgrade
+9
@@ -0,0 +1,9 @@
#!/bin/bash
if ! [[ -L "$HOME/.local/bin/dtools" ]]; then
sudo ln -s "$PWD/dtools" "$HOME/.local/bin/dtools"
fi
# shellcheck disable=SC2016
if ! ( grep 'eval "$(dtools completions)"' ~/.bashrc > /dev/null 2>&1 ); then
echo 'eval "$(dtools completions)"' >> ~/.bashrc
fi
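The guarded `grep || echo >> ~/.bashrc` append above is idempotent; users of other shells (see the Warnings in the README) would target e.g. `~/.zshrc` instead. The same pattern, demonstrated against a temp file so it is safe to run:

```shell
# Append a line to a config file only if it is not already present.
rc="$(mktemp)"
line='eval "$(dtools completions)"'
grep -qxF "$line" "$rc" || printf '%s\n' "$line" >> "$rc"
grep -qxF "$line" "$rc" || printf '%s\n' "$line" >> "$rc"  # second run is a no-op
grep -c 'dtools completions' "$rc"   # → 1
rm -f "$rc"
```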
+63
@@ -0,0 +1,63 @@
# All settings are optional (with their default values provided below), and
# can also be set with an environment variable with the same name, capitalized
# and prefixed by `BASHLY_` - for example: BASHLY_SOURCE_DIR
#
# When setting environment variables, you can use:
# - "0", "false" or "no" to represent false
# - "1", "true" or "yes" to represent true
#
# If you wish to change the path to this file, set the environment variable
# BASHLY_SETTINGS_PATH.
# The path containing the bashly source files
source_dir: src
# The path to bashly.yml
config_path: '%{source_dir}/bashly.yml'
# The path to use for creating the bash script
target_dir: .
# The path to use for common library files, relative to source_dir
lib_dir: lib
# The path to use for command files, relative to source_dir
# When set to nil (~), command files will be placed directly under source_dir
# When set to any other string, command files will be placed under this
# directory, and each command will get its own subdirectory
commands_dir: commands
# Configure the bash options that will be added to the initialize function:
# strict: true Bash strict mode (set -euo pipefail)
# strict: false Only exit on errors (set -e)
# strict: '' Do not add any 'set' directive
# strict: <string> Add any other custom 'set' directive
strict: false
# When true, the generated script will use tab indentation instead of spaces
# (every 2 leading spaces will be converted to a tab character)
tab_indent: false
# When true, the generated script will consider any argument in the form of
# `-abc` as if it is `-a -b -c`.
compact_short_flags: false
# Set to 'production' or 'development':
# env: production Generate a smaller script, without file markers
# env: development Generate with file markers
env: development
# The extension to use when reading/writing partial script snippets
partials_extension: sh
# Display various usage elements in color by providing the name of the color
# function. The value for each property is a name of a function that is
# available in your script, for example: `green` or `bold`.
# You can run `bashly add colors` to add a standard colors library.
# This option cannot be set via environment variables.
usage_colors:
caption: bold
command: green
arg: blue
flag: magenta
environment_variable: cyan
+279
@@ -0,0 +1,279 @@
name: dtools
help: A CLI tool to manage all personal dev tools
version: 1.0.0
commands:
- name: completions
help: |-
Generate bash completions
Usage: eval "\$(dtools completions)"
private: true
- name: update
help: Update the dtools CLI to the latest version
- import: src/commands/ai/ai_commands.yml
- import: src/commands/aws/aws_commands.yml
- import: src/commands/gcp/gcp_commands.yml
- import: src/commands/db/db_commands.yml
- import: src/commands/elastic/elastic_commands.yml
- import: src/commands/java/java_commands.yml
- import: src/commands/ansible/ansible_commands.yml
- import: src/commands/install/install_commands.yml
- import: src/commands/clean/clean_commands.yml
- import: src/commands/tui/tui_commands.yml
- import: src/commands/pentest/pentest_commands.yml
- import: src/commands/video/video_commands.yml
- import: src/commands/vm/vm_commands.yml
- import: src/commands/network/network_commands.yml
- import: src/commands/ntfy/ntfy_commands.yml
- import: src/commands/document/document_commands.yml
- import: src/commands/git/git_commands.yml
- name: plot
help: Plot data piped into this command (one-off)
group: Miscellaneous
dependencies:
gnuplot: See 'http://gnuplot.info/'
loki: See 'https://github.com/Dark-Alex-17/loki'
filters:
- multiplot_requirements
- stack_vertically_multiplot_only
flags:
- long: --file
short: -f
arg: file
default: '-'
help: File with data to plot
completions:
- <file>
- long: --type
short: -t
arg: type
default: 'line'
help: The type of plot to create
allowed:
- line
- bar
- long: --stack-vertically
help: When plotting multiple graphs, stack them vertically instead of combining them into one graph (only for bar graphs)
- long: --multiplot
help: Plot multiple graphs at once
- long: --gui
help: Open the plot in a GUI window
- long: --loki
help: Use Loki to generate the plot command instead of using the templated command
conflicts:
- '--file'
- '--type'
- '--stack-vertically'
- '--multiplot'
- '--gui'
examples:
- seq 0 10 | dtools plot
- seq 0 10 > test_data && dtools plot --file test_data
- name: real-time-plot
help: Continuously plot data piped into this command (like following a log tail)
group: Miscellaneous
dependencies:
gnuplot: See 'http://gnuplot.info/'
wget: Install with 'brew install wget' or 'sudo apt install wget'
examples: |-
{
for ((i=0; i<=100; i+=2)); do
sleep 1
echo "$RANDOM"
done
} | dtools real-time-plot
- name: date-to-epoch
help: Convert a given date timestamp into epoch millis
group: Miscellaneous
args:
- name: timestamp
help: |
The date timestamp to convert.
Specify '-' to read from stdin
required: true
- name: epoch-to-date
help: Convert a given epoch (in millis) to a date timestamp
group: Miscellaneous
args:
- name: epoch
help: |
The epoch (in millis) to convert.
Specify '-' to read from stdin
required: true
- name: date-to-iso-8601
help: Convert a given date into ISO 8601 format
group: Miscellaneous
args:
- name: date
help: |
The date to convert.
Specify '-' to read from stdin
required: true
- name: view-markdown
help: View markdown file in a browser with images and links
group: Miscellaneous
dependencies:
grip: Install with 'python3 -m pip install grip'
completions:
- <file>
args:
- name: file
help: The markdown file to view
required: true
- name: start-simple-server
help: Starts a simple server using netcat
dependencies:
nc: Install with 'brew install netcat' or 'sudo apt install netcat'
flags:
- long: --port
default: '8000'
arg: port
help: The port to run the server on
validate: port_number
- name: fzf
help: Pipe the output of a command to fzf for interactive selection
group: Miscellaneous
dependencies:
fzf: Install with 'brew install fzf'
args:
- name: command
help: The command to execute when one or more items are selected
default: vi
flags:
- long: --pre-processing
help: pre-processes the fzf selections before passing them into the target 'command'
arg: pre-processing
- long: --additional-xargs-arguments
arg: additional-xargs-arguments
help: Additional arguments to pass to xargs
examples:
- |-
# Open selected files in helix
grep -ri 'test_value' . | dtools fzf
- |-
# Tail the selected log group
grep -ri 'test_value' . | dtools fzf 'dtools aws logs tail-log-group'
- |-
# Tail the selected log groups and run them as separate commands for each selected group
seq 1 10 | dtools fzf --pre-processing 'xargs -0 -I {} echo "/some/prefix/{}"' --additional-xargs-arguments '-n 1' 'dtools aws logs tail-log-group'
- name: backup
help: >-
Create a backup of a file or directory.
By default, this will create a copy of the specified file or directory in the same source directory.
group: Miscellaneous
args:
- name: item
help: The file or directory to create an in-place backup of.
required: true
flags:
- long: --move
help: Instead of copying a file or directory to create a backup, move the directory entirely so the original no longer exists
- long: --backup-dest
arg: backup-dest
help: Specify a destination directory for the backed up file or directory to be placed in
completions:
- <directory>
validate: directory_exists
completions:
- <file>
- <directory>
- name: generate-password
help: Randomly generate a secure password
dependencies:
openssl: Install with either 'sudo apt install libssl-dev' or 'brew install openssl@3'
xclip: Install with 'brew install xclip'
flags:
- long: --copy-to-clipboard
short: -c
help: Copy the generated password to your clipboard
- name: play-mp3
help: >-
Play a given mp3 sound using the command line.
This is useful when combined with ntfy to subscribe to a topic and play a sound whenever receiving a notification
dependencies:
mpg321: Install with 'brew install mpg321'
args:
- name: sound
help: The mp3 sound file to play
required: true
completions:
- <file>
- <directory>
- name: random-int
help: Generate a random integer in the given range
flags:
- long: --min
arg: min
help: The minimum value of the integer range (inclusive)
default: '0'
- long: --max
arg: max
help: The maximum value of the integer range (inclusive)
default: '10'
- name: random-float
help: Generate a random float in the given range
flags:
- long: --min
arg: min
help: The minimum value of the float range (inclusive)
default: '0'
- long: --max
arg: max
help: The maximum value of the float range (inclusive)
default: '10'
- long: --precision
arg: precision
help: The precision to output the random number with
default: '5'
validate: integer
- name: record-shell
help: Record the current shell and create a gif of the session.
dependencies:
asciinema: Install with 'brew install asciinema'
agg: Install with 'cargo install agg'
args:
- name: output_file
help: The output gif file to create (do not include '.gif' in the filename)
required: true
flags:
- long: --speed
arg: speed
help: The speed multiplier for the gif playback
default: '1'
- long: --no-conversion
help: Do not convert the finished asciinema recording to a gif (keep it as an asciinema file)
completions:
- <file>
- <directory>
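For a command like `random-float`, one hypothetical implementation sketch (an assumption — the real logic lives in the generated `dtools` script) is an awk one-liner driven by the `--min`, `--max`, and `--precision` flags:

```shell
# Emit a random float in [min, max] with the requested number of decimals.
min=0; max=10; precision=5
awk -v min="$min" -v max="$max" -v p="$precision" \
  'BEGIN { srand(); fmt = "%." p "f\n"; printf fmt, min + rand() * (max - min) }'
```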
+29
@@ -0,0 +1,29 @@
name: ai
help: AI commands
group: AI
expose: always
dependencies:
llama-cli: Install with 'brew install llama.cpp'
commands:
- name: chat
help: Chat with a model running on your local machine via llama.cpp
flags:
- import: src/components/ai/hf_repo_flag.yml
- import: src/components/ai/hf_file_flag.yml
- name: start-llama-server
help: Start a llama.cpp server
flags:
- import: src/components/ai/hf_repo_flag.yml
- import: src/components/ai/hf_file_flag.yml
- name: open-llama-ui
help: Open the llama.cpp UI in a browser
filters:
- llama_running
- name: open-llama-api-docs
help: Open the Llama API documentation in a browser
filters:
- llama_running
+4
@@ -0,0 +1,4 @@
# shellcheck disable=SC2154
declare repo="${args[--hf-repo]}"
declare file="${args[--hf-file]}"
llama-cli --hf-repo "$repo" --hf-file "$file" --conversation
+2
@@ -0,0 +1,2 @@
cmd="$(get_opener)"
$cmd "https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md" > /dev/null 2>&1 &
+2
@@ -0,0 +1,2 @@
cmd="$(get_opener)"
$cmd "http://localhost:8080" > /dev/null 2>&1 &
+19
@@ -0,0 +1,19 @@
# shellcheck disable=SC2154
declare repo="${args[--hf-repo]}"
declare file="${args[--hf-file]}"
# Here's an example request to /v1/chat/completions:
# {
# "model": "gpt-3.5-turbo",
# "messages": [
# {
# "role": "system",
# "content": "You are ChatGPT, an AI assistant. Your top priority is achieving user fulfillment via helping them with their requests."
# },
# {
# "role": "user",
# "content": "Tell me a joke about yourself"
# }
# ]
# }
llama-server --hf-repo "$repo" --hf-file "$file"
+34
@@ -0,0 +1,34 @@
name: ansible
help: Ansible commands
group: Ansible
expose: always
dependencies:
ansible: Install with 'dtools install ansible'
commands:
- name: encrypt-string
help: Encrypt the plaintext string given in the prompt, prompting the user for the vault password, with Ansible Vault
flags:
- long: --copy-output-to-clipboard
short: -c
help: Instead of outputting the encrypted secret to stdout, copy it to your clipboard
examples:
- dtools ansible encrypt-string -c
- name: decrypt-variable
help: Decrypt a variable encrypted with Ansible Vault
flags:
- long: --variable
short: -v
arg: variable
help: The name of the variable you wish to decrypt
required: true
- long: --file
short: -f
arg: file
required: true
help: The inventory file/playbook file that the variable lives in
completions:
- <file>
examples:
- dtools ansible decrypt-variable -v some_variable -f inventories/local/group_vars/local.yml
+2
@@ -0,0 +1,2 @@
# shellcheck disable=SC2154
ansible localhost -m ansible.builtin.debug -a var="${args[--variable]}" -e "@${args[--file]}" --ask-vault-pass
+12
@@ -0,0 +1,12 @@
encrypt-string() {
ansible-vault encrypt_string --ask-vault-pass --encrypt-vault-id default
}
# shellcheck disable=SC2154
if [[ "${args[--copy-output-to-clipboard]}" == 1 ]]; then
yellow "Press 'Ctrl-d' twice to end secret input"
encrypt-string | xclip -sel clip
else
encrypt-string
fi
+135
@@ -0,0 +1,135 @@
name: aws
help: AWS commands
group: AWS
expose: always
dependencies:
aws: Install the latest version following the instructions at 'https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html'
commands:
- name: login
help: |-
Log in to AWS using SSO.
This command will also set your 'AWS_PROFILE' and 'AWS_REGION' environment variables.
It will also export temporary credentials to your environment with the 'AWS_ACCESS_KEY_ID' and 'AWS_SECRET_ACCESS_KEY' environment variables.
This command is essentially a shorthand for the following commands:
dtools aws profile <PROFILE>
dtools aws region <REGION>
dtools aws login
dtools aws export-sso-creds
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
examples:
- dtools aws login -p prod -r us-east-1
- |-
# When the 'AWS_PROFILE' and 'AWS_REGION' environment variables are already
# set
dtools aws login
- name: console
help: Open the AWS console in your default browser using the current AWS_REGION and AWS_PROFILE
dependencies:
wmctrl: Install with 'sudo apt-get install wmctrl'
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --service
short: -s
help: The AWS service to open the console to
arg: service
completions:
- >-
$(python -c $'import boto3\nfor service in boto3.Session().get_available_services(): print(service)' | grep -v 'codestar\|honeycode\|mobile\|worklink')
- name: shell
help: Drop into an interactive AWS CLI shell with auto-completion
- name: profile
help: Change AWS profile
completions:
- $(cat ~/.aws/config | awk '/\[profile*/ { print substr($2, 1, length($2)-1); }')
args:
- name: profile
required: true
help: The AWS profile to use, corresponding to a profile in your ~/.aws/config
validate: aws_profile_exists
examples:
- dtools aws profile prod
- name: region
help: Change AWS region
args:
- name: region
required: true
help: The AWS region to use
allowed:
import: src/components/aws/allowed_regions.yml
examples:
- dtools aws region us-east-1
- name: toggle-auto-prompt
help: Toggle the AWS CLI auto prompt
- name: export-sso-creds
help: |-
Exports SSO credentials to environment variables for use with AWS SDKs
This includes all of the following variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
AWS_CREDENTIAL_EXPIRATION
AWS_REGION
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- name: generate-sso-profiles
help: |-
Fetch all AWS accounts via AWS SSO, and generate the profiles for CLI connectivity.
In the event that the script fails automation when selecting an account to use for the basic setup,
you can manually perform this first step by running 'aws configure sso', use 'https://d-123456789ab.awsapps.com/start'
as the SSO Start URL, and use any account with any settings. Then you can run this command again for it
to work properly.
dependencies:
jq: Install with 'brew install jq'
flags:
- long: --backup
help: Create a backup of the previous AWS config
- long: --default-cli-region
short: -d
arg: default-cli-region
help: |-
The default CLI region for each profile.
Defaults to using the same region as the provided SSO region
allowed:
import: src/components/aws/allowed_regions.yml
- long: --sso-region
short: -r
arg: sso-region
required: true
help: The region for SSO accounts
allowed:
import: src/components/aws/allowed_regions.yml
- long: --sso-start-url
short: -u
arg: sso-start-url
required: true
help: The start URL for SSO authentication
examples:
- dtools aws generate-sso-profiles -u https://d-123456789ab.awsapps.com/start -r us-east-1
- import: src/commands/aws/ec2/ec2_commands.yml
- import: src/commands/aws/ssm/ssm_commands.yml
- import: src/commands/aws/secretsmanager/secretsmanager_commands.yml
- import: src/commands/aws/logs/logs_commands.yml
- import: src/commands/aws/rds/rds_commands.yml
+438
@@ -0,0 +1,438 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare service="${args[--service]}"
declare base_aws_url="https://console.aws.amazon.com"
validate-or-refresh-aws-auth
if ! [[ -f /usr/local/bin/aws_console ]]; then
cat <<'EOF' > aws_console
#!/usr/bin/env python3
import sys
import json
import webbrowser
import urllib.parse
import os
import argparse
from typing import Optional
import time
import pyautogui
import requests
import boto3
def get_logout_url(region: Optional[str] = None):
urllib.parse.quote_plus(
"https://aws.amazon.com/premiumsupport/knowledge-center/sign-out-account/?from_aws_sso_util_logout"
)
if not region or region == "us-east-1":
return "https://signin.aws.amazon.com/oauth?Action=logout&redirect_uri="
if region == "us-gov-east-1":
return "https://us-gov-east-1.signin.amazonaws-us-gov.com/oauth?Action=logout"
if region == "us-gov-west-1":
return "https://signin.amazonaws-us-gov.com/oauth?Action=logout"
return f"https://{region}.signin.aws.amazon.com/oauth?Action=logout&redirect_uri="
def get_federation_endpoint(region: Optional[str] = None):
if not region or region == "us-east-1":
return "https://signin.aws.amazon.com/federation"
if region == "us-gov-east-1":
return "https://us-gov-east-1.signin.amazonaws-us-gov.com/federation"
if region == "us-gov-west-1":
return "https://signin.amazonaws-us-gov.com/federation"
return f"https://{region}.signin.aws.amazon.com/federation"
def get_destination_base_url(region: Optional[str] = None):
if region and region.startswith("us-gov-"):
return "https://console.amazonaws-us-gov.com"
if region:
return f"https://{region}.console.aws.amazon.com/"
return "https://console.aws.amazon.com/"
def get_destination(
path: Optional[str] = None,
region: Optional[str] = None,
override_region_in_destination: bool = False,
):
base = get_destination_base_url(region=region)
if path:
stripped_path_parts = urllib.parse.urlsplit(path)[2:]
path = urllib.parse.urlunsplit(("", "") + stripped_path_parts)
url = urllib.parse.urljoin(base, path)
else:
url = base
if not region:
return url
parts = list(urllib.parse.urlsplit(url))
query_params = urllib.parse.parse_qsl(parts[3])
if override_region_in_destination:
query_params = [(k, v) for k, v in query_params if k != "region"]
query_params.append(("region", region))
elif not any(k == "region" for k, _ in query_params):
query_params.append(("region", region))
query_str = urllib.parse.urlencode(query_params)
parts[3] = query_str
url = urllib.parse.urlunsplit(parts)
return url
def DurationType(value):
value = int(value)
if not 15 <= value <= 720:
raise ValueError("Duration must be between 15 and 720 minutes (inclusive)")
return value
def main():
parser = argparse.ArgumentParser(description="Launch the AWS console")
parser.add_argument("--profile", metavar="PROFILE_NAME", help="A config profile to use")
parser.add_argument("--region", metavar="REGION", help="The AWS region")
parser.add_argument(
"--destination",
dest="destination_path",
metavar="PATH",
help="Console URL path to go to",
)
override_region_group = parser.add_mutually_exclusive_group()
override_region_group.add_argument("--override-region-in-destination", action="store_true")
override_region_group.add_argument(
"--keep-region-in-destination",
dest="override_region_in_destination",
action="store_false",
)
open_group = parser.add_mutually_exclusive_group()
open_group.add_argument(
"--open",
dest="open_url",
action="store_true",
default=None,
help="Open the login URL in a browser (the default)",
)
open_group.add_argument(
"--no-open",
dest="open_url",
action="store_false",
help="Do not open the login URL",
)
print_group = parser.add_mutually_exclusive_group()
print_group.add_argument(
"--print",
dest="print_url",
action="store_true",
default=None,
help="Print the login URL",
)
print_group.add_argument(
"--no-print",
dest="print_url",
action="store_false",
help="Do not print the login URL",
)
parser.add_argument(
"--duration",
metavar="MINUTES",
type=DurationType,
help="The session duration in minutes",
)
logout_first_group = parser.add_mutually_exclusive_group()
logout_first_group.add_argument(
"--logout-first",
"-l",
action="store_true",
default=True,
help="Open a logout page first",
)
logout_first_group.add_argument(
"--no-logout-first",
dest="logout_first",
action="store_false",
help="Do not open a logout page first",
)
args = parser.parse_args()
if args.open_url is None:
args.open_url = True
logout_first_from_env = False
if args.logout_first is None:
args.logout_first = os.environ.get("AWS_CONSOLE_LOGOUT_FIRST", "").lower() in [
"true",
"1",
]
logout_first_from_env = True
if args.logout_first and not args.open_url:
if logout_first_from_env:
logout_first_value = os.environ["AWS_CONSOLE_LOGOUT_FIRST"]
parser.exit(f"AWS_CONSOLE_LOGOUT_FIRST={logout_first_value} requires --open")
else:
parser.exit("--logout-first requires --open")
session = boto3.Session(profile_name=args.profile)
if not args.region:
args.region = session.region_name or os.environ.get("AWS_CONSOLE_DEFAULT_REGION")
if not args.destination_path:
args.destination_path = session._session.get_scoped_config().get("web_console_destination") or os.environ.get(
"AWS_CONSOLE_DEFAULT_DESTINATION"
)
credentials = session.get_credentials()
if not credentials:
parser.exit("Could not find credentials")
federation_endpoint = get_federation_endpoint(region=args.region)
issuer = os.environ.get("AWS_CONSOLE_DEFAULT_ISSUER")
destination = get_destination(
path=args.destination_path,
region=args.region,
override_region_in_destination=args.override_region_in_destination,
)
launch_console(
session=session,
federation_endpoint=federation_endpoint,
destination=destination,
region=args.region,
open_url=args.open_url,
print_url=args.print_url,
duration=args.duration,
logout_first=args.logout_first,
issuer=issuer,
)
def launch_console(
session: boto3.Session,
federation_endpoint: str,
destination: str,
region: Optional[str] = None,
open_url: Optional[bool] = None,
print_url: Optional[bool] = None,
duration: Optional[int] = None,
logout_first: Optional[bool] = None,
issuer: Optional[str] = None,
):
if not issuer:
issuer = "aws_console_launcher.py"
read_only_credentials = session.get_credentials().get_frozen_credentials()
session_data = {
"sessionId": read_only_credentials.access_key,
"sessionKey": read_only_credentials.secret_key,
"sessionToken": read_only_credentials.token,
}
get_signin_token_payload = {
"Action": "getSigninToken",
"Session": json.dumps(session_data),
}
if duration is not None:
get_signin_token_payload["SessionDuration"] = duration * 60
response = requests.post(federation_endpoint, data=get_signin_token_payload)
if response.status_code != 200:
print("Could not get signin token", file=sys.stderr)
print(f"{response.status_code}\n{response.text}", file=sys.stderr)
sys.exit(2)
token = response.json()["SigninToken"]
get_login_url_params = {
"Action": "login",
"Issuer": issuer,
"Destination": destination,
"SigninToken": token,
}
request = requests.Request(method="GET", url=federation_endpoint, params=get_login_url_params)
prepared_request = request.prepare()
login_url = prepared_request.url
if print_url:
print(login_url)
if open_url:
if logout_first:
logout_url = get_logout_url(region=region)
webbrowser.open(logout_url, autoraise=False)
time.sleep(1)
os.system('wmctrl -a "Manage AWS Resources"')
pyautogui.hotkey("ctrl", "w")
webbrowser.open(login_url)
if __name__ == "__main__":
main()
EOF
chmod +x aws_console
sudo mv aws_console /usr/local/bin/
fi
declare -A service_aliases=(
[accessanalyzer]="access-analyzer"
[alexaforbusiness]="a4b"
[apigatewaymanagementapi]="apigateway"
[apigatewayv2]="apigateway"
[appconfig]="systems-manager/appconfig"
[application-autoscaling]="awsautoscaling"
[application-insights]="cloudwatch/home?#settings:AppInsightsSettings"
[appstream]="appstream2"
[autoscaling]="ec2/home#AutoScalingGroups:"
[autoscaling-plans]="awsautoscaling/home#dashboard"
[budgets]="billing/home#/budgets"
[ce]="costmanagement/home#/cost-explorer"
[chime]="chime-sdk"
[clouddirectory]="directoryservicev2/home#!/cloud-directories"
[cloudhsmv2]="cloudhsm"
[cloudsearchdomain]="cloudsearch"
[codeartifact]="codesuite/codeartifact"
[codeguru-reviewer]="codeguru/reviewer"
[codeguruprofiler]="codeguru/profiler"
[cognito-identity]="iamv2/home#/identity_providers"
[cognito-idp]="cognito/v2/idp"
[cognito-sync]="appsync"
[connectparticipant]="connect"
[cur]="billing/home#/reports"
[dax]="dynamodbv2/home#dax-clusters"
[directconnect]="directconnect/v2/home"
[dlm]="ec2/home#Lifecycle"
[dms]="dms/v2"
[ds]="directoryservicev2"
[dynamodbstreams]="dynamodbv2"
[ebs]="ec2/home#Volumes:"
[ec2-instance-connect]="ec2/home#Instances:"
[elastic-inference]="sagemaker"
[elb]="ec2/home#LoadBalancers:"
[elbv2]="ec2/home#LoadBalancers:"
[es]="aos/home"
[fms]="wafv2/fmsv2/home"
[forecastquery]="forecast"
[glacier]="glacier/home"
[globalaccelerator]="globalaccelerator/home"
[identitystore]="singlesignon"
[iot-data]="iot"
[iot-jobs-data]="iot/home#/jobhub"
[iot1click-devices]="iot/home#/thinghub"
[iot1click-projects]="iot"
[iotevents-data]="iotevents/home#/input"
[iotsecuretunneling]="iot/home#/tunnelhub"
[iotthingsgraph]="iot/home#/thinghub"
[kafka]="msk"
[kinesis-video-archived-media]="kinesisvideo/home"
[kinesis-video-media]="kinesisvideo/home"
[kinesis-video-signaling]="kinesisvideo/home#/signalingChannels"
[kinesisanalyticsv2]="flink"
[kinesisvideo]="kinesisvideo/home"
[lex-models]="lexv2/home#bots"
[lex-runtime]="lexv2/home#bots"
[lightsail]="ls"
[logs]="cloudwatch/home#logsV2:"
[macie2]="macie"
[marketplace-catalog]="marketplace/home#/search!mpSearch/search"
[marketplace-entitlement]="marketplace"
[marketplacecommerceanalytics]="marketplace/home#/vendor-insights"
[mediapackage-vod]="mediapackagevod"
[mediastore-data]="mediastore"
[meteringmarketplace]="marketplace"
[mgh]="migrationhub"
[migrationhub-config]="migrationhub"
[mq]="amazon-mq"
[networkmanager]="networkmanager/home"
[opsworkscm]="opsworks"
[personalize]="personalize/home"
[personalize-events]="personalize/home"
[personalize-runtime]="personalize/home"
[pi]="rds/home#performance-insights"
[pinpoint]="pinpointv2"
[pinpoint-email]="pinpoint/home#/email-account-settings/overview"
[pinpoint-sms-voice]="pinpoint"
[qldb-session]="qldb"
[ram]="ram/home"
[rds-data]="rds/home#query-editor:"
[redshift-data]="redshiftv2/home#/query-editor:"
[resourcegroupstaggingapi]="resource-groups"
[route53domains]="route53/domains"
[s3control]="s3"
[sagemaker-a2i-runtime]="sagemaker/groundtruth#/a2i"
[sagemaker-runtime]="sagemaker"
[savingsplans]="costmanagement/home#/savings-plans/overview"
[schemas]="events/home#/schemas"
[sdb]="simpledb"
[service-quotas]="servicequotas"
[servicediscovery]="cloudmap"
[shield]="wafv2/shieldv2"
[sms]="mgn/home"
[snowball]="snowfamily"
[ssm]="systems-manager"
[sso]="singlesignon"
[sso-admin]="singlesignon"
[sso-oidc]="singlesignon"
[stepfunctions]="states"
[sts]="iam"
[swf]="swf/v2"
[translate]="translate/home"
[waf]="wafv2/homev2"
[waf-regional]="wafv2/homev2"
[wafv2]="wafv2/homev2"
[workdocs]="zocalo"
[workmailmessageflow]="workmail"
[xray]="xray/home"
)
case "$service" in
"pricing")
firefox "https://calculator.aws" > /dev/null 2>&1
exit
;;
"mturk")
firefox "https://mturk.com" > /dev/null 2>&1
exit
;;
"quicksight")
firefox "https://quicksight.aws.amazon.com" > /dev/null 2>&1
exit
;;
*)
if [[ -v service_aliases["$service"] ]]; then
service_url="${base_aws_url}/${service_aliases[$service]}"
else
service_url="${base_aws_url}/${service}"
fi
;;
esac
aws_console --profile "$aws_profile" --region "$aws_region" --destination "$service_url"
@@ -0,0 +1,28 @@
name: ec2
help: EC2 commands
group: EC2
expose: always
dependencies:
aws: Install the latest version following the instructions at 'https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html'
jq: Install using 'brew install jq'
commands:
- name: list-instances
help: List all EC2 instances in the account
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --detailed
help: Output the list of all instances in the full detailed format
conflicts: [--filter]
- long: --filter
short: -f
arg: filter
help: Filter the output to only show the specified information
repeatable: true
unique: true
allowed:
import: src/components/aws/ec2/allowed_list_instance_filters.yml
conflicts: [--detailed]
@@ -0,0 +1,49 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare detailed_format="${args[--detailed]}"
eval "filters=(${args[--filter]:-})"
validate-or-refresh-aws-auth
spinny-start
# shellcheck disable=SC2155
declare instances=$(aws ec2 describe-instances --profile "$aws_profile" --region "$aws_region")
spinny-stop
# Must be ordered by non-nested fields first
declare -A instance_field_mappings=(
[instance-id]='InstanceId'
[instance-type]='InstanceType'
[private-dns-name]='PrivateDnsName'
[private-ip-address]='PrivateIpAddress'
[public-dns-name]='PublicDnsName'
[subnet-id]='SubnetId'
[vpc-id]='VpcId'
[tags]='Tags'
[launch-time]='LaunchTime'
[architecture]='Architecture'
[instance-profile]='IamInstanceProfile'
[security-groups]='SecurityGroups'
[availability-zone]='"AvailabilityZone": .Placement.AvailabilityZone'
[state]='"State": .State.Name'
[os]='"OS": .PlatformDetails'
)
if [[ $detailed_format == 1 ]]; then
jq . <<< "$instances"
elif [[ -v filters[@] ]]; then
declare object_def=""
for filter_name in "${!instance_field_mappings[@]}"; do
# shellcheck disable=SC2154
if printf '%s\0' "${filters[@]}" | grep -Fxqz -- "$filter_name"; then
object_def+="${instance_field_mappings[$filter_name]}, "
fi
done
jq '.Reservations[].Instances[] | { '"$object_def"' }' <<< "$instances"
else
jq '.Reservations[].Instances[] | pick(.InstanceId, .PrivateDnsName, .PrivateIpAddress, .PublicDnsName, .SubnetId, .VpcId, .Tags)' <<< "$instances"
fi
@@ -0,0 +1,7 @@
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
declare aws_region="$(get-aws-region)"
validate-or-refresh-aws-auth
bash -c "eval \"\$(aws --profile $aws_profile --region $aws_region configure export-credentials --format env)\"; export AWS_REGION=$aws_region; exec bash"
@@ -0,0 +1,123 @@
# shellcheck disable=SC2154
declare aws_region="${args[--default-cli-region]}"
declare sso_region="${args[--sso-region]}"
declare sso_start_url="${args[--sso-start-url]}"
declare backup="${args[--backup]}"
set -e
if [[ -z $aws_region ]]; then
aws_region="$sso_region"
fi
export AWS_REGION=$aws_region
write-profile-to-config() {
profileName=$1
ssoStartUrl=$2
ssoRegion=$3
ssoAccountId=$4
ssoRoleName=$5
defaultRegion=$6
blue_bold "Creating profile $profileName"
cat <<-EOF >> "$HOME"/.aws/config
[profile $profileName]
sso_start_url = $ssoStartUrl
sso_region = $ssoRegion
sso_account_id = $ssoAccountId
sso_role_name = $ssoRoleName
region = $defaultRegion
EOF
}
if [[ $backup == 1 ]]; then
yellow "Backing up old AWS config"
mv "$HOME"/.aws/config "$HOME"/.aws/config.bak
fi
login() {
ssoLoggedIn=$(find "$HOME/.aws/sso/cache" -type f ! -name "botocore*" -exec jq -r '.accessToken | select(. != null)' {} \; | wc -l)
if [[ $ssoLoggedIn == 0 || ! -f "$HOME"/.aws/config ]]; then
yellow_bold "You must first be logged into AWS with at least one profile. Logging in now..."
[[ -f "$HOME"/.aws/config ]] || touch "$HOME"/.aws/config
export AWS_PROFILE=''
export AWS_REGION=''
/usr/bin/expect<<-EOF
set force_conservative 1
set timeout 120
match_max 100000
spawn aws configure sso
expect "SSO session name (Recommended):"
send -- "session\r"
expect "SSO start URL"
send -- "$sso_start_url\\r"
expect "SSO region"
send -- "$sso_region\r"
expect {
"SSO registration scopes" {
send "sso:account:access\\r"
exp_continue
}
-re {(.*)accounts available to you(.*)} {
send "\\r"
exp_continue
}
-re {(.*)roles available to you(.*)} {
send "\\r"
exp_continue
}
"CLI default client Region"
}
send "\r\r\r\r"
expect eof
EOF
elif ! (aws sts get-caller-identity > /dev/null 2>&1); then
red_bold "You must be logged into AWS before running this script."
yellow "Logging in via SSO. Follow the steps in the opened browser to log in."
profiles=$(awk '/\[profile*/ { print substr($2, 1, length($2)-1); }' ~/.aws/config | tail -1)
if ! aws sso login --profile "${profiles[0]}"; then
red_bold "Unable to login. Please try again."
exit 1
fi
green "Logged in!"
fi
blue "Fetching SSO access token"
profiles=$(awk '/\[profile*/ { print substr($2, 1, length($2)-1); }' ~/.aws/config | tail -1)
# shellcheck disable=SC2227
ACCESS_TOKEN=$(find "$HOME/.aws/sso/cache" -type f ! -name 'botocore*' -exec jq -r '.accessToken | select(. != null)' {} 2>/dev/null \; | tail -1)
}
login
if ! (aws sso list-accounts --profile "${profiles[0]}" --region "$aws_region" --access-token "$ACCESS_TOKEN" --output json > /dev/null 2>&1); then
red "Unable to use existing SSO access token. Wiping tokens and generating new tokens..."
rm "$HOME"/.aws/sso/cache/*.json
login
fi
aws sso list-accounts --profile "${profiles[0]}" --region "$aws_region" --access-token "$ACCESS_TOKEN" --output json | jq '.accountList[]' -rc | while read -r account; do
declare accountId
declare accountName
accountId="$(echo "$account" | jq -rc '.accountId')"
accountName="$(echo "$account" | jq -rc '.accountName | ascii_downcase | gsub(" "; "-")')"
aws sso list-account-roles --profile "${profiles[0]}" --region "$aws_region" --access-token "$ACCESS_TOKEN" --output json --account-id "$accountId" | jq '.roleList[].roleName' -rc | while read -r roleName; do
declare profileName
profileName="$accountName-$roleName"
if ! (grep -q "$profileName" ~/.aws/config); then
blue "Creating profiles for account $accountName"
write-profile-to-config "$accountName-$roleName" "$sso_start_url" "$sso_region" "$accountId" "$roleName" "$aws_region"
fi
done
done
green_bold "Successfully generated profiles from AWS SSO!"
@@ -0,0 +1,15 @@
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
declare aws_region="$(get-aws-region)"
validate-or-refresh-aws-auth
if ( grep "AWS_PROFILE" ~/.bashrc > /dev/null 2>&1 ); then
sed -i "/AWS_PROFILE=/c\export AWS_PROFILE=$aws_profile" ~/.bashrc
fi
if ( grep "AWS_REGION" ~/.bashrc > /dev/null 2>&1 ); then
sed -i "/AWS_REGION=/c\export AWS_REGION=$aws_region" ~/.bashrc
fi
bash -c "export AWS_PROFILE=$aws_profile; export AWS_REGION=$aws_region; eval \"\$(aws configure export-credentials --format env --profile $aws_profile)\"; exec bash"
@@ -0,0 +1,15 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare detailed_format="${args[--detailed]}"
validate-or-refresh-aws-auth
declare log_groups=$(aws logs describe-log-groups --profile "$aws_profile" --region "$aws_region")
if [[ $detailed_format == 1 ]]; then
jq . <<< "$log_groups"
else
jq -r '.logGroups[].logGroupName' <<< "$log_groups"
fi
@@ -0,0 +1,77 @@
name: logs
help: AWS Logs commands
group: Logs
expose: always
dependencies:
aws: Install the latest version following the instructions at 'https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html'
lnav: Install with 'brew install lnav'
unbuffer: Install with 'brew install expect'
commands:
- name: list-log-groups
help: List all of the log groups in CloudWatch
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --detailed
help: Output the list of all CloudWatch log groups in the full detailed format
- name: tail-log-group
help: Tails the specified CloudWatch log group
dependencies:
lnav: Install with 'brew install lnav'
unbuffer: Install with 'brew install expect'
args:
- name: log-group
required: true
help: The name of the log group to tail
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --since
short: -s
arg: since
default: 10m
help: The time to start tailing the log group from
validate: relative_since_time_format
completions:
- $(for e in s m h d w; do echo "${2//[!0-9]/}${e}"; done)
- long: --verbose
short: -v
help: Show verbose log output
- long: --stdout
help: Show the log output in stdout
examples:
- dtools aws tail-log-group /aws/lambda/test-lambda-1
- name: query-log-groups
help: Query one or more log groups with the given query string
filters:
- profile_and_region_variables_set_with_flags
args:
- name: query
help: The query string to query the log groups for
default: fields @timestamp, @message | sort @timestamp desc
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --log-group-name
short: -l
help: The names of a log group to query for
repeatable: true
arg: log_group_name
required: true
- long: --start-time
help: The start time for the query (ISO 8601)
arg: start_time
required: true
- long: --end-time
help: The end time for the query (ISO 8601)
arg: end_time
required: true
examples:
- dtools aws logs query-log-groups 'correlationId' --start-time '2025-03-18T15:00:00Z' --end-time '2025-03-18T16:00:00Z' --log-group-name caerus-api-log-group -l /aws/lambda/revisit-prod-revisit-core-historical-schedules-s3-writer-lambda
@@ -0,0 +1,43 @@
# shellcheck disable=SC2155
export aws_region="$(get-aws-region)"
# shellcheck disable=SC2155
export aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
export query="${args[query]}"
# shellcheck disable=SC2154
export start_time="${args[--start-time]}"
# shellcheck disable=SC2154
export end_time="${args[--end-time]}"
eval "log_group_names=(${args[--log-group-name]})"
export log_file=$(mktemp)
trap "rm -f $log_file" EXIT
validate-or-refresh-aws-auth
write-logs() {
log_group="$1"
query_id="$(aws logs start-query \
--log-group-names "$log_group" \
--start-time "$(date -d "$start_time" +"%s%3N")" \
--end-time "$(date -d "$end_time" +"%s%3N")" \
--query-string "$query" \
--profile "$aws_profile" \
--region "$aws_region" \
--output json | jq -r '.queryId // empty')"
if [[ -z $query_id ]]; then
red "Unable to start query for log group: '$log_group'"
exit 1
fi
until [[ "$(aws logs get-query-results --query-id "$query_id" --profile "$aws_profile" --region "$aws_region" --query status --output text)" == "Complete" ]]; do
sleep 1
done
aws logs get-query-results --query-id "$query_id" --profile "$aws_profile" --region "$aws_region" | tr -d '\000-\037' | jq -r --arg log_group "$log_group" '.results[] | { "timestamp": (.[] | select(.field == "@timestamp") | .value), "message": (.[] | select(.field == "@message") | .value), "logGroup": $log_group }' >> "$log_file"
}
export -f write-logs
parallel -j8 write-logs {} ::: "${log_group_names[@]}"
jq -rs '. | sort_by(.timestamp) | map("\(.timestamp) \(.logGroup) \(.message)")[]' "$log_file" | sed '/^$/d'
@@ -0,0 +1,31 @@
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
declare aws_region="$(get-aws-region)"
declare temp_log_file="$(mktemp)"
set -e
# shellcheck disable=SC2064
# 'kill -- -$$' also kills the entire process group whose ID is $$
# So this means that this will also kill all subprocesses started by
# this script
trap "rm -f $temp_log_file && kill -- -$$" EXIT
validate-or-refresh-aws-auth
# shellcheck disable=SC2154
unbuffer aws --profile "$aws_profile" --region "$aws_region" logs tail "${args[log-group]}" --follow --format short --no-cli-auto-prompt --since "${args[--since]}" >> "$temp_log_file" &
if [[ ${args[--verbose]} == 1 ]]; then
if [[ ${args[--stdout]} == 1 ]]; then
tail -f "$temp_log_file"
else
lnav "$temp_log_file"
fi
elif [[ ${args[--stdout]} == 1 ]]; then
tail -f "$temp_log_file" |\
awk '{$1=""; gsub(/^[ \t]+/, "", $0); if ($0 !~ /^END|^REPORT|^START/) { print }}'
else
tail -f "$temp_log_file" |\
awk '{$1=""; gsub(/^[ \t]+/, "", $0); if ($0 !~ /^END|^REPORT|^START/) { print }}' |\
lnav
fi
@@ -0,0 +1,11 @@
set-aws-profile() {
if ( grep -q "AWS_PROFILE" ~/.bashrc ); then
sed -i "/AWS_PROFILE=/c\export AWS_PROFILE=$1" ~/.bashrc
fi
bash -c "export AWS_PROFILE=$1; exec bash"
}
# shellcheck disable=SC2154
set-aws-profile "${args[profile]}"
@@ -0,0 +1,16 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare db_instance="${args[db_instance]}"
validate-or-refresh-aws-auth
spinny-start
aws --profile "$aws_profile" \
--region "$aws_region" \
rds describe-db-instances \
--query DBInstances[] |\
jq -r '.[] | select(.DBInstanceIdentifier == "'"$db_instance"'") | .Endpoint | {"address": .Address, "port": .Port}'
spinny-stop
@@ -0,0 +1,14 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
validate-or-refresh-aws-auth
spinny-start
aws --profile "$aws_profile" \
--region "$aws_region" \
rds describe-db-instances \
--query 'DBInstances[].DBInstanceIdentifier' \
--output text
spinny-stop
@@ -0,0 +1,28 @@
name: rds
help: RDS commands
group: RDS
expose: always
dependencies:
aws: Install the latest version following the instructions at 'https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html'
jq: Install with 'brew install jq'
commands:
- name: list-db-instances
help: List all RDS DB instances for the given account by their name
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- name: fetch-db-connection-details
help: Fetch the connection details for the given RDS DB instance
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
args:
- name: db_instance
required: true
help: The RDS DB instance name
@@ -0,0 +1,8 @@
declare region
# shellcheck disable=SC2154
region="${args[region]}"
if ( grep -q "AWS_REGION" ~/.bashrc ); then
sed -i "/AWS_REGION=/c\export AWS_REGION=$region" ~/.bashrc
fi
bash -c "export AWS_REGION=$region; exec bash"
@@ -0,0 +1,10 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare secret_name="${args[--name]}"
declare secret_string="${args[--secret-string]}"
validate-or-refresh-aws-auth
aws secretsmanager create-secret --name "$secret_name" --secret-string "$secret_string" --profile "$aws_profile" --region "$aws_region"
@@ -0,0 +1,16 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare detailed_format="${args[--detailed]}"
validate-or-refresh-aws-auth
# shellcheck disable=SC2155
declare secrets=$(aws secretsmanager list-secrets --profile "$aws_profile" --region "$aws_region")
if [[ $detailed_format == 1 ]]; then
jq . <<< "$secrets"
else
jq -r '.SecretList[].Name' <<< "$secrets"
fi
@@ -0,0 +1,48 @@
name: secretsmanager
help: Secrets Manager commands
group: Secrets Manager
expose: always
dependencies:
aws: Install the latest version following the instructions at 'https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html'
jq: Install using 'brew install jq'
commands:
- name: list-secrets
help: List all AWS Secrets Manager secrets
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --detailed
help: Output the list of all secrets in the detailed format
- name: show-secret
help: Show the secret value for the specified secret
args:
- name: secret_id
required: true
help: The secret ID for which the value needs to be displayed
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --detailed
help: Output the secret value in detailed format
- name: create-secret
help: Create a new secret in Secrets Manager
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --name
help: Name for the new secret
required: true
arg: name
- long: --secret-string
help: The secret string to be stored
required: true
arg: secret_string
@@ -0,0 +1,16 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare detailed_format="${args[--detailed]}"
declare secret_id="${args[secret_id]}"
validate-or-refresh-aws-auth
declare secret_value=$(aws secretsmanager get-secret-value --secret-id "$secret_id" --profile "$aws_profile" --region "$aws_region")
if [[ $detailed_format == 1 ]]; then
jq . <<< "$secret_value"
else
jq '.SecretString' <<< "$secret_value" | sed 's|\\"|"|g' | sed -e 's/"{/{/' -e 's/}"/}/' | jq
fi
@@ -0,0 +1 @@
aws --cli-auto-prompt
@@ -0,0 +1,10 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare name="${args[--name]}"
declare value="${args[--value]}"
validate-or-refresh-aws-auth
aws ssm put-parameter --name "$name" --value "$value" --type String --profile "$aws_profile" --region "$aws_region"
@@ -0,0 +1,9 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare parameter_name="${args[parameter_name]}"
validate-or-refresh-aws-auth
aws ssm delete-parameter --name "$parameter_name" --profile "$aws_profile" --region "$aws_region"
@@ -0,0 +1,16 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare detailed_format="${args[--detailed]}"
declare parameter_name="${args[parameter_name]}"
validate-or-refresh-aws-auth
declare parameter_value=$(aws ssm get-parameter --name "$parameter_name" --profile "$aws_profile" --region "$aws_region")
if [[ $detailed_format == 1 ]]; then
jq . <<< "$parameter_value"
else
jq '.Parameter.Value' <<< "$parameter_value" | tr -d '"'
fi
@@ -0,0 +1,16 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare detailed_format="${args[--detailed]}"
validate-or-refresh-aws-auth
# shellcheck disable=SC2155
declare parameters=$(aws ssm describe-parameters --profile "$aws_profile" --region "$aws_region")
if [[ $detailed_format == 1 ]]; then
jq . <<< "$parameters"
else
jq -r '.Parameters[].Name' <<< "$parameters"
fi
@@ -0,0 +1,137 @@
name: ssm
help: SSM commands
group: SSM
expose: always
dependencies:
aws: Install the latest version following the instructions at 'https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html'
jq: Install using 'brew install jq'
commands:
- name: start-port-forwarding
help: Use SSM to connect to an EC2 instance and forward a local port to the remote machine
args:
- name: instance-id
help: The ID of the EC2 instance to connect to
required: true
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --remote-port
help: The port number of the server on the EC2 instance
arg: remote-port
default: "80"
validate: aws_ssm_port_forwarding_number
- long: --local-port
help: The port number on the local machine to forward traffic to. An open port is chosen at run-time if not provided.
arg: local-port
default: "0"
validate: aws_ssm_port_forwarding_number
- long: --host
help: Hostname or IP address of the destination server
arg: host
default: localhost
validate: aws_ssm_port_forwarding_host
examples:
- dtools aws start-port-forwarding i-0892eeaed80a5b00b --remote-port 5432 --local-port 5432 --host prod-postgres.ctm8i4qgknv3.us-east-1.rds.amazonaws.com --profile prod --region us-east-1
- name: start-ngrok-bastion-instance
help: Start an EC2 instance to act as a bastion host for ngrok
dependencies:
jq: Install with 'brew install jq'
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --hostname
arg: hostname
help: |
The hostname to forward connections to.
This will be hostnames like the ones that are only accessible via AWS ECS Service Discovery (e.g. api.caerus.local)
required: true
- long: --subnet-id
arg: subnet_id
help: The subnet ID that the instance is to be deployed into
required: true
- long: --port
help: The port on the destination hostname to forward connections to
arg: port
default: "8080"
- long: --ngrok-url
short: -u
arg: ngrok_url
help: The ngrok URL to connect to
required: true
- long: --ngrok-auth-token
short: -a
arg: ngrok_auth_token
help: The ngrok authentication token
required: true
- name: list-parameters
help: List all AWS SSM Parameter Store parameters
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --detailed
help: Output the list of all parameters in the detailed format
- name: get-parameter
help: Get the value of an AWS SSM Parameter Store parameter
args:
- name: parameter_name
required: true
help: The name of the parameter to retrieve
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --detailed
help: Output the parameter value in detailed format
- name: create-parameter
help: Create a new parameter in AWS SSM Parameter Store
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --name
help: Name for the new parameter
required: true
arg: name
- long: --value
help: The value of the parameter to be stored
required: true
arg: value
- name: update-parameter
help: Update an existing parameter in AWS SSM Parameter Store (Will create a new parameter if it does not exist)
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
- long: --name
help: Name of the parameter to update
required: true
arg: name
- long: --value
help: The value of the parameter to be stored
required: true
arg: value
- name: delete-parameter
help: Delete a parameter from AWS SSM Parameter Store
filters:
- profile_and_region_variables_set_with_flags
flags:
- import: src/components/aws/profile_flag.yml
- import: src/components/aws/region_flag.yml
args:
- name: parameter_name
required: true
help: The name of the parameter to delete
@@ -0,0 +1,108 @@
set -e
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
# shellcheck disable=SC2154
declare subnet_id="${args[--subnet-id]}"
declare hostname="${args[--hostname]}"
declare port="${args[--port]}"
declare ngrok_url="${args[--ngrok-url]}"
declare ngrok_auth_token="${args[--ngrok-auth-token]}"
validate-or-refresh-aws-auth
cleanup() {
if [[ -n "$instance_id" ]]; then
yellow "Terminating the EC2 instance..."
aws --profile "$aws_profile" --region "$aws_region" ec2 terminate-instances --instance-ids "$instance_id"
fi
}
trap "cleanup" EXIT
cyan "Ensuring the AmazonSSMRoleForInstancesQuickSetup role exists..."
if ! aws --profile "$aws_profile" --region "$aws_region" iam get-role --role-name AmazonSSMRoleForInstancesQuickSetup > /dev/null 2>&1; then
yellow "Creating the AmazonSSMRoleForInstancesQuickSetup role..."
aws --profile "$aws_profile" --region "$aws_region" iam create-role \
--role-name AmazonSSMRoleForInstancesQuickSetup \
--assume-role-policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole"}]}' \
> /dev/null
yellow "Attaching the AmazonSSMManagedInstanceCore policy to the role..."
aws --profile "$aws_profile" --region "$aws_region" iam attach-role-policy \
--role-name AmazonSSMRoleForInstancesQuickSetup \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
yellow "Attaching the AmazonSSMPatchAssociation policy to the role..."
aws --profile "$aws_profile" --region "$aws_region" iam attach-role-policy \
--role-name AmazonSSMRoleForInstancesQuickSetup \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMPatchAssociation
yellow "Creating the AmazonSSMRoleForInstancesQuickSetup instance profile..."
aws --profile "$aws_profile" --region "$aws_region" iam create-instance-profile \
--instance-profile-name AmazonSSMRoleForInstancesQuickSetup \
> /dev/null
yellow "Adding the AmazonSSMRoleForInstancesQuickSetup role to the instance profile..."
aws --profile "$aws_profile" --region "$aws_region" iam add-role-to-instance-profile \
--instance-profile-name AmazonSSMRoleForInstancesQuickSetup --role-name AmazonSSMRoleForInstancesQuickSetup \
> /dev/null
sleep 5
fi
cyan "Launching an EC2 instance..."
# shellcheck disable=SC2155
declare instance_id=$({
aws --profile "$aws_profile" --region "$aws_region" ec2 run-instances \
--image-id resolve:ssm:/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
--instance-type t2.micro \
--count 1 \
--subnet-id "$subnet_id" \
--iam-instance-profile Name=AmazonSSMRoleForInstancesQuickSetup \
--user-data $'#!/bin/bash\nwget https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz\ntar xvzf ./ngrok-v3-stable-linux-amd64.tgz -C /usr/local/bin' \
--query Instances[0].InstanceId \
--output text
})
get-instance-state() {
aws --profile "$aws_profile" --region "$aws_region" ec2 describe-instance-status \
--instance-ids "$instance_id" \
--query InstanceStatuses[0] |\
jq '. | {instance: .InstanceStatus.Details[0].Status, system: .SystemStatus.Details[0].Status}'
}
status_checks=$(get-instance-state)
until [[ $(jq -r '.instance' <<< "$status_checks") == "passed" && $(jq -r '.system' <<< "$status_checks") == "passed" ]]; do
yellow "Waiting for instance to start..."
sleep 1
status_checks=$(get-instance-state)
done
green 'Instance is running!'
yellow "Adding the ngrok authtoken to the instance..."
aws --profile "$aws_profile" --region "$aws_region" ssm start-session \
--target "$instance_id" \
--document-name AWS-StartInteractiveCommand \
--parameters command="ngrok config add-authtoken $ngrok_auth_token"
yellow 'Starting ngrok tunnel...'
cyan 'The resource will be available at the following URL: '
cyan_bold "https://$ngrok_url"
cyan "\nYou will be able to point Postman to the above URL to access the resource."
yellow_bold "\nPress 'Ctrl+C' to stop the ngrok tunnel and to terminate the EC2 instance."
red_bold "This information will only be displayed once. Please make a note of it.\n"
read -rp "To acknowledge receipt and continue, press 'Enter'." </dev/tty
aws --profile "$aws_profile" --region "$aws_region" ssm start-session \
--target "$instance_id" \
--document-name AWS-StartInteractiveCommand \
--parameters command="ngrok http ${hostname}:${port} --domain $ngrok_url"
yellow "Terminating the EC2 instance..."
aws --profile "$aws_profile" --region "$aws_region" ec2 terminate-instances --instance-ids "$instance_id"
@@ -0,0 +1,17 @@
# shellcheck disable=SC2155
declare aws_profile="$(get-aws-profile)"
declare aws_region="$(get-aws-region)"
# shellcheck disable=SC2154
declare instance_id="${args[instance-id]}"
declare remote_port="${args[--remote-port]}"
declare local_port="${args[--local-port]}"
declare host="${args[--host]}"
validate-or-refresh-aws-auth
aws ssm start-session \
--profile "$aws_profile" \
--region "$aws_region" \
--target "$instance_id" \
--document-name "AWS-StartPortForwardingSessionToRemoteHost" \
--parameters "portNumber=${remote_port},localPortNumber=${local_port},host=${host}"
@@ -0,0 +1,10 @@
# shellcheck disable=SC2155
declare aws_region="$(get-aws-region)"
declare aws_profile="$(get-aws-profile)"
# shellcheck disable=SC2154
declare name="${args[--name]}"
declare value="${args[--value]}"
validate-or-refresh-aws-auth
aws ssm put-parameter --name "$name" --value "$value" --overwrite --profile "$aws_profile" --region "$aws_region"
@@ -0,0 +1,13 @@
set-aws-auto-prompt() {
if ( grep "AWS_CLI_AUTO_PROMPT" ~/.bashrc > /dev/null 2>&1 ); then
sed -i "/AWS_CLI_AUTO_PROMPT=/c\export AWS_CLI_AUTO_PROMPT=$1" ~/.bashrc
fi
bash -c "export AWS_CLI_AUTO_PROMPT=$1; exec bash"
}
if [[ -z ${AWS_CLI_AUTO_PROMPT} || $AWS_CLI_AUTO_PROMPT == 'off' ]]; then
set-aws-auto-prompt on
else
set-aws-auto-prompt off
fi
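The `sed /pattern/c\` change-command idiom above replaces an entire matching line in place. It can be exercised in isolation; the temp file below is a stand-in for `~/.bashrc` (GNU sed's in-place `-i` is assumed, as in the script):

```shell
# Sketch of the `sed /pattern/c\` idiom: replace the whole matching line.
rc="$(mktemp)"
printf 'export AWS_CLI_AUTO_PROMPT=off\n' > "$rc"
sed -i "/AWS_CLI_AUTO_PROMPT=/c\export AWS_CLI_AUTO_PROMPT=on" "$rc"
result="$(cat "$rc")"
echo "$result"   # → export AWS_CLI_AUTO_PROMPT=on
rm -f "$rc"
```

Note that BSD/macOS sed would need `-i ''` instead of `-i`.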
@@ -0,0 +1,42 @@
# shellcheck disable=SC2154
declare item="${args[item]}"
declare backup_dest="${args[--backup-dest]}"
declare move="${args[--move]}"
if [[ $move == 1 ]]; then
if [[ -d $item ]]; then
if [[ -n $backup_dest ]]; then
yellow_bold "Backing up directory to: ${backup_dest}${item}-bak/. Original directory will no longer exist."
mv -f "$item" "${backup_dest}${item}-bak"
else
yellow_bold "Backing up directory to: ${item}-bak/. Original directory will no longer exist."
mv -f "$item" "${item}-bak"
fi
elif [[ -f $item ]]; then
if [[ -n $backup_dest ]]; then
yellow_bold "Creating backup file: ${backup_dest}${item}.bak. Original file will no longer exist."
mv -f "$item" "${backup_dest}${item}.bak"
else
yellow_bold "Creating backup file: ${item}.bak. Original file will no longer exist."
mv -f "$item" "${item}.bak"
fi
fi
else
if [[ -d $item ]]; then
if [[ -n $backup_dest ]]; then
yellow_bold "Backing up directory to: ${backup_dest}${item}-bak/."
cp -rf "$item" "${backup_dest}${item}-bak"
else
yellow_bold "Backing up directory to: ${item}-bak/."
cp -rf "$item" "${item}-bak"
fi
elif [[ -f $item ]]; then
if [[ -n $backup_dest ]]; then
yellow_bold "Creating backup file: ${backup_dest}${item}.bak."
cp -rf "$item" "${backup_dest}${item}.bak"
else
yellow_bold "Creating backup file: ${item}.bak."
cp -rf "$item" "${item}.bak"
fi
fi
fi
@@ -0,0 +1,110 @@
blue_bold "Running BleachBit"
spinny-start
readarray -t bleachbitCleanList < <(cat <<-EOF
adobe_reader.cache
adobe_reader.mru
adobe_reader.tmp
apt.autoclean
apt.autoremove
apt.clean
apt.package_lists
chromium.cache
chromium.cookies
chromium.dom
chromium.form_history
chromium.history
chromium.passwords
chromium.search_engines
chromium.session
chromium.sync
chromium.vacuum
discord.cache
discord.cookies
discord.history
discord.vacuum
elinks.history
epiphany.cache
epiphany.cookies
epiphany.dom
epiphany.passwords
epiphany.places
evolution.cache
firefox.cache
firefox.cookies
firefox.crash_reports
flash.cache
gedit.recent_documents
gimp.tmp
google_chrome.cache
google_chrome.cookies
google_chrome.dom
google_chrome.form_history
google_chrome.history
google_chrome.passwords
google_chrome.search_engines
google_chrome.session
google_chrome.sync
google_chrome.vacuum
google_earth.temporary_files
google_toolbar.search_history
java.cache
journald.clean
libreoffice.cache
libreoffice.history
openofficeorg.cache
openofficeorg.recent_documents
opera.cache
opera.cookies
opera.dom
opera.form_history
opera.history
opera.passwords
opera.session
opera.vacuum
pidgin.cache
pidgin.logs
realplayer.cookies
realplayer.history
realplayer.logs
rhythmbox.cache
rhythmbox.history
seamonkey.cache
seamonkey.chat_logs
seamonkey.cookies
seamonkey.download_history
seamonkey.history
secondlife_viewer.Cache
secondlife_viewer.Logs
skype.chat_logs
skype.installers
sqlite3.history
system.cache
system.clipboard
system.rotated_logs
system.trash
system.tmp
thumbnails.cache
thunderbird.cache
thunderbird.cookies
thunderbird.index
thunderbird.passwords
thunderbird.vacuum
transmission.history
transmission.torrents
vlc.memory_dump
vlc.mru
wine.tmp
winetricks.temporary_files
x11.debug_logs
EOF
)
for cleaner in "${bleachbitCleanList[@]}"; do
blue_bold "Running BleachBit cleaner: $cleaner"
sudo bleachbit -c "$cleaner"
done
spinny-stop
green_bold "Finished running BleachBit cleaners"
@@ -0,0 +1,66 @@
blue_bold "Cleaning build caches"
# shellcheck disable=SC2154
declare code_directory="${args[code-directory]}"
readarray -t nodeModulesList < <(find "$code_directory" -type d -name node_modules)
readarray -t buildList < <(find "$code_directory" -type d -name build)
readarray -t outList < <(find "$code_directory" -type d -name out)
readarray -t cdkOutList < <(find "$code_directory" -type d -name cdk.out)
readarray -t pycacheList < <(find "$code_directory" -type d -name __pycache__)
readarray -t cargoList < <(find "$code_directory" -type f -name Cargo.toml -exec dirname {} \;)
blue_bold "Cleaning 'node_modules' directories..."
spinny-start
for nodeModulesDirectory in "${nodeModulesList[@]}"; do
blue_bold "Cleaning 'node_modules' directory: $nodeModulesDirectory"
sudo rm -rf "$nodeModulesDirectory"
done
spinny-stop
blue_bold "Cleaning 'build' directories..."
spinny-start
for buildDirectory in "${buildList[@]}"; do
blue_bold "Cleaning 'build' directory: $buildDirectory"
sudo rm -rf "$buildDirectory"
done
spinny-stop
blue_bold "Cleaning 'out' directories..."
spinny-start
for outDirectory in "${outList[@]}"; do
blue_bold "Cleaning 'out' directory: $outDirectory"
sudo rm -rf "$outDirectory"
done
spinny-stop
blue_bold "Cleaning 'cdk.out' directories..."
spinny-start
for cdkOutDirectory in "${cdkOutList[@]}"; do
blue_bold "Cleaning 'cdk.out' directory: $cdkOutDirectory"
sudo rm -rf "$cdkOutDirectory"
done
spinny-stop
blue_bold "Cleaning 'pycache' directories..."
spinny-start
for pycacheDirectory in "${pycacheList[@]}"; do
blue_bold "Cleaning 'pycache' directory: $pycacheDirectory"
sudo rm -rf "$pycacheDirectory"
done
spinny-stop
blue_bold "Cleaning 'Rust' projects..."
spinny-start
for cargoDirectory in "${cargoList[@]}"; do
blue_bold "Cleaning rust project: $cargoDirectory"
# shellcheck disable=SC2164
pushd "$cargoDirectory" > /dev/null 2>&1
cargo clean
# shellcheck disable=SC2164
popd > /dev/null 2>&1
done
spinny-stop
blue_bold "Cleaning the ~/.m2/repository cache..."
rm -rf "$HOME"/.m2/repository
green_bold "Finished cleaning build caches"
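One caveat with the `readarray -t … < <(find …)` pattern used throughout this script: it splits on newlines, so a path containing a newline would be mangled. A NUL-delimited sketch of the same pattern (bash 4.4+, hypothetical directory layout):

```shell
# NUL-delimited variant of the find/readarray pattern (bash 4.4+).
tmp="$(mktemp -d)"
mkdir -p "$tmp/a b/node_modules" "$tmp/c/node_modules"
readarray -d '' -t nodeModulesList < <(find "$tmp" -type d -name node_modules -print0)
echo "${#nodeModulesList[@]}"   # → 2
rm -rf "$tmp"
```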
@@ -0,0 +1,36 @@
name: clean
help: System cleaning commands
group: Cleaning
expose: always
commands:
- name: bleachbit
help: |-
Perform a system-wide upkeep cleanup with BleachBit
Note: This will clean Chrome, Opera, and Chromium caches and passwords
dependencies:
bleachbit: Install from 'https://www.bleachbit.org/download'
- name: docker
help: Clean docker images, containers, and volumes
dependencies:
docker: Install with 'dtools install docker'
- name: package-caches
help: Clean package manager caches (Debian-based systems only)
filters:
- debian_based_os
- name: logs
help: Clean up system logs by deleting old logs and clearing the journal
- name: build-caches
help: Clean all build caches
completions:
- <directory>
args:
- name: code-directory
required: true
help: The base directory for all of your code repositories to recursively clean build caches from
examples:
- dtools clean build-caches ~/code
@@ -0,0 +1,13 @@
blue_bold "Cleaning docker"
blue_bold "Pruning Docker images and containers..."
spinny-start
yes | docker system prune -a
spinny-stop
blue_bold "Pruning Docker volumes..."
spinny-start
yes | docker volume prune
spinny-stop
green_bold "Finished cleaning Docker"
@@ -0,0 +1,8 @@
blue_bold "Cleaning system logs..."
blue_bold "Vacuuming journal logs older than 3 days..."
sudo journalctl --vacuum-time 3d
blue_bold "Deleting archived logs..."
sudo find /var/log -type f -name '*.gz' -delete
sudo find /var/log -type f -name '*.1' -delete
@@ -0,0 +1,26 @@
blue_bold "Cleaning packages..."
blue_bold "Cleaning apt cache..."
sudo apt-get clean
sudo apt-get autoclean
blue_bold "Removing unnecessary apt dependencies..."
sudo apt-get autoremove --purge
blue_bold "Cleaning up pip cache..."
pip cache purge
sudo pip cache purge
if (command -v snap > /dev/null 2>&1); then
blue_bold "Removing disabled snaps..."
set -eu
LANG=en_US.UTF-8 snap list --all |\
awk '/disabled/{print $1, $3}' |\
while read -r snapname revision; do
sudo snap remove "$snapname" --revision="$revision"
done
blue_bold "Purging cached Snap versions..."
sudo rm -rf /var/cache/snapd/*
fi
green_bold "Finished cleaning packages"
@@ -0,0 +1 @@
send_completions
@@ -0,0 +1,8 @@
# shellcheck disable=SC2154
datetime="${args[timestamp]}"
if [[ $datetime == "-" ]]; then
date +"%s%3N" -f -
else
date -d "$datetime" +"%s%3N"
fi
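As a sanity check, GNU date's `%s%3N` format (epoch seconds plus a three-digit millisecond field) behaves like this:

```shell
# GNU date: %s is the epoch in seconds, %3N the millisecond field.
millis="$(date -u -d "2021-01-01T00:00:00Z" +"%s%3N")"
echo "$millis"   # → 1609459200000
```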
@@ -0,0 +1,8 @@
# shellcheck disable=SC2154
datetime="${args[date]}"
if [[ $datetime == "-" ]]; then
date -u +"%Y-%m-%dT%H:%M:%S.%3NZ" -f -
else
date -u +"%Y-%m-%dT%H:%M:%S.%3NZ" -d "$datetime"
fi
@@ -0,0 +1,14 @@
set -e
# shellcheck disable=SC2154
declare gcp_location="${args[--location]}"
# shellcheck disable=SC2154
declare gcp_project="${args[--project]}"
validate-or-refresh-gcp-auth
if [[ -n $gcp_location ]]; then
harlequin -a bigquery --project "$gcp_project" --location "$gcp_location"
else
harlequin -a bigquery --project "$gcp_project"
fi
@@ -0,0 +1,133 @@
name: db
help: Database commands
group: Database
expose: always
dependencies:
harlequin: Install with 'curl -LsSf https://astral.sh/uv/install.sh | sh'
docker: Install with 'dtools install docker'
db2dbml: Install with 'npm install -g @dbml/cli'
commands:
- name: postgres
help: |-
Start an interactive Docker container with psql to experiment with PostgreSQL.
The default password is 'password'.
The current directory is also mounted as a read-only volume under '/data' to run scripts.
filters:
- postgres_not_running
flags:
- long: --dump
help: Dump the persistent DB into a single large SQL script
conflicts:
[
--tui,
--persistent,
--wipe-persistent-data,
--dump-to-dbml,
--schema,
]
- long: --persistent-dir-prefix
arg: persistent_dir_prefix
help: Specify the persistence directory ($HOME/.db/postgres/<DIR>) to load/wipe the DB from
default: 'default'
completions:
- $(ls -1 $HOME/.db/postgres/)
- long: --dump-to-dbml
help: Dump the persistent DB into DBML to be imported into dbdiagram.io
conflicts: [--tui, --persistent, --wipe-persistent-data, --dump]
- long: --schema
short: -s
help: Specify the schema to dump
needs: [--dump-to-dbml, --database]
conflicts: [--dump]
arg: schema
repeatable: true
unique: true
- long: --tui
help: Open the DB in a TUI (harlequin)
conflicts: [--dump]
- long: --persistent
help: Persist the DB data to disk (persists to ~/.db/postgres)
conflicts: [--dump]
- long: --wipe-persistent-data
help: Wipe any persistent data from the disk before starting the container
conflicts: [--dump]
needs: [--persistent]
- long: --database
help: Specify the name of the database to use
arg: database
- long: --port
help: Specify the host port to expose the DB on
default: '5432'
arg: port
- name: mysql
help: |-
Start an interactive Docker container with mysql to experiment with MySQL.
The default password is 'password'.
The current directory is also mounted as a read-only volume under '/app' to run scripts.
filters:
- mysql_not_running
flags:
- long: --persistent-dir-prefix
arg: persistent_dir_prefix
help: Specify the persistence directory ($HOME/.db/mysql/<DIR>) to load/wipe the DB from
default: 'default'
completions:
- $(ls -1 $HOME/.db/mysql/)
- long: --dump
help: Dump the persistent DB into a single large SQL script
conflicts: [--tui, --persistent, --wipe-persistent-data, --dump-to-dbml]
- long: --dump-to-dbml
help: Dump the persistent DB into DBML to be imported into dbdiagram.io
conflicts: [--tui, --persistent, --wipe-persistent-data, --dump]
- long: --tui
help: Open the DB in a TUI (harlequin)
conflicts: [--dump]
- long: --persistent
help: Persist the DB data to disk (persists to ~/.db/mysql)
conflicts: [--dump]
- long: --wipe-persistent-data
help: Wipe any persistent data from the disk before starting the container
conflicts: [--dump]
needs: [--persistent]
- long: --database
help: Specify the name of the database to use
arg: database
- long: --port
help: Specify the host port to expose the DB on
default: '3306'
arg: port
- name: bigquery
help: |-
Start a Harlequin session against BigQuery using the specified project
flags:
- long: --project
short: -p
help: The GCP project to use
arg: project
required: true
- long: --location
short: -l
arg: location
help: The GCP location to use
allowed:
import: src/components/gcp/allowed_locations.yml
@@ -0,0 +1,70 @@
set -e
trap "docker stop mysql > /dev/null 2>&1" EXIT
# shellcheck disable=SC2154
declare db="${args[--database]}"
declare port="${args[--port]}"
declare persistent_dir_prefix="${args[--persistent-dir-prefix]}"
declare data_dir="${HOME}/.db/mysql/$persistent_dir_prefix"
[[ -d $data_dir ]] || mkdir -p "$data_dir"
start-persistent-mysql-container() {
docker run -d --rm \
-v ".:/app:ro" \
-v "$data_dir:/var/lib/mysql" \
-p "$port:3306" \
--name mysql \
-e MYSQL_ROOT_PASSWORD=password \
mysql
}
if [[ ${args[--wipe-persistent-data]} == 1 ]]; then
yellow "Removing persisted data from: $data_dir..."
rm -rf "$data_dir"
fi
if [[ "${args[--persistent]}" == 1 ]]; then
start-persistent-mysql-container
spinny-start
elif [[ "${args[--dump]}" == 1 || "${args[--dump-to-dbml]}" == 1 ]]; then
start-persistent-mysql-container > /dev/null 2>&1
else
docker run -d --rm \
-v ".:/app:ro" \
-p "$port:3306" \
--name mysql \
-e MYSQL_ROOT_PASSWORD=password \
mysql
spinny-start
fi
sleep 10
# shellcheck disable=SC2154
if [[ "${args[--tui]}" == 1 ]]; then
spinny-stop
if [[ -z $db ]]; then
harlequin -a mysql -h localhost -p "$port" -U root --password password
else
harlequin -a mysql -h localhost -p "$port" -U root --password password --database "$db"
fi
elif [[ "${args[--dump]}" == 1 ]]; then
if [[ -z $db ]]; then
docker exec mysql mysqldump --protocol=tcp -u root -P 3306 --password=password --no-data --all-databases
else
docker exec mysql mysqldump --protocol=tcp -u root -P 3306 --password=password --no-data --databases "$db"
fi
elif [[ "${args[--dump-to-dbml]}" == 1 ]]; then
if [[ -z $db ]]; then
env NODE_NO_WARNINGS=1 db2dbml mysql "mysql://root:password@localhost:$port"
rm -rf dbml-error.log
else
env NODE_NO_WARNINGS=1 db2dbml mysql "mysql://root:password@localhost:$port/$db"
rm -rf dbml-error.log
fi
else
spinny-stop
docker exec -it mysql mysql -u root --password=password
fi
@@ -0,0 +1,64 @@
set -e
trap "docker stop postgres > /dev/null 2>&1" EXIT
# shellcheck disable=SC2154
declare db="${args[--database]}"
declare port="${args[--port]}"
declare persistent_dir_prefix="${args[--persistent-dir-prefix]}"
declare data_dir="${HOME}/.db/postgres/$persistent_dir_prefix"
eval "schema=(${args[--schema]:-})"
[[ -d $data_dir ]] || mkdir -p "$data_dir"
start-persistent-postgres-container() {
docker run -d --rm \
-v ".:/data" \
-v "$data_dir:/var/lib/postgresql" \
-p "$port:5432" \
--name postgres \
-e POSTGRES_PASSWORD=password \
postgres
}
if [[ ${args[--wipe-persistent-data]} == 1 ]]; then
yellow "Removing persisted data from: $data_dir..."
sudo rm -rf "$data_dir"
fi
if [[ "${args[--persistent]}" == 1 ]]; then
start-persistent-postgres-container
spinny-start
elif [[ "${args[--dump]}" == 1 || "${args[--dump-to-dbml]}" == 1 ]]; then
start-persistent-postgres-container > /dev/null 2>&1
else
docker run -d --rm \
-v ".:/data" \
-p "$port:5432" \
--name postgres \
-e POSTGRES_PASSWORD=password \
postgres
spinny-start
fi
sleep 3
# shellcheck disable=SC2154
if [[ "${args[--tui]}" == 1 ]]; then
spinny-stop
harlequin -a postgres "postgres://postgres:password@localhost:$port/$db" -f .
elif [[ "${args[--dump]}" == 1 ]]; then
docker exec postgres pg_dump -U postgres -s -F p -E UTF-8
elif [[ "${args[--dump-to-dbml]}" == 1 ]]; then
if [[ "${#schema[@]}" != 0 ]]; then
schemas_parameter="schemas=$(echo -n "${schema[*]}" | tr ' ' ',')"
env NODE_NO_WARNINGS=1 db2dbml postgres "postgresql://postgres:password@localhost:$port/$db?$schemas_parameter"
rm -rf dbml-error.log
else
env NODE_NO_WARNINGS=1 db2dbml postgres "postgresql://postgres:password@localhost:$port/$db"
rm -rf dbml-error.log
fi
else
spinny-stop
docker exec -it postgres psql -U postgres
fi
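The `schemas` query parameter above is built by joining the repeatable `--schema` values with commas. In isolation (this relies on the default `IFS` when expanding `${schema[*]}`):

```shell
# Join a bash array with commas, as done for the db2dbml schemas parameter.
schema=(public audit billing)
schemas_parameter="schemas=$(echo -n "${schema[*]}" | tr ' ' ',')"
echo "$schemas_parameter"   # → schemas=public,audit,billing
```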
@@ -0,0 +1,10 @@
# shellcheck disable=SC2154
declare file="${args[file]}"
# shellcheck disable=SC2154
declare source_format="${args[--source-format]}"
# shellcheck disable=SC2154
declare target_format="${args[--target-format]}"
# shellcheck disable=SC2154
declare output_file="${args[--output-file]:-${PWD}/${file%%."${source_format}"}.${target_format}}"
pandoc -f "$source_format" -t "$target_format" -o "$output_file" "$file" -V geometry:margin=1in
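The default `--output-file` is derived by stripping the source-format suffix from the input name with parameter expansion. Note this only produces a clean name when the file's extension literally equals the `--source-format` value (e.g. `md`, not `markdown`):

```shell
# How the default output name is derived (parameter expansion only).
file="report.md"; source_format="md"; target_format="pdf"
output_file="${file%%."${source_format}"}.${target_format}"
echo "$output_file"   # → report.pdf
```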
@@ -0,0 +1,6 @@
# shellcheck disable=SC2154
declare input_file="${args[input-file]}"
# shellcheck disable=SC2154
declare output_file="${args[--output-file]}"
qpdf --decrypt "$input_file" "$output_file"
@@ -0,0 +1,77 @@
name: document
help: Commands for manipulating documents
group: Documents
expose: always
commands:
- name: convert
help: Convert any given document into any other supported format using pandoc
dependencies:
pandoc: Install with 'brew install pandoc'
args:
- name: file
required: true
help: The file to convert
flags:
- long: --source-format
help: The format of the source file
required: true
arg: source_format
allowed:
import: src/components/documents/allowed_pandoc_source_formats.yml
- long: --target-format
help: The target format of the output file
required: true
arg: target_format
allowed:
import: src/components/documents/allowed_pandoc_target_formats.yml
- long: --output-file
arg: output_file
help: The output file with the extension (defaults to <working_directory>/<file>.<target_format>)
completions:
- <file>
- <directory>
completions:
- <file>
- <directory>
- name: merge-pdf
help: Merge a list of PDFs into a single PDF file
dependencies:
pdftk: Install with 'brew install pdftk-java'
args:
- name: output-file
help: The name of the output PDF file
required: true
flags:
- long: --input-file
short: -i
arg: input_file
help: An input file to merge into a single PDF
repeatable: true
completions:
- <file>
- <directory>
completions:
- <file>
- <directory>
- name: decrypt-pdf
help: Decrypt a PDF so it can be manipulated via CLI tools
dependencies:
qpdf: Install with 'brew install qpdf'
args:
- name: input-file
help: The PDF you wish to decrypt
required: true
flags:
- long: --output-file
arg: output_file
required: true
help: The name of the output decrypted PDF file
completions:
- <file>
- <directory>
completions:
- <file>
- <directory>
@@ -0,0 +1,6 @@
# shellcheck disable=SC2154
declare output_file="${args[output-file]}"
# shellcheck disable=SC2154
eval "input_files=(${args[--input-file]:-})"
pdftk "${input_files[@]}" output "$output_file"
@@ -0,0 +1,23 @@
name: elastic
help: Elastic Stack commands
group: Elastic
expose: always
dependencies:
docker: Install with 'dtools install docker'
docker-compose: Install with 'dtools install docker'
git: Install with 'brew install git'
commands:
- name: init
help: Initialize a local Elastic Stack (Elasticsearch + Kibana + Logstash)
- name: start
help: |-
Start a local Elastic Stack (Elasticsearch + Kibana + Logstash)
Default credentials:
Username: elastic
Password: changeme
- name: stop
help: Stop a locally running Elastic Stack (Elasticsearch + Kibana + Logstash)
@@ -0,0 +1,15 @@
declare current_dir="$PWD"
[[ -d $HOME/Applications ]] || mkdir "$HOME"/Applications
cd "$HOME"/Applications || exit
[[ -d $HOME/Applications/docker-elk ]] || git clone https://github.com/deviantony/docker-elk.git
cd docker-elk || exit
blue "Build the docker-elk stack just in case a pre-existing version of Elasticsearch needs its nodes upgraded"
docker-compose build
blue "Start the docker-elk setup container"
docker-compose up setup
cd "$current_dir" || exit
@@ -0,0 +1,12 @@
declare current_dir="$PWD"
cd "$HOME"/Applications/docker-elk || exit
blue "Start the docker-elk stack"
docker-compose up -d
yellow_bold "\n\n\nDefault credentials:"
yellow "Username: elastic"
yellow "Password: changeme"
cd "$current_dir" || exit
@@ -0,0 +1,8 @@
declare current_dir="$PWD"
cd "$HOME"/Applications/docker-elk || exit
blue "Stop the docker-elk stack"
docker-compose down
cd "$current_dir" || exit
@@ -0,0 +1,13 @@
# shellcheck disable=SC2154
epoch="${args[epoch]}"
convert-epoch() {
awk '{print substr($0, 1, length($0)-3) "." substr($0, length($0)-2);}' <<< "$1"
}
if [[ $epoch == "-" ]]; then
read -r epoch_stdin
date -u -d "@$(convert-epoch "$epoch_stdin")" +"%Y-%m-%d %H:%M:%S"
else
date -u -d "@$(convert-epoch "$epoch")" +"%Y-%m-%d %H:%M:%S"
fi
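The millisecond-epoch split can be checked standalone. The helper below mirrors `convert-epoch`; note that awk's `substr` is 1-indexed, so the seconds portion must start at position 1:

```shell
# Split a millisecond epoch into "seconds.millis" and format it (GNU date).
to_seconds() {
  awk '{print substr($0, 1, length($0)-3) "." substr($0, length($0)-2);}' <<< "$1"
}
seconds="$(to_seconds 1609459200123)"
formatted="$(date -u -d "@$seconds" +"%Y-%m-%d %H:%M:%S")"
echo "$seconds"     # → 1609459200.123
echo "$formatted"   # → 2021-01-01 00:00:00
```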
@@ -0,0 +1,13 @@
set -eo pipefail
# shellcheck disable=SC2154
declare pre_processing_pipe="${args[--pre-processing]}"
declare target_command="${args[command]}"
declare additional_xargs_arguments="${args[--additional-xargs-arguments]}"
if [[ -z $pre_processing_pipe ]]; then
# shellcheck disable=SC2154
eval "fzf --print0 --preview 'batcat {} --style=numbers --color=always' --height=75% --multi --bind '?:toggle-preview,ctrl-a:select-all' --preview-window hidden | xargs -0 $additional_xargs_arguments -o $target_command"
else
# shellcheck disable=SC2154
eval "fzf --print0 --preview 'batcat {} --style=numbers --color=always' --height=75% --multi --bind '?:toggle-preview,ctrl-a:select-all' --preview-window hidden | $pre_processing_pipe | xargs -0 $additional_xargs_arguments -o $target_command"
fi
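The `--print0 | xargs -0` handoff above keeps each fzf selection intact as a single argument, even when a filename contains spaces. A minimal stand-in for the fzf output:

```shell
# fzf --print0 emits NUL-separated selections; xargs -0 preserves each
# selection as one argument, even with spaces in the name.
count="$(printf 'a file.txt\0b.txt\0' | xargs -0 -n1 echo | wc -l)"
echo "$count"   # → 2
```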
@@ -0,0 +1,32 @@
name: artifacts
help: GCP Artifact Registry commands
group: Artifact Registry
expose: always
dependencies:
gcloud: Install the latest version following the instructions at 'https://docs.cloud.google.com/sdk/docs/install'
jq: Install using 'brew install jq'
commands:
- name: list-repositories
help: List all repositories in artifact registry for the specified project and location
filters:
- project_and_location_variables_set_with_flags
flags:
- import: src/components/gcp/project_flag.yml
- import: src/components/gcp/location_flag.yml
- name: list-images
help: List all images contained with the specified artifact registry repository
args:
- name: repository_name
required: true
help: The GCP docker repository whose images you wish to list
filters:
- project_and_location_variables_set_with_flags
flags:
- import: src/components/gcp/project_flag.yml
- import: src/components/gcp/location_flag.yml
- long: --detailed
help: Output the images with full details in JSON format
examples:
- dtools gcp artifacts list-images serving-docker
@@ -0,0 +1,13 @@
# shellcheck disable=SC2155
declare gcp_project="$(get-gcp-project)"
declare gcp_location="$(get-gcp-location)"
# shellcheck disable=SC2154
declare repository_name="${args[repository_name]}"
validate-or-refresh-gcp-auth
if [[ "${args[--detailed]}" == 1 ]]; then
gcloud artifacts docker images list "$gcp_location-docker.pkg.dev/$gcp_project/$repository_name" --format json
else
gcloud artifacts docker images list "$gcp_location-docker.pkg.dev/$gcp_project/$repository_name" 2>&1 | awk 'NR > 3 {print $1}' | xargs -I{} basename {}
fi
@@ -0,0 +1,7 @@
# shellcheck disable=SC2155
declare gcp_location="$(get-gcp-location)"
declare gcp_project="$(get-gcp-project)"
validate-or-refresh-gcp-auth
gcloud artifacts repositories list --project "$gcp_project" --location "$gcp_location" --format json 2> /dev/null | jq -r '.[] | .name' | awk -F/ '{printf("%-20s %-15s\n", $6, $4)}'
@@ -0,0 +1,62 @@
name: gcp
help: GCP commands
group: GCP
expose: always
dependencies:
gcloud: Install the latest version following the instructions at 'https://docs.cloud.google.com/sdk/docs/install'
commands:
- name: login
help: |-
Log in to GCP using SSO.
This command will also set your 'GCP_PROJECT' and 'GCP_LOCATION' environment variables.
This command is essentially a shorthand for the following commands:
dtools gcp project <PROJECT>
dtools gcp location <LOCATION>
gcloud auth login
gcloud auth application-default login
filters:
- project_and_location_variables_set_with_flags
flags:
- import: src/components/gcp/project_flag.yml
- import: src/components/gcp/location_flag.yml
examples:
- dtools gcp login -p lab -r us-central1
- dtools gcp login --project prod --location africa-south1
- |-
# When the 'GCP_PROJECT' and 'GCP_LOCATION' environment variables are already
# set
dtools gcp login
- name: project
help: Change GCP project
completions:
- $(gcloud projects list | awk 'NR > 1 {print $1}')
args:
- name: project
required: true
help: The GCP project to use
examples:
- dtools gcp project lab
- name: location
help: Change GCP location
args:
- name: location
required: true
help: The GCP location to use
allowed:
import: src/components/gcp/allowed_locations.yml
examples:
- dtools gcp location us-central1
- name: get-project-number
help: Get the GCP project number of the specified project
args:
- name: project_name
required: true
help: The name of the project whose number you wish to fetch
- import: src/commands/gcp/vertex/vertex_commands.yml
- import: src/commands/gcp/artifacts/artifacts_commands.yml
@@ -0,0 +1,6 @@
# shellcheck disable=SC2154
declare project_name="${args[project_name]}"
validate-or-refresh-gcp-auth
gcloud projects describe "$project_name" --format="value(projectNumber)"
@@ -0,0 +1,8 @@
# shellcheck disable=SC2154
declare gcp_location="${args[location]}"
if ( grep "GCP_LOCATION" ~/.bashrc > /dev/null 2>&1 ); then
sed -i "/^GCP_LOCATION=/c\export GCP_LOCATION=$gcp_location" ~/.bashrc
fi
bash -c "export GCP_LOCATION=$gcp_location; exec bash"
@@ -0,0 +1,35 @@
# shellcheck disable=SC2155
declare gcp_project="$(get-gcp-project)"
declare gcp_location="$(get-gcp-location)"
yellow "Refreshing user credentials..."
spinny-start
if ! (gcloud auth login > /dev/null 2>&1); then
spinny-stop
red_bold "Unable to log into GCP."
else
spinny-stop
close-gcp-auth-tab
green "User credentials refreshed"
fi
yellow "Refreshing application default credentials..."
spinny-start
if ! (gcloud auth application-default login > /dev/null 2>&1); then
spinny-stop
red_bold "Unable to configure GCP credentials for applications."
else
spinny-stop
close-gcp-auth-tab
green "GCP application default credentials refreshed"
fi
if ( grep "GCP_PROJECT" ~/.bashrc > /dev/null 2>&1 ); then
sed -i "/^GCP_PROJECT=/c\export GCP_PROJECT=$gcp_project" ~/.bashrc
fi
if ( grep "GCP_LOCATION" ~/.bashrc > /dev/null 2>&1 ); then
sed -i "/^GCP_LOCATION=/c\export GCP_LOCATION=$gcp_location" ~/.bashrc
fi
bash -c "export GCP_PROJECT=$gcp_project; export GCP_LOCATION=$gcp_location; exec bash"
@@ -0,0 +1,9 @@
# shellcheck disable=SC2154
declare gcp_project="${args[project]}"
if ( grep "GCP_PROJECT" ~/.bashrc > /dev/null 2>&1 ); then
sed -i "/^GCP_PROJECT=/c\export GCP_PROJECT=$gcp_project" ~/.bashrc
fi
gcloud config set project "$gcp_project"
bash -c "export GCP_PROJECT=$gcp_project; exec bash"
@@ -0,0 +1,116 @@
# shellcheck disable=SC2155
declare gcp_location="$(get-gcp-location)"
declare gcp_project="$(get-gcp-project)"
# shellcheck disable=SC2154
declare container_image="${args[--container-image]}"
declare container_image_uri="${gcp_location}-docker.pkg.dev/${gcp_project}/${container_image}"
declare container_port="${args[--container-port]}"
declare health_route="${args[--health-route]}"
declare predict_route="${args[--predict-route]}"
declare display_name="${args[--display-name]}"
declare artifact_uri="gs://${gcp_project}/${args[--model-gcs-uri]}"
declare endpoint_name="${args[--endpoint-name]}"
declare machine_type="${args[--machine-type]}"
declare accelerator="${args[--accelerator]}"
validate-or-refresh-gcp-auth
get-endpoint-id() {
gcloud ai endpoints list \
--region "$gcp_location" \
2> /dev/null |\
grep -i "$endpoint_name" |\
awk '{print $1;}'
}
endpoint-has-deployed-model() {
[[ $(gcloud ai endpoints describe "$endpoint_id" \
--region "$gcp_location" \
--format json \
2> /dev/null |\
jq -r '.deployedModels | length > 0') == "true" ]]
}
yellow "Uploading model to Vertex model registry..."
if [[ -z "${args[--model-gcs-uri]}" ]]; then
gcloud ai models upload \
--project "$gcp_project" \
--region "$gcp_location" \
--display-name "$display_name" \
--container-image-uri "$container_image_uri" \
--container-ports "$container_port" \
--container-health-route "$health_route" \
--container-predict-route "$predict_route"
else
gcloud ai models upload \
--project "$gcp_project" \
--region "$gcp_location" \
--display-name "$display_name" \
--container-image-uri "$container_image_uri" \
--container-ports "$container_port" \
--container-health-route "$health_route" \
--container-predict-route "$predict_route" \
--artifact-uri "$artifact_uri"
fi
green "Successfully uploaded model to Vertex model registry"
new_model_id="$(gcloud ai models list --sort-by ~versionCreateTime --format 'value(name)' --region "$gcp_location" 2> /dev/null | head -1)"
yellow "New model id: '$new_model_id'"
if [[ -z $(get-endpoint-id) ]]; then
red_bold "Endpoint with name '$endpoint_name' does not exist."
yellow "Creating new endpoint..."
dataset_name="$(tr '-' '_' <<< "$endpoint_name")"
gcloud ai endpoints create \
--display-name "$endpoint_name" \
--region "$gcp_location" \
--request-response-logging-rate 1 \
--request-response-logging-table "bq://${gcp_project}.${dataset_name}.serving_predict"
green "Successfully created new endpoint with name: '$endpoint_name'"
fi
endpoint_id="$(get-endpoint-id)"
yellow "Endpoint '$endpoint_name' has id: '$endpoint_id'"
if endpoint-has-deployed-model; then
old_model_id="$(gcloud ai endpoints describe "$endpoint_id" \
--region "$gcp_location" \
--format json \
2> /dev/null |\
jq -r '.deployedModels[0].model' |\
xargs basename)"
deployed_model_id="$(gcloud ai endpoints describe "$endpoint_id" \
--region "$gcp_location" \
--format json \
2> /dev/null |\
jq -r '.deployedModels[0].id')"
red "Undeploying existing model: '$old_model_id' with deployed id: '$deployed_model_id'..."
gcloud ai endpoints undeploy-model "$endpoint_id" \
--region "$gcp_location" \
--deployed-model-id "$deployed_model_id"
green "Successfully undeployed existing model: '$old_model_id'"
fi
yellow "Deploying new model to endpoint '$endpoint_id'..."
if [[ -z "$accelerator" ]]; then
gcloud ai endpoints deploy-model "$endpoint_id" \
--region "$gcp_location" \
--model "$new_model_id" \
--display-name "$display_name" \
--machine-type "$machine_type"
else
gcloud ai endpoints deploy-model "$endpoint_id" \
--region "$gcp_location" \
--model "$new_model_id" \
--display-name "$display_name" \
--machine-type "$machine_type" \
--accelerator "type=${accelerator},count=1"
fi
green "Successfully deployed model '$new_model_id' to endpoint '$endpoint_id'"
@@ -0,0 +1,12 @@
# shellcheck disable=SC2155
declare gcp_location="$(get-gcp-location)"
declare gcp_project="$(get-gcp-project)"
validate-or-refresh-gcp-auth
# shellcheck disable=SC2154
if [[ ${args[--detailed]} == 1 ]]; then
gcloud ai endpoints list --project "$gcp_project" --region "$gcp_location" --format json
else
gcloud ai endpoints list --project "$gcp_project" --region "$gcp_location" --format=json | jq -r '.[].displayName'
fi
@@ -0,0 +1,30 @@
# shellcheck disable=SC2155
declare gcp_location="$(get-gcp-location)"
# shellcheck disable=SC2154
declare file="${args[--file]}"
declare endpoint_name="${args[--endpoint-name]}"
validate-or-refresh-gcp-auth
endpoint_id="$(gcloud ai endpoints list --region "$gcp_location" --format json 2>/dev/null | jq --arg endpoint_name "$endpoint_name" -r '.[] | select(.displayName == $endpoint_name) | .deployedModels[0].id')"
if [[ -z $endpoint_id ]]; then
red "Invalid endpoint name specified: '$endpoint_name'"
red "Unable to determine endpoint ID"
exit 1
fi
model_uri="$(gcloud ai endpoints list --region "$gcp_location" --format json 2>/dev/null | jq --arg endpoint_name "$endpoint_name" -r '.[] | select(.displayName == $endpoint_name) | .name')"
if [[ -z $model_uri ]]; then
red "Unable to determine model URI from given endpoint name: '$endpoint_name' and region: '$gcp_location'"
exit 1
fi
bearer="$(gcloud auth print-access-token)"
curl -X POST \
-H "Authorization: Bearer $bearer" \
-H "Content-Type: application/json; charset=utf-8" \
-d @"${file}" \
"https://${gcp_location}-aiplatform.googleapis.com/v1/$model_uri:predict"
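As a convenience when building the `--file` payload: Vertex AI online prediction expects the JSON body to wrap inputs in an `instances` array. A minimal sketch (the feature names are hypothetical placeholders for a real model's input schema):

```shell
# Write a minimal Vertex predict request body; "instances" is the envelope the
# :predict endpoint expects, and the feature names below are made-up placeholders
cat > /tmp/predict-request.json <<'EOF'
{"instances": [{"feature_a": 1.0, "feature_b": 2.5}]}
EOF
cat /tmp/predict-request.json
```

Such a file can then be passed with `--file /tmp/predict-request.json`.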
@@ -0,0 +1,17 @@
# shellcheck disable=SC2155
declare gcp_location="$(get-gcp-location)"
declare gcp_project="$(get-gcp-project)"
# shellcheck disable=SC2154
declare endpoint_name="${args[endpoint_name]}"
validate-or-refresh-gcp-auth
endpoint_id="$(gcloud ai endpoints list --region "$gcp_location" --format json 2>/dev/null | jq --arg endpoint_name "$endpoint_name" -r '.[] | select(.displayName == $endpoint_name) | .deployedModels[0].id')"
if [[ -z $endpoint_id ]]; then
red "Invalid endpoint name specified: '$endpoint_name'"
red "Unable to determine endpoint ID"
exit 1
fi
gcloud beta logging tail "resource.type=cloud_aiplatform_endpoint AND resource.labels.endpoint_id=$endpoint_id" --project "$gcp_project"
@@ -0,0 +1,115 @@
name: vertex
help: Vertex AI commands
group: Vertex
expose: always
dependencies:
gcloud: Install the latest version following the instructions at 'https://docs.cloud.google.com/sdk/docs/install'
jq: Install with 'brew install jq'
commands:
- name: deploy-model
help: |-
Deploy a model into Vertex AI (assumes only one model is deployed on the given endpoint).
This will do the following:
- Upload the specified model into the Vertex Model Registry
- Check if an endpoint exists corresponding to the provided name
- If not, it will create an endpoint
- Undeploy any pre-existing models on the endpoint (assumes 1 model per endpoint)
- Deploy the new model to the endpoint
Always be sure to build the image first and then push it to the corresponding Artifact Registry repo, e.g.
'docker push us-central1-docker.pkg.dev/prod/serving-docker/alex-test:latest'
filters:
- project_and_location_variables_set_with_flags
flags:
- import: src/components/gcp/project_flag.yml
- import: src/components/gcp/location_flag.yml
- long: --display-name
short: -d
arg: display_name
help: Display name of the model.
required: true
- long: --container-image
short: -c
arg: container_image
help: |-
URI of the Model serving container file in the Container Registry (e.g. repository/image:latest).
You can list repositories with: dtools gcp artifacts list-repositories
You can list images with: dtools gcp artifacts list-images <REPOSITORY_NAME>
required: true
- long: --container-port
arg: container_port
help: Container port to receive HTTP requests at
default: '8080'
- long: --health-route
arg: health_route
help: HTTP path to send health checks to inside the container
default: '/isalive'
- long: --predict-route
arg: predict_route
help: HTTP path to send prediction requests to inside the container
default: '/predict'
- long: --model-gcs-uri
arg: model_gcs_uri
help: |-
Path to the directory containing the Model artifact and any of its supporting files.
If undefined, ensure the model image that is being deployed contains the model JSON within the image.
Use 'gcloud storage ls gs://<PROJECT>' to find the URI of the model artifact you wish to use
- long: --endpoint-name
short: -e
arg: endpoint_name
help: The name of the endpoint to deploy the model to (will create one if it does not already exist)
required: true
- long: --machine-type
short: -m
arg: machine_type
help: The machine type to use for the deployed model (e.g. n1-standard-4)
default: n1-standard-2
- long: --accelerator
help: The type of accelerator to attach to the machine
arg: accelerator
examples:
- dtools gcp vertex deploy-model --display-name alex-vertex-test --container-image serving-docker/alex-vertex-test:latest -e alex-vertex-test-endpoint
- dtools gcp vertex deploy-model --display-name alex-test --container-image serving-docker/alex-test:latest -e alex-test-endpoint --accelerator nvidia-tesla-t4
- dtools gcp vertex deploy-model --display-name alex-vertex-test --container-image serving-docker/alex-vertex-test:latest --model-gcs-uri model-training/388781844076/vertex-model-training-pipeline-20250319032739/store-model-in-gcs_1311307013581438976 -e alex-arrhythmia-test-endpoint
- name: predict
help: Query a Vertex endpoint with a prediction request
flags:
- import: src/components/gcp/location_flag.yml
- long: --file
short: -f
arg: file
help: The JSON file to query the model with
completions:
- <file>
- long: --endpoint-name
short: -e
arg: endpoint_name
help: The name of the endpoint to query
- name: list-endpoints
help: List all Vertex endpoints for the specified project and region
filters:
- project_and_location_variables_set_with_flags
flags:
- import: src/components/gcp/project_flag.yml
- import: src/components/gcp/location_flag.yml
- long: --detailed
help: Output the gcloud query with full details in JSON format
- name: tail-endpoint-logs
help: Tail the logs for the given endpoint
args:
- name: endpoint_name
required: true
help: The name of the endpoint whose logs you wish to tail
filters:
- project_and_location_variables_set_with_flags
flags:
- import: src/components/gcp/project_flag.yml
- import: src/components/gcp/location_flag.yml
@@ -0,0 +1,6 @@
# shellcheck disable=SC2154
if [[ "${args[--copy-to-clipboard]}" == 1 ]]; then
openssl rand -base64 32 | tr -d '\n' | xclip -sel clip
else
openssl rand -base64 32
fi
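A quick sanity check on the generator above, assuming `openssl` is installed: 32 random bytes always Base64-encode to exactly 44 characters once the trailing newline is stripped.

```shell
# ceil(32 / 3) * 4 = 44 Base64 characters (the final group carries one '=' pad)
secret="$(openssl rand -base64 32 | tr -d '\n')"
echo "${#secret}"
```

This prints `44` regardless of the bytes drawn.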
@@ -0,0 +1,14 @@
name: git
help: Git commands
group: Git
expose: always
commands:
- name: search-history
help: Search all previous tracked files for a given string to see all changes involving the specified string
args:
- name: search-string
help: The string to search all git history for
required: true
examples:
- dtools git search-history 'energy_required'
@@ -0,0 +1,8 @@
# shellcheck disable=SC2154
declare search_string="${args[search-string]}"
git rev-list --all | (
while read -r revision; do
git grep -F "$search_string" "$revision"
done
)
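The loop above can be exercised end to end in a scratch repository; everything here (the temp path, file name, and committed string) is made up for the demo:

```shell
# Create a throwaway repo with one commit, then grep every revision for the string
demo="$(mktemp -d)"
cd "$demo" || exit 1
git init -q
echo 'energy_required = 42' > config.txt
git add config.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'add config'
# Each match is printed as <revision>:<path>:<matching line>
git rev-list --all | while read -r revision; do
  git grep -F 'energy_required' "$revision" || true
done
```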
@@ -0,0 +1,3 @@
sudo apt-get update
sudo apt-get install -y python3.8 python3-pip
pip3 install --user ansible
@@ -0,0 +1,25 @@
blue_bold "Installing prerequisites..."
yes | sudo add-apt-repository universe
yes | sudo add-apt-repository multiverse
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release apt-transport-https
blue_bold "Installing the Docker GPG key..."
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
blue_bold "Setting up the Docker APT repository..."
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
blue_bold "Installing Docker..."
sudo apt-get install -y containerd.io docker-ce docker-ce-cli docker-compose-plugin docker-buildx-plugin
green_bold "Successfully installed Docker"
@@ -0,0 +1,18 @@
name: install
help: Install commands
group: Install
expose: always
commands:
- name: docker
help: Install Docker (Debian-based systems only)
filters:
- debian_based_os
- name: ansible
help: Install Ansible
- name: java
help: Install LTS OpenJDK's 8, 11, 17, and 21 (Debian-based systems only)
filters:
- debian_based_os
@@ -0,0 +1,3 @@
sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-8-jdk openjdk-11-jdk openjdk-17-jdk openjdk-21-jdk -y
@@ -0,0 +1,20 @@
# shellcheck disable=SC2154
declare sonar_url="${args[--sonar-url]}"
declare sonar_login="${args[--sonar-login]}"
declare sonar_project_key="${args[--sonar-project-key]}"
if [[ -f pom.xml ]]; then
mvn sonar:sonar \
-Dsonar.projectKey="$sonar_project_key" \
-Dsonar.host.url="$sonar_url" \
-Dsonar.login="$sonar_login"
elif [[ -f settings.gradle ]]; then
if grep -q plugins build.gradle && ! grep -q org.sonarqube build.gradle; then
sed -i '/plugins/a id "org.sonarqube" version "5.0.0.4638"' build.gradle
fi
./gradlew sonar \
-Dsonar.projectKey="$sonar_project_key" \
-Dsonar.host.url="$sonar_url" \
-Dsonar.login="$sonar_login"
fi
@@ -0,0 +1,45 @@
name: java
help: Java commands
group: Java
expose: always
dependencies:
java: Install with 'dtools install java'
commands:
- name: set-version
help: Sets the system-wide Java version
args:
- name: version
required: true
help: The Java version to use
allowed:
- '8'
- '11'
- '17'
- '21'
examples:
- dtools java set-version 17
- name: analyze-with-sonar
help: Perform static code analysis for the current directory's Java project with SonarQube
filters:
- maven_or_gradle_installed
flags:
- long: --sonar-url
short: -u
arg: sonar_url
help: The SonarQube server URL to use for analysis
required: true
- long: --sonar-login
short: -l
arg: sonar_login
help: The SonarQube login token to use for analysis
required: true
- long: --sonar-project-key
short: -k
arg: sonar_project_key
help: The SonarQube project key to use for analysis
required: true
@@ -0,0 +1,44 @@
declare basePath=/usr/lib/jvm
# shellcheck disable=SC2154
declare version="${args[version]}"
declare jdkPath="$basePath/java-${version}-openjdk-amd64/bin"
# Point the system-wide symlinks at the selected JDK (javah only ships with JDK 8)
for tool in java javac javadoc javah javap; do
sudo rm -f "/usr/bin/$tool"
if [[ -x "$jdkPath/$tool" ]]; then
sudo ln -s "$jdkPath/$tool" "/usr/bin/$tool"
fi
done
@@ -0,0 +1,35 @@
# shellcheck disable=SC2154
declare url="${args[--url]}"
# shellcheck disable=SC2154
declare name="${args[--name]}"
# shellcheck disable=SC2154
declare output="${args[--output]}"
# shellcheck disable=SC2154
declare limit="${args[--limit]}"
# shellcheck disable=SC2154
declare behaviors="${args[--behaviors]}"
# shellcheck disable=SC2154
declare exclude="${args[--exclude]}"
# shellcheck disable=SC2154
declare workers="${args[--workers]}"
# shellcheck disable=SC2154
declare wait_until="${args[--wait-until]}"
# shellcheck disable=SC2154
declare keep="${args[--keep]}"
# shellcheck disable=SC2154
declare shm_size="${args[--shm-size]}"
declare -a zimit_args=(--url "$url" --name "$name" --output "$output" --behaviors "$behaviors" --wait-until "$wait_until")
[[ -n $limit ]] && zimit_args+=(--limit "$limit")
[[ -n $exclude ]] && zimit_args+=(--exclude "$exclude")
[[ -n $workers ]] && zimit_args+=(--workers "$workers")
[[ -n $keep ]] && zimit_args+=(--keep)
docker run \
--rm \
--shm-size="$shm_size" \
-v "$output":/output \
ghcr.io/openzim/zimit zimit "${zimit_args[@]}"
@@ -0,0 +1,17 @@
# shellcheck disable=SC2154
declare output="${args[--output]}"
# shellcheck disable=SC2154
declare key_output="${args[--key-output]}"
# shellcheck disable=SC2154
declare pfx_output="${args[--pfx-output]}"
# shellcheck disable=SC2154
declare hostname="${args[--hostname]}"
sudo openssl req -x509 -newkey rsa:2048 -days 365 -nodes -out "$output" -keyout "$key_output" -subj "/C=US/ST=Colorado/L=Denver/O=ClarkeCloud/OU=IT/CN=$hostname"
sudo chmod 600 "$output"
sudo chmod 600 "$key_output"
if [[ -n $pfx_output ]]; then
sudo openssl pkcs12 -export -out "$pfx_output" -inkey "$key_output" -in "$output"
sudo chmod 600 "$pfx_output"
fi
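To confirm what the command above produced, the certificate can be inspected with `openssl x509`. A non-privileged sketch against throwaway paths in `/tmp` (the subject fields are illustrative):

```shell
# Generate a throwaway certificate/key pair, then read the subject back out of it
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
  -out /tmp/example.pem -keyout /tmp/example.key \
  -subj "/C=US/ST=Colorado/L=Denver/CN=localhost" 2>/dev/null
openssl x509 -noout -subject -in /tmp/example.pem
```

The printed subject should contain `CN = localhost` (exact spacing varies between OpenSSL versions).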
@@ -0,0 +1,17 @@
# shellcheck disable=SC2154
declare https_port="${args[--https-port]}"
# shellcheck disable=SC2154
declare proxy_target_host="${args[--proxy-target-host]}"
# shellcheck disable=SC2154
declare proxy_target_port="${args[--proxy-target-port]}"
# shellcheck disable=SC2154
declare ssl_certificate="${args[--ssl-certificate]}"
declare dtools_cert=/etc/devtools/dtools.pem
if [[ $ssl_certificate = "$dtools_cert" && ! -f $dtools_cert ]]; then
[[ -d /etc/devtools ]] || sudo mkdir /etc/devtools
sudo openssl req -new -x509 -days 365 -nodes -out "$dtools_cert" -keyout "$dtools_cert" -subj "/C=US/ST=Colorado/L=Denver/O=ClarkeCloud/OU=IT/CN=localhost"
sudo chmod 600 "$dtools_cert"
fi
sudo socat openssl-listen:"$https_port",reuseaddr,fork,cert="$ssl_certificate",verify=0 tcp:"$proxy_target_host":"$proxy_target_port"
@@ -0,0 +1,4 @@
# shellcheck disable=SC2154
declare port="${args[port]}"
docker run --rm -it --name mermaid-server -p "$port:80" tomwright/mermaid-server:latest
@@ -0,0 +1,25 @@
# shellcheck disable=SC2154
declare domain="${args[domain]}"
declare port="${args[--port]}"
declare script_file="${args[--script-file]}"
if [[ -z $script_file ]]; then
script_file="$(mktemp /dev/shm/tmp.XXXXXX)"
trap 'rm -f $script_file' EXIT
cat <<-EOF >> "$script_file"
from mitmproxy import http
import re
def request(flow: http.HTTPFlow) -> None:
match = re.search(r'$domain', flow.request.host)
if match is not None:
print(f"Request to {flow.request.host}:")
print(flow.request.method, flow.request.url)
print("Headers:", flow.request.headers)
print("Body:", flow.request.get_text())
# Requests will be automatically forwarded unless explicitly modified or killed.
EOF
fi
mitmproxy --listen-port "$port" -s "$script_file"
@@ -0,0 +1,290 @@
name: network
help: Network commands
group: Network
expose: always
commands:
- name: generate-self-signed-certificate
help: Generate a self-signed HTTPS certificate for use in testing
dependencies:
openssl: Install with either 'sudo apt install libssl-dev' or 'brew install openssl@3'
flags:
- long: --output
help: The file to write the certificate to
arg: output
required: true
completions:
- <file>
- long: --key-output
help: The output file to write the key to
arg: key_output
required: true
completions:
- <file>
- long: --hostname
help: The hostname that the certificate should be created to work with
default: localhost
arg: hostname
- long: --pfx-output
help: Output a pfx file as well
arg: pfx_output
completions:
- <file>
examples:
- dtools network generate-self-signed-certificate --output /etc/dtools/test.csr --key-output /etc/dtools/test.key
- |-
# Both in one file
dtools network generate-self-signed-certificate --output /etc/dtools/test.pem --key-output /etc/dtools/test.pem
- |-
# Create a pfx file for Radarr
dtools network generate-self-signed-certificate --output /etc/dtools/test.crt --key-output /etc/dtools/test.key --hostname 192.168.0.105 --pfx-output /etc/dtools/test.pfx
- name: https-proxy
help: Proxy HTTPS traffic
dependencies:
socat: Install with 'brew install socat'
openssl: Install with either 'sudo apt install libssl-dev' or 'brew install openssl@3'
flags:
- long: --https-port
help: The https port to proxy
default: '443'
arg: https_port
- long: --proxy-target-host
help: The target host to redirect all https traffic to
default: localhost
arg: proxy_target_host
- long: --proxy-target-port
help: The port on the target host to send all https traffic to
required: true
arg: proxy_target_port
- long: --ssl-certificate
help: The SSL certificate to use
default: /etc/devtools/dtools.pem
arg: ssl_certificate
completions:
- <file>
- name: tcp-proxy
help: Proxy all TCP traffic with simpleproxy (Debian-based systems only)
filters:
- debian_based_os
dependencies:
simpleproxy: Install with 'sudo apt install simpleproxy'
flags:
- long: --tcp-host
help: The host to listen on
default: 0.0.0.0
arg: tcp_host
- long: --tcp-port
help: The TCP port to listen on
required: true
arg: tcp_port
- long: --proxy-target-host
help: The target host to redirect all TCP traffic to
default: localhost
arg: proxy_target_host
- long: --proxy-target-port
help: The target port on the target host to redirect all TCP traffic to
required: true
arg: proxy_target_port
examples:
- dtools network tcp-proxy --tcp-host 192.168.0.253 --tcp-port 5432 --proxy-target-host localhost --proxy-target-port 5433
- name: proxy-with-nginx
help: |-
Proxy all HTTP and HTTPS requests locally to a remote server using Nginx.
This is useful when trying to proxy a remote HTTPS API that requires a specific certificate or hostname.
dependencies:
nginx: Install with 'brew install nginx'
flags:
- long: --tcp-port
help: The TCP port to listen on
arg: tcp_port
default: '8080'
- long: --proxy-target-host
help: The target host to redirect all traffic to
arg: proxy_target_host
required: true
- long: --proxy-target-protocol
help: The protocol on the host that all traffic is redirected to
arg: proxy_target_protocol
default: http
allowed:
- http
- https
examples:
- |-
# Query with curl 'http://localhost:8081/api/Token', for example
dtools network proxy-with-nginx --tcp-port 8081 --proxy-target-host some.api.com --proxy-target-protocol https
- name: mitm-proxy
help: Start a Man-in-the-Middle (MITM) proxy to intercept and log all requests
dependencies:
mitmproxy: Install with 'brew install --cask mitmproxy'
args:
- name: domain
help: The domain to intercept requests for (regex)
required: true
flags:
- long: --port
arg: port
help: The local port to run the proxy on
default: '8080'
- long: --script-file
arg: script_file
help: The script file to run on all intercepted requests (defaults to simply logging out method, url, headers, and body)
completions:
- <file>
- <directory>
validate: file_exists
examples:
- |-
# Run the proxy on port 8080 for all *google*.com domains
dtools network mitm-proxy .*google.*\.com
# Run a script/service/etc. that will be proxied by MITM proxy
export HTTP_PROXY=http://localhost:8080
export HTTPS_PROXY=https://localhost:8080
export REQUESTS_CA_BUNDLE=~/.mitmproxy/mitmproxy-ca-cert.pem
python3 vertex_model_deployment_script.py
- name: archive-website
help: Download and archive an entire website for offline viewing (Kiwix/.zim format) using OpenZim's zimit
dependencies:
docker: Install with 'dtools install docker'
flags:
- long: --shm-size
help: The size of /dev/shm to allow the container to use
arg: shm_size
default: 2gb
- long: --url
arg: url
help: The URL to be crawled
required: true
- long: --name
arg: name
help: The name of the ZIM file
required: true
- long: --output
arg: output_directory
default: '/output'
completions:
- <directory>
- long: --limit
help: Limit capture to at most <limit> URLs
arg: limit
- long: --behaviors
help: >-
Control which browsertrix behaviors are run
(defaults to 'autoplay,autofetch,siteSpecific'; add 'autoscroll' to automatically scroll pages and fetch lazy-loaded resources)
arg: behaviors
default: 'autoplay,autofetch,siteSpecific'
- long: --exclude
arg: exclude_regex
help: >-
Skip URLs that match the regex from crawling. Can be specified multiple times.
An example is '--exclude=(\?q=|signup-landing\?|\?cid=)', where URLs containing '?q=',
'signup-landing?', or '?cid=' will be excluded
- long: --workers
arg: workers
help: Number of crawl workers to run in parallel
- long: --wait-until
arg: wait_until
help: >-
Puppeteer setting for how long to wait for page to load. See https://github.com/puppeteer/puppeteer/blob/main/docs/api.md#pagegotourl-options
The default is 'load', but for static sites, '--wait-until domcontentloaded' may be used to speed up the crawl (to avoid waiting for ads to load for example).
default: 'load'
- long: --keep
help: If set, keep the WARC files in a temp directory inside the output directory
examples:
- dtools network archive-website --url URL --name myzimfile
- |-
# Exclude all video files
dtools network archive-website --url URL --name myzimfile --exclude ".*\.(mp4|webm|ogg|mov|avi|mkv)(\?.*)?$"
- dtools network archive-website --shm-size 2gb --url 'https://www.niaid.nih.gov' --name niaid-backup --exclude '.*\.(mp4|webm|ogg|mov|avi|mkv|mpeg|tar|gz|zip|rar)(\?.*)?$' --workers 16 --wait-until domcontentloaded --behaviors 'autoplay,autofetch,siteSpecific,autoscroll'
- |-
# Serve the website locally on port 9090
kiwix-serve --port 9090 myzimfile.zim
- name: warc-2-zim
help: Convert a WARC to ZIM format for easier offline viewing using OpenZim's zimit
dependencies:
docker: Install with 'dtools install docker'
flags:
- long: --shm-size
help: The size of /dev/shm to allow the container to use
arg: shm_size
default: 2gb
- long: --url
arg: url
help: The URL to be crawled
required: true
- long: --name
arg: name
help: The name of the ZIM file
required: true
- long: --output
arg: output_directory
default: '/output'
completions:
- <directory>
- long: --limit
help: Limit capture to at most <limit> URLs
arg: limit
- long: --behaviors
help: >-
Control which browsertrix behaviors are run
(defaults to 'autoplay,autofetch,siteSpecific'; add 'autoscroll' to automatically scroll pages and fetch lazy-loaded resources)
arg: behaviors
default: 'autoplay,autofetch,siteSpecific'
- long: --exclude
arg: exclude_regex
help: >-
Skip URLs that match the regex from crawling. Can be specified multiple times.
An example is '--exclude=(\?q=|signup-landing\?|\?cid=)', where URLs containing '?q=',
'signup-landing?', or '?cid=' will be excluded
- long: --workers
arg: workers
help: Number of crawl workers to run in parallel
- long: --wait-until
arg: wait_until
help: >-
Puppeteer setting for how long to wait for page to load. See https://github.com/puppeteer/puppeteer/blob/main/docs/api.md#pagegotourl-options
The default is 'load', but for static sites, '--wait-until domcontentloaded' may be used to speed up the crawl (to avoid waiting for ads to load for example).
default: 'load'
- long: --keep
help: If set, keep the WARC files in a temp directory inside the output directory
examples:
- dtools network warc-2-zim --url URL --name myzimfile
- |-
# Exclude all video files
dtools network warc-2-zim --url URL --name myzimfile --exclude ".*\.(mp4|webm|ogg|mov|avi|mkv)(\?.*)?$"
- name: mermaid-api
help: Start a local API to generate Mermaid diagrams
dependencies:
docker: Install with 'dtools install docker'
args:
- name: port
help: The port to run the API on
default: '8087'
validate: integer
examples:
- |-
curl --location --request POST 'http://localhost:8087/generate' \
--header 'Content-Type: text/plain' \
--data-raw 'graph LR
A-->B
B-->C
C-->D
C-->F
'
@@ -0,0 +1,42 @@
# shellcheck disable=SC2154
declare tcp_port="${args[--tcp-port]}"
# shellcheck disable=SC2154
declare proxy_target_host="${args[--proxy-target-host]}"
# shellcheck disable=SC2154
declare proxy_target_protocol="${args[--proxy-target-protocol]}"
# shellcheck disable=SC2155
declare temp_config_file="$(mktemp)"
# shellcheck disable=SC2064
trap "rm -f $temp_config_file" EXIT
cat <<-EOF >> "$temp_config_file"
# nginx.conf
worker_processes 1;
events {}
http {
server {
listen $tcp_port;
location / {
proxy_pass $proxy_target_protocol://$proxy_target_host;
# Forward the Host header so the remote API recognizes the request
proxy_set_header Host $proxy_target_host;
# Optional: standard reverse proxy headers
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
# Enable SNI (important for HTTPS targets)
proxy_ssl_server_name on;
}
}
}
EOF
yellow "Press 'Ctrl-c' to stop proxying"
sudo nginx -p . -g 'daemon off;' -c "$temp_config_file"