Cheat Sheet - AWS CLI

Configuration files

# ~/.aws/credentials
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region=eu-central-1

Environment variables

# *nix shells 
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=eu-central-1

# pwsh
$env:AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
$env:AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
$env:AWS_DEFAULT_REGION="eu-central-1"

AWS_ACCESS_KEY_ID     : AWS access key
AWS_SECRET_ACCESS_KEY : AWS secret access key
AWS_DEFAULT_REGION    : AWS region
AWS_DEFAULT_PROFILE   : name of the CLI profile to use
AWS_DEFAULT_OUTPUT    : default output format (json, yaml, text, or table)
AWS_CONFIG_FILE       : path to a CLI config file if a custom config file is used (default: ~/.aws/config)

Instead of using the --profile flag all the time, you can simply define the profile by exporting it to your environment:

export AWS_PROFILE=unprivileged-profile

But be aware of access key preference:

If the AWS_PROFILE environment variable is set and the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are also set, then the credentials provided by AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY override the credentials of the profile referenced by AWS_PROFILE.

You have to unset both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and set AWS_PROFILE; then the profile's credentials are used:

unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
export AWS_PROFILE=unprivileged-profile
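A small guard can catch this precedence trap before it bites. This is a sketch; check_static_keys is a hypothetical helper, not part of the AWS CLI:

```shell
# check_static_keys: hypothetical helper that warns when static credentials
# in the environment would override AWS_PROFILE
check_static_keys() {
  if [ -n "${AWS_ACCESS_KEY_ID:-}" ] || [ -n "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "warning: AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY are set and override AWS_PROFILE" >&2
    return 1
  fi
}

# Example: run the check before commands that should use the profile, e.g.
# check_static_keys && aws sts get-caller-identity
```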

Working with roles

# cat ~/.aws/credentials
[unprivileged-profile]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

[privileged-role]
role_arn = ARN_OF_IAM_ROLE_IN_CHILD_ACCOUNT
source_profile = unprivileged-profile
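If the role requires MFA or you want a recognizable session name in CloudTrail, the role profile can carry that too. mfa_serial and role_session_name are standard config keys; the ARNs below are placeholders:

```ini
[privileged-role]
role_arn = ARN_OF_IAM_ROLE_IN_CHILD_ACCOUNT
source_profile = unprivileged-profile
mfa_serial = arn:aws:iam::123456789012:mfa/my-user
role_session_name = my-session
```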

List profile information

# Show current profile
aws configure list

# Show all profiles
aws configure list-profiles | sort

Configure pager

The following example sets the default to disable the use of a pager.

[default]
cli_pager=

It can also be disabled with the --no-cli-pager command line option or by setting the AWS_PAGER environment variable to an empty string:

export AWS_PAGER=""

Filter and query examples

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=myapp-api-instance" \
  --query "Reservations[*].Instances[*].PrivateIpAddress"

aws ec2 describe-images --owners self \
  --filters "Name=name,Values=myapp-server-*" \
  --query 'reverse(sort_by(Images, &CreationDate))[*].[CreationDate,Name,ImageId]' \
  --output table

aws ec2 describe-images --owners aws-marketplace \
  --filters "Name=product-code,Values=aw0evgkw8e5c1q413zgy5pjce" \
  --query "sort_by(Images, &CreationDate)[-1].[ImageId]"

aws ec2 describe-images --owners aws-marketplace \
  --filters "Name=name,Values=CentOS Linux 7*" \
  --query 'reverse(sort_by(Images, &CreationDate))[*].[CreationDate,Name,ImageId]' \
  --output table

aws ec2 describe-images --owners aws-marketplace \
  --filters "Name=virtualization-type,Values=hvm" "Name=root-device-type,Values=ebs" "Name=product-code,Values=aw0evgkw8e5c1q413zgy5pjce" \
  --query 'reverse(sort_by(Images, &CreationDate))[*].[CreationDate,Name,ImageId]' \
  --output table

aws s3api list-objects-v2 --bucket "myapp-backup-log-bucket" --query 'Contents[?LastModified >= `2020-12-09`][].Key'

aws ecs describe-services --cluster myapp-global --services SvcECS-myapp-global-discserv-demo --query 'services[0].taskDefinition'

aws deploy get-deployment-target --deployment-id d-KD5KWT432 --target-id myapp-global:SvcECS-myapp-global-discserv-demo --query "deploymentTarget.ecsTarget.status" --output text

ALB_URL=$(aws elbv2 describe-load-balancers \
    --names alb-myapp-global-discserv-demo \
    --output text \
    --query "LoadBalancers[*].DNSName")

LAST_DEPLOYMENT=$(aws deploy list-deployments \
    --application-name "${codedeploy_application_name}" \
    --deployment-group-name "${codedeploy_deployment_group_name}" \
    --query "deployments" \
    --max-items 1 \
    --output text \
    | head -n 1)

DEPLOYMENT_STATE=$(aws deploy get-deployment \
    --deployment-id "${LAST_DEPLOYMENT}" \
    --query "deploymentInfo.status" \
    --output text)
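Building on the two snippets above, a polling loop can wait until the deployment reaches a terminal state. This is a sketch: deployment_done is a hypothetical helper, and Succeeded/Failed/Stopped are the terminal statuses CodeDeploy reports in deploymentInfo.status:

```shell
# deployment_done: hypothetical helper that returns 0 for terminal
# CodeDeploy statuses and 1 for in-flight ones
deployment_done() {
  case "$1" in
    Succeeded|Failed|Stopped) return 0 ;;  # terminal states
    *) return 1 ;;                         # Created/Queued/InProgress etc.
  esac
}

# Poll every 10s until the deployment finishes (assumes LAST_DEPLOYMENT from above):
# until deployment_done "$(aws deploy get-deployment \
#     --deployment-id "${LAST_DEPLOYMENT}" \
#     --query "deploymentInfo.status" --output text)"; do
#   sleep 10
# done
```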

aws elbv2 describe-listeners --output text \
    --load-balancer-arn "arn:aws:elasticloadbalancing:eu-central-1:123456789012:loadbalancer/app/alb-myapp-global-discserv-demo/85e7d9c4b893b91f" \
    --query 'Listeners[?Port==`80`].ListenerArn'

aws elbv2 describe-listeners \
    --load-balancer-arn "${LB_ARN}" \
    --query 'Listeners[?Port==`443`].ListenerArn' \
    --output text

aws elbv2 describe-listeners \
    --listener-arns "arn:aws:elasticloadbalancing:eu-central-1:123456789012:listener/app/alb-myapp-global-discserv-demo/85e7d9c4b893b91f/02a841099f705adb" \
    --query 'Listeners[0].DefaultActions[0].TargetGroupArn' \
    --output text

Import credentials csv from AWS Console

The CLI docs show the following example:

aws configure import --csv "file://${HOME}/Downloads/${USERNAME}_accessKeys.csv"  --profile-prefix "${PROJECT_SHORTNAME}-"

However, you’ll get the following error when trying to import the csv:

Expected header "User Name" not found

That is because the file is actually missing a column called “User Name” whose value is used to name the profile.

Wrong:

Access key ID,Secret access key
AKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Correct:

User Name,Access key ID,Secret access key
megamorf,AKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
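Alternatively, a one-liner can prepend the missing column. The user name megamorf and the inline csv are placeholders; redirect the output into a new file and feed that to aws configure import:

```shell
# Prepend a "User Name" column so `aws configure import` accepts the csv
awk -v user="megamorf" 'NR==1 {print "User Name," $0; next} {print user "," $0}' <<'EOF'
Access key ID,Secret access key
AKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
```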

You can also use the script aws-import-credentials-csv to correctly import the csv until the AWS CLI supports the Console's headerless csv format:

#!/usr/bin/awk -f

BEGIN {
    FS=","
    # leading blank line separates the new section when appending to ~/.aws/credentials
    print ""
    # first argument is the profile name, i.e. the ini section header
    header="[" ARGV[1] "]"
    # clear it so awk doesn't try to open it as an input file
    ARGV[1]=""
    print header
}
# only process line 2 of CSV
FNR==2 {
    print "aws_access_key_id=" $1
    print "aws_secret_access_key=" $2
}

Which can be run as follows:

$ ./aws-import-credentials-csv foo_profile $HOME/Downloads/credentials.csv | tee -a $HOME/.aws/credentials

[foo_profile]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Using an external tool to provide AWS credentials

AWS CLI and programs using the AWS SDK support invoking an external program to generate credentials:

[profile developer]
credential_process = /opt/bin/awscreds-custom --username helen

The docs say:

  • output the following JSON object on stdout
  • the SessionToken and Expiration properties are optional
{
  "Version": 1,
  "AccessKeyId": "an AWS access key",
  "SecretAccessKey": "your AWS secret access key",
  "SessionToken": "the AWS session token for temporary credentials", 
  "Expiration": "ISO8601 timestamp when the credentials expire"
}  
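A minimal credential_process program only has to print that JSON document. The sketch below uses the hard-coded example keys purely for illustration; a real implementation would fetch them from a secret store, never embed them:

```shell
# emit_credentials: prints a credential_process JSON document
# (static example keys for illustration only -- never hard-code real keys)
emit_credentials() {
  cat <<'EOF'
{
  "Version": 1,
  "AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
  "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
EOF
}

emit_credentials
```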

I have created the following script to retrieve AWS credentials from our password manager Bitwarden:

#!/usr/bin/env bash

BW_VAULT_ITEM_ID="${1}"
BW_VAULT_ORG_ID="${2}"

show_usage()
{
   echo "Retrieve AWS secret ID and key from Bitwarden username and password"
   echo
   echo "Syntax: $(basename "$0") <ItemID> [<OrgID>]"
   echo
}

if [ -z "$*" ]; then
    show_usage
    exit 0
fi

if ! command -v bw &> /dev/null; then
  echo "Error: Bitwarden CLI missing - install with 'brew install bitwarden-cli'" >&2
  exit 1
fi

# Include bashrc if available
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi

if [ -z "${BW_VAULT_ORG_ID}" ]; then
    ORG_PARAM=""
else
    ORG_PARAM="--organizationid ${BW_VAULT_ORG_ID}"
fi

if [ -z "${BW_SESSION}" ] ; then # bw session is missing
  if command -v bwunlock &> /dev/null ; then # unlock bw with wrapper function
    bwunlock > /dev/null
  else # manually unlock bw
    echo '$BW_SESSION not found! - You need to login to the vault' >&2
    BW_SESSION=$(bw unlock --raw)
  fi
fi

# Note: ${ORG_PARAM} is deliberately unquoted so it word-splits into flag and value
CREDS=$(bw --nointeraction get ${ORG_PARAM} --session "${BW_SESSION}" --raw item "${BW_VAULT_ITEM_ID}")
if [ $? -ne 0 ] ; then
    echo "Error: getting item from bitwarden failed. Invalid session?" >&2
    exit 1
fi

echo "${CREDS}" | jq '.login | {Version: 1, AccessKeyId: .username, SecretAccessKey: .password}'

Then add it to your profile:

[personal-bw-cred]
credential_process = /usr/local/bin/aws-bw 0c06d2b4-cc12-4283-b5b2-a3412149d378
region = eu-central-1

[shared-bw-cred]
credential_process = /usr/local/bin/aws-bw 0c06d2b4-cc12-4283-b5b2-a3412149d378 318e6c95-78e8-4a7b-b50a-52c543ae3a8f
region = eu-central-1

Example:

▶ aws --profile personal-bw-cred ec2 describe-vpcs
{
    "Vpcs": [
        {
            "CidrBlock": "10.0.0.0/16",
            "DhcpOptionsId": "dopt-016788a064b2e8333",
[output omitted...]

S3 - The order of the parameters matters

I’d like to point out that the --exclude "*" isn’t a typo. If you don’t add it, the include will match everything. As per the documentation: “Note that, by default, all files are included. This means that providing only an --include filter will not change what files are transferred. --include will only re-include files that have been excluded from an --exclude filter. If you only want to upload files with a particular extension, you need to first exclude all files, then re-include the files with the particular extension.”

The exclude and include filters must be applied in a specific order: exclude first, then include. The reverse will not work.

aws s3 cp . s3://data/ --recursive  --include "2016-08*" --exclude "*" 

This fails because the order of the parameters matters here: the trailing --exclude "*" overrides the earlier --include, so nothing is copied.

aws s3 cp . s3://data/ --recursive --exclude "*" --include "2016-08*"

This one works because we first exclude everything and then re-include the specific prefix.

aws s3 cp . s3://data/ --recursive --exclude "*" --include "2016-08*" --exclude "*/*"

The final --exclude makes sure that nothing is picked up from the subdirectories traversed by --recursive.

sync is recursive by default:

aws s3 sync . s3://data/ --exclude "*" --include "2016-08*"

S3 - Check if a file exists

# Check if file exists
# The HEAD action retrieves metadata from an object without returning the object itself.
if aws s3api head-object --bucket mybucket --key builds/file.new.ext > /dev/null 2>&1; then
  aws s3 cp s3://mybucket/builds/file.new.ext ./file.ext
else
  aws s3 cp s3://mybucket/builds/file.old.ext ./file.ext
fi