AWS CLI
The AWS Command Line Interface (AWS CLI) is an open-source tool built on top of the AWS SDK for Python (Boto) that provides Linux-like commands for interacting with AWS services (including S3 object storage).
1. How to install AWS CLI
macOS
It is recommended that Mac users install AWS CLI via Homebrew.
1. Install Homebrew:
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
2. Install AWS CLI (after Homebrew is installed):
$ brew update
$ brew install awscli
Ubuntu, Debian
$ sudo apt-get update
$ sudo apt-get install awscli
RHEL, CentOS or Fedora
$ sudo yum -y update
$ sudo yum install awscli
Windows
If you prefer not to install AWS CLI via the pip method listed below, there is also an MSI installer for deploying AWS CLI in a Windows environment. Please follow the instructions here.
Installing AWS CLI via Python Package Manager (pip)
AWS CLI is a Python module, so you can also install it using Python's package manager, pip. Installing AWS CLI via pip typically gets you the latest version, and no superuser privileges are needed because it is deployed as a Python module. Ubuntu is used in the example below, but the steps should be similar on other Linux distributions, Mac or Windows environments:
$ sudo apt-get install python3-pip
$ pip3 install awscli
$ pip3 install awscli --upgrade
After the installation, open a Terminal window and execute the following command:
$ aws --version
and you should see something like the following (the version number can vary):
aws-cli/1.16.20 Python/3.7.0 Darwin/16.7.0 botocore/1.12.10
If you do, you have successfully installed AWS CLI.
2. Obtain the access details of your S3-compatible object storage
You will be provided with the access information for your S3 object storage (i.e. your access_key and secret_key) via email.
3. Configure your AWS CLI environment
Once your user account (either an admin or an access account) is created, you will need to configure your AWS CLI environment using the information provided in the provisioning email, by executing the command below:
$ aws configure
You will now need to provide the information for the following prompts:
AWS Access Key ID [None]:
copy-and-paste your access_key and press enter,
AWS Secret Access Key [None]:
copy-and-paste your secret_key and press enter,
Default region name [None]:
leave it blank and just press enter,
Default output format [None]:
enter json and press enter.
The aws configure command will create two files as a result:
Credentials file: located at ~/.aws/credentials on Linux and Mac, or C:\Users\USERNAME\.aws\credentials on Windows. This file can contain multiple named profiles in addition to a default profile.
Configuration file: located at ~/.aws/config on Linux and Mac, or at C:\Users\USERNAME\.aws\config on Windows. This file can contain a default profile, named profiles, and CLI-specific configuration parameters for each.
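After running aws configure with the answers above, the two files should look something like this (the placeholder keys below stand in for your real ones; no region line appears because that prompt was left blank):

~/.aws/credentials:

```ini
[default]
aws_access_key_id = UF1xxxxxxxxxxxxxxxxx
aws_secret_access_key = pxhxxxxxxxxxxxxxxxxxxxxxxxxx
```

~/.aws/config:

```ini
[default]
output = json
```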
You can add additional configuration values directly to those files or via the aws configure set command. A full list of configuration values can be found here. For example, to set the number of concurrent requests to 20 (from the default of 10):
$ aws configure set default.s3.max_concurrent_requests 20
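That command writes a nested s3 section into your config file; afterwards the default profile in ~/.aws/config should contain something like:

```ini
[default]
output = json
s3 =
    max_concurrent_requests = 20
```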
3.1. More than one S3 object storage?
You will need to create an individual profile for each S3-compatible object storage that you want to access. To create a new credential profile, open your credentials file (its location can be found above) and add the following at the bottom of that file:
[2017UOM025]
aws_access_key_id = UF1xxxxxxxxxxxxxxxxx
aws_secret_access_key = pxhxxxxxxxxxxxxxxxxxxxxxxxxx
You can modify the contents of each profile by using the --profile «ANY-PROFILE-NAME» option, e.g.:
$ aws configure --profile project02
and then just follow the same steps as stated above. You can then append this profile option to any of the S3 file commands listed in the sections below.
NOTE: please keep your access_key and secret_key secure!!
4. Test access to your S3-compatible object storage
Open a Terminal window and execute the following command (make sure you have configured your AWS CLI as stated in the steps above):
$ aws s3 ls --recursive --human-readable --summarize --endpoint-url https://objects.storage.unimelb.edu.au
or for a specific S3 object storage (e.g. project02):
$ aws s3 ls --recursive --human-readable --summarize --profile project02 --endpoint-url https://objects.storage.unimelb.edu.au
and you will probably see something similar to the output below (if there are no buckets/objects inside):
$ aws s3 ls --recursive --human-readable --summarize --endpoint-url https://objects.storage.unimelb.edu.au
Total Objects: 0
Total Size: 0 Bytes
Try the following command (to create a bucket/folder):
$ aws s3 mb s3://testBucket --endpoint-url https://objects.storage.unimelb.edu.au
and you should see the following message:
make_bucket: testBucket
If you re-run the aws s3 ls command above, you will see the following output:
2018-09-25 19:05:46 testBucket
5. Some useful examples of AWS CLI
5.0. Do I need to include "--endpoint-url https://objects.storage.unimelb.edu.au" all the time?
The standard AWS CLI uses *.amazonaws.com as its default service endpoint, so in order to use the University's S3-compatible object storage, you will need to specify a different endpoint. There is an awscli plugin called awscli-plugin-endpoint that you can install to set a different service endpoint for your AWS CLI.
Via Homebrew:
$ /usr/local/opt/awscli/libexec/bin/pip install awscli-plugin-endpoint
Via pip:
$ pip install awscli-plugin-endpoint
After the plugin is installed, run the following two commands to set the endpoint to https://objects.storage.unimelb.edu.au:
$ aws configure set plugins.endpoint awscli_plugin_endpoint
$ aws configure set s3.endpoint_url https://objects.storage.unimelb.edu.au
or if you have multiple AWS CLI profiles, you can include a specific profile name, e.g.:
$ aws configure --profile project02 set plugins.endpoint awscli_plugin_endpoint
$ aws configure --profile project02 set s3.endpoint_url https://objects.storage.unimelb.edu.au
Now you can issue AWS CLI S3 commands without using the --endpoint-url parameter, e.g.:
$ aws s3 cp testFile_500MB.file s3://testBucket
or
$ aws s3 cp testFile_500MB.file s3://testBucket --profile project02
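If you would rather not install the plugin, a small shell function can append the endpoint for you. A minimal sketch (the function name s3uom is made up for illustration):

```shell
# Hypothetical wrapper around "aws s3" that always appends the University endpoint.
s3uom() {
  aws s3 "$@" --endpoint-url https://objects.storage.unimelb.edu.au
}
```

You can then run, e.g., `s3uom ls` or `s3uom cp testFile_500MB.file s3://testBucket` without typing the endpoint each time.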
5.1. How to upload a file to your S3-compatible object storage (using cp)
$ aws s3 cp testFile_500MB.file s3://testBucket --endpoint-url https://objects.storage.unimelb.edu.au
5.2. How to upload a directory of files to your S3-compatible object storage (using cp)
$ aws s3 cp --recursive /random/ s3://testBucket/random/ --endpoint-url https://objects.storage.unimelb.edu.au
and you should see the following progress until it is finished:
upload: Users/netTest/random/testFile1.1 to s3://testBucket/random/testFile1.1
upload: Users/netTest/random/testFile1.2 to s3://testBucket/random/testFile1.2
...
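Worth knowing: AWS CLI also provides aws s3 sync, which copies only new or changed files, so an interrupted directory upload can be re-run without re-sending everything. A sketch using the same paths as above, wrapped in a made-up function name for illustration:

```shell
# "sync" compares source and destination and uploads only what is missing or changed.
sync_random() {
  aws s3 sync /random/ s3://testBucket/random/ --endpoint-url https://objects.storage.unimelb.edu.au
}
```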
5.3. How to list all files inside your bucket (folder)
$ aws s3 ls s3://testBucket/ --endpoint-url https://objects.storage.unimelb.edu.au
5.4. How to delete files inside your object storage
For deleting a single file inside a bucket (folder):
$ aws s3 rm s3://testBucket/random/testFile1.1 --endpoint-url https://objects.storage.unimelb.edu.au
For deleting ALL files inside a bucket (folder):
$ aws s3 rm --recursive s3://testBucket/random/ --endpoint-url https://objects.storage.unimelb.edu.au
For deleting a bucket (folder):
$ aws s3 rb s3://testBucket --endpoint-url https://objects.storage.unimelb.edu.au
remove_bucket: testBucket
NOTE: you will need to remove/delete all the files inside the bucket/folder first before you can remove the bucket; otherwise you will see this message: An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: Unknown.
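If you do want to delete a non-empty bucket in one step, the rb command supports a --force flag that removes every object first and then the bucket itself. A sketch wrapping it in a made-up helper function (use with care, this is destructive):

```shell
# Hypothetical helper: force-remove a bucket and all of its contents.
# --force deletes every object in the bucket before removing the bucket itself.
rb_force() {
  aws s3 rb "s3://$1" --force --endpoint-url https://objects.storage.unimelb.edu.au
}
```

E.g. `rb_force testBucket` would empty and remove testBucket in a single command.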
External Links
A list of available AWS CLI S3 file commands can be found here.