Rclone is a tool for transferring and syncing files to and from cloud storage. It supports over 40 cloud storage products, including S3 object stores, business and consumer file storage services, and standard transfer protocols. Rclone provides cloud equivalents of the Unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. It is primarily a Command Line Interface (CLI) tool, but several graphical front ends (GUIs) are also available, including Rclone Browser and the rclone Web GUI.
Rclone is available for Windows, macOS, Linux, and *BSD.
Some features that make rclone well suited to transferring data between cloud storage and Mediaflux:
- MD5/SHA-1 hashes are checked at all times to verify file integrity
- Timestamps are preserved on files
- Operations can be restarted at any time
- Can transfer files from one remote file store to another remote file store
- Multi-threaded transfers
Some endpoints that might be of interest to University users are SharePoint, OneDrive, Google Drive, Dropbox, S3, WebDAV (e.g. CloudStor), SFTP and SMB.
You can transfer data to or from any of those endpoints, connecting to Mediaflux over either the SFTP or SMB protocol. The two should feel very similar in use, but note that SMB is not encrypted, and that you need a recent version of rclone for SMB support to be present.
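If you are unsure whether your installed rclone is recent enough for SMB, you can check from the command line (a quick sketch; the exact release that introduced the smb backend is worth confirming against the rclone changelog):

```shell
# Print the installed rclone version; SMB is a relatively recent
# backend, so older distribution packages may lack it.
rclone version

# List the backends compiled into this build; "smb" should appear
# in the output if your version supports it.
rclone help backends
```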
A key advantage of rclone over other tools is that it can transfer data between two remote data stores, for example from S3 to Mediaflux, without first downloading the data to your local machine. The data still flows through your local network connection, however, so for best performance consider using a machine on the university network, such as a Researcher Desktop or a Melbourne Research Cloud instance.
Using rclone
To use rclone, you typically configure remotes (storage locations) that you then reference in transfer commands. If you prefer, you can instead specify connection parameters directly on the command line.
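The two styles look like this in practice (a sketch; the remote name mediaflux-sftp and the project path are placeholders for your own configuration):

```shell
# Style 1: a named remote, defined once via `rclone config`,
# then referenced by name in commands.
rclone ls mediaflux-sftp:/Volumes/proj-example-0000.1.1/

# Style 2: an on-the-fly remote (":sftp:") with the connection
# parameters supplied as command-line flags instead.
rclone ls :sftp:/Volumes/proj-example-0000.1.1/ \
    --sftp-host=mediaflux.researchsoftware.unimelb.edu.au \
    --sftp-user=unimelb:mfuser \
    --sftp-ask-password
```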
Typical rclone operations for copying data to or from Mediaflux (for more commands, see the list of all rclone commands):
- copy new or changed files to cloud storage
- sync (one way) to make a directory identical
- move files to cloud storage, deleting the local copy after verification
- check hashes and for missing/extra files
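As a sketch, the four operations above map onto commands like these (mediaflux-sftp and the project path are placeholders for your own remote):

```shell
SRC=~/data
DST=mediaflux-sftp:/Volumes/proj-example-0000.1.1/data

# copy: transfer new or changed files; never deletes anything
rclone copy -P "$SRC" "$DST"

# sync: make the destination identical to the source
# (deletes extra files at the destination -- use with care)
rclone sync -P "$SRC" "$DST"

# move: copy files, deleting the source copies after verification
rclone move -P "$SRC" "$DST"

# check: compare hashes and report missing/extra files
rclone check "$SRC" "$DST"
```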
Securing your configuration file
If you wish to store the passwords to your cloud storage services (or indeed to your Mediaflux project), pay attention to the security of your rclone configuration file. On a shared computer, ensure that the file is not readable by other users. For example (on Linux):
rwh@thinkpad:~$ chmod 600 ~/.config/rclone/rclone.conf
rwh@thinkpad:~$ ls -l ~/.config/rclone/rclone.conf
-rw------- 1 rwh rwh 1404 Dec 8 11:26 /home/rwh/.config/rclone/rclone.conf
You should also consider setting a password on your rclone configuration. You will be prompted for this password when connecting to remote services with rclone or changing your configuration.
rwh@thinkpad:~$ rclone config
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> s
Your configuration is not encrypted.
If you add a password, you will protect your login information to cloud services.
a) Add Password
q) Quit to main menu
a/q> a
Enter NEW configuration password:
password:
Confirm NEW configuration password:
password:
Password set
Your configuration is encrypted.
c) Change Password
u) Unencrypt configuration
q) Quit to main menu
c/u/q> q
Symbolic links
There is some subtlety around how rclone handles symbolic links (and their equivalents on various services). If you need to handle symbolic links, refer to the rclone man page or rclone website for more information.
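For example, rclone skips local symbolic links by default and logs a NOTICE for each one. Two flags on the local filesystem backend change that behaviour (a sketch, assuming a mediaflux-sftp remote and placeholder paths):

```shell
# -L / --copy-links: follow symlinks and copy the files they point to
rclone copy -L ~/data mediaflux-sftp:/Volumes/proj-example-0000.1.1/data

# -l / --links: translate symlinks to/from small .rclonelink text
# files (only meaningful on backends that support it)
rclone copy -l ~/data ~/backup
```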
Examples
We have collected a set of examples that we hope are useful for transferring data between remote endpoints and Mediaflux. Please let us know if there is a cloud service we have missed.
Mediaflux (SFTP)
This example uses Mediaflux as an rclone remote through the SFTP protocol. You can also connect to Mediaflux over SMB, but SFTP is likely the better choice because it is encrypted; SFTP support has also been in rclone longer, so it may be more mature.
If you choose to store your Mediaflux password, as in the example below, we recommend setting a password on your rclone configuration file (see Securing your configuration file, above).
rwh@thinkpad:~$ rclone config
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> mediaflux-sftp
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
27 / SSH/SFTP Connection
\ "sftp"
[snip]
Storage> 27
** See help for sftp backend at: https://rclone.org/sftp/ **
SSH host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Connect to example.com
\ "example.com"
host> mediaflux.researchsoftware.unimelb.edu.au
SSH username, leave blank for current username, rwh
Enter a string value. Press Enter for the default ("").
user> unimelb:mfuser
SSH port, leave blank to use default (22)
Enter a string value. Press Enter for the default ("").
port>
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> y
Enter the password:
password: (your university password)
Confirm the password:
password: (your university password)
Raw PEM-encoded private key, If specified, will override key_file parameter.
Enter a string value. Press Enter for the default ("").
key_pem>
Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Enter a string value. Press Enter for the default ("").
key_file>
The passphrase to decrypt the PEM-encoded private key file.
Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n>
When set forces the usage of the ssh-agent.
When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors
when the ssh-agent contains many keys.
Enter a boolean value (true or false). Press Enter for the default ("false").
key_use_agent>
Enable the use of insecure ciphers and key exchange methods.
This enables the use of the following insecure ciphers and key exchange methods:
- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1
Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Use default Cipher list.
\ "false"
2 / Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.
\ "true"
use_insecure_cipher>
Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Enter a boolean value (true or false). Press Enter for the default ("false").
disable_hashcheck>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>
Remote config
--------------------
[mediaflux-sftp]
host = mediaflux.researchsoftware.unimelb.edu.au
user = unimelb:mfuser
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
Current remotes:
Name Type
==== ====
ceph-s3 s3
cloudstor webdav
dropbox dropbox
google-drive drive
mediaflux-sftp sftp
swift swift
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
An example transferring files from the local system to Mediaflux, then checking the upload. (We use the -P option to show real-time transfer progress.)
rwh@thinkpad:~/Public$ rclone -P copy mediaflux-test mediaflux-sftp:/Volumes/proj-demonstration-1128.4.15/mediaflux-test
Enter configuration password:
password:
2022-12-08 11:27:02 NOTICE: test symlink/fh.jpg: Can't follow symlink without -L/--copy-links
Transferred: 541.377M / 541.377 MBytes, 100%, 1.582 MBytes/s, ETA 0s
Checks: 93 / 93, 100%
Transferred: 924 / 924, 100%
Elapsed time: 5m44.3s
rwh@thinkpad:~/Public$ rclone -P check mediaflux-test mediaflux-sftp:/Volumes/proj-demonstration-1128.4.15/mediaflux-test
Enter configuration password:
password:
2022-12-08 12:22:06 NOTICE: test symlink/fh.jpg: Can't follow symlink without -L/--copy-links
2022-12-08 12:22:07 NOTICE: sftp://unimelb:rhutton@mediaflux.researchsoftware.unimelb.edu.au:22//Volumes/proj-demonstration-1128.4.15/mediaflux-test: 0 differences found
2022-12-08 12:22:07 NOTICE: sftp://unimelb:rhutton@mediaflux.researchsoftware.unimelb.edu.au:22//Volumes/proj-demonstration-1128.4.15/mediaflux-test: 1016 hashes could not be checked
2022-12-08 12:22:07 NOTICE: sftp://unimelb:rhutton@mediaflux.researchsoftware.unimelb.edu.au:22//Volumes/proj-demonstration-1128.4.15/mediaflux-test: 1016 matching files
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 1016 / 1016, 100%
Elapsed time: 4.1s
Using the same remote as above, we can transfer data from Mediaflux over SFTP to our local computer, or to any remote destination we have configured. Note that when transferring between two remotes, the data still passes through the local machine (for most combinations of remotes), so for best results run rclone on a machine connected to the university network. For example, transferring data from Mediaflux to CloudStor (see the WebDAV section below for how to configure CloudStor as a remote):
rwh@thinkpad:~/wd$ rclone -P copy mediaflux-sftp:/Volumes/proj-demonstration-1128.4.15/mediaflux-test cloudstor:/mediaflux-test
Enter configuration password:
password:
Transferred: 63.503M / 63.503 MBytes, 100%, 1.150 MBytes/s, ETA 0s
Transferred: 117 / 117, 100%
Google Drive
Documentation for configuring a Google Drive remote
You probably want to generate your own Client ID first, see: https://rclone.org/drive/#making-your-own-client-id
rwh@thinkpad:~$ rclone config
2022/12/02 15:14:27 NOTICE: Config file "/home/rwh/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> google-drive
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
13 / Google Drive
\ "drive"
[snip]
Storage> 13
** See help for drive backend at: https://rclone.org/drive/ **
Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
Enter a string value. Press Enter for the default ("").
client_id> xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com
OAuth Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret> xxxxxxx-xxxxxxxxxxxxxxxx
Scope that rclone should use when requesting access from drive.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Full access all files, excluding Application Data Folder.
\ "drive"
2 / Read-only access to file metadata and file contents.
\ "drive.readonly"
/ Access to files created by rclone only.
3 | These are visible in the drive website.
| File authorization is revoked when the user deauthorizes the app.
\ "drive.file"
/ Allows read and write access to the Application Data folder.
4 | This is not visible in the drive website.
\ "drive.appfolder"
/ Allows read-only access to file metadata but
5 | does not allow any access to read or download file content.
\ "drive.metadata.readonly"
scope> 2
ID of the root folder
Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use
a non root folder as its starting point.
Enter a string value. Press Enter for the default ("").
root_folder_id>
Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want use SA instead of interactive login.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Enter a string value. Press Enter for the default ("").
service_account_file>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>
Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n>
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=YZD5CqjGwYDIPitkOeQe0Q
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
y) Yes
n) No (default)
y/n> n
--------------------
[google-drive]
client_id = xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxx-xxxxxxxxxxxxxxxx
scope = drive.readonly
token = {"access_token":"(token here)","token_type":"Bearer","refresh_token":"(token here)","expiry":"2022-12-02T16:15:17.062616116+11:00"}
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
google-drive drive
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
Transfer data from the google-drive remote we just configured to a Mediaflux project over SFTP, with the SFTP connection parameters specified on the command line:
rwh@thinkpad:~$ rclone copy -P google-drive:Drive\ test --sftp-host=mediaflux.researchsoftware.unimelb.edu.au --sftp-user=unimelb:mfuser --sftp-ask-password :sftp:/Volumes/proj-demonstration-1128.4.15/Drive\ test
Enter SFTP password: (university password)
Transferred: 49.795M / 49.795 MBytes, 100%, 1.061 MBytes/s, ETA 0s
Transferred: 11 / 11, 100%
Elapsed time: 59.1s
OneDrive/SharePoint
Rclone's documentation for configuring a OneDrive or SharePoint remote
You need to know the name of the SharePoint site you want to connect to. Replace the spaces in the name with the plus character '+', e.g. RCS Data Solutions Team becomes RCS+Data+Solutions+Team.
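The space-to-plus substitution is easy to do by hand, but for long site names you can let the shell do it (tr is standard on Linux and macOS):

```shell
# Replace every space in the site name with '+'
echo 'RCS Data Solutions Team' | tr ' ' '+'
# → RCS+Data+Solutions+Team
```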
rwh@thinkpad:~$ rclone config
Current remotes:
Name Type
==== ====
google-drive drive
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> rcs-data-solutions-team
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
22 / Microsoft OneDrive
\ "onedrive"
[snip]
Storage> 22
** See help for onedrive backend at: https://rclone.org/onedrive/ **
OAuth Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
OAuth Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>
Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n>
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=ypkECW1NaqWKW6oE01NhHg
Log in and authorize rclone for access
Waiting for code...
Got code
Choose a number from below, or type in an existing value
1 / OneDrive Personal or Business
\ "onedrive"
2 / Root Sharepoint site
\ "sharepoint"
3 / Type in driveID
\ "driveid"
4 / Type in SiteID
\ "siteid"
5 / Search a Sharepoint site
\ "search"
Your choice> 5
What to search for> RCS+Data+Solutions+Team
Found 1 sites, please select the one you want to use:
0: RCS Data Solutions Team (https://unimelbcloud.sharepoint.com/teams/resplatdata) id=unimelbcloud.sharepoint.com,f971a29b-7279-4c5c-904b-9d96cd28a6e0,dce1f480-6740-47c0-8289-6cb36cdd4c8e
Chose drive to use:> 0
Found 1 drives, please select the one you want to use:
0: Documents (documentLibrary) id=(id)
Chose drive to use:> 0
Found drive 'root' of type 'documentLibrary', URL: https://unimelbcloud.sharepoint.com/teams/resplatdata/Shared%20Documents
Is that okay?
y) Yes (default)
n) No
y/n> y
--------------------
[rcs-data-solutions-team]
token = {"access_token":"(token)","token_type":"Bearer","refresh_token":"(token)","expiry":"2022-12-03T16:15:14.286294334+11:00"}
drive_id = (id)
drive_type = documentLibrary
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
google-drive drive
rcs-data-solutions-team onedrive
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
Transfer data from the SharePoint remote we just configured to a Mediaflux project:
rwh@thinkpad:~$ rclone copy -P rcs-data-solutions-team:sharepoint\ test mediaflux-sftp:/Volumes/proj-demonstration-1128.4.15/sharepoint\ test
Enter configuration password:
password:
Transferred: 63.503M / 63.503 MBytes, 100%, 446.560 kBytes/s, ETA 0s
Transferred: 115 / 115, 100%
Elapsed time: 2m33.6s
Required Flags for SharePoint
Because SharePoint does some special things with uploaded documents, you cannot rely on a document's size or hash to determine whether a file has changed since upload, or which copy is newer.
For rclone commands that copy files to or from SharePoint (copy, sync, etc.), especially Office files such as .docx and .xlsx, append these flags so that rclone compares documents by their "Last Modified" timestamp:
--ignore-size --ignore-checksum --update
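Put together, a copy to the SharePoint remote configured above might look like this (a sketch; the folder names are placeholders):

```shell
# Compare by modification time only: SharePoint rewrites Office
# documents on upload, so size and checksum comparisons are unreliable.
rclone copy -P ~/reports rcs-data-solutions-team:reports \
    --ignore-size --ignore-checksum --update
```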
If you are using rclone mount on OneDrive or SharePoint, you may additionally want to enable --vfs-cache-mode writes or --vfs-cache-mode full. --vfs-cache-mode writes should be sufficient in most instances; see VFS File Caching in the rclone documentation for more information.
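A minimal mount sketch (the mount point is a placeholder; mounting requires FUSE on Linux/macOS or WinFsp on Windows):

```shell
mkdir -p ~/sharepoint
# Cache file writes locally so applications that write in place
# (e.g. Office) behave correctly over the mount.
rclone mount rcs-data-solutions-team: ~/sharepoint --vfs-cache-mode writes
```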
Dropbox
Documentation for creating a Dropbox rclone remote
rwh@thinkpad:~$ rclone config
Current remotes:
Name Type
==== ====
google-drive drive
rcs-data-solutions-team onedrive
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> dropbox
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
9 / Dropbox
\ "dropbox"
[snip]
Storage> 9
** See help for dropbox backend at: https://rclone.org/dropbox/ **
OAuth Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
OAuth Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>
Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n>
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=-gnfrwCcFAxmOZrAOxbbfw
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[dropbox]
token = {"access_token":"(token)","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
Transfer data from the dropbox remote we just configured to a Mediaflux project over SFTP, with the SFTP connection parameters specified on the command line:
rwh@thinkpad:~$ rclone copy -P dropbox:backups --sftp-host=mediaflux.researchsoftware.unimelb.edu.au --sftp-user=unimelb:mfuser --sftp-ask-password :sftp:/Volumes/proj-demonstration-1128.4.15/dropbox\ test
Enter SFTP password: (university password)
Transferred: 1.777M / 1.777 MBytes, 100%, 316.083 kBytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 14.9s
OpenStack Swift
Documentation for creating an OpenStack Swift rclone remote
Using the Melbourne Research Cloud object store, we can accept most of the defaults (just press Enter). We only need to specify the auth URL: https://keystone.rc.nectar.org.au:5000/v3/
rwh@thinkpad:~$ rclone config
Current remotes:
Name Type
==== ====
dropbox dropbox
google-drive drive
rcs-data-solutions-team onedrive
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> swift
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
24 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
[snip]
Storage> 24
** See help for swift backend at: https://rclone.org/swift/ **
Get swift credentials from environment variables in standard OpenStack form.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter swift credentials in the next step
\ "false"
2 / Get swift credentials from environment vars. Leave other fields blank if using this.
\ "true"
env_auth> 2
User name to log in (OS_USERNAME).
Enter a string value. Press Enter for the default ("").
user>
API key or password (OS_PASSWORD).
Enter a string value. Press Enter for the default ("").
key>
Authentication URL for server (OS_AUTH_URL).
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Rackspace US
\ "https://auth.api.rackspacecloud.com/v1.0"
2 / Rackspace UK
\ "https://lon.auth.api.rackspacecloud.com/v1.0"
3 / Rackspace v2
\ "https://identity.api.rackspacecloud.com/v2.0"
4 / Memset Memstore UK
\ "https://auth.storage.memset.com/v1.0"
5 / Memset Memstore UK v2
\ "https://auth.storage.memset.com/v2.0"
6 / OVH
\ "https://auth.cloud.ovh.net/v3"
auth> https://keystone.rc.nectar.org.au:5000/v3/
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
Enter a string value. Press Enter for the default ("").
user_id>
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
Enter a string value. Press Enter for the default ("").
domain>
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
Enter a string value. Press Enter for the default ("").
tenant>
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
Enter a string value. Press Enter for the default ("").
tenant_id>
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
Enter a string value. Press Enter for the default ("").
tenant_domain>
Region name - optional (OS_REGION_NAME)
Enter a string value. Press Enter for the default ("").
region>
Storage URL - optional (OS_STORAGE_URL)
Enter a string value. Press Enter for the default ("").
storage_url>
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
Enter a string value. Press Enter for the default ("").
auth_token>
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
Enter a string value. Press Enter for the default ("").
application_credential_id>
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
Enter a string value. Press Enter for the default ("").
application_credential_name>
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
Enter a string value. Press Enter for the default ("").
application_credential_secret>
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
Enter a signed integer. Press Enter for the default ("0").
auth_version>
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Enter a string value. Press Enter for the default ("public").
Choose a number from below, or type in your own value
1 / Public (default, choose this if not sure)
\ "public"
2 / Internal (use internal service net)
\ "internal"
3 / Admin
\ "admin"
endpoint_type>
The storage policy to use when creating a new container
This applies the specified storage policy when creating a new
container. The policy cannot be changed afterwards. The allowed
configuration values and their meaning depend on your Swift storage
provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / OVH Public Cloud Storage
\ "pcs"
3 / OVH Public Cloud Archive
\ "pca"
storage_policy>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>
Remote config
--------------------
[swift]
env_auth = true
auth = https://keystone.rc.nectar.org.au:5000/v3/
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
Current remotes:
Name Type
==== ====
dropbox dropbox
google-drive drive
rcs-data-solutions-team onedrive
swift swift
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
To enable access to your OpenStack project, you will need to download the RC file for your project: in the OpenStack web dashboard, open the User menu at the top right and choose OpenStack RC file. Once downloaded, source the file into your shell.
Transfer data from the swift remote we just configured to a Mediaflux project over SFTP, with the SFTP connection parameters specified on the command line:
rwh@thinkpad:~$ . ./bin/Example-Project-openrc.sh # source the openrc file from your OpenStack project
Please enter your OpenStack Password for project Example-Project as user mfuser@unimelb.edu.au:
rwh@thinkpad:~$ rclone ls swift:
29230774 backup/dump.sql.tgz.gpg
461692 backup/input_and_output.zip.gpg
rwh@thinkpad:~$ rclone copy -P swift:backup --sftp-host=mediaflux.researchsoftware.unimelb.edu.au --sftp-user=unimelb:mfuser --sftp-ask-password :sftp:/Volumes/proj-demonstration-1128.4.15/backup
Enter SFTP password: (unimelb password)
Transferred: 28.317M / 28.317 MBytes, 100%, 703.286 kBytes/s, ETA 0s
Transferred: 2 / 2, 100%
Elapsed time: 53.4s
WebDAV (ownCloud/Nextcloud, e.g. CloudStor)
In the web console of your Nextcloud or ownCloud account, go to the Settings menu in the top right and find the WebDAV section. This shows the URL you need to connect; for example, the URL for CloudStor is https://cloudstor.aarnet.edu.au/plus/remote.php/webdav/.
Next, create an application password: in Settings, go to the Security section and, under App passwords, enter 'rclone' as the app name and click 'Create new app password'. Note down the username and password for the configuration steps below:
rwh@thinkpad:~$ rclone config
Current remotes:
Name Type
==== ====
dropbox dropbox
google-drive drive
rcs-data-solutions-team onedrive
swift swift
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> cloudstor
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
31 / Webdav
\ "webdav"
[snip]
Storage> 31
** See help for webdav backend at: https://rclone.org/webdav/ **
URL of http host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Connect to example.com
\ "https://example.com"
url> https://cloudstor.aarnet.edu.au/plus/remote.php/webdav/
Name of the Webdav site/service/software you are using
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Nextcloud
\ "nextcloud"
2 / Owncloud
\ "owncloud"
3 / Sharepoint
\ "sharepoint"
4 / Other site/service or software
\ "other"
vendor> 2
User name
Enter a string value. Press Enter for the default ("").
user> mfuser@unimelb.edu.au
Password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> y
Enter the password: (your app password)
password:
Confirm the password: (your app password again)
password:
Bearer token instead of user/pass (eg a Macaroon)
Enter a string value. Press Enter for the default ("").
bearer_token>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[cloudstor]
url = https://cloudstor.aarnet.edu.au/plus/remote.php/webdav/
vendor = owncloud
user = mfuser@unimelb.edu.au
pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
cloudstor webdav
dropbox dropbox
google-drive drive
rcs-data-solutions-team onedrive
swift swift
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
rwh@thinkpad:~$ rclone ls cloudstor:
223026973 osm_topology2/drive.json.zip
254848134 osm_topology2/walk_full.json.zip
258549116 osm_topology2/ways.json.zip
rwh@thinkpad:~$ rclone copy -P cloudstor:osm_topology2 --sftp-host=mediaflux.researchsoftware.unimelb.edu.au --sftp-user=unimelb:mfuser --sftp-ask-password :sftp:/Volumes/proj-demonstration-1128.4.15/osm_topology2
Enter SFTP password: (university password)
Transferred: 702.309M / 702.309 MBytes, 100%, 1.770 MBytes/s, ETA 0s
Transferred: 3 / 3, 100%
Elapsed time: 6m47.3s
S3-compatible object store
Documentation for creating an S3 rclone remote
This example is specifically for the UniMelb S3-compatible object storage, but the same principles apply to other S3 variants; consult the documentation for the specific configuration required.
rwh@thinkpad:~$ rclone config
Current remotes:
Name Type
==== ====
cloudstor webdav
dropbox dropbox
google-drive drive
rcs-data-solutions-team onedrive
swift swift
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> ceph-s3
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
\ "s3"
[snip]
Storage> 4
** See help for s3 backend at: https://rclone.org/s3/ **
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ "Alibaba"
3 / Ceph Object Storage
\ "Ceph"
4 / Digital Ocean Spaces
\ "DigitalOcean"
5 / Dreamhost DreamObjects
\ "Dreamhost"
6 / IBM COS S3
\ "IBMCOS"
7 / Minio Object Storage
\ "Minio"
8 / Netease Object Storage (NOS)
\ "Netease"
9 / Scaleway Object Storage
\ "Scaleway"
10 / StackPath Object Storage
\ "StackPath"
11 / Tencent Cloud Object Storage (COS)
\ "TencentCOS"
12 / Wasabi Object Storage
\ "Wasabi"
13 / Any other S3 compatible provider
\ "Other"
provider> 3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth>
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> (enter your key id here)
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> (enter your secret key here)
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Use this if unsure. Will use v4 signatures and an empty region.
\ ""
2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
\ "other-v2-signature"
region>
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> https://objects.storage.unimelb.edu.au
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint>
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl>
The server-side encryption algorithm used when storing this object in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
3 / aws:kms
\ "aws:kms"
server_side_encryption>
If using KMS ID you must provide the ARN of Key.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / None
\ ""
2 / arn:aws:kms:*
\ "arn:aws:kms:us-east-1:*"
sse_kms_key_id>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>
Remote config
--------------------
[ceph-s3]
provider = Ceph
access_key_id = (key id)
secret_access_key = (secret key)
endpoint = https://objects.storage.unimelb.edu.au
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Transfer data from the UniMelb S3-compatible object storage remote we just configured to a Mediaflux project over SFTP, with the SFTP connection parameters specified on the command line:
rwh@thinkpad:~$ rclone lsd ceph-s3:
-1 2018-11-01 14:31:17 -1 backups
rwh@thinkpad:~$ rclone copy -P ceph-s3:backups --sftp-host=mediaflux.researchsoftware.unimelb.edu.au --sftp-user=unimelb:mfuser --sftp-ask-password :sftp:/Volumes/proj-demonstration-1128.4.15/backups
Enter SFTP password: (university password)
Transferred: 702.309M / 702.309 MBytes, 100%, 1.770 MBytes/s, ETA 0s
Transferred: 3 / 3, 100%
Elapsed time: 6m47.3s