These are a collection of command-line interface (CLI) clients for Mediaflux developed at the University of Melbourne. They are primarily used for uploading and downloading data to and from a Mediaflux server, as well as verifying that transfers have completed successfully. The clients are written in Java and communicate with Mediaflux over HTTPS, which provides secure, efficient, restartable uploads and downloads with strong data integrity guarantees. Checksum verification (CRC32) can also be enabled to confirm the integrity of transferred files on both the local and remote sides.
The main client utilities include:
unimelb-mf-download – provides efficient, restartable downloads from Mediaflux to the local host.
unimelb-mf-upload – provides efficient, restartable uploads from the local host to Mediaflux.
unimelb-mf-check – provides directory comparison/verification between the local file system and Mediaflux (i.e., confirming the source and destination match).
Additional utilities:
mexplorer – A shell wrapper for Mediaflux Explorer, enabling easy launch from the command line.
aterm – A shell wrapper for Mediaflux aterm for command-line scripting.
aterm-gui – A shell wrapper for launching Mediaflux aterm in GUI mode.
aterm-import – A wrapper for the aterm import command (run aterm help import for options).
aterm-download – A wrapper for the aterm download command (run aterm help download for options).
Additional platform-specific wrapper scripts are available in bin/unix (macOS/Linux) and bin\windows (Windows).
Install
Spartan HPC Users
The unimelb-mf-clients package is already pre-installed on Spartan HPC as a module, so you do not need to install it yourself. To use it, load the module with the command:
module load unimelb-mf-clients
You can then run any of the utilities included in the package.
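For example, after loading the module you could print a client's usage (no server connection is required for --help):

```
module load unimelb-mf-clients
unimelb-mf-upload --help
unimelb-mf-download --help
```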
Manual installation
On other platforms, download the latest release from the unimelb-mf-clients GitLab repository. Extract the downloaded .zip or .tar.gz file and place the resulting unimelb-mf-clients-x.x.x directory in your preferred long-term location, for example the Desktop on Windows, or your home directory on Linux or macOS.
Add unimelb-mf-clients utilities to PATH
Optionally, you can add unimelb-mf-clients utilities to your PATH environment variable so they can be run from any location.
Windows
For example, if unimelb-mf-clients was extracted to your Desktop folder, you might add %USERPROFILE%\Desktop\unimelb-mf-clients-0.8.x\bin\windows to your PATH environment variable. This will allow you to run the utility commands from any folder without specifying the path to the executable file.
Click the Start button, type env and run Edit the environment variables for your account.
Under User variables for <username>, click the Path entry and click Edit.... On Windows 10, you can add an additional row:
Mac OS
To add the unimelb-mf-clients utilities to your PATH environment variable, edit the .zshrc file in your home directory (assuming you are using the default zsh shell on macOS):
Open Terminal and enter the following command to open the ~/.zshrc file with TextEdit:
open ~/.zshrc
Add the following line (ensuring the path matches the location where unimelb-mf-clients was extracted):
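For example, if the package was extracted to your home directory (the directory name below, including the version number, is a placeholder; use your actual directory name):

```shell
# Append the unimelb-mf-clients Unix wrapper scripts to PATH.
# Adjust the directory to wherever you extracted the package.
export PATH="$PATH:$HOME/unimelb-mf-clients-0.8.x/bin/unix"
```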
Close and re-open the Terminal to re-load the changes to your PATH.
Linux
To add the unimelb-mf-clients utilities to your PATH environment variable, edit the .bashrc file in your home directory (assuming you are using the bash shell):
Open a Terminal window and edit your .bashrc file with the nano command:
nano ~/.bashrc
Add the following line (ensuring the path matches the location where unimelb-mf-clients was extracted):
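For example, if the package was extracted to your home directory (the directory name below, including the version number, is a placeholder; use your actual directory name):

```shell
# Append the unimelb-mf-clients Unix wrapper scripts to PATH.
# Adjust the directory to wherever you extracted the package.
export PATH="$PATH:$HOME/unimelb-mf-clients-0.8.x/bin/unix"
```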
Authenticate with user credentials (username and password)
For University staff accounts using your University credentials to access Mediaflux, the configuration file should include entries like those shown below:
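A sketch of what such entries might look like; the host value below is a placeholder, and the exact server details are provided by your Mediaflux administrator:

```
host=mediaflux.example.unimelb.edu.au
port=443
transport=https
domain=unimelb
user=your-username
```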
Note: domain, user, and password are all optional; if any of them is not specified, you will be prompted for it.
password - although the configuration file can be used to store your password, we do not recommend doing this. If you don't specify a password, you'll be prompted for it when you run the command, and it won't be logged or visible.
The domain may be one of:
unimelb for University of Melbourne staff accounts
student for University of Melbourne student accounts
local for local accounts
If using the unimelb or student domain, user is your staff or student username.
Authenticate with token
When accessing Mediaflux with a token, the configuration file should be configured as shown below:
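A sketch of a token-based configuration; the host and token values below are placeholders:

```
host=mediaflux.example.unimelb.edu.au
port=443
transport=https
token=your-secure-token-string
```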
Optionally, you can make the config file inaccessible to others on Linux:
chmod go-rwx ~/.Arcitecta/mflux.cfg
Getting started
Execute the client utility of interest on the command line, supplying the arguments you need.
On Windows, the clients can be run from Windows PowerShell or the Command Prompt. You can start these by pressing the Start button and typing powershell or cmd, respectively.
On macOS, the clients can be run from the Terminal (Applications -> Utilities -> Terminal).
On Linux, you can execute the clients from any terminal or shell prompt. Linux commonly allows you to launch a terminal with ctrl-alt-t.
The README.md page of the unimelb-mf-clients GitLab repository also provides documentation for the utilities.
Uploading data using unimelb-mf-upload
This command-line client allows you to efficiently upload data to Mediaflux.
Features:
Parallel uploads: Use the --nb-workers option to upload files in parallel. Speed gains depend on available network capacity—using more than 4 threads is not recommended. Under heavy network load, 4 threads may perform no better than 1, so some experimentation may be needed to find the optimal setting.
Checksum validation: Compute checksums for additional verification of file integrity.
Logging: Generate a log file of the upload process.
Email summaries: Automatically generate and email a summary of the upload, including successful and failed transfers, as well as the number of zero-sized files encountered.
Daemon mode: Run the client in the background to continuously upload new data as it arrives on your local file system.
Use the --help switch to see a complete list of command-line arguments.
USAGE:
unimelb-mf-upload [OPTIONS] --dest <dest-collection-path> [src-dir1 [src-dir2...]] [src-file1 [src-file2...]]
unimelb-mf-upload --config <mf-upload-config.xml>
DESCRIPTION:
Upload local files to Mediaflux. If the file pre-exists in Mediaflux and is the same as that being uploaded, the Mediaflux asset is not modified. However, if the files differ, a new version of the asset will be created. In Daemon mode, the process will only upload new files since the process last executed.
OPTIONS:
--config <config.xml> A single configuration file including all required settings (Mediaflux server details, user credentials, application settings). If supplied, all other configuration options are ignored.
--mf.config <mflux.cfg> Path to the config file that contains Mediaflux server details and user credentials.
--mf.host <host> Mediaflux server host.
--mf.port <port> Mediaflux server port.
--mf.transport <https|http|tcp/ip> Mediaflux server transport, can be http, https or tcp/ip.
--no-cluster-io Disable cluster I/O if applicable.
--dest <dest-collection-path> The destination collection in Mediaflux.
--create-parents Create destination parent collection if it does not exist, including any necessary but nonexistent parent collections.
--csum-check If enabled, computes the checksum from the uploaded file and compares with that computed by the server for the Mediaflux asset.
--nb-queriers <n> Number of query threads. Defaults to 1. Maximum is 4
--nb-workers <n> Number of concurrent worker threads to upload data. Defaults to 1. Maximum is 8
--nb-retries <n> Retry times when error occurs. Defaults to 2
--batch-size <size> Size of the query result. Defaults to 1000
--daemon Run as a daemon.
--daemon-port <port> Daemon listener port if running as a daemon. Defaults to 9761
--daemon-scan-interval <seconds> Time interval (in seconds) between scans of source directories. Defaults to 60 seconds.
--exclude-parent Exclude parent directory at the destination (Upload the contents of the directory) if the source path ends with trailing slash.
--aggregate-transmission Aggregate transmission to improve the performance for large number of small files if file size is less than 1MB.
--split Split large files (size>1GB) into chunks and upload them in parallel. Ignored if single worker thread.
--log-dir <dir> Path to the directory for log files. No logging if not specified.
--log-file-size-mb <n> Log file size limit in MB. Defaults to 100MB
--log-file-count <n> Log file count. Defaults to 2
--notify <email-addresses> When the transfer completes, send an email notification to the recipients (comma-separated if multiple). Not applicable in daemon mode.
--sync-delete-assets Delete assets that have no corresponding local files.
--hard-delete-assets Force the asset deletion (see --sync-delete-assets) process to hard delete assets. Otherwise, the behaviour is controlled by server properties (whether a deletion is a soft or hard action).
--follow-symlinks Follow symbolic links. If not specified, symbolic links are not followed; instead, special symbolic link assets are created in Mediaflux. When exported as an NFS share, these assets are represented as symbolic links. If downloaded using the unimelb-mf-download tool on Linux/macOS platforms, they can be restored as symbolic links.
--worm Set the WORM state for the uploaded assets.
--worm-can-add-versions Allow new versions of metadata and content to be added while in the WORM state. Ignored if the --worm option is not specified.
--worm-no-move Disallow moving the uploaded assets. Ignored if the --worm option is not specified.
--save-file-attrs If specified, save local file attributes, such as owner uid, gid, ctime and ACLs, into Mediaflux asset metadata.
--preserve-modified-time If specified, preserve the local file's modified time as corresponding asset modified time in Mediaflux.
--quiet Do not print progress messages.
--help Prints usage.
--version Prints version.
POSITIONAL ARGUMENTS:
src-dir Source directory to upload.
src-file Source file to upload.
EXAMPLES:
unimelb-mf-upload --mf.config ~/.Arcitecta/mflux.cfg --nb-workers 4 --dest /projects/proj-1128.1.59 ~/Documents/foo ~/Documents/bar
unimelb-mf-upload --config ~/.Arcitecta/mf-upload-config.xml
Pre-existing files
The client checks whether files already exist in Mediaflux or not. If they do exist it will skip the upload. The checks it uses are:
File path/name exists and is the same
File size is the same
If checksums (--csum-check option) are enabled, the checksum is the same
If any of these fail, the file does not pre-exist and will be re-uploaded. In the case that the path/name is the same, but the source file has changed content, it will be uploaded to the pre-existing asset in Mediaflux as a new version.
Checksums
Checksums (numbers computed from the contents of a file) are an important data integrity mechanism. The Mediaflux server computes a checksum for each file it receives. The upload client can compute checksums from the source data on the client side and compare them with the checksum computed by the server when it receives the file. If the checksums match, we can be very confident that the file uploaded correctly.
By default, checksums are not enabled (because computing checksums slows down the upload process). However, it is strongly recommended that you enable these during the upload or run the checker client unimelb-mf-check with checksums to check the upload afterwards.
Case 1 - Files DO NOT pre-exist on Mediaflux
When you enable checksums, and the data DO NOT already exist on the server, the client will compute the checksum as part of the upload process. When Mediaflux creates the asset, it will also compute the checksum. These checksums are then compared.
Case 2 - Files DO pre-exist on Mediaflux
When you enable checksums, and the data DO already exist on the server (by path/name and size), the client will first compute the checksum of the local file and compare it with the checksum already stored in Mediaflux.
If the checksums differ, the file has changed, so the client re-uploads it (following the process in Case 1 above) and creates a new asset version. In total, two checksums are computed by the client and one by the server.
Examples
You will need to know where (the path) to locate your data in Mediaflux (the --dest argument of the command) and where to upload from (the last positional argument)
Example 1 - parallel upload with checksum check
Upload data with four worker threads and turn on checksums for upload integrity checking (recommended). As the location of the config file is not specified, the client will look for it in the .Arcitecta directory of your home directory.
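Such an invocation might look like the following sketch; the destination project path and source directory are placeholders:

```
unimelb-mf-upload --csum-check --nb-workers 4 --dest /projects/proj-example-0000.0.0 ~/Documents/data
```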
If you have a location that should be uploaded on a regular schedule such as an instrument PC that saves data to a given directory on the local computer, you can schedule uploads with unimelb-mf-upload. It is best to request an upload token if you want to do this as the credential will be stored on the computer that is doing the uploads. Contact Research Computing Services to request a token.
Windows
In this example:
we will put the unimelb-mf-clients directory in the %HOMEPATH%\Documents directory
we will save logs to the %HOMEPATH%\Documents\logs directory
we will put the configuration file in the %HOMEPATH%\Documents directory
Download from the GitLab page, selecting the Windows 64bit release. Extract the zip file to %HOMEPATH%\Documents.
Create a configuration file. In this case we are going to use a secure token. In our example, it will be stored in %HOMEPATH%\Documents\mflux.cfg.
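The later Task Scheduler steps point at a script. A sketch of such a batch file, e.g. %HOMEPATH%\Documents\upload.bat; all paths, the wrapper file name, and the project collection below are placeholders to adjust:

```
@echo off
REM Hypothetical scheduled upload script - adjust every path to your setup.
REM The wrapper's exact file name under bin\windows may differ in your release.
"%HOMEPATH%\Documents\unimelb-mf-clients-0.8.x\bin\windows\unimelb-mf-upload.cmd" ^
  --mf.config "%HOMEPATH%\Documents\mflux.cfg" ^
  --csum-check ^
  --log-dir "%HOMEPATH%\Documents\logs" ^
  --dest /projects/proj-example-0000.0.0 ^
  "%HOMEPATH%\Documents\instrument-data"
```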
Click the start button and start typing Task Scheduler and select it from the Start Menu when it appears.
Click on the Task Scheduler Library, then right click on the space and choose Create Basic Task... from the menu.
Give your task a name and description, then click Next >
Choose a start date and time and click Next >
Choose Start a program and click Next >
Click the Browse button and find the script you created above.
Click Next > and then check the Open the Properties dialog for this task when I click Finish box, then click Finish.
Under Security options, choose which user you would like the task to run under. You may wish to make it so the scheduled job will run even if the user is not logged in.
Linux
In this example:
we will put the unimelb-mf-client files in the ~/bin directory
we will save logs to the ~/logs directory
we will put the configuration file in the ~/.Arcitecta directory
Download from the GitLab page, selecting the Linux 64bit release. Extract the zip file to ~/bin.
Create a Configuration File. In this case we are going to use a secure token. In our example, it will be stored in ~/.Arcitecta/mflux.cfg.
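The crontab entry used below runs $HOME/bin/upload.sh. A sketch of such a script; all paths and the project collection are placeholders to adjust:

```shell
#!/bin/bash
# Hypothetical scheduled upload script: adjust every path to your setup.
"$HOME/bin/unimelb-mf-clients-0.8.x/bin/unix/unimelb-mf-upload" \
  --mf.config "$HOME/.Arcitecta/mflux.cfg" \
  --csum-check \
  --log-dir "$HOME/logs" \
  --dest /projects/proj-example-0000.0.0 \
  "$HOME/instrument-data"
```

Remember to make the script executable with chmod +x ~/bin/upload.sh.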
On Linux there are typically two options for scheduling tasks: cron and systemd timers. In this example, we will use a cron job.
Edit the crontab file with the following command:
crontab -e
Create a new scheduled task at the end of the crontab file. To see documentation on the format, try the man 5 crontab command. In our example, we will run the command once per day at 1 am local time.
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 1 * * * $HOME/bin/upload.sh
Save the file, and your job will be scheduled.
crontab: installing new crontab
Mac OS
In this example:
we will put the unimelb-mf-clients in the ~/Applications folder
we will save logs to the ~/Documents/logs folder
we will put the configuration file in the ~/.Arcitecta folder
Download from the GitLab page, selecting the Mac 64bit release. Extract the tar.gz file by clicking on it. It will be extracted to a folder in your Downloads folder, so move it to the ~/Applications folder.
Create a configuration file. In this case we are going to use a secure token. In our example, it will be stored in ~/.Arcitecta/mflux.cfg.
Edit the crontab file with the following command. By default the vim text editor will be used.
crontab -e # this will use the default text editor, usually vim
# if you would prefer to use the pico text editor, use the following command instead:
EDITOR=/usr/bin/pico crontab -e
Create a new scheduled task at the end of the crontab file. To see documentation on the format, try the man 5 crontab command. In our example, we will run the command once per day at 1 am local time.
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 1 * * * $HOME/bin/upload.sh
Save the file, and your job will be scheduled.
crontab: installing new crontab
Problems with special files
Sparse files
Sparse files are files that contain large sections of unallocated data; they are common on Linux/Unix systems. They use storage efficiently when a file has many holes (contiguous ranges of zero bytes) by storing only metadata for the holes instead of using real disk blocks.
Sparse files should be either excluded or compressed before uploading to Mediaflux, because the Mediaflux backend does not support sparse files and treats them as regular files. Uploading uncompressed sparse files wastes storage space, and we have seen issues caused by very large sparse files.
Find sparse files
To find sparse files in your file system, you can use the find command.
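One common approach with GNU find (a sketch, not from the original tool documentation) compares each file's allocated blocks with its apparent size:

```shell
# List files whose allocated size (in 512-byte blocks) is smaller than
# their apparent size - a heuristic for detecting sparse files.
find . -type f -printf "%b %s %p\n" | awk '$1 * 512 < $2 { print $3 }'
```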
Sparse files can also be compressed to .tar.gz archives with tar's -S option, which preserves their holes; the original sparse files can then be replaced with the archives.
DO NOT try this if you don't know what you are doing.
FIFO (Named Pipe)
A FIFO (First In, First Out) is similar to a pipe. The principal difference is that a FIFO has a name within the file system and is opened in the same way as a regular file. A FIFO has a write end and a read end, and data is read from the pipe in the same order as it is written. FIFOs are also called named pipes on Linux.
FIFOs should not be uploaded to Mediaflux.
Mediaflux Explorer
Uploading FIFO causes Mediaflux Explorer (current version: v1.5.6) to crash.
unimelb-mf-upload
Early versions (prior to v0.7.4) of unimelb-mf-upload also hang when uploading FIFOs. From version v0.7.4 onwards, unimelb-mf-upload excludes FIFO files.
Find FIFO (Named Pipes)
The following command can be used to list the FIFO files in your file system:
find ./ -type p
Downloading data using unimelb-mf-download
This client utility allows you to efficiently download data from Mediaflux, either individual files or entire folders recursively.
You need to know:
The location of your data in Mediaflux (the last argument of the command).
The destination path on your local computer (--out) where the data should be saved.
Features:
Parallel downloads: Use the --nb-workers option to download files in parallel. Speed improvements depend on available network capacity—using more than 4 threads is not recommended. Under heavy network load, 4 threads may perform no better than 1, so some experimentation may be needed to find the optimal setting.
Use the --help switch to see a complete list of command-line arguments.
USAGE:
unimelb-mf-download [OPTIONS] --out <dst-dir> <src-asset-or-collection-path> [<src-asset-or-collection-path>...]
unimelb-mf-download --config <mf-download-config.xml>
DESCRIPTION:
Download assets (files) from Mediaflux to the local file system. Pre-existing files in the local file system can be skipped or overwritten. In Daemon mode, the process will only download new assets (files) since the process last executed.
OPTIONS:
--config <config.xml> A single configuration file including all required settings (Mediaflux server details, user credentials, application settings). If supplied, all other configuration options are ignored.
--mf.config <mflux.cfg> Path to the config file that contains Mediaflux server details and user credentials.
--mf.host <host> Mediaflux server host.
--mf.port <port> Mediaflux server port.
--mf.transport <https|http|tcp/ip> Mediaflux server transport, can be http, https or tcp/ip.
--no-cluster-io Disable cluster I/O if applicable.
-o, --out <dst-dir> The output/destination directory.
--overwrite Overwrite if the destination file exists but has a different size.
--unarchive Extract Arcitecta .aar files.
--csum-check Generate the CRC32 checksum during the file download and compare it with the remote checksum after the download is complete. If both the --csum-check and --overwrite options are enabled, local files will be overwritten even if their sizes match the remote files.
--no-symlinks Do not restore symbolic links. If not specified, the client will try to create (restore) symbolic links. Note: creating symbolic links works only on platforms that support them, such as Linux or macOS.
--nb-queriers <n> Number of query threads. Defaults to 1. Maximum is 4
--nb-workers <n> Number of concurrent worker threads to download data. Defaults to 1. Maximum is 8
--nb-retries <n> Retry times when error occurs. Defaults to 2
--batch-size <size> Size of the query result. Defaults to 1000
--daemon Run as a daemon.
--daemon-port <port> Daemon listener port if running as a daemon. Defaults to 9761
--daemon-scan-interval <seconds> Time interval (in seconds) between scans of source collection. Defaults to 60 seconds.
--exclude-parent Exclude parent directory at the destination (Download the contents of the directory) if the source path ends with trailing slash.
--include-metadata Download Mediaflux asset metadata as XML file (*.meta.xml)
--aggregate-transmission Aggregate transmission to improve the performance for large number of small files.
--log-dir <dir> Path to the directory for log files. No logging if not specified.
--log-file-size-mb <n> Log file size limit in MB. Defaults to 100MB
--log-file-count <n> Log file count. Defaults to 2
--notify <email-addresses> When the transfer completes, send an email notification to the recipients (comma-separated if multiple). Not applicable in daemon mode.
--preserve-modified-time Update the downloaded file’s modification timestamp to match the remote asset’s content mtime in Mediaflux.
--preserve-creation-time Update the downloaded file’s creation timestamp to match the remote asset’s ctime in Mediaflux. Note that not all file systems support modifying file creation times.
--sync-delete-files Delete local files that have no corresponding assets on the server side.
--quiet Do not print progress messages.
--help Prints usage.
--version Prints version.
POSITIONAL ARGUMENTS:
<src-asset-or-collection-path> The source asset path or asset collection/namespace path.
EXAMPLES:
unimelb-mf-download --mf.config ~/.Arcitecta/mflux.cfg --nb-workers 2 --out ~/Downloads /projects/proj-test-1128.1.59/foo /projects/proj-test-1128.1.59/bar
unimelb-mf-download --mf.config ~/.Arcitecta/mflux.cfg --out ~/Downloads /projects/proj-test-1128.1.15/sample.zip
unimelb-mf-download --config ~/.Arcitecta/mf-download-config.xml
Examples
Example 1
Download data with one worker thread and skip pre-existing files, checking for files that pre-exist by their name and size only.
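A sketch of such a command; the project path is a placeholder:

```
unimelb-mf-download --mf.config ~/.Arcitecta/mflux.cfg --out ~/Downloads /projects/proj-example-0000.0.0/foo
```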
Example 2
Download data with four worker threads and overwrite pre-existing files, checking whether files pre-exist by their name, size and checksum (slower but safer). We don't need to specify the path to the config file as the client will look for it in the standard places.
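A sketch of such a command; the project path is a placeholder, and the config file is read from the default location:

```
unimelb-mf-download --nb-workers 4 --overwrite --csum-check --out ~/Downloads /projects/proj-example-0000.0.0/foo
```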
Checking data using unimelb-mf-check
This client utility allows you to check and compare assets (files) in Mediaflux against files on your local file system.
It verifies file equality (you can specify the direction of the check) based on existence, name, size, and optionally checksum. The client can also generate a report in CSV format.
Use the --help switch to see a complete list of command-line arguments.
USAGE:
unimelb-mf-check [OPTIONS] --direction <up|down> --output <output.csv> <dir1> <collection1> [<dir2> <collection2>...]
DESCRIPTION:
Compare files in a local directory with assets in a remote Mediaflux collection and generate a list of the differences.
OPTIONS:
--mf.config <mflux.cfg> Path to the config file that contains Mediaflux server details and user credentials.
--mf.host <host> Mediaflux server host.
--mf.port <port> Mediaflux server port.
--mf.transport <https|http|tcp/ip> Mediaflux server transport, can be http, https or tcp/ip.
--direction <up|down> Direction: up/down.
-o, --output <output.csv> Output CSV file.
--detailed-output Include all files checked. Otherwise, only the missing or invalid files are included in the output.
--compress-output Compress output CSV file to GZIP format (.csv.gz).
--no-csum-check Files are equated if the name, size and CRC32 checksum are the same. With this argument, you can exclude the CRC32 checksum comparison.
--nb-queriers <n> Number of query threads. Defaults to 1. Maximum is 4
--nb-workers <n> Number of concurrent worker threads to read local file (to generate checksum) if needed. Defaults to 1. Maximum is 8
--nb-retries <n> Retry times when error occurs. Defaults to 2
--batch-size <size> Size of the query result. Defaults to 1000
--follow-symlinks Follow symbolic links.
--quiet Do not print progress messages.
--help Prints usage.
--version Prints version.
POSITIONAL ARGUMENTS:
<dir> Local directory path.
<collection> Remote Mediaflux collection path.
EXAMPLES:
unimelb-mf-check --mf.config ~/.Arcitecta/mflux.cfg --direction down --output ~/Documents/foo-download-check.csv ~/Documents/foo /projects/proj-1.2.3/foo
Examples
Example 1
Compare in the downward direction (i.e. Mediaflux is the master)
unimelb-mf-check --direction down --output ~/Documents/foo-download-check.csv ~/Documents/foo /projects/proj-myproj-1.2.3/foo
Downloading data using aterm-download
aterm-download is a wrapper script for the Arcitecta aterm.jar utility, used to download data from Mediaflux.
synopsis:
Exports one or more assets using a specified profile.
usage:
aterm-download [<args>] <file> [<create-args>]
arguments:
-lp <local profile>
[optional] A local profile (ecp) containing a specification for the export.
-mode [test|live]
[optional] Is this a test or a live export? Test export can be used to check whether a profile is correct. Defaults to 'live'.
-ncsr <nb>
[optional] The number of concurrent server requests. A number in the range [1,infinity].
Defaults to 1. Concurrent requests can increase performance as data is downloaded parallel to request processing.
-where <query>
[optional] Query that will return the assets for export/download. Any query conforming to AQL is valid. Must be specified if 'namespace' argument is omitted.
-namespace <namespace>
[optional] The asset namespace to export. Must be specified if 'where' argument is omitted
-onerror [abort|continue]
[optional] If there is an export error, what should happen? Defaults to 'abort'.
-onlocalerror [abort|continue]
[optional] If there is an error accessing or opening a local file (e.g. permissions, etc), what should happen? Defaults to 'abort'.
-task-name <task name>
[optional] Specifies the custom name for the task that monitors the progress of the export. User may track the progress of the task by using server.task.named.describe :name <task name>.
-task-remove-after <hours>
[optional] Specifies how many hours after the export completes the monitoring task should be removed from the system. Defaults to '0' hours, i.e. now.
-task-batch-size <batch size>
[optional] When used, the task that monitors the progress of the export will update its progress after 'task-batch-size' work units have completed. Defaults to '100' work units.
-task-count-assets <true|false>
[optional] Specifies if the assets should be counted before the export begins. This is used by the task that tracks the progress of the export so that it knows the total number of work units (file transfers). Defaults to 'false'.
-task-report-bytes <true|false>
[optional] Specifies if the task should include bytes transferred as well when updating progress, not just assets transferred. If set to true, bytes transferred will be reported once every second. Defaults to 'false'.
-verbose [true|false]
[optional] If set to true, will display those files being consumed. Defaults to false.
-export-empty-namespaces [true|false]
[optional] Specifies whether or not to export empty namespaces. If set to true, folders will be created for empty namespaces. This only works in conjunction with -namespace argument. It will be ignored if either of -lp or -where arguments are provided. Defaults to false.
-folder-layout [none|collection]
[optional] Specifies the folder layout for exported files. Ignored if '-lp' provided. Defaults to 'collection'.
-filename-collisions [skip|rename|overwrite]
[optional] Specifies how to handle filename collisions. Ignored if '-lp' provided. Defaults to 'rename'.
-ns-parent-number
[optional] When folder layout is set to 'collection' this argument specifies the number of collection parents to include. Defaults to infinity, i.e. all parents.
Examples
Download a directory (asset namespace)
To download a directory (asset namespace), e.g. /projects/proj-demonstration-1128.4.15/test-data, from Mediaflux to the current local directory, you can use the -namespace argument.
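Based on the arguments listed above, a sketch of such a command (verify the exact form against aterm help download for your version):

```
aterm-download -namespace /projects/proj-demonstration-1128.4.15/test-data ./
```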
To download an individual file (asset) /projects/proj-demonstration-1128.4.15/test-data/sample-file1.tar.gz from Mediaflux to current local directory:
aterm-download -ns-parent-number 0 -where "namespace='/projects/proj-demonstration-1128.4.15/test-data' and name='sample-file1.tar.gz'" ./
Utilities to check instrument uploads
There are also client utilities for checking instrument uploads performed by Mediaflux Data Mover. These include:
instrument-upload-list – list or search for instrument data uploads in Mediaflux.
instrument-upload-missing-find – identify local directories that have not been uploaded to Mediaflux, or whose total file count or size does not match the corresponding uploads in Mediaflux.
Run the commands on Windows
Open Command Prompt
Enter "cmd" in the Windows search bar, then select "Command Prompt".
Show usage of the two commands
You can use the -h option to get a synopsis of the two commands:
instrument-upload-list -h
or
instrument-upload-missing-find -h
If you have not added the commands to the Path environment variable, you will need to specify the full paths to the commands, for example:
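For example, if the package was extracted to your Desktop (this install location is hypothetical; adjust it to your own):

```
C:\Users\yourname\Desktop\unimelb-mf-clients-0.8.x\bin\windows\instrument-upload-list -h
```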