This is a command-line Java application that you can use to efficiently upload your data into Mediaflux and perform integrity checks. Installation instructions are available on the parent page.
This client can:
- upload files in parallel (--nb-workers). There is no magic in this: it will only go faster if there is sufficient network capacity, so please don't use more than 4 upload threads. You may even find that, if the network is heavily congested, 4 threads is no faster than 1. You may have to experiment a little to find the optimum.
- compute checksums for additional validation (see below)
- write a log file of the upload
- generate and email a summary of the upload (including successful and failed uploads, and the number of zero-sized files it encountered)
- run in daemon mode (in the background) so it keeps uploading new data to Mediaflux as it arrives in your local file system
You can see the full details of all command-line arguments for this client with the --help switch.
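For example, to list every option and its description:
unimelb-mf-upload --help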
Examples
You will need to know the destination path for your data in Mediaflux (the --dest argument of the command) and the local path to upload from (the last positional argument).
Example 1 - parallel upload with checksum check
Upload data with four worker threads and turn on checksums for upload integrity checking (recommended). As the location of the configuration file is not specified, the client will look for it in the .Arcitecta directory of your home directory.
unimelb-mf-upload --csum-check --nb-workers 4 --dest /projects/proj-myproject-1128.1.59/12Jan2018 /data/projects/punim0058
Example 2 - using a configuration file
Upload data with one worker thread and specify explicitly where the configuration file is.
unimelb-mf-upload --mf.config /Users/nebk/.Arcitecta/mflux.cfg --dest /projects/proj-myproject-1128.1.59/12Jan2018 /data/projects/punim0058
The Configuration File might look like this:
host=mediaflux.researchsoftware.unimelb.edu.au
port=443
transport=https
token=phooP1Angohb2ooyahbiLiuwa6ahjuoKooViedaifooPhiqu1ookahXae7keichael4Shae2ael8ietit2phawucai0Aighifu6olah9OquahDei2aevae3keich8ain1OoLa4O
Checksums
Checksums (a number computed from the contents of a file) are an important data integrity mechanism. The Mediaflux server computes a checksum for each file it receives. The upload client can compute checksums from the source data on the client side and compare them with the checksum computed by the server when it receives each file. If the checksums match, we can be very confident that the file uploaded correctly. Many clients for other protocols (e.g. SFTP and SMB) do not do this.
By default, checksums are not enabled (because computing them slows down the upload process). However, it is strongly recommended that you either enable them during the upload or run the checker client unimelb-mf-check with checksums to verify the upload afterwards.
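As a sketch of verifying after the fact, you could run the checker against the same source and destination used in Example 1 above. This assumes unimelb-mf-check accepts the same --csum-check and --dest arguments and the same positional source directory as the upload client; consult its --help output for the exact options:
unimelb-mf-check --csum-check --dest /projects/proj-myproject-1128.1.59/12Jan2018 /data/projects/punim0058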
Case 1 - Files DO NOT pre-exist on Mediaflux
When you enable checksums and the data DO NOT already exist on the server, the client will compute the checksum as part of the upload process. When Mediaflux creates the asset, it will also compute the checksum, and the two checksums will be compared.
Case 2 - Files DO pre-exist on Mediaflux
When you enable checksums and the data DO already exist on the server (matched by path/name and size), the client will first compute the checksum of the local file and compare it with the checksum already stored in Mediaflux.
If the checksums differ, the local file has changed, so the client will re-upload it (following the process in Case 1 above) and create a new asset version. Thus, overall, two checksums are computed by the client and one by the server.
Pre-existing files
The client checks whether files already exist in Mediaflux. If they do, it will skip the upload. The checks it uses are:
- File path/name exists and is the same
- File size is the same
- If checksums are enabled, the checksum is the same
If any of these checks fail, the file is not considered to pre-exist and will be uploaded. If the path/name is the same but the source file's content has changed, it will be uploaded to the pre-existing asset in Mediaflux as a new version.
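This means it is safe to re-run the same upload command after an interruption or when new files have been added: files that already pass these checks are skipped, and only new or changed files are transferred. For instance, repeating the command from Example 1:
unimelb-mf-upload --csum-check --nb-workers 4 --dest /projects/proj-myproject-1128.1.59/12Jan2018 /data/projects/punim0058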
Scheduled uploads
If you have a location that should be uploaded on a regular schedule, such as an instrument PC that saves data to a given directory on the local computer, you can schedule uploads with unimelb-mf-upload. It is best to request an upload token if you want to do this, as the credential will be stored on the computer doing the uploads. Contact Research Computing Services to request a token.
Windows
In this example:
- we will put the unimelb-mf-client files in the %HOMEPATH%\Documents directory
- we will save logs to the %HOMEPATH%\Documents\logs directory
- we will put the configuration file in the %HOMEPATH%\Documents directory
Download from the GitLab page, selecting the Windows 64bit release. Extract the zip file to %HOMEPATH%\Documents.
Create a Configuration File. In this case we are going to use a secure token. In our example, it will be stored in %HOMEPATH%\Documents\mflux.cfg.
host=mediaflux.researchsoftware.unimelb.edu.au
port=443
transport=https
token=phooP1Angohb2ooyahbiLiuwa6ahjuoKooViedaifooPhiqu1ookahXae7keichael4Shae2ael8ietit2phawucai0Aighifu6olah9OquahDei2aevae3keich8ain1OoLa4O
Using Notepad, create a batch file to perform the upload. In our example, it will be stored in %HOMEPATH%\Documents\upload.bat:
%HOMEPATH%\Documents\unimelb-mf-clients-0.7.7\bin\windows\unimelb-mf-upload --mf.config %HOMEPATH%\Documents\mflux.cfg --log-dir %HOMEPATH%\Documents\logs --dest /projects/proj-demonstration-1128.4.15 %HOMEPATH%\Documents\data-to-upload
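Before scheduling it, you may wish to test the batch file by running it once from a Command Prompt and checking the log directory for the result:
%HOMEPATH%\Documents\upload.bat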
Schedule the upload using Windows Task Scheduler.
- Click the start button and start typing Task Scheduler and select it from the Start Menu when it appears.
- Click on the Task Scheduler Library, then right click on the space and choose Create Basic Task... from the menu.
- Give your task a name and description, then click Next >
- Choose a start date and time and click Next >
- Choose Start a program and click Next >
- Click the Browse button and find the batch file you created above.
- Click Next > and then check the Open the Properties dialog for this task when I click Finish box, then click Finish.
- Under Security options, choose which user you would like the task to run under. You may wish to select the option to run the task whether the user is logged on or not.
Linux
In this example:
- we will put the unimelb-mf-client files in the ~/bin directory
- we will save logs to the ~/logs directory
- we will put the configuration file in the ~/.Arcitecta directory
Download from the GitLab page, selecting the Linux 64bit release. Extract the zip file to ~/bin.
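As a sketch, the directories used in this example can be created and the archive extracted as follows. The archive name here is an assumption; substitute the actual file name of the release you downloaded:
mkdir -p ~/bin ~/logs ~/.Arcitecta
# archive name below is an example; use the file you actually downloaded
unzip ~/Downloads/unimelb-mf-clients-0.7.4.zip -d ~/bin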
Create a Configuration File. In this case we are going to use a secure token. In our example, it will be stored in ~/.Arcitecta/mflux.cfg.
host=mediaflux.researchsoftware.unimelb.edu.au
port=443
transport=https
token=phooP1Angohb2ooyahbiLiuwa6ahjuoKooViedaifooPhiqu1ookahXae7keichael4Shae2ael8ietit2phawucai0Aighifu6olah9OquahDei2aevae3keich8ain1OoLa4O
Create a shell script to perform the upload using the text editor of your choice. In our example, it will be stored in ~/bin/upload.sh:
#!/bin/bash
~/bin/unimelb-mf-clients-0.7.4/bin/unix/unimelb-mf-upload --mf.config ~/.Arcitecta/mflux.cfg --log-dir ~/logs --dest /projects/proj-demonstration-1128.4.15 ~/data-to-upload
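Make the script executable so that cron can run it, and you can run it once by hand to confirm the upload works before scheduling it:
chmod +x ~/bin/upload.sh
~/bin/upload.sh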
On Linux there are typically two options for scheduling tasks: cron and systemd timers. In this example, we will use a cron job.
Edit the crontab file with the following command:
crontab -e
Create a new scheduled task at the end of the crontab file. To see documentation on the format, try the man 5 crontab command. In our example, we will run the command once per day at 1 am local time.
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
0 1 * * * $HOME/bin/upload.sh
Save the file and exit the editor; crontab will confirm that the new schedule has been installed:
crontab: installing new crontab