put (upload files)


put LOCAL_FILES [BUCKET_NAME]/[FOLDER]/[OBJECT] [-s] [-t:THREADS] [-mul:PARTSIZE] [-maxb:MAXB] [-cacl:CANNED_ACL] [-meta:METADATA] [-mime:MIMETYPE] [-e] [-le] [-rr] [-ia] [-r] [-cond:"FILTER"] [-nomulmd5] [-nomd5existcheck] [-nobucketlisting] [-keep:KEEP] [-onlydiff] [-onlynewer] [-onlynew] [-onlyexisting] [-purge] [-purgeabort:X] [-move] [-localdelete:"COND"] [-include:INCL] [-exclude:EXCL] [-rinclude:INCL] [-rexclude:EXCL] [-sim] [-showfiles] [-showdelete] [-showlocaldelete] [-showexcl] [-noautostatus] [-minoutput] [-stoponerror] [-optimize] [-accelerate]

Upload one or multiple files (=objects) to an S3 bucket. If an identical file (i.e. same MD5 value) is already stored on Amazon S3, the file is copied, not uploaded, to save bandwidth.

 

Parameter

Description

Examples

LOCAL_FILES

Name / path of the local file(s) to upload. Wildcard characters (* and ?) are supported by default to match multiple files. A regular expression can be used instead; in that case, add the flag -r to the command line, see below.

put c:\folder\ mybucket (upload all files in c:\folder\ to mybucket)

put c:\folder\file.txt mybucket (upload file c:\folder\file.txt to mybucket)

put c:\folder\*.txt mybucket (upload files *.txt in c:\folder\ to mybucket)

[BUCKET_NAME]/[FOLDER]/[OBJECT]

Name of S3 bucket, folder (optional) and object (optional) to upload files to. This is relative to the current S3 working location.

put c:\folder\file.txt mybucket/subfolder/ (upload file c:\folder\file.txt to mybucket/subfolder)

put c:\folder\*.txt mybucket/subfolder/ (upload files *.txt in c:\folder\ to mybucket/subfolder)

-s

Recursive: also upload local files that are in subfolders. The subfolder structure is replicated while uploading.

put c:\folder\ mybucket -s (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket. The subfolder structure is replicated in mybucket)

-t:THREADS

Specify the number of concurrent threads used to upload files to S3. By default only 1 thread is used.

put c:\folder\ mybucket -s -t:4 (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket using 4 parallel threads)

-mul:PARTSIZE

Use Amazon S3 multipart uploads to upload the files.
 
The PARTSIZE value is optional and specifies the size of each upload part, in megabytes. The minimum part size is 5MB, which is also the default if PARTSIZE is not specified. The maximum part size is 1000 megabytes.
 
The -mul flag is required when uploading files larger than 5GB and is recommended when uploading files larger than 200MB.

put c:\folder\ mybucket -s -t:4 -mul (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket using 4 parallel threads and multipart uploads)
put c:\folder\ mybucket -s -t:4 -mul:50 (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket using 4 parallel threads and multipart uploads. Use an upload part size of 50 Megabytes)
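Note that the part size also limits the largest file that can be uploaded, since Amazon S3 allows at most 10,000 parts per multipart upload (an Amazon S3 limit, not an S3Express setting). For example:

5MB parts x 10,000 parts ≈ 48.8GB maximum file size
50MB parts x 10,000 parts ≈ 488GB maximum file size

To upload files approaching Amazon S3's 5TB maximum object size, a part size of at least about 525MB is needed.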

-maxb:MAXB

Specify the maximum bandwidth to use, in kilobytes per second. For example, -maxb:100 instructs S3Express to use at most 100KB/sec when uploading.

put c:\folder\ mybucket -maxb:50 (upload all files in c:\folder\ to mybucket, throttle bandwidth to 50KB/s)

-cacl:CANNED_ACL

Set canned ACL of uploaded files. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions.

 

Valid Values for CANNED_ACL:

 

private (Owner gets FULL CONTROL. No one else has access rights; this is the default for an object)

 

public-read (Owner gets FULL CONTROL. The AllUsers group, that is everyone, gets READ access)

 

public-read-write (Owner gets FULL CONTROL. The AllUsers group, that is everyone, gets READ and WRITE access)

 

authenticated-read (Owner gets FULL CONTROL. The AuthenticatedUsers group, that is, all authenticated AWS accounts, gets READ access.)

 

bucket-owner-read (Object owner gets FULL CONTROL. Bucket owner gets READ access)

 

bucket-owner-full-control (Both the object owner and the bucket owner get FULL CONTROL over the object)

 

Note: You can specify only one of these canned ACLs in your request.

put c:\folder\ mybucket -cacl:public-read (upload all files in c:\folder\ to mybucket and make all uploaded files 'public-read')

-meta:METADATA

Metadata headers to be added to the uploaded files. Multiple metadata headers should be separated by |.

put c:\folder\ mybucket -meta:"cache-control:max-age=60" (upload all files in c:\folder\ to mybucket and set metadata header 'cache-control' to max-age=60 for all uploaded files)

-mime:MIMETYPE

Specify the MIME type to assign to uploaded files. By default S3Express assigns standard MIME types (HTTP header "Content-Type"). You can override these default values for uploaded files by using the flag -mime.

put c:\folder\ mybucket -mime:"mymime" (upload all files in c:\folder\ to mybucket and set mime header 'Content-Type' to 'mymime' for all uploaded files, overriding the default values)

-e

Apply Amazon S3 Server Side Encryption to uploaded files.

put c:\folder\ mybucket -e (upload all files in c:\folder\ to mybucket and apply server side encryption for all uploaded files)

-le

Apply local encryption before uploading files and then upload the encrypted files.
 
Local encryption is performed using the open-source file encryption program AEScrypt, which can be downloaded from www.aescrypt.com
 
Download the command line version of AEScrypt for Windows and save the file aescrypt.exe in the same folder as S3Express.exe.
 
To provide an encryption password, use the command setopt, with flag -clientencpwd.
 
To provide an encryption password hint, use the command setopt, with flag -clientencpwdhint.
If a password hint is specified, it is then added to the metadata of each encrypted file. The metadata header containing the password hint is 'x-amz-meta-s3xpress-encrypted-pwd-hint'.

 

The original MD5 of the unencrypted file is added to the object metadata in the header 'x-amz-meta-s3xpress-encrypted-orig-md5'.
For each encrypted object, the metadata header 'x-amz-meta-s3xpress-encrypted:aescrypt.exe' is also added.
 
Alternative encryption programs, such as 7zip or other custom programs, can be specified using the command setopt with option -clientencprogram.

put c:\folder\ mybucket -le (upload all files in c:\folder\ to mybucket. Before uploading, apply client side local encryption using the program AEScrypt. Note that to provide an encryption password the command setopt -clientencpwd must be used first)

 

The flags -e and -le can be combined, e.g.:

 

put c:\folder\ mybucket -e -le (upload all files in c:\folder\ to mybucket. Before uploading, apply client side local encryption using the program AEScrypt. Also apply server side encryption for all uploaded files)
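For example, a typical -le session might look like the following (the password and hint values are placeholders; see the setopt command for the exact syntax of the -clientencpwd and -clientencpwdhint flags):

setopt -clientencpwd:MySecretPassword
setopt -clientencpwdhint:"hint text here"
put c:\folder\ mybucket -le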

-rr

Set S3 storage class to "Reduced Redundancy" for uploaded files (REDUCED_REDUNDANCY).

put c:\folder\ mybucket -rr (upload all files in c:\folder\ to mybucket and set Storage Class to 'REDUCED_REDUNDANCY' for all uploaded files)

-ia

Set S3 storage class to "Infrequent Access" for uploaded files (STANDARD_IA).

put c:\folder\ mybucket -ia (upload all files in c:\folder\ to mybucket and set Storage Class to 'STANDARD_IA' for all uploaded files)

-r

Regular expression. This flag specifies that LOCAL_FILES is a regular expression operating on the current folder. To apply a regular expression to a folder other than the current folder, use the -cond:FILTER condition or, more simply, the flags -rinclude or -rexclude, see below.

put ^(a.*)|(b.*)|(c.*) mybucket -r (upload files starting with a, b, or c in the current folder to mybucket. The -r flag only operates on the current local folder)

-cond:FILTER

Filter condition. Only upload files matching the specified condition. More info on filter condition syntax and variables.

put c:\folder\ mybucket -cond:"size <> 0" (upload non-empty files from c:\folder\ to mybucket)
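Other variables shown on this page can be used in conditions too, e.g.:

put c:\folder\ mybucket -cond:"timestamp > s3_timestamp" (equivalent to the flag -onlynewer, see below)

put c:\folder\ mybucket -cond:"age_days > 90" (upload only files older than 90 days; the age_days variable is also used under -localdelete below)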

-nomulmd5

Do not recalculate the MD5 of files uploaded in multipart mode (see the put flag -mul above). When uploading files in multipart mode (-mul), S3Express forces an MD5 recalculation for files smaller than 1GB at the end of the upload. Use this flag to disable that recalculation. If needed, the 1GB limit can be changed in the Windows Registry.

put c:\folder\ mybucket -mul -nomulmd5 (upload all files in c:\folder\ to mybucket using multipart uploads and do not force recalculation of MD5 values)

-nomd5existcheck

By default, if S3Express finds an identical file (i.e. same MD5 value) already stored on Amazon S3, that file is copied, not uploaded again, to save time and bandwidth. This applies only to files smaller than 200MB. S3Express shows which files are copied (=duplicated) instead of uploaded. This functionality can be disabled with the flag -nomd5existcheck.

put c:\folder\ mybucket -nomd5existcheck

-nobucketlisting

This option forces S3Express not to list the remote S3 bucket. Instead of listing the remote S3 bucket before the put operation starts, S3Express checks file by file whether each local file needs to be uploaded. This can be quite slow, but it is faster when only a few files are uploaded to a large S3 bucket that already contains a lot of files.
 
This option is not compatible with the options -purge and -le; an error is given in that case.

put c:\folder\ mybucket -onlydiff -nobucketlisting

-keep:KEEP

If the files to be uploaded have a matching file already on S3 that will be overwritten, keep the existing metadata and/or ACL.
-keep:acl keeps the existing ACL
-keep:meta keeps the existing metadata
-keep keeps both metadata and ACL

put c:\folder\ mybucket -keep (upload all files in c:\folder\ to mybucket and keep metadata and ACL for S3 files that will be overwritten)

-onlydiff

Only upload files that are different from the matching files already on S3. A file is considered different if it has the same path and name but a different MD5 value, or if it is not yet on S3 at all. Using the '-onlydiff' flag therefore uploads files that are not yet on S3, plus all files whose content has changed compared to the files already on S3.

 
This flag is equivalent to using -cond:"etag != s3_etag".
 
Note that if the upload part size (-mul) is changed between uploads, a file may be re-uploaded even if it is already on S3. The -onlydiff functionality only works when the -mul size is kept the same between uploads, or -mul is not used.
 
Running the same put command twice with the flag -onlydiff is a good way to verify that all files were uploaded correctly: all MD5 values should already match, unless local files have changed since the last upload.

put c:\folder\ mybucket -onlydiff (upload files in c:\folder\ to mybucket only if they are different from the matching files already on S3, i.e. same path and name but a different MD5 value. Files that already have a corresponding file with a matching MD5 will not be uploaded)
 
put c:\folder\ mybucket -onlydiff -nobucketlisting (do the same as above but without listing the S3 bucket)
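Combining -onlydiff with -sim gives a quick verification pass that transfers nothing:

put c:\folder\ mybucket -s -onlydiff -sim -showfiles (simulation only: any file listed no longer matches its S3 copy and would be re-uploaded)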

-onlynewer

Only upload files that are newer than the matching files already on S3. A file is considered newer if it has the same path and name but a more recent modified time, or if it is not yet on S3 at all. Using the '-onlynewer' flag therefore uploads files that are not yet on S3, plus all files whose timestamp is newer than the files already on S3.

 
This flag is equivalent to using -cond:"timestamp > s3_timestamp".

 

Note that -onlynewer is faster than -onlydiff, because the MD5 values of local files do not need to be calculated when using -onlynewer.

put c:\folder\ mybucket -onlynewer (upload files in c:\folder\ to mybucket only if they are newer than the matching files already on S3, i.e. same path and name but a more recent modified time)

-onlynew

Only upload files that are new, that is, not yet on S3.

 
This is equivalent to using -cond:"s3_etag = ''".

put c:\folder\ mybucket -onlynew (upload files in c:\folder\ to mybucket only if they are new, that is, they do not have a matching file that is already on S3)

-onlyexisting

Only upload files that already exist on S3, that is, files that already have a corresponding matching file with the same name and path on S3.

 

This is equivalent to using -cond:"s3_etag <> ''".

put c:\folder\ mybucket -onlyexisting (upload files in c:\folder\ to mybucket only if they already exist on S3, that is, they already have a matching file on S3)

-purge

Delete S3 files that no longer exist locally.

put c:\folder\ mybucket -onlydiff -purge (upload files in c:\folder\ to mybucket only if they are different compared to the matching file that is already on S3. Delete files in mybucket that are not in c:\folder\)

-purgeabort:X

Abort the purge operation if more than X S3 files would be deleted. X can be:
- a number of files
- ALL (abort if all files in the S3 bucket would be deleted; this is the default behavior)
- NEVER (never abort the purge)

put c:\folder\ mybucket -onlydiff -purge -purgeabort:100 (upload files in c:\folder\ to mybucket only if they are different compared to the matching file that is already on S3. Delete files in mybucket that are not in c:\folder\. Do not purge if more than 100 S3 files would be deleted)
 
put c:\folder\ mybucket -onlydiff -purge -purgeabort:ALL (upload files in c:\folder\ to mybucket only if they are different compared to the matching file that is already on S3. Delete files in mybucket that are not in c:\folder\. Do not purge if all S3 files in mybucket would be deleted)
 
put c:\folder\ mybucket -onlydiff -purge -purgeabort:NEVER (upload files in c:\folder\ to mybucket only if they are different compared to the matching file that is already on S3. Delete files in mybucket that are not in c:\folder\. Never abort purge)

-move

Move files to S3, i.e. delete local files immediately after they are successfully uploaded to S3.
 
See: How to move files to S3 (difference between -move and -localdelete)

put c:\folder\ mybucket -s -include:*.jpg -move (move all jpg files in c:\folder\ and subfolders to mybucket)

-localdelete:COND

Delete local files that:
- do not need to be uploaded,
- have a corresponding matching file on S3, and
- satisfy the condition COND. COND follows the general condition rules.
 
If COND is not specified, that is, only -localdelete is used, then all local files that have a corresponding matching file on S3 will be deleted.
 
See: How to move files to S3 (difference between -move and -localdelete)

put c:\folder\ mybucket -s -onlydiff -localdelete (upload files in c:\folder\ and subfolders to mybucket if they are different from files on S3 and delete local files that have a corresponding matching file on S3)
 
put c:\folder\ mybucket -s -onlydiff -localdelete:'age_days > 90' (upload files in c:\folder\ and subfolders to mybucket if they are different from files on S3 and delete local files that have a corresponding matching file on S3 and are older than 90 days)

-include:INCL

Only upload files with path matching the specified mask (Wildcards). Separate multiple masks with "|".

put c:\folder\ mybucket -include:*.jpg (upload all jpg files in c:\folder\ to mybucket)

put c:\folder\ mybucket -include:*.jpg|*.gif (upload all jpg and gif files in c:\folder\ to mybucket)

-exclude:EXCL

Do not upload files with path matching the specified mask (Wildcards). Separate multiple masks with "|".

put c:\folder\ mybucket -exclude:*.jpg (upload all files in c:\folder\, excluding files with extension .jpg, to mybucket)

put c:\folder\ mybucket -exclude:*.jpg|*.gif (upload all files in c:\folder\, excluding files with extension .jpg or .gif, to mybucket)

-rinclude:INCL

Only upload files with path matching the specified mask (Regular Expression).

put c:\folder\ mybucket -rinclude:a(x|y|z)b (upload files in c:\folder\ matching axb, ayb and azb to mybucket)

put c:\folder\ mybucket -rinclude:.*\.(gif|bmp|jpg) (upload files in c:\folder\ ending with .gif, .bmp or .jpg to mybucket)

put c:\folder\ mybucket -rinclude:"IMGP[0-9]{4}.jpg" (upload files in c:\folder\ ending with .jpg and starting with IMG and followed by a four-digit number to mybucket)

-rexclude:EXCL

Do not upload files with path matching the specified mask (Regular Expression).

put c:\folder\ mybucket -rexclude:[abc] (upload all files in c:\folder\ to mybucket, but exclude files containing a, b or c in the file path)

-sim

Simulation. Only preview which files would be uploaded, do not actually upload the files yet.

put c:\folder\ mybucket -include:*.jpg -sim (simulation only, show summary of which files would be selected for upload)

-showfiles

Show a detailed list of all files selected for upload, not just the summary.

put c:\folder\ mybucket -include:*.jpg -sim -showfiles (simulation only, show list of files that would be selected for upload)

-showdelete

Show a detailed list of all files selected for deletion from the S3 bucket, not just the summary. Only applicable if -purge is used.

put c:\folder\ mybucket -include:*.jpg -purge -sim -showfiles -showdelete (simulation only, show list of files that would be selected for upload and show list of files that would be deleted from the S3 bucket)

-showlocaldelete

Show a detailed list of all local files selected for deletion from the local folder due to the option -localdelete. Only applicable if -localdelete is used.

put c:\folder\ mybucket -onlydiff -localdelete:'age_months>6' -showfiles -showlocaldelete -sim (simulation only, show list of files that would be selected for upload from c:\folder\ to mybucket and show list of files that would be deleted from c:\folder\ due to the -localdelete option)

-showexcl

This flag can only be used in combination with the -sim flag above. It shows which files would be excluded from the upload.

put c:\folder\ mybucket -include:*.jpg -sim -showexcl (simulation only, show summary of which files would be selected for upload and list which files would be excluded)

-noautostatus

Do not automatically show the latest upload status every 10 seconds. The status can still be shown by pressing the 's' key while the upload is in progress.

put c:\folder\ mybucket -noautostatus (upload files in c:\folder\ to mybucket and do not automatically show the latest upload status every 10 seconds)

-minoutput

Minimal output. Minimize the output shown in the S3Express console during a put operation. This option is useful when copying many small files to S3, which can make the console output scroll too fast to read. Minimal output can be toggled on or off at any time during a put operation by pressing the 'o' key.

put c:\folder\ mybucket -minoutput -s

-stoponerror

Stop operation as soon as an error occurs (do not continue with other files).

put c:\folder\ mybucket -s -stoponerror

-optimize

Enable thread optimization for transferring large numbers of relatively small files over fast connections. Recommended with at least 4 threads (-t:4).

put c:\folder\*.jpg mybucket -s -t:16 -optimize

-accelerate

Use Amazon S3 Transfer Acceleration for this operation. S3Express will use 's3-accelerate.amazonaws.com' as the endpoint for this operation. Transfer Acceleration must first be enabled for the bucket in your account, or this option will fail with an error.

put c:\folder\*.jpg mybucket -accelerate
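Transfer Acceleration can be enabled on a bucket beforehand, for example with the AWS CLI (one possible method, outside of S3Express):

aws s3api put-bucket-accelerate-configuration --bucket mybucket --accelerate-configuration Status=Enabled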

 

Notes:
 
- Files in Windows = Objects in S3.
 
- When uploading files to Amazon S3, the Windows modified timestamp is not kept, because Amazon S3 objects get the time of the upload as their modified timestamp. This is part of Amazon S3 functionality and does not depend on S3Express. To preserve information about the original file modified timestamp, S3Express adds two custom metadata headers to each uploaded file: x-amz-meta-s3xpress-modified-time-iso and x-amz-meta-s3xpress-modified-time. The x-amz-meta-s3xpress-modified-time-iso header contains the original file timestamp in ISO format, while the x-amz-meta-s3xpress-modified-time header contains it in HTTP format. You can see these two metadata headers using the command getmeta or ls -showmeta.
 
- If an identical file (i.e. same MD5 value) is already stored on Amazon S3, the file is copied, not uploaded, to save bandwidth. S3Express will show which files were copied (=duplicated) instead of uploaded. This functionality can be disabled using -nomd5existcheck.
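 
Putting it together, a typical incremental mirror of a local folder to a bucket combines the flags documented above (folder and bucket names are placeholders):
 
put c:\data\ mybucket -s -t:8 -mul -onlydiff -purge -purgeabort:100 (upload new and changed files from c:\data\ and subfolders to mybucket using 8 threads and multipart uploads; delete objects in mybucket that no longer exist locally, aborting the purge if more than 100 objects would be deleted)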

Retry on network error:
The number of retries performed in case of a network error, and the wait time between retries, can be set in the general S3Express options using the command setopt.