S3Express: Amazon S3 Command Line Utility

Amazon S3 command line utility for Windows. Copy, query, backup multiple files to S3 using the Windows command line.

S3Express FAQ and Knowledge Base

FAQ
 How Do I Backup to Amazon S3 with S3Express?
 How Do I Install S3Express?
 How Do I Uninstall S3Express?
 Is a Manual Available for S3Express?
 Is S3Express multithreaded?
 Why Use Amazon S3 for Cloud Storage?
How-To
 How to backup to Amazon S3
 How to calculate the total size of a bucket
 How to configure S3Express for Minio S3 compatible service
 How to list all non-private objects in a bucket
 How to list all public objects in a bucket
 How to move files to S3 (difference between -move and -localdelete)
 How to restore multiple objects from AWS S3 Glacier with one command
 How to set 'retry values' for network errors
 How to throttle bandwidth (for file uploads to S3)
 How to upload only changed or new files since the last upload to S3 (-onlydiff)
 How to upload very large files to Amazon S3 (efficiently)
 How to use the -onlydiff switch with local encryption (-le)
Known Errors with Solutions
 Error "400 - AuthorizationHeaderMalformed - The authorization header is malformed; Invalid credential date. Date is not the same as X-Amz-Date."
 Error 51 when accessing S3 buckets with periods (.) in the name
 InvalidRequest - The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
Purchasing / Licensing
 After Purchase, When Do I Receive the License Key?
 Can I Try S3Express for Free Before I Buy?
 How Do I Buy S3Express?
 How Do I Enter the License I Bought in S3Express?
 How Much Does S3Express Cost?
 I Ordered S3Express but I Have Not Received My License.
 Which Credit Cards Do You Accept?
 Who Will Be Handling My Payment?
Release History
 Upgrading S3Express (upgrade procedure)
 Version 1.1 (Feb 2014 - Initial Release)
 Version 1.2 (May 2014)
 Version 1.3 (Jun 2014)
 Version 1.3.3 (Jun 2014)
 Version 1.3.4 (Jul 2014)
 Version 1.3.5 (Jul 2014)
 Version 1.4 (Aug 2014)
 Version 1.4.1 (Aug 2014)
 Version 1.5.1 (Sep 2014)
 Version 1.5.3 (Oct 2014)
 Version 1.5.4 (Oct 2014)
 Version 1.5.5 (Nov 2014)
 Version 1.5.6 (Aug 2015)
 Version 1.5.7 (Nov 2015)
 Version 1.5.8 (Feb 2016)
 Version 1.5.9 (Jun 2018)
 [2019] Version 1.5.10 (Jul 2019)
 [2019] Version 1.5.11 (Sep 2019)
 [2021] Version 1.5.12 (Nov 2021)
Security Considerations
 Connection security (HTTPs)
 Enforcing 'private access only' for all objects in a bucket
 Enforcing server-side encryption for all uploads to a bucket
 How to make a backup to S3 more secure using encryption
 Restricting access to a S3 bucket to specific IP addresses


FAQ


How Do I Backup to Amazon S3 with S3Express?

Backing up your files to Amazon S3 using the S3Express command line utility is very simple!

  1. The first step is to enter your Amazon S3 credentials in S3Express using the command saveauth. This needs to be done only once: S3Express will then remember your credentials each time.
     
  2. Next, execute the put command. This uploads all files, or just selected files, from any local or network directory you specify to a bucket in your Amazon S3 account. Files can be selected based on folder, extension, size, age, etc. You can save the command in a shortcut, so next time it can be issued in no time.
     
  3. The first time the put command is executed, S3Express will upload all files that are not already present on Amazon S3. On subsequent runs, you can instruct S3Express to upload only files that have changed since the last upload, plus any new files. This makes the backup very fast!
     
  4. The upload operation can be stopped and restarted at any time. It will run silently in the background and report at the end if there were any errors. If stopped, it will then restart from where it left off.
       
  5. There are many options that can be used to optimize your backup to S3: you can use encryption (local or server-side), limit the bandwidth used by S3Express so the backup does not interfere with other programs that need the network, instruct S3Express to automatically retry a failed upload up to X times, waiting X seconds between retries, in case of a network error, use multiple threads to achieve maximum speed, use Amazon S3 multipart uploads for large files, keep the existing metadata and/or ACL when overwriting files, and even simulate the upload before actually starting it.
      
  6. And once S3Express has finished uploading all files, you can rest assured that all your files are securely and reliably backed up to Amazon S3. Amazon S3 is designed for 99.999999999% durability, with 99.99% availability!
      

The following is an example of a put command used to upload / backup files to S3:

put  c:\myfolder\  mybucketname  -s -onlydiff  -e

This uploads only changed or new files from c:\myfolder\ to <mybucketname> and encrypts the files as they arrive at Amazon S3 (option -e). Changed files are files whose content has changed since the last backup, while new files are files that are not yet present in the Amazon S3 bucket. Option -s instructs S3Express to also upload files that are in subfolders of c:\myfolder\. -onlynew (upload only new files), -onlynewer (upload only files that have a newer timestamp) and -onlyexisting (re-upload only files that are already present on S3) are also available.

More details in the PDF tutorial 'Backup Files to Amazon S3 with S3Express'.



How Do I Install S3Express?

S3Express is very easy to install and it will work out of the box.

Just download one of the installers (32-bit or 64-bit) and run it. S3Express will be installed in its own folder, under "Program Files", unless otherwise specified. Then click on the S3Express icon to start the command line utility. That's it! The entire installation is less than 8MB in size.

S3Express is a self-contained program. It's compatible with all Windows versions, including Windows Servers. It does not require any additional libraries or software to run.

S3Express does not modify any general Windows settings or any other program settings. It does not install any Windows services or Windows drivers. It can be cleanly uninstalled if needed (instructions), without leaving any traces, and by default it runs without administrative privileges (non-admin).

The S3Express installer and the S3Express.exe executable are digitally signed by TGRMN Software.



How Do I Uninstall S3Express?

You can remove S3Express like any other Windows program. The uninstall procedure will completely and cleanly remove S3Express from your computer.

Follow these steps:

Windows 10, Windows 8, Windows 7, Windows Vista, Windows Server 2016, Windows Server 2012, Windows Server 2008

  1. Close S3Express.
  2. Go to the Start menu > Control Panel.
  3. Click Programs and Features.
  4. Double-click S3Express.
  5. Click Yes when asked to confirm that you want to uninstall S3Express.
  6. The uninstall procedure will completely and cleanly remove S3Express from your system.

Windows XP, Windows Server 2003, Windows Server 2000

  1. Close S3Express.
  2. Go to the Start menu > Control Panel.
  3. Click Add or Remove Programs.
  4. Double-click S3Express.
  5. Click Yes when asked to confirm that you want to uninstall S3Express.
  6. The uninstall procedure will completely and cleanly remove S3Express from your system.



Is a Manual Available for S3Express?

Yes, the S3Express manual is available in HTML format and PDF format.



Is S3Express multithreaded?

Yes, S3Express is a multithreaded application.

S3Express supports multithreaded operations to upload and query multiple S3 items concurrently. This can speed up S3 operations considerably.
Multithreading helps speed things up, as it lets you make full use of all the available bandwidth, especially when uploading, deleting or listing a large number of files that are relatively small.

The number of concurrent file uploads to perform can be set using the -t flag of the PUT command: www.s3express.com/help/vv_put.html

The number of concurrent threads to be used by S3Express when deleting, listing or querying S3 objects can be set using the option -qmaxthreads: www.s3express.com/help/vv_options.html
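
For example, to upload using 8 parallel threads and to allow up to 16 concurrent threads for listing/querying operations (the values are illustrative; adjust them to your connection and workload):

put c:\myfolder\ my_bucket_name -s -t:8
setopt -qmaxthreads:16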



Why Use Amazon S3 for Cloud Storage?

Amazon Simple Storage Service, also known as Amazon S3, is an online storage facility. It is inexpensive, fast and easy to set up. It's a service provided by e-commerce giant Amazon, so you can rest assured that whatever you store on S3 is secure. Even large file services like Dropbox use Amazon S3 as their back-end storage.

Inexpensive

In Amazon S3, there are no initial charges or setup costs. You only pay for what you use: how much data you store and how much data you move in and out of the Amazon servers. Plus, upon sign-up, new Amazon customers receive 5 GB of Amazon S3 standard storage, 20,000 Get Requests, 2,000 Put Requests, and 15 GB of data transfer out each month for one year. More details on Amazon S3 pricing can be found here.

Secure

Amazon S3 is built to provide infrastructure that allows the customer to maintain full control over who has access to their data. Customers are also able to easily secure their data in transit and at rest.

Reliable

Store data designed for 99.999999999% durability, with 99.99% availability. There are no single points of failure. All failures are tolerated or repaired by the system without any downtime.

Scalable

Amazon S3 can scale in terms of storage, request rate, and users to support an unlimited number of web-scale applications. It uses scale as an advantage: adding nodes to the system increases, not decreases, its availability, speed, throughput, capacity, and robustness.

Fast

Amazon S3 is fast enough to support high-performance applications. Server-side latency is insignificant relative to Internet latency.

 

The S3Express command line utility provides a simple way for you to upload, query and backup files and folders to Amazon S3 storage, based on flexible criteria. Quickly upload only new or changed files for backup purposes using multipart uploads and concurrent threads, create custom batch scripts, list Amazon S3 files or entire folders, filter files with conditions, query and change object metadata and ACLs, and more.




How-To


How to backup to Amazon S3

See PDF tutorial 'Backup Files to Amazon S3 with S3Express'.



How to calculate the total size of a bucket

To calculate the total size of all objects contained in an S3 bucket, use the following command:

ls my_bucket_name -s -sum

where:
my_bucket_name is the name of your bucket
-s is used to include all subfolders (i.e. recursive)
-sum is used to show just a summary without listing all objects.

The output will be similar to the following:

Bucket: my_bucket_name
5417 Files (10.74GB = 11533607629B)



How to configure S3Express for Minio S3 compatible service

Requires S3Express version 1.5.9 or newer.

You can configure S3Express to work with Minio by setting the following options in S3Express.

A) Set the S3Express end-point to the Minio IP and port number. For example, assuming Minio is running on IP 192.168.1.10 and port 9000:

setopt -endpoint:192.168.1.10:9000


B) Enable the S3Express option to use path-style requests:

setopt -usepathstyle:on


C) If your Minio installation only supports HTTP and not HTTPs, set S3Express to use the HTTP protocol:

setopt -protocol:http


D) Enter your Minio authorization credentials in S3Express as you would for S3, using the saveauth command.



You are now ready to use S3Express with Minio.



How to list all non-private objects in a bucket

Using S3Express you can easily list all non-private objects in a bucket.
The command to use is the following:

ls my_bucket_name -s -cond:"s3_acl_is_private = false"

where:
my_bucket_name is the name of the bucket
-s is used to include subfolders (i.e. recursive)
-cond is the filtering condition to only list objects which do not have private ACL.

This command will list all non-private objects in an S3 bucket.

If you prefer to see just a summary of the total number of non-private objects in the bucket, instead of listing each object's name, add the flag -sum, e.g.:

ls my_bucket_name -s -cond:"s3_acl_is_private = false" -sum

Depending on the number of objects to check in the bucket, the above command may take some time to complete, because S3Express must query each object's ACL, even though the querying is done in a multithreaded, concurrent fashion.



How to list all public objects in a bucket

Sometimes it can be useful to check if there are publicly accessible objects in a specific S3 bucket. Using S3Express you can easily list all public objects in a bucket. The command to use is the following:

ls my_bucket_name -s -cond:"s3_acl_is_public_read = true"

where:
my_bucket_name is the name of the bucket
-s is used to include subfolders (i.e. recursive)
-cond is the filtering condition to only list objects which have public-access ACL.

This command will list all public objects in an S3 bucket.

If you prefer to see just a summary of the total number of publicly accessible objects in the bucket, instead of listing each object's name, add the flag -sum, e.g.:

ls my_bucket_name -s -cond:"s3_acl_is_public_read = true" -sum

Other options for the filtering condition -cond are s3_acl_is_private or s3_acl_is_public_read_write; see the S3Express Manual for more details.

Depending on the number of objects in the bucket, the above command may take some time to complete, because S3Express must query each object's ACL, even though the querying is done in a multithreaded fashion.



How to move files to S3 (difference between -move and -localdelete)

Requires S3Express version 1.5 or newer (see Release History).

S3Express can be used to move files to Amazon S3. Moving files means that local files are deleted after/only if they are successfully uploaded to S3. To do that, use the option -move of the put command. The -move option instructs S3Express to immediately delete local files after/only if they are successfully uploaded.


A slightly different capability is given by the option -localdelete:COND. This option instructs S3Express to delete local files if they:

- Are not selected to be uploaded (e.g. due to the option -onlydiff or -onlynewer).

- Have a corresponding matching file already on S3 with same path and name.

- Satisfy the condition COND. COND is a condition that follows the general condition rules.


By using the option -localdelete:COND, a more sophisticated move operation can be set up because, unlike the basic -move option, -localdelete will:

- Delete local files only after having verified that they are already on S3, so local files are deleted in a subsequent run of S3Express, not immediately after they are uploaded as with the -move option.

- Delete local files according to a condition, e.g. -localdelete:"age_days > 30" or -localdelete:"size_mb > 50", to delete local files selectively.

- Delete local files that are already on S3 even if these files were not uploaded by S3Express itself.


Note 1: uploading has priority over -localdelete. During the execution of the put command, files are first selected for uploading; after that, the remaining files are selected for local deletion if the option -localdelete is specified.

Note 2: If the condition COND is not specified, that is, only -localdelete is used, then all local files that have a corresponding matching file on S3 will be deleted, regardless of age, size, time, etc.
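
For example (paths, bucket name and condition are illustrative): the first command below moves files immediately, deleting each local file as soon as it is uploaded; the second uploads new/changed files and deletes local files older than 30 days that already have a matching copy on S3:

put c:\myfolder\ my_bucket_name -s -move

put c:\myfolder\ my_bucket_name -s -onlydiff -localdelete:"age_days > 30"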



How to restore multiple objects from AWS S3 Glacier with one command

Restoring a small number of objects from AWS S3 Glacier is not a problem: you can use the AWS Web Console and restore them in the GUI. But if you need to restore a LOT of objects, the manual approach is not feasible.

With S3Express, it is easy to restore multiple objects using the restore command. The restore command in S3Express fully supports file masks and conditions, so you can restore objects based on name, extension, size, ACL, metadata, time, age and much more. See S3Express Restore Command for all details.


Some examples of commands that can be issued are:


restore mybucket/a.txt -days:5 -tier:Expedited

restore *.jpg -s -days:10 -tier:Bulk

restore mybucket/* -s -days:1  (restore all objects in mybucket and subfolders for 1 day)

restore "mybucket/my folder/.*\.txt|.*\.vsn" -r -days:2 (restore all objects with extension txt or vsn in mybucket/my folder/ for 2 days)

restore mybucket/^r.* -r  -days:2 (restore all objects starting with 'r' in mybucket for 2 days)

restore mybucket -cond:"name starts_with 'a'" -days:7 (restore all objects in mybucket (non-recursive) for 7 days if the name starts with 'a')

restore mybucket -s -cond:"extract_value(cache-control,'max-age') > 0" -days:1 (restore all objects in mybucket (recursive, include subfolders) for 1 day if cache-control:max-age > 0 in the metadata)


See S3Express Restore Command for all details.



How to set 'retry values' for network errors

You can instruct S3Express to automatically retry in case of a network error by setting the global options -retry and -retrywait with the command setopt: www.s3express.com/help/vv_options.html

The option -retry sets the number of retries performed by S3Express in case of a network error. By default S3Express retries 3 times.

The option -retrywait sets the wait time, in seconds, before a retry. The default value is 5 seconds.

If you do not want S3Express to retry on network error, set -retry to 0.
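
For example, to retry up to 5 times, waiting 10 seconds between retries (the values are illustrative):

setopt -retry:5
setopt -retrywait:10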



How to throttle bandwidth (for file uploads to S3)

You can limit the maximum bandwidth used by S3Express during file uploads. 

The maximum bandwidth can be set via the flag -maxb of the put command, see manual: www.s3express.com/help/vv_put.html

For example, using -maxb:100 will instruct S3Express to use maximum 100KB/sec to upload files.
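
A complete example (local path and bucket name are illustrative):

put c:\myfolder\ my_bucket_name -s -maxb:100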



How to upload only changed or new files since the last upload to S3 (-onlydiff)

The following is an example of the put command that can be used to upload only changed or new files since the last upload to an Amazon S3 bucket:

put c:\myfolder\  my_bucket_name  -s -onlydiff

where:

c:\myfolder\ : the local folder to upload.
my_bucket_name : the S3 bucket name.
-s : include subfolders too (= recursive).
-onlydiff : only upload files that are different compared to the matching files already in the S3 bucket. Different files are files that have the same path and the same name, but a different MD5 value. Different files are also files that are not yet uploaded to S3, i.e. new files. So using the '-onlydiff' flag will upload files that are not yet on S3, plus all the files whose content has changed compared to the files already on S3.


The -onlydiff flag can be used to perform incremental uploads / backups to Amazon S3 from the command line. S3Express will first compare the local files with the corresponding remote files and will then upload only the files whose MD5 value is different, or files that do not yet have a corresponding matching file on S3 (that is, new files).


Other flags that could be used instead of -onlydiff are :

-onlynewer : only upload files that are newer compared to the matching files that are already on S3. Newer files are files that have the same path and the same name but a newer modified time. Newer files are also files that are not yet uploaded to S3. So using the '-onlynewer' flag uploads files that are not yet on S3, plus all the files whose timestamp is newer compared to the files already on S3.
The difference between -onlynewer and -onlydiff is that -onlydiff uses the MD5 value to compare files (files are different if their MD5 values differ), while -onlynewer uses the file timestamp (a file is uploaded if its timestamp is newer).

-onlynew : only upload files that are new, that is, files not yet on S3. Using -onlynew only uploads files that do not yet have a corresponding matching file on S3.

-onlyexisting : only upload files that are already existing on S3. Using -onlyexisting only uploads files that already have a corresponding matching file with same name and path on S3.
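
For instance, to run the same incremental upload using timestamp comparison instead of MD5 comparison (local path and bucket name are illustrative):

put c:\myfolder\ my_bucket_name -s -onlynewer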



How to upload very large files to Amazon S3 (efficiently)

Using Amazon S3 and the S3Express command line you can upload very large files (e.g. hundreds of megabytes or even multiple gigabytes) to an S3 bucket efficiently.

The main issues with uploading large files over the Internet are:

  • The upload could be involuntarily interrupted by a transient network issue. If that happens, the whole upload could fail and would need to be restarted from the beginning. With a very large file, this wastes time and bandwidth.

  • With a large file, the upload could be voluntarily interrupted by the user, with the intent of continuing it at a later stage. In that case too, the whole upload would need to be restarted from the beginning.

  • Because the upload is one big file, only one thread at a time can be used to upload it, which makes the transfer quite slow.


All of the above issues are solved using multipart uploads.

By specifying the flag -mul of the command put when uploading files, S3Express will break the files into chunks (by default each chunk will be 5MB) and upload them separately.

You can instruct S3Express to upload a number of chunks in parallel using the flag -t.
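
For example, a multipart upload using 4 parallel threads (local path, bucket name and thread count are illustrative):

put c:\bigfiles\ my_bucket_name -s -mul -t:4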

If the upload of one single chunk fails, for whatever reason, or if the upload is interrupted, you can simply restart the uncompleted upload and S3Express will restart from the last successful chunk instead of having to re-upload the entire file. If you do not want to restart an unfinished multipart upload, you can use the command rmupl to remove the uncompleted upload.

Once all chunks are uploaded, the file is reconstructed at the destination to exactly match the origin file. S3Express will also recalculate and apply the correct MD5 value.

The multipart upload feature in S3Express makes it very convenient to upload very large files to Amazon S3, even over less reliable network connections, using the command line.



How to use the -onlydiff switch with local encryption (-le)

When using local encryption (-le), the MD5 value of the files that are uploaded to S3 no longer matches the local MD5 value, because the files are encrypted before being uploaded. This prevents the -onlydiff switch from working properly. However, there are alternatives:

  • One alternative is to use the -onlynewer switch instead of the -onlydiff switch.

    The -onlynewer switch instructs S3Express to only upload files whose local timestamp is newer than that of the matching files already on S3. Checking the timestamps of S3 objects is very fast, because the S3 metadata of each object does not need to be queried.

  • The other possibility is to use the original MD5 value that is stored in the S3 metadata of locally encrypted files. When uploading locally encrypted files to S3, S3Express stores the original MD5 value in the metadata header x-amz-meta-s3xpress-encrypted-orig-md5. So, by adding the following condition to the upload command, you can upload only files that have a changed MD5: -cond:"md5<>x-amz-meta-s3xpress-encrypted-orig-md5".

    For instance: put c:\localfolder\ bucketname -s -le
    -cond:"md5<>x-amz-meta-s3xpress-encrypted-orig-md5"

    Note that in this case the metadata of each S3 object must be checked, which can take a long time if there are many files to check (it also depends on the available connection speed to Amazon S3).




Known Errors with Solutions


Error "400 - AuthorizationHeaderMalformed - The authorization header is malformed; Invalid credential date. Date is not the same as X-Amz-Date."

Error

When accessing Amazon S3, S3Express reports the error "400 - AuthorizationHeaderMalformed - The authorization header is malformed; Invalid credential date. Date is not the same as X-Amz-Date."

Solution

Please upgrade S3Express to version 1.5.10 or newer. Upgrading is free if you already purchased a license for S3Express version 1, see: www.s3express.com/kb/item29.htm



Error 51 when accessing S3 buckets with periods (.) in the name

Error

When accessing Amazon S3 buckets with periods (.) in the name, S3Express reports error "51 - SSL: no alternative certificate subject name matches target host name".

This error is caused by validation failure of the SSL certificate for the Amazon S3 servers.

Solution

Upgrade S3Express to version 1.5.4 or newer. Upgrading is free if you already purchased a license for S3Express version 1, see: www.s3express.com/kb/item29.htm



InvalidRequest - The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

Error

When accessing Amazon S3 buckets that are located in the new AWS Germany (Frankfurt) region, S3Express reports the error "400 - InvalidRequest - The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."

Solution

AWS Signature Version 4 is required to access buckets in the AWS Germany (Frankfurt) region. Please upgrade S3Express to version 1.5.5 or newer. Upgrading is free if you already purchased a license for S3Express version 1, see: www.s3express.com/kb/item29.htm




Purchasing / Licensing


After Purchase, When Do I Receive the License Key?

Paying by Credit Card, PayPal

After we receive your credit card or PayPal order (online or by fax/phone) and the charge is authorized, you will receive an e-mail with the License Key. Most transactions are immediate; however, in some cases it may take a few hours to authorize the transaction.

In the unlikely event that your credit card is declined, you will receive an e-mail stating the reason for this.

If you do not receive any e-mail within 24 hours, please contact us at sales@s3express.com with your name and the date and time of your order to obtain the status of your order.

Other Payments (Wire Transfer, Check, Pro-Forma)

The License Key is sent via e-mail once the payment is received and processed.



Can I Try S3Express for Free Before I Buy?

Yes, you can download the S3Express 21-day trial. The trial is free and fully functional.

After the trial period ends, you will need to purchase one or more licenses.

If you do not wish to purchase after evaluation, the included uninstall routine will completely and cleanly remove S3Express from your computer.



How Do I Buy S3Express?

You can buy S3Express over the Internet, or by Phone/Fax, paying with Credit Card, PayPal, Check or by Wire Transfer.

After we receive your order, we'll send you a License Key via e-mail. You will then need to enter this License Key in the S3Express trial that you have downloaded and installed on your computer. S3Express will unlock and become registered in your name.



How Do I Enter the License I Bought in S3Express?

You can enter the license in S3Express using the S3Express license command.



How Much Does S3Express Cost?

See detailed S3Express price list on this page: www.s3express.com/buy.htm



I Ordered S3Express but I Have Not Received My License.

When you order online via our secure server, your license is automatically created and sent to the e-mail account you specified on the ordering form. Usually this takes a few seconds.

If you do not receive our e-mail within 3 hours, one of the following may have happened:

  • You entered your "other" e-mail account when you ordered.
  • Your spam filter rejected the e-mail.
  • You have a spam blocking service and the e-mail with code is stuck in a queue somewhere.

You can also contact us at sales@s3express.com. We will be happy to re-send your license information.



Which Credit Cards Do You Accept?

We accept all major Credit Cards (Visa, MasterCard/EuroCard, American Express, Diners' Club, etc.). During the ordering process, you will be given the option to use fax or phone as an alternative to the standard online credit card order. We also accept PayPal.



Who Will Be Handling My Payment?

Your payment will be processed by SWREG, one of Digital River’s (NASDAQ:DRIV) MyCommerce solutions and the oldest on-line software store in the world. Your order is fully guaranteed.

All Digital River’s MyCommerce solutions are secured by Digital River’s enterprise e-commerce infrastructure, which includes a proven payment gateway, advanced fraud prevention and 24x7 customer service.




Release History


Upgrading S3Express (upgrade procedure)

The latest version is always available from the download page.

S3Express is designed to support “install over the top” upgrades from any version to any newer version.

The recommended upgrade procedure is:

  1. Download the latest version from the download page.

  2. Make sure S3Express is not running, then run the installer and install into the same location as the existing installation.

  3. That's it! All S3Express settings and your license are preserved.



Version 1.1 (Feb 2014 - Initial Release)

Version 1.1 was the initial public release of S3Express.



Version 1.2 (May 2014)

New Features:

  • Added -h command line switch to run S3Express completely hidden (i.e. no console and no user interaction).
  • Detect Windows proxy server settings and use automatically.
  • Ability to set a proxy server manually using the command setopt and the new option -proxyserver.
  • Support 7zip as local encryption program.
  • Support a user-defined program as local encryption program.

Enhancements:

  • Retry on request timeouts (treat a request timeout in the same manner as a network error)
  • Del command: do not ask for deletion confirmation if running simulation only.
  • Report a file size error instead of a timeout error if file size changes during uploads.
  • Made regular expressions in S3Express case sensitive by default (previously they were case insensitive). Use (?i) to make a regular expression case insensitive.
  • Switches -rinclude and -rexclude now apply to object/file names only, not to their entire path. To exclude or include objects/files based on path, use the variables 'path' and/or 's3_path' in the -cond switch.
  • The -reset switch of command setopt can now be applied to specific selected options only.
  • Ability to show values of specific options with command showopt.

Fixes:

  • Fixed issues with switches -d and -od in combination with switch -s of command ls.



Version 1.3 (Jun 2014)

New Features:

  • Added support for object versioning to ls command (two new switches: -inclversions and -showverids).
  • Added support for object versioning in commands del, restore, getacl and getmeta.
  • Added new command restore which restores S3 objects that have been archived to Glacier storage.
  • Added new option -protocol to setopt command which can be used to set the communication protocol to http instead of default https. The http protocol is not secure, but it can be quicker in certain cases.
  • Added new variables to the list of filter condition variables. The new variables are related to S3 object expiration and Glacier restore. See help file for more details.
    The new variables are: s3_version_id, s3_is_latest_version, s3_is_delete_marker, s3_object_max_age, s3_object_expires, expiry_date, expiry_year, expiry_month, expiry_day, expiry_dayofweek, expiry_dayofyear, expiry_endofmonth, expiry_weeknumber, expiry_hour, expiry_minute, expiry_second, expiry_time, expiry_timestamp, expiry_months, expiry_days, expiry_hours, expiry_mins, expiry_secs, glacier_restored, restore_ongoing_request, restore_expiry_date, restore_expiry_year, restore_expiry_month, restore_expiry_day, restore_expiry_dayofweek, restore_expiry_dayofyear, restore_expiry_endofmonth, restore_expiry_weeknumber, restore_expiry_hour, restore_expiry_minute, restore_expiry_second, restore_expiry_time, restore_expiry_timestamp, restore_expiry_months, restore_expiry_days, restore_expiry_hours, restore_expiry_mins, restore_expiry_secs.



Version 1.3.3 (Jun 2014)

New Features:

  • Added -nomd5existcheck switch to put command. This new switch can be used to disable cross-checking of MD5 values. See help file for more information.
  • Added -minoutput switch to put command. This new switch can be used to minimize the output that is shown in the S3Express console during a put operation.

Enhancements:

  • Automatic retry upload if Amazon S3 servers return "slow-down" or "internal error, please retry".
  • Fixed issue with uploading certain files larger than 2GB. Tested multiple uploads of files as large as 100GB or more in multipart upload mode.
  • Added progress report to S3Express output when listing S3 buckets (or local folders) that contain a large amount of files (e.g. one million files or more).



Version 1.3.4 (Jul 2014)

New Features:

  • Added -timeout option to setopt command.

    This option sets the timeout in seconds for each communication between S3Express and Amazon S3. The default value is 60 seconds. Set timeout to 0 to disable timeout (not recommended). If no data is exchanged between S3Express and Amazon S3 within the time specified by the -timeout option, then the request is aborted. A new request is then initiated if the -retry option allows it.

Fixes:

  • Return exit code 0 (success) and not 1 (error) if the put command does not select any files to upload.



Version 1.3.5 (Jul 2014)

Enhancements:

  • Improved speed when uploading one single file to a bucket that already contains several thousand files.

Fixes:

  • Fixed an issue with uploading files with a quote (') in the file name.
  • Fixed several issues involving the cd command.



Version 1.4 (Aug 2014)

New Features:

  • New option -purge for PUT command: delete files from S3 that do not exist locally. For buckets that have versioning enabled, the deleted files are kept as previous versions.
  • New option -purgeabort for PUT command: abort the purge operation if more than X S3 files are to be deleted.
  • New option -stoponerror for PUT command: stop upload or purge operation as soon as first error occurs, do not continue with the rest of the files.
  • New option -onlyprev for LS, DELETE and RESTORE commands: include only previous versions of objects (for buckets that have versioning enabled).
  • New option -minoutput for DELETE command: minimize the output shown in the console during a delete operation. Only total deleted files and eventual errors are shown.
  • New commands (these are useful for multi-command processing): OnErrorSkip, ResetErrorStatus, ShowErrorStatus, Pause. See manual for details.
  • New filter condition variable S3_prev_version_number: contains the previous version number, e.g. the last previous version of an object is the number 1, then 2, 3, etc.

Fixes:

  • When processing multiple commands contained in a text file, do not skip the next commands automatically if an error occurs.



Version 1.4.1 (Aug 2014)

New Features:

  • New command getbktinfo: show bucket information such as bucket's policy, lifecycle configuration, versioning status, logging, etc. 
  • New command checkupdates: opens a web browser and shows if there are more up-to-date versions of S3Express available for download.
  • The showopt command now highlights all the options that are not set to default values.

Fixes:

  • Fixed a bug that could cause a crash in case of a network error happening while finalizing a multipart upload.



Version 1.5.1 (Sep 2014)

New Features:

  • New put command options -move, -localdelete and -showlocaldelete : move files to S3, delete local files after successful upload and/or delete local files if matching S3 files. See how to move files to S3

Improvements:

  • Updated internal HTTP and HTTPs libraries to latest versions.
  • Fixed application crash that could occur while uploading files with HTTPs protocol and multiple threads.



Version 1.5.3 (Oct 2014)

New Features:

  • New PUT command option -optimize: enable thread optimization for transferring large amounts of relatively small files over fast connections. Recommended to use with at least 4 threads (-t:4).

  • Increased the maximum threads that can be used in the PUT command to 32.

Improvements:

  • PUT command: the remote Amazon S3 bucket is now scanned only if the upload condition contains an S3 object variable.

Fixes:

  • PUT command: fixed a file naming issue that could occur when uploading files from the root folder of a drive (e.g. E:\)



Version 1.5.4 (Oct 2014)

New Features:

  • New PUT option -nobucketlisting: this option forces S3Express not to list the remote S3 bucket. Instead of listing the remote S3 bucket before the put operation starts, S3Express will check, file by file, whether a local file needs to be uploaded. This option can be quite slow, but it is faster when only a few files are to be uploaded to a large S3 bucket that already contains a lot of files.

  • New global option -disablecertvalidation (advanced): disable SSL certificate validation over https.

Improvements:

  • Restored support for S3 buckets with periods '.' in the name (over the https protocol).

Fixes:

  • Fixed an issue that could occur when using the new PUT option -optimize with the option -mul.



Version 1.5.5 (Nov 2014)

Improvements:

  • Added support for Amazon AWS signature version 4. This is required to access all new Amazon regions, such as the new AWS Germany (Frankfurt) region.



Version 1.5.6 (Aug 2015)

Fixes:

  • Fixed error "Auto configuration failed" that could occur when starting S3Express.
  • Fixed error that could occur when uploading very large files (S3Express 32-bit version only).



Version 1.5.7 (Nov 2015)

Fixes:

  • Fixed issue with possible >4GB corrupted files after upload.

Improvements:

  • When using -onlydiff with the command put, S3Express now calculates the MD5 value of local files only if a matched file name on S3 is found. This speeds up the comparison and upload.
  • -onlydiff now also works for large files >4GB.
  • New MD5 calculation for multi-part upload mode matching the Amazon S3 calculation.
  • -nomulmd5 option of command put now used only for files <1GB to speed up upload.
  • -nomd5existcheck of command put now used only for files <200MB to speed up upload.



Version 1.5.8 (Feb 2016)

New:

  • Added support for the new S3 storage class Standard - Infrequent Access (Standard - IA). The new Standard - Infrequent Access storage class offers the high durability, low latency, and high throughput of Amazon S3 Standard, but with lower prices, $0.01 per GB retrieval fee, and a 30-day storage minimum. This combination of low cost and high performance makes Standard - IA ideal for long-term file storage, backups, and disaster recovery.

  • Added new command pwd to show current local working directory.



Version 1.5.9 (Jun 2018)

New:

  • Added path-style requests support to be used with S3 compatible services, such as Minio. The new option is called -usepathstyle and can be set to ON if needed.

  • Added automatic time adjustment in case of a "RequestTimeTooSkewed" error from the S3 server.

Fixes:

  • Fixed values of "age" variables in conditions.



[2019] Version 1.5.10 (Jul 2019)

New:

  • Added the new -accelerate option to the put command to support S3 accelerated transfers.

  • Switched S3Express to use AWS Signature Version 4 by default for all regions.

Fixes:

  • Fixed issue with wrong time-zone handling which could result in the error "400 - AuthorizationHeaderMalformed - The authorization header is malformed; Invalid credential date. Date is not the same as X-Amz-Date."



[2019] Version 1.5.11 (Sep 2019)

New:

  • Added new -tier option to the restore command to support different S3 restore tiers.



[2021] Version 1.5.12 (Nov 2021)

New:

  • Added new -region option.
  • Fixed an issue with Wasabi and Backblaze.
  • Bug fixes.




Security Considerations


Connection security (HTTPs)

During file listings, file uploads and file queries, files are transferred between the computer where S3Express is running and the Amazon S3 servers, so it's very important that the communication channel between the two is secure.


To achieve this level of security, S3Express automatically uses the HTTPs protocol (HTTPs = Hypertext Transfer Protocol Secure) to connect and communicate to the Amazon S3 servers. No special settings are needed: it is used by default.


The HTTPs protocol encrypts all the data flow between the client (the computer where S3Express is running) and the server (the Amazon S3 servers). This is the same protocol that is generally used to communicate to a bank site when using a web browser.


The HTTPs protocol protects against eavesdropping and tampering with and/or forging the contents of the communication. It provides a guarantee that one is communicating with precisely the Amazon S3 servers as well as ensuring that the contents of the communications between S3Express and the Amazon S3 servers (file transfers, file listings, etc.) cannot be read or forged by any third party.

HTTP can optionally be used instead of the default HTTPs. You can enable HTTP instead of HTTPs with the setopt command.
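
For example (the same option is used in the Minio how-to above):

setopt -protocol:http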



Enforcing 'private access only' for all objects in a bucket

When uploading files to an S3 bucket for backup purposes, it's important to make all uploaded objects private, that is, accessible only by the owner and not by the public. This is already the default in S3Express, unless otherwise specified. However, to avoid mistakes, this requirement can also be explicitly enforced by using a bucket policy similar to the following one:

 

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Sid": "PrivateAclPolicy",  "Effect": "Deny",
   "Principal": { "AWS": "*"},
   "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
   ],
   "Resource": [
    "arn:aws:s3:::bucket_name/*"
   ],
   "Condition": {
    "StringNotEquals": {
     "s3:x-amz-acl": [
      "private"
     ]
    }
   }
  }
 ]
}

Replace bucket_name with the name of your bucket.


This policy will only allow objects to be uploaded to the bucket if the ACL is explicitly set to "private"; otherwise access will be denied. This policy also ensures that the ACL cannot later be changed from private.

The following is an example of uploads explicitly made private in S3Express:

put c:\folder\ bucket_name -s -cacl:private

-cacl:private explicitly makes all uploaded objects private. This is the default (if -cacl is not specified), but the bucket policy above now requires it to be explicitly specified or access will be denied.


To set a bucket policy you can use the Amazon S3 Console.

To verify and to make sure that all the already existing objects in a bucket are correctly set to private, see: www.s3express.com/kb/item24.htm



Enforcing server-side encryption for all uploads to a bucket

Amazon S3 supports bucket policies that you can use to require server-side encryption for all objects stored in your bucket. For example, the following bucket policy denies the upload object permission (s3:PutObject) to everyone if the request does not include the x-amz-server-side-encryption header requesting server-side encryption.

{
   "Version":"2012-10-17",
   "Id":"PutObjPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":{
            "AWS":"*"
         },
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::YourBucket/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"AES256"
            }
         }
      }
   ]
}

In S3Express, the x-amz-server-side-encryption header is added by using the -e flag of the PUT command.
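
With the policy above in place, every upload must explicitly request server-side encryption with the -e flag, for example (local path illustrative):

put c:\myfolder\ YourBucket -s -e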



How to make a backup to S3 more secure using encryption

File encryption can optionally be used to make a backup to S3 more secure.

S3Express already automatically encrypts files while they are in transit to and from the Amazon S3 servers; however, files can also be stored encrypted on the Amazon S3 servers (i.e. at rest).


S3Express provides two types of encryption: server-side encryption and client-side encryption.


Server-side encryption is about data encryption at rest, that is, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. Amazon S3 manages encryption and decryption for you. For example, if you share your objects using a pre-signed URL, the pre-signed URL works the same way for both encrypted and unencrypted objects.

Amazon S3 Server Side Encryption employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 Server Side Encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

When you upload one or more objects with S3Express, you can explicitly specify in your request if you want Amazon S3 to save your object data encrypted. To specify that you want Amazon S3 to save your object data encrypted use the flag -e of the S3Express command PUT. Server-side encryption is optional. Your bucket might contain both encrypted and unencrypted objects.


With client-side encryption, you add an extra layer of security by encrypting data locally before uploading the files to Amazon S3. Client-side encryption and server-side encryption can be combined and used together. In S3Express, client-side encryption is provided by AesCrypt.exe; see the -le flag of the PUT command.
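
For example, to encrypt files locally before upload and additionally request server-side encryption at rest (local path and bucket name are illustrative):

put c:\myfolder\ my_bucket_name -s -le -e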



Restricting access to a S3 bucket to specific IP addresses

To make your uploads or backups on Amazon S3 even more secure, you can restrict access to an S3 bucket to specific IP addresses.

The following bucket policy grants permission to any user to perform any S3 action on objects in the specified bucket, provided the request originates from the range of IP addresses specified in the condition. The condition in this statement identifies the 192.168.143.* range of allowed IP addresses, with one exception: 192.168.143.188.

{
    "Version": "2012-10-17",
    "Id": "S3PolicyIPRestrict",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket/*",
            "Condition" : {
                "IpAddress" : {
                    "aws:SourceIp": "192.168.143.0/24"
                },
                "NotIpAddress" : {
                    "aws:SourceIp": "192.168.143.188/32"
                }
            }
        }
    ]
}

The IpAddress and NotIpAddress values specified in the condition use the CIDR notation described in RFC 4632. For more information, go to www.rfc-editor.org/rfc/rfc4632.txt


 A printable version of the entire FAQ and Knowledge Base is also available.
 For further queries, please contact us by e-mail at support@s3express.com