S3Express FAQ and Knowledge Base
|Upgrading S3Express (upgrade procedure)
The latest version is always available from the download page.
S3Express is designed to support "install over the top" upgrades from any version to any newer version.
The recommended upgrade procedure is:
- Download the latest version from the download page.
- Run the installer and install into the same location as the existing install. Make sure S3Express is not running.
- That's it! All S3Express settings and license are maintained.
|Version 1.1 (Feb 2014 - Initial Release)
Version 1.1 was the initial public release of S3Express.
- Added -h command line switch to run S3Express completely hidden (i.e. no console and no user interaction).
- Detect Windows proxy server settings and use them automatically.
- Ability to set a proxy server manually using the setopt command and the new option -proxyserver.
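As a sketch (the exact argument syntax is an assumption based on the -t:4 style used elsewhere in this document, and the proxy address is hypothetical), a proxy could be set manually with:

```
setopt -proxyserver:myproxy.local:8080
```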
- Support 7zip as local encryption program.
- Support a user-defined program as local encryption program.
- Retry on request timeouts (treat a request timeout in the same manner as a network error).
- Del command: do not ask for deletion confirmation if running simulation only.
- Report a file size error instead of a timeout error if file size changes during uploads.
- Regular expressions in S3Express are now case sensitive by default. Use (?i) to make a regular expression case insensitive.
- Switches -rinclude and -rexclude now apply to object/file names only, not to their entire path. To exclude or include objects/files based on path, use the variables 'path' and/or 's3_path' in the -cond switch.
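For example (the bucket name is hypothetical, and the colon-separated value syntax is an assumption), a case-insensitive match on file names could look like:

```
ls mybucket -rinclude:"(?i)\.jpg$"
```

Without the (?i) prefix, the same pattern would match only lower-case .jpg names.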
- The -reset switch of command setopt can now be applied to specific selected options only.
- Ability to show values of specific options with command showopt.
- Fixed issues with switches -d and -od in combination with switch -s of command ls.
- Added support for object versioning to ls command (two new switches: -inclversions and -showverids).
- Added support for object versioning in commands del, restore, getacl and getmeta.
- Added new command restore which restores S3 objects that have been archived to Glacier storage.
- Added new option -protocol to setopt command which can be used to set the communication protocol to http instead of default https. The http protocol is not secure, but it can be quicker in certain cases.
- Added new variables to the list of filter condition variables. The new variables are related to S3 object expiration and Glacier restore. See help file for more details.
The new variables are: s3_version_id, s3_is_latest_version, s3_is_delete_marker, s3_object_max_age, s3_object_expires, expiry_date, expiry_year, expiry_month, expiry_day, expiry_dayofweek, expiry_dayofyear, expiry_endofmonth, expiry_weeknumber, expiry_hour, expiry_minute, expiry_second, expiry_time, expiry_timestamp, expiry_months, expiry_days, expiry_hours, expiry_mins, expiry_secs, glacier_restored, restore_ongoing_request, restore_expiry_date, restore_expiry_year, restore_expiry_month, restore_expiry_day, restore_expiry_dayofweek, restore_expiry_dayofyear, restore_expiry_endofmonth, restore_expiry_weeknumber, restore_expiry_hour, restore_expiry_minute, restore_expiry_second, restore_expiry_time, restore_expiry_timestamp, restore_expiry_months, restore_expiry_days, restore_expiry_hours, restore_expiry_mins, restore_expiry_secs.
- Added -nomd5existcheck switch to put command. This new switch can be used to disable cross-checking of MD5 values. See help file for more information.
- Added -minoutput switch to put command. This new switch can be used to minimize the output that is shown in the S3Express console during a put operation.
- Automatically retry uploads if the Amazon S3 servers return "slow-down" or "internal error, please retry".
- Fixed issue with uploading certain files larger than 2GB. Tested multiple uploads of files as large as 100GB or more in multipart upload mode.
- Added progress report to S3Express output when listing S3 buckets (or local folders) that contain a large number of files (e.g. one million or more).
- Added -timeout option to setopt command.
This option sets the timeout in seconds for each communication between S3Express and Amazon S3. The default value is 60 seconds. Set timeout to 0 to disable timeout (not recommended). If no data is exchanged between S3Express and Amazon S3 within the time specified by the -timeout option, then the request is aborted. A new request is then initiated if the -retry option allows it.
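For instance, to raise the per-request timeout to 120 seconds (the colon-separated value syntax is an assumption based on the -t:4 style shown later in this document):

```
setopt -timeout:120
```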
- Return exit code 0 (success) and not 1 (error) if the put command does not select any files to upload.
- Improved speed when uploading a single file to a bucket that already contains several thousand files.
- Fixed an issue with uploading files with a quote (') in the file name.
- Fixed several issues involving the cd command.
- New option -purge for PUT command: delete files from S3 that do not exist locally. For buckets that have versioning enabled, the deleted files are kept as previous versions.
- New option -purgeabort for PUT command: abort the purge operation if more than X S3 files are to be deleted.
- New option -stoponerror for PUT command: stop upload or purge operation as soon as first error occurs, do not continue with the rest of the files.
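Combining the three new options, a mirroring upload with a safety limit could be sketched as follows (the bucket and path are hypothetical, and passing the purge-abort threshold as -purgeabort:100 is an assumption about the syntax):

```
put c:\backup\ mybucket -purge -purgeabort:100 -stoponerror
```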
- New option -onlyprev for LS, DELETE and RESTORE commands: include only previous versions of objects (for buckets that have versioning enabled).
- New option -minoutput for DELETE command: minimize the output shown in the console during a delete operation. Only the total number of deleted files and any errors are shown.
- New commands (these are useful for multi-command processing): OnErrorSkip, ResetErrorStatus, ShowErrorStatus, Pause. See manual for details.
- New filter condition variable S3_prev_version_number: contains the previous version number of an object, e.g. the most recent previous version is number 1, the one before it is 2, and so on.
- When processing multiple commands contained in a text file, the remaining commands are no longer skipped automatically if an error occurs.
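A multi-command text file combining these commands could look like the sketch below (the bucket name and paths are hypothetical; see the manual for the exact semantics of each command):

```
ResetErrorStatus
put c:\data\reports mybucket -minoutput
ShowErrorStatus
put c:\data\logs mybucket -minoutput
ShowErrorStatus
Pause
```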
- New command getbktinfo: shows bucket information such as the bucket's policy, lifecycle configuration, versioning status, logging, etc.
- New command checkupdates: opens a web browser and shows whether a newer version of S3Express is available for download.
- The showopt command now highlights all the options that are not set to default values.
- Fixed a bug that could cause a crash in case of a network error happening while finalizing a multipart upload.
- New put command options -move, -localdelete and -showlocaldelete: move files to S3, delete local files after a successful upload, and/or delete local files that match files already on S3. See the help file on how to move files to S3.
- Updated internal HTTP and HTTPS libraries to the latest versions.
- Fixed application crash that could occur while uploading files with the HTTPS protocol and multiple threads.
- New PUT command option -optimize: enable thread optimization for transferring large amounts of relatively small files over fast connections. Recommended to use with at least 4 threads (-t:4).
- Increased the maximum threads that can be used in the PUT command to 32.
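For example, uploading a folder of many small files with thread optimization enabled (the bucket and path are hypothetical):

```
put c:\manysmallfiles\ mybucket -optimize -t:8
```

The text above recommends at least 4 threads (-t:4) with -optimize; -t:8 stays well within the new 32-thread maximum.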
- PUT command: the remote Amazon S3 bucket is now scanned only if the upload condition contains an S3 object variable.
- PUT command: fixed a file naming issue that could occur when uploading files from the root folder of a drive (e.g. E:\)
- New PUT option -nobucketlisting: this option forces S3Express not to list the remote S3 bucket. Instead of listing the remote S3 bucket before the put operation starts, S3Express checks file by file whether a local file needs to be uploaded. This can be quite slow in general, but it is faster when only a few files are to be uploaded to a large S3 bucket that already contains a lot of files.
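A sketch of the trade-off (hypothetical names): uploading one new file into a bucket that already holds millions of objects avoids the long bucket listing entirely:

```
put c:\incoming\newfile.dat mybucket -nobucketlisting
```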
- New global option -disablecertvalidation (advanced): disable SSL certificate validation over https.
- Restored support for S3 buckets with periods '.' in the name (over the https protocol).
- Fixed an issue that could occur when using the new PUT option -optimize with the option -mul.
- Added support for Amazon AWS signature version 4. This is required to access all new Amazon regions, such as the new AWS Germany (Frankfurt) region.
- Fixed error "Auto configuration failed" that could occur when starting S3Express.
- Fixed error that could occur when uploading very large files (S3Express 32-bit version only).
- Fixed an issue that could leave files larger than 4GB corrupted after upload.
- When using -onlydiff with the command put, S3Express now calculates the MD5 value of local files only if a matched file name on S3 is found. This speeds up the comparison and upload.
- -onlydiff now also works for large files (>4GB).
- New MD5 calculation for multipart upload mode that matches the Amazon S3 calculation.
- The -nomulmd5 option of the put command is now applied only to files smaller than 1GB, to speed up uploads.
- The -nomd5existcheck option of the put command is now applied only to files smaller than 200MB, to speed up uploads.
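Putting the MD5-related improvements together, a differential multipart upload could be sketched as (the bucket and path are hypothetical):

```
put c:\archive\ mybucket -onlydiff -mul
```

With the behavior described above, MD5 values of local files are computed only when a matching file name already exists on S3.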
- Added support for the new S3 storage class Standard - Infrequent Access (Standard - IA). The new Standard - Infrequent Access storage class offers the high durability, low latency, and high throughput of Amazon S3 Standard, but with lower prices, $0.01 per GB retrieval fee, and a 30-day storage minimum. This combination of low cost and high performance makes Standard - IA ideal for long-term file storage, backups, and disaster recovery.
- Added new command pwd to show current local working directory.
- Added path-style requests support to be used with S3 compatible services, such as Minio. The new option is called -usepathstyle and can be set to ON if needed.
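For example, pointing S3Express at an S3 compatible service would involve enabling path-style requests (the ON value is taken from the text above; endpoint configuration is left out here):

```
setopt -usepathstyle:ON
```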
- Added automatic time adjustment in case of a "RequestTimeTooSkewed" error from the S3 server.
- Fixed values of "age" variables in conditions.
|Version 1.5.10 (Jul 2019)
- Added the new -accelerate option to the put command to support S3 accelerated transfers.
- Switched S3Express to use AWS Signature Version 4 by default for all regions.
- Fixed issue with wrong time-zone handling which could result in the error "400 - AuthorizationHeaderMalformed - The authorization header is malformed; Invalid credential date. Date is not the same as X-Amz-Date."
|Version 1.5.11 (Sep 2019)
- Added new -tier option to the restore command to support different S3 restore tiers.
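For example (the object name is hypothetical; Expedited, Standard and Bulk are the retrieval tiers Amazon S3 offers for Glacier restores, and passing the tier as -tier:Bulk is an assumption about the syntax):

```
restore mybucket/archive.zip -tier:Bulk
```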
For further queries, please contact us by e-mail at email@example.com