[Cyberduck-trac] [Cyberduck] #9131: Files greater than 100GB fail to upload (was: Files greater than 100GB fail to upload to Swift.)

Cyberduck trac at trac.cyberduck.io
Wed Nov 25 14:26:26 UTC 2015


#9131: Files greater than 100GB fail to upload
-----------------------+-------------------------
 Reporter:  jmamma     |         Owner:  dkocher
     Type:  defect     |        Status:  assigned
 Priority:  high       |     Milestone:  4.8
Component:  openstack  |       Version:  4.7.3
 Severity:  critical   |    Resolution:
 Keywords:             |  Architecture:
 Platform:             |
-----------------------+-------------------------

Old description:

> '''Cyberduck has an arbitrary 100GB Object Size limit.'''
>
> '''Observed Behavior:'''
>
> When uploading an object larger than 100GB to Swift Storage, all segments
> are transferred, but the completion step fails at 100% with:
>
> "Request Entity Too Large. 413 Request Entity Too Large"
>
> The Object manifest is not created.
>
> '''It appears to be related to the design decision below:'''
>
> https://trac.cyberduck.io/ticket/7772
>
> https://trac.cyberduck.io/changeset/14143/trunk/source/ch/cyberduck/core/Preferences.java
>
> "We changed the part size for multipart uploads to 10MB in r14143 to
> allow multipart uploads up to 100GB in total size due to the maximum
> number of parts restriction of 10'000 by S3. Please try with the latest
> snapshot build available and reopen this ticket if you are still having
> this issue."
>
> '''Cause:'''
>
> From my understanding, you're failing objects greater than 100GB to
> prevent more than 10,000 segments from being created (an S3 limit).
> OpenStack Swift does not have a hard upper limit on the total number of
> objects stored in a container.
>
> '''Workaround:'''
>
> I've been able to work around this by overriding the default user setting
> `openstack.upload.largeobject.size` to 1,048,576,000 bytes (roughly 1GB),
> which is approximately the default segment size used by the Python Swift
> Client; this raises the Cyberduck object size limit to 1TB.
>
> Manually editing Cyberduck's user.config XML file is a less than
> desirable solution for our users, who routinely upload files of 100GB or
> larger.

New description:

 '''Cyberduck has an arbitrary 100GB Object Size limit.'''

 '''Observed Behavior:'''

 When uploading an object larger than 100GB to Swift Storage, all segments
 are transferred, but the completion step fails at 100% with:


 {{{
 Request Entity Too Large. 413 Request Entity Too Large
 }}}


 The Object manifest is not created.

 '''It appears to be related to the design decision below:'''

  * #7772
  *
 [https://trac.cyberduck.io/changeset/14143/trunk/source/ch/cyberduck/core/Preferences.java
 r14143]

  > We changed the part size for multipart uploads to 10MB in r14143 to
 allow multipart uploads up to 100GB in total size due to the maximum
 number of parts restriction of 10'000 by S3. Please try with the latest
 snapshot build available and reopen this ticket if you are still having
 this issue.

 '''Cause:'''

 From my understanding, you're failing objects greater than 100GB to
 prevent more than 10,000 segments from being created (an S3 limit).
 OpenStack Swift does not have a hard upper limit on the total number of
 objects stored in a container.
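
 As a quick sketch, the arithmetic behind that ceiling follows directly from
 the part size and part-count cap quoted above:

 ```shell
 # S3-style multipart math: part size x maximum part count = size ceiling
 part_size=$((10 * 1024 * 1024))   # 10MB parts, as set in r14143
 max_parts=10000                   # S3's maximum number of parts
 echo $((part_size * max_parts))   # prints 104857600000 bytes, i.e. ~100GB
 ```

 Swift imposes no such per-container part-count cap, so this ceiling is
 inherited from S3 rather than required by Swift.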

 '''Workaround:'''

 I've been able to work around this by overriding the default user setting
 `openstack.upload.largeobject.size` to 1,048,576,000 bytes (roughly 1GB),
 which is approximately the default segment size used by the Python Swift
 Client; this raises the Cyberduck object size limit to 1TB.

 Manually editing Cyberduck's user.config XML file is a less than
 desirable solution for our users, who routinely upload files of 100GB or
 larger.
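
 For what it's worth, on macOS the hidden preference can be set from the
 command line instead of hand-editing the XML file. This is a sketch
 assuming Cyberduck's usual `defaults` preferences domain
 `ch.sudo.cyberduck` (verify against your install; Windows users would
 still edit user.config):

 ```shell
 # Override the large-object segment size used for Swift uploads.
 # Value is in bytes; 1,048,576,000 is roughly 1GB, matching the
 # workaround described above. Restart Cyberduck afterwards.
 defaults write ch.sudo.cyberduck openstack.upload.largeobject.size 1048576000
 ```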

--

Comment (by dkocher):

 Thanks for your detailed bug report.

-- 
Ticket URL: <https://trac.cyberduck.io/ticket/9131#comment:3>
Cyberduck <https://cyberduck.io>
Libre FTP, SFTP, WebDAV, S3 & OpenStack Swift browser for Mac and Windows

