[Cyberduck-trac] [Cyberduck] #1977: Problem opening S3 buckets containing many thousands of files

Cyberduck trac at trac.cyberduck.ch
Tue Apr 22 23:58:31 CEST 2008


#1977: Problem opening S3 buckets containing many thousands of files
---------------------------------+------------------------------------------
 Reporter:  mark at markthomson.ca  |       Owner:  dkocher
     Type:  defect               |      Status:  new    
 Priority:  normal               |   Milestone:  3.0    
Component:  amazon-s3            |     Version:  3.0b2  
 Severity:  normal               |    Keywords:         
---------------------------------+------------------------------------------
 I have several S3 buckets that each contain tens of thousands of data
 objects. These data objects are named using the full path and filename of
 the original file (they are backups), for example
 dir1/dir2/dir3/somefile.txt.

 When Cyberduck opens the bucket, it begins to download the file list and
 becomes unresponsive during this time. The refresh cannot be stopped with
 the Stop button, and the app must be force-quit.

 Is it possible to prevent this by first checking the number of files in a
 bucket and warning the user before attempting to fetch the list?
 Alternatively, is it possible to download the file list in stages (i.e.
 1,000 at a time)? The last possible solution I can think of is to form a
 request that fetches only the first (pseudo) subdirectory, although from
 my basic understanding of the S3 API, this may not be possible.
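 For reference, S3's GET Bucket (List Objects) request does accept
 max-keys, marker, prefix, and delimiter query parameters, which support
 both the paged listing and the pseudo-subdirectory listing suggested
 above. A minimal sketch of the query a client could build (the bucket
 name and helper function here are illustrative, not part of Cyberduck):

```python
from urllib.parse import urlencode

def list_request_url(bucket, prefix=None, delimiter=None,
                     marker=None, max_keys=1000):
    """Build the query URL for S3's GET Bucket (List Objects) call.

    max-keys caps each response at one page of results; the marker
    from the previous response is passed back to fetch the next page.
    prefix plus delimiter='/' makes S3 roll keys below one level into
    CommonPrefixes, so only a single pseudo-directory is returned.
    """
    params = {"max-keys": max_keys}
    if prefix:
        params["prefix"] = prefix
    if delimiter:
        params["delimiter"] = delimiter
    if marker:
        params["marker"] = marker
    return f"http://{bucket}.s3.amazonaws.com/?{urlencode(params)}"

# Top level of the bucket only, first page:
print(list_request_url("my-backups", delimiter="/"))
# Contents of the pseudo-directory dir1/dir2/ only:
print(list_request_url("my-backups", prefix="dir1/dir2/", delimiter="/"))
```

 A client could then loop, passing the last key of each truncated
 response as the next marker, instead of fetching all keys at once.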

 Thanks for making Cyberduck!

-- 
Ticket URL: <http://trac.cyberduck.ch/ticket/1977>
Cyberduck <http://cyberduck.ch>
FTP and SFTP Browser for Mac OS X.


More information about the Cyberduck-trac mailing list