gigavishal changed the title from "S3 getFileNames retrieves incomplete list of files when more than 1000 files" to "S3 getFileNames retrieves incomplete list of objects when more than 1000" on Oct 16, 2024.
This came up when I was doing an external data transfer with the `tempDir` option. It successfully loaded the files into the temporary S3 location (~7,500 files), but when reading from that location, it only read the first 1,000 files. This is because a single AWS ListObjects call returns at most 1,000 objects. It's called here: https://github.com/snowflakedb/spark-snowflake/blob/master/src/main/scala/net/snowflake/spark/snowflake/io/CloudStorageOperations.scala#L1852
AWS's listing API supports a continuation token for paging through the full list of objects, so the connector should follow that token instead of stopping after the first page.
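A fix along these lines would loop, passing each response's continuation token back into the next listing call until the listing is exhausted. A minimal sketch of that pattern, with a fake `fetch_page` standing in for the real ListObjectsV2 call (`fetch_page`, `list_all_keys`, and the integer token are illustrative stand-ins, not the connector's actual API):

```python
PAGE_SIZE = 1000  # S3 returns at most 1,000 keys per ListObjectsV2 call


def fetch_page(all_keys, token=None):
    """Fake one listing call: return up to PAGE_SIZE keys and a
    continuation token, or None when the listing is exhausted."""
    start = token or 0
    page = all_keys[start:start + PAGE_SIZE]
    next_token = start + PAGE_SIZE if start + PAGE_SIZE < len(all_keys) else None
    return page, next_token


def list_all_keys(all_keys):
    """Follow continuation tokens until every object has been listed."""
    keys, token = [], None
    while True:
        page, token = fetch_page(all_keys, token)
        keys.extend(page)
        if token is None:
            return keys


bucket = [f"part-{i:05d}" for i in range(7500)]
assert len(list_all_keys(bucket)) == 7500  # all objects, not just the first 1,000
```

The same loop shape applies with the real SDK: keep calling the list operation with the previous response's continuation token until the response reports no more pages.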
As a workaround for my use case, I think I can make `s3maxfilesize` sufficiently large to ensure we don't end up with more than 1,000 files.