
S3 getFileNames retrieves incomplete list of objects when more than 1000 #589

Open
gigavishal opened this issue Oct 16, 2024 · 0 comments

This came up when I was doing an external data transfer with the tempDir option. It successfully loaded the files into the temporary S3 location (~7,500 files), but when reading from that location, it only read the first 1,000 files.

This is because the AWS ListObjects API returns at most 1,000 objects per call, and the connector only makes a single call here: https://github.com/snowflakedb/spark-snowflake/blob/master/src/main/scala/net/snowflake/spark/snowflake/io/CloudStorageOperations.scala#L1852

AWS provides an API that accepts a continuation token for paging through the full list of objects, so the connector should paginate instead of taking only the first page.
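
For illustration, a minimal sketch of that pagination using the AWS SDK for Java v1's ListObjectsV2 API (the helper name and surrounding structure are my own, not the connector's actual code):

```scala
import scala.collection.JavaConverters._
import com.amazonaws.services.s3.AmazonS3
import com.amazonaws.services.s3.model.{ListObjectsV2Request, ListObjectsV2Result}

// Hypothetical helper: lists every key under a prefix, following
// continuation tokens until the listing is no longer truncated.
def listAllKeys(s3Client: AmazonS3, bucket: String, prefix: String): Seq[String] = {
  val keys = scala.collection.mutable.ArrayBuffer.empty[String]
  var request = new ListObjectsV2Request()
    .withBucketName(bucket)
    .withPrefix(prefix)
  var result: ListObjectsV2Result = null
  do {
    result = s3Client.listObjectsV2(request)
    keys ++= result.getObjectSummaries.asScala.map(_.getKey)
    // Resume the next request where the previous page ended.
    request = request.withContinuationToken(result.getNextContinuationToken)
  } while (result.isTruncated)
  keys.toSeq
}
```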

As a workaround for my use case, I think I can make s3maxfilesize large enough that the transfer never produces more than 1,000 files, for example as sketched below.
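
A sketch of that workaround on the read path (the option value, table name, and the `sfOptions` connection map are assumptions; check the unit `s3maxfilesize` expects for your connector version):

```scala
// Raise s3maxfilesize so the unload produces fewer, larger files and the
// temporary S3 location stays under the 1,000-object page size.
val df = spark.read
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)                       // assumed map of Snowflake connection options
  .option("s3maxfilesize", "100000000")     // ~100 MB per unloaded file (assumed unit: bytes)
  .option("dbtable", "MY_TABLE")            // hypothetical table
  .load()
```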

@gigavishal gigavishal changed the title S3 getFileNames retrieves incomplete list of files when more than 1000 files S3 getFileNames retrieves incomplete list of objects when more than 1000 Oct 16, 2024