
SNOW-340077: UnsupportedOperationException sun.misc.Unsafe or java.nio.DirectByteBuffer when running on JDK16 #484

Closed
plavreshin opened this issue Apr 11, 2021 · 15 comments

@plavreshin

Hi!

Snowflake-jdbc version: 3.13.2
JDK version: openjdk:16-slim

I am observing the following issue while trying to fetch data from Snowflake with the JDBC/JDK versions above:

Apr 09, 2021 12:35:31 PM net.snowflake.client.jdbc.SnowflakeChunkDownloader getNextChunkToConsume
SEVERE: downloader encountered error: Max retry reached for the download of #chunk0 (Total chunks: 5) retry=10, error=net.snowflake.client.jdbc.SnowflakeSQLLoggedException: JDBC driver internal error: Exception: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available.
at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.downloadAndParseChunk(SnowflakeChunkDownloader.java:895)
at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.call(SnowflakeChunkDownloader.java:950)
at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.call(SnowflakeChunkDownloader.java:766)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:831)
Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
at net.snowflake.client.jdbc.internal.io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:399)
at net.snowflake.client.jdbc.internal.io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
at net.snowflake.client.jdbc.internal.io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
at net.snowflake.client.jdbc.internal.io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:247)
at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.ReadChannel.readFully(ReadChannel.java:81)
at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.message.MessageSerializer.readMessageBody(MessageSerializer.java:696)
at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:68)
at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.ArrowStreamReader.loadNextBatch(ArrowStreamReader.java:106)
at net.snowflake.client.jdbc.ArrowResultChunk.readArrowStream(ArrowResultChunk.java:78)
at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.downloadAndParseChunk(SnowflakeChunkDownloader.java:879)
... 6 more

Could you please advise whether there is any ETA for removing or replacing the sun.misc.Unsafe usage in favour of VarHandle etc.? Given that the next JDK LTS is scheduled for September 2021, it would be good to get rid of the Unsafe usages for good.

Thanks in advance!

@github-actions github-actions bot changed the title UnsupportedOperationException sun.misc.Unsafe or java.nio.DirectByteBuffer when running on JDK16 SNOW-340077: UnsupportedOperationException sun.misc.Unsafe or java.nio.DirectByteBuffer when running on JDK16 Apr 11, 2021
@sfc-gh-mknister
Contributor

Hi, this error seems to be caused by the Arrow library that we use as a dependency. As a workaround for now, could you try setting -Dio.netty.tryReflectionSetAccessible=true? I see this in their documentation: "For java 9 or later, should set "-Dio.netty.tryReflectionSetAccessible=true". This fixes java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available. thrown by netty."
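
If you can't easily pass JVM flags to your deployment, the same property can in principle also be set programmatically, as long as that happens before the first driver/Netty class initializes (the shaded PlatformDependent0 reads the flag once, in a static initializer). A rough sketch with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SnowflakeJdk9PlusExample {
    public static void main(String[] args) throws Exception {
        // Equivalent of -Dio.netty.tryReflectionSetAccessible=true; must run
        // before any Snowflake/Netty class is loaded, since the flag is read
        // once in a static initializer.
        System.setProperty("io.netty.tryReflectionSetAccessible", "true");

        Properties props = new Properties();
        props.put("user", "<user>");          // placeholders
        props.put("password", "<password>");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://<account>.snowflakecomputing.com", props)) {
            // ... run queries as usual
        }
    }
}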

@anders-ahlgren

Hi, I'm having the same problem (also 3.13.2, but with Java 15). I did try setting -Dio.netty.tryReflectionSetAccessible=true but that did not help.

@dzlab commented Sep 1, 2021

@sfc-gh-mknister any updates on this ticket? I'm having the same issue when running on Spark.

I'm using:

  • spark 3.1.1
  • java 11 installed via java-11-amazon-corretto-devel-11.0.10.9-1.x86_64.rpm
  • these dependencies:
"net.snowflake" % "snowflake-jdbc" % "3.12.10",
"net.snowflake" %% "spark-snowflake" % "2.6.0-spark_2.4"

I read data from Snowflake like this

val format = net.snowflake.spark.snowflake.Utils.SNOWFLAKE_SOURCE_NAME
val options = Map("sfURL" -> "", "sfUser" -> "", "sfPassword" -> "", "sfDatabase" -> "", "sfWarehouse" -> "", "dbtable" -> "")
sqlContext.read.format(format).options(options).load()

I'm passing the -Dio.netty.tryReflectionSetAccessible=true Java option to both the driver and the executor:

spark.driver.extraJavaOptions=-XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+UseContainerSupport -Dio.netty.tryReflectionSetAccessible=true -javaagent:/opt/spark/jars/jmx_prometheus_javaagent.jar=9091:/opt/spark/conf/prometheus-config.yml  
spark.driver.memory=1G
spark.driver.memoryOverhead=385M
spark.driver.port=7077
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+UseContainerSupport -Dio.netty.tryReflectionSetAccessible=true 

Here is the full stack trace when trying to read from a Snowflake table:

ResultStage 25 (count at DatasetOutputNodeInfo.scala:26) failed in 0.065 s due to Job aborted due to stage failure: Task 0 in stage 25.0 failed 4 times, most recent failure: Lost task 0.3 in stage 25.0 (TID 95) (10.0.128.72 executor 2): java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
	at net.snowflake.client.jdbc.internal.io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:399)
	at net.snowflake.client.jdbc.internal.io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
	at net.snowflake.client.jdbc.internal.io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
	at net.snowflake.client.jdbc.internal.io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:247)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.ReadChannel.readFully(ReadChannel.java:81)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.message.MessageSerializer.readMessageBody(MessageSerializer.java:696)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:68)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.ArrowStreamReader.loadNextBatch(ArrowStreamReader.java:106)
	at net.snowflake.client.jdbc.ArrowResultChunk.readArrowStream(ArrowResultChunk.java:78)
	at net.snowflake.client.core.SFArrowResultSet.buildFirstChunk(SFArrowResultSet.java:278)
	at net.snowflake.client.core.SFArrowResultSet.<init>(SFArrowResultSet.java:187)
	at net.snowflake.client.jdbc.SnowflakeResultSetSerializableV1.getResultSet(SnowflakeResultSetSerializableV1.java:933)
	at net.snowflake.spark.snowflake.io.ResultIterator.<init>(SnowflakeResultSetRDD.scala:69)
	at net.snowflake.spark.snowflake.io.SnowflakeResultSetRDD.compute(SnowflakeResultSetRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.$anonfun$getOrCompute$1(RDD.scala:386)
	at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1423)
	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1350)
	at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1414)
	at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1237)
	at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:384)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:335)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.sql.execution.SQLExecutionRDD.$anonfun$compute$1(SQLExecutionRDD.scala:52)
	at org.apache.spark.sql.internal.SQLConf$.withExistingConf(SQLConf.scala:124)
	at org.apache.spark.sql.execution.SQLExecutionRDD.compute(SQLExecutionRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)

@sfc-gh-dbryant

I am using Flyway (7.14.1) with the Snowflake JDBC driver 3.13.7 and OpenJDK 16, and I'm getting this error:
SQL State : XX000
Error Code : 200001
Message : JDBC driver internal error: Fail to retrieve row count for first arrow chunk: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available.

Caused by: net.snowflake.client.jdbc.SnowflakeSQLLoggedException: JDBC driver internal error: Fail to retrieve row count for first arrow chunk: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available.

I am passing -jdbcProperties.Dio.netty.tryReflectionSetAccessible=true per the suggestion above without success.

@IRus commented Sep 16, 2021

@sfc-gh-dbryant on JDK 16 you should pass -Djdk.module.illegalAccess=permit.
On JDK 17 it's --add-opens jdk.unsupported/sun.misc=ALL-UNNAMED (https://jdk.java.net/17/release-notes#JDK-8266851), but it doesn't work for me /shrug
See also https://issues.apache.org/jira/browse/ARROW-12747

@mdiskin commented Sep 16, 2021

Duplicate/related issue:
#533

@cloojure

I got it to work for [net.snowflake/snowflake-jdbc "3.13.8"] using Java 11.

@IRus commented Sep 24, 2021

@cloojure the driver works fine on JDK 11; this ticket is about JDK 16/17 issues.

@anders-ahlgren

@IRus Except I have the same problem -- or at least a very, very similar problem -- on JDK 15.

@fprochazka

Related #589

@apixandru

If anyone else is having issues with this, you can add this to your startup script as a workaround until it's properly fixed:

export JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-opens=java.base/java.nio=ALL-UNNAMED"
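
The same option can also go directly on the java command line, or through Spark's spark.driver.extraJavaOptions / spark.executor.extraJavaOptions shown earlier in this thread, e.g. (your-app.jar is a placeholder):

java --add-opens=java.base/java.nio=ALL-UNNAMED -jar your-app.jar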

@dzlab commented Jan 24, 2022

@apixandru this is brilliant, thanks for sharing. Can you explain why this works, though? I looked at --add-opens and it seems to allow reflective access to private APIs; I'm not sure why that helps here - link.

@apixandru

@dzlab io.netty.util.internal.PlatformDependent0 uses both Unsafe and DirectByteBuffer, as the error message indicates.
Unsafe is still allowed by default, but java.base/java.nio needs to be explicitly opened if you're going to do reflection shenanigans on it. In particular, what the Netty library is trying to do is access this constructor:

    // Invoked only by JNI: NewDirectByteBuffer(void*, long)
    //
    private DirectByteBuffer(long addr, int cap) {
        super(-1, 0, cap, cap, null);
        address = addr;
        cleaner = null;
        att = null;
    }
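
To see the failure in isolation, here is a minimal standalone probe (not part of the driver) that attempts the same reflective access Netty performs; on JDK 16+ the setAccessible call throws InaccessibleObjectException unless java.nio has been opened:

import java.lang.reflect.Constructor;
import java.nio.ByteBuffer;

public class DirectBufferProbe {
    public static void main(String[] args) throws Exception {
        // Same package-private class Netty reflects on: java.nio.DirectByteBuffer
        Class<?> direct = ByteBuffer.allocateDirect(1).getClass();
        Constructor<?> ctor = direct.getDeclaredConstructor(long.class, int.class);
        // Throws InaccessibleObjectException on JDK 16+ unless the JVM was
        // started with --add-opens=java.base/java.nio=ALL-UNNAMED
        ctor.setAccessible(true);
        System.out.println("DirectByteBuffer(long, int) is reachable");
    }
}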

@dzlab commented Feb 25, 2022

@apixandru I tried this workaround in Spark using extraJavaOptions (according to the Oracle JVM documentation, we can use the extra options instead of the JDK_JAVA_OPTIONS environment variable; see the Tools Reference).

The executor started with this java command (note the --add-opens=java.base/java.nio=ALL-UNNAMED):

+ CMD=(${JAVA_HOME}/bin/java "${SPARK_EXECUTOR_JAVA_OPTS[@]}" -Xms$SPARK_EXECUTOR_MEMORY -Xmx$SPARK_EXECUTOR_MEMORY -cp "$SPARK_CLASSPATH:$SPARK_DIST_CLASSPATH" org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url $SPARK_DRIVER_URL --executor-id $SPARK_EXECUTOR_ID --cores $SPARK_EXECUTOR_CORES --app-id $SPARK_APPLICATION_ID --hostname $SPARK_EXECUTOR_POD_IP --resourceProfileId $SPARK_RESOURCE_PROFILE_ID)
+ exec /tini -s -- /usr/java/default/bin/java -XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+UseContainerSupport --add-opens=java.base/java.nio=ALL-UNNAMED -Dspark.network.crypto.enabled=true -Dspark.driver.blockManager.port=10000 -Dspark.authenticate=true -Dspark.ui.port=10013 -Dspark.rpc.message.maxSize=256 -Dspark.vad.port=10011 -Dspark.driver.port=7077 -Xms2048m -Xmx2048m -cp '/opt/spark/conf::/opt/spark/jars/*:/opt/hadoop3.3.1/etc/hadoop:/opt/hadoop3.3.1/share/hadoop/common/lib/*:/opt/hadoop3.3.1/share/hadoop/common/*:/opt/hadoop3.3.1/share/hadoop/hdfs:/opt/hadoop3.3.1/share/hadoop/hdfs/lib/*:/opt/hadoop3.3.1/share/hadoop/hdfs/*:/opt/hadoop3.3.1/share/hadoop/mapreduce/*:/opt/hadoop3.3.1/share/hadoop/yarn:/opt/hadoop3.3.1/share/hadoop/yarn/lib/*:/opt/hadoop3.3.1/share/hadoop/yarn/*' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@mypo.dev-dzlab.svc.cluster.local:7077 --executor-id 1 --cores 1 --app-id spark-application-1645753646054 --hostname 10.0.130.241 --resourceProfileId 0

But when the executor tries to download the chunk, it still fails with the UnsupportedOperationException:

2-25 02:05:03.216 10.0.130.241:54321    48     0 (TID 25) ERROR net.snowflake.client.jdbc.SnowflakeChunkDownloader: downloader encountered error: Max retry reached for the download of #chunk0 (Total chunks: 22) retry=10, error=net.snowflake.client.jdbc.SnowflakeSQLLoggedException: JDBC driver internal error: Exception: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available.
	at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.downloadAndParseChunk(SnowflakeChunkDownloader.java:903)
	at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.call(SnowflakeChunkDownloader.java:953)
	at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.call(SnowflakeChunkDownloader.java:774)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
	Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
	at net.snowflake.client.jdbc.internal.io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:399)
	at net.snowflake.client.jdbc.internal.io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
	at net.snowflake.client.jdbc.internal.io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
	at net.snowflake.client.jdbc.internal.io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:247)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.ReadChannel.readFully(ReadChannel.java:81)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.message.MessageSerializer.readMessageBody(MessageSerializer.java:696)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:68)
	at net.snowflake.client.jdbc.internal.apache.arrow.vector.ipc.ArrowStreamReader.loadNextBatch(ArrowStreamReader.java:106)
	at net.snowflake.client.jdbc.ArrowResultChunk.readArrowStream(ArrowResultChunk.java:78)
	at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.downloadAndParseChunk(SnowflakeChunkDownloader.java:887)
	... 6 more
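
One way to check whether the --add-opens actually took effect inside the executor JVM (a hypothetical debugging snippet, not something from the driver) is to ask the module system directly:

public class AddOpensCheck {
    public static void main(String[] args) {
        Module javaBase = java.nio.ByteBuffer.class.getModule();
        // true only when --add-opens=java.base/java.nio=ALL-UNNAMED (or an
        // equivalent Add-Opens manifest entry) reached this JVM
        boolean open = javaBase.isOpen("java.nio", AddOpensCheck.class.getModule());
        System.out.println("java.nio open to unnamed module: " + open);
    }
}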

@sfc-gh-igarish
Collaborator

To clean up and re-prioritize more pressing bugs and feature requests, we are closing all issues older than 6 months as of March 1, 2023. If there are any issues or feature requests that you would like us to address, please create them according to the new templates we have created. For urgent issues, opening a support case via the Snowflake Community link is the fastest way to get a response.
