Installing and Configuring the JDBC Libraries
Note: The Databricks JDBC driver included with the adapter works with AWS, Azure, and Google Cloud Databricks. For on-premises Apache Spark SQL, use a JDBC driver/library of your choice, such as the Simba® Spark JDBC Driver.
If Java version 17 or later is used for Tidal Automation, specify the path as follows so that the Databricks Spark SQL Adapter works correctly:
- Create a config directory under /master/services/{F8BFA9BD-C599-4D54-A09E-226E731983CC}.
- Create service.props under the config directory.
- Add the parameter JVMARGS=--add-opens=java.base/java.nio=ALL-UNNAMED to service.props.
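The steps above can be sketched as a short shell session. The Master install root here is an assumption (a temporary directory is used so the sketch runs anywhere; on a real install, substitute your Master's path, e.g. /opt/master):

```shell
# Hedged sketch: MASTER_DIR stands in for your Tidal Master install root.
MASTER_DIR="${MASTER_DIR:-$(mktemp -d)}"   # assumption; real installs use e.g. /opt/master
CONFIG_DIR="$MASTER_DIR/services/{F8BFA9BD-C599-4D54-A09E-226E731983CC}/config"

# Create the config directory and service.props with the required JVM argument.
mkdir -p "$CONFIG_DIR"
printf 'JVMARGS=--add-opens=java.base/java.nio=ALL-UNNAMED\n' > "$CONFIG_DIR/service.props"
cat "$CONFIG_DIR/service.props"
```

The --add-opens flag is needed because Java 17 strongly encapsulates JDK internals by default, and the driver reflectively accesses java.nio.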
To establish a connection to the Apache Spark instance, specify the driver path as follows:
- Create a config directory under /master/services/{F8BFA9BD-C599-4D54-A09E-226E731983CC}.
- Create service.props under the config directory.
- Specify the driver path in service.props, following these examples:
  On Windows:
  CLASSPATH=c:\\lib\\SparkJDBC42.jar;${CLASSPATH}
  On Linux/Unix:
  CLASSPATH=/opt/master/lib/SparkJDBC42.jar:${CLASSPATH}
- Paste the evaluation license file in the same directory that houses the driver.
- Restart the Master.
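The Linux/Unix variant of the steps above can be sketched as follows. The install root and driver location are assumptions (a temporary directory keeps the sketch runnable; substitute your actual Master path and jar location):

```shell
# Hedged sketch (Linux/Unix). Paths are assumptions; adjust to your install.
MASTER_DIR="${MASTER_DIR:-$(mktemp -d)}"          # assumption; e.g. /opt/master
CONFIG_DIR="$MASTER_DIR/services/{F8BFA9BD-C599-4D54-A09E-226E731983CC}/config"
DRIVER_JAR="/opt/master/lib/SparkJDBC42.jar"      # assumption; path to the Spark JDBC jar

# Create the config directory and write the CLASSPATH entry into service.props.
# ${CLASSPATH} is written literally; the Master expands it at service start.
mkdir -p "$CONFIG_DIR"
printf 'CLASSPATH=%s:${CLASSPATH}\n' "$DRIVER_JAR" >> "$CONFIG_DIR/service.props"
cat "$CONFIG_DIR/service.props"
```

On Windows the entry uses a semicolon as the path separator (CLASSPATH=c:\\lib\\SparkJDBC42.jar;${CLASSPATH}); on Linux/Unix it uses a colon, as shown.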