Installing and Configuring the JDBC Libraries

Note: The Databricks JDBC driver included with the adapter works with AWS, Azure, and Google Cloud Databricks. For on-premises Apache Spark SQL, use a JDBC driver or library of your choice, such as the Simba® Spark JDBC Driver.

If Tidal Automation runs on Java version 17 or later, add the following JVM argument so that the Databricks Spark SQL Adapter works correctly:

  1. Create a config directory under /master/services/{F8BFA9BD-C599-4D54-A09E-226E731983CC}.

  2. Create service.props under the config directory.

  3. Add the parameter JVMARGS=--add-opens=java.base/java.nio=ALL-UNNAMED to service.props, as shown in the sketch below.
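    For reference, a minimal service.props for this step contains just the single line below. Nothing else is required unless you also need a driver path, as described in the next procedure:

      JVMARGS=--add-opens=java.base/java.nio=ALL-UNNAMED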

To establish a connection to the Apache Spark instance, specify the driver path as follows:

  1. Create a config directory under /master/services/{F8BFA9BD-C599-4D54-A09E-226E731983CC}.

  2. Create service.props under the config directory.

  3. Specify the driver path in service.props following these examples (a consolidated service.props sketch appears after this list):

    In Windows:

    Example: CLASSPATH=c:\\lib\\SparkJDBC42.jar;${CLASSPATH}

    In Linux/Unix:

    Example: CLASSPATH=/opt/master/lib/SparkJDBC42.jar:${CLASSPATH}

  4. Place the evaluation license file in the same directory that houses the driver.

  5. Restart the Master.
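When both the Java 17 flag and a driver path apply, the two settings can live in the same service.props. A minimal sketch, using the Linux driver location from the example above:

    JVMARGS=--add-opens=java.base/java.nio=ALL-UNNAMED
    CLASSPATH=/opt/master/lib/SparkJDBC42.jar:${CLASSPATH}

Before restarting the Master, you can optionally confirm that the driver jar is usable with a standalone smoke test. The sketch below assumes a Simba Spark JDBC 4.2 driver; the driver class name, connection URL, host, port, and credentials are placeholders, so substitute the values documented for your driver and environment:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SparkJdbcSmokeTest {
        public static void main(String[] args) throws Exception {
            // Assumed driver class for a Simba Spark JDBC 4.2 driver; check
            // your driver's documentation for the exact name.
            Class.forName("com.simba.spark.jdbc.Driver");

            // Hypothetical URL; adjust the host, port, and auth settings
            // to match your Spark SQL endpoint.
            String url = "jdbc:spark://spark-host:10000/default;AuthMech=3";

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Driver OK, query returned: " + rs.getInt(1));
            }
        }
    }

Run it with the driver jar on the classpath, for example: java -cp .:/opt/master/lib/SparkJDBC42.jar SparkJdbcSmokeTest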