Troubleshooting the Adapter

Review Service Log Files for More Information

Refer to the service log files for more information about an issue.

Connection Failures

Note: Make sure that, on the TA Master machine, JAVA_HOME points to a JDK directory and not to a JRE directory.
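
A quick check on a Linux TA Master (a minimal sketch; the JDK path shown is only an example): a JDK ships javac, while a JRE does not.

    # Confirm that JAVA_HOME points to a JDK rather than a JRE
    echo $JAVA_HOME                   # e.g. /usr/java/jdk1.6.0_45
    "$JAVA_HOME"/bin/javac -version   # succeeds only when JAVA_HOME is a JDK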

  • If you are using MapR, verify that the MapR client is configured correctly.

  • If you are running TA on Windows with the JDK set up correctly, this error may still be displayed:

    Example: CommonsLogger: CompilationManager

    Suggested fix:

    1. Install a JDK and set JAVA_HOME to point to it.

      This issue is seen with JDK 1.6 update 34 and later; the JDK installation adds a C:\windows\system32\java.exe file.

    2. Rename or remove this file and restart the TA Master, as shown in the example that follows.
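
    For example, from an Administrator command prompt (a minimal sketch; the JDK path is illustrative only):

      rem Back up the stray launcher added by the JDK installer
      ren C:\Windows\System32\java.exe java.exe.bak

      rem Point JAVA_HOME at the JDK installation (example path)
      setx JAVA_HOME "C:\Program Files\Java\jdk1.6.0_45"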

    java.net.ConnectException: Connection refused. This error can be seen in the SqoopService.out log file after the line "Initiating client connection, ConnectString=127.0.0.1:5181" or "ConnectString=localhost:5181".

    This error is caused by the Zookeeper setting on the cluster: Zookeeper has been set up with either localhost or 127.0.0.1. The solution is to set up Zookeeper with an IP address or fully qualified hostname, as described in Setting Up Zookeeper on the Cluster.

Setting Up Zookeeper on the Cluster

To set up Zookeeper on the cluster:

  1. Edit /opt/mapr/conf/mapr-clusters.conf. Change the Zookeeper server setting to use the IP address or fully qualified hostname (see the example following these steps).

  2. Edit /opt/mapr/conf/cldb.conf. Change the Zookeeper server setting to use the IP address or fully qualified hostname.

  3. Edit /opt/mapr/conf/warden.conf. Change the Zookeeper server setting to use the IP address or fully qualified hostname.

  4. Restart Zookeeper.

  5. Restart the warden (sudo /etc/init.d/mapr-warden restart).
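
A representative sketch of these edits, assuming a single Zookeeper node named node1.example.com; property names can vary between MapR versions, so match them against the entries already present in your files:

  # /opt/mapr/conf/warden.conf  (replace localhost or 127.0.0.1 with the real host)
  zookeeper.servers=node1.example.com:5181

  # /opt/mapr/conf/cldb.conf  (same change for the CLDB's Zookeeper setting)
  cldb.zookeeper.servers=node1.example.com:5181

  # Restart Zookeeper, then the warden
  sudo /etc/init.d/mapr-zookeeper restart
  sudo /etc/init.d/mapr-warden restart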

Job Failures

  • Verify that the job is configured correctly.

  • Verify that the job can be run via the Sqoop CLI before running it through TA.
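
    For example, a sketch of a CLI test run (the connection string, credentials, and table name are placeholders; use the settings from your own job):

      sqoop import \
          --connect jdbc:mysql://dbhost/salesdb \
          --username dbuser -P \
          --table orders \
          --target-dir /user/<username>/orders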

  • Check the adapter logs and verify how the job ran from the Hadoop Admin Console.

    Note: File paths and names are case sensitive, and they must exist on the HDFS.

    A Sqoop job can fail on the MapR Distribution of Hadoop due to a Permission Denied error:

    CommonsLogger: ImportTool - Encountered IOException running import job:

    java.io.IOException: Error: 13:Permission denied(13), file: /var/mapr/cluster/mapred/jobTracker/staging/<username>/.staging at com.mapr.fs.MapRFileSystem.makeDir(MapRFileSystem.java:448)

    Suggested Fix:

    • If the TA Master is running on Linux, verify that it is running as the same user that was used to set up Sqoop on the cluster.

    • If the TA Master is running on Windows, verify that the MapR Client is set up with a "Spoofed user" that is the same as the user used to set up Sqoop on the cluster.

      Note: Log on to the Hadoop cluster, delete the /var/mapr/cluster/mapred/jobTracker/staging/<username>/ directory from the HDFS, and try executing the job again.
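
      For example (substitute the actual user name for <username>; on newer Hadoop releases use hadoop fs -rm -r instead):

        hadoop fs -rmr /var/mapr/cluster/mapred/jobTracker/staging/<username>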

Adapter Is Out of Memory

  • Adapter memory sizes have been verified on a 10-node cluster and can be increased in the usual way in the adapter's service.props file (see the example after this list).

  • When the adapter runs out of memory, output files cannot be viewed.
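
A hypothetical example of such an increase is shown below; the property name (jvmargs) is an assumption, so check the adapter documentation for the exact key your version of service.props uses. The values themselves are standard JVM heap options:

  # service.props (property name is illustrative; values are standard JVM heap flags)
  jvmargs=-Xms512m -Xmx2048m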