Environment
DM: 4.4.1, OS: -, DIST: -, COM: HDFS
Symptoms
After upgrading Datameer to 4.4.1, existing datalinks fail to resolve the configured logical name. Datalink jobs start and initially run fine, but eventually fail with an error like the following:
INFO [2014-10-19 23:41:25.797] [JobScheduler worker1-thread-991] (MrPlanRunner.java:250) - Completed postprocessing: [0 sec], progress at 100
INFO [2014-10-19 23:41:25.797] [JobScheduler worker1-thread-991] (MrPlanRunner.java:251) - -------------------------------------------
INFO [2014-10-19 23:41:25.798] [JobScheduler worker1-thread-991] (MrPlanRunner.java:157) - Completed execution plan with SUCCESS and 1 completed MR jobs. (hdfs://nameservice1/user/datameer/importlinks/7199/34922)
INFO [2014-10-19 23:41:25.814] [JobScheduler worker1-thread-991] (JobArtifactFileAccessTool.java:62) - Configuring job result artifacts from [hdfs://nameservice1/user/datameer/importlinks/7199]
INFO [2014-10-19 23:46:24.327] [JobScheduler worker1-thread-991] (JobArtifactFileAccessTool.java:62) - Configuring job result artifacts from [hdfs://nameservice1/user/datameer/joblogs/34922]
ERROR [2014-10-19 23:46:24.410] [JobScheduler worker1-thread-991] (DasJobCallable.java:135) - Job failed! Execution plan: digraph G {
  1 [label = "MrInputNode{datalink-sample-input} - 0 Bytes"];
  2 [label = "MrMapNode{datameer.dap.common.job.sample.WritePartitionedPreviewMapper@216634b4}"];
  3 [label = "MrOutputNode{datalink-sample} - 0 Bytes"];
  2 -> 3 [label = "PRODUCED_BY_MAPPER"];
  1 -> 2 [label = "REQUIRED_AS_MAPPER_INPUT"];
}
datameer.dap.sdk.util.ExceptionUtil$WrappedThreadException: java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
    at datameer.dap.sdk.util.ExceptionUtil.wrapInThreadException(ExceptionUtil.java:271)
    at datameer.dap.sdk.util.HadoopUtil.executeTimeRestrictedCall(HadoopUtil.java:165)
    at datameer.dap.sdk.util.HadoopUtil.getFileSystem(HadoopUtil.java:88)
    at datameer.dap.sdk.util.HadoopUtil.getFileSystem(HadoopUtil.java:71)
    at datameer.dap.sdk.cluster.filesystem.ClusterFileSystem.open(ClusterFileSystem.java:242)
    at datameer.dap.sdk.cluster.filesystem.ClusterFileSystemProvider$1.open(ClusterFileSystemProvider.java:15)
    at datameer.dap.sdk.datastore.FileDataStoreModel.openFileSystem(FileDataStoreModel.java:120)
    ...
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:237)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
    ...
All the correct HA configuration details are present in the Custom Properties field of the Administration -> Hadoop Cluster page, yet the jobs still fail to resolve the nameservice1 logical name.
Cause/Resolution
Copying the same HA Hadoop configuration details from the Administration -> Hadoop Cluster page into the Custom Properties field of the HDFS Connection (which datalinks use to connect to the cluster) allows the datalink jobs to complete successfully:
dfs.nameservices=nameservice1
dfs.ha.namenodes.nameservice1=namenode1,namenode2
dfs.namenode.rpc-address.nameservice1.namenode1=hostname1.company.com:8020
dfs.namenode.rpc-address.nameservice1.namenode2=hostname2.company.com:8020
dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
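As a sanity check, the HA client properties can be validated before pasting them into the connection. The sketch below is an illustrative helper (not part of Datameer or Hadoop) that uses the example nameservice, namenode labels, and hostnames from above; it parses the key=value pairs and reports any of the standard HDFS HA client keys that are missing, which is exactly the condition that leads to the UnknownHostException on the logical name:

```python
# Hypothetical sanity check for HDFS HA client properties.
# Values below are the example ones from this article; adapt to your cluster.

def check_ha_properties(props: dict, nameservice: str) -> list:
    """Return the required client keys that are missing for the given nameservice."""
    missing = []
    # The logical name must be declared in dfs.nameservices.
    if nameservice not in props.get("dfs.nameservices", "").split(","):
        missing.append("dfs.nameservices")
    # Each declared namenode needs an RPC address entry.
    nn_key = f"dfs.ha.namenodes.{nameservice}"
    namenodes = [n for n in props.get(nn_key, "").split(",") if n]
    if not namenodes:
        missing.append(nn_key)
    for nn in namenodes:
        rpc_key = f"dfs.namenode.rpc-address.{nameservice}.{nn}"
        if rpc_key not in props:
            missing.append(rpc_key)
    # Clients also need a failover proxy provider for the nameservice.
    proxy_key = f"dfs.client.failover.proxy.provider.{nameservice}"
    if proxy_key not in props:
        missing.append(proxy_key)
    return missing

raw = """\
dfs.nameservices=nameservice1
dfs.ha.namenodes.nameservice1=namenode1,namenode2
dfs.namenode.rpc-address.nameservice1.namenode1=hostname1.company.com:8020
dfs.namenode.rpc-address.nameservice1.namenode2=hostname2.company.com:8020
dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
"""
props = dict(line.split("=", 1) for line in raw.splitlines() if line)
print(check_ha_properties(props, "nameservice1"))  # prints [] (nothing missing)
```

An empty result means a client configured with these properties can map the logical name to its namenode addresses; any listed key is one that must still be added to the HDFS Connection.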
Adding the same configuration details to the Hadoop Custom Properties field of an individual datalink does not help; the configuration must be present on the HDFS Connection itself.
To solve the issue globally, set the HDFS Name Node to the logical nameservice URI (e.g. hdfs://nameservice1) instead of a direct host address such as hdfs://hostname:8020.
Further Information
Documentation on how to "Configure High Availability on a Hadoop Cluster" and on "High Availability and Yarn" can be requested from the Datameer services team.