Winutils Exe Hadoop


May 02, 2016: Apache Hadoop releases do not ship the Windows native binaries, such as hadoop.dll and winutils.exe, that are required (not optional) to run Hadoop on Windows. The binary distribution of the Apache Hadoop 2.2.0 release, for example, omits these native components, so you have to build a Windows native binary distribution of Hadoop from source, following the 'BUILD.txt' file located inside the source tree. This detailed step-by-step guide shows how to install the latest Hadoop v3.3.0 on Windows 10 using the Hadoop 3.3.0 winutils tool; WSL (Windows Subsystem for Linux) is not required.
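As a quick sanity check after unpacking Hadoop 3.3.0 and a matching winutils build, a minimal Python sketch like the one below can confirm that winutils.exe is where the JVM will look for it. The C:\hadoop-3.3.0 path and the 'ls' smoke test are illustrative assumptions, not taken from the guide above.

    import os
    import subprocess
    from pathlib import Path

    # Assumed install location; adjust to wherever Hadoop 3.3.0 was extracted.
    hadoop_home = os.environ.get("HADOOP_HOME", r"C:\hadoop-3.3.0")
    winutils = Path(hadoop_home) / "bin" / "winutils.exe"

    if not winutils.exists():
        raise SystemExit(f"winutils.exe not found at {winutils}; "
                         "place a build matching your Hadoop version there.")

    # Make the location visible to any JVM started from this process.
    os.environ["HADOOP_HOME"] = hadoop_home
    os.environ["PATH"] = str(winutils.parent) + os.pathsep + os.environ["PATH"]

    # Optional smoke test: winutils' 'ls' subcommand should list a directory cleanly.
    result = subprocess.run([str(winutils), "ls", hadoop_home],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)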

Failed to locate the winutils binary in the hadoop binary path

I am getting the following error while starting the namenode for the latest hadoop-2.2 release. I didn't find a winutils.exe file in the Hadoop bin folder. I tried the commands below.

Simple solution: download winutils.exe from here and add it to $HADOOP_HOME/bin.

(Source: click here)

EDIT:

For hadoop-2.6.0 you can download the binaries from Titus Barik's blog.
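Whichever build you download, the directory that contains winutils.exe has to be exported as HADOOP_HOME before the Hadoop/Spark JVM starts. A minimal PySpark sketch, assuming the binaries were dropped into C:\hadoop\bin (that path is not from the original answer):

    import os

    # Assumed layout: C:\hadoop\bin\winutils.exe; change to your own location.
    os.environ["HADOOP_HOME"] = r"C:\hadoop"
    os.environ["PATH"] = r"C:\hadoop\bin" + os.pathsep + os.environ["PATH"]

    # Import PySpark only after the environment is prepared.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("local[*]")
             .appName("winutils-check")
             .getOrCreate())
    print(spark.range(5).count())  # should succeed without the winutils IOException
    spark.stop()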

Winutils.exe Hadoop 3.3.0

I not only needed to point HADOOP_HOME to the extracted directory [path], but also had to provide the system property -Djava.library.path=[path]\bin so the native libs (dll) could be loaded.
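In PySpark, that -Djava.library.path flag can be forwarded to the driver JVM through spark.driver.extraJavaOptions. The sketch below assumes the native libraries (hadoop.dll, winutils.exe) live under C:\hadoop\bin; both the path and the app name are illustrative.

    import os
    from pyspark.sql import SparkSession

    hadoop_home = r"C:\hadoop"                     # assumed extraction directory
    native_bin = os.path.join(hadoop_home, "bin")  # where hadoop.dll / winutils.exe live

    os.environ["HADOOP_HOME"] = hadoop_home

    spark = (SparkSession.builder
             .master("local[*]")
             .appName("native-libs")
             # Forward -Djava.library.path=<path>\bin so the JVM can load the native DLLs.
             .config("spark.driver.extraJavaOptions", f"-Djava.library.path={native_bin}")
             .getOrCreate())
    spark.stop()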

The same failure shows up when launching pyspark on Windows without winutils in place, as in this Hadoop JIRA report by WEI PENG (closed as Not A Problem):

C:\Users\WEI>pyspark
Python 3.5.6 |Anaconda custom (64-bit)| (default, Aug 26 2018, 16:05:27) [MSC v.1900 64 bit (AMD64)] on win32
Type 'help', 'copyright', 'credits' or 'license' for more information.
2018-09-14 21:12:39 ERROR Shell:397 - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
    at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2467)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2467)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2467)
    at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:220)
    at org.apache.spark.deploy.SparkSubmit$.secMgr$lzycompute$1(SparkSubmit.scala:408)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$secMgr$1(SparkSubmit.scala:408)
    at org.apache.spark.deploy.SparkSubmit$$anonfun$doPrepareSubmitEnvironment$7.apply(SparkSubmit.scala:416)
    at org.apache.spark.deploy.SparkSubmit$$anonfun$doPrepareSubmitEnvironment$7.apply(SparkSubmit.scala:416)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:415)
    at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-09-14 21:12:39 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to 'WARN'.
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
      /_/

Using Python version 3.5.6 (default, Aug 26 2018 16:05:27)
SparkSession available as 'spark'.
>>>
