Shuffle Error: MAX_FAILED_UNIQUE_FETCHES; bailing-out

First off, it's been a long time since I last used MapReduce! Today I wasted some time figuring out this error: "Shuffle Error: MAX_FAILED_UNIQUE_FETCHES; bailing-out".

If you hit this error on a version newer than hadoop-0.20.2, check the file "mapred-site.xml" in the ${HADOOP_HOME}/conf directory and "/etc/hosts", because it usually happens when IP addresses are mixed up and services aren't listening on the right ports. One common fix is to comment out the IPv6 entries in /etc/hosts so hostnames resolve to IPv4 addresses:

# The following lines are desirable for IPv6 capable hosts
#::1     localhost ip6-localhost ip6-loopback
#fe00::0 ip6-localnet
#ff00::0 ip6-mcastprefix
#ff02::1 ip6-allnodes
#ff02::2 ip6-allrouters
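A quick way to sanity-check name resolution on each node is a small script like this (a minimal sketch, not Hadoop-specific):

```python
import socket

# Sketch: verify that this machine's hostname resolves to a routable
# IPv4 address rather than loopback. If a slave resolves its own name
# to 127.0.0.1, reducers on other nodes cannot fetch its map output,
# and the shuffle fails with MAX_FAILED_UNIQUE_FETCHES.
hostname = socket.gethostname()
addr = socket.gethostbyname(hostname)
print("%s -> %s" % (hostname, addr))
if addr.startswith("127."):
    print("WARNING: hostname maps to loopback; check /etc/hosts")
```

Run it on every node in the cluster; each one should print its LAN address, not a 127.x.x.x address.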


This description belongs to the "mapred.task.tracker.http.address" property in mapred-site.xml; the full entry (with the 0.20.x default value) looks like:

    <property>
      <name>mapred.task.tracker.http.address</name>
      <value>0.0.0.0:50060</value>
      <description>
        The task tracker http server address and port.
        If the port is 0 then the server will start on a free port.
      </description>
    </property>

Or, if your cluster runs a lot of map and reduce tasks, check the "tasktracker.http.threads" property.
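Raising it in mapred-site.xml looks like this (the value below is just an example; the 0.20.x default is 40, and this pool serves reducers' map-output fetches):

    <property>
      <name>tasktracker.http.threads</name>
      <value>80</value>
    </property>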


