Hadoop Job History Server Memory Issues

When running Hadoop jobs, you may sometimes encounter an org.apache.hadoop.ipc.RemoteException similar to the one shown below.

Caused by: org.apache.hadoop.ipc.RemoteException(java.lang.OutOfMemoryError): GC overhead limit exceeded

at org.apache.hadoop.ipc.Client.call(Client.java:1371)
at org.apache.hadoop.ipc.Client.call(Client.java:1302)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:242)
at com.sun.proxy.$Proxy48.getJobReport(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:123)
Error Implication: This exception indicates that the Job History Server has run out of memory.

Solution: It is recommended to use a larger coordinator node with at least 120 GB of RAM, on which you can allocate an 8 GB heap. If you still face the issue, you can increase the Job History Server memory through the node bootstrap.

Increase the Job History Server's memory by adding the following script to the node bootstrap.

Increase the Job History Server memory to 16 GB:

echo 'export HADOOP_JOB_HISTORYSERVER_HEAPSIZE="16384"' | sudo tee -a /etc/hadoop/mapred-env.sh
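
To confirm that the export line was appended to the file, you can grep for it; this is an optional sanity check and not part of the original bootstrap steps.

grep HADOOP_JOB_HISTORYSERVER_HEAPSIZE /etc/hadoop/mapred-env.sh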

Restart the Job History Server:

sudo -u mapred /usr/lib/hadoop2/sbin/mr-jobhistory-daemon.sh stop historyserver
sudo -u mapred /usr/lib/hadoop2/sbin/mr-jobhistory-daemon.sh start historyserver
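
After the restart, you can verify that the new heap size has taken effect by inspecting the running Job History Server process; its JVM arguments should typically include -Xmx16384m, since the daemon script passes the heap size setting as the maximum heap in megabytes. This is an optional check, assuming standard process listing tools are available on the node.

ps -ef | grep JobHistoryServer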