Hadoop YARN ApplicationMaster Exceeding Physical Memory Limit Error
When running a Hadoop job, you may see the error below when the ApplicationMaster exceeds its physical memory limit.
Application application_ failed 1 times due to AM Container for appattempt_ exited with exitCode: -104
Diagnostics: Container pid=,containerID=container_ is running beyond physical memory limits.
Current usage: 6.2 GB of 4.5 GB physical memory used; 37.6 GB of 11.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_
Container killed on request. Exit code is 133
Container exited with a non-zero exit code 133
Failing this attempt. Failing the application.
This error indicates that the YARN container is running beyond its physical memory limit.
Solution:
Hadoop YARN allocates processing capacity to each application in the form of containers. This issue can often be resolved by setting appropriate values for the parameters below:
yarn.app.mapreduce.am.command-opts=-Xmx<size>m sets the JVM heap arguments for the ApplicationMaster.
yarn.app.mapreduce.am.resource.mb=<size> sets the ApplicationMaster container size in MB.
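As an illustration, these parameters could be set in mapred-site.xml. The values below (an 8192 MB container with a ~6.4 GB heap) are example numbers, not recommendations; a common rule of thumb is to keep the JVM heap at roughly 80% of the container size so that off-heap usage does not push the process over the container limit.

```xml
<!-- mapred-site.xml: illustrative ApplicationMaster memory settings -->
<property>
  <!-- Total memory for the AM container, in MB (example value) -->
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>8192</value>
</property>
<property>
  <!-- AM JVM heap; commonly ~80% of the container size -->
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx6554m</value>
</property>
```

The same properties can also be passed per job on the command line with the generic `-D property=value` option instead of editing the site-wide configuration.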
Handling Exceeded Physical Memory Limit Error in a Mapper
If the mapper exceeds the physical memory limit, you may see an error like the one below in the logs:
Application application_ failed 1 times due to AM Container for appattempt_ exited with exitCode: -104
Diagnostics: Container pid=,containerID=container_ is running beyond physical memory limits.
Current usage: 6.2 GB of 4.9 GB physical memory used; 26.5 GB of 13.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_
Solution: Try increasing the mapper’s container and heap memory using these configuration properties:
mapreduce.map.java.opts = -Xmx<size>m;
mapreduce.map.memory.mb = <size>;
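A minimal mapred-site.xml sketch with example values (a 6144 MB container and a heap of roughly 80% of that; tune both to your workload):

```xml
<!-- mapred-site.xml: illustrative mapper memory settings -->
<property>
  <!-- Mapper container size in MB (example value) -->
  <name>mapreduce.map.memory.mb</name>
  <value>6144</value>
</property>
<property>
  <!-- Mapper JVM heap; commonly ~80% of the container size -->
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx4915m</value>
</property>
```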
Handling Exceeded Physical Memory Limit Error in a Reducer
If the reducer exceeds the physical memory limit, you may see this error in its logs:
Application application_ failed 1 times due to AM Container for appattempt_ exited with exitCode: -104
Diagnostics: Container pid=,containerID=container_ is running beyond physical memory limits.
Current usage: 6.2 GB of 4.9 GB physical memory used; 26.5 GB of 13.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_
Solution: Try increasing the reducer’s container and heap memory using these configuration properties:
mapreduce.reduce.java.opts = -Xmx<size>m;
mapreduce.reduce.memory.mb = <size>;
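The reducer counterpart looks the same; the 8192 MB container and ~6.4 GB heap below are illustrative values only (reducers often need more memory than mappers because they buffer shuffled data):

```xml
<!-- mapred-site.xml: illustrative reducer memory settings -->
<property>
  <!-- Reducer container size in MB (example value) -->
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <!-- Reducer JVM heap; commonly ~80% of the container size -->
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6554m</value>
</property>
```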
Handling the Java Heap Space Error
Exception: java.lang.OutOfMemoryError: Java heap space
Whenever you see this error in a mapper task, you can try increasing the mapper memory:
set mapreduce.map.memory.mb=<size>;
set mapreduce.map.java.opts=-Xmx<size>m;
Whenever you see this error in a reducer task, you can try increasing the reducer memory:
set mapreduce.reduce.memory.mb=<size>;
set mapreduce.reduce.java.opts=-Xmx<size>m;
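Filled in with example values, a session that sets all four properties at once might look like this (assuming a client such as Hive that accepts per-session `set` commands; 6144 MB mapper and 8192 MB reducer containers are illustrative, with heaps at roughly 80% of each container):

```
set mapreduce.map.memory.mb=6144;
set mapreduce.map.java.opts=-Xmx4915m;
set mapreduce.reduce.memory.mb=8192;
set mapreduce.reduce.java.opts=-Xmx6554m;
```

Keeping the `-Xmx` heap below the corresponding `memory.mb` container size matters: if the two are equal, JVM off-heap overhead alone can push the process over the container limit and trigger the same kill.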