Problem
In Datameer, the task.memory test of the Cluster Health check fails with the following message:
Datameer requires at least '1.0 GB' of memory for a (map/reduce) task. Configured is only 'X MB'
Cause
The amount of memory configured for a MapReduce, Tez, or Spark task is less than the minimum Datameer recommends for optimal operation. Running tasks with less memory can degrade overall performance.
Solution
Depending on your default execution framework, you need to adjust the corresponding cluster properties. You can do this globally on the cluster backend (a restart of the affected services is required) or set the appropriate custom properties in Datameer.
Examples for MapReduce, Tez, and Spark are given below. Adjust the values to match your cluster configuration.
MapReduce
Set the properties below in the Hadoop Cluster custom properties section:
mapred.map.child.java.opts=-Xmx1024m
mapred.reduce.child.java.opts=-Xmx1024m
mapred.job.map.memory.mb=1024
mapred.job.reduce.memory.mb=1024
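Alternatively, to apply these values globally on the cluster backend, the same properties can be placed in mapred-site.xml. The snippet below is a minimal sketch, assuming a standard Hadoop configuration layout; the exact file location depends on your distribution, and the MapReduce services must be restarted afterwards.

<!-- mapred-site.xml: container memory requested for map and reduce tasks -->
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapred.job.reduce.memory.mb</name>
  <value>1024</value>
</property>
<!-- JVM heap size for the map and reduce task processes -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>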
Tez
Set the property below in the Hadoop Cluster custom properties section:
tez.task.resource.memory.mb=1536
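To apply this setting globally on the cluster backend instead, the property can be added to tez-site.xml. This is a sketch only; the file location depends on your distribution, and dependent services must be restarted afterwards.

<!-- tez-site.xml: memory requested for each Tez task container -->
<property>
  <name>tez.task.resource.memory.mb</name>
  <value>1536</value>
</property>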
Spark
Set the property below in the Hadoop Cluster custom properties section:
spark.executor.memory=1536m
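To apply this setting globally on the cluster backend instead, the equivalent entry can be added to spark-defaults.conf (a sketch; the file location depends on your Spark installation, and running Spark services may need a restart to pick it up):

# spark-defaults.conf: memory allocated to each Spark executor
spark.executor.memory    1536m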