Cluster Health FAIL
One of our partners is getting the following FAIL in the cluster health check after installing 6.0.5 on CDH 5.7: "Datameer requires at least '1.0 GB' of memory for a (map/reduce-) task. Configured is only '729.0 MB'". What's the custom parameter he needs to set to bump up the memory here? We've tried a bunch with no success. Thanks!
Official comment
Good afternoon!
Datameer has created three knobs that can be used to tune the amount of memory allocated to both Map and Reduce tasks.
Gathered from our documentation:
https://documentation.datameer.com/documentation/display/DAS50/Custom+Properties
das.job.application-manager.memory=<value>
das.job.map-task.memory=<value>
das.job.reduce-task.memory=<value>

Let's try increasing the values there and see how the Cluster Health Check handles it.
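For example (a sketch only; the '2048' values are illustrative, and whether a unit suffix like 'm' is accepted may depend on the Datameer version, so plain megabyte integers are assumed here):

```
das.job.application-manager.memory=2048
das.job.map-task.memory=2048
das.job.reduce-task.memory=2048
```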
Cheers!
Thanks! :-) He has tried all the following so far with no success:
das.job.map-task.memory=2048m
das.job.reduce-task.memory=2048m
das.job.application-manager.memory=2048m
mapred.map.child.java.opts=2048m
mapreduce.reduce.java.opts=2048m
mapred.child.java.opts=2048m

He has also increased the memory in das-env.sh. Any other ideas?
I still get this error when installing us on a CDH quickstart VM:

"task.memory
FAIL
Datameer requires at least '1.0 GB' of memory for a (map/reduce-) task. Configured is only '989.9 MB'"

I tried setting the following (below), but it does not like the value 2048m and fails immediately. Any ideas? I want to use this VM for a training. Thanks!

das.job.map-task.memory=2048m
das.job.reduce-task.memory=2048m
das.job.application-manager.memory=2048m
mapred.map.child.java.opts=2048m
mapreduce.reduce.java.opts=2048m
mapred.child.java.opts=2048m
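One possible cause of the immediate failure (an assumption, not confirmed in this thread): the das.job.*.memory properties may expect plain megabyte integers without a unit suffix, while the Hadoop *.java.opts properties take JVM flags rather than a bare size, so "2048m" needs the -Xmx prefix. A sketch in that format, with the heap set somewhat below the container size to leave room for non-heap overhead:

```
# Datameer task memory, assuming plain MB values with no unit suffix
das.job.application-manager.memory=2048
das.job.map-task.memory=2048
das.job.reduce-task.memory=2048

# Standard Hadoop 2 / YARN equivalents: memory.mb is the container size in MB,
# java.opts is a JVM flag string (note the -Xmx prefix)
mapreduce.map.memory.mb=2048
mapreduce.reduce.memory.mb=2048
mapreduce.map.java.opts=-Xmx1638m
mapreduce.reduce.java.opts=-Xmx1638m
```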