Cluster Health FAIL

Comments

10 comments

  • Official comment
    Brian Junio

    Good afternoon!

    Datameer provides three knobs that can be used to tune the amount of memory allocated to the application manager and to the Map and Reduce tasks.

    Gathered from our documentation:

    https://documentation.datameer.com/documentation/display/DAS50/Custom+Properties

    das.job.application-manager.memory=<value>

    das.job.map-task.memory=<value>

    das.job.reduce-task.memory=<value>

     

    Let's try increasing the values there and see how the Cluster Health Check handles it.
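
    For example, with illustrative values only — the right numbers depend on your cluster, and the plain-megabyte format here is an assumption, so check the documentation page above for the exact expected format:

    # Illustrative values only; assumed to be plain megabyte integers
    das.job.application-manager.memory=2048
    das.job.map-task.memory=2048
    das.job.reduce-task.memory=2048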

     

     

    Cheers!

     

  • Joel Stewart

    What's the cluster setup? Are there limitations on how much memory containers can request from the cluster itself? 
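
    For reference, on a YARN cluster those limits are typically governed by properties along these lines (the values shown are placeholders, not recommendations):

    # YARN settings that cap how much memory a single container can request (placeholder values)
    yarn.scheduler.maximum-allocation-mb=8192
    yarn.nodemanager.resource.memory-mb=8192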

  • Nikhil Kumar

    I still get this error when installing on a CDH QuickStart VM:

    task.memory

    FAIL

    Datameer requires at least '1.0 GB' of memory for a (map/reduce-) task. Configured is only '989.9 MB'.

    I tried setting the following (below), but it does not like the value 2048m and fails immediately. Any ideas? I want to use this VM for a training. Thanks!

    das.job.map-task.memory=2048m
    das.job.reduce-task.memory=2048m
    das.job.application-manager.memory=2048m
     
    mapred.map.child.java.opts=2048m
    mapreduce.reduce.java.opts=2048m
    mapred.child.java.opts=2048m
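
    As an aside — an assumption about formats rather than anything confirmed in this thread — the das.job.*.memory properties are usually given as plain megabyte values (the linked page only shows <value> placeholders), while the Hadoop *.java.opts properties expect JVM flags, so the attempts above would more commonly be written as:

    # das.* sizes as plain MB integers (assumption; see the documentation linked in the official comment)
    das.job.map-task.memory=2048
    das.job.reduce-task.memory=2048
    das.job.application-manager.memory=2048

    # *.java.opts properties take JVM arguments rather than bare sizes
    mapred.map.child.java.opts=-Xmx2048m
    mapreduce.reduce.java.opts=-Xmx2048m
    mapred.child.java.opts=-Xmx2048m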
  • Nikhil Kumar

    Thanks Konsta - since this is 6.1.2, it is using Spark Cluster mode, so I think there is a Spark-specific setting I need?

  • Nikhil Kumar

    Sorry, Spark Client, not Spark Cluster.

  • Nikhil Kumar

    The setting tez.task.resource.memory.mb=1536 did not resolve the problem, by the way.

  • Konsta Danyliuk

    Hi Nikhil,

    As Spark is the default execution framework for Datameer 6.1.2, try adding the option below to your Hadoop Cluster custom properties section to see if it helps.

    spark.executor.memory=1536m
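
    For completeness, a sketch of how that could look in the custom properties section; only the executor setting was confirmed to help in this thread, and adding the driver-side counterpart as well is an assumption:

    # Spark memory settings; spark.driver.memory is included only as an assumption
    spark.executor.memory=1536m
    spark.driver.memory=1536m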

  • Nikhil Kumar

    That worked! Thanks so much, Konsta!!

  • Nikhil Kumar

    Thanks! :-) He has tried all the following so far with no success:

    das.job.map-task.memory=2048m
    das.job.reduce-task.memory=2048m
    das.job.application-manager.memory=2048m
     
    mapred.map.child.java.opts=2048m
    mapreduce.reduce.java.opts=2048m
    mapred.child.java.opts=2048m
     
    He has also increased the memory settings in das-env.sh.
     
    Any other ideas?
  • Konsta Danyliuk

    Hi Nikhil,

    Try adding the option below to your Hadoop Cluster custom properties section to see if it helps.

    tez.task.resource.memory.mb=1536
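
    If Tez is actually the execution framework in use, the task-level setting is sometimes paired with its application master counterpart; reusing the same 1536 value here is an assumption:

    # Tez memory settings in MB; pairing the AM setting with the task setting is an assumption
    tez.task.resource.memory.mb=1536
    tez.am.resource.memory.mb=1536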

     

