Learn how to prioritize jobs submitted by Datameer to a Hadoop cluster.
Depending on whether you are using impersonation, you can employ one of the following mechanisms.
If you are not using impersonation, you can assign jobs to specific cluster queues at either a global or per-job level.
To do this on a global level:
- Open the Administration tab in Datameer.
- Select Hadoop Cluster from the side menu.
- To send all jobs of all framework types to the same queue, add the following property in the Custom Properties field:
das.job.queue=<cluster queue name>
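For example, with a hypothetical cluster queue named `datameer_jobs`, the global setting might look like:

```properties
# Route all Datameer jobs, regardless of execution framework, to one queue.
# "datameer_jobs" is a placeholder; substitute your cluster's queue name.
das.job.queue=datameer_jobs
```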
- To specify a job queue for a preferred Execution Framework, add one of the following properties in the Custom Properties field:
tez.queue.name=<cluster queue name>
mapreduce.job.queuename=<cluster queue name>
(MapReduce is a deprecated framework)
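As a sketch, assuming a hypothetical queue named `etl_priority`, the framework-specific properties would be set like this:

```properties
# Route Tez jobs to a specific queue ("etl_priority" is a placeholder):
tez.queue.name=etl_priority

# Or, for the deprecated MapReduce framework:
mapreduce.job.queuename=etl_priority
```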
Note: To do this on an individual job level, add one of these properties to the Custom Properties space within the specific artifact's configuration.
Datameer users who are running impersonation don't need to set any scheduling properties in Datameer. Jobs coming from Datameer are already labeled, and all queue configuration is made on the Hadoop cluster itself.