pyiron_base.jobs.worker module

Worker class to execute calculations asynchronously

class pyiron_base.jobs.worker.WorkerJob(project, job_name)

Bases: PythonTemplateJob

The WorkerJob executes jobs linked to its master id.

The worker can either live in the same project as the calculations it executes or in a different project. For this example two projects are created:

>>> from pyiron_base import Project
>>> pr_worker = Project("worker")
>>> pr_calc = Project("calc")

The worker is configured to be executed in the background using the non_modal run mode, with the number of cores defining the total number available to the worker and cores_per_job defining the allocation per job. It is recommended to use the same number of cores for each task the worker executes to optimise the load balancing.

>>> job_worker = pr_worker.create.job.WorkerJob("runner")
>>> job_worker.server.run_mode.non_modal = True
>>> job_worker.server.cores = 4
>>> job_worker.input.cores_per_job = 2
>>> job_worker.run()

The calculations are assigned to the worker by setting their run_mode to worker and setting the job_id of the worker as the master_id of each job. In this example a total of ten ToyJob calculations are attached to the worker, with each ToyJob using two cores.

>>> for i in range(10):
...     job = pr_calc.create.job.ToyJob("toy_" + str(i))
...     job.server.run_mode.worker = True
...     job.server.cores = 2
...     job.master_id = job_worker.job_id
...     job.run()

The execution can be monitored using the job_table of the calculation project:

>>> pr_calc.job_table()

Finally, after all calculations are finished, the status of the worker is set to collect, which internally stops the execution of the worker and afterwards updates the job status to finished:

>>> pr_calc.wait_for_jobs()
>>> job_worker.status.collect = True
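Conceptually, the worker drains a queue of submitted jobs while never running more than cores // cores_per_job of them at once. The following is an illustrative plain-Python sketch of that load-balancing pattern, not the pyiron_base implementation (which pulls jobs from the database by master_id and executes them in a process pool):

```python
from concurrent.futures import ThreadPoolExecutor


def drain_queue(jobs, run_job, cores=4, cores_per_job=2):
    """Execute all queued jobs, at most cores // cores_per_job at a time.

    Illustrative stand-in: the real worker uses a process pool and
    fetches its jobs from the database rather than from a list.
    """
    max_workers = max(1, cores // cores_per_job)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves the submission order of the results
        return list(pool.map(run_job, jobs))
```

With cores=4 and cores_per_job=2 this runs two jobs concurrently, matching the worker configuration in the example above.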
property child_runtime
property cores_per_job
property project_to_watch
property queue_limit_factor
run_static()

The run static function is called by run to execute the simulation.

run_static_with_database()
run_static_without_database()
property sleep_interval
wait_for_worker(interval_in_s=60, max_iterations=10)

Wait for the WorkerJob to finish the execution of all jobs. If no job is in the status running or submitted, the WorkerJob shuts down automatically after max_iterations * interval_in_s seconds (10 minutes with the defaults).

Parameters:
  • interval_in_s (int) – interval at which the job status is queried from the database - default 60 sec.

  • max_iterations (int) – maximum number of iterations - default 10

pyiron_base.jobs.worker.worker_function(args)

The worker function is executed inside a process pool.

Parameters:

args (list) – A list of arguments

Arguments inside the argument list:

  • working_directory (str): working directory of the job

  • job_id (int/None): job ID

  • hdf5_file (str): path to the HDF5 file of the job

  • h5_path (str): path inside the HDF5 file to load the job

  • submit_on_remote (bool): submit to the queuing system on the remote host

  • debug (bool): enable debug mode [True/False] (optional)
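Since args arrives as a positional list, its fields have to be unpacked in the documented order. The helper below is a hypothetical sketch of that unpacking (it is not part of the pyiron_base API) and assumes the six fields appear in the order listed above:

```python
def unpack_worker_args(args):
    """Unpack the positional worker argument list into a dict.

    Hypothetical helper for illustration; the field order follows
    the documented list, not a verified internal signature.
    """
    working_directory, job_id, hdf5_file, h5_path, submit_on_remote, debug = args
    return {
        "working_directory": working_directory,
        "job_id": job_id,
        "hdf5_file": hdf5_file,
        "h5_path": h5_path,
        "submit_on_remote": submit_on_remote,
        "debug": debug,
    }
```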