pyiron_base.jobs.job.runfunction module

pyiron_base.jobs.job.runfunction.execute_job_with_external_executable(job)
pyiron_base.jobs.job.runfunction.handle_failed_job(job, error)

Handle failed jobs: write the error message to a text file and update the database.

Parameters:
  • job (GenericJob) – pyiron job object

  • error (subprocess.SubprocessError) – error of the subprocess executing the job

Returns:

boolean flag indicating that the job crashed, and the error message

Return type:

boolean, str

pyiron_base.jobs.job.runfunction.handle_finished_job(job, job_crashed=False, collect_output=True)

Handle finished jobs: collect the calculation output and set the status to ‘aborted’ if the job crashed.

Parameters:
  • job (GenericJob) – pyiron job object

  • job_crashed (boolean) – flag to indicate failed jobs

  • collect_output (boolean) – flag to indicate if the collect_output() function should be called
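
A minimal usage sketch of how these two helpers fit together around an external call; the shell script name and the pre-existing GenericJob instance `job` are placeholders, not part of pyiron_base:

>>> import subprocess
>>> from pyiron_base.jobs.job.runfunction import handle_failed_job, handle_finished_job
>>> try:
...     subprocess.run(["./run_simulation.sh"], cwd=job.working_directory, check=True)
...     job_crashed, error_message = False, None
... except subprocess.SubprocessError as error:
...     job_crashed, error_message = handle_failed_job(job=job, error=error)
...
>>> handle_finished_job(job=job, job_crashed=job_crashed, collect_output=True)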

pyiron_base.jobs.job.runfunction.multiprocess_wrapper(working_directory, job_id=None, file_path=None, debug=False, connection_string=None)
pyiron_base.jobs.job.runfunction.run_job_with_parameter_repair(job)

Internal helper function that is called when the run() function is invoked with the ‘repair’ parameter.

Parameters:

job (GenericJob) – pyiron job object
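
Illustrative usage, assuming a GenericJob instance `job` whose previous run aborted; calling run() with the repair flag enters this code path:

>>> job.run(repair=True)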

pyiron_base.jobs.job.runfunction.run_job_with_runmode_executor(job, executor, gpus_per_slot=None)

Introduced in Python 3.2, the concurrent.futures interface enables the asynchronous execution of Python programs. A function is submitted to the executor and a future object is returned. The future object is updated in the background once the executor has finished executing the function. The job.server.run_mode.executor run mode implements the same functionality for pyiron jobs. An executor is set as an attribute of the server object:

>>> job.server.executor = concurrent.futures.Executor()
>>> job.run()
>>> job.server.future.done()
False
>>> job.server.future.result()
>>> job.server.future.done()
True

When the job is executed by calling the run() function, a future object is attached to the server object as job.server.future. The job is then executed in the background and the user can use the future object to check the status of the job.

Parameters:
  • job (GenericJob) – pyiron job object

  • executor (concurrent.futures.Executor) – executor which implements the interface defined by the Python concurrent.futures.Executor class.

  • gpus_per_slot (int) – number of GPUs per MPI rank, typically 1
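
A self-contained variant of the example above with a concrete executor class; the choice of ProcessPoolExecutor is only for illustration, and the context manager ensures the executor is shut down once the result has been collected:

>>> from concurrent.futures import ProcessPoolExecutor
>>> with ProcessPoolExecutor(max_workers=1) as executor:
...     job.server.executor = executor
...     job.run()
...     job.server.future.result()  # block until the job has finished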

pyiron_base.jobs.job.runfunction.run_job_with_runmode_executor_flux(job, executor, gpus_per_slot=None)

Interface for the flux.job.FluxExecutor executor. Flux is a hierarchical resource manager. It can either be used to replace queuing systems like SLURM or as a user-specific queuing system within an existing allocation. pyiron provides two interfaces to Flux: this executor interface as well as a traditional queuing system interface via pysqa. The executor interface is designed for the development of asynchronous simulation protocols, while the traditional queuing system interface simplifies the transition from other queuing systems like SLURM. The usage is analogous to the concurrent.futures.Executor interface:

>>> from flux.job import FluxExecutor
>>> job.server.executor = FluxExecutor()
>>> job.run()
>>> job.server.future.done()
False
>>> job.server.future.result()
>>> job.server.future.done()
True

A word of caution: Flux is currently only available on Linux. For all other operating systems the ProcessPoolExecutor from the Python standard library module concurrent.futures is recommended. The advantage of Flux over the ProcessPoolExecutor is that Flux takes over the resource management, for example monitoring how many cores are available, while with the ProcessPoolExecutor this is left to the user.

Parameters:
  • job (GenericJob) – pyiron job object

  • executor (flux.job.FluxExecutor) – Flux executor which implements the interface defined by the Python concurrent.futures.Executor class.

  • gpus_per_slot – number of GPUs per MPI rank, typically 1

pyiron_base.jobs.job.runfunction.run_job_with_runmode_executor_futures(job, executor)

Interface for the ProcessPoolExecutor implemented in the Python standard library as part of the concurrent.futures module. The ProcessPoolExecutor does not provide any resource management, so the user is responsible for keeping track of the number of compute cores in use, as over-subscription can lead to poor performance.

The [ProcessPoolExecutor docs](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) state: “The __main__ module must be importable by worker subprocesses. This means that ProcessPoolExecutor will not work in the interactive interpreter.” (i.e. Jupyter notebooks). For standard usage this is a non-issue, but for the edge case of job classes defined in-notebook (e.g. children of PythonTemplateJob), using the ProcessPoolExecutor will result in errors. To resolve this, relocate such classes to an importable .py file.

>>> from concurrent.futures import ProcessPoolExecutor
>>> job.server.executor = ProcessPoolExecutor()
>>> job.run()
>>> job.server.future.done()
False
>>> job.server.future.result()
>>> job.server.future.done()
True

Parameters:
  • job (GenericJob) – pyiron job object

  • executor (concurrent.futures.Executor) – executor which implements the interface defined by the Python concurrent.futures.Executor class.
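
Because of the __main__ restriction quoted above, a sketch of the script-based pattern: the executor is only created inside the __main__ guard of an importable module (the file name and the omitted job construction are placeholders):

# contents of a hypothetical script, e.g. run_with_executor.py
from concurrent.futures import ProcessPoolExecutor

def run_in_background(job):
    with ProcessPoolExecutor() as executor:
        job.server.executor = executor
        job.run()
        return job.server.future.result()

if __name__ == "__main__":
    ...  # create the pyiron job here and call run_in_background(job)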

pyiron_base.jobs.job.runfunction.run_job_with_runmode_manually(job, _manually_print=True)

Internal helper function to run a job manually.

Parameters:
  • job (GenericJob) – pyiron job object

  • _manually_print (bool) – print the command for manual execution (default: True)
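
Illustrative only: the attribute-style selection of the manual run mode below is an assumption based on how other run modes are selected on the server object, and the printed command depends on the job class:

>>> job.server.run_mode.manual = True
>>> job.run()  # writes the input files and prints the command to execute them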

pyiron_base.jobs.job.runfunction.run_job_with_runmode_modal(job)

The run-if-modal function is called by run() to execute the simulation while waiting for the output. For this, subprocess.check_output() is used.

Parameters:

job (GenericJob) – pyiron job object
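
A sketch of the modal execution pattern described above, not the actual pyiron_base implementation; the executable name is a placeholder:

>>> import subprocess
>>> output = subprocess.check_output(
...     ["./run_job.sh"],           # placeholder for the job's executable
...     cwd=job.working_directory,  # run inside the job's working directory
...     universal_newlines=True,
... )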

pyiron_base.jobs.job.runfunction.run_job_with_runmode_non_modal(job)

The run-if-non-modal function is called by run() to execute the simulation in the background. For this, multiprocessing.Process() is used.

Parameters:

job (GenericJob) – pyiron job object
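
A sketch of the non-modal pattern, again not the actual implementation: the calculation is started in a separate process via multiprocessing.Process() so that run() can return immediately:

>>> import multiprocessing
>>> def execute_in_background(working_directory):
...     pass  # placeholder: run the calculation inside working_directory
...
>>> process = multiprocessing.Process(
...     target=execute_in_background, args=(job.working_directory,)
... )
>>> process.start()  # returns immediately, the job runs in the background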

pyiron_base.jobs.job.runfunction.run_job_with_runmode_queue(job)

The run-if-queue function is called by run() if the user decides to submit the job to a queuing system. The job is submitted to the queuing system using subprocess.Popen().

Parameters:

job (GenericJob) – pyiron job object

Returns:

Returns the queue ID for the job.

Return type:

int
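
Illustrative usage from the job side; the queue name is a placeholder for a queue defined in the local pysqa configuration, and the server attribute names are assumptions based on the pyiron server interface:

>>> job.server.queue = "slurm_cpu"  # placeholder queue name
>>> job.server.cores = 16
>>> job.run()  # submits the job via run_job_with_runmode_queue()
>>> print(job.server.queue_id)  # queue ID assigned by the queuing system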

pyiron_base.jobs.job.runfunction.run_job_with_runmode_srun(job)
pyiron_base.jobs.job.runfunction.run_job_with_status_busy(job)

Internal helper function that is called by run() when the job status is ‘busy’.

Parameters:

job (GenericJob) – pyiron job object

pyiron_base.jobs.job.runfunction.run_job_with_status_collect(job)

Internal helper function that is called by run() when the job status is ‘collect’. It collects the simulation output using the standardized functions collect_output() and collect_logfiles(). Afterwards the status is set to ‘finished’.

Parameters:

job (GenericJob) – pyiron job object

pyiron_base.jobs.job.runfunction.run_job_with_status_created(job)

Internal helper function that is called by run() when the job status is ‘created’. It executes the simulation, either in modal mode (waiting for the simulation to finish), manually, or by submitting the simulation to the queue.

Parameters:

job (GenericJob) – pyiron job object

Returns:

Queue ID, if the job was submitted to the queue

Return type:

int

pyiron_base.jobs.job.runfunction.run_job_with_status_finished(job)

Internal helper function that is called by run() when the job status is ‘finished’. It loads the existing job.

Parameters:

job (GenericJob) – pyiron job object
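
Illustrative behavior, assuming the standard GenericJob.run() signature: calling run() on a finished job only reloads the stored output, while the delete_existing_job flag discards the previous result and runs the calculation again:

>>> job.run()                          # status 'finished': the existing job is loaded
>>> job.run(delete_existing_job=True)  # remove the previous result and run again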

pyiron_base.jobs.job.runfunction.run_job_with_status_initialized(job, debug=False)

Internal helper function that is called by run() when the job status is ‘initialized’. It prepares the HDF5 file and the corresponding directory structure.

Parameters:
  • job (GenericJob) – pyiron job object

  • debug (bool) – Debug Mode

pyiron_base.jobs.job.runfunction.run_job_with_status_refresh(job)

Internal helper function that is called by run() when the job status is ‘refresh’. If the job was suspended previously, it is restarted and continued.

Parameters:

job (GenericJob) – pyiron job object

pyiron_base.jobs.job.runfunction.run_job_with_status_running(job)

Internal helper function that is called by run() when the job status is ‘running’. It allows the user to interact with the simulation while it is running.

Parameters:

job (GenericJob) – pyiron job object

pyiron_base.jobs.job.runfunction.run_job_with_status_submitted(job)

Internal helper function that is called by run() when the job status is ‘submitted’, i.e. the job is waiting in the queue. ToDo: Display a list of the user’s jobs in the queue.

Parameters:

job (GenericJob) – pyiron job object

pyiron_base.jobs.job.runfunction.run_job_with_status_suspended(job)

Internal helper function that is called by run() when the job status is ‘suspended’. It restarts the job by calling the run-if-refresh function after setting the status to ‘refresh’.

Parameters:

job (GenericJob) – pyiron job object

pyiron_base.jobs.job.runfunction.run_time_decorator(func)