pyiron_base.jobs.job.generic module

Generic Job class extends the JobCore class with all the functionality to run the job object.

class pyiron_base.jobs.job.generic.GenericError(working_directory)

Bases: object

print_message(string='')
print_queue(string='')
class pyiron_base.jobs.job.generic.GenericJob(project, job_name)

Bases: JobCore, HasDict

Generic Job class extends the JobCore class with all the functionality to run the job object. From this class all specific job types are derived. Therefore it should contain the properties/routines common to all jobs. The functions in this module should be as generic as possible.

Subclasses that need to add special behavior after copy_to() can override _after_generic_copy_to().

Parameters:
  • project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in

  • job_name (str) – name of the job, which has to be unique within the project

.. attribute:: job_name

name of the job, which has to be unique within the project

.. attribute:: status

execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]

.. attribute:: job_id

unique id to identify the job in the pyiron database

.. attribute:: parent_id

job id of the predecessor job - the job which was executed before the current one in the current job series

.. attribute:: master_id

job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.

.. attribute:: child_ids

list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master

.. attribute:: project

Project instance the jobs is located in

.. attribute:: project_hdf5

ProjectHDFio instance which points to the HDF5 file the job is stored in

.. attribute:: job_info_str

short string to describe the job by its job_name and job ID - mainly used for logging

.. attribute:: working_directory

working directory the job is executed in - outside the HDF5 file

.. attribute:: path

path to the job as a combination of absolute file system path and path within the HDF5 file.

.. attribute:: version

Version of the hamiltonian, which is also the version of the executable unless a custom executable is used.

.. attribute:: executable

Executable used to run the job - usually the path to an external executable.

.. attribute:: library_activated

For job types which offer a Python library, pyiron can use the Python library instead of an external executable.

.. attribute:: server

Server object to handle the execution environment for the job.

.. attribute:: queue_id

the ID returned from the queuing system - it is most likely not the same as the job ID.

.. attribute:: logger

logger object to monitor the external execution and internal pyiron warnings.

.. attribute:: restart_file_list

list of files from which the calculation can be restarted.

.. attribute:: exclude_nodes_hdf

list of nodes which are excluded from storing in the hdf5 file.

.. attribute:: exclude_groups_hdf

list of groups which are excluded from storing in the hdf5 file.

.. attribute:: job_type

Job type object with all the available job types: [‘ExampleJob’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]
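
A minimal usage sketch of how a GenericJob-derived job is typically created and run from a Project. The project name, job type and job name are illustrative; passing the job type as a string is assumed to work the same way as documented for create_job() below:

    from pyiron_base import Project

    pr = Project("demo_project")

    # create a job of a registered job type; the name has to be unique within the project
    job = pr.create_job(job_type="ExampleJob", job_name="toy_job")

    # configure the execution environment before running
    job.server.cores = 1

    # run() writes the input, executes the job and collects the output into the HDF5 file
    job.run()

    # status and ID are tracked in the pyiron database
    print(job.status, job.job_id)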

check_setup()

Checks whether certain parameters (such as the plane wave cutoff radius in DFT) are changed from the pyiron standard values to allow for physically meaningful results. This function is called manually, or automatically when the job is submitted to the queueing system.

clear_job()

Convenience function to clear job info after suspend. Mimics deletion of all the job info after suspend in a local test environment.

collect_logfiles()

Collect the log files of the external executable and store the information in the HDF5 file. This method has to be implemented in the individual hamiltonians.

collect_output()

Collect the output files of the external executable and store the information in the HDF5 file. This method has to be implemented in the individual hamiltonians.

convergence_check()

Validate the convergence of the calculation.

Returns:

If the calculation is converged

Return type:

(bool)

copy()

Copy the GenericJob object which links to the job and its HDF5 file

Returns:

New GenericJob object pointing to the same job

Return type:

GenericJob

copy_file_to_working_directory(file)

Copy a specific file to the working directory before the job is executed.

Parameters:

file (str) – path of the file to be copied.

copy_template(project=None, new_job_name=None)

Copy the content of the job including the HDF5 file but without the output data to a new location

Parameters:
  • project (JobCore/ProjectHDFio/Project/None) – The project to copy the job to. (Default is None, use the same project.)

  • new_job_name (str) – The new name to assign the duplicate job. Required if the project is None or the same project as the copied job. (Default is None, try to keep the same name.)

Returns:

GenericJob object pointing to the new location.

Return type:

GenericJob

copy_to(project=None, new_job_name=None, input_only=False, new_database_entry=True, delete_existing_job=False, copy_files=True)

Copy the content of the job including the HDF5 file to a new location.

Parameters:
  • project (JobCore/ProjectHDFio/Project/None) – The project to copy the job to. (Default is None, use the same project.)

  • new_job_name (str) – The new name to assign the duplicate job. Required if the project is None or the same project as the copied job. (Default is None, try to keep the same name.)

  • input_only (bool) – [True/False] Whether to copy only the input. (Default is False.)

  • new_database_entry (bool) – [True/False] Whether to create a new database entry. If input_only is True then new_database_entry is False. (Default is True.)

  • delete_existing_job (bool) – [True/False] Delete existing job in case it exists already (Default is False.)

  • copy_files (bool) – If True, copy all files of the working directory of the job, too

Returns:

GenericJob object pointing to the new location.

Return type:

GenericJob
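
A short, hedged example of copying a job to another location; the target sub-project and the new job name are placeholders:

    # assuming `job` is an existing GenericJob; open a sub-project as the copy target
    pr_target = job.project.open("copies")

    job_copy = job.copy_to(
        project=pr_target,
        new_job_name="toy_job_duplicate",
        input_only=False,          # copy input and output
        new_database_entry=True,   # register the copy in the database
    )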

create_job(job_type, job_name, delete_existing_job=False)

Create one of the following jobs:

  • ‘StructureContainer’:
  • ‘StructurePipeline’:
  • ‘AtomisticExampleJob’: example job just generating random number
  • ‘ExampleJob’: example job just generating random number
  • ‘Lammps’:
  • ‘KMC’:
  • ‘Sphinx’:
  • ‘Vasp’:
  • ‘GenericMaster’:
  • ‘ParallelMaster’: series of jobs run in parallel
  • ‘KmcMaster’:
  • ‘ThermoLambdaMaster’:
  • ‘RandomSeedMaster’:
  • ‘MeamFit’:
  • ‘Murnaghan’:
  • ‘MinimizeMurnaghan’:
  • ‘ElasticMatrix’:
  • ‘ConvergenceEncutParallel’:
  • ‘ConvergenceKpointParallel’:
  • ‘PhonopyMaster’:
  • ‘DefectFormationEnergy’:
  • ‘LammpsASE’:
  • ‘PipelineMaster’:
  • ‘TransformationPath’:
  • ‘ThermoIntEamQh’:
  • ‘ThermoIntDftEam’:
  • ‘ScriptJob’: Python script or jupyter notebook job container
  • ‘ListMaster’: list of jobs

Parameters:
  • job_type (str) – job type can be [‘StructureContainer’, ‘StructurePipeline’, ‘AtomisticExampleJob’, ‘ExampleJob’, ‘Lammps’, ‘KMC’, ‘Sphinx’, ‘Vasp’, ‘GenericMaster’, ‘ParallelMaster’, ‘KmcMaster’, ‘ThermoLambdaMaster’, ‘RandomSeedMaster’, ‘MeamFit’, ‘Murnaghan’, ‘MinimizeMurnaghan’, ‘ElasticMatrix’, ‘ConvergenceEncutParallel’, ‘ConvergenceKpointParallel’, ’PhonopyMaster’, ‘DefectFormationEnergy’, ‘LammpsASE’, ‘PipelineMaster’, ’TransformationPath’, ‘ThermoIntEamQh’, ‘ThermoIntDftEam’, ‘ScriptJob’, ‘ListMaster’]

  • job_name (str) – name of the job

  • delete_existing_job (bool) – delete an existing job - default false

Returns:

job object depending on the job_type selected

Return type:

GenericJob
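
Illustrative sketch of creating a follow-up job from an existing job; the job type, names and ScriptJob-specific input are placeholders only:

    # assuming `job` is an existing GenericJob instance
    script = job.create_job(job_type="ScriptJob", job_name="post_processing")
    script.script_path = "analysis.ipynb"  # ScriptJob-specific input, shown for illustration
    script.run()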

db_entry()

Generate the initial database entry for the current GenericJob

Returns:

database dictionary {“username”, “projectpath”, “project”, “job”, “subjob”, “hamversion”, “hamilton”, “status”, “computer”, “timestart”, “masterid”, “parentid”}

Return type:

(dict)

drop_status_to_aborted()

Change the job status to aborted when the job was intercepted.

property exclude_groups_hdf

Get the list of groups which are excluded from storing in the hdf5 file

Returns:

groups(list)

property exclude_nodes_hdf

Get the list of nodes which are excluded from storing in the hdf5 file

Returns:

nodes(list)

property executable

Get the executable used to run the job - usually the path to an external executable.

Returns:

executable path

Return type:

(str/pyiron_base.job.executable.Executable)

property executor_type
from_dict(job_dict)
from_hdf(hdf=None, group_name=None)

Restore the GenericJob from an HDF5 file

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object - optional

  • group_name (str) – HDF5 subgroup name - optional
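
from_hdf() is usually not called directly; it is invoked when a stored job is reloaded from its project. A hedged sketch, assuming pr is the Project the job was created in:

    # reload a previously saved job by its name; this restores the stored state via from_hdf()
    job_reloaded = pr.load("toy_job")
    print(job_reloaded.status)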

classmethod from_hdf_args(hdf)

Read arguments for instance creation from HDF5 file

Parameters:

hdf (ProjectHDFio) – HDF5 group object

interactive_close()

For jobs whose executables are available as a Python library, the job can be executed through a library call instead of calling an external executable. This is usually faster than a single-core Python job. After the interactive execution, the job can be closed using the interactive_close function.

interactive_fetch()

For jobs whose executables are available as a Python library, the job can be executed through a library call instead of calling an external executable. This is usually faster than a single-core Python job. To access the output data during the execution, the interactive_fetch function is used.

interactive_flush(path='generic', include_last_step=True)

For jobs whose executables are available as a Python library, the job can be executed through a library call instead of calling an external executable. This is usually faster than a single-core Python job. To write the interactive cache to the HDF5 file, the interactive_flush function is used.
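
A hedged sketch of the interactive workflow for job classes that implement these hooks; whether interactive execution is supported depends on the specific job type:

    # request execution through the Python library interface instead of an external binary
    job.server.run_mode.interactive = True
    job.run()                                  # dispatches to run_if_interactive()

    snapshot = job.interactive_fetch()         # access output data during the execution
    job.interactive_flush(path="generic")      # write the interactive cache to the HDF5 file
    job.interactive_close()                    # finalize and close the interactive job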

job_file_name(file_name, cwd=None)

combine the file name file_name with the path of the current working directory

Parameters:
  • file_name (str) – name of the file

  • cwd (str) – current working directory - this overwrites self.project_hdf5.working_directory - optional

Returns:

absolute path to the file in the current working directory

Return type:

str

property job_type
[‘ExampleJob’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]

Returns:

Job type object

Return type:

JobTypeChoice

Type:

Job type object with all the available job types

kill()
property logger

Get the logger object to monitor the external execution and internal pyiron warnings.

Returns:

logger object

Return type:

logging.getLogger()

property queue_id

Get the queue ID, the ID returned from the queuing system - it is most likely not the same as the job ID.

Returns:

queue ID

Return type:

int

refresh_job_status()

Refresh job status by updating the job status with the status from the database if a job ID is available.

remove(_protect_childs=True)

Remove the job - this removes the HDF5 file, all data stored in the HDF5 file and the corresponding database entry.

Parameters:

_protect_childs (bool) – [True/False] by default child jobs cannot be deleted, to maintain consistency - default=True

remove_and_reset_id(_protect_childs=True)
remove_child()

Internal remove function that also removes child jobs. Never use this command, since it will destroy the integrity of your project.

reset_job_id(job_id=None)

Reset the job ID - sets job_id to None in the GenericJob as well as in all connected modules like JobStatus.

restart(job_name=None, job_type=None)

Create a restart calculation from the current calculation - in the GenericJob this is the same as create_job(). A restart is only possible after the current job has finished. If you want to run the same job again with different input parameters, use job.run(delete_existing_job=True) instead.

Parameters:
  • job_name (str) – job name of the new calculation - default=<job_name>_restart

  • job_type (str) – job type of the new calculation - default is the same type as the existing calculation

Returns:

new job object for the restart calculation

Return type:

GenericJob
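
Illustrative sketch of restarting a finished calculation; the new job name is a placeholder:

    # `job` has to be finished before a restart can be created
    job_restart = job.restart(job_name="toy_job_restart")
    job_restart.run()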

property restart_file_dict

A dictionary mapping the copied restart files to the new names they receive in the new working directory

property restart_file_list

Get the list of files from which the calculation can be restarted.

Returns:

list of files

Return type:

list
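
Files appended to this list are copied into the working directory of the restarted job. A hedged example; the file name is a placeholder, and renaming via restart_file_dict is assumed to work by plain item assignment:

    # reuse a checkpoint file from the previous run in the restarted calculation
    job_restart.restart_file_list.append("/path/to/previous/checkpoint_file")

    # optionally give the file a different name in the new working directory
    job_restart.restart_file_dict["/path/to/previous/checkpoint_file"] = "checkpoint_in"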

run(delete_existing_job=False, repair=False, debug=False, run_mode=None, run_again=False)

This is the main run function; depending on the job status [‘initialized’, ‘created’, ‘submitted’, ‘running’, ‘collect’, ‘finished’, ‘refresh’, ‘suspended’] the corresponding run mode is chosen.

Parameters:
  • delete_existing_job (bool) – Delete the existing job and run the simulation again.

  • repair (bool) – Set the job status to created and run the simulation again.

  • debug (bool) – Debug Mode - defines the log level of the subprocess the job is executed in.

  • run_mode (str) – [‘modal’, ‘non_modal’, ‘queue’, ‘manual’] overwrites self.server.run_mode

  • run_again (bool) – Same as delete_existing_job (deprecated)
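
A brief, hedged overview of common run() invocations; queue availability depends on the local pyiron configuration:

    job.run()                          # default: run according to job.server.run_mode
    job.run(delete_existing_job=True)  # discard the previous run and start again
    job.run(run_mode="queue")          # overwrite the run mode and submit to the queuing system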

run_if_interactive()

For jobs whose executables are available as a Python library, the job can be executed through a library call instead of calling an external executable. This is usually faster than a single-core Python job.

run_if_interactive_non_modal()

For jobs whose executables are available as a Python library, the job can be executed through a library call instead of calling an external executable. This is usually faster than a single-core Python job.

run_if_modal()

The run if modal function is called by run to execute the simulation, while waiting for the output. For this we use subprocess.check_output()

run_if_refresh()

Internal helper function; the run_if_refresh function is called when the job status is ‘refresh’. If the job was suspended previously, it is started again and continued.

run_if_scheduler()

The run_if_scheduler function is called by run if the user decides to submit the job to a queuing system. The job is submitted to the queuing system using subprocess.Popen()

Returns:

queue ID for the job

Return type:

int

run_static()

The run static function is called by run to execute the simulation.

run_time_to_db()

Internal helper function to store the run_time in the database

save()

Save the object, by writing the content to the HDF5 file and storing an entry in the database.

Returns:

Job ID stored in the database

Return type:

(int)

send_to_database()

If the job should be stored in an external/public database, this could be implemented here; currently it is just a placeholder.

property server

Get the server object to handle the execution environment for the job.

Returns:

server object

Return type:

Server
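
The server object is typically configured before calling run(). A hedged sketch; the queue name and resources are placeholders and have to match the local pyiron configuration:

    job.server.cores = 4              # number of cores used for the execution
    job.server.run_time = 3600        # requested run time in seconds
    job.server.queue = "my_queue"     # queue defined in the pyiron queue configuration
    job.server.run_mode.queue = True  # submit via the queuing system instead of running locally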

set_input_to_read_only()

This function enforces read-only mode for the input classes, but it has to be implemented in the individual classes.

signal_intercept(sig)

Abort the job and log signal that caused it.

Expected to be called from pyiron_base.state.signal.catch_signals().

Parameters:

sig (int) – the signal that triggered the abort

suspend()

Suspend the job by storing the object and its state persistently in the HDF5 file and exiting.

to_dict()
to_hdf(hdf=None, group_name=None)

Store the GenericJob in an HDF5 file

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object - optional

  • group_name (str) – HDF5 subgroup name - optional

transfer_from_remote()
update_master(force_update=False)

After a job is finished it checks whether it is linked to any metajob - meaning the master ID is pointing to this job's job ID. If this is the case and the master job is in status suspended, the child wakes up the master job, sets its status to refresh and executes run on the master job. During the execution the master job is set to status refresh. If another child calls update_master while the master is in refresh, the status of the master is set to busy, and if the master is in status busy at the end of the update_master process another update is triggered.

Parameters:

force_update (bool) – Whether to check run mode for updating master

validate_ready_to_run()

Validate that the calculation is ready to be executed. By default no generic checks are performed, but one could check that the input information is complete or validate the consistency of the input at this point.

Raises:

ValueError – if ready check is unsuccessful

property version

Get the version of the hamiltonian, which is also the version of the executable unless a custom executable is used.

Returns:

version number

Return type:

str

property working_directory

Get the working directory the job is executed in - outside the HDF5 file. The working directory equals the path, but it is represented on the filesystem:

/absolute/path/to/the/file.h5/path/inside/the/hdf5/file

becomes:

/absolute/path/to/the/file_hdf5/path/inside/the/hdf5/file

Returns:

absolute path to the working directory

Return type:

str

write_input()

Write the input files for the external executable. This method has to be implemented in the individual hamiltonians.
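
write_input(), collect_output() and collect_logfiles() are the hooks a concrete job class is expected to implement. A minimal, hypothetical subclass sketch; the class name, file names and parsing logic are illustrative only, and a real job class would additionally persist its input, e.g. via to_dict()/from_dict() or a DataContainer:

    import os

    from pyiron_base.jobs.job.generic import GenericJob


    class ToyJob(GenericJob):
        """Hypothetical job wrapping an executable that reads input.txt and writes output.txt."""

        def __init__(self, project, job_name):
            super().__init__(project, job_name)
            self.input_value = 1.0  # illustrative input parameter

        def write_input(self):
            # write the input file into the working directory before execution
            with open(os.path.join(self.working_directory, "input.txt"), "w") as f:
                f.write(f"value {self.input_value}\n")

        def collect_output(self):
            # parse the output file and store the result in the job's HDF5 file
            with open(os.path.join(self.working_directory, "output.txt")) as f:
                energy = float(f.read().strip())
            with self.project_hdf5.open("output/generic") as h5out:
                h5out["energy"] = energy

        def collect_logfiles(self):
            # nothing to collect for this toy example
            pass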