pyiron_base.jobs.script module

Job class to execute Python scripts and Jupyter notebooks

class pyiron_base.jobs.script.ScriptJob(project, job_name)

Bases: GenericJob

The ScriptJob class allows submitting Python scripts and Jupyter notebooks to the pyiron job management system.

Parameters:
  • project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in

  • job_name (str) – name of the job, which has to be unique within the project

Simple example:
Step 1. Create the notebook to be submitted, e.g. 'example.ipynb', and save it. It can contain any code, for example:

```
import json

with open('script_output.json', 'w') as f:
    json.dump({'x': [0, 1]}, f)  # dump some data into a JSON file
```

Step 2. Create the submitter notebook, e.g. 'submit_example_job.ipynb', which submits 'example.ipynb' to the pyiron job management system and can contain the following code:

```
from pyiron_base import Project

pr = Project('scriptjob_example')                 # save the ScriptJob in the 'scriptjob_example' project
scriptjob = pr.create.job.ScriptJob('scriptjob')  # create a ScriptJob named 'scriptjob'
scriptjob.script_path = 'example.ipynb'           # specify the path to the notebook you want to submit
scriptjob.run()                                   # run the ScriptJob
```

Step 3. Check the job table to get details about ‘scriptjob’ by using:

`pr.job_table()`
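The status of 'scriptjob' can also be read directly from the job object instead of scanning the full job table, assuming the 'scriptjob' instance from Step 2 is still available in the notebook; a minimal sketch:

```
print(scriptjob.status)  # one of the documented job states, e.g. 'submitted', 'running' or 'finished'
```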

Step 4. If the status of 'scriptjob' is 'finished', load the data from the JSON file into the 'submit_example_job.ipynb' notebook by using:

```
import json

with open(scriptjob.working_directory + '/script_output.json') as f:
    data = json.load(f)  # load the data from the JSON file
```

More sophisticated example:

The script in ScriptJob can also be more complex, e.g. running its own pyiron calculations. Here we show how it is leveraged to run a multi-core atomistic calculation.

Step 1. ‘example.ipynb’ can contain pyiron_atomistics code like:

```
from pyiron_atomistics import Project

pr = Project('example')
job = pr.create.job.Lammps('job')                   # we name the job 'job'
job.structure = pr.create.structure.ase_bulk('Fe')  # specify the structure

# Optional: get an input value from 'submit_example_job.ipynb',
# the notebook which submits 'example.ipynb'
number_of_cores = pr.data.number_of_cores
job.server.cores = number_of_cores

job.run()  # run the job

# save a custom output that can be used by the notebook 'submit_example_job.ipynb'
job['user/my_custom_output'] = 16
```

Step 2. 'submit_example_job.ipynb' can then contain the following code:

```
from pyiron_base import Project

pr = Project('scriptjob_example')                 # save the ScriptJob in the 'scriptjob_example' project
scriptjob = pr.create.job.ScriptJob('scriptjob')  # create a ScriptJob named 'scriptjob'
scriptjob.script_path = 'example.ipynb'           # specify the path to the notebook you want to submit;
                                                  # in this example 'example.ipynb' is in the same
                                                  # directory as 'submit_example_job.ipynb'

# Optional: to submit the notebook to a queueing system
number_of_cores = 1              # number of cores to be used
scriptjob.server.cores = number_of_cores
scriptjob.server.queue = 'cmfe'  # specify the queue to which the ScriptJob is submitted
scriptjob.server.run_time = 120  # specify the runtime limit for the ScriptJob in seconds

# Optional: save an input such that it can be accessed by 'example.ipynb'
pr.data.number_of_cores = number_of_cores
pr.data.write()

# run the ScriptJob
scriptjob.run()
```

Step 3. Check the job table by using:

`pr.job_table()`

In addition to containing details on 'scriptjob', the job table also contains the details of the child job(s), if any, that were submitted within the 'example.ipynb' notebook.
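To separate the child jobs from the ScriptJob itself, the job table can be filtered by master ID; a hedged sketch, assuming pr.job_table() returns a pandas DataFrame with 'id', 'job', 'status' and 'masterid' columns as in current pyiron_base versions:

```
df = pr.job_table()
# rows whose 'masterid' equals the ScriptJob's database id are the jobs
# created inside 'example.ipynb'
child_rows = df[df["masterid"] == scriptjob.job_id]
print(child_rows[["id", "job", "status"]])
```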

Step 4. Reload and analyse the child job(s). If the status of a child job is 'finished', it can be loaded into the 'submit_example_job.ipynb' notebook using:

```
job = pr.load('job')  # remember in Step 1 we ran a job named 'job', which has now 'finished';
                      # this loads 'job' into the 'submit_example_job.ipynb' notebook for analysis
job.output.energy_pot[-1]     # access the output via auto-complete
job['user/my_custom_output']  # the custom output, directly from the HDF5 file
```

.. attribute:: job_name

name of the job, which has to be unique within the project

.. attribute:: status

execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]

.. attribute:: job_id

unique id to identify the job in the pyiron database

.. attribute:: parent_id

job id of the predecessor job - the job which was executed before the current one in the current job series

.. attribute:: master_id

job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.

.. attribute:: child_ids

list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master

.. attribute:: project

Project instance the job is located in

.. attribute:: project_hdf5

ProjectHDFio instance which points to the HDF5 file the job is stored in

.. attribute:: job_info_str

short string to describe the job by its job_name and job ID - mainly used for logging

.. attribute:: working_directory

working directory the job is executed in - outside the HDF5 file

.. attribute:: path

path to the job as a combination of absolute file system path and path within the HDF5 file.

.. attribute:: version

Version of the hamiltonian, which is also the version of the executable unless a custom executable is used.

.. attribute:: executable

Executable used to run the job - usually the path to an external executable.

.. attribute:: library_activated

For job types which offer a Python library, pyiron can use the Python library instead of an external executable.

.. attribute:: server

Server object to handle the execution environment for the job.

.. attribute:: queue_id

the ID returned from the queuing system - it is most likely not the same as the job ID.

.. attribute:: logger

logger object to monitor the external execution and internal pyiron warnings.

.. attribute:: restart_file_list

list of files which are used to restart the calculation.

.. attribute:: job_type

Job type object with all the available job types: ['ExampleJob', 'ParallelMaster', 'ScriptJob', 'ListMaster']

.. attribute:: script_path

the absolute path to the Python script

collect_logfiles()

Compatibility function - but no log files are being collected

collect_output()

The collect_output function updates the master ID entries for all child jobs created by this script job. If a child job is already assigned to a master job, nothing happens - master IDs are not overwritten.
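Since the child jobs list the ScriptJob as their master after collect_output() has run, they can also be reloaded through the child_ids attribute; a hedged sketch, assuming 'scriptjob' and 'pr' are the finished ScriptJob and its Project from the examples above:

```
for child_id in scriptjob.child_ids:  # database ids of jobs that list this ScriptJob as their master
    child = pr.load(child_id)         # reload each child job by its id
    print(child.job_name, child.status)
```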

disable_mpi4py()
enable_mpi4py()
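A hedged usage sketch for these two methods, assuming they toggle whether the script is launched through mpi4py when several cores are requested; the script name 'parallel_script.py' is hypothetical:

```
scriptjob = pr.create.job.ScriptJob('mpi_scriptjob')
scriptjob.script_path = 'parallel_script.py'  # hypothetical MPI-aware Python script
scriptjob.server.cores = 4
scriptjob.enable_mpi4py()   # assumption: execute the script via mpi4py on the requested cores
scriptjob.run()
# scriptjob.disable_mpi4py() would switch back to a plain serial invocation (assumption)
```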
from_dict(job_dict)
run_if_lib()

Compatibility function - but library run mode is not available

property script_path

Python script path

Returns:

absolute path to the Python script

Return type:

str

set_input_to_read_only()

This function enforces read-only mode for the input classes, but it has to be implemented in the individual classes.

to_dict()
validate_ready_to_run()

Validate that the calculation is ready to be executed. By default no generic checks are performed, but one could check that the input information is complete or validate the consistency of the input at this point.

Raises:

ValueError – if ready check is unsuccessful
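Because no generic checks are performed by default, a subclass can add its own validation; a minimal sketch with a hypothetical subclass name, assuming script_path is None until it has been assigned:

```
from pyiron_base.jobs.script import ScriptJob

class CheckedScriptJob(ScriptJob):
    """Hypothetical ScriptJob variant that refuses to run without a script path."""

    def validate_ready_to_run(self):
        # assumption: script_path is None until it has been assigned
        if self.script_path is None:
            raise ValueError("script_path must be set before the job can run")
```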

write_input()

Copy the script to the working directory - only Python scripts and Jupyter notebooks are supported
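Since plain Python scripts are supported as well as notebooks, a minimal sketch using a .py file; the file name 'my_script.py' is hypothetical:

```
from pyiron_base import Project

pr = Project('scriptjob_py_example')
scriptjob = pr.create.job.ScriptJob('py_scriptjob')
scriptjob.script_path = 'my_script.py'  # hypothetical plain Python script instead of a notebook
scriptjob.run()                         # write_input() copies the script into the working directory
```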