pyiron_base.project.generic module
The project object is the central import point of pyiron - all other objects can be created from this one
- class pyiron_base.project.generic.Creator(project)
Bases:
object
- property job
- static job_name(job_name: str, ndigits: int | None = 8, special_symbols: Dict | None = None)
Creation of job names with special symbol replacement and rounding of floating point numbers
- Parameters:
job_name (str/list) – Job name
ndigits (int/None) – Decimal digits to round floats to a given precision. None if no rounding should be performed.
special_symbols (dict) – Replacement of special symbols.
- Returns:
Job name
- Return type:
(str)
Default special_symbols: default_special_symbols_to_be_replaced
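Example (a minimal sketch; the exact output depends on the default special-symbol replacement table referenced above, and the names are placeholders):
>>> from pyiron_base import Project
>>> pr = Project("demo")
>>> # floats are rounded to ndigits and special symbols such as ‘.’ are replaced
>>> pr.create.job_name(["strain", 0.123456789], ndigits=3)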
- table(job_name='table', delete_existing_job=False)
Create pyiron table
- Parameters:
job_name (str) – job name of the pyiron table job
delete_existing_job (bool) – Delete the existing table and run the analysis again.
- Returns:
pyiron_base.table.datamining.TableJob
- class pyiron_base.project.generic.Project(path='', user=None, sql_query=None, default_working_directory=False)
Bases: ProjectPath, HasGroups
The project is the central class in pyiron, all other objects can be created from the project object.
Implements HasGroups. Groups are subdirectories in the project, nodes are jobs inside the project.
- Parameters:
path (GenericPath, str) – path of the project defined by GenericPath, absolute or relative (with respect to current working directory) path
user (str) – current pyiron user
sql_query (str) – SQL query to only select a subset of the existing jobs within the current project
default_working_directory (bool) – Access the default working directory. For ScriptJobs this equals the project directory of the ScriptJob; for regular projects it falls back to the current directory.
- root_path
The pyiron user directory, defined in the .pyiron configuration.
- project_path
The relative path of the current project / folder starting from the root path of the pyiron user directory
- path
The absolute path of the current project / folder.
- base_name
The name of the current project / folder.
- history
Previously opened projects / folders.
- parent_group
Parent project - one level above the current project.
- user
Current unix/linux/windows user who is running pyiron
- sql_query
An SQL query to limit the jobs within the project to a subset which matches the SQL query.
- db
Connection to the SQL database.
- job_type
Job Type object with all the available job types: [‘ExampleJob’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’].
- view_mode
If view_mode is enabled, pyiron has read-only access to the database.
- data
A storage container for project-level data.
Examples
- Storing data:
>>> pr = Project('example')
>>> pr.data.foo = 42
>>> pr.data.write()

Some time later or in a different notebook, but at the same file location...

>>> other_pr_instance = Project('example')
>>> print(other_pr_instance.data)
{'foo': 42}
- property base
- compress_jobs(recursive=False)
Compress all finished jobs in the current project and in all subprojects if recursive=True is selected.
- Parameters:
recursive (bool) – [True/False] compress all jobs in all subprojects - default=False
- property conda_environment
- copy()
Copy the project object - copying just the Python object but maintaining the same pyiron path
- Returns:
copy of the project object
- Return type:
Project
- copy_to(destination)
Copy the project object to a different pyiron path - including the content of the project (all jobs). In order to move individual jobs, use copy_to from the job objects.
- property create
- create_from_job(job_old, new_job_name)
Create a new job from an existing pyiron job
- Parameters:
job_old (GenericJob) – Job to copy
new_job_name (str) – New job name
- Returns:
New job with the new job name.
- Return type:
GenericJob
- create_group(group)
Create a new subproject/ group/ folder
- Parameters:
group (str) – name of the new project
- Returns:
New subproject
- Return type:
Project
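Example (a sketch; the group name is a placeholder):
>>> from pyiron_base import Project
>>> pr = Project("demo")
>>> sub_pr = pr.create_group("sub_project")  # a new Project one level below pr
>>> sub_pr.path  # absolute path now ends in demo/sub_project/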
- static create_hdf(path, job_name)
Create a ProjectHDFio object to store project-related information - for example aggregated data
- Parameters:
path (str) – absolute path
job_name (str) – name of the HDF5 container
- Returns:
HDF5 object
- Return type:
ProjectHDFio
- create_job(job_type, job_name, delete_existing_job=False)
Create one of the following jobs:
- ‘ExampleJob’: example job that just generates random numbers
- ‘ParallelMaster’: series of jobs run in parallel
- ‘ScriptJob’: Python script or Jupyter notebook job container
- ‘ListMaster’: list of jobs
- Parameters:
job_type (str) – job type can be [‘ExampleJob’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]
job_name (str) – name of the job
delete_existing_job (bool) – delete an existing job - default false
- Returns:
job object depending on the job_type selected
- Return type:
GenericJob
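Example (a short sketch using the built-in ExampleJob; project and job names are placeholders):
>>> from pyiron_base import Project
>>> pr = Project("demo")
>>> job = pr.create_job(job_type="ExampleJob", job_name="toy")
>>> job.run()  # executes the job and registers it in the database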
- static create_job_class(class_name, executable_str, write_input_funct=None, collect_output_funct=None, default_input_dict=None)
Create a new job class based on pre-defined write_input() and collect_output() function plus a dictionary of default inputs and an executable string.
- Parameters:
class_name (str) – A name for the newly created job class, so it is accessible via pr.create.job.<class_name>
executable_str (str) – Call to an external executable
write_input_funct (callable) – The write input function write_input(input_dict, working_directory)
collect_output_funct (callable) – The collect output function collect_output(working_directory)
default_input_dict (dict) – Default input for the newly created job class
Example:
>>> def write_input(input_dict, working_directory="."):
>>>     with open(os.path.join(working_directory, "input_file"), "w") as f:
>>>         f.write(str(input_dict["energy"]))
>>>
>>> def collect_output(working_directory="."):
>>>     with open(os.path.join(working_directory, "output_file"), "r") as f:
>>>         return {"energy": float(f.readline())}
>>>
>>> from pyiron_base import Project
>>> pr = Project("test")
>>> pr.create_job_class(
>>>     class_name="CatJob",
>>>     write_input_funct=write_input,
>>>     collect_output_funct=collect_output,
>>>     default_input_dict={"energy": 1.0},
>>>     executable_str="cat input_file > output_file",
>>> )
>>> job = pr.create.job.CatJob(job_name="job_test")
>>> job.input["energy"] = 2.0
>>> job.run()
>>> job.output
- create_table(job_name='table', delete_existing_job=False)
Create pyiron table
- Parameters:
job_name (str) – job name of the pyiron table job
delete_existing_job (bool) – Delete the existing table and run the analysis again.
- Returns:
pyiron_base.table.datamining.TableJob
- property data
- property db
- delete_output_files_jobs(recursive=False)
Delete the output files of all finished jobs in the current project and in all subprojects if recursive=True is selected.
- Parameters:
recursive (bool) – [True/False] delete the output files of all jobs in all subprojects - default=False
- get_child_ids(job_specifier, project=None)
Get the child jobs of a specific job
- Parameters:
job_specifier (str, int) – name of the job or job ID
project (Project) – Project the job is located in - optional
- Returns:
list of child IDs
- Return type:
list
- get_db_columns()
Get column names
- Returns:
- list of column names like:
[‘id’, ‘parentid’, ‘masterid’, ‘projectpath’, ‘project’, ‘job’, ‘subjob’, ‘chemicalformula’, ‘status’, ‘hamilton’, ‘hamversion’, ‘username’, ‘computer’, ‘timestart’, ‘timestop’, ‘totalcputime’]
- Return type:
list
- static get_external_input()
Get external input either from the HDF5 file of the ScriptJob object which executes the Jupyter notebook or from an input.json file located in the same directory as the Jupyter notebook.
- Returns:
Dictionary with external input
- Return type:
dict
- get_job_id(job_specifier)
Get the job ID of the job specified by job_specifier from the database, within the local project path
- Parameters:
job_specifier (str, int) – name of the job or job ID
- Returns:
job ID of the job
- Return type:
int
- get_job_ids(recursive=True)
Return the job IDs matching a specific query
- Parameters:
recursive (bool) – search subprojects [True/False]
- Returns:
a list of job IDs
- Return type:
list
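Example (assuming a job named ‘toy’ has already been saved in this project):
>>> job_id = pr.get_job_id("toy")             # single ID looked up by job name
>>> all_ids = pr.get_job_ids(recursive=True)  # all IDs, including subprojects
>>> job_id in all_ids
True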
- get_job_status(job_specifier, project=None)
Get the status of a particular job
- Parameters:
job_specifier (str, int) – name of the job or job ID
project (Project) – Project the job is located in - optional
- Returns:
job status can be one of the following: [‘initialized’, ‘appended’, ‘created’, ‘submitted’, ‘running’, ‘aborted’, ‘collect’, ‘suspended’, ‘refresh’, ‘busy’, ‘finished’]
- Return type:
str
- get_job_working_directory(job_specifier, project=None)
Get the working directory of a particular job
- Parameters:
job_specifier (str, int) – name of the job or job ID
project (Project) – Project the job is located in - optional
- Returns:
working directory as absolute path
- Return type:
str
- get_jobs(recursive=True, columns=None)
Internal function to return the jobs as a dictionary rather than a pandas.DataFrame
- Parameters:
recursive (bool) – search subprojects [True/False]
columns (list) – by default only the columns [‘id’, ‘project’] are selected, but the user can select a subset of [‘id’, ‘status’, ‘chemicalformula’, ‘job’, ‘subjob’, ‘project’, ‘projectpath’, ‘timestart’, ‘timestop’, ‘totalcputime’, ‘computer’, ‘hamilton’, ‘hamversion’, ‘parentid’, ‘masterid’]
- Returns:
columns are used as keys and point to a list of the corresponding values
- Return type:
dict
- get_jobs_status(recursive=True, **kwargs)
Gives an overview of the status of all jobs.
- Parameters:
recursive (bool) – search subprojects [True/False] - default=True
kwargs – passed directly to job_table() and can be used to filter the jobs whose status should be returned
- Returns:
an overview of the number of jobs per status
- Return type:
pandas.Series
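Example (the keyword filter is passed through to job_table(); hamilton is one of the database columns listed under get_db_columns()):
>>> pr.get_jobs_status(recursive=True)         # pandas.Series with counts per status
>>> pr.get_jobs_status(hamilton="ExampleJob")  # restrict the counts to one job type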
- get_project_size()
Get the size of the project.
- Returns:
project size
- Return type:
float
- get_repository_status()
- groups()
Filter project by groups
- Returns:
a project which is filtered by groups
- Return type:
Project
- property inspect
- items()
All items in the current project - this includes jobs, sub projects/ groups/ folders and any kind of files
- Returns:
items in the project
- Return type:
list
- iter_groups(progress: bool = True) Generator
Iterate over the groups within the current project
- Parameters:
progress (bool) – Display a progress bar during the iteration
- Yields:
Project
– sub projects/ groups/ folders
- iter_jobs(path: str = None, recursive: bool = True, convert_to_object: bool = True, progress: bool = True, **kwargs: dict) Generator
Iterate over the jobs within the current project and its subprojects
- Parameters:
path (str) – HDF5 path inside each job object. (Default is None, which just uses the top level of the job’s HDF5 path.)
recursive (bool) – search subprojects. (Default is True.)
convert_to_object (bool) – load the full GenericJob object, else just return the HDF5 / JobCore object. (Default is True, convert everything to the full python object.)
progress (bool) – add an interactive progress bar to the iteration. (Default is True, show the bar.)
**kwargs (dict) – Optional arguments for filtering with keys matching the project database column name (e.g. status=”finished”). An asterisk can be used as a wildcard for zero or more instances of any character
- Returns:
Yield of GenericJob or JobCore
- Return type:
yield
Note
The default behavior of converting to object can cause significant slowdown in larger projects. In this case, consider setting convert_to_object=False and accessing only the HDF5/JobCore representation of the jobs instead.
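A sketch combining the filtering kwargs with the faster JobCore access recommended in the note above:
>>> for job in pr.iter_jobs(convert_to_object=False, status="finished"):
...     print(job.name)  # JobCore still exposes the name and the stored HDF5 data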
- iter_output(recursive=True)
Iterate over the output of jobs within the current project and its subprojects
- Parameters:
recursive (bool) – search subprojects [True/False] - True by default
- Returns:
Yield of GenericJob or JobCore
- Return type:
yield
- job_table(recursive=True, columns=None, all_columns=True, sort_by='id', full_table=False, element_lst=None, job_name_contains='', auto_refresh_job_status=False, mode: Literal['regex', 'glob'] = 'glob', **kwargs: dict)
Access the job_table.
- Parameters:
recursive (bool) – search subprojects [True/False]
columns (list) – by default only the columns [‘job’, ‘project’, ‘chemicalformula’] are selected, but the user can select a subset of [‘id’, ‘status’, ‘chemicalformula’, ‘job’, ‘subjob’, ‘project’, ‘projectpath’, ‘timestart’, ‘timestop’, ‘totalcputime’, ‘computer’, ‘hamilton’, ‘hamversion’, ‘parentid’, ‘masterid’]
all_columns (bool) – Select all columns - this overwrites the columns option.
sort_by (str) – Sort by a specific column
max_colwidth (int) – set the column width
full_table (bool) – Whether to show the entire pandas table
element_lst (list) – list of elements required in the chemical formula - by default None
job_name_contains (str) – (deprecated) A string which should be contained in every job_name
mode (str) – search mode when kwargs are given.
**kwargs (dict) – Optional arguments for filtering with keys matching the project database column name (e.g. status=”finished”). An asterisk can be used as a wildcard for zero or more instances of any character
- Returns:
Return the result as a pandas.DataFrame object
- Return type:
pandas.DataFrame
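Example of filtering with glob-style wildcards (status and job are database columns, see get_db_columns(); the job name is a placeholder):
>>> df = pr.job_table(recursive=True, status="finished", job="toy*")
>>> df[["id", "job", "status"]]  # a regular pandas.DataFrame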
- keys()
List of file, folder and object names
- Returns:
list of the names of project directories and project nodes
- Return type:
list
- static list_clusters()
List available computing clusters for remote submission
- Returns:
List of computing clusters
- Return type:
list
- list_dirs(skip_hdf5=True)
List directories inside the project
- Parameters:
skip_hdf5 (bool) – Skip directories which belong to a pyiron object/ pyiron job - default=True
- Returns:
list of directory names
- Return type:
list
- list_files(extension=None)
List files inside the project
- Parameters:
extension (str) – filter by a specific extension
- Returns:
list of file names
- Return type:
list
- static list_publications(bib_format='pandas')
List the publications used in this project.
- Parameters:
bib_format (str) – [‘pandas’, ‘dict’, ‘bibtex’, ‘apa’]
- Returns:
list of publications in the selected format.
- Return type:
pandas.DataFrame/ list
- property load
- load_from_jobpath(job_id=None, db_entry=None, convert_to_object=True)
Internal function to load an existing job either based on the job ID or based on the database entry dictionary.
- Parameters:
job_id (int/ None) – Job ID - optional, but either the job_id or the db_entry is required.
db_entry (dict) – database entry dictionary - optional, but either the job_id or the db_entry is required.
convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster but provides only limited functionality. Compare the GenericJob object to the JobCore object.
- Returns:
Either the full GenericJob object or just a reduced JobCore object
- Return type:
GenericJob or JobCore
- static load_from_jobpath_string(job_path, convert_to_object=True)
Internal function to load an existing job from an HDF5 job path string.
- Parameters:
job_path (str) – string to reload the job from an HDF5 file - ‘/root_path/project_path/filename.h5/h5_path’
convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster but provides only limited functionality. Compare the GenericJob object to the JobCore object.
- Returns:
Either the full GenericJob object or just a reduced JobCore object
- Return type:
GenericJob or JobCore
- property maintenance
- move_to(destination)
Similar to the copy_to() function move the project object to a different pyiron path - including the content of the project (all jobs). In order to move individual jobs, use move_to from the job objects.
- property name
The name of the current project folder
- Returns:
name of the current project folder
- Return type:
str
- pack(destination_path, csv_file_name='export.csv', compress=True, copy_all_files=False)
Export the job table to a csv file and copy (and optionally compress) the project directory.
- Parameters:
destination_path (str) – the path to which the project folder is copied and, optionally, compressed
csv_file_name (str) – name of the csv file used to store the job table
compress (bool) – if True, compress the destination_path to a tar.gz file
copy_all_files (bool) – if True, copy all files in the project directory, not only the pyiron resource files
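A minimal sketch of exporting the current project (the destination name is a placeholder):
>>> pr.pack("my_export", csv_file_name="export.csv", compress=True)
>>> # produces a my_export.tar.gz archive together with the export.csv job table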
- property parent_group
Get the parent group of the current project
- Returns:
parent project
- Return type:
Project
- static queue_check_job_is_waiting_or_running(item)
Check if a job is still listed in the queue system as either waiting or running.
- Parameters:
item (int, GenericJob) – Provide either the job_ID or the full hamiltonian
- Returns:
[True/False]
- Return type:
bool
- queue_delete_job(item)
Delete a job from the queuing system
- Parameters:
item (int, GenericJob) – Provide either the job_ID or the full hamiltonian
- Returns:
Output from the queuing system as string - optimized for the Sun grid engine
- Return type:
str
- static queue_enable_reservation(item)
Enable a reservation for a particular job within the queuing system
- Parameters:
item (int, GenericJob) – Provide either the job_ID or the full hamiltonian
- Returns:
Output from the queuing system as string - optimized for the Sun grid engine
- Return type:
str
- static queue_is_empty()
Check if the queue table is currently empty - no more jobs to wait for.
- Returns:
True if the table is empty, else False - optimized for the Sun grid engine
- Return type:
bool
- queue_table(project_only=True, recursive=True, full_table=False)
Display the queuing system table as pandas.Dataframe
- Parameters:
project_only (bool) – Query only for jobs within the current project - True by default
recursive (bool) – Include jobs from sub projects
full_table (bool) – Whether to show the entire pandas table
- Returns:
Output from the queuing system - optimized for the Sun grid engine
- Return type:
pandas.DataFrame
- queue_table_global(full_table=False)
Display the queuing system table as pandas.Dataframe
- Parameters:
full_table (bool) – Whether to show the entire pandas table
- Returns:
Output from the queuing system - optimized for the Sun grid engine
- Return type:
pandas.DataFrame
- refresh_job_status(*jobs, by_status=['running', 'submitted'])
Check if job is still running or crashed on the cluster node.
If jobs is not given, check for all jobs listed as running in the current project.
- Parameters:
*jobs (str, int) – name of the job or job ID, any number of them
by_status (iterable of str) – if no jobs are given, select all jobs with the given status in this project
- refresh_job_status_based_on_job_id(job_id, que_mode=True)
Internal function to check if a job is still listed as ‘running’ in the job_table while it is no longer listed in the queuing system. In this case the entry in the job_table is updated to ‘aborted’.
- Parameters:
job_id (int) – job ID
que_mode (bool) – [True/False] - default=True
- refresh_job_status_based_on_queue_status(job_specifier, status='running')
Check if the job is still listed as running, while it is no longer listed in the queue.
- Parameters:
job_specifier (str, int) – name of the job or job ID
status (str) – Currently only the jobstatus of ‘running’ jobs can be refreshed - default=’running’
- classmethod register_tools(name: str, tools)
Add a new creator to the project class.
Example:
>>> from pyiron_base import Project, Toolkit
>>> class MyTools(Toolkit):
...     @property
...     def foo(self):
...         return 'foo'
>>>
>>> Project.register_tools('my_tools', MyTools)
>>> pr = Project('scratch')
>>> print(pr.my_tools.foo)
'foo'
The intent is then that pyiron submodules (e.g. pyiron_atomistics) define a new toolkit and in their __init__.py file only need to invoke Project.register_tools(‘pyiron_submodule’, SubmoduleToolkit). Then whenever pyiron_submodule gets imported, all its functionality is available on the project.
- Parameters:
name (str) – The name for the newly registered property.
tools (Toolkit) – The tools to register.
- remove(enable=False, enforce=False)
Delete the whole project, including all jobs in the project and its subprojects
- Parameters:
enforce (bool) – [True/False] delete jobs even though they are used in other projects - default=False
enable (bool) – [True/False] safety flag - the project is only removed if enable=True - default=False
- remove_file(file_name)
Remove a file (same as unlink()) - copied from os.remove()
- Parameters:
file_name (str) – name of the file
- remove_job(job_specifier, _unprotect=False)
Remove a single job from the project based on its job_specifier - see also remove_jobs()
- Parameters:
job_specifier (str, int) – name of the job or job ID
_unprotect (bool) – [True/False] delete the job without validating the dependencies to other jobs - default=False
- remove_jobs(recursive=False, progress=True, silently=False)
Remove all jobs in the current project and in all subprojects if recursive=True is selected - see also remove_job().
For safety, the user is asked via input() to confirm the removal. To bypass this interactive interruption, use remove_jobs(silently=True).
- Parameters:
recursive (bool) – [True/False] delete all jobs in all subprojects - default=False
progress (bool) – if True (default), add an interactive progress bar to the iteration
silently (bool) – if True the safety check is disabled - default=False
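For example, clearing a whole project tree without the interactive prompt (irreversible):
>>> pr.remove_jobs(recursive=True, silently=True)  # skips the input() confirmation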
- remove_jobs_silently(recursive=False, progress=True)
Remove all jobs without the interactive safety check - equivalent to remove_jobs(silently=True).
- set_job_status(job_specifier, status, project=None)
Set the status of a particular job
- Parameters:
job_specifier (str, int) – name of the job or job ID
status (str) – job status can be one of the following [‘initialized’, ‘appended’, ‘created’, ‘submitted’, ‘running’, ‘aborted’, ‘collect’, ‘suspended’, ‘refresh’, ‘busy’, ‘finished’]
project (str) – project path
- static set_logging_level(level, channel=None)
Set level for logger
- Parameters:
level (str) – one of [‘DEBUG’, ‘INFO’, ‘WARN’]
channel (int) – 0: file_log, 1: stream, None: both
- property size
Get the size of the project
- property state
- static switch_cluster(cluster_name)
Switch to a different computing cluster
- Parameters:
cluster_name (str) – name of the computing cluster
- switch_to_central_database()
Switch from local mode to central mode - if local mode is enabled, pyiron uses a local database.
- switch_to_local_database(file_name='pyiron.db', cwd=None)
Switch from central mode to local mode - if local mode is enabled, pyiron uses a local database.
- Parameters:
file_name (str) – file name or file path for the local database
cwd (str) – directory where the local database is located
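A sketch of a typical round trip between the two modes (the file name is the default):
>>> pr.switch_to_local_database(file_name="pyiron.db")  # jobs now go to a local file-based database
>>> # ... create and run jobs ...
>>> pr.switch_to_central_database()                     # back to the configured central database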
- switch_to_user_mode()
Switch from viewer mode to user mode - if view_mode is enabled, pyiron has read-only access to the database.
- switch_to_viewer_mode()
Switch from user mode to viewer mode - if view_mode is enabled, pyiron has read-only access to the database.
- symlink(target_dir)
Move underlying project folder to target and create a symlink to it.
The project itself does not change and is not updated in the database. Instead the project folder is moved into a subdirectory of target_dir with the same name as the project and a symlink is placed in the previous project path pointing to the newly created one.
If self.path is already a symlink pointing inside target_dir, this method will silently return.
- Parameters:
target_dir (str) – new parent folder for the project
- Raises:
OSError – when calling this method on non-unix systems
RuntimeError – the project path is already a symlink to somewhere else
RuntimeError – the project path has submitted or running jobs inside it, wait until after they are finished
RuntimeError – target already contains a subdirectory with the project name and it is not empty
- unlink()
If the project folder is symlinked somewhere else remove the link and restore the original folder.
If it is not symlinked, silently return.
- unpack(origin_path, csv_file_name='export.csv', compress=True)
Import the job table from a given csv file and copy the content of the project directory from a given path.
- Parameters:
origin_path (str) – the relative path of a directory (or a compressed file without the tar.gz extension) from which the project directory is copied.
csv_file_name (str) – the csv file from which the job_table is copied to the current project
compress (bool) – if True, it looks for a compressed file
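Together with pack() this gives a simple export/import round trip (project and archive names are placeholders):
>>> from pyiron_base import Project
>>> pr.pack("my_export", csv_file_name="export.csv", compress=True)
>>> imported_pr = Project("imported_project")
>>> imported_pr.unpack("my_export", csv_file_name="export.csv", compress=True)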
- update_from_remote(recursive=True, ignore_exceptions=False, try_collecting=False)
Update jobs from the remote server
- Parameters:
recursive (bool) – search subprojects [True/False] - default=True
ignore_exceptions (bool) – ignore eventual exceptions when retrieving jobs - default=False
- Returns:
Returns None if ignore_exceptions is False or no error occurred; returns a list of job IDs when errors occurred but were ignored.
- values()
All items in the current project - this includes jobs, sub projects/ groups/ folders and any kind of files
- Returns:
items in the project
- Return type:
list
- property view_mode
Get view_mode - if view_mode is enabled, pyiron has read-only access to the database.
Change it via Project(‘my_project’).switch_to_viewer_mode() and Project(‘my_project’).switch_to_user_mode()
- Returns:
True if view_mode is enabled
- Return type:
bool
- static wait_for_job(job, interval_in_s=5, max_iterations=100)
Sleep until the job is finished, but at most interval_in_s * max_iterations seconds.
- Parameters:
job (GenericJob) – Job to wait for
interval_in_s (int) – interval when the job status is queried from the database - default 5 sec.
max_iterations (int) – maximum number of iterations - default 100
- Raises:
ValueError – max_iterations reached, job still running
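For example, blocking for at most 5 s * 120 = 600 seconds before a ValueError is raised (assuming job is a previously created GenericJob):
>>> pr.wait_for_job(job, interval_in_s=5, max_iterations=120)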
- wait_for_jobs(interval_in_s=5, max_iterations=100, recursive=True, ignore_exceptions=False)
Wait for the calculations in the project to finish.
- Parameters:
interval_in_s (int) – interval when the job status is queried from the database - default 5 sec.
max_iterations (int) – maximum number of iterations - default 100
recursive (bool) – search subprojects [True/False] - default=True
ignore_exceptions (bool) – ignore eventual exceptions when retrieving jobs - default=False
- Raises:
ValueError – max_iterations reached, but jobs still running
- wrap_executable(job_name, executable_str, write_input_funct=None, collect_output_funct=None, input_dict=None, conda_environment_path=None, conda_environment_name=None, input_file_lst=None, execute_job=False)
Wrap any executable into a pyiron job object using the ExecutableContainerJob.
- Parameters:
job_name (str) – name of the new job object
executable_str (str) – call to an external executable
write_input_funct (callable) – The write input function write_input(input_dict, working_directory)
collect_output_funct (callable) – The collect output function collect_output(working_directory)
input_dict (dict) – Default input for the newly created job class
conda_environment_path (str) – path of the conda environment
conda_environment_name (str) – name of the conda environment
input_file_lst (list) – list of files to be copied to the working directory before executing the job
execute_job (bool) – automatically call run() on the job object - default False
Example:
>>> def write_input(input_dict, working_directory="."):
>>>     with open(os.path.join(working_directory, "input_file"), "w") as f:
>>>         f.write(str(input_dict["energy"]))
>>>
>>> def collect_output(working_directory="."):
>>>     with open(os.path.join(working_directory, "output_file"), "r") as f:
>>>         return {"energy": float(f.readline())}
>>>
>>> from pyiron_base import Project
>>> pr = Project("test")
>>> job = pr.wrap_executable(
>>>     job_name="Cat_Job_energy_1_0",
>>>     write_input_funct=write_input,
>>>     collect_output_funct=collect_output,
>>>     input_dict={"energy": 1.0},
>>>     executable_str="cat input_file > output_file",
>>>     execute_job=True,
>>> )
>>> print(job.output)
- Returns:
pyiron job object
- Return type:
pyiron_base.jobs.flex.ExecutableContainerJob
- wrap_python_function(python_function, job_name=None, automatically_rename=True)
Create a pyiron job object from any python function
- Parameters:
python_function (callable) – python function to create a job object from
job_name (str | None) – The name for the created job. (Default is None, use the name of the function.)
automatically_rename (bool) – Whether to automatically rename the job at save-time to append a string based on the input values. (Default is True.)
- Returns:
pyiron job object
- Return type:
pyiron_base.jobs.flex.pythonfunctioncontainer.PythonFunctionContainerJob
Example:
>>> def test_function(a, b=8):
>>>     return a+b
>>>
>>> from pyiron_base import Project
>>> pr = Project("test")
>>> job = pr.wrap_python_function(test_function)
>>> job.input["a"] = 4
>>> job.input["b"] = 5
>>> job.run()
>>> job.output
>>>
>>> test_function_wrapped = pr.wrap_python_function(test_function)
>>> test_function_wrapped(4, b=6)