pyiron_base.project.generic.Project


class pyiron_base.project.generic.Project(path: str = '', user: str | None = None, sql_query: str | None = None, default_working_directory: bool = False)[source]#

Bases: ProjectPath, HasGroups

The project is the central class in pyiron; all other objects can be created from the project object.

Implements HasGroups. Groups are sub directories in the project; nodes are jobs inside the project.

Parameters:
  • path (GenericPath, str) – path of the project defined by GenericPath, absolute or relative (with respect to current working directory) path

  • user (str) – current pyiron user

  • sql_query (str) – SQL query to only select a subset of the existing jobs within the current project

  • default_working_directory (bool) – Access the default working directory; for ScriptJobs this equals the project directory of the ScriptJob, for regular projects it falls back to the current directory.

root_path#

The pyiron user directory, defined in the .pyiron configuration.

project_path#

The relative path of the current project / folder starting from the root path of the pyiron user directory

path#

The absolute path of the current project / folder.

base_name#

The name of the current project / folder.

history#

Previously opened projects / folders.

parent_group#

Parent project - one level above the current project.

user#

Current unix/linux/windows user who is running pyiron

sql_query#

An SQL query to limit the jobs within the project to a subset which matches the SQL query.

db#

Connection to the SQL database.

job_type#

Job Type object with all the available job types: [‘ExampleJob’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’].

data#

A storage container for project-level data.

Type:

pyiron_base.project.data.ProjectData

Examples

Storing data:
>>> pr = Project('example')
>>> pr.data.foo = 42
>>> pr.data.write()
Some time later or in a different notebook, but in the same file location...
>>> other_pr_instance = Project('example')
>>> print(other_pr_instance.data)
{'foo': 42}
__init__(path: str = '', user: str | None = None, sql_query: str | None = None, default_working_directory: bool = False)[source]#

Open a new or an existing project. The project is defined by providing a relative or an absolute path. If no path is provided the current working directory is used. This is the main class to walk inside the project structure, create new jobs, subprojects etc. Note: Changing the path has no effect on the current working directory

Parameters:

path (GenericPath, str) – path of the project defined by GenericPath, absolute or relative (with respect to current working directory) path

root_path#
the pyiron user directory, defined in the .pyiron configuration
project_path#
the relative path of the current project / folder starting from the root path
of the pyiron user directory
path#
the absolute path of the current project / folder
base_name#
the name of the current project / folder
history#
previously opened projects / folders

Methods

__init__([path, user, sql_query, ...])

Open a new or an existing project.

close()

Return to the path before the last open; if no history exists nothing happens.

compress_jobs([recursive])

Compress all finished jobs in the current project and in all subprojects if recursive=True is selected.

copy()

Copy the project object - copying just the Python object but maintaining the same pyiron path

copy_to(destination[, delete_original_data])

Copy the project object to a different pyiron path - including the content of the project (all jobs).

create_from_job(job_old, new_job_name)

Create a new job from an existing pyiron job

create_group(group)

Create a new subproject/ group/ folder

create_hdf(path, job_name)

Create a ProjectHDFio object to store project related information - for example aggregated data

create_job(job_type, job_name[, ...])

Create one of the following jobs: - 'ExampleJob': example job just generating random number - 'ParallelMaster': series of jobs run in parallel - 'ScriptJob': Python script or jupyter notebook job container - 'ListMaster': list of jobs

create_job_class(class_name, executable_str)

Create a new job class based on pre-defined write_input() and collect_output() function plus a dictionary of default inputs and an executable string.

create_table([job_name, delete_existing_job])

Create pyiron table

delete_output_files_jobs([recursive])

Delete the output files of all finished jobs in the current project and in all subprojects if recursive=True is selected.

get_child_ids(job_specifier[, project])

Get the child IDs for a specific job

get_db_columns()

Get column names

get_external_input()

Get external input either from the HDF5 file of the ScriptJob object which executes the Jupyter notebook or from an input.json file located in the same directory as the Jupyter notebook.

get_job_id(job_specifier)

Get the job_id for the job named job_name in the local project path from the database

get_job_ids([recursive])

Return the job IDs matching a specific query

get_job_status(job_specifier[, project])

Get the status of a particular job

get_job_working_directory(job_specifier[, ...])

Get the working directory of a particular job

get_jobs([recursive, columns])

Internal function to return the jobs as a dictionary rather than a pandas.DataFrame

get_jobs_status([recursive])

Gives an overview of the status of all jobs.

get_project_size()

Get the size of the project.

get_repository_status()

groups()

Filter project by groups

items()

All items in the current project - this includes jobs, sub projects/ groups/ folders and any kind of files

iter_groups([progress])

Iterate over the groups within the current project

iter_jobs([path, recursive, ...])

Iterate over the jobs within the current project and its sub projects

iter_output([recursive])

Iterate over the output of jobs within the current project and its sub projects

job_table([recursive, columns, all_columns, ...])

Access the job_table.

keys()

List of file-, folder- and object names

list_all()

Returns dictionary of :method:`.list_groups()` and :method:`.list_nodes()`.

list_clusters()

List available computing clusters for remote submission

list_dirs([skip_hdf5])

List directories inside the project

list_files([extension])

List files inside the project

list_groups()

Return a list of names of all nested groups.

list_nodes()

Return a list of names of all nested nodes.

list_publications([bib_format])

List the publications used in this project.

listdir()

Equivalent to os.listdir: list all files and directories in this path

load_from_jobpath([job_id, db_entry, ...])

Internal function to load an existing job either based on the job ID or based on the database entry dictionary.

load_from_jobpath_string(job_path[, ...])

Internal function to load an existing job either based on the job ID or based on the database entry dictionary.

move_to(destination)

Same as copy_to() but deletes the original project after copying

nodes()

Filter project by nodes

open(rel_path[, history])

If rel_path exists, set the project path to this directory; if not, create it and go there

pack([destination_path, compress, ...])

Export job table to a csv file and copy (and optionally compress) the project directory.

queue_check_job_is_waiting_or_running(item)

Check if a job is still listed in the queue system as either waiting or running.

queue_delete_job(item)

Delete a job from the queuing system

queue_enable_reservation(item)

Enable a reservation for a particular job within the queuing system

queue_is_empty()

Check if the queue table is currently empty - no more jobs to wait for.

queue_table([project_only, recursive, ...])

Display the queuing system table as pandas.Dataframe

queue_table_global([full_table])

Display the queuing system table as pandas.Dataframe

refresh_job_status(*jobs[, by_status])

Check if job is still running or crashed on the cluster node.

refresh_job_status_based_on_job_id(job_id[, ...])

Internal function to check if a job is still listed 'running' in the job_table while it is no longer listed in the queuing system.

refresh_job_status_based_on_queue_status(...)

Check if the job is still listed as running, while it is no longer listed in the queue.

register_tools(name, tools)

Add a new creator to the project class.

remove([enable, enforce])

Delete the whole project including all jobs in the project and its subprojects

remove_file(file_name)

Remove a file (same as unlink()) - copied from os.remove()

remove_job(job_specifier[, _unprotect])

Remove a single job from the project based on its job_specifier - see also remove_jobs()

remove_jobs([recursive, progress, silently])

Remove all jobs in the current project and in all subprojects if recursive=True is selected - see also remove_job().

remove_jobs_silently([recursive, progress])

removedirs([project_name])

Equivalent to os.removedirs: remove empty directories

set_job_status(job_specifier, status[, project])

Set the status of a particular job

set_logging_level(level[, channel])

Set level for logger

switch_cluster(cluster_name)

Switch to a different computing cluster

switch_to_central_database()

Switch from local mode to central mode - if local_mode is enabled pyiron is using a local database.

switch_to_local_database([file_name, cwd])

Switch from central mode to local mode - if local_mode is enabled pyiron is using a local database.

switch_to_user_mode()

Switch from viewer mode to user mode - if viewer_mode is enabled pyiron has read-only access to the database.

switch_to_viewer_mode()

Switch from user mode to viewer mode - if viewer_mode is enabled pyiron has read-only access to the database.

symlink(target_dir)

Move underlying project folder to target and create a symlink to it.

unlink()

If the project folder is symlinked somewhere else remove the link and restore the original folder.

unpack(origin_path, **kwargs)

Import the job table from a given csv file and copy the content of the project directory from the given path.

unpack_csv(tar_path[, csv_file])

Import job table from a csv file and copy the content of a project directory from a given path.

update_from_remote([recursive, ...])

Update jobs from the remote server

values()

All items in the current project - this includes jobs, sub projects/ groups/ folders and any kind of files

wait_for_job(job[, interval_in_s, ...])

Sleep until the job is finished but maximum interval_in_s * max_iterations seconds.

wait_for_jobs([interval_in_s, ...])

Wait for the calculation in the project to be finished

walk()

Equivalent to os.listdir: list all files and directories in this path

wrap_executable(executable_str[, job_name, ...])

Wrap any executable into a pyiron job object using the ExecutableContainerJob.

wrap_python_function(python_function, *args)

Create a pyiron job object from any python function

Attributes

base

base_name

The name of the current project folder

conda_environment

create

data

db

history

The history of the previously opened paths

inspect

load

maintenance

name

The name of the current project folder

parent_group

Get the parent group of the current project

path

The absolute path of the current object.

project_path

the relative path of the current project / folder starting from the root path of the pyiron user directory

root_path

the pyiron user directory, defined in the .pyiron configuration

size

Get the size of the project

state

property base_name: str#

The name of the current project folder

Returns:

name of the current project folder

Return type:

str

close() None#

Return to the path before the last open; if no history exists nothing happens.

compress_jobs(recursive: bool = False) None[source]#

Compress all finished jobs in the current project and in all subprojects if recursive=True is selected.

Parameters:

recursive (bool) – [True/False] compress all jobs in all subprojects - default=False

copy() Project[source]#

Copy the project object - copying just the Python object but maintaining the same pyiron path

Returns:

copy of the project object

Return type:

Project

copy_to(destination: Project, delete_original_data: bool = False) Project[source]#

Copy the project object to a different pyiron path - including the content of the project (all jobs). In order to move individual jobs, use copy_to from the job objects.

Parameters:
  • destination (Project) – project path to copy the project content to

  • delete_original_data (bool) – delete the original data after copying - default=False

Returns:

pointing to the new project path

Return type:

Project

create_from_job(job_old: GenericJob, new_job_name: str) GenericJob[source]#

Create a new job from an existing pyiron job

Parameters:
  • job_old (GenericJob) – Job to copy

  • new_job_name (str) – New job name

Returns:

New job with the new job name.

Return type:

GenericJob
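
Example (illustrative sketch, not part of the original docstring; 'job_a' is a hypothetical job created with the built-in ExampleJob type):

>>> pr = Project('example')
>>> job_a = pr.create_job(job_type='ExampleJob', job_name='job_a')
>>> job_b = pr.create_from_job(job_a, 'job_b')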

create_group(group: str) Project[source]#

Create a new subproject/ group/ folder

Parameters:

group (str) – name of the new project

Returns:

New subproject

Return type:

Project
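
Example (illustrative; 'sub_folder' is a hypothetical group name):

>>> pr = Project('example')
>>> sub_pr = pr.create_group('sub_folder')
>>> sub_pr.path  # absolute path of the new subproject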

static create_hdf(path, job_name: str) ProjectHDFio[source]#

Create a ProjectHDFio object to store project related information - for example aggregated data

Parameters:
  • path (str) – absolute path

  • job_name (str) – name of the HDF5 container

Returns:

HDF5 object

Return type:

ProjectHDFio

create_job(job_type: str, job_name: str, delete_existing_job: bool = False) GenericJob[source]#

Create one of the following jobs: - ‘ExampleJob’: example job just generating random number - ‘ParallelMaster’: series of jobs run in parallel - ‘ScriptJob’: Python script or jupyter notebook job container - ‘ListMaster’: list of jobs

Parameters:
  • job_type (str) – job type can be [‘ExampleJob’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]

  • job_name (str) – name of the job

  • delete_existing_job (bool) – delete an existing job - default false

Returns:

job object depending on the job_type selected

Return type:

GenericJob
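
Example (illustrative sketch, not part of the original docstring; uses the built-in ‘ExampleJob’ type listed above):

>>> pr = Project('example')
>>> job = pr.create_job(job_type='ExampleJob', job_name='demo')
>>> job.run()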

static create_job_class(class_name: str, executable_str: str, write_input_funct: callable | None = None, collect_output_funct: callable | None = None, default_input_dict: dict | None = None) None[source]#

Create a new job class based on pre-defined write_input() and collect_output() function plus a dictionary of default inputs and an executable string.

Parameters:
  • class_name (str) – A name for the newly created job class, so it is accessible via pr.create.job.<class_name>

  • executable_str (str) – Call to an external executable

  • write_input_funct (callable) – The write input function write_input(input_dict, working_directory)

  • collect_output_funct (callable) – The collect output function collect_output(working_directory)

  • default_input_dict (dict) – Default input for the newly created job class

Example:

>>> import os
>>> def write_input(input_dict, working_directory="."):
...     with open(os.path.join(working_directory, "input_file"), "w") as f:
...         f.write(str(input_dict["energy"]))
>>>
>>> def collect_output(working_directory="."):
...     with open(os.path.join(working_directory, "output_file"), "r") as f:
...         return {"energy": float(f.readline())}
>>>
>>> from pyiron_base import Project
>>> pr = Project("test")
>>> pr.create_job_class(
...     class_name="CatJob",
...     write_input_funct=write_input,
...     collect_output_funct=collect_output,
...     default_input_dict={"energy": 1.0},
...     executable_str="cat input_file > output_file",
... )
>>> job = pr.create.job.CatJob(job_name="job_test")
>>> job.input["energy"] = 2.0
>>> job.run()
>>> job.output

create_table(job_name: str = 'table', delete_existing_job: bool = False) TableJob[source]#

Create pyiron table

Parameters:
  • job_name (str) – job name of the pyiron table job

  • delete_existing_job (bool) – Delete the existing table and run the analysis again.

Returns:

pyiron.table.datamining.TableJob

delete_output_files_jobs(recursive: bool = False) None[source]#

Delete the output files of all finished jobs in the current project and in all subprojects if recursive=True is selected.

Parameters:

recursive (bool) – [True/False] delete the output files of all jobs in all subprojects - default=False

get_child_ids(job_specifier: str | int, project: Project | None = None) List[int][source]#

Get the child IDs for a specific job

Parameters:
  • job_specifier (str, int) – name of the job or job ID

  • project (Project) – Project the job is located in - optional

Returns:

list of child IDs

Return type:

list

get_db_columns() List[str][source]#

Get column names

Returns:

list of column names like:

[‘id’, ‘parentid’, ‘masterid’, ‘projectpath’, ‘project’, ‘job’, ‘subjob’, ‘chemicalformula’, ‘status’, ‘hamilton’, ‘hamversion’, ‘username’, ‘computer’, ‘timestart’, ‘timestop’, ‘totalcputime’]

Return type:

list

static get_external_input() dict[source]#

Get external input either from the HDF5 file of the ScriptJob object which executes the Jupyter notebook or from an input.json file located in the same directory as the Jupyter notebook.

Returns:

Dictionary with external input

Return type:

dict

get_job_id(job_specifier: str | int) int[source]#

Get the job_id for the job named job_name in the local project path from the database

Parameters:

job_specifier (str, int) – name of the job or job ID

Returns:

job ID of the job

Return type:

int

get_job_ids(recursive: bool = True) List[int][source]#

Return the job IDs matching a specific query

Parameters:

recursive (bool) – search subprojects [True/False]

Returns:

a list of job IDs

Return type:

list

get_job_status(job_specifier: str | int, project: Project | None = None) str[source]#

Get the status of a particular job

Parameters:
  • job_specifier (str, int) – name of the job or job ID

  • project (Project) – Project the job is located in - optional

Returns:

job status can be one of the following [‘initialized’, ‘appended’, ‘created’, ‘submitted’, ‘running’, ‘aborted’, ‘collect’, ‘suspended’, ‘refresh’, ‘busy’, ‘finished’]

Return type:

str

get_job_working_directory(job_specifier: str | int, project: Project | None = None) str[source]#

Get the working directory of a particular job

Parameters:
  • job_specifier (str, int) – name of the job or job ID

  • project (Project) – Project the job is located in - optional

Returns:

working directory as absolute path

Return type:

str

get_jobs(recursive: bool = True, columns: List[str] | None = None) dict[source]#

Internal function to return the jobs as a dictionary rather than a pandas.DataFrame

Parameters:
  • recursive (bool) – search subprojects [True/False]

  • columns (list) – by default only the columns [‘id’, ‘project’] are selected, but the user can select a subset of [‘id’, ‘status’, ‘chemicalformula’, ‘job’, ‘subjob’, ‘project’, ‘projectpath’, ‘timestart’, ‘timestop’, ‘totalcputime’, ‘computer’, ‘hamilton’, ‘hamversion’, ‘parentid’, ‘masterid’]

Returns:

columns are used as keys and point to a list of the corresponding values

Return type:

dict

get_jobs_status(recursive: bool = True, **kwargs) Series[source]#

Gives an overview of the status of all jobs.

Parameters:
  • recursive (bool) – search subprojects [True/False] - default=True

  • kwargs – passed directly to :method:`.job_table` and can be used to filter the jobs you want the status for

Returns:

prints an overview of the job status.

Return type:

pandas.Series

get_project_size() float[source]#

Get the size of the project.

Returns:

project size

Return type:

float

groups()[source]#

Filter project by groups

Returns:

a project which is filtered by groups

Return type:

Project

property history: list#

The history of the previously opened paths

Returns:

list of previously opened relative paths

Return type:

list

items() list[source]#

All items in the current project - this includes jobs, sub projects/ groups/ folders and any kind of files

Returns:

items in the project

Return type:

list

iter_groups(progress: bool = True) Generator[source]#

Iterate over the groups within the current project

Parameters:

progress (bool) – Display a progress bar during the iteration

Yields:

Project – sub projects/ groups/ folders

iter_jobs(path: str = None, recursive: bool = True, convert_to_object: bool = True, progress: bool = True, **kwargs: dict) Generator[source]#

Iterate over the jobs within the current project and its sub projects

Parameters:
  • path (str) – HDF5 path inside each job object. (Default is None, which just uses the top level of the job’s HDF5 path.)

  • recursive (bool) – search subprojects. (Default is True.)

  • convert_to_object (bool) – load the full GenericJob object, else just return the HDF5 / JobCore object. (Default is True, convert everything to the full python object.)

  • progress (bool) – add an interactive progress bar to the iteration. (Default is True, show the bar.)

  • **kwargs (dict) – Optional arguments for filtering with keys matching the project database column name (eg. status=”finished”). Asterisk can be used to denote a wildcard, for zero or more instances of any character

Returns:

Yield of GenericJob or JobCore

Return type:

yield

Note

The default behavior of converting to object can cause significant slowdown in larger projects. In this case, you may seriously wish to consider setting convert_to_object=False and access only the HDF5/JobCore representation of the jobs instead.
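
Example (illustrative sketch of the fast path described in the note above; the status filter is a hypothetical kwarg value):

>>> for job in pr.iter_jobs(recursive=True, convert_to_object=False, status='finished'):
...     print(job.name)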

iter_output(recursive: bool = True) Generator[source]#

Iterate over the output of jobs within the current project and its sub projects

Parameters:

recursive (bool) – search subprojects [True/False] - True by default

Returns:

Yield of GenericJob or JobCore

Return type:

yield

job_table(recursive: bool = True, columns: List[str] | None = None, all_columns: bool = True, sort_by: str = 'id', full_table: bool = False, element_lst: List[str] | None = None, job_name_contains: str = '', auto_refresh_job_status: bool = False, mode: Literal['regex', 'glob'] = 'glob', **kwargs: dict)[source]#

Access the job_table.

Parameters:
  • recursive (bool) – search subprojects [True/False]

  • columns (list) – by default only the columns [‘job’, ‘project’, ‘chemicalformula’] are selected, but the user can select a subset of [‘id’, ‘status’, ‘chemicalformula’, ‘job’, ‘subjob’, ‘project’, ‘projectpath’, ‘timestart’, ‘timestop’, ‘totalcputime’, ‘computer’, ‘hamilton’, ‘hamversion’, ‘parentid’, ‘masterid’]

  • all_columns (bool) – Select all columns - this overwrites the columns option.

  • sort_by (str) – Sort by a specific column

  • max_colwidth (int) – set the column width

  • full_table (bool) – Whether to show the entire pandas table

  • element_lst (list) – list of elements required in the chemical formula - by default None

  • job_name_contains (str) – (deprecated) A string which should be contained in every job_name

  • mode (str) – search mode when kwargs are given.

  • **kwargs (dict) – Optional arguments for filtering with keys matching the project database column name (eg. status=”finished”). Asterisk can be used to denote a wildcard, for zero or more instances of any character

Returns:

Return the result as a pandas.Dataframe object

Return type:

pandas.Dataframe
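
Example (illustrative sketch; the ‘status’ and ‘job’ filters are kwargs matching database columns as described above, with ‘*’ as the glob wildcard):

>>> pr = Project('example')
>>> pr.job_table(recursive=True, status='finished', job='demo*')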

keys() list[source]#

List of file-, folder- and object names

Returns:

list of the names of project directories and project nodes

Return type:

list

list_all()#

Returns dictionary of :method:`.list_groups()` and :method:`.list_nodes()`.

Returns:

results of :method:`.list_groups()` under the key “groups”; results of :method:`.list_nodes()` under the key “nodes”

Return type:

dict
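
Example (illustrative; the keys follow the description above, the values are hypothetical):

>>> pr = Project('example')
>>> pr.list_all()  # e.g. {'groups': ['sub_folder'], 'nodes': ['demo']}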

static list_clusters() list[source]#

List available computing clusters for remote submission

Returns:

List of computing clusters

Return type:

list

list_dirs(skip_hdf5: bool = True) list[source]#

List directories inside the project

Parameters:

skip_hdf5 (bool) – Skip directories which belong to a pyiron object/ pyiron job - default=True

Returns:

list of directory names

Return type:

list

list_files(extension: str | None = None) list[source]#

List files inside the project

Parameters:

extension (str) – filter by a specific extension

Returns:

list of file names

Return type:

list

list_groups()#

Return a list of names of all nested groups.

Returns:

group names

Return type:

list of str

list_nodes()#

Return a list of names of all nested nodes.

Returns:

node names

Return type:

list of str

static list_publications(bib_format: str = 'pandas') DataFrame[source]#

List the publications used in this project.

Parameters:

bib_format (str) – [‘pandas’, ‘dict’, ‘bibtex’, ‘apa’]

Returns:

list of publications in the selected format.

Return type:

pandas.DataFrame/ list

listdir() list#

Equivalent to os.listdir: list all files and directories in this path

Returns:

list of folders and files in the current project path

Return type:

list

load_from_jobpath(job_id: int | None = None, db_entry: dict | None = None, convert_to_object: bool = True) 'GenericJob' | 'JobCore'[source]#

Internal function to load an existing job either based on the job ID or based on the database entry dictionary.

Parameters:
  • job_id (int/ None) – Job ID - optional, but either the job_id or the db_entry is required.

  • db_entry (dict) – database entry dictionary - optional, but either the job_id or the db_entry is required.

  • convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster, but only provides limited functionality. Compare the GenericJob object to the JobCore object.

Returns:

Either the full GenericJob object or just a reduced JobCore object

Return type:

GenericJob, JobCore

static load_from_jobpath_string(job_path: str, convert_to_object: bool = True) JobPath[source]#

Internal function to load an existing job from an HDF5 path string.

Parameters:
  • job_path (str) – string to reload the job from an HDF5 file - ‘/root_path/project_path/filename.h5/h5_path’

  • convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster, but only provides limited functionality. Compare the GenericJob object to the JobCore object.

Returns:

Either the full GenericJob object or just a reduced JobCore object

Return type:

GenericJob, JobCore

move_to(destination: Project) None[source]#

Same as copy_to() but deletes the original project after copying

property name: str#

The name of the current project folder

Returns:

name of the current project folder

Return type:

str

nodes() Project[source]#

Filter project by nodes

Returns:

a project which is filtered by nodes

Return type:

Project

open(rel_path: str, history: bool = True) ProjectPath#

If rel_path exists, set the project path to this directory; if not, create it and go there

Parameters:
  • rel_path (str) – path relative to the current project path

  • history (bool) – By default pyiron stores a history of previously opened paths

Returns:

New ProjectPath object pointing to the relative path

Return type:

ProjectPath
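
Example (illustrative; 'sub_project' is a hypothetical folder name):

>>> pr = Project('example')
>>> pr_sub = pr.open('sub_project')
>>> pr_sub.close()  # return to the previous path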

pack(destination_path: str | None = None, compress: bool = True, copy_all_files: bool = False, **kwargs) None[source]#

Export job table to a csv file and copy (and optionally compress) the project directory.

Parameters:
  • destination_path (str) – gives the relative path, in which the project folder is copied and compressed

  • compress (bool) – if true, the function will compress the destination_path to a tar.gz file.

  • copy_all_files (bool)
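
Example (illustrative sketch; 'example_archive' is a hypothetical destination path, and unpack() restores the archive elsewhere):

>>> pr = Project('example')
>>> pr.pack(destination_path='example_archive', compress=True)
>>> Project('restored').unpack('example_archive')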

property parent_group: Project#

Get the parent group of the current project

Returns:

parent project

Return type:

Project

property path: str#

The absolute path of the current object.

Returns:

current project path

Return type:

str

property project_path: str#

the relative path of the current project / folder starting from the root path of the pyiron user directory

Returns:

relative path of the current project / folder

Return type:

str

static queue_check_job_is_waiting_or_running(item: int | 'GenericJob') bool[source]#

Check if a job is still listed in the queue system as either waiting or running.

Parameters:

item (int, GenericJob) – Provide either the job_ID or the full hamiltonian

Returns:

[True/False]

Return type:

bool

queue_delete_job(item: int | 'GenericJob') None[source]#

Delete a job from the queuing system

Parameters:

item (int, GenericJob) – Provide either the job_ID or the full hamiltonian

Returns:

Output from the queuing system as string - optimized for the Sun grid engine

Return type:

str

static queue_enable_reservation(item: int | 'GenericJob') str[source]#

Enable a reservation for a particular job within the queuing system

Parameters:

item (int, GenericJob) – Provide either the job_ID or the full hamiltonian

Returns:

Output from the queuing system as string - optimized for the Sun grid engine

Return type:

str

static queue_is_empty() bool[source]#

Check if the queue table is currently empty - no more jobs to wait for.

Returns:

True if the table is empty, else False - optimized for the Sun grid engine

Return type:

bool

queue_table(project_only: bool = True, recursive: bool = True, full_table: bool = False) DataFrame[source]#

Display the queuing system table as pandas.Dataframe

Parameters:
  • project_only (bool) – Query only for jobs within the current project - True by default

  • recursive (bool) – Include jobs from sub projects

  • full_table (bool) – Whether to show the entire pandas table

Returns:

Output from the queuing system - optimized for the Sun grid engine

Return type:

pandas.DataFrame

queue_table_global(full_table: bool = False) DataFrame[source]#

Display the queuing system table as pandas.Dataframe

Parameters:

full_table (bool) – Whether to show the entire pandas table

Returns:

Output from the queuing system - optimized for the Sun grid engine

Return type:

pandas.DataFrame

refresh_job_status(*jobs, by_status: List[str] = ['running', 'submitted']) None[source]#

Check if job is still running or crashed on the cluster node.

If jobs is not given, check for all jobs listed as running in the current project.

Parameters:
  • *jobs (str, int) – name of the job or job ID, any number of them

  • by_status (iterable of str) – if no jobs are given, select all jobs with the given status in this project

refresh_job_status_based_on_job_id(job_id: int, que_mode: bool = True) None[source]#

Internal function to check if a job is still listed ‘running’ in the job_table while it is no longer listed in the queuing system. In this case update the entry in the job_table to ‘aborted’.

Parameters:
  • job_id (int) – job ID

  • que_mode (bool) – [True/False] - default=True

refresh_job_status_based_on_queue_status(job_specifier: str | int, status: str = 'running') None[source]#

Check if the job is still listed as running, while it is no longer listed in the queue.

Parameters:
  • job_specifier (str, int) – name of the job or job ID

  • status (str) – Currently only the jobstatus of ‘running’ jobs can be refreshed - default=’running’

classmethod register_tools(name: str, tools) None[source]#

Add a new creator to the project class.

Example:

>>> from pyiron_base import Project, Toolkit
>>> class MyTools(Toolkit):
...     @property
...     def foo(self):
...         return 'foo'
>>>
>>> Project.register_tools('my_tools', MyTools)
>>> pr = Project('scratch')
>>> print(pr.my_tools.foo)
'foo'

The intent is then that pyiron submodules (e.g. pyiron_atomistics) define a new creator and in their __init__.py file only need to invoke Project.register_tools(‘pyiron_submodule’, SubmoduleCreator). Then whenever pyiron_submodule gets imported, all its functionality is available on the project.

Parameters:
  • name (str) – The name for the newly registered property.

  • tools (Toolkit) – The tools to register.
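The registration mechanism can be sketched with a few lines of plain Python (the `Toolkit` and `Project` classes below are local stand-ins for illustration, not the pyiron internals): the toolkit class is attached to the project class as a property that instantiates it on access.

```python
# Sketch of the register_tools pattern, assuming a minimal Toolkit base class.
class Toolkit:
    def __init__(self, project):
        self._project = project  # tools get a handle back to the project

class MyTools(Toolkit):
    @property
    def foo(self):
        return "foo"

class Project:  # stand-in for pyiron's Project, for illustration only
    @classmethod
    def register_tools(cls, name, tools):
        # Attach a property that builds the toolkit on first access.
        setattr(cls, name, property(lambda self: tools(self)))

Project.register_tools("my_tools", MyTools)
pr = Project()
print(pr.my_tools.foo)  # -> foo
```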

remove(enable: bool = False, enforce: bool = False) None[source]#

Delete the whole project, including all jobs in the project and its subprojects.

Parameters:
  • enforce (bool) – [True/False] delete jobs even though they are used in other projects - default=False

  • enable (bool) – [True/False] enable this command.

remove_file(file_name: str) None[source]#

Remove a file (same as unlink()) - wraps os.remove().

Unlike os.remove() itself, only a file name is accepted; the dir_fd behaviour described in the os documentation does not apply here.

Parameters:

file_name (str) – name of the file
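In plain Python terms this amounts to an os.remove() call on a file inside the project directory. A minimal sketch on a throwaway temp folder (the file name and contents are made up for the example):

```python
import os
import tempfile

# Create a scratch "project" directory with one file, then remove the file.
project_dir = tempfile.mkdtemp()
file_path = os.path.join(project_dir, "input_file")
with open(file_path, "w") as f:
    f.write("energy 1.0")

os.remove(file_path)  # same effect as os.unlink(file_path)
print(os.path.exists(file_path))  # -> False
```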

remove_job(job_specifier: str | int, _unprotect: bool = False) None[source]#

Remove a single job from the project based on its job_specifier - see also remove_jobs()

Parameters:
  • job_specifier (str, int) – name of the job or job ID

  • _unprotect (bool) – [True/False] delete the job without validating the dependencies to other jobs - default=False

remove_jobs(recursive: bool = False, progress: bool = True, silently: bool = False) None[source]#

Remove all jobs in the current project and in all subprojects if recursive=True is selected - see also remove_job().

For safety, the user is asked via input() to confirm the removal. To bypass this interactive interruption, use remove_jobs(silently=True).

Parameters:
  • recursive (bool) – [True/False] delete all jobs in all subprojects - default=False

  • progress (bool) – if True (default), add an interactive progress bar to the iteration

  • silently (bool) – if True the safety check is disabled - default=False

removedirs(project_name: str | None = None)#

Equivalent to os.removedirs: remove empty directories.

Parameters:

project_name (str) – relative path to the project folder to be deleted
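The delegated os.removedirs call removes the leaf directory and then every empty parent above it, stopping at the first non-empty ancestor. A small sketch on a throwaway directory (directory names are made up for the example):

```python
import os
import tempfile

base = tempfile.mkdtemp()
nested = os.path.join(base, "sub_project", "empty_group")
os.makedirs(nested)

# Removes empty_group, then sub_project, then base (all empty); it stops
# silently at the first parent that is not empty (the system temp dir).
os.removedirs(nested)
print(os.path.exists(base))  # -> False
```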

property root_path: str#

the pyiron user directory, defined in the .pyiron configuration

Returns:

pyiron user directory of the current project

Return type:

str

set_job_status(job_specifier: str | int, status: str, project: Project = None) None[source]#

Set the status of a particular job

Parameters:
  • job_specifier (str, int) – name of the job or job ID

  • status (str) – job status can be one of the following [‘initialized’, ‘appended’, ‘created’, ‘submitted’, ‘running’, ‘aborted’, ‘collect’, ‘suspended’, ‘refresh’, ‘busy’, ‘finished’]

  • project (str) – project path

static set_logging_level(level: str, channel: int | None = None) None[source]#

Set level for logger

Parameters:
  • level (str) – one of ‘DEBUG’, ‘INFO’, ‘WARN’

  • channel (int) – 0: file_log, 1: stream, None: both
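The level names map onto the standard library logging levels; a short sketch of what setting a level means in stdlib terms (the logger name `pyiron_demo` is made up for the example, and pyiron's own logger and handler wiring is not reproduced here):

```python
import logging

logger = logging.getLogger("pyiron_demo")
logger.setLevel("DEBUG")
print(logging.getLevelName(logger.level))  # -> DEBUG

logger.setLevel("WARN")  # 'WARN' is accepted as an alias for WARNING
print(logging.getLevelName(logger.level))  # -> WARNING
```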

property size: float#

Get the size of the project

static switch_cluster(cluster_name: str) None[source]#

Switch to a different computing cluster

Parameters:

cluster_name (str) – name of the computing cluster

switch_to_central_database() None[source]#

Switch from local mode to central mode - if local_mode is enabled, pyiron uses a local database.

switch_to_local_database(file_name: str = 'pyiron.db', cwd: str | None = None) None[source]#

Switch from central mode to local mode - if local_mode is enabled, pyiron uses a local database.

Parameters:
  • file_name (str) – file name or file path for the local database

  • cwd (str) – directory where the local database is located
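Conceptually, a local database is just a single-file SQL database living next to the project. The sketch below illustrates the idea with sqlite3; the table name and columns are invented for the example and do not reflect pyiron's actual schema, which pyiron manages itself:

```python
import os
import sqlite3
import tempfile

cwd = tempfile.mkdtemp()
db_path = os.path.join(cwd, "pyiron.db")  # default local database file name

con = sqlite3.connect(db_path)
con.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, job TEXT, status TEXT)")
con.execute("INSERT INTO jobs (job, status) VALUES ('relax_1', 'finished')")
con.commit()
rows = con.execute("SELECT job, status FROM jobs").fetchall()
con.close()
print(rows)  # -> [('relax_1', 'finished')]
```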

switch_to_user_mode() None[source]#

Switch from viewer mode to user mode - if viewer_mode is enabled, pyiron has read-only access to the database.

switch_to_viewer_mode() None[source]#

Switch from user mode to viewer mode - if viewer_mode is enabled, pyiron has read-only access to the database.

Move the underlying project folder to target_dir and create a symlink to it.

The project itself does not change and is not updated in the database. Instead, the project folder is moved into a subdirectory of target_dir with the same name as the project, and a symlink is placed at the previous project path pointing to the newly created one.

If self.path is already a symlink pointing inside target_dir, this method will silently return.

Parameters:

target_dir (str) – new parent folder for the project

Raises:
  • OSError – when calling this method on non-unix systems

  • RuntimeError – the project path is already a symlink to somewhere else

  • RuntimeError – the project path has submitted or running jobs inside it; wait until they are finished

  • RuntimeError – target already contains a subdirectory with the project name and it is not empty

If the project folder is symlinked somewhere else, remove the link and restore the original folder.

If it is not symlinked, silently return.

unpack(origin_path: str, **kwargs) None[source]#

Import the job table from the given csv file and copy the content of the project directory from the given path.

Parameters:

origin_path (str) – the relative path of a directory from which the project directory is copied.

static unpack_csv(tar_path: str, csv_file: str = 'export.csv') DataFrame[source]#

Import job table from a csv file and copy the content of a project directory from a given path.

Parameters:
  • tar_path (str) – the relative path of a directory from which the project directory is copied.

  • csv_file (str) – the name of the csv file.

Returns:

job table

Return type:

pandas.DataFrame
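The exported job table is ordinary csv data; the real unpack_csv reads it out of the archive into a pandas DataFrame. A stdlib-only sketch of parsing such a table (the column names and values here are invented for the example):

```python
import csv
import io

# Hypothetical exported job table, inlined instead of read from a tar archive.
exported = io.StringIO("id,job,status\n1,relax_1,finished\n2,relax_2,aborted\n")
job_table = list(csv.DictReader(exported))
print(job_table[0]["job"], job_table[0]["status"])  # -> relax_1 finished
```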

update_from_remote(recursive: bool = True, ignore_exceptions: bool = False, try_collecting: bool = False)[source]#

Update jobs from the remote server

Parameters:
  • recursive (bool) – search subprojects [True/False] - default=True

  • ignore_exceptions (bool) – ignore eventual exceptions when retrieving jobs - default=False

Returns:

Returns None if ignore_exceptions is False or no error occurred; returns a list of job ids if errors occurred but were ignored.

values() list[source]#

All items in the current project - this includes jobs, subprojects (groups / folders) and any kind of files.

Returns:

items in the project

Return type:

list

static wait_for_job(job: GenericJob, interval_in_s: int = 5, max_iterations: int = 100) None[source]#

Sleep until the job is finished, but at most interval_in_s * max_iterations seconds.

Parameters:
  • job (GenericJob) – Job to wait for

  • interval_in_s (int) – interval when the job status is queried from the database - default 5 sec.

  • max_iterations (int) – maximum number of iterations - default 100

Raises:

ValueError – max_iterations reached, job still running
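The waiting logic is a simple bounded polling loop. A sketch (not the pyiron implementation - `wait_for` and the status callback are invented for the example, and the sleep is a no-op so the example runs instantly):

```python
# Poll a status callback until 'finished', at most max_iterations times.
def wait_for(get_status, interval_in_s=5, max_iterations=100, sleep=lambda s: None):
    for _ in range(max_iterations):
        if get_status() == "finished":
            return "finished"
        sleep(interval_in_s)  # would be time.sleep(interval_in_s) in practice
    raise ValueError("max_iterations reached, job still running")

statuses = iter(["submitted", "running", "running", "finished"])
print(wait_for(lambda: next(statuses)))  # -> finished
```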

wait_for_jobs(interval_in_s: int = 5, max_iterations: int = 100, recursive: bool = True, ignore_exceptions: bool = False) None[source]#

Wait for the calculation in the project to be finished

Parameters:
  • interval_in_s (int) – interval when the job status is queried from the database - default 5 sec.

  • max_iterations (int) – maximum number of iterations - default 100

  • recursive (bool) – search subprojects [True/False] - default=True

  • ignore_exceptions (bool) – ignore eventual exceptions when retrieving jobs - default=False

Raises:

ValueError – max_iterations reached, but jobs still running

walk() Iterator[tuple[str, list[str], list[str]]]#

Equivalent to os.walk: recursively iterate over all files and directories below this path.

Returns:

Directory tree generator.

Return type:

Generator
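Like os.walk, the generator yields one (path, dirnames, filenames) tuple per folder in the project tree. A sketch on a throwaway directory (the folder and file names are made up for the example):

```python
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub_project"))
open(os.path.join(root, "notes.txt"), "w").close()

for path, dirs, files in sorted(os.walk(root)):
    print(os.path.relpath(path, root), dirs, files)
# -> . ['sub_project'] ['notes.txt']
# -> sub_project [] []
```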

wrap_executable(executable_str: str, job_name: str | None = None, write_input_funct: callable | None = None, collect_output_funct: callable | None = None, input_dict: dict | None = None, conda_environment_path: str | None = None, conda_environment_name: str | None = None, input_file_lst: list | None = None, automatically_rename: bool = False, execute_job: bool = False, delayed: bool = False, output_file_lst: list = [], output_key_lst: list = []) ExecutableContainerJob[source]#

Wrap any executable into a pyiron job object using the ExecutableContainerJob.

Parameters:
  • executable_str (str) – call to an external executable

  • job_name (str) – name of the new job object

  • write_input_funct (callable) – The write input function write_input(input_dict, working_directory)

  • collect_output_funct (callable) – The collect output function collect_output(working_directory)

  • input_dict (dict) – Default input for the newly created job class

  • conda_environment_path (str) – path of the conda environment

  • conda_environment_name (str) – name of the conda environment

  • input_file_lst (list) – list of files to be copied to the working directory before executing the job

  • execute_job (bool) – automatically call run() on the job object - default False

  • automatically_rename (bool) – Whether to automatically rename the job at save-time to append a string based on the input values. (Default is False.)

  • delayed (bool) – delayed execution

  • output_file_lst (list)

  • output_key_lst (list)

Example:

>>> import os
>>> def write_input(input_dict, working_directory="."):
...     with open(os.path.join(working_directory, "input_file"), "w") as f:
...         f.write(str(input_dict["energy"]))
>>>
>>> def collect_output(working_directory="."):
...     with open(os.path.join(working_directory, "output_file"), "r") as f:
...         return {"energy": float(f.readline())}
>>>
>>> from pyiron_base import Project
>>> pr = Project("test")
>>> job = pr.wrap_executable(
...     job_name="Cat_Job_energy_1_0",
...     write_input_funct=write_input,
...     collect_output_funct=collect_output,
...     input_dict={"energy": 1.0},
...     executable_str="cat input_file > output_file",
...     execute_job=True,
... )
>>> print(job.output)

Returns:

pyiron job object

Return type:

pyiron_base.jobs.flex.ExecutableContainerJob

wrap_python_function(python_function: callable, *args, job_name: str | None = None, automatically_rename: bool = True, execute_job: bool = False, delayed: bool = False, output_file_lst: list = [], output_key_lst: list = [], **kwargs) PythonFunctionContainerJob[source]#

Create a pyiron job object from any python function

Parameters:
  • python_function (callable) – python function to create a job object from

  • *args – Arguments for the user-defined python function

  • job_name (str | None) – The name for the created job. (Default is None, use the name of the function.)

  • automatically_rename (bool) – Whether to automatically rename the job at save-time to append a string based on the input values. (Default is True.)

  • delayed (bool) – delayed execution

  • execute_job (boolean) – automatically call run() on the job object - default false

  • **kwargs – Keyword-arguments for the user-defined python function

Returns:

pyiron job object

Return type:

pyiron_base.jobs.flex.pythonfunctioncontainer.PythonFunctionContainerJob

Example:

>>> def test_function(a, b=8):
...     return a + b
>>>
>>> from pyiron_base import Project
>>> pr = Project("test")
>>> job = pr.wrap_python_function(test_function)
>>> job.input["a"] = 4
>>> job.input["b"] = 5
>>> job.run()
>>> job.output
>>>
>>> test_function_wrapped = pr.wrap_python_function(test_function)
>>> test_function_wrapped(4, b=6)