Python Module paralleltools
paralleltools.py: module to data-parallelize simulations and write pbs scripts. See the manual for examples of parallel usage.
last revision 070313-MLW
- paralleltools.Writepbsscript(fileStub, nodes, ExecutableName='Execute_MPSParallelMain', InputFileNames=[], MyPyFile=None, time='12:00:00', computer='mio', myusername=None)[source]
Write a pbs script for the given parallel job. Returns name of the pbs script written.
Arguments
- fileStub : str
the job id to identify the simulation.
- nodes : str (possibly int; a list of nodes may also be accepted)
specifies the number of nodes on the cluster.
- ExecutableName : str
name of the OpenMPS parallel executable. Default to Execute_MPSParallelMain.
- InputFileNames : list
list of the input files to execute. Default to empty list.
- MyPyFile : str
corresponding python script copied to keep track of the input for the job. Default to None.
- time : str
walltime for the simulation. Default to 12:00:00.
- computer : str
chooses between the HPC clusters mio and ra. Default to mio.
- myusername : str
used to email information about the job status to your @mines.edu email address. Default to None.
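A minimal usage sketch for Writepbsscript, assuming the module has been imported; the job id, input file names, and username are placeholders:

```python
import paralleltools

# Hypothetical 4-node job on mio with two input files; all names
# below are placeholders, not values taken from the manual.
pbsname = paralleltools.Writepbsscript(
    'job01', '4',
    InputFileNames=['job01_sim01', 'job01_sim02'],
    time='24:00:00',
    computer='mio',
    myusername='jdoe')
print(pbsname)  # name of the pbs script that was written
```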
- paralleltools.WriteMPSParallelFiles(Parameters, Operators, HamiltonianMPO, comp_info, PostProcess=False)[source]
Write the MPI files for fortran. Returns the name of the pbs/slurm script for PostProcess=False, or otherwise the dictionaries with the simulations (consistent with WriteMPSParallelTemplate).
Arguments
- Parameters : list of dictionaries
contains the settings for each simulation as a dictionary.
- Operators : dictionary
containing all operators necessary for the simulation.
- HamiltonianMPO : instance of MPO.MPO
defines the Hamiltonian of the system.
- comp_info : dictionary
information for setting up a simulation on the cluster. The key queueing decides between the pbs (pbs) and slurm/sbatch (slurm) formats; if the key is not given, the default pbs is set automatically. For the pbs script the keys are computer (default mio), myusername (default None, used for email), time (default 12:00:00), nodes (obligatory), and ThisFileName (default None, which then uses a random string). For slurm/sbatch, please see the description of the dictionary cinfo in paralleltools.write_sbatch_script().
- PostProcess : Bool, optional
flag whether the simulation should be run (False) or only the data analyzed (True). Default to False.
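A minimal sketch of a call, assuming Parameters, Operators, and HamiltonianMPO have been built as for a serial OpenMPS run; the comp_info values are placeholders:

```python
import paralleltools

# Hypothetical cluster settings for a pbs run; 'nodes' is obligatory.
comp_info = {'queueing': 'pbs',
             'computer': 'mio',
             'myusername': 'jdoe',   # placeholder username
             'time': '12:00:00',
             'nodes': '4'}

# Parameters (list of dictionaries), Operators, and HamiltonianMPO are
# assumed to exist, set up exactly as for a serial simulation.
scriptname = paralleltools.WriteMPSParallelFiles(
    Parameters, Operators, HamiltonianMPO, comp_info, PostProcess=False)
```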
- paralleltools.WriteMPSParallelTemplate(ptemplate, Operators, H, comp_info, staticParamsforIteration=[], staticIterators=[], itermethod='CartesianProduct', PostProcess=False)[source]
Write the parallel files that the master can send to the worker units. Returns a list or dictionary with the simulation settings for PostProcess=True, and otherwise the name of the pbs/slurm script.
Arguments
- ptemplate : dictionary
dictionary containing the simulation settings.
- Operators : dictionary
containing all operators necessary for the simulation.
- H : instance of MPO.MPO
defines the Hamiltonian of the system.
- comp_info : dictionary
information for setting up a simulation on the cluster. The key queueing decides between the pbs (pbs) and slurm/sbatch (slurm) formats; if the key is not given, the default pbs is set automatically. For the pbs script the keys are computer (default mio), myusername (default None, used for email), time (default 12:00:00), nodes (obligatory), and ThisFileName (default None, which then uses a random string). For slurm/sbatch, please see the description of the dictionary cinfo in paralleltools.write_sbatch_script().
- staticParamsforIteration : list, optional
parameters to be iterated over, given through their identifier strings. Default to empty list.
- staticIterators : list, optional
corresponding values for the parameters to be iterated over; the order should correspond to the order in staticParamsforIteration. Default to empty list. An example follows after this list.
- itermethod : string
defines how to derive the simulation parameters from the list of iterators. Options are 1) CartesianProduct, e.g. building from the parameter values [a1, a2] and [b1, b2] the four simulations (a1, b1), (a1, b2), (a2, b1), and (a2, b2); 2) Linear, e.g. building from the parameter values [a1, a2] and [b1, b2] the two simulations (a1, b1) and (a2, b2). Default to CartesianProduct.
- PostProcess : Bool, optional
flag whether the simulation should be run (False) or only the data analyzed (True). Default to False.
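A sketch of a parameter sweep, assuming ptemplate, Operators, and H exist and that 'U' and 'mu' are valid identifier strings in ptemplate (both hypothetical here):

```python
import numpy as np
import paralleltools

Us = np.linspace(2.0, 10.0, 5)    # hypothetical sweep values
mus = np.linspace(0.0, 1.0, 5)

comp_info = {'queueing': 'slurm', 'computer': 'mio',
             'time': '12:00:00', 'nodes': 2}

# CartesianProduct yields 5 x 5 = 25 simulations; itermethod='Linear'
# would instead pair the entries element-wise for 5 simulations.
scriptname = paralleltools.WriteMPSParallelTemplate(
    ptemplate, Operators, H, comp_info,
    staticParamsforIteration=['U', 'mu'],
    staticIterators=[Us, mus],
    itermethod='CartesianProduct')
```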
- paralleltools.WriteParallelfiles(commands, job_ID, writedir='', outdir='')[source]
Write the parallel files from the parameters.
Arguments
- commands : list
containing the main setting files passed to fortran for each simulation, e.g. as returned by tools.WriteFiles().
- job_ID : str
job identifier of the parameter dictionary.
- writedir : str, optional
can be the temporary directory of the MPS simulation specified in the parameter dictionary. Default to empty string.
- outdir : str, optional
can be the output directory specified in the parameter dictionary. Default to empty string.
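A minimal sketch, assuming commands was returned by tools.WriteFiles(); the job id and directories are placeholders:

```python
import paralleltools

# 'commands' is assumed to come from tools.WriteFiles(); the job id
# and the two directories below are placeholders.
paralleltools.WriteParallelfiles(commands, 'job01',
                                 writedir='TMP/', outdir='OUTPUTS/')
```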
- paralleltools.write_sbatch_script(fileStub, cinfo, ExecutableName='Execute_MPSParallelMain', InputFileNames=[])[source]
Write a slurm/sbatch script for submitting a job on a cluster. Returns the name of the slurm script written.
Arguments
- fileStub : str
the job id to identify the simulation. If you run multiple simulations, this must already be unique at the current stage.
- cinfo : dictionary
containing all the settings for the cluster. The following keys can be defined:
- myusername : used to email information about the job status to your @mines.edu email address. Default to None.
- computer : used to deduce the number of cores; more about the number of tasks can be found in paralleltools.get_ntasks(). Default to mio.
- time : walltime for the simulation. Default to 12:00:00.
- nodes : can be an integer with the number of nodes or a list of three-character strings specifying the nodes. Default to 1.
- ThisFileName : corresponding python script copied to keep track of the input for the job. Default to None.
- partition : sets the partition flag of SBATCH. Default to None.
- mem : sets the memory for the job. Default to None.
- exclusive : if True, nodes are blocked for any other jobs even if remaining cores would be available. Default to True.
- mpi : chooses the mpi executer. Default to srun (alternative: mpiexec).
- ExecutableName : str
name of the OpenMPS parallel executable. Default to Execute_MPSParallelMain.
- InputFileNames : list
list of the input files to execute. Default to empty list.
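A sketch of a cinfo dictionary for write_sbatch_script covering the documented keys; all values are placeholders:

```python
import paralleltools

# Hypothetical settings; every value below is a placeholder.
cinfo = {'myusername': 'jdoe',
         'computer': 'mio',
         'time': '02:00:00',
         'nodes': 2,            # or e.g. ['001', '002'] as a node list
         'ThisFileName': None,
         'partition': None,
         'mem': None,
         'exclusive': True,
         'mpi': 'srun'}

slurmname = paralleltools.write_sbatch_script(
    'job01', cinfo, InputFileNames=['job01_sim01'])
```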
- paralleltools.get_ntasks(cinfo)[source]
Return the number of tasks as a string, based either on a nodelist or on the number of nodes. If no information is available, eight cores per node are assumed.
Arguments
- cinfo : dictionary
containing the settings of the job for a cluster; the dictionary keys computer and nodes are used here.
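A minimal sketch of a get_ntasks call, with placeholder values:

```python
import paralleltools

# Deduce the task count from the computer and node specification;
# both values below are placeholders.
cinfo = {'computer': 'mio', 'nodes': 2}
ntasks = paralleltools.get_ntasks(cinfo)  # returned as a string
```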
- paralleltools.get_nodelist(cinfo)[source]
Return the nodelist as a string.
Arguments
- cinfo : dictionary
containing the settings of the job for a cluster; the dictionary key nodes is used here and should be a list when calling this function.
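A minimal sketch; the three-character node strings are placeholders:

```python
import paralleltools

# 'nodes' must be a list when calling this function; the entries
# below are placeholder node names.
cinfo = {'nodes': ['001', '002', '003']}
nodelist = paralleltools.get_nodelist(cinfo)
```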
- paralleltools.WriteFiles_mpi(Parameters, Operators, HamiltonianMPO, PostProcess=False)[source]
This function has the same purpose as tools.WriteFiles() for MPI runs with mpi4py, and the argument description should be read there. The files are only written once and broadcast to every core.
- paralleltools.encode_list(mylist)[source]
Put a list into a dictionary where the key is the position (as string) of the entry.
Arguments
- mylist : list
is converted to the described dictionary format.
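A minimal sketch of the described format:

```python
import paralleltools

# Encode a list into a dictionary keyed by position (as strings),
# as documented above.
mydict = paralleltools.encode_list(['a', 'b', 'c'])
# expected form: {'0': 'a', '1': 'b', '2': 'c'}
```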
- paralleltools.decode_list(mydic)[source]
Decode a dictionary from paralleltools.encode_list() back into a list.
Arguments
- mydic : dictionary
dictionary in the corresponding form; the keys are 0 to length - 1 (as integers).
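A minimal round-trip sketch with encode_list():

```python
import paralleltools

# Encode and decode; assuming the two functions are inverse
# operations, the round trip returns the original list.
mylist = paralleltools.decode_list(paralleltools.encode_list([1, 2, 3]))
# expected: mylist == [1, 2, 3]
```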
- paralleltools.runMPS_mpi(infileList, RunDir=None, customAppl=None)[source]
This function has the same purpose as tools.runMPS() for MPI runs with mpi4py, and the arguments are restricted. You have to call the python script with the command line argument --mpi4py to load the mpi4py modules. An invocation sketch follows after this list.
Arguments
- infileList : list
list of the main files passed to the fortran executable as command line arguments.
- RunDir : str, optional
specifies the path of the fortran executable (local or global installation). This is available for the default simulation and custom applications. (@Developers: debugging with valgrind always works with a local installation.) The options are:
- None : if the executable is available within the local folder, use the local installation, otherwise assume a global installation. This is the default.
- '' (empty string) : global installation.
- './' : local installation.
- PATH + '/' : path to the executable ending on a slash /. Additional steps may be necessary when using this.
- customAppl : str, optional
defines a custom executable. Global and local installations work as before. Custom applications cannot run with valgrind. Default to None (running the default executable).
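A sketch of how such a script might be launched and called; the launcher command and file names are assumptions, not taken from the manual:

```python
import paralleltools

# This script would be launched with an MPI starter, e.g. (assumed):
#   mpirun -n 16 python myscript.py --mpi4py
# The file names below are placeholders for the main files, e.g. as
# written by paralleltools.WriteFiles_mpi().
MainFiles = ['job01Main']
paralleltools.runMPS_mpi(MainFiles, RunDir=None, customAppl=None)
```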
- paralleltools.master_mpi(comm, ncores, rank, infileList, RunDir, customAppl)[source]
This is the master for mpi runs, taking care of the work distribution and calculating jobs via threading.
Arguments
- comm : instance of mpi4py.MPI.COMM_WORLD
communicator for the MPI program.
- ncores : int
number of processes in the MPI run.
- rank : int
rank of this process (should be 0).
- infileList : list
list of the main files passed to the fortran executable as command line arguments.
- RunDir : str, optional
specifies the path of the fortran executable (local or global installation). This is available for the default simulation and custom applications. (@Developers: debugging with valgrind always works with a local installation.) The options are:
- None : if the executable is available within the local folder, use the local installation, otherwise assume a global installation. This is the default.
- '' (empty string) : global installation.
- './' : local installation.
- PATH + '/' : path to the executable ending on a slash /. Additional steps may be necessary when using this.
- customAppl : str, optional
defines a custom executable. Global and local installations work as before. Custom applications cannot run with valgrind. Default to None (running the default executable).
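A sketch of the usual rank-based dispatch between master_mpi and worker_mpi; this is an assumption about the call pattern, not code taken from the module:

```python
from mpi4py import MPI
import paralleltools

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
ncores = comm.Get_size()

# Placeholder list of main files for the fortran executable.
MainFiles = ['job01Main']

if rank == 0:
    # Rank 0 distributes the jobs ...
    paralleltools.master_mpi(comm, ncores, rank, MainFiles, None, None)
else:
    # ... and all other ranks calculate them.
    paralleltools.worker_mpi(comm, rank, MainFiles, None, None)
```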
- paralleltools.worker_mpi(comm, rank, infileList, RunDir, customAppl)[source]
This is a worker for mpi runs, taking care of calculating the jobs received from the master.
Arguments
- comm : instance of mpi4py.MPI.COMM_WORLD
communicator for the MPI program.
- rank : int
rank of this process (should be different from 0).
- infileList : list
list of the main files passed to the fortran executable as command line arguments.
- RunDir : str, optional
specifies the path of the fortran executable (local or global installation). This is available for the default simulation and custom applications. (@Developers: debugging with valgrind always works with a local installation.) The options are:
- None : if the executable is available within the local folder, use the local installation, otherwise assume a global installation. This is the default.
- '' (empty string) : global installation.
- './' : local installation.
- PATH + '/' : path to the executable ending on a slash /. Additional steps may be necessary when using this.
- customAppl : str, optional
defines a custom executable. Global and local installations work as before. Custom applications cannot run with valgrind. Default to None (running the default executable).
- paralleltools.runMPS_wrapper(infileList, RunDir=None, Debug=False, customAppl=None)[source]
Passes the call to tools.runMPS() but handles errors from the fortran library. Up to now we ignore those errors and continue with the next job. For the arguments see tools.runMPS().
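A minimal sketch of a call; the file name is a placeholder:

```python
import paralleltools

# Errors raised by the fortran library are caught inside the wrapper,
# which then continues with the next job. 'job01Main' is a placeholder.
paralleltools.runMPS_wrapper(['job01Main'], RunDir=None,
                             Debug=False, customAppl=None)
```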