Tag: slurm
Hello everyone. I'm trying to set up a new HPC cluster. I created an account, added users, and I'm using a partition, but whenever I run a job it fails with the error that the requested node configuration is not available. I checked my slurm.conf, but it looks fine to me. I need some help. The error: Batch job …
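That error usually means the job requests more resources (CPUs, memory, GRES) than any node in the chosen partition is configured with, or that the NodeName line in slurm.conf disagrees with the actual hardware. As a sketch with illustrative values (the node name, CPU count, and memory below are assumptions, not taken from the question), the job's requests must fit inside the node definition:

    # slurm.conf (illustrative values)
    NodeName=node01 CPUs=16 RealMemory=64000 State=UNKNOWN
    PartitionName=debug Nodes=node01 Default=YES MaxTime=INFINITE State=UP

    #!/bin/bash
    # job.sh: every request must fit the node definition above
    #SBATCH --partition=debug
    #SBATCH --cpus-per-task=8    # must be <= CPUs=16
    #SBATCH --mem=32G            # must be <= RealMemory=64000 (megabytes)
    srun hostname

Comparing scontrol show node <nodename> against the #SBATCH requests is a quick way to spot the mismatch.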
Using conda activate or specifying the Python path in a bash script?
I'm running some Python scripts on Linux clusters that use SGE or Slurm. I already have my conda environment set up properly on the login node, and I have been writing something like … to activate the environment properly. (I did a lot of work to figure this out.) However, I just found some example code that seems to do the …
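Both approaches can work in a batch script. A minimal sketch, assuming Miniconda is installed in $HOME/miniconda3 and the environment is named myenv (both assumptions):

    #!/bin/bash
    #SBATCH --job-name=conda-test

    # Option A: activate the environment. Batch shells are non-interactive,
    # so source conda's shell hook before calling conda activate.
    source "$HOME/miniconda3/etc/profile.d/conda.sh"
    conda activate myenv
    python my_script.py

    # Option B: skip activation and call the environment's interpreter directly:
    # "$HOME/miniconda3/envs/myenv/bin/python" my_script.py

Option B is simpler and sidesteps shell-initialization problems, but activation also puts the environment's other binaries on PATH, which some packages expect.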
Slurm job arrays don’t work when used with argparse
I am trying to run multiple things at once (i.e. in parallel) with different values of the variable --start_num. I have designed the following bash script. Then I ran sbatch --exclude master array_bash_2, but it doesn’t work. I have searched many sites and tried multiple things, but I still get the error FINAL_ARGPARSE_RUN.py: error: argument --start_num: expected one argument
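That argparse error usually means --start_num received nothing, most often because $SLURM_ARRAY_TASK_ID expanded to an empty string: it is only defined at run time, inside the script body, never inside #SBATCH lines. Another common cause is the double dashes being pasted as typographic dashes. A minimal sketch, reusing the script name from the question but with an illustrative array range:

    #!/bin/bash
    #SBATCH --job-name=array_bash_2
    #SBATCH --array=1-10    # illustrative range

    # SLURM_ARRAY_TASK_ID is set by Slurm for each array element at run time
    python FINAL_ARGPARSE_RUN.py --start_num "$SLURM_ARRAY_TASK_ID"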
How to make the current directory the home directory in Linux
Please help me with your suggestions on the following: (1) I am using an account on a SLURM cluster where the storage space of my home directory (i.e. /home/user) is at most 32 GB. (2) I am running a Singularity container on the SLURM cluster that works only if the input files are located in the …
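If the container insists on finding its inputs under the home directory, a common workaround is to point the container's home, or a bind mount, at a larger filesystem. A sketch, where /scratch/$USER, container.sif, and my_tool are placeholders for whatever large volume, image, and program are actually in use:

    # Present a directory on the large filesystem as the container's home
    singularity exec --home /scratch/$USER:/home/user container.sif my_tool input.dat

    # Alternative: bind the data directory to a fixed path inside the container
    singularity exec --bind /scratch/$USER/data:/data container.sif my_tool /data/input.dat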
How to feed a large number of samples in parallel on Linux?
I'm trying to run the following command on a large number of samples. I have … but I have thousands of these samples to process, and each sample takes about a day or two to finish on my local computer. I'm using a shared Linux cluster with a job scheduling system called Slurm, if that helps. Answer Write a submission script such as the …
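The standard pattern is a Slurm job array that maps each array index to one sample, so the scheduler runs them in parallel across the cluster. A sketch, assuming a hypothetical samples.txt with one input path per line and a placeholder my_command:

    #!/bin/bash
    #SBATCH --array=0-999%50     # 1000 samples, at most 50 running at once (illustrative)
    #SBATCH --time=2-00:00:00    # generous limit: each sample takes a day or two

    # Select the line of samples.txt matching this array task's index
    SAMPLE=$(sed -n "$((SLURM_ARRAY_TASK_ID + 1))p" samples.txt)
    my_command "$SAMPLE"

The %50 throttle keeps the array from monopolizing a shared cluster.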
How to find where a job was submitted from in SLURM?
I submitted several jobs via SLURM to our school’s HPC cluster. Because the shell scripts all have the same name, the job names appear exactly the same. It looks like … How can I tell which directory a job was submitted from, so that I can differentiate the jobs? Answer You can use the scontrol command to see the job …
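Slurm stores the submission directory with every job. Two ways to read it (12345 is a placeholder job ID):

    # WorkDir in the scontrol output is the directory the job was submitted from
    scontrol show job 12345 | grep WorkDir

    # Or list it for all of your jobs; %Z is squeue's working-directory field
    squeue -u "$USER" -o "%.10i %.20j %Z"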
Use Bash variable within SLURM sbatch script
I'm trying to obtain a value from another file and use it within a SLURM submission script. However, I get an error that the value is non-numerical, in other words, it is not being dereferenced. Here is the script: … When I run this as a normal Bash shell script, it prints out the number of procs correctly and makes the …
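The usual cause is that #SBATCH lines are comments to Bash and are parsed by sbatch before the script runs, so a shell variable inside them is never expanded. Passing the value on the sbatch command line works instead; a sketch, where nprocs.txt stands in for the "another file" in the question:

    # Read the value in the submitting shell, not inside the job script
    NPROCS=$(cat nprocs.txt)

    # Command-line options take precedence over #SBATCH directives
    sbatch --ntasks="$NPROCS" job.sh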