Computing: How much memory does my job need?

Estimating the memory usage of a batch job can be tricky, so we do not enforce your memory reservation very strictly: if your job needs more memory and memory is available on the worker node, it will be able to use it.

This can, however, lead to the effect that out of a cluster of similar jobs most succeed while a few need a second or third try to finish successfully.

In such a case it is worthwhile to check the exit state of the unsuccessful jobs to get an idea of what went wrong and whether peak memory consumption is part of the problem. See: Some jobs out of a cluster don't complete successfully
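For example, you can list the exit code and peak memory usage of every job in a cluster with condor_history (<clusterid> is a placeholder for your cluster ID; jobs that were held or removed may report an undefined ExitCode):

[root@bird-htc-sched13 ~]# condor_history <clusterid> -af ProcId ExitCode MemoryUsage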

If you are not sure how much peak memory your jobs consume, the easiest approach is to submit a single test job and, after it has finished, use the command below to check its peak memory usage:

[root@bird-htc-sched13 ~]# condor_history <jobid> -format "%d\n" MemoryUsage

1465
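The reported value is the job's peak memory usage in MB. If you have already run several similar test jobs, one way to find the highest peak across the whole cluster is, for example:

[root@bird-htc-sched13 ~]# condor_history <clusterid> -format "%d\n" MemoryUsage | sort -n | tail -1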


Then adjust the memory request in your submit file accordingly; if your jobs need more than the default of 1500 MiB, increase it:

RequestMemory = <MiB> # amount of memory in MiB, if different from the 1500 MiB default
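As a minimal sketch (the executable and file names are hypothetical), a submit file requesting 4 GiB could look like this:

Executable = myjob.sh
Output     = myjob.out
Error      = myjob.err
Log        = myjob.log
# request 4096 MiB (4 GiB) instead of the 1500 MiB default
RequestMemory = 4096
Queue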

Please note that higher memory requests can cause your jobs to wait longer for free resources, as HTCondor needs to free up more resources for them than for a default-sized job.
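If a job with a larger memory request stays idle for a long time, you can, for example, ask the scheduler to analyze how many slots currently match its requirements:

[root@bird-htc-sched13 ~]# condor_q -better-analyze <jobid>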