python.stagedef.StageDef Class Reference

Public Member Functions

def __init__
 
def __str__
 
def pubsify_input
 
def pubsify_output
 
def checksubmit
 
def checkinput
 
def check_output_dirs
 
def checkdirs
 
def makedirs
 

Public Attributes

 name
 
 batchname
 
 fclname
 
 outdir
 
 logdir
 
 workdir
 
 bookdir
 
 dirsize
 
 dirlevels
 
 dynamic
 
 inputfile
 
 inputlist
 
 inputmode
 
 basedef
 
 inputdef
 
 inputstream
 
 previousstage
 
 mixinputdef
 
 pubs_input_ok
 
 pubs_input
 
 input_run
 
 input_subruns
 
 input_version
 
 pubs_output
 
 output_run
 
 output_subruns
 
 output_version
 
 ana
 
 recur
 
 recurtype
 
 recurlimit
 
 singlerun
 
 filelistdef
 
 prestart
 
 activebase
 
 dropboxwait
 
 prestagefraction
 
 maxfluxfilemb
 
 num_jobs
 
 num_events
 
 max_files_per_job
 
 target_size
 
 defname
 
 ana_defname
 
 data_tier
 
 data_stream
 
 ana_data_tier
 
 ana_data_stream
 
 submit_script
 
 init_script
 
 init_source
 
 end_script
 
 mid_source
 
 mid_script
 
 project_name
 
 stage_name
 
 project_version
 
 merge
 
 anamerge
 
 resource
 
 lines
 
 site
 
 blacklist
 
 cpu
 
 disk
 
 datafiletypes
 
 memory
 
 parameters
 
 output
 
 TFileName
 
 jobsub
 
 jobsub_start
 
 jobsub_timeout
 
 exe
 
 schema
 
 validate_on_worker
 
 copy_to_fts
 
 cvmfs
 
 stash
 
 singularity
 
 script
 
 start_script
 
 stop_script
 

Detailed Description

Definition at line 33 of file stagedef.py.

Constructor & Destructor Documentation

def python.stagedef.StageDef.__init__ (   self,
  stage_element,
  base_stage,
  default_input_lists,
  default_previous_stage,
  default_num_jobs,
  default_num_events,
  default_max_files_per_job,
  default_merge,
  default_anamerge,
  default_cpu,
  default_disk,
  default_memory,
  default_validate_on_worker,
  default_copy_to_fts,
  default_cvmfs,
  default_stash,
  default_singularity,
  default_script,
  default_start_script,
  default_stop_script,
  default_site,
  default_blacklist,
  check = True 
)

Definition at line 43 of file stagedef.py.

43 
44  default_site, default_blacklist, check=True):
45 
46  # Assign default values.
47 
48  if base_stage != None:
49  self.name = base_stage.name
50  self.batchname = base_stage.batchname
51  self.fclname = base_stage.fclname
52  self.outdir = base_stage.outdir
53  self.logdir = base_stage.logdir
54  self.workdir = base_stage.workdir
55  self.bookdir = base_stage.bookdir
56  self.dirsize = base_stage.dirsize
57  self.dirlevels = base_stage.dirlevels
58  self.dynamic = base_stage.dynamic
59  self.inputfile = base_stage.inputfile
60  self.inputlist = base_stage.inputlist
61  self.inputmode = base_stage.inputmode
62  self.basedef = base_stage.basedef
63  self.inputdef = base_stage.inputdef
64  self.inputstream = base_stage.inputstream
65  self.previousstage = base_stage.previousstage
66  self.mixinputdef = base_stage.mixinputdef
67  self.pubs_input_ok = base_stage.pubs_input_ok
68  self.pubs_input = base_stage.pubs_input
69  self.input_run = base_stage.input_run
70  self.input_subruns = base_stage.input_subruns
71  self.input_version = base_stage.input_version
72  self.pubs_output = base_stage.pubs_output
73  self.output_run = base_stage.output_run
74  self.output_subruns = base_stage.output_subruns
75  self.output_version = base_stage.output_version
76  self.ana = base_stage.ana
77  self.recur = base_stage.recur
78  self.recurtype = base_stage.recurtype
79  self.recurlimit = base_stage.recurlimit
80  self.singlerun = base_stage.singlerun
81  self.filelistdef = base_stage.filelistdef
82  self.prestart = base_stage.prestart
83  self.activebase = base_stage.activebase
84  self.dropboxwait = base_stage.dropboxwait
85  self.prestagefraction = base_stage.prestagefraction
86  self.maxfluxfilemb = base_stage.maxfluxfilemb
87  self.num_jobs = base_stage.num_jobs
88  self.num_events = base_stage.num_events
89  self.max_files_per_job = base_stage.max_files_per_job
90  self.target_size = base_stage.target_size
91  self.defname = base_stage.defname
92  self.ana_defname = base_stage.ana_defname
93  self.data_tier = base_stage.data_tier
94  self.data_stream = base_stage.data_stream
95  self.ana_data_tier = base_stage.ana_data_tier
96  self.ana_data_stream = base_stage.ana_data_stream
97  self.submit_script = base_stage.submit_script
98  self.init_script = base_stage.init_script
99  self.init_source = base_stage.init_source
100  self.end_script = base_stage.end_script
101  self.mid_source = base_stage.mid_source
102  self.mid_script = base_stage.mid_script
103  self.project_name = base_stage.project_name
104  self.stage_name = base_stage.stage_name
105  self.project_version = base_stage.project_version
106  self.merge = base_stage.merge
107  self.anamerge = base_stage.anamerge
108  self.resource = base_stage.resource
109  self.lines = base_stage.lines
110  self.site = base_stage.site
111  self.blacklist = base_stage.blacklist
112  self.cpu = base_stage.cpu
113  self.disk = base_stage.disk
114  self.datafiletypes = base_stage.datafiletypes
115  self.memory = base_stage.memory
116  self.parameters = base_stage.parameters
117  self.output = base_stage.output
118  self.TFileName = base_stage.TFileName
119  self.jobsub = base_stage.jobsub
120  self.jobsub_start = base_stage.jobsub_start
121  self.jobsub_timeout = base_stage.jobsub_timeout
122  self.exe = base_stage.exe
123  self.schema = base_stage.schema
124  self.validate_on_worker = base_stage.validate_on_worker
125  self.copy_to_fts = base_stage.copy_to_fts
126  self.cvmfs = base_stage.cvmfs
127  self.stash = base_stage.stash
128  self.singularity = base_stage.singularity
129  self.script = base_stage.script
130  self.start_script = base_stage.start_script
131  self.stop_script = base_stage.stop_script
132  else:
133  self.name = '' # Stage name.
134  self.batchname = '' # Batch job name
135  self.fclname = []
136  self.outdir = '' # Output directory.
137  self.logdir = '' # Log directory.
138  self.workdir = '' # Work directory.
139  self.bookdir = '' # Bookkeeping directory.
140  self.dirsize = 0 # Maximum directory size.
141  self.dirlevels = 0 # Number of extra directory levels.
142  self.dynamic = 0 # Dynamic output/log directory.
143  self.inputfile = '' # Single input file.
144  self.inputlist = '' # Input file list.
145  self.inputmode = '' # Input file type (none or textfile)
146  self.basedef = '' # Base sam dataset definition.
147  self.inputdef = '' # Input sam dataset definition.
148  self.inputstream = '' # Input file stream.
149  self.previousstage = '' # Previous stage name.
150  self.mixinputdef = '' # Mix input sam dataset definition.
151  self.pubs_input_ok = 1 # Is pubs input allowed?
152  self.pubs_input = 0 # Pubs input mode.
153  self.input_run = 0 # Pubs input run.
154  self.input_subruns = [] # Pubs input subrun number(s).
155  self.input_version = 0 # Pubs input version number.
156  self.pubs_output = 0 # Pubs output mode.
157  self.output_run = 0 # Pubs output run.
158  self.output_subruns = [] # Pubs output subrun number.
159  self.output_version = 0 # Pubs output version number.
160  self.ana = 0 # Analysis flag.
161  self.recur = 0 # Recursive flag.
162  self.recurtype = '' # Recursive type.
163  self.recurlimit = 0 # Recursive limit.
164  self.singlerun = 0 # Single run mode.
165  self.filelistdef = 0 # Convert sam input def to file list.
166  self.prestart = 0 # Prestart flag.
167  self.activebase = '' # Active projects base name.
168  self.dropboxwait = 0. # Dropbox waiting interval.
169  self.prestagefraction = 0. # Prestage fraction.
170  self.maxfluxfilemb = 0 # MaxFluxFileMB (size of genie flux files to fetch).
171  self.num_jobs = default_num_jobs # Number of jobs.
172  self.num_events = default_num_events # Number of events.
173  self.max_files_per_job = default_max_files_per_job # Maximum number of files per job.
174  self.target_size = 0 # Target size for output files.
175  self.defname = '' # Sam dataset definition name.
176  self.ana_defname = '' # Sam analysis dataset definition name.
177  self.data_tier = '' # Sam data tier.
178  self.data_stream = [] # Sam data stream.
179  self.ana_data_tier = '' # Sam analysis data tier.
180  self.ana_data_stream = [] # Sam analysis data stream.
181  self.submit_script = '' # Submit script.
182  self.init_script = [] # Worker initialization script.
183  self.init_source = [] # Worker initialization bash source script.
184  self.end_script = [] # Worker end-of-job script.
185  self.mid_source = {} # Worker midstage source init scripts.
186  self.mid_script = {} # Worker midstage finalization scripts.
187  self.project_name = [] # Project name overrides.
188  self.stage_name = [] # Stage name overrides.
189  self.project_version = [] # Project version overrides.
190  self.merge = default_merge # Histogram merging program
191  self.anamerge = default_anamerge # Analysis merge flag.
192  self.resource = '' # Jobsub resources.
193  self.lines = '' # Arbitrary condor commands.
194  self.site = default_site # Site.
195  self.blacklist = default_blacklist # Blacklist site.
196  self.cpu = default_cpu # Number of cpus.
197  self.disk = default_disk # Disk space (string value+unit).
198  self.datafiletypes = ["root"] # Data file types.
199  self.memory = default_memory # Amount of memory (integer MB).
200  self.parameters = {} # Dictionary of metadata parameters.
201  self.output = [] # Art output file names.
202  self.TFileName = '' # TFile output file name.
203  self.jobsub = '' # Arbitrary jobsub_submit options.
204  self.jobsub_start = '' # Arbitrary jobsub_submit options for sam start/stop jobs.
205  self.jobsub_timeout = 0 # Jobsub submit timeout.
206  self.exe = [] # Art-like executables.
207  self.schema = '' # Sam schema.
208  self.validate_on_worker = default_validate_on_worker # Validate-on-worker flag.
209  self.copy_to_fts = default_copy_to_fts # Upload-on-worker flag.
210  self.cvmfs = default_cvmfs # Default cvmfs flag.
211  self.stash = default_stash # Default stash flag.
212  self.singularity = default_singularity # Default singularity flag.
213  self.script = default_script # Batch script.
214  self.start_script = default_start_script # Start project script.
215  self.stop_script = default_stop_script # Stop project script.
216 
217  # Extract values from xml.
218 
219  # Stage name (attribute).
220 
221  if 'name' in dict(stage_element.attributes):
222  self.name = str(stage_element.attributes['name'].firstChild.data)
223  if self.name == '':
224  raise XMLError("Stage name not specified.")
225 
226  # Batch job name (subelement).
227 
228  batchname_elements = stage_element.getElementsByTagName('batchname')
229  if batchname_elements:
230  self.batchname = str(batchname_elements[0].firstChild.data)
231 
232  # Fcl file name (repeatable subelement).
233 
234  fclname_elements = stage_element.getElementsByTagName('fcl')
235  if len(fclname_elements) > 0:
236  self.fclname = []
237  for fcl in fclname_elements:
238  self.fclname.append(str(fcl.firstChild.data).strip())
239  if len(self.fclname) == 0:
240  raise XMLError('No Fcl names specified for stage %s.' % self.name)
241 
242  # Output directory (subelement).
243 
244  outdir_elements = stage_element.getElementsByTagName('outdir')
245  if outdir_elements:
246  self.outdir = str(outdir_elements[0].firstChild.data)
247  if self.outdir == '':
248  raise XMLError('Output directory not specified for stage %s.' % self.name)
249 
250  # Log directory (subelement).
251 
252  logdir_elements = stage_element.getElementsByTagName('logdir')
253  if logdir_elements:
254  self.logdir = str(logdir_elements[0].firstChild.data)
255  if self.logdir == '':
256  self.logdir = self.outdir
257 
258  # Work directory (subelement).
259 
260  workdir_elements = stage_element.getElementsByTagName('workdir')
261  if workdir_elements:
262  self.workdir = str(workdir_elements[0].firstChild.data)
263  if self.workdir == '':
264  raise XMLError('Work directory not specified for stage %s.' % self.name)
265 
266  # Bookkeeping directory (subelement).
267 
268  bookdir_elements = stage_element.getElementsByTagName('bookdir')
269  if bookdir_elements:
270  self.bookdir = str(bookdir_elements[0].firstChild.data)
271  if self.bookdir == '':
272  self.bookdir = self.logdir
273 
274  # Maximum directory size (subelement).
275 
276  dirsize_elements = stage_element.getElementsByTagName('dirsize')
277  if dirsize_elements:
278  self.dirsize = int(dirsize_elements[0].firstChild.data)
279 
280  # Extra directory levels (subelement).
281 
282  dirlevels_elements = stage_element.getElementsByTagName('dirlevels')
283  if dirlevels_elements:
284  self.dirlevels = int(dirlevels_elements[0].firstChild.data)
285 
286  # Single input file (subelement).
287 
288  inputfile_elements = stage_element.getElementsByTagName('inputfile')
289  if inputfile_elements:
290  self.inputfile = str(inputfile_elements[0].firstChild.data)
291 
292  # Input file list (subelement).
293 
294  inputlist_elements = stage_element.getElementsByTagName('inputlist')
295  if inputlist_elements:
296  self.inputlist = str(inputlist_elements[0].firstChild.data)
297 
298  # Input file type (subelement).
299 
300  inputmode_elements = stage_element.getElementsByTagName('inputmode')
301  if inputmode_elements:
302  self.inputmode = str(inputmode_elements[0].firstChild.data)
303 
304  # Input sam dataset definition (subelement).
305 
306  inputdef_elements = stage_element.getElementsByTagName('inputdef')
307  if inputdef_elements:
308  self.inputdef = str(inputdef_elements[0].firstChild.data)
309 
310  # Analysis flag (subelement).
311 
312  ana_elements = stage_element.getElementsByTagName('ana')
313  if ana_elements:
314  self.ana = int(ana_elements[0].firstChild.data)
315 
316  # Recursive flag (subelement).
317 
318  recur_elements = stage_element.getElementsByTagName('recur')
319  if recur_elements:
320  self.recur = int(recur_elements[0].firstChild.data)
321 
322  # Recursive type (subelement).
323 
324  recurtype_elements = stage_element.getElementsByTagName('recurtype')
325  if recurtype_elements:
326  self.recurtype = str(recurtype_elements[0].firstChild.data)
327 
328  # Recursive limit (subelement).
329 
330  recurlimit_elements = stage_element.getElementsByTagName('recurlimit')
331  if recurlimit_elements:
332  self.recurlimit = int(recurlimit_elements[0].firstChild.data)
333 
334  # Recursive input sam dataset definition (subelement).
335 
336  recurdef_elements = stage_element.getElementsByTagName('recurdef')
337  if recurdef_elements:
338  self.basedef = self.inputdef
339  self.inputdef = str(recurdef_elements[0].firstChild.data)
340  self.recur = 1
341 
342  # Single run flag (subelement).
343 
344  singlerun_elements = stage_element.getElementsByTagName('singlerun')
345  if singlerun_elements:
346  self.singlerun = int(singlerun_elements[0].firstChild.data)
347 
348  # File list definition flag (subelement).
349 
350  filelistdef_elements = stage_element.getElementsByTagName('filelistdef')
351  if filelistdef_elements:
352  self.filelistdef = int(filelistdef_elements[0].firstChild.data)
353 
354  # Prestart flag.
355 
356  prestart_elements = stage_element.getElementsByTagName('prestart')
357  if prestart_elements:
358  self.prestart = int(prestart_elements[0].firstChild.data)
359 
360  # Active projects basename.
361 
362  activebase_elements = stage_element.getElementsByTagName('activebase')
363  if activebase_elements:
364  self.activebase = str(activebase_elements[0].firstChild.data)
365 
366  # Dropbox wait interval.
367 
368  dropboxwait_elements = stage_element.getElementsByTagName('dropboxwait')
369  if dropboxwait_elements:
370  self.dropboxwait = float(dropboxwait_elements[0].firstChild.data)
371 
372  # Prestage fraction (subelement).
373 
374  prestagefraction_elements = stage_element.getElementsByTagName('prestagefraction')
375  if prestagefraction_elements:
376  self.prestagefraction = float(prestagefraction_elements[0].firstChild.data)
377 
378  # Input stream (subelement).
379 
380  inputstream_elements = stage_element.getElementsByTagName('inputstream')
381  if inputstream_elements:
382  self.inputstream = str(inputstream_elements[0].firstChild.data)
383 
384  # Previous stage name (subelement).
385 
386  previousstage_elements = stage_element.getElementsByTagName('previousstage')
387  if previousstage_elements:
388  self.previousstage = str(previousstage_elements[0].firstChild.data)
389 
390  # If a base stage was specified, nullify any input inherited from the base stage.
391 
392  if base_stage != None:
393  self.inputfile = ''
394  self.inputlist = ''
395  self.inputdef = ''
396 
397  # It never makes sense to specify a previous stage with some other input.
398 
399  if self.inputfile != '' or self.inputlist != '' or self.inputdef != '':
400  raise XMLError('Previous stage and input specified for stage %s.' % self.name)
401 
402  # Mix input sam dataset (subelement).
403 
404  mixinputdef_elements = stage_element.getElementsByTagName('mixinputdef')
405  if mixinputdef_elements:
406  self.mixinputdef = str(mixinputdef_elements[0].firstChild.data)
407 
408  # It is an error to specify both input file and input list.
409 
410  if self.inputfile != '' and self.inputlist != '':
411  raise XMLError('Input file and input list both specified for stage %s.' % self.name)
412 
413  # It is an error to specify either input file or input list together
414  # with a sam input dataset.
415 
416  if self.inputdef != '' and (self.inputfile != '' or self.inputlist != ''):
417  raise XMLError('Input dataset and input files specified for stage %s.' % self.name)
418 
419  # It is an error to use textfile inputmode without an inputlist or inputfile
420  if self.inputmode == 'textfile' and self.inputlist == '' and self.inputfile == '':
421  raise XMLError('Input list (inputlist) or inputfile is needed for textfile mode.')
422 
423  # If no input definition, input file, or input list was specified, set
424  # the input list to the default input list. If an input stream was specified,
425  # insert it before the file extension.
426 
427  if self.inputfile == '' and self.inputlist == '' and self.inputdef == '':
428 
429  # Get the default input list according to the previous stage.
430 
431  default_input_list = ''
432  previous_stage_name = default_previous_stage
433  if self.previousstage != '':
434  previous_stage_name = self.previousstage
435  if previous_stage_name in default_input_lists:
436  default_input_list = default_input_lists[previous_stage_name]
437 
438  # Modify default input list according to input stream, if any.
439 
440  if self.inputstream == '' or default_input_list == '':
441  self.inputlist = default_input_list
442  else:
443  n = default_input_list.rfind('.')
444  if n < 0:
445  n = len(default_input_list)
446  self.inputlist = '%s_%s%s' % (default_input_list[:n],
447  self.inputstream,
448  default_input_list[n:])
449 
450  # Pubs input flag.
451 
452  pubs_input_ok_elements = stage_element.getElementsByTagName('pubsinput')
453  if pubs_input_ok_elements:
454  self.pubs_input_ok = int(pubs_input_ok_elements[0].firstChild.data)
455 
456  # MaxFluxFileMB GENIEHelper fcl parameter (subelement).
457 
458  maxfluxfilemb_elements = stage_element.getElementsByTagName('maxfluxfilemb')
459  if maxfluxfilemb_elements:
460  self.maxfluxfilemb = int(maxfluxfilemb_elements[0].firstChild.data)
461  else:
462 
463  # If this is a generator job, give maxfluxfilemb parameter a default
464  # nonzero value.
465 
466  if self.inputfile == '' and self.inputlist == '' and self.inputdef == '':
467  self.maxfluxfilemb = 500
468 
469  # Number of jobs (subelement).
470 
471  num_jobs_elements = stage_element.getElementsByTagName('numjobs')
472  if num_jobs_elements:
473  self.num_jobs = int(num_jobs_elements[0].firstChild.data)
474 
475  # Number of events (subelement).
476 
477  num_events_elements = stage_element.getElementsByTagName('numevents')
478  if num_events_elements:
479  self.num_events = int(num_events_elements[0].firstChild.data)
480 
481  # Maximum number of files per job (subelement).
482 
483  max_files_per_job_elements = stage_element.getElementsByTagName('maxfilesperjob')
484  if max_files_per_job_elements:
485  self.max_files_per_job = int(max_files_per_job_elements[0].firstChild.data)
486 
487  # Run number (MC generation only).
488  # Overridden by --pubs <run> when running in pubs mode.
489 
490  run_number = stage_element.getElementsByTagName('runnumber')
491  if run_number:
492  self.output_run = int(run_number[0].firstChild.data)
493 
494  # Target size for output files (subelement).
495 
496  target_size_elements = stage_element.getElementsByTagName('targetsize')
497  if target_size_elements:
498  self.target_size = int(target_size_elements[0].firstChild.data)
499 
500 
501  # Sam dataset definition name (subelement).
502 
503  defname_elements = stage_element.getElementsByTagName('defname')
504  if defname_elements:
505  self.defname = str(defname_elements[0].firstChild.data)
506 
507  # Sam analysis dataset definition name (subelement).
508 
509  ana_defname_elements = stage_element.getElementsByTagName('anadefname')
510  if ana_defname_elements:
511  self.ana_defname = str(ana_defname_elements[0].firstChild.data)
512 
513  # Sam data tier (subelement).
514 
515  data_tier_elements = stage_element.getElementsByTagName('datatier')
516  if data_tier_elements:
517  self.data_tier = str(data_tier_elements[0].firstChild.data)
518 
519  # Sam data stream (subelement).
520 
521  data_stream_elements = stage_element.getElementsByTagName('datastream')
522  if len(data_stream_elements) > 0:
523  self.data_stream = []
524  for data_stream in data_stream_elements:
525  self.data_stream.append(str(data_stream.firstChild.data))
526 
527  # Sam analysis data tier (subelement).
528 
529  ana_data_tier_elements = stage_element.getElementsByTagName('anadatatier')
530  if ana_data_tier_elements:
531  self.ana_data_tier = str(ana_data_tier_elements[0].firstChild.data)
532 
533  # Sam analysis data stream (subelement).
534 
535  ana_data_stream_elements = stage_element.getElementsByTagName('anadatastream')
536  if len(ana_data_stream_elements) > 0:
537  self.ana_data_stream = []
538  for ana_data_stream in ana_data_stream_elements:
539  self.ana_data_stream.append(str(ana_data_stream.firstChild.data))
540 
541  # Submit script (subelement).
542 
543  submit_script_elements = stage_element.getElementsByTagName('submitscript')
544  if submit_script_elements:
545  self.submit_script = str(submit_script_elements[0].firstChild.data).split()
546 
547  # Make sure submit script exists, and convert into a full path.
548 
549  if check:
550  if len(self.submit_script) > 0:
551  if larbatch_posix.exists(self.submit_script[0]):
552  self.submit_script[0] = os.path.realpath(self.submit_script[0])
553  else:
554 
555  # Look for script on execution path.
556 
557  try:
558  jobinfo = subprocess.Popen(['which', self.submit_script[0]],
559  stdout=subprocess.PIPE,
560  stderr=subprocess.PIPE)
561  jobout, joberr = jobinfo.communicate()
562  jobout = convert_str(jobout)
563  joberr = convert_str(joberr)
564  rc = jobinfo.poll()
565  self.submit_script[0] = jobout.splitlines()[0].strip()
566  except:
567  pass
568  if not larbatch_posix.exists(self.submit_script[0]):
569  raise IOError('Submit script %s not found.' % self.submit_script[0])
570 
571  # Worker initialization script (repeatable subelement).
572 
573  init_script_elements = stage_element.getElementsByTagName('initscript')
574  if len(init_script_elements) > 0:
575  for init_script_element in init_script_elements:
576  init_script = str(init_script_element.firstChild.data)
577 
578  # Make sure init script exists, and convert into a full path.
579 
580  if check:
581  if init_script != '':
582  if larbatch_posix.exists(init_script):
583  init_script = os.path.realpath(init_script)
584  else:
585 
586  # Look for script on execution path.
587 
588  try:
589  jobinfo = subprocess.Popen(['which', init_script],
590  stdout=subprocess.PIPE,
591  stderr=subprocess.PIPE)
592  jobout, joberr = jobinfo.communicate()
593  rc = jobinfo.poll()
594  init_script = convert_str(jobout.splitlines()[0].strip())
595  except:
596  pass
597 
598  if not larbatch_posix.exists(init_script):
599  raise IOError('Init script %s not found.' % init_script)
600 
601  self.init_script.append(init_script)
602 
603  # Worker initialization source script (repeatable subelement).
604 
605  init_source_elements = stage_element.getElementsByTagName('initsource')
606  if len(init_source_elements) > 0:
607  for init_source_element in init_source_elements:
608  init_source = str(init_source_element.firstChild.data)
609 
610  # Make sure init source script exists, and convert into a full path.
611 
612  if init_source != '':
613  if check:
614  if larbatch_posix.exists(init_source):
615  init_source = os.path.realpath(init_source)
616  else:
617 
618  # Look for script on execution path.
619 
620  try:
621  jobinfo = subprocess.Popen(['which', init_source],
622  stdout=subprocess.PIPE,
623  stderr=subprocess.PIPE)
624  jobout, joberr = jobinfo.communicate()
625  rc = jobinfo.poll()
626  init_source = convert_str(jobout.splitlines()[0].strip())
627  except:
628  pass
629 
630  if not larbatch_posix.exists(init_source):
631  raise IOError('Init source script %s not found.' % init_source)
632 
633  # The <initsource> element can occur at the top level of the <stage>
634  # element, or inside a <fcl> element.
635  # Update the StageDef object differently in these two cases.
636 
637  parent_element = init_source_element.parentNode
638  if parent_element.nodeName == 'fcl':
639 
640  # This <initsource> is located inside a <fcl> element.
641  # Find the index of this fcl file.
642  # Python will raise an exception if the fcl can't be found
643  # (shouldn't happen).
644 
645  fcl = str(parent_element.firstChild.data).strip()
646  n = self.fclname.index(fcl)
647  if not n in self.mid_source:
648  self.mid_source[n] = []
649  self.mid_source[n].append(init_source)
650 
651  else:
652 
653  # This is a <stage> level <initsource> element.
654 
655  self.init_source.append(init_source)
656 
657  # Worker end-of-job script (repeatable subelement).
658 
659  end_script_elements = stage_element.getElementsByTagName('endscript')
660  if len(end_script_elements) > 0:
661  for end_script_element in end_script_elements:
662  end_script = str(end_script_element.firstChild.data)
663 
664  # Make sure end-of-job scripts exists, and convert into a full path.
665 
666  if end_script != '':
667  if check:
668  if larbatch_posix.exists(end_script):
669  end_script = os.path.realpath(end_script)
670  else:
671 
672  # Look for script on execution path.
673 
674  try:
675  jobinfo = subprocess.Popen(['which', end_script],
676  stdout=subprocess.PIPE,
677  stderr=subprocess.PIPE)
678  jobout, joberr = jobinfo.communicate()
679  rc = jobinfo.poll()
680  end_script = convert_str(jobout.splitlines()[0].strip())
681  except:
682  pass
683 
684  if not larbatch_posix.exists(end_script):
685  raise IOError('End-of-job script %s not found.' % end_script)
686 
687  # The <endscript> element can occur at the top level of the <stage>
688  # element, or inside a <fcl> element.
689  # Update the StageDef object differently in these two cases.
690 
691  parent_element = end_script_element.parentNode
692  if parent_element.nodeName == 'fcl':
693 
694  # This <endscript> is located inside a <fcl> element.
695  # Find the index of this fcl file.
696  # Python will raise an exception if the fcl can't be found
697  # (shouldn't happen).
698 
699  fcl = str(parent_element.firstChild.data).strip()
700  n = self.fclname.index(fcl)
701  if not n in self.mid_script:
702  self.mid_script[n] = []
703  self.mid_script[n].append(end_script)
704 
705  else:
706 
707  # This is a <stage> level <endscript> element.
708 
709  self.end_script.append(end_script)
710 
711  # Project name overrides (repeatable subelement).
712 
713  project_name_elements = stage_element.getElementsByTagName('projectname')
714  if len(project_name_elements) > 0:
715  for project_name_element in project_name_elements:
716 
717  # Match this project name with its parent fcl element.
718 
719  fcl_element = project_name_element.parentNode
720  if fcl_element.nodeName != 'fcl':
721  raise XMLError("Found <projectname> element outside <fcl> element.")
722  fcl = str(fcl_element.firstChild.data).strip()
723 
724  # Find the index of this fcl file.
725  # Python will raise an exception if the fcl can't be found (shouldn't happen).
726 
727  n = self.fclname.index(fcl)
728 
729  # Make sure project_name list is long enough.
730 
731  while len(self.project_name) < n+1:
732  self.project_name.append('')
733 
734  # Extract project name and add it to list.
735 
736  project_name = str(project_name_element.firstChild.data)
737  self.project_name[n] = project_name
738 
739  # Make sure that the size of the project_name list (if present) is at least as
740  # long as the fclname list.
741  # If not, extend by adding empty strings.
742 
743  if len(self.project_name) > 0:
744  while len(self.project_name) < len(self.fclname):
745  self.project_name.append('')
746 
747  # Stage name overrides (repeatable subelement).
748 
749  stage_name_elements = stage_element.getElementsByTagName('stagename')
750  if len(stage_name_elements) > 0:
751  for stage_name_element in stage_name_elements:
752 
753  # Match this stage name with its parent fcl element.
754 
755  fcl_element = stage_name_element.parentNode
756  if fcl_element.nodeName != 'fcl':
757  raise XMLError("Found <stagename> element outside <fcl> element.")
758  fcl = str(fcl_element.firstChild.data).strip()
759 
760  # Find the index of this fcl file.
761  # Python will raise an exception if the fcl can't be found (shouldn't happen).
762 
763  n = self.fclname.index(fcl)
764 
765  # Make sure stage_name list is long enough.
766 
767  while len(self.stage_name) < n+1:
768  self.stage_name.append('')
769 
770  # Extract stage name and add it to list.
771 
772  stage_name = str(stage_name_element.firstChild.data)
773  self.stage_name[n] = stage_name
774 
775  # Make sure that the size of the stage_name list (if present) is at least as
776  # long as the fclname list.
777  # If not, extend by adding empty strings.
778 
779  if len(self.stage_name) > 0:
780  while len(self.stage_name) < len(self.fclname):
781  self.stage_name.append('')
782 
783  # Project version overrides (repeatable subelement).
784 
785  project_version_elements = stage_element.getElementsByTagName('version')
786  if len(project_version_elements) > 0:
787  for project_version_element in project_version_elements:
788 
789  # Match this project version with its parent fcl element.
790 
791  fcl_element = project_version_element.parentNode
792  if fcl_element.nodeName != 'fcl':
793  raise XMLError("Found stage level <version> element outside <fcl> element.")
794  fcl = str(fcl_element.firstChild.data).strip()
795 
796  # Find the index of this fcl file.
797  # Python will raise an exception if the fcl can't be found (shouldn't happen).
798 
799  n = self.fclname.index(fcl)
800 
801  # Make sure project_version list is long enough.
802 
803  while len(self.project_version) < n+1:
804  self.project_version.append('')
805 
806  # Extract project version and add it to list.
807 
808  project_version = str(project_version_element.firstChild.data)
809  self.project_version[n] = project_version
810 
811  # Make sure that the size of the project_version list (if present) is at least as
812  # long as the fclname list.
813  # If not, extend by adding empty strings.
814 
815  if len(self.project_version) > 0:
816  while len(self.project_version) < len(self.fclname):
817  self.project_version.append('')
818 
819  # Histogram merging program.
820 
821  merge_elements = stage_element.getElementsByTagName('merge')
822  if merge_elements:
823  self.merge = str(merge_elements[0].firstChild.data)
824 
825  # Analysis merge flag.
826 
827  anamerge_elements = stage_element.getElementsByTagName('anamerge')
828  if anamerge_elements:
829  self.anamerge = str(anamerge_elements[0].firstChild.data)
830 
831  # Resource (subelement).
832 
833  resource_elements = stage_element.getElementsByTagName('resource')
834  if resource_elements:
835  self.resource = str(resource_elements[0].firstChild.data)
836  self.resource = ''.join(self.resource.split())
837 
838  # Lines (subelement).
839 
840  lines_elements = stage_element.getElementsByTagName('lines')
841  if lines_elements:
842  self.lines = str(lines_elements[0].firstChild.data)
843 
844  # Site (subelement).
845 
846  site_elements = stage_element.getElementsByTagName('site')
847  if site_elements:
848  self.site = str(site_elements[0].firstChild.data)
849  self.site = ''.join(self.site.split())
850 
851  # Blacklist (subelement).
852 
853  blacklist_elements = stage_element.getElementsByTagName('blacklist')
854  if blacklist_elements:
855  self.blacklist = str(blacklist_elements[0].firstChild.data)
856  self.blacklist = ''.join(self.blacklist.split())
857 
858  # Cpu (subelement).
859 
860  cpu_elements = stage_element.getElementsByTagName('cpu')
861  if cpu_elements:
862  self.cpu = int(cpu_elements[0].firstChild.data)
863 
864  # Disk (subelement).
865 
866  disk_elements = stage_element.getElementsByTagName('disk')
867  if disk_elements:
868  self.disk = str(disk_elements[0].firstChild.data)
869  self.disk = ''.join(self.disk.split())
870 
871  # Data file types (subelement).
872 
873  datafiletypes_elements = stage_element.getElementsByTagName('datafiletypes')
874  if datafiletypes_elements:
875  data_file_types_str = str(datafiletypes_elements[0].firstChild.data)
876  data_file_types_str = ''.join(data_file_types_str.split())
877  self.datafiletypes = data_file_types_str.split(',')
878 
879  # Memory (subelement).
880 
881  memory_elements = stage_element.getElementsByTagName('memory')
882  if memory_elements:
883  self.memory = int(memory_elements[0].firstChild.data)
884 
885  # Dictionary of metadata parameters (repeatable subelement).
886 
887  param_elements = stage_element.getElementsByTagName('parameter')
888  if len(param_elements) > 0:
889  self.parameters = {}
890  for param_element in param_elements:
891  name = str(param_element.attributes['name'].firstChild.data)
892  value = str(param_element.firstChild.data)
893  self.parameters[name] = value
894 
895  # Output file name (repeatable subelement).
896 
897  output_elements = stage_element.getElementsByTagName('output')
898  if len(output_elements) > 0:
899 
900  # The output element can occur once at the top level of the <stage> element, or
901  # inside a <fcl> element. The former applies globally. The latter applies
902  # only to that fcl substage.
903 
904  # Loop over global output elements.
905 
906  for output_element in output_elements:
907  parent_element = output_element.parentNode
908  if parent_element.nodeName != 'fcl':
909  output = str(output_element.firstChild.data)
910  self.output = []
911  while len(self.output) < len(self.fclname):
912  self.output.append(output)
913 
914  # Loop over fcl output elements.
915 
916  for output_element in output_elements:
917  parent_element = output_element.parentNode
918  if parent_element.nodeName == 'fcl':
919 
920  # Match this output name with its parent fcl element.
921 
922  fcl = str(parent_element.firstChild.data).strip()
923  n = self.fclname.index(fcl)
924 
925  # Make sure output list is long enough.
926 
927  while len(self.output) < n+1:
928  self.output.append('')
929 
930  # Extract output name and add it to list.
931 
932  output = str(output_element.firstChild.data)
933  self.output[n] = output
934 
935  # Make sure that the size of the output list (if present) is at least as
936  # long as the fclname list.
937  # If not, extend by adding empty strings.
938 
939  if len(self.output) > 0:
940  while len(self.output) < len(self.fclname):
941  self.output.append('')
942 
943  # TFileName (subelement).
944 
945  TFileName_elements = stage_element.getElementsByTagName('TFileName')
946  if TFileName_elements:
947  self.TFileName = str(TFileName_elements[0].firstChild.data)
948 
949  # Jobsub.
950 
951  jobsub_elements = stage_element.getElementsByTagName('jobsub')
952  if jobsub_elements:
953  self.jobsub = str(jobsub_elements[0].firstChild.data)
954 
955  # Jobsub start/stop.
956 
957  jobsub_start_elements = stage_element.getElementsByTagName('jobsub_start')
958  if jobsub_start_elements:
959  self.jobsub_start = str(jobsub_start_elements[0].firstChild.data)
960 
961  # Jobsub submit timeout.
962 
963  jobsub_timeout_elements = stage_element.getElementsByTagName('jobsub_timeout')
964  if jobsub_timeout_elements:
965  self.jobsub_timeout = int(jobsub_timeout_elements[0].firstChild.data)
966 
967  # Name of art-like executables (repeatable subelement).
968 
969  exe_elements = stage_element.getElementsByTagName('exe')
970  if len(exe_elements) > 0:
971 
972  # The exe element can occur once at the top level of the <stage> element, or
973  # inside a <fcl> element. The former applies globally. The latter applies
974  # only to that fcl substage.
975 
976  # Loop over global exe elements.
977 
978  for exe_element in exe_elements:
979  parent_element = exe_element.parentNode
980  if parent_element.nodeName != 'fcl':
981  exe = str(exe_element.firstChild.data)
982  self.exe = []
983  while len(self.exe) < len(self.fclname):
984  self.exe.append(exe)
985 
986  # Loop over fcl exe elements.
987 
988  for exe_element in exe_elements:
989  parent_element = exe_element.parentNode
990  if parent_element.nodeName == 'fcl':
991 
992  # Match this exe name with its parent fcl element.
993 
994  fcl = str(parent_element.firstChild.data).strip()
995  n = self.fclname.index(fcl)
996 
997  # Make sure exe list is long enough.
998 
999  while len(self.exe) < n+1:
1000  self.exe.append('')
1001 
1002  # Extract exe name and add it to list.
1003 
1004  exe = str(exe_element.firstChild.data)
1005  self.exe[n] = exe
1006 
1007  # Make sure that the size of the exe list (if present) is at least as
1008  # long as the fclname list.
1009  # If not, extend by adding empty strings.
1010 
1011  if len(self.exe) > 0:
1012  while len(self.exe) < len(self.fclname):
1013  self.exe.append('')
1014 
1015  # Sam schema.
1016 
1017  schema_elements = stage_element.getElementsByTagName('schema')
1018  if schema_elements:
1019  self.schema = str(schema_elements[0].firstChild.data)
1020 
1021  # Validate-on-worker.
1022 
1023  validate_on_worker_elements = stage_element.getElementsByTagName('check')
1024  if validate_on_worker_elements:
1025  self.validate_on_worker = int(validate_on_worker_elements[0].firstChild.data)
1026 
1027  # Upload-on-worker.
1028 
1029  copy_to_fts_elements = stage_element.getElementsByTagName('copy')
1030  if copy_to_fts_elements:
1031  self.copy_to_fts = int(copy_to_fts_elements[0].firstChild.data)
1032 
1033  # Cvmfs flag.
1034 
1035  cvmfs_elements = stage_element.getElementsByTagName('cvmfs')
1036  if cvmfs_elements:
1037  self.cvmfs = int(cvmfs_elements[0].firstChild.data)
1038 
1039  # Stash flag.
1040 
1041  stash_elements = stage_element.getElementsByTagName('stash')
1042  if stash_elements:
1043  self.stash = int(stash_elements[0].firstChild.data)
1044 
1045  # Singularity flag.
1046 
1047  singularity_elements = stage_element.getElementsByTagName('singularity')
1048  if singularity_elements:
1049  self.singularity = int(singularity_elements[0].firstChild.data)
1050 
1051  # Batch script
1052 
1053  script_elements = stage_element.getElementsByTagName('script')
1054  if script_elements:
1055  self.script = script_elements[0].firstChild.data
1056 
1057  # Make sure batch script exists, and convert into a full path.
1058 
1059  if check:
1060  script_path = ''
1061  try:
1062  jobinfo = subprocess.Popen(['which', self.script],
1063  stdout=subprocess.PIPE,
1064  stderr=subprocess.PIPE)
1065  jobout, joberr = jobinfo.communicate()
1066  jobout = convert_str(jobout)
1067  joberr = convert_str(joberr)
1068  rc = jobinfo.poll()
1069  script_path = jobout.splitlines()[0].strip()
1070  except:
1071  pass
1072  if script_path == '' or not larbatch_posix.access(script_path, os.X_OK):
1073  raise IOError('Script %s not found.' % self.script)
1074  self.script = script_path
1075 
1076  # Start script
1077 
1078  start_script_elements = stage_element.getElementsByTagName('startscript')
1079  if start_script_elements:
1080  self.start_script = start_script_elements[0].firstChild.data
1081 
1082  # Make sure start project batch script exists, and convert into a full path.
1083 
1084  if check:
1085  script_path = ''
1086  try:
1087  jobinfo = subprocess.Popen(['which', self.start_script],
1088  stdout=subprocess.PIPE,
1089  stderr=subprocess.PIPE)
1090  jobout, joberr = jobinfo.communicate()
1091  jobout = convert_str(jobout)
1092  joberr = convert_str(joberr)
1093  rc = jobinfo.poll()
1094  script_path = jobout.splitlines()[0].strip()
1095  except:
1096  pass
1097  self.start_script = script_path
1098 
1099  # Stop script
1100 
1101  stop_script_elements = stage_element.getElementsByTagName('stopscript')
1102  if stop_script_elements:
1103  self.stop_script = stop_script_elements[0].firstChild.data
1104 
1105  # Make sure stop project batch script exists, and convert into a full path.
1106 
1107  if check:
1108  script_path = ''
1109  try:
1110  jobinfo = subprocess.Popen(['which', self.stop_script],
1111  stdout=subprocess.PIPE,
1112  stderr=subprocess.PIPE)
1113  jobout, joberr = jobinfo.communicate()
1114  jobout = convert_str(jobout)
1115  joberr = convert_str(joberr)
1116  rc = jobinfo.poll()
1117  script_path = jobout.splitlines()[0].strip()
1118  except:
1119  pass
1120  self.stop_script = script_path
1121 
1122  # Done.
1123 
1124  return
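A minimal construction sketch, assuming stagedef.py is importable; the XML content and all default_* argument values below are placeholders, not larbatch's real defaults. Only the tag names (<fcl>, <outdir>, <workdir>, <numjobs>) and the argument order are taken from the code above.

from xml.dom.minidom import parseString
from stagedef import StageDef   # import path is an assumption

xml_text = '''<stage name="reco">
  <fcl>reco.fcl</fcl>
  <outdir>/pnfs/myexp/scratch/out</outdir>
  <workdir>/pnfs/myexp/scratch/work</workdir>
  <numjobs>10</numjobs>
</stage>'''
stage_element = parseString(xml_text).getElementsByTagName('stage')[0]

# Positional arguments follow the signature documented above; every
# default_* value here is a placeholder chosen for illustration.
stage = StageDef(stage_element,
                 None,             # base_stage
                 {},               # default_input_lists
                 '',               # default_previous_stage
                 1,                # default_num_jobs
                 100,              # default_num_events
                 0,                # default_max_files_per_job
                 'hadd',           # default_merge (placeholder)
                 '',               # default_anamerge
                 1,                # default_cpu
                 '10GB',           # default_disk
                 2000,             # default_memory (MB)
                 0,                # default_validate_on_worker
                 0,                # default_copy_to_fts
                 0,                # default_cvmfs
                 0,                # default_stash
                 0,                # default_singularity
                 'run_job.sh',     # default_script (placeholder)
                 'start.sh',       # default_start_script (placeholder)
                 'stop.sh',        # default_stop_script (placeholder)
                 '',               # default_site
                 '',               # default_blacklist
                 check=False)      # skip filesystem and PATH checks
print(stage)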

Member Function Documentation

def python.stagedef.StageDef.__str__ (   self)

Definition at line 1127 of file stagedef.py.

1128  def __str__(self):
1129  result = 'Stage name = %s\n' % self.name
1130  result += 'Batch job name = %s\n' % self.batchname
1131  #result += 'Fcl filename = %s\n' % self.fclname
1132  for fcl in self.fclname:
1133  result += 'Fcl filename = %s\n' % fcl
1134  result += 'Output directory = %s\n' % self.outdir
1135  result += 'Log directory = %s\n' % self.logdir
1136  result += 'Work directory = %s\n' % self.workdir
1137  result += 'Bookkeeping directory = %s\n' % self.bookdir
1138  result += 'Maximum directory size = %d\n' % self.dirsize
1139  result += 'Extra directory levels = %d\n' % self.dirlevels
1140  result += 'Dynamic directories = %d\n' % self.dynamic
1141  result += 'Input file = %s\n' % self.inputfile
1142  result += 'Input list = %s\n' % self.inputlist
1143  result += 'Input mode = %s\n' % self.inputmode
1144  result += 'Input sam dataset = %s' % self.inputdef
1145  if self.recur:
1146  result += ' (recursive)'
1147  result += '\n'
1148  result += 'Base sam dataset = %s\n' % self.basedef
1149  result += 'Analysis flag = %d\n' % self.ana
1150  result += 'Recursive flag = %d\n' % self.recur
1151  result += 'Recursive type = %s\n' % self.recurtype
1152  result += 'Recursive limit = %d\n' % self.recurlimit
1153  result += 'Single run flag = %d\n' % self.singlerun
1154  result += 'File list definition flag = %d\n' % self.filelistdef
1155  result += 'Prestart flag = %d\n' % self.prestart
1156  result += 'Active projects base name = %s\n' % self.activebase
1157  result += 'Dropbox waiting interval = %f\n' % self.dropboxwait
1158  result += 'Prestage fraction = %f\n' % self.prestagefraction
1159  result += 'Input stream = %s\n' % self.inputstream
1160  result += 'Previous stage name = %s\n' % self.previousstage
1161  result += 'Mix input sam dataset = %s\n' % self.mixinputdef
1162  result += 'Pubs input allowed = %d\n' % self.pubs_input_ok
1163  result += 'Pubs input mode = %d\n' % self.pubs_input
1164  result += 'Pubs input run number = %d\n' % self.input_run
1165  for subrun in self.input_subruns:
1166  result += 'Pubs input subrun number = %d\n' % subrun
1167  result += 'Pubs input version number = %d\n' % self.input_version
1168  result += 'Pubs output mode = %d\n' % self.pubs_output
1169  result += 'Pubs output run number = %d\n' % self.output_run
1170  for subrun in self.output_subruns:
1171  result += 'Pubs output subrun number = %d\n' % subrun
1172  result += 'Pubs output version number = %d\n' % self.output_version
1173  result += 'Output file name = %s\n' % self.output
1174  result += 'TFileName = %s\n' % self.TFileName
1175  result += 'Number of jobs = %d\n' % self.num_jobs
1176  result += 'Number of events = %d\n' % self.num_events
1177  result += 'Max flux MB = %d\n' % self.maxfluxfilemb
1178  result += 'Max files per job = %d\n' % self.max_files_per_job
1179  result += 'Output file target size = %d\n' % self.target_size
1180  result += 'Dataset definition name = %s\n' % self.defname
1181  result += 'Analysis dataset definition name = %s\n' % self.ana_defname
1182  result += 'Data tier = %s\n' % self.data_tier
1183  result += 'Data stream = %s\n' % self.data_stream
1184  result += 'Analysis data tier = %s\n' % self.ana_data_tier
1185  result += 'Analysis data stream = %s\n' % self.ana_data_stream
1186  result += 'Submit script = %s\n' % self.submit_script
1187  result += 'Worker initialization script = %s\n' % self.init_script
1188  result += 'Worker initialization source script = %s\n' % self.init_source
1189  result += 'Worker end-of-job script = %s\n' % self.end_script
1190  result += 'Worker midstage source initialization scripts = %s\n' % self.mid_source
1191  result += 'Worker midstage finalization scripts = %s\n' % self.mid_script
1192  result += 'Project name overrides = %s\n' % self.project_name
1193  result += 'Stage name overrides = %s\n' % self.stage_name
1194  result += 'Project version overrides = %s\n' % self.project_version
1195  result += 'Special histogram merging program = %s\n' % self.merge
1196  result += 'Analysis merge flag = %s\n' % self.anamerge
1197  result += 'Resource = %s\n' % self.resource
1198  result += 'Lines = %s\n' % self.lines
1199  result += 'Site = %s\n' % self.site
1200  result += 'Blacklist = %s\n' % self.blacklist
1201  result += 'Cpu = %d\n' % self.cpu
1202  result += 'Disk = %s\n' % self.disk
1203  result += 'Datafiletypes = %s\n' % self.datafiletypes
1204  result += 'Memory = %d MB\n' % self.memory
1205  result += 'Metadata parameters:\n'
1206  for key in self.parameters:
1207  result += '%s: %s\n' % (key,self.parameters[key])
1208  result += 'Output file name = %s\n' % self.output
1209  result += 'TFile name = %s\n' % self.TFileName
1210  result += 'Jobsub_submit options = %s\n' % self.jobsub
1211  result += 'Jobsub_submit start/stop options = %s\n' % self.jobsub_start
1212  result += 'Jobsub submit timeout = %d\n' % self.jobsub_timeout
1213  result += 'Executables = %s\n' % self.exe
1214  result += 'Schema = %s\n' % self.schema
1215  result += 'Validate-on-worker = %d\n' % self.validate_on_worker
1216  result += 'Upload-on-worker = %d\n' % self.copy_to_fts
1217  result += 'Cvmfs flag = %d\n' % self.cvmfs
1218  result += 'Stash cache flag = %d\n' % self.stash
1219  result += 'Singularity flag = %d\n' % self.singularity
1220  result += 'Batch script = %s\n' % self.script
1221  result += 'Start script = %s\n' % self.start_script
1222  result += 'Stop script = %s\n' % self.stop_script
1223  return result
def python.stagedef.StageDef.check_output_dirs (   self)

Definition at line 1740 of file stagedef.py.

1741  def check_output_dirs(self):
1742  if not larbatch_posix.exists(self.outdir):
1743  raise IOError('Output directory %s does not exist.' % self.outdir)
1744  if not larbatch_posix.exists(self.logdir):
1745  raise IOError('Log directory %s does not exist.' % self.logdir)
1746  return
def python.stagedef.StageDef.checkdirs (   self)

Definition at line 1749 of file stagedef.py.

1750  def checkdirs(self):
1751  if not larbatch_posix.exists(self.outdir):
1752  raise IOError('Output directory %s does not exist.' % self.outdir)
1753  if self.logdir != self.outdir and not larbatch_posix.exists(self.logdir):
1754  raise IOError('Log directory %s does not exist.' % self.logdir)
1755  if not larbatch_posix.exists(self.workdir):
1756  raise IOError('Work directory %s does not exist.' % self.workdir)
1757  if self.bookdir != self.logdir and not larbatch_posix.exists(self.bookdir):
1758  raise IOError('Bookkeeping directory %s does not exist.' % self.bookdir)
1759  return
def python.stagedef.StageDef.checkinput (   self,
  checkdef = False 
)

Definition at line 1548 of file stagedef.py.

1549  def checkinput(self, checkdef=False):
1550 
1551  if self.inputfile != '' and not larbatch_posix.exists(self.inputfile):
1552  raise IOError('Input file %s does not exist.' % self.inputfile)
1553  if self.inputlist != '' and not larbatch_posix.exists(self.inputlist):
1554  raise IOError('Input list %s does not exist.' % self.inputlist)
1555 
1556  checkok = False
1557 
1558  # Define or update the active projects dataset, if requested.
1559 
1560  if self.activebase != '':
1561  activedef = '%s_active' % self.activebase
1562  waitdef = '%s_wait' % self.activebase
1563  project_utilities.make_active_project_dataset(self.activebase,
1564  self.dropboxwait,
1565  activedef,
1566  waitdef)
1567 
1568  # If target size is nonzero, and input is from a file list, calculate
1569  # the ideal number of output jobs and override the current number
1570  # of jobs.
1571 
1572  if self.target_size != 0 and self.inputlist != '':
1573  input_filenames = larbatch_posix.readlines(self.inputlist)
1574  size_tot = 0
1575  for line in input_filenames:
1576  filename = line.split()[0]
1577  filesize = larbatch_posix.stat(filename).st_size
1578  size_tot = size_tot + filesize
1579  new_num_jobs = size_tot // self.target_size
1580  if new_num_jobs < 1:
1581  new_num_jobs = 1
1582  if new_num_jobs > self.num_jobs:
1583  new_num_jobs = self.num_jobs
1584  print("Ideal number of jobs based on target file size is %d." % new_num_jobs)
1585  if new_num_jobs != self.num_jobs:
1586  print("Updating number of jobs from %d to %d." % (self.num_jobs, new_num_jobs))
1587  self.num_jobs = new_num_jobs
1588 
1589  # If singlerun mode is requested, pick a random file from the input
1590  # dataset and create (if necessary) a new dataset definition which
1591  # limits files to be only from that run. Don't do anything here if
1592  # the input dataset is empty.
1593 
1594  if self.singlerun and checkdef:
1595 
1596  samweb = project_utilities.samweb()
1597  print("Doing single run processing.")
1598 
1599  # First find an input file.
1600 
1601  #dim = 'defname: %s with limit 1' % self.inputdef
1602  dim = 'defname: %s' % self.inputdef
1603  if self.filelistdef:
1604  input_files = list(project_utilities.listFiles(dim))
1605  else:
1606  input_files = samweb.listFiles(dimensions=dim)
1607  if len(input_files) > 0:
1608  random_file = random.choice(input_files)
1609  print('Example file: %s' % random_file)
1610 
1611  # Extract run number.
1612 
1613  md = samweb.getMetadata(random_file)
1614  run_tuples = md['runs']
1615  if len(run_tuples) > 0:
1616  run = run_tuples[0][0]
1617  print('Input files will be limited to run %d.' % run)
1618 
1619  # Make a new dataset definition.
1620  # If this definition already exists, assume it is correct.
1621 
1622  newdef = '%s_run_%d' % (samweb.makeProjectName(self.inputdef), run)
1623  def_exists = False
1624  try:
1625  desc = samweb.descDefinition(defname=newdef)
1626  def_exists = True
1627  except samweb_cli.exceptions.DefinitionNotFound:
1628  pass
1629  if not def_exists:
1630  print('Creating dataset definition %s' % newdef)
1631  newdim = 'defname: %s and run_number %d' % (self.inputdef, run)
1632  samweb.createDefinition(defname=newdef, dims=newdim)
1633  self.inputdef = newdef
1634 
1635  else:
1636  print('Problem extracting run number from example file.')
1637  return 1
1638 
1639  else:
1640  print('Input dataset is empty.')
1641  return 1
1642 
1643  # If target size is nonzero, and input is from a sam dataset definition,
1644  # and maxfilesperjob is not one, calculate the ideal number of jobs and
1645  # maxfilesperjob.
1646 
1647  if self.target_size != 0 and self.max_files_per_job != 1 and self.inputdef != '':
1648 
1649  # Query sam to determine size and number of files in input
1650  # dataset.
1651 
1652  samweb = project_utilities.samweb()
1653  dim = 'defname: %s' % self.inputdef
1654  nfiles = 0
1655  files = []
1656  if self.filelistdef:
1657  files = project_utilities.listFiles(dim)
1658  nfiles = len(files)
1659  else:
1660  sum = samweb.listFilesSummary(dimensions=dim)
1661  nfiles = sum['file_count']
1662  print('Input dataset %s has %d files.' % (self.inputdef, nfiles))
1663  if nfiles > 0:
1664  checkok = True
1665  max_files = self.max_files_per_job * self.num_jobs
1666  size_tot = 0
1667  if max_files > 0 and max_files < nfiles:
1668  if self.filelistdef:
1669  while len(files) > max_files:
1670  files.pop()
1671  dim = 'defname: %s' % project_utilities.makeFileListDefinition(files)
1672  else:
1673  dim += ' with limit %d' % max_files
1674  sum = samweb.listFilesSummary(dimensions=dim)
1675  size_tot = sum['total_file_size']
1676  nfiles = sum['file_count']
1677  else:
1678  if self.filelistdef:
1679  dim = 'defname: %s' % project_utilities.makeFileListDefinition(files)
1680  sum = samweb.listFilesSummary(dimensions=dim)
1681  size_tot = sum['total_file_size']
1682  nfiles = sum['file_count']
1683 
1684  # Calculate updated job parameters.
1685 
1686  new_num_jobs = int(math.ceil(float(size_tot) / float(self.target_size)))
1687  if new_num_jobs < 1:
1688  new_num_jobs = 1
1689  if new_num_jobs > self.num_jobs:
1690  new_num_jobs = self.num_jobs
1691 
1692  new_max_files_per_job = int(math.ceil(float(self.target_size) * float(nfiles) / float(size_tot)))
1693  if self.max_files_per_job > 0 and new_max_files_per_job > self.max_files_per_job:
1694  new_max_files_per_job = self.max_files_per_job
1695  new_num_jobs = (nfiles + self.max_files_per_job - 1) // self.max_files_per_job
1696  if new_num_jobs < 1:
1697  new_num_jobs = 1
1698  if new_num_jobs > self.num_jobs:
1699  new_num_jobs = self.num_jobs
1700 
1701  print("Ideal number of jobs based on target file size is %d." % new_num_jobs)
1702  if new_num_jobs != self.num_jobs:
1703  print("Updating number of jobs from %d to %d." % (self.num_jobs, new_num_jobs))
1704  self.num_jobs = new_num_jobs
1705  print("Ideal number of files per job is %d." % new_max_files_per_job)
1706  if new_max_files_per_job != self.max_files_per_job:
1707  print("Updating maximum files per job from %d to %d." % (
1708  self.max_files_per_job, new_max_files_per_job))
1709  self.max_files_per_job = new_max_files_per_job
1710  else:
1711  print('Input dataset is empty.')
1712  return 1
1713 
1714  # If requested, do a final check in the input dataset.
1715  # Limit the number of jobs to be not more than the number of files, since
1716  # it never makes sense to have more jobs than that.
1717  # If the number of input files is zero, return an error.
1718 
1719  if self.inputdef != '' and checkdef and not checkok:
1720  samweb = project_utilities.samweb()
1721  n = 0
1722  if self.filelistdef:
1723  files = project_utilities.listFiles('defname: %s' % self.inputdef)
1724  n = len(files)
1725  else:
1726  sum = samweb.listFilesSummary(defname=self.inputdef)
1727  n = sum['file_count']
1728  print('Input dataset %s contains %d files.' % (self.inputdef, n))
1729  if n < self.num_jobs:
1730  self.num_jobs = n
1731  if n == 0:
1732  return 1
1733 
1734  # Done (all good).
1735 
1736  return 0
1737 
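The target-size arithmetic above (source lines 1686 through 1699) can be reproduced standalone. A sketch with made-up numbers; the real method obtains nfiles and size_tot from a sam query:

import math

# Hypothetical dataset: 250 input files totalling 100 GB, a 2 GB target
# size per output file, 100 configured jobs, at most 10 files per job.
nfiles = 250
size_tot = 100 * 1024**3
target_size = 2 * 1024**3
num_jobs = 100
max_files_per_job = 10

# Ideal number of jobs, clamped to [1, num_jobs].
new_num_jobs = int(math.ceil(float(size_tot) / float(target_size)))
new_num_jobs = max(1, min(new_num_jobs, num_jobs))

# Ideal files per job; if it exceeds the configured maximum, fall back to
# the maximum and recompute the job count from the file count.
new_max_files_per_job = int(math.ceil(float(target_size) * float(nfiles) / float(size_tot)))
if max_files_per_job > 0 and new_max_files_per_job > max_files_per_job:
    new_max_files_per_job = max_files_per_job
    new_num_jobs = max(1, min(num_jobs, (nfiles + max_files_per_job - 1) // max_files_per_job))

print(new_num_jobs, new_max_files_per_job)   # -> 50 5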
def python.stagedef.StageDef.checksubmit (   self)

Definition at line 1510 of file stagedef.py.

1511  def checksubmit(self):
1512 
1513  rc = 0
1514  if len(self.submit_script) > 0:
1515  print('Running presubmission check script', end=' ')
1516  for word in self.submit_script:
1517  print(word, end=' ')
1518  print()
1519  jobinfo = subprocess.Popen(self.submit_script,
1520  stdout=subprocess.PIPE,
1521  stderr=subprocess.PIPE)
1522  q = queue.Queue()
1523  thread = threading.Thread(target=larbatch_utilities.wait_for_subprocess,
1524  args=[jobinfo, q])
1525  thread.start()
1526  thread.join(timeout=60)
1527  if thread.is_alive():
1528  print('Submit script timed out, terminating.')
1529  jobinfo.terminate()
1530  thread.join()
1531  rc = q.get()
1532  jobout = convert_str(q.get())
1533  joberr = convert_str(q.get())
1534  print('Script exit status = %d' % rc)
1535  print('Script standard output:')
1536  print(jobout)
1537  print('Script diagnostic output:')
1538  print(joberr)
1539 
1540  # Done.
1541  # Return exit status.
1542 
1543  return rc
1544 
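checksubmit runs the user's presubmission script in a subprocess and waits for it on a worker thread so the wait can be bounded at 60 seconds; on timeout it terminates the script. A self-contained sketch of the same pattern using only the standard library (the inline wait_for_subprocess below stands in for larbatch_utilities.wait_for_subprocess; newer code could instead use subprocess.run with timeout=60):

    import queue
    import subprocess
    import threading

    def wait_for_subprocess(jobinfo, q):
        # Stand-in for larbatch_utilities.wait_for_subprocess: wait for the
        # process, then pass back exit status, stdout, and stderr via the queue.
        out, err = jobinfo.communicate()
        q.put(jobinfo.returncode)
        q.put(out)
        q.put(err)

    jobinfo = subprocess.Popen(['echo', 'presubmission check ok'],
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    q = queue.Queue()
    thread = threading.Thread(target=wait_for_subprocess, args=[jobinfo, q])
    thread.start()
    thread.join(timeout=60)      # Bound the wait, as checksubmit does.
    if thread.is_alive():        # Timed out: terminate the script, then reap it.
        jobinfo.terminate()
        thread.join()
    rc = q.get()
    print('Script exit status = %d' % rc)
    print(q.get().decode())      # Script standard output.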
def python.stagedef.StageDef.makedirs (   self)

Definition at line 1762 of file stagedef.py.

1763  def makedirs(self):
1764  if not larbatch_posix.exists(self.outdir):
1765  larbatch_posix.makedirs(self.outdir)
1766  if self.logdir != self.outdir and not larbatch_posix.exists(self.logdir):
1767  larbatch_posix.makedirs(self.logdir)
1768  if not larbatch_posix.exists(self.workdir):
1769  larbatch_posix.makedirs(self.workdir)
1770  if self.bookdir != self.logdir and not larbatch_posix.exists(self.bookdir):
1771  larbatch_posix.makedirs(self.bookdir)
1772 
1773  # If output is on dcache, make output directory group-writable.
1774 
1775  if self.outdir[0:6] == '/pnfs/':
1776  mode = stat.S_IMODE(larbatch_posix.stat(self.outdir).st_mode)
1777  if not mode & stat.S_IWGRP:
1778  mode = mode | stat.S_IWGRP
1779  larbatch_posix.chmod(self.outdir, mode)
1780  if self.logdir[0:6] == '/pnfs/':
1781  mode = stat.S_IMODE(larbatch_posix.stat(self.logdir).st_mode)
1782  if not mode & stat.S_IWGRP:
1783  mode = mode | stat.S_IWGRP
1784  larbatch_posix.chmod(self.logdir, mode)
1785 
1786  self.checkdirs()
1787  return
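The dCache permission fix-up above relies on the stat module: S_IMODE extracts the permission bits from st_mode, and S_IWGRP is the group-write bit. A minimal sketch of the same mode manipulation against an ordinary local directory (plain os calls stand in for larbatch_posix, which mirrors the os interface):

    import os
    import stat
    import tempfile

    path = tempfile.mkdtemp()
    mode = stat.S_IMODE(os.stat(path).st_mode)   # Current permission bits.
    if not mode & stat.S_IWGRP:
        # OR in the group-write bit, leaving all other bits untouched.
        os.chmod(path, mode | stat.S_IWGRP)
    print(oct(stat.S_IMODE(os.stat(path).st_mode)))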
def python.stagedef.StageDef.pubsify_input (   self,
  run,
  subruns,
  version 
)

Definition at line 1268 of file stagedef.py.

1269  def pubsify_input(self, run, subruns, version):
1270 
1271  # Don't do anything if pubs input is disabled.
1272 
1273  if not self.pubs_input_ok:
1274  return
1275 
1276  # It never makes sense to specify pubs input mode if there are no
1277  # input files (i.e. generation jobs). This is not considered an error.
1278 
1279  if self.inputfile == '' and self.inputlist == '' and self.inputdef == '':
1280  return
1281 
1282  # The case of input from a single file is not supported. Raise an exception.
1283 
1284  if self.inputfile != '':
1285  raise RuntimeError('Pubs input for single file input is not supported.')
1286 
1287  # Set pubs input mode.
1288 
1289  self.pubs_input = 1
1290 
1291  # Save the run, subrun, and version numbers.
1292 
1293  self.input_run = run;
1294  self.input_subruns = subruns;
1295  self.input_version = version;
1296 
1297  # If input is from a SAM dataset, create a restricted dataset that limits
1298  # input files to the selected run and subruns.
1299 
1300  if self.inputdef != '':
1301  newdef = project_utilities.create_limited_dataset(self.inputdef,
1302  run,
1303  subruns)
1304  if not newdef:
1305  raise PubsInputError(run, subruns[0], version)
1306  self.inputdef = newdef
1307 
1308  # Set the number of submitted jobs assuming each worker will get
1309  # self.max_files_per_job files.
1310 
1311  files_per_job = self.max_files_per_job
1312  if files_per_job == 0:
1313  files_per_job = 1
1314  self.num_jobs = (len(subruns) + files_per_job - 1) // files_per_job
1315 
1316  # Done.
1317 
1318  return
1319 
1320  # If we get to here, we have input from a file list and a previous stage
1321  # exists. This normally indicates a daisy chain. This is where subcases
1322  # 3 (a), (b) are handled.
1323 
1324  # Case 3(a), single subrun.
1325 
1326  if len(subruns) == 1:
1327 
1328  # Insert run and subrun into input file list path.
1329 
1330  if version == None:
1331  pubs_path = '%d/%d' % (run, subruns[0])
1332  else:
1333  pubs_path = '%d/%d/%d' % (version, run, subruns[0])
1334  dir = os.path.dirname(self.inputlist)
1335  base = os.path.basename(self.inputlist)
1336  self.inputlist = os.path.join(dir, pubs_path, base)
1337 
1338  # Verify that the input list exists and is not empty.
1339 
1340  lines = []
1341  try:
1342  lines = larbatch_posix.readlines(self.inputlist)
1343  except:
1344  lines = []
1345  if len(lines) == 0:
1346  raise PubsInputError(run, subruns[0], version)
1347 
1348  # Verify that input files actually exist.
1349 
1350  for line in lines:
1351  input_file = line.strip()
1352  if not larbatch_posix.exists(input_file):
1353  raise PubsInputError(run, subruns[0], version)
1354 
1355  # Specify that there will be exactly one job submitted.
1356 
1357  self.num_jobs = 1
1358 
1359  # Everything OK (case 3(a)).
1360 
1361  return
1362 
1363  # Case 3(b), multiple subruns.
1364 
1365  if len(subruns) > 1:
1366 
1367  # Generate a new input file list with a unique name and place
1368  # it in the same directory as the original input list. Note that
1369  # the input list may not actually exist at this point. If it
1370  # doesn't exist, just use the original name. If it already exists,
1371  # generate a different name.
1372 
1373  dir = os.path.dirname(self.inputlist)
1374  base = os.path.basename(self.inputlist)
1375  new_inputlist_path = self.inputlist
1376  if larbatch_posix.exists(new_inputlist_path):
1377  new_inputlist_path = '%s/%s_%s.list' % (dir, base, str(uuid.uuid4()))
1378  self.inputlist = new_inputlist_path
1379 
1380  # Defer opening the new input list file until after the original
1381  # input file is successfully opened.
1382 
1383  new_inputlist_file = None
1384 
1385  # Loop over subruns. Read contents of pubs input list for each subrun.
1386 
1387  nsubruns = 0
1388  total_size = 0
1389  actual_subruns = []
1390  truncate = False
1391  for subrun in subruns:
1392 
1393  if truncate:
1394  break
1395 
1396  nsubruns += 1
1397 
1398  if version == None:
1399  pubs_path = '%d/%d' % (run, subrun)
1400  else:
1401  pubs_path = '%d/%d/%d' % (version, run, subrun)
1402 
1403  subrun_inputlist = os.path.join(dir, pubs_path, base)
1404  lines = []
1405  try:
1406  lines = larbatch_posix.readlines(subrun_inputlist)
1407  except:
1408  lines = []
1409  if len(lines) == 0:
1410  raise PubsInputError(run, subruns[0], version)
1411  for line in lines:
1412  subrun_inputfile = line.strip()
1413 
1414  # Test size and accessibility of this input file.
1415 
1416  sr_size = -1
1417  try:
1418  sr = larbatch_posix.stat(subrun_inputfile)
1419  sr_size = sr.st_size
1420  except:
1421  sr_size = -1
1422 
1423  if sr_size > 0:
1424  actual_subruns.append(subrun)
1425  if new_inputlist_file == None:
1426  print('Generating new input list %s\n' % new_inputlist_path)
1427  new_inputlist_file = larbatch_posix.open(new_inputlist_path, 'w')
1428  new_inputlist_file.write('%s\n' % subrun_inputfile)
1429  total_size += sr.st_size
1430 
1431  # If at this point the total size exceeds the target size,
1432  # truncate the list of subruns and break out of the loop.
1433 
1434  if self.max_files_per_job > 1 and self.target_size != 0 \
1435  and total_size >= self.target_size:
1436  truncate = True
1437  break
1438 
1439  # Done looping over subruns.
1440 
1441  new_inputlist_file.close()
1442 
1443  # Raise an exception if the actual list of subruns is empty.
1444 
1445  if len(actual_subruns) == 0:
1446  raise PubsInputError(run, subruns[0], version)
1447 
1448  # Update the list of subruns to be the actual list of subruns.
1449 
1450  if len(actual_subruns) != len(subruns):
1451  print('Truncating subrun list: %s' % str(actual_subruns))
1452  del subruns[:]
1453  subruns.extend(actual_subruns)
1454 
1455  # Set the number of submitted jobs assuming each worker will get
1456  # self.max_files_per_job files.
1457 
1458  files_per_job = self.max_files_per_job
1459  if files_per_job == 0:
1460  files_per_job = 1
1461  self.num_jobs = (len(subruns) + files_per_job - 1) // files_per_job
1462 
1463  # Everything OK (case 3(b)).
1464 
1465  return
1466 
1467  # Shouldn't ever fall out of loop.
1468 
1469  return
1470 
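In pubs input mode the input-list path is rewritten by splicing a run/subrun (and optional version) component between the directory and the file name. A short sketch of that path construction (the helper name and paths are hypothetical):

    import os

    def pubs_inputlist(inputlist, run, subrun, version=None):
        # Splice '<run>/<subrun>' (or '<version>/<run>/<subrun>') between
        # the directory and the file name, as pubsify_input does.
        if version is None:
            pubs_path = '%d/%d' % (run, subrun)
        else:
            pubs_path = '%d/%d/%d' % (version, run, subrun)
        return os.path.join(os.path.dirname(inputlist), pubs_path,
                            os.path.basename(inputlist))

    print(pubs_inputlist('/data/lists/files.list', 123, 4))
    # -> /data/lists/123/4/files.list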
def python.stagedef.StageDef.pubsify_output (   self,
  run,
  subruns,
  version 
)

Definition at line 1473 of file stagedef.py.

1474  def pubsify_output(self, run, subruns, version):
1475 
1476  # Set pubs mode.
1477 
1478  self.pubs_output = 1
1479 
1480  # Save the run, subrun, and version numbers.
1481 
1482  self.output_run = run;
1483  self.output_subruns = subruns;
1484  self.output_version = version;
1485 
1486  # Append run and subrun to workdir, outdir, logdir, and bookdir.
1487  # In case of multiple subruns, encode the subrun directory as "@s",
1488  # which informs the batch worker to determine the subrun dynamically.
1489 
1490  if len(subruns) == 1:
1491  if version == None:
1492  pubs_path = '%d/%d' % (run, subruns[0])
1493  else:
1494  pubs_path = '%d/%d/%d' % (version, run, subruns[0])
1495  self.workdir = os.path.join(self.workdir, pubs_path)
1496  else:
1497  if version == None:
1498  pubs_path = '%d/@s' % run
1499  else:
1500  pubs_path = '%d/%d/@s' % (version, run)
1501  self.workdir = os.path.join(self.workdir, str(uuid.uuid4()))
1502  self.dynamic = 1
1503  self.outdir = os.path.join(self.outdir, pubs_path)
1504  self.logdir = os.path.join(self.logdir, pubs_path)
1505  self.bookdir = os.path.join(self.bookdir, pubs_path)
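
For a single subrun the appended path component is fixed; for multiple subruns the subrun directory is the literal token "@s", which the batch worker resolves at run time. A brief sketch of the two cases (the helper name and directory are illustrative):

    import os

    def pubs_path(run, subruns, version=None):
        # Single subrun: a fixed '<run>/<subrun>' path.  Multiple subruns:
        # the literal token '@s', resolved to a subrun on the batch worker.
        sub = '%d' % subruns[0] if len(subruns) == 1 else '@s'
        if version is None:
            return '%d/%s' % (run, sub)
        return '%d/%d/%s' % (version, run, sub)

    outdir = '/pnfs/exp/scratch/out'                      # hypothetical output directory
    print(os.path.join(outdir, pubs_path(123, [4])))      # .../out/123/4
    print(os.path.join(outdir, pubs_path(123, [4, 5])))   # .../out/123/@s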

Member Data Documentation

python.stagedef.StageDef.activebase

Definition at line 82 of file stagedef.py.

python.stagedef.StageDef.ana

Definition at line 75 of file stagedef.py.

python.stagedef.StageDef.ana_data_stream

Definition at line 95 of file stagedef.py.

python.stagedef.StageDef.ana_data_tier

Definition at line 94 of file stagedef.py.

python.stagedef.StageDef.ana_defname

Definition at line 91 of file stagedef.py.

python.stagedef.StageDef.anamerge

Definition at line 106 of file stagedef.py.

python.stagedef.StageDef.basedef

Definition at line 61 of file stagedef.py.

python.stagedef.StageDef.batchname

Definition at line 49 of file stagedef.py.

python.stagedef.StageDef.blacklist

Definition at line 110 of file stagedef.py.

python.stagedef.StageDef.bookdir

Definition at line 54 of file stagedef.py.

python.stagedef.StageDef.copy_to_fts

Definition at line 124 of file stagedef.py.

python.stagedef.StageDef.cpu

Definition at line 111 of file stagedef.py.

python.stagedef.StageDef.cvmfs

Definition at line 125 of file stagedef.py.

python.stagedef.StageDef.data_stream

Definition at line 93 of file stagedef.py.

python.stagedef.StageDef.data_tier

Definition at line 92 of file stagedef.py.

python.stagedef.StageDef.datafiletypes

Definition at line 113 of file stagedef.py.

python.stagedef.StageDef.defname

Definition at line 90 of file stagedef.py.

python.stagedef.StageDef.dirlevels

Definition at line 56 of file stagedef.py.

python.stagedef.StageDef.dirsize

Definition at line 55 of file stagedef.py.

python.stagedef.StageDef.disk

Definition at line 112 of file stagedef.py.

python.stagedef.StageDef.dropboxwait

Definition at line 83 of file stagedef.py.

python.stagedef.StageDef.dynamic

Definition at line 57 of file stagedef.py.

python.stagedef.StageDef.end_script

Definition at line 99 of file stagedef.py.

python.stagedef.StageDef.exe

Definition at line 121 of file stagedef.py.

python.stagedef.StageDef.fclname

Definition at line 50 of file stagedef.py.

python.stagedef.StageDef.filelistdef

Definition at line 80 of file stagedef.py.

python.stagedef.StageDef.init_script

Definition at line 97 of file stagedef.py.

python.stagedef.StageDef.init_source

Definition at line 98 of file stagedef.py.

python.stagedef.StageDef.input_run

Definition at line 68 of file stagedef.py.

python.stagedef.StageDef.input_subruns

Definition at line 69 of file stagedef.py.

python.stagedef.StageDef.input_version

Definition at line 70 of file stagedef.py.

python.stagedef.StageDef.inputdef

Definition at line 62 of file stagedef.py.

python.stagedef.StageDef.inputfile

Definition at line 58 of file stagedef.py.

python.stagedef.StageDef.inputlist

Definition at line 59 of file stagedef.py.

python.stagedef.StageDef.inputmode

Definition at line 60 of file stagedef.py.

python.stagedef.StageDef.inputstream

Definition at line 63 of file stagedef.py.

python.stagedef.StageDef.jobsub

Definition at line 118 of file stagedef.py.

python.stagedef.StageDef.jobsub_start

Definition at line 119 of file stagedef.py.

python.stagedef.StageDef.jobsub_timeout

Definition at line 120 of file stagedef.py.

python.stagedef.StageDef.lines

Definition at line 108 of file stagedef.py.

python.stagedef.StageDef.logdir

Definition at line 52 of file stagedef.py.

python.stagedef.StageDef.max_files_per_job

Definition at line 88 of file stagedef.py.

python.stagedef.StageDef.maxfluxfilemb

Definition at line 85 of file stagedef.py.

python.stagedef.StageDef.memory

Definition at line 114 of file stagedef.py.

python.stagedef.StageDef.merge

Definition at line 105 of file stagedef.py.

python.stagedef.StageDef.mid_script

Definition at line 101 of file stagedef.py.

python.stagedef.StageDef.mid_source

Definition at line 100 of file stagedef.py.

python.stagedef.StageDef.mixinputdef

Definition at line 65 of file stagedef.py.

python.stagedef.StageDef.name

Definition at line 48 of file stagedef.py.

python.stagedef.StageDef.num_events

Definition at line 87 of file stagedef.py.

python.stagedef.StageDef.num_jobs

Definition at line 86 of file stagedef.py.

python.stagedef.StageDef.outdir

Definition at line 51 of file stagedef.py.

python.stagedef.StageDef.output

Definition at line 116 of file stagedef.py.

python.stagedef.StageDef.output_run

Definition at line 72 of file stagedef.py.

python.stagedef.StageDef.output_subruns

Definition at line 73 of file stagedef.py.

python.stagedef.StageDef.output_version

Definition at line 74 of file stagedef.py.

python.stagedef.StageDef.parameters

Definition at line 115 of file stagedef.py.

python.stagedef.StageDef.prestagefraction

Definition at line 84 of file stagedef.py.

python.stagedef.StageDef.prestart

Definition at line 81 of file stagedef.py.

python.stagedef.StageDef.previousstage

Definition at line 64 of file stagedef.py.

python.stagedef.StageDef.project_name

Definition at line 102 of file stagedef.py.

python.stagedef.StageDef.project_version

Definition at line 104 of file stagedef.py.

python.stagedef.StageDef.pubs_input

Definition at line 67 of file stagedef.py.

python.stagedef.StageDef.pubs_input_ok

Definition at line 66 of file stagedef.py.

python.stagedef.StageDef.pubs_output

Definition at line 71 of file stagedef.py.

python.stagedef.StageDef.recur

Definition at line 76 of file stagedef.py.

python.stagedef.StageDef.recurlimit

Definition at line 78 of file stagedef.py.

python.stagedef.StageDef.recurtype

Definition at line 77 of file stagedef.py.

python.stagedef.StageDef.resource

Definition at line 107 of file stagedef.py.

python.stagedef.StageDef.schema

Definition at line 122 of file stagedef.py.

python.stagedef.StageDef.script

Definition at line 128 of file stagedef.py.

python.stagedef.StageDef.singlerun

Definition at line 79 of file stagedef.py.

python.stagedef.StageDef.singularity

Definition at line 127 of file stagedef.py.

python.stagedef.StageDef.site

Definition at line 109 of file stagedef.py.

python.stagedef.StageDef.stage_name

Definition at line 103 of file stagedef.py.

python.stagedef.StageDef.start_script

Definition at line 129 of file stagedef.py.

python.stagedef.StageDef.stash

Definition at line 126 of file stagedef.py.

python.stagedef.StageDef.stop_script

Definition at line 130 of file stagedef.py.

python.stagedef.StageDef.submit_script

Definition at line 96 of file stagedef.py.

python.stagedef.StageDef.target_size

Definition at line 89 of file stagedef.py.

python.stagedef.StageDef.TFileName

Definition at line 117 of file stagedef.py.

python.stagedef.StageDef.validate_on_worker

Definition at line 123 of file stagedef.py.

python.stagedef.StageDef.workdir

Definition at line 53 of file stagedef.py.


The documentation for this class was generated from the following file:
stagedef.py