HTCondor's Event Log is probably the most machine-readable of its log files. It contains all events from the local daemons [1], so it can get a bit complex.
If you are only interested in jobs, another option is to write, for each finished job, a file containing the job's ClassAds.
Assuming that the directory /var/log/condor-ce/job.history.d exists, add

PER_JOB_HISTORY_DIR = /var/log/condor-ce/job.history.d

to the CondorCE's configuration (or to plain Condor's); the schedd will then write one history file per job as .../job.history.d/history.#####.0
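Each such history file is a plain-text ClassAd with one attribute per line. A shortened, purely illustrative excerpt (attribute values are made up) might look like this:

ClusterId = 12345
ProcId = 0
Owner = "alice"
Cmd = "/usr/bin/my_job.sh"
JobStatus = 4
ExitCode = 0
RemoteWallClockTime = 3600.0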
Each job history file consists of key = value ClassAds, which can now be parsed individually with Logstash's kv filter.
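A minimal filter sketch (the trim settings are assumptions, and the whole file is assumed to arrive in the message field via the multiline trick described below):

filter {
  kv {
    # the complete ClassAd is in "message" once the file is read as one event
    source => "message"
    # one "Attribute = Value" pair per line
    field_split => "\n"
    value_split => "="
    # strip surrounding blanks and the quotes around ClassAd strings
    trim_key => " "
    trim_value => ' "'
  }
}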
Since we want to parse each file as one event, we have to trick Logstash's multiline plugin, as it expects to match on a character pattern (which does not really apply here, since we want to read the whole file). So we match on a pattern that never occurs and invert the selection.
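A sketch of an input section along these lines (path glob, pattern string and flush interval are assumptions):

input {
  file {
    # matches the PER_JOB_HISTORY_DIR set above
    path => "/var/log/condor-ce/job.history.d/history.*"
    start_position => "beginning"
    codec => multiline {
      # a pattern that never matches, inverted with negate,
      # so every line is appended to the previous one
      pattern => "^NEVER_MATCHES_ANYTHING$"
      negate => true
      what => "previous"
      # flush the collected event after a few seconds of inactivity
      auto_flush_interval => 5
      max_lines => 10000
    }
  }
}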
An example Logstash rule along these lines should produce a JSON event for each recently finished job, which can then be further processed or forwarded to Elasticsearch, etc.
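For forwarding to Elasticsearch, an output section might look like this (host and index name are assumptions):

output {
  elasticsearch {
    hosts => ["http://elasticsearch.example.org:9200"]
    index => "condor-job-history-%{+YYYY.MM.dd}"
  }
}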
[1] Optional Event Log Config
See Parsing Condor Event Logs as XML with Logstash for how to parse the Event Log XML (this should work now).
Attachments:
condor_eventlogxml.logstash.example (application/octet-stream)