If you have requirements on an execute host's performance or want to scale your jobs by the host's power, you can use one of Condor's benchmark ClassAd attributes ('kflops' for floating-point performance, 'mips' for integer performance). These values are only a rough estimate of the CPUs' performance: they depend on the overall load of the machine (e.g., how busy other jobs keep it at the time) and are not full benchmarks, so do not take them as absolute values but as ballpark numbers. HTCondor may run these benchmarks on a per-slot basis, so you might need to scale them according to your job profile.
To get an overview of the currently existing slots on the execute hosts and their benchmark values, run
condor_status -af machine name cpus kflops mips
This prints effectively all slots on the nodes, i.e., the cores carved out on their execute hosts, together with their benchmark values.
submission
You can require a certain minimum/maximum performance in your job file like any other parameter, but to avoid dismissing too many slots, round your threshold down generously.
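As an illustration, such a requirement could look like the following excerpt from a submit description file; this is only a sketch, and the threshold values are made-up examples that you should adapt (and round down generously) to your own measurements:

  # excerpt from a submit description file -- the numeric thresholds are placeholder examples
  requirements = (KFlops > 400000) && (Mips > 10000)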
job scaling
Another option is to use the benchmark values to scale the number of events to process per job: at job start, evaluate the performance of the slot you got and then scale the number of events accordingly, so that you aim to stay within your predefined job runtime. This requires that you know roughly how long your job takes per event depending on the kflops/mips performance.
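A minimal sketch of such a wrapper script is shown below. It assumes the slot's ClassAd is exposed to the job via the _CONDOR_MACHINE_AD environment variable (a file with "Attribute = value" lines); the reference values and the payload call are hypothetical examples:

  #!/bin/sh
  # Sketch: scale the number of events to the benchmarked speed of the slot.
  REF_KFLOPS=500000      # hypothetical speed the baseline timing was measured on
  BASE_EVENTS=10000      # events that fit the target runtime at REF_KFLOPS

  # read the slot's KFlops value from the machine ad; fall back to the reference
  kflops=$(awk -F' = ' '/^KFlops /{print $2}' "$_CONDOR_MACHINE_AD")
  kflops=${kflops:-$REF_KFLOPS}

  # scale the event count linearly with the slot's floating-point performance
  events=$(( kflops * BASE_EVENTS / REF_KFLOPS ))

  echo "slot kflops=$kflops -> processing $events events"
  # ./my_analysis --nevents "$events"   # hypothetical payload call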
warning
Requesting a certain minimum performance reduces the number of slots your job can match, so it may have to wait longer for a slot.
For a more elaborate requirement, see also Condor's rank expression.
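For example, a rank expression can prefer faster slots without excluding slower ones (a minimal sketch using the standard KFlops benchmark attribute):

  # prefer slots with a higher floating-point benchmark, but still match any slot
  rank = KFlops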