I split out this topic to help address ddubs' question.
The time to perform the scaling is minimal. Even with over a hundred analog points, the math might bump the scan by 1 ms. Also, the SIMULATOR timings will NOT be representative of actual PLC hardware at all. So only do this on actual hardware, because the PC's timings will be much faster or slower (e.g. the PC's antivirus or backup kicks in in the middle of your testing, tasking out the Simulator and making its scan times jump, OR nothing else is going on and the 2.8 GHz multi-core Intel processor shows SUPER FAST scan times - not helpful either).
It's always best to program for CLARITY FIRST, and THEN optimize for speed. Also, when optimizing for speed, you might spend a lot of time rewriting code that takes up 6% of your scan time to get it down to 5% because you think that's where the bottleneck is (e.g. analog scaling), when you are actually spending 20% of the PLC scan on, say, string manipulation that does not have to be done EVERY scan; THAT 20% is where you need to spend your "optimizing" effort.
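To put hypothetical numbers on that: on a 10 ms scan (a made-up figure for illustration), trimming a 6% chunk to 5% buys back a tenth of a millisecond, while moving the 20% string work off the every-scan path buys back two full milliseconds:

```python
scan_ms = 10.0                           # hypothetical 10 ms scan time
saved_rewrite = scan_ms * (0.06 - 0.05)  # shaving the 6% chunk down to 5%
saved_strings = scan_ms * 0.20           # dropping the 20% string work from every scan

print(round(saved_rewrite, 3), "ms vs", saved_strings, "ms")
```

Twenty times the payoff for (probably) less rewriting effort - that's the point of finding the "vital few" first.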
To do Pareto Analysis of PLC scan time (vital few, trivial many), use the MATH function TICKus(), which reads the processor's MICROsecond counter, and wrap this pattern of MATH instructions around the specific code you are trying to measure (I like to use Designer's Trend View for this analysis):
MATH D42 "TICKus()" // save off the current microsecond counter into D42
SCALE
SCALE
SCALE
SCALE
SCALE
MATH D42 "TICKus() - D42" // now do the delta, re-using D42
So D42 would have the time in MICROseconds of the 5 SCALE instructions (not MILLIseconds).
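Away from the PLC, the same bracketing idea can be sketched in Python (this is strictly a PC-side analogue, not Do-more code; tick_us() and the scaling list are stand-ins I made up to mirror the pattern above):

```python
import time

def tick_us() -> int:
    # Stand-in for TICKus(): a free-running microsecond counter
    return time.perf_counter_ns() // 1000

d42 = tick_us()                             # MATH D42 "TICKus()"
raw = [512, 1024, 2048, 3072, 4095]         # pretend raw 12-bit analog values
scaled = [r * 100.0 / 4095.0 for r in raw]  # stand-in for the 5 SCALE instructions
d42 = tick_us() - d42                       # MATH D42 "TICKus() - D42"

print(f"scaling block took {d42} us")
```

Same idea either way: read the counter, run the suspect code, subtract - the delta is the cost of just that code.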
There is some overhead in doing the MATH to measure scan time. If you are truly trying to get a PLC scan below 1 ms, then to get better numbers for determining the "vital few", just do
MATH D99 "TICKus()"
MATH D99 "TICKus() - D99"
back to back at the top of your scan. That puts the "overhead" that measuring the PLC scan time adds to the PLC scan time into D99, so you know how much D42 would need to be adjusted down (in your head) by the value in D99. But since ALL of the measurements would be off by the same amount (i.e. the cost of the measurement is part of ALL the calculations), the qualitative Pareto Analysis can be done without it.
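In the same PC-side Python sketch as above (again, an analogue, not PLC code - the variable names just mirror D99/D42, and the loop is a hypothetical chunk of code under test), the back-to-back overhead measurement and the mental subtraction look like:

```python
import time

def tick_us() -> int:
    # Stand-in for TICKus(): a free-running microsecond counter
    return time.perf_counter_ns() // 1000

# Back-to-back reads: the delta is the cost of the measurement itself
d99 = tick_us()                            # MATH D99 "TICKus()"
d99 = tick_us() - d99                      # MATH D99 "TICKus() - D99"

# Bracket the code under test as before
d42 = tick_us()
total = sum(i * i for i in range(10_000))  # hypothetical code being measured
d42 = tick_us() - d42

adjusted = d42 - d99                       # roughly what the code alone cost
print(f"raw {d42} us, overhead {d99} us, adjusted {adjusted} us")
```

Since every bracketed measurement carries the same d99-sized bias, relative comparisons between code blocks are valid even if you never subtract it.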