
Author Topic: What is the most effective and efficient way to scale an analog input?  (Read 9743 times)

ddubs2248

  • Sr. Member
  • ****
  • Posts: 64
Question: What is the most effective and efficient way to scale an analog input? 

Predominantly we scale in the PLC so an HMI interface can modify values during system calibration, and it works.  What I'd like to learn is the most efficient way to implement it, both to improve scan time and for scalability.

Most of the time I put all of my scaling on individual rungs in $TopOfScan.  So I may have all of my scaling for, say, pressure transducers on one rung, flow meters on a second rung, and so on, depending on what other analog inputs I have.  I also create UDTs for transducers, flow meters, etc., which helps the readability of the ladder.

But ultimately, is there a better method?  Some systems may have 2 transducers, others may have 20, and that would mean 20 scaling instructions on a single rung.

I'd love to get some feedback on what others are doing.  I am also setting up some different methods in the Do-more Simulator to test the differences in ladder structure.
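For readers outside the PLC world, the per-channel math behind a SCALE instruction is just a linear map from raw A/D counts to engineering units. A minimal sketch in Python (the function name and endpoint values are illustrative, not Do-more syntax):

```python
# Hypothetical sketch of per-channel analog scaling: raw A/D counts mapped
# linearly to engineering units, with all four endpoints editable (e.g. from
# an HMI) during calibration.

def scale(raw, raw_min, raw_max, eu_min, eu_max):
    """Map a raw A/D count to engineering units (linear interpolation)."""
    return eu_min + (raw - raw_min) * (eu_max - eu_min) / (raw_max - raw_min)

# e.g. a 0-100 psi transducer on a 12-bit input (0-4095 counts)
print(scale(2048, 0, 4095, 0.0, 100.0))  # mid-scale -> ~50 psi
```

Keeping the four endpoints in tag memory rather than hard-coded is what lets the HMI adjust them live during calibration.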
« Last Edit: August 13, 2020, 10:53:20 AM by franji1 »

brucek

  • Sr. Member
  • ****
  • Posts: 60
Re: Changing A/D Scaling Factor
« Reply #1 on: August 13, 2020, 09:38:59 AM »
I prefer to have the scaling instruction in the PLC because I like to be able to see what's happening when there is a problem.  Also, if you need to change something you can do it live; I'm not sure whether you have to go to Program mode for a module update to change the scaling in the card.  With the card you're just relying on the magic happening inside it.

My programs mostly don't use more than 10% of the memory, so I'm in no danger of having to free up space, and I run about a 4 ms scan time, which works fine for what I'm doing.

franji1

  • Bit Weenie
  • Host Moderator
  • Hero Member
  • *****
  • Posts: 3803
    • Host Engineering
I split out this topic to help address ddubs' question

The time to perform the scaling is minimal.  If you had over a hundred analog inputs, the math might bump the scan 1 ms.  Also, the SIMULATOR timings will NOT be representative at all of actual PLC hardware, so only do this testing on actual hardware, because the PC's timings will be much faster or slower (e.g. the PC's antivirus or backup kicks in in the middle of your testing, tasking out the Simulator and making its scan times bump; OR nothing else is going on and the 2.8 GHz multi-core Intel processor shows SUPER FAST scan times - not helpful either).

It's always best to program for CLARITY FIRST, and THEN optimize for speed.  Also, when optimizing for speed, you might spend a lot of time rewriting code that takes up 6% of your scan time down to 5% because you think that's where the bottleneck is (e.g. analog scaling), when you are actually spending 20% of the PLC scan on, say, string manipulation that does not have to be done EVERY scan - so THAT 20% is where you need to spend your "optimizing" effort.

To do Pareto Analysis of PLC scan time (vital few, trivial many), utilize the MATH function TICKus(), which is the processor's MICROsecond counter, and stick this pattern of MATH instructions around the specific code you are trying to measure (I like to use Designer's Trend View for this analysis).

MATH D42 "TICKus()"  // save off the current microsecond counter into D42
SCALE
SCALE
SCALE
SCALE
SCALE
MATH D42 "TICKus() - D42"  // now do the delta, re-using D42

So D42 would have the time in MICROseconds of the 5 SCALE instructions (not MILLIseconds).

There is some overhead in doing the MATH to measure scan time.  If you are truly trying to minimize a PLC scan below 1 ms, then to get better numbers for determining the "vital few", just do
MATH D99 "TICKus()"
MATH D99 "TICKus() - D99"
back to back at the top of your scan.  That puts the "overhead" that the measurement itself adds to the PLC scan time into D99, so you know how much D42 would need to be adjusted down (in your head) by the value in D99.  But since ALL of the measurements are off by the same amount (i.e. the cost of the measurement is part of ALL the calculations), the qualitative Pareto Analysis can be done without it.
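The same bracket-the-code-with-a-microsecond-counter pattern translates directly to any environment. A sketch in Python, using `time.perf_counter_ns` as a stand-in for TICKus() (the function and variable names are illustrative):

```python
import time

def tick_us():
    # stand-in for the PLC's TICKus() microsecond counter
    return time.perf_counter_ns() // 1000

# Back-to-back reads: the overhead of the measurement itself (the D99 pattern)
t = tick_us()
overhead_us = tick_us() - t

# Bracket the code under test (the D42 pattern)
t = tick_us()
total = sum(x * x for x in range(10_000))  # code being measured
section_us = tick_us() - t

print(f"overhead ~{overhead_us} us, section ~{section_us} us (uncorrected)")
```

As the post notes, subtracting the overhead only matters for absolute numbers; for ranking hot spots against each other, the constant measurement cost cancels out.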

Controls Guy

  • Internal Dev
  • Hero Member
  • ****
  • Posts: 3607
  • Darth Ladder
I'm kind of torn about this.   I love having the scaling buried in the module, but I also realize a lot of processes need calibration.

One hybrid approach would be to scale to nominal units in the module, then do a more unitless zero and gain or span adjustment in ladder, not a full scale.   I'm kind of meh about that idea, but I hate to give up the really easy scaling in the module.
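That hybrid zero/span idea reduces to a one-line correction on top of the module's nominal scaling. A minimal sketch, with assumed names (not a Do-more API):

```python
# Sketch of the hybrid approach: the module scales to nominal engineering
# units, then the ladder applies a small HMI-adjustable zero (offset) and
# span (gain) correction during field calibration. Names are illustrative.

def calibrated(nominal_eu, zero_offset=0.0, span_gain=1.0):
    """Apply a field-calibration correction to a module-scaled value."""
    return nominal_eu * span_gain + zero_offset

print(calibrated(50.0, zero_offset=-0.3, span_gain=1.004))  # -> ~49.9
```

With the offset and gain defaulting to 0 and 1, an uncalibrated channel passes the module's value through unchanged.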

Ultimately, would it be feasible to give us programmatic access to the cal values in the module?   AB 1771 (PLC-5 form factor) modules had max and min in numeric values going out to the module along with the rest of the config.   You could do that or (better) have a struct member like WX0.Zero that we can hit from ladder.

I retract my earlier statement that half of all politicians are crooks.  Half of all politicians are NOT crooks.  There.

BobO

  • Host Moderator
  • Hero Member
  • *****
  • Posts: 6154
  • Yes Pinky, Do-more will control the world!
Quote from: Controls Guy
I'm kind of torn about this.   I love having the scaling buried in the module, but I also realize a lot of processes need calibration.

One hybrid approach would be to scale to nominal units in the module, then do a more unitless zero and gain or span adjustment in ladder, not a full scale.   I'm kind of meh about that idea, but I hate to give up the really easy scaling in the module.

Ultimately, would it be feasible to give us programmatic access to the cal values in the module?   AB 1771 (PLC-5 form factor) modules had max and min in numeric values going out to the module along with the rest of the config.   You could do that or (better) have a struct member like WX0.Zero that we can hit from ladder.

Our new analog modules were designed with the possibility of doing program based calibration, and we started down that road. Consensus was it was too clever by half, so we pulled it.
"It has recently come to our attention that users spend 95% of their time using 5% of the available features. That might be relevant." -BobO

Controls Guy

  • Internal Dev
  • Hero Member
  • ****
  • Posts: 3607
  • Darth Ladder
Why too clever?

BobO

  • Host Moderator
  • Hero Member
  • *****
  • Posts: 6154
  • Yes Pinky, Do-more will control the world!
Quote from: Controls Guy
Why too clever?

Just more complicated than it needed to be, to be a solution to a problem almost nobody has asked us to solve.

Controls Guy

  • Internal Dev
  • Hero Member
  • ****
  • Posts: 3607
  • Darth Ladder
Oh, OK, that makes sense.  Thanks.

Garyhlucas

  • Hero Member
  • *****
  • Posts: 421
I am not a fan of programmable sensors and such.  They work great until they fail and are no longer available, or no one knows how to program them anymore.  So I prefer raw data that I scale and massage in the PLC; when a new transmitter is needed, you rescale it or change its parameters right from the HMI screen for that transmitter.  Keeps me from making a field call.