@rln401165, the timer resolution actually doesn't matter (or doesn't have to matter), if you write your program so the resolution-related error is non-accumulating.
For example, say I have a flow number in GPM and I want to totalize flow over a period of time. I could use a 1-second recycling timer and add GPM/60 each time it expires. This method should work, but due to scan time, the moment I see the timer has elapsed isn't exactly one second after the previous one. If your scan time is 20 ms, you might check (scan) the timer at 1.01 s, add the value, and reset the timer. For that iteration, your total is 1% low, even if the GPM number is right on. You might even pick up another 20 ms of delay if you can't expire the timer and restart it all in one scan, and it takes until the following scan to restart.
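To see how that undercount builds up, here's a rough simulation in plain Python (not PLC code); the 120 GPM flow rate, the scan jitter, and the ten-minute run are all made-up numbers, chosen just to make the effect visible:

```python
import random

GPM = 120.0        # hypothetical flow rate, gallons per minute
SCAN_S = 0.020     # nominal 20 ms scan time
INTERVAL_S = 1.0   # one-second sample interval

random.seed(1)
total = 0.0        # totalized gallons
acc = 0.0          # recycling timer accumulator, seconds
elapsed = 0.0
while elapsed < 600.0:                          # ten minutes of scans
    dt = SCAN_S + random.uniform(0.0, 0.005)    # scan time with some jitter
    acc += dt
    elapsed += dt
    if acc >= INTERVAL_S:    # we only notice the timer partway into a scan
        total += GPM / 60.0  # add one second's worth of flow
        acc = 0.0            # zeroing throws away the overshoot, so the
                             # error accumulates sample after sample

true_total = GPM / 60.0 * elapsed
print(f"totalized {total:.1f} gal, actual {true_total:.1f} gal")
```

With these numbers the totalizer comes in roughly 1% low over the run, because every sample period stretches to a bit over one second and the stretch is discarded at each reset.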
One way to avoid that is to let the timer free-run; don't have it recycle at a setpoint. Then, each scan, check the ACC value. If it's greater than your desired sample period, totalize and subtract the sample interval from the ACC (DO NOT zero the ACC). The ACC will then reach the sample interval earlier the next time around. You still won't catch it exactly on time due to scan delay, but each sample will be much closer to the desired interval (right on it, in fact, if your scan time is consistent). More importantly, a series of n samples will always add up to n x sample interval, plus one error at the start and minus one at the end (and those two errors may not be identical), even with a variable scan. You've made the error non-cumulative.
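Here's the same simulation with the one-line change described above: subtract the interval from the accumulator instead of zeroing it (again plain Python with the same made-up numbers; the technique is the point, not the code):

```python
import random

GPM = 120.0        # hypothetical flow rate, gallons per minute
SCAN_S = 0.020     # nominal 20 ms scan time
INTERVAL_S = 1.0   # one-second sample interval

random.seed(1)
total = 0.0
acc = 0.0          # free-running accumulator; we never zero it
elapsed = 0.0
while elapsed < 600.0:
    dt = SCAN_S + random.uniform(0.0, 0.005)
    acc += dt
    elapsed += dt
    if acc >= INTERVAL_S:
        total += GPM / 60.0
        acc -= INTERVAL_S    # keep the overshoot; the next sample
                             # comes that much sooner, so nothing is lost

true_total = GPM / 60.0 * elapsed
print(f"totalized {total:.1f} gal, actual {true_total:.1f} gal")
```

Now the total lands within one sample's worth of the true value no matter how long the run is, even with the same jittery scan: each window's overshoot is carried forward instead of thrown away.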
Then if you're keeping a rolling average of something (like your line speed), you keep a queue of the last 10 values. Most of the errors get compensated away, and what's left isn't nearly as bad because it gets spread over 10 samples.
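The queue-of-10 idea is a one-liner in most environments; here's a minimal sketch in Python (the 10-sample window matches the post; the sample values in the note below are invented):

```python
from collections import deque

WINDOW = 10
samples = deque(maxlen=WINDOW)   # oldest value falls off automatically

def rolling_avg(new_value):
    """Push one line-speed sample and return the average of the window."""
    samples.append(new_value)
    return sum(samples) / len(samples)
```

Feed it ten samples of 100 and then one of 110: that single 10% outlier only moves the average to 101, about 1%, because it's spread over the 10-sample window.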
The other thing to watch for, especially in a case like yours (few events per unit time), is the difference between fixing a sample time window and counting events within it, versus using the time between events as your sample value. Picture a scope trace of your pulses with two seconds showing. At 5 Hz, you'll show about 10 pulses at any given time. When one ages off screen, unless another appears at exactly the same instant, the total count drops by 10%; when one appears, it goes up 10%. If you were displaying line speed based on that scope trace, it'd be jumping up and down 10% all the time. You could compensate with a longer window so one pulse is a smaller percentage of the total, but then the response to a real speed change will be slower: if you could stop the line abruptly, the display wouldn't reach zero for 10 seconds or whatever. That's what I was saying: get more pulses per second and you'll get a more palatable trade-off between smoothness and quick updates. The other method, timing between pulses, doesn't have the abrupt up-and-down issue, but you'll have to watch for the kinds of latency issues I mentioned with the analog example, and make sure your logic doesn't lead to cumulative error.
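The time-between-pulses method can be sketched like this (plain Python; the 0.5 ft pulse spacing is an assumed number, not from the post, and in a real PLC you'd want the timestamp captured at the pulse edge, not read at scan time, for the latency reasons above):

```python
FEET_PER_PULSE = 0.5   # assumed encoder pulse spacing, feet

_last_t = None

def speed_from_pulse(t_now):
    """Call with a timestamp (seconds) on each pulse edge.
    Returns line speed in ft/min, or None until two pulses are seen."""
    global _last_t
    speed = None
    if _last_t is not None:
        dt = t_now - _last_t             # time between this pulse and the last
        if dt > 0:
            speed = FEET_PER_PULSE / dt * 60.0
    _last_t = t_now
    return speed
```

Every pulse gives you a fresh, smooth speed reading (at 5 Hz and 0.5 ft/pulse, pulses 0.2 s apart work out to 150 ft/min), with no window-edge jumps; the catch is that between pulses the reading goes stale, so a stopped line needs a separate timeout to force the display to zero.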
If you use the system clock or another single, non-resetting source as your time base, the sub-resolution increments take care of themselves. A sample may shift into an adjacent time window, but it does so at both ends, so it compensates.
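A sketch of scheduling off a single non-resetting clock (plain Python; `make_sampler` and its interface are my invention for illustration, and in practice `now` would come from something like the controller's free-running clock):

```python
INTERVAL_S = 1.0   # desired sample interval, seconds

def make_sampler(start_time, interval=INTERVAL_S):
    """Window boundaries come from one free-running clock only.
    Returns a function to call each scan with the current clock reading;
    it returns how many whole intervals have elapsed since the last call."""
    next_t = start_time + interval
    def due(now):
        nonlocal next_t
        fires = 0
        while now >= next_t:       # catch up if a scan ran long
            fires += 1
            next_t += interval     # boundaries advance by exact intervals,
                                   # so a late check never shifts the schedule
        return fires
    return due
```

Because the next boundary is computed from the previous boundary rather than from whenever you happened to check, a late scan borrows from one window and pays it back in the next, and n windows always span exactly n intervals of clock time.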
IOW, bottom line: just make sure the error is non-cumulative and you should be fine. Get more pulses per foot, and you'll get a better trade-off between response and noise.