[Winpcap-users] Timestamp accuracy question
Alex Foygel (TT)
Alex.Foygel at tradingtechnologies.com
Mon Apr 21 15:02:25 GMT 2008
What is the absolute accuracy of the individual packets' timestamps? As
far as I understand, the relative accuracy (one packet relative to
another packet captured within the same capture session) is 1
microsecond (aside from the issues with SMP, etc.).
But the absolute accuracy, if I understand the code correctly, seems to
be on the order of milliseconds. The code (time_calls.h) uses
KeQuerySystemTime() to get the system time and to calculate the offset
between the system time and the high-resolution counter values.
According to the documentation, even though KeQuerySystemTime() returns
timestamps in 100-nanosecond units, the value is only updated once every
10 milliseconds. Thus, depending on where in the 10 ms cycle the
synchronization code ran, the offset calculated by the above-mentioned
code can be off by up to 10 ms.
Is my interpretation of the code correct?
A simple way of fixing this problem (if it is a problem at all) seems to
be to call KeQuerySystemTime() in a tight loop until the returned value
changes (this should take at most 10 ms, because that is how often the
system time is updated) and then use the new value to calculate the
offset. Am I oversimplifying the problem?
The reason I'm asking is that I want to compare the timestamps embedded
by my application in its messages with the timestamps captured by
WinPcap, to measure the time it takes for my packets to get from the
application code (through all the layers, including the network stack)
down to the NDIS layer, where WinPcap captures them.
Thank you for your help,