[ntar-workers] PCAP-NG / Interface ID size / Drops Counter size ?

Gianluca Varenni gianluca.varenni at cacetech.com
Sun Mar 5 20:07:19 GMT 2006

First of all, sorry for the long delay; I was really busy these last weeks. 

Regarding the extension of timestamps, after I received this mail, I had a 
quick look at the specification of the current packet block (to make some 
computations on the largest date we can represent now).

And I discovered a sort of ambiguity: I interpreted the spec in one way, 
and now I realize that others may have interpreted it in a totally different 
way.
Basically the original version says

Precision of timestamps. If the Most Significant Bit is equal to zero, the 
remaining bits indicate the accuracy as a negative power of 10 (e.g. 6 
means microsecond accuracy). If the Most Significant Bit is equal to one, 
the remaining bits indicate the accuracy as a negative power of 2 (e.g. 10 
means 1/1024 of a second). If this option is not present, a precision of 10^-6 
is assumed.

and then

* Timestamp (High): the most significant part of the timestamp, in 
standard Unix format, i.e. from 1/1/1970.
* Timestamp (Low): the least significant part of the timestamp. The way 
to interpret this field is specified by the 'ts_accur' option (see Figure 
4, "Interface Description Block format") of the Interface Description Block 
referenced by this packet. If the Interface Description Block does not 
contain a 'ts_accur' option, then this field is expressed in microseconds.

Then there was a fix last summer to the first paragraph, so now it says:

Precision of timestamps (fraction of seconds). Bla bla bla
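
As a side note, the 'ts_accur' encoding quoted above can be sketched as a 
small decoder (a sketch only: the function name is mine, and I'm assuming 
the option value fits in an unsigned byte):

```python
# Hypothetical decoder for the 'ts_accur' option value quoted above.
# Assumes the value is a single unsigned byte; the name is mine, not the spec's.
def ts_resolution_seconds(ts_accur: int) -> float:
    """Return the timestamp resolution in seconds."""
    if ts_accur & 0x80:                  # MSB set: negative power of 2
        return 2.0 ** -(ts_accur & 0x7F)
    return 10.0 ** -ts_accur             # MSB clear: negative power of 10

# 6 -> 10^-6 (microseconds); (0x80 | 10) -> 2^-10 (1/1024 of a second)
```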

Basically I interpreted the sentence "in standard unix format, i.e. from 
1/1/1970" as "ok, the origin is 1/1/1970, AND 
Timestamp (Low)/Timestamp (High) represent the low/high 32 bits of a 64-bit 
timestamp, with the precision specified in the Interface Description Block" 
(the spec does not say "in standard unix format, i.e. seconds and subseconds 
from 1/1/1970").

As a consequence, when I used NTAR within the Cpcap library (Cpcap is the 
avionics library using NTAR), I saved timestamps as the low/high 32 bits of 
a 64-bit word representing nanoseconds.

From the last mails, instead, it seems to me that the "common belief" was that 
Timestamp (High) meant seconds, and Timestamp (Low) meant subseconds (ms, 
us, ns, depending on the precision in the IDB).
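
To make the two readings concrete, here is how each one would decode the two 
32-bit fields (a sketch with names of my own choosing, not code from the 
spec):

```python
def combined_counter(ts_high: int, ts_low: int) -> int:
    """Reading 1: the two fields are the high/low 32-bit halves of one
    64-bit counter in the unit given by the IDB precision (e.g. ns)."""
    return (ts_high << 32) | ts_low

def seconds_plus_subseconds(ts_high: int, ts_low: int,
                            units_per_second: int) -> float:
    """Reading 2: high = seconds since 1/1/1970, low = subsecond part
    in units of 1/units_per_second."""
    return ts_high + ts_low / units_per_second

# The same instant, 1.5 s after the epoch at nanosecond precision, is stored
# as (0, 1_500_000_000) under reading 1 and as (1, 500_000_000) under reading 2.
```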

The first question is "what is the right interpretation for the current 
Packet Block timestamps?"

Second, you, Hannes, were proposing to extend Timestamp (High) to 64 bits 
(assuming that it represents seconds since 1/1/1970) in order to overcome 
the y2038 issue. This is acceptable to me.
But for the same reason, maybe we should extend Timestamp (Low) to 64 bits 
as well, since 32 bits only allow a precision of about 1/4 ns. Will 1/4 ns 
be enough in the next couple of years, when we will have 100Gbps/1Tbps 
networks? Maybe not. This means that we would actually reserve 128 bits for 
timestamps (too many bits for a single packet?). Is it too much? 128 bits 
mean that you can probably store timestamps with sub-picosecond precision 
for the next thousands of years without wrapping around. Do we really need 
that? Moreover, is the seconds/subseconds representation the best one? I'm 
not an expert, but I can imagine that for a HW device, maintaining a single 
64-/128-bit counter of nanoseconds (or whatever other precision) is easier 
than maintaining seconds+subseconds. And this is without considering the 
waste of bits that we usually have when storing seconds+subseconds.

Do you think it's acceptable to reserve 128 bits per packet for the 
timestamps? I'm thinking about the overhead...
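
Two quick sanity checks on the numbers in this discussion (plain arithmetic, 
nothing pcapng-specific):

```python
from datetime import datetime, timedelta, timezone

# y2038: a signed 32-bit counter of seconds since 1/1/1970 wraps after 2**31 s.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
wrap = epoch + timedelta(seconds=2**31)
print(wrap)  # 2038-01-19 03:14:08+00:00

# 128 bits: even counting femtoseconds (sub-picosecond), a 128-bit counter
# covers on the order of 10**16 years before wrapping.
SECONDS_PER_YEAR = 365 * 24 * 3600
years = 2**128 // 10**15 // SECONDS_PER_YEAR
print(years)  # roughly 1.08e16 years
```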

Since captures do not span years (I think they span hours or days), an 
approach that came to my mind, *if* we want to use 64 bits instead of 128 
bits for timestamps, is:

1. use Timestamp (Low)/(High) as the *actual* low/high 32-bit parts of a 64 
bit counter
2. maintain the spec of the precision of this timestamp as it is now in the 
IDB
3. add an option to the IDB that specifies an (optional) offset for the 
timestamps of packets related to that interface
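
Putting steps 1 and 3 together, the reconstruction would look like this 
('ts_offset' is a hypothetical name for the proposed IDB option, just for 
illustration):

```python
def absolute_timestamp(ts_offset: int, ts_high: int, ts_low: int) -> int:
    """Absolute timestamp in IDB-precision units: the per-interface offset
    (step 3, hypothetical 'ts_offset' option) plus the packet's 64-bit
    relative counter (step 1)."""
    return ts_offset + ((ts_high << 32) | ts_low)

# e.g. with an interface offset of 10**15 units, a packet whose 64-bit
# counter is 42 has an absolute timestamp of 10**15 + 42 units.
```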

In this way you can, for example, store a capture with picosecond precision, 
with a maximum length of about 5000 hours (if my computations are not 
wrong).
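
For the record, the arithmetic behind the 5000-hour figure (a 64-bit counter 
of picoseconds):

```python
ps_per_hour = 10**12 * 3600       # picoseconds in one hour
max_hours = 2**64 // ps_per_hour  # hours before a 64-bit ps counter wraps
print(max_hours)  # 5124, i.e. roughly 213 days
```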

Any opinions on this?


----- Original Message ----- 
From: "Hannes Gredler" <hannes at juniper.net>
To: "Guy Harris" <guy at alum.mit.edu>
Cc: <risso at polito.it>; <ntar-workers at winpcap.org>
Sent: Tuesday, February 21, 2006 10:26 PM
Subject: Re: [ntar-workers] PCAP-NG / Interface ID size / Drops Counter size 

> Guy Harris wrote:
>> (I'm not sure if Hannes is on ntar-workers; if not, he might want to 
>> join....)
> just did that - tx for the note;
>> On Feb 21, 2006, at 2:05 PM, Gianluca Varenni wrote:
>>> My opinion is that we should add a new packet block to the spec, 
>>> similar to the current packet block:
>>>    0                   1                   2                   3
>>>    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>>>   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>   |                          Interface ID                         |
>>>   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>   |                        Timestamp (High)                       |
>>>   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>   |                        Timestamp (Low)                        |
>>>   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>     ...
>> If we're adding a new packet block to the spec, should the Timestamp 
>> (High) field be extended to 64 bits, or do we expect that this file 
>> format won't still be used in 2038?
> makes sense - or at least reserve 32-bits ahead of the Timestamp high.
> /hannes
> _______________________________________________
> ntar-workers mailing list
> ntar-workers at winpcap.org
> https://www.winpcap.org/mailman/listinfo/ntar-workers
