NPF structures and definitions

Data structures:
A stream of x86 binary code (binary_stream).
A structure describing an x86 filtering program created by the jitter.
#define MOVid(r32, i32)   emitm(&stream, 11 << 4 | 1 << 3 | r32 & 0x7, 1); emitm(&stream, i32, 4);
mov r32,i32
#define MOVrd(dr32, sr32)   emitm(&stream, 8 << 4 | 3 | 1 << 3, 1); emitm(&stream, 3 << 6 | (dr32 & 0x7) << 3 | sr32 & 0x7, 1);
mov dr32,sr32
#define MOVodd(dr32, sr32, off)
mov dr32,sr32[off]
#define MOVobd(dr32, sr32, or32)
mov dr32,sr32[or32]
#define MOVobw(dr32, sr32, or32)
mov dr16,sr32[or32]
#define MOVobb(dr8, sr32, or32)
mov dr8,sr32[or32]
#define MOVomd(dr32, or32, sr32)
mov [dr32][or32],sr32
|bswap dr32 |
|xchg al,ah |
#define PUSH(r32)   emitm(&stream, 5 << 4 | 0 << 3 | r32 & 0x7, 1);
push r32
#define POP(r32)   emitm(&stream, 5 << 4 | 1 << 3 | r32 & 0x7, 1);
pop r32
#define RET()   emitm(&stream, 12 << 4 | 0 << 3 | 3, 1);
ret
|add dr32,sr32 |
|add eax,i32 |
|add r32,i32 |
|add r32,i8 |
|sub dr32,sr32 |
|sub eax,i32 |
|mul r32 |
|div r32 |
|and r8,i8 |
|and r32,i32 |
|and dr32,sr32 |
|or dr32,sr32 |
|or r32,i32 |
|shl r32,i8 |
|shl dr32,cl |
|shr r32,i8 |
|shr dr32,cl |
|neg r32 |
#define CMPodd(dr32, sr32, off)
|cmp dr32,sr32[off] |
|cmp dr32,sr32 |
|cmp dr32,i32 |
|jne off32 |
|je off32 |
|jle off32 |
|jle off8 |
|ja off32 |
|jae off32 |
|jg off32 |
|jge off32 |
|jmp off32 |
typedef UINT (__cdecl *BPF_filter_function)(PVOID *, ULONG, UINT)
Prototype of a filtering function created by the jitter.
typedef void (*emit_func)(binary_stream *stream, ULONG value, UINT n)
Prototype of the emit functions.
This section documents the internals of the Netgroup Packet Filter (NPF), the kernel portion of WinPcap. Normal users are probably interested in how to use WinPcap rather than in its internal structure, so the information in this module is intended mainly for WinPcap developers and maintainers, or for people interested in how the driver works. In particular, a good knowledge of operating systems, networking, Win32 kernel programming and device driver development is needed to read this section profitably.
NPF is the WinPcap component that does the hard work, processing the packets that transit on the network and exporting capture, injection and analysis capabilities to user-level.
The following paragraphs will describe the interaction of NPF with the OS and its basic structure.
NDIS (Network Driver Interface Specification) is a standard that defines the communication between a network adapter (or, better, the driver that manages it) and the protocol drivers (which implement, for example, TCP/IP). The main purpose of NDIS is to act as a wrapper that allows protocol drivers to send and receive packets on a network (LAN or WAN) without caring about either the particular adapter or the particular Win32 operating system.
NDIS supports three types of network drivers:
- Network interface card (NIC) drivers, which directly manage the network adapters;
- intermediate drivers, which sit between a NIC driver and a transport driver and are used, for example, to add filtering or load-balancing layers;
- transport drivers (or protocol drivers), which implement a protocol stack such as TCP/IP.
NPF is implemented as a protocol driver. This is not the best possible choice from a performance point of view, but it allows reasonable independence from the MAC layer as well as complete access to the raw traffic.
Notice that the various Win32 operating systems have different versions of NDIS: NPF is NDIS 5 compliant under Windows 2000 and its derivations (like Windows XP), and NDIS 3 compliant on the other Win32 platforms.
The next figure shows the position of NPF inside the NDIS stack:
Figure 1: NPF inside NDIS.
The interaction with the OS is normally asynchronous. This means that the driver provides a set of callback functions that are invoked by the system when some operation is required of NPF. NPF exports callback functions for all the I/O operations of the applications: open, close, read, write, ioctl, etc.
The interaction with NDIS is asynchronous as well: events like the arrival of a new packet are notified to NPF through a callback function (Packet_tap() in this case). Furthermore, the interaction with NDIS and the NIC driver always takes place by means of non-blocking functions: when NPF invokes an NDIS function, the call returns immediately; when the processing ends, NDIS invokes a specific NPF callback to signal that the operation has finished. The driver exports a callback for every low-level operation, like sending packets, setting or requesting parameters on the NIC, etc.
The next figure shows the structure of WinPcap, with particular reference to the NPF driver.
Figure 2: NPF device driver.
NPF is able to perform a number of different operations: capture, monitoring, dump to disk, packet injection. The following paragraphs briefly describe each of them.
The most important operation of NPF is packet capture. During a capture, the driver sniffs the packets using a network interface and delivers them intact to the user-level applications.
The capture process relies on two main components:
A packet filter that decides if an incoming packet has to be accepted and copied to the listening application. Most applications using NPF reject far more packets than they accept, therefore a versatile and efficient packet filter is critical for good overall performance. A packet filter is a function with boolean output that is applied to a packet. If the value of the function is true, the capture driver copies the packet to the application; if it is false, the packet is discarded. The NPF packet filter is a bit more complex, because it determines not only whether the packet should be kept, but also the number of bytes to keep. The filtering system adopted by NPF derives from the BSD Packet Filter (BPF), a virtual processor able to execute filtering programs expressed in a pseudo-assembler and created at user level. The application takes a user-defined filter (e.g. "pick up all UDP packets") and, using wpcap.dll, compiles it into a BPF program (e.g. "if the packet is IP and the protocol type field is equal to 17, then return true"). Then, the application uses the BIOCSETF IOCTL to inject the filter into the kernel. At this point, the program is executed for every incoming packet, and only the conformant packets are accepted. Unlike traditional solutions, NPF does not interpret the filters, but executes them. For performance reasons, before using the filter NPF feeds it to a JIT compiler that translates it into a native 80x86 function. When a packet is captured, NPF calls this native function instead of invoking the filter interpreter, which makes the process very fast. The concept behind this optimization is very similar to that of Java jitters. A minimal user-level sketch of this compile-and-inject sequence is shown after this list.
A circular buffer to store the packets and avoid loss. A packet is stored in the buffer with a header that maintains information like the timestamp and the size of the packet. Moreover, alignment padding is inserted between the packets in order to speed up access to their data by the applications. Groups of packets can be copied with a single operation from the NPF buffer to the applications. This improves performance because it minimizes the number of reads. If the buffer is full when a new packet arrives, the packet is discarded and hence lost. Both the kernel and the user buffer sizes can be changed at runtime for maximum versatility: packet.dll and wpcap.dll provide functions for this purpose.
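As an illustration, the following user-level sketch (not taken from the WinPcap sources; the function name set_udp_filter and the error handling are only examples) shows how an application would use wpcap.dll to compile a text filter into a BPF program and hand it to the driver, which then JIT-compiles it and applies it to every incoming packet:

#include <pcap.h>
#include <stdio.h>

int set_udp_filter(pcap_t *adapter)
{
    struct bpf_program prog;

    /* Compile the user-defined filter ("pick up all UDP packets") into a
       BPF pseudo-assembler program. The last argument is the netmask,
       which is not needed for this kind of filter. */
    if (pcap_compile(adapter, &prog, "udp", 1, 0) == -1) {
        fprintf(stderr, "compile error: %s\n", pcap_geterr(adapter));
        return -1;
    }

    /* Hand the program to the kernel (the BIOCSETF IOCTL); NPF JIT-compiles
       it to native x86 code and runs it on every incoming packet. */
    if (pcap_setfilter(adapter, &prog) == -1) {
        fprintf(stderr, "setfilter error: %s\n", pcap_geterr(adapter));
        pcap_freecode(&prog);
        return -1;
    }

    pcap_freecode(&prog);
    return 0;
}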
The size of the user buffer is very important because it determines the maximum amount of data that can be copied from kernel space to user space within a single system call. The minimum amount of data that can be copied in a single call is also extremely important. If this value is large, the kernel waits for the arrival of several packets before copying the data to the user. This guarantees a low number of system calls, i.e. low processor usage, which is a good setting for applications like sniffers. On the other hand, a small value means that the kernel will copy the packets as soon as the application is ready to receive them. This is excellent for real-time applications (for example ARP redirectors or bridges) that need the best responsiveness from the kernel. From this point of view, NPF has a configurable behavior that allows users to choose between best efficiency and best responsiveness (or any intermediate setting).
The wpcap library includes a couple of calls that can be used to set both the timeout after which a read expires and the minimum amount of data that can be transferred to the application. By default, the read timeout is 1 second, and the minimum amount of data copied between the kernel and the application is 16K.
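As an example, the following sketch (assumed values; open_responsive is just an illustrative name) uses the WinPcap-specific extensions exported by wpcap.dll to tune an adapter for responsiveness rather than efficiency:

#include <pcap.h>

pcap_t *open_responsive(const char *device, char *errbuf)
{
    /* Open with a 1 ms read timeout instead of the 1 second default. */
    pcap_t *adapter = pcap_open_live(device, 65536, 1, 1, errbuf);
    if (adapter == NULL)
        return NULL;

    /* Size of the kernel circular buffer (WinPcap-specific call). */
    pcap_setbuff(adapter, 1024 * 1024);

    /* Copy packets to user level as soon as they arrive, instead of waiting
       for 16K of data to accumulate (WinPcap-specific call). */
    pcap_setmintocopy(adapter, 0);

    return adapter;
}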
NPF allows raw packets to be written to the network. To send data, a user-level application performs a WriteFile() system call on the NPF device file. The data is sent to the network as is, without encapsulating it in any protocol, therefore the application has to build the various headers of each packet itself. The application usually does not need to generate the FCS, because it is calculated by the network adapter hardware and attached automatically at the end of the packet before it is sent to the network.
In normal situations, the sending rate of the packets to the network is not very high, because a system call is needed for each packet. For this reason, the possibility of sending a single packet more than once with a single write system call has been added. The user-level application can set, with an IOCTL call (code pBIOCSWRITEREP), the number of times a single packet will be repeated: for example, if this value is set to 1000, every raw packet written by the application on the driver's device file will be sent 1000 times. This feature can be used to generate high-speed traffic for testing purposes: the overhead of the context switches is no longer present, so performance is remarkably better.
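The following sketch (made-up MAC addresses and payload) shows the injection of a single raw frame through wpcap.dll; the pcap_sendpacket() call translates into a WriteFile() on the NPF device file, and the application builds every header itself:

#include <pcap.h>
#include <string.h>

int send_raw_frame(pcap_t *adapter)
{
    unsigned char frame[64];

    memset(frame, 0, sizeof(frame));
    memset(frame, 0xff, 6);                            /* destination MAC: broadcast */
    memcpy(frame + 6, "\x02\x00\x00\x00\x00\x01", 6);  /* source MAC (made up)       */
    frame[12] = 0x08; frame[13] = 0x00;                /* EtherType: IPv4            */
    /* ... the IP/UDP headers and the payload would follow here;
       the FCS is appended by the adapter hardware. */

    return pcap_sendpacket(adapter, frame, sizeof(frame));
}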
WinPcap offers a kernel-level programmable monitoring module, able to calculate simple statistics on the network traffic. The idea behind this module is shown in Figure 2: the statistics can be gathered without the need to copy the packets to the application, which simply receives and displays the results obtained from the monitoring engine. This avoids a large part of the capture overhead in terms of memory and CPU cycles.
The monitoring engine is made of a classifier followed by a counter. The packets are classified using the filtering engine of NPF, which provides a configurable way to select a subset of the traffic. The data that passes the filter goes to the counter, which keeps variables like the number of packets and the amount of bytes accepted by the filter, and updates them with the data of the incoming packets. These variables are passed to the user-level application at regular intervals, whose period can be configured by the user. No buffers are allocated at kernel or user level.
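A sketch of how an application could use this mode through wpcap.dll follows (the handler names are illustrative, and the layout of the two 64-bit counters is the one described in the WinPcap documentation, so treat it as an assumption):

#include <pcap.h>
#include <stdio.h>
#include <string.h>

/* Invoked once per sampling interval (the read timeout set on the adapter). */
static void stats_handler(u_char *state, const struct pcap_pkthdr *header,
                          const u_char *data)
{
    /* In statistical mode the "packet" carries two 64-bit counters:
       packets and bytes accepted by the kernel filter in this interval. */
    unsigned long long packets, bytes;
    memcpy(&packets, data, 8);
    memcpy(&bytes, data + 8, 8);
    printf("%llu packets, %llu bytes\n", packets, bytes);
    (void)state; (void)header;
}

int run_stats(pcap_t *adapter)
{
    /* Switch the adapter to kernel-level statistical mode: no packets are
       copied to user level, only the counters computed by the driver. */
    if (pcap_setmode(adapter, MODE_STAT) < 0)
        return -1;

    return pcap_loop(adapter, 0, stats_handler, NULL);
}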
The dump to disk capability can be used to save the network data to disk directly from kernel mode.
Figure 3: packet capture versus kernel-level dump.
In traditional systems, the path covered by the packets that are saved to disk is the one followed by the black arrows in Figure 3: every packet is copied several times, and normally 4 buffers are allocated: the one of the capture driver, the one in the application that keeps the captured data, the one of the stdio functions (or similar) used by the application to write to the file, and finally the one of the file system.
When the kernel-level traffic logging feature of NPF is enabled, the capture driver addresses the file system directly, hence the path covered by the packets is the one of the red dotted arrow: only two buffers and a single copy are necessary, the number of system calls is drastically reduced, and therefore the performance is considerably better.
The current implementation dumps the packets to disk in the widely used libpcap format. It also gives the possibility to filter the traffic before the dump process, in order to select the packets that will go to the disk.
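The feature is exposed to applications by wpcap.dll; a minimal sketch, assuming the WinPcap-specific pcap_live_dump() extension and arbitrary file name and limits, could look like this:

#include <pcap.h>

int start_kernel_dump(pcap_t *adapter)
{
    char fname[] = "traffic.pcap";

    /* Ask the driver to write the (optionally filtered) traffic straight to
       a libpcap-format file, stopping after 100 MB or 100000 packets,
       whichever comes first. */
    return pcap_live_dump(adapter, fname, 100 * 1024 * 1024, 100000);
}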
The structure of NPF and its filtering engine derive directly from those of the BSD Packet Filter (BPF), so if you are interested in the subject you can read the following papers:
- S. McCanne and V. Jacobson, The BSD Packet Filter: A New Architecture for User-level Packet Capture. Proceedings of the 1993 Winter USENIX Technical Conference (San Diego, CA, Jan. 1993), USENIX.
- A. Begel, S. McCanne, S.L.Graham, BPF+: Exploiting Global Data-flow Optimization in a Generalized Packet Filter Architecture, Proceedings of ACM SIGCOMM '99, pages 123-134, Conference on Applications, technologies, architectures, and protocols for computer communications, August 30 - September 3, 1999, Cambridge, USA
The code documented in this manual is the one of the Windows NTx version of NPF. The Windows 9x code is very similar, but it is less efficient and lacks advanced features like kernel-mode dump.
Prototype of the emit functions.
Different emit functions are used to create the reference table and to generate the actual filtering code. This makes it possible to keep the instruction macros simpler. The first parameter is the stream that will receive the data. The second one is a variable containing the data, the third one is the length, which can be 1, 2 or 4, since it is possible to emit a byte, a short or a word at a time.
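The following sketch (not part of the NPF sources; emit_count, jit_size and build_trivial_filter are made-up names, and it assumes the jitter declarations above, binary_stream, emit_func and the instruction macros, are in scope) shows how the macros drive an emit function. The macros expand to calls of the form emitm(&stream, value, n), so they expect a binary_stream named stream and an emit function pointer named emitm to be visible:

static UINT jit_size;   /* bytes that the generated program would occupy */

/* First (counting) pass of a hypothetical two-pass jitter: only measure
   the size of the output; a second pass would store the bytes instead. */
static void emit_count(binary_stream *stream, ULONG value, UINT n)
{
    jit_size += n;
    (void)stream; (void)value;
}

static void build_trivial_filter(void)
{
    binary_stream stream = {0};
    emit_func emitm = emit_count;

    PUSH(5);          /* push ebp  (5 is EBP in the x86 register encoding) */
    MOVid(0, 40);     /* mov eax, 40  (0 is EAX): accept 40 bytes          */
    POP(5);           /* pop ebp                                           */
    RET();            /* ret                                               */
}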
Copyright (c) 2002-2005 Politecnico di Torino. Copyright (c) 2005-2007 CACE Technologies. All rights reserved.