NPF driver internals manual


Modules

 NPF I/O control codes
 NPF structures and definitions
 NPF functions
 NPF Just-in-time compiler definitions

Detailed Description

This section documents the internals of the Netgroup Packet Filter (NPF), the kernel portion of WinPcap. Normal users are probably interested in how to use WinPcap rather than in its internal structure, so the information in this module is aimed mainly at WinPcap developers and maintainers, or at people interested in how the driver works. In particular, a good knowledge of operating systems, networking, Win32 kernel programming and device driver development is required to read this section profitably.

NPF is the WinPcap component that does the hard work, processing the packets that transit on the network and exporting capture, injection and analysis capabilities to user-level.

The following paragraphs will describe the interaction of NPF with the OS and its basic structure.

NPF and NDIS

NDIS (Network Driver Interface Specification) is a standard that defines the communication between a network adapter (or, better, the driver that manages it) and the protocol drivers (which implement, for example, TCP/IP). The main purpose of NDIS is to act as a wrapper that allows protocol drivers to send and receive packets on a network (LAN or WAN) without caring about either the particular adapter or the particular Win32 operating system.

NDIS supports three types of network drivers:

  1. Network interface card or NIC drivers. NIC drivers directly manage network interface cards, referred to as NICs. The NIC drivers interface directly to the hardware at their lower edge and at their upper edge present an interface to allow upper layers to send packets on the network, to handle interrupts, to reset the NIC, to halt the NIC and to query and set the operational characteristics of the driver. NIC drivers can be either miniports or legacy full NIC drivers.
    • Miniport drivers implement only the hardware-specific operations necessary to manage a NIC, including sending and receiving data on the NIC. Operations common to all lowest-level NIC drivers, such as synchronization, are provided by NDIS. Miniports do not call operating system routines directly; their interface to the operating system is NDIS.
      A miniport does not keep track of bindings. It merely passes packets up to NDIS and NDIS makes sure that these packets are passed to the correct protocols.
    • Full NIC drivers have been written to perform both hardware-specific operations and all the synchronization and queuing operations usually done by NDIS. Full NIC drivers, for instance, maintain their own binding information for indicating received data. 
  2. Intermediate drivers. Intermediate drivers interface between an upper-level driver such as a protocol driver and a miniport. To the upper-level driver, an intermediate driver looks like a miniport. To a miniport, the intermediate driver looks like a protocol driver. An intermediate protocol driver can layer on top of another intermediate driver although such layering could have a negative effect on system performance. A typical reason for developing an intermediate driver is to perform media translation between an existing legacy protocol driver and a miniport that manages a NIC for a new media type unknown to the protocol driver. For instance, an intermediate driver could translate from LAN protocol to ATM protocol. An intermediate driver cannot communicate with user-mode applications, but only with other NDIS drivers.
  3. Transport drivers or protocol drivers. A protocol driver implements a network protocol stack such as IPX/SPX or TCP/IP, offering its services over one or more network interface cards. A protocol driver services application-layer clients at its upper edge and connects to one or more NIC driver(s) or intermediate NDIS driver(s) at its lower edge.

NPF is implemented as a protocol driver. This is not the best possible choice from the performance point of view, but it allows reasonable independence from the MAC layer as well as complete access to the raw traffic.

Notice that the various Win32 operating systems have different versions of NDIS: NPF is NDIS 5 compliant under Windows 2000 and its derivations (like Windows XP), and NDIS 3 compliant on the other Win32 platforms.

Next figure shows the position of NPF inside the NDIS stack:

Figure 1: NPF inside NDIS.

The interaction with the OS is normally asynchronous. This means that the driver provides a set of callback functions that are invoked by the system when some operation is requested of NPF. NPF exports callback functions for all the I/O operations of the applications: open, close, read, write, ioctl, etc.
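As a rough sketch of how these callbacks are wired, the following fragment shows a DriverEntry() that registers dispatch routines for the user-level I/O operations. The NPF_* routine names and the omission of device-object creation are illustrative; they are modeled on the driver's conventions, not copied from its code.

    #include <ntddk.h>

    /* Illustrative prototypes: the real driver exposes similar entry points. */
    DRIVER_DISPATCH NPF_Open, NPF_Close, NPF_Read, NPF_Write, NPF_IoControl;

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);

        /* Each user-level I/O request (CreateFile, CloseHandle, ReadFile,
           WriteFile, DeviceIoControl) is turned by the I/O manager into an
           IRP and delivered to the matching callback registered here. */
        DriverObject->MajorFunction[IRP_MJ_CREATE]         = NPF_Open;
        DriverObject->MajorFunction[IRP_MJ_CLOSE]          = NPF_Close;
        DriverObject->MajorFunction[IRP_MJ_READ]           = NPF_Read;
        DriverObject->MajorFunction[IRP_MJ_WRITE]          = NPF_Write;
        DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = NPF_IoControl;

        /* The device objects, one per bound adapter, are created later when
           the protocol binds to the NICs (omitted here). */
        return STATUS_SUCCESS;
    }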

The interaction with NDIS is asynchronous as well: events like the arrival of a new packet are notified to NPF through a callback function (Packet_tap() in this case). Furthermore, the interaction with NDIS and the NIC driver always takes place by means of non-blocking functions: when NPF invokes an NDIS function, the call returns immediately; when the processing ends, NDIS invokes a specific NPF callback to signal that the function has finished. The driver exports a callback for every low-level operation, like sending packets, or setting or requesting parameters on the NIC.
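The following fragment sketches how such callbacks might be handed to NDIS when the protocol registers itself. Only Packet_tap() is named by the text; the other handler names are illustrative, most of the mandatory handlers are omitted for brevity, and the fragment assumes the NDIS 5 protocol driver API.

    #include <ndis.h>

    /* Receive callback named by the text; the remaining names are illustrative. */
    NDIS_STATUS Packet_tap(NDIS_HANDLE ProtoBindingContext, NDIS_HANDLE MacReceiveContext,
                           PVOID HeaderBuffer, UINT HeaderBufferSize,
                           PVOID LookaheadBuffer, UINT LookaheadBufferSize, UINT PacketSize);
    VOID NPF_OpenAdapterComplete(NDIS_HANDLE ProtoBindingContext, NDIS_STATUS Status,
                                 NDIS_STATUS OpenErrorStatus);
    VOID NPF_SendComplete(NDIS_HANDLE ProtoBindingContext, PNDIS_PACKET Packet, NDIS_STATUS Status);
    VOID NPF_RequestComplete(NDIS_HANDLE ProtoBindingContext, PNDIS_REQUEST Request, NDIS_STATUS Status);

    VOID RegisterExampleProtocol(VOID)
    {
        NDIS_PROTOCOL_CHARACTERISTICS Chars;
        NDIS_HANDLE ProtocolHandle;
        NDIS_STATUS Status;
        NDIS_STRING ProtoName = NDIS_STRING_CONST("NPF_EXAMPLE");

        NdisZeroMemory(&Chars, sizeof(Chars));
        Chars.MajorNdisVersion = 5;
        Chars.MinorNdisVersion = 0;
        Chars.Name = ProtoName;

        /* Asynchronous completions: NDIS calls these back when a
           non-blocking open, send or OID request finishes. */
        Chars.OpenAdapterCompleteHandler = NPF_OpenAdapterComplete;
        Chars.SendCompleteHandler        = NPF_SendComplete;
        Chars.RequestCompleteHandler     = NPF_RequestComplete;

        /* Invoked by NDIS for every packet indicated by the NIC driver. */
        Chars.ReceiveHandler = Packet_tap;

        /* Remaining mandatory handlers omitted in this sketch. */
        NdisRegisterProtocol(&Status, &ProtocolHandle, &Chars, sizeof(Chars));
    }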

NPF structure basics

Next figure shows the structure of WinPcap, with particular reference to the NPF driver.

Figure 2: NPF device driver.

NPF is able to perform a number of different operations: capture, monitoring, dump to disk, packet injection. The following paragraphs will briefly describe each of these operations.

Packet Capture

The most important operation of NPF is packet capture. During a capture, the driver sniffs the packets using a network interface and delivers them intact to the user-level applications. 

The capture process relies on two main components:

- A packet filter, derived from the BSD Packet Filter (BPF), that decides whether an incoming packet has to be accepted and copied to the listening application, and how many of its bytes should be kept. The filter is a program created at user level and injected into the driver, where the NPF just-in-time compiler translates it into native code that is executed on every incoming packet.
- A circular kernel buffer that stores the accepted packets until the application retrieves them into its own user buffer.

The size of the user buffer is very important because it determines the maximum amount of data that can be copied from kernel space to user space within a single system call. The minimum amount of data that can be copied in a single call is equally important. With a large value for this parameter, the kernel waits for the arrival of several packets before copying the data to the user. This guarantees a low number of system calls, i.e. low processor usage, which is a good setting for applications like sniffers. With a small value, the kernel copies the packets as soon as the application is ready to receive them. This is excellent for real-time applications (for example ARP redirectors or bridges) that need the best responsiveness from the kernel. From this point of view, NPF has a configurable behavior that allows users to choose between best efficiency and best responsiveness (or any intermediate setting).
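To give an idea of the programs the packet filter executes, the following user-level fragment hand-codes a small BPF program (accept IPv4/UDP frames on Ethernet, drop everything else) and installs it with the standard pcap_setfilter() call; real applications normally let pcap_compile() generate an equivalent program from a text expression. The fragment only illustrates the mechanism.

    #include <pcap.h>

    /* A tiny hand-written BPF program: accept IPv4/UDP frames, drop the rest. */
    static struct bpf_insn udp_filter[] = {
        BPF_STMT(BPF_LD  + BPF_H + BPF_ABS, 12),            /* load EtherType        */
        BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 0x0800, 0, 3),  /* not IPv4 -> drop      */
        BPF_STMT(BPF_LD  + BPF_B + BPF_ABS, 23),            /* load IP protocol      */
        BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 17, 0, 1),      /* not UDP  -> drop      */
        BPF_STMT(BPF_RET + BPF_K, 65535),                   /* accept whole packet   */
        BPF_STMT(BPF_RET + BPF_K, 0),                       /* drop                  */
    };

    int install_udp_filter(pcap_t *p)
    {
        struct bpf_program prog;
        prog.bf_len   = sizeof(udp_filter) / sizeof(udp_filter[0]);
        prog.bf_insns = udp_filter;

        /* wpcap passes the program to the driver, which JIT-compiles it and
           runs it on every incoming packet. */
        return pcap_setfilter(p, &prog);
    }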

The wpcap library includes a couple of calls that can be used to set both the timeout after which a read expires and the minimum amount of data that must be collected before it is transferred to the application. By default, the read timeout is 1 second, and the minimum amount of data copied between the kernel and the application is 16K.
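As an example of this tuning, the fragment below opens an adapter with a short read timeout and sets the minimum-copy threshold to zero through the WinPcap pcap_setmintocopy() extension, trading efficiency for responsiveness; the device name and the chosen values are placeholders.

    #include <pcap.h>
    #include <stdio.h>

    /* Open an adapter favouring responsiveness over efficiency. */
    pcap_t *open_responsive(const char *device)
    {
        char errbuf[PCAP_ERRBUF_SIZE];

        /* 4th argument: read timeout in milliseconds (1000 ms is the
           default mentioned above; 50 ms here for a reactive application). */
        pcap_t *p = pcap_open_live(device, 65535, 1, 50, errbuf);
        if (p == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return NULL;
        }

        /* WinPcap extension: minimum amount of data the kernel accumulates
           before satisfying a read (16K by default). 0 = copy immediately. */
        pcap_setmintocopy(p, 0);
        return p;
    }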

Packet injection

NPF allows raw packets to be written to the network. To send data, a user-level application performs a WriteFile() system call on the NPF device file. The data is sent to the network as is, without being encapsulated in any protocol, so the application has to build the various headers of each packet itself. The application usually does not need to generate the FCS, because it is calculated by the network adapter hardware and appended automatically at the end of the packet before it is sent to the network.

In normal situations the sending rate of packets to the network is not very high, because a system call is needed for each packet. For this reason, the possibility to send a single packet more than once with a single write system call has been added. The user-level application can set, with an IOCTL call (code pBIOCSWRITEREP), the number of times a single packet will be repeated: for example, if this value is set to 1000, every raw packet written by the application on the driver's device file will be sent 1000 times. This feature can be used to generate high-speed traffic for testing purposes: the overhead of context switches is no longer present, so performance is remarkably better.
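A sketch of how an application might use this feature through packet.dll is shown below; it assumes the PacketSetNumWrites()/PacketSendPacket() calls exported by packet.dll, and the adapter name, frame contents and repetition count are placeholders.

    #include <stdio.h>
    #include "Packet32.h"

    /* Send one hand-built raw frame 1000 times per write, relying on the
       pBIOCSWRITEREP repetition counter described above. The adapter name
       would normally come from PacketGetAdapterNames(). */
    int send_repeated(PCHAR adapter_name, UCHAR *frame, UINT frame_len)
    {
        LPADAPTER adapter = PacketOpenAdapter(adapter_name);
        LPPACKET  packet;

        if (adapter == NULL)
            return -1;

        /* Each PacketSendPacket() call is repeated 1000 times in the kernel. */
        PacketSetNumWrites(adapter, 1000);

        packet = PacketAllocatePacket();
        PacketInitPacket(packet, frame, frame_len);

        /* One system call, 1000 transmissions of the same frame. */
        if (!PacketSendPacket(adapter, packet, TRUE))
            fprintf(stderr, "PacketSendPacket failed\n");

        PacketFreePacket(packet);
        PacketCloseAdapter(adapter);
        return 0;
    }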

Network monitoring

WinPcap offers a kernel-level programmable monitoring module, able to calculate simple statistics on the network traffic. The idea behind this module is shown in Figure 2: the statistics can be gathered without copying the packets to the application, which simply receives and displays the results obtained from the monitoring engine. This avoids a large part of the capture overhead in terms of memory and CPU cycles.

The monitoring engine is made of a classifier followed by a counter. The packets are classified using the filtering engine of NPF, which provides a configurable way to select a subset of the traffic. The data that pass the filter go to the counter, which keeps variables like the number of packets and the number of bytes accepted by the filter and updates them with the data of the incoming packets. These variables are passed to the user-level application at regular intervals, whose period can be configured by the user. No buffers are allocated at either kernel or user level.
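A minimal user-level sketch of this statistics mode follows, based on the WinPcap pcap_setmode(MODE_STAT) extension; it assumes the sample layout used by WinPcap's statistics mode, in which each sample delivered to the callback carries two 64-bit counters (packets and bytes that passed the filter in the last interval).

    #include <pcap.h>
    #include <stdio.h>

    /* In statistics mode the kernel delivers, at every read timeout, a
       pseudo packet whose payload holds two 64-bit counters. */
    static void stats_handler(u_char *user, const struct pcap_pkthdr *header,
                              const u_char *pkt_data)
    {
        long long packets = *(long long *)(pkt_data);
        long long bytes   = *(long long *)(pkt_data + 8);
        (void)user; (void)header;
        printf("last interval: %I64d packets, %I64d bytes\n", packets, bytes);
    }

    int monitor(pcap_t *p)
    {
        /* Switch the driver from capture mode to statistics mode (WinPcap
           extension); the sampling period is the read timeout passed to
           pcap_open_live(). */
        if (pcap_setmode(p, MODE_STAT) < 0)
            return -1;
        return pcap_loop(p, 0, stats_handler, NULL);
    }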

Dump to disk

The dump to disk capability can be used to save the network data to disk directly from kernel mode.

Figure 3: packet capture versus kernel-level dump.

In traditional systems, the path covered by the packets that are saved to disk is the one followed by the black arrows in Figure 3: every packet is copied several times, and normally four buffers are allocated: the capture driver's buffer, the buffer in the application that holds the captured data, the buffer of the stdio functions (or similar) used by the application to write to the file, and finally the file system's buffer.

When the kernel-level traffic logging feature of NPF is enabled, the capture driver addresses the file system directly, so the path covered by the packets is the one of the red dotted arrow: only two buffers and a single copy are necessary, the number of system calls is drastically reduced, and therefore the performance is considerably better.

The current implementation dumps the packets to disk in the widely used libpcap format. It also allows the traffic to be filtered before the dump process, in order to select the packets that will go to disk.
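A possible user-level sketch, assuming the WinPcap pcap_live_dump()/pcap_live_dump_ended() extensions, is shown below; the file name and the size/packet limits are placeholders.

    #include <pcap.h>

    /* Start a kernel-level dump to "traffic.dmp" (libpcap format), bounded
       to 100 MB or 500000 packets, whichever comes first; a filter set with
       pcap_setfilter() is applied before the packets reach the file. */
    int dump_to_disk(pcap_t *p)
    {
        if (pcap_live_dump(p, "traffic.dmp", 100 * 1024 * 1024, 500000) < 0)
            return -1;

        /* Block until one of the limits above is reached (sync = 1). */
        pcap_live_dump_ended(p, 1);
        return 0;
    }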

Further reading

The structure of NPF and its filtering engine derive directly from those of the BSD Packet Filter (BPF), so if you are interested in the subject you can read the following papers:

- S. McCanne and V. Jacobson, The BSD Packet Filter: A New Architecture for User-level Packet Capture. Proceedings of the 1993 Winter USENIX Technical Conference, San Diego, CA, January 1993.

- A. Begel, S. McCanne and S. L. Graham, BPF+: Exploiting Global Data-flow Optimization in a Generalized Packet Filter Architecture. Proceedings of ACM SIGCOMM '99 (Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication), pages 123-134, Cambridge, USA, August 30 - September 3, 1999.

Note

The code documented in this manual is that of the Windows NTx version of NPF. The Windows 9x code is very similar, but it is less efficient and lacks advanced features like kernel-mode dump.

