M. Fourne, K. Stegemann, D. Petersen, N. Pohlmann: "Aggregation of Network Protocol Data Near its Source". In Proceedings of the Second IFIP TC5/8 International Conference (ICT-EurAsia 2014), Bali, Indonesia, April 2014
In Network Anomaly and Botnet Detection, the main source of input for analysis is the network traffic, which has to be transmitted from its capture source to the analysis system. High-volume data sources often generate traffic volumes that prohibit direct pass-through of bulk data into researchers' hands. In this paper we achieve a reduction in the volume of transmitted data from network flow captures by aggregating raw data through the extraction of protocol semantics, orthogonally to classic bulk compression algorithms. We propose a formalization of this concept, called Descriptors, and extend its use to network flow data. Using this approach, a preliminary selection of protocol information can be deferred to the detailed analysis stage. A comparison with common bulk data file compression formats is given for full Packet Capture (PCAP) files. Our approach aims to be compatible with Internet Protocol Flow Information Export (IPFIX) and other standardized network flow data formats as possible inputs.

For Network Anomaly Detection as well as network-based Botnet and Malware Detection (common umbrella term: NAD), network traffic is the defining input. Any findings rely on the availability of large volumes of network data. The standard approach to obtaining network data from Internet Service Providers (ISPs) is simply to request it for research or security purposes under a contract. A common problem arises from the difference between the bandwidth of large Internet exchanges and the bandwidth available to typical research laboratories. The most common format for network flow data is NetFlow, which formed the basis for the more modern and standardized IPFIX. Since IPFIX is the designated replacement for NetFlow, it can be expected to gain widespread implementation in network routing equipment. The main purpose of this protocol is the transfer of data from an exporter to a collector. When transmitting data for analysis, the most common formats in use today are NetFlow version 5 and raw PCAP files.
Using NetFlow or IPFIX, the set of data to analyze is equally dependent on the supplier (ISPs etc.), and both formats come with different problems. The former, in its minimal form, carries only limited information for reliable detection of malicious activity; the latter is not reasonable to transmit from live, high-bandwidth data sources. A wider selection of records in NetFlow or IPFIX yields more useful data, but in return raises the requirements for transmission bandwidth. A possible workaround is compression of the bulk data using DEFLATE or similar compression algorithms, but this achieves only modest improvements.
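The contrast between semantic aggregation and bulk compression can be illustrated with a minimal sketch. The record layout below is hypothetical and not the paper's Descriptor format: packet-level records are collapsed into per-flow summaries keyed by the classic NetFlow-style 5-tuple, and the result is compared in size against DEFLATE compression of the raw records.

```python
import json
import zlib
from collections import defaultdict

# Hypothetical packet-level records; in practice these would be
# parsed from a PCAP capture. Repeated to mimic a high-volume source.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40001, "dport": 80,
     "proto": "tcp", "bytes": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40001, "dport": 80,
     "proto": "tcp", "bytes": 1500},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "sport": 40002, "dport": 443,
     "proto": "tcp", "bytes": 600},
] * 100

# Semantic aggregation: collapse packets into per-flow summaries
# keyed by the classic 5-tuple (src, dst, sport, dport, proto).
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for p in packets:
    key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
    flows[key]["packets"] += 1
    flows[key]["bytes"] += p["bytes"]

raw = json.dumps(packets).encode()
aggregated = json.dumps(
    [{"flow": list(k), **v} for k, v in flows.items()]
).encode()

# Bulk DEFLATE compression of the raw records, for comparison.
compressed_raw = zlib.compress(raw, level=9)

print("raw:", len(raw), "DEFLATE:", len(compressed_raw),
      "aggregated:", len(aggregated))
```

Note that aggregation discards per-packet detail by design, which is why the paper's approach defers the selection of which protocol information to keep to the detailed analysis stage, rather than fixing it at capture time.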
Further information on the topic "Cyber-Sicherheits-Frühwarn- und Lagebildsystem":

Articles:
"Kommunikationslage im Blick – Gefahr erkannt, Gefahr gebannt"
"An ideal Internet Early Warning System"
"Ideales Internet-Frühwarnsystem"
"Internet Situation Awareness"
"Probe-based Internet Early Warning System"
"Internet Early Warning System: The Global View"

Talks:
"Internet Situation Awareness"
"Internet-Frühwarnsysteme"

Lecture: "Cyber-Sicherheit Frühwarn- und Lagebildsysteme"
Glossary entry: "Cyber-Sicherheits-Frühwarn- und Lagebildsystem"
Information about the textbook: "Cyber-Sicherheit"