# The Rome Labs Experience

Authors
Kevin Ziese
Speaker
Kevin Ziese
Institution
Cisco/Wheelgroup
Manager, Network Security Research
Security Internet Services Unit, Active Audit
Cisco Systems, Inc.
Biography
Kevin Ziese
Kevin Ziese leads the Network Security Research Team at Cisco Systems; he is also one of the founders of the WheelGroup Corporation, which was acquired by Cisco Systems in April 1998. In September 1998, he retired as Chief of the Advanced Countermeasures Cell at the U.S. Air Force Information Warfare Center, where he was responsible for the identification and analysis of computer and network vulnerabilities and the development of network security countermeasures. He is a pioneer in the field of computer forensics and has served as senior U.S. computer expert to the United Nations in their efforts to identify and dismantle the weapons of mass destruction program in Iraq. He is recognized throughout both the U.S. Department of Defense and the Intelligence Community as a hands-on innovator.

## Abstract

In early 1994, the United States Air Force's premier command and control research laboratory at Rome Labs was penetrated more than 150 times, over a three-week period, by two British crackers. This was the first time actual Air Force operations had been disrupted by noncombatants. One of the major lessons learned from that attack, and later presented in testimony before the United States Senate, was the lack of adequate technologies for detecting, responding to, and preventing organized information attacks against critical U.S. computer and telecommunications infrastructure.

In this presentation Kevin Ziese, who was the Chief of the Advanced Countermeasures Cell for the U.S. Air Force Information Warfare Center and in charge of actual operations during the Rome Labs attacks, will discuss the difficulties faced in tracing, tracking, and identifying the actual attackers, as well as the challenges faced in attempting to balance the need to protect a critical infrastructure while concurrently attempting to gather digital evidence to support an effective prosecution.

# Intrusion Detection and Legal Proceedings (full paper available as PDF or PS)

Author
P. Sommer
Speaker
P. Sommer
Institution
London School of Economics and Political Science
London, UK
Biography
Peter Sommer
Peter Sommer is Senior Research Fellow at the Computer Security Research Centre at the London School of Economics and Political Science, where his speciality is computer forensics. His first degree, from Oxford University, is in law. Fifteen years ago he wrote "The Hacker's Handbook", now an archive staple of the hacker sites, which spent seven weeks on the UK best-seller list. His substantial private practice serves insurers, loss adjusters, large corporate security companies and lawyers. Expert witness assignments have included defence work in the Rome Labs case, other cracking and phreaking cases, Official Secrets, narcotics trafficking and the distribution of paedophiliac material, but he has also advised prosecution and criminal intelligence agencies in the UK and elsewhere, as well as giving evidence before a Select Committee of the House of Lords.

## Abstract

While the initial aim of intrusion detection tools is to alert system administrators so that they can set up evasive and preventative measures, a secondary consequence may be legal proceedings. These may include criminal prosecution of miscreants and recovery of civil damages; alternatively, the victim of an intrusion may need to defend against accusations of negligence.

What is required to turn the output of an IDS into legally reliable evidence? While "scientific" proof depends on the application of generally recognised methods of scientific investigation, "legal" proof depends more on the rules of admissibility of evidence and what is convincingly presented in court.

In this paper I propose to discuss the underlying concepts of "legal reliability" and then show how it may be achieved in practical circumstances. Technology alone is insufficient without proper regard for procedure. The features that may make for a useful IDS may have little relevance when trying to collect legally reliable evidence. There are important implications not only for the design of intrusion detection tools but also the procedures that investigators employ.

# GASSATA, A Genetic Algorithm as an Alternative Tool for Security Audit Trails Analysis (slides available as PDF or PS) (full paper available as PDF or PS)

Author
Ludovic Me
Speaker
Ludovic Me
Institution
SUPELEC
B. P. 28
35511 Cesson Sevigne Cedex
France
Biography
Ludovic Me
Ludovic Mé, born in 1963, is currently an associate professor at Supelec. He received the "diplôme d'ingénieur Supelec" in 1987 and a PhD ("Doctorat de l'Université de Rennes 1") in 1994. In 1995 he won the AFCET award for the best French PhD dissertation in security and safety. His current research interests include computer network security and artificial life.

## Abstract

Security audit efficiency is low because the security officer has to sift through such a huge amount of data recorded in the audit trail that the task is humanly impossible.

Therefore, our objective is to design an automatic tool to increase the efficiency of security audit trail analysis. The tool, called GASSATA (Genetic Algorithm for Simplified Security Audit Trails Analysis), should be viewed as one more tool in the set that allows the security officer to keep a sharp eye on potential intrusions.

The main ideas on which our work is based are the following:

• Anomaly detection (i.e., answering the question "is the user's behavior normal with respect to the past?") is well treated by tools such as NIDES, so we choose to investigate misuse detection (i.e., answering the question "does the user's behavior correspond to a known attack described as an attack scenario?").
• We have to detect intrusions on heterogeneous networks on which the construction of a global time is impossible, so we eliminate the timing aspect of the attack scenarios (which is why we qualify our analysis with the adjective "simplified"); scenarios are given as sets of events generated by the attacks.
• Our approach is pessimistic, in the sense that we try to explain the data contained in the audit trail by the occurrence of one or more attacks.
• This explanation problem is NP-complete, so we use a heuristic method, genetic algorithms, to solve it.
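
The search can be sketched as follows, with invented numbers (this is not GASSATA's actual encoding or operators, only an illustration of the idea): a hypothetical attack-events matrix AE records how many events of each type one occurrence of each attack generates, O holds the counts observed in the trail, and a binary hypothesis vector (which attacks occurred) is evolved by a simple genetic algorithm whose fitness rewards asserted attacks and penalises hypotheses that would require more events than were observed.

```python
import random

random.seed(1)

# Hypothetical attack-events matrix: AE[i][j] = number of events of
# type i that one occurrence of attack j generates (invented values).
AE = [
    [2, 0, 1],   # event type 0
    [0, 3, 0],   # event type 1
    [1, 1, 0],   # event type 2
    [0, 0, 2],   # event type 3
]
O = [3, 0, 1, 2]          # observed event counts in the audit trail
WEIGHTS = [1, 1, 1]       # priority weight of each attack
PENALTY = 10              # cost per event the hypothesis over-explains

def fitness(h):
    """Reward attacks asserted present; penalise hypotheses that would
    require more events than the trail actually contains."""
    score = sum(w * x for w, x in zip(WEIGHTS, h))
    for i, row in enumerate(AE):
        need = sum(a * x for a, x in zip(row, h))
        score -= PENALTY * max(0, need - O[i])
    return score

def evolve(pop_size=30, generations=50, mut_rate=0.1):
    n = len(WEIGHTS)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                      # elitism: keep the best two
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)   # select among the fittest
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            child = [x ^ (random.random() < mut_rate) for x in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
print(best)  # hypothesis vector: which attacks best explain the trail
```

With these numbers, the hypothesis asserting attacks 0 and 2 explains the observed counts exactly, while asserting attack 1 would over-explain event type 1 and is heavily penalised.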

Our presentation is organised as follows. Section 1 presents our view of the security audit trail analysis problem. Section 2 shows how to apply genetic algorithms to this problem. Section 3 discusses our experiments, which exhibit fairly good results. Finally, section 4 concludes and proposes further work.

# Using Bottleneck Verification to Find Novel New Attacks with a Low False Alarm Rate (slides available as PDF or PS)

Authors
R. P. Lippmann , D. Wyschogrod , S. E. Webster, D. J. Weber, S. Gorton
Speaker
R. P. Lippmann
Institution
M.I.T. Lincoln Laboratory
Information Systems Technology Group
244 Wood Street
Lexington, MA 02420-9185 ; USA
Biography
R. P. Lippmann
Richard P. Lippmann received a B.S. degree in electrical engineering from the Polytechnic Institute of Brooklyn in 1970 and a Ph.D. degree in electrical engineering from the Massachusetts Institute of Technology in 1978. His Ph.D. thesis dealt with signal processing for the hearing impaired.
From 1978 to 1981 he was Director of the Communications Engineering Laboratory of the Boys Town Institute for Communication Disorders in Children, Omaha, NE. He worked on speech perception, speech training aids for deaf children, sound alerting aids for the deaf, and signal processing for hearing aids. In 1981 he joined MIT Lincoln Laboratory, and is currently a senior staff member in the Information Systems Technology Group. Recent research interests include speech recognition by humans and machines, developing improved neural network and statistical pattern classifiers, medical risk assessment, the development of portable software for pattern classification, the development of low-power VLSI neural network circuitry, and the application of neural networks and statistics to problems in computer intrusion detection. He has supervised numerous MIT student theses in these areas.
Dr Lippmann received the first IEEE Signal Processing Magazine award for an article entitled "An Introduction to Computing with Neural Nets" published in April 1987. He was program chair of the 1989 Conference on Neural Information Processing Systems (NIPS), and is a founding member of the NIPS Foundation, created to support this yearly conference. He is an associate editor for Neural Computation, is on the editorial board of Neural Networks, has served on program committees for all IEEE Workshops on Neural Networks for Signal Processing, and is on the program committee for the annual Machines That Learn workshops. He is a member of the IEEE Signal Processing Society and the International Neural Network Society.

## Abstract

A new low-complexity approach to intrusion detection called "bottleneck verification" was developed which can find novel attacks with low false alarm rates. Bottleneck verification is a general approach to intrusion detection designed specifically for systems where there are only a few legal "bottleneck" methods to transition to a higher privilege level and where it is relatively easy to determine when a user is at a higher level. The key concept is to detect 1) when legal bottleneck methods are used and 2) when a user is at a high privilege level. This approach detects an attack whenever a user performs operations at a high privilege level without using legal bottleneck methods to transition to that level. It can theoretically detect any novel attack that illegally transitions a user to a high privilege level without prior knowledge of the attack mechanism.

Bottleneck verification was first applied to UNIX workstations to detect when an attacker illegally achieves root-level privilege and starts a root shell. Initial experiments used sniffed network data, ASCII transcripts created by extracting text strings from the sniffed data, and more than 80,000 actual telnet sessions. Use of the single legal command that can transition to root (the UNIX "su" command) was detected using a telnet transcript parser written in Perl. The same parser was also used to determine when a user obtains root-level privilege, primarily by examining the shell prompt and the types of commands that are successfully issued. Bottleneck verification was compared to a baseline intrusion detection system which uses keyword counts and an expert system to determine the likelihood that a telnet session contains an attack. Receiver operating characteristic (ROC) curves were generated for the baseline system and for bottleneck verification using known instances of attacks in the real data. At a detection rate of 80%, bottleneck verification reduced the false alarm rate by almost two orders of magnitude compared to the baseline system. It also indicated the exact locations in scripts where attacks occur and is efficient enough for a real-time implementation to be considered. Bottleneck verification found many different types of attacks that lead to illegal root transitions, including buffer overflows, suid root backdoor shells, and bugs in specific application programs, without any prior knowledge of these attacks.

Following this initial work, the bottleneck verification approach was extended to work with Solaris Basic Security Module (BSM) audit log data. The BSM bottleneck verification algorithm issues a warning whenever a root shell is started from a process whose owner neither logged in directly as root (and thus started with root permissions) nor executed a valid "su" command anywhere in the process's ancestry. It detects a shell being launched by monitoring all fork() and exec() system calls. A table of all currently running processes is kept, which includes each process id and privilege level. Entries are removed from the table by monitoring the exit() and kill() system calls to detect when processes are terminated. This system was implemented in real time using a Perl program that examines BSM audit data after it has been converted to ASCII with the "praudit" application. Real-time implementation is possible because only a few audit events are monitored and the detection logic is relatively simple. Testing used buffer overflow and suid root shell script attacks. BSM bottleneck verification found all attacks and has not produced any false alarms when monitoring normal computer activities.
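
As a rough illustration of that logic (not the actual Lincoln Laboratory implementation, which parses real binary BSM records), the sketch below keeps a process table and raises an alarm when a shell is executed with root privilege by a process whose ancestry contains no legal transition. The event tuples are an invented stand-in for audit records:

```python
def bottleneck_verify(events, shells=("/bin/sh", "/bin/csh")):
    """Return the pids that started a root shell without a legal
    transition (root login or successful su) in their ancestry."""
    procs = {}   # pid -> {"euid": effective uid, "legal_root": bool}
    alarms = []
    for ev in events:
        kind = ev[0]
        if kind == "login":                        # ("login", pid, uid)
            _, pid, uid = ev
            procs[pid] = {"euid": uid, "legal_root": uid == 0}
        elif kind == "fork":                       # ("fork", ppid, pid)
            _, ppid, pid = ev
            procs[pid] = dict(procs[ppid])         # child inherits ancestry flag
        elif kind == "su" and ev[2] == "success":  # ("su", pid, outcome)
            procs[ev[1]]["legal_root"] = True      # the one legal bottleneck
        elif kind == "exec":                       # ("exec", pid, path, euid)
            _, pid, path, euid = ev
            procs[pid]["euid"] = euid
            if path in shells and euid == 0 and not procs[pid]["legal_root"]:
                alarms.append(pid)                 # root shell, no legal path
        elif kind == "exit":                       # ("exit", pid)
            procs.pop(ev[1], None)
    return alarms

# A legitimate root shell via su, and an illegal one via an exploit:
legit = [("login", 100, 1000), ("su", 100, "success"),
         ("fork", 100, 101), ("exec", 101, "/bin/sh", 0)]
attack = [("login", 200, 1000), ("fork", 200, 201),
          ("exec", 201, "/usr/bin/eject", 1000),   # hypothetical exploit
          ("exec", 201, "/bin/sh", 0)]
print(bottleneck_verify(legit), bottleneck_verify(attack))
```

The point, as in the abstract, is that no knowledge of the exploit itself is needed: only the legal bottleneck (su, root login) and the high-privilege state are modelled.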

Current work involves performing more extensive testing of BSM bottleneck verification, developing a real-time version of the sniffer-based algorithm, and extending the BSM algorithm to detect other hacker actions besides root shell creation, such as modification of the passwd file and other important system configuration files.

# The Use of Information Retrieval Techniques for Intrusion Detection

Authors
Ross Anderson , Abida Khattak
Speaker
Not specified
Institution
University of Cambridge
Computer Laboratory
Pembroke Street, Cambridge CB2 3QG ; UK
Biography
Ross Anderson
not provided

## Abstract

Intrusion detection is a broad problem, and we need a greater range of tools than is currently available. In this article we report a new approach: we have applied information retrieval techniques to index audit trails. These indexes can be extremely efficient at detecting attacks whose signature is an unusual combination of events, and they may consume only a very small additional amount of storage. This approach allows the intrusion detection community to adopt a wide range of techniques developed in applications ranging from library science to web search engines. An on-line version of this paper can be downloaded by FTP.
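
A toy version of the idea, with invented session data: build an inverted index from event tokens to the sessions that contain them, exactly as a text-retrieval engine indexes terms in documents, and answer an attack-signature query by intersecting posting lists.

```python
from collections import defaultdict

# Hypothetical audit sessions: session id -> list of event tokens.
sessions = {
    "s1": ["login", "ls", "cat", "logout"],
    "s2": ["login", "failed_su", "failed_su", "cat_shadow", "logout"],
    "s3": ["login", "cat_shadow", "logout"],
}

# Inverted index: each event token points to the sessions containing it.
index = defaultdict(set)
for sid, events in sessions.items():
    for ev in events:
        index[ev].add(sid)

def query(signature):
    """Sessions containing every event in a signature: an intersection
    of posting lists, the standard boolean-retrieval operation."""
    postings = [index[ev] for ev in signature]
    return set.intersection(*postings) if postings else set()

# An "unusual combination of events" as a signature query:
print(query(["failed_su", "cat_shadow"]))  # -> {'s2'}
```

Each individual event may be common ("cat_shadow" appears in two sessions here); it is the combination that is rare, and the index finds it without scanning the raw trail.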

# Tools for Intrusion Detection: Results and Lessons Learned from the ASAX Project (slides available as PowerPoint (gzipped) or HTML)

Authors
Abdelaziz Mounji , Baudouin Le Charlier
Speaker
Abdelaziz Mounji
Institution
SWIFT s.c., Belgium
and
Computer Science Institute, Namur, Belgium
Biography
Abdelaziz Mounji
Abdelaziz Mounji obtained his PhD in Computer Science at the Computer Science Institute of the University of Namur (Belgium) in 1997. In the ASAX project, his research involved the design and implementation of languages and tools for intrusion detection. In March 1998 he joined SWIFT Belgium's Global Information Security department as a Senior Analyst, where he works on the Active Security Monitoring project.
Baudouin Le Charlier
Baudouin Le Charlier is full professor at the institut d'informatique des Facultés Universitaires Notre-Dame de la Paix à Namur. He is teaching programming methodology, theories of programming languages, and abstract interpretation. His research interests include static analysis of programming languages (especially, abstract interpretation of declarative languages), design of specialized application-oriented programming languages (in particular, languages for audit trail analysis), and programming methodology (with special focus on understanding the role of specifications, i.e., formal versus natural language).

## Abstract

The ASAX project made a significant contribution to the intrusion detection effort through the design of generic tools and languages supporting both distributed audit trail analysis and static auditing.

In the area of distributed audit trail analysis, the ASAX project developed a novel rule-based language that has proven powerful enough to detect complex intrusion patterns efficiently. The rule-based language can be viewed as a general mechanism for matching arbitrary event patterns in an event stream. The language is independent of the target system architecture, the operating system, and the domain the event stream belongs to. Consequently, the implemented system has been used to analyze standard Unix logs, C2 audit trails, IP packets, and viral activity on an emulated PC environment. The system can further be customized to other applications, such as firewall log analysis.
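
The core of such pattern matching can be sketched as follows. ASAX's rule language (RUSSEL) is far richer, with variables, simultaneous partial matches and triggered actions, so this only shows the basic idea of advancing a rule through a domain-independent event stream; the rule and events are invented:

```python
def match_sequence(events, predicates):
    """Advance a rule through an event stream: each predicate must be
    satisfied, in order, by some later event. Returns True once the
    whole pattern has been seen."""
    i = 0
    for ev in events:
        if i < len(predicates) and predicates[i](ev):
            i += 1
    return i == len(predicates)

# Hypothetical rule: three failed logins by "bob", eventually followed
# by a successful one (a crude password-guessing pattern).
fail = lambda ev: ev == ("login_fail", "bob")
ok = lambda ev: ev == ("login_ok", "bob")
rule = [fail, fail, fail, ok]

stream = [("login_ok", "alice"), ("login_fail", "bob"),
          ("login_fail", "bob"), ("cron", "root"),
          ("login_fail", "bob"), ("login_ok", "bob")]
print(match_sequence(stream, rule))  # -> True
```

Because the matcher only sees opaque event tuples and predicates, the same mechanism applies unchanged to Unix logs, C2 audit records or IP packets, which is the portability property the abstract emphasizes.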

In the area of static analysis, an incremental implementation of a datalog-like language has been developed that allows the security administrator to express known security vulnerabilities and provides a real-time notification when any of these vulnerabilities are created in the system.

Moreover, both components (intrusion detection and static auditing) have been combined to provide adaptive intrusion detection, where the active intrusion detection rule set is dynamically changed according to the vulnerabilities detected by the static analysis component.

Besides presenting ASAX's main results, the talk will also highlight the main lessons to be drawn from our experience. In particular, it will emphasize several factors that should be taken into consideration to achieve wider acceptance within the system administrator community.

Whereas the project has provided generic mechanisms for detecting real-life attack scenarios, a further step would have been to deliver packaged, customized, ready-to-use ("out of the box") solutions. This step is an engineering problem (as opposed to a theoretical one) that requires industrial partnerships to deliver ready-to-deploy solutions.

The important aspects related to this include:

• Expertise: the system helps to translate expertise into an operational system, but it does not provide the expertise (although it may help to analyze traces).
• Industrial support: needed for maintenance, advertising, dissemination, etc.
• A customizable interface to produce ready-to-use solutions: users want a system that is easy to use and does not require a deep understanding of technical security issues.

# Dependability of Large-scale Infrastructures and Challenges for Intrusion Detection (see the RAID98 workshop report by the same author: PDF, PS, Word (gzipped) or HTML)

Author
Marc Wilikens
Speaker
Marc Wilikens
Institution
Joint Research Centre of the EC
Institute for Systems, Informatics and Safety
21020 Ispra (VA)
Italy
Biography
Marc Wilikens
Marc Wilikens is a staff member of the Institute for Systems, Informatics and Safety of the Joint Research Centre (JRC), a pan-European research centre of the European Communities. He leads the Dependable Software Applications research group of the Reliable Information Technologies Unit. In 1997 and 1998 he contributed to defining and shaping a European Dependability Initiative, which will be integrated into the EC's new Information Society Technologies research programme (1999-2002).

## Abstract

As part of the preparatory work for the establishment of a European Dependability Initiative, a thematic industrial workshop was held in Brussels on March 18th, 1998 on the theme: "Dependability of large-scale infrastructures and services in the Information Society". The workshop objective was to formulate a comprehensive perspective on dependability issues emanating from the critical applications of large-scale networked infrastructures and services, including communications infrastructure services, energy distribution, financial services, virtual enterprises, retail, health, public administrations. The paper will report on the main results obtained and highlight issues of intrusion detection within a wider dependability and systems framework.

The main drivers that give dependability technologies a prominent role for enhancing trust and confidence in applications that rely on large-scale "open" infrastructures and services are:

• Technological: global interconnectivity of infrastructures and their systems-of-systems nature, layering of services in a deregulated telecoms industry, inclusion of COTS and legacy systems, and the increasing business value of intangible goods such as information and content.
• Threats: new threats arising from globalisation, the openness of networks, the increasing business value of information and the widespread availability of intrusion tools.
• Legal: heterogeneity of legal approaches to trust services; liability issues.
• Societal/cultural: changes affecting chains of trust; understanding of the benefits and real threats of electronic trading.

The main dependability challenges faced by Industry and impacting intrusion detection in particular are summarised as needs for:

• Better characterisation of risks and dependability needs at the various infrastructure and service layers, allowing the required service level to be traded and mediated between service provider and customer.
• System architectures allowing for high service availability and prediction of service level in the light of malicious and accidental faults.
• Bridging the culture gap by raising awareness of realistic security threats, good practice and codes of conduct.

# How Re(Pro)active Should An IDS Be? (slides available as PDF, LaTeX or PS)

Author
Dr. Richard Overill
Speaker
Dr Richard E Overill
Institution
Department of Computer Science and International Centre for Security Analysis
King's College London
Strand, London WC2R 2LS, UK
Biography
Dr. Richard E. Overill
Dr Richard E Overill is a Senior Lecturer in Computer Science at King's College London, and a member of staff of the International Centre for Security Analysis. His current areas of research are parallel computing, computer-related crime, and computer security, particularly Intrusion Detection Systems. He has published over 50 academic papers.

## Abstract

"Some say the world will end in fire, Some say in ice."
(Fire and Ice, Robert Frost, 1923)

The classical security paradigm of Protect, Detect, React has traditionally been applied to the field of information security, with firewalls taking on the role of protection while detection is handled by Intrusion Detection Systems (IDS). This admittedly simplified picture leaves open two questions: who should react, and how?

While the role of reaction has traditionally been assumed by the system or network manager, it has become evident that an IDS which operates online and in real time can also be programmed to behave either reactively or proactively. A 'reactive' IDS would respond to the detection of an intrusion by, for example, terminating the suspect process, disconnecting the offending user, or modifying a router filter list. A 'proactive' IDS, on the other hand, would not wait to flag an intrusion but would instead take pre-emptive countermeasures; it might, for example, actively interrogate all extant user processes, perhaps using counterfeit Trojan utilities [1], and terminate all those processes which did not originate from bona fide users at approved sites [2].

There are potential problems with these 'active defence' scenarios. Firstly, unless the IDS detection thresholds are very carefully tuned to minimise the occurrence of 'false positives' [3], a reactive IDS may be triggered to disconnect an innocent user or a legitimate customer, or to unnecessarily shut down a network service, with a consequential loss of goodwill and business. There is also the possibility of dumping a user who has unwittingly become the subject of an electronic 'framing' attack involving 'protocol spoofing' of traffic that appears to contain an attack [4]. To quote David Curry of IBM: "the last thing you want is to blow away a legitimate customer" [5].

These considerations have recently been sharpened and focussed by the announcement of Blitzkrieg [6] from Network Waffen und Munitionsfabriken Group [7]. Two versions of this system have reportedly been developed: an aggressive military version is designed to wage cyberwarfare by launching malicious software attacks against intruders by attempting to damage or destroy information on their computers; a somewhat milder business version attempts to ward off denial-of-service and other common attacks where the intruders' aims are to prevent the operation of a commercial service rather than to destroy data per se.

These strategies raise questions of legality and of ethics. In the UK, the Computer Misuse Act of 1990 includes both a Basic Hacking offence and an Unauthorised Modification offence. Any attempt by an IDS to gain unauthorised access to an intruder's computer would fall foul of the former offence; the launching of a malicious software attack against an intruder's system by an IDS is covered by the latter offence which carries a penalty of up to 5 years imprisonment on conviction. These actions are also illegal under US law.

At least as important as the legal issues are the ethical implications. Do we want IDS systems that may retaliate against the wrong person, or against someone who has made a genuine mistake or is harmlessly curious, or because of a software error? Verifying that a genuine hacking incident has occurred can sometimes be extremely difficult indeed.

Above all, we should not be seduced by the image of ICE, the Intrusion Countermeasures Expert in William Gibson's Neuromancer, into believing that we can delegate human responsibility to automated systems in this sensitive area.

References

[1] S M Bellovin, There Be Dragons, Proc 3rd Usenix UNIX Security Symposium, Baltimore, MD (September 1992) Ch.27, pp.1-16.

[2] A Rathmell, R Overill and L Valeri, Information Warfare Attack Assessment System (IWAAS), in Proc 1st DERA Quadripartite IW Seminar, London (October 1997).

[3] R E Overill, Intrusion Detection Systems: Threats, Taxonomy, Tuning, Journal of Financial Crime, Vol.6 No.1 (August 1998) in press.

[4] T Ptacek and T H Newsham, Insertion, Evasion and Denial of Service: Eluding Network Intrusion Detection, Secure Networks, Inc., Technical Report (January 1998).

[5] R Power, CSI Roundtable: Experts discuss present and future intrusion detection systems, Computer Security Journal, Vol.XIV No.1 (Winter 1998) pp.1-18.

[6] C A Robinson Jr, Make-My-Day Server Throws Gauntlet to Network Hackers, AFCEA Signal Magazine, Vol 52 No.9 (May 1998) pp.19-24.

# Contribution of Quantitative Security Evaluation to Intrusion Detection (slides available as PowerPoint (gzipped) or HTML)

Author
Yves Deswarte
Speaker
Yves Deswarte
Institution
INRIA
7 avenue du Colonel Roche
31077 Toulouse cedex 4 ; France
Biography
Yves Deswarte
Yves Deswarte, born in Roubaix, France, in 1949, received the Certified Engineer degree from ISEN, Lille, in 1972 and the Computer Science Specialization Engineer degree from the ENSAE, Toulouse, in 1973. Mr. Deswarte is currently "Directeur de Recherche INRIA". Formerly an R&D engineer with a major French computer manufacturer, he joined INRIA and LAAS in 1979. He has been a member of the LAAS research group on Fault Tolerance and Dependable Computing since 1984. Since 1973, he has been a main contributor to the design of seven fault-tolerant computing systems. Mr. Deswarte is the author of more than 50 publications in the domains of fault tolerance and security. He is currently the chairman of the International Steering Committee of ESORICS, the European Symposium on Research in Computer Security, and is the Program Committee Chair of ESORICS 98.

## Abstract

For the last 7 years, we have been developing a new methodology to assess the security of operational computing systems. The aim of the method is to obtain a quantitative measurement of the security of the system in operation.

The first step is to model the vulnerabilities exhibited by the system. The model is called a privilege graph, in which a node X represents a set of privileges owned by a user or a set of users (e.g., a Unix group). An arc from node X to node Y indicates that a method exists for a user owning the privileges of X to obtain those of node Y. A vulnerability represented by an arc can be a direct security flaw, such as an easily guessed password or bad directory and file protections enabling the implantation of a Trojan horse. But the vulnerability is not necessarily a security flaw; it can instead result from the use of a feature designed to improve security. For instance, in Unix, the .rhosts file enables a user U1 to grant most of his privileges to another user U2 without disclosing his password. This is not a security flaw if U1 trusts U2 and needs U2 to undertake some tasks for him (less secure solutions would be for U1 to give U2 his password or to reduce his protections). But if U2 grants some privilege to U3 (i.e., U2 trusts U3), then by transitivity U3 can reach U1's privileges, even if U1 does not trust U3. A third class of arcs represents privilege subsets directly issued from the protection scheme: e.g., with Unix groups, there is an arc from each node representing the privilege set of a group member to the node representing the privilege set of the group.

In a second step, a weight is assigned to each arc, corresponding to the effort needed for a possible attacker to succeed in executing the method the arc represents. This effort is a multidimensional value encompassing the attacker's knowledge, his competence, the time and computing power needed to perform the method, etc. A value is then computed over all the paths from the nodes representing the privileges of possible attackers to the nodes representing the privileges of possible attack targets. By analogy with the MTTF measure used in reliability, this value is called METF, for Mean Effort To security Failure.
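
As a simplified illustration, the sketch below computes the total effort of the cheapest attack path in a small invented privilege graph using Dijkstra's algorithm. The real METF measure aggregates over all paths with a model of attacker behaviour rather than just taking the minimum, and real efforts are multidimensional; single numbers are used here only to show the graph computation.

```python
import heapq

# Hypothetical privilege graph: node -> {neighbour: effort of the arc}.
# Arcs mirror the .rhosts transitivity example from the text.
graph = {
    "outsider": {"U3": 2.0},    # e.g. guess U3's weak password
    "U3":       {"U2": 1.0},    # U2's .rhosts trusts U3
    "U2":       {"U1": 1.0},    # U1's .rhosts trusts U2
    "U1":       {"root": 5.0},  # e.g. Trojan horse against root
}

def min_effort(graph, source, target):
    """Dijkstra over arc efforts: total effort of the cheapest path."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")                   # target unreachable

print(min_effort(graph, "outsider", "root"))  # -> 9.0
```

A low result flags a cheap attack path worth closing; recomputing after each configuration change (as the daily graph generation in the experiment does) tracks how security evolves.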

A set of tools has been developed to generate the privilege graph automatically and to compute METF measures. These tools have been used experimentally to monitor a large real system (a LAN with 300 Unix machines and 700 users) for more than a year, with one privilege graph generated each day. The experimental results will be presented and the validity of the measures discussed.

Intrusion detection techniques could benefit from the privilege graph model: if the user behavior observed by an intrusion detection system can be correlated with progress in the privilege graph towards a potential target, alarms of different levels can be triggered according to the likelihood of reaching the target. It should be possible to integrate privilege graph analysis into sophisticated intrusion detection tools to help detect malicious activities carried out by a hacker impersonating other users by using their privileges. Conversely, the statistical user profiles gathered by most intrusion detection tools could be exploited by our security evaluation tools to assess more precisely the weights to associate with some arcs, in particular those representing Trojan horses.

# Intrusion Detection in Telecommunication

Author
Hai-Ping Ko
Speaker
Hai-Ping Ko
Institution
GTE Laboratories Incorporated
Waltham, MA 02254 ; USA
Biography
Hai-Ping Ko
Hai-Ping Ko is a Senior Principal Member of Technical Staff at GTE Laboratories, Incorporated. Her current research is focused on intrusion detection for the telecommunication infrastructure. She has worked in U.S. industry since 1981, conducting industrial research and development in network protocol security analysis, logic programming, computer algebra, formal program verification, and automated theorem proving; she has also conducted academic research in combinatorial theory. She has published more than 25 technical papers. She is a member of the ACM, a mathematical reviewer for the AMS, and serves from time to time as a referee for computing journals.
Hai-Ping received her Ph.D. degree in Mathematics from Ohio State University in 1978 and an M.S. degree in Computer Science from the same university. Before joining GTE in 1995, she worked at the MITRE Corporation, the GE Research Center, SUNY at Albany, and Oakland University in Michigan.

## Abstract

The telecommunication infrastructure is becoming an increasingly critical part of everyone's daily personal and business communication. I plan to describe my current assessment of the following:
• What is the telecommunication infrastructure?
• What is meant by intrusion detection in telecommunication?
• How important is intrusion detection in telecommunication?
• How much of it has been done?
• What else needs to be done?
• What technologies seem to be ready for extensive use?

Then I shall describe an on-going research and development effort on intrusion detection at GTE Labs.

I shall describe three types of communication in the telecommunication infrastructure: (1) TCP/IP, (2) switch-to-switch telephone communication, and (3) satellite-to/from-ground communication. Intrusions into type (1) communication are widely known. Intrusions into type (2) and (3) communications have occurred but are less well understood. Many intrusion detection tools have been developed for type (1) attacks, but not very many for types (2) or (3). It seems clear that rule-based and integrity-checking-based intrusion detection are effective and promising. Data visualization can also be useful and is ready for extensive use.

One intrusion detection effort at GTE Labs is to detect and respond to attacks on cellular and telephone switches. In 1997, a system prototype was developed to demonstrate the possibility of (1) generating audit data of user activities on a cellular switch without jeopardizing the availability and performance of its usual service, (2) analyzing the audit data and intrusion alerts in near real time, (3) detecting an early tiger-team attack scenario, 30+ misuses, and 40+ types of anomalies, and (4) adaptive control. It used an existing intrusion detection tool, NIDES, as the core of its data analysis and added a second layer of expert system to control the alerts and audit data. One cellular switch was monitored. Thousands of alerts have been observed and analyzed.
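The role of a second layer above the core analysis engine can be sketched roughly as follows (hypothetical; NIDES's actual interfaces are not shown, and the window/threshold policy is invented): a small controller that suppresses isolated raw alerts and escalates only when a source keeps alerting within a time window:

```python
from collections import defaultdict

class AlertController:
    """Toy second-layer filter over a stream of raw IDS alerts."""

    def __init__(self, window=60.0, threshold=3):
        self.window = window               # sliding window, seconds
        self.threshold = threshold         # raw alerts needed to escalate
        self.history = defaultdict(list)   # alert source -> timestamps

    def submit(self, source, timestamp):
        """Return 'ESCALATE' or 'SUPPRESS' for one raw alert."""
        times = self.history[source]
        # Drop timestamps that have fallen out of the sliding window.
        self.history[source] = times = [
            t for t in times if timestamp - t < self.window
        ]
        times.append(timestamp)
        return "ESCALATE" if len(times) >= self.threshold else "SUPPRESS"

ctl = AlertController(window=60.0, threshold=3)
print(ctl.submit("switch-1", 0.0))    # SUPPRESS (1 alert in window)
print(ctl.submit("switch-1", 10.0))   # SUPPRESS (2 alerts)
print(ctl.submit("switch-1", 20.0))   # ESCALATE (3 alerts within 60 s)
print(ctl.submit("switch-1", 100.0))  # SUPPRESS (earlier alerts expired)
```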

In 1998, more and more cellular switches are being monitored. Telephone switches will be monitored as well. The earlier system prototype is being expanded. The intrusion alerts are being validated. A few commercial intrusion detection tools are being evaluated and possibly included in the new system prototype. The alerts arising from different switches and different intrusion detection tools are being compared. The second layer of expert system is being further developed. Problems and solutions of false alarms, anomalies, and adaptive control are being formulated.

It is clear that many intrusion detection systems are effective and useful for the above type (1) network. However, it is not clear exactly how to describe which attacks can be detected under what conditions. As we gradually move on to intrusion detection for type (2) and (3) networks, we are also beginning to develop a framework defining the characteristics of and inter-relationships among attacks, detection methods, and security states. This will be the foundation for our expert system for adaptive control.

# Problems with Network-based Intrusion Detection for Enterprise Computing (slides available PowerPoint (gzipped), PS or HTML)

Authors
Thomas Daniels, Eugene Spafford
Speaker
Thomas Daniels
Institution
COAST Laboratory
Purdue University
West Lafayette, IN 47907-1398 ; USA
Biography
Thomas Daniels
Thomas (Tom) Daniels is a graduate assistant in Purdue University's Computer Operations, Audit, and Security Technology (COAST) Laboratory. He is a computer science Ph.D. student under the advisement of Dr. Gene Spafford and a 1998-99 recipient of the Intel Graduate Fellowship. His most recent work concerns audit requirements for host-based detection of low-level network attacks. Previously, Mr. Daniels studied vulnerability analysis while contributing to the COAST vulnerability database project. Tom received his Bachelor of Science in Computer Science in 1995 from Southwest Missouri State University. During and immediately after his undergraduate studies, he worked extensively in industry on projects involving real-time systems, database systems, and system administration.
Gene Spafford
Gene Spafford is a professor of Computer Sciences at Purdue University. He is the founder and director of the COAST Laboratory, and the new Center for Education and Research in Information Assurance and Security (CERIAS). He has been involved in intrusion detection and avoidance research for over a decade.

## Abstract

Many organizations have turned to network-based intrusion detection systems (NIDS) to protect their enterprise computing and network resources from network-based attacks. Network-based intrusion detection systems are easy to install on a network, but they have fundamental problems that can lead to inaccurate or incomplete results. Additionally, new network technology may make network-based intrusion detection even more difficult. This talk discusses some of the problems of current network-based intrusion detection systems and the obstacles that new network technology may present to conventional intrusion detection approaches in an enterprise computing environment.

Recent work by others [1] has shown that existing commercial NIDS are susceptible to several different kinds of attacks that allow an attacker to avoid detection. Many of these attacks exploit a NIDS's inability to fully simulate network routing behavior or an end-system's protocol stack. Furthermore, efforts to simulate network behavior more fully may increase a NIDS's vulnerability to denial-of-service attacks. In the context of these attacks, we present a simple model of an enterprise network and consider the placement of NIDS within it. We also consider how some network technologies affect the placement and capabilities of a NIDS in the network. These technologies, such as virtual private networking, multiple routes inside and into enterprise networks, and switching hubs, critically affect the ability of network monitors to collect enough information to detect intrusive activity. In conclusion, we suggest that host-based intrusion detection systems should detect network-based attacks as well as insider misuse. To support this, further work is required to determine host audit content and auditing techniques to detect network attacks.
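One of the evasion classes described in [1] can be illustrated in a few lines (a simplified sketch; real TCP reassembly policies are more varied than the two shown, and the example payload is invented): if a NIDS and the end host resolve overlapping TCP segments differently, they reconstruct different data from the same packets, so the monitor can be made to see a harmless request while the host sees a hostile one:

```python
def reassemble(segments, policy):
    """Reassemble (offset, bytes) segments.

    policy: 'first' keeps the first byte written at each position
    (one plausible monitor behavior); 'last' lets later segments
    overwrite earlier ones (one plausible end-host behavior).
    """
    buf = {}
    for offset, data in segments:
        for i, byte in enumerate(data):
            pos = offset + i
            if policy == "last" or pos not in buf:
                buf[pos] = byte
    return bytes(buf[i] for i in sorted(buf))

# Third segment overlaps bytes 4..13 of the stream.
segments = [(0, b"GET /inde"), (9, b"x.html"), (4, b"/evil.htm ")]
print(reassemble(segments, "first"))  # b'GET /index.html'
print(reassemble(segments, "last"))   # b'GET /evil.htm l'
```

A monitor applying the "first" policy against a host applying the "last" policy never sees the string the host actually processes.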

References

[1] Ptacek and Newsham, "Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection," Secure Networks, Inc., January 1998.

# Lessons Learned in the Implementation of a Multi-Location Network Based Real Time Intrusion Detection System (slides available Freelance Graphics (gzipped) or HTML)

Authors
M. Puldy , M. Christensen
Speaker
Michael L. Puldy
Institution
IBM Emergency Response Service
6300 Diagonal Highway, MS-023B
Boulder, CO 80301
USA
Biography
Michael Puldy
Michael L. Puldy currently manages the global deployment and delivery of IBM's Emergency Response Service. This includes IBM's commercial ERS service, IBM's Anti-Virus services, and IBM's internal remote ethical-hack scan and response team. Michael is also manager of IBM CERT. Prior to the Emergency Response Service, Michael was involved in the development and operational implementation of IBM's large business recovery center in Boulder, Colorado, USA. In addition to his tenure at IBM, Michael has over 15 years of experience working in various industries, including banking, aerospace, and government. He has a B.S. in Computer Science from Clemson University and a Master of Business Administration from the University of North Florida.

## Abstract

This presentation will highlight IBM's Emergency Response Service's implementation of a multi-location real time intrusion detection system. After evaluating multiple technologies, IBM ERS settled on a network based intrusion detection system to monitor internet traffic. Although the technology of a network based intrusion system is relatively straightforward, the operational and response aspects of a multi-site implementation created a number of challenges. Issues of scalability, categorization of attacks, signature updates, and general remote management of network based RTID sensors will be discussed, along with how IBM ERS overcame these obstacles. Moreover, through various installations of this hardware, across multiple industries, IBM ERS has created a unique database containing the types and quantities of attacks on internet hosts and firewalls within the United States. Finally, the presentation will discuss operational and financial issues surrounding the establishment of a 24x7 network security operations center.

# Enhanced network intrusion detection in a smart enterprise environment (full text available Word (gzipped) or HTML)

Authors
Ricci Ieong , James Pang
Speakers
Ricci Ieong , James Pang
Institution
Cyberspace Center
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon ; Hong Kong.
Biographies
Ricci Ieong
Ricci Ieong received the B.Sc. (Hons.) degree in chemistry from the Chinese University of Hong Kong in 1994 and the M.Phil. degree in computer science from the Hong Kong University of Science and Technology in 1996. From 1995 to 1996, he was a Research Assistant in the Neural Computational Laboratory at the Hong Kong University of Science and Technology. From 1996 to 1997, he was a full-time Teaching Assistant at the Hong Kong University of Science and Technology for computer programming in C++ and Unix system administration courses. He is currently working in the Cyberspace Center of the Hong Kong University of Science and Technology on Internet and System Security, Electronic Commerce and Smart Card projects. His current interests include neural networks, computational neuroscience, neurodynamics, internet security, electronic commerce, smart cards and artificial-intelligence-aided intrusion detection systems.
James Pang
James Pang received his B.Sc. (Hons.) degree in Computer Science from the City University of Hong Kong in 1994 and his M.Phil. degree in Systems Engineering and Engineering Management from the Chinese University of Hong Kong in 1997. He then joined the Cyberspace Center of the Hong Kong University of Science and Technology and has been actively involved in various projects, including internet security, smart cards, Chinese-English online translation, etc. His current interests include internet technologies, internet security, intrusion detection systems, smart card technologies, and information retrieval.

## Abstract

In this electronic commerce era, more and more companies are using intranets and extranets as their confidential transaction media. This raises the competitiveness of a company on one hand but attracts misfeasors, masqueraders and clandestine users on the other. Studies on intrusion detection [1, 7, 8, 9] have shown that most intruders and hackers on internet sites or enterprise networks are insiders of those sites. To fight against those intruders, user-profile based statistical anomaly detection is a more suitable method than the misuse detection approach, especially within an enterprise network. However, how user-profiles should be stored is one of the main problems. This problem becomes more prominent when a world-wide enterprise network is involved. If these profiles are stored on only one localized unique domain profile server, then whenever a user wants to access the company network while traveling, either the network administrator must transfer the user's profile to the other site or the user must carry it to other sites personally. The best method is to allow users to carry their profiles with them. As user-profiles contain sensitive data, they should be stored in a highly secure storage medium, keeping intruders from accessing them. One favorable solution is to employ the smart card as this secure storage medium [3, 4, 5, 12].

With the use of smart card technology, data kept on the smart card can only be accessed or modified by authorized users or systems. Moreover, with the computational power of the chip card, encryption and other secure authentication procedures can be performed entirely on the card, making the stored data more secure. Also, with the implementation of the PC/SC smart card standard [11], the smart card will become a standard device in personal computers as well as Unix workstations. Furthermore, Gemplus and Microsoft have already developed user access control for the Windows NT system [2]. Therefore, the use of smart cards in a user-profile based intrusion detection system is a reasonable projection.

In this paper, we propose a smart card-based intrusion detection system, the Card-Based user-profile Statistical Anomaly Detection System (CBSADS), for enterprise network security protection. Within this protected network, all machines are equipped with smart card readers and run software security agents. A CBSADS smart card is issued to each authorized user. The smart card provides both secure enterprise network access and a reliable intrusion detection scheme. Each user card stores not only the user's personal information for authentication purposes, but also the user's privileges in the enterprise network, behavioral data, and user-specific network data.

When a user logs on to an enterprise network machine, all he/she needs to do is insert his/her CBSADS smart card. The authentication process is performed automatically, and the agent residing on that machine dynamically assigns a routing table from that machine to the destination machines and gateway according to the network information obtained from the CBSADS card and the authentication server. This controls a user's accessibility based on the authentication. The user's behaviors are captured and, together with the user-profile from the CBSADS card, a behavior "signature" is generated. This signature is produced by the statistical anomaly detection system, which summarizes the user's login and access times, most frequent logon locations and files, and keystroke speed. The agent also monitors the user and raises alerts if it finds actions exceeding the privileges that the user should have. In case of severe compromise, the machine is disconnected from the network by disabling the routing table.
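A minimal sketch of the kind of statistical check such an agent might perform (hypothetical; the feature names, profile format, and threshold are invented for illustration, not taken from CBSADS): compare each observed session feature against the mean and standard deviation stored in the user's profile and flag large deviations:

```python
def anomaly_score(observed, mean, std):
    """Standardized deviation of one observation from the stored profile."""
    if std == 0:
        return 0.0 if observed == mean else float("inf")
    return abs(observed - mean) / std

def check_session(profile, session, threshold=3.0):
    """Return the session features whose deviation exceeds the threshold."""
    flagged = []
    for feature, value in session.items():
        mean, std = profile[feature]
        if anomaly_score(value, mean, std) > threshold:
            flagged.append(feature)
    return flagged

# Invented profile: (mean, std) per behavioral feature, as might be
# read from the user's card at logon.
profile = {"keystrokes_per_min": (220.0, 25.0), "login_hour": (9.0, 1.5)}
session = {"keystrokes_per_min": 90.0, "login_hour": 9.5}
print(check_session(profile, session))  # ['keystrokes_per_min']
```

After a legitimate session, the profile means and deviations would be updated and written back to the card.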

Based on this proposed scheme, a prototype system has been implemented using the ACS 1KB smart card [14]. Currently, this system targets a single network site; it will be extended to support multiple sites across the internet. Java Card [6, 8, 13] will also be used, which provides more computational power.

References

[1] Donald L. Pipkin, "Halting the Hacker, a practical Guide to Computer Security", Prentice Hall PTR, Upper Saddle River, New Jersey, 1997

[2] Gemplus S. C. A., "Gemplus Previews Windows NT 5.0 Secure-Logon With Smart Cards At CardTech/SecurTech'98", http://www.gemplus.com/presse/1998/windows_nt5.htm, April 1998

[3] Gemplus S. C. A., "Smart Cards and the Internet", http://www.gemplus.com/welcome/internet.htm

[4] Gemplus S. C. A., "Smart Card Applications", http://www.gemplus.com/application.htm

[5] Gemplus S. C. A., "What is a Smart Card?", http://www.gemplus.com/welcome/what_is.htm

[6] Gemplus S. C. A., "Frequently Asked Questions Java Card and GemXpresso RAD", https://store.gemplus.com/WebObjects/Gemplus.woa/Resources/Cache/GemXpresso_Whitepaper.htm, March 1998

[7] Internet Security System Inc., "Understanding the Risk", http://www.iss.net/prod/utr.html

[8] Internet Security System Inc., "Adaptive Security Model, A Model Solution - A Solution Model", http://www.iss.net/prod/asm-2_wp/asm-2_wp3002.html, June 1998

[9] Java Card. http://java.sun.com:80/products/javacard/index.html

[10] M. Crosbie and K. Price, "Intrusion Detection Systems", http://www.cs.purdue.edu/coast/intrusion-detection/ids.html

[11] PC/SC Workgroup, "PC/SC Workgroup, Integrating PC's and Smart Cards", http://www.smartcardsys.com/

[12] Schlumberger Limited, "Smart Card Technology", http://www.slb.com/et/technology.html

[13] Schlumberger Limited, "Cyberflex 2.0 Multi 8K", http://www.cyberflex.austin.et.slb.com/cyberflex/cyberhome3.htm

[14] Advanced Card Systems Ltd., "ACOS1 CPU Card", http://www.acs.com.hk/smartcrd.htm

# Integrating Intrusion Detection into the Network/Security Infrastructure (slides available PowerPoint (gzipped) or HTML)

Author
Mark Wood
Speaker
Mark Wood
Institution
Internet Security Systems, Inc.
300 Embassy Row
Atlanta, GA 30348 ; USA
Biography
Mark Wood
Mark Wood joined ISS in the spring of 1996 as the Program Manager of ISS' Intrusion Detection Technology, with specified responsibilities for the product management of ISS' real-time intrusion detection tool, RealSecure.
Prior to joining Internet Security Systems, Mark served as the Business Line Director for RNS (formerly Rockwell Network Systems), where he was responsible for product management and for managing hardware and software engineering teams. He has held a number of marketing and marketing-management positions with other companies, including Distributed Systems International, Inc. and AT&T Bell Laboratories.
Mark holds a B.S. in Computer Science from Duke University and an M.S. in Computer Science from the Georgia Institute of Technology.

## Abstract

Network-based intrusion detection faces several technical and business challenges as networks evolve:
• Faster network speeds make it harder for ID systems to monitor activity,
• Highly-switched network infrastructures make it costlier for users to deploy ID systems in ways that provide adequate coverage,
• Encrypted packet payloads reduce the effectiveness of ID systems,
• The fact that security is still a cost center for many companies gives users an incentive to deploy ID systems selectively and to use them less frequently.

It's important that network-based intrusion detection systems evolve along with the networks and organizations they're protecting. What's the best way to make this happen? This talk discusses integration of intrusion detection capability into the network and security infrastructure as a solution to these challenges. It presents a representative approach to:

• Modularizing intrusion detection capability so that the appropriate components can be integrated into the target devices.
• Defining programming interfaces to make use of the services that are available on the target devices.
• Integrating intrusion detection capability into the next generation of network and security products -- routers, switches, firewalls, etc.

Specific examples of how this might work will be presented. The talk will conclude with a discussion of how this approach addresses the evolutionary problems discussed above.

# Measuring Intrusion Detection Systems (slides available PDF or PS)

Author
Roy A. Maxion
Speaker
Roy A. Maxion
Institution
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213 ; USA
Biography
Roy Maxion
Roy Maxion is a faculty member in the Computer Science Department at Carnegie Mellon University, and is the director of the CMU Dependable Systems Laboratory. His work includes automated diagnosis of large-scale systems and system performance evaluation. He is currently principal investigator for an anomaly intrusion detection system, and has built several diagnostic systems based on anomaly detection methods.

## Abstract

System effectiveness measurement is a key element of the design, procurement and use of intrusion detection systems. Measurement is sometimes seen as unexciting, but its results form the very pillars of science; they represent the basic data upon which important decisions are based, and in which confidence is placed.

To measure is to ascertain the dimensions of an artifact or process. One measures for four reasons: to characterize, to evaluate, to predict and to improve. Characterization provides understanding of processes, artifacts, resources and environments; it establishes baselines for comparisons with future assessments, and it enables comprehension of relationships among measured entities. Evaluation determines whether or not a required level of performance or service has been reached, and assesses the impacts of technologies, environments, and variabilities. Prediction extrapolates trends on the basis of available data, and estimates certain values based on observations of others, aiding in planning, risk assessment and design/cost tradeoffs. Improvement is attained when quantitative benchmark information is gathered to identify obstacles, causal mechanisms and inefficiencies that impinge on product and process quality and performance. Measurement provides a focus and an opportunity to determine how well something works and what to do to improve it.

This paper discusses effectiveness measurement for intrusion detection systems. It develops a range of techniques and metrics that can be applied to any intrusion detection system and to other kinds of systems as well. Special emphasis is given to anomaly intrusion detection systems, because these systems are the most likely to be platform and context transparent. Selected papers from the intrusion detection literature are reviewed with particular focus on measurement and evaluation issues. Papers are compared and contrasted against one another, as well as against a suite of proposed measurement methods. Example topics are: benchmarking, data and workload characteristics, test data selection, coverage, statistical analysis of measurement results, measurement scales, environmental factors, experimental designs, measures and metrics (e.g., speed, accuracy, reliability, availability, misclassification, hits, misses, false alarms, etc.), data collection procedures, instrumentation, sensitivity analyses, resource consumption, robustness, noise rejection, repeatability, performance boundaries, response bias, acquisition of normal behavior, base rate characterization, validity, signal detection, sensitivity, receiver operating characteristics, and nonstationarity.
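As one concrete example of the measures listed (a generic sketch, not drawn from the paper's own method): hits, misses, false alarms, and correct rejections, and the rates derived from them, can be tabulated from labeled test events:

```python
def detection_counts(labels, alarms):
    """labels/alarms: parallel booleans per event (attack?, alarm raised?)."""
    hits = sum(1 for l, a in zip(labels, alarms) if l and a)
    misses = sum(1 for l, a in zip(labels, alarms) if l and not a)
    false_alarms = sum(1 for l, a in zip(labels, alarms) if not l and a)
    correct_rejects = sum(1 for l, a in zip(labels, alarms) if not l and not a)
    return hits, misses, false_alarms, correct_rejects

# Invented test run: 3 attack events, 3 benign events.
labels = [True, True, False, False, True, False]
alarms = [True, False, True, False, True, False]
h, m, fa, cr = detection_counts(labels, alarms)
print("detection rate:", h / (h + m))      # hits over all attacks
print("false-alarm rate:", fa / (fa + cr)) # false alarms over all benign
```

These two rates are exactly the quantities traded off along a receiver operating characteristic.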

# The 1998 DARPA/AFRL Off-line Intrusion Detection Evaluation (slides available PDF or PS)

Authors
R. P. Lippmann, R. K. Cunningham, D. J. Fried, S. L. Garfinkel, S. Gorton, I. Graf, K. R. Kendall, D. J. McClung, D. J. Weber, S. E. Webster, D. Wyschogrod, M. A. Zissman
Speaker
I. Graf
Institution
M.I.T. Lincoln Laboratory
Information Systems Technology Group
244 Wood Street
Lexington, MA 02420-9185 ; USA
Biographies
R. P. Lippmann
See above
I. Graf
Isaac Graf was born in Perth, Australia in 1972. He received the B.S. degree in physics from Yeshiva University, New York, NY in 1994, and the S.M. degree in electrical engineering from the Massachusetts Institute of Technology in 1997. His S.M. dealt with the modeling and computer simulation of hearing impairment. In January 1998, he joined MIT Lincoln Laboratory in Lexington, MA. Since then he has worked on the off-line intrusion detection evaluation project.

## Abstract

The 1998 intrusion detection off-line evaluation is the first of an ongoing series of yearly evaluations being conducted by MIT Lincoln Laboratory under DARPA ITO and Air Force Research Laboratory sponsorship. These evaluations will contribute significantly to the intrusion detection research field by providing direction for research efforts and calibration of current technical capabilities. The evaluation is designed to be simple, to focus on core technology issues, and to encourage the widest possible participation by eliminating security and privacy concerns and by providing data types that are used by the majority of intrusion detection systems. Together with the real-time intrusion detection evaluation being coordinated by the Air Force Research Laboratory directly, this evaluation is designed to foster research progress, with the following four goals:
1. Exploring promising new ideas in intrusion detection.
2. Developing advanced technology incorporating these ideas.
3. Measuring the performance of this technology.
4. Comparing the performance of various newly developed and existing systems in a systematic, careful way.

Evaluations measure the ability of intrusion detection systems to detect attacks on computer systems and networks. This year's task focuses on UNIX workstations and the goal is to determine whether any of the following attack events occurred or were attempted during a given network session:

1. Denial of service
2. Unauthorized access from a remote machine
3. Unauthorized access to local superuser privileges by a local unprivileged user
4. Surveillance and probing
5. Anomalous user behavior

Network sessions used for scoring are complete TCP/IP connections corresponding to interactions over many services, including telnet, HTTP, SMTP, FTP, finger, rlogin, and others. Sessions are generated automatically using a simulation network with more than 120 simulated hosts, more than 1,000 simulated users, and realistic traffic patterns similar to those seen at a military base. Hundreds of attacks, representing more than 25 different attack types with different levels of stealthiness and different actions following break-ins, are injected into normal traffic at known times and locations.

This evaluation is carefully designed to measure false alarm rates for recent attacks as well as detection rates. For each session, an intrusion detection system will be required to produce a score, indicating the relative likelihood that an attack occurred during the session. Thus, it will be possible to generate receiver operating characteristic (ROC) curves, which plot detection versus false alarm probabilities. ROC curves can be used to determine performance for any possible operating point of an intrusion detection system. Statistics based on these curves will be used to compare systems for different services and different types of attacks.
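The scoring procedure described above can be sketched as follows (a simplified illustration; the evaluation's actual scoring rules may differ): sweep a threshold over the per-session scores and record one (false-alarm probability, detection probability) point per threshold:

```python
def roc_points(scores, labels):
    """scores: likelihood-of-attack per session; labels: True if attack."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thresh in sorted(set(scores), reverse=True):
        detected = [s >= thresh for s in scores]
        tp = sum(1 for d, l in zip(detected, labels) if d and l)
        fp = sum(1 for d, l in zip(detected, labels) if d and not l)
        points.append((fp / neg, tp / pos))  # (false-alarm, detection)
    return points

# Invented scores for five sessions, three of which contain attacks.
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [True, True, False, True, False]
for fa, det in roc_points(scores, labels):
    print(f"P(false alarm)={fa:.2f}  P(detection)={det:.2f}")
```

The area under the resulting curve, or the detection rate at a fixed false-alarm rate, gives a single statistic for comparing systems.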

Prior to the evaluation, a set of training data has been made available to the participating sites. These data are being used to configure intrusion detection systems and train free parameters. Generally, the types of training data provided will be those that are used by most of today's commercial and research intrusion detection systems, e.g. TCP network packets and audit logs produced by Sun's Basic Security Module. These data will be generated on a simulation network. Both normal use and attack sessions will be present. A separate set of test data will be used to measure performance of each intrusion detection system being evaluated.

Some intrusion detection systems are designed specifically to detect anomalous user, system, and network behavior. We have inserted such anomalous behavior into the test and training data to evaluate such systems.

# Securing Network Audit Logs on Untrusted Machines (slides available PDF, PS, PowerPoint (gzipped) or HTML)

Authors
Bruce Schneier , John Kelsey
Speaker
Bruce Schneier
Institution
Counterpane Systems
101 E Minnehaha Parkway
Minneapolis, MN 55418 ; USA
Biography
Bruce Schneier
Bruce Schneier is president of Counterpane Systems. He is the author of Applied Cryptography (John Wiley & Sons, 1994 & 1996), the seminal work in its field. Now in its second edition, Applied Cryptography has sold over 90,000 copies world-wide and has been translated into four languages. His papers have appeared at international conferences, and he has written dozens of articles on cryptography for major magazines. He is a contributing editor to Dr. Dobb's Journal, where he edited the "Algorithms Alley" column, and has been a contributing editor to Computer and Communications Security Reviews. He designed the popular Blowfish encryption algorithm, still unbroken after years of cryptanalysis, as well as Twofish, currently a candidate for the government's Advanced Encryption Standard. Schneier served on the Board of Directors of the International Association for Cryptologic Research, is a member of the Advisory Board for the Electronic Privacy Information Center, and is on the Board of Directors of the Voter's Telcom Watch. Schneier has an M.S. in Computer Science from American University and a B.S. in Physics from the University of Rochester. He is a frequent writer and lecturer on the topics of cryptography, computer security, and privacy.

## Abstract

Many intrusion detection systems are based on collecting data about the state of a network, and then analyzing that data (either in real time or after an attack, as a forensics tool) for information about attacks. This audit data is vital, both for identifying and responding to an attack, and as evidence of attack that can be presented in court. In order to ensure that this data can be used for those purposes, it must be demonstrably accurate.

Often, this data must be stored on a computer on the same network that the data refers to. Hence, an attacker who penetrates the network may be able to access the audit data.

We present a cryptographic scheme for securing audit logs on untrusted machines. In the event that an attacker captures such a machine, we can guarantee that he will gain little or no information from the log files, and we can limit his ability to corrupt them. We describe a computationally cheap method for making all log entries generated prior to the logging machine's compromise impossible for the attacker to read, and impossible to modify or destroy undetectably.

We have an untrusted machine, U, which is not physically secure or sufficiently tamper-resistant to guarantee that it cannot be taken over by some attacker. However, this machine needs to be able to build and maintain a file of audit log entries of some processes, measurements, events, or tasks.

With a minimal amount of interaction with a trusted machine, T, we want to make the strongest security guarantees possible about the authenticity of the log on U. In particular, we do not want an attacker who gains control of U at time t to be able to read log entries made before time t, and we do not want him to be able to alter or delete log entries made before time t in such a way that his manipulation will be undetected when U next interacts with T.

Our system uses timestamps and hash chains to bind audit log entries to each other, and to bind different audit logs to each other. Because one-time encryption and authentication keys are used, an attacker who gains control of the machine holding the audit trail cannot learn the secrets used to protect already-existing log entries.
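The core mechanism can be sketched as follows (a simplified illustration, not the authors' exact construction, which also encrypts entries and anchors the chain at T): each entry's MAC is chained to the previous entry's MAC under a key that is one-way evolved and then discarded, so a later compromise yields no key capable of forging earlier entries:

```python
import hashlib
import hmac

def evolve(key):
    """One-way key update: old keys are unrecoverable from the new one."""
    return hashlib.sha256(b"evolve" + key).digest()

def append_entry(log, key, message):
    """Append a MACed entry chained to the previous entry's MAC."""
    prev_mac = log[-1][1] if log else b"\x00" * 32
    mac = hmac.new(key, prev_mac + message, hashlib.sha256).digest()
    log.append((message, mac))
    return evolve(key)  # caller must discard the old key

def verify(log, initial_key):
    """T, which knows the initial key, replays the chain to verify."""
    key, prev_mac = initial_key, b"\x00" * 32
    for message, mac in log:
        expected = hmac.new(key, prev_mac + message, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False
        prev_mac, key = mac, evolve(key)
    return True

log, key = [], b"shared-with-T"
for msg in [b"login alice", b"su attempt", b"logout"]:
    key = append_entry(log, key, msg)
print(verify(log, b"shared-with-T"))  # True
```

Deleting, reordering, or altering any pre-compromise entry breaks the chain when U next interacts with T.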

A few moments' reflection will reveal that no security measure can protect the audit log entries written *after* an attacker has gained control of U. We can make strong statements only about log entries U made before compromise, and a solution is interesting only when there is no communications channel of sufficient reliability, bandwidth, and security to simply store the logs continuously on T.

In essence, this technique implements an engineering tradeoff between how online U is and how often we expect U to be compromised. If we expect U to be compromised very often (once a minute, for example), then we should send log entries to T at least once or twice every minute; hence U will need to be online nearly all the time. In many systems, U is not expected to be compromised nearly so often, and is also not online nearly so continuously. Therefore, we only need U to communicate log entries to T infrequently, at some interval related to the frequency with which we expect U to be compromised. The audit log technique in our paper enables this tradeoff. It provides a "knob" that the system architect can adjust based on his judgment of this tradeoff; furthermore, the knob can be adjusted during the operation of the system as expectations of the rate of compromise change.

# Intrusion Detection and User Privacy - A Natural Contradiction? (slides available PDF, PS, PowerPoint (gzipped) or HTML)

Authors
Roland Büschkes , Dogan Kesdogan
Speaker
Roland Büschkes
Institution
Aachen University of Technology
Informatik IV (Communication Systems)
Ahornstr. 55
52056 Aachen ; Germany
Biography
Roland Büschkes
Roland Büschkes is a researcher with the Computer Science Department (Communication Systems) at Aachen University of Technology (RWTH Aachen) and has so far served as the local technical manager of three European research projects dealing with the introduction of telematic applications in the private and public sectors. He received his diploma in computer science from RWTH in 1995. His current research interests focus on network-related security questions, with a special emphasis on secure group communication and ID systems. Mr. Büschkes will be the speaker for this talk.
Dogan Kesdogan
Dogan Kesdogan is a researcher with the Computer Science Department (Communication Systems) at Aachen University of Technology (RWTH Aachen). He received his diploma in computer science from RWTH in 1994. His current research interests focus on the application of anonymity and pseudonym techniques within distinct network scenarios. Part of his work has been funded by the Gottlieb Daimler and Karl Benz Foundation (Ladenburg, Germany) as part of its Kolleg "Security in Communication Technology".

## Abstract

The protection of an information infrastructure is a complex and critical task. The task itself can be subdivided into the protection of the networks, i.e. the network providers, and the protection of the users. Both groups have equal rights concerning their security and privacy demands. Intrusion Detection Systems (IDS) provide a promising technique to enhance the security of complex information infrastructures (see e.g. [5] for a general overview). But their application must not weaken the security and privacy of the monitored users or of any other cooperating components, such as another IDS. An IDS fulfilling this requirement is called a multilateral secure IDS.

The demand for a multilateral secure IDS can come from several sources:

• It can be a legal requirement of a data protection act or works committee
• It can be necessary as an additional protection against misuse by internal or external attackers
• It can be necessary as the result of applying a distributed IDS with cooperating components belonging to different security domains, where each security domain wants to expose no more information than necessary

The talk deals with potential approaches towards the problem of privacy. It starts with a short survey of general anonymity and pseudonym techniques. Based on this overview the special requirements of an IDS concerning the use of pseudonyms are derived. Resulting questions are:

• When to introduce the pseudonym for a user (at login time, before log files are passed on, etc.)?
• Where to introduce the pseudonym (at the login server, at the monitoring process, etc.)?
• How to generate the pseudonym (general technique, number of participating parties, etc.)?
• How to reveal the pseudonym in case of an intrusion (general technique, number of participating parties, etc.)?
• What additional data must be treated in a special way in order to prevent unwanted revelation of a pseudonym (access to user specific resources like home directories, etc.)?

Existing solutions to this problem, like [6], leave the responsibility for the generation of pseudonyms with the IDS itself. In this talk we propose two solutions, varying in their degree of security and pragmatism, which shift the task of generating pseudonyms to a trusted third party (TTP). The TTP provides the basis for turning the results of ID systems into legally reliable evidence.

The first of our proposed solutions uses group pseudonyms. Group pseudonyms are generated by a TTP (e.g. managed by the works committee), which controls the login process. The TTP generates a Kerberos-like ticket for the user, which contains a certified pseudonym and a list of user groups to which the user belongs. The ticket has a limited period of validity and must be renewed periodically. For each renewal a new pseudonym is generated.
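A toy sketch of this ticket idea follows; the field names, MAC-based certification, and TTP key are invented for illustration and are not the authors' actual protocol:

```python
import hashlib
import hmac
import json
import secrets
import time

TTP_KEY = b"ttp-secret"   # hypothetical key held only by the trusted third party

def issue_ticket(real_user: str, groups, lifetime_s: int = 600, now=None):
    """TTP-side sketch: mint a fresh pseudonym and bind it to the user's
    group list in a signed, short-lived, Kerberos-like ticket."""
    now = time.time() if now is None else now
    ticket = {
        "pseudonym": secrets.token_hex(8),  # new pseudonym on every renewal
        "groups": list(groups),             # enables access control + anomaly detection
        "expires": now + lifetime_s,
    }
    blob = json.dumps(ticket, sort_keys=True).encode()
    ticket["mac"] = hmac.new(TTP_KEY, blob, hashlib.sha256).hexdigest()
    # The real_user -> pseudonym mapping stays at the TTP, to be revealed
    # only when serious evidence of an attack is presented.
    return ticket

def ticket_valid(ticket, now=None):
    """Any server can check the TTP's certification and the validity period."""
    now = time.time() if now is None else now
    body = {k: ticket[k] for k in ("pseudonym", "groups", "expires")}
    blob = json.dumps(body, sort_keys=True).encode()
    mac_ok = hmac.compare_digest(
        ticket["mac"], hmac.new(TTP_KEY, blob, hashlib.sha256).hexdigest())
    return mac_ok and now < ticket["expires"]
```

In a real deployment the certification would be a signature verifiable without the TTP's secret; the HMAC here only keeps the sketch self-contained.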

The second approach is based on the classic MIX technique [1, 2, 3, 4]. A MIX node acts as the intermediary between the clients and the servers. The MIX node relays the service requests of the clients and the answers of the servers while substituting the users' real identities with pseudonyms.

It also adds the already mentioned group list to each service request. The group list used in both approaches fulfills two major tasks:

• It enables access control by other servers based on the group information.
• It supports any anomaly detection component applied as part of the IDS.

Other IDS components that are based on the detection of attack patterns (e.g. rule-based approaches and their derivatives) can continue their normal work. Instead of dealing with real user identities, they now perform their analysis on the basis of pseudonyms.

If any IDS component provides serious evidence that an attack was launched under a certain pseudonym, this evidence can be presented to the TTP, which then reveals the real identity of the user.

The described approaches stress that future ID systems must be embedded in the general network and operating system environments and must cooperate with the authentication and access control mechanisms.

The talk closes with an outlook on open research issues and future work.

References

[1] D.L. Chaum, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms", Comm. ACM, Feb. 1981, Vol. 24, No. 2, pp. 84-88.

[2] A. Fasbender, D. Kesdogan, and O. Kubitz, "Variable and Scalable Security: Protection of Location Information in Mobile IP", VTC'96, Atlanta, 1996.

[3] D. Kesdogan, J. Egner, and R. Büschkes, "Stop-and-Go-MIXes Providing Probabilistic Anonymity in an Open System", to appear in the proceedings of the Second Workshop on Information Hiding (IHW98), Lecture Notes in Computer Science, Springer-Verlag.

[4] A. Pfitzmann, B. Pfitzmann, and M. Waidner, "ISDN-MIXes: Untraceable Communication with Very Small Bandwidth Overhead", Information Security, Proc. IFIP/SEC '91, Brighton, UK, 15-17 May 1991, D.T. Lindsay, W.L. Price (eds.), North-Holland, Amsterdam 1991, pp. 245-258.

[5] B. Mukherjee, L.T. Heberlein, and K.N. Levitt, "Network Intrusion Detection", IEEE Network, 8:26-41, May/June 1994.

[6] M. Sobirey, S. Fischer-Hübner, and K. Rannenberg, "Pseudonymous Audit for Privacy Enhanced Intrusion Detection", IFIP/SEC, 1997.

# Design and Implementation of an Intrusion Detection System for OSPF Routing Networks (slides available: PowerPoint (gzipped) or HTML)

Authors
Y. Frank Jou , S. Felix Wu , Fengmin Gong, Chandru Sargor, Rance Cleaveland
Speaker
S. Felix Wu
Institution
MCNC
Research Triangle Park, NC 27709 ; USA
Biographies
Y. Frank Jou
Y. Frank Jou received his Ph.D. degree in Computer Engineering from North Carolina State University in 1993. He joined the Advanced Networking Research Group at MCNC after his Ph.D. program. He has been involved with various projects in the area of high-speed networking and secure communication. Currently, he is the PI for the DARPA JiNao project, which aims to design and implement an intrusion detection system for protecting the network infrastructure. His areas of interest include intrusion detection, secure communication, and wireless networking.
S. Felix Wu
Dr. Wu received his Ph.D. degree in Computer Science from Columbia University in 1995, and is an Assistant Professor of Computer Science at NC State University. He is the co-PI or PI for several research projects in the area of intrusion detection and high-confidence networking. He is currently supported by DARPA/ITO, CACC, NSA, IBM, and Fujitsu. He is the founder of NCSU's Secure and Highly Available Networking Group (SHANG).

## Abstract

Intrusion incidents have been rising, both in number and in level of sophistication, along with the increasing popularity of the Internet. Recognizing this alarming phenomenon, governments and research communities have been embarking on programs to develop intrusion detection systems (IDS) for countering these intrusive activities. Until recently, major research efforts focused on host-based intrusion, where the individual host is the subject of concern. As research progresses, protection for networks of hosts, and even for the network infrastructure itself, has drawn attention due to the large scale of impact and potentially devastating consequences.

This talk will present the system design and implementation progress of the JiNao IDS. This IDS is being developed under a three-year project (code name JiNao) sponsored by DARPA to protect network infrastructure, such as routers, switches, and network management channels, from intruders. The architectural design of the JiNao IDS is general enough to cover key network protocols (e.g., OSPF, PNNI, or SNMP).

At the top level, the system consists of local detection subsystems and remote management application subsystems. The integration of these two subsystems will be mapped onto the SNMP standard management framework.

A local subsystem has three major components: a rule-based prevention module, a protocol-based detection module, and a statistical analysis detection module. As a gate-keeper, the prevention module intercepts and filters all incoming packets according to a small set of rules. It conducts a quick check to see whether an incoming packet violates general security guidelines or special administrative security concerns. A second component of the system uses logical analysis of protocol operation. This technique detects intrusion by monitoring the execution of protocols in a router and triggering intrusion alarms when an anomalous state is entered. The statistical-based approach is founded on the contention that network routing and management protocols exhibit certain behavioral signatures. Any behavior deviating from the normal profile will be considered an anomaly, and appropriate alarms can be triggered.
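The gate-keeper role of the prevention module can be sketched as a small rule pipeline; the rule predicates and packet fields below are invented for illustration and are not JiNao's actual rule set:

```python
# Minimal sketch of a JiNao-style gate-keeper: every incoming packet is
# checked against a small rule set before reaching the routing protocol.
# Fields ("proto", "src", "seq_jump") and thresholds are hypothetical.

def make_rules(trusted_routers):
    return [
        # Drop OSPF packets claiming to come from unknown neighbors.
        lambda p: not (p["proto"] == "ospf" and p["src"] not in trusted_routers),
        # Drop packets with an implausibly large sequence-number jump.
        lambda p: p.get("seq_jump", 0) < 1000,
    ]

def prevention_filter(packet, rules):
    """Return True if the packet passes every rule, False to drop it."""
    return all(rule(packet) for rule in rules)
```

A quick check against so small a rule set keeps the module's latency low, which is exactly the property the abstract attributes to it.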

The detection functions of a local subsystem are complementary in nature in terms of their capabilities and their response times. The rule-based and protocol-based approaches are meant to analyze and detect known vulnerabilities. On the other hand, the statistical analysis is intended to uncover those attacks that cannot be prevented by a set of rules embedded in a rule-based component or detected by the security analysis conducted through the protocol-based approach. As far as response time is concerned, the statistical approach requires an observation window to determine whether the target is anomalous. The protocol-based and, especially, the rule-based mechanisms will be able to detect the targeted intrusions with relatively low latency.

We are currently over halfway into this three-year project. Major detection components have been implemented, and system integration is underway. We chose to focus on the OSPF routing network as an implementation example. To evaluate the system design and implementation, a set of attacks has been developed and used to exercise our system by attacking nodes in a testbed environment. We will report the lessons learned from the implementation and present some experimental results.

# Designing IDLE: The Intrusion Data Library Enterprise (slides available: PowerPoint (gzipped) or HTML)

Authors
Ulf Lindqvist (1 ,3) , Douglas Moran (2) , Phillip Porras (3) , Mabry Tyson (2)
Speaker
Ulf Lindqvist
Institutions
(1) Department of Computer Engineering, Chalmers University of Technology, Göteborg, Sweden
(2) Artificial Intelligence Center, SRI International, Menlo Park, California
(3) Computer Science Laboratory, SRI International, Menlo Park, California
Biography
Ulf Lindqvist
Ulf Lindqvist is a PhD research student in the Department of Computer Engineering at Chalmers University of Technology. His research focuses on computer and network security, especially methods for analysis, categorization, and detection of intrusions and vulnerabilities. Lindqvist received an MS in computer science and engineering and a licentiate of engineering from Chalmers. For the summer of 1998, he is participating in network security research as an International Fellow in the Computer Science Laboratory of SRI International. He is a student member of the IEEE, the IEEE Computer Society, the ACM, and Usenix.

## Abstract

High quality, timely information on intrusions is crucial in the development, testing, tuning, and updating of intrusion detection systems (IDSs) and intrusion recovery systems. We present the Intrusion Data Library Enterprise (IDLE), a design and initial compilation of an extensible library of intrusion data that is efficiently parseable in both human-readable and platform-independent machine-readable forms. We are currently in the early stages of design and are building example entries. The IDLE library will be made available as a resource specifically for the intrusion detection community. IDLE will provide IDS developers and users with accurate field data for testing and tuning, and as new intrusion types are discovered, it will enable tools to automatically update rulesets and parameters.

Our experience from detailed intrusion analysis indicates that a vast amount of information about the particular intrusion sample is needed to make correct statements about an intrusion in terms of prerequisites, impact, traces, difficulty, remedies, etc. Also, because systems are continually patched to block known intrusions, it can be difficult to recreate a vulnerable system configuration for each intrusion sample when detailed vulnerability information is missing. A major problem facing IDS developers is that the intrusion databases utilized provide little leverage for automating extensions to their systems. Legitimate concerns about distributing information on intrusion schemes have hampered the growth of these databases. Ironically, this has led to the current situation where security professionals find their best sources for intrusion data on underground Internet sites, even though the information published on such sites is often incomplete and of varying reliability.

In IDLE, we attempt to create a standard format that will facilitate rapid distribution of information among IDS developers and related groups in order to achieve "critical mass" in coverage. However, the breadth of such exchange and the access controls are outside the scope of this work. The emphasis of our design is to include information that ensures automated repeatability, detection, and diagnosis of each intrusion sample. What makes IDLE different from current intrusion databases is that it is designed to serve the IDS community by coordinating detailed information on vulnerable configurations and exploit instructions with documented observable dynamic and static traces (signs) of the intrusion. The IDLE trace information is structured in a form that will support an IDS downloading a new description and extracting the information needed to automatically generate new rules (signatures, parameters, etc.) to identify the new intrusion.

Support for partial information is a core part of the design of IDLE. Information about an intrusion will typically be initially incomplete, and different groups may tend to populate only chosen subsets of a record. Incremental population is especially important in the observables, because different developers monitor system activity from different perspectives: network traffic, audit logs, application logs, filesystem traces, etc. IDLE must also be easily extensible to support the aspects relevant to new and different target platforms, tools, and IDSs.

We have chosen to use the Extensible Markup Language (XML), a simplified subset of SGML, for the intrusion database. There is intense ongoing development of tools for authoring, displaying, browsing, and handling XML documents, and we expect substantial leverage from this activity. XML provides a number of features such as platform independence, a naturally hierarchical structure, customizable field display filtering, the possibility to mix human-readable free-text fields and machine-readable fields in the same record, and easy addition of new types of fields. These features enable IDLE to evolve both in terms of the content of individual records and of the structure of the library. This makes us confident that for IDLE, XML is a far better choice than an existing proprietary database format.
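To illustrate the kind of record the authors describe (the element names here are hypothetical, not the real IDLE schema), a mixed human-/machine-readable entry with partial observables can be built with standard XML tooling:

```python
import xml.etree.ElementTree as ET

# Sketch of what an IDLE-style XML intrusion record might look like.
# Element and attribute names are invented; the point is mixing a
# free-text description with machine-readable traces in one record,
# where each trace records the perspective it was observed from.

def make_record(name, description, observables):
    rec = ET.Element("intrusion", attrib={"name": name})
    desc = ET.SubElement(rec, "description")   # human-readable free text
    desc.text = description
    obs = ET.SubElement(rec, "observables")    # machine-readable traces
    for kind, value in observables:            # e.g. network vs. audit-log view
        trace = ET.SubElement(obs, "trace", attrib={"kind": kind})
        trace.text = value
    return rec

record = make_record(
    "example-overflow",
    "Hypothetical buffer overflow against an imaginary daemon.",
    [("network", "oversized argument in port 9999 request"),
     ("audit-log", "daemon exits with signal 11")],
)
xml_text = ET.tostring(record, encoding="unicode")
```

Because each `trace` is independent, a record can start with only one observer's view and be populated incrementally, matching the partial-information requirement above.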

# Design and Implementation of a Sniffer Detector (slides available: Freelance Graphics (gzipped) or HTML)

Authors
Stephane Grundschober
Marc Dacier
Speaker
Stephane Grundschober
Institution
IBM Zurich Research Laboratory
Global Security Analysis Lab
Saeumerstrasse 4
8803 Rueschlikon ; Switzerland
Biography
Stephane Grundschober
Stephane Grundschober is now working at Swisscom. He implemented the sniffer detector prototype during his master's thesis in the Global Security Analysis Lab at the IBM Zurich Research Laboratory.

## Abstract

This talk will present a prototype of a sniffer detector designed to detect the use of malicious sniffers installed on shared networks. These malicious sniffers constitute a threat to the Internet world, as many protocols currently used on the Internet are insecure. For example, the Telnet and FTP protocols send passwords over the network in clear text. SNMP community names are also sent in clear text. A sniffer able to log this information specifically can compromise the security of an entire network. Indeed, the sniffer's owner can reuse the gathered login and password pairs to gain unauthorized access.

Experience has shown that this is a very successful method commonly used by crackers. To prevent such intrusions, one could of course avoid using such insecure protocols. Unfortunately, this may not always be possible, for the following reasons:

1. no alternative solution exists,
2. it would be too complicated to change the protocol, or
3. it is illegal to change the protocol because encryption is restricted.

Therefore it is of interest to find a way to detect the use of a sniffer.

From a network point of view, a sniffer is passive. One must wait for the sniffer's owner to launch an attack. The problem is how to differentiate between a normal Telnet session and one started by an intruder. Our idea is to spread bait that is presumably especially attractive to the sniffer's owner.

In this talk, we present a tool that we have implemented according to the following principles:

• The tool sends packets over the subnet that simulate Telnet or FTP sessions, including false passwords for false accounts. These sessions must contain something of particular interest to the sniffer's owner (more so than ordinary sessions). They therefore include sensitive logins such as root and admin.
• The tool then waits for the intruder to use the information bait. Nobody besides the intruder has knowledge of these accounts and passwords. If someone reuses these pairs, the tool recognizes symptoms of an attack and can trigger an alarm.
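The two principles above can be sketched as follows; the account names and bait format are invented for illustration:

```python
import secrets

# Sketch of the bait principle: fabricate credential pairs that only the
# detector (and a sniffer on the wire) could ever see, then treat any
# later login attempt with those pairs as proof the traffic was captured.

def make_bait(n=3):
    """Generate attractive-looking fake accounts with random passwords.
    These would be sent over the subnet in simulated Telnet/FTP sessions."""
    return {f"root{i}": secrets.token_hex(6) for i in range(n)}

def is_bait_login(username, password, bait):
    """True if a real login attempt reuses a bait credential pair,
    which should trigger an alarm: nobody legitimate knows these pairs."""
    return bait.get(username) == password
```

The detector's confidence comes from the fact that a bait pair has exactly one legitimate source, so reuse is not a false positive in the usual sense.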

A prototype has been tested for traffic between IBM Zurich and IBM La Gaude via IGN (IBM Global Network). We will discuss the results of these ongoing experiments.

We will present the hardware and software requirements of the system, and invite the listeners to join our experimentation in order to apply these concepts on a larger scale.

# The Application of Artificial Neural Networks to Misuse Detection: Initial Results (full paper available: Word (gzipped)) (slides available: PowerPoint (gzipped) or HTML)

Author
James Cannady
Speaker
James Cannady
Institution
Georgia Tech Research Institute
347 Ferst Dr.
Atlanta, GA 30332-0832 ; USA
Biography
James Cannady is currently a Research Scientist within the Georgia Tech Research Institute (GTRI) Information Technology and Telecommunications Lab. He specializes in the research, development, and implementation of secure computing solutions for government and commercial sponsors.
He is currently involved in a number of security-related research and development projects, including work on the development of a revolutionary approach to the detection of network intrusions through the use of advanced artificial intelligence techniques. He has also been involved in the design of secure networks for a number of government sponsors. Mr. Cannady is one of the founders of the Georgia Tech Information Security Center.
Prior to joining GTRI Mr. Cannady was a Special Agent with the Naval Investigative Service, specializing in counterintelligence matters and the protection of military computer and communication systems. He has worked extensively within U.S. Government and NATO organizations where he has developed policies and procedures that have enhanced the security of critical information systems.
He is the author of numerous technical papers on the topic of database and network security. Mr. Cannady received his Bachelor's degree from Georgia State University in Atlanta, and his Master's degree from Nova Southeastern University in Ft. Lauderdale, FL. He is currently a Ph.D. candidate in Computer Information Systems at Nova Southeastern University.

## Abstract

Misuse detection is the process of attempting to identify instances of network attacks by comparing current activity against the expected actions of an intruder. Most current approaches to misuse detection involve the use of rule-based expert systems to identify indications of known attacks. However, these techniques are less successful in identifying attacks that vary from expected patterns. Artificial neural networks provide the potential to identify and classify network activity based on limited, incomplete, and nonlinear data sources. We present the results of our ongoing research efforts into the application of neural network technology for misuse detection.

The research conducted to date has been divided into two phases. The first phase involved the creation of a feedforward multi-level perceptron (MLP) network that was trained to identify specific events that may be an indication of misuse. After a training period during which the MLP was provided with numerous representative examples of network data, the neural network was tested against a simulated network stream. The MLP-based prototype was able to accurately identify each of the simulated attack patterns in the data stream.
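As a toy illustration of this first phase (a single perceptron standing in for the paper's multi-level network, trained on invented feature vectors rather than real network data):

```python
import random

# Toy illustration of the supervised training loop: a single perceptron
# learns to flag event-feature vectors as misuse. The features and the
# training data are invented; the real work used an MLP on network data.

def train(samples, epochs=100, lr=0.1, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(len(samples[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# hypothetical features: (many failed logins?, unusual port?, off-hours?)
data = [((0, 0, 0), 0), ((1, 0, 0), 0), ((0, 1, 0), 0),
        ((1, 1, 0), 1), ((1, 1, 1), 1), ((0, 0, 1), 0)]
w, b = train(data)
```

A single perceptron only separates linearly; the MLP of the actual work is what lets the approach generalize to the nonlinear, incomplete data the abstract mentions.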

The second phase of our initial research was the design of a neural network-based prototype capable of identifying complex instances of misuse involving a series of events that may be widely dispersed over time or executed by multiple attackers working collaboratively. These attacks require analytical capabilities that go beyond the identification of isolated events. We developed a hybrid neural network approach that combines the data classification capabilities of a self-organizing map (SOM) with the pattern recognition facilities of an MLP.

# AAFID: Autonomous Agents for Intrusion Detection (slides available: PowerPoint (gzipped) or HTML)

Authors
Eugene Spafford , Diego Zamboni
Speaker
Diego Zamboni
Institution
COAST Laboratory
Purdue University
West Lafayette, IN 47907-1398 ; USA
Biography
Gene Spafford
see above
Diego Zamboni
Diego Zamboni is a Ph.D. student at Purdue University, where he is working in the COAST Laboratory in Intrusion Detection research. Previously he obtained his bachelor's degree in Computer Engineering from the National Autonomous University of Mexico, where he also established one of the first Computer Security Incident Response Teams in Mexico.

## Abstract

The Intrusion Detection System (IDS) architectures commonly used in commercial and research systems have some problems that limit their configurability, scalability, or efficiency, or that affect their security. Two specific consequences of these problems are the existence of single points of failure for the IDS and of bottlenecks for the flow and analysis of data.

We propose an architecture, known as AAFID, that is based on multiple functionally independent entities that we call Autonomous Agents. Each agent collects data about a specific operational aspect or about certain events in the system where it runs. When an agent detects something suspicious, it sends a message to a higher-level entity known as a transceiver. Transceivers are per-host entities that oversee and control the operation of all the agents running on their host. They may also perform data reduction on the data received from the agents. In turn, the transceivers report their results to one or more monitors. Each monitor oversees and controls several transceivers. Monitors have access to network-wide data and are therefore able to perform higher-level correlation and detect intrusions that involve several hosts. Monitors can be organized in a hierarchical fashion such that a monitor may in turn report to a higher-level monitor. Ultimately, a monitor is responsible for providing information to, and receiving control commands from, the user.

We have implemented a prototype IDS, which we call AAFID2, that conforms to the AAFID architecture. AAFID2 is implemented in Perl5, and its purpose is to provide an easy-to-use, flexible, and configurable framework for experimenting with the AAFID architecture, for developing agents, and for exploring new ideas. AAFID2 implements an infrastructure that provides all the essential services for new agents, transceivers, and monitors. This makes it easier to create new agents, because the only code that needs to be written is the code that performs the detection functions specific to the agent. AAFID2 is implemented using the object-oriented features of Perl5, and inheritance makes the use of the infrastructure relatively easy and transparent for the developer of new agents. AAFID2 also includes a simple Graphical User Interface (GUI) that allows the user to control the IDS and to experiment with ideas for making better IDS user interfaces.
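Although AAFID2 itself is written in Perl5, the agent-to-transceiver-to-monitor flow described above can be mocked in a few lines; the class names, the dedup-style data reduction, and the event strings here are illustrative only:

```python
# Sketch of the AAFID reporting hierarchy: agents report upward to a
# per-host transceiver, which reduces data and reports to a monitor
# that holds the network-wide view. Not AAFID2's actual code.

class Monitor:
    """Network-wide view: correlates reports from several transceivers."""
    def __init__(self):
        self.events = []

    def receive(self, host, agent_name, event):
        self.events.append((host, agent_name, event))

    def hosts_reporting(self, event):
        """Cross-host correlation: which hosts saw this event?"""
        return {h for h, _, e in self.events if e == event}

class Transceiver:
    """Per-host entity: controls local agents and reduces their data."""
    def __init__(self, host, monitor):
        self.host, self.monitor = host, monitor
        self.seen = set()

    def receive(self, agent_name, event):
        if event not in self.seen:          # toy data reduction: dedupe
            self.seen.add(event)
            self.monitor.receive(self.host, agent_name, event)

class Agent:
    """Watches one aspect of a host; reports suspicious events upward."""
    def __init__(self, name, transceiver):
        self.name, self.transceiver = name, transceiver

    def report(self, event):
        self.transceiver.receive(self.name, event)
```

The hierarchy is what removes the single point of failure and the central bottleneck: each layer sees only reduced data from the layer below.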

To go with AAFID2, we have developed a number of agents that perform different monitoring functions. We are in the process of deploying the IDS in our internal network, as well as distributing it to selected external testers.

In this talk, we will present the architecture and the prototype, together with some concrete results. We expect to be able to measure the performance impact of the prototype on the monitored systems, as well as the impact of the communication between the components of AAFID2 on the monitored network, both with respect to the number of hosts being monitored and the number of agents on each host. Finally, we will try different attacks against the monitored systems and measure how well AAFID2 is capable of detecting them. We expect the AAFID architecture to provide solutions to some of the problems mentioned before, specifically scalability and configurability. Using the results that will be presented in this talk, we expect to be able to confirm our predictions or to identify additional factors that have to be addressed.

By the time of the workshop, we plan on having had the AAFID2 prototype in distribution for several months. We will incorporate any experiences and reports we receive from users in this talk.

# Research Issues in Cooperative Intrusion Detection Between Multiple Domains (slides available: PowerPoint (gzipped) or HTML)

Authors
Deborah A. Frincke , Donald L. Tobin, Jr. , Jesse C. McConnell
Speaker
Donald L. Tobin, Jr.
Institution
Center for Secure and Dependable Software
University of Idaho
Computer Science Department
Moscow, Id. 83844-1010 ; USA
Biographies
Deborah A. Frincke
Deborah A. Frincke received her Ph.D. from the University of California, Davis in 1992, and is currently an Assistant Professor at the University of Idaho and a principal researcher in the Center for Secure and Dependable Software. Her primary research interests are in computer security education and the development and assessment of secure systems and security tools. Her current research in secure object-oriented design and intrusion detection is funded by the National Security Agency.
Donald L. Tobin
Donald L. Tobin, Jr., is currently a doctoral student at the University of Idaho and a research assistant at the Center for Secure and Dependable Software. His primary research interests are in intrusion detection, neural networks, and information warfare. He is a retired Air Force officer and has worked with a variety of communication, satellite, and missile warning systems. Mr. Tobin earned his M.S. degree in Computer Science from Boston University, and his B.S. degree in Mathematics from the University of Texas at Arlington.
Jesse C. McConnell
Jesse C. McConnell is currently a masters student at the University of Idaho and a researcher in the Center for Secure and Dependable Software. His primary research interests are in the fault tolerance and survivability of intrusion detection systems, and multiple domain intrusion detection. Mr. McConnell received his Bachelor of Science in Computer Science from the University of Idaho in May 1998.

## Abstract

Modern computer intrusions are rarely limited to a single network or domain, and given the widespread availability of new intrusive attacks on the Internet and the current level of automated scripting, almost anyone with basic computing skills can attempt them. For instance, it would be difficult to detect a sweep attack against multiple military installations in a local geographic area, especially if the bases in question were from different services, such as the Army, Air Force, and Navy. A seemingly insignificant intrusion from the perspective of one base would be handled very differently if the bases collaborated to discover the consolidated effort. Researchers have found that data sharing is needed to detect many systemic attacks involving multiple hosts, whether inside a single domain or across multiple domains.

Using the Project Hummer prototype, we have begun examining issues in cooperative intrusion detection. In particular, we are examining techniques to manage, share, and assess data obtained from multiple domains. We have identified three areas requiring further research. The most important is developing a formal model of cooperation and trust between hosts across multiple domains. Data, or even data requests, from a peer may be unreliable, inaccurate, or deliberately falsified, yet there remains a need to make use of the available global information to accurately assess the local security posture. Thus a formal model of cooperation and trust must include multiple levels, and requires concise definitions of cooperation and trust in this context. It will also need to take into account the various costs of cooperation, in terms of data collection, transmission, sanitization, and the exposure risk of the local network. Another consideration is whether the cooperation levels should be statically or dynamically assigned.

Given the cooperation model, the second research area is securing transmission between peers. If the model also allows selectively granting access to peers (i.e., hosts outside our domain with which we share information), the system also needs to handle secondary authentication. While Kerberos may work well for manager-subordinate relationships within a domain, it may not be the best choice for multi-domain relationships. Our cooperation model creates a peer group between the domains for the sharing of security-relevant information. A Moderator is then established within that peer group to control member access. This Peer Group Moderator host can establish the necessary realm for ticket-granting, but it then becomes a single point of failure and is susceptible to denial-of-service attacks. Thus, different layout topologies should be considered to reduce the risk of these peer group failures and to retain a degree of survivability should a member of a peer group become disconnected.

The third issue is effectively and efficiently sanitizing and reducing the data. Large quantities of audit data are difficult to collect and assess. Additionally, it is important to decide the level of granularity of useful information. For example, what information is valuable enough to share with and transmit to other domains, and in what level of detail, and what information is too valuable or detrimental to share? Finally, since it is unlikely that different sites will have identical security policies, there needs to be a way of mapping these local security policies across multiple domains. Networks grouped together in this way form a higher-level data-sharing network. The problem is compounded if a host on a network chooses to be in multiple peer groups, because the sanitizing needs to be done relatively independently for each level of cooperation the host has negotiated with each of its peers and peer groups.

# A Large-scale Distributed Intrusion Detection Framework Based on Attack Strategy Analysis(full paper available PDF or PS)(slides available PowerPoint (gzipped) or HTML)

Authors
Ming-Yuh Huang, Thomas M. Wicks
Speaker
Ming-Yuh Huang
Institution
Applied Research and Technology
The Boeing Company
Seattle, Washington ; USA
Biography
Ming-Yuh Huang
Digital Equipment Corporation, Artificial Intelligence Technology Center. Project Lead - ESSENSE (Polycenter) intrusion detection expert system. (1985-1990)
The Boeing Company, Applied Research and Technology Project Manager - Distributed Security and System Management (1990-1998)

## Abstract

To address the problem of large-scale distributed intrusion assessment/detection, issues such as information exchange and work division/coordination amongst various Intrusion Detection Systems (IDS) must be addressed. An approach based on autonomous local IDS agents performing event processing coupled with cooperative global problem resolution is preferred.

We believe that focusing on the intruder's intent (attack strategy) provides the theme that drives how various IDS work together. Attack strategy analysis also provides an opportunity for pro-active look ahead adaptive auditing.

What's Missing?

What's missing in today's distributed ID is the strategic decision-making and command-and-control layer, where IDS can dynamically analyze the enemy's strategy and formulate and execute their own. Command-and-control here also goes beyond exchanging fragmented data: it calls for communicating the commander's intention, autonomous execution, reporting back, and issuing further commands.

Today’s Centralized Architecture

Today's centralized, one-way event collection/processing ID architecture is a passive information-processing paradigm. It faces considerable limitations when trying to detect increasingly sophisticated attacks in large and complex networks.

Today's large-scale heterogeneous networks generate tremendous amounts of real-time data in diverse formats - mostly system management information. Only careful analysis can determine which of it is security related. By the time huge amounts of data arrive at a centralized location, the contextual information needed to properly analyze an event has already been lost. Even worse, the time latency may make it impossible to return and collect additional data to confirm or exonerate any suspicions. It is extremely difficult to perform quality event correlation and pattern identification out of context.

The Anatomy of an Intrusion

There are always multiple ways to invade. Nevertheless, launching a sequence of attacks toward a particular goal usually requires several actions/tools applied in a particular order. It is this logical partial order that reveals the progression of an invasion.

Strategy Analysis Based Architecture

There is a need for a large-scale ID framework based on collaborative intrusion intention analysis performed by autonomous local IDS agents. In addition to events, IDS share suspected intrusion intentions. Intrusion intentions are high-level, platform-independent attack strategies that can manifest as a large permutation of lower-level system/network events.

By intention analysis, IDS agents recognize attacks at the strategic level. Here, attacks are characterized by sequences of logically related but not necessarily complete intrusion "sub-goals" each representing a state of accomplishment during the formation of an intrusion. ID thus becomes recognition of sub-goal/trend development, in addition to any glaring violations.

Intrusion Intention Representation

A natural intrusion intention representation is a goal-tree, where lower-level nodes represent (ordered) alternatives/sub-goals for achieving the upper node/goal. Leaves are sub-goals that can be substantiated by events coming from different sources. However, a typical goal-tree has no temporal sequence or ordering construct, so augmentation is needed.

Horizontal threads through the tree nodes represent possible attack scenarios. If nodes/sub-goals are filled in when confirmed, threads during ID are often incomplete, with holes: the thread shows a possible intrusion, but past data did not fully support it. This is an opportunity to re-examine past data for confirmation or exoneration.
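The thread bookkeeping described above might be sketched as follows; the sub-goal names and the data structure are illustrative assumptions, not the authors' implementation:

```python
class Thread:
    """One attack scenario: an ordered thread of sub-goals through the goal-tree."""

    def __init__(self, subgoals):
        self.subgoals = subgoals          # ordered sub-goal names
        self.confirmed = set()

    def confirm(self, subgoal):
        self.confirmed.add(subgoal)

    def holes(self):
        """Unconfirmed sub-goals that precede a confirmed one:
        evidence of a possible intrusion that past data did not fully support."""
        last = max((i for i, s in enumerate(self.subgoals)
                    if s in self.confirmed), default=-1)
        return [s for s in self.subgoals[:last + 1]
                if s not in self.confirmed]

    def next_expected(self):
        """Predicted next sub-goal, usable to adjust auditing pro-actively."""
        for s in self.subgoals:
            if s not in self.confirmed:
                return s
        return None

# Hypothetical scenario: probe -> gain access -> escalate -> install backdoor
t = Thread(["port_scan", "remote_login", "priv_escalation", "backdoor"])
t.confirm("port_scan")
t.confirm("priv_escalation")      # confirmed out of order
print(t.holes())                  # ['remote_login']: re-examine past data
```

A hole triggers a look back at old audit data; the first unconfirmed sub-goal tells the agent where to focus additional auditing next.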

With strategic analysis of thread progression, an IDS agent can predict possible attack directions and adjust auditing to gather additional data. If a local IDS agent confirms a sub-goal, the global IDS can re-strategize and issue more commands. Recognizing attack intention provides very good alert timing.

Integration of Paradigms, Tools and Technologies

One major consideration of this approach is integration.

With intention analysis, IDS/tools using different paradigms/technologies can contribute and work together at the strategic level. Differences in data sources, environments and approaches are less relevant.

# NIDAR: The Design and Implementation of an Intrusion Detection System

Authors
Tan Yong Tai, Tan Woon Kiong, Ong Tiang Hwee, Christopher Ting
Speaker
Tan Yong Tai
Institution
DSO National Laboratories
20 Science Park Drive
Singapore 118230
Biographies
Tan Yong Tai
Tan Yong Tai received his Bachelor of Science (Information Systems and Computer Science) (Hons) from the National University of Singapore in 1997.
Tan Woon Kiong
Tan Woon Kiong received his Bachelor of Science (Computational Science and Mathematics) from the National University of Singapore in 1997.
Ong Tiang Hwee
Ong Tiang Hwee received his Bachelor of Engineering from the Nanyang Technological University of Singapore in 1993. He has been working in the field of security at DSO National Laboratories, Singapore since 1995.
Dr. Christopher Ting
Dr. Christopher Ting received his Bachelor of Engineering from the University of Tokyo, Japan in 1986, and a Master of Science (Physics) in 1988 from the same university. In 1993, he earned a PhD in Physics from the National University of Singapore.

## Abstract

With the proliferation of the Internet and network connectivity, it is important to prevent unauthorized access to system resources and data. While it is almost impossible to prevent all malicious attempts, many techniques have been proposed and designed for detecting intrusion attempts so that measures can be taken. In this talk, we present the design and implementation of our in-house intrusion detection system (IDS), the NIDAR system, and discuss our practical approach to intrusion detection. We will begin our presentation by sharing our concept of an IDS, which gives us a baseline for how we rate an IDS.

In the next part of our talk, we present how our system tackles the problem by operating on both network traffic (network-based ID) and host activities (host-based ID), thus providing more rounded surveillance of the resources concerned. It further protects the resources with reactive measures against malicious attempts: these attempts can be stopped, nullified and further attacks prevented. We will present the various components of the NIDAR system, namely the central controller, the network monitor agent, the host-based ID agent, agent management tools, the network management agent and other reactive-measures agents. Technologies such as CORBA, Java and RMI that enable an IDS to span different operating systems and computing architectures will be discussed. We also explore the security considerations of the IDS itself and discuss how NIDAR protects itself from attacks; agent authentication will be the main issue in this discussion.

In the third part of our talk, we share our experience with the test-bed deployment of NIDAR. We will show how NIDAR is deployed in such a networking environment, and present valuable lessons learnt from this deployment. Shortcomings of using network traffic for detecting intrusions are also highlighted based on experience from this deployment. Difficulties arising from software barriers such as firewalls and hardware barriers such as network switches will also be discussed.

We discuss challenges faced during the development of NIDAR and our future plans in the final section of our talk. We find it very difficult to defend against denial-of-service attacks: while we are able to detect the occurrence of an attack, a single packet may be sufficient to cause the denial of service, which leaves the IDS no opportunity to respond. The representation of exploits is also a challenging problem, especially in a large heterogeneous network. Different operating systems are vulnerable to different kinds of exploits, and it is not easy to find a good representation for all classes of exploits. We also look at how an IDS can help in the investigation of certain classes of exploits. Our current intrusion detection techniques are based on exploit signatures. In the future, we hope to explore the possibility of integrating artificial intelligence techniques into NIDAR to 'learn' exploits. For example, data mining can be used to extract meaningful non-trivial information from gigabytes of audit logs and captured network traffic.

We hope to share and exchange valuable experience through this presentation.

# A UNIX Anomaly Detection System using Self-Organising Maps(slides available Power Point (gzipped) or HTML)

Authors
Albert J. Höglund, Kimmo Hätönen, Tero Tuononen
Speaker
Albert J. Höglund
Institution
Nokia Research Center
Nokia Group (Helsinki)
P.O. Box 422
FIN-00045 NOKIA GROUP ; Finland
Biographies
Albert J. Höglund
Albert J. Höglund is working as Research Engineer at Nokia Research Center in Helsinki, Finland. His research interests include neural networks, applied mathematics, statistical analysis, anomaly detection, network management and computer security. He has a M.Sc. in applied mathematics (Helsinki University of Technology).
Kimmo Hätönen
Kimmo Hätönen is a Senior Software Engineer at Nokia Research Center in Helsinki, Finland. His research interests include different Data Mining methods and their applications in network monitoring and security analysis. He previously attended the Data Mining research group at the University of Helsinki working as a project manager of a project that developed methods for telecommunication network monitoring data analysis. At Nokia Research Center he has been manager for the project in which the UNIX Anomaly Detection System has been developed. Hätönen received a M.Sc. in computer science from the University of Helsinki.
Tero Tuononen
Tero Tuononen is working at Nokia Research Center (Helsinki, Finland) as System Specialist. His research interests include UNIX network management and computer security. He is a student at University of Helsinki (computer science) and currently working on his master’s thesis.

## Abstract

Anomaly detection attempts to recognise anomalous or abnormal behaviour to detect intrusions. This talk will present an anomaly detection system and hopefully create some discussion.

An anomaly detection system that automatically detects anomalous behaviour and provides means for behaviour visualisation has been constructed. The system consists of a data processing component, a behaviour visualisation component, an automatic anomaly detection component and a user interface.

The anomaly detection system monitors a UNIX server with over 600 users. The system uses accounting data. The system reports anomalous user behaviour that is then analysed using the behaviour visualisation component, which also allows examination of the real data.

Both the behaviour visualisation component and the automatic anomaly detection component use the Self-Organising Map (SOM) as a basis. The SOM is a neural network architecture based on unsupervised learning and it is suitable for visualisation and interpretation of large high-dimensional sets of data. The behaviour visualisation component uses the SOM to visualise behaviour on a two-dimensional map. The behaviour profiles can easily be determined from this map and this enables behaviour profile comparison. The automatic anomaly detection component uses a unique combination of Self-Organising Maps and statistical methods to decide whether a specific behaviour is anomalous or not.
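As a toy illustration of this scheme, the sketch below trains a small one-dimensional SOM on "normal" behaviour vectors and flags inputs whose quantisation error (distance to the best-matching unit) exceeds a simple statistical threshold. The feature vectors, map size and threshold rule are illustrative assumptions, not the actual Nokia system:

```python
import math, random

random.seed(0)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_som(data, units=4, epochs=50, lr=0.5):
    """Train a 1-D SOM: move the best-matching unit and its
    immediate neighbours toward each sample, with decaying rate."""
    som = [list(random.choice(data)) for _ in range(units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for v in data:
            bmu = min(range(units), key=lambda i: dist(som[i], v))
            for i in range(units):            # neighbourhood radius 1
                if abs(i - bmu) <= 1:
                    som[i] = [w + rate * (x - w) for w, x in zip(som[i], v)]
    return som

def quantisation_error(som, v):
    return min(dist(w, v) for w in som)

# "Normal" accounting-style vectors: (cpu_seconds, connect_minutes)
normal = [(1 + random.random(), 30 + 5 * random.random()) for _ in range(100)]
som = train_som(normal)

# Statistical cut-off: mean quantisation error plus three standard deviations.
errors = [quantisation_error(som, v) for v in normal]
mean = sum(errors) / len(errors)
sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))
threshold = mean + 3 * sd

anomaly = (50.0, 600.0)                       # wildly deviating behaviour
print(quantisation_error(som, anomaly) > threshold)   # True
```

The same trained map can serve both roles described above: its units visualise the behaviour profiles, while the quantisation error drives the automatic anomaly decision.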

The system has been in test usage for over half a year. The user feedback from the test usage has been positive. Comments like "this system really works" have been quite encouraging. The system gives the system operator a limited number of anomalous cases to analyse. Tests really indicate that the system works as it should. It detects behaviour that deviates from the normal profile.

The Anomaly Detection System has also been applied to mobile telephone network traffic management data with successful results. The system will also be applied to computer network traffic management, where it will be used to detect intrusions and problems in the computer network.

The assumption that intrusions imply abnormal behaviour is the basic idea when applying anomaly detection for intrusion detection. It would be interesting to know how the participants in RAID’98 see the importance and usefulness of anomaly detection in intrusion detection.

# Evaluating a Real-time Anomaly-based Intrusion Detection System(slides available Power Point (gzipped) or HTML)

Authors
Tobias Ruighaver, P. G. Thorne, K. Tan
Speaker
Tobias Ruighaver
Institution
Department of Information Systems
Computer Forensic and System Security Group
University of Melbourne ; Australia
Biography
Dr. A. B. Ruighaver
Dr. A. B. Ruighaver is a senior lecturer at the Department of Information Systems of the University of Melbourne. He currently manages the Computer Forensic and System Security Group, where he supervises research in the areas of Intrusion Detection, System Security, Electronic Commerce Security, Management of Security, and Computer Forensics.

## Abstract

An anomaly-based IDS will, as the name implies, attempt to detect anomalies in data collected from one or more computers (or networks) assuming that those anomalies will represent an actual intrusion. Although intrusions are generally defined as "the unauthorized use, misuse or abuse of a computer system", this definition is not very helpful in the practical appraisal of anomalies. Classifying whether an anomaly is a possible intrusion turns out to be a subjective decision, which depends on the particular security policy at each site. This, and other reasons discussed later, makes the simple measurement of IDS performance an impractical proposition.

Our IDS is neural network based and uses the standard Unix system logs to build up a profile of each user's behavior. Unlike many other systems we do not try to analyze the sequence of commands, instead we correlate each action with other system or user characteristics to get an instant measure of anomaly. As there is no need for full auditing, the system overhead is minimal which allows our system to be utilized as a real-time IDS.

Anomaly detection should, in our view, only be a component of a more comprehensive IDS, together with components for misuse detection. Each component has its strengths and its weaknesses. The primary target of our system is an intruder trying to masquerade as an authorized user. This can be an external intruder or a legitimate user accessing another user's account. Even though the real-time aspect of the IDS has imposed some limitations, the resulting profile provides an excellent fingerprint of a user's behavior. When a user's action does not match this fingerprint, the first response is to turn on full auditing for this user. When several successive anomalies have been detected, or when other components detect a known pattern of misuse in the full audit data, more elaborate defense strategies can be activated.

The second, more difficult, target of our IDS is intrusive behavior by the owner of an account. In appraising the anomalies generated by the system, only a few turn out to be clear-cut intrusions, while a few others are clearly not. The majority of anomalies show behavior that cannot be clearly understood at the time, but may later prove to be an early indication of intrusive behavior. Hence, any fine-tuning should only attempt to remove those anomalies that are clearly not intrusions, and should not remove any others. In a sense, the performance of the IDS for this particular target depends on its ability to suppress only the obviously non-intrusive anomalies.

Our experience shows that an anomaly-based IDS can only function optimally when there is sufficient interaction between the IDS and its environment. A profile's efficiency as a fingerprint can be significantly improved by making the actual behavior pattern of the user more complex and by not enforcing every possible preventive security measure. A user who sends mail on only one machine and uses other machines for specific tasks has a more distinctive fingerprint than a user who performs all his tasks on a single machine. If all users are forced to use the same mail machine, that aspect of their fingerprints will become identical. A second vital interaction exists between the IDS administrator and each user. Many of the anomalies are minor, but preventing these anomalies from being detected may jeopardize the detection of real intrusions. Other anomalies may show a lack of awareness of security policies. Monitoring user behavior and correcting it before a really serious intrusion takes place is just as important as detecting the serious intrusions themselves. Prevention is as important as detection.

# Audit Trail Pattern Analysis for Detecting Suspicious Process Behavior(slides available Freelance Graphics (gzipped), PDF, PS or HTML)

Authors
Andreas Wespi, Marc Dacier, Herve Debar, Mehdi M. Nassehi
Speaker
Andreas Wespi
Institution
IBM Zurich Research Laboratory
Global Security Analysis Lab
Saeumerstrasse 4
8803 Rueschlikon ; Switzerland
Biography
Andreas Wespi
Andreas Wespi holds a M.Sc. in Computer Science from the University of Berne, Switzerland. He is currently working at the IBM Zurich Research Laboratory in the Global Security Analysis Lab (GSAL) which is part of the Information Technology Solutions Department. His research interests include intrusion detection, network security in general, and distributed and parallel computing.

## Abstract

There are Unix processes whose normal behavior can be modeled by a set of typical patterns, a pattern being a subsequence of the audit events that a process can generate [1]. Examples of such processes are network services such as ftp or sendmail. Intrusion-detection systems that make use of this observation first need to build a table of representative patterns. The patterns are determined by letting the process invoke as many subcommands as possible, then extracting the patterns from the corresponding sequences of audit events. During real-time operation, a pattern-matching algorithm is applied to cover, on the fly, the audit events generated by the process being examined.

An intrusion is assumed to exercise abnormal paths in the executable code. These abnormal paths correspond to sequences of audit events that cannot be covered, or can only partly be covered, by the entries in the pattern table. Subsequences of audit events that cannot be matched are therefore an indication of an attack.
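As a minimal sketch of the fixed-length flavor of this idea (in the spirit of [1]; the audit-event names and pattern length are illustrative assumptions, not the actual AIX audit vocabulary):

```python
def pattern_table(training_sequences, k):
    """Build the table of length-k patterns seen in normal behavior."""
    table = set()
    for seq in training_sequences:
        for i in range(len(seq) - k + 1):
            table.add(tuple(seq[i:i + k]))
    return table

def mismatches(seq, table, k):
    """Indices of length-k windows of the audit stream not covered
    by the pattern table; any mismatch suggests an abnormal path."""
    return [i for i in range(len(seq) - k + 1)
            if tuple(seq[i:i + k]) not in table]

# Toy training data: audit-event sequences from exercising subcommands.
normal = [["open", "read", "read", "close"],
          ["open", "read", "write", "close"]]
table = pattern_table(normal, k=3)

attack = ["open", "read", "exec", "close"]    # abnormal path
print(mismatches(attack, table, k=3))         # [0, 1]: windows around "exec"
```

The variable-length approach replaces the fixed window with patterns of differing lengths (e.g. the 40-event common prefix mentioned below), which is what makes the pattern selector non-trivial.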

Whereas prior work concentrated mainly on fixed-length patterns [1], approaches based on variable-length patterns are expected to reveal patterns that are more representative of a process and contain more semantic information [2, 3]. For example, examining the audit sequences the ftp daemon can generate, we found that more than 50% of the sequences start with the same subsequence. This subsequence has a length of 40 audit events.

Determining the variable-length patterns is not straightforward. In [2], they are created manually. In [3], a variable-length pattern generator is described that consists of a suffix-tree constructor, a pattern pruner, and a pattern selector. Preliminary results showed that an intrusion-detection system based on this technique can detect the attacks but is prone to issuing false alarms. We will present a modified version of the pattern generator. By introducing a novel pattern-selector process, variable-length patterns can be created that reduce the number of false alarms substantially.

Our intrusion-detection method comprises three main components:

1. a component that builds the pattern table,
2. one that matches the audit events of a process with the entries in the pattern table, and
3. one that differentiates between attacks and normal behavior.

Our most recent work shows that each component can be implemented with different techniques, and each technique is usually parameterized with one or more parameters, which adds further complexity. Furthermore, the three components are heavily interrelated. For example, variable-length patterns may require a different pattern-matching algorithm than fixed-length patterns. Therefore, selecting the best techniques and the best parameters is a challenging task.

Based on the ftp service, we have evaluated different techniques and parameter settings. In a first phase, we have run a test suite provided by AIX development [4]. The test suite exercises all the ftp subcommands. The audit sequences which we have recorded in this phase are the basis on which the pattern table is generated. In a second phase, we have recorded the audit events generated by real user ftp sessions as well as by eight attacks, which we have implemented against the ftp daemon. Based on the data recorded during the first two phases, we were able to compare the different techniques and parameter settings. Two main comparison criteria were used: the number of detected attacks and the number of false positives.

In our talk, we will present the techniques investigated and the results of our experiments. Emphasis will be placed on comparing the fixed-length and the variable-length approaches. We will show that variable-length patterns constitute a viable approach for intrusion-detection systems of this type.

References

[1] S. Forrest, S.A. Hofmeyr, A. Somayaji and T.A. Longstaff, "A Sense of Self for Unix Processes," Proceedings of the 1996 IEEE Symposium on Research in Security and Privacy, pp. 120-128, IEEE Computer Society Press, Los Alamitos, CA, 1996.

[2] A.P. Kosoresow and S.A. Hofmeyr, "Intrusion Detection via System Call Traces," IEEE Software, pp. 35-42, vol. 14, no. 5, 1997.

[3] H. Debar, M. Dacier, M. Nassehi and A. Wespi, "Fixed vs. Variable-Length Patterns for Detecting Suspicious Process Behavior," to be presented at Esorics '98, 5th European Symposium on Research in Computer Security, Louvain-la-Neuve, Belgium, Sep. 16-18, 1998.

[4] H. Debar, M. Dacier and A. Wespi, "Reference Audit Information Generation for Intrusion-Detection Systems," to be presented at IFIP/SEC 98 Global IT-Security, 14th Int'l Information Security Conf., Vienna, Austria & Budapest, Hungary, Aug. 31 - Sep. 4, 1998.

# An Immunological Approach to Distributed Network Intrusion Detection(slides available PowerPoint (gzipped) or HTML)

Authors
Steven A. Hofmeyr , Stephanie Forrest
Speaker
Patrik D'haeseleer
Institution
Department of Computer Science
University of New Mexico
NM, 87131 ; USA
Biography
Steven Hofmeyr
Steven Hofmeyr is a Ph.D. student in the Department of Computer Science at the University of New Mexico in Albuquerque.
Stephanie Forrest
Stephanie Forrest is an associate professor of Computer Science at the University of New Mexico in Albuquerque.
Patrik D'haeseleer
Patrik D'haeseleer is a Ph.D. student in the Computer Science Department at the University of New Mexico in Albuquerque. He holds a degree in Electrical Engineering (Burgerlijk Ingenieur) from the University of Ghent and a Masters in CS from Stanford University. He is interested in the boundary between biology and computation and is an active member of the Adaptive Computation Group at UNM.

## Abstract

We are designing and testing a prototype distributed intrusion detection system (IDS) that monitors TCP/IP network traffic. Each network packet is characterized by the triple (source host, destination host, network service). The IDS monitors the network for the occurrence of uncommon triples, which represent unusual traffic patterns within the network. This approach was introduced by researchers at the University of California, Davis, who developed the Network Security Monitor (NSM), which monitors traffic patterns on a broadcast LAN. NSM was effective because most machines communicated with few (3 to 5) other machines, so any intrusion was highly likely to create an unusual triple and thus trigger an alarm.
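As a bare-bones illustration of this triple-based monitoring (an assumption-laden sketch, not NSM or the authors' system), one can learn the triples seen during normal operation and flag packets whose triple is new or rare:

```python
from collections import Counter

class TripleMonitor:
    """Track (source host, destination host, service) triples and
    flag those that fall below a frequency threshold."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, src, dst, service):
        self.counts[(src, dst, service)] += 1

    def is_unusual(self, src, dst, service, min_count=1):
        return self.counts[(src, dst, service)] < min_count

mon = TripleMonitor()
for _ in range(100):
    mon.observe("hostA", "hostB", "smtp")       # typical traffic
mon.observe("hostA", "hostC", "http")           # seen once

print(mon.is_unusual("hostA", "hostB", "smtp"))     # False
print(mon.is_unusual("hostB", "hostA", "telnet"))   # True: never seen
```

Because most machines talk to only a handful of peers, the table of normal triples stays small, which is exactly why an intruder's new triple stands out.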

Although successful, NSM has serious limitations. It is computationally expensive, requiring its own dedicated machine, and even then it can only check existing connections every five minutes. Further, the architecture of NSM does not scale: the computational complexity increases as the square of the number of machines communicating. Finally, NSM is a single point of failure in the system because it runs on a single machine. These limitations can be overcome by distributing the IDS over all machines in the network. Distribution will make the IDS robust by eliminating the single point of failure, and will make it more flexible and efficient; computation can vary from machine to machine, fully utilizing idle cycles.

The architecture of NSM is not easily distributable. Distributing NSM would require either excessive resource consumption on every machine upon which it was run, or communication between machines. The immune system has interesting solutions to a similar problem of distributed detection. We have designed a distributed IDS based on the architecture of the immune system. This allows the IDS to function efficiently on all machines on the LAN, without any form of centralized control, data fusion or communication between machines. The architecture is scalable, flexible and tunable.

Our IDS depends on several "immunological" features, the most salient being negative detection with censoring, and partial matching with permutation masks. With negative detection, the system retains a set of negative detectors that match occurrences of abnormal or unusual patterns (in this case, the patterns are binary string representations of network packet triples). The detectors are generated randomly and censored (deleted) if they match normal patterns. Partial matching is implemented through a matching rule, which allows a negative detector to match a subset of abnormal patterns. Partial matching reduces the number of detectors needed, but can result in undetectable abnormal patterns called holes, which limit detection rates. We eliminate holes by using permutation masks to remap the triple representation seen by different detectors.

We have conducted controlled experiments on a simulation that uses real network traffic as normal and synthetically generated intrusive traffic as abnormal. Using a system of 25 detectors per machine on a network of 200 machines, one out of every 250 nonself patterns goes undetected, which is a false-negative error rate of 0.004. This number is conservative because intrusions will almost always generate more than one anomalous pattern. The computational impact of 25 detectors per machine is negligible, so performance can be improved by using more detectors per machine: If the number of detectors is doubled to 50 per machine, the error rate reduces by an order of magnitude. These results indicate that holes can be effectively eliminated using permutation masks, and that consequently the IDS can provide comprehensive coverage in a robust, fully distributed manner.

Previously, in the 1996 IEEE Symposium on Security and Privacy, we reported a technique for intrusion detection using sequences of system calls. Although the vision here is the same, this current research differs in the domain of application (network traffic), and draws far more strongly on the immune analogy.

# The Limitations of Intrusion Detection Systems on High Speed Networks

Author
Joe Kleinwaechter
Speaker
Joe Kleinwaechter
Institution
Internet Security Systems, Inc.
300 Embassy Row
Atlanta, GA 30348 ; USA
Biography
Joe Kleinwaechter
Joe Kleinwaechter joined ISS in the spring of 1997 as the Engineering Director of ISS' Intrusion Detection Technology, with responsibility for the direction of ISS' real-time intrusion detection tool, RealSecure.

## Abstract

Current network-based intrusion detection (ID) systems have been fighting the long, hard battle of keeping up with and implementing algorithms for detecting the very latest attacks. However, the simple fact that network devices are now capable of speeds far in excess of current ID systems is fast becoming a serious concern. A significant decision criterion used in selecting an ID technology is the number of attack signatures the product supports. The fact remains, though, that not a single ID system existing today can guarantee the detection of these signatures at fully saturated Fast Ethernet speeds. This leaves an even bigger hole when one realizes that Gigabit Ethernet is on the very near horizon. This talk discusses these speed issues and why they are critical in determining one's security posture. In addition, some questions will be raised as to whether this problem can ever be fully solved, or whether it is acceptable to get as close as we can. The discussion will be based on very simple mathematics with clear examples; as such, no particular ID product will be addressed. The topic will focus on current operating systems, network devices, network topologies and the software used to produce intrusion detection products. The talk will conclude by encouraging the ID community to develop an interest in addressing this problem.
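The "very simple mathematics" can be illustrated with the standard worst case: the packet rate a sensor must sustain on saturated Fast Ethernet, assuming 64-byte minimum frames plus the 8-byte preamble and 12-byte inter-frame gap:

```python
# Worst-case packet rate on saturated Fast Ethernet.
link_bps = 100_000_000                 # Fast Ethernet: 100 Mbit/s
frame_bits = (64 + 8 + 12) * 8         # min frame + preamble + inter-frame gap

pps = link_bps / frame_bits
print(round(pps))                      # ~148810 packets per second

# Per-packet time budget for the entire ID pipeline (capture,
# reassembly, and matching against every supported signature):
print(1e9 / pps)                       # ~6720 nanoseconds per packet
```

At Gigabit Ethernet the same arithmetic gives ten times the rate, leaving well under a microsecond per packet, which is the hole the talk is concerned with.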

# CERN Network Security Monitor( full paper available HTML)(slides available Power Point (gzipped) or HTML)

Author
Paolo Moroni
Speaker
Paolo Moroni
Institution
CERN (European Organization for Particle Physics)
CH 1211 Geneva 23
Switzerland
Biography
Paolo Moroni
After joining CERN in 1988, the author worked for five years as a system programmer on the VM/ESA service. In late 1993, he moved to the Communications Systems group to develop tools for network security and alarm monitoring.

## Abstract

This talk summarizes the characteristics of the monitoring complex developed at CERN to enhance the security of the internal network, as far as IP traffic to and from the Internet is concerned.

In this context, monitoring means trying to detect network traffic that is relevant from a security point of view, in order to locate security holes in an organization's network. Although insufficient to implement a complete security policy, such a tool is transparent to the users and to the performance of the network and provides some other advantages, including statistical information about network traffic and central management, taking only relatively few system and human resources. It is intended to support and integrate active security tools (firewalls, authentication servers, etc.), not to replace them.

Developed mainly between 1994 and 1996, the security monitor relies on accessing a shared network medium (Ethernet and/or FDDI) in promiscuous mode, at a point where all the traffic to be monitored is channeled as a consequence of an appropriate network topology choice.

Although issuing online security alerts from the monitor is technically feasible, CERN's implementation choice has been to produce mostly offline reports about possible security problems. To integrate and expand the information provided in the security reports, a database of the network traffic is maintained, allowing backward security analysis over a number of days chosen at installation time (and depending on the available resources).

# HAXOR - A Passive Network Monitor/Intrusion Detection Sensor

Author
Alan Boulanger
Speaker
Alan Boulanger
Institution
Global Security Analysis Laboratory
IBM Watson Research Center
Hawthorne, New York
USA
Biography
Alan Boulanger
Alan Boulanger joined IBM in October 1995 as a member of the TJ Watson Global Security Analysis Laboratory. His research interests include network security, intrusion detection systems, applied penetration-testing tools and techniques, data forensics and compromised-site postmortems, telephony-related security, and the discovery of new system vulnerabilities. Since joining IBM, Mr. Boulanger has discovered and reported a number of vulnerabilities to ERS and CERT and has provided technical assistance to numerous federal agencies and businesses conducting computer security related investigations.

## Abstract

Haxor is a real-time intrusion detection sensor and passive network monitor currently under development in the Global Security Analysis Laboratory (GSAL) at IBM's TJ Watson research center. The concepts behind Haxor were obtained through two years of penetration testing and through the analysis of computer systems belonging to victims of real-world security incidents. After analyzing the data, certain patterns emerged that were common to most of these types of incidents. GSAL is now in the process of developing a system that will recognize the common signatures of attacks on networked computing systems and alert administrators to the activity. Haxor is being designed as a portable system that does not require proprietary vendor equipment to run. Currently, Haxor runs on Linux, Sun, and AIX systems. Haxor has been deployed on a variety of networks, both internal and external to IBM, with great success.
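The abstract does not describe Haxor's internals; as a generic sketch of the signature-recognition approach it mentions (the signature table and event format are entirely hypothetical), a sensor might match decoded traffic events against a table of known attack patterns:

```python
import re

# Hypothetical signature table mapping a regex over a decoded traffic
# event to an alert name.  Real sensors typically match against raw
# packet payloads and protocol fields, not pre-decoded strings.
SIGNATURES = {
    r"USER root":   "remote-root-login-attempt",
    r"/etc/passwd": "password-file-access",
    r"\x90{8,}":    "possible-nop-sled",
}

def check_event(event: str):
    """Return the alert names whose signature matches the event."""
    return [name for pat, name in SIGNATURES.items()
            if re.search(pat, event)]
```

On a match, a deployed sensor would alert the administrators in real time rather than simply return a list.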

# Using Bro to detect network intruders: experiences and status

Authors
Vern Paxson
Speaker
Vern Paxson
Institution
Network Research Group
MS 50A-3111
Lawrence Berkeley National Laboratory
University of California
Berkeley, CA 94720, USA
Biography
Dr. Vern Paxson
Dr. Vern Paxson is a staff scientist with the Lawrence Berkeley National Laboratory's Network Research Group, where his research focuses on Internet measurement and intrusion detection. He serves on the Internet Engineering Steering Group as one of two Area Directors for the Transport area, and co-chairs the IETF working group on TCP Implementation. He was awarded U.C. Berkeley's Sakrison Memorial Prize for outstanding dissertation research and the IEEE Communications Society's William R. Bennett Prize Paper Award, as well as awards for SIGCOMM and USENIX Security Symposium papers.

## Abstract

Bro is a system for detecting network intruders in real-time by passively monitoring a network link. Its design emphasizes high-speed (FDDI-rate) monitoring, real-time notification, clear separation between mechanism and policy, and extensibility. To achieve these ends, Bro is divided into an "event engine" that reduces a kernel-filtered network traffic stream into a series of higher-level events, and a "policy script interpreter" that interprets event handlers written in a specialized language used to express a site's security policy. In a USENIX Security Symposium paper earlier this year we discussed the general architecture and design of the system. In this talk we discuss a number of extensions to the system, both envisioned and now implemented, and our further experiences with operating Bro on a continuous basis.

The system is publicly available in source code form.