J Comput Virol (2009) 5:345–355
DOI 10.1007/s11416-008-0098-9

EICAR 2008 EXTENDED VERSION

Danger theory and collaborative filtering in MANETs

Katherine Hoffman · Attila Ondi · Richard Ford · Marco Carvalho · Derek Brown · William H. Allen · Gerald A. Marin

Received: 20 January 2008 / Revised: 23 June 2008 / Accepted: 8 July 2008 / Published online: 12 August 2008
© Springer-Verlag France 2008

K. Hoffman · A. Ondi · R. Ford (corresponding author, e-mail: rford@se.fit.edu) · D. Brown · W. H. Allen · G. A. Marin: Department of Computer Sciences, Florida Institute of Technology, 150 W. University Blvd, Melbourne, FL 32901, USA

M. Carvalho: Institute for Human Machine Cognition, Pensacola, FL, USA
Abstract  As more organizations grasp the tremendous benefits of Mobile Ad-hoc Networks (MANETs) in tactical situations such as disaster recovery or battlefields, research has begun to focus on ways to secure such environments. Unfortunately, the very factors that make MANETs effective (fluidity, resilience, and decentralization) pose tremendous challenges for those tasked with securing such environments. Our prior work in the field led to the design of BITSI – the Biologically-Inspired Tactical Security Infrastructure. BITSI implements a simple artificial immune system based upon Danger Theory. This approach moves beyond self/non-self recognition and instead focuses on systemic damage in the form of deviation from mission parameters. In this paper, we briefly review our prior work on BITSI and our simulation environment, and then present the application of collaborative filtering techniques. Our results are encouraging, and show that collaborative filtering significantly improves classification error rate and response within the MANET environment. Finally, we explore the implications of the results for further work in the field, and describe our plans for new research.
1 Introduction

Computer networking has become an enabling technology for a wide variety of services. However, despite the apparent ubiquity of connectivity, there still exist environments that lack fixed infrastructure to support widespread computer-to-computer interaction. For example, modern battlefield systems and disaster relief efforts both lack the infrastructure on which much communication is based. In such environments, connectivity is often provided by “mobile ad hoc networks” (MANETs). Such MANETs are defined in RFC 2501 [5] as follows:

Definition 1 (MANET) [5] A MANET consists of mobile platforms (e.g., a router with multiple hosts and wireless communications devices) – herein simply referred to as “nodes” – which are free to move about arbitrarily. The nodes may be located in or on airplanes, ships, trucks, cars, perhaps even on people or very small devices, and there may be multiple hosts per router. A MANET is an autonomous system of mobile nodes. The system may operate in isolation, or may have gateways to and interface with a fixed network.
In general, MANETs need to deal with different issues than traditional wired networks. Because there is no central infrastructure (and nodes must instead forward traffic collaboratively), each node in the network must either ask other nodes for a path to a destination on demand (reactive routing) or maintain a local view of the network topology for route calculation (proactive routing), which must be frequently updated. Both approaches can lead to route disruption when nodes are accidentally or purposefully sent incorrect information about the network topology, or when critical nodes are disabled, even if temporarily. Furthermore, there exists a myriad of other security concerns in the MANET environment – for an overview, see [14] – brought about by the lack of centralized management, shifting topology, and bandwidth constrictions. As such, much work is needed if MANETs are to be used for mission-critical functions in a potentially-hostile environment.
The remainder of this paper is structured as follows. We first examine threats to MANETs and prior work in the field of security for the MANET environment. With this understanding, we then provide a short overview of our Danger Theory-inspired approach to MANET security. This framework, known as the Biologically-Inspired Tactical Security Infrastructure (BITSI), forms the basis for our experiments using reputation and collaborative filtering. The experiments are described in the next section, followed by a discussion of the results. Finally, the paper concludes with a discussion of the implications of these results for future work, and describes our plans for new research.
2 MANET security in general

When one considers the general structure of a MANET, it quickly becomes apparent that MANET security issues are a superset of traditional wired security problems. Thus, in addition to traditional security vulnerabilities, a MANET must also contend with the following challenges:

1. In a MANET, nodes cooperate to route traffic. Any routing algorithm must contend with nodes that may be under an attacker’s control.
2. Bandwidth is locally shared and often highly constrained in a MANET. How can this congestion be handled while simultaneously detecting nodes that are maliciously flooding the network or dropping traffic?
3. Battery life is often a concern for MANET designers, as roaming nodes often wish to act selfishly in order to conserve power. Thus, CPU cycles and wireless power management are extremely valuable commodities.
4. As the traffic observed by a node depends greatly on network topology, it is difficult for systems to learn what “good” traffic patterns look like, and what constitutes an “attack”.
5. Nodes frequently enter or leave the network, causing frequent changes in network membership and contributing to localized changes in topology.
6. There is no “central authority” for network monitoring and management, as the network can become disjoint at any time.

Amongst these issues, some of the most commonly explored themes in the literature are routing attacks and selfish node behaviour. Solutions are broad, ranging from additional encryption to virtual currency and reputation systems. In terms of general security, IDS/IDP is more challenging in the MANET primarily due to the frequent changes in topology and the lack of a central authority.

Collaboration between nodes is the obvious solution, and has been examined by many other researchers. For example, Huynh, Jennings, & Shadbolt [7] examine different types of trust as a potential for improving the selection of partner agents. Similarly, Sterne et al. [14] explore the benefits of creating hierarchies within the nodes for intrusion detection.

The underlying idea is relatively simple. When a node finds another node misbehaving, it could tell other nodes about the problem, and then they could all avoid the problematic node. The trouble with reputation-based approaches is that they introduce new problems – a node could have been misidentified as harmful, and would still be shunned, or a malicious node could lie about having been hurt, potentially crippling the network. The notion of trust, as distinct from reputation, was introduced to deal with this. Trust is based on most of the same information as reputation, but introduces new complications, such as whether or not to re-trust nodes that have previously been identified as malicious, and if so, when to do it, as well as what to do if malicious nodes attempt to falsely accuse good nodes of being bad.
An interesting exploration of these ideas is found in Buchegger & Le Boudec [2]. In this paper, the authors describe a system, CONFIDANT, which attempts to harden reputation systems against deliberate misinformation by looking for significant differences in reputation scores between actors. Nodes whose reputation scores for others were significantly different from those of the assessing node were considered less trustworthy. Several others have used similar techniques – for example, Liu & Issarny [9] and Zouridaki [16]. However, this aspect of the work is not fully explored in [2], as the experimental results are taken from a fairly simple congruency metric, as opposed to the more sophisticated dynamic trust adaptation also discussed within the work.
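The congruency idea can be pictured with a small sketch. This is not CONFIDANT's actual algorithm; it is a minimal illustration, under assumed data structures, of distrusting a peer whose reputation scores diverge sharply from the local view (the 0.4 cut-off and the dictionary layout are ours).

```python
def congruent(local, reported, max_dev=0.4):
    """Compare a peer's reputation scores for commonly rated nodes against our
    own (both assumed to lie in [0, 1]); the peer is treated as trustworthy only
    if the mean absolute deviation stays below max_dev (an arbitrary cut-off)."""
    common = local.keys() & reported.keys()
    if not common:
        return True  # nothing in common, nothing to contradict
    deviation = sum(abs(local[n] - reported[n]) for n in common) / len(common)
    return deviation < max_dev

mine = {"a": 0.9, "b": 0.8, "c": 0.9}
peer = {"a": 0.2, "b": 0.1, "c": 0.2}
print(congruent(mine, peer))  # False: the peer's view diverges sharply from ours
```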
As can be seen, MANETs present a difficult challenge to
those who would secure them. To this end, we have elected
to explore biology for inspiration.
3 AIS and danger theory

It is our belief that a MANET security solution must be decentralized, adaptive, and resilient to both failures and attacks. Because of these requirements, a biologically-inspired approach is attractive, as natural systems often display these qualities. In particular, computer scientists have often been tantalized by the concept of building an Artificial Immune System (AIS), which can dynamically detect and adapt to new threats.

Artificial Immune Systems have held great promise in the security field. Early work by IBM [8] and Forrest [6] focused on systems that could detect “non-self”
entities and respond to them. Despite a successful demonstration of the IBM system at the Virus Bulletin Conference in San Francisco in 1997 [8], commercially available implementations of these concepts are generally weak at best.

Part of the challenge with the AIS model is that the human immune system seems to be far more complex than simple self/non-self discrimination. For example, many non-self entities are accepted by the body (for example, parenterally-administered drugs) without provoking an immune response. Clearly, there is more at work than just discriminating between the body and “everything else”.

In order to address this, Matzinger proposed that natural immune systems respond not just to self/non-self, but also detect danger [10]. When a cell dies of natural causes, well-regulated biological pathways are followed; this is called apoptosis. Conversely, when a cell undergoes stress or traumatic destruction, certain danger signals are generated. This is known as cellular necrosis. While this theory is somewhat controversial among immunologists [11], the paradigm does turn out to be surprisingly helpful when constructing artificial immune systems.
AIS research incorporating aspects of Danger Theory (DT) has begun to appear in the literature in the last few years. For example, Aickelin et al. [1] proposed the use of DT as a missing component of traditional IDS/AIS systems. This early work has sparked further exploration of such metaphors; for example, Sarafijanovic & Le Boudec [13] designed an AIS tightly linked to the biological immune system, using Danger Theory.
Danger Theory focuses on identifying and mitigating damage to the system. Note that in many cases, it is not clear whether damage (for example, in the form of packet loss) is occurring simply due to the relative position between nodes (two nodes may share a poor link) or due to malicious activities. However, we note that DT is a moderator of our immune system model – only when damage is discovered does the system attempt to discern the underlying cause. The following list outlines some common attack classes and our triggers within DT:
– To protect against denial of service attacks (resource consumption), the system checks the node for resource constraints, which can include CPU load, memory utilization or network usage. Establishing thresholds (limits) on the amount of resource consumed by a single client request without triggering a reaction would not only ensure availability of service for other nodes, but can also help reserve enough resources to allow the node to further advance towards general mission objectives (see the sketch after this list).
– Routing attacks are searched for when the system notes that packet loss is occurring. Note that such packet loss can occur due to environmental conditions as well as active attack. When routing errors are suspected (and packet forwarding damage is detected) the system can begin the process of determining the likely cause of problems.
– To discover the presence of worms and viruses, the system should be able to note the creation of new processes and files, plus new outbound requests. However, none of these are, at least directly, damage. Thus, from a pure DT perspective, detection will only begin if the worm/virus consumes too many resources or triggers outbound traffic that is deemed to be damaging. In our future work, our intent is to apply a policy model to system calls, associating a small level of “damage” with certain call sequences (akin to behavioural virus detection). Using this approach, our belief is that it should be possible to use a DT model for remediation of the effects of malicious code.
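As one possible illustration of the resource-consumption trigger in the first item above, a minimal threshold-based danger signal might look as follows; the resource names and limits are assumptions made for illustration, not values taken from BITSI.

```python
# Illustrative danger-signal check for the resource-consumption trigger above.
# The specific resource names and threshold values are assumptions, not figures
# taken from the paper.
THRESHOLDS = {"cpu_pct": 85.0, "mem_pct": 90.0, "net_kbps": 500.0}

def danger_signals(usage, thresholds=THRESHOLDS):
    """Return the resources whose measured usage exceeds its limit; an empty
    list means no danger signal is raised for this observation."""
    return [name for name, limit in thresholds.items()
            if usage.get(name, 0.0) > limit]

# Example: a request that drives CPU past its limit raises exactly one signal.
print(danger_signals({"cpu_pct": 97.2, "mem_pct": 40.0, "net_kbps": 120.0}))  # ['cpu_pct']
```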
Of course, there are many classes of attack that would not trigger a purely-DT moderated system. For example, a user whose password had been compromised and then used maliciously would not be detected unless the attacker carried out a “damaging” action. Similarly, attacks where the damage is not immediately critical to the mission (such as data exfiltration) will not be detected using a system wholly based upon DT. As such, we argue that DT should be just one component in a larger system. This larger system is discussed below.
3.1 BITSI: overview
Given the security challenges of the MANET environment, our work has focused on applying theoretical concepts to real-world attacks. In particular, we have begun development of BITSI, which leverages different aspects of biological systems.

The underlying architecture of BITSI is quite straightforward. Each node of the MANET has a BITSI agent on it. This agent resides in a local trusted component at each system and monitors the behaviour of the node, as well as the traffic which is forwarded on the local network. From such a vantage point, BITSI collaboratively works to respond to different attacks.

In terms of attacks, our vision for BITSI is one of mission enablement. That is, BITSI accepts that some attacks will succeed on the network, but aims to mitigate their effects sufficiently to ensure mission continuity. This approach is different from (though synergistic with) more traditional remediation attempts, whose goal is to stop all attacks.

Remediation of attack effects is another important area of study. Softer security responses move away from binary “go/no-go” decisions toward responses which represent more of a continuum, such as rate-limiting traffic or selectively blocking connections from a particular application. By dynamically identifying and monitoring critical operations and performance requirements for specific contexts and
missions, BITSI can focus on securing the core operation
of the system, as opposed to trying to address the possibly
unbounded space of all possible attacks.
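One way to picture the “softer”, continuum-style responses described above is as a graduated policy rather than a binary block. The tiers and packet rates in the sketch below are illustrative assumptions, not BITSI's actual rule set.

```python
def response_for(suspicion):
    """Map a suspicion score in [0, 1] to a graduated response rather than a
    binary go/no-go decision. The tiers and rates here are illustrative."""
    if suspicion < 0.25:
        return {"action": "allow", "rate_limit_pps": None}
    if suspicion < 0.50:
        return {"action": "rate_limit", "rate_limit_pps": 50}
    if suspicion < 0.75:
        return {"action": "rate_limit", "rate_limit_pps": 5}
    return {"action": "block", "rate_limit_pps": 0}

print(response_for(0.6))  # {'action': 'rate_limit', 'rate_limit_pps': 5}
```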
The challenge with such a “live and let live” approach is that it ignores the underlying sensitivity of computer data. Clearly, some information in a military environment has long-term value and high criticality; other information has no long-term value, but is, at the short time scale, critical (an example of this might be a session key for a temporary encrypted connection). Given that this information could be extremely small in comparison to its importance (such as a 128-bit encryption key), it is very difficult to use biological techniques to prevent data exfiltration attacks, as there is no obvious biological analogy. However, this is not necessarily a fatal flaw in our approach; first, it seems unlikely that BITSI would be the only protective measure on a system; second, given the size of the problem space, a robust solution for part of the space is of value. BITSI has been designed with this in mind, and is capable of being integrated with other content-management/IDS tools.
4 Experimental design and goals

The work described in this paper applies collaborative filtering techniques in a Danger Theory-driven environment. It shows that while a node alone can detect and block attacking nodes, collaboration between nodes can, in many circumstances, improve detection even in the face of significantly noisy data. Furthermore, if nodes that have certain characteristics in common collaborate, and those characteristics are related to their vulnerability to attack, the results improve still more.

These tests abstract many of the characteristics of the MANET, and were therefore carried out in a purpose-built simulator. They assume a low-mobility, tightly packed clique of nodes that are fully connected. We examine results for a subset of the nodes, which we call servers. One or more client nodes send “bad” messages representing a resource consumption attack, which cripples the receiving server for a short period, causing it to drop all subsequent messages until the bad message is processed. The server uses BITSI and the information shared by nodes to decide whether to block future messages from attacking nodes. The simulation includes a variable percentage of false positive and false negative values, which are used in this decision.

Here we introduce the notion of attributes. These attributes represent functions, qualities, services or software which each node possesses. For example, one attribute may represent the operating system used, and another may indicate the version of the Apache server the node is running. It is our contention that nodes that share many of the same attributes are more likely to be vulnerable to the same attack. Thus, we propose to use these attributes to customize the reputation information.
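One simple way to customize reputation along these lines is to weight each recommender's opinion by how similar its attribute set is to the local node's. The Jaccard-based sketch below is our own illustrative assumption; the exact weighting function is not fixed at this point in the paper, and the attribute names are invented.

```python
def attribute_similarity(a, b):
    """Jaccard similarity between two attribute sets (e.g. OS, server version)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def blended_opinion(local_opinion, my_attrs, reports):
    """Blend remote 'bad reputation' reports into a local opinion, weighting each
    recommender by how many attributes it shares with this node.
    reports is a list of (recommender_attribute_set, reported_opinion) pairs."""
    total, weight_sum = local_opinion, 1.0  # the local opinion keeps weight 1
    for attrs, opinion in reports:
        w = attribute_similarity(my_attrs, attrs)
        total += w * opinion
        weight_sum += w
    return total / weight_sum

# A recommender with identical attributes shifts the opinion; a dissimilar one does not.
mine = {"linux", "apache-2.2", "sshd"}
print(blended_opinion(0.2, mine, [(mine, 0.9), ({"windows", "iis"}, 0.9)]))  # 0.55
```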
In order to test the effectiveness of BITSI, we examined two different scenarios. In the first scenario, we simulated a MANET of 35 nodes, of which 6 were assigned the role of servers that handled requests from the other nodes. One of the non-server nodes was assigned to be an attacker that only sent maliciously formed requests to the servers. Each discrete time step in the simulation was assumed to be enough for the servers to handle all legitimate requests received in that step. Three of the servers had an attribute which made them vulnerable to attack, which meant that processing an attack packet prevented servicing of all other packets within that time step. Each non-server (client) node sent 4 requests to randomly-selected servers each time step. We assumed that there was no loss of requests in the network.

Each node in the network has a BITSI client on it. This client, which is DT-inspired, classifies packets based upon their impact on the system. Thus, only packets that are serviced are evaluated by BITSI. Furthermore, we assumed that this classifier misclassifies “good” packets with probability P_fp and “bad” packets with probability P_fn. The BITSI agent stores the classification of the last ten packets received from each node. Once this buffer is full, the oldest entry is replaced with the status of the most recent packet received. BITSI keeps such a buffer for each client encountered on the network. Every time a packet is serviced, BITSI evaluates the contents of the buffer to determine whether a particular client should be classified as an attacker and blocked for some time, t.

In our prior work [3], we used a SoftMax learning strategy [15] where the index of damage was calculated by the following equation:
$$\frac{e^{\eta\,\chi_{\mathrm{benign}}}}{e^{\eta\,(\chi_{\mathrm{benign}}+\chi_{\mathrm{malicious}})}} \;\le\; \tau \qquad (1)$$

Equation (1): calculation of the damage index. In this equation, e is Euler's number (≈ 2.72), η is a learning coefficient, χ_benign and χ_malicious are the numbers of requests classified as benign and malicious, respectively, in the buffer, and τ is the decision threshold. If the inequality is true, the sending node is deemed to have caused definite damage, and some remedial action may be taken. For an examination of our previous results in this work, see [3]. In our current simulation the threshold was set to 0.5.
Once a node was identified as malicious, its “bad reputation” counter local to the server was incremented, and requests from the node were blocked for an exponentially increasing number of steps based on the counter. The local “bad reputation” counter essentially served as an indicator of how many times the sender of the currently evaluated request had tried to attack the server.
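A minimal sketch of this per-client decision might look as follows. It assumes the reconstructed form of Eq. (1); the buffer length (10) and the threshold (0.5) come from the text, while η = 0.5 is an arbitrary value inside the 0.1–1.0 range explored later, and the base-2 back-off merely stands in for the “exponentially increasing” blocking period, whose exact shape the paper does not give.

```python
import math
from collections import defaultdict, deque

# Buffer length (10) and tau (0.5) are from the text; eta = 0.5 is an arbitrary
# value inside the 0.1-1.0 range explored in Sect. 4.1, and the base-2 back-off
# is an assumption standing in for "an exponentially increasing number of steps".
ETA, TAU, BUFFER_LEN = 0.5, 0.5, 10

class ClientRecord:
    """Per-client state kept by a server's BITSI agent (sketch)."""
    def __init__(self):
        self.buffer = deque(maxlen=BUFFER_LEN)  # last verdicts: True = malicious
        self.bad_reputation = 0                 # times this client was judged an attacker
        self.blocked_until = 0                  # simulation step at which the block expires

def on_packet_serviced(rec, classified_malicious, now):
    """Record the classifier's verdict and apply the reconstructed Eq. (1)."""
    if now < rec.blocked_until:
        return                                  # packet from a blocked client: ignored
    rec.buffer.append(classified_malicious)
    benign = rec.buffer.count(False)
    malicious = rec.buffer.count(True)
    index = math.exp(ETA * benign) / math.exp(ETA * (benign + malicious))
    if index <= TAU:                            # inequality holds: definite damage
        rec.bad_reputation += 1
        rec.blocked_until = now + 2 ** rec.bad_reputation

clients = defaultdict(ClientRecord)
rec = clients["node-17"]
for step, verdict in enumerate([False, True, True, False, True]):
    on_packet_serviced(rec, verdict, now=step)
print(rec.bad_reputation, rec.blocked_until)  # -> 2 8
```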
4.1 Scenario 1: results
One challenge with the work is determining how to quantify our results; that is, how can we determine how “well” BITSI is functioning? In traditional IDS/IDP systems it is relatively easy to measure the Type I and Type II error rates. However, BITSI is not a classifier per se, so it does not quantify traffic in this manner. Instead, BITSI will – in the most general description – attempt to preserve certain properties of the macroscopic system by reconfiguring nodes to defend themselves, sometimes at the cost of local optimality.

In similar work (for example, routing protocols) researchers have attempted to quantify “goodput” in the system; that is, the amount of legitimate requests serviced under certain conditions. However, in a real system, this is not something that can be easily done, as there is no clear-cut delineation between “good” and “bad” in a system that is overcommitted in terms of resource consumption.

For the purposes of this paper, consider the following types of traffic:
– A: Legitimate traffic sent by nodes
– B: Legitimate traffic serviced by nodes
– C: Malicious traffic sent by attackers
– D: Malicious traffic serviced by vulnerable nodes
– E: Malicious traffic serviced by immune nodes or lost in the network
It should be noted that when a vulnerable node services a malicious attack, it becomes unable to service further traffic for the duration of the current time step. Conversely, when an immune node services malicious traffic, the node suffers no ill consequences.

Using these traffic designations, we could argue that the “optimal” strategy is where A = B – that is, where all traffic sent by “good” nodes is serviced. This approach makes sense in a simple system where there is a clear delineation between attack packets and benign traffic. However, things are significantly more complex when one considers systems that are naturally resource constrained (such as a MANET). In such a system, any traffic can cause some level of damage, as servicing one packet virtually guarantees that some other packet will not be serviced. In such a case, more complex metrics will need to be created. However, in this paper, as we are considering simple direct attacks, Quality of Service (QoS) is defined as 100 · B/A. Thus a QoS of 100% means all “good” traffic is serviced. This metric provides a balance between penalizing the system for false positives and rewarding the system for servicing legitimate requests.

Figure 1 shows a plot of the percentage of legitimate services handled by the system at a misclassification rate (P_fp and P_fn) of 25%, with a threshold τ of 0.5. In this graph, the responsiveness of the system (η) was varied from 0.1 to 1.0. As can be seen, the system correctly adapts to the attackers for high values of η. However, as η decreases (corresponding to a more reactive system), the response to misclassifications begins to dominate, and the system begins to block legitimate traffic.

Fig. 1  QoS for various values of Eta (η). Note how the system becomes too reactive as Eta decreases
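For reference, the QoS metric above reduces to a one-line computation over per-run counters; the sketch below simply restates the definition, and the counts in the example are invented.

```python
def qos(legitimate_sent, legitimate_serviced):
    """QoS = 100 * B / A: the percentage of legitimate traffic (category A) that
    was actually serviced (category B), following the definitions in Sect. 4.1."""
    return 100.0 * legitimate_serviced / legitimate_sent if legitimate_sent else 100.0

print(qos(legitimate_sent=1360, legitimate_serviced=1190))  # 87.5
```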
4.2 Scenario 2

In the second simulation, we introduce the idea that nodes have attributes known to all other nodes in the group. These may be given in a table before the start of a mission, and could be updated periodically. We use these attributes to improve the recommendations from other nodes. For this simulation, we model 8 servers, each of which has a different set of three attributes. These servers provide service to 30 clients, of which 28 are benign. After timestep 50, the 2 attacking nodes begin to mix attack traffic in with their benign packets with probability p. However, a server is only vulnerable to a particular attack if it has the right attributes. Thus, two servers are vulnerable to attack A, two to attack B, two to both attacks, and two are invulnerable. At every timestep the attacker may attack one randomly chosen server, but only those with a particular attribute set will experience damage.
In this system, every time damage is detected, the server increments its local opinion regarding each client. Furthermore, after receiving an attack packet, the rest of the messages sent to that server during that timestep are dropped. A node's negative reputation gets incremented by one unit if the server identifies it as the source of damage. Otherwise, for each message received within that time step, each node gets 1/n units of blame, where n is the number of messages processed during that timestep. The ratio between
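Under one reading of the blame assignment described for Scenario 2, a server's per-timestep update might be sketched as follows; the function and variable names are ours, and the truncated “ratio” computation above is not reproduced here.

```python
from collections import defaultdict

def apply_blame(opinions, damage_detected, identified_attacker, senders_this_step):
    """Update one server's negative-reputation counters for a single timestep.
    If a specific sender was identified as the source of damage it receives one
    full unit of blame; otherwise every sender whose message was processed this
    step shares 1/n of a unit, n being the number of messages processed."""
    if not damage_detected:
        return
    if identified_attacker is not None:
        opinions[identified_attacker] += 1.0
    elif senders_this_step:
        share = 1.0 / len(senders_this_step)
        for sender in senders_this_step:
            opinions[sender] += share

opinions = defaultdict(float)
apply_blame(opinions, True, None, ["n1", "n2", "n3", "n4"])
print(dict(opinions))  # each of the four senders receives 0.25 units of blame
```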