Paper Critique
Title:
BBN: Throughput Scaling in Dense Enterprise WLANs
with Blind Beamforming and Nulling
Author:
Wenjie Zhou, Tarun Bansal, Prasun Sinha and Kannan Srinivasan
Publication: Proceedings of the 20th Annual International Conference on Mobile
Computing and Networking (MobiCom), 2014.
Reviewer:
XXXXXX
Date:
XXXXXX
In this paper, the authors observe that traditional WLAN design pays little attention to uplink traffic,
which has recently been growing rapidly due to the proliferation of mobile applications. The paper aims to scale
the uplink throughput with the number of clients. The proposed BBN (Blind Beamforming and Nulling)
leverages the high density of access points (APs) in enterprise WLANs (EWLANs) to beamform the
uplink traffic simultaneously in only two time slots, improving throughput over
traditional TDMA and 802.11 schemes. The authors also claim that their design shifts most of the
computation and design complexity to the APs and exchanges less overhead over the backbone
than existing efforts. Practical issues are also considered and addressed to some extent.
Strengths:
The topic is timely, given the recent rise in attention to uplink traffic. The approach of
leveraging the nature of EWLANs is novel. Mathematically, the proposed protocol is proven to be very
efficient in achieving scaled throughput when there are sufficient APs. The algorithm requires little
computation on mobile clients. The paper is overall well written with clear logic.
Weaknesses:
Some practical issues mentioned in the paper are actually crucial to this scheme, but the authors fail to
provide convincing solutions, which puts the practicability of BBN into question. Also, some underlying
assumptions are oversimplified, which makes it hard to distinguish BBN from the cited related works; yet the
performance results skip the comparison with these contenders. Moreover, the related works used to justify
BBN are not the state of the art. Lastly, the name of the approach, BBN, is not entirely appropriate, since
throughout the paper the "blind" feature is questionable.
Detailed comments:
First, the practical challenges in this paper are claimed to be solved, but in questionable ways. For example: 1) the
authors borrow techniques from other papers to achieve AP synchronization, but without
importing the overheads that accompany those solutions; 2) to extend BBN to multiple collision domains, APs that
can hear each other are formed into groups that remain silent while a neighboring group is
communicating; this design is not novel and usually requires substantial backbone overhead; and 3) the
nature of beamforming leaves very little tolerance for decoding errors (once a packet is decoded
incorrectly, all other beamformed packets are ruined). The extra-AP solution cannot solve this problem
effectively, as can be seen from Fig. 8(c). However, the authors do not restrict the throughput
measurement to correctly decoded packets, which is unfair when comparing BBN with other protocols
that have low decoding error rates.
Second, some latent assumptions seem to indicate that BBN is not truly blind. For instance, in order to
align the packets for cutting off, every AP is assumed to know which packet is decoded at which AP and
which subset of APs act as transmitters. In other words, each AP has a good view of
both the clients and the roles of the other APs, which makes it hard to accept that this is called blind
beamforming. Even if this information is obtained by estimating the expected highest SNR, as the authors
claim, each AP would then need accurate prior knowledge of the clients' topology, which is
an oversimplified assumption.
Further, the performance results are not sufficient to justify the conclusions: 1) the comparison is
incomplete. The authors only compare their design with generic TDMA and 802.11 protocols; the previously
mentioned related works are not considered in this part. Though those works have the drawbacks the paper
criticizes, BBN also has its own drawbacks that they do not have; 2) some metrics are
missing, such as the backbone overhead and the decoding error rate mentioned above. It is unfair to
compare throughput alone without these two metrics. Since the authors claim BBN has less backbone
overhead than the sample-sharing design, the two works should be compared on overhead. Though
APs only exchange decoded packets, reading the paper suggests that BBN requires a large amount of other
information to be exchanged over the backbone; and 3) the performance of multi-collision-domain BBN
is not shown in the experiments, and the assumptions of the trace-driven simulation are not
clear.
Lastly, the related works ([18], [24], [7], and [9]) the paper uses to justify BBN's
salient features are too dated, since they all predate 2008 while BBN is a 2014 paper.
Conclusion:
The paper attempts to provide throughput that scales linearly with the number of clients in EWLANs, which
is an ambitious goal. This would be a great contribution if the protocol were not based on oversimplified
assumptions and if performance results covering the practical issues were given.
Fund. Wireless & Mobile Protocol
COSC4301/5340 Research Paper Critique
Instructor: Dr. Xingya Liu
Department of Computer Science
Lamar University
Office: MAES 86
Phone: 409-880-8677
Email: xliu@lamar.edu
Paper Critique
• Papers can be divided roughly into two categories: original
research papers and survey papers
– Research papers contain a “related work” section that can be considered a
survey
• What is a paper critique
– A critique of an article is the objective analysis of a literary or scientific
article with emphasis on whether or not the author supported his main
points with reasonable and applicable arguments based on facts.
– A good critique demonstrates your impressions of the article, while
providing ample evidence to back up your impressions.
– Avoid simply summarizing the points of an article without truly analyzing
and challenging it.
Why Write Critiques?
• Writing a critique on a research work helps to develop:
– A knowledge of the work’s subject area or related works
– An understanding of the work’s purpose, creative idea, development of
argument, structure of evidence
– A recognition of the strengths and weaknesses of the work
Steps To Take Before Writing
• Preliminary reading
– Read through the article you are to critique once to get an overview, the
main idea
– Try to understand the overall argument the author is making
• Critical reading
– Read the paper again and again, critically, on both technical contents and
presentation
– Make some markings/notes while reading
• Create a unique symbol to differentiate texts that might be confusing,
important, or inconsistent
• Underline important statements
• Circle confusing ones
• Star inconsistencies
• It is helpful to take notes when expanded thoughts come to you as you read
– May need to read related references to better understand the technical
details, to relate to a broader issue or context
Steps To Take Before Writing
• Critical reading
– Ask yourself the questions you want to address in your critique while reading
• Is the research purpose made clear in the introduction? Any ambiguous statements?
• What underlying assumptions does the article have?
• Is the technical method correct and described with adequate details?
• Are the study design and methods appropriate and without major flaws for the
purpose of the research?
• Have the procedures been presented in enough detail to enable a reader to duplicate
them?
• Does the article provide ample and valid evidence to support the research
conclusions?
• Did the author build a logical argument? Are there any holes in the author’s
argument? Is there other evidence that would support a counter-argument?
• Is there any misrepresentative evidence or bias to evidence?
• Do the research conclusions have practicality and real world application?
• Is the research significant for the particular field of study?
• Is the title of the article appropriate and clear?
• Is the abstract specific, representative of the article, and in the correct form?
• Should some sections of the article be expanded, condensed or omitted?
• Do the Introduction and Conclusion match up?
(The questions above are grouped on the slide as: Technical, Deeper Thinking, and Presentation.)
Steps To Take Before Writing
• Doing some research: check or gather evidence to support/justify
your opinions/arguments
• Developing the structure:
– Form opinions/arguments
– Clear logic flow/organization of what you want to present
– Present your opinions/arguments in a well-reasoned, objective tone
Reading Papers
• Read Abstract first
– Get the high-level idea of the approach/solution
– May have difficulty understanding the subject
– May have doubts about the idea
• Read Introduction
– Understand the subject
– Understand why the issue is important / why the research is needed / what are the
requirements or criteria of a good approach, etc.
• Read over the paper body, critically
– Understand the main contribution / novelty / differences with other
approaches
– Remove your doubts about the idea
– Read the performance evaluation/results to justify the contribution
• Read Conclusion: confirm the contribution of the paper
Typical Structure of a Critique
• Paper title, author information, publication information
• 1 paragraph executive summary: summarize the content of the paper
– What is the paper trying to do? Statement of the problem or issue discussed
• What is the main research problem?
• What is the objective of the paper?
– What is the potential contribution of the paper?
• What is the main work presented in the paper? Approach or methods, hypothesis?
• What are the conclusions of the presented work?
• Strengths: summary of the strengths of the paper
• Weaknesses: summary of the weaknesses of the paper
• Detailed comments: explanations/evidence of the listed strengths and
weaknesses
– Begin each paragraph with a theme/topic sentence that summarizes the main
point of the paragraph
• Conclusion: summary of your critique
– Provide a recap of your main points throughout the critique
– Suggest potential directions for improvement and broad implications for the field
of study (optional)
• References (optional): to support your assessment
Technical Issues for Strengths/Weaknesses
• Significance of the research problem
– Is the research problem considered in the paper important and significant?
– Does the paper motivate the considered research problem well?
– Is the considered research problem challenging?
• Novelty / Originality
– Is the presented research novel?
– Does the paper review the state-of-the-art to justify its novelty?
– Is the novelty significant or incremental?
• Technical merits / flaws
– What are the striking merits or major flaws of the technical work?
– Is the technical depth sufficient? Are there any omitted research issues?
• Performance results
– Are there sufficient results to justify the conclusions? Any missing results?
– Do the results have potential impact or provide important insights for the
field of study?
Writing Issues for Strengths/Weaknesses
• Background information
– Does the paper provide sufficient background knowledge introduction?
– Does the background information provided in the paper help to motivate
or better understand the presented research?
• Structure of the paper
– Is the organization of the paper clear? Is there a need for re-structure?
– Should some sections of the article be expanded, condensed or omitted?
• Language
– Are there any typos, grammar errors, or ambiguous words/sentences?
• References
– Are the references accurate and adequate? Any missing important
references?
– Did the author cite pertinent references, and only pertinent references?
Critique Evaluation Criteria
• Satisfies page and spacing requirements
– <= 3 pages (>=11pt, 1.5 spacing) including all figures/tables and
references
– A separate title page with the title of the paper you critique and your
name
• Technical content
– Logic correctness
– Technical depth: deep analysis (show your own understanding)
– Technical completeness/thoroughness
– Technical errors
• Writing
– Structure/logic clarity
– Spelling/grammar
– Use your own words
Grading
• Technical Content (30)
– Accuracy (10)
– Completeness / thoroughness (10)
– Depth (10)
• Organization /Logic (10)
• Clarity of written presentation (10)
Common Problems
• Avoid simply summarizing the points of an article without truly
analyzing and challenging it.
• Coherent writing (clear logic flow)
– Every paragraph has a MAIN IDEA: a "main idea sentence" or "topic
sentence" (usually the first/last sentence of a paragraph)
– The remainder of the paragraph expands on the main idea
• May use connecting words to support cohesion: first of all, in addition,
furthermore, moreover, one way is …, another is …, etc.
– Link ideas between paragraphs
• The first sentence of a paragraph may be the linking sentence to link the idea
of the previous paragraph and the idea of the current paragraph
Common Problems
• PLAGIARISM WARNING: Plagiarism is the act of
passing off material (a sentence, a paragraph, a section
or an entire document) that someone has written as if
you wrote it. This kind of copying is unethical,
academic misconduct, and illegal.
– If you want to quote from some source verbatim, you may do
so as long as you enclose the quoted material in double
quotations and cite the source at the end of the quote.
A Moving-Direction-Oriented Handoff Scheme for Directional
Antennas in Wireless Local Area Networks
Xiaoqian Lyu and Jiang Xie
Department of Electrical and Computer Engineering
The University of North Carolina at Charlotte
Email: {xlv, linda.xie}@uncc.edu
Abstract—Directional antennas have been intensively studied in
wireless local area networks (WLANs) in order to increase space
reuse rate, reduce interference, and extend transmission range,
etc. However, the handoff techniques of directional antennas
still need improvement. One critical problem is to reduce the
extremely long handoff latency of directional antennas caused
by the sequential search of sectors in the channel scanning
phase. In this paper, we propose a moving-direction-oriented
handoff scheme which prioritizes the scanning of sectors by
the moving direction of a mobile station. The moving direction
is calculated by an analytical method utilizing the direction-of-
arrival information of the mobile station. Simulation results show
that the calculated moving direction can achieve an accuracy
of over 98%. The proposed scheme can significantly reduce the
channel scanning latency without degrading the searching range.
Moreover, the handoff frequency of stations is also reduced,
which is beneficial in saving network resources.
I. INTRODUCTION
With the feature of beamforming towards a particular direc-
tion, directional antennas have the ability to increase spatial
reuse rate, reduce interference, extend transmission range,
save power consumption, etc. Particularly, directional antennas
provide a good solution to extend the communication range at
high frequency bands (e.g., 60 GHz IEEE 802.11ad networks)
[1]. Driven by these benefits and demand, numerous research
works on wireless network design using directional antennas
have flourished over the last decade.
Smart directional antennas combine antenna arrays and
Digital Signal Processing (DSP) techniques. The DSP module
can estimate the Direction-of-Arrival (DoA) of signals. Based
on the DoA, the antenna array performs beamforming and
implements communications [2]. Due to this new property of
directional antennas, traditional MAC protocols and handoff
mechanisms for omni-directional antennas do not work any-
more. By now, various MAC protocols have been proposed for
wireless networks with directional antennas [3], [4]. However,
the handoff issue of directional antennas is not well handled
yet. The most important challenge is to reduce the extremely
long handoff latency of directional antennas. In the IEEE 802.11
handoff process, the total handoff delay includes channel
scanning latency, authentication latency, and reassociation
latency, among which the channel scanning latency accounts
for almost 90% of the total handoff latency [5]. (This work was supported in part by the
US National Science Foundation (NSF) under Grant No. CNS-0953644, CNS-1218751, and CNS-1343355.)
If directional antennas are used, the channel scanning latency will multiply,
because directional antennas can only search one sector at a
time so that they have to search each sector sequentially in
order to find all available access points (APs) around them as
shown in Fig. 1. An alternative method is to scan in isotropic
mode. However, the searching range will be reduced under
constant transmission power, which reduces the probability
of finding new APs in the searching range thus degrading
the handoff success rate, as demonstrated in Fig. 2. If a
station increases its transmission power to reach the same
searching range, the extremely high transmission power may
be unaffordable for the station. According to [6], [7], for a
directional antenna with a beamwidth of 60◦, the transmission
power of isotropic transmission needs to increase by 7-8
times in order to reach the same transmission range. This is
especially intolerable in high mobility and energy-constrained
environments. Therefore, taking both the handoff latency and
searching range into consideration, it is imperative to design
an efficient handoff solution for directional antennas.
Fig. 1. Channel scanning of a station with directional antennas.
Fig. 2. Handoff success rate of directional antennas and omni-directional antennas under the same transmission power.
In order to solve the long handoff latency problem of
directional antennas, two methods for vehicular networks –
cached and online mode – are proposed in [8]. The cached
mode collects the RF signature database during “idle” drive
to guide handoffs when a car moves along a known route.
The online mode requires a mobile station to probe all APs in
real time instead of using the RF signature database. Though
the cached mode outperforms the online mode, it works only
for known routes. For unknown routes, a car has to use the
online mode, but the penalty is the extremely long handoff
latency. A simple and effective channel scanning scheme is
used in [8] so that the channel scanning time can be reduced
significantly, but the handoff latency is still very long (3080ms
in the setting of [8]). In [9], a dynamic beamforming handoff
scheme is proposed using neighbor profiles cached in APs.
However, it is not feasible when the network status changes,
e.g., APs change locations, control channels, or beam width.
Moreover, the updating of neighbor profiles causes excessive
overhead to the network.
In this paper, we propose a moving-direction-oriented hand-
off scheme which can significantly reduce the latency of
channel scanning without reducing the searching range under
consistent power consumption. First, we propose a method
to calculate the moving direction of stations using the DoA
information collected by directional antennas. The proposed
method does not require multiple APs' cooperation and is not
restricted to indoor applications, which is different from the
localization mechanisms of WLANs [10], [11]. Based on the
calculated moving direction of stations, the number of sectors
to be scanned is reduced by choosing new APs with a higher
priority in the moving direction. Although not all directions
are scanned in our scheme, the handoff success rate is not
compromised. More importantly, the handoff frequency can
also be significantly reduced, because APs in the moving
direction of a station may serve it for a longer time, as
compared to those in the opposite moving direction of the
station. Lower handoff frequency can help to save network
resources, which is beneficial to the whole network.
The rest of this paper is organized as follows. In Section II,
the proposed scheme is described. In Section III, simulation
results are given, followed by the conclusions in Section IV.
II. THE PROPOSED MOVING-DIRECTION-ORIENTED
HANDOFF SCHEME
A. Direction-of-Arrival
The DoA of a signal at a node is defined as the direction
in which the signal arrives. In this paper, we describe the
DoA using a polar coordinate system centered at the node. We
can also describe the change of DoA in the polar coordinate
system when the transmitter of signals moves from one point
to another. An example is shown in Fig. 3. An AP (receiver)
is at point O and a station (transmitter) is at point A. The DoA
of signals from the station at the AP is DoA_A = 60°. If the
station moves to point B, then DoA_B = 0° and the change
of DoA is Δθ = DoA_B − DoA_A = 0° − 60° = −60°. The
negative sign indicates that the DoA changes clockwise.
B. Calculation of the Moving Direction
We consider the handoff scenario as shown in Fig. 4. An AP
is communicating with a station when the station passes point
A and point B sequentially in its moving trajectory. A handoff
is triggered by the station at point B, and point A is close to
point B. The moving direction of the station at point B is the
tangential direction of the moving trajectory which is denoted
by d in the figure. Since point A is close to point B, we can
use the direction of the vector AB to approximate the moving direction
denoted by d′ in the figure.
Fig. 3. Definition of DoA and angle difference of two DoA values.
Fig. 4. Calculation of the moving direction of a station.
When a handoff is triggered, the station is beamforming
towards the AP. In order to beamform towards its moving
direction, the station needs to know the degree of angle
it should rotate its antenna from its current beamforming
direction. We use a rotating angle α to denote this degree
as shown in Fig. 4. Note that α is also a signed value
which indicates the rotation is counter-clockwise/clockwise by
a positive/negative sign.
1) Calculation of α: The calculation of α can be imple-
mented by either the AP or the station. The idea and math-
ematical formulas of these two ways are basically the same.
Here we introduce the method of assigning the calculation task
to the AP. We will explain the advantages and disadvantages
of the AP calculation and the station self-calculation in later
subsections.
As mentioned above, the AP can obtain the DoA of signals
from the station at points A and B, denoted by DoA_A and
DoA_B, respectively. The change of DoA is obtained as
Δθ = DoA_B − DoA_A. On the other hand, the station monitors
the received signal strength (RSS) of signals during the whole
process. RSS is an indicator of the reception power P_r at the
station, which is given by

$$P_r = \frac{P_t G_t G_r}{K D^{\mu}}, \quad (1)$$
where G_t is the transmitter gain, G_r is the receiver gain, P_t is
the transmission power, D is the distance between the station
and the AP, μ is the path-loss factor, and K is a constant.
Based on (1), the ratio of the two distances OA and OB in
Fig. 4 is
$$\frac{OA}{OB} = \left(\frac{P_{r,B}}{P_{r,A}}\right)^{1/\mu}, \quad (2)$$

where P_{r,B} and P_{r,A} are the reception power at point B and
point A, respectively.
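As a quick illustration of how (1) and (2) combine, the distance ratio k = OA/OB follows directly from the two reception powers; a minimal sketch (the function and variable names are illustrative, not from the paper):

```python
def distance_ratio(pr_a, pr_b, mu):
    """k = OA/OB from eq. (2): since P_r is proportional to D^(-mu) by
    eq. (1), the ratio of reception powers gives the ratio of distances."""
    return (pr_b / pr_a) ** (1.0 / mu)
```

For example, with a path-loss factor μ = 4, a station whose reception power has dropped to 1/16 of its value at point A is now twice as far from the AP, i.e., k = 0.5.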
According to the Law of Sines,
$$\frac{AB}{\sin\Delta\theta} = \frac{OA}{\sin(180° - \alpha)} = \frac{OB}{\sin(\alpha - \Delta\theta)}. \quad (3)$$
Then, OA/OB can be expressed as

$$\frac{OA}{OB} = \frac{\sin\alpha}{\sin(\alpha - \Delta\theta)} = \frac{\sin\alpha}{\sin\alpha\cos\Delta\theta - \cos\alpha\sin\Delta\theta}. \quad (4)$$
After rearranging (4), we get

$$k\cos\Delta\theta\sin\alpha = \sin\alpha + k\cos\alpha\sin\Delta\theta, \quad (5)$$

where k ≜ OA/OB. Then, we have

$$\tan\alpha = \frac{k\sin\Delta\theta}{k\cos\Delta\theta - 1}. \quad (6)$$
Usually, OA is shorter than OB, so k is a value in (0, 1).
According to (6), if Δθ ∈ [0°, 180°], then α is a negative
value; if Δθ ∈ [−180°, 0°), then α is a positive value. Because
α is an exterior angle of the triangle △OAB, the value of α
should be in (90°, 180°] or [−180°, −90°). Based on this range,
we can obtain the value of α from (6).
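Combining (6) with this range constraint, α can be computed numerically. One way to do it, sketched below under the paper's stated sign rule (the function name is illustrative): since k ∈ (0, 1) the denominator of (6) is negative, so atan2 places the magnitude of α in (90°, 180°], after which the sign is set from Δθ.

```python
import math

def rotation_angle(k, dtheta_deg):
    """Signed rotation angle alpha.

    Magnitude follows eq. (6): tan(alpha) = k*sin(dtheta) / (k*cos(dtheta) - 1).
    With k in (0, 1) the denominator is negative, so atan2 yields a magnitude
    in (90, 180]. The text's sign rule is then applied: alpha is negative
    (clockwise rotation) when dtheta > 0, positive otherwise.
    """
    d = math.radians(abs(dtheta_deg))
    alpha = math.degrees(math.atan2(k * math.sin(d), k * math.cos(d) - 1.0))
    return -alpha if dtheta_deg > 0 else alpha
```

For k = 0.9 and Δθ = 30°, this yields α ≈ −116°, a clockwise rotation slightly beyond a right angle; for Δθ = 0° it returns 180°, matching the ±180° case discussed below.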
Note that α is a signed value. A positive sign indicates
that the station should rotate its antenna counter-clockwise
and a negative sign indicates rotating clockwise. Based on the
calculated value of α, the station rotates its antenna clockwise
when Δθ > 0 and counter-clockwise when Δθ < 0. When
Δθ = 0°, we get tan α = 0 according to (6). Based on the
range of α, α should be either 180° or −180°,
which means that the station should rotate its antenna to the
opposite direction in order to arrive at its moving direction.
2) Tolerance to Error: As shown in Fig. 4, we use d′ to
approximate the moving direction d. Thus, there is a difference
ε between d and d′. However, the calculated α can tolerate this
error ε since the beamwidth of directional antennas is usually
larger than ε. Let β denote the beamwidth of the directional
antenna of the station. When the station beamforms towards
d′, the beam can cover the moving direction d as long as
ε < β/2.
3) Modification of Δθ: It should be noticed that when the
station moves across 0◦ in the polar coordinate system, the
calculated value of Δθ is incorrect (shifted by 360◦). Since
Δθ is an inner angle of △OAB, it should be in the range
of (−180◦, 180◦). Therefore, the value of Δθ is modified
according to Algorithm 1.
Algorithm 1: Calculation of Δθ
1: Δθ = DoA_B − DoA_A;
2: if Δθ ∈ [−360°, −180°] then
3:   Δθ ← Δθ + 360°;
4: else if Δθ ∈ [180°, 360°] then
5:   Δθ ← Δθ − 360°;
6: end if
7: if Δθ ∈ (−180°, 180°) then
8:   return Δθ
9: end if
For example, if DoA_A = 345° and DoA_B = 15°, then Δθ =
15° − 345° = −330°. In this case, DoA_B should be modified
to 15° + 360° = 375°, so Δθ = 375° − 345° = 30°.
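Algorithm 1 and the worked example above can be sketched in a few lines (an illustrative rendering, not the authors' code):

```python
def normalize_dtheta(doa_a, doa_b):
    """Wrap the DoA change into (-180, 180) degrees, per Algorithm 1."""
    dtheta = doa_b - doa_a
    if -360 <= dtheta <= -180:
        dtheta += 360  # station crossed 0 degrees; undo the 360-degree wrap
    elif 180 <= dtheta <= 360:
        dtheta -= 360  # same correction for the other crossing direction
    return dtheta
```

With the example values, normalize_dtheta(345, 15) returns 30, matching the modification described above.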
4) Setting of Point A: As mentioned above, two points in
the moving trajectory of the station are needed to calculate
the moving direction. Usually, point B (handoff trigger point)
is detected when the RSS of the received signal from the
AP falls below a threshold RSS_th. In this paper, we use a
similar method to detect point A by setting another threshold,
RSS_prep. When the RSS of the received signal falls below
RSS_prep, a handoff preparation event is triggered.
However, in practical situations, special cases may happen
because the station may divert, pause, or fluctuate in its
trajectory. Three special moving trajectories are shown in Fig.
5. In case (a), a station moves into the handoff preparation
area and a handoff preparation event is triggered, but then it
turns back towards the AP. In this case, the previously selected
point A should be deleted. In case (b), a station moves into the
handoff preparation area, then it turns back towards the AP but
then moves into the preparation area again, after which a handoff is
triggered. In this case, point A should be selected the second
time the station enters the handoff preparation area. In case
(c), a station moves into the handoff preparation area, and
pauses in the area for some time. Then it changes its moving
direction and moves out of the handoff preparation area. In
this case, it is better to update point A at the pause point in
order to get the correct moving direction for the handoff.
Fig. 5. Some special moving trajectories of a station.
In order to handle the different moving trajectories of stations
shown in Fig. 5, we introduce a timer to update the information
of point A during the handoff preparation process. Point A is
updated according to the following rules:
• When the RSS of the received signal falls below
RSS_prep, the current position of the station is selected as
point A. The station informs the AP to store the current
DoA and P_r as DoA_A and P_{r,A}. A timer is started and
the station enters the handoff preparation state.
• If the timer has not expired and the RSS of the received
signal stays below RSS_prep, the station remains in the
handoff preparation state.
• If the timer has not expired but the RSS of the received
signal goes above RSS_prep, then the station informs the
AP to delete the previously stored DoA_A and P_{r,A}. The
station exits the handoff preparation state.
• If the timer has expired and the station is in the handoff
preparation state, then the station informs the AP to
update DoA_A and P_{r,A} with the current position of
the station. The timer is restarted.
• If the timer has not expired and the station is in the handoff
preparation state when the RSS of the received signal falls
below RSS_th, a handoff is triggered. The station informs
the AP to store the current DoA and P_r of the signal as
DoA_B and P_{r,B}. The AP calculates the moving direction
of the station. The station enters the handoff state.
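The rules above amount to a small state machine driven by periodic RSS samples. The following is a hypothetical sketch: the class name, parameters, and the locally simulated AP-side storage are all illustrative assumptions, not the paper's implementation.

```python
class HandoffPrep:
    """Tracks point A per the timer rules; yields (point_A, point_B) on handoff."""

    def __init__(self, rss_prep, rss_th, ttl):
        self.rss_prep, self.rss_th, self.ttl = rss_prep, rss_th, ttl
        self.point_a = None   # (DoA_A, Pr_A); stored at the AP in the paper
        self.deadline = None  # timer expiry time

    def on_sample(self, now, rss, doa, pr):
        if self.point_a is None:
            if rss < self.rss_prep:            # enter handoff preparation state
                self.point_a, self.deadline = (doa, pr), now + self.ttl
            return None
        if rss < self.rss_th:                  # handoff triggered: current point is B
            point_a, self.point_a, self.deadline = self.point_a, None, None
            return point_a, (doa, pr)          # AP then computes the moving direction
        if rss > self.rss_prep:                # station moved back toward the AP
            self.point_a = self.deadline = None
        elif now >= self.deadline:             # timer expired: refresh point A
            self.point_a, self.deadline = (doa, pr), now + self.ttl
        return None
```

on_sample returns the stored (DoA, P_r) pairs for points A and B once a handoff is triggered, at which point the AP would run the moving-direction calculation.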
5) Configuration of Timer: As mentioned above, a time-
to-live (TTL) value should be assigned to the timer. If the
TTL value is too large, the timer may not restart when the
station pauses in the handoff preparation area. As a result,
point A cannot be updated and the calculated moving direction
is incorrect. Therefore, a small TTL value is preferred for the
timer. However, if the TTL is too small, point A will be
updated too often, which will impose too much overhead on
the network. Moreover, when the TTL is too small, point A
and point B are too close, so the directional antenna cannot
differentiate DoA_A from DoA_B. Therefore, the value of TTL
should be restricted by the tolerance to error of the scheme
and the resolution of directional antennas.
The maximum value of the TTL is restricted by the
tolerance to error of our solution. The error should not be
larger than the error shown in Fig. 6. A station moves smoothly
from point A to point B along an arc in the handoff preparation
area. At point B, the calculated moving direction of the station
is d′. However, the actual moving direction should be d. The
difference between the two directions is denoted as ε in the
figure.

Fig. 6. The maximum of TTL is restricted by the tolerance to error.
The relationship between ε and Δθ is

$$\varepsilon = 180° - 90° - \frac{180° - \Delta\theta}{2} = \frac{\Delta\theta}{2}.$$

In order to cover the moving direction of the
station by the beamwidth of directional antennas, ε should be
smaller than β/2, i.e., Δθ/2 < β/2. Let v denote the moving speed
of the station. Let r denote the radius of the transmission range
of directional antennas. The value of TTL should be restricted
by
$$TTL < \frac{(\pi \cdot \beta/180°) \cdot r}{v} = \frac{\pi r \cdot \beta}{v \cdot 180°}. \quad (7)$$
The minimum value of the TTL is restricted by the resolu-
tion of the DoA estimation algorithm of directional antennas.
The resolution is the smallest angle difference that the DoA
estimation algorithm can detect. Let δ denote the resolution of
directional antennas in this work. The value of TTL should
be restricted by
$$TTL > \frac{(\pi \cdot \delta/180°) \cdot r}{v} = \frac{\pi r \cdot \delta}{v \cdot 180°}. \quad (8)$$
Usually, DoA estimation algorithms can achieve a resolution
of 2◦ [12] which is smaller than the beamwidth of directional
antennas. Combining (7) and (8), the value of TTL should be
set in the range
$$\frac{\pi r \cdot \delta}{v \cdot 180°} < TTL < \frac{\pi r \cdot \beta}{v \cdot 180°}. \quad (9)$$
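The TTL window in (9) is straightforward to evaluate. A small sketch under assumed units (r in meters, v in m/s, angles in degrees; the parameter values in the note below are illustrative):

```python
import math

def ttl_bounds(r, v, beta_deg, delta_deg=2.0):
    """TTL range from eq. (9): time to traverse an arc spanning the
    resolution angle delta (lower bound) vs. the beamwidth beta (upper
    bound) at radius r and speed v."""
    lo = math.pi * r * delta_deg / (v * 180.0)
    hi = math.pi * r * beta_deg / (v * 180.0)
    return lo, hi
```

For instance, with r = 100 m, v = 1 m/s, a 60° beamwidth, and the 2° resolution cited from [12], the TTL should lie between roughly 3.5 s and 105 s.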
6) Tradeoff of Moving Direction Calculation by APs or
Stations: As mentioned above, the moving direction of the
station can be calculated by either the AP or the station itself.
Both ways have advantages and disadvantages. If the AP
calculates the moving direction, additional signaling messages
are required between the station and the AP, which will result
in extra overhead. The additional signaling messages include
two types. One is Points Config signaling messages. They
are sent by the station to notify the AP to setup/delete/update
the DoA and RSS values of point A and point B. Among
them, the amount of signaling messages to update point A
is related to TTL. The smaller the TTL is, the more the
signaling messages will be. The second type of additional
messages is Move Direct signaling messages. It is sent by
the AP to inform the station of the calculated moving direction.
This message is sent only once during one handoff process.
If the station calculates the moving direction by itself, all
the updates and calculations are executed by the station, and
no additional signaling overhead is introduced. However, this
workload consumes more of the station's energy. In practice,
keeping battery consumption low is an important consideration,
especially for mobile devices. Therefore, whether the movement
calculation task is assigned to the AP or the station depends on
practical requirements and the capability of the devices.
C. Moving-direction-oriented Channel Scanning
After obtaining the value of α, the AP informs the station
of the value of α. The station rotates its antenna by α degrees
to point toward its moving direction. In order to reduce the number
of directions to be scanned, we classify all available APs in
different priority levels according to their angle difference to
the moving direction of the station, as shown in Fig. 7. Smaller
angle difference means that the AP is closer to the moving
direction of the station, so we give a higher scanning priority
to it. During the channel scanning process, the station first
beamforms towards the highest prioritized direction to perform
channel scanning. If one or more responses are received in this
direction, the station selects a new AP and stops the scanning.
Lower-priority directions are only scanned when no response
is received in the current direction.
Fig. 7. Priority of available neighboring APs according to their angle
difference to the moving direction of the station. (Pi: scanning priority. P1:
the highest priority.)
The channel scanning process is described in Algorithm 2.
Algorithm 2: Moving-direction-oriented Channel Scanning
1: Set the total number of directions: Total_direct = ⌈360◦/β⌉;
2: for k = 1 to Total_direct do
3:    Set the current scan direction based on the moving direction:
      Scan_direct = Mov_direct + (−1)^k ⌈(k−1)/2⌉ β;
4:    Scan all channels:
5:    for ch_id = 1 to Total_channels do
6:       Send probe: Probe(ch_id) = 1;
7:       if Response_flag(ch_id) == 1 then
8:          Response_RSS(ch_id) = Current_RSS;
9:       end if
10:   end for
11:   Check all responses in the current direction:
12:   if find(Response > 0) ≥ 1 then
13:      Stop scanning of remaining directions:
         Direction_id = Num_Directions;
14:   end if
15: end for
16: NewAP_channel = Index_of_max(Response_RSS);
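Algorithm 2 can be sketched in Python as follows. The `probe` callback, which stands in for the probe/response exchange on one channel in one direction and returns the response RSS (or None), is a hypothetical interface introduced here for illustration:

```python
import math

def direction_scan(mov_direct, beta, total_channels, probe):
    """Moving-direction-oriented channel scanning (sketch of Algorithm 2).

    mov_direct: moving direction of the station (degrees)
    beta: beamwidth of the directional antenna (degrees)
    probe(ch_id, direction): hypothetical stub for the probe/response
        exchange; returns the response RSS, or None if no AP answered.
    Returns (channel, direction) of the strongest response in the first
    direction that yields any response, or None if all directions fail.
    """
    total_direct = math.ceil(360.0 / beta)
    for k in range(1, total_direct + 1):
        # For positive k, k // 2 equals ceil((k - 1) / 2), so the offsets
        # visit 0, +beta, -beta, +2*beta, -2*beta, ... around mov_direct.
        scan_direct = (mov_direct + (-1) ** k * (k // 2) * beta) % 360
        responses = {}
        for ch_id in range(1, total_channels + 1):
            rss = probe(ch_id, scan_direct)
            if rss is not None:
                responses[ch_id] = rss
        if responses:  # stop scanning the remaining directions
            best_ch = max(responses, key=responses.get)
            return best_ch, scan_direct
    return None
```

For example, with β = 60◦ and a station moving at 90◦, the scanned directions are 90◦, 150◦, 30◦, 210◦, 330◦, 270◦, in order of increasing angular distance from the moving direction.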
D. Summary
The proposed scheme aims to search new APs in the moving
direction of a mobile station with a higher priority. When a
handoff is triggered by a station, the station sends a Handoff
Request message to its current AP to inform the handoff. The
current AP calculates the rotating angle α and includes it in a
Handoff Reply message to the station. The station extracts the
rotating angle from the Handoff Reply message and rotates its
antenna to the moving direction. Using the proposed scheme,
two benefits can be obtained:
1) The channel scanning delay is greatly reduced because
fewer directions are scanned.
2) The handoff frequency is also significantly reduced in
the long run because our preferred new APs can provide
longer connectivity duration compared with other APs.
III. PERFORMANCE EVALUATION
In this section, we evaluate the performance of the proposed
scheme and compare it with alternative schemes. First, we de-
scribe the settings of scenarios in the simulations. Then, three
aspects of performance are evaluated: accuracy of moving
direction calculation, channel scanning latency, and handoff
frequency.
A. Simulation Setup
In our simulation, APs are evenly distributed in a square
area. Mobile stations are moving randomly within the area
following the Random Waypoint model. For simplicity, the
propagation of signals follows the free space model. At startup,
the control channel of APs are randomly allocated. In order
to simulate different traffic load of APs, we set different busy
rates for APs. The values of the simulation parameters are
listed in TABLE I. Note that the TTL of handoff preparation
timer is calculated based on (9).
TABLE I
SIMULATION PARAMETERS
Time slot 2 ms
Total simulation time 20000 s
Side length of the simulation area 300 m
Transmission range 150 m
Total number of APs 36
Total number of channels 11
Total number of stations 20
Beamwidth of directional antennas 60◦
Resolution of directional antennas 2◦ [12]
MinChannelTime 1 ms [5]
MaxChannelTime 10 ms [5]
Channel switching time 10 ms [8]
Beam switching time 250 μs [8]
TTL of handoff preparation timer 2-60 s
Random Waypoint: Moving speed of stations 1-5 m/s
Random Waypoint: Moving time of stations 20-200 s
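The Random Waypoint model used in the setup can be sketched as follows (a minimal version with a fixed sampling step; the parameter values in the example follow TABLE I, while the function itself is our own illustration):

```python
import random

def random_waypoint(side, v_min, v_max, pause_min, pause_max,
                    steps, dt=1.0, seed=0):
    """Generate a station trajectory under the Random Waypoint model.

    The station repeatedly picks a uniform random destination in the
    side x side area, moves toward it in a straight line at a uniform
    random speed in [v_min, v_max], then pauses for a random time in
    [pause_min, pause_max]. Returns (x, y) positions sampled every dt s.
    """
    rng = random.Random(seed)
    x, y = rng.uniform(0, side), rng.uniform(0, side)
    positions = [(x, y)]
    pause_left, speed, dest = 0.0, 0.0, (x, y)
    for _ in range(steps):
        if pause_left > 0:
            pause_left -= dt                      # still pausing
        elif (x, y) == dest:
            dest = (rng.uniform(0, side), rng.uniform(0, side))
            speed = rng.uniform(v_min, v_max)     # start a new leg
        else:
            dx, dy = dest[0] - x, dest[1] - y
            dist = (dx * dx + dy * dy) ** 0.5
            step = speed * dt
            if step >= dist:                      # arrive, then pause
                x, y = dest
                pause_left = rng.uniform(pause_min, pause_max)
            else:
                x, y = x + dx / dist * step, y + dy / dist * step
        positions.append((x, y))
    return positions

# e.g. a high-mobility trajectory in the 300 m x 300 m area:
traj = random_waypoint(300, 1, 5, 20, 30, steps=1000)
```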
B. Accuracy of Moving Direction Calculation
First, we evaluate the accuracy of the moving direction
calculated in Section II since this is the prerequisite of our
handoff scheme. We compare the calculated direction and the
actual moving direction of mobile stations. The actual moving
direction is the tangential direction of the trajectory of the sta-
tion at a particular point. The numbers of correct calculations
and wrong calculations are counted, and the percentage of
the correct calculations is used as the evaluation of accuracy.
Our method is tested in both high-mobility networks and low-
mobility networks. The result is shown in TABLE II. Note
that the values of TTL are selected in the range calculated by
(9).
TABLE II
ACCURACY OF MOVING DIRECTION CALCULATION
TTL 2s 4s 8s 20s 40s 60s
High-mobility (%) 99.39 98.95 98.62 98.62 98.62 98.62
Low-mobility (%) 99.05 99.05 99.05 99.05 99.05 99.05
It can be found that the accuracy of the moving direction
calculation is above 98% in both high-mobility networks and
low-mobility networks with TTL in the whole possible range.
For low-mobility networks, the accuracy is higher because the
change of the moving trajectory can be more easily tracked
by the updating of point A.
As up to 2% of the moving direction calculations are inaccurate, we
evaluate its impact on the channel scanning latency. Assume
that channel scanning latency of a station is L when its moving
direction is correctly calculated. The worst case of the channel
scanning latency when the moving direction is incorrect is that
the station needs to search all directions in order to find a new
AP. Under the worst case, the maximum channel scanning
latency of the station is ⌈360◦/β⌉ · L. Therefore, the maximum
additional channel scanning latency is
98% · L + 2% · ⌈360◦/β⌉ · L − L = (0.98 + 0.02 ⌈360◦/β⌉ − 1) L,
where β is the beamwidth of the directional antenna for the
station.
For example, if the beamwidth of the directional antenna
is 60◦, the maximum additional channel scanning latency is
(0.98 + 0.02 · 6 − 1)L = 10%L in the worst case. In practice,
the actual additional latency percentage will be less than 10%.
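The worst-case arithmetic above is easy to check directly (a trivial sketch; the 98%/2% split and β come from the text, the function name is our own):

```python
import math

def extra_latency_fraction(accuracy, beta_deg):
    """Maximum additional scanning latency as a fraction of L.

    With probability `accuracy` the station scans one direction (cost L);
    otherwise, in the worst case, it scans all ceil(360/beta) directions.
    """
    n_dir = math.ceil(360.0 / beta_deg)
    return accuracy + (1.0 - accuracy) * n_dir - 1.0

print(extra_latency_fraction(0.98, 60))  # 0.98 + 0.02*6 - 1 = 0.10 (up to rounding)
```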
C. Channel Scanning Latency
We compare the channel scanning latency under different
busy rate of APs. Busy rate means the probability that an AP
is busy at each time slot. The result is shown in Fig. 8. The full
scan scheme means that a station scans all sectors. We also
show the performance of the online mode mobisteer scheme
proposed in [8].
Fig. 8. Evaluation of channel scanning latency (channel scanning
latency in ms versus AP busy rate; curves: full scan, Mobisteer [8],
and the proposed scheme).
In Fig. 8, the full scan scheme shows a very high channel
scanning latency of more than 450ms. The mobisteer scheme
has a lower handoff latency since the channel switching time is
saved, but the latency is still above 220ms. When the network
is less busy, the latency of the proposed scheme is only 30% of
the mobisteer scheme. When the network is busy, the latency
of the proposed scheme increases quickly because mobile stations
need to search more sectors to find available APs. It is easy to
infer that the latency of the proposed scheme will eventually
reach the same point as the full scan scheme when the AP busy
rate approaches 1. At that time, the mobisteer scheme will
outperform the proposed scheme. Note that the intersection is
not always 0.8 in practical scenarios, because it is affected by
many factors such as the density of APs, the transmission
range and beamwidth of the antennas. Both the full scan
scheme and the mobisteer scheme have a decreasing trend
in the figure. This is because in busy networks, fewer APs are
available in each channel. Thus, a station only needs to wait for
MinChannelTime on one channel instead of MaxChannelTime.
D. Handoff Frequency
In this part, we evaluate the handoff frequency of the pro-
posed scheme. The handoff frequency is computed as the total
number of handoffs triggered during the simulation divided by
the simulation time and the number of stations; it therefore
measures the average number of handoffs triggered by one
station per second. The handoff frequency is evaluated in both
high-mobility mode and low-mobility mode. For high-mobility
networks, the pause time of stations in the Random Waypoint
model is randomly chosen from 20s to 30s. For low-mobility
networks, the pause time is randomly selected from 200s to
300s. Since the total number of handoffs is dependent on the
moving trajectories of stations, we use the same random seed
to generate the moving trajectories of stations in the Random
Waypoint model. The simulation result is shown in Fig. 9.
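For clarity, the handoff-frequency metric defined above can be written as a one-liner (the counts in the example are hypothetical, not taken from the simulation):

```python
def handoff_frequency(total_handoffs, sim_time_s, num_stations):
    """Average number of handoffs triggered by one station per second."""
    return total_handoffs / (sim_time_s * num_stations)

# Hypothetical example: 8000 handoffs over 20000 s with 20 stations.
print(handoff_frequency(8000, 20000, 20))  # 0.02 handoffs per station per second
```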
Fig. 9. Handoff frequency of the proposed scheme (handoff frequency
versus AP busy rate; curves: full scan and proposed, each in
high-mobility and low-mobility modes).
As shown in Fig. 9, for both high-mobility mode and low-
mobility mode, our proposed scheme can reduce the handoff
frequency significantly when the network is less busy. This
is because our scheme selects the new AP with a higher
probability in the moving direction of a station, which can
provide longer connectivity duration for the station than other
APs. If the station selects its new AP purely based on RSS,
it may choose an AP in the direction opposite to its movement;
the station may then soon move out of the coverage of the new AP
and have to trigger another handoff. Therefore, the new AP
selected using our proposed scheme can serve the station for
a longer time on average.
Note that when the busy probability of APs is low, the
handoff frequency of the proposed scheme is much lower than
that of the full scan scheme. This is because when APs are
less busy, the station is more likely to find an idle AP in its
moving direction. On the other hand, if APs are too busy,
it is highly possible that the station cannot find any free AP
in its moving direction though there are APs deployed in this
direction. Thus, the station has to scan the remaining directions
to find a new AP. In this case, the new AP does not have the
advantage of providing a longer serving time for the station.
IV. CONCLUSION
In this paper, we addressed the long handoff latency prob-
lem of networks using directional antennas. First, we proposed an analytical
method to calculate the moving direction of a station at the
time when a handoff is triggered. Based on the moving direc-
tion, we proposed a scheme to find a new AP in the moving
direction of the station instead of all sectors. Simulation results
showed that the accuracy of the calculated moving direction
is above 98%. The proposed scheme can reduce the channel
scanning latency significantly especially when the network
is less busy. In addition, we demonstrated in the simulation
results that the proposed scheme can reduce the handoff
frequency of the stations in the long run, which is beneficial
in saving network resources.
REFERENCES
[1] “IEEE Std 802.11ad: Enhancements for very high throughput in the 60
GHz band,” 2012.
[2] O. Bazan and M. Jaseemuddin, “A survey on MAC protocols for wireless
ad hoc networks with beamforming antennas,” IEEE Communications
Surveys and Tutorials, vol. 14, no. 2, pp. 216–239, May 2012.
[3] Q. Chen, J. Tang, D. Wong, X. Peng, and Y. Zhang, “Directional cooper-
ative MAC protocol design and performance analysis for IEEE 802.11ad
WLANs,” IEEE Transactions on Vehicular Technology, vol. 62, no. 6,
pp. 2667–2677, Jul. 2013.
[4] L. Wang, A. Chen, and S. Huang, “A cross-layer investigation for the
throughput performance of CSMA/CA-based WLANs with directional
antennas and capture effect,” IEEE Transactions on Vehicular Technol-
ogy, vol. 56, no. 5, pp. 2756–2766, Sep. 2007.
[5] A. Mishra, M. Shin, and W. Arbaugh, “An empirical analysis of the
IEEE 802.11 MAC layer handoff process,” ACM SIGCOMM Computer
Communication Review, vol. 33, no. 2, pp. 93–102, Apr. 2003.
[6] J. E. Hill, “Gain of directional antennas,” Watkins-Johnson Company,
Tech-notes, 1976.
[7] C. A. Balanis, Antenna Theory: Analysis and Design. John Wiley &
Sons, 2012.
[8] V. Navda, A. Subramanian, K. Dhanasekaran, A. Timm-Giel, and S. Das,
“MobiSteer: Using steerable beam directional antenna for vehicular
network access,” in Proc. ACM International Conference on Mobile
Systems, Applications and Services (MobiSys), 2007, pp. 192–205.
[9] J. Lee and E. Rhee, “Dynamic beamforming handover mechanism using
neighbor profile in IEEE 802.11 wireless LANs,” International Journal
of Advancements in Computing Technology, vol. 5, no. 11, pp. 40–46,
Jul. 2013.
[10] V. Bhargava and M. Sichitiu, “Physical authentication through localiza-
tion in wireless local area networks,” in Proc. IEEE Global Telecommu-
nications Conference (GLOBECOM), vol. 5, Dec. 2005, pp. 2658–2662.
[11] S. Fang and T. Lin, “A dynamic system approach for radio location
fingerprinting in wireless local area networks,” IEEE Transactions on
Communications, vol. 58, no. 4, pp. 1020–1025, Apr. 2010.
[12] Y. Shikagawai and K. Ichige, "High-resolution and low-cost direction-
of-arrival estimation by 2q-root-MUSIC method," in Proc. IEEE Workshop
on Signal Processing Systems (SiPS), Oct. 2013, pp. 24–29.
/WP-Phonetic
/WPTypographicSymbols
/XYATIP10
/XYBSQL10
/XYBTIP10
/XYCIRC10
/XYCMAT10
/XYCMBT10
/XYDASH10
/XYEUAT10
/XYEUBT10
/ZapfChancery-MediumItalic
/ZapfDingbats
/ZapfHumanist601BT-Bold
/ZapfHumanist601BT-BoldItalic
/ZapfHumanist601BT-Demi
/ZapfHumanist601BT-DemiItalic
/ZapfHumanist601BT-Italic
/ZapfHumanist601BT-Roman
/ZWAdobeF
]
/NeverEmbed [ true
]
/AntiAliasColorImages false
/CropColorImages true
/ColorImageMinResolution 200
/ColorImageMinResolutionPolicy /OK
/DownsampleColorImages true
/ColorImageDownsampleType /Bicubic
/ColorImageResolution 300
/ColorImageDepth -1
/ColorImageMinDownsampleDepth 1
/ColorImageDownsampleThreshold 2.00333
/EncodeColorImages true
/ColorImageFilter /DCTEncode
/AutoFilterColorImages true
/ColorImageAutoFilterStrategy /JPEG
/ColorACSImageDict <<
/QFactor 0.76
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
>>
/ColorImageDict <<
/QFactor 1.30
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
>>
/JPEG2000ColorACSImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 10
>>
/JPEG2000ColorImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 10
>>
/AntiAliasGrayImages false
/CropGrayImages true
/GrayImageMinResolution 200
/GrayImageMinResolutionPolicy /OK
/DownsampleGrayImages true
/GrayImageDownsampleType /Bicubic
/GrayImageResolution 300
/GrayImageDepth -1
/GrayImageMinDownsampleDepth 2
/GrayImageDownsampleThreshold 2.00333
/EncodeGrayImages true
/GrayImageFilter /DCTEncode
/AutoFilterGrayImages true
/GrayImageAutoFilterStrategy /JPEG
/GrayACSImageDict <<
/QFactor 0.76
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
>>
/GrayImageDict <<
/QFactor 1.30
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
>>
/JPEG2000GrayACSImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 10
>>
/JPEG2000GrayImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 10
>>
/AntiAliasMonoImages false
/CropMonoImages true
/MonoImageMinResolution 400
/MonoImageMinResolutionPolicy /OK
/DownsampleMonoImages true
/MonoImageDownsampleType /Bicubic
/MonoImageResolution 600
/MonoImageDepth -1
/MonoImageDownsampleThreshold 1.00167
/EncodeMonoImages true
/MonoImageFilter /CCITTFaxEncode
/MonoImageDict <<
/K -1
>>
/AllowPSXObjects false
/CheckCompliance [
/None
]
/PDFX1aCheck false
/PDFX3Check false
/PDFXCompliantPDFOnly false
/PDFXNoTrimBoxError true
/PDFXTrimBoxToMediaBoxOffset [
0.00000
0.00000
0.00000
0.00000
]
/PDFXSetBleedBoxToMediaBox true
/PDFXBleedBoxToTrimBoxOffset [
0.00000
0.00000
0.00000
0.00000
]
/PDFXOutputIntentProfile (None)
/PDFXOutputConditionIdentifier ()
/PDFXOutputCondition ()
/PDFXRegistryName ()
/PDFXTrapped /False
/CreateJDFFile false
/Description <<
/ARA
/BGR
/CHS
/CHT
/CZE
/DAN
/DEU
/ESP
/ETI
/FRA
/GRE
/HEB
/HRV
/HUN
/ITA
/JPN
/KOR
/LTH
/LVI
/NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken die zijn geoptimaliseerd voor weergave op een beeldscherm, e-mail en internet. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.)
/NOR
/POL
/PTB
/RUM
/RUS
/SKY
/SLV
/SUO
/SVE
/TUR
/UKR
/ENU (Use these settings to create Adobe PDF documents best suited for on-screen display, e-mail, and the Internet. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)
>>
/Namespace [
(Adobe)
(Common)
(1.0)
]
/OtherNamespaces [
<<
/AsReaderSpreads false
/CropImagesToFrames true
/ErrorControl /WarnAndContinue
/FlattenerIgnoreSpreadOverrides false
/IncludeGuidesGrids false
/IncludeNonPrinting false
/IncludeSlug false
/Namespace [
(Adobe)
(InDesign)
(4.0)
]
/OmitPlacedBitmaps false
/OmitPlacedEPS false
/OmitPlacedPDF false
/SimulateOverprint /Legacy
>>
<<
/AddBleedMarks false
/AddColorBars false
/AddCropMarks false
/AddPageInfo false
/AddRegMarks false
/ConvertColors /ConvertToRGB
/DestinationProfileName (sRGB IEC61966-2.1)
/DestinationProfileSelector /UseName
/Downsample16BitImages true
/FlattenerPreset <<
/PresetSelector /MediumResolution
>>
/FormElements false
/GenerateStructure false
/IncludeBookmarks false
/IncludeHyperlinks false
/IncludeInteractive false
/IncludeLayers false
/IncludeProfiles true
/MultimediaHandling /UseObjectSettings
/Namespace [
(Adobe)
(CreativeSuite)
(2.0)
]
/PDFXOutputIntentProfileSelector /NA
/PreserveEditing false
/UntaggedCMYKHandling /UseDocumentProfile
/UntaggedRGBHandling /UseDocumentProfile
/UseDocumentBleed false
>>
]
>> setdistillerparams
<<
/HWResolution [600 600]
/PageSize [612.000 792.000]
>> setpagedevice
A Distributed Broadcast Protocol in Multi-hop Cognitive Radio Ad Hoc
Networks without a Common Control Channel
Yi Song and Jiang Xie
Department of Electrical and Computer Engineering
The University of North Carolina at Charlotte
Email: {ysong13, Linda.Xie}@uncc.edu
Abstract—Broadcast is an important operation in wireless ad
hoc networks where control information is usually propagated
as broadcasts for the realization of most networking protocols.
In traditional ad hoc networks, since the spectrum availability
is uniform, broadcasts are delivered via a common channel
which can be heard by all users in a network. However, in
cognitive radio (CR) ad hoc networks, different unlicensed users
may acquire different available channel sets. This non-uniform
spectrum availability imposes special design challenges for broad-
casting in CR ad hoc networks. In this paper, a fully-distributed
broadcast protocol in multi-hop CR ad hoc networks without a
common control channel is proposed. In our design, we consider
practical scenarios in which each unlicensed user is not assumed to be
aware of the global network topology, the spectrum availability
information of other users, and time synchronization information.
By intelligently downsizing the original available channel set and
designing the broadcasting sequences and scheduling schemes,
our proposed broadcast protocol can provide very high successful
broadcast ratio while achieving the shortest average broadcast
delay. It can also eliminate broadcast collisions. To the best of our
knowledge, this is the first work that addresses the broadcasting
challenges specifically in multi-hop CR ad hoc networks under
practical scenarios.
I. INTRODUCTION
Cognitive radio (CR) technology has been proposed as an
enabling solution to alleviate the spectrum underutilization
problem [1]. With the capability of sensing the frequency
bands in a time and location-varying spectrum environment
and adjusting the operating parameters based on the sens-
ing outcome, CR technology allows an unlicensed user (or,
secondary user) to exploit those frequency bands unused by
licensed users (or, primary users) in an opportunistic manner
[2]. Secondary users (SUs) can form an infrastructure-based
network or an ad hoc network. Recently, CR ad hoc networks
have attracted considerable research attention due to their various
applications [3].
Broadcast is an important operation in ad hoc networks,
especially in distributed multi-hop multi-channel networks.
Control information exchange among neighboring nodes, such
as channel availability and routing information, is crucial for
the realization of most networking protocols in an ad hoc
network. This control information is often sent out as network-
wide broadcasts, messages that are sent to all other nodes
in a network. In addition, some exigent data packets such as
emergency messages and alarm signals are also delivered as
network-wide broadcasts [4]. Due to the importance of the
broadcast operation, in this paper, we address the broadcasting
This work was supported in part by the US National Science Foundation
(NSF) under Grant No. CNS-0855200, CNS-0915599, and CNS-0953644.
issue in multi-hop CR ad hoc networks. Since broadcast
messages often need to be disseminated to all destinations as
quickly as possible, we aim to achieve very high successful
broadcast ratio and the shortest broadcast delay.
The broadcasting issue has been studied extensively in
traditional ad hoc networks [5]–[8]. However, unlike tradi-
tional single-channel or multi-channel ad hoc networks where
the channel availability is uniform, in CR ad hoc networks,
different SUs may acquire different sets of available channels.
This non-uniform channel availability imposes special design
challenges for broadcasting in CR ad hoc networks. First of
all, for traditional single-channel and multi-channel ad hoc
networks, due to the uniformity of channel availability, all
nodes can tune to the same channel. Thus, broadcast messages
can be conveyed through a single common channel which
can be heard by all nodes in a network. However, in CR ad
hoc networks, the availability of a common channel for all
nodes may not exist. More importantly, before any control
information is exchanged, a SU is unaware of the available
channels of its neighboring nodes. Therefore, broadcasting
messages on a global common channel is not feasible in CR
ad hoc networks.
To further illustrate the challenges of broadcasting in CR
ad hoc networks, we consider a single-hop scenario shown
in Fig. 1, where node A is the source node. For traditional
single-channel and multi-channel ad hoc networks, as shown
in Fig. 1(a), nodes can tune to the same channel (e.g., channel
1) for broadcasting. Thus, node A only needs one time slot to
let all its neighboring nodes receive the broadcast message in
an error-free environment. However, in CR ad hoc networks
where the channel availability is heterogeneous and SUs are
unaware of the available channels of each other, as shown
in Fig. 1(b), node A may have to use multiple channels for
broadcasting and may not be able to finish the broadcast within
one time slot. In fact, the exact broadcast delay for all single-
hop neighboring nodes to successfully receive the broadcast
message in CR ad hoc networks relies on various factors (e.g.,
channel availability and the number of neighboring nodes) and
it is random.
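The intersection argument behind Fig. 1(b) can be spelled out in a few lines of Python (a sketch; the channel sets are taken from the figure):

```python
# Available channel sets from Fig. 1(b).
A, B, C = {1, 3}, {1, 2}, {2, 3}

print(A & B)      # {1}: the only channel on which A can reach B
print(A & C)      # {3}: the only channel on which A can reach C
print(A & B & C)  # set(): no single channel reaches both in one slot
```

Since no channel is common to all three nodes, node A needs at least two time slots even in an error-free environment.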
[Figure] Fig. 1. The single-hop broadcast scenario: (a) traditional ad hoc
networks, where nodes A, B, and C all operate on channel 1; (b) CR ad hoc
networks, where A has channels {1, 3}, B has {1, 2}, and C has {2, 3}.
[Page footer: 2012 Proceedings IEEE INFOCOM, 978-1-4673-0775-8/12/$31.00 ©2012 IEEE]
Furthermore, since multiple channels may be used for
broadcasting and the exact time for all single-hop neighboring
nodes to successfully receive the broadcast message is random,
avoiding broadcast collisions (i.e., a node receiving multiple
copies of the broadcast message simultaneously) is much more
complicated in CR ad hoc networks, as compared to traditional
ad hoc networks. In traditional ad hoc networks, numerous
broadcast scheduling schemes are proposed to reduce the
probability of broadcast collision while optimizing the network
performance [9]–[14]. All these proposals are on the basis
that all nodes use a single channel for broadcasting and
the exact delay for a single-hop broadcast is one time slot.
However, in CR ad hoc networks, without the information
about the channel used for broadcasting and the exact delay for
a single-hop broadcast, to predict when and on which channel
a broadcast collision occurs is extremely difficult. Hence, to
design a broadcast protocol which can eliminate broadcast
collisions, as well as provide high successful broadcast ratio
and short broadcast delay is a very challenging issue for multi-
hop CR ad hoc networks under practical scenarios. Simply
extending existing broadcast protocols to CR ad hoc networks
cannot yield the optimal performance.
Currently, research on broadcasting in multi-hop CR ad
hoc networks is still in its infant stage. There are only
limited papers addressing the broadcasting issue in CR ad
hoc networks [15] [16]. However, both papers adopt imprac-
tical assumptions which make them inadequate to be used
in practical scenarios. In [15] and [16], the global network
topology and the available channel information of all SUs
are assumed to be known. Additionally, in [16], a common
signaling channel for the whole network is employed which
is also not practical. Other proposals aiming to locally es-
tablish a common control channel may also be considered for
broadcasting [17]–[19]. However, these proposals need a-priori
channel availability information of all SUs which is usually
obtained via broadcasts. In addition, although some schemes
on channel hopping in CR networks can be used for finding
a common channel between two nodes [20]–[22], they still
suffer various limitations and cannot be used in broadcast
scenarios. In [20] and [21], the proposed channel hopping
schemes cannot guarantee rendezvous under some special
circumstances. In addition, one of the proposed schemes in
[20] only works when two SUs have exactly the same available
channel sets. Moreover, in [22], a jump-stay based channel
hopping algorithm is proposed for guaranteed rendezvous.
However, the expected rendezvous time for the asymmetric
model (i.e., different users have different available channels)
increases exponentially when the total number of channels
increases. Thus, it is unsuitable for broadcast scenarios which
usually require short broadcast delay. Other channel hopping
algorithms explained in [23] require tight time synchronization
which is also not feasible before any control information is
exchanged.
In this paper, a fully-distributed broadcast protocol in a
multi-hop CR ad hoc network is proposed. We consider
practical scenarios in our design: 1) no global and local
common control channel is assumed to exist; 2) the global
network topology is not known; 3) the channel information
of any other SUs is not known; 4) the available channel
sets of different SUs are not assumed to be the same; and
5) tight time synchronization is not required. Our proposed
broadcast protocol can provide very high successful delivery
ratio while achieving the shortest broadcast delay. It can also
eliminate broadcast collisions. To the best of our knowledge,
this is the first work that addresses the broadcasting challenges
specifically in multi-hop CR ad hoc networks under practical
scenarios.
The rest of this paper is organized as follows. In Section
II, the proposed broadcast protocol for multi-hop CR ad hoc
networks is presented. The derivation of an important system
parameter is given in Section III. Simulation results are shown
in Section IV, followed by the conclusions in Section V.
II. THE PROPOSED BROADCAST PROTOCOL
In this section, we introduce the proposed broadcast protocol
for multi-hop CR ad hoc networks. There are three compo-
nents of the protocol: 1) the construction of the broadcasting
sequences; 2) the distributed broadcast scheduling scheme; and
3) the broadcast collision avoidance scheme. We assume that
a time-slotted system is adopted for SUs, where the length of
a time slot is long enough to transmit a broadcast packet. We
also assume that each SU knows the locations of all its 2-hop
neighbors. We claim that this is a more valid assumption than
the knowledge of global network topology. In the rest of the
paper, we use the term “sender” to indicate a SU who has
just received a message and will rebroadcast the message. In
addition, we use the term “receiver” to indicate a SU who has
not received the message. The notations used in our protocol
design are listed in Table I.
TABLE I
NOTATIONS USED IN THE PROTOCOL
N(v)      The set of the neighboring nodes of node v
N(N(v))   The set of the neighbors of the neighboring nodes of node v
d(v, u)   The Euclidean distance between node v and u
rc        The radius of the transmission range of each node
| · |     The number of elements in a set
Lv        The downsized available channel set of node v
w(v)      The size of the downsized available channel set of node v
C         The set of the initial w of intermediate nodes
BSv       The broadcasting sequence for a sender v
RSv       The broadcasting sequence for a receiver v
DSv       The default sequence of a sender v
stv       The starting time slot of a sender v
rtv       The time slot that a receiver v receives the message
Rv        The random number assigned to a receiver v by its sender
A. Construction of the Broadcasting Sequences
The broadcasting sequences are the sequences of chan-
nels by which a sender and its receivers hop for successful
broadcasts. First of all, we consider the single-hop broadcast
scenario. As explained in Section I, due to the non-uniform
channel availability in CR ad hoc networks, a SU sender may
have to use multiple channels for broadcasting in order to
let all its neighboring nodes receive the broadcast message.
Accordingly, the neighboring nodes may also have to listen
to multiple channels in order to receive the broadcast mes-
sage. Hence, the first issue in designing a broadcast protocol is
which channels should be used for broadcasting. One possible
method is to broadcast on all the available channels of the
SU sender. However, this method is quite costly in terms of
broadcast delay when the number of available channels is
large. Therefore, we propose to select a subset of channels
from the original available channel set of each SU. The
available channels of each SU are first ranked based on the
channel indexes. Then, each SU selects the first w channels
from the ranked channel list and forms a downsized available
channel set. The value of w needs to be carefully designed
to ensure that at least one common channel exists between
the downsized available channel sets of the SU sender and
each of its neighboring nodes. The detailed derivation process
to obtain a proper w is given in Section III. Based on the
derivation process, each SU can calculate the value of w of
its own and its 1-hop neighbors before a broadcast starts.
On the other hand, another important issue is the sequences
of the channels by which a sender and its receivers hop for
successful broadcasts. In this paper, we design two broadcast-
ing sequences for a SU sender and its receivers to guarantee
a successful broadcast in the single-hop scenario as long as
they have at least one common channel. The sender hops and
broadcasts a message on each channel in a time slot following
its sequence. On the other hand, the receiver hops and listens
on each channel following its sequence. The pseudo-codes
for constructing the broadcasting sequences are shown in
Algorithm 1 and 2.
Algorithm 1: Construction of the broadcasting sequence
BSv for a SU sender v.
Input: w(v), Lv .
Output: BSv .
randomize the order of elements in Lv ;
BSv ← ∅; /* initialization */
i ← 1;
while i ≤ w(v)² do
BSv(i) ← Lv((i mod w(v)) + 1);
i ← i + 1; /* repeat Lv for w(v) times */
return BSv ;
Algorithm 2: Construction of the broadcasting sequence
RSv for a SU receiver v.
Input: w(v), Lv .
Output: RSv .
randomize the order of elements in Lv ;
RSv ← ∅; /* initialization */
i ← 1;
while i ≤ w(v) do
j ← 1;
while j ≤ w(v) do
RSv((i − 1)w(v) + j) ← Lv(i);
j ← j + 1; /*repeat an element for w(v) times*/
i ← i + 1; /*repeat for every element in Lv */
return RSv ;
Fig. 2 gives an example to illustrate the construction of
the broadcasting sequences for SU senders and receivers. In
Fig. 2, the downsized available channel set of the sender
and the receiver is {1, 2} and {2, 3, 4}, respectively. Based
on Algorithm 1, the broadcasting sequence of the sender is
{2, 1, 2, 1}. Similarly, based on Algorithm 2, the broadcasting
sequence of the receiver is {4, 4, 4, 3, 3, 3, 2, 2, 2}. Since a
sender usually does not know the length of the broadcasting
sequence of the receiver, it broadcasts the message based on
its broadcasting sequence for ⌊M²/w²⌋ + 1 cycles, where M is the
total number of channels. Therefore, as shown in Fig. 2, the
shaded part represents a successful broadcast.
[Figure] Fig. 2. An example of the broadcasting sequences: the sender (Tx)
repeats 2, 1, 2, 1, …, while the receiver (Rx) listens on 4, 4, 4, 3, 3, 3,
2, 2, 2; the overlap on channel 2 is a successful broadcast.
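A minimal Python sketch of Algorithms 1 and 2 (function names are ours; the random ordering of the downsized set is passed in explicitly so the Fig. 2 example is reproducible):

```python
def sender_seq(L):
    """Algorithm 1: hop through L (already randomly ordered) once per
    slot, repeating the whole set for |L| cycles -> length |L|**2."""
    w = len(L)
    return [L[i % w] for i in range(w * w)]

def receiver_seq(L):
    """Algorithm 2: dwell on each channel of L for |L| consecutive
    slots -> length |L|**2."""
    return [c for c in L for _ in range(len(L))]

def rendezvous(bs, rs):
    """First slot in which both sides are on the same channel; the
    sender keeps cycling its (shorter) sequence under the receiver's."""
    for t, c in enumerate(rs):
        if bs[t % len(bs)] == c:
            return t
    return None

# Fig. 2: sender's downsized set ordered as [2, 1], receiver's as [4, 3, 2].
bs = sender_seq([2, 1])       # [2, 1, 2, 1]
rs = receiver_seq([4, 3, 2])  # [4, 4, 4, 3, 3, 3, 2, 2, 2]
print(rendezvous(bs, rs))     # 6, i.e. the 7th slot, on channel 2
```

Because the sender's sequence is periodic with period ws, any wr ≥ ws consecutive slots contain every sender channel, which is exactly the argument behind Theorem 1.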
Since every SU calculates the initial value of w based on
its local information and the derivation process in Section III,
different SUs may obtain different values of w. We further
denote ws and wr as the w used by the sender and the receiver
to construct their broadcasting sequences, respectively. Note
that ws and wr may not necessarily be the same as the initial
w calculated by each SU. They also depend on the initial w of
its neighboring nodes. The following theorem gives an upper-
bound on the single-hop broadcast delay.
Theorem 1. If ws ≤ wr, the single-hop broadcast is a
guaranteed success within wr² time slots as long as the sender
and the receiver have at least one common channel between
their downsized available channel sets.
Proof: Based on Algorithm 1, a SU sender broadcasts on
all the channels in its downsized available channel set in ws
consecutive time slots. Based on Algorithm 2, a SU receiver
listens to every channel in its downsized available channel
set for wr consecutive time slots. If ws ≤ wr, during the
wr consecutive time slots for which the SU receiver stays
on the same channel, every channel of the SU sender must
appear at least once. Thus, as long as the SU sender and the
receiver have at least one common channel, there must exist
a time slot in which the sender and the receiver hop on the same
channel during one cycle of the broadcasting sequence of the
receiver (i.e., wr² slots). Therefore, the broadcast is guaranteed to
be successful.
How, then, should ws and wr be determined? From Theorem 1,
ws ≤ wr is a sufficient condition of a single-hop successful
broadcast. Therefore, in order to satisfy this condition, a
proper wr needs to be selected by any SU who has not
received the broadcast message to ensure the reception of the
broadcast message sent from any potential neighbor. Since
wr depends on ws and a SU receiver usually does not
know which neighboring node is sending until it receives
the broadcast message, it selects the largest initial w of all
its 1-hop neighbors as its wr . That is, for a SU receiver v,
wr(v) = max{w(u)|u ∈ N (v)}. On the other hand, the sender
uses its calculated initial w as ws to broadcast. Therefore, the
ws selected by the actual sender is bound to be smaller than
or equal to this wr . Thus, according to Theorem 1, the single-
hop broadcast is a guaranteed success as long as the sender
and its receiver have at least one common channel between
their downsized available channel sets.
To illustrate the operation discussed above, we consider a
multi-hop scenario shown in Fig. 3. The initial w calculated by
each SU before the broadcast starts based on its local informa-
tion is shown in the brackets. Every node also calculates the
initial w of its 1-hop neighbors. Without loss of generality,
node A is assumed to be the source node. Thus, based on
Theorem 1, the values of wr employed by each receiver can
be obtained. For instance, since node B knows the initial w of
its neighbors (i.e., w(A)=3, w(D)=4, and w(F )=4), it selects
the largest initial w as its own wr (i.e., wr(B) = 4). Similarly,
we have wr(C) = 4, wr(D) = 3, wr(E) = 4, and wr(F ) = 5.
Then, all nodes except node A use their wr to construct the
broadcasting sequences based on Algorithm 2.
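As a worked example, the wr values quoted for Fig. 3 follow from the rule wr(v) = max{w(u) | u ∈ N(v)}. The sketch below uses node B's neighbor list as stated in the text; the remaining neighbor lists are our own inference, chosen to be consistent with the quoted wr values (the figure itself is not reproduced here):

```python
# Initial w of each node in Fig. 3 (shown in brackets in the figure).
w = {'A': 3, 'B': 3, 'C': 5, 'D': 4, 'E': 2, 'F': 4}
# B's neighbors are from the text; the other lists are inferred.
neighbors = {'B': ['A', 'D', 'F'], 'C': ['A', 'F'], 'D': ['B', 'E'],
             'E': ['D', 'F'], 'F': ['B', 'C', 'E']}

# wr(v) = max{w(u) | u in N(v)} for every receiver v.
wr = {v: max(w[u] for u in nbrs) for v, nbrs in neighbors.items()}
print(wr)  # {'B': 4, 'C': 4, 'D': 3, 'E': 4, 'F': 5}
```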
[Figure] Fig. 3. A multi-hop broadcast scenario with source A; the initial w
of each node is shown in brackets: A [3], B [3], C [5], D [4], E [2], F [4].
[Figure] Fig. 5. The broadcast scenario where a broadcast collision may
occur: source A reaches intermediate nodes B [3] and C [3], which both
reach D.
B. The Distributed Broadcast Scheduling Scheme
Algorithm 3: The pseudo-code of the broadcast scheduling
scheme for a SU sender v.
Input: q, N(v), N(N(v)), {w(u) | u ∈ N(q)}.
Output: Decision of rebroadcasting.
C ← {w(v)};
if {k | k ∈ (N(v) − N(v) ∩ N(q))} ≠ ∅ then /* v has at least one receiver */
    foreach k do
        if {u | u ∈ N(q), d(u, k) ≤ rc, u ≠ v} ≠ ∅ then /* there are multiple paths from q → k */
            foreach u do
                C ← {C, w(u)};
            if w(v) = min C and |{e | e = min C}| = 1 then /* v is the only node with the smallest w */
                return TRUE;
            else if w(v) = min C and |{e | e = min C}| > 1 then /* v is one of the multiple nodes with the same smallest w */
                run Algorithm 4;
                return TRUE;
            else
                return FALSE; /* v does not rebroadcast */
        else return TRUE; /* v rebroadcasts the message */
else
    return FALSE;
Next, we consider the multi-hop broadcast scenario. The
goal of the proposed broadcast scheduling scheme is to intel-
ligently select SU nodes for rebroadcasting in order to achieve
the shortest broadcast delay.
First, from the simulation results shown in Fig. 4, we can
observe that the single-hop broadcast delay increases as w
increases. Therefore, in a multi-hop broadcast scenario, if there
are multiple intermediate nodes with the same neighboring
node, the node with the smallest w is selected to rebroadcast. If
there is more than one intermediate node with the smallest w,
a broadcast collision avoidance scheme (which is explained in
detail in Section II-C) is executed before they rebroadcast the
message. The pseudo-code of the proposed scheduling scheme
is shown in Algorithm 3, where node v has just received the
broadcast message from node q and needs to decide whether
to rebroadcast. Node q includes the calculated initial w of
its 1-hop neighbors in the broadcast message. Algorithm 3
indicates that each SU should know the locations of its 1-hop
neighbors (in order to obtain N (v)) and its 2-hop neighbors
(in order to obtain N (q) and d(u, k)). Once a node receives
the message, it executes Algorithm 3 to decide whether it
should rebroadcast or not. If it needs to rebroadcast, it uses
its calculated initial w as ws to construct the broadcasting
sequence based on Algorithm 1. In Fig. 3, the arrows indicate
the resulting message deliveries.
From the above design, it is noted that each SU (either
sending or receiving) follows the same rules and no prior
information about the sender is required. Thus, the proposed
broadcast scheduling scheme is fully distributed. In addition,
since the node with the smallest w is selected for rebroadcast-
ing, the broadcast delay is the shortest.
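The rebroadcast decision of Algorithm 3 can be sketched in Python (names are ours; ties deliberately return True, since all tied nodes rebroadcast after running the collision avoidance scheme of Section II-C):

```python
def should_rebroadcast(v, q, w, N, reachable):
    """Sketch of Algorithm 3: node v, having just heard the message
    from sender q, rebroadcasts iff it has at least one uncovered
    neighbor and no competing intermediate node of q has a smaller w."""
    receivers = set(N[v]) - set(N[q]) - {q}
    if not receivers:
        return False                        # nothing left for v to cover
    competitors = {w[v]}
    for k in receivers:
        for u in N[q]:
            if u != v and reachable(u, k):  # another path q -> u -> k
                competitors.add(w[u])
    return w[v] == min(competitors)

# Fig. 5: source A; intermediates B and C both reach D with w = 3,
# so both rebroadcast and rely on the collision avoidance scheme.
# (D's w is not shown in Fig. 5; 3 is a placeholder of ours.)
w = {'A': 3, 'B': 3, 'C': 3, 'D': 3}
N = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'],
     'C': ['A', 'B', 'D'], 'D': ['B', 'C']}
reach = lambda u, k: k in N[u]
print(should_rebroadcast('B', 'A', w, N, reach))  # True
print(should_rebroadcast('C', 'A', w, N, reach))  # True
```

If C instead had a smaller w (say 2), B's call would return False and only C would rebroadcast.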
[Figure] Fig. 4. The single-hop broadcast delay (in time slots) when
ws = wr = w: the delay grows from roughly 2.5 to 5.5 time slots as w
increases from 3 to 7.
C. The Broadcast Collision Avoidance Scheme
From Algorithm 3, if there are multiple intermediate nodes
between two SUs, only the intermediate node with the smallest
w should rebroadcast. However, if more than one intermediate
node has the same smallest w, a broadcast collision may
occur if these nodes deliver the messages on the same channel
at the same time. For instance, in the example shown in Fig.
5 where node A is the source node, node B and C have the
same w, which may lead to a broadcast collision.
Most broadcast collision avoidance methods in traditional ad
hoc networks assign different time slots to different interme-
diate nodes to avoid simultaneous transmissions. However, as
explained in Section I, these methods cannot be applied to CR
ad hoc networks because the exact time for the intermediate
nodes to receive the broadcast message is random. As a result,
to assign different time slots for different intermediate nodes
is very challenging. In addition, since the intermediate nodes
use multiple channels for broadcasting, the channel on which
the broadcast collision occurs is also unknown. To the best
of our knowledge, no existing collision avoidance scheme can
address these challenges in CR ad hoc networks.
In this paper, we propose a broadcast collision avoidance
scheme for CR ad hoc networks. The main idea is to prohibit
intermediate nodes from rebroadcasting on the same channel
at the same time. The procedure of the proposed broadcast
collision avoidance scheme is summarized as follows:
Step 1 Generating a Default Sequence: When a source node
(e.g., node A in Fig. 5) broadcasts the message, it includes
its own original available channel set in the message. Hence,
if an intermediate node receives the message, it obtains the
original available channel information of its parent node. Then,
the intermediate node uses the first w available channels of
its parent node to generate a default sequence, where w is
its own calculated initial w (which may not be the same as
the initial w of its parent node). If a channel in the default
sequence is not available for this intermediate node, a void
channel is assigned to replace the corresponding channel. For
instance, if node B and C both obtain w = 3 and the original
available channels of node A, B, and C are {1, 2, 3, 4, 5},
{2, 3, 4, 5}, and {1, 3, 4, 6}, respectively, node B and C only
use the first three available channels of node A to generate
their default sequences. Therefore, the default sequence of
node B is {0, 2, 3} and the default sequence of node C is
{1, 0, 3}, where 0 means a void channel. A node does not
send anything on a void channel.
Step 2 Circular Shifting the Default Sequence with a Ran-
dom Number: Apart from the available channel set, the source
node also includes a distinct integer for each intermediate
node v, randomly selected from [1, w(v)]. Then, each interme-
diate node generates a new sequence from its default sequence
using circular shift and the random integer. If we denote the
default sequence as DS and the random integer as R, the
intermediate node performs circular shift on the DS for R
times (there is no difference of right-shift or left-shift). For
instance, if node B and C get 3 and 1 as their random integers,
respectively, the new sequences they generate from left-handed
circular shift are {0, 2, 3} and {0, 3, 1}, respectively.
Step 3 Forming the Broadcasting Sequence: Denote the
starting time slot of the source node’s broadcasting sequence
as st and the time slot when an intermediate node receives the
broadcast message as rt. The source node includes its st in
the broadcast message. Then, the intermediate node performs
circular shift on the new sequence generated from Step 2 for
another (rt − st + 1) times. It repeats that sequence for w(v)
times to form a cycle of its broadcasting sequence.
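Steps 1-3 can be condensed into a short Python sketch (function names are ours); it reproduces the running example for nodes B and C, with the rt values taken from the Fig. 6 example:

```python
def default_sequence(parent_channels, own_channels, w):
    """Step 1: keep the parent's first w original channels, replacing
    any channel the node itself lacks with the void channel 0."""
    return [c if c in own_channels else 0 for c in parent_channels[:w]]

def circshift(seq, n):
    """Left circular shift by n (the shift direction is immaterial)."""
    n %= len(seq)
    return seq[n:] + seq[:n]

def broadcast_sequence(parent_channels, own_channels, w, R, st, rt):
    """Steps 2-3: shift the default sequence by the random integer R,
    shift again by (rt - st + 1), then repeat the result w times."""
    ds = default_sequence(parent_channels, own_channels, w)
    return circshift(circshift(ds, R), rt - st + 1) * w

A = [1, 2, 3, 4, 5]                          # node A's original channels
print(default_sequence(A, [2, 3, 4, 5], 3))  # node B: [0, 2, 3]
print(default_sequence(A, [1, 3, 4, 6], 3))  # node C: [1, 0, 3]
# With R(B) = 3, R(C) = 1, st = 1 and (from Fig. 6) rt(B) = 4, rt(C) = 1:
print(broadcast_sequence(A, [2, 3, 4, 5], 3, 3, 1, 4))
# -> [2, 3, 0, 2, 3, 0, 2, 3, 0]
print(broadcast_sequence(A, [1, 3, 4, 6], 3, 1, 1, 1))
# -> [3, 1, 0, 3, 1, 0, 3, 1, 0]
```

Because all tied intermediate nodes derive their sequences from the same parent channel list but shift by distinct random integers, any two of them occupy different parent channels (or a void) in every slot, which is why collisions cannot occur.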
The pseudo-code of the broadcast collision avoidance
scheme is shown in Algorithm 4, where q is the source
node and Circshift() is the function of circular shift. To
further elaborate the scheme, Fig. 6 shows an example of the
proposed broadcast collision avoidance scheme. Without loss
of generality, the starting time slot of the source node is 1.
Before receiving the broadcast message, node B and node C hop through the channels based on the broadcasting sequences generated by Algorithm 2. Then, node B and node C receive the broadcast message at time slots 4 and 1, respectively. Based on Algorithm 4, and given that the random integers
for node B and C are 3 and 1, respectively, node B forms
the broadcasting sequence as {2, 3, 0, 2, 3, 0, 2, 3, 0} and node
C forms the broadcasting sequence as {3, 1, 0, 3, 1, 0, 3, 1, 0}.
Then, they start rebroadcasting from time slot 5 and 2 using the
broadcasting sequences, respectively. The underlined channels
are those a node hops on if it starts from time slot 1.
Therefore, by constructing the broadcasting sequences from the same channel set (the channel set of the common parent node, node A) but circularly shifting them a different number of times at different nodes, the intermediate nodes are guaranteed not to send on the same channel at the same time. Thus, broadcast collisions can be completely eliminated. A trade-off of the proposed broadcast collision avoidance scheme is that fewer available channels are used for broadcasting, because some void channels may be assigned. However, the benefit gained from eliminating broadcast collisions (e.g., the increase in the successful broadcast ratio) outweighs the loss of a small number of channels. Hence, the only issue left is the derivation of the initial w, which is introduced in Section III.
Algorithm 4: The pseudo-code of the broadcast collision avoidance scheme for SU v.
Input: q, Lq, Lv, stq, rtv, Rv, w(v).
Output: BS'v.
BS'v ← ∅; i ← 1; l ← 1;                 /* initialization */
while i ≤ w(v) do                       /* generating the default sequence */
    j ← 1;
    while j ≤ w(v) do
        if Lv(i) = Lq(j) then DSv(j) ← Lq(j);
        j ← j + 1;
    i ← i + 1;
Tv ← Circshift(DSv, Rv);                /* circular shifting */
while l ≤ w(v)^2 do                     /* forming the broadcasting sequence */
    BS'v(l) ← Tv((l + (rtv − stq) + 1) mod w(v));
    l ← l + 1;
return BS'v;
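Combining the three steps, Algorithm 4 can also be sketched in Python (our own reconstruction: 0-based list indexing replaces the 1-based pseudo-code indexing, and the per-element copy loop is collapsed into a shift-then-repeat form; the function names are ours). On the Fig. 6 example it reproduces the sequences derived in the text:

```python
def circshift(seq, r):
    """Left circular shift of seq by r positions."""
    r %= len(seq)
    return seq[r:] + seq[:r]

def broadcast_sequence(Lq, Lv, st_q, rt_v, R_v, w):
    """Sketch of Algorithm 4: build intermediate node v's collision-free
    broadcasting sequence (0 denotes a void channel)."""
    usable = set(Lv)
    ds = [c if c in usable else 0 for c in Lq[:w]]   # Step 1: default sequence
    t = circshift(ds, R_v)                           # Step 2: random shift
    t = circshift(t, rt_v - st_q + 1)                # Step 3: align to receive slot
    return t * w                                     # repeat w times -> one cycle

# Fig. 6 example: A's channels {1,2,3,4,5}, w = 3, st = 1.
print(broadcast_sequence([1, 2, 3, 4, 5], [2, 3, 4, 5], 1, 4, 3, 3))
# node B -> [2, 3, 0, 2, 3, 0, 2, 3, 0]
print(broadcast_sequence([1, 2, 3, 4, 5], [1, 3, 4, 6], 1, 1, 1, 3))
# node C -> [3, 1, 0, 3, 1, 0, 3, 1, 0]
```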
Fig. 6. An example of the proposed broadcast collision avoidance scheme (node A transmits on channels 1 and 2; nodes B and C hop per their original sequences until they receive the message at slots 4 and 1, respectively, then rebroadcast using the sequences formed by Algorithm 4; time slots 1-13 are shown).
III. THE DERIVATION OF THE VALUE OF w
In this section, we first introduce the network model we consider. Then, based on this model, we present the derivation of the size of the downsized available channel set, w.
A. The Network Model
In this paper, we consider a CR ad hoc network where
N SUs and K primary users (PUs) co-exist in an α×α
area. PUs are evenly distributed within the area. The SUs
opportunistically access M licensed channels. Each SU has a
circular transmission range with a radius of rc. The SUs within
the transmission range are considered as the neighboring nodes
of the corresponding SU. That is, only when a SU receiver is
within the transmission range of a SU transmitter, the signal-
to-noise ratio (SNR) at the SU receiver is considered to be
acceptable for reliable communications. In addition, apart from
the broadcast collision, other factors may also contribute to
the packet error (e.g., channel quality, modulation schemes,
and coding rate). However, in this paper, we only consider the broadcast collision as the cause of packet error. We claim that this is a valid assumption in most broadcast scenarios [5]–[11], [13]–[16].
Each SU also has a circular sensing range with a radius of
rs. That is, if a PU is currently active within the sensing range
of a SU, the corresponding SU is able to detect its appearance.
Since different SUs have different local sensing ranges which
include different PUs, their acquired available channels may
be different. In addition, because the available channels of a SU are obtained from the sensing outcome within its sensing range, a SU is not allowed to communicate with SUs outside its sensing range, since it might mistakenly use a channel occupied by a PU and thus cause interference to that PU. Therefore, in this paper, we assume that rc ≤ rs.
In this paper, we model the PU channel activity as an
ON/OFF process, where the length of the ON period is the
length of a PU packet. The length of the ON period and the
OFF period can follow arbitrary distributions. We assume that
each PU randomly selects a channel from the spectrum band
to transmit one packet which consists of multiple time slots.
Moreover, because PUs at different locations can claim any
channels for communications, the packets on the same channel
do not necessarily belong to the same PU. This is a more
practical scenario, as compared to some papers which assume
that each channel is associated with a different PU. Under
such a practical scenario, only those PUs that are within the
sensing range of a SU and are active during the broadcast
process contribute to the unavailable channels of the SU [24].
B. The Derivation of the Value of w
As explained in Section II, the value of w is essential to
ensure a successful single-hop broadcast. Denote the prob-
ability of a successful single-hop broadcast as Psucc(w),
where Psucc(w) is a function of w. Our goal is to obtain an
appropriate w that satisfies the condition: Psucc(w) ≥ 1 − δ,
where δ is a small pre-defined value. From Theorem 1, the
condition that at least one common channel exists between the
downsized available channel sets of a SU pair is a necessary
condition for a successful single-hop broadcast. Therefore, if
we denote the source SU of a single-hop broadcast as S0 and
the neighbors of S0 as {S1, S2, · · · , SH}, where H is the
number of neighbors, Psucc(w) is equal to the probability that
there is at least one common channel between S0 and each of
its neighbors in their downsized available channel sets.
Fig. 7. The single-hop broadcast scenario: (a) the single-pair scenario, with the sensing ranges of S0 and Si divided into areas A1, A2, and A3; (b) the multi-pair scenario, with S0 and its neighbors Si and Sj.
1) The single-pair scenario: we first calculate the prob-
ability that there is at least one common channel between
the downsized available channel sets of S0 and one of its
neighbors Si. The relative locations of the two SUs and their
sensing ranges are shown in Fig. 7(a). As shown in Fig. 7(a),
sensing ranges are divided into three areas: A1, A2, and A3.
Note that PUs in different areas have different impact on the
channel availability of the two SUs. For instance, if a PU is
active within A3, the channel used by this PU is unavailable for
both SUs. However, if a PU is active within A1, the channel
used by this PU is only unavailable for S0. Thus, we first
calculate the probability that a channel is available within each area, denoted Pk, k ∈ {1, 2, 3}. The size of the total network area is
denoted as AL (i.e., AL = α2). Since the locations of PUs are
evenly distributed, the probability that p PUs are within Ak is
$\Pr(p) = \binom{K}{p}\left(\frac{A_k}{A_L}\right)^{p}\left(\frac{A_L-A_k}{A_L}\right)^{K-p}$, (1)
where $\binom{K}{p}$ represents the number of combinations of K items taken p at a time. In addition, we define the probability that a PU is active, ρ, as:

$\rho = \dfrac{E[\text{ON duration}]}{E[\text{ON duration}] + E[\text{OFF duration}]}$, (2)
where E[·] represents the expectation of the random variable.
Therefore, given that there are p PUs within Ak, the probability that there are b active PUs is

$\Pr(b \mid p) = \binom{p}{b}\rho^{b}(1-\rho)^{p-b}$. (3)
Furthermore, given that there are p PUs and b active PUs
within Ak, the probability that there are c available channels is
denoted as Pr(c|p, b). Since the number of available channels
is only related to the number of active PUs, c is independent of
p. In addition, since an active PU randomly selects a channel
from M channels in the band, Pr(c|p, b) is equivalent to the
probability that there are exactly c empty boxes given that b
balls are randomly put into a total of M boxes and a box can
have more than one ball (because we do not limit a channel
to only one PU). Thus, Pr(c|p, b) can be expressed as:
$\Pr(c \mid p, b) = \dfrac{\binom{M}{c}\binom{b-1}{b-M+c}}{\binom{b+M-1}{b}}, \quad c \in [\max(0, M-b), M]$. (4)
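As a sanity check on (4), the distribution can be evaluated numerically; under the stated model it must sum to 1 over the valid range of c. The sketch below is our own, with the b = 0 corner case (no active PU, so all M channels are idle) handled explicitly because the binomial coefficient is undefined there:

```python
from math import comb

def pr_c_given_b(c, b, M):
    """Eq. (4): probability of exactly c idle channels when b active PUs
    each occupy one of M channels (a channel may host several PUs)."""
    if b == 0:                      # no active PU: all M channels are idle
        return 1.0 if c == M else 0.0
    return comb(M, c) * comb(b - 1, b - M + c) / comb(b + M - 1, b)

M, b = 20, 7
total = sum(pr_c_given_b(c, b, M) for c in range(max(0, M - b), M + 1))
print(round(total, 10))  # -> 1.0
```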
Hence, the probability that there are c available channels and
there are p PUs and b active PUs within Ak is the product
of (1), (3), and (4). Then, the probability that a channel is
available within Ak is obtained from (5).
Next, we consider the relationship between the downsized
available channel sets of the two SUs. In our derivation, we
only consider the scenario where the sender and its receiver
have the same w (i.e., ws=wr ). If wr > ws, the channels after
the first ws channels do not affect the number of common
channels. Thus, the derivation process is the same. Fig. 8
shows an example of the channel availability status of two
SUs when w(S0) = 3, where a shaded square indicates an
idle channel and a white square indicates a busy channel. A
square with a cross means that a channel can be either idle or
busy. Since each SU only selects the first w available channels
to form a downsized available channel set, the availability
status of the channels after the first w available channels is not
specified. Then, without loss of generality, we denote t and h as the indices of the last available channels in the downsized
$P_k = \dfrac{1}{M}\sum_{p=0}^{K}\sum_{b=0}^{p}\sum_{c=\max(0,M-b)}^{M} c\,\dfrac{\binom{M}{c}\binom{b-1}{b-M+c}}{\binom{b+M-1}{b}}\binom{p}{b}\rho^{b}(1-\rho)^{p-b}\binom{K}{p}\left(\dfrac{A_k}{A_L}\right)^{p}\left(\dfrac{A_L-A_k}{A_L}\right)^{K-p}$. (5)
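Putting (1), (3), and (4) together, (5) can be evaluated with a direct triple sum. The sketch below is our own illustration; K = 40, M = 20, and ρ = 0.9 follow the paper's later setup, while the area value Ak = 4.0 is an arbitrary example, not taken from the paper:

```python
from math import comb

def p_channel_available(K, M, rho, Ak, AL):
    """Eq. (5): probability that a given channel is available within an
    area Ak, combining eqs. (1), (3), and (4)."""
    def pr_c_given_b(c, b):
        if b == 0:
            return 1.0 if c == M else 0.0
        return comb(M, c) * comb(b - 1, b - M + c) / comb(b + M - 1, b)

    total = 0.0
    for p in range(K + 1):                         # eq. (1): p PUs in Ak
        pr_p = comb(K, p) * (Ak / AL) ** p * ((AL - Ak) / AL) ** (K - p)
        for b in range(p + 1):                     # eq. (3): b of them active
            pr_b = comb(p, b) * rho ** b * (1 - rho) ** (p - b)
            for c in range(max(0, M - b), M + 1):  # eq. (4): c idle channels
                total += (c / M) * pr_c_given_b(c, b) * pr_b * pr_p
    return total

pk = p_channel_available(K=40, M=20, rho=0.9, Ak=4.0, AL=100.0)
print(0.0 <= pk <= 1.0)  # -> True
```

With ρ = 0 (no PU ever active) the function returns exactly 1, a useful consistency check.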
$P''_1(h) = P_{C1}^{z}P_{C3}^{w-z}\sum_{t=w}^{h-1}\sum_{x=\max(0,w+t-h)}^{t-w}\binom{t-1}{w-1}\binom{w}{z}\binom{t-w}{x-z}\binom{h-t-1}{w-x-1}P_{C4}^{x-z}P_{C2}^{t-w-x+z}(P_{C1}+P_{C4})^{w-x}(P_{C2}+P_{C3})^{h-t-w+x}$. (7)
$P''(h) = P_{C1}^{z}P_{C3}^{w-z}\sum_{t=w}^{h-1}\sum_{x=\max(0,w+t-h)}^{t-w}\binom{t-1}{w-1}\binom{w}{z}\binom{t-w}{x-z}\binom{h-t-1}{w-x-1}P_{C4}^{x-z}P_{C2}^{t-w-x+z}(P_{C1}+P_{C4})^{w-x}(P_{C2}+P_{C3})^{h-t-w+x} + P_{C1}^{z}P_{C4}^{w-z}\sum_{t=w}^{h-1}\sum_{x=\max(0,w+t-h)}^{t-w}\binom{t-1}{w-1}\binom{w}{z}\binom{t-w}{x-z}\binom{h-t-1}{w-x-1}P_{C3}^{x-z}P_{C2}^{t-w-x+z}(P_{C1}+P_{C3})^{w-x}(P_{C2}+P_{C4})^{h-t-w+x}$. (8)
available channel sets of S0 and Si, respectively. We first
assume that t ≤ h. Hence, from channel 1 to t, there are four
possible scenarios of every channel in terms of its availability
for the two SUs. They are: 1) the channel is available for both
SUs (denoted as C1); 2) the channel is unavailable for both
SUs (denoted as C2); 3) the channel is only available for S0
(denoted as C3); and 4) the channel is only available for Si
(denoted as C4). In addition, from channel t+1 to h (if t < h),
there are two possible scenarios: 1) the channel is available
for Si but it can be any status for S0 (denoted as C5) and 2)
the channel is unavailable for Si but it can be any status for
S0 (denoted as C6). Based on Fig. 7(a), the probabilities of
the above six scenarios can be obtained: 1) PC1 = P1P2P3; 2)
PC2 = (1−P3)+(1−P1)(1−P2)P3; 3) PC3 = P1P3(1−P2);
4) PC4 = (1 − P1)P2P3; 5) PC5 = PC1 + PC4; and 6)
PC6 = PC2 + PC3.
Fig. 8. An example of the channel availability status when w(S0) = 3 (channels 1 to M for S0 and Si; t and h mark the last channels in their downsized available channel sets).
Denote Z(0, i) as the number of common channels between
S0 and Si in their downsized available channel sets. In
order to obtain Pr(Z(0, i) = z), we need to consider all the
combinations of the channel status for every channel from
channel 1 to h. There are two possible cases: 1) t = h and
2) t < h. For the first case, channel h is a common channel
between the two SUs. In addition, from channel 1 to channel
h−1, there must be z−1 channels in scenario C1, h−2w+z channels in C2, and w−z channels each in C3 and C4. Since t = h, no channel is in scenario C5 or C6. Thus, the probability that there are z (z > 0) common channels in the first case is
$P'(h) = \binom{h-1}{z-1}\binom{h-z}{w-z}\binom{h-w}{w-z}P_{C1}^{z}P_{C2}^{h-2w+z}P_{C3}^{w-z}P_{C4}^{w-z}$. (6)
For the second case, since t < h, the common available channels can only be between channel 1 and t. We denote the number of available channels for Si from channel 1 to t as x. Thus, from channel 1 to t, similar to the first case, there are z channels in C1, t−w−x+z channels in C2, w−z channels in C3, and x−z channels in C4. In addition, from channel t+1 to h, there are w−x channels in C5 and h−t−w+x channels in C6. Therefore, the probability that there are in total z common
channels is obtained from (7). If we switch S0 and Si in Fig.
8, we can obtain the probability for the dual case. Hence, the
probability that there are z common channels in the second
case is expressed in (8).
Therefore, the probability that there are z common channels
for the first w available channels for each SU is
$\Pr(Z(0,i) = z) = \sum_{h=2w-z}^{M}\left[P'(h) + P''(h)\right]$. (9)
Thus, the probability of a successful single-hop broadcast from
S0 to Si is
Psucc(w) = 1 − Pr(Z(0, i) = 0). (10)
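Because (6)-(9) involve heavy combinatorics, a Monte Carlo cross-check of Psucc(w) is convenient. The sketch below is our own illustration: it assumes each channel's joint status is drawn independently from the scenario probabilities PC1-PC4 defined above (the numeric values used are arbitrary, not the paper's):

```python
import random

def psucc_mc(pc1, pc2, pc3, pc4, M, w, trials=20000, seed=1):
    """Monte Carlo estimate of Psucc(w): draw each channel's joint status
    (C1..C4), keep the first w available channels of each SU, and test
    whether the downsized sets share at least one channel."""
    assert abs(pc1 + pc2 + pc3 + pc4 - 1.0) < 1e-9
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s0, si = [], []
        for ch in range(1, M + 1):
            u = rng.random()
            if u < pc1:                       # C1: available for both
                s0.append(ch); si.append(ch)
            elif u < pc1 + pc3:               # C3: only for S0
                s0.append(ch)
            elif u < pc1 + pc3 + pc4:         # C4: only for Si
                si.append(ch)
            # else C2: available for neither
        if set(s0[:w]) & set(si[:w]):
            hits += 1
    return hits / trials

est = psucc_mc(pc1=0.5, pc2=0.1, pc3=0.2, pc4=0.2, M=20, w=3)
print(0.0 <= est <= 1.0)  # -> True
```

With a fixed seed the estimate is monotone in w, matching the intuition that a larger downsized set can only help.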
Fig. 9 shows the analytical and simulation results of
Psucc(w) in the single-pair scenario under various w and
different M. To obtain these results, we set the number of PUs K = 40 and the probability that a PU is active ρ = 0.9. In addition, the side length of the network area is α = 10 (unit length), and the two neighboring SUs are at the border of each other's sensing range, with rs = 2 (unit length). As shown in Fig. 9, the simulation results match the analytical results extremely well.
Fig. 9. Analytical and simulation results of Psucc(w) in the single-pair scenario under various w (1 ≤ w ≤ 7) and different M (M = 15 and M = 20, analytical vs. simulation).
2) The multi-pair scenario: we extend the above results to a
multi-pair scenario, as shown in Fig. 7(b) where Si and Sj are
two neighbors of S0. By the inclusion-exclusion principle, the probability of a successful broadcast in the multi-pair scenario shown in Fig. 7(b) is
Psucc(w) = 1 − Pr(Z(0, i) = 0) − Pr(Z(0, j) = 0) + Pr(Z(0, i, j) = 0), (11)
where Pr(Z(0, i, j) = 0) is the probability that neither Si nor Sj has any common channel with S0 in their downsized available channel sets. Since the other two terms in (11) (i.e.,
Pr(Z(0, i) = 0) and Pr(Z(0, j) = 0)) can be obtained from
(9), we only need to calculate Pr(Z(0, i, j) = 0).
To calculate Pr(Z(0, i, j) = 0), we use the same idea
from the single-pair scenario. That is, we consider Si and
Sj together as one new neighboring node. The sensing range
of the new neighboring node is the union of the sensing
ranges of the two original nodes (i.e., the shaded area in
Fig. 7(b)). Therefore, we can obtain new P1, P2, and P3 for
the multi-pair scenario based on the new size of the sensing
range. Moreover, the probabilities of every scenario of the
channel status can also be obtained accordingly. Therefore,
by using (6)-(9), we can calculate Pr(Z(0, i, j) = 0). Then,
given the locations of the H neighbors, each SU can obtain the probability of a successful single-hop broadcast by performing the same procedure iteratively H times. Finally, by letting Psucc(w) ≥ 1 − δ, a proper w can be acquired for S0.
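The inclusion-exclusion step in (11) is mechanical; a minimal sketch with illustrative (made-up) probability values:

```python
def psucc_multipair(p_i0, p_j0, p_ij0):
    """Eq. (11): success probability with two neighbors, where p_i0 and
    p_j0 are Pr(Z(0,i)=0) and Pr(Z(0,j)=0), and p_ij0 is the joint
    probability that neither neighbor shares a channel with S0."""
    return 1.0 - p_i0 - p_j0 + p_ij0

print(round(psucc_multipair(0.02, 0.03, 0.001), 3))  # -> 0.951
```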
IV. PERFORMANCE EVALUATION
In this section, we evaluate the performance of the proposed
broadcast protocol. We consider two types of PU traffic models
in the simulation. The first PU traffic model is discrete-time,
where the PU packet inter-arrival time follows the biased-
geometric distribution [25]. The second PU traffic model
is continuous-time, where the PU packet inter-arrival time
follows the Pareto distribution [25]. We assume that the proba-
bility that a PU is active is fixed (i.e., ρ = 0.9). In addition, the
side length of the network area α=10 (unit length). We assume
that the radius of the sensing range and the transmission
range are the same (i.e., rs = rc = 2 (unit length)). In this
paper, we mainly investigate the following two performance
metrics: 1) successful broadcast ratio: the probability that
all nodes in a network successfully receive the broadcast
message and 2) average broadcast delay: the average duration
from the moment a broadcast starts to the moment the last
node receives the broadcast message. In addition, we compare
our proposed broadcast protocol with four other schemes: 1)
Random+Flooding: each SU randomly selects a channel to
hop and uses flooding (i.e., a SU is obligated to rebroadcast
once receiving the message); 2) Sequence+Flooding (1/3 of
our design): each SU downsizes its available channel set and
constructs broadcasting sequences based on our scheme and
uses flooding; 3) Sequence+Schedule (2/3 of our design):
each SU constructs broadcasting sequences based on our
scheme and uses our broadcast scheduling scheme; and 4)
JS+Flooding: each SU uses the jump-stay scheme [22] to
construct the broadcasting sequences and uses flooding.
A. Successful Broadcast Ratio
Since the single-hop successful broadcast ratio depends on w, which is related to the pre-defined value δ, we set δ = 0.001 (in fact, δ can be an arbitrarily small value). Thus,
based on Section III, each SU calculates a proper w before
the broadcast starts in our scheme, the Sequence+Flooding
scheme, and the Sequence+Schedule scheme. Tables II and III show the simulation results of the successful broadcast ratio under different numbers of SUs and PUs, where the value in the upper cell is for discrete-time PU traffic and the lower cell is for continuous-time PU traffic. In Table II, M = 20 and K = 40. In Table III, M = 20 and N = 20. As shown in Tables II
and III, the successful broadcast ratio is higher than 99% under
our proposed broadcast protocol in all scenarios. In addition,
the proposed broadcast protocol outperforms other schemes in
terms of higher successful broadcast ratio. Since the jump-stay
scheme requires that the i-th available channel in the available
channel set is also channel i, it cannot utilize the technique
in our scheme to downsize the original available channel set.
In addition, the jump-stay scheme can guarantee rendezvous within 6MP(P − G) time slots, where P is the smallest prime number larger than M and G is the number of common channels between two SUs. Thus, in order to ensure a successful broadcast, each SU broadcasts the message for 6MP(P − G) slots. However, 6MP(P − G) is usually a very large number when M is large. Hence, to better illustrate the trade-off between the successful broadcast ratio and the broadcast delay, we compare our scheme with JS+Flooding in Section IV-B.
TABLE II
SUCCESSFUL BROADCAST RATIO UNDER DIFFERENT NUMBER OF SUS
(upper value: discrete-time PU traffic; lower value: continuous-time PU traffic)

                     N=5      N=10     N=15     N=20     N=25
Random+Flooding      0.8801   0.8180   0.8100   0.8726   0.8821
                     0.8630   0.9148   0.9075   0.8698   0.8708
Sequence+Flooding    0.9849   0.9839   0.9828   0.9823   0.9863
                     0.9762   0.9769   0.9777   0.9773   0.9719
Sequence+Schedule    0.9859   0.9864   0.9823   0.9857   0.9855
                     0.9812   0.9845   0.9849   0.9876   0.9861
Proposed Scheme      0.9991   0.9973   0.9969   0.9982   0.9909
                     0.9994   0.9959   0.9954   0.9967   0.9951
TABLE III
SUCCESSFUL BROADCAST RATIO UNDER DIFFERENT NUMBER OF PUS
(upper value: discrete-time PU traffic; lower value: continuous-time PU traffic)

                     K=20     K=30     K=40     K=50     K=60
Random+Flooding      0.8189   0.8326   0.8842   0.9208   0.8907
                     0.7980   0.8738   0.9191   0.9139   0.8849
Sequence+Flooding    0.9866   0.9863   0.9823   0.9819   0.9871
                     0.9742   0.9765   0.9773   0.9711   0.9797
Sequence+Schedule    0.9868   0.9872   0.9857   0.9881   0.9872
                     0.9874   0.9885   0.9876   0.9833   0.9850
Proposed Scheme      0.9978   0.9976   0.9982   0.9951   0.9921
                     0.9946   0.9941   0.9967   0.9977   0.9969
B. Average Broadcast Delay
Tables IV and V show the simulation results of the average broadcast delay under different numbers of SUs and PUs. As with the successful broadcast ratio, in Table IV, M = 20 and K = 40; in Table V, M = 20 and N = 20. As shown in Tables IV and V, the proposed broadcast protocol outperforms the other schemes in terms of shorter average broadcast delay.
Furthermore, Figs. 10 and 11 show the successful broadcast ratio and the average broadcast delay, respectively, under different numbers of channels when N = 10 and K = 40. As explained in Section IV-A, besides our proposed scheme, we also compare JS+Flooding and our scheme without downsizing the available channel set (i.e., w = M). The results show that even though the successful broadcast ratio is similar, the broadcast delay under JS+Flooding is much longer than under our proposed scheme.
To sum up, our proposed broadcast protocol outperforms Random+Flooding in terms of both higher successful broadcast ratio and shorter broadcast delay, and it outperforms JS+Flooding in terms of shorter broadcast delay. In addition, even with the trade-off in our broadcast collision avoidance scheme explained in Section II-C and the limited overhead, our proposed scheme and the schemes that use a part of our design (e.g., Sequence+Flooding) can still achieve better performance than Random+Flooding for both metrics and than JS+Flooding for broadcast delay.
TABLE IV
AVERAGE BROADCAST DELAY UNDER DIFFERENT NUMBER OF SUS
(upper value: discrete-time PU traffic; lower value: continuous-time PU traffic)

Delay (unit: slots)  N=5      N=10     N=15     N=20     N=25
Random+Flooding      19.781   26.483   28.003   29.252   31.203
                     20.981   23.765   27.686   33.153   32.883
Sequence+Flooding    8.458    11.168   12.744   14.243   15.909
                     7.712    11.799   12.903   14.534   17.257
Sequence+Schedule    7.811    10.995   13.324   13.896   15.823
                     7.155    11.457   13.553   14.551   15.078
Proposed Scheme      7.066    10.532   12.259   13.353   15.198
                     6.545    11.097   12.786   13.639   14.801
TABLE V
AVERAGE BROADCAST DELAY UNDER DIFFERENT NUMBER OF PUS
(upper value: discrete-time PU traffic; lower value: continuous-time PU traffic)

Delay (unit: slots)  K=20     K=30     K=40     K=50     K=60
Random+Flooding      29.189   31.459   25.737   25.361   24.243
                     34.547   30.629   27.617   28.424   26.399
Sequence+Flooding    13.918   14.886   14.243   14.649   14.259
                     14.413   13.958   14.534   14.867   14.389
Sequence+Schedule    12.747   14.206   13.896   14.361   14.014
                     13.652   14.086   14.551   14.521   14.237
Proposed Scheme      12.322   13.555   13.352   14.279   13.597
                     13.249   13.401   13.639   13.335   13.471
Fig. 10. Successful broadcast ratio under different number of channels (schemes compared: JS+Flooding, Proposed, and Proposed with w = M).
Fig. 11. Average broadcast delay under different number of channels (schemes compared: JS+Flooding, Proposed, and Proposed with w = M).
V. CONCLUSION
In this paper, the broadcast challenges specific to multi-hop CR ad hoc networks under practical scenarios have been addressed for the first time. A fully-distributed broadcast protocol that does not rely on a global or local common control channel is proposed. By intelligently downsizing the original available channel set and designing the broadcasting sequences and broadcast scheduling schemes, our proposed broadcast protocol provides a very high successful broadcast ratio while achieving short broadcast delay. In addition, it can also eliminate broadcast collisions. Simulation results show that our proposed broadcast protocol outperforms other possible broadcast schemes in terms of both higher successful broadcast ratio and shorter average broadcast delay.
REFERENCES
[1] J. Mitola, “Cognitive radio: an integrated agent architecture for software
defined radio,” Ph.D. dissertation, KTH Royal Institute of Technology,
2000.
[2] I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, “NeXt
generation/dynamic spectrum access/cognitive radio wireless networks:
A survey,” Computer Networks (Elsevier), vol. 50, 2006.
[3] I. F. Akyildiz, W.-Y. Lee, and K. R. Chowdhury, “CRAHNs: cognitive
radio ad hoc networks,” Ad Hoc Networks, vol. 7, no. 5, 2009.
[4] G. Resta, P. Santi, and J. Simon, “Analysis of multi-hop emergency
message propagation in vehicular ad hoc networks,” in Proc. ACM
MobiHoc, 2007, pp. 140–149.
[5] I. Chlamtac and S. Kutten, “On broadcasting in radio networks-problem
analysis and protocol design,” IEEE Transactions on Communications,
vol. 33, no. 12, pp. 1240–1246, Dec. 1985.
[6] S.-Y. Ni, Y.-C. Tseng, Y.-S. Chen, and J.-P. Sheu, “The broadcast storm
problem in a mobile ad hoc network,” in Proc. ACM MobiCom, 1999,
pp. 151–162.
[7] J. Wu and F. Dai, “Broadcasting in ad hoc networks based on self-
pruning,” in Proc. IEEE INFOCOM, 2003, pp. 2240–2250.
[8] J. Qadir, A. Misra, and C. T. Chou, “Minimum latency broadcasting
in multi-radio multi-channel multi-rate wireless meshes,” in Proc. IEEE
SECON, vol. 1, 2006, pp. 80–89.
[9] A. Qayyum, L. Viennot, and A. Laouiti, “Multipoint relaying for
flooding broadcast messages in mobile wireless networks,” in Proc.
HICSS, 2002, pp. 3866–3875.
[10] Z. Haas, J. Halpern, and L. Li, “Gossip-based ad hoc routing,”
IEEE/ACM Trans. Networking, vol. 14, no. 3, pp. 479–491, June 2006.
[11] Z. Chen, C. Qiao, J. Xu, and T. Lee, “A constant approximation
algorithm for interference aware broadcast in wireless networks,” in
Proc. IEEE INFOCOM, 2007, pp. 740–748.
[12] S. Huang, P.-J. Wan, X. Jia, H. Du, and W. Shang, “Broadcast scheduling
in interference environment,” IEEE Transactions on Mobile Computing,
vol. 7, no. 11, pp. 1338–1348, 2008.
[13] R. Mahjourian, F. Chen, R. Tiwari, M. Thai, H. Zhai, and Y. Fang,
“An approximation algorithm for conflict-aware broadcast scheduling in
wireless ad hoc networks,” in Proc. ACM MobiHoc, 2008, pp. 331–340.
[14] R. Gandhi, A. Mishra, and S. Parthasarathy, “Minimizing broadcast
latency and redundancy in ad hoc networks,” IEEE/ACM Transactions
on Networking, vol. 16, no. 4, pp. 840–851, 2008.
[15] Y. Kondareddy and P. Agrawal, “Selective broadcasting in multi-hop
cognitive radio networks,” in Proc. IEEE Sarnoff Symposium, 2008.
[16] C. J. L. Arachchige, S. Venkatesan, R. Chandrasekaran, and N. Mittal,
“Minimal time broadcasting in cognitive radio networks,” in Proc.
ICDCN, 2011, pp. 364–375.
[17] L. Lazos, S. Liu, and M. Krunz, “Spectrum opportunity-based control
channel assignment in cognitive radio networks,” in Proc. IEEE SECON,
2009, pp. 1–9.
[18] J. Zhao, H. Zheng, and G. Yang, “Spectrum sharing through distributed
coordination in dynamic spectrum access networks,” Wireless Communi-
cations and Mobile Computing, vol. 7, pp. 1061–1075, November 2007.
[19] K. Bian, J.-M. Park, and R. Chen, “Control channel establishment in
cognitive radio networks using channel hopping,” IEEE JSAC, vol. 29,
no. 4, pp. 689–703, April 2011.
[20] N. Theis, R. Thomas, and L. DaSilva, “Rendezvous for cognitive radios,”
IEEE Trans. Mobile Computing, vol. 10, no. 2, pp. 216–227, 2010.
[21] C. Cormio and K. R. Chowdhury, “Common control channel design
for cognitive radio wireless ad hoc networks using adaptive frequency
hopping,” Ad Hoc Networks, vol. 8, no. 4, pp. 430–438, 2010.
[22] Z. Lin, H. Liu, X. Chu, and Y.-W. Leung, “Jump-stay based channel
hopping algorithm with guaranteed rendezvous for cognitive radio
networks,” in Proc. IEEE INFOCOM, 2011.
[23] J. Mo, H.-S. W. So, and J. Walrand, “Comparison of multichannel MAC
protocols,” IEEE Trans. on Mobile Computing, vol. 7, no. 1, 2008.
[24] Y. Song and J. Xie, “A QoS-based broadcast protocol for multi-hop
cognitive radio ad hoc networks under blind information,” in Proc. IEEE
GLOBECOM, 2011.
[25] ——, “ProSpect: A proactive spectrum handoff framework for cognitive
radio ad hoc networks without common control channel,” to appear in
IEEE Trans. Mobile Computing, 2012.
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. XX, NO. Y, MONTH 2013 1
OPS: Opportunistic Pipeline Scheduling in
Long-strip Wireless Sensor Networks with
Unreliable Links
Peng Guo, Kui Zhang and Nirvana Meratnia
Abstract —Being deployed in narrow but quite long areas, strip wireless sensor networks (SWSNs) have drawn much attention in applications such as coal mine, pipeline, and structure monitoring. One of the unique characteristics of SWSNs is the large hop count, which leads to long end-to-end delivery delays in low-duty-cycle SWSNs. To reduce the delay, pipeline scheduling is a promising technique, which assigns sensor nodes sequential active time slots along the data forwarding path. However, pipeline scheduling is prone to failure when communication links are unreliable. In this paper, we propose an opportunistic pipeline scheduling algorithm (OPS) for SWSNs, based on the observation that sensor nodes in SWSNs can overhear data transmissions passing by them. OPS exploits nodes outside the data forwarding path to opportunistically provide links when a transmission failure happens, and hence maintains the pipeline forwarding instead of retransmitting in the next duty cycle. Theoretical analysis shows that the expected delay of OPS is always smaller than that of existing methods when the link quality is below 100%. Both extensive simulations and experiments are conducted. The results verify that the end-to-end delivery delay of OPS is usually less than 30% of that of existing methods, while the energy cost is almost the same.
Index Terms —Strip wireless sensor networks (SWSNs), End-to-end delivery delay, Scheduling, Opportunistic transmission.
✦
1 INTRODUCTION
Wireless Sensor Networks (WSNs) can be leveraged to
support a broad range of applications from habitat mon-
itoring to structure health monitoring and protection,
helping to reduce the deployment cost and provide real-
time data collection. In some applications, the physical
area monitored by a WSN is in a shape of a narrow
yet quite long strip. An example application is gas
monitoring in coal mines where tunnels are only several
meters wide but tens of kilometers long [1], [2]. Other
applications include monitoring rivers and pipelines,
which have the same shape and geographical extent
as coal mine tunnels. We call this kind of WSNs as
strip WSNs (SWSNs). Being deployed along the long
monitoring area as shown in Fig. 1, SWSNs usually have
over tens of hop counts.
In order to ensure long-term operation, sensor nodes
in SWSNs usually operate at a very low-duty-cycle (e.g.,
1% or less), implying that the sensor nodes sleep most of
the time and stay active for only a short duration in each
duty cycle. Under such circumstances, when a sender
node needs to deliver a packet, it always has to wait for a
certain period of time until its destination node becomes
active, thus resulting in sleep latency. The sleep latency in
SWSNs with large hop counts usually leads to a very
Peng Guo (e-mail: guopeng@mail.hust.edu.cn) is with the Department of
Electronics and Information Engineering, Huazhong University of Science
and Technology, Wuhan, 430074, China.
Kui Zhang (e-mail: truelykyle@gmail.com) is with Department of Computer
Science, University of Twente, Netherlands.
Nirvana Meratnia (e-mail: N.Meratnia@utwente.nl) is with Department of
Computer Science, University of Twente, Netherlands.
Fig. 1: A typical strip WSN with unreliable links (nodes deployed along the strip toward a sink).
long end-to-end delivery delay. For example, a typical sleep latency of 0.5 s within each hop can lead to an end-to-end delay of tens of seconds in SWSNs, which cannot be tolerated in many delay-sensitive applications, such as coal mine monitoring.
To reduce the delivery delay, many delay-efficient scheduling schemes [3]–[5] have been proposed. However, generally speaking, the average end-to-end delivery delay with these schemes is approximately proportional to the product of the hop count and the duty cycle length, which is still too large in large-scale SWSNs. In
order to essentially shorten the delivery delay due to
sleep latency, reducing the gap between adjacent nodes’
wakeup time is a key issue. If a receiver node wakes up
immediately when the sender node has just obtained a
packet, the sender node can send the packet directly to
the receiver without any latency. This idea has previously been described in [6]–[8] under different names, such as staggered wake-up [6], streamlined wake-up [7], and ladder wake-up [8]. In this paper, we use the name pipeline scheduling, adopted from [9].
Though pipeline scheduling can achieve the minimum delivery delay in low-duty-cycle WSNs, its performance becomes poor in unreliable communication environments. An example of this is
Fig. 2: Traditional pipeline scheduling on unreliable links (nodes A, B, and C wake up in sequence with duty cycle T and time slot τ; a failed transmission of packet di+1 must be retransmitted in the next duty cycle, raising the delay from 2τ to T + 2τ).
Nodes A, B, and C work in duty-cycle
mode and wake up one after another. Node
A needs to deliver packets to C via B. When there is
no transmission failure, the delivery delay is the minimum
2τ, where τ is the length of a time slot. However, if A fails
to transmit the packet to B in the current duty cycle, it has to
retransmit the packet in the next duty cycle, resulting in
a delay of T + 2τ.
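The delay penalty of a single failure can be checked with a quick numeric sketch (our own illustration; T = 1 s, τ = 5 ms, and the link success probability p = 0.7 are assumed example values, not from this section):

```python
# Expected two-hop delay A -> B -> C under traditional pipeline scheduling.
# A failure on the A -> B link defers the packet by one full duty cycle T.

T = 1.0          # duty cycle length (s), assumed
tau = 0.005      # time slot length (s), assumed
p = 0.7          # per-attempt success probability on A -> B, assumed

# Delay is 2*tau if the first attempt succeeds; each extra duty cycle adds T.
# E[delay] = sum_{m>=0} p*(1-p)^m * (m*T + 2*tau) = (1-p)/p * T + 2*tau
expected = (1 - p) / p * T + 2 * tau
print(f"no-failure delay : {2 * tau * 1e3:.0f} ms")
print(f"one-failure delay: {(T + 2 * tau) * 1e3:.0f} ms")
print(f"expected delay   : {expected * 1e3:.1f} ms")
```

Even with a fairly good link, the expected delay is dominated by the duty cycle T rather than the slot length τ, which is the gap pipeline scheduling leaves open on unreliable links.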
To enhance pipeline scheduling in unreliable communication
environments, [9] recently proposed the
multi-pipeline scheduling algorithm, which employs multiple
parent nodes in each layer of the data delivery path
to establish multiple pipeline schedules. When a transmission
failure happens, the sender node has multiple
chances to retransmit the packet in the current duty cycle
instead of the next one, thus reducing the delivery
delay due to sleep latency. However, the performance of
the multi-pipeline scheduling algorithm depends heavily on
the number of parent nodes in each layer.
In this paper, we propose an opportunistic pipeline
scheduling (OPS) algorithm for low-duty-cycle SWSNs
with unreliable links. OPS provides more retransmission
chances for sensor nodes than the multi-pipeline scheduling
algorithm in [9], while requiring no more active
time or transmissions from sensor nodes for a given
amount of packets to be transmitted. Moreover, OPS
does not strictly require multiple parent nodes in each
layer as multi-pipeline scheduling does.
The design of OPS is based on the following two
motivations. First, we observe that sensor nodes in
SWSNs can overhear data transmissions passing by them,
owing to the narrowness of the SWSN area. For
instance, when a sender node transmits a packet to
the receiver node, another node near the receiver may
hear the transmission. By checking whether an
acknowledgement is sent back by the receiver, the
overhearing node can learn the transmission
result, and can substitute for the receiver to forward the
packet if the transmission fails. Thus, the time otherwise
wasted waiting for the sender to retransmit the packet in the
next duty cycle is saved. The overhearing is easy to
implement: we simply schedule the overhearing
node with the same active time as the receiver and set
it to receive all packets, whether or not it is the
destination.
The other motivation relies on the following fact. When
the gap between adjacent nodes' wakeup times
in Fig. 2 is enlarged, the sender node A has multiple chances
to retransmit the packet to the receiver B in the current
duty cycle before the destination node C wakes up, thus
avoiding retransmissions in the next duty cycle. Based
on this fact, if node B receives the packet early in the
gap before node C wakes up, B can consider sending
the packet to another node closer to C. That node
can then help deliver the packet to C with a higher success
probability when C wakes up.
Based on the ideas introduced above, we design the
proposed OPS scheduling and give a theoretical
performance calculation as well as an optimization.
The contributions of this paper include:
• Based on the unique features of packet transmissions
in SWSNs, we propose a novel opportunistic pipeline
scheduling (OPS) to reduce the data delivery delay in
low-duty-cycle SWSNs with unreliable links.
• We implement OPS on a SWSN with 20 TelosB
nodes, and demonstrate its high efficiency in terms
of low end-to-end data delivery delay in a practical
environment.
The remainder of the paper is organized as follows. In
Section 2, we briefly review related works. Section 3
describes the models in detail. Section 4 gives the design
overview of the proposed scheme; the details of
the scheme are then presented in Section 5. Simulations and
experiments are described in Sections 6 and 7,
respectively, followed by the conclusions in Section 8.
2 RELATED WORKS
Duty-cycling is a common way for sensor nodes to save
energy [11]. In [12], the authors compared
the data delivery delay of the classic
synchronous scheduling schemes and asynchronous
scheduling schemes, showing that, on average, at least
half of the duty cycle time is needed to transmit a packet
between adjacent nodes.
In [3], the authors proposed an asynchronous wake-up
schedule to reduce the end-to-end latency, in which
each sensor node has several choices to send packets to
neighbors that wake up at different times in a duty
cycle. Hence, the average per-hop delay can be
reduced to a fraction of the duty cycle. However, the
total delay still increases proportionally to the product of
hop count and duty cycle length. In [4], a delay-efficient
MAC scheme is proposed, in which sensor nodes wake
up at appropriate times according to the demands
of their sender nodes. However, a demand can be
delivered only within the short active time of a node
in each duty cycle; hence, the number of hops that can be
requested in each duty cycle is limited.
In [6]–[8], [13], the authors discussed wake-up
schedules based on an established directional data
traffic path. All nodes wake up just as their sender
nodes have obtained the data packets, and go to sleep
as soon as they have transmitted the packets to their receiver
nodes. Hence, the data packets can be delivered step
by step without any waiting, and the delivery delay
can be essentially eliminated. We call this kind of
scheduling method pipeline scheduling in this paper.
Though pipeline scheduling has good delay performance,
it is not efficient in unreliable communication
environments: sensor nodes using pipeline scheduling
usually have to wait a whole duty
cycle to retransmit packets when a failure happens.
Recently, using opportunistic routing for data delivery
in duty-cycle WSNs has received attention [5], [14],
[15]. By combining scheduling and routing for the
data delivery, these works can reduce the data delivery
delay. However, as they are usually based on
asynchronous wake-up, the average per-hop transmission
delay is still a fraction of the duty
cycle, and the total delivery delay in SWSNs with large
hop counts remains very long.
The most recent related work is the multi-pipeline scheduling
proposed in [9], which utilizes multiple parent nodes
on the delivery path to set up multiple pipeline schedules
for the data delivery. However, in multi-pipeline
scheduling, each failure during the data forwarding
reduces the number of available pipelines by one, until
only a retransmission in the next duty cycle remains.
In this paper, we make extensive
comparisons between multi-pipeline scheduling and
the proposed OPS.
3 MODEL AND ASSUMPTIONS
Before introducing the proposed OPS scheduling, we
describe the traffic model and duty-cycle model for
SWSNs. After that, assumptions made in our design will
be described.
3.1 Models
Suppose M sensor nodes are randomly deployed in a
strip area whose width is far smaller than the nodes'
transmission range. The sensor nodes are connected with
each other and form a connected network, i.e., a SWSN.
To enhance the robustness of the network, multiple
routes usually exist in a SWSN for data collection, so
as to avoid network disconnection due to the
unavailability of some node, as shown in Fig. 1.
We consider the traffic pattern of data collection in
the SWSN, which is the major traffic pattern in
applications such as event detection and periodic
sampling. We suppose the traffic in the SWSN
is low, and each event detection result or data sample
can be recorded in just a few bytes. Our goal is to
minimize the average end-to-end packet delivery delay
in the SWSN.
We suppose sensor nodes work in a low-duty-cycle
mode, periodically alternating between an active state and a
dormant state. Nodes in the active state can transmit packets,
sense, and receive packets from neighbors. Sensor nodes
change their states according to their wakeup schedules.
[Figure: three example schedules {7, 2}, {7, 4}, and {7, 6}, showing the gap between wakeup times and a slot comprising data, ACK, and guard intervals.]
Fig. 3: The duty-cycle model.
Each schedule is periodic with the same period T , i.e.,
the duty cycle. The active time of each sensor node
is fixed in the schedule. We assume that each node
maintains a local schedule table recording the times
when its neighbors are in active state.
The duty cycle T can be divided into a number of
time units τ, i.e., T = L ∗ τ. Usually, L ≫ 1. τ
approximates a round-trip transmission time (including
data transmission, ACK, and a guard interval). For
example, to transmit a packet of 125 bytes
and obtain the ACK using the CC2420 radio chip, τ
can be less than 5 ms. Each sensor node selects one or
several consecutive time units as its wakeup time slot.
The size of the time slot depends on the traffic load carried
by the sensor node. In view of delivery latency, we
are mainly concerned with the wakeup time in a sensor
node's schedule. Therefore, we use {T, ti} to represent
the schedule of node ni, where ti denotes the wakeup
time unit of ni. An example of the duty-cycle model is
shown in Fig. 3.
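As a concrete illustration of the {T, ti} schedule notation, the following sketch computes a node's next wakeup time (the helper `next_wakeup` is our own, not from the paper; all times are in units of τ):

```python
# A node's schedule {T, t_i}: period T (in time units) and wakeup unit t_i.
# Assumed helper: absolute time unit of the node's next wakeup at or
# after `now`.

def next_wakeup(T: int, t_i: int, now: int) -> int:
    """Return the first absolute time unit >= now at which the node wakes."""
    offset = (now - t_i) % T
    return now if offset == 0 else now + (T - offset)

# Example: T = 200 units (a 1 s duty cycle with tau = 5 ms), wakeup unit 7.
T = 200
print(next_wakeup(T, 7, 0))     # node first wakes at unit 7
print(next_wakeup(T, 7, 7))     # already awake at unit 7
print(next_wakeup(T, 7, 8))     # next wakeup one period later, unit 207
```

A sender consults exactly this kind of table entry for its neighbors to decide when a receiver will be listening.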
3.2 Assumptions
We make the following assumptions in our design:
• Links are unreliable, and a transmission may fail
depending on the link quality.
• Each sensor node knows its own wake-up schedule and
the schedules of its neighbors. This can be
implemented by letting the sensor nodes exchange
information locally during the network initialization
phase.
• Each node knows the link quality between itself
and its neighbors. This can be implemented by
applying existing mechanisms such as the four-bit
link estimation [16] and STLE [17].
• Sensor nodes are locally synchronized so that they
know when to send packets to their neighbors ac-
cording to their wake-up schedules. Local synchro-
nization can be achieved by using a time stamping
technique, as described in FTSP [18], which achieves
an accuracy of 2.24µs with the cost of exchanging
a few bytes of packets among neighboring nodes
every 15 minutes. Since τ is typically several ms
for the radio chip Chipcon CC2420, the accuracy of
2.24µs is sufficient.
Page 28 of 36IEEE Transactions on Wireless Communications
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
For P
eer R
eview
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. XX, NO. Y, MONTH 2013 4
[Figure: (a) network topology with nodes A, B, C, D, E and sink S; (b) traditional pipeline scheme; (c) multi-pipeline scheme; (d) the proposed OPS scheme, with node schedules such as {T, i}, {T, i+1}, ..., {T, i+4}.]
Fig. 4: A typical example for three scheduling schemes.
4 DESIGN OVERVIEW
In this section, we use a typical example to illustrate the
main idea of the proposed OPS scheduling scheme. For
comparison, we also illustrate the related scheduling
schemes with this example.
Fig. 4 shows a simple SWSN. In the traditional pipeline
scheduling algorithms [6]–[8], [13], a data collection
tree is usually given, and nodes on each path in the tree
wake up one by one with a gap of one time slot, as
shown in Fig. 4(b), where there are two forwarding paths
A → B → C → S and D → E → F → S. The schedules
of the nodes are pipelined as {T, 1} → {T, 2} → {T, 3} and
{T, 4} → {T, 5} → {T, 6}, respectively. These schedules
achieve the minimum data collection delay when no
transmission failure happens. However, if node A fails
to send the packet to B at {T, 2}, it has to wait for T
before retransmitting the packet.
In the multi-pipeline scheduling algorithm, a packet can
be switched between the two parallel pipelines on the
forwarding paths, as shown in Fig. 4(c), thus mitigating
the influence of a broken pipeline. For instance, when
node A fails to transmit a packet to node B at {T, 2}, node
A can transmit the packet to D immediately at {T, 3},
and the packet will be forwarded along the pipeline
D → E → S.
In our proposed OPS scheme, since nodes in SWSNs
are deployed in a strip area, we consider one main
forwarding path in the SWSN for the data collection,
as shown in Fig. 4(d), where nodes A, B, C, and S form
the main forwarding path. Nodes outside the path, e.g.,
nodes D and E, send their packets only to nearby nodes on
the path. Intuitively, data collection in this way should be
energy-efficient, as the traffic in SWSNs is supposed to
be low and the sensor nodes on the main path can aggregate
their local data.
To schedule nodes on the main forwarding path, we
employ an expanded pipeline scheduling scheme that
enlarges the gaps between adjacent nodes' wakeup times in
traditional pipeline scheduling. For instance, in Fig. 4(d),
nodes A, B, and C are scheduled at {T, i}, {T, i + 2}, and
{T, i + 4}, respectively. They wake up one after another,
similarly to traditional pipeline scheduling,
but the gaps between their wakeup times are larger
than in traditional pipeline scheduling. Some
nodes outside the path, e.g., nodes D and E, are
scheduled at the same time slots as their nearby nodes on
the path, so as to provide opportunistic links that enhance
the pipelined forwarding on the path. There may be
other nodes deployed in the SWSN in practice; we
schedule them at other time slots for local data gathering
without collisions. The scheduling detail is omitted in
this paper, as it is less related to the end-to-end data
delivery delay.
We use four typical cases of packet delivery in OPS,
shown in Fig. 5, to introduce the basic idea of OPS.
We explain these cases as follows.
• Case 1: In the first duty cycle, node A transmits
packet di to node B at {T, i + 2}. Suppose B
successfully receives the packet. Then, node B goes to
sleep immediately and wakes up again at {T, i + 4}
to transmit di to node C, which has just woken up.
• Case 2: In the second duty cycle, node A transmits
packet di+1 to node B, again at {T, i + 2}. However, this
transmission fails. Node B then keeps listening,
and node A retransmits di+1 to B at {T, i + 3}.
Suppose node B successfully receives di+1 at {T, i +
3}. Then, B can forward di+1 to node C at {T, i + 4}.
• Case 3: In the third duty cycle, node A fails to
transmit packet di+2 to node B at {T, i + 2}, and also
fails to retransmit the packet at {T, i + 3}. However,
another node E, which is close to node B and is
scheduled to overhear node B, successfully receives
di+2 at {T, i + 3}. Hence, node E substitutes for node B
to forward di+2 to node C at {T, i + 4}. (Technical
details of the substitution are explained later.)
• Case 4: In the fourth duty cycle, node A successfully
transmits packet di+3 to node B at {T, i + 2}. Since
node C has not woken up yet, instead of going to
sleep at {T, i + 3}, node B sends the packet to node
F, which is located between nodes B and C. Node F
will then help send di+3 to node C at {T, i + 4}
with a higher success probability.
From cases 1 and 2, owing to the expanded pipeline
scheduling, node A succeeds in transmitting packet di+1
without a retransmission in the next duty cycle, even though
a transmission failure occurs. To show the
improvement more clearly, suppose the link quality between
nodes A and B is 0.7, the duty cycle length T is 1 s, and
the size of a time slot equals τ = 5 ms. Node A then
has two chances to transmit the packet to B in the duty
cycle before node C wakes up. Hence, the probability
of successful packet delivery between A and B within one
duty cycle increases to 1 − (1 − 0.7)² = 0.91. This
increase comes at a cost of only a small extra
delay of τ = 5 ms in the ideal case, while the probability
of a retransmission in the next duty cycle is remarkably
decreased.
From cases 3 and 4, nodes E and F, which are outside
the main forwarding path, can opportunistically provide
[Figure: nodes A through E forwarding packets di through di+3 in four cases: (1) no failure; (2) fail twice but still succeed in the current duty cycle; (3) totally fail but another node helps to forward; (4) succeed early and send to a relay node.]
Fig. 5: Design of the proposed OPS scheduling.
additional links for the pipelined forwarding on the
path, thus effectively enhancing pipeline scheduling
in unreliable communication environments. In particular,
to show the improvement of case 3 more clearly,
suppose the link quality between nodes A and E is 0.6.
Then, the probability of successful packet
delivery from A to B or E within one duty cycle is
1 − (1 − 0.7)²(1 − 0.6)² = 0.986. Note that this
improvement requires no further extra delay in the ideal
case.
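Both worked probabilities can be verified directly (a minimal numeric sketch using the example values above):

```python
# Success probabilities for the worked examples above.
# p_ab = 0.7 (A -> B link), p_ae = 0.6 (A -> E link), two attempts each.

p_ab, p_ae = 0.7, 0.6

# Cases 1-2: A gets two tries to reach B in one duty cycle.
p_two_tries = 1 - (1 - p_ab) ** 2
print(round(p_two_tries, 3))   # 0.91

# Case 3: E overhears both attempts, so failure requires all four to miss.
p_with_overhear = 1 - (1 - p_ab) ** 2 * (1 - p_ae) ** 2
print(round(p_with_overhear, 4))   # 0.9856, i.e. about 0.986
```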
Compared with the multi-pipeline scheduling scheme,
the proposed OPS has the same delay ({T, i + 4} − {T, i})
over ideal links. However, (i) the sender nodes in OPS have
multiple retransmission chances in a duty cycle, while
the number of retransmission chances in multi-pipeline
scheduling depends on the number of receivers of the sender;
(ii) thanks to overhearing, every additional receiver of a
sender node in OPS doubles the reception chances
in a duty cycle, as illustrated in case 3; and (iii) some
receiver nodes can even help improve the link
quality, as illustrated in case 4. Furthermore, for
a given amount of packets to be delivered, OPS does not
consume more energy than existing approaches, as
analyzed in the next section.
5 THE PROPOSED OPS SCHEDULING
In this section, we present the proposed OPS scheduling
in detail. First, we define the forwarding
path and the nodes providing opportunistic links in OPS.
Then, we introduce the scheduling algorithms for the sensor
nodes in a SWSN, along with an energy consumption analysis.
Finally, we conduct the optimization of OPS, including the optimal
forwarding path selection and the optimal time slot
assignment.
5.1 Definitions
We define the forwarding path as a connected route
whose sensor nodes cover the whole SWSN.
Suppose Γ is the set of nodes on a forwarding path,
and let pi,i+1 denote the link quality of link ni → ni+1.
[Figure: forwarding-path nodes ni−1, ni, ni+1, ni+2 together with their opportunistic forwarding nodes n^f_{i−1}, n^f_i, n^f_{i+1}, n^f_{i+2} and opportunistic relaying nodes n^r_{i−1}, n^r_i, n^r_{i+1}.]
Fig. 6: Opportunistic transmission links.
C(ni, ni+1, ni+2) denotes the route quality of the route
ni → ni+1 → ni+2, which equals pi,i+1 ∗ pi+1,i+2.
We define opportunistic forwarding nodes and opportunistic
relaying nodes, which provide two kinds of opportunistic
links for the forwarding path, as follows.
Definition 1: Consider a segment of the forwarding
path ni → ni+1 → ni+2. Node n^f_{i+1} is an opportunistic
forwarding node for node ni+1 if it satisfies: (1)
n^f_{i+1} ∉ Γ; (2) n^f_{i+1} = argmax{C(ni, x, ni+2) : x ∈
N(ni) ∩ N(ni+1) ∩ N(ni+2)}, where N(ni) represents the
set of neighboring nodes of ni.
Definition 2: Consider a segment of the forwarding path
ni → ni+1 → ni+2. Node n^r_{i+1} is an opportunistic relaying
node for node ni+1 if it satisfies: (1) n^r_{i+1} ∉ Γ; (2) n^r_{i+1} =
argmax{C(ni+1, x, ni+2) : x ∈ N(ni+1) ∩ N(ni+2)}.
Fig. 6 illustrates all possible data forwarding links in
a SWSN. Note that not every sensor node on the forwarding
path has an opportunistic forwarding node and an
opportunistic relaying node; the existence of these nodes
depends on the deployment of the SWSN. As Fig. 6 shows,
a sensor node on the forwarding path may get a packet
from its sender node on the path, or from that sender's
opportunistic forwarding node or opportunistic relaying node,
depending on the random outcomes of the wireless
transmissions. In addition, the sensor node may
forward the packet to its receiver node on the path, to the
receiver's opportunistic forwarding node, or to its own
opportunistic relaying node.
5.2 Scheduling of nodes with OPS
For convenience, we call the destination node of a sender
the sender's parent node, and the destination node of
the parent node the sender's grandparent node. After the
initialization of OPS, each node ni on the forwarding path
knows its own active time slot ({T, i}), its parent
node ni+1's active slot ({T, j}), and its grandparent node
ni+2's active slot ({T, k}). The nodes n^f_i and n^r_i, if they
exist, also know {T, i}, {T, j}, and {T, k}.
Algorithm 1 describes the scheduling of a node ni
on the forwarding path. Node ni
periodically wakes up at its active time slot to receive the
incoming packet, and transmits a packet (if it has one) to ni+1 at
ni+1's active slot. In particular, during ni's reception duration
(i.e., ni's active slots), if ni does not detect a packet
transmission, it sends a short N-ACK to its opportunistic relaying
node n^r_i (if any) to notify the latter to go to sleep, and
then goes to sleep itself. If ni detects a packet
transmission in its active time but fails to receive the
packet, it does not return an ACK, and keeps listening to
the channel for the packet retransmission until ni+1 is about to
become active. If ni successfully receives a packet in its active
time but ni+1 is not yet active, ni sends the
packet to n^r_i (if any). During ni's transmission duration
(i.e., ni+1's active slots), if ni has already successfully
sent the packet to n^r_i in the current cycle, ni does not
transmit the packet again. Otherwise, ni keeps
transmitting the packet in its transmission duration until
it receives an ACK from ni+1.
Algorithm 1 Scheduling for node ni on forwarding path
1: STATE=FREE
2: while time ∈ [{T, i}, {T, j}) do
3: Listen to the channel
4: if Detect a packet then
5: if Receive the packet then
6: Send back ACK
7: STATE=BUSY
8: if time={T, j − 1} then
9: if ∃ n^r_i then
10: Send the packet to n^r_i
11: STATE=FREE
12: end if
13: Go to sleep and BREAK
14: end if
15: end if
16: Go on listening to the channel
17: else
18: Send N-ACK to n^r_i
19: Go to sleep and BREAK
20: end if
21: time++
22: end while
23: while time ∈ [{T, j}, {T, k}) do
24: if STATE=BUSY then
25: Send packet to ni+1
26: if Receive ACK then
27: STATE=FREE
28: end if
29: end if
30: time++
31: end while
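The control flow of Algorithm 1 can be sketched as an ordinary state machine in Python (a simplified single-cycle rendition; the outcome oracle `rx_outcome` and the other names are our own assumptions, not part of the paper's implementation):

```python
# Simplified single-duty-cycle rendition of Algorithm 1 for a path node n_i.
# rx_outcome(t) is an assumed oracle returning 'none' (no packet detected),
# 'fail' (detected but corrupted), or 'ok' (received) at time unit t.
# has_relay marks whether an opportunistic relaying node n_i^r exists.

def schedule_ni(t_i, t_j, t_k, rx_outcome, tx_ok, has_relay=True):
    log, state = [], "FREE"
    for t in range(t_i, t_j):                 # reception duration
        outcome = rx_outcome(t)
        if outcome == "none":
            if has_relay:
                log.append("send N-ACK to relay")
            log.append("sleep")
            break
        if outcome == "ok":
            log.append("send ACK")
            state = "BUSY"
            if t == t_j - 1 and has_relay:    # receiver n_{i+1} not up yet
                log.append("hand packet to relay")
                state = "FREE"
            log.append("sleep")
            break
        # 'fail': keep listening for the retransmission
    for t in range(t_j, t_k):                 # transmission duration
        if state == "BUSY":
            log.append("send packet to n_i+1")
            if tx_ok(t):
                state = "FREE"
                break
    return log

# First attempt fails, the retransmission succeeds (case 2 in Fig. 5):
outcomes = {0: "fail", 1: "ok"}
log = schedule_ni(0, 2, 4, lambda t: outcomes.get(t, "none"),
                  lambda t: True, has_relay=False)
print(log)   # ['send ACK', 'sleep', 'send packet to n_i+1']
```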
Algorithm 2 describes the scheduling of the opportunistic
forwarding node n^f_i of node ni. When n^f_i is assigned with
Algorithm 2 Scheduling for node n^f_i
1: STATE=FREE
2: while time ∈ [{T, i}, {T, j} − ε) do
3: Listen to the channel
4: if Detect a packet then
5: if Receive the packet then
6: STATE=BUSY
7: end if
8: if Receive ACK then
9: STATE=FREE
10: Go to sleep and BREAK
11: end if
12: else
13: Go to sleep and BREAK
14: end if
15: time++
16: end while
17: while time ∈ [{T, j} − ε, {T, k}) do
18: if STATE=BUSY then
19: Send ACK to ni
20: Send packet to ni+1
21: if Receive ACK then
22: STATE=FREE
23: end if
24: end if
25: time++
26: end while
Algorithm 3 Scheduling for node n^r_i
1: STATE=FREE
2: while time ∈ [{T, i}, {T, j}) do
3: Listen to the channel
4: if Receive N-ACK from ni then
5: Go to sleep and BREAK
6: end if
7: if Receive a packet from ni then
8: STATE=BUSY
9: Send ACK to ni
10: Go to sleep and BREAK
11: end if
12: time++
13: end while
14: while time ∈ [{T, j}, {T, k}) do
15: if STATE=BUSY then
16: Send packet to ni+1
17: if Receive ACK then
18: STATE=FREE
19: end if
20: end if
21: time++
22: end while
active time slots of ni and ni+1, it periodically wakes
up at ni's active time slot to overhear the packet in the air.
If it receives the packet but does not hear the ACK
sent by ni, it sends an ACK to ni during the guard interval
(indicated as ε in Algorithm 2) in {T, j}, and then sends
the packet to ni+1 at ni+1's active slot.
Algorithm 3 describes the scheduling of the opportunistic
relaying node n^r_i of node ni. When n^r_i is assigned with the
active time slots of ni and ni+1, it periodically wakes up
at ni's active time slot to receive a possible packet. If
it receives an N-ACK from ni, which indicates that there is no
packet transmission in the current duty cycle, n^r_i will
[Figure: nodes ni and nj on the shortest path connected by a link of quality 0.4; node nk outside the path connected to both by links of quality 0.8.]
Fig. 7: Example for the optimal forwarding path.
go to sleep immediately to save idle listening time.
If n^r_i receives a packet from ni, it sends back an ACK to ni
immediately and transmits the packet to ni+1 at ni+1's
active slot.
According to Algorithms 1 and 2, when an opportunistic
forwarding node provides an extra link to help forward an
overheard packet, it does not cause collisions on the links
of the forwarding path. Since the opportunistic forwarding
node is scheduled at the same active slots as the corresponding
node on the forwarding path, the overhearing brings no
extra energy consumption to the sender. In addition, the
larger gap between adjacent nodes' wakeup slots in
OPS does not increase the energy consumption of the
receiver either, because the receiver, waking up at its wakeup
slot, goes to sleep immediately after it receives the
packet or detects that there is no packet transmission in the
channel. Only when there is a transmission failure do the
sender and receiver nodes increase their active time
in that duty cycle. However, this increase in active
time is efficiently utilized for packet retransmission, and
hence does not increase the energy consumption for
a given packet transmission task.
According to Algorithms 1 and 3, the opportunistic relaying
node likewise does not cause collisions on the links of the
forwarding path. Regarding its energy consumption, since
the corresponding node on the forwarding path may
additionally send a packet to it and ask it to relay the packet
over a better link, this additional transmission leads to extra
energy consumption compared with existing related works.
However, this extra transmission over a better link helps
reduce retransmissions, thus mitigating the extra energy
consumption. In the next section, we compare the overall
energy consumption of OPS with that of existing related
works.
5.3 Optimization of the OPS scheduling
To optimize the proposed OPS scheduling, we need to
select the optimal forwarding path in the SWSN and
compute the optimal gaps between the wakeup times
of adjacent nodes on the path. Note that the optimal
forwarding path may not be the shortest path in the
SWSN. Consider the example illustrated in Fig. 7, in which
nodes ni and nj are on the shortest path while
node nk is outside the path. The link quality of link
ni → nj is 40%, while the links ni → nk and nk → nj
both have quality 80%. It is easy to obtain the optimal gap
of 9τ between ni's and nj's wakeup times that achieves
the minimum expected delay of data delivery on link
ni → nj (refer to Equation 2 below), which is about
[Figure: path nodes n1–n4 in layers 1–4, their opportunistic forwarding nodes n^f_2, n^f_3, n^f_4 and relaying nodes n^r_2, n^r_3, with link qualities p1,2, p1,2−f, p2,2−r, p2−r,3, p2,3, p2,3−f, and p2−f,3.]
Fig. 8: Data forwarding between layers.
55 ms (with T = 1 s and τ = 5 ms). However, if we choose
the route ni → nk → nj as part of the forwarding path
and set the gaps between the wakeup times of ni, nk, and nj to
3τ each, the total expected delay of data delivery over
ni → nk → nj is only about 46 ms. Therefore, we need to
select the optimal forwarding path according to the total
expected delay. Meanwhile, the opportunistic links
should also be taken into account during the selection.
Suppose that node ni is on a selected forwarding path.
We call the set consisting of node ni and n^f_i layer i. Fig. 8
shows the data delivery routes at the beginning of the
forwarding path. The link quality between nodes n1 and
n2 is denoted by p1,2, and the link quality between node
n1 and n^f_2 is denoted by p1,2−f; the other link qualities
are denoted in a similar way. Assume the gap between
n1's and n2's wakeup times is k1,2 ∗ τ, where k1,2
is an integer and k1,2 ∗ τ ≤ T. The probability of successful
delivery between layer 1 and layer 2 within one duty
cycle is:
P1,2 = 1 − (1 − p1,2)^{k1,2} ∗ (1 − p1,2−f)^{k1,2}    (1)
Hence, the expected delivery delay between layer
1 and layer 2 can be calculated as:
E[D1,2] = Σ_{m=0}^{∞} P1,2 (1 − P1,2)^m (mT + k1,2 ∗ τ) = ((1 − P1,2)/P1,2) ∗ T + k1,2 ∗ τ    (2)
Thus, with Equations 1 and 2, it is easy to compute the
optimal gap k1,2 ∗ τ that minimizes E[D1,2].
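The optimal gap of Equations 1 and 2 can be found by a direct search over k (a sketch with T = 1 s and τ = 5 ms; `p_f` stands for the opportunistic-forwarding link quality and is set to 0 for a bare link):

```python
# Optimal wakeup-time gap k*tau minimizing E[D] from Equations 1 and 2:
#   P(k)    = 1 - (1-p)^k * (1-p_f)^k
#   E[D](k) = (1-P)/P * T + k*tau

T, tau = 1.0, 0.005
L = int(T / tau)          # number of time units per duty cycle

def expected_delay(k, p, p_f=0.0):
    P = 1 - (1 - p) ** k * (1 - p_f) ** k
    return (1 - P) / P * T + k * tau

def optimal_gap(p, p_f=0.0):
    return min(range(1, L + 1), key=lambda k: expected_delay(k, p, p_f))

# Single link of quality 0.4 (the Fig. 7 example).
k = optimal_gap(0.4)
print(k, round(expected_delay(k, 0.4) * 1e3, 1))   # 9 55.2
```

For the 0.4-quality link of the Fig. 7 example this search returns a gap of 9 slots with an expected delay of about 55 ms, matching the text above.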
We further compute the optimal k2,3 for layers 2 and
3. The probability that node n^r_2 relays a packet
arriving at layer 2 is:
p^r_2 = Σ_{m=1}^{k1,2} p1,2 ∗ (1 − p1,2)^{m−1} ∗ [1 − (1 − p2,2−r)^{k1,2−m}]    (3)
Hence, the probability that node n2 forwards the
packet in layer 2 is 1 − (1 − p1,2)^{k1,2} − p^r_2, and the
probability that node n^f_2 forwards the packet in layer 2
is (1 − p1,2)^{k1,2} ∗ [1 − (1 − p1,2−f)^{k1,2}]. Thus, the equivalent
link quality for link n2 → n3, link n2 → n^r_2 → n3, and
link n^f_2 → n3 is:
p^e_{2,3} = p2−r,3 ∗ p^r_2 + p2,3 ∗ [1 − (1 − p1,2)^{k1,2} − p^r_2] + p2−f,3 ∗ (1 − p1,2)^{k1,2} ∗ [1 − (1 − p1,2−f)^{k1,2}]    (4)
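Equations 3 and 4 translate directly into code (a sketch; the link-quality values below are assumed for illustration only, not taken from the paper):

```python
# Equivalent layer-2 -> layer-3 link quality from Equations 3 and 4.
# Link qualities (assumed example values): p12 direct, p12f via n_2^f,
# p22r to the relay n_2^r, plus the three onward links into n_3.

def relay_prob(p12, p22r, k12):
    """Eq. 3: probability that n_2^r relays the packet arriving at layer 2."""
    return sum(p12 * (1 - p12) ** (m - 1)
               * (1 - (1 - p22r) ** (k12 - m))
               for m in range(1, k12 + 1))

def equivalent_quality(p12, p12f, p22r, p2r3, p23, p2f3, k12):
    """Eq. 4: equivalent quality of the three parallel routes into n_3."""
    pr2 = relay_prob(p12, p22r, k12)
    via_n2 = 1 - (1 - p12) ** k12 - pr2          # n_2 still holds the packet
    via_nf2 = (1 - p12) ** k12 * (1 - (1 - p12f) ** k12)
    return p2r3 * pr2 + p23 * via_n2 + p2f3 * via_nf2

pe23 = equivalent_quality(p12=0.7, p12f=0.6, p22r=0.8,
                          p2r3=0.8, p23=0.7, p2f3=0.6, k12=2)
print(round(pe23, 3))   # 0.738
```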
[Figure: two plots of the one-hop expectation delay (ms) versus link quality from 0.6 to 1: (a) OPS, single pipeline, and multi-pipeline with m = 2, 4, 6 on a 0–700 ms scale; (b) OPS and multi-pipeline with m = 2, 4, 6 on a 0–200 ms scale.]
Fig. 9: Theoretical comparison on the expectation delay.
Combined with the link n2 → n^f_3, the delivery
probability between layer 2 and layer 3 within a duty cycle
is:
P2,3 = 1 − (1 − p^e_{2,3})^{k2,3} ∗ (1 − p2,3−f)^{k2,3}    (5)
Hence, similarly to the above, the expected delay E[D2,3]
of data delivery between layer 2 and layer 3 can be
computed, and by minimizing E[D2,3], the optimal gap
k2,3 ∗ τ can be obtained.
Based on these calculation results, the optimal forwarding
path can be found as follows. Suppose Pi is a
forwarding path that covers the SWSN, denoted
as:
Pi = {l^{(i)}_1, l^{(i)}_2, . . . , l^{(i)}_{Hi}}    (6)
where Hi is the length of Pi and l^{(i)}_j is its j-th link.
Based on the calculation method above, we can obtain
the minimum expected delay on each link in Pi.
Denote by D^{(i)}_j the minimum expected delay for link
l^{(i)}_j. We define Si as the total expected delay of Pi:
Si = Σ_{j=1}^{Hi} D^{(i)}_j    (7)
Therefore, the optimal forwarding path is the forward-
ing path with the minimum Si.
Finding the optimal forwarding path in the SWSN can
be done through flooding in the network. We briefly
describe the process as follows. The sensor node s that
is furthest from the sink initiates a flooding in the SWSN.
During the flooding, each node records its current Si to s
and broadcasts the record. When a sensor node receives
such a packet, it updates its record if the new Si is smaller.
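Viewed centrally, selecting the path with minimum Si is a shortest-path problem over per-link minimum expected delays (a sketch; Dijkstra's algorithm stands in for the in-network flooding, and the edge weights are assumed to have been computed from Equation 2 already):

```python
import heapq

# Optimal forwarding path = path minimizing S_i, the sum of per-link
# minimum expectation delays D_j (Eq. 7).

def best_path(delays, src, dst):
    """delays: {node: {neighbor: min expected delay on that link}}."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, d in delays[node].items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + d, nbr, path + [nbr]))
    return float("inf"), []

# Fig. 7 style example (delays in ms): the route via nk beats the direct link.
delays = {
    "ni": {"nj": 55.2, "nk": 23.1},
    "nk": {"nj": 23.1, "ni": 23.1},
    "nj": {},
}
cost, path = best_path(delays, "ni", "nj")
print(path, round(cost, 1))   # ['ni', 'nk', 'nj'] 46.2
```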
To theoretically compare the performance of OPS with the multi-pipeline scheduling and the traditional pipeline scheduling, we further calculate their expected delivery delay within one hop. For simplicity, we assume the pipeline link, the multi-pipeline links and the opportunistic links all have the same quality p. From Equation 2, the expected delays of the traditional pipeline scheduling and the multi-pipeline scheduling are ((1 − p)/p) ∗ T + τ and ((1 − p)^m / (1 − (1 − p)^m)) ∗ T + mτ, respectively, where m is the number of parent nodes in multi-pipeline scheduling. Suppose only one opportunistic forwarding node is exploited in OPS. Then, based on Equations 1 and 2, the minimum expected delay of OPS is ((1 − p)^{2k} / (1 − (1 − p)^{2k})) ∗ T + kτ, where

k = ⌊ ln( 1 − (T/τ) ln(1 − p) − √( (1 − (T/τ) ln(1 − p))² − 1 ) ) / (2 ln(1 − p)) ⌋

is the optimal number of time units in the gap between the wakeup times. Setting T = 1 s and τ = 5 ms, the expected delays of the three methods are illustrated in Fig. 9.
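The three per-hop delay expressions, including the closed-form k, can be evaluated directly. A sketch under the stated assumptions (equal link quality p on all links; one opportunistic forwarder, i.e., 2k transmission chances per duty cycle); function names are ours:

```python
import math

def single_pipeline_delay(p, T, tau):
    # ((1 - p)/p) * T + tau: one transmission chance per duty cycle.
    return (1 - p) / p * T + tau

def multi_pipeline_delay(p, m, T, tau):
    # m parent nodes give m chances per duty cycle.
    q = (1 - p) ** m
    return q / (1 - q) * T + m * tau

def ops_delay(p, T, tau):
    # 2k chances per duty cycle; k from the closed-form floor expression.
    c = math.log(1 - p)
    a = 1 - (T / tau) * c
    k = math.floor(math.log(a - math.sqrt(a * a - 1)) / (2 * c))
    k = max(k, 1)  # guard: at least one slot in the gap
    q = (1 - p) ** (2 * k)
    return q / (1 - q) * T + k * tau, k
```

With T = 1 s, τ = 5 ms and p = 0.8, the closed form selects a small k and the OPS delay falls well below both baselines, matching the ordering shown in Fig. 9.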
As Fig. 9 shows, the worse the link quality, the larger the expected delay of the traditional pipeline scheduling (denoted as single pipeline in the figure) and the multi-pipeline scheduling. In contrast, the expected delay of OPS remains very small even at a link quality of 0.6. In addition, increasing the number of parent nodes m in the multi-pipeline scheduling decreases the expected delay, as expected. However, when m = 6, the delay becomes larger once the link quality exceeds 0.68.
6 SIMULATIONS
6.1 Simulation Setup
We evaluate the OPS method on randomly generated strip networks in an area whose width is at most 15 m. In the simulations, the network size varies from 100 to 1000 nodes. The links between nodes follow a wireless path-loss channel model with shadowing effects, as in [19], where the beginning and end transmission ranges of the sensor nodes are 11.3 m and 32.4 m, respectively.
For the data collection, we assume the size of packets
initiated by nodes is small and the local data can be
aggregated into one packet. Hence, we measure the data
collection delay as the time elapsed from when a packet is sent by the sensor node farthest from the sink until it reaches the sink, i.e., the end-to-end data delivery
delay. Due to the imperfection of the links, the data delivery delay exhibits inherent randomness and can be treated as a random variable; we therefore use the average delivery delay as our measure of network performance.
on 10 network topologies. For each topology, each node
sends 100 packets to the sink with a low rate of one
packet per 15s.
In addition, we measure the energy consumption by the total number of transmissions for a single packet sent by the node farthest from the sink. The receiver-side energy is consumed by packet reception and idle listening; the former depends on the number of packet transmissions, and the latter on each sensor node's idle listening time per duty cycle. According to the analysis in Section 5.2, sensor nodes running OPS incur little additional idle listening time compared with traditional pipeline scheduling. As for the additional transmissions of the opportunistic relaying nodes in OPS, their energy consumption is likewise captured by the transmission count.
Therefore, we use only the sender-side energy as the performance metric for the total energy consumption when comparing OPS with existing methods.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. XX, NO. Y, MONTH 2013
6.2 Baseline: Typical Existing Schemes
We compare the performance of OPS with two typical existing methods. The first is the common method of [6]–[8], i.e., the traditional pipeline scheduling method (denoted as single pipeline). For a fair comparison, we strengthen this method by assuming that a shortest path with the highest quality has been established in the SWSN and that all nodes outside the path are scheduled in other time slots to send packets to the nodes on the path without collisions. The second baseline is the multi-pipeline scheduling method of [9] (denoted as multi-pipeline), which makes use of multiple parent nodes scheduled with multiple pipeline schedules.
6.3 Performance Comparison
This section compares the performance of OPS with the
two typical existing methods.
6.3.1 Different Duty Cycles
We first evaluate the performance in networks with different duty cycles. In this simulation, 100 nodes are randomly deployed in a 15 m × 300 m field.
Fig. 10 (a) and (b) show the average delivery delay and energy cost of the OPS, single pipeline and multi-pipeline methods. In Fig. 10 (a), the average delivery delay of the multi-pipeline method is only 50% of that of single pipeline when the duty cycle is low, showing that multiple transmission chances effectively enhance pipeline forwarding. Moreover, the average delivery delay of OPS is far less than that of the multi-pipeline method. This is because OPS provides more transmission links than the multi-pipeline method. Meanwhile, OPS sets a larger gap between adjacent nodes, spanning multiple slots in which multiple transmission chances are allowed, whereas the number of transmission chances in the multi-pipeline method is restricted by the number of parents in each layer. On the other hand, OPS also utilizes the potential parents in each layer of the SWSN to opportunistically retransmit packets, effectively reducing failures within the duty cycle.
As shown in Fig. 10 (b), the energy costs of the single pipeline method and the multi-pipeline method are almost the same, because their numbers of transmissions are mainly determined by the link quality. In the OPS method, since the opportunistic forwarding nodes forward packets received by overhearing the transmissions of nodes on the forwarding path, some retransmissions are avoided. Hence, the number of transmissions in the OPS method is slightly lower than in the single pipeline and multi-pipeline methods.
Fig. 13: Deployment with 20 TelosB nodes.
Fig. 14: Average delay (time units) vs. duty cycle (s) for OPS and the multi-pipeline method.
6.3.2 Different Network Sizes and Densities
We also evaluate the performance of our design in different network settings, as shown in Fig. 11 and Fig. 12. For network sizes from 60 to 140 nodes, the length of the area changes from 180 m to 420 m to keep the density similar. For different network densities, the network size is fixed at 100 while the length of the area changes from 200 m to 450 m.
As illustrated in Fig. 11, the average delivery delay and energy consumption increase with the network size, as expected. Again, OPS outperforms the single pipeline and multi-pipeline methods; in most cases, its average delivery delay is less than 20% of that of the other two methods. For different network densities, as shown in Fig. 12, the average delivery delay increases with the length of the area, because a longer area leads to longer distances between neighboring nodes and thus worse link quality. Note that when the length of the area is set to 200 m, the average delivery delay of the multi-pipeline method becomes close to that of OPS, because the density is then high enough for the multi-pipeline method to have enough parents in each layer to ensure more successful transmissions within the duty cycle.
7 EXPERIMENTS
To evaluate the performance of OPS in a real communication environment, we deployed 20 TelosB nodes in the corridor of our office building, as shown in Fig. 13 and Fig. 15. For convenience of deployment, we set the transmission power of each node to its minimum of 3 µW; the maximum transmission range of the sensor nodes is then about 1 m when the nodes are placed on the ground. In the deployment, most sensor nodes have two parents in each layer, except for three nodes. We use this deployment to compare the performance of the proposed OPS method with the multi-pipeline method of [9]. The dashed arrows for the OPS method in Fig. 15 are the opportunistic links provided by the opportunistic forwarding nodes. Here, we do not employ opportunistic relaying nodes in the deployment, for the sake of a distinct comparison between the two methods.
We set the wakeup schedules of the nodes according to OPS and the multi-pipeline method. The numbers in Fig. 15 denote the time slot at which each sensor node periodically wakes up in each duty cycle. For simplicity, we fixed the wakeup-time gap at 2τ for each
Fig. 10: Duty cycle. (a) Average delay (time units) vs. duty cycle; (b) number of transmissions vs. duty cycle, for OPS, multi-pipeline and single pipeline.
Fig. 11: Network size. (a) Average delay (time units) vs. network size; (b) number of transmissions vs. network size, for OPS, multi-pipeline and single pipeline.
Fig. 12: Network density. (a) Average delay (time units) vs. length of area (m); (b) number of transmissions vs. length of area, for OPS, multi-pipeline and single pipeline.
Fig. 15: Topology of the deployment, showing each node's periodic wakeup time slot from source S to destination D for the multi-pipeline method and the OPS method.
layer in OPS. The size of a time slot is τ = 6 ms. The length of the duty cycle is set to 1 s, 2 s, 3 s, 4 s and 5 s, respectively. Node S delivers 50 packets to node D at an interval of 10 s, and node D records the delivery delay of each packet it receives.

Fig. 14 shows the average delivery delay of the two methods, i.e., OPS and the multi-pipeline method. OPS usually outperforms the multi-pipeline method, and its advantage becomes more distinct as the duty cycle grows. Even when T = 1 s, the average delivery delay with OPS is about 30% of that with the multi-pipeline method. To interpret the results, note that under the multi-pipeline method each node in Fig. 15 has only one or two chances per duty cycle to deliver the packet, whereas each node on the forwarding path in OPS has two chances to deliver the packet and one additional chance to opportunistically forward it.
8 CONCLUSIONS
The large hop counts of SWSNs with unreliable links usually cause long end-to-end data delivery delays. Pipeline scheduling is well suited to this problem, but its performance deteriorates severely in unreliable communication environments. In this paper, we propose a novel opportunistic pipeline scheduling (OPS) scheme specifically for SWSNs with unreliable links. OPS sets an optimal gap between adjacent nodes' wakeup times to allow multiple retransmissions within the duty cycle while minimizing the expected delivery delay. In addition, OPS effectively employs nodes outside the forwarding path in SWSNs to opportunistically provide extra links, which help forward packets that fail to be delivered on the forwarding path. Extensive simulations and indoor experiments show that OPS significantly outperforms existing methods in terms of end-to-end delivery delay while consuming a similar amount of energy.
REFERENCES
[1] Chang Wen Chen, Yu Wang, Chain-Type Wireless Sensor Network
for Monitoring Long Range Infrastructures: Architecture and Protocols,
International Journal of Distributed Sensor Networks, Vol. 4, No.
4, 2008, pp. 287-314.
[2] Peng Guo, Tao Jiang and Kui Zhang. Novel Energy-Efficient Miner
Monitoring System with Duty-Cycled Wireless Sensor Networks. Inter-
national Journal of Distributed Sensor Networks, 2012.
[3] N. A. Vasanthi and S. Annadurai, AWS: asynchronous wakeup schedule to minimize latency in wireless sensor networks, in Proceedings of SUTC, Taichung, Taiwan, Jun. 2006, pp. 144-151.
[4] Yanjun Sun, Shu Du, Omer Gurewitz, David B. Johnson, DW-MAC:
a low latency, energy efficient demand-wakeup MAC protocol for wireless
sensor networks, in Proceedings of MobiHoc, Hong Kong, China,
2008, pp. 53-62.
[5] Shuo Guo, Yu Gu, Bo Jiang and Tian He, Opportunistic Flooding
in Low-Duty-Cycle Wireless Sensor Networks with Unreliable Links, in
Proceedings of MobiCom, Beijing, China, Sept. 2009, pp. 133-144.
[6] Lu G, Krishnamachari B, Raghavendra C., An adaptive energy-
efficient and low-latency MAC for data gathering in wireless sensor
networks, in Proceedings of IPDPS, Santa Fe, New Mexico, Apr.
2004, pp. 224-230.
[7] Q. Cao, T. F. Abdelzaher, T. He, and J. A. Stankovic, Towards
Optimal Sleep Scheduling in Sensor Networks for Rare Event Detection,
in Proceedings of IPSN, UCLA, Los Angeles, California, USA, Apr.
2005, pp. 20- 27.
[8] Abtin Keshavarzian, Huang Lee, Lakshmi Venkatraman, Wakeup
Scheduling in Wireless Sensor Networks, in Proceedings of MobiHoc,
Florence, Italy, May 2006, pp. 322-333.
[9] Yongle Cao, Shuo Guo and Tian He, Robust Multi-Pipeline Schedul-
ing in Low-Duty-Cycle Wireless Sensor Networks, in Proceeding of
INFOCOM, Orlando, USA, Mar. 2012.
[10] Peng Guo, Tao Jiang Qian Zhang, and Kui Zhang, Sleep Schedul-
ing for Critical Event Monitoring in Wireless Sensor Networks, IEEE
Transactions on Parallel and Distributed System. Vol. 23, No. 2,
2012, 345-352.
[11] Ye W, Heidemann J, Estrin D., An Energy-Efficient MAC Protocol for
Wireless Sensor Networks, in Proceedings of INFOCOM, New York,
USA, Jun. 2002, pp. 567-576.
[12] Tae Rim Park and Myung Lee, Power Saving Algorithms for Wireless
Sensor Networks on IEEE 802.15.4, IEEE communication magazine,
Vol. 46, No. 6, 2008, pp.148-155.
[13] N. A. Vasanthi and S. Annadurai, Energy Efficient Sleep Schedule for Achieving Minimum Latency in Query based Sensor Networks, in Proceedings of SUTC, Taichung, Taiwan, Jun. 2006, pp. 214-219.
[14] S. Biswas and R. Morris, ExOR: Opportunistic Multi-Hop Routing for Wireless Networks, in Proceedings of SIGCOMM, Philadelphia, PA, USA, Oct. 2005, pp. 133-144.
[15] Q. Cao, T. Abdelzaher, T. He and R. Kravets. Cluster-Based For-
warding for Reliable End-to-End Delivery in Wireless Sensor Networks,
in Proceedings of INFOCOM, Anchorage, Alaska, USA, May 2007,
pp. 1928-1936.
[16] Rodrigo Fonseca, Omprakash Gnawali, Kyle Jamieson and Philip
Levis, Four Bit Wireless Link Estimation, in Proceedings of HotNets,
Atlanta, GA, USA, Nov. 2007.
[17] Alexander Becher, Olaf Landisiedel and Klaus Wehrle, Towards
Short-Term Wireless Link Quality Estimation, in Proceedings of Em-
Nets, Chalottesville, USA, June 2008.
[18] M. Maroti, B. Kusy, G. Simon and A. Ledeczi, The Flooding Time Synchronization Protocol, in Proceedings of SenSys, Baltimore, Maryland, USA, Nov. 2004, pp. 39-49.
[19] M. Zuniga and B. Krishnamachari, Analyzing the Transitional Region in Low Power Wireless Links, in Proceedings of SECON, Santa Clara, USA, Oct. 2004, pp. 517-526.
FD-MMAC: Combating Multi-Channel Hidden and
Exposed Terminals Using a Single Transceiver
Yan Zhang, Loukas Lazos, Kai Chen, Bocan Hu, and Swetha Shivaramaiah
Dept. of Electrical and Computer Engineering, University of Arizona
Email: {yanzhang, llazos, chenkai, bocanhu, sshivaramaiah}@email.arizona.edu
Abstract—We address the problem of improving the throughput
and delay efficiency of distributed multi-channel MAC (MMAC)
protocols. We design an MMAC protocol called FD-MMAC that
exploits recent advances in full-duplex (FD) communications to
coordinate channel access in a distributed manner. Compared
with prior MMAC designs, the FD-MMAC protocol eliminates
the use of in-band or out-of-band control channels for combat-
ing the multi-channel hidden terminal problem, discovering the
resident channel of destinations, and performing load balancing.
Furthermore, FD-MMAC improves the spectral efficiency by
enabling the operation of multi-channel exposed terminals. To
achieve its goals, FD-MMAC integrates an advanced suite of
PHY-layer techniques, including self interference suppression,
error vector magnitude and received power measurements, and
signal correlation techniques. We validate the proposed PHY-layer
techniques on NI USRP devices. Further, we show via simulations
that FD-MMAC achieves significantly higher throughput and
lower delay compared with prior art.
I. INTRODUCTION
The delay and throughput performance of wireless net-
works can be significantly improved by accommodating parallel
transmissions over orthogonal frequency bands. Most wireless
standards (e.g., [1]) already provision for multiple bands, herein
referred to as channels. For networks without centralized con-
trol, channel access is coordinated in a distributed fashion by
the medium access control layer (MAC), using a multi-channel
MAC (MMAC) protocol [2]–[7].
The design of MMAC protocols poses significant challenges.
Senders must employ low-overhead mechanisms for discov-
ering the resident channel of their respective destinations.
Moreover, parallel transmissions must be efficiently distributed
over all available channels to balance the traffic load and
alleviate contention. The latter is a complicated process due to
the multi-channel hidden terminal problem [3]. In this problem,
a sender that switches to a busy channel is unable to detect an
ongoing transmission if it is a hidden terminal to the transmitter.
Current solutions to the multi-channel hidden terminal problem
incur significant control overhead by requiring the use of in-
band or out-of-band control channels [2]–[5], or the availability
of multiple transceivers per device [6], [8], [9]. Finally, most
existing MMAC protocols fail to address the multi-channel
exposed terminal problem [10], whereby a sender switching to
a busy channel, but being an exposed terminal to a transmitter,
cannot proceed with a parallel non-interfering transmission.
978-1-4799-3360-0/14/$31.00 ©2014 IEEE
To improve the spectral efficiency of MMAC protocols, we
exploit recent advances in full-duplex (FD) communications
over a single channel [11]–[14]. In certain low-power wireless
environments, sophisticated self interference suppression (SIS)
techniques allow for concurrent transmission and reception over
a single channel. This is achieved by suppressing a significant
portion of the self interference (up to 80 dB), using a com-
bination of antenna-based SIS [14], signal inversion [12], RF
and digital interference cancellation [13], [15]. The integration
of FD communications in the MMAC design provides unique
opportunities for reducing the control overhead and increasing
the spatial channel reuse. In this paper, we address the problem
of improving the throughput and delay efficiency of MMAC
protocols, when nodes can operate in FD mode.
Our Contributions–We design an MMAC protocol called
FD-MMAC that enables distributed coordination of channel
access over orthogonal channels for devices using a single
transceiver. Compared with prior MMAC designs, the FD-
MMAC protocol provides the following attractive features.
• It eliminates in-band and out-of-band control signaling
for combating the multi-channel hidden terminal problem,
discovering the resident channel of destinations, and per-
forming load balancing.
• It increases the spatial channel reuse by enabling the
operation of multi-channel exposed terminals.
• It achieves load balancing and fairness autonomously.
Though security considerations are beyond the scope of the
present work, FD-MMAC is less vulnerable to denial-of-service
attacks launched against the control channel [16], [17], due to
the elimination of signaling on a dedicated control band. To
achieve its goals, FD-MMAC integrates an advanced suite of
PHY-layer techniques, including SIS, error vector magnitude
(EVM) and received power measurements, and signal correla-
tion techniques. We validate these PHY-layer techniques on an
NI USRP testbed. Further, we show via packet-level simulations
that FD-MMAC achieves significantly higher throughput and
lower delay compared with prior art. To the best of our
knowledge, this is the first work to propose an MMAC protocol
design that exploits the FD communication mode.
Paper Organization–In Section II, we motivate our problem
by discussing the limitations of related work. The system model
is described in Section III. Section IV describes the FD carrier
sensing operation. In Section V, we address the multi-channel
hidden and exposed terminal problems. In Section VI, we
IEEE INFOCOM 2014 – IEEE Conference on Computer Communications
978-14799-3360-0/14/$31.00 ©2014 IEEE 2742
Fig. 1. The multi-channel hidden terminal problem.
present the operational details of FD-MMAC. We compare the
performance of FD-MMAC with existing MMAC designs in
Section VII and conclude in Section VIII.
II. MOTIVATION – RELATED WORK
MMAC protocols can be broadly categorized into three classes:
(a) split-phase [3], [5], [18], (b) dedicated control channel
(DCC) [6]–[10], and (c) rendezvous [2], [19]. In split-phase
MMACs, time is divided into alternating control and data phases.
During the control phase, all nodes converge to a default
channel to negotiate the channel assignment for the upcoming
data phase using a variant of the virtual carrier sensing mech-
anism [1]. During the data phase, nodes exchange data on the
assigned channels. In DCC MMACs, nodes are equipped wi
th
two radios. One radio is always tuned to a DCC to perform
channel assignment and virtual carrier sensing. The second
radio switches between the remaining channels to perform
data transmissions. Finally, in rendezvous protocols, nodes
hop between channels using predefined hopping sequences.
These sequences are designed to enable the sender-destination
rendezvous within a fixed time period. We now motivate our
work by highlighting the limitations of existing MMACs.
The multi-channel hidden terminal problem: We first
describe the multi-channel hidden terminal problem using the
topology of Fig. 1. Let nodes A and B reside on channel
f1, while node C resides on f2. Topologically, C is a hidden
terminal to A. Assume that A performs an RTS-CTS exchange
over f1 (virtual carrier sensing enabled) before communicating
packet PA to B. Let the transmission of PA start at time t
0
and terminate at t1. Assume that C switches to f1 at t2 with
t0 < t2 < t1. Because t2 > t0, node C will not overhear
the CTSB. Moreover, because t2 < t1, the transmission of PA
is ongoing when C switches to f1. At time t3 < t1, node C
becomes active and causes a collision at B.
Split-phase MMACs avoid multi-channel hidden terminals by
performing channel negotiations on a default control channel.
Because all nodes reside on the default channel during the
control phase, the virtual carrier sensing mechanism prevents
hidden terminals. However, no data transmissions take place
during the control phase, thus decreasing the overall spectral
efficiency. The control phase can be considerably long under
high-contention conditions. Moreover, when a sender has pack-
ets for multiple destinations, it may switch channels during a
single data phase. In this case, a multi-channel hidden terminal
can avoid collisions only if it defers from transmission for a
period equal to the duration of the maximum transmission unit
plus an ACK. This delay decreases the spectral efficiency.
In DCC MMACs, virtual carrier sensing is performed over
the DCC. Because one radio is always tuned to this channel,
nodes are aware of all scheduled data transmissions. Hence,
multi-channel hidden terminals are avoided. However, the use
of one extra radio increases the device cost. Moreover, the
spectral efficiency is decreased by dedicating one channel for
signaling. The capacity of the control channel becomes the
performance bottleneck in high-contention scenarios. Finally,
from a security standpoint, the control channel constitutes a
single point of failure [16], [17]. Rendezvous protocols do not
address the multi-channel hidden terminal problem.
The multi-channel exposed terminal problem: Multi-
channel exposed terminals lose transmission opportunities when
switching to a busy channel in the middle of a data trans-
mission. Referring to Fig. 1, assume that B transmits a data
packet to A on f1 starting at t0. Node C switches to f1 at
t2 with t0 < t2 < t1. Node C will sense the channel busy
and defer from transmission, thus losing an opportunity to
operate in parallel with the B → A transmission. Prior MMAC
protocols solve the exposed terminal problem using additional
radios and/or dedicated control channels [7], [10].
Destination discovery and load balancing: In a multi-
channel setting, a destination must be discoverable by a can-
didate sender. Moreover, parallel communications must be
distributed over all channels to balance the traffic load and
alleviate contention. In DCC and split-phase MMAC proto-
cols, these two functions are performed by exchanging control
messages, thus decreasing the spectral efficiency. Rendezvous
protocols incur less overhead for destination discovery, since
a destination’s hopping sequence can be known a priori.
However, these protocols do not combat multi-channel hidden
terminals. Moreover, in some designs, an initial discovery
delay is incurred until the sender’s and destination’s hopping
sequences overlap [2]. In FD-MMAC, destination discovery and
load balancing are achieved without the exchange of control
information. Nodes independently switch to idle channels by
tracking the state of each channel.
III. SYSTEM MODEL
Network model: We consider a wireless network that op-
erates over a set of orthogonal channels, denoted by F =
{f1, f2, …, fn}. For simplicity, we assume that all channels
have the same bandwidth and propagation characteristics.
Nodes are equipped with a single radio transceiver and are
assumed to be time-synchronized to a common slotted sys-
tem. Time synchronization can be achieved using out-of-band
solutions such as GPS [20], or any of the readily available in-
band methods [21]. We note that time-slotted synchronization
is not a necessary FD-MMAC requirement. It is assumed here
to facilitate legacy operations used by FD-MMAC, such as
the slotted CSMA algorithm. However, FD-MMAC can also
operate in an asynchronous mode.
SIS and signal correlation: Nodes can operate in FD
mode by applying a combination of analog and digital SIS
techniques [11]–[14]. In FD mode, a node can receive while
simultaneously transmitting over the same channel. Moreover,
Fig. 2. Detecting a known bit pattern P when two packets collide using the
signal correlation technique.
nodes can apply signal correlation techniques for detecting
the transmission of known bit patterns. These techniques are
common in frame detection, even in the presence of collisions
[15]. The concept of signal correlation is shown in Fig. 2.
Consider the concurrent reception of packets PA and PB at C.
Node C is interested in detecting whether PB = P, where P is
a known bit pattern. Let the sampled signal representing P be
L samples long. Node C computes the signal correlation value
between PA + PB + w and P (w denotes the noise component
at the receiver) by aligning the L samples of P with the first
L samples of PA + PB + w. It then shifts the alignment of
P by one sample and recomputes the correlation until the end
of PA + PB + w is reached. Formally, let x[i] denote the i-th sample of P and y[j] the j-th sample of the received signal. The correlation value at the j-th sample position of y is:

C[j] = Σ_{i=1}^{L} x∗[i] y[j + i],    (1)
where x∗[i] is the complex conjugate of x[i]. The correlation
value will peak when P is aligned with PB. Using this
correlation method, node C can identify if P is transmitted,
despite the concurrent transmission of PA. In practice, node
C must compensate C[j] for the frequency offset of B. The
frequency offset can be estimated in advance from prior packet
exchanges between B and C. A limitation of signal correlation
is that the known bit pattern has to exhibit desirable cross-
correlation properties.
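Equation (1) is a plain sliding complex correlation, which can be sketched with NumPy as follows (the function name is ours; in practice the frequency-offset compensation mentioned above would be applied first):

```python
import numpy as np

def correlate_pattern(rx, pattern):
    # Eq. (1): C[j] = sum_i conj(x[i]) * y[j + i], slid over the
    # received samples. |C[j]| peaks where the known pattern aligns
    # with its copy inside the (possibly colliding) received signal.
    L = len(pattern)
    return np.array([np.vdot(pattern, rx[j:j + L])   # vdot conjugates its first argument
                     for j in range(len(rx) - L + 1)])
```

Taking `argmax(|C|)` then locates the pattern even under a concurrent transmission, as in the PA + PB + w scenario of Fig. 2.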
IV. FD CARRIER SENSING
The carrier sensing function can be extended to the receiver’s
collision domain when that receiver operates in FD mode [12],
[22]. We refer to this mechanism as FD carrier sensing. In
FD-MMAC, we improve FD carrier sensing by integrating a
suite of PHY-layer techniques. Our techniques extend beyond
the estimation of the carrier state (idle or busy) and determine
a node’s operational state relative to an ongoing transmission.
The state information is used to create transmission opportu-
nities for exposed terminals, avoid collisions caused by hidden
terminals, and discover the resident channel of a destination.
Operation in FD mode: The FD carrier sensing mechanism
is shown in Fig. 3. A sender A initiates the transmission of PA
to B. Node B decodes the MAC header of PA and determines
it is the destination. Node B transmits a beacon packet BCNB
Table 1: Region Classification Rules

Case | BCN decoded | EVM < γ_EVM | RSS < γ_RSS | Region
C1   | No          | Yes         | Yes         | TO
C2   | No          | Yes         | No          | CO
C3   | No          | No          | –           | CO
C4   | Yes         | –           | –           | RO
Fig. 3. The three regions for a node C relative to a data transmission A → B.
while receiving PA by operating in FD mode. This mechanism
was demonstrated in [22] for a single channel MAC. Node
A receives BCNB by also operating in FD mode. With the
reception of BCNB, node A verifies that B is receiving PA
and continues the transmission of PA. Lack of a BCN reply
implies either that B is unavailable (B is at another channel
or a hidden terminal to another transmission) or that the MAC
header of PA got corrupted. Node A uses the lack of BCNB as
an early collision detection mechanism and aborts the further
transmission of PA.
Generally, a data packet P is expected to be much longer than
a BCN packet. To account for this difference, BCN packets are
transmitted back-to-back until the reception of P is completed.
The reception ending time te is known to the destination based
on the network allocation vector (NAV) included in P ’s MAC
header. The BCN contains the destination’s id, the time slot
tACK at which the ACK transmission is to be completed, and
a CRC code. If the reception of P is successful, the destination
replies with an acknowledgement (ACK).
Operation State Classification: To combat multi-channel
hidden terminals and enable multi-channel exposed terminals,
nodes that sense channel activity perform region classification
to determine their operation state. We divide the collision
domains of A and B into the three regions shown in Fig. 3: (a) the receiver-only (RO) region, (b) the collision region (CO),
and (c) the transmitter-only (TO) region. Referring to Fig. 3, a
node C can determine its region using the following rules.
1) If C can decode BCNB, it infers that it is in the RO
region (hidden terminal).
2) If C cannot decode the received signal due to the collision
of PA with BCNB, it infers it is in the CO region.
3) If C can decode PA, it infers that it is in the TO region
(exposed terminal).

IEEE INFOCOM 2014 – IEEE Conference on Computer Communications
If C concludes that it is located in the CO/RO regions, it
defers from transmission to prevent a collision at B. Otherwise,
it explores transmission opportunities as an exposed terminal.
Practical Issues: Several practical issues complicate the
proposed region classification rules. First, when C is in the
TO region (position C1 in Fig. 3), it cannot verify the correct
decoding of PA until PA’s transmission is completed and the
CRC code is checked. Similarly, if C switches to a busy channel
in the middle of PA’s transmission, the CRC code cannot
be checked. Both these scenarios eliminate exposed terminal
transmission opportunities. To evaluate the decodability of PA
before the reception of the CRC code, node C computes the
error vector magnitude (EV M) on the received symbols. The
RMS EVM value (dB) is given by [23]:

$$\mathrm{EVM}_{\mathrm{RMS}}\,(\mathrm{dB}) = 20 \log \sqrt{ \frac{ \frac{1}{n} \sum_{k=1}^{n} \left| s^t_k - s^r_k \right|^2 }{ \frac{1}{M} \sum_{i=1}^{M} \left| s_i \right|^2 } } \;, \qquad (2)$$
where $s^t_k$ is the $k$-th transmitted symbol, $s^r_k$ is the $k$-th
received symbol, $n$ is the window size (in symbols) over which
the EVM is computed, $s_i$ is the $i$-th constellation symbol, and
$M$ is the number of constellation symbols. The EVM serves as
a measure of the signal quality and is strongly correlated with
the bit error rate [23]. It characterizes the signal distortion due to
impairments of the wireless channel and undesired interference.
Note that the $s^t_k$'s corresponding to the received $s^r_k$'s are
not known to C for arbitrary packets. To compute the EVM
using formula (2), node C selects those $s^t_k$'s in the constellation
diagram that are closest to the received $s^r_k$'s. This is because
when a packet is correctly decoded, the received symbols are
expected to be closest to the actual transmitted symbols (hence,
the correct decoding). On the other hand, in a collision
scenario, the received symbols have a large distance from the
ideal symbols, yielding a larger EVM.
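To make the computation concrete, the following sketch evaluates formula (2) for a window of QPSK symbols using the nearest-constellation-point mapping described above. The constellation normalization, window length, and noise levels are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# Sketch of the RMS EVM computation of Eq. (2) for QPSK, using the
# nearest-constellation-point rule to stand in for the unknown
# transmitted symbols. Symbol values and noise levels are illustrative.

CONSTELLATION = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)  # QPSK, unit power

def evm_rms_db(received: np.ndarray) -> float:
    # Map each received symbol to the closest ideal constellation point.
    dists = np.abs(received[:, None] - CONSTELLATION[None, :])
    s_t = CONSTELLATION[np.argmin(dists, axis=1)]
    # Numerator: mean-square error over the n-symbol window.
    err_power = np.mean(np.abs(s_t - received) ** 2)
    # Denominator: mean power of the M constellation symbols.
    ref_power = np.mean(np.abs(CONSTELLATION) ** 2)
    return 20 * np.log10(np.sqrt(err_power / ref_power))

# Clean symbols yield a very low EVM; symbols buried in interference
# (e.g., a colliding BCN) yield a much higher EVM.
rng = np.random.default_rng(0)
clean = CONSTELLATION[rng.integers(0, 4, 200)] + 0.01 * rng.standard_normal(200)
noisy = clean + 0.8 * rng.standard_normal(200)
print(evm_rms_db(clean) < evm_rms_db(noisy))  # -> True
```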
In our classification, we compute the EV M over the duration
of two BCN packets. This is to differentiate between the TO
and the RO regions, where nodes are expected to have a lower
EV M compared with the CO region. If a node C is located in
the RO region, it is likely to decode at least one BCN packet
within two BCN packet durations (recall that C can switch
to a busy channel at any time). Otherwise, if C is in the TO
region, it will not decode a BCN packet, but will have a low
EV M value. For the EV M classification rule, the EV M at
C is compared with a threshold γEV M .
Low EV M values and undecodability of a BCN packet can
also be recorded due to the capture effect [24], when C is in
the CO region but very close to A (position C2 in Fig. 3). In
this case, a potential transmission by C will cause a collision
at B. To prevent this collision, we incorporate received signal
strength (RSS) measurements. If the RSS measurement at C is
beyond a threshold γRSS, node C concludes that it is in the
CO region, despite having a low EV M value. Finally, if C is
in the CO region, but can decode the BCN due to its proximity
to B, for all practical purposes we allow C to infer that it is in
Fig. 4. (a) Combating the multi-channel hidden terminal problem, (b) exposed
terminal operation. Transmission C → D occurs in parallel with transmission
A → B on the same channel.
the RO region. This is because for both the CO and RO regions,
C will defer from transmission. The region classification rules
used by FD-MMAC are summarized in Table 1. In Section
VII, we perform testbed experiments for determining γEV M
and γRSS based on measurements at locations C1 . . . C4.
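The classification rules of Table 1 can be condensed into a small decision function. The sketch below assumes the threshold values reported in Section VII; the function signature and inputs are ours, not the paper's implementation:

```python
# Sketch of the FD-MMAC region classification (Table 1).
# Thresholds follow the values reported in Section VII; the function
# and its inputs are illustrative, not taken from the paper's code.

GAMMA_EVM = -18.0  # dB, EVM threshold
GAMMA_RSS = 1.0    # dBm, RSS threshold

def classify_region(bcn_decodable: bool, evm_db: float, rss_dbm: float) -> str:
    """Return the region ('RO', 'CO', or 'TO') of a node C that
    senses an ongoing transmission A -> B."""
    if bcn_decodable:
        # Row C4: decoding a BCN places C in the receiver-only region.
        return "RO"
    if evm_db >= GAMMA_EVM:
        # Row C3: high EVM indicates a collision of P with the BCN at C.
        return "CO"
    if rss_dbm >= GAMMA_RSS:
        # Row C2: low EVM but strong RSS -> capture effect near A.
        return "CO"
    # Row C1: low EVM, weak RSS, no decodable BCN -> transmitter-only.
    return "TO"

# A node in the RO/CO regions defers; a TO node may transmit.
print(classify_region(False, -25.0, -3.0))  # prints "TO" (exposed terminal)
print(classify_region(True, -25.0, -3.0))   # prints "RO" (hidden terminal)
```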
V. COMBATING HIDDEN AND EXPOSED TERMINALS
FD carrier sensing eliminates the multi-channel hidden ter-
minal problem. We illustrate this in Fig. 4(a) for the topology
of Fig. 3, in which C is a hidden terminal to A. Node A
transmits PA to B over f1 at time t0. Node B decodes the
PHY and MAC headers and infers that it is the destination.
Node B replies with BCNB that is repeated for the duration of
PA, which terminates at t1. Node C switches to f1 at t2 with
t0 < t2 < t1. First, C senses f1 to be busy due to the BCNB
transmissions. Second, C attempts to decode a BCN for two
BCN durations. By decoding BCNB, node C infers it is in the
RO region and defers from transmission. If BCNB cannot be
decoded, the EV M is expected to be high and hence, C will
infer it is in the CO region and defer from transmission.
Early collision detection: A collision due to hidden termi-
nals is still possible during the transmission of the PHY and
MAC headers of P . Under a collision scenario, the destination
will be unable to decode the MAC header and will not transmit
a BCN packet. We use the absence of a BCN packet as an early
collision detection mechanism. If the sender does not receive
a BCN reply, it assumes that the attempted data transmission
has collided or the destination is unavailable. The sender aborts
further transmission of P and attempts retransmission without
waiting for the expiration of the ACK timer.
Enabling Exposed Terminal Transmissions: An exposed
terminal node C located in the TO region of an ongoing
transmission A → B will attempt to communicate PC to a
candidate destination D. If D can decode the MAC header
of PC , it will respond with BCND packets by operating in
FD mode. Node C will continue the transmission of PC if it
detects BCND, and will abort otherwise. The destination D
will not be able to respond with BCND if one of the following
occurs: (a) D is in the collision domain of another transmission
and hence, cannot decode the MAC header of PC or, (b) D
Fig. 5. (a) The state diagram of an FD-MMAC destination, (b) the CST table for node E, and (c) the state diagram of an FD-MMAC sender.
resides on another channel. The exposed terminal operation for
transmissions A → B and C → D is shown in Fig. 4(b).
Receiving BCN/ACK packets in the presence of exposed
terminals: Exposed terminal transmissions create two funda-
mental problems. First, nodes A and C cannot decode BCNB
and BCND, respectively, due to the interference they cause to
each other. Similarly, nodes A and C cannot decode ACKB and
ACKD, respectively, due to the interfering transmissions of
PC and PA. To enable the concurrent operation of A → B and
C → D, we use the signal correlation technique for detecting
known bit patterns [15].
Node C applies the signal correlation technique to detect
BCND and ACKD when PA is concurrently transmitted. Sim-
ilarly, node A applies the signal correlation technique to detect
BCNB and ACKB when PC is concurrently transmitted. Note
that a node is aware of the exact composition of the BCN
(ACK) packet and the approximate time that a BCN (ACK) is
expected. Hence, it can limit the signal correlation within only
a few sample shifts. One limitation here is that the BCN (ACK)
has to exhibit desirable cross-correlation properties. Therefore,
the BCN (ACK) is hashed (except the PHY header) with a
uniform hash function to produce a random output.
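A minimal sketch of the detection step, assuming a known pseudo-random BCN pattern and a simple baseband model (the waveforms, offsets, and threshold are illustrative; the paper's correlator operates on the actual modulated samples):

```python
import numpy as np

# Sketch of detecting a known BCN bit pattern inside an interfered
# signal via cross-correlation, in the spirit of [15]. The waveforms
# and the arrival offset are illustrative assumptions.

rng = np.random.default_rng(1)
bcn = rng.choice([-1.0, 1.0], size=64)      # known pseudo-random BCN pattern
interference = rng.standard_normal(1000)    # concurrent data transmission
received = interference.copy()
received[400:464] += bcn                    # BCN arrives at sample offset 400

# Slide the known pattern over the received samples; the normalized
# correlation shows a sharp peak at the BCN position, which the node
# compares against a detection threshold.
corr = np.correlate(received, bcn, mode="valid") / (np.linalg.norm(bcn) ** 2)
peak = int(np.argmax(np.abs(corr)))
print(peak)  # -> 400
```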
VI. THE FD-MMAC PROTOCOL
FD-MMAC is a contention-based, time-slotted protocol
based on CSMA/CA. To improve its spectral efficiency, FD-
MMAC eliminates the message overhead related to virtual
carrier sensing, destination discovery, and channel negotiation.
The destination and sender state diagrams are shown in Fig. 5.
A. Destination Operation
When a node’s transmission queue is empty, it operates as a
destination. A destination selects a resident channel such that it
can be discovered by candidate senders. Referring to the state
diagram of Fig. 5(a), a destination operates as follows.
Sense state: In the “Sense” state, the destination contin-
uously senses the resident channel. If the resident channel
becomes busy, the destination transitions to the “Decode” state.
Decode state: In the “Decode” state, the destination attempts
to decode the received signal. It transitions to the “FD” state
if the MAC header is successfully decoded, the receiving node
is the intended destination, and it is available for reception.
Otherwise, it transitions to the “Switch” state.
FD state: In the “FD” state, the destination operates in
FD mode. Based on the MAC header of P , the destination
determines the tACK and the number of BCN packets that
need to be transmitted. Then, it transmits BCN packets until
the reception of P is completed. The destination checks the
CRC code of P . If P is successfully received, it transitions to
the “ACK” state. Otherwise, it returns to the “Sense” state.
ACK state: After a successful packet reception, the destina-
tion replies with an ACK and returns to the “Sense” state.
Switch state: In the “Switch” state, the destination au-
tonomously determines its resident channel. This decision is
based on a channel state table (CST) that records the expected
time that each channel becomes idle (idle time). The CST is
updated in the following way:
1) If the resident channel fi is idle, set the idle time for fi
to the current slot tcurr.
2) If the resident channel fi is busy and the destination is in
the RO region (BCN is decodable), set the idle time for
fi to tACK (contained in the BCN).
3) If the resident channel fi is busy and the destination is in
the CO/TO region (BCN not decodable), set the idle time
for fi to tcurr + T_MTU, where T_MTU is the transmission
duration of the maximum transmission unit (MTU) packet
plus the corresponding ACK.
After the CST update, the destination switches to the channel
with the earliest idle time. In the case of ties, the current
resident channel is preferred to avoid unnecessary channel
switches. Otherwise, ties are broken arbitrarily. The proposed
switching mechanism achieves several desirable properties.
First, a destination waits for a candidate sender on the channel
with the earliest idle time. This facilitates destination discovery,
as the sender will also switch to the channel with the earliest
idle time. Second, load balancing is indirectly achieved, as idle
destinations avoid busy channels. Both properties are achieved
without exchanging control messages.
As an example, consider the topology of Fig. 5(b). Assume
that destination E resides on f1. Initially, E sets the idle time
for all channels to tcurr. When the A→B transmission occupies
f1, node E decodes BCNB because it is a hidden terminal to
A. Node E updates the idle time for f1 to tACK and switches
to f2, because f2 has the earliest idle time (ties are broken
arbitrarily). Assume that transmission C→D is ongoing on f2
when E switches to f2. Node E cannot decode BCND since it
is in the TO region. Node E uses the worst-case estimate
for the idle time of f2 and sets the idle time to tcurr + T_MTU.
It then switches to f3 which is currently idle.
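The destination's CST update and switching rules can be sketched as follows; the time units, table layout, and helper names are assumptions made for illustration:

```python
# Sketch of the destination's CST update and channel-switch decision
# (Section VI-A). Timing values and the dictionary layout are
# illustrative assumptions, not taken from the paper's implementation.

T_MTU = 10.0  # duration of an MTU packet plus its ACK (in slots)

def update_cst(cst, channel, busy, bcn_decodable, t_curr, t_ack=None):
    """Apply the three CST update rules for the resident channel."""
    if not busy:
        cst[channel] = t_curr            # rule 1: channel is idle now
    elif bcn_decodable:
        cst[channel] = t_ack             # rule 2: RO region, idle at t_ACK
    else:
        cst[channel] = t_curr + T_MTU    # rule 3: worst-case estimate

def next_channel(cst, resident):
    """Switch to the channel with the earliest idle time; prefer the
    resident channel on ties to avoid unnecessary switches."""
    earliest = min(cst.values())
    if cst[resident] == earliest:
        return resident
    return min(cst, key=cst.get)

# Example mirroring Fig. 5(b): E resides on f1 and overhears BCN_B.
cst = {"f1": 0.0, "f2": 0.0, "f3": 0.0}
update_cst(cst, "f1", busy=True, bcn_decodable=True, t_curr=0.0, t_ack=5.0)
print(next_channel(cst, "f1"))  # -> f2 (f2 and f3 tie; min() picks f2)
```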
Fig. 6. Two operational examples of FD-MMAC.
B. Sender Operation
In FD-MMAC, we adapt the CSMA backoff mechanism
to the multi-channel environment to reduce packet delay and
achieve fairness. Referring to the state diagram of Fig. 5(c), a
sender operates as follows.
Sense state: In the “Sense” state, the sender senses its
resident channel fi. If fi is idle, it transitions to the “Backoff”
state. If fi is busy, it classifies its operation state using the
region classification rules of Section IV. If the sender is in the
TO region (exposed terminal), it transitions to the “Backoff”
state. Otherwise, it transitions to the “Switch” state.
Backoff state: In the “Backoff” state, the sender selects a
backoff value β for a packet P , by using the following rules:
1) In the first transition to the “Backoff” state for P , the
sender draws β uniformly from [0, cw0], where cw0 is
the minimum contention window.
2) In any following transition from the “Sense” state, the
sender retains the current β value (backoff is resumed
from the current value).
3) In a transition from the “Wait ACK” state, the sender
doubles the contention window and draws β uniformly
(the contention window is capped at cwmax).
In the “Backoff” state, the sender decrements β by one unit
with every “idle” slot. Here, a slot is assumed to be idle if: (a)
no channel activity is detected, or (b) the channel is busy but the
sender is in the TO region. When β = 0, the sender transitions
to the “Transmit” state. If the channel becomes busy before
β = 0 (and the sender is not in the TO region), the sender
transitions to the “Switch” state and freezes β.
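The three backoff rules can be sketched as a small state object. The window sizes (cw0 = 15, cwmax = 1023) and the 802.11-style doubling are our assumptions, since the paper does not state concrete values:

```python
import random

# Sketch of the FD-MMAC sender backoff rules (Section VI-B). The
# window sizes and the event encoding are illustrative assumptions.

CW0, CW_MAX = 15, 1023

class Backoff:
    def __init__(self):
        self.cw = CW0
        self.beta = None

    def enter(self, from_state: str):
        if self.beta is None:              # rule 1: first entry for packet P
            self.beta = random.randint(0, self.cw)
        elif from_state == "wait_ack":     # rule 3: failed delivery,
            self.cw = min(2 * self.cw + 1, CW_MAX)  # double the window
            self.beta = random.randint(0, self.cw)
        # rule 2: re-entry from "sense" resumes the current beta unchanged

    def idle_slot(self) -> bool:
        """Decrement beta on an idle slot; True when the sender may
        transition to the 'Transmit' state."""
        self.beta = max(self.beta - 1, 0)
        return self.beta == 0

random.seed(0)
b = Backoff()
b.enter("sense")      # rule 1: draw beta uniformly from [0, 15]
b.enter("wait_ack")   # rule 3: contention window doubles to 31
```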
Transmit state: In the “Transmit” state, the sender initiates
the transmission of P . If the destination responds with a BCN
packet, the sender continues the transmission of P . With the
completion of P's transmission, the sender transitions to the
"Wait ACK" state. If a BCN is not detected, the sender aborts
the transmission of P and transitions to the “Switch” state.
Wait ACK state: With the completion of P ’s transmission,
the sender waits for an ACK by the destination. The sender
transitions to the “Backoff” state if an ACK is not received
by the expiration of the ACK timer, without transitioning to
the “Switch” state. This is because the sender is aware that the
destination resides on the current channel due to the reception
of the BCN during the “Transmit” state. If the ACK reception
is successful, the sender transitions to the “Switch” state.
Switch state: In the “Switch” state, the sender performs two
operations. First, it updates the CST information and second, it
decides on the next channel using the same switching rules as
the destination. The CST is updated using the following rules:
1) If the sender is in the RO region of a transmission on fi
(BCN is decodable), it sets the idle time of fi to tACK.
2) If the sender is in the CO/TO region of a transmission
on fi (BCN is not decodable), it sets the idle time of fi
to tcurr + T_MTU.
3) If the sender transmitted a packet P on fi but did not
receive a BCN response, it sets the idle time of fi to
tcurr + T_MTU. This update leads to a channel switch that
continues the destination discovery process.
Once the CST has been updated, the sender switches to the
channel with the earliest idle time. Ties are broken arbitrarily.
C. FD-MMAC Operational examples
In Fig. 6, we present two operational examples for FD-
MMAC. In the example of Fig. 6(a), node C has a packet for
D while being in the TO region of the A → B transmission.
Node C determines that it is in the TO region and can operate
as an exposed terminal. Node C transitions to the “Backoff”
state, selects β, and starts the backoff countdown. When β = 0,
node C transitions to the “Transmit” state. Node C transmits
PC and receives BCND from D. Node C detects BCND using
the signal correlation technique and continues the transmission
of PC . Upon termination of the PA transmission, B transmits
ACKB which is detected at A using the signal correlation
technique. Upon termination of the PC transmission, D replies
with ACKD which is decodable at C.
In the example of Fig. 6(b), node A communicates with B at
f1. Node D switches from f1 to f2 to be available for packet
reception. Node C, who resides on f1, has a packet PC for D.
Initially, D is in the “Sense” state. Since f1 is idle from C’s
perspective, C transitions to the “Backoff” state and selects a
β. When β = 0, node C transitions to the “Transmit” state and
starts the transmission of PC . Because D resides on f2, node
C does not receive BCND. Node C transitions to the “Switch”
state and updates the idle time for f1 to tcurr + T_MTU. Based
on the CST entries, C switches to f2 which has the earliest idle
time and transitions to the "Sense" state. Since f2 is idle, C
transitions to the "Backoff" state for a second time and retains
β = 0. It then completes the communication with D.
Fig. 7. (a) The EV M CDF at the RO, CO, and TO regions, (b) average RSS at different positions, (c) normalized correlation values for 10 BCN packets.
Fig. 8. The network topology used in the simulation experiments.
VII. TESTBED EXPERIMENTS AND SIMULATIONS
In this section, we experimentally verify the PHY-layer
techniques used by FD-MMAC and compare its performance
with prior art via packet-level simulations.
A. Experimental Evaluation
Testbed: We performed our experiments on NI USRP
devices [25], over the 2.4 GHz band. The signal processing
blocks were implemented in LabVIEW [25]. Transmissions were
modulated using the Quadrature Phase Shift Keying (QPSK)
scheme at a transmission rate of 2 Mbps. The radios applied
phase/frequency offset correction and time synchronization
using 88-bit preamble sequences.
Operation State Classification: To validate the operation
state classification rules presented in Section IV, we replicated
the topology of Fig. 3. Nodes A and B were placed 7ft
apart and transmitted concurrently. Node A transmitted 100 P
packets carrying a 500-bit payload, while node B transmitted
500 BCN packets with a 50-bit payload. We placed node C at
positions C1, . . . , C4 of Fig. 3 and measured the EV M, RSS,
and the decodability of BCNs.
Fig. 7(a) shows the CDF of the EV M for the RO/TO
region (positions C1 and C4), the CO region (position C3),
and position C2 in the CO region. The RO and TO curves were
averaged since they yielded very similar values. We observe that
the EV M in the CO region (position C3) is significantly higher
compared with all other locations due to the collision of P
with the BCN. The difference allows us to select the threshold
γEV M for the EV M classification rule. In our experiments,
we set γEV M = −18 dB to achieve a false positive rate of 2%
(EV M < γEV M when in the CO region) and a false negative
rate of 4% (EV M ≥ γEV M when in the TO/RO region).
For position C2, EVM < γEVM due to the capture effect [24].
To avoid classifying a node located at C2 as an exposed
terminal, we use the mean RSS value. Fig. 7(b) shows the
mean RSS value for different receiver locations, averaged
over the experiment duration. Nodes A and B were placed at
positions 0ft and 7ft, respectively. For the RSS classification
rule, we set γRSS to 1dBm. We observe that for location C2
(within 2ft from A), C has an RSS value significantly higher
than γRSS, and therefore infers that it is located in the CO
region, despite having an EV M < γEV M . Also, for exposed
terminal locations (more than 5ft from A), the EV M and RSS
are below γEV M and γRSS, respectively.
Finally, we measured the fraction of BCN packets that can
be decoded by C over ten repeated experiments (500 BCN
packets each run). We recorded zero decodable BCN packets
at locations C1, C2, and C3, while 100% of the BCN packets
were recovered at position C4. We also placed C in the vicinity
of B but within the CO region. For this position, C was able to
decode a large fraction of BCN packets due to the capture effect
and falsely assumed it was in the RO region. However, this error
does not impact the correct FD-MMAC operation because, for
all practical purposes, a node in the RO region defers from
transmission similar to a node in the CO region.
Signal Correlation: We experimentally evaluated the signal
correlation technique for the exposed terminal topology of Fig.
4(b). Node A transmitted 500-bit long data packets contin-
uously while node D transmitted 50-bit long BCN packets.
Node C applied the signal correlation method to detect BCND
packets. Fig. 7(c) shows the normalized correlation [25] for a
snapshot of ten BCN packets, when C is placed between A and
D, at a 7ft distance from each. The correlation peaks correspond
to the BCN transmissions and can be clearly distinguished.
In our experiments, we set the detection threshold to 0.005.
We placed C in three discrete positions between A and D
to replicate the exposed terminal topology and measured the
percentage of BCND packets that can be detected by correlating
the received signal with the known BCN pattern (preamble +
payload). Node D transmitted 1,000 BCN packets. The results
are shown in Table 2. Distances are measured from node D.
Table 2: Fraction of Detected BCN Packets
Distance from D 3 ft 5 ft 7 ft
Percentage 100% 99% 94%
Fig. 9. (a),(b),(c) Aggregate T of FD-MMAC, SP-MMAC, and DCC MMAC when 3, 6, 9, and 12 flows are within same collision domain, (d),(e) per-flow
average T for FD-MMAC and SP-MMAC when 3, 6, 9, and 12 flows are within same collision domain, (f) aggregate T in the presence of an exposed terminal,
(g) aggregate T in the presence of one exposed and one hidden terminal, (h) average delay for transmitting a batch of 100 data packets.
Table 2 shows that a node in the collision domain of two
transmitters can reliably detect a packet with known pseudo-
random pattern using the signal correlation.
B. Simulated Experiments
Simulation Setup: We performed packet-level simula-
tions using OPNET [26]. In our setup, multiple sender-
destination pairs (flows) were organized in the topology of Fig.
8 and shared three orthogonal channels with 2Mbps capacity.
For each sender, the arrival process at the MAC layer followed
the Poisson distribution with parameter λ packets per second.
Each packet was 512 bytes long. The channel switching delay
was set to 20 μsec. Simulations were run for 40 sec and results
were averaged over 20 simulation runs.
Throughput (T ): In the first set of experiments, we com-
pared the FD-MMAC throughput with the throughput of the
split-phase MMAC (SP-MMAC) in [3] and of the DCC MMAC
in [6]. The control and data phase of SP-MMAC were set
to 20ms and 80ms, respectively [3]. Fig. 9(a), 9(b), and 9(c)
compare the aggregate throughput for a varying number of
contending flows, co-located in the same collision domain
(senders SE and SH were idle). For low λ’s, all protocols
achieve similar throughput due to low contention. However, in
saturation conditions, FD-MMAC achieves significantly higher
aggregate throughput compared with the other protocols, due
to the elimination of signaling for channel negotiation and
virtual carrier sensing. Fig. 9(d) and 9(e) show the average per-
flow throughput of FD-MMAC and SP-MMAC. FD-MMAC
significantly outperforms SP-MMAC in saturation conditions.
In the second set of experiments, we placed five flows
S1 → D1,. . . ,S5 → D5 in the same collision domain, while
SE operated as an exposed terminal to S1, . . . , S5. For FD-
MMAC, we considered two scenarios. In the first scenario,
BCN and ACK packets were perfectly detected using the signal
correlation technique. In the second scenario, 5% of BCNs
and 5% of ACKs were undetectable by the intended recipients.
From Fig. 9(f), we observe that in saturation conditions, FD-
MMAC achieves an aggregate throughput that is 65% higher
compared with SP-MMAC and 75% higher compared with
DCC MMAC under ideal operating conditions. The throughput
improvement drops to 49% and 58%, respectively, when 5% of
BCN and ACK packets are assumed lost.
The superior performance of FD-MMAC is due to the
parallel operation of SE with any of the S1, . . . , S5. In fact, the
individual throughput of SE was 51% higher than the through-
put of S1, . . . , S5 (1.12 Mbps for SE vs. 0.74Mbps for each
S1, . . . , S5) because SE did not contend with any other sender.
On the other hand, for SP-MMAC and DCC MMAC, SE
operated in the same collision domain with S1, . . . , S5 and its
throughput dropped to 0.51Mbps and 0.68Mbps, respectively.
Further, we evaluated the concurrent existence of exposed and
hidden terminals. Five flows were placed in the same collision
domain, SE operated as an exposed terminal and SH operated
as a hidden terminal. Fig. 9(g) shows that FD-MMAC achieves
43% and 38% higher throughput in saturation conditions com-
pared with SP-MMAC and DCC MMAC, respectively.
Delay: In the third set of experiments, we evaluated the
packet delay for bursty packet arrivals. We loaded the trans-
mission queue of each sender with 100 packets and measured
the delay until all 100 packets were delivered to their respective
destinations. All flows were within same collision domain. Fig.
9(h) shows the average delay as a function of the number of
competing flows. We observe that FD-MMAC reduces the delay
due to the elimination of the control message exchange before
packet transmissions. The delay increases almost linearly with
the number of contending flows for all protocols, because the
available channels are shared by more flows in a fair manner.
Fairness and Load Balancing: We also examined the
fairness and load balancing properties of FD-MMAC under
different traffic load conditions. To evaluate fairness, we use
Jain's fairness index (FI):

$$FI = \frac{\left( \sum_{i=1}^{n} T_i \right)^2}{n \sum_{i=1}^{n} T_i^2}, \qquad (3)$$

where $T_i$ is the throughput of the $i$-th flow and $n$ is the total
number of flows. The FI varied from 0.97 (for a topology with
an exposed terminal present) to 0.999 (for a topology with six
flows in the same collision domain), indicating that FD-MMAC
achieves fair distribution of resources among competing flows.
The FI is slightly smaller than one in the presence of an
exposed terminal flow, because the exposed terminal node does
not contend with other senders. We also evaluated the traffic
load carried by each channel by computing the Load Balancing
Index (LBI) in saturation conditions:
LBI =
(
∑3
i=1 Tfi)
2
3 × ∑3i=1(Tfi)2 , (4)
where Tfi is the aggregate throughput on channel fi. The
LBI value varied from 0.904 (for a topology with an exposed
terminal) to 1 (for a topology with only three flows). The
LBI value is lower than 1 under an exposed terminal topology
because the same channel is concurrently occupied by two flows
during an exposed terminal operation.
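Eq. (3) can be checked directly against the reported per-flow throughputs. Assuming the five contending flows each achieve the quoted 0.74 Mbps and the exposed terminal achieves 1.12 Mbps, the index evaluates to the reported 0.97:

```python
# Minimal computation of Jain's fairness index, Eq. (3). The
# throughput values below are the per-flow figures quoted in the
# exposed-terminal experiment.

def jain_index(throughputs):
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(t * t for t in throughputs))

print(jain_index([1.0, 1.0, 1.0]))               # -> 1.0 (perfectly fair)
print(round(jain_index([1.12] + [0.74] * 5), 3)) # -> 0.97
```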
VIII. CONCLUSION
We proposed FD-MMAC, a distributed MMAC protocol that
exploits FD communications to coordinate channel access at
low control overhead. FD-MMAC eliminates control signaling
for combating the multi-channel hidden terminal problem,
discovering the resident channel of destinations, and performing
load-balancing. Further, it increases the spatial channel reuse by
enabling the operation of multi-channel exposed terminals. The
FD-MMAC properties are achieved by utilizing an advanced
suite of PHY-layer techniques, including SIS, EVM and RSS
measurements, and signal correlation techniques. These tech-
niques were experimentally validated on the NI USRP testbed
and via simulations.
ACKNOWLEDGMENTS
This research was supported in part by the NSF under grants
CNS-0844111 and CNS-1016943 and ARO grant W911NF-13-
1-0302. Any opinions, findings, conclusions, or recommenda-
tions expressed in this paper are those of the author(s) and do
not necessarily reflect the views of the NSF.
REFERENCES
[1] IEEE 802.11 Working Group. Wireless LAN medium access control
(MAC) and physical layer (PHY) specifications. IEEE 802.11 LAN
Standards 2007, 2007.
[2] P. Bahl, R. Chandra, and J. Dunagan. SSCH: slotted seeded channel
hopping for capacity improvement in IEEE 802.11 ad-hoc wireless
networks. In Proc. of the MOBICOM Conference, pages 216–230, 2004.
[3] J. So and N.H. Vaidya. Multi-channel MAC for ad hoc networks: handling
multi-channel hidden terminals using a single transceiver. In Proc. of the
MOBIHOC Conference, pages 222–233, 2004.
[4] T. Luo, M. Motani, and V. Srinivasan. Cooperative asynchronous
multichannel MAC: Design, analysis, and implementation. IEEE Trans.
on Mobile Computing, 8(3):338–352, 2009.
[5] J. Zhang, G. Zhou, C. Huang, S.H. Son, and J.A. Stankovic. TMMAC:
An energy efficient multi-channel MAC protocol for ad hoc networks. In
Proc. of the ICC Conference, pages 3554–3561, 2007.
[6] S. L. Wu and J. Y. Yang. A novel channel assignment scheme for
improving channel reuse efficiency in multi-channel ad hoc wireless
networks. Computer Communications, 30(17):3416–3424, 2007.
[7] K. H. Almotairi and X. S. Shen. Multichannel medium access control
for ad hoc wireless networks. Wireless Communications and Mobile
Computing, pages 1–14, 2011.
[8] N. Jain, S.R. Das, and A. Nasipuri. A multichannel CSMA MAC protocol
with receiver-based channel selection for multihop wireless networks. In
Proc. of the ICCCN Conference, pages 432–439, 2001.
[9] T. Shu, S. Cui, and M. Krunz. Medium access control for multi-
channel parallel transmission in cognitive radio networks. In Proc. of
the GLOBECOM Conference, pages 1–5, 2006.
[10] X. Xing, K. Liu, and H. Lu. A multichannel MAC protocol to solve exposed
terminal problem in multihop wireless networks. In Proc. of the CCNC
Conference, pages 1–2, 2009.
[11] A. Sahai, G. Patel, and A. Sabharwal. Pushing the limits of full-duplex:
Design and real-time implementation. Technical Report TREE1104, Rice
University, February 2011.
[12] M. Jain, J. Choi, T. Kim, D. Bharadia, S. Seth, K. Srinivasan, P. Levis,
S. Katti, and P. Sinha. Practical, real-time, full duplex wireless. In Proc.
of the MobiCom Conference, pages 301–312, 2011.
[13] B. Radunovic, D. Gunawardena, P. Key, A. Proutiere, N. Singh, V. Balan,
and G. Dejean. Rethinking indoor wireless mesh design: Low power, low
frequency, full-duplex. In IEEE Workshop on Wireless Mesh Networks,
pages 1–6, 2010.
[14] J. Choi, M. Jain, K. Srinivasan, P. Levis, and S. Katti. Achieving single
channel, full duplex wireless communication. In Proc. of the MobiCom
Conference, pages 1–12, 2010.
[15] S. Gollakota and D. Katabi. ZigZag decoding: combating hidden termi-
nals in wireless networks. In Proc. of the ACM SIGCOMM Conference,
pages 159–170, 2008.
[16] S. Liu, L. Lazos, and M. Krunz. Thwarting control-channel jamming
attacks from inside jammers. IEEE Trans. on Mobile Computing,
11(9):1545–1558, 2012.
[17] A. Proano and L. Lazos. Selective jamming attacks in wireless networks.
In Proc. of the ICC Conference, pages 1–6, 2010.
[18] J. Chen, S.T. Sheu, and C.A. Yang. A new multichannel access protocol
for IEEE 802.11 ad hoc wireless LANs. In Proc. of the PIMRC
Conference, volume 3, pages 2291–2296, 2003.
[19] W. So, J. Walrand, and J. Mo. McMAC: a parallel rendezvous multi-
channel MAC protocol. In Proc. of the WCNC Conference, pages 334–
339, 2007.
[20] A. Gupta, X. Lin, and R. Srikant. Low-complexity distributed scheduling
algorithms for wireless networks. IEEE/ACM Trans. on Networking
(TON), 17(6):1846–1859, 2009.
[21] O. Simeone, U. Spagnolini, Y. Bar-Ness, and S. Strogatz. Distributed
synchronization in wireless networks. IEEE Signal Processing Magazine,
25(5):81 –97, 2008.
[22] N. Singh, D. Gunawardena, A. Proutiere, B. Radunovic, H. Balan,
and P. Key. Efficient and fair MAC for wireless networks with self-
interference cancellation. In In Proc. of the WiOpt Symposium, pages
94–101, 2011.
[23] R. A. Shafik, S. Rahman, and R. Islam. On the extended relationships
among EVM, BER and SNR as performance metrics. In Proc. of the
ICECE Conference, pages 408–411, 2006.
[24] J. Lee, W. Kim, S. Lee, D. Jo, J. Ryu, T. Kwon, and Y. Choi. An
experimental study on the capture effect in 802.11a networks. In Proc.
of ACM WiNTECH Conference, pages 19–26, 2007.
[25] NI. National instruments. http://www.ni.com/usrp/, 2013.
[26] OPNET. OPNET technologies inc. http://www.opnet.com, 2009.
A Cluster-Based and On-Demand Routing
Algorithm for Large-Scale Multi-hop Wireless
Sensor Networks
Natale Guzzo1,2(B), Nathalie Mitton2, Pascal Daragon1,
and Arulnambi Nandagoban1
1 TRAXENS SAS, Marseille, France
2 Inria, Villeneuve d’Ascq, France
natale.guzzo@inria.fr
Abstract. Reducing energy consumption and improving the robustness of a Wireless Sensor Network (WSN) are the main requirements of many industrial and research applications. The sensors typically run a routing protocol to deliver their sensing data to a Base Station (BS), which may be far away from the monitored area. Many algorithms proposed in the literature organize routing by clustering the network and by designing election mechanisms in which the cluster-heads are chosen according to the remaining energy, the communication cost and the density of nodes. However, they do not consider connectivity to the BS, assuming that all nodes, or only a few pre-assigned nodes, can communicate with it directly. We believe this assumption is unsuitable for many WSN applications. To tackle this problem we propose CESAR, a multi-hop, energy-efficient routing protocol for large-scale WSNs that includes a new cluster-head selection mechanism aware of both the battery level and the connectivity to the BS. Furthermore, our solution employs a hybrid approach that combines clustering and on-demand techniques to adapt to different dynamic topologies. Simulation results show that our solution outperforms other well-known routing algorithms from the literature in terms of energy consumption and data delivery.
Keywords: Wireless Sensor Networks (WSN) · Distributed clustering ·
Multi-hop routing protocol · On-demand scheme · Base Station (BS)
connectivity · Energy-efficiency
1 Introduction
Wireless Sensor Networks (WSNs) are often composed of a large number of sensors that collaborate to transmit their sensing data to the Base Station (BS) while satisfying requirements such as coverage, robustness, scalability and lifetime. Several solutions addressing the physical, MAC and network layers have been proposed in the literature. Regarding routing protocols, clustering techniques are employed to reduce the message overhead, the overhearing and the interference between the nodes in the network. The benefits of this approach are high scalability, simple routing decisions, and low energy dissipation, obtained by reducing both the data traffic and the overhead caused by the flow of routing information.
© Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2014
N. Mitton et al. (Eds.): ADHOCNETS 2014, LNICST 140, pp. 98–109, 2014.
DOI: 10.1007/978-3-319-13329-4_9
In this paper we present the Cluster-based Energy Saving Affiliation Routing protocol (CESAR), a new multi-hop, energy-efficient algorithm that reduces the energy consumption of a WSN through a scheme with several novel features. The main aspects considered in the design are the energy consumed by the nodes and the data delivery, since our objectives are to build a robust and scalable network and to extend its lifetime as long as possible while fulfilling the application requirements. We show in the following sections that our clustering scheme outperforms the other simulated algorithms.
We survey the related work in Section 2. The main features of CESAR are presented in Section 3. In Section 4, CESAR is evaluated in different simulated scenarios inspired by industrial use cases, and the results are compared with those of other routing algorithms. Finally, Section 5 gives the concluding remarks and outlines directions for future work.
2 Background and Related Work
Several cluster-based protocols have been proposed in recent years to reduce energy consumption and prolong network lifetime in WSNs. They can be classified according to the goals and the approaches employed for cluster formation and cluster-head selection. The main distinction concerns the cluster formation mechanism: we can distinguish centralized algorithms such as PEGASIS [1] and CDC [2] from distributed algorithms like LEACH [3], HEED [4], and DSBCV [5]. Other schemes are based on geographical clustering, such as RCHR [6] and TTDD [7], on concentric clustering, such as CCS [8], or on specific cluster-head election techniques, like BLAC [9]. The algorithm named SECC selects the nodes to add to the clusters according to their energy or their distance from the cluster-head [10]. Below we briefly discuss the features and limitations shared by several of these distributed clustering schemes.
Nevertheless, most of the algorithms studied in the literature suppose that all the nodes can always communicate directly with the BS. We believe this assumption is quite restrictive and unsuitable for many WSN applications, in which the sensors may not be able to reach the BS because of excessive distance or a poor radio environment. Moreover, some schemes such as LEACH and HEED require synchronization between the nodes in order to start the clustering process at the same time. If the nodes are not synchronized, the performance of these two algorithms in terms of energy consumption and data delivery degrades significantly, since the cluster-head selection cannot be performed properly. The solution presented in this paper does not make such an assumption: it treats connectivity to the BS as one of the parameters used in the cluster-head selection scheme.
3 CESAR Algorithm
Our solution targets applications in which the sensors are equipped with a long-range radio module for communication towards the BS (the BS-link) and a short-range radio module for peer-to-peer communication with the other sensor nodes (the P2P-link). The nodes do not know the position of the BS and are assumed to communicate directly with it only when the received signal strength on the BS-link is high enough. Moreover, depending on the radio environment and on their distance from the BS, the nodes do not observe the same BS-link quality. As a result, only some of them may be able to connect to the BS, and they may need different transmission powers to deliver their data. The set of such nodes may also change over time with the variations of the BS-link quality. In terms of energy, transmitting data on the BS-link is much more expensive than transmitting the same amount of data on the P2P-link: we assume that BS-link communication requires a transmission power 10 to 20 times higher than P2P-link communication and incurs a connection delay of approximately one minute.
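The trade-off described above can be sketched with a toy energy model. The numeric constants below are illustrative placeholders, not measured values from the paper; only the 10-20x power ratio and the roughly one-minute connection delay come from the assumptions stated here.

```python
# Illustrative energy model for the dual-radio trade-off described above.
# All constants are hypothetical placeholders, not values from the paper.

P2P_TX_POWER = 1.0          # relative transmit power on the short-range link
BS_TX_POWER = 15.0          # BS-link power, assumed 10-20x the P2P power
BS_CONNECT_TIME = 60.0      # BS-link connection delay (seconds)
TX_TIME_PER_BYTE = 0.001    # airtime per byte (seconds), same for both radios here

def p2p_cost(num_bytes, hops):
    """Energy to relay a payload over `hops` short-range links."""
    return hops * P2P_TX_POWER * num_bytes * TX_TIME_PER_BYTE

def bs_cost(num_bytes):
    """Energy to deliver a payload directly to the BS, including the
    one-off connection overhead."""
    return BS_TX_POWER * (BS_CONNECT_TIME + num_bytes * TX_TIME_PER_BYTE)

# Relaying even several hops is far cheaper than opening a BS connection
# for a small payload, which is why CESAR concentrates BS traffic at a
# few cluster-heads.
assert p2p_cost(4096, hops=5) < bs_cost(4096)
```

Under these assumptions, concentrating uploads at a few well-placed cluster-heads amortizes the expensive BS-link connection over many nodes' data.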
We focus on sensor networks in which the nodes are positioned on a 3-dimensional grid. We believe this model describes different kinds of scenarios well, such as the monitoring of goods in storage areas. CESAR is also suitable for applications in which the nodes move within the sensing area: vehicle tracking in a town, airport surveillance, and fauna monitoring are some examples where it can be employed.
The most significant features of CESAR are the following:
– CESAR is a hybrid algorithm that combines clustering and on-demand approaches to dynamically adapt the behavior of the nodes to topology changes. To this end, it does not force every node into a cluster, yet non-clustered nodes can still get their data to the BS.
– The proposed cluster-head selection mechanism is aware of the connectivity to the BS and of the battery level: only nodes whose BS-link signal strength and battery level exceed prefixed thresholds can become cluster-heads.
– A recovery scheme keeps the routes between the nodes and the cluster-heads alive with low-cost operations.
– A data aggregation scheme can be employed by the members of a cluster before sending data packets to the cluster-head.
In the next paragraphs we explain the cluster-head selection mechanism and the cluster formation scheme in more detail. Before that, we introduce the types of nodes and the routing messages used in the different processes. We defined three types of nodes:
– HEAD: nodes which are able to communicate with the BS and are in charge of creating and managing a cluster by sending announcement messages.
– MEMBER: nodes which have joined a cluster after receiving an announcement transmitted by a HEAD node.
– LOOSE: nodes which have not joined any cluster and are not able to transmit data to the BS.
The routing messages were defined as follows:
– RANN: hop-bounded broadcast messages used by HEAD nodes to build their
clusters.
– REQR: hop-bounded broadcast messages used by LOOSE nodes to discover
a nearby cluster.
– REP: unicast messages used to reply to REQR messages.
– ROFF: broadcast messages used by HEAD nodes to destroy their clusters.
– DATA: unicast messages used by MEMBER nodes to deliver their data.
– ACK: unicast messages used by HEAD nodes to acknowledge DATA packets.
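The message types above can be modeled as one tagged structure with a TTL for the hop-bounded broadcasts. The field names and layout here are illustrative; the paper does not specify a wire format.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical in-memory layout for the CESAR routing messages listed above.
# Field names are illustrative; the paper does not define a wire format.
@dataclass
class Message:
    kind: str                       # 'RANN', 'REQR', 'REP', 'ROFF', 'DATA' or 'ACK'
    src: int                        # originating node id
    ttl: int = 0                    # remaining hops for hop-bounded broadcasts
    metric: Optional[float] = None  # HEAD metric carried by RANN/REP messages
    payload: bytes = b""            # sensing data for DATA messages

def forward(msg: Message) -> Optional[Message]:
    """Decrement the TTL of a hop-bounded broadcast; drop it when exhausted."""
    if msg.kind in ("RANN", "REQR") and msg.ttl > 0:
        return Message(msg.kind, msg.src, msg.ttl - 1, msg.metric, msg.payload)
    return None  # unicast messages are routed, not re-broadcast

rann = Message("RANN", src=7, ttl=5, metric=0.8)
hop = forward(rann)
assert hop is not None and hop.ttl == 4
```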
3.1 The Cluster-Head Selection Mechanism
Every node in the network periodically executes the cluster-head selection process to check whether it meets the requirements to become a HEAD node. As already explained, a node can become a cluster-head only if both its battery level and its BS-link signal strength are above prefixed thresholds. However, these are not the only conditions: there may be other cluster-heads nearby in better condition than the considered node, in which case the latter should become a member of one of their clusters rather than becoming a HEAD itself.
The decision is based on the HEAD metric, defined as a cost that takes into account the battery level and the quality of the BS-link. Hence, the higher the metric, the less likely a node is to be elected as cluster-head. As shown in Algorithm 1, if node u is a MEMBER and meets the requirements to become a HEAD, it checks whether the metric of its cluster-head is more than 1.5 times its own metric (line 5). If so, u becomes a HEAD; otherwise it remains a MEMBER. This rule avoids the election of several HEAD nodes close to each other, reducing the overall number of connections to the BS and thus the global energy consumption.
We defined a second, lower threshold for the BS-link signal strength to avoid frequent cluster-head elections and resignations when this parameter oscillates around the main threshold. A HEAD node may resign in two cases. In the first, the node no longer meets the requirements to be a cluster-head, because it lacks sufficient energy or can no longer communicate directly with the BS. The second occurs when the node receives an announcement from a new HEAD in the neighborhood whose metric is lower than its own, as explained in the next paragraph. When a node resigns as HEAD, it destroys its cluster and, in the second case, becomes a MEMBER of another cluster.
Algorithm 1. CESAR (Cluster-head selection algorithm run on node u)
Parameters:
– connectivity: signal strength measured on the BS-link
– battery: remaining energy
– conn_thr1, conn_thr2: thresholds for the connectivity parameter
– batt_thr: threshold for the battery parameter
1: if ( connectivity(u) > conn_thr1 AND battery(u) > batt_thr ) then
2:   if (u is HEAD) then
3:     Send RANN;
4:   else if (u is MEMBER) then
5:     if (metric(head_of_u) > 1.5 * metric(u)) then
6:       Become HEAD;
7:       Send RANN;
8:     end if
9:   else if (u is LOOSE) then
10:    Become HEAD;
11:    Send RANN;
12:  end if
13: else if ( connectivity(u) > conn_thr2 AND battery(u) > batt_thr ) then
14:  if (u is HEAD) then
15:    Send RANN;
16:  end if
17: else if ( connectivity(u) < conn_thr2 OR battery(u) < batt_thr ) then
18:  if (u is HEAD) then
19:    Become LOOSE;
20:    Send ROFF to destroy the cluster;
21:  end if
22: end if
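A direct transcription of Algorithm 1 into executable form might look as follows. The thresholds reuse the values of Table 1, but the `metric()` cost function is a placeholder of our own (the paper only states that the metric grows as battery and BS-link quality worsen), so this is a sketch, not the authors' implementation.

```python
# Executable sketch of Algorithm 1; metric() is a placeholder cost function.
CONN_THR1, CONN_THR2 = 0.40, 0.30   # election / hysteresis thresholds
BATT_THR = 0.20

def metric(node):
    # Lower is better: assumed cost grows as battery and BS-link quality drop.
    return 1.0 / (node["battery"] * node["connectivity"])

def select(u, head=None):
    """Return (new_role, action) for node u, following Algorithm 1.
    `head` is the state of u's current cluster-head, when u is a MEMBER."""
    conn, batt, role = u["connectivity"], u["battery"], u["role"]
    if conn > CONN_THR1 and batt > BATT_THR:
        if role == "HEAD":
            return "HEAD", "send RANN"
        if role == "MEMBER":
            if head is not None and metric(head) > 1.5 * metric(u):
                return "HEAD", "send RANN"   # current head is markedly worse
            return "MEMBER", None
        return "HEAD", "send RANN"           # LOOSE node in good condition
    if conn > CONN_THR2 and batt > BATT_THR:
        # Hysteresis band: an existing HEAD keeps its cluster alive,
        # but no new HEAD is elected here.
        return (role, "send RANN") if role == "HEAD" else (role, None)
    # Below the lower threshold: a HEAD must resign and destroy its cluster.
    if role == "HEAD":
        return "LOOSE", "send ROFF"
    return role, None

u = {"role": "LOOSE", "connectivity": 0.9, "battery": 0.8}
assert select(u)[0] == "HEAD"
```

The hysteresis band between the two connectivity thresholds is what prevents a HEAD whose signal oscillates around 40% from repeatedly destroying and rebuilding its cluster.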
3.2 Cluster Formation and Destruction
The cluster-heads form the clusters in the network and periodically connect to the BS to deliver the sensing data received from the other nodes. The clusters are built by broadcasting a hop-bounded RANN whose Time To Live (TTL) header field is set to the number of hops the message may traverse before being discarded. In this way, only the nodes within a limited hop-distance of the considered HEAD are recruited into the cluster.
The RANN messages are periodically rebroadcast by the HEAD nodes within their clusters to keep the MEMBER nodes up to date about their metric. If a non-clustered node receives a RANN message from a HEAD node, it immediately becomes a MEMBER of the announced cluster. If it receives RANN messages from several cluster-heads, it compares their metrics and joins the cluster with the lowest one. If a HEAD node receives a RANN message, it checks whether its own metric is more than 1.5 times the metric of the announced HEAD. If this condition holds, the node resigns as HEAD, destroys its cluster and joins the cluster of the HEAD announced in the RANN message. This condition avoids frequent cluster destruction when HEAD nodes are close to each other. After joining a cluster, a MEMBER node stores in its routing table the last hop traversed by the received RANN, giving it a route to the HEAD. As a result, a routing tree is formed inside each cluster between the MEMBER nodes and the cluster-head.
Fig. 1. Cluster formation. The RANN are broadcast by the HEAD nodes to form clusters of different sizes.
The size of each cluster is decided by its HEAD node according to its operating parameters. In this way, CESAR concentrates the collection of the sensing data at the HEAD nodes in the best condition, smartly balancing the energy consumption among them. When a HEAD node resigns, it destroys its cluster by sending a ROFF message to all its members, which leave the cluster and become LOOSE nodes upon reception.
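The RANN-handling rules above can be sketched as a single event handler. The node-state fields and the RANN contents are illustrative assumptions, and the resignation test is written as "own metric more than 1.5 times the announced one", matching the resignation rule of Section 3.1.

```python
# Sketch of how a node might process an incoming RANN announcement.
# Node-state fields and RANN contents are illustrative, not from the paper.
RESIGN_FACTOR = 1.5

def on_rann(node, rann):
    """Handle a RANN carrying {head_id, metric, last_hop, ttl}."""
    if node["role"] == "LOOSE":
        # Join immediately; remember the last hop as the route to the HEAD.
        node.update(role="MEMBER", head=rann["head_id"],
                    head_metric=rann["metric"], next_hop=rann["last_hop"])
    elif node["role"] == "MEMBER":
        # Switch clusters only if the announced HEAD has a lower metric.
        if rann["metric"] < node["head_metric"]:
            node.update(head=rann["head_id"], head_metric=rann["metric"],
                        next_hop=rann["last_hop"])
    elif node["role"] == "HEAD":
        # Resign only if this HEAD is markedly worse than the announced one
        # (assumed reading of the 1.5x resignation condition).
        if node["my_metric"] > RESIGN_FACTOR * rann["metric"]:
            node["role"] = "MEMBER"   # a real node would also send ROFF
            node.update(head=rann["head_id"], head_metric=rann["metric"],
                        next_hop=rann["last_hop"])

n = {"role": "LOOSE"}
on_rann(n, {"head_id": 3, "metric": 2.0, "last_hop": 9, "ttl": 4})
assert n["role"] == "MEMBER" and n["next_hop"] == 9
```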
3.3 The On-demand Scheme
As already explained, the LOOSE nodes have no route to any cluster-head, so they cannot transmit data to the BS. When one of them has data to deliver, it starts a procedure that discovers a nearby cluster. The procedure consists of broadcasting a hop-bounded REQR message, received by the nodes in the neighborhood. When a MEMBER node receives such a message, it replies with a REP message containing information about the status of its cluster's HEAD. Discovery continues until a cluster is found or the maximum hop distance is reached; in the latter case, the process is reinitialized and restarted after a certain time interval.
Fig. 2. On-demand scheme. The non-clustered nodes employ REQR messages to discover a cluster in the neighborhood and find a route to a HEAD node.
For instance, assume that node e2 in Figure 2 needs to send its sensing data to the BS. It broadcasts a REQR message with the TTL set to TTL_MAX. Nodes d1, d2, d3, e1 and e3 are not MEMBER nodes, so they do not reply. Nodes c1, c2 and c3, members of the cluster created by node b3, receive the REQR and reply with REP messages reporting the metric of their cluster-head. Each REP travels back through the nodes traversed by the corresponding REQR. In the figure, we suppose that a REP message is received by node d3, which forwards it to the requesting node e2. The latter stores in its routing table the next hop on the path towards the reported cluster and records the metric of its HEAD. It can then deliver the collected data to the HEAD node b3 via nodes d3 and c3.
If there is more than one cluster nearby, the node may receive several REP messages; it then decides to which cluster to deliver its data by comparing the metrics of the respective cluster-heads.
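The final choice among the REP replies can be sketched as follows; the reply tuples and route-table layout are illustrative assumptions.

```python
# Toy sketch of the on-demand discovery decision: a LOOSE node picks the
# best cluster among the REP replies it received. Structures are illustrative.

def choose_cluster(replies):
    """Given REP replies as (head_id, head_metric, next_hop) tuples,
    return the route towards the cluster-head with the lowest metric,
    or None when no cluster was discovered."""
    if not replies:
        return None  # the caller would back off and retry the REQR later
    head_id, head_metric, next_hop = min(replies, key=lambda r: r[1])
    return {"head": head_id, "head_metric": head_metric, "next_hop": next_hop}

# Replies a requesting node might collect, relayed through its neighbors:
replies = [(3, 1.2, "d3"), (3, 1.2, "d3"), (8, 2.5, "d1")]
route = choose_cluster(replies)
assert route["head"] == 3 and route["next_hop"] == "d3"
```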
3.4 Data Aggregation
Aggregating the sensing data reduces the time spent on transmissions and receptions, and thus both the energy consumption and the risk of collisions. Data aggregation in CESAR can be performed in three ways of increasing strength. The weakest approach aggregates data only at the cluster-heads, before delivery to the BS. A stronger approach also aggregates data at the member nodes at the borders of the clusters: these nodes collect the data coming from LOOSE nodes and aggregate it before transmitting it towards the cluster-head. Finally, the strongest approach performs aggregation at the cluster-heads and at every member of the clusters. In this paper, however, we evaluate CESAR without any data aggregation.
3.5 Routing Maintenance and Recovery
CESAR uses ACK messages to acknowledge every data packet: a sender expects an ACK as confirmation that the transmission succeeded and that the data reached the cluster-head. If no ACK is received within a prefixed timeout, the sender considers the packet lost and retransmits it to the same HEAD. If the transmission fails 3 times, the node becomes LOOSE and restarts the on-demand scheme to find another route to the same HEAD or to another cluster. In addition, MEMBER nodes run a timeout to check that RANN messages are received periodically. This timer is restarted at every reception of a RANN from the cluster-head; if it expires, the node leaves the cluster and becomes LOOSE.
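The retransmission policy above can be captured in a few lines. The `send` callback stands in for a transmit-and-wait-for-ACK primitive; its shape is an assumption for illustration.

```python
# Sketch of the DATA/ACK retransmission policy described above.
MAX_RETRIES = 3

def deliver(send, packet):
    """Try to deliver `packet` via `send(packet) -> bool` (True means an
    ACK arrived before the timeout). After 3 failed attempts the node
    becomes LOOSE and must rediscover a route on demand (Section 3.3)."""
    for _ in range(MAX_RETRIES):
        if send(packet):
            return "DELIVERED"
    return "LOOSE"

attempts = iter([False, False, True])       # two timeouts, then an ACK
assert deliver(lambda p: next(attempts), b"data") == "DELIVERED"
assert deliver(lambda p: False, b"data") == "LOOSE"
```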
4 Performance Evaluation
In this section we simulate the CESAR algorithm in WSNET [11] and analyze the results in order to compare its performance with two of the most popular clustering protocols in the literature: LEACH [3] and HEED [4]. We consider a network in which the nodes are positioned on a 3D grid with dimensions 160 x 160 x 110 m, and in each experiment we vary the number of sensors within the same area. The density of the nodes and the distance between them therefore change from one scenario to the next, letting us analyze the behavior of the algorithm in both low-density and high-density networks.
Table 1. Critical parameters of CESAR
Connectivity threshold 1: 40%
Connectivity threshold 2: 30%
Battery threshold: 20%
Status update frequency: every 2 min
RANN frequency: every 1 min
Maximum cluster size: 5 hops
Data delivery frequency: every 10 min
Since we want a realistic energy-consumption model, we use the transmission and reception powers reported in the datasheets of the XBee and XBee-PRO DigiMesh 2.4 modules from Digi International [12]. We considered two different devices because CESAR uses a multi-hop approach and, therefore, needs a lower transmission power than LEACH and HEED, which are single-hop routing protocols.
Fig. 3. Data delivery
Fig. 4. Energy consumption at each node
CESAR has some critical parameters that must be configured to evaluate its performance. These parameters were set to the optimized values, reported in Table 1, obtained from long series of simulations.
For the remaining simulation parameters, we set the virtual duration of each experiment to 6 hours and assume that the application layer produces 4096 bytes of sensing data every 10 minutes. The energy available at each sensor at the beginning of a simulation is 100 Joules. We believe these values are representative of a generic application running on generic sensor hardware.
4.1 Data Delivery
In many applications of sensor networks, robustness is a primary requirement to ensure the delivery of the sensing data to the BS. In this section we measure the percentage of the data generated by the sensors that is successfully delivered to the BS, across experiments in which we vary the number of nodes in the sensing area.
As Figure 3 shows, LEACH and HEED are less robust than CESAR, which uses dedicated mechanisms to recover and maintain the routes between the nodes, as explained in Section 3. The losses experienced by the first two algorithms are mainly caused by interference between the cluster-heads in the set-up phase, during which some cluster-formation messages may be lost, leaving some nodes unable to join any cluster. LEACH loses even more data because its cluster-head selection mechanism, unlike those of HEED and CESAR, does not take into account the distribution of the cluster-heads across the network.
4.2 Energy Consumption
The energy consumed by each node is measured over different tests in which we increase the number of nodes in the sensing area. As the results in Figure 4 show, CESAR is less expensive in terms of energy than LEACH and HEED, thanks to the mechanisms described in Section 3. The graphs also show that CESAR is more scalable: LEACH and HEED suffer from interference and from the high overhead of the time-slot assignment performed by the cluster-heads for the members of their clusters.
5 Conclusions
The algorithm described in this paper combines clustering and on-demand approaches to reduce the energy consumption of the whole network. A new cluster-head selection mechanism handles situations in which the nodes have limited battery capacity and not all of them can communicate with the BS to deliver their sensing data. The further mechanisms employed to reduce energy consumption and improve robustness make CESAR suitable for many low-data-rate WSN applications. The simulations described in Section 4 show that CESAR performs better than popular algorithms such as LEACH and HEED, and the experiments performed on the FIT IoT-LAB [13] platform show that our solution also performs well in real sensor networks. Thanks to the on-demand mechanism and the adaptability of the cluster size, we believe CESAR is also suitable for networks of mobile sensors. A new metric could be investigated to take account of node mobility and adapt the above-mentioned features to the dynamics of the network topology. Regarding future developments, CESAR will be tested in combination with Low Power Listening (LPL) protocols, such as X-MAC [14] and LA-MAC [15], in order to determine which one best minimizes the energy consumption while ensuring the robustness of the network.
Acknowledgments. This work was partially supported by a grant from CPER Nord-
Pas-de-Calais /FEDER Campus Intelligence Ambiante.
References
1. Lindsey, S., Raghavendra, C.: Pegasis: Power-efficient gathering in sensor informa-
tion systems. In: IEEE Aerospace Conference Proceedings. vol. 3, pp. 3–1125-3–
1130 (2002)
2. Bajaber, F., Awan, I.: Centralized dynamic clustering for wireless sensor network.
In: International Conference on Advanced Information Networking and Applica-
tions Workshops : WAINA 2009, pp. 193–198 (2009)
3. Heinzelman, W., Chandrakasan, A., Balakrishnan, H.: Energy-efficient communication protocol for wireless microsensor networks. In: Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS), vol. 8 (2000)
4. Younis, O., Fahmy, S.: Distributed clustering in ad-hoc sensor networks: a hybrid, energy-efficient approach. In: Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies, INFOCOM 2004, vol. 1, p. 640 (2004)
5. Liao, Y., Qi, H., Li, W.: Load-balanced clustering algorithm with distributed self-
organization for wireless sensor networks. Sensors Journal, IEEE 13(5), 1498–1506
(2013)
6. Gao, T., Jin, R.: A regional centralized-clustering routing algorithm for wireless
sensor networks. In: 4th International Conference on Wireless Communications,
Networking and Mobile Computing, WiCOM 2008, pp. 1–4 (2008)
7. Luo, H., Ye, F., Cheng, J., Lu, S., Zhang, L.: Ttdd: two-tier data dissemination in
large-scale wireless sensor networks. Wirel. Netw. 11(1–2), 161–175 (2005)
8. Jung, S.-M., Han, Y.-J., Chung, T.-M.: The concentric clustering scheme for effi-
cient energy consumption in the pegasis. In: The 9th International Conference on
Advanced Communication Technology, vol. 1, pp. 260–265 (2007)
9. Ducrocq, T., Mitton, N., Hauspie, M.: Energy-based clustering for wireless sensor network lifetime optimization. In: WCNC – Wireless Communications and Networking Conference, Shanghai, China (2013)
10. Bala Krishna, M., Doja, M.N.: Self-organized energy conscious clustering proto-
col for wireless sensor networks. In: 14th International Conference on Advanced
Communication Technology (ICACT) (2012)
11. http://wsnet.gforge.inria.fr/
12. http://www.digi.com/xbee/
http://wsnet.gforge.inria.fr/
http://www.digi.com/xbee/
A Cluster-Based and On-Demand Routing Algorithm 109
13. Burin des Roziers, C., Chelius, G., Ducrocq, T., Fleury, E., Fraboulet, A., Gallais,
A., Mitton, N., Noél, T., Vandaele, J.: Using senslab as a first class scientific tool for
large scale wireless sensor network experiments. In: Domingo-Pascual, J., Manzoni,
P., Palazzo, S., Pont, A., Scoglio, C. (eds.) NETWORKING 2011, Part I. LNCS,
vol. 6640, pp. 147–159. Springer, Heidelberg (2011)
14. Buettner, M., Yee, G.V., Anderson, E., Han, R.: X-mac: A short preamble mac
protocol for duty-cycled wireless sensor networks. In: Proceedings of the 4th Inter-
national Conference on Embedded Networked Sensor Systems, ser. SenSys 2006,
pp. 307–320 (2006)
15. Corbellini, G., Strinati, E., Duda, A.: La-mac: Low-latency asynchronous mac for
wireless sensor networks. In: IEEE 23rd International Symposium on Personal
Indoor and Mobile Radio Communications (PIMRC), pp. 380–386 (2012)
A Self-Adaptive Handoff Decision Algorithm for Densely Deployed
Closed-Group Femtocell Networks
Wahida Nasrin and Jiang Xie
Department of Electrical and Computer Engineering
The University of North Carolina at Charlotte
Email: {wnasrin, Linda.Xie}@uncc.edu
Abstract—Due to the high traffic demand in cellular networks,
femtocells are considered a promising solution for providing
cellular traffic offloading and better indoor coverage. However,
the coexistence of femtocells with macrocell networks introduces
special challenges to mobility management. In particular, since
the indoor and unplanned deployment of femtocells usually suffers
abrupt signal drops due to multipath propagation, wall penetration
loss, and shadowing, unnecessary handoffs and ping-pong effects
may happen frequently, which severely degrades the quality of
connections and user experience. On the other hand, offloading
in femtocells requires a high cell utilization. Therefore, handoff
decision algorithms should be carefully designed to trigger proper
handoffs and fulfill the different requirements of macro-to-femto
and femto-to-macro handoffs. In this paper, we propose a location
history based adaptive handoff decision algorithm to address
the special challenges of indoor and unplanned deployment of
femtocells. Our proposed algorithm uses the neighboring cell
list in dense femtocell networks to obtain the location of users.
Based on the user location history, a new concept, handoff
frequency of occurrence, is introduced to assist intelligent handoff
decision-making. The hysteresis margin in our proposed handoff
decision criteria can be adaptively adjusted to meet various
handoff requirements. Simulation results show that our proposed
location-history-based adaptive handoff decision algorithm can
significantly improve femtocell utilization and reduce the handoff
failure rate. To the best of our knowledge, this is the first adaptive
handoff decision algorithm that considers the specific challenges of
indoor femtocell deployment.
I. INTRODUCTION
Exponentially increasing mobile data traffic is forcing
cellular networks to offload their traffic to other networks [1].
Indoor small cell deployment is one of the most promising
solutions to the seamless offloading of data from cellular net-
works. These low-powered, short-ranged, and low-cost indoor
small cells are known as femtocells [2], [3]. The support of
femtocells is a key feature of Long Term Evolution-Advanced
(LTE-A) system [4]. The deployment scenario of femtocells
in an LTE-A system is given in Fig. 1. Compared with
macrocells, femtocells are installed and managed by users in
an unplanned manner. The unplanned and
indoor deployment of femtocells within the existing macrocell
networks introduces a number of challenges. Spectrum allo-
cation, interference management, and mobility management
(MM) are the most important ones among them.
Handoff (HO) is an important operation to perform offload-
ing in femtocell networks [5]. How and when an offload is
This work was supported in part by the US National Science Foundation
(NSF) under Grant No. CNS-0953644, CNS-1218751, and CNS-1343355.
performed, and how long a user equipment (UE) can offload,
depend on the HO decision. There are two kinds of HO in
closed-access femtocell networks: macro-to-femto and femto-to-macro.
Since only a limited number of subscribers, known as the
closed subscriber group (CSG), can access closed-group femtocells,
this mode is more desirable than the other access modes, open
and hybrid [6]. In addition, CSG users want to stay connected
to femtocells as long as possible while within the coverage area
because of the low cost.
To meet this requirement, the HO decision should work so
that a macro-to-femto HO happens early enough to offload from
the macrocell network, while a femto-to-macro HO is not
triggered until the signal strength from the femtocell goes
down. However, service failure may still happen if the HO
decision cannot adapt to the unplanned nature of femtocell
networks. Due to the importance of HO decision in femtocell
networks, in this paper, we address HO decision related issues
for both macro-to-femto and femto-to-macro HO scenarios.
Fig. 1. Femtocell deployment scenario in an LTE-A system.
Though femtocells operate on the same frequency spectrum
as macrocells, the dense yet unplanned and indoor deployment
of femtocells within the overlaid macrocell networks makes
the HO decision more difficult and different from macrocell
networks. The first difficulty is the difference in the transmission
power of a femto-base station (FBS). The transmission
power of an FBS (usually 10-15 dB) is much lower than that
of a macro-base station (MBS), which is usually 45 dB [7].
Because of this low transmission power, a femtocell might
go undiscovered by a UE: a UE naturally tends to connect
to the cell with the highest received signal strength (RSS),
and it may receive a higher RSS from a macrocell than from
a femtocell. There are two ways to solve this problem. One
is to apply an offset to compensate for the power difference.
This method is proposed in [8], [9] for
HO decision-making from macrocells to femtocells. Another
way is to set a proper threshold. In this paper, an optimization
of the HO threshold is proposed.
2015 12th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)
978-1-4673-7331-9/15/$31.00 ©2015 IEEE 390
The second problem is related to co-channel interference.
Because of indoor deployment of femtocells, they suffer higher
path-loss due to multi-path propagation, wall penetration loss,
and shadowing effects than other small cells [3]. As a result,
an abrupt signal-to-interference-plus-noise-ratio (SINR) drop
occurs at the boundary of femtocells. Due to this drop, the
HO-decision algorithm for the femto-to-macro needs to be
different from that of the macro-to-femto. We will explain the
interference effect in femtocells in Section III and propose two
different HO-decision algorithms by considering the difference
between macro-to-femto and femto-to-macro HOs.
The unplanned and unstable nature of neighboring fem-
tocells introduces a third challenge. As femtocells are fully
operated by users and the network operator has no control
over them, the number and position of neighboring
femtocells may vary randomly. This nature of neighboring
femtocells will create an uneven interference effect on the
boundary of femtocells from time to time. As a result, it
is difficult to use the same HO-decision algorithm when the
environment changes occasionally, because this may cause a
large number of unnecessary HOs and even service failures.
Therefore, using an adaptive hysteresis margin (HM) is a good
way to solve this problem [10].
The use of an adaptive HM can reduce the number of
unnecessary HOs and ping-pong effects of users. An indoor
user follows a specific mobility pattern and moves back and
forth frequently in some boundary areas without actually
moving out of the cell (e.g., the corner of a room) and in
other boundary areas where the user actually moves out of the
cell (e.g., a door). This is the fourth obstacle when designing
the HO-decision algorithm. Using the same HM may lead to
unnecessary HOs and ping-pong effects in the former areas.
In this paper, we propose a self-adaptive HO-decision
algorithm to address the unique issues of both macro-to-femto
and femto-to-macro HOs. The HM of our algorithm is able
to adapt not only with the deployment environment, but also
with the mobility pattern of the user. The proposed algorithm
is intelligent enough to set the HM to a proper value based
on the history of previous HOs. We propose to keep a
database containing the location fingerprinting of a user who
has requested HOs before. The location fingerprinting is taken
from the measurements of the neighboring femtocells. The
goal of this self-adaptive HO-decision algorithm is to reduce
the rate of unnecessary HOs and service failure, and at the
same time, increase the cell utilization.
The rest of the paper is organized as follows. Related works
are explained in Section II. In Section III, research motivation
and contributions of this paper are discussed. In Section
IV, our location-fingerprint based self-adaptive HO-decision
algorithm is described. In Section V, simulation results are
given, followed by the conclusions in Section VI.
II. RELATED WORK
The problems addressed by existing research on HOs in
femtocell networks include transmission power difference
between MBS and FBS [8], [9], [11], [12], frequent and
unnecessary HOs [13]–[23], selecting the target cell for HOs
[16], [22]–[24], HO failure rate [14], [25], interference [12],
[14], [26], energy saving strategy [27], [28], HO delay/cost
minimization [14], [29], and ping-pong effects [30]–[32]. The
power difference between FBS and MBS during an inbound
(macro-to-femto) HO is considered in [8], [9], [11], [12]. In
[8] and [9], a combination factor is proposed to compensate
the power asymmetry in a way that the UE will be correctly
assigned to a femtocell while maintaining the number of HOs
at the same level. A window function is also proposed to
prevent the RSS from varying abruptly. However, the abrupt
signal drop cannot be ignored in real indoor scenarios. It
is claimed in [11] that only considering this combination
factor may increase the rate of unnecessary HOs. Therefore,
another parameter, transmission loss, is proposed in [11] for
HO decision-making. A cost-effective HO-decision algorithm
is proposed in [12] considering the power discrepancy which
will be discussed later.
Most of the existing works on HO decision in femtocell
networks are focused on minimizing the unnecessary HO
rate due to the dense femtocell deployment and small cell
radius [13]–[23]. A mobility prediction method to predict the
mobility pattern of a user, which is used to select the proper
cell to HO, is proposed in [13], [22], [23]. The mobility
prediction is based on the current mobility-history of a user.
Different parameters, such as user’s velocity, RSS, and traffic
type are considered for HO decision-making in [14]–[16].
In addition, call admission control (CAC) is used in [17]
and [19] for HO decision-making. A waiting time with a
SINR threshold is proposed in [18] to avoid unnecessary
HOs. Adaptive techniques to eliminate unnecessary HOs in
femtocell networks are considered in [20], [21]. An adaptive
HM is proposed based on the distance between a UE and
a BS to avoid unnecessary HOs in [20]. The efficiency of
two HO elimination techniques, i.e., windowing and HO delay
timer are investigated in [21]. Both techniques are modified for
femtocells based on the distance between a UE and the serving
BS. In conclusion, existing works on eliminating unnecessary
HOs consider user’s speed, traffic type, waiting time, mobility
pattern prediction, and distance-based adaptive HM for HO
decision-making.
Another problem of making a HO-decision in densely
deployed open-access femtocell networks is how to select the
target cell properly. A large number of femtocells may create a
long neighboring cell list, and selecting a wrong target cell may
cause unnecessary HOs. To overcome this problem, mobility
prediction is used to select a proper target cell [24] or to make
an effective neighboring cell list [16], [22]–[24]. [24] considers
that knowing the current position of a UE helps predict where
it is going, which can later help select the target cell. As
described previously, [16] tries to avoid the long neighboring
cell list problem in order to eliminate unnecessary HOs by
considering user’s speed and traffic type. A mobility-history
database is proposed in [22], [23] which contains a list of
target cells where users are recently handed over.
A few works have addressed the interference problem
during a HO. Intracell HO (IHO) is considered in [12], [14],
[26] to avoid the cross-tier interference. A cost-function based
on the available bandwidth of the target cell is proposed in
[14] to provide better QoS to users by reducing interference.
Along with these main issues, some other issues are also
addressed, such as to avoid the HO failure rate which is
one of the biggest challenges for designing a HO-decision
algorithm [14], [25], to minimize the HO cost [14], [29], and
to provide cost-effective service to users [33]. An intelligent
HO management scheme is proposed in [27] for energy-efficient
green femtocell networks, and [28] works on reducing
transmission power at the UE side by adapting the HM suitably
with respect to the SINR from the target cell and the standard
LTE measurements.
The location-based HO-decision algorithm for different
small cell networks is discussed in [22], [23], [31], [32], [34],
[35]. As described earlier, [22] and [23] keep the mobility-
history of users to predict the target cell in small cell networks.
Here, location is used to set the target cell and to minimize the
HO delay. User’s mobility and location are also used to provide
better service during a HO. Geographical fingerprints for HOs
are considered in [36], [37], where the location-fingerprint is
obtained using artificial neural networks. This fingerprint is
used to select the target cell, and neither a GPS nor a sensor
is required.
Based on the discussion above, we observe that the following
issues in femtocell networks are not addressed by existing
HO-decision algorithms:
• Indoor environment
• Abrupt signal drop and high interference at cell boundary
• Indoor ping-pong effects
• Ad-hoc nature of neighboring femtocells
III. RESEARCH MOTIVATION AND CONTRIBUTIONS
A. Research Motivation
The above issues, not addressed in existing works, are the
motivation for our work. How and why these issues can affect
HO decision in femtocell networks are investigated in this
section.
Fig. 2. The comparison of SINR of different small cells.
Femtocells suffer from higher interference than other small
cells due to the indoor deployment. Simulation results on
the SINR with respect to the distance between a UE and
the BS of different small cells are shown in Fig. 2. We use
the ITU-R P.1238-7 path-loss model in our simulation [38].
The indoor deployment is indicated as NLOS (non-line-of-
sight) and outdoor deployment is indicated as LOS (line-of-
sight) here. From the figure, it is observed that a femtocell
suffers from higher interference than others because of the low
transmission power and indoor deployment. Hence, it has an
abrupt signal drop at the cell boundary. How this abrupt signal
drop and high interference affect the HO decision-making is
explained in Fig. 3.
Fig. 3. Effect of abrupt signal drop at the cell boundary in HO decision
making.
In the left side of Fig. 3, the comparison of the femtocell
RSS between indoor and outdoor deployment (which is in-
dicated as smallcell RSS) is shown. In the right side of Fig.
3, the scenario of selecting different HM is given. From the
figure we observe that, a HO should initiate early (at threshold
shown by the green dotted-line) because of the RSS drop at
the cell boundary. Now, as UE A moves towards the door, the
HM (HM1 in the figure) should be selected in a way that the
service of UE A does not fail. On the other hand, this HM
causes an unnecessary HO for UE B who will stay inside the
femtocell, which also leads to poor utilization of femtocells.
In this case, setting a high HM (HM2 > HM1) is required
to overcome this problem. However, this is not acceptable for
UE A. To solve this conflicting issue, we need to design a
self-adaptive HO-decision algorithm.
Fig. 4. The effect of ad-hoc nature of neighboring femtocells on SINR at the
boundary of the home femtocell.
The changeable characteristics of both the home femtocell
and the neighboring femtocells make this situation more
complicated. Fig. 4 presents the SINR at the home femtocell
boundary with different number of neighboring femtocells. We
observe that the SINR changes randomly and unpredictably,
which can increase the ping-pong effects for indoor femtocell
users. This issue needs to be addressed in order to provide
seamless mobility support between femtocells and macrocells
and to ensure a better user experience.
Besides all of these challenges, different HO scenarios in
femtocell networks have different criteria and purposes. They
are:
• Macro-to-femto HO: To offload traffic from macrocell
networks in order to avoid network congestion.
• Femto-to-macro HO: To provide seamless mobility man-
agement and better QoS while the indoor signal is poor
and the macrocell network is not congested.
To the best of our knowledge, no existing work can address
all of these issues. The offset-based HO-decision
algorithms [8], [9], [11] can meet the requirement of macro-
to-femto HOs. However, they can increase the number of
unnecessary HOs for users. For example, in Fig. 5, all the UEs
will be handed over to the femtocell, which is not necessary.
Fig. 5. Inbound mobility in femtocell networks.
A number of existing works in the literature propose to
eliminate these unnecessary macro-to-femto HOs.
Speed-based HO algorithms and cost-function based HO algo-
rithms [14]–[16] can eliminate unnecessary HOs of high-speed
users, which may also prevent the HO of closed-group users
with high-speed. As a result, offloading cannot be performed.
Moreover, these algorithms are based on some parameters
(e.g., speed, traffic types, and available bandwidth) which are
not practical for a UE to obtain or calculate. On the other hand,
the femto-to-macro HO scenario is different in that indoor
users usually move at low speed and suffer high interference
at the cell boundary (which implies a high HO failure rate).
The necessity of a HO is different for different users based on
their locations (which is explained in Fig. 3). As a result, the
offset-based, speed-based, and cost-function based algorithms
cannot be applied to femto-to-macro HO scenarios. Existing
HO failure rate elimination algorithms [14], [25] use the same
parameters (speed, traffic type, etc.). They also cannot be
directly applied to femto-to-macro HO scenarios. Therefore,
more advanced HO decision algorithms are necessary to meet
these requirements. Existing adaptive HM-based algorithms
are either based on distance [10] or signal strength
[20].
However, it is possible that users at the same distance to the BS
or with the same received signal strength may need different
HO decisions (as illustrated in Fig. 3). Considering all these issues,
we propose a self-adaptive HO-decision algorithm for both
macro-to-femto and femto-to-macro HO scenarios based on
the location-fingerprint of UE’s HO history. Existing works
related to location-fingerprint for HO decisions [36], [37] use
geographical fingerprint (i.e., latitude and longitude) to predict
the target cell. However, the target cell is always the same in
closed-group femtocell networks; moreover, knowing the exact
location is hard for indoor users. Additionally, since we need
to find out only the HO areas, it is not necessary to know the
exact location of a UE. As a result, we use location-fingerprint
from the neighboring cell list and their RSS to find out the
HO area.
B. Contributions
In this paper, we propose a self-adaptive HO decision
algorithm based on location-fingerprint in order to provide
seamless HOs between macrocells and femtocells. The contri-
butions of this paper are summarized as follows:
• Considering the indoor deployment, we propose HO-
decision algorithms for both macro-to-femto and femto-
to-macro HO scenarios to meet offloading requirements
and to provide seamless mobility.
• We propose a self-adaptive HM to adapt to the ad-hoc
nature of femtocells and the locations of the UE. Each
time a HO is requested, a new HM is calculated from
a database entry based on the location-fingerprint of the
requested HO.
• We propose a location-fingerprint database to assist HO
decision-making. The database contains the location-fingerprints
of previous successful HOs. No
unrealistic data or measurement method is required to
build the database, as it contains the information of
neighboring cell IDs and their received signal strength
indicator (RSSI). Both parameters are accessible for a
UE during the HO measurement. To adjust with the ad-
hoc nature of both home and neighboring femtocells, we
propose to update the database each time a successful HO
happens.
• A realistic simulation scenario is used to evaluate the
performance of the proposed algorithm. We analyze the
proposed algorithm in terms of the rate of unnecessary
HOs, HO failure rate, and femtocell utilization, and
compare them with other existing works.
IV. THE PROPOSED SELF-ADAPTIVE HO DECISION
ALGORITHM
In this section, the proposed self-adaptive HO decision
algorithm for both macro-to-femto and femto-to-macro HOs is
introduced. The proposed algorithm works in two phases: 1)
initialization phase and 2) utilization phase. In the initialization
phase, a HO between a femtocell and a macrocell is triggered
using the LTE-A system-based HO criteria, and a database
is built in this phase. The database contains the location-
fingerprint of UEs that are successfully handed over to their
target cells. The database is used in the utilization phase
to adapt the HM for different UEs. During this phase, the
database is updated with new information to handle the ad-
hoc nature of femtocells. The notations used in our algorithm
are listed in Table I.
TABLE I
NOTATIONS USED IN THE ALGORITHMS
RSSIm: Received signal strength indicator for the macrocell
RSSIf: Received signal strength indicator for the femtocell
RSSImin: Minimum received signal strength indicator for the macrocell
Th: Threshold for the femtocell
HMmax: Maximum (optimized) value of the hysteresis margin
MME: Mobility management entity
FGW: Femto gateway
PCI: Physical cell identity
RSSIfail: Minimum received signal strength indicator for femtocells
Fig. 6. Flow chart for the initialization phase.
Algorithm 1: Macro-to-femto HO-decision algorithm
if (RSSIf > Th or RSSIm < RSSImin) and RSSIf > RSSIm + HMmax then
    HO to femtocell;
else
    Stay in macrocell;
End;
Algorithm 2: Femto-to-macro HO-decision algorithm
if (RSSIf < Th or RSSIm > RSSImin) and RSSIf + HMmax < Th then
    HO to macrocell;
else
    Stay in femtocell;
End;
A. Initialization Phase
The initialization phase is activated at the time when a
new FBS is plugged in. The flow chart for this phase is
given in Fig. 6. The initialization phase is used to build up
the location-fingerprint database. In this phase, all HOs are
performed based on a fixed HM and the measurement report of
UEs. In LTE-A, the Radio Resource Control (RRC) protocol
manages the events by which a UE reports its HO measurements
to the serving BS [39]. The measurement includes the UE's ID,
CSG ID, and available cell IDs (i.e., PCIs) along with their
RSSIs. The PCI is not a unique ID for an FBS (504 PCIs in
total, numbered 0-503, are available in the LTE-A system).
However, we assume that the offered PCIs are well distributed
within the coverage area of a macrocell. As shown in
the flow chart, this measurement is used to check whether the
UE is a registered user of the closed-access femtocell. The
HO process continues for closed-group users, followed by the
HO decision. Whether the algorithm is in the initialization phase
or not is determined from the database. An empty database
indicates that the algorithm is in the initialization phase and the
serving cell makes the HO decision based on the measurement
report. The proposed HO algorithms for a macro-to-femto
HO and a femto-to-macro HO are given in Algorithm 1 and
Algorithm 2, respectively.
The selection of the Th and HMmax is explained later
in the paper. After a successful HO, the location-fingerprint
is entered in the database. We use both the neighboring cell
IDs and their RSSIs as the location-fingerprint of UEs. The
database has a specific length and the initialization phase ends
as soon as the database is full.
B. Utilization Phase
When the database is full, it will be used for determining the
adaptive HM in the utilization phase. Each time a user requests
a HO, it sends location-fingerprint with its measurement
report. After getting this report, MME (or FGW for femtocell)
checks the database to find matches. Suppose the number of
similar entries is Nd and the size of the database is ds. Then
the frequency of occurrence (Pfoc) can be found as
Pfoc = Nd / ds. (1)
Pfoc is used for calculating the adaptive HM (HMad). A high
value of Pfoc indicates a frequent HO zone. As frequent
HO zones need a lower HM, Pfoc and HMad are inversely
related: HMad ∝ 1/Pfoc, realized linearly in dB as
HMad ∝ (1 − Pfoc). Given HMmax, the adaptive HM is

HMad = (1 − Pfoc) · HMmax. (2)
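Equations (1) and (2) can be sketched in a few lines of Python (a minimal illustration; the function and parameter names are our own, not from the paper):

```python
def adaptive_hm(n_matches: int, db_size: int, hm_max_db: float) -> float:
    """Adaptive hysteresis margin HMad from Eqs. (1) and (2).

    n_matches:  Nd, database entries matching the UE's location-fingerprint
    db_size:    ds, total size of the location-fingerprint database
    hm_max_db:  HMmax, the maximum hysteresis margin in dB
    """
    if db_size <= 0:
        raise ValueError("utilization phase requires a non-empty database")
    p_foc = n_matches / db_size           # Eq. (1): frequency of occurrence
    return (1.0 - p_foc) * hm_max_db      # Eq. (2): frequent HO zone -> small HM
```

With the paper's HMmax = 5 dB and ds = 30, a UE whose fingerprint matches 24 of the 30 stored entries gets HMad = 1 dB, so the HO triggers earlier in that frequent HO zone.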
After calculating the value of the adaptive HM, the serving
BS checks the HO decision criteria. The proposed macro-to-
femto and femto-to-macro HO criteria are given in Algorithm
3 and Algorithm 4, respectively. The HO is successful if the
HO-decision criteria are met and the database is updated. The
steps of the utilization phase are shown in Fig. 7.
Algorithm 3: Macro-to-femto HO-decision algorithm
if (RSSIf > Th or RSSIm < RSSImin) and RSSIf > RSSIm + HMad then
    HO to femtocell;
else
    Stay in macrocell;
End;
Algorithm 4: Femto-to-macro HO-decision algorithm
if (RSSIf < Th or RSSIm > RSSImin) and RSSIf + HMad < Th then
    HO to macrocell;
else
    Stay in femtocell;
End;
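Algorithms 1-4 share one decision structure, differing only in the margin used (HMmax in the initialization phase, HMad in the utilization phase). A Python sketch, reading the comma in the listings as grouping the "or" clause (our interpretation; all names are illustrative):

```python
def macro_to_femto_ho(rssi_f, rssi_m, th, rssi_min, hm):
    """Algorithms 1/3: hand over to the femtocell if it is strong enough
    (or the macrocell is too weak) and beats the macrocell by the margin."""
    return (rssi_f > th or rssi_m < rssi_min) and rssi_f > rssi_m + hm

def femto_to_macro_ho(rssi_f, rssi_m, th, rssi_min, hm):
    """Algorithms 2/4: hand over to the macrocell if the femtocell is weak
    (or the macrocell is strong) and stays below Th even with the margin."""
    return (rssi_f < th or rssi_m > rssi_min) and rssi_f + hm < th
```

With the paper's values (Th = -45 dB, RSSImin = -75 dB, HM = 5 dB), a UE seeing RSSIf = -55 dB and RSSIm = -70 dB is handed over to the macrocell, while one seeing RSSIf = -40 dB stays in the femtocell.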
Fig. 7. Flow chart for the utilization phase.
Fig. 8. Database building and updating.
C. Location-Fingerprint Database
One of the important purposes of our proposed HO decision
algorithm is to get the location of the user during a HO,
because a HO is necessary at a few particular locations
for indoor users. However, localization of indoor users is
difficult. A number of localization techniques are available
in the literature for indoor localization [40]. Most of them
require complex algorithms. In our design, we consider RF
fingerprinting [41] instead of calculating the coordinates of
the user location, because our HO-decision algorithm does
not require the actual position of a user. Determining the
HO zone is the main purpose of building the database. If a
location-fingerprint, obtained from the neighboring cell list, is
stored in the database each time a HO occurs, the serving BS
can compare this list with the requested location-fingerprint,
and can perform a quick HO triggering when necessary. The
process of building the database is shown in Fig. 8.
Algorithm 5: Database building and update
if Data matches a previous entry within a time x from the same UE then
    Delete both entries;
else
    if Database is full then
        Delete the oldest data and insert a new entry;
    else
        Insert the data;
End;
Each time a UE sends a measurement report to the serving
BS, the serving BS determines the target BS and forwards the
rest of the measurement to the MME/FGW. This forwarded
message contains a list of neighboring cell IDs (with RSSI >
RSSImin) and their corresponding RSSIs. The MME/FGW
stores this information in the database if the database is not
full. This database is used in the utilization phase to calculate
the adaptive values of the HM. To cope with the ad-hoc nature
of femtocells, in the utilization phase the database is updated
in FIFO (first in, first out) mode, i.e., the new location-fingerprint
is entered and the oldest data is removed from the
database if the database is full. The database building and
updating algorithm is given in Algorithm 5.
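The database logic of Algorithm 5, including the ping-pong deletion and the FIFO eviction described above, can be sketched as follows (the entry layout, frozenset fingerprints, and method names are our own assumptions):

```python
from collections import deque
import time

class FingerprintDB:
    """Location-fingerprint database (Algorithm 5): a FIFO-bounded store of
    (ue_id, fingerprint, timestamp) entries, where a fingerprint is the
    frozenset of neighboring cell PCIs observed with RSSI above RSSImin."""

    def __init__(self, size: int, dedup_window: float):
        self.entries = deque()            # oldest entry sits at the left
        self.size = size                  # ds in the paper (30 in Table II)
        self.dedup_window = dedup_window  # the paper's time window x

    def insert(self, ue_id, fingerprint, now=None):
        now = time.time() if now is None else now
        # A matching entry from the same UE within time x signals a
        # ping-pong HO: delete both entries instead of storing the new one.
        for e in self.entries:
            if e[0] == ue_id and e[1] == fingerprint and now - e[2] <= self.dedup_window:
                self.entries.remove(e)
                return
        if len(self.entries) == self.size:
            self.entries.popleft()        # FIFO: evict the oldest fingerprint
        self.entries.append((ue_id, fingerprint, now))

    def matches(self, fingerprint) -> int:
        """Nd: number of stored entries matching the requested fingerprint."""
        return sum(1 for e in self.entries if e[1] == fingerprint)
```

The `matches` count feeds directly into Eq. (1) as Nd when the MME/FGW computes the adaptive HM.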
D. Determining HO Parameters
The minimum received signal strength at the cell boundary
of a macrocell and a femtocell is RSSImin and RSSIfail,
respectively. To find these values, the Okumura-Hata propagation
model is used for macrocell networks and the ITU-R P.1238-7
path-loss model is used for femtocell networks [38].
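As a rough illustration of how these models yield RSSImin and RSSIfail, the two path-loss formulas can be coded as below. The parameter defaults (a 900 MHz carrier to stay inside the Hata model's validity range, a 30 m base-station antenna, a 1.5 m mobile antenna, and an indoor distance-power-loss coefficient N = 30 with no floor loss) are our assumptions, not values from the paper:

```python
import math

def okumura_hata_loss_db(d_km, f_mhz=900.0, hb_m=30.0, hm_m=1.5):
    """Okumura-Hata urban path loss in dB (d in km, f in MHz), using the
    small/medium-city mobile-antenna correction a(hm)."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * hm_m \
         - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

def itu_p1238_loss_db(d_m, f_mhz=900.0, n=30.0, l_floor_db=0.0):
    """ITU-R P.1238 indoor path loss in dB (d in m, f in MHz,
    n = distance power loss coefficient, l_floor_db = floor penetration)."""
    return 20.0 * math.log10(f_mhz) + n * math.log10(d_m) + l_floor_db - 28.0
```

Subtracting these losses from the transmit powers in Table II at the cell radii (1.2 km and 15 m) gives boundary RSSI estimates analogous to RSSImin and RSSIfail in Figs. 9 and 10.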
Fig. 9. Selecting RSSImin at the macrocell boundary.
We consider the radius of macrocells and femtocells in our
simulation as 1.2 km and 15 m, respectively. Fig. 9 presents the RSSI values
for different distances from the macro BS. The RSSImin is
calculated as -75 dB at the macrocell boundary, as shown in
the figure. Similarly, the calculated RSSIfail value is -50 dB
at the femtocell boundary, as shown in Fig. 10.
Fig. 10. Selecting RSSIfail and Th for a femtocell.
The value of HMmax and Th can be obtained using
simulations. We consider two contrary performance metrics
of femtocell networks: rate of unnecessary HOs and cell
utilization. The simulation results are shown in Fig. 11 for
different values of the threshold and the HM. From the
figure, it is observed that when HM = 5 dB and Th = -45 dB,
both metrics perform better than with other values. Therefore,
we set HMmax to 5 dB. With HMmax = 5 dB, Fig. 10 also
yields Th = -45 dB. These values are used in our simulation.
E. HO Signaling
Both the inbound and outbound signaling for self-adaptive
HO decision in femtocell networks are given in Fig. 12 and
Fig. 13.
Fig. 11. Rate of unnecessary HOs and cell utilization for different Th and
HM: (a) rate of unnecessary HOs; (b) cell utilization.
We consider the signaling procedure during a macro-to-
femto HO as the inbound signaling. In this state, the MME
checks the location-fingerprint database for similar entries
after getting the measurement report from the UE through the
serving BS. If the database is empty, the MME considers the
initialization phase to be active and makes a HO decision.
The location-fingerprint is added to the database after the HO
succeeds. In the utilization phase, this database is used to
calculate the HM as described previously and the database is
updated with new location-fingerprint. The database is shared
by the MME and FGW so that it can be used for both inbound
and outbound mobility. In the outbound signaling, i.e., in a
femto-to-macro HO, the signaling procedure is similar to the
inbound signaling. However, the FGW makes the HO-decision
instead of the MME. The outbound signaling is shown in Fig.
13.
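The decision step shared by Figs. 12 and 13 can be sketched as follows. Function and field names here are illustrative (they do not appear in the paper's message charts), and `adapt_hm` is a placeholder for the fingerprint-driven HM calculation described earlier.

```python
def adapt_hm(entries, hm_max=5.0):
    """Placeholder HM adaptation: shrink the margin as evidence of
    successful HOs at this fingerprint accumulates (assumed rule)."""
    return max(hm_max - len(entries), 0.0)

def handle_measurement_report(report, db, direction):
    """Sketch of the HO decision in the inbound/outbound signaling.

    db is the shared location-fingerprint database; direction is
    "inbound" (the MME decides) or "outbound" (the FGW decides).
    """
    matches = [e for e in db if e["fingerprint"] == report["fingerprint"]]
    if not matches:
        # Initialization phase: no history yet, hand over and record it.
        decision = {"handover": True, "phase": "initialization"}
    else:
        # Utilization phase: stored fingerprints drive the HM.
        hm = adapt_hm(matches)
        decision = {"handover": report["rssi_delta"] > hm,
                    "phase": "utilization"}
    db.append({"fingerprint": report["fingerprint"], "outcome": "HO"})
    decision["decided_by"] = "MME" if direction == "inbound" else "FGW"
    return decision
```

The only structural difference between the two directions, as in the paper, is which node (MME or FGW) evaluates the decision against the shared database.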
Fig. 12. Inbound signaling for self-adaptive HO-decision algorithm.
V. PERFORMANCE EVALUATION
In this section, we evaluate the performance of the proposed
self-adaptive HO-decision algorithm. We use NetLogo 5.0.5
[42] to simulate the algorithm in an indoor environment.
We design a single-floored two-bedroom apartment
containing an FBS that can support fifteen
users, surrounded by six neighboring FBSs within the coverage
of a macro BS. We consider thirty users, each entering and
exiting the apartment at random with probability 0.5.
All users and FBSs are placed randomly, and all users follow
a modified version of the Random Waypoint mobility model:
users can only enter or leave the apartment through the door,
and none of them crosses a wall. To reflect the unplanned
deployment of femtocells, the FBSs inside the apartment
are also placed randomly. The parameters used in our simulation
are listed in Table II [38].
Fig. 13. Outbound signaling for self-adaptive HO-decision algorithm.
TABLE II
SIMULATION PARAMETERS
Macrocell transmission power, Pm : 45 dB
Radius of macrocell              : 1.2 km
Femtocell transmission power, Pf : 10 dB
Radius of femtocell              : 15 m
Size of database, ds             : 30
User speed                       : 5 km/hr
Threshold, Th                    : -45 dB
Wall penetration loss            : 5 dB
Outdoor penetration loss         : 2 dB - 10 dB
RSSImin                          : -75 dB
RSSIfail                         : -50 dB
HMmax                            : 5 dB
In this paper, we mainly investigate three performance
metrics: 1) rate of unnecessary HOs: the probability that a UE
temporarily hands over to the target cell and then hands back
to the serving cell; 2) HO failure rate: the probability of a
call/service drop before a successful HO is triggered; and 3)
cell utilization: the probability that a CSG UE stays connected
to the femtocell while within the coverage area of its home
FBS. In addition, we compare our proposed
self-adaptive algorithm with three other algorithms: 1) fixed-HM
AL, in which the HM does not adapt to the ad-hoc nature of
femtocells; 2) AL1, in which the HM changes based on the formula
from [10], HM = max{HMmax * (1 - 10^(d/R))^4; 0}, where R is the
radius of the femtocell and d is the distance between the FBS and
the UE; and 3) AL2, in which the adaptive HM is calculated as
HM = max{HMmax * (1 - 10^((SINRact - SINRmin)/(SINRmin - SINRmax)))^4; 0}
[20].
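The two adaptive baselines can be written directly from the formulas above; a minimal sketch, with the HMmax default taken from Table II:

```python
def hm_al1(d, r, hm_max=5.0):
    """Adaptive HM of AL1 [10], as printed in the text:
    HM = max{HMmax * (1 - 10^(d/R))^4; 0}."""
    return max(hm_max * (1.0 - 10.0 ** (d / r)) ** 4, 0.0)

def hm_al2(sinr_act, sinr_min, sinr_max, hm_max=5.0):
    """Adaptive HM of AL2 [20], as printed in the text."""
    frac = (sinr_act - sinr_min) / (sinr_min - sinr_max)
    return max(hm_max * (1.0 - 10.0 ** frac) ** 4, 0.0)
```

Note that both formulas collapse to HM = 0 at the origin of their driving variable (d = 0, or SINRact = SINRmin), which is what makes these baselines select a single minimum HM value along the cell boundary, as discussed in Section V-A.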
A. Rate of Unnecessary HOs
The rates of unnecessary HOs for the macro-to-femto HO,
the femto-to-macro HO, and both together are shown
in Fig. 14, Fig. 15, and Fig. 16, respectively. A low rate
of unnecessary HOs is desirable in order to provide better
performance. The simulation results show that the proposed
algorithm has a lower unnecessary-HO rate than the other
algorithms. Since AL1 and AL2 change the HM based on distance
and SINR, respectively, and select the minimum HM value along
the entire cell boundary, both show worse performance than the
others. Unlike the proposed algorithm, the other three
algorithms fail to adapt the HM to the HO location area; as a
result, the proposed algorithm eliminates more unnecessary HOs.
Fig. 14. Rate of unnecessary HOs for the macro-to-femto HO scenario.
Fig. 15. Rate of unnecessary HOs for the femto-to-macro HO scenario.
Fig. 16. Total rate of unnecessary HOs.
B. HO Failure Rate
The HO failure rate should be as low as possible.
Since a high HM can lead to a high HO failure rate because of
abrupt signal drops, the HM must be minimized wherever a HO is
necessary. Additionally, femtocells suffer high interference at
the cell boundary, which may lead to frequent service failures
if the HO decision cannot adapt to changes in the environment.
If the RSSI of the UE drops below RSSIfail and no HO takes
place, we count this as a HO failure. Fig. 17 compares the HO
failure rate of our proposed self-adaptive algorithm with the
other three algorithms; the proposed algorithm outperforms them
with the lowest HO failure rate.
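The failure definition above (RSSI below RSSIfail with no HO) maps directly onto a trace of samples; a minimal counting sketch, where the trace format is illustrative:

```python
def count_ho_failures(trace, rssi_fail=-50.0):
    """Count HO failures in a trace of (rssi_db, ho_happened) samples.

    A failure is recorded whenever the RSSI drops below RSSIfail
    without a handover having taken place, per the definition above.
    """
    failures = 0
    for rssi, ho_happened in trace:
        if rssi < rssi_fail and not ho_happened:
            failures += 1
    return failures

# Usage: one sample below -50 dB without a HO counts as one failure.
n = count_ho_failures([(-40.0, False), (-55.0, False), (-60.0, True)])
```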
Fig. 17. HO failure rate.
C. Cell Utilization
Since femtocells are deployed to offload cellular traffic and
to provide cost-effective service to closed-group users, a UE
within the coverage area of its home FBS is expected to be
connected to that femtocell. Traditional HO-decision algorithms,
however, do not consider this issue; as a result, their cell
utilization is lower than that of the proposed algorithm. The
simulation results for cell utilization are shown in Fig. 18.
Fig. 18. Cell utilization.
VI. CONCLUSION
In this paper, a location-fingerprint-based handoff-decision
algorithm is proposed to improve handoff performance and to
offload cellular data traffic in densely deployed heterogeneous
networks with femtocells. In our algorithm, the hysteresis
margin changes with the handoff priority based on the location
of users, so a fast handoff can be triggered wherever necessary.
Our algorithm reduces the handoff failure rate and, at the same
time, provides better cell utilization to ensure maximum data
offloading. The performance of the proposed self-adaptive
handoff-decision algorithm is analyzed in terms of the
unnecessary-handoff rate, handoff failure rate, and cell
utilization, taking the challenges of indoor deployment into
account. Simulation results show significant improvement over
existing handoff-decision algorithms in femtocell networks. We
observe that a proper selection of the hysteresis margin and
threshold can reduce both the unnecessary-handoff rate and the
handoff failure rate without sacrificing cell utilization.
REFERENCES
[1] Cisco, “Cisco visual networking index: Global mobile data traffic
forecast update, 2010-2015,” Whitepaper, 2011.
[2] V. Chandrasekhar, J. G. Andrews, and A. Gatherer, “Femtocell networks:
a survey,” IEEE Communications Magazine, vol. 46, no. 9, pp. 59–67,
September 2008.
[3] T. Zahir, K. Arshad, A. Nakata, and K. Moessner, "Interference man-
agement in femtocells," IEEE Communications Surveys and Tutorials,
vol. 15, no. 1, pp. 293-311, 2012.
[4] 3GPP, “E-UTRA and E-UTRAN overall description,” TS 36.300
V12.1.0, March 2014.
[5] D. Xenakis, N. Passan, L. Merakos, and C. Verikoukis, “Mobility
management for femtocells in LTE-Advanced: Key aspects and survey
of handover decision algorithms,” IEEE Communications Surveys and
Tutorials, vol. 16, no. 1, pp. 64–91, First Quarter 2014.
[6] G. de la Roche, A. Valcarce, D. Lopez-Perez, and J. Zhang, “Access
control mechanisms for femtocells,” IEEE Communications Magazine,
vol. 48, no. 1, pp. 33–39, January 2010.
[7] J. G. Andrews, H. Claussen, M. Dohler, S. Rangan, and M. C. Reed,
“Femtocells: Past, present, and future,” IEEE Journal on Selected Areas
in Communications, vol. 30, no. 3, pp. 497–508, April 2012.
[8] J. Moon and D. Cho, “Efficient handoff algorithm for inbound mobil-
ity in hierarchical macro/femto cell networks,” IEEE Communications
Letters, vol. 13, no. 10, pp. 755–757, October 2009.
[9] ——, “Novel handoff decision algorithm in hierarchical macro/femto
cell networks,” in Proc. IEEE Wireless Communications and Network
Conference (WCNC), April 2010, pp. 1–6.
[10] S. Lal and D. K. Panwar, “Coverage analysis of handoff algorithm with
adaptive hysteresis margin,” in Proc. 10th International Conference on
Information Technology (ICIT), December 2007, pp. 133–138.
[11] P. Xu, X. Fang, R. He, and Z. Xiang, “An efficient handoff algorithm
based on received signal strength and wireless transmission loss in
hierarchical cell networks,” Telecommunication Systems, vol. 52, no. 1,
pp. 317–325, 2013.
[12] D. López-Pérez, A. Ladányi, A. Juttner, and J. Zhang, “OFDMA
femtocells: Intracell handover for interference and handover mitigation
in two-tier networks,” in Proc. IEEE Wireless Communications and
Networking Conference (WCNC), 2010, pp. 1–6.
[13] B. Jeong, S. Shin, I. Jang, N. W. Sung, and H. Yoon, “A smart
handover decision algorithm using location prediction for hierarchical
macro/femto-cell networks,” in Proc. IEEE Vehicular Technology Con-
ference (VTC Fall), 2011, pp. 1–5.
[14] S. Wu, "Handover strategy in HIP-based LTE femtocell networks with
hybrid access model," in Proc. IEEE International Conference on Genetic
and Evolutionary Computing (ICGEC), 2012.
[15] S. Wu, X. Zhang, R. Zheng, Z. Yin, Y. Fang, and D. Yang, “Handover
study concerning mobility in the two-hierarchy network,” in Proc. IEEE
Vehicular Technology Conference, 2009, pp. 1–5.
[16] P. Xu, X. Fang, J. Yang, and Y. Cui, “A user’s state and SINR-based
handoff algorithm in hierarchical cell networks,” in Proc. International
Conference on Wireless Communications Networking and Mobile Com-
puting (WiCOM), 2010, pp. 1–4.
[17] G. Yang, X. Wang, and X. Chen, “Handover control for LTE femtocell
networks,” in Proc. International Conference on Electronics, Communi-
cations and Control (ICECC), 2011, pp. 2670–2673.
[18] M. Z. Chowdhury, W. Ryu, E. Rhee, and Y. M. Jang, “Handover between
macrocell and femtocell for UMTS based networks,” in Proc. Interna-
tional Conference on Advanced Communication Technology (ICACT),
vol. 1, 2009, pp. 237–241.
[19] J.-S. Kim and T.-J. Lee, “Handover in UMTS networks with hybrid
access femtocells,” in Proc. International Conference on Advanced
Communication Technology (ICACT), vol. 1, 2010, pp. 904–908.
[20] Z. Becvar and P. Mach, “Adaptive hysteresis margin for handover in
femtocell networks,” in Proc. International Conference on Wireless and
Mobile Communications (ICWMC), 2010, pp. 256–261.
[21] ——, “Adaptive techniques for elimination of redundant handovers in
femtocells,” in Proc. IEEE International Conference on Networks (ICN),
2011, pp. 230–234.
[22] H. Ge, X. Wen, W. Zheng, Z. Lu, and B. Wang, “A history-based
handover prediction for LTE systems,” in Proc. IEEE International
Symposium on Computer Network and Multimedia Technology (CNMT),
2009, pp. 1–4.
[23] Z. Becvar, “Efficiency of handover prediction based on handover his-
tory,” Journal of Convergence Information Technology, vol. 4, no. 4, pp.
41–47, 2009.
[24] A. Ulvan, R. Bestak, and M. Ulvan, “Handover scenario and procedure
in LTE-based femtocell networks,” in Proc. International Conference
on Mobile Ubiquitous Computing, Systems, Services and Technologies,
2010, pp. 213–218.
[25] D.-W. Lee, G.-T. Gil, and D.-H. Kim, “A cost-based adaptive handover
hysteresis scheme to minimize the handover failure rate in 3GPP LTE
system,” EURASIP Journal on Wireless Communications and Network-
ing, vol. 2010, no. 6, 2010.
[26] D. López-Pérez, A. Valcarce, Á. Ladányi, G. de la Roche, J. Zhang et al.,
"Intracell handover for interference and handover mitigation in OFDMA
two-tier macrocell-femtocell networks," EURASIP Journal on Wireless
Communications and Networking, vol. 2010, pp. 1-15, 2010.
[27] K. S. B. Reguiga, F. Mhiri, and R. Bouallegue, “Handoff management
in green femtocell network,” Internat. J. of Comp. Apps, vol. 27, no. 4,
pp. 1–7, 2011.
[28] D. Xenakis, N. Passas, and C. Verikoukis, “A novel handover decision
policy for reducing power transmissions in the two-tier LTE network,” in
Proc. IEEE International Conference on Communications (ICC), 2012,
pp. 1352–1356.
[29] H. Zhang, W. Ma, W. Li, W. Zheng, X. Wen, and C. Jiang, “Signalling
cost evaluation of handover management schemes in LTE-advanced fem-
tocell,” in Proc. IEEE Vehicular Technology Conference (VTC Spring),
2011, pp. 1–5.
[30] L. Tang, D. Wang, and Q. Chen, “An adaptive scaling scheme for TTT in
small cell,” in Proc. IET International Conference on Wireless, Mobile
and Multimedia Networks (ICWMMN), 2013, pp. 6–11.
[31] H.-P. Lin, R.-T. Juang, and D.-B. Lin, “Validation of an improved
location-based handover algorithm using GSM measurement data,” IEEE
Transactions on Mobile Computing, vol. 4, no. 5, pp. 530–536, 2005.
[32] T. Inzerilli, A. M. Vegni, A. Neri, and R. Cusani, “A location-based
vertical handover algorithm for limitation of the ping-pong effect,” in
Proc. IEEE International Conference on Wireless and Mobile Computing
Networking and Communications (WIMOB), 2008, pp. 385–389.
[33] H. Zhang, X. Wen, B. Wang, W. Zheng, and Y. Sun, “A novel handover
mechanism between femtocell and macrocell for LTE based networks,”
in Proc. Second International Conference on Communication Software
and Networks (ICCSN), 2010, pp. 228–231.
[34] M. Ylianttila, J. Mäkelä, and K. Pahlavan, “Analysis of handoff in
a location-aware vertical multi-access network,” Computer Networks,
vol. 47, no. 2, pp. 185–201, 2005.
[35] J. Zhang, H. C. Chan, and V. C. Leung, “A location-based vertical
handoff decision algorithm for heterogeneous mobile networks,” in Proc.
IEEE Global Telecommunications Conference (GLOBECOM), 2006, pp.
1–5.
[36] T.-Y. Wu, C.-C. Lai, and H.-C. Chao, “Efficient IEEE 802.11 handoff
based on a novel geographical fingerprint scheme,” Wireless Communi-
cations and Mobile Computing, vol. 6, no. 1, pp. 127–135, 2006.
[37] T.-Y. Wu and W.-F. Weng, “Reducing handoff delay of wireless access in
vehicular environments by artificial neural network-based geographical
fingerprint,” IET Communications, vol. 5, no. 4, pp. 542–553, 2011.
[38] “Propagation data and prediction methods for the planning of indoor
radio communication systems and radio local area networks in the
frequency range 900 MHz to 100 GHz,” Recommendation ITU-R
P.1238-7, 2012.
[39] 3GPP, "Radio resource control protocol specification," TS 36.839
V11.1.0, December 2013.
[40] Z. Yang, Z. Zhou, and Y. Liu, “From RSSI to CSI: Indoor localization
via channel response,” ACM Computing Surveys, vol. 46, no. 2, pp. 1–32,
November 2013.
[41] A. S. R. Zekavat and R. M. Buehrer, RF Fingerprinting Location
Techniques, 1st ed. Wiley-IEEE Press, 2012.
[42] S. Tisue and U. Wilensky, "NetLogo: A simple environment for modeling
complexity," in Proc. International Conference on Complex Systems,
2004, pp. 16-21.