A Local Search Algorithm for Branchwidth **Abstract:** As many problems can be solved in polynomial time on graphs of bounded
branchwidth, there is a need for efficient and practical algorithms that find
branch decompositions of small width of given graphs.
In this paper, we give the first local search algorithm for this problem.
Using non-trivial
combinatorial properties of the neighbourhood space, we obtain significant
reductions in the search space.
The fitness function can be easily tuned to express the expected
cost in the dynamic programming step for specific problems.
The algorithm is tested on a number of planar and non-planar graphs. For all
tested planar graphs the algorithm found an optimal branch decomposition. For
most non-planar graphs, we obtained branch decompositions of width equal to or
smaller than the treewidth. Our experiments indicate that local search is a
very suitable technique for finding branch decompositions of small width.
[slides]

Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks **Abstract:**
We consider an optimisation problem which is motivated by storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises of how it should be fragmented and distributed among the servers to grant "optimal" access to the contents.
We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
[slides]

New Results on the Complexity of the Max- and Min-Rep Problems **Abstract:**
This paper deals with the Max-Rep and Min-Rep problems, both of which are related
to the famous Label Cover problem.
These are of notable theoretical interest, since they are often used to prove
hardness results for other problems.
In many cases new complexity results for these problems may be preserved by the
reductions, and so new results for Max-Rep and Min-Rep could be applicable
to a wide range of other problems as well.
Both Max- and Min-Rep are strongly inapproximable, and the best known approximation algorithms have ratios of O(n^{1/3}) and O(n^{1/3}log^{2/3}n), respectively.
Thus, other approaches are desperately needed to tackle these hard problems.
In our paper we use the very successful parameterized complexity paradigm and
obtain new complexity results for various parameterizations of the problems.
[slides]

Structural Properties of Hard Metric TSP Inputs **Abstract:**
The metric traveling salesman problem is one of the most prominent
APX-complete optimization problems. An important particularity of
this problem is that there is a large gap between the known upper bound
and lower bound on the approximability (assuming P \neq NP). In
fact, despite more than 30 years of research, no one has found a
better approximation algorithm than the 1.5-approximation provided
by Christofides. The situation is similar for a related problem,
the metric Hamiltonian path problem, where the start and the end of
the path are prespecified: the best approximation ratio to date
is 5/3, achieved by an algorithm presented by Hoogeveen almost 20
years ago.
In this paper, we provide a tight analysis of the combined outcome of
both algorithms. This analysis reveals that the sets of the hardest
input instances of both problems are disjoint in the sense that any
input is guaranteed to allow at least one of the two algorithms to
achieve a significantly improved approximation ratio. In particular,
we show that any input instance that leads to a 5/3-approximation
with Hoogeveen's algorithm enables us to find an optimal solution
for the traveling salesman problem. This way, we determine a set S
of possible pairs of approximation ratios. Furthermore, for any
input we can identify one pair of approximation ratios within S
that forms an upper bound on the achieved approximation ratios.
[slides]

Upward Point-Set Embeddability **Abstract:**
We study the problem of Upward Point-Set Embeddability, that is the
problem of deciding whether a given upward planar digraph D has an
upward planar embedding into a point set S. We show that any
switch tree admits an upward planar straight-line embedding into any
convex point set. For the class of k-switch trees, which is a
generalization of switch trees (under this definition a
switch tree is a 1-switch tree), we show that not every k-switch
tree admits an upward planar straight-line embedding into any convex
point set, for any k \geq 2. Finally we show that the problem of
Upward Point-Set Embeddability is NP-complete. More specifically,
given an n-vertex upward planar digraph G and a set S of n points
in the plane, we show that deciding whether there exists an
upward planar straight-line embedding of G so that its vertices
are mapped to the points of S is NP-complete. The decision problem
remains NP-complete even when G has a single source and the
longest simple cycle of G has length four and, moreover, S is an
m-layered convex point set, for some integer m > 0.
[slides]

One-reversal counter machines and multihead automata: revisited **Abstract:**
Among the many models of language acceptors that have been studied
in the literature are multihead finite automata (finite automata with multiple one-way input heads) and 1-reversal counter machines (finite automata with multiple counters, where each counter can only ``reverse'' once, i.e., once a counter decrements, it can no longer increment). The devices can be deterministic or nondeterministic and can be augmented with a pushdown stack. We investigate the relative computational power of these machines. Our results (where C_1 and C_2 are classes of machines) are of the following types:
1. Machines in C_1 and C_2 are incomparable.
2. Machines in C_1 are strictly weaker than machines in C_2.
In obtaining results of these types, we use counting and ``cut-and-paste'' arguments as well as an interesting technique that shows that if a language were accepted by a device in a given class, then all recursively enumerable languages would be decidable.
[slides]

SScAC: Towards a framework for small-scale software architectures comparison **Abstract:**
We present a framework for small-scale software architecture
comparison (SScAC). Although a considerable share of software
architectures is developed in small teams, little related work exists
on this topic. The proposed framework introduces a method to formalize
these comparisons while aiming to be simple enough to be used in small-scale
projects. Still, we believe it is expressive enough
to support comparisons that take into account different aspects of the
problem being solved. The main purpose of the framework is to ease certain
architectural choices by giving the designer a reasoned recommendation based on
previously specified requirements on the system's qualities. It can also help
validate the suitability of chosen design patterns. We show the practical
use of the framework on a case study addressing an architectural decision for
Key Word In Context.
[slides]

On d-regular Schematization of Embedded Paths **Abstract:**
In the d-regular path schematization problem we are given an embedded path P (e.g., a route in a road network) and an integer d. The goal is to find a d-schematized embedding of P in which the orthogonal order of all vertices in the input is preserved and in which every edge has a slope that is an integer multiple
of 90°/d. We show that deciding whether a path can be d-schematized is NP-hard for any integer d. We further model the problem as a mixed-integer linear program. An experimental evaluation indicates that this approach generates reasonable route sketches for real-world data.
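As a small illustration of the slope restriction only (a hypothetical helper, not the paper's MILP or hardness construction), the following sketch snaps a single edge vector to the nearest admissible direction, i.e. an integer multiple of 90°/d:

```python
import math

def snap_direction(dx, dy, d):
    # Snap the direction of an edge vector to the nearest integer multiple
    # of 90/d degrees, keeping the edge length unchanged.
    step = math.radians(90.0 / d)
    angle = math.atan2(dy, dx)
    snapped = round(angle / step) * step
    length = math.hypot(dx, dy)
    return (length * math.cos(snapped), length * math.sin(snapped))
```

Note that snapping edges independently like this is exactly what a schematization cannot do: the difficulty shown in the paper comes from having to preserve the orthogonal order of all vertices globally.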
[slides]

Comparing CPU and GPU in OLAP Cubes Creation **Abstract:**
GPGPU (General Purpose Graphical Processing Unit) programming has recently been receiving increased attention because of the enormous computational speed-up offered by this technology. GPGPU is applied in many branches of science and industry, including databases, even though this is not the primary field of expected benefits.
In this paper a typical time-consuming database algorithm, OLAP cube creation, implemented on the GPU is compared to its CPU counterpart in terms of performance, scalability, and ease of programming and optimisation.
The results are discussed, and a roadmap for future GPGPU applications in databases is formulated.
[slides]

GreedyMAX-type algorithms for the maximum independent set problem **Abstract:**
The maximum independent set problem for a simple graph G=(V,E)
is to find the largest subset of pairwise nonadjacent vertices.
The problem is known to be NP-hard and it is also hard to approximate.
Within this article we introduce a non-negative integer valued function
p defined on the vertex set V(G) and called a potential function
of a graph G, while P(G) = \max_{v\in V(G)}p(v) is called a potential of G.
For any graph P(G)\leq\Delta(G), where \Delta(G) is the maximum degree of G.
Moreover, \Delta(G)-P(G) may be arbitrarily large.
A potential of a vertex lets us get a closer insight
into the properties of its neighborhood
which leads to the definition of the family of GreedyMAX-type algorithms
having the classical GreedyMAX algorithm as their origin.
We establish a lower bound 1/(P+1) for the performance ratio
of GreedyMAX-type algorithms, which compares favorably with
the bound 1/(\Delta+1) known to hold for GreedyMAX.
The cardinality of an independent set generated by any
GreedyMAX-type algorithm is at least \sum_{v\in V(G)} (p(v)+1)^{-1},
which strengthens the bounds of Turán and Caro-Wei
stated in terms of vertex degrees.
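The classical GreedyMAX heuristic that the abstract takes as its starting point can be sketched as follows (an illustrative implementation without the paper's potential-function refinement): repeatedly delete a vertex of maximum current degree until no edges remain; the surviving vertices form an independent set.

```python
def greedy_max_independent_set(vertices, edges):
    # Build adjacency sets.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(vertices)
    # While any edge survives, delete a vertex of maximum current degree.
    while any(adj[v] for v in alive):
        worst = max(alive, key=lambda v: len(adj[v]))
        alive.remove(worst)
        for w in adj[worst]:
            adj[w].discard(worst)
        adj[worst] = set()
    return alive
```

On the star K_{1,3}, for example, the center is deleted first and the three leaves remain; the Caro-Wei bound \sum_{v} 1/(d(v)+1) = 1/4 + 3/2 already guarantees an independent set of size at least 2.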
[slides]

Privacy, Liveliness and Fairness for Reputation **Abstract:**
In various Internet applications, reputation systems are a typical means
of collecting the experiences users have with each other. We present a reputation system
that balances the security and privacy requirements of all users involved. Our system
provides privacy in the form of information-theoretic relationship anonymity
w.r.t. users and the reputation provider. Furthermore, it preserves liveliness, i.e.,
all past ratings can influence the current reputation profile of a user. In addition,
mutual ratings are forced to be simultaneous and self-rating is prevented, which
enforces fairness. Moreover, users cannot forge ratings without performing mock
interactions, even if all users are colluding. As far as we know, this is the
first proposed protocol that fulfills all these properties simultaneously.
[slides]

Randomized OBDDs for the most significant bit of integer multiplication **Abstract:**
Integer multiplication, one of the basic arithmetic functions, has been
the focus of several complexity-theoretical investigations,
and ordered binary decision diagrams (OBDDs) are one of the most common
dynamic data structures for Boolean functions.
Only two years ago, the question of whether
the deterministic OBDD complexity of the most significant
bit of integer multiplication is exponential
was answered affirmatively.
Since probabilistic methods have turned out to be useful in almost all areas
of computer science, one may ask whether randomization can help to represent
the most significant bit of integer multiplication in smaller size.
Here, it is proved that the randomized OBDD complexity is also exponential.
[slides]

An automata-theoretical characterization of context-free trace languages **Abstract:**
We present a characterization of the class of context-free trace languages
in terms of cooperating distributed systems (CD-systems) of stateless
deterministic restarting automata with window size 1 that are governed
by an external pushdown store. Essentially these systems can be seen as
traditional pushdown automata in which each internal state has been replaced
by a stateless deterministic restarting automaton with a window of size 1.
Thus, in each step it is not necessarily the first symbol that is read, but some
symbol that the current restarting automaton can reach by moving across a
prefix of the current input word. In this way these CD-systems can be
interpreted as pushdown automata with translucent letters.
[slides]

Bandwidth constrained multi-interface networks **Abstract:**
In heterogeneous networks, devices can communicate by means of multiple wired or
wireless interfaces. By switching among interfaces or by combining the available
interfaces, each device might establish several connections. A connection is
established when the devices at its endpoints share at least one active
interface. Each interface is assumed to require an activation cost, and provides
a communication bandwidth. In this paper, we consider the problem of activating
the cheapest set of interfaces in a network G=(V,E) in order to guarantee a
minimum communication bandwidth B between two specified nodes. Nodes V
represent the devices, edges E represent the connections that can be
established. In practical cases, a bounded number k of different interfaces
among all the devices can be considered. Despite this assumption, the problem
turns out to be NP-hard even for small values of k and Delta, where
Delta is the maximum degree of the network. In particular, the problem is
NP-hard for any fixed k >= 2 and Delta >= 3, while it is polynomially
solvable when k = 1, or Delta <= 2 and k = O(1). Moreover, we show
that the problem is not approximable within \eta log{B} or Omega(log log
|V|) for any fixed k >= 3, Delta >= 3, and for a certain constant
\eta, unless P=NP. We then provide an approximation algorithm with ratio
guarantee of b_{max}, where b_{max} is the maximum communication bandwidth
allowed among all the available interfaces. Finally, we focus on particular
cases by providing complexity results and polynomial algorithms for Delta <=
2.
[slides]

Alternative Parameterizations for Cluster Editing **Abstract:**
Given an undirected graph G and a nonnegative integer k, the
NP-hard Cluster Editing problem asks whether G can be
transformed into a disjoint union of cliques by applying at most k
edge modifications. In the field of parameterized algorithmics,
Cluster Editing has almost exclusively been studied
parameterized by the solution size k. In contrast, in many real-world
instances it can be observed that the parameter k is not really
small. This observation motivates our investigation of
parameterizations of Cluster Editing different from the
solution size k. Our results are as follows. Cluster
Editing is fixed-parameter tractable with respect to the
parameter ``size of a minimum cluster vertex deletion set of G'',
a typically much smaller parameter than k. Cluster
Editing remains NP-hard on graphs with maximum degree six. A
restricted but practically relevant version of Cluster
Editing is fixed-parameter tractable with respect to the combined
parameter ``number of clusters in the target graph'' and ``maximum
number of modified edges incident to any vertex in G''. Many of
our results also transfer to the NP-hard Cluster Deletion
problem, where only edge deletions are allowed.
[slides]

On Making a Distinguished Vertex Minimum Degree by Vertex Deletion **Abstract:**
For directed and undirected graphs, we study the problem of making a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. For the directed case, where the indegree shall be made minimum, previous work showed that the problem is W[2]-complete with respect to the parameter "number of vertices to delete" (even when the graph is restricted to be a tournament) but polynomial-time solvable on dags. Continuing this work, here we show that the problem is also W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining this parameter
with the number of deleted vertices. For the undirected case, which surprisingly seems to have been unstudied so far, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability with respect to several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". In contrast, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding non-existence results for the parameters treewidth and "feedback vertex set number".
[slides]

Advice Complexity and Barely Random Algorithms **Abstract:**
Recently, a new measurement -- the advice complexity --
was introduced for measuring the information content of online
problems. The aim is to measure
the bitwise information that online algorithms lack, causing them to perform
worse than offline algorithms. Among a large number of problems, a well-known
scheduling problem, job shop scheduling with unit length tasks,
and the paging problem were analyzed within this model.
We observe some connections between advice complexity
and randomization. Our special focus goes to barely random algorithms,
i.e., randomized algorithms that use only a constant number of random bits,
regardless of the input size. We apply the results on advice complexity
to obtain efficient barely random algorithms for both the job shop
scheduling and the paging problem.
Furthermore, it has so far not been investigated for job shop scheduling
how well an online algorithm can perform when using only a
very small (e.g., constant) number of advice bits.
In this paper, we answer this question by
giving both lower and upper bounds, and also improve the
best known upper bound for optimal algorithms.
[slides]

An improved B+ tree for flash file systems **Abstract:**
Nowadays, mobile devices such as mobile phones, mp3 players and PDAs are becoming ever more common. Most of them use flash chips as storage. To store data efficiently on flash, it is necessary to adapt ordinary file systems, because they are designed for use on hard disks. Most file systems use some kind of search tree to store index information, which is very important from a performance perspective. Here we improve the B+ search tree algorithm so that it can be used more efficiently on flash devices. An implementation of this solution saves ~98\%-99\% of the flash operations, and is now part of the Linux kernel.
[slides]

A Power Consumption Analysis Technique using UML-based Design Models in Embedded Software Development **Abstract:**
Although the power consumption of an embedded system depends on the operation of its hardware devices, software behavior greatly affects power consumption as software functionality and complexity grow. This paper proposes a power consumption estimation technique that uses software design models to support energy-efficient embedded software development. Although code-based power analysis techniques have been proposed, they have the drawbacks that the analysis takes long and feedback is difficult. Our proposed technique makes use of UML behavior models for the power consumption analysis in order to overcome these drawbacks. Compared with existing code-based analysis, our technique can provide power analysis results at an earlier phase, before implementation. Therefore, software engineers can apply our technique to select energy-efficient design decisions in the embedded software development process.
[slides]

Collision-less gathering of robots with an extent **Abstract:**
Gathering n mobile robots at one single point in the Euclidean plane is a widely studied problem from the area of robot formation problems. Classically, the robots are assumed to have no physical extent and to be able to share a position with other robots. We drop these assumptions and investigate a similar problem for robots with (a spherical) extent: the goal is to gather the robots as close together as possible. More precisely, we want the robots to form a sphere with minimum radius around a predefined point. We propose an algorithm for this problem, which synchronously moves the robots towards the center of the sphere unless they block each other. In this case, if possible, the robots spin around the center of the sphere. We analyze this algorithm experimentally in the plane. If R is the distance of the farthest robot to the center of the sphere, the simulations indicate a runtime that is linear in n and R. Additionally, we prove a theoretical upper bound of O(n\cdot R) on the runtime for a discrete version of the problem. Simulations also suggest a runtime of O(n+R) for the discrete version.
[slides]

The Complexity of Finding kth Most Probable Explanations in Probabilistic Networks **Abstract:**
In modern decision-support systems, probabilistic networks model uncertainty by
a directed acyclic graph quantified by probabilities. Two closely related
problems on these networks are the Kth MPE and Kth Partial
MAP problems, which both take a network and a positive integer k
for their input. In the Kth MPE problem, given a partition of the
network's nodes into evidence and explanation nodes and given specific values
for the evidence nodes, we ask for the kth most probable combination of
values for the explanation nodes. In the Kth Partial MAP problem,
in addition, a number of unobservable intermediate nodes are distinguished;
we again ask for the kth most probable explanation. In this paper, we
establish the complexity of these problems and show that they are FP^PP-
and FP^{PP^PP}-complete, respectively.
[slides]

Partition Into Triangles on Bounded Degree Graphs **Abstract:**
We consider the Partition Into Triangles problem on bounded degree graphs.
We show that this problem is polynomial time solvable on graphs of maximum degree three by giving a linear time algorithm.
We also show that this problem becomes NP-complete on graphs of maximum degree four.
Moreover, we show that there is no subexponential time algorithm for this problem on maximum degree four graphs unless the Exponential Time Hypothesis fails.
However, the Partition Into Triangles problem on graphs of maximum degree at most four is in many cases solvable in practice, as we give an algorithm for this problem that runs in O(1.02220^n) time and linear space.
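For intuition, a naive exponential-time backtracking checker for Partition Into Triangles can be written as follows (purely illustrative; this is neither the paper's linear-time degree-3 algorithm nor its O(1.02220^n)-time algorithm):

```python
def partition_into_triangles(vertices, edges):
    # Backtracking: repeatedly take the smallest unused vertex and try to
    # complete a triangle with two of its unused, mutually adjacent neighbours.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    used = set()

    def solve():
        rest = [v for v in sorted(vertices) if v not in used]
        if not rest:
            return True
        v = rest[0]
        nbrs = [w for w in sorted(adj[v]) if w not in used]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b in adj[a]:  # v, a, b form a triangle
                    used.update((v, a, b))
                    if solve():
                        return True
                    used.difference_update((v, a, b))
        return False

    return len(vertices) % 3 == 0 and solve()
```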
[slides]

In-Place Sorting **Abstract:**
We present an algorithm for asymptotically efficient in-place sorting. Our algorithm sorts the given array A, consisting of n elements, using only n lg n + O(n lg lg n) comparisons and O(n) element moves. Moreover, this algorithm works in-place, i.e., it uses only a constant amount of auxiliary workspace.
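For comparison, classic heapsort is the textbook example of an in-place sort using only O(1) auxiliary space; note, however, that it performs Theta(n log n) element moves, whereas the algorithm of the abstract needs only O(n):

```python
def heapsort(a):
    # Classic in-place heapsort: O(1) auxiliary space, but Theta(n log n)
    # element moves, unlike the abstract's algorithm.
    n = len(a)

    def sift_down(root, end):
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):  # build a max-heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):          # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a
```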
[slides]

Join-Queries between two Spatial Datasets Indexed by a Single R*-tree **Abstract:**
A spatial join, a common query in Spatial Databases and Geographical Information Systems, consists in testing every possible pair of data elements belonging to two spatial datasets against a spatial predicate. This predicate might be "intersects", "contains", "is enclosed by", "distance", "northwest", "adjacent", "meets", etc. The large size of the datasets that appear in modern industrial and commercial applications (e.g. GIS applications, where multiple instances of the datasets are kept) raises the cost of join processing and the importance of the choice of the data indexing method and the query processing technique. The family of R-trees (especially the R*-tree) is considered a good choice for indexing a spatial dataset. When joining two datasets, a common assumption is that each dataset is indexed by a different R*-tree and the join is processed by a synchronous traversal of the two trees. In this paper, we assume that both datasets are indexed by a single R*-tree, so that spatial locality between the datasets is embedded in the indexing, facilitating the evaluation of join queries between the two datasets. We experimentally compare the I/O and response-time performance of join queries using this single-tree indexing approach against the usual approach of indexing each dataset by a different tree.
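The baseline that any index-based join method must beat is the quadratic nested-loop join over bounding boxes with the "intersects" predicate; a minimal sketch (all names hypothetical, boxes given as (xmin, ymin, xmax, ymax)):

```python
def boxes_intersect(a, b):
    # Axis-aligned bounding boxes given as (xmin, ymin, xmax, ymax).
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def nested_loop_join(dataset_a, dataset_b):
    # Quadratic baseline: test every pair against the "intersects" predicate.
    return [(i, j)
            for i, a in enumerate(dataset_a)
            for j, b in enumerate(dataset_b)
            if boxes_intersect(a, b)]
```

Tree-based approaches, such as the single R*-tree evaluation studied in the paper, aim to prune most of these pairwise tests via spatial locality.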
[slides]

Information Leakage Analysis by Abstract Interpretation **Abstract:**
Protecting the confidentiality of information stored in a computer system or transmitted over a public network is a relevant problem in computer security. The approach of information flow analysis involves performing a static analysis of the program with the aim of proving that there will be no leaks of sensitive information. In this paper we propose a new domain that combines variable dependency analysis, based on propositional formulas, and variable value analysis, based on polyhedra. The resulting analysis is strictly more accurate than the state-of-the-art abstract-interpretation-based analyses for information leakage detection. Its modular construction allows one to deal with the tradeoff between efficiency and accuracy by tuning the complexity of the abstract operators.
[slides]

Cooperative Query Answering by Abstract Interpretation **Abstract:**
A common problem for many database users is how to formulate and submit correct queries in order to get useful responses from the system, with little or no knowledge of the database structure and content. The notion of cooperative query answering has been explored as an effective mechanism to address this problem. In this paper, we propose a cooperative query answering scheme based on the Abstract Interpretation framework. In this context, we address three key issues: soundness, relevancy and optimality of the cooperative answers.
[slides]

A Privacy-preserving Group Key Agreement Scheme **Abstract:**
In 2008, Wan et al. presented an anonymous ID-based group key
agreement scheme for wireless networks, for which they claim that it
ensures anonymity and unlinkability of the group members, as well as
forward and backward secrecy of the group session key. In this paper, we show
that forward and backward secrecy do not hold for the protocol. We
propose a correction that introduces a shielding
factor that protects each member's input to the group key.
Additionally, we introduce a new feature which
assures the correctness of the key as computed by all group
members. This results in an increased computation cost, due to extra public key
operations, while the communication cost remains similar. We also show
in which practical setting the protocol can be deployed.
[slides]

The Straight-Line RAC Drawing Problem is NP-Hard **Abstract:**
Recent cognitive experiments have shown that the negative impact of
an edge crossing on the human understanding of a graph drawing
tends to be eliminated when the crossing angles are
greater than 70 degrees. This motivated the study of RAC
drawings, in which every pair of crossing edges intersects at a right
angle. In this work, we demonstrate a class of graphs with a unique RAC combinatorial embedding and
we employ members of this class in order to show that
it is NP-hard to decide whether a graph admits a straight-line RAC drawing.
[slides]

Combining Traditional Map Labeling with Boundary Labeling **Abstract:**
The traditional map labeling problems are mostly NP-hard. Therefore, effective heuristics and approximations have been developed in the past. Recently, efficient algorithms have been introduced for the so-called boundary labeling model, which assumes that the labels are placed on the boundary of the map and connected by polygonal leaders to their corresponding sites; internal labels are forbidden. In this paper, we allow both. Since internal labels should clearly be preferred, we consider several problems of maximizing the number of internal labels, and we show that they can be solved efficiently or in quasi-polynomial time.
[slides]

Tracking the Evolution of Code Clones **Abstract:**
It is believed by many academic and industrial experts that
source code cloning (copy-paste programming) represents a
significant threat to maintainability in an evolving software
system. The real threat does not lie in the existence of
duplications; the concern is rather their
evolution. There is an abundance of algorithms for finding code
clones in one particular version of a software system, but
eliminating or even evaluating these clones often seems hopeless,
as there may exist several thousands of them. Tracking the
evolution of individual clones might solve the problem, as it
could help identify the inconsistently changing duplications:
the clones which are really dangerous at a particular moment. In
this paper we present an approach for mapping code duplications
from one particular version of the software to another one, based
on a similarity distance function. For the suspicious evolution
patterns we introduce the term "clone smells". By defining the
relevant categories of the possible evolution patterns, the
proposed method also gives a clue about why the reported code
fragments might be dangerous. In the case study, clone smells were
extracted, evaluated, and manually categorized throughout many
versions of the jEdit system. The findings suggest that roughly
half of the reported smells refer to inconsistent changes in the
code.
[slides]

Min-Max Coverage in Multi-Interface Networks **Abstract:**
We consider devices equipped with multiple wired or wireless interfaces. By switching among interfaces or by combining the available interfaces, each device might establish several connections. A connection is established when the devices at its endpoints share at least one active interface. Each interface is assumed to require an activation cost.
In this paper, we consider the problem of establishing the connections defined by a network G=(V,E) while keeping the maximum cost of the set of active interfaces at a single node as low as possible. Nodes V represent the devices, edges E represent the connections that must be established. We study the problem of minimizing the maximum cost of the set of active interfaces among the nodes of the network in order to cover all the edges. We prove that the problem is NP-hard for any fixed Delta >= 5 and k >= 16, with Delta being the maximum degree and k being the number of different interfaces in the network. We also show that the problem cannot be approximated within Omega(ln Delta). We then provide a general approximation algorithm which guarantees a factor of O((1+b)ln(Delta)), with b being a parameter depending on the topology of the input graph. Interestingly, b can be bounded by a constant for many graph classes. Other approximation and exact algorithms for special cases are presented.
[slides]

On the Complexity of the metric TSP under Stability Considerations **Abstract:**
We consider the metric Traveling Salesman Problem (\Delta-TSP for
short) and study how stability (as defined by Bilu and Linial in
[3]) influences the complexity of the problem. On an
intuitive level, an instance of \Delta-TSP is \gamma-stable
(\gamma > 1) if there is a unique optimum Hamiltonian tour and any
perturbation by at most \gamma of arbitrary edge weights does not
change the edge set of the optimal solution (i.e., there is a
significant gap between an optimum tour and the other tours). We show
that for \gamma \geq 1.8 a simple greedy algorithm (resembling
Prim's heuristic for minimum spanning trees) computes an optimum
Hamiltonian tour for every \gamma-stable instance of \Delta-TSP,
whereas for any \gamma a simple local search algorithm fails to find
the optimum. We further show that there are \gamma-stable instances
of \Delta-TSP for every 1 < \gamma < 2.
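As an illustration of the flavour of greedy tour construction, a nearest-neighbour rule is one natural Prim-like greedy for the TSP (a hypothetical sketch for intuition only; the algorithm actually analysed in the paper may differ in detail):

```python
import math

def greedy_tour(points):
    """Grow a tour by always moving to the nearest unvisited city
    (a Prim-like greedy rule; illustrative only).
    points: list of (x, y) coordinates; returns a visiting order."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    tour = [0]                      # start at an arbitrary city
    unvisited = set(range(1, n))
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda v: dist(last, v))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour                     # implicit closing edge tour[-1] -> tour[0]
```

This sketch only conveys the style of a Prim-like greedy; the stability guarantee stated in the abstract applies to the algorithm analysed in the paper, not to this illustration.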

These results provide a different view on the hardness of the \Delta-TSP and give rise to a new subclass of the problem that is substantially easier to solve than the general \Delta-TSP. [slides]

Unambiguous UML composite structures: the OMEGA2 experience **Abstract:**
Starting from version 2.0, UML introduced hierarchical composite structures, which are a very expressive way of defining complex software architectures, but which have a very loosely defined semantics in the standard. In this paper we propose a set of consistency rules that ensure UML composite structures are unambiguous and can be given a precise operational semantics. Our primary goal was to have an operational model for use in the OMEGA UML profile, an executable profile used for the formal specification and validation of real-time systems. However, the rules and principles stated here are general and applicable to other hierarchical component models based on the same concepts, such as MARTE GCM or SysML. The rule set has been formalized in OCL and is currently used in the OMEGA UML compiler; we are now working on a formalization within a proof assistant (Isabelle/HOL) in order to prove the type safety of models observing this rule set.
[slides]

Minimizing interference for the highway model in Wireless Ad-hoc and Sensor Networks **Abstract:**
Finding a low-interference connected topology is one of the
fundamental problems in wireless ad-hoc and sensor networks. The
receiver-centric interference on a node is the number of other
nodes whose transmission ranges cover the node. The problem of reducing
interference through adjusting the nodes' transmission ranges in a
connected network can be formulated as that of connecting the nodes by a
spanning tree while minimizing interference. In this paper, we study
minimization of the average interference and the maximum interference for
the highway model, where all the nodes are arbitrarily distributed
on a line. Two exact algorithms are proposed. One constructs the
optimal topology that minimizes the average interference among all the
nodes in polynomial time, O(n^3 \Delta^3), where n is the number
of nodes and \Delta\leq n-1 is the maximum node degree. The other
algorithm constructs the optimal topology that minimizes the maximum
interference in sub-exponential time, O(n^3\Delta^{O(k)}), where
k=O(\sqrt{\Delta}) is the minimum maximum interference.
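The receiver-centric interference measure defined above is straightforward to state in code; a minimal sketch for the highway model, with node positions and transmission ranges as inputs (names illustrative, not from the paper):

```python
def interference(positions, ranges):
    """Receiver-centric interference in the highway model: for each
    node v on the line, count the other nodes u whose transmission
    range covers v."""
    n = len(positions)
    return [sum(1 for u in range(n)
                if u != v and abs(positions[u] - positions[v]) <= ranges[u])
            for v in range(n)]
```

The topology-control problem then asks for a range assignment that keeps the induced communication graph connected while minimizing the average (or the maximum) of these per-node counts.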
[slides]

liquidsoap: a high-level programming language for multimedia streaming **Abstract:**
Programming languages offer natural and flexible ways to describe various
systems. A good language makes it easy to describe simple configurations, but,
unlike other approaches based on configuration files or static graphical
interfaces, it also allows the user to build complex, highly customized systems.
In this paper, we present liquidsoap, a high-level functional programming
language for generating, manipulating and broadcasting multimedia streams. We
describe a model that supports a rich collection of operators (track scheduling,
jingle insertion, mix, metadata updates, various transitions), and some aspects
of language design that make it adapted to the application domain.
[slides]

On approximating the d-girth of a graph **Abstract:**
For a finite, simple, undirected graph G and an integer d \ge 1, a {\em mindeg-d subgraph} is a subgraph of G of minimum degree at least d. The \emph{d-girth} of G, denoted \dgirth(G), is the minimum size of a mindeg-d subgraph of G. It is a natural generalization of the usual girth, which coincides with the 2-girth. The notion of d-girth was proposed by Erd\H{o}s \emph{et al.}~\cite{EFR88,EFR90} and Bollob\'as and Brightwell~\cite{BoBr89} over 20 years ago, and studied from a purely combinatorial point of view. Since then, no new insights have appeared in the literature. Recently, the first algorithmic studies of the problem have been carried out~\cite{APP+08,ASS08}. The current article further explores the complexity of finding a small mindeg-d subgraph of a given graph (that is, approximating its d-girth) by providing new hardness results and the first approximation algorithms for general graphs, as well as analyzing the case where G is planar.
[slides]

Comparing linear conjunctive languages to subfamilies of the context-free languages **Abstract:**
Linear conjunctive grammars define the same family of languages as one-way real-time cellular automata (Okhotin, "On the equivalence of linear conjunctive grammars to trellis automata", RAIRO ITA, 2004), and this family is known to be incomparable to the context-free languages (Terrier, "On real-time one-way cellular array", Theoret. Comput. Sci., 1995). This paper investigates subclasses of the context-free languages for a possible containment in this class. It is shown that every visibly pushdown automaton (Alur, Madhusudan, "Visibly pushdown languages", STOC 2004) can be simulated by a one-way real-time cellular automaton, but already for LL(1) context-free languages and for one-counter DPDAs no simulation is possible.
[slides]

Finding the Description of Structure by Counting Method: a Case Study **Abstract:**
This paper presents a method to generate characterizing definitions for finite and parameterized
structures. In particular, the method is applied to generate a conjecture
for the properties characterizing a special class of graphs, called
superpositional graphs.
The method can be used when the exact set of properties that describes a given
finite structure cannot be found by pure thought, but the number of objects can
be counted for small values of the parameter.
The next step is to codify the objects as assignments to a set
of propositional variables, and the candidate properties as
propositional formulae, in such a way that an object satisfies the
property if and only if the assignment satisfies the formula.
The main idea of the method is to find models that do not fit the current approximation of the description of the structure, and to refine the logical description stepwise.
Finally, we "translate" the logical description into a mathematical one and prove
it.
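On a toy scale, the refine-by-counterexample loop described above can be mimicked as follows (all names are illustrative, not taken from the paper): objects are truth assignments, a candidate description is a propositional formula, and a mismatching model drives the next refinement step:

```python
from itertools import product

def models(formula, nvars):
    """All assignments (tuples of 0/1) satisfying a candidate formula,
    given as a predicate over an assignment tuple."""
    return [a for a in product((0, 1), repeat=nvars) if formula(a)]

def counterexample(formula, objects, nvars):
    """Return an assignment on which the candidate formula and the
    actual object set disagree, or None if the description is exact."""
    sat = set(models(formula, nvars))
    for a in product((0, 1), repeat=nvars):
        if (a in sat) != (a in objects):
            return a                # model driving the next refinement
    return None                     # description matches the structure
```

For example, if the objects are the assignments with exactly one true variable, the candidate `a[0] or a[1]` is refuted by the model (1, 1), while the refined candidate `a[0] != a[1]` admits no counterexample.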
[slides]

Probabilistic Reductions **Abstract:**
We compare probabilistic and deterministic reductions.
We show that for every rational \frac{k}{l} > \frac{1}{2} there exist sets A and B such that A can be probabilistically many-one reduced to B with probability \frac{k}{l}, but A cannot be probabilistically many-one reduced to B with any probability greater than p such that p \geq \frac{1}{2 - \frac{k}{l}}.
We show that probabilistic truth-table reduction with probability greater than \frac{2}{3} has the same reduction power as deterministic truth-table reduction; however, we provide an example where probabilistic truth-table reduction with probability \frac{2}{3} is possible, but no deterministic truth-table reduction exists. We show that probabilistic Turing reduction with probability greater than \frac{1}{2} has the same reduction power as deterministic Turing reduction. We also prove some other basic properties of probabilistic reductions.
[slides]

FormBuilder: A Novel Approach to Deal with View Development and Maintenance **Abstract:**
In most web applications, the attributes of entity classes directly determine the
content of corresponding view forms. In addition, these entity classes often
encapsulate constraints on associated properties. For example, a User entity
class may have an email property with constraints on the form of the address;
consequently, the view form for creating/updating users should include an email
field and perform validation on the submitted data. Unfortunately, view form
development is often done manually, an error-prone and tedious process. Form
error detection is particularly difficult because the errors only manifest
themselves at runtime, owing to weak type safety and limited mechanisms for
constraint verification. In addition, entity modification may cause
inconsistency with the corresponding form. In this paper, we propose a new tool,
FormBuilder, which automates the development and maintenance of view forms. The
application of this tool in production applications has repeatedly demonstrated
the practical contribution of this approach.
[slides]

An optimization strategy based on social insect behavior **Abstract:**
We propose an optimization algorithm inspired by social
behavior of bumblebees. In this algorithm, a population of agents
constitutes a social system through the continual collection of food. In every
generation, a tabu search subroutine is performed as the optimization mechanism.
On instances of the graph coloring problem, our algorithm outperforms the method
it enhances and is able to improve results of the tabu search by using it with
dynamic parametrization, collecting its partial results and using them as starting points
for the next tabu search procedures. Our experimental results show that this approach of ``local search managed by a swarm''
can even be competitive with a hybrid algorithm that uses a problem-specific crossover operator.
[slides]

Don't Let The Opponents Grind You Down **Abstract:**
In recent years, a series of type systems have been developed for proving security properties of cryptographic protocols within the Dolev-Yao model. In these systems the proof burden is on the protocol designer, in that he has to provide the type information. In this paper we present an algorithm for computing this type information in the presence of two typing rules for every construct. Kikuchi \& Kobayashi overcome this obstacle by redesigning the type system so that the choice between which of the rules to use disappears. We show that such a redesign is not necessary by constructing a polynomial-time algorithm for choosing which rule to apply in the original type system. We present our approach in the setting of a type system for injective authenticity by Gordon \& Jeffrey, but it can be successfully applied to several safety type systems with symmetric cryptography and opponent typing.
[slides]

Adaptive Decision Making and Controlling in Distributed Environment **Abstract:**
The paper focuses on the design, implementation and testing of a
system for creating autonomous plan-controlled agents. The system
allows easy creation of different agents and observation of their
behaviour in a simulation environment; a real-time strategy game was
chosen as this environment. The core of the system is a graph-based
language used to define the means of plan creation that controls the
behaviour of an autonomous agent. A plan is represented as an n-way
tree, where graphically depicted operators define the decomposition of
the plan's tasks into sub-tasks, down to atomic actions. The
functionality of the design is verified by the implementation of the
simulation environment and of the compiler for the graph operators,
along with experiments with different agents.
The main contribution of the paper is a new flexible approach to
easy creation of plan-oriented agents, using a graph-based language
to define operators for the decomposition of a plan.
[slides]

Introducing identification in the k-limit **Abstract:**
The ``identification in the limit'' inductive inference paradigm, originally
due to Gold, is generalized in the sense that an inductive inference
machine is now allowed to output an infinite k-dimensional array, as
opposed to traditional identification, where only a string is allowed to be
output. It is proved that the whole class of total recursive functions can be
BC-identified in the 2-limit and MinEX-identified in the 3-limit. For
complexity measurement, the so-called ``number of waste'' is
introduced and studied.
[slides]

Formal analysis of Facebook Connect Single Sign-On authentication protocol **Abstract:**
We present a formal analysis of the authentication protocol of
Facebook Connect, the Single Sign-On service offered by the
Facebook Platform which allows Facebook users to login to affiliated
sites.
Formal specification and verification have been carried out using
the specification language HLPSL and AVISPA, a state-of-the-art
verification tool for security protocols. AVISPA has revealed two
security flaws, one of which (previously unreported, to the best of our
knowledge) allows an intruder to impersonate a user at a service
provider affiliated with Facebook. To address this problem, we
propose a modification of the protocol, by adding a message
authentication mechanism; this protocol has been verified with
AVISPA to be safe from the masquerade attack. Finally, we sketch a
JavaScript implementation of the modified protocol.
[slides]

A tight bound on the worst-case number of comparisons for Floyd's heap construction algorithm **Abstract:**
In this paper a tight bound on the worst-case number of comparisons for Floyd's well-known heap construction algorithm is derived. It is shown that at most 2n - 2\mu(n) - \sigma(n) comparisons are executed in the worst case, where \mu(n) is the number of ones and \sigma(n) is the number of zeros after the last one in the binary representation of the number of keys n.
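The stated bound can be checked empirically with a short sketch of Floyd's bottom-up construction, assuming the standard counting scheme of one comparison between the two children plus one between the parent and the larger child (illustrative code, not from the paper):

```python
def floyds_heapify(a):
    """Build a max-heap in place with Floyd's bottom-up algorithm;
    return the number of key comparisons performed."""
    n, comps = len(a), 0
    for i in range(n // 2 - 1, -1, -1):   # internal nodes, bottom-up
        j = i
        while True:
            l, r = 2 * j + 1, 2 * j + 2
            if l >= n:
                break
            if r < n:                     # two children: find the larger
                comps += 1
                c = l if a[l] >= a[r] else r
            else:                         # single child (at most one such node)
                c = l
            comps += 1                    # parent vs. larger child
            if a[j] >= a[c]:
                break
            a[j], a[c] = a[c], a[j]
            j = c
    return comps

def bound(n):
    """2n - 2*mu(n) - sigma(n), with mu(n) the number of ones and
    sigma(n) the number of trailing zeros in the binary form of n."""
    mu = bin(n).count("1")
    sigma = (n & -n).bit_length() - 1 if n else 0
    return 2 * n - 2 * mu - sigma
```

Since the bound is a worst-case bound, every input should respect it under this counting scheme; for n = 2 (binary 10) it gives 4 - 2 - 1 = 1, matching the single comparison needed to heapify two keys.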
[slides]

Optimization Problems: Refinements **Abstract:**
In this paper, we study a relatively new interpretation of optimization problems:
refinements. In a refinement version the input is augmented
with a feasible solution, and the problem is to decide
whether there exists a ``better'' solution, i.e., a solution of larger
measure in the case of maximization problems or of smaller measure in the case of minimization problems.
A first exploration of the properties of such problems
is made, which eventually leads to the development of a framework for proving that refinement problems are NP-complete.
We give a general method to transform NP-completeness proofs of decision versions of optimization problems
into NP-completeness proofs of the corresponding refinement variants. The framework can be applied to many existing proofs of this type.
An important application of these results lies in proving kernelization lower bounds, which establishes a connection to parameterized complexity.
As a corollary, we obtain for several problems, parameterized by structural parameters such as treewidth,
the non-existence of polynomial kernels unless coNP \subseteq NP/poly.
[slides]

Created using Fluid 960 Grid, and Sofsems' webs. © Faculty of Mathematics, Physics, and Computer Science, Comenius University, 2010