Research Seminar

Spring 2016

Generating massive complex networks with hyperbolic geometry faster in practice (practice talk for HPEC)

  • Date: Tue, 6 Sep 2016, 13:00
  • Location: SR 301
  • Speaker: Moritz von Looz-Corswarem, PARCO
  • Abstract: Generative network models play an important role in algorithm development, scaling studies, network analysis, and realistic system benchmarks for graph data sets. The commonly used graph-based benchmark model R-MAT has some drawbacks concerning realism and the scaling behavior of network properties. A complex network model gaining considerable popularity builds random hyperbolic graphs, generated by distributing points within a disk in the hyperbolic plane and then adding edges between points whose hyperbolic distance is below a threshold. We present a fast generation algorithm for such graphs. Our experiments show that our new generator achieves speedup factors of 3-60 over the best previous implementation. One billion edges can now be generated in under one minute on a shared-memory workstation. Furthermore, we present a dynamic extension to model gradual network change, while preserving at each step the point position probabilities.
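The threshold model behind random hyperbolic graphs is simple to state; the sketch below generates such a graph by naive quadratic pair probing (illustrative only, not the fast generator presented in the talk; the radial density parameter `alpha` and the use of the disk radius `R` as distance threshold are common conventions, not taken from the abstract):

```python
import math
import random

def sample_points(n, R, alpha=1.0, seed=0):
    """Sample n points in the hyperbolic disk of radius R: uniform angles,
    radii with density proportional to sinh(alpha * r), via inverse-CDF sampling."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.acosh(1.0 + (math.cosh(alpha * R) - 1.0) * rng.random()) / alpha
        points.append((r, phi))
    return points

def hyp_dist(p, q):
    """Hyperbolic distance between two points given in polar coordinates."""
    (r1, f1), (r2, f2) = p, q
    dphi = math.pi - abs(math.pi - abs(f1 - f2))  # angular difference in [0, pi]
    x = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(dphi))
    return math.acosh(max(1.0, x))  # clamp guards against rounding below 1

def threshold_graph(points, R):
    """Add an edge between every pair at hyperbolic distance below R."""
    return [(i, j) for i in range(len(points))
            for j in range(i + 1, len(points))
            if hyp_dist(points[i], points[j]) < R]
```

The quadratic pair loop is exactly what the talk's generator avoids; it is shown here only to make the model concrete.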

Querying Probabilistic Neighborhoods in Spatial Data Sets Efficiently

  • Date: Fri, 12 Aug 2016, 14:00
  • Location: SR 301
  • Speaker: Moritz von Looz-Corswarem, PARCO
  • Abstract: The probability that two spatial objects establish some kind of mutual connection often depends on their proximity. To formalize this concept, we define the notion of a \emph{probabilistic neighborhood}: Let $P$ be a set of $n$ points in $\mathbb{R}^d$, $q \in \mathbb{R}^d$ a query point, $\mathrm{dist}$ a distance metric, and $f : \mathbb{R}^+ \rightarrow [0,1]$ a monotonically decreasing function. Then, the probabilistic neighborhood $N(q, f)$ of $q$ with respect to $f$ is a random subset of $P$, and each point $p \in P$ belongs to $N(q,f)$ with probability $f(\mathrm{dist}(p,q))$. Possible applications include query sampling and the simulation of probabilistic spreading phenomena, as well as other scenarios where the probability of a connection between two entities decreases with their distance. We present a fast, sublinear-time query algorithm to sample probabilistic neighborhoods from planar point sets. For certain distributions of planar $P$, we prove that our algorithm answers a query in $O((|N(q,f)| + \sqrt{n})\log n)$ time with high probability. In experiments this yields a speedup over pairwise distance probing of at least one order of magnitude, even for rather small data sets with $n=10^5$ and also for other point distributions not covered by the theoretical results.
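For intuition, the pairwise distance probing baseline mentioned in the abstract takes one line per point (this is the O(n)-per-query method the sublinear algorithm is compared against, sketched here for Euclidean distance):

```python
import math
import random

def probabilistic_neighborhood(points, q, f, seed=None):
    """Naive pairwise-probing baseline: every point p joins the result
    independently with probability f(dist(p, q)). Runs in O(n) per query;
    the talk's algorithm achieves sublinear query time instead."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() < f(math.dist(p, q))]
```

Any monotonically decreasing `f`, e.g. `lambda d: math.exp(-d)`, fits the definition above.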

Computational Complexity of Energy Network Problems

  • Date: Mon, 8 Aug 2016, 14:00
  • Location: SR 301
  • Speaker: Karsten Lehmann, ITI Wagner, Guest from NICTA Energy Systems Research Group and the Optimization Research Group
  • Abstract: In the study of energy networks, an important question has been whether practical network problems can be efficiently approximated. In energy network analysis, there are three key problems. Power Flow models the current state of the energy network; Optimal Power Flow additionally includes an objective function representing power generation cost (which may vary from generator to generator); and Maximum Power Flow models the generation dispatch that satisfies the most demand possible in a fixed network. In this talk, we present an overview of the computational complexity of these problems in the context of different modelling assumptions, such as Alternating Current (AC), Lossless-Sin AC Approximation, and Linear AC Approximation with line switching.

Selfish Agents on Capacity-Restricted Resources - When There is Not Enough Room for Everyone

  • Date: Tue, 2 Aug 2016, 14:00
  • Location: SR 301
  • Speaker: Max Drees, ITI Wagner, Guest from Heinz Nixdorf Institut Universität Paderborn
  • Abstract: The field of game theory deals with systems (or games) of multiple participants (called players) who interact with each other. Every player only decides on his own behavior (his strategy), but the individual outcome he experiences also depends on the strategies of the other players. In the model of budget games, players compete over resources with finite budgets. As a strategy, a player chooses one of finitely many demand vectors, which impose a non-negative demand on each resource. If the total demand by all players at a single resource does not exceed its budget, the utility each player gains from that resource equals his demand for it. Otherwise, the budget is split between the players proportionally to their demands. The total utility of a player is then defined as the sum of the utilities gained from all resources. In this talk, we focus on pure Nash equilibria (NE) in budget games. A pure NE is a state in which no player can further increase his own utility through a unilateral strategy change. In general, these do not exist in budget games. Therefore, we take a closer look at approximate pure NE and give both upper and lower bounds for their existence. We also introduce an algorithm to compute approximate pure NE efficiently for games in which the total demands of the individual players do not deviate too much from each other.
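The proportional budget splitting that defines player utilities can be computed directly from the definition (a minimal illustration with one fixed strategy per player; function and variable names are ours):

```python
def utilities(budgets, demands):
    """Utilities in a budget game with one fixed demand vector per player.
    budgets[j] is the budget of resource j; demands[i][j] is the demand
    that player i places on resource j."""
    n, m = len(demands), len(budgets)
    util = [0.0] * n
    for j in range(m):
        total = sum(demands[i][j] for i in range(n))
        for i in range(n):
            if total <= budgets[j]:
                util[i] += demands[i][j]          # budget suffices: gain = demand
            elif total > 0:
                util[i] += budgets[j] * demands[i][j] / total  # proportional split
    return util
```

With a budget of 6 and demands 4 and 8, the players receive 2 and 4 respectively, illustrating the proportional split in the overloaded case.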

Advancing Algorithms and Methodology for Exploratory Network Analysis

  • Date: Fri, 15 Jul 2016, 13:30
  • Location: SR 301
  • Speaker: Maximilian Vogel, PARCO (Master's thesis)
  • Abstract: The neighborhood function, which yields for a natural number t the number of node pairs that can be connected in up to t hops, receives little attention, although some distance-related measures like the effective diameter can be derived from it. This may be due to its high asymptotic running time of O(n(n + m)) for unweighted graphs. In this thesis, an approximation algorithm from the literature [17] as well as a heuristic based on breadth-first search and sampling are implemented and experimentally evaluated with regard to processing rate and accuracy. Climate networks [21] are a new approach to model and analyze the global climate as a dynamic system. The methodology borrows classic network-analytic measures. Beyond that, sophisticated approaches [13] targeting the extraction of dipoles (stationary, opposed air pressure systems) such as the North Atlantic Oscillation (NAO) have been introduced. In this thesis the methodology behind climate networks is discussed and an exploratory analysis regarding the NAO is conducted. Furthermore, two of the more recent approaches are implemented and applied with the goal of finding the NAO.
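The O(n(n + m)) baseline mentioned above, one breadth-first search per node, can be sketched as follows (our illustration, not the thesis code; counts ordered node pairs):

```python
from collections import deque

def neighbourhood_function(adj, t_max):
    """Exact neighbourhood function N(t) for t = 1..t_max: the number of
    ordered node pairs (u, v), u != v, connected by a path of at most t hops.
    One BFS per node gives the O(n(n + m)) running time."""
    counts = [0] * (t_max + 1)
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] == t_max:
                continue  # no need to explore beyond t_max hops
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            if v != s:
                counts[d] += 1
    # prefix sums turn "exactly d hops" into "up to t hops"
    for t in range(1, t_max + 1):
        counts[t] += counts[t - 1]
    return counts[1:]
```

On the path a-b-c this yields [4, 6]: four ordered pairs within one hop, six within two.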

Optimization of Tactical Transport Planning Considering Transport Capacity and Order Time Windows

  • Date: Mon, 11 Jul 2016, 13:00
  • Location: SR 301
  • Speaker: Moritz Heipe, ITI Sanders, Master's thesis

Practice Talk for the Defense

  • Date: Mon, 20 Jun 2016, 13:00
  • Location: SR 301
  • Speaker: Christian Staudt, PARCO
  • Abstract: Planned: a 25-minute talk, then feedback (open end; come and go as you like)

Practice Talk for the Defense

  • Date: Fri, 17 Jun 2016, 14:00
  • Location: SR 301
  • Speaker: Christian Staudt, PARCO
  • Abstract: Planned: a 25-minute talk, then feedback (open end; come and go as you like)

Optimization Models for Flood Mitigation

  • Date: Tue, 14 Jun 2016, 10:00
  • Location: SR 010
  • Speaker: Alberto Giudici, U Milano, PARCO
  • Abstract: This talk deals with different combinatorial optimization models for the problem of mitigating the damage caused to buildings by a flood. The general setting is that of an arc deletion problem: in a directed graph with arc weights and node costs, one has to remove a subset of arcs in order to minimize the total cost of the nodes that are reachable, after the arc removal, from some prespecified nodes. For these problems, complexity results are obtained and algorithms for solving them to optimality are developed.
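Evaluating one candidate solution of such an arc deletion problem is a plain reachability computation; a sketch (our naming, not from the talk):

```python
from collections import defaultdict

def residual_damage(node_cost, arcs, sources, removed):
    """Total cost of the nodes still reachable from the prespecified sources
    after removing the arcs in `removed` (evaluates one candidate solution
    of the arc deletion problem)."""
    adj = defaultdict(list)
    for u, v in arcs:
        if (u, v) not in removed:
            adj[u].append(v)
    seen, stack = set(sources), list(sources)
    while stack:                       # depth-first search over remaining arcs
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return sum(node_cost[v] for v in seen)
```

The optimization problem of the talk then asks for the removed-arc set minimizing this value subject to the arc weights; here the sources themselves count towards the damage, one of several modelling choices.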

Simultaneous Representation of Proper Interval Graphs

  • Date: Fri, 10 Jun 2016, 14:45
  • Location: SR 301
  • Speaker: Michael Vollmer, Master's thesis, ITI Wagner
  • Abstract: In this work we study the simultaneous representation of proper interval graphs. While we show that the problem is NP-complete in general, we also present a linear-time recognition algorithm for simultaneous proper interval graphs with sunflower intersection, that is, all graphs share the same subgraph. The recognition algorithm is based on the idea that we can separate a simultaneous proper interval graph into smaller graphs, which are easier to verify, such that no solutions are lost. These algorithms and the tools used are based on a new characterization of proper interval graphs using clique orderings instead of vertex orderings.

Extending Partial Planar Drawings of Level Graphs

  • Date: Fri, 10 Jun 2016, 14:15
  • Location: SR 301
  • Speaker: Guido Brückner, Master's thesis, ITI Wagner
  • Abstract: Level graphs are directed graphs where vertices are immutably assigned to discrete levels and all edges point downwards. A level graph is planar if it admits a drawing where all edges are drawn as y-monotone curves and no two edges cross each other. Di Battista and Nardelli and Jünger et al. showed that recognizing planar level graphs is possible in linear time. Jünger and Leipert extended their approach to show that finding a planar drawing of planar level graphs is feasible in linear time as well. A partial planar drawing of a level graph G is a planar drawing Γ' of a subgraph G' of G. This work asks, and answers, the question whether a given partial planar drawing Γ' can be extended to a planar drawing Γ of the entire graph. The first result is that extending partial planar drawings of level graphs with a single source is possible in polynomial time. This is shown on the basis of an algorithm extending the approach of Di Battista and Nardelli, which heavily relies on the PQ-tree data structure, by the concept of an order graph. The result is then improved by merging the PQ-tree and the order graph data structure into a new data structure, the constrained PQ-tree. The domain of this single-source algorithm can be trivially broadened to include all graph families with a constant number of sources. The second result is that extending partial planar drawings of level graphs with multiple sources is NP-complete in general. To this end, two Karp reductions, one from 3-Partition and one from Planar Monotone 3-Sat, are presented. These two reductions lead to graph families with differing properties, allowing for insights into where the complexity of the problem stems from, and which approaches to finding efficient algorithms for certain subclasses of level graphs are unlikely to meet with success.

Local Community Detection for Global Overlapping Community Detection

  • Date: Fri, 10 Jun 2016, 13:45
  • Location: SR 301
  • Speaker: Eike Röhrs, Master's thesis talk, ITI Wagner
  • Abstract: We present two novel Local Community Detection (LCD) algorithms. They score vertices by the number of triangles that their incident edges are part of, which leads to good results on synthetic benchmark graphs and real-world networks. Additionally, we use a scheme of Lancichinetti et al. [1] to turn our and other LCD algorithms into Global Overlapping Community Detection (GOCD) algorithms. We build on top of this scheme with our LCD algorithms and extend it with the ability to detect communities that are nearly duplicates. Moreover, we look at the performance of this scheme with different LCD algorithms and compare them to algorithms that are specifically designed for the task of GOCD. Our LCD algorithms produce the best results in our experiments on synthetic benchmark graphs and yield good results on graphs from a set of 100 Facebook networks. Our GOCD algorithms also detect quite good communities and are the best among all GOCD algorithms that use the scheme of Lancichinetti et al. [1] Andrea Lancichinetti, Santo Fortunato, and János Kertész: Detecting the overlapping and hierarchical community structure in complex networks.

Outlier Detection for Modularity-based Clustering

  • Date: Tue, 7 Jun 2016, 13:00
  • Location: SR 301
  • Speaker: Fabian Hentschel, Bachelor's thesis, ITI Wagner
  • Abstract: Clustering a graph into a partition is a frequently used technique to obtain information about substructures within a network. These substructures are called communities. However, not all vertices of a graph can easily be assigned to a community. Vertices that are very hard to fit into the existing community structure are called outliers. As a definition of outliers, we use the comparison of the modularity of different partitions. Based on this, hypotheses are developed on how outliers can be identified quickly. These hypotheses build on the centrality measures betweenness and closeness, on counting triangles and quadrangles within the graph, and on predicting a partition for the comparison according to the outlier definition. We verify the hypotheses with experiments on randomly generated graphs and on networks based on real data. This experimental evaluation shows that only our method based on predicting a partition holds up and is suitable for the fast identification of outliers.

Better partitions of protein graphs for subsystem density-functional theory

  • Date: Fri, 3 Jun 2016, 14:00
  • Location: SR 301
  • Speaker: Moritz von Looz-Corswarem, practice talk for SEA, PARCO
  • Abstract: Determining the interaction strength between proteins and small molecules is key to analyzing their biological function. Quantum-mechanical calculations such as Density Functional Theory (DFT) give accurate and theoretically well-founded results. With common implementations the running time of DFT calculations increases quadratically with molecule size. Thus, numerous subsystem-based approaches have been developed to accelerate quantum-chemical calculations. These approaches partition the protein into different fragments, which are treated separately. Interactions between different fragments are approximated and introduce inaccuracies in the calculated interaction energies. To minimize these inaccuracies, we represent the amino acids and their interactions as a weighted graph in order to apply graph partitioning. None of the existing graph partitioning work can be directly used, though, due to the unique constraints in partitioning such protein graphs. We therefore present and evaluate several algorithms, partially building upon established concepts, but adapted to handle the new constraints. For the special case of partitioning a protein along the main chain, we also present an efficient dynamic programming algorithm that yields provably optimal results. In the general scenario our algorithms usually improve the previous approach significantly and take at most a few seconds.

Accelerating Local Search for the Maximum Independent Set Problem

  • Date: Tue, 31 May 2016, 13:30
  • Location: SR 301
  • Speaker: Sebastian Lamm, SEA trial talk, ITI Sanders
  • Abstract: Computing high-quality independent sets quickly is an important problem in combinatorial optimization. Several recent algorithms have shown that kernelization techniques can be used to find exact maximum independent sets in medium-sized sparse graphs, as well as high-quality independent sets in huge sparse graphs that are intractable for exact (exponential-time) algorithms. However, a major drawback of these algorithms is that they require significant preprocessing overhead, and therefore cannot be used to find a high-quality independent set quickly. In this paper, we show that performing simple kernelization techniques in an online fashion significantly boosts the performance of local search, and is much faster than pre-computing a kernel using advanced techniques. In addition, we show that cutting high-degree vertices can boost local search performance even further, especially on huge (sparse) complex networks. Our experiments show that we can drastically speed up the computation of large independent sets compared to other state-of-the-art algorithms, while also producing results that are very close to the best known solutions.
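As a toy illustration of why simple degree-based reductions help, the following greedy independent set routine always prefers a minimum-degree vertex, which subsumes the simplest degree-0/degree-1 kernelization rules (a sketch under our own simplifications, not the paper's algorithm):

```python
def reduce_and_greedy_mis(adj):
    """Greedy independent set on an adjacency dict {vertex: set(neighbours)}.
    Picking a minimum-degree vertex covers the two simplest reduction rules:
    a degree-0 vertex always belongs to some maximum independent set, and a
    degree-1 vertex can be taken greedily (discarding its neighbour)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # private, mutable copy
    mis = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        mis.append(v)
        removed = {v} | adj[v]                   # v joins the set; neighbours leave
        for u in removed:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed
    return mis
```

On a path a-b-c this returns {a, c}; on a star it returns all leaves, as the degree-1 rule suggests.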

Simulated Annealing-Based Heuristics for Wind Farm Cabling Problems

  • Date: Tue, 31 May 2016, 13:00
  • Location: SR 301
  • Speaker: Sebastian Lehmann, Master's thesis, ITI Wagner
  • Abstract: In a wind farm, the cables connecting turbines and substations with the power grid contribute significantly to its installation cost. Minimizing the cabling cost leads to a capacitated network flow problem similar to those arising in areas such as logistics and transport. In this work, we study the following cable layout problem: In the wind farm, the locations of turbines and substations are fixed, and they have to be connected with cables. We are given a set of cable types, which differ in their cost per meter and in the amount of power they support. We are interested in the cable network minimizing the cabling cost such that the farm is connected properly. We formally model this problem as a capacitated minimum-cost flow problem with non-convex edge cost functions. This problem is NP-hard, so no polynomial-time algorithm for it is known. We also model the cable layout problem as a mixed integer linear program (MILP), which can be solved using a generic optimizer. As an alternative, we propose a heuristic based on Simulated Annealing, which we compare qualitatively with the MILP solved by the Gurobi Optimizer, a generic optimization program. In our experiments, it turns out that our heuristic is a good alternative to the MILP model for small and medium-sized instances. For wind farms with up to 450 turbines, our solution outperforms the results of Gurobi in the majority of cases. We propose different modifications to our heuristic, of which only some were useful in achieving these results. We conclude the work with additional ideas for improvements that could make the heuristic suitable for even larger wind farms.
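A generic Simulated Annealing skeleton of the kind applied here looks as follows (the geometric cooling schedule and all parameter defaults are illustrative, not those of the thesis):

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=1.0, cooling=0.995,
                        steps=10000, seed=0):
    """Generic Simulated Annealing: `neighbour(state, rng)` proposes a modified
    solution; improvements are always accepted, worse solutions with
    probability exp(-delta / t), where t follows a geometric cooling schedule."""
    rng = random.Random(seed)
    cur, cur_cost = initial, cost(initial)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbour(cur, rng)
        delta = cost(cand) - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling
    return best, best_cost
```

For the cabling problem, a state would be a cable assignment, `cost` the total cable price, and `neighbour` one of the thesis's layout modifications; here a one-dimensional toy objective suffices to exercise the skeleton.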

Measuring Search Effectiveness: Bringing the User Back

  • Date: Tue, 24 May 2016, 11:30
  • Location: SR 301
  • Speaker: Dr. Falk Scholer, Royal Melbourne Institute of Technology, Australia
  • Abstract: As computer scientists, many of the systems and applications that we build are ultimately intended to be used by people to complete real-world tasks. However, the experimental evaluation of what makes a system "good" can be difficult, and is sometimes not related to the end user. This talk will examine the effectiveness measurement of information retrieval systems, and demonstrate that the main evaluation methodology, using test collections, can lead to conclusions that don't match task-based outcomes of end users. I will then present recent work that investigates the issue of relevance perceptions and the technique of magnitude estimation, an approach from the field of psychophysics that aims to directly measure subjective user experiences. Speaker Bio: Dr Falk Scholer is an Associate Professor at RMIT University in Melbourne, Australia. He is an active researcher in the field of information retrieval (IR), where his key interests include: the evaluation of search systems; relevance and user perceptions; interactive search and user modelling; document summarisation; and efficient indexing.

Communication Efficient Algorithms for Top-k Selection Problems

  • Date: Fri, 13 May 2016, 09:15
  • Location: SR 236
  • Speaker: Lorenz Hübschle-Schneider, practice talk for IPDPS, ITI Sanders
  • Abstract: We present scalable parallel algorithms with sublinear per-processor communication volume and low latency for several fundamental problems related to finding the most relevant elements in a set, for various notions of relevance: We begin with the classical selection problem with unsorted input. We present generalizations with sorted inputs, dynamic content (bulk-parallel priority queues), and multiple criteria. Then we move on to finding frequent objects and top-k sum aggregation.
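The key communication idea for top-k selection, that each processor needs to contribute only local candidates, can be mimicked sequentially (a toy sketch of ours; the talk's algorithms are considerably more refined):

```python
import heapq

def distributed_topk(parts, k):
    """Sequential stand-in for the communication pattern: every 'processor'
    (one list in `parts`) contributes only its local top-k, and the global
    top-k is selected from these k * p candidates instead of all elements.
    Correct because the global top-k is contained in the union of local top-ks."""
    candidates = []
    for part in parts:
        candidates.extend(heapq.nlargest(k, part))  # local top-k per processor
    return heapq.nlargest(k, candidates)            # final selection on candidates
```

The point of the sketch is the reduced "communication": only k elements per part cross the boundary, not the whole input.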

Fast Maximization of Betweenness and Closeness of a Node

  • Date: Tue, 10 May 2016, 13:30
  • Location: SR 301
  • Speaker: Dominik Kiefer, ITI Meyerhenke, thesis

Semi-External Clustering of Graphs with the Louvain Method

  • Date: Tue, 10 May 2016, 13:00
  • Location: SR 301
  • Speaker: Oliver Plate, Bachelor's thesis, ITI Wagner
  • Abstract: This thesis considers the clustering of networks. The goal is to decompose networks into clusters such that the vertices within each cluster have as many connections among each other as possible. There are many different approaches and algorithms for finding such clusterings of graphs; here, the Louvain method by Blondel et al. is considered. Most algorithms keep all relevant data, such as the vertices and edges of the graph, in internal memory during their computations. With the rapidly growing size of social networks and of networks generated from the World Wide Web, however, ever larger graphs have to be analyzed. These often have several billion vertices and edges, so internal-memory methods are limited by the available main memory. This thesis investigates to what extent the Louvain method permits moving data to external memory (hard disks), so that large graphs can be clustered without increasing main memory. Two different approaches are presented and evaluated experimentally.

Improving a Hierarchical Evolutionary Algorithm with Applications in Optimization and Machine Learning

  • Date: Fri, 29 Apr 2016, 14:30
  • Location: SR 301
  • Speaker: Michael Neumann, Diplom thesis, PARCO


  • Date: Fri, 29 Apr 2016, 14:00
  • Location: SR 301
  • Speaker: Daniel Ferizovic, Bachelor's thesis, ITI Sanders

Experimental Evaluation of a Divide&Conquer-Based Algorithm for Bend Minimization in Planar Orthogonal Drawings

  • Date: Fri, 22 Apr 2016, 15:30
  • Location: SR 301
  • Speaker: Tobias Stocker, Bachelor's thesis, ITI Wagner
  • Abstract: Bend minimization is the algorithmic problem of minimizing the number of bends in an orthogonal planar drawing of a planar graph. The goal of this minimization is to improve the appearance and readability of these graphs. While the problem is NP-complete in general, it is solvable in polynomial time for plane graphs, i.e., planar graphs with a fixed embedding. This thesis focuses on a particular algorithm for the bend minimization problem on plane graphs, which models the problem as a minimum-cost flow problem and solves it recursively with a divide-and-conquer approach. We present an implementation of this recursive algorithm for the minimum-cost flow problem and evaluate the algorithm in practice based on this implementation. Since the recursive algorithm has a better theoretical running time than previous algorithms for the minimum-cost flow problem, we compare the running times of these algorithms. We found that the recursive algorithm is also faster in practice, at least for larger graphs with more than 90,000 vertices.

Algorithms for crossing minimisation in book drawings

  • Date: Fri, 22 Apr 2016, 15:00
  • Location: SR 301
  • Speaker: Jonathan Klawitter, Master's thesis, ITI Wagner
  • Abstract: A book with k pages consists of a line (the spine) and k half-planes (the pages), each with the spine as boundary. In a k-page book drawing of a graph the vertices lie on the spine, and each edge is drawn as an arc in one page. The minimal number of edge crossings in a k-page book drawing of a graph is called its k-page crossing number, which, in general, is NP-hard to determine [BE14]. A book drawing can be described by an order of the vertices on the spine and a distribution of the edges to pages. To compute book drawings, we combine vertex order heuristics with edge distribution heuristics and create heuristics that compute both simultaneously. We experimentally evaluate the performance of these heuristics on a new test suite that is centered on different graph classes. It turns out that our new heuristics produce drawings with fewer crossings for most of the tested graphs. We further investigate the optimisation of book drawings with force-based approaches, greedy and evolutionary algorithms, as well as the successful combinations of the latter two.
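Counting the crossings of a given book drawing follows directly from the fact that two edges on the same page cross iff their endpoints alternate along the spine (a small utility of our own, not taken from the thesis):

```python
from itertools import combinations

def book_crossings(order, pages):
    """Count crossings of a book drawing. `order` lists the vertices along the
    spine; `pages` is one edge list per page. Two edges on the same page cross
    iff their endpoints interleave along the spine order."""
    pos = {v: i for i, v in enumerate(order)}
    crossings = 0
    for page in pages:
        for (a, b), (c, d) in combinations(page, 2):
            a, b = sorted((pos[a], pos[b]))
            c, d = sorted((pos[c], pos[d]))
            if a < c < b < d or c < a < d < b:  # alternating endpoints
                crossings += 1
    return crossings
```

For example, K4 drawn on a single page with spine order 0,1,2,3 has exactly one crossing (edges (0,2) and (1,3)); moving (1,3) to a second page removes it.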

Time-Dependent Energy-Optimal Routes for Electric Vehicles

  • Date: Fri, 22 Apr 2016, 14:30
  • Location: SR 301
  • Speaker: Oliver Thal, Bachelor's thesis, ITI Wagner
  • Abstract: This thesis deals with finding time-dependent energy-optimal routes. It is shown that this problem is NP-hard, and its complexity is discussed. Furthermore, three algorithms are presented that find energy-optimal routes in time-dependent road networks. Among them is a heuristic that does not always return an optimal solution but is considerably faster than the other two algorithms. The three algorithms are evaluated experimentally, and it turns out that only the heuristic is fast enough to be used in practice.

Optimizing Street Labels of a Map with Respect to Two Scales

  • Date: Fri, 22 Apr 2016, 14:00
  • Location: SR 301
  • Speaker: Almut Demel, thesis, ITI Wagner
  • Abstract: Maps help with orientation in unfamiliar environments. To make it easier to match a map section to the real environment, the displayed objects are annotated with labels. Interactive maps are maps whose scale can change. The labels are adapted to the scale so that as many streets as possible remain identifiable. In particular for navigation devices in vehicles, it is important to find a smooth transition between the different labelings, to cause as little distraction as possible. The topic of this talk is an algorithm that produces a smooth transition between the street labelings of maps at two scales. Starting from a fixed labeling at one of the two scales, a labeling for the second scale is computed that resembles the first and still makes many streets identifiable. Using dynamic programming, an optimal solution is computed for each street individually. A heuristic then merges the labelings found into a labeling of the displayed map section.

Time-Dependent Energy-Optimal Routes for Electric Vehicles

  • Date: Tue, 12 Apr 2016, 14:00
  • Location: SR 301
  • Speaker: Oliver Thal, Bachelor's thesis, ITI Wagner
  • Abstract: This thesis deals with finding time-dependent energy-optimal routes. It is shown that this problem is NP-hard, and its complexity is discussed. Furthermore, three algorithms are presented that find energy-optimal routes in time-dependent road networks. Among them is a heuristic that does not always return an optimal solution but is considerably faster than the other two algorithms. The three algorithms are evaluated experimentally, and it turns out that only the heuristic is fast enough to be used in practice.

Experimental Study of the Existence of Planar Greedy Drawings of Graphs

  • Date: Tue, 5 Apr 2016, 14:00
  • Location: SR 301
  • Speaker: Johannes Ernst, Bachelor's thesis

Fall 2015

On drawing planar triangulations with bends

  • Date: Fri, 18 Mar 2016, 14:00
  • Location: SR 301
  • Speaker: Denis Knöpfle, Master's thesis
  • Abstract: According to previously conducted user studies, increasing the number of bends decreases the understandability of a graph drawing. In this thesis we analyze whether, under certain circumstances, existing graph layouts can be improved by introducing bends. We conducted a user study to compare straight-line graph drawings with corresponding drawings that have bends, and tested user preference and performance on general tasks for both types of layouts. While in past experiments the bends were assigned at random and bends on the same edge had different curvature, we assign the bends to the edges in an “intelligent” way and keep the polyline edges “smooth”, that is, without sharp corners and with the same curvature. Our experiment shows that “intelligently” placed bends do not have a significant negative impact on readability, and that the aesthetic quality of the straight-line and polyline drawings is comparable. Further, we take a closer look at several traditional metrics that measure the aesthetics of drawings. Our experiments show that none of the metrics captures the human feeling of aesthetics.

Optimization of Mobile Client-Server-Applications

  • Date: Thu, 3 Mar 2016, 14:00
  • Location: SR 301
  • Speaker: Thomas Breitbach, Technische Hochschule Mittelhessen
  • Abstract: Nowadays, smartphones are full-time companions, facilitating the handling of important tasks and a steady flow of information. Unfortunately, the resources of the phones are most commonly unused in client-server applications, with all the work being done server-side. To explain the objective and the complexity of the project in detail, bottlenecks and trade-offs will be identified first. Subsequently, a model showing subproblems like caching and prefetching will be developed. Based on the model, our recent work as well as the next steps of the project will be outlined.


  • Date: Tue, 16 Feb 2016, 14:00
  • Location: SR 301
  • Speaker: Jonas Fuchs, Master's thesis

Well-shaped partitions of bipartite graphs through gradual convexity

  • Date: Mon, 15 Feb 2016, 14:00
  • Location: Room 010
  • Speaker: Peter Eisenmann
  • Abstract: This thesis is about partitioning graphs using 'gradual convexity', a newly coined term related to convex cuts that describes a cut as convex to a certain degree. The Djokovic relation forms the basis for rating how gradually convex a cut is, for in a convex cut all cut-set edges are in this relation with one another. As there is (as of yet) no precise definition of gradual convexity, multiple conceivable variants are derived. With the methods resulting from these variants, simple cuts of bipartite graphs are rated. Based on their ratings, these simple cuts are combined to form new cuts. The goal is to determine a best-rated cut.

Room occupied

  • Date: Fri, 12 Feb 2016, 14:00
  • Location: SR 301
  • Speaker: IPD Snelting, (14:00-15:30)

Analysis and Optimization of the Raft Consensus Algorithm in Highly Distributed and Performance-Critical Database Systems

  • Date: Tue, 2 Feb 2016, 14:00
  • Location: SR 301
  • Speaker: Nico Mürdter, Bachelor's thesis talk (ITI Sanders)
  • Abstract: The consensus problem has existed since distributed systems were first used. It describes the difficulty of changing the state of replicated state machines in a cluster consistently: all state machines should have the same state at all times. To solve this problem, consensus algorithms were formulated that enable the distributed maintenance of state machines. One of them is the Raft consensus algorithm developed by Diego Ongaro, whose goal is to provide a simple and easily understandable solution. Raft belongs to the leader-based consensus algorithms, i.e., each cluster always has a unique leader that manages and controls the cluster. Such algorithms therefore also require a leader election. It has been shown that this leader election loses performance considerably in large clusters of more than 50 nodes. Furthermore, it offers no active way to weight the nodes participating in the leader election differently. In the course of this thesis, a new leader election algorithm was developed that achieves a performance gain of up to a factor of 3 in large clusters with 100 nodes and enables an active weighting of different nodes. Clusters of 5 to 100 nodes were considered, and timeouts of 150 ms, 500 ms, and 1000 ms were tested. A clear trend towards high speedups emerged in clusters with large timeouts and high node counts. Thanks to this speedup and the ability to actively influence the leader election, the selection of a leader within a Raft cluster could be improved considerably.

Drawing Metro Maps on Concentric Circles

  • Date: Mon, 1 Feb 2016, 13:00
  • Location: SR 301
  • Speaker: Lukas Barth, Masterarbeitsvortrag (ITI Wagner)
  • Inhalt: We examine algorithms for drawing metro maps, whose most important elements are stations and the lines connecting them. We restrict the elements used to represent lines to two classes: circular arcs lying on circles with a common center S, and line segments lying on lines passing through S. To make the metro maps as legible as possible, we aim for a drawing in which the lines bend as little as possible. To this end, we adapt the Topology-Shape-Metrics framework of Roberto Tamassia, which was originally designed for bend minimization in orthogonal graph drawings. To produce a compact drawing, we present a compaction approach based on simulated annealing. We evaluate the adapted framework in practice and demonstrate its usability even for complex metro networks such as Berlin or London, while also determining reasonable values for the parameters of our framework.

Algorithmic Foundations of a Routing Service for Air Freight

  • Date: Fri, 29 Jan 2016, 14:00
  • Location: SR 301
  • Speaker: Julian Englert, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: Building on the Connection Scan Algorithm, we laid the foundations of a routing service that takes several constraints of air freight transport into account. These include the physical dimensions of the cargo (measurements, weight, dangerous-goods class), the available modes of transport (aircraft, trucks), the types of airports (hubs, stations), and political agreements on freight transport between states (freedoms of the air).

How to partition a graph when you think like a vertex

  • Date: Wed, 20 Jan 2016, 11:30
  • Location: SR 236
  • Speaker: Jan Ebbing, Bachelorarbeitsvortrag (ITI Sanders)

Automated Analysis of Complex Networks

  • Date: Fri, 15 Jan 2016, 14:00
  • Location: SR 301
  • Speaker: Mark Erb, Studienarbeitsvortrag (ITI Meyerhenke)

Finding Attractive Routes for Bicycles and Pedelecs

  • Date: Tue, 12 Jan 2016, 14:00
  • Location: SR 301
  • Speaker: Philipp Glaser, Diplomarbeitsvortrag (ITI Wagner)
  • Inhalt: Cyclists usually take several criteria into account when choosing a route. The weighting of these criteria is often not fixed, but depends on individual preferences or the current situation. For example, a cyclist might accept a small increase in travel time if the alternative feels safer. Based on the multi-label-setting algorithm combined with A*, we first compute the complete Pareto set. However, the set of Pareto-optimal routes quickly becomes very large, which not only leads to long running times but is also unhelpful for route planning. We therefore consider several heuristics to reduce the number of routes found. First, we study ε-dominance, under which routes can be dominated even when they are incomparable under plain Pareto dominance. The second heuristic excludes routes that deviate only briefly from a route already found. We also examine how the order in which routes are found affects the result and the running time.
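For reference, ε-dominance is commonly defined as follows; this is a minimal sketch of the usual textbook definition, and the thesis' exact variant may differ:

```python
def epsilon_dominates(a, b, eps):
    # a ε-dominates b if a is within a factor (1 + eps) of b in every
    # criterion; this prunes routes that plain Pareto dominance cannot,
    # since a need not be at least as good as b in any single criterion.
    return all(x <= (1 + eps) * y for x, y in zip(a, b))

# Hypothetical (travel time, perceived danger) labels: incomparable under
# Pareto dominance, but the slightly slower, much safer route ε-dominates
# the faster one for eps = 0.1.
pruned = epsilon_dominates((10, 5), (9.5, 100), 0.1)
```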

Finding Near-Optimal Independent Sets at Scale

  • Date: Fri, 8 Jan 2016, 14:00
  • Location: SR 301
  • Speaker: Darren Strash, Probevortrag ALENEX
  • Inhalt: The independent set problem is NP-hard and particularly difficult to solve in large sparse graphs, which typically take exponential time to solve exactly using the best-known exact algorithms. In this work, we develop an advanced evolutionary algorithm, which incorporates kernelization techniques to compute large independent sets in huge sparse networks. A recent exact algorithm has shown that large networks can be solved exactly by employing a branch-and-reduce technique that recursively kernelizes the graph and performs branching. However, one major drawback of that algorithm is that, for huge graphs, branching can still take exponential time. To avoid this problem, we recursively choose vertices that are likely to be in a large independent set (using an evolutionary approach), then further kernelize the graph. We show that identifying and removing vertices likely to be in large independent sets opens up the reduction space, which not only speeds up the computation of large independent sets drastically, but also enables us to compute high-quality independent sets on much larger instances than previously reported in the literature. Authors: Sebastian Lamm, Peter Sanders, Christian Schulz, Darren Strash, and Renato F. Werneck
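The kernelizations such branch-and-reduce algorithms build on include simple degree rules; below is a minimal sketch of only the classic degree-zero/degree-one reduction (isolated vertices always join the solution, and for a degree-1 vertex, taking it is never worse than taking its sole neighbor). The actual reduction suite of the paper is much richer:

```python
def kernelize_mis(adj):
    """Exhaustively apply the degree-0/degree-1 reduction for maximum
    independent set on an adjacency-set graph; returns the vertices
    forced into the solution and the remaining kernel graph."""
    adj = {v: set(ns) for v, ns in adj.items()}
    chosen = set()
    queue = [v for v in adj if len(adj[v]) <= 1]
    while queue:
        v = queue.pop()
        if v not in adj or len(adj[v]) > 1:
            continue
        chosen.add(v)
        # Remove v and its (at most one) neighbor from the graph.
        to_remove = {v} | adj[v]
        for u in to_remove:
            for w in adj.pop(u, set()):
                if w in adj:
                    adj[w].discard(u)
                    if len(adj[w]) <= 1:
                        queue.append(w)
    return chosen, adj

# Example: a path a-b-c-d-e is solved completely by this rule alone.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'},
        'd': {'c', 'e'}, 'e': {'d'}}
chosen, kernel = kernelize_mis(path)
```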

k-way Hypergraph Partitioning via n-Level Recursive Bisection

  • Date: Fri, 18 Dec 2015, 15:00
  • Location: SR 301
  • Speaker: Sebastian Schlag, Probevortrag ALENEX
  • Inhalt: We develop a multilevel algorithm for hypergraph partitioning that contracts the vertices one at a time. Using several caching and lazy-evaluation techniques during coarsening and refinement, we reduce the running time by up to two orders of magnitude compared to a naive n-level algorithm that would be adequate for ordinary graph partitioning. The overall performance is even better than that of the widely used hMetis hypergraph partitioner, which uses a classical multilevel algorithm with few levels. Aided by a portfolio-based approach to initial partitioning and adaptive budgeting of imbalance within recursive bipartitioning, we achieve very high quality. We assembled a large benchmark set with 310 hypergraphs stemming from application areas such as VLSI, SAT solving, social networks, and scientific computing. Experiments indicate that our algorithm is the method of choice for a wide range of hypergraph partitioning tasks. The algorithm presented in this work forms the basis of our hypergraph partitioning framework KaHyPar (Karlsruhe Hypergraph Partitioning).

Drawing Ordered Trees with Small Height (and Width)

  • Date: Tue, 8 Dec 2015, 13:15
  • Location: SR 301
  • Speaker: Johannes Batzill, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: A planar drawing of a graph in which every node lies on one of $k$ horizontal layers is called a $k$-layer drawing. If every edge of the drawing is horizontal, vertical, or runs between consecutive layers, we call the drawing an HVA-drawing. Restricting edges to straight lines and trees to be ordered, we prove a tight upper bound on the number of horizontal layers required for a layered drawing of an ordered tree, and show how to construct a drawing matching this bound. Furthermore, we show how to construct a layered HVA-drawing of an ordered tree that uses at most $1.5$ times as many layers as the upper bound for unconstrained layered drawings. The advantage of a layered HVA-drawing is that we can also bound the number of required vertical layers, i.e., we can bound its area. Our constructions build on the work of Suderman, who presented tight lower and upper bounds on the height of various types of layered drawings of trees, but no upper bound on the height of layered drawings of ordered trees.

Editing to (P5, C5)-free Graphs - a Model for Community Detection?

  • Date: Fri, 27 Nov 2015, 14:00
  • Location: SR 301
  • Speaker: Philipp Schoch, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: In the (rather informally defined) task of community detection, one tries to find a particular partition of the vertex set V of a graph G = (V, E) into subsets (so-called communities) such that the vertices within a single community are more strongly connected to each other than to vertices in other communities. This stronger connection within a community can indicate a shared function or role of its vertices. (P5, C5)-free editing means making a graph G (P5, C5)-free by adding and deleting edges; (P5, C5)-free means that the resulting graph G' must contain neither a P5 nor a C5 as a vertex-induced subgraph. The difficulty lies in doing this in as few steps as possible, i.e., in minimizing the number of added and deleted edges. This bachelor's thesis asks whether (P5, C5)-free editing can be used to find communities in a graph. This includes, first, the question of how hard the (P5, C5)-free editing problem is to solve, and second, whether the connected components of the edited graph G' actually define meaningful communities for the original graph G. The idea for the thesis stems from a paper that uses (P4, C4)-free editing to find communities. The main results of this thesis are a proof of the NP-completeness of the (P5, C5)-free editing problem, the adaptation of methods for editing some real-world graphs into (P5, C5)-free graphs, and an analysis of the communities found this way. The analysis shows that well-defined communities are found on some graphs, whereas on others several communities are merged.

Boosting Local Search for the Maximum Independent Set Problem

  • Date: Tue, 17 Nov 2015, 13:15
  • Location: SR 301
  • Speaker: Jakob Dahlum, Bachelorarbeitsvortrag (ITI Sanders)
  • Inhalt: An independent set of a graph $G = (V, E)$ with vertices $V$ and edges $E$ is a subset $S \subseteq V$ such that the subgraph induced by $S$ does not contain any edges. The goal of the maximum independent set problem (MIS problem) is to find an independent set of maximum size. It is equivalent to the well-known vertex cover problem (VC problem) and the maximum clique problem. This thesis consists of two main parts. In the first, we compare the currently best algorithms for finding near-optimal independent sets and vertex covers in large, sparse graphs: Iterated Local Search (ILS) by Andrade et al., a heuristic that uses local search for the MIS problem, and NuMVC by Cai et al., a local search algorithm for the VC problem. As of now, there are no methods to solve these large instances exactly in any reasonable time, so these heuristic algorithms are the best option. In the second part, we analyze a series of techniques, some of which lead to a significant speedup of the ILS algorithm. This is done by removing specific vertices and structures that have a low probability of being part of the MIS, such as vertices of high degree or vertices within dense subgraphs. In addition, we analyze the behavior of the ILS algorithm when the cut vertices are inserted back into the graph. We also perform exact reductions, which reduce the graph's complexity without destroying information about the MIS and can be reverted after running ILS on the reduced graph. We present experimental results for the comparison of ILS and NuMVC, as well as for the improvement of ILS by removing vertices and the like. The instances used are well-known graphs from the literature, such as road maps, social networks, meshes, and web graphs.
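The defining property of an independent set quoted above can be checked directly; a minimal sketch with adjacency sets (the function name is illustrative, not from the thesis):

```python
def is_independent(adj, S):
    """S is independent iff the subgraph induced by S contains no edge,
    i.e. no vertex of S has a neighbor inside S."""
    S = set(S)
    return all(not (adj[v] & S) for v in S)

# Example: on the path 0-1-2, {0, 2} is independent but {0, 1} is not.
path = {0: {1}, 1: {0, 2}, 2: {1}}
```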

The Physical Travelling Salesperson Problem - An Approach With Turn Segments

  • Date: Fri, 13 Nov 2015, 14:30
  • Location: SR 301
  • Speaker: Lars Gottesbüren, Bachelorarbeitsvortrag (ITI Meyerhenke)

Probabilistic Neighborhood Queries

  • Date: Fri, 13 Nov 2015, 14:00
  • Location: SR 301
  • Speaker: Moritz von Looz


  • Date: Fri, 6 Nov 2015, 14:00
  • Location: SR 301
  • Speaker: PSE, (ITI Wagner)

Logistics and Optimization at Amazon

  • Date: Thu, 5 Nov 2015, 17:30
  • Location: HS -101
  • Speaker: Andrew V. Goldberg, Senior Principal Scientist
  • Inhalt: We discuss optimization problems at Amazon Logistics. Amazon is the biggest online retailer, fulfilling millions of orders a day. This leads to many classical OR problems. In addition, many of these problems are NP-hard, big, stochastic, and interrelated. This makes Amazon Logistics an exciting environment for research in optimization, algorithms, and algorithm engineering.

Extending Partial Planar Orthogonal Drawings

  • Date: Fri, 30 Oct 2015, 14:30
  • Location: SR 301
  • Speaker: Stefan Rettenmayr, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: This bachelor's thesis studies the planar extensibility of orthogonal drawings. The planar extension problem asks whether a given drawing of a subgraph can be extended to a drawing of the whole graph without violating planarity. We give a brief overview of related work, in particular a recently discovered linear-time algorithm for topological drawings. We then consider the planar extension problem for two drawing styles, orthogonal drawings and orthogonal grid drawings. For orthogonal grid drawings we give two reductions showing that the problem is NP-hard, even under the additional restriction that the embedding of the whole graph is fixed. For orthogonal drawings, in contrast, we can reduce the problem to the case of topological drawings and derive a linear-time algorithm.

Energy-Optimal Routes for Electric Vehicles with Charging Stops

  • Date: Fri, 30 Oct 2015, 14:00
  • Location: SR 301
  • Speaker: Jonas Sauer, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: Electric vehicles have steadily gained importance in recent years. Thanks to growing battery capacities and the increasing availability of charging stations, long-distance routes with charging stops are gradually becoming practical. While route planning problems for electric vehicles have been studied extensively, little research has taken charging stations into account. In this thesis, we extend previous work on the topic and develop two algorithms that find energy-optimal routes while allowing charging stops. Our first algorithm is based on a multicriteria shortest-path search using Pareto sets. Although multicriteria shortest-path algorithms have exponential worst-case complexity in general, we introduce a class of graphs for which we prove a polynomial running time of our algorithm. The second algorithm speeds up queries using a precomputed graph containing the shortest paths between all charging stations. We evaluate our algorithms on real road network data and observe query times in the range of seconds for both algorithms on the road network of Germany.
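At the core of such a multicriteria search are Pareto sets of, say, (travel time, energy) labels; a minimal dominance-filter sketch (illustrative only, not the thesis' implementation):

```python
def pareto_filter(labels):
    """Keep only Pareto-optimal (time, energy) labels: a label is
    dominated if another label is at most as large in both criteria
    (and not identical)."""
    # Sort by time, then energy; a label joins the frontier iff its
    # energy strictly beats every label that is at most as fast.
    frontier, best_energy = [], float('inf')
    for time, energy in sorted(set(labels)):
        if energy < best_energy:
            frontier.append((time, energy))
            best_energy = energy
    return frontier

# Hypothetical labels at one vertex; (11, 6) and the duplicate are dominated.
labels = [(10, 5), (12, 4), (11, 6), (10, 5), (9, 9)]
frontier = pareto_filter(labels)
```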

Annotating Texts Taking Gaps in the Text into Account

  • Date: Fri, 9 Oct 2015, 14:30
  • Location: SR 301
  • Speaker: Amrei Loose, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: When working with text documents, it is often useful to be able to attach comments. Many word processors already allow marking passages in the text and attaching notes, but such implementations quickly become cluttered as the number of comments grows. This thesis is concerned with a crossing-free solution for boundary labeling of texts, where the computed labeling must not overlap the text. Two different approaches are considered: the first uses opo-leaders and computes a labeling with the help of a clustering of the labels; it has a running time in O(n log n). The second approach is based on a flow network in which an optimal solution is to be computed via a maximum flow of minimum weight.

New Results on Trajectory Grouping under Geodesic Distance

  • Date: Fri, 9 Oct 2015, 14:00
  • Location: SR 301
  • Speaker: Jérôme Urhausen, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: Technology nowadays allows us to easily track the position of humans and animals alike. When repeated multiple times, one obtains trajectories describing the movement of the observed entities. This data provides information about which entities assemble and form groups and how these groups evolve over time. We extend the results of Kostitsyna et al., who studied the grouping of entities in scenarios with polygonal obstacles. We prove that in some cases fast algorithms can neither compute an explicit representation of the evolution of the groups, nor use all epsilon-events. These epsilon-events occur when the geodesic distance between two entities is epsilon, which is the threshold distance determining whether two entities are in the same group. Moreover, we present a new algorithm for trajectory grouping with obstacles. The algorithm is significantly faster than the previously known one if the number of observed entities is significantly bigger than the total complexity of the obstacles. In addition, we provide evidence that if obstacles are big compared to epsilon, then the worst-case size of the output does not depend on the complexity of the obstacles. Finally, we present an efficient algorithm for said big obstacles.

Spring 2015

Correspondences between Partitions and their Computation

  • Date: Wed, 23 Sep 2015, 13:00
  • Location: SR 301
  • Speaker: Roland Glantz
  • Inhalt: Partitions/clusterings are instrumental in tasks such as classification, pattern recognition and network analysis. In the past, an abundance of similarity measures for pairs (C, C') of partitions (clusterings) of the same ground set have been proposed. Such a measure for the overall similarity of C and C', however, does not provide specifics on what the similarities/correspondences between C and C' consist in, e.g., when the union of two clusters in C is similar to a cluster in C'. We think of a correspondence between C and C' as a pair (B, B'), where B [B'] is a subset (block) of C [C'], and the symmetric difference between the union of clusters in B and the union of clusters in B' has low cardinality. Weights of elements in the ground set can be taken into account by replacing ``cardinality'' with ``total weight''. We show that such an objective function is essentially one for blocks of C only, since (i) the objective function can be written without B' and (ii) finding the optimal partner B' for any B is straightforward. The objective function's value for B is a measure for the fragmentation of B w.r.t. C'. Analogous to graphs, a C_s-C_t cut of C is a pair (B, C - B) such that B contains C_s but not C_t. We show how one can compute a Gomory-Hu tree T of optimal C_s-C_t cuts (optimal w.r.t. the fragmentation of B) using a branch-and-bound approach. The tree T defines a hierarchical decomposition of C and also gives rise to the best correspondences between C and C'.
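Point (ii), finding the optimal partner B' for a fixed block B, can be decided cluster by cluster because the clusters of C' are pairwise disjoint; a sketch for the unweighted cardinality objective (function and variable names are illustrative):

```python
def best_partner(B_union, C_prime):
    """Given the element set of a block B of C and the clusters of C',
    build the partner B' minimizing the size of the symmetric difference
    of the two unions: a cluster of C' joins B' exactly when more than
    half of its elements lie inside B."""
    partner, cost = [], len(B_union)   # cost with B' empty is |U(B)|
    for cluster in C_prime:
        inside = len(cluster & B_union)
        outside = len(cluster) - inside
        if inside > outside:           # including the cluster lowers the cost
            partner.append(cluster)
            cost += outside - inside
    return partner, cost

# Toy example: B covers {0,1,2,3}; only {0,1} of C' has majority overlap.
B_union = {0, 1, 2, 3}
C_prime = [{0, 1}, {2, 4}, {3, 5}]
partner, cost = best_partner(B_union, C_prime)
```

The final cost equals |U(B) Δ U(B')|, here |{2, 3}| = 2.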

On Minimizing Crossings in Storyline Visualizations

  • Date: Fri, 18 Sep 2015, 14:00
  • Location: SR 301
  • Speaker: Darren Strash, Practice talk for GD 2015
  • Inhalt: In a storyline visualization, we visualize a collection of interacting characters (e.g., in a movie, play, etc.) by $x$-monotone curves that converge for each interaction, and diverge otherwise. Given a storyline with $n$ characters, we show tight lower and upper bounds on the number of crossings required in any storyline visualization for a restricted case. In particular, we show that if (1) each meeting consists of exactly two characters and (2) the meetings can be modeled as a tree, then we can always find a storyline visualization with $O(n \log n)$ crossings. Furthermore, we show that there exist storylines in this restricted case that require $\Omega(n \log n)$ crossings. Lastly, we show that, in the general case, minimizing the number of crossings in a storyline visualization is fixed-parameter tractable, when parameterized on the number of characters $k$. Our algorithm runs in time $O(k!^2 k \log k + k!^2 m)$, where $m$ is the number of meetings.

Parallel Graph Algorithms on the Xeon Phi Coprocessor

  • Date: Mon, 7 Sep 2015, 14:00
  • Location: Raum 010
  • Speaker: Dennis Felsing, Masterarbeitsvortrag (ITI Meyerhenke)
  • Inhalt: In this thesis, two graph algorithms are studied as examples: one for graph generation, the other for graph visualization. We detail the work of adapting and porting the algorithms to the Intel Xeon Phi coprocessor architecture. Problems in porting real software projects and the libraries they use are encountered and solved. Memory allocations turned out to be a major problem for the graph generation algorithm. The limited memory of the Xeon Phi forced us to offload chunks of the data from the host system to the Xeon Phi, which impeded performance and eliminated any significant speedup. The data sets for the graph visualization algorithm fit into the Xeon Phi's memory easily, which simplified the porting process significantly. We achieve a speedup for sparse graphs over the host system containing two 8-core Intel Xeon (Sandy Bridge) processors.

Efficient Enumeration of All Reasonable Journeys in Public Transport Networks

  • Date: Thu, 3 Sep 2015, 15:00
  • Location: SR 301
  • Speaker: David Weiß, Masterarbeitsvortrag
  • Inhalt: Recent research on route planning in public transport networks has mostly focused on the customer perspective of answering individual queries for specific journeys in real time. We shift this focus towards the viewpoint of traffic planners, who need to enumerate all reasonable journeys throughout the network over a given timespan at once, e.g., to compute demand data down to specific lines and vehicles. While real-time performance is not an issue, faster algorithms that incorporate realistic footpath models are required to make all-to-all journey enumeration feasible for large networks. Also, the extraction of concrete journey representations becomes a substantial and time-consuming part. We present an algorithm for searching and enumerating journeys that are Pareto-optimal with regard to travel time and number of transfers between all pairs of destinations. The algorithm is capable of handling realistic transfer models as well as reasonable footpath networks. For our real-world test data with 1,156 distinct destinations, we manage to enumerate 388,471,381 optimal journeys, averaging 291 journeys per destination pair, in less than 33 minutes, or 5 microseconds per journey. Exploiting the embarrassingly parallel structure of our algorithm, concurrent enumeration using 8 CPU cores with a speedup of 5.6 brings the computation time down to about 5.8 minutes.

Parallel Lightweight Wavelet-Tree, Suffix-Array and FM-Index Construction

  • Date: Thu, 27 Aug 2015, 15:00
  • Location: SR 236
  • Speaker: Julian Labeit, Bachelorarbeitsvortrag (ITI Sanders)
  • Inhalt: We present parallel lightweight algorithms to construct rank and select structures, wavelet trees and suffix arrays in a shared memory setting. In experiments the presented wavelet tree algorithms achieve a speedup of 2-4x over existing parallel algorithms while reducing the asymptotic memory requirement. The suffix array construction algorithm presented is the first parallel algorithm using the induced copying approach. It only uses a small amount of extra space while staying competitive with existing parallel algorithms. Finally, we show how our algorithms can be used to build compressed full-text indexes, such as the FM-index, in parallel.
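Rank structures of the kind constructed here reduce rank1(i), the number of 1-bits before position i, to precomputed block counts plus a short scan; a minimal sequential sketch (class name hypothetical; the talk's contribution is the parallel, space-efficient construction):

```python
class BitRank:
    """Answer rank1(i) over a bit sequence using cumulative popcounts
    stored at block boundaries, so each query scans at most one block."""

    def __init__(self, bits, block=64):
        self.bits = bits
        self.block = block
        # Cumulative number of 1s before each block boundary.
        self.counts = [0]
        for i in range(0, len(bits), block):
            self.counts.append(self.counts[-1] + sum(bits[i:i + block]))

    def rank1(self, i):
        """Number of 1s in bits[:i]."""
        b = i // self.block
        return self.counts[b] + sum(self.bits[b * self.block:i])

# 100 bits with a known pattern: 3 ones per 5 bits, 60 ones in total.
bits = [1, 0, 1, 1, 0] * 20
r = BitRank(bits, block=16)
```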

Engineering Initial Partitioning Algorithms for direct k-way Hypergraph Partitioning

  • Date: Wed, 26 Aug 2015, 09:00
  • Location: SR 236
  • Speaker: Tobias Heuer, Bachelorarbeitsvortrag (ITI Sanders)

Pareto-Optimization of Travel Time and the Number of Turns in Road Networks

  • Date: Tue, 18 Aug 2015, 14:00
  • Location: SR 301
  • Speaker: Benjamin Davis, Bachelorarbeitsvortrag

Structure-Preserving Sparsification of Social Networks

  • Date: Tue, 18 Aug 2015, 13:00
  • Location: SR 301
  • Speaker: Gerd Lindner, ASONAM Probevortrag
  • Inhalt: Sparsification reduces the size of networks while preserving structural and statistical properties of interest. Various sparsifying algorithms have been proposed in different contexts. We contribute the first systematic conceptual and experimental comparison of edge sparsification methods on a diverse set of network properties. It is shown that they can be understood as methods for rating edges by importance and then filtering globally by these scores. In addition, we propose a new sparsification method (Local Degree) which preserves edges leading to local hub nodes. All methods are evaluated on a set of 100 Facebook social networks with respect to network properties including diameter, connected components, community structure, and multiple node centrality measures. Experiments with our implementations of the sparsification methods (using the open-source network analysis tool suite NetworKit) show that many network properties can be preserved down to about 20% of the original set of edges. Furthermore, the experimental results allow us to differentiate the behavior of different methods and show which method is suitable with respect to which property. Our Local Degree method is fast enough for large-scale networks and performs well across a wider range of properties than previously proposed methods.
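The unifying view above, rating edges by importance and then filtering globally by score, fits in a few lines; the score below (larger endpoint degree, a crude hub proxy) is a hypothetical stand-in, not the paper's Local Degree method:

```python
def sparsify(edges, keep_fraction):
    """Score-then-filter sparsification: rate every edge, then keep
    only the globally top-scoring fraction of edges."""
    # Degree of every endpoint.
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Hypothetical edge score: the larger endpoint degree.
    scored = sorted(edges,
                    key=lambda e: max(degree[e[0]], degree[e[1]]),
                    reverse=True)
    keep = max(1, int(len(scored) * keep_fraction))
    return scored[:keep]

# Star around vertex 0 plus one peripheral edge; keeping 50% of the
# edges retains only edges incident to the hub.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
kept = sparsify(edges, 0.5)
```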

Optimal Shuffle Code with Permutation Instructions (Probevortrag für WADS)

  • Date: Tue, 28 Jul 2015, 14:00
  • Location: SR 301
  • Speaker: Manuel Mohr, IPD Snelting
  • Inhalt: During compilation of a program, register allocation is the task of mapping program variables to machine registers. During register allocation, the compiler may introduce shuffle code, consisting of copy and swap operations, that transfers data between the registers. Three common sources of shuffle code are conflicting register mappings at joins in the control flow of the program, e.g., due to if-statements or loops; the calling convention for procedures, which often dictates that input arguments or results must be placed in certain registers; and machine instructions that only allow a subset of registers to occur as operands. Recently, Mohr et al. [2013] proposed to speed up shuffle code with special hardware instructions that arbitrarily permute the contents of up to five registers and gave a heuristic for computing such shuffle codes. In this paper, we give an efficient algorithm for generating optimal shuffle code in the setting of Mohr et al. An interesting special case occurs when no register has to be transferred to more than one destination, i.e., it suffices to permute the contents of the registers. This case is equivalent to factoring a permutation into a minimal product of permutations, each of which permutes up to five elements.
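The permutation-factorization special case starts from the disjoint-cycle decomposition of the register permutation; a sketch of that first step (any cycle of length at most five then fits a single permutation instruction, and splitting longer cycles optimally is where the hard part lies):

```python
def cycles(perm):
    """Disjoint-cycle decomposition of a permutation given as a list
    mapping position i -> perm[i]; fixed points are dropped, since they
    need no shuffle code at all."""
    seen, result = set(), []
    for start in range(len(perm)):
        if start in seen or perm[start] == start:
            continue
        cyc, nxt = [start], perm[start]
        seen.add(start)
        while nxt != start:
            seen.add(nxt)
            cyc.append(nxt)
            nxt = perm[nxt]
        result.append(cyc)
    return result

# 0 -> 1 -> 2 -> 0 is a 3-cycle, 3 <-> 4 a transposition, 5 a fixed point.
decomposition = cycles([1, 2, 0, 4, 3, 5])
```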

Fast Computation of Isochrones in Road Networks

  • Date: Fri, 24 Jul 2015, 15:00
  • Location: SR 301
  • Speaker: Valentin Buchhold, Masterarbeitsvortrag (ITI Wagner)
  • Inhalt: We study the problem of efficiently computing isochrones in a road network. Intuitively, an isochrone is the region that can be reached from a fixed starting point within a given amount of time. Practical applications include reachability analyses in urban planning, geomarketing, and displaying the remaining range of a vehicle. We compile various formal definitions from the literature and present several algorithms for computing isochrones. Besides extending known speedup techniques, we propose a new family of isochrone algorithms based on graph separators and contraction hierarchies, whereas existing approaches are based on multilevel Dijkstra searches. We conclude with an experimental evaluation of our algorithms.

Route planning for electric vehicles and preliminary studies on traffic pattern classification

  • Date: Fri, 10 Jul 2015, 14:15
  • Location: SR 301
  • Speaker: Shao-Chieh, Intern student from National Tsing Hua University, Taiwan

Online energy scheduling problem

  • Date: Fri, 10 Jul 2015, 14:00
  • Location: SR 301
  • Speaker: I-Hsuan, Intern student from National Tsing Hua University, Taiwan

Algorithms for Contraction Hierarchies on Public Transit Networks

  • Date: Tue, 7 Jul 2015, 13:15
  • Location: SR 301
  • Speaker: Alexander Wirth, Diplomarbeitsvortrag (ITI Wagner)
  • Inhalt: Modern applications for route planning in public transit networks require algorithms that can answer earliest arrival queries within milliseconds. This thesis revisits Contraction Hierarchies as a preprocessing technique for time-dependent transit networks of various sizes in order to achieve faster query times. We evaluate existing algorithms and propose alternative approaches for earliest arrival queries and profile queries that work on both the contracted and uncontracted networks. We also provide an alternative algorithm for finding witnesses during the construction of the hierarchy and show how it can be extended to allow a combination of time-dependent and time-independent edges. Finally, we evaluate the algorithms on real public transit networks of various sizes.

Separators in Planar Graphs and Road Networks

  • Date: Fri, 26 Jun 2015, 15:00
  • Location: SR 301
  • Speaker: Christian Sommer, PhD, Apple
  • Inhalt: Graph Partitioning is the well-studied problem of cutting a graph into disjoint regions of approximately equal size while minimizing the number of edges between regions. Graph partitions are used as essential building blocks in various efficient algorithms. One way of obtaining a partition is by cutting the graph into two pieces and then recursing on each subgraph. The recursion ends when the resulting subgraphs are sufficiently small. This way of partitioning is known as recursive bisection. Recursive bisection works well for those graphs that have small balanced separators. We review theoretical and experimental results on separators in planar graphs and road networks. For planar graphs (and, more generally, minor-free graphs), separator theorems have been known for decades. More recently, even smaller separators have been found for road networks. Joint work with Philip Klein and Shay Mozes (planar separators) and Aaron Schild (road network separators)

Graph partitioning for independent sets

  • Date: Fri, 26 Jun 2015, 14:00
  • Location: SR 301
  • Speaker: Sebastian Lamm, SEA Probevortrag

Proportional Contact Representation of Planar Graphs

  • Date: Wed, 24 Jun 2015, 14:00
  • Location: Mittlerer Hörsaal, Gebäude 10.91 - Maschinenbau
  • Speaker: Prof. Stephen Kobourov, University of Arizona
  • Inhalt: In a proportional contact representation of a planar graph, each vertex is represented by a simple polygon with area proportional to a given weight, and edges are represented by adjacencies between the corresponding pairs of polygons. We show how to use Schnyder realizers and canonical orders for planar graphs to obtain different types of contact representations. Specifically, we describe an algorithm that guarantees 10-sided rectilinear polygons and runs in linear time. We also describe a construction with 8-sided polygons, which is optimal in terms of polygonal complexity, as 8-sided polygons are sometimes necessary. Specifically, we show how to compute the combinatorial structure and how to refine it into an area-universal rectangular layout in linear time. Computing the exact representation from the area-universal layout, however, requires numerical iteration, or can be approximated with a hill-climbing heuristic.

From Graphs to Maps

  • Date: Tue, 23 Jun 2015, 09:45
  • Location: SR 301
  • Speaker: Prof. Stephen Kobourov, University of Arizona, Im Rahmen der VL Algorithmische Kartografie
  • Inhalt: Relational data sets are often visualized with graphs: objects become the graph vertices and relations become the graph edges. Graph drawing algorithms aim to present such data in an effective and aesthetically appealing way. We describe map representations, which provide a way to visualize relational data with the help of conceptual maps as a data representation metaphor. While graphs often require considerable effort to comprehend, a map representation is more intuitive, as most people are familiar with maps and ways to interact with them via zooming and panning. Map-based visualization allows us to explicitly group clusters of related vertices as "countries" and to present additional information with contour and heatmap overlays. We consider map representations of the DBLP bibliography server. Words and phrases from paper titles are the cities in the map, and countries are created based on word and phrase similarity, calculated using co-occurrence. With the help of heatmaps, we can visualize the profile of a particular conference or journal over a base map of all computer science. Similarly, we can create heatmap profiles for individual researchers or research groups such as a department. Alternatively, a specific journal or conference can be used to generate the base map and then a series of heatmap overlays can show the evolution of research topics in the field over the years. As before, individual researchers or research groups can be visualized using heatmap overlays, but this time over the journal or conference base map. Finally, visual abstracts can be generated from research papers, providing a snapshot view of the topics in the paper.

Tree compression with top trees revisited

  • Date: Fri, 19 Jun 2015, 15:00
  • Location: SR 301
  • Speaker: Lorenz Hübschle-Schneider, SEA Probevortrag
  • Inhalt: We revisit tree compression with top trees (Bille et al, ICALP'13) and present several improvements to the compressor and its analysis. By significantly reducing the amount of information stored and guiding the compression step using a RePair-inspired heuristic, we obtain a fast compressor achieving good compression ratios, addressing an open problem posed by Bille et al. We show how, with relatively small overhead, the compressed file can be converted into an in-memory representation that supports basic navigation operations in worst-case logarithmic time without decompression. We also show a much improved worst-case bound on the size of the output of top-tree compression (answering an open question posed in a talk on this algorithm by Weimann in 2012).

A bulk-parallel priority queue in external memory with STXXL

  • Date: Fri, 19 Jun 2015, 14:30
  • Location: SR 301
  • Speaker: Thomas Keh, SEA Probevortrag
  • Inhalt: We present the design and implementation of a bulk-parallel priority queue which takes advantage of both shared-memory parallelism and high external memory transfer speeds to parallel disks. To achieve higher performance by decoupling item insertions and extractions, we offer a new interface using "bulk" sequences. We discuss how to parallelize insertions using multiple heaps, and how to calculate a dynamic prediction sequence to prefetch blocks and apply parallel multiway merge for extraction.
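
The decoupling of insertions and extractions via a bulk interface can be conveyed with a toy sequential sketch. The class and method names here are hypothetical; the actual STXXL structure parallelizes both phases and spills to parallel disks, which this illustration omits.

```python
import heapq

class BulkPQ:
    """Sketch of the bulk interface idea (class name hypothetical):
    insertions are buffered and merged only when an extraction begins,
    decoupling the two phases. The real structure parallelizes both
    phases and uses external memory."""
    def __init__(self):
        self.heap, self.buffer = [], []

    def bulk_push(self, items):
        self.buffer.extend(items)  # no heap work yet

    def bulk_pop(self, k):
        if self.buffer:            # merge pending insertions once
            self.heap += self.buffer
            heapq.heapify(self.heap)
            self.buffer = []
        return [heapq.heappop(self.heap) for _ in range(min(k, len(self.heap)))]

pq = BulkPQ()
pq.bulk_push([5, 1, 4])
pq.bulk_push([3, 2])
front = pq.bulk_pop(3)  # front == [1, 2, 3]
```

Buffering whole "bulk" sequences is what makes the multi-heap parallel insertion and prefetch-driven extraction of the talk possible.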

How to Draw a Planarization

  • Date: Fri, 19 Jun 2015, 14:00
  • Location: SR 301
  • Speaker: Marcel Radermacher, Masterarbeitsvortrag (ITI Wagner)
  • Inhalt: In this thesis we study the problem of computing a straight-line representation of a planarization with a fixed combinatorial structure. By the results of Mnëv and Shor this problem is as hard as the Existential Theory of the Reals (ER-complete). We present a heuristic called Geometric Planarization Drawing to compute a straight-line representation of a planarization. We take an iterative approach inspired by a force-directed method, utilizing geometric operations to place a vertex at a locally optimal position. The core of our algorithm is the planarity region, in which we can move a vertex without changing the embedding. We take a statistical approach to evaluate the final drawings of different configurations of our Geometric Planarization Drawing approach. The evaluation shows that we improve the initial drawing, and find better results than our implementation of a force-directed method based on PrEd. Depending on the number of crossings per edge, a relaxation of the constraints improves the final drawing. To the best of our knowledge, the Geometric Planarization Drawing approach is the first heuristic to tackle the problem of straight-line representations of a planarization. In practice, our algorithm finds close to optimal solutions for drawings with at most 3 crossings per edge.

Cache-Efficient Aggregation: Hashing Is Sorting

  • Date: Wed, 17 Jun 2015, 11:30
  • Location: SR 236
  • Speaker: Ingo Müller
  • Inhalt: For decades researchers have studied the duality of hashing and sorting for the implementation of relational operators, especially for efficient aggregation. Depending on the underlying hardware and software architecture, the specific implementations, and the data sets used in the experiments, different authors came to different conclusions about which is the better approach. In this paper we argue that in terms of cache efficiency, the two paradigms are actually the same. We support our claim by showing that the complexity of hashing is the same as the complexity of sorting in the external memory model. Furthermore, we make the similarity of the two approaches obvious by designing an algorithmic framework that allows us to switch seamlessly between hashing and sorting during execution, letting us leverage the advantages of both. On a more practical note, we also show how to achieve very low constant factors by tuning both the hashing and the sorting routines to modern hardware. Since we observe a complementary dependency of the constant factors of the two routines on the locality of the input, we exploit our framework to switch to the faster routine where appropriate. The result is a novel relational aggregation algorithm that is cache-efficient---independently and without prior knowledge of input skew and output cardinality---, highly parallelizable on modern multi-core systems, and operating at a speed close to the memory bandwidth, thus outperforming the state-of-the-art by up to 3.7x.

Informatikkolloquium: Visibility and Transformation of Paths in Simple Polygons

  • Date: Mon, 15 Jun 2015, 17:30
  • Location: Hörsaal -101
  • Speaker: Professor Subir Ghosh, IIT Mumbai
  • Inhalt: Let s be a source point and t be a destination point inside an n-vertex simple polygon P. Let SP(s,t) denote the Euclidean shortest path, the path of the shortest length, from s to t inside P. Similarly, let MLP(s,t) denote a minimum link path, a path of the minimum number of turns, from s to t. We know that these two paths are simple and piece-wise convex. A path from a light source s to t inside P is called a diffuse reflection path (denoted as drp(s,t)) if the turning points of the path lie in the interiors of the boundary edges of P. A drp(s,t) is said to be optimal if it has the minimum number of turning points amongst all drp(s,t). An optimal drp(s,t) may not be simple. In this talk, we first show how SP(s,t) can be transformed to a MLP(s,t) and then we show that MLP(s,t) can also be transformed to a drp(s,t).

Engineering Fully-Compressed Suffix Trees

  • Date: Fri, 12 Jun 2015, 14:00
  • Location: SR 301
  • Speaker: Christian Ocker, Masterarbeitsvortrag (ITI Sanders)
  • Inhalt: The suffix tree is a classical data structure that provides optimal solutions to countless string processing problems. For a text of length n, a pointer-based representation of a suffix tree requires Θ(n log n) bits, whereas compact representations use O(n) bits on top of the size of the compressed text. The fully-compressed suffix tree (FCST) provides the same functionality using o(n) bits on top of the size of the compressed text, whereas queries are slowed down by a logarithmic factor compared to compact representations. Our contribution is a generic implementation of the FCST and of a recent proposal by Navarro and Russo that uses a different sampling to achieve a better query time complexity. We propose a variant of the FCST that improves pattern matching both in theory and in practice using a blind search approach. We provide an extensive empirical evaluation and comparison of the implemented data structures and show that our implementation outperforms the previous prototype in both space consumption and query/construction time.

Practical Massively Parallel Sorting

  • Date: Tue, 9 Jun 2015, 13:10
  • Location: SR 301
  • Speaker: Michael Axtmann, Probevortrag SPAA 2015

Informatikkolloquium: Analyzing the Language of Food on Social Media

  • Date: Mon, 8 Jun 2015, 17:30
  • Location: Hörsaal -101
  • Speaker: Prof. Stephen G. Kobourov, University of Arizona
  • Inhalt: We investigate the predictive power behind the language of food on social media. We collect a corpus of over three million food-related posts from Twitter and demonstrate that many latent population characteristics can be directly predicted from this data: overweight rate, diabetes rate, political leaning, and home geographical location of authors. For all tasks, our language-based models significantly outperform the majority- class baselines. Performance is further improved with more complex natural language processing, such as topic modeling. We analyze which textual features have most predictive power for these datasets, providing insight into the connections between the language of food, geographic locale, and community characteristics. Lastly, we design and implement an online system for real-time query and visualization of the dataset. Visualization tools, such as geo-referenced heatmaps, semantics-preserving wordclouds and temporal histograms, allow us to discover more complex, global patterns mirrored in the language of food.

  • Date: Tue, 2 Jun 2015, 13:00
  • Location: Raum 010
  • Speaker: Kolja Esders, Bachelorarbeitsvortrag (ITI Meyerhenke)
  • Inhalt: Link prediction methods try to predict the likelihood of a future connection between two nodes in a given network. This can be used in biological networks to infer protein-protein interactions or to suggest possible friends to a user in an online social network. Due to the enormous amount of data collected today, there is a need for scalable approaches to this problem. In this thesis, we make use of machine learning and evaluate nine unsupervised as well as three supervised methods on six networks with the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) metrics. To investigate the implications of different experimental setups, the testing and training set are varied in size and density. We also introduce a new local index called Neighborhood Distance. The experiments show that supervised methods consistently outperform unsupervised methods by up to 7.2% with respect to the ROC metric. The overall prediction quality is also highly dependent on the properties and structure of the analyzed network. In an effort to parallelize predictors, we achieve a speedup of more than ten. The findings suggest that generating custom predictors for networks based on their properties deserves further research.
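
As an illustration of the kind of unsupervised index evaluated in such studies, here is the classic common-neighbors score (not the thesis's proposed Neighborhood Distance index):

```python
def common_neighbors_score(adj, u, v):
    """Classic unsupervised link-prediction index: two nodes are more
    likely to connect the more neighbors they already share."""
    return len(adj[u] & adj[v])

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
# rank the non-adjacent pairs: highest score = most likely future link
pairs = [(u, v) for u in adj for v in adj if u < v and v not in adj[u]]
ranked = sorted(pairs, key=lambda p: -common_neighbors_score(adj, *p))
```

Supervised methods, as evaluated in the thesis, typically use several such indices together as features for a classifier.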

Partitioning and Repartitioning Using Size-Constrained Label Propagation and NetworKit

  • Date: Thu, 21 May 2015, 14:30
  • Location: SR 131
  • Speaker: Patrick Bisenius, Bachelorarbeitsvortrag (ITI Meyerhenke)
  • Inhalt: Multilevel graph partitioning schemes have proven very effective in practice. As part of this thesis, we implement a multilevel graph partitioner based on Size-constrained Label Propagation in NetworKit which achieves comparable solution quality to the established partitioners KaHIP and ParMETIS for partitioning and repartitioning. We also evaluate several strategies to improve existing partitioners, such as better tie-breaking strategies in the Label Propagation Algorithm and the application of Edge Ratings for Label Propagation. We achieve considerable runtime improvements compared to KaHIP on certain graphs by speeding up certain phases of the multilevel partitioner. Finally, we implement and evaluate a greedy algorithm to optimize the maximum communication volume of a k-partition.
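
The core label-propagation step with a size constraint can be sketched as follows. This is an illustrative toy version, not the NetworKit implementation; the function name and tie-breaking rule are choices made for this sketch.

```python
from collections import Counter

def size_constrained_lp(adj, labels, max_size, rounds=10):
    """Toy size-constrained label propagation: a node adopts the most
    frequent label among its neighbors unless the target block is full."""
    sizes = Counter(labels.values())
    for _ in range(rounds):
        changed = False
        for v, neigh in adj.items():
            counts = Counter(labels[u] for u in neigh)
            if not counts:
                continue
            best = max(counts, key=lambda l: (counts[l], -l))  # deterministic tie-break
            if best != labels[v] and sizes[best] < max_size:
                sizes[labels[v]] -= 1
                sizes[best] += 1
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

# two triangles joined by the edge {2, 3}; start with singleton blocks
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
blocks = size_constrained_lp(adj, {v: v for v in adj}, max_size=3)
```

The size cap is what turns plain community-detection label propagation into a partitioning routine with balanced blocks; in a multilevel scheme this step serves both as coarsening and as local refinement.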

Soon on SIGMOD: "SimpleMinds" in the SIGMOD Programming Contest

  • Date: Thu, 21 May 2015, 09:45
  • Location: SR -118
  • Speaker: Ingo Müller
  • Inhalt: Authors: Ismail Oukid (TU Dresden/SAP SE), Ingo Müller (KIT/SAP SE), Iraklis Psaroudakis (EPFL/SAP SE). Contest website: Duration: 30-minute presentation plus 15 minutes for questions and feedback.

Iterative Local Community Detection Combining Graph Structure and Attribute Similarity

  • Date: Tue, 19 May 2015, 13:00
  • Location: SR 301
  • Speaker: Jonas Krautter, Bachelorarbeitsvortrag (IPD Böhm, ITI Wagner)
  • Inhalt: Graph clustering has received a lot of attention from scientists, and many algorithms have been developed to approximate solutions to variants of the problem, which is usually NP-hard. Lately, methods have been proposed that combine attribute information and graph topology in the clustering process. However, none of the available combining algorithms offers a local approach, able to detect a community around a given seed node or a set of seed nodes. Therefore, we present an iterative solution in this thesis: learning attributes in a local environment, adding edges to represent attribute similarity, and using a known local structural algorithm, we provide a method for local community detection that respects both graph structure and attribute data. Evaluation results on synthetic data and on real-world data from the Facebook friendship network and the Amazon co-purchase network demonstrate the usefulness of the method for improving the detection of communities and identifying communities in overlapping structures.

Label Propagation for Hypergraph Partitioning

  • Date: Tue, 12 May 2015, 13:00
  • Location: SR 301
  • Speaker: Vitali Henne, Masterarbeitsvortrag (ITI Sanders)

Experimental Evaluation of Distributed Node Coloring Algorithms in the SINR Model

  • Date: Tue, 5 May 2015, 13:00
  • Location: SR 301
  • Speaker: Markus Schlegel, Bachelorarbeitsvortrag (ITI Wagner)
  • Inhalt: One approach to minimizing interference in ad hoc or sensor networks is to limit the number of concurrently transmitting nodes in every neighborhood by using a specific periodic TDMA schedule. Such a TDMA schedule ensures that nodes which are neighbors in the communication graph transmit in different time slots. Establishing such a schedule is closely related to the node coloring problem. In a properly colored graph, two neighboring nodes are never assigned the same color. In consequence, if we map each color to a slot in the periodic TDMA schedule, neighboring nodes are never allowed to transmit in the same time slot. In this thesis, we experimentally evaluate the runtime performance of three distributed node coloring algorithms in the Signal to Interference and Noise Ratio (SINR) model with the help of the network simulator framework Sinalgo. Additionally, we consider some practical improvements and evaluate their impact on the execution time.
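
The correspondence between colors and TDMA slots can be illustrated with a simple centralized greedy coloring. Note the thesis evaluates distributed algorithms under the SINR model; this sketch only shows the color-to-slot mapping idea.

```python
def greedy_coloring(adj):
    """Proper coloring: give each node the smallest color not used by an
    already-colored neighbor; each color then becomes one TDMA slot."""
    colors = {}
    for v in sorted(adj):
        taken = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
slots = greedy_coloring(adj)  # neighbors never share a slot
```

The number of colors used equals the period of the resulting TDMA schedule, so fewer colors mean each node transmits more often.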

An in depth look at LU-decomposition on modern multi-socket hardware

  • Date: Wed, 29 Apr 2015, 13:00
  • Location: SR 301
  • Speaker: Tobias Maier, Masterarbeitsvortrag (ITI Sanders)
  • Inhalt: Modern hardware makes new demands on algorithms. To properly utilize multi-socket architectures with non-trivial cache hierarchies, algorithms have to be designed with the hardware's strengths and weaknesses in mind. If properly addressed, modern hardware offers new and interesting optimization possibilities. In this thesis we present a new scheduling approach, using the example of an LU-decomposition. Our scheduler considers data-reuse opportunities between different subtasks and schedules these subtasks in a way that lets them share common inputs through the L3 cache. The LU-decomposition of matrices is one of the most important numerical algorithms. It is used for matrix inversion, to compute the determinant of a matrix, and to solve systems of linear equations. The algorithm for LU-decomposition is also representative of many other numerical computations, which is why it is part of the famous LINPACK benchmark used to rank the most powerful supercomputers in the TOP500 list. It is important to improve the performance of the LU-decomposition without decreasing its numerical accuracy. Therefore, we use the same numerical algorithm for the LU-decomposition as PLASMA (Parallel Linear Algebra for Scalable Multi-core Architectures), a current state-of-the-art implementation. By adapting the scheduling scheme to take better advantage of data sharing, we achieve performance increases between 15% and 29%.
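
For reference, the factorization in question computes a unit-lower L and upper U with A = L·U. A textbook Doolittle sketch without pivoting or blocking (production codes such as PLASMA use blocked, pivoted variants, which are what the scheduler in the talk orchestrates):

```python
def lu_decompose(A):
    """Textbook Doolittle LU without pivoting: returns unit-lower L and
    upper U with A = L * U (illustration only; production codes use
    blocked, pivoted variants for accuracy and speed)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)  # L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
```

In the blocked formulation, each of the inner updates becomes a subtask on a tile of the matrix; subtasks reading the same tiles are exactly the data-reuse opportunities the proposed scheduler co-locates in the L3 cache.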

Connecting Points with Low-Complexity Polynomial Curves in a Polygon

  • Date: Tue, 21 Apr 2015, 13:00
  • Location: SR 301
  • Speaker: Fabian Klute, Masterarbeitsvortrag (ITI Wagner)
  • Inhalt: Adding an edge to an existing graph drawing can be done in many ways. In my thesis we explored drawing the edge as a polynomial curve. Generalizing the k-link-path problem to curves, the problem can be stated as finding a curve that connects two points while respecting a given bounding polygon. For degree-two curves we present a direct algorithmic solution. For curves of higher degree the problem is more difficult to solve, and sampling the control points is the logical next step. With a naive approach one needs an exponential number of sample points, but using epsilon-nets we proved that a small set of points suffices to find a set of control points.

Consumption and Travel Time Profiles in Electric Vehicle Routing

  • Date: Fri, 17 Apr 2015, 14:00
  • Location: SR 301
  • Speaker: Simeon Andreev, Masterarbeitsvortrag (ITI Wagner)
  • Inhalt: An important aspect of route planning for electric vehicles is the trade-off between energy consumption and travel time. To compute this trade-off as generally as possible, previous approaches allowed variable driving speeds on road segments, with a fixed number of admissible speeds per segment. My master's thesis extends this approach with cost functions, so that the speed profile is no longer discrete. As a result, a profile search yields both faster query times and a more compact result. In the talk I will present the required profile operations, the profile algorithm, and an adaptation of time-dependent Contraction Hierarchies. The talk concludes with an experimental evaluation of the proposed algorithms.

Fall 2014

Parallelized Pruned Landmark Labeling for Shortest-Path Queries on Unit-Weight Networks

  • Date: Tue, 31 Mar 2015, 14:00
  • Location: SR 301
  • Speaker: Damir Ferizovic, Bachelorarbeitsvortrag (ITI Sanders)
  • Inhalt: When graphs are used in various applications, there is often a need to compute the shortest-path distance between pairs of nodes. Several approaches to this problem exist. In this thesis we present an algorithm that extends the results of labeling methods with parallelism. Our focus is on introducing a method that scales well in a parallel environment. We build on an existing labeling scheme (Pruned Landmark Labeling), which achieves good properties, and we also use it as the baseline for comparing our results.
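
The pruned BFS at the heart of Pruned Landmark Labeling can be sketched sequentially as follows; the thesis's contribution is parallelizing such constructions, which this illustration does not attempt.

```python
from collections import deque

def build_pll_labels(adj):
    """Pruned landmark labeling for unweighted graphs (Akiba et al.):
    BFS from each vertex in degree order, skipping a vertex whenever
    the labels built so far already certify an equally short distance."""
    order = sorted(adj, key=lambda v: -len(adj[v]))  # high degree first
    labels = {v: {} for v in adj}

    def query(u, v):
        common = labels[u].keys() & labels[v].keys()
        return min((labels[u][w] + labels[v][w] for w in common), default=float("inf"))

    for root in order:
        dist, q = {root: 0}, deque([root])
        while q:
            v = q.popleft()
            if query(root, v) <= dist[v]:
                continue  # pruned: this pair is already covered
            labels[v][root] = dist[v]
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
    return labels, query

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels, query = build_pll_labels(adj)
```

After construction, a distance query touches only the two label sets, which is what makes the scheme so fast; the pruning keeps those sets small on real networks.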

Software Visualization via Hierarchic Graphs

  • Date: Tue, 3 Mar 2015, 14:00
  • Location: SR 301
  • Speaker: Alfred Schuhmacher, Masterarbeitsvortrag

Algorithms for Mapping Parallel Processes onto Grid and Torus Architectures

  • Date: Mon, 2 Mar 2015, 14:30
  • Location: SR 236
  • Speaker: Roland Glantz, Probevortrag PDP 2015
  • Inhalt: Static mapping is the assignment of parallel processes to the processing elements (PEs) of a parallel system, where the assignment does not change during the application's lifetime. In our scenario we model an application's computations and their dependencies by an application graph. This graph is first partitioned into (nearly) equally sized blocks. These blocks need to communicate at block boundaries. To assign the processes to PEs, our goal is to compute a communication-efficient bijective mapping between the blocks and the PEs. This approach of partitioning followed by bijective mapping has many degrees of freedom. Thus, users and developers of parallel applications need to know more about which choices work for which application graphs and which parallel architectures. To this end, we not only develop new mapping algorithms (derived from known greedy methods). We also perform extensive experiments involving different classes of application graphs (meshes and complex networks), architectures of parallel computers (grids and tori), as well as different partitioners and mapping algorithms. Surprisingly, the quality of the partitions, unless very poor, has little influence on the quality of the mapping. More importantly, one of our new mapping algorithms always yields the best results in terms of the quality measure maximum congestion when the application graphs are complex networks. In case of meshes as application graphs, this mapping algorithm always leads in terms of maximum congestion and maximum dilation, another common quality measure.

Combining DibaP and KaHIP for fast diffusion-based repartitioning

  • Date: Mon, 2 Mar 2015, 14:00
  • Location: SR 236
  • Speaker: Sebastian Gieße, Bachelorarbeitsvortrag
  • Inhalt: For large parallel computations, an intelligent distribution of the data is important to speed up the calculation. Every processing element should have the same amount of work, and communication between processing elements should be minimal. This can be modeled with the graph partitioning problem. When the data gets altered, the old distribution of data might be unbalanced and a new one should be calculated. In such a scenario the graph repartitioning problem is useful. A variety of approaches to the problem of partitioning and repartitioning a graph exist. Some yield high-quality results but are far slower than others. One approach uses the idea of a diffusion within a graph. The results achieved by this approach are of very good quality, especially for the graph repartitioning problem. We take the diffusion-based TruncCons algorithm [mms-a-09] and improve its running time: TruncCons runs in O(km) time, and we remove its dependence on k whilst trying to preserve the good quality.
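
The flavor of a truncated diffusion can be conveyed by a few synchronous load-spreading rounds. This is an illustrative stand-in, not the TruncCons algorithm itself; the parameter names are choices made for this sketch.

```python
def truncated_diffusion(adj, seed, rounds=3, alpha=0.5):
    """A few synchronous diffusion rounds from a seed: every node keeps
    an alpha fraction of its load and spreads the rest to neighbors.
    Load stays conserved; nodes near the seed end up with more of it."""
    load = {v: 0.0 for v in adj}
    load[seed] = 1.0
    for _ in range(rounds):
        nxt = {v: alpha * load[v] for v in adj}
        for v, neigh in adj.items():
            share = (1 - alpha) * load[v] / len(neigh)
            for u in neigh:
                nxt[u] += share
        load = nxt
    return load

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
load = truncated_diffusion(adj, seed=0)
```

Truncating after a fixed number of rounds keeps the cost proportional to the edges touched, and the resulting load values indicate how strongly each node belongs to the seed's block, which is what a diffusion-based repartitioner exploits.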

Algorithmen zur Analyse historischer Landkarten

  • Date: Fri, 27 Feb 2015, 14:00
  • Location: SR 301
  • Speaker: Benedikt Budig, Universität Würzburg
  • Inhalt: I will first propose an innovative metadata-extraction pipeline whose modules I plan to implement during my doctoral studies. I will then focus on one of these modules in particular, the one that matches labels to place markers on historical maps. For this problem I developed an algorithmic solution in my master's thesis, which I will present together with experimental results demonstrating its performance. A sensitivity analysis quantifies the system's confidence in each matching. I will conclude the talk by presenting a prototype user interface that exploits the results of this analysis.

Crossing Minimization in Simultaneous Embeddings of Planar Graphs

  • Date: Fri, 20 Feb 2015, 14:00
  • Location: SR 301
  • Speaker: Maximilian Geißer, Bachelorarbeitsvortrag
  • Inhalt: In this talk I will discuss crossing minimization for a given simultaneous embedding of two planar graphs. This problem is NP-complete in the general case, even when the embedding of both graphs is given. We therefore consider special, connected shared graphs. On the one hand, one can assume that only the embedding of the shared graph is given; here we consider the case that the shared graph is a cycle, and I will explain how this problem can be shown to be equivalent to finding a special cut. A different problem arises if the embedding of both graphs is assumed to be given; there we consider the case that the shared graph is 2-connected. I give efficient crossing-minimization algorithms for the case that the exclusive parts in each face of the shared graph are not too heavily interleaved.


  • Date: Wed, 18 Feb 2015, 11:00
  • Location: SR 301
  • Speaker: Algorithm Engineering
  • Inhalt: 11:00 - 12:30

Implementation and Evaluation of an External Memory String B-Tree

  • Date: Wed, 11 Feb 2015, 14:00
  • Location: SR 301
  • Speaker: Fellipe Lima, Bachelorarbeitsvortrag
  • Inhalt: Preprocessing huge texts to answer substring queries is not trivial under realistic models. We approach this problem by offering an efficient implementation of the String B-Tree data structure, which aims to solve the substring search problem under dynamic operations. We achieve optimal space usage for the Patricia Tries by representing them via multiarray encoding, and our approach for the String B-Tree uses less space than any other external-memory full-text index available. The tree representation is well suited for entropy compression, and the String B-Tree is efficiently constructed with respect to both CPU processing and I/O operations. During the construction process we reuse the SA and LCP arrays, which can be generated by other efficient algorithms.

Engineering (succinct | [monotone] minimal perfect hash) functions

  • Date: Tue, 3 Feb 2015, 14:00
  • Location: SR -118
  • Speaker: Sebastiano Vigna, Universität Mailand
  • Inhalt: Succinct representations of static functions that can be queried in constant time have been developed since the eighties, but their importance has grown due to the increasing amount of processing in data warehouses. Starting from the classic Majewski-Wormald-Havas-Czech technique based on hypergraph peeling, we illustrate how to build static functions that can be used, among other things, to store static probabilistic dictionaries in space very close to optimal. Then, building on the same ideas, we show how to store minimal perfect hash functions using 2.5 bits per key, and how to store monotone minimal perfect hash functions using log w bits per key, where w is the length of a key in bits. Making these constructions realistically scalable to billions of keys requires some care, as the final structure may use as little as 1.23 bits per key, and we do not want to require more memory for construction than for usage. We present our variation of a practical, efficient key bucketing technique introduced by Botelho, Pagh and Ziviani, and some implementation details that significantly improve construction time, such as the XOR trick. We will also briefly discuss a MapReduce-friendly version of the peeling procedure that has been recently discovered.

    Sebastiano Vigna obtained his PhD in Computer Science from the Università degli Studi di Milano, where he is currently an Associate Professor. His interests lie in the interaction between theory and practice. He has worked on highly theoretical topics such as computability on the reals, distributed computability, self-stabilization, minimal perfect hashing, succinct data structures, query recommendation, algorithms for large graphs, and theoretical/experimental analysis of spectral rankings such as PageRank, but he is also (co)author of several widely used software tools ranging from high-performance Java libraries to a model-driven software generator, a search engine, a crawler, a text editor, and a graph compression framework. In 2011 he collaborated on the computation of the distance distribution of the whole Facebook graph, from which it was possible to determine that on Facebook there are just 3.74 degrees of separation. Recently, he participated in the analysis of the largest available public web crawl (Common Crawl 2012), which led to the publication of the first open ranking of web sites.

Extending NetworKit

  • Date: Fri, 30 Jan 2015, 14:30
  • Location: SR 301
  • Speaker: Marc Nemes
  • Inhalt: I will begin by describing what I have added to NetworKit and then report the results of my tests. Afterwards, I will explain the approach and the functions I used to obtain these results. The last part will be a summary of the entire thesis.

Network Analysis on Distributed Systems

  • Date: Fri, 23 Jan 2015, 14:00
  • Location: SR 301
  • Speaker: Jannis Koch, Bachelorarbeitsvortrag
  • Inhalt: This Bachelor thesis explores how distributed systems can be used for network analysis. The increasing sizes of network data sets require the computation in a distributed setting, due to the computational and memory constraints of single shared-memory systems. Several programming models for distributed computing, both general-purpose and graph-specific, have emerged in recent years. This thesis identifies, describes and compares important programming models, giving an in-depth analysis of their suitability for network analysis algorithms. The four distributed frameworks GraphLab, Apache Giraph, Giraph++ and Apache Flink are described and used to implement algorithms for the graph problems Connected Components, PageRank, Shortest Paths, Community Detection and Clustering Coefficients. Finally, the implementations are executed on a computer cluster to evaluate the frameworks’ suitability in practice and to compare their performance to the shared-memory graph toolkit NetworKit. The results show that the distributed system of eight machines is able to handle data sets that are too large for a single shared-memory system of the same type. However, for those graphs that fit into memory of one machine, the performance of the shared-memory execution is far superior to the distributed one. Out of the examined frameworks, GraphLab and Apache Giraph generally exhibit the best performance.

Fast graph generation with a hyperbolic unit disk model

  • Date: Thu, 22 Jan 2015, 16:00
  • Location: Room 010
  • Speaker: Moritz von Looz

Routing mit Violations

  • Date: Tue, 13 Jan 2015, 14:30
  • Location: SR 301
  • Speaker: Matthias Tangemann, Bachelorarbeitsvortrag

Parallel SAT Solving

  • Date: Tue, 13 Jan 2015, 14:00
  • Location: SR 301
  • Speaker: Tomas Balyo
  • Inhalt: The goal of our project is to develop a scalable massively parallel Boolean satisfiability (SAT) solver. The talk will start with an introduction of some of the most successful sequential SAT solving algorithms like local search and conflict driven clause learning (CDCL). Then we will describe the parallel portfolio approach based on running several instances of a CDCL SAT solver on the same problem and exchanging learned clauses. We will present some previous results regarding portfolio SAT solving together with our own observations and ideas for improvement.
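
The portfolio idea, several differently configured solvers racing on the same formula, can be sketched with a toy enumeration-based solver. Real portfolios run CDCL instances and exchange learned clauses, which this sketch omits; the solver here is only a stand-in.

```python
import itertools
from concurrent.futures import ThreadPoolExecutor, as_completed

def brute_force_sat(clauses, n_vars, order):
    """Toy complete 'solver': enumerate assignments over the variables
    in the given order (a stand-in for differently configured CDCL
    instances; real portfolios also exchange learned clauses)."""
    variables = list(order)
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = dict(zip(variables, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return assign
    return None

# (x1 or not x2) and (x2 or x3), variables numbered 1..3
clauses = [[1, -2], [2, 3]]
with ThreadPoolExecutor(max_workers=2) as ex:
    futures = [ex.submit(brute_force_sat, clauses, 3, order)
               for order in ([1, 2, 3], [3, 2, 1])]
    model = next(as_completed(futures)).result()  # take the first finisher
```

The variable order plays the role of a solver configuration: on hard instances, different configurations can differ wildly in runtime, and the portfolio wins by taking whichever finishes first.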

Cache-Efficient Parallel Sparse Matrix-Matrix Multiplication

  • Date: Tue, 16 Dec 2014, 14:30
  • Location: SR 301
  • Speaker: Luben Alexandrov, Masterarbeitsvortrag
  • Inhalt: The thesis investigates the BLAS-3 routine of sparse matrix-matrix multiplication (SpGEMM) based on the outer product method. Several algorithmic approaches have been implemented and empirically analyzed. The experiments show that an algorithm presented by Gustavson outperforms the other alternatives. In this work we propose an optimization technique that improves the scalability and cache efficiency of Gustavson's algorithm for large matrices. Our approach reduces cache misses by more than a factor of five and improves the net running time by 30% on some instances. The thesis also presents an algorithm for flops estimation, which can be used to determine an upper bound for the density of the result matrix. Furthermore, the work analyzes and empirically evaluates techniques for parallelizing the multiplication in a shared-memory model using Intel TBB and OpenMP. We investigate the cache efficiency of the algorithm in a parallel setting and compare several approaches for load balancing the computation.
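For context, the row-wise scheme usually attributed to Gustavson computes each output row with a sparse accumulator. A minimal sketch under an assumed (column, value) row format, not the thesis's CSR implementation:

```python
def spgemm(A, B):
    """C = A * B, rows stored as lists of (column, value) pairs."""
    C = []
    for row_a in A:                # compute one row of C
        acc = {}                   # sparse accumulator (Gustavson's workspace)
        for k, a_ik in row_a:      # each nonzero a_ik scales row k of B
            for j, b_kj in B[k]:
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        C.append(sorted(acc.items()))
    return C

# Dense view: A = [[1, 2], [0, 3]], B = [[4, 0], [0, 5]]
A = [[(0, 1), (1, 2)], [(1, 3)]]
B = [[(0, 4)], [(1, 5)]]
C = spgemm(A, B)   # → [[(0, 4), (1, 10)], [(1, 15)]]
```

The cache behavior the thesis optimizes comes from the irregular accesses to rows of B; the accumulator `acc` is the structure that a dense or hashed workspace replaces in practice.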

Application of Efficient Matrix Inversion to the Decomposition of Hierarchical Matrices

  • Date: Fri, 12 Dec 2014, 13:15
  • Location: SR 301
  • Speaker: Moritz Klammler, Bachelorarbeitsvortrag
  • Inhalt: -

An experimental study of a nearly-linear time Laplacian solver

  • Date: Tue, 9 Dec 2014, 14:00
  • Location: Room 010
  • Speaker: Daniel Hoske
  • Inhalt: Solving sparse Laplacian systems is a problem of significant practical importance. Spielman and Teng's [ST04] nearly-linear time Laplacian solver was, therefore, an important theoretical breakthrough that spawned a series of extensions and simplifications. Although these solvers are an enormous theoretical achievement, as of this writing they have not been extensively validated in practice. In this thesis we seek to fill this gap by implementing and benchmarking the nearly-linear time Laplacian solver proposed by Kelner et al. [Kel+13] that is much simpler than Spielman and Teng's [ST04] original algorithm. While we confirm that its running time grows nearly-linearly, the constant factor is so large that the solver performs poorly compared with common iterative solvers. In particular, we find that the convergence of the solver strongly depends on the stretch of a chosen spanning tree. As Papp [Pap14] observed, known spanning tree algorithms with provable stretch give poor stretch in practice and, therefore, result in slow convergence of the solver. In addition, we show that using this solver as a preconditioner in common solvers causes convergence problems. However, it quickly dampens the high-frequency components of the error and could, therefore, work well as a smoother. Overall, Spielman and Teng's [ST04] Laplacian solver did not prove to be competitive against common iterative solvers. Citations: [Kel+13] Jonathan A. Kelner et al. “A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-linear Time”. In: STOC. 2013, pp. 911–920. doi: 10.1145/2488608.2488724. [Pap14] Pál András Papp. “Low-Stretch Spanning Trees”. Bachelor thesis. Eötvös Loránd University, Budapest, 2014. [ST04] Daniel A. Spielman and Shang-Hua Teng. “Nearly-linear Time Algorithms for Graph Partitioning, Graph Sparsification, and Solving Linear Systems”. In: STOC. 2004, pp. 81–90. doi: 10.1145/1007352.1007372.
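The stretch of a spanning tree, which the abstract identifies as decisive for convergence, can be made concrete for an unweighted graph: the stretch of an edge is the length of the tree path between its endpoints. An illustrative sketch, not the thesis code:

```python
from collections import deque

def tree_distance(tree_adj, u, v):
    """BFS distance between u and v using only tree edges."""
    dist, queue = {u: 0}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for y in tree_adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    raise ValueError("u and v are not connected in the tree")

def total_stretch(edges, tree_edges, n):
    """Sum of tree-path lengths over all graph edges (unit edge lengths)."""
    tree_adj = {i: [] for i in range(n)}
    for u, v in tree_edges:
        tree_adj[u].append(v)
        tree_adj[v].append(u)
    return sum(tree_distance(tree_adj, u, v) for u, v in edges)

# 4-cycle 0-1-2-3-0 with the path 0-1-2-3 as spanning tree:
# the non-tree edge (3, 0) is stretched to a path of length 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
tree = [(0, 1), (1, 2), (2, 3)]
ts = total_stretch(edges, tree, 4)   # → 6
```

Low-stretch tree constructions aim to keep this sum near-linear in the number of edges; the observation in the talk is that the provable constructions give poor stretch on real inputs.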

Schematic, Graph-Based Drawing of Protein Secondary Structures

  • Date: Fri, 28 Nov 2014, 14:00
  • Location: SR 301
  • Speaker: Eric Sallermann, Bachelorarbeitsvortrag
  • Inhalt: Proteins are a component of all living organisms and the focus of many studies, especially those aiming at a better understanding of the human body. The spatial structure of a protein is considered at several levels. This talk deals with the second level, the secondary structure of the protein, and its drawing. We first motivate why proteins are worth a closer look. We then discuss which protein models exist, what their advantages and disadvantages are, and how a graph-based approach is chosen to model the protein. This is followed by an explanation of which visual aspects a schematic drawing of the protein secondary structure should satisfy. Finally, we present our method for generating such a drawing automatically, and conclude by discussing the quality of the drawing based on two example graphs and one protein, as well as which improvements are still feasible.

A New Approach to Social Network Analysis

  • Date: Thu, 27 Nov 2014, 13:00
  • Location: Room 236
  • Speaker: Prof. Ulrik Brandes, Universität Konstanz
  • Inhalt: Network science is a burgeoning domain of data analysis in which the focus is on structures and dependencies rather than populations and independence. Social network analysis is network science applied to the empirical study of social structures, typically utilizing observations on social relationships to analyze the actors involved in them. Methods for the analysis of social networks abound. They include, for instance, numerous centrality indices, vertex equivalences, and clustering techniques, many of which are applied on networks in other disciplines as well. For substantively oriented analysts, however, it is often difficult to choose, let alone justify, a particular variant method. Similarly, it is difficult for researchers interested in computational aspects to understand which methods are worthwhile to consider and whether variants and restrictions are meaningful and relevant. In an attempt to bridge the gap between theory and methods, and drawing on a substantial record of interdisciplinary cooperation, we have developed a comprehensive research program, the positional approach to network analysis. It provides a unifying framework for network analysis in the pursuit of two closely related goals: * to establish a science of networks, and * to facilitate mathematical and algorithmic research. The first caters to methodologists and social scientists: by embracing measurement theory, network-analytic methods are opened up for theoretical justification and detailed empirical testing. The second caters to mathematicians and computer scientists: by structuring the space of methods, gaps and opportunities are exposed. After a brief introduction and delineation of network science and social network analysis, the main elements of the positional approach are introduced in this talk. I will then concentrate on exemplary instantiations for analytic concepts such as centrality, roles, and cohesion. 
Particular emphasis is placed on resulting combinatorial and algorithmic challenges involving, for instance, partial orders, graphs, and path algebras.

Algorithms for focus-and-context maps

  • Date: Wed, 26 Nov 2014, 11:30
  • Location: SR 301
  • Speaker: Prof. Jan-Henrik Haunert, Universität Osnabrück
  • Inhalt: A focus-and-context map provides a user with detailed information about an area of interest while displaying context for orientation. In my talk I present algorithmic approaches to the automatic generation of focus-and-context maps from large-scale geographic data. This includes an optimization-based distortion technique to enlarge a user-selected region in a map as well as algorithms for map generalization and map labeling.

Automated Generation of Destination Maps

  • Date: Tue, 25 Nov 2014, 14:00
  • Location: SR 301
  • Speaker: Yordan Boev, Bachelorarbeitsvortrag
  • Inhalt: The main subject is destination maps and their automated creation. A destination map resembles a hand-made sketch whose purpose is to show a person how to easily get to a fixed destination. Our goal is to create a computer application capable of drawing such maps. We will briefly outline some previous attempts at this problem and present our solution. It is based on routing with shortest paths and consists of two main parts: path selection and force-based drawing. We will present the underlying algorithms that define our process of creating a destination map and, at the end, show a short demo of our prototype application that implements our ideas and allows us to visualize and experiment.

Weighted Disk Contact Graphs

  • Date: Fri, 21 Nov 2014, 15:00
  • Location: SR 301
  • Speaker: Boris Klemz, Diplomarbeit
  • Inhalt: Disk touching graphs realize graphs by representing each vertex as a disk in the plane such that disks touch each other if and only if the corresponding vertices are adjacent. Application areas for disk graphs include modeling physical problems and visualizing statistical data. Deciding whether a graph can be realized such that the disks' radii coincide with some predefined values or such that the disks cover some specified points in the plane has been proven to be NP-hard. In the thesis we consider several variations and combinations of these two scenarios and analyze what can be guaranteed for special graph classes. We show that many of the considered problems remain NP-hard even for very basic graph classes. We thereby strengthen several previous NP-hardness results and provide NP-hardness proofs for new problem variations. On the other hand, we present some linear-time algorithms for graph class/problem combinations for which we do not prove NP-hardness. In the talk we discuss one of these algorithms as well as one of the NP-hardness results in detail. In particular, we present a linear-time algorithm for deciding whether a caterpillar can be realized as a disk touching graph with uniform radii and we show that it is NP-hard to decide whether a graph can be realized as a disk touching graph whose radii coincide with predefined values and whose disks cover specified points, even if all radii are chosen uniformly and even if the input graph is a path (and, therefore, even if a combinatorial embedding is provided). We conclude the talk by briefly discussing additional results.

Communication Efficient Algorithms for Distributed OLAP Query Execution

  • Date: Fri, 21 Nov 2014, 14:30
  • Location: SR 301
  • Speaker: Demian Hespe, Bachelorarbeitsvortrag
  • Inhalt: As a result of the growing amounts of data in today's databases, one machine is often not sufficient to store and process them. The proper solution to this problem is to scale the system out on a cluster. However, the distribution of the data across the machines of the cluster results in a high percentage of communication time in the overall execution time of a query, especially for complex analytical queries. For this reason, we try to minimize the volume of communicated data to allow faster runtimes when a query cannot be executed on a single node of the cluster without any communication. We analyze techniques from previous work and propose improvements to them, backed by a complexity analysis of the communication volume for both our algorithms and those from previous work. For the evaluation we implement our algorithms for selected queries of the TPC-H benchmark and run them on a cluster of up to 128 nodes with a database of up to 30 terabytes of uncompressed data (128 TB if only a small proportion of the database is used). We provide both scaling experiments and runtime comparisons to previous work and the current TPC-H record holder.
The main contributions of this work are: - A technique to find a better partitioning of the tables in a database to allow the execution of joins without communication effort - An algorithm that selects the first k tuples of the result set of a query with a communication effort independent of the size of the database, given certain conditions on the partitioning - An analysis of the communication effort of a delayed join that can't be evaluated locally on a node, compared to the communication effort of executing the join early - The application of our algorithms to solve complex queries of the TPC-H benchmark that can't be executed without a high amount of communication effort - The implementation of the queries in a prototype and the evaluation of our algorithms on a large cluster of 128 nodes for a database with up to 30 terabytes of uncompressed data (or 128 TB if only a small proportion of the database is used)

Efficient Update Handling in Main Memory Column-Stores

  • Date: Fri, 21 Nov 2014, 14:00
  • Location: SR 301
  • Speaker: Shengjia Feng, Bachelorarbeitsvortrag
  • Inhalt: With the growing amount of data collected over time and the demand on executive floors for fast data analysis for ad-hoc business decisions, very fast execution of analytic queries on big databases becomes crucial to gain the requested information. Main-memory column-store databases can process these analytic queries orders of magnitude faster than conventional databases. However, column-store databases are not able to process transactions such as insertions and deletions as efficiently as row-stores due to their physical structure. This thesis analyzes methods to optimize update handling within the restrictions given by the physical structure to achieve results comparable to conventional row-stores. We first analyze different OLAP query constellations and apply the techniques to the queries of the TPC-H Benchmark both theoretically and practically. In addition, we implement possible algorithms for merging the main table with the delta buffers used in the chosen update method, the Stage Update. In summary, this thesis contributes: - Different possibilities of update handling - Detailed analyses of methods in Stage Updating - Fast and scalable algorithms for delta-main merges - Least-slackening delta incorporation methods for analytic queries

Data Structures for Planar Graph Embeddings

  • Date: Wed, 19 Nov 2014, 15:45
  • Location: SR 301
  • Speaker: Patrizio Angelini (University of RomaTre), Graph Visualization course
  • Inhalt: In this lecture we discuss the relationship between the connectivity of a planar graph and the number of planar embeddings it admits. Since low-connected graphs turn out to admit a super-polynomial number of planar embeddings, we will describe some data structures to succinctly represent all such embeddings. Finally, we shall see an example of how these data structures can be used to solve problems efficiently.

Transforming Rectangles into Squares: An Introduction to the Squarability Problem

  • Date: Fri, 14 Nov 2014, 09:45
  • Location: Geb. 05.20 (Allianzgebäude am Kronenplatz), Raum 1C-10
  • Speaker: Jonathan Klawitter, Bachelorarbeitsvortrag
  • Inhalt: We introduce the Squarability problem: When can a set of axis-aligned rectangles be transformed into squares without changing combinatorial properties? This means that we do not allow changes to whether, how, and in which order the rectangles (respectively squares) intersect. We give a full characterisation of triangle-free rectangle arrangements via enhanced intersection graphs. We define a mixed integer linear program to solve any instance of the Squarability problem. We give some exemplary instances, which indicate that the problem is in general not that easy to solve. However, we show that some classes of rectangle arrangements can always be transformed into squares in the desired way.

Coloring Complex Networks

  • Date: Fri, 7 Nov 2014, 15:00
  • Location: SR 301
  • Speaker: Lukas Hübner, Bachelorarbeitsvortrag
  • Inhalt: This thesis gives an overview of existing graph coloring algorithms and their applicability to complex graphs representing social networks. Based on the observation that for small k the k-core of these graphs consists of very few vertices, two new algorithms to color social network graphs are proposed. Both try to color a k-core of the graph using an expensive heuristic and then color the remaining graph using a fast heuristic. The running time and number of colors used by the two proposed algorithms on social networks are then tested and compared to various other heuristics and a parallel version of LDF. LDF and its parallel version Parallel LDF performed very well: they were the fastest algorithms tested and used as many colors as Dsatur for most of the graphs.
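The two-phase idea described above can be sketched as follows; this is a hypothetical reconstruction (a DSatur-style saturation ordering on the k-core, then a greedy largest-degree-first pass), not the thesis code:

```python
def k_core(adj, k):
    """Vertices that survive repeated removal of vertices with degree < k."""
    alive, changed = set(adj), True
    while changed:
        changed = False
        for v in list(alive):
            if v in alive and sum(1 for u in adj[v] if u in alive) < k:
                alive.discard(v)
                changed = True
    return alive

def color_graph(adj, k=3):
    colors = {}
    core = k_core(adj, k)
    # Expensive phase: DSatur-style ordering on the (small) core:
    # always color the vertex that sees the most distinct neighbor colors.
    uncolored = set(core)
    while uncolored:
        v = max(uncolored,
                key=lambda v: len({colors[u] for u in adj[v] if u in colors}))
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = min(c for c in range(len(adj)) if c not in used)
        uncolored.discard(v)
    # Fast phase: greedy, largest degree first, on the remaining graph.
    for v in sorted(set(adj) - core, key=lambda v: -len(adj[v])):
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = min(c for c in range(len(adj)) if c not in used)
    return colors

# Triangle 0-1-2 (the 2-core) with a pendant vertex 3 attached to 0.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
colors = color_graph(adj, k=2)
```

The point of the split is that the expensive ordering runs only on the few core vertices, while the bulk of a social network is handled by the cheap greedy pass.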

Finding Small Node Separators

  • Date: Fri, 7 Nov 2014, 14:30
  • Location: SR 301
  • Speaker: Michael Wegner, Bachelorarbeitsvortrag
  • Inhalt: We present a new algorithm to compute small node separators on large, undirected graphs while ensuring a user-defined balance of the induced connected components. Our algorithm uses edge separators computed by the open-source graph partitioning package KaHIP and a new technique to transform them into node separators. Given an undirected graph G, our algorithm constructs a flow problem on a subgraph of G containing the edge separator so that the induced minimum cut in this subgraph corresponds to a node separator in G. Experiments show that our algorithm finds significantly better node separators than other popular tools. It is some orders of magnitude slower than those tools; however, parallelizing the algorithm and optimizing some crucial parts should decrease the running time without too much effort. To further evaluate the quality of our node separators, we implemented a node ordering algorithm based on nested dissection which uses our separators to compute high-quality node orderings.
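The core reduction behind turning minimum cuts into node separators is the classical vertex-splitting construction; here is a self-contained sketch (illustrative only, using a simple Edmonds-Karp max-flow rather than the flow construction on the KaHIP edge separator):

```python
from collections import deque

INF = float('inf')

def edmonds_karp(cap, s, t):
    """Max flow via shortest augmenting paths; cap is a dict of dicts."""
    flow = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            x = queue.popleft()
            for y, c in cap[x].items():
                if c > 0 and y not in parent:
                    parent[y] = x
                    queue.append(y)
        if t not in parent:
            return flow
        path, y = [], t
        while parent[y] is not None:           # trace the augmenting path
            path.append((parent[y], y))
            y = parent[y]
        b = min(cap[u][v] for u, v in path)    # bottleneck capacity
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] = cap[v].get(u, 0) + b   # residual edge
        flow += b

def min_separator_size(adj, s, t):
    """Split each vertex v into v_in -> v_out with capacity 1 (Menger)."""
    cap = {}
    for v in adj:
        cap[(v, 'in')] = {(v, 'out'): 1}
        cap[(v, 'out')] = {}
    for v, nbrs in adj.items():
        for u in nbrs:
            cap[(v, 'out')][(u, 'in')] = INF   # original edges are uncuttable
    return edmonds_karp(cap, (s, 'out'), (t, 'in'))

# Removing vertices 1 and 2 disconnects 0 from 4.
adj = {0: [1, 2], 1: [0, 2, 4], 2: [0, 1, 4], 4: [1, 2]}
sep_size = min_separator_size(adj, 0, 4)   # → 2
```

By Menger's theorem the unit vertex capacities make the minimum cut equal the smallest node separator between the two terminals; the thesis restricts such a flow problem to a subgraph around the edge separator to keep it tractable on large graphs.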

Evolutionary Algorithms For Independent Sets

  • Date: Fri, 7 Nov 2014, 14:00
  • Location: SR 301
  • Speaker: Sebastian Lamm, Bachelorarbeitsvortrag
  • Inhalt: An independent set of a graph G = (V, E) is a subset S ⊆ V such that there are no adjacent nodes in S. The independent set problem is that of finding the maximum cardinality set among all possible independent sets. We present a novel evolutionary algorithm for this problem. Our algorithm starts by generating a population of individuals, using greedy algorithms to create initial independent sets. We then use tournament selection for picking individuals, called parents, for mating. We introduce four new ways to combine the selected individuals. The first procedure creates new individuals by building the overlap of the parents' independent sets. The other schemes make use of graph partitioning to produce new offspring, applying both 2-way and k-way partitions as well as node separators. We use them to quickly exchange whole parts of the given independent sets. For example, we make use of node separators obtained from the partitioner to generate new independent sets in the first of these combine operators. We also provide an indirect approach by using vertex covers and partitions in our third operator. Finally, we use k-way partitions and node separators to build a multiple-point combination operator. All newly created sets are afterwards improved by the fast local search algorithm of Andrade et al. [1]. We present an experimental evaluation of our algorithm using instances from the literature as well as ones proposed by Andrade et al. [1]. Our algorithm performs particularly well on social network graphs and the meshes introduced by Andrade et al. [1]. Notable improvements can also be found on instances from Walshaw's Partition Archive, which are common benchmarks for graph partitioners. In the initial phases of almost all test runs we performed, our algorithm is able to establish better solutions. Additionally, we are able to show that graph partitioners benefit our algorithm when compared to more simplistic methods.
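The first combine operator, building the overlap of the parents' independent sets, can be sketched like this; the greedy fill-in below is only a stand-in for the local search of Andrade et al.:

```python
def is_independent(adj, s):
    """No two vertices of s are adjacent."""
    return not any(u in s for v in s for u in adj[v])

def overlap_combine(adj, parent1, parent2):
    """Offspring = overlap of two independent sets, then greedy extension."""
    child = parent1 & parent2          # an overlap of independent sets is independent
    for v in sorted(adj, key=lambda v: len(adj[v])):   # low-degree vertices first
        if v not in child and not any(u in child for u in adj[v]):
            child.add(v)
    return child

# Path 0-1-2-3-4 with two parent independent sets.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
child = overlap_combine(adj, {0, 2, 4}, {1, 4})
```

The intersection step is what makes the operator safe: independence is inherited from the parents, so only the extension step has to be checked.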

Non-parametric Methods for Correlation Analysis in Multivariate Data

  • Date: Fri, 7 Nov 2014, 13:00
  • Location: SR 131
  • Speaker: Hoang Vu Nguyen
  • Inhalt: Knowledge discovery in multivariate data often involves analyzing the relationship of two or more dimensions. Correlation analysis, with its roots in statistics, is one of the most effective approaches to addressing this issue. In this talk, I will present some non-parametric methods for correlation analysis in multivariate data. I will focus on real-valued data where probability density functions (pdfs) are in general not available at hand. Instead of estimating them, we propose to work with cumulative distribution functions (cdfs) and cumulative entropy - a new concept of entropy for real-valued data. In the talk, I will first discuss two methods for scalable mining of correlated subspaces in large high-dimensional data. Second, I will introduce an efficient and effective non-parametric method for computing total correlation - a well-known correlation measure based on Shannon entropy. This method is based on discretization and hence can be perceived as a technique for correlation-preserving discretization (compression) of multivariate data. Lastly, I will go beyond correlation analysis and present our ongoing research in multivariate causal inference.

Accelerating Flow Computations on Shared-Memory Platforms

  • Date: Tue, 4 Nov 2014, 14:00
  • Location: SR 301
  • Speaker: Niklas Baumstark, Bachelorarbeitsvortrag
  • Inhalt: In this talk I will present the results of my research on parallel flow and cut algorithms, conducted in the course of my Bachelor thesis. I will roughly outline the content of previous publications and derive from them the central problems and challenges. Using a new benchmark, I can show that many graphs relevant in practice are easier to solve than the instances used in publications so far, and that they can be solved efficiently by a very simple class of algorithms: push-relabel with FIFO selection and global relabeling. Building on this, I present a parallel, synchronous implementation of this algorithm which prevails in the benchmarks and achieves good absolute speedups. Introducing nondeterminism and asynchrony as an optimization yields further runtime improvements in some cases.
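The sequential baseline named in the talk, push-relabel with FIFO vertex selection, can be sketched as follows (a compact illustration; the global relabeling heuristic and all parallelization are omitted):

```python
from collections import deque

def max_flow(n, edges, s, t):
    """FIFO push-relabel; edges are (u, v, capacity) triples."""
    cap = [dict() for _ in range(n)]           # residual capacities
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)                # reverse residual edge
    height, excess = [0] * n, [0] * n
    height[s] = n
    active = deque()
    for v in list(cap[s]):                     # saturate all source edges
        c = cap[s][v]
        if c > 0:
            cap[s][v] -= c
            cap[v][s] += c
            excess[v] += c
            if v not in (s, t):
                active.append(v)
    while active:                              # FIFO discharge of active vertices
        u = active.popleft()
        while excess[u] > 0:
            pushed = False
            for v in cap[u]:
                c = cap[u][v]
                if c > 0 and height[u] == height[v] + 1:   # admissible edge
                    d = min(excess[u], c)
                    cap[u][v] -= d
                    cap[v][u] += d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and excess[v] == d:
                        active.append(v)       # v just became active
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:                     # relabel u to enable a push
                height[u] = 1 + min(height[v] for v in cap[u] if cap[u][v] > 0)
    return excess[t]

flow = max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 2)], 0, 3)  # → 4
```

Global relabeling would periodically recompute the heights by a backward BFS from the sink, which is what makes the algorithm fast on the practical instances the talk describes.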

On the Distributed Computation of Fractional Connected Dominating Set Packings

  • Date: Fri, 31 Oct 2014, 14:00
  • Location: SR 301
  • Speaker: Matthias Wolf, Bachelorarbeitsvortrag
  • Inhalt: The goal of communication networks is to send messages from one node to others. One is often interested in maximizing the flow of information, which is limited by the connectivity of the network. When considering vertex connectivity, connected dominating sets (CDS) have proven helpful. Packing several connected dominating sets into the network yields a fractional CDS packing, which can be used to obtain an information flow of the size of the packing. In this thesis we present a distributed algorithm that, for a network with n nodes and vertex connectivity k, computes such a packing of size Ω(k/log n). It requires O(log² n · (D + √n log n log* n + kΔ)) rounds in the V-CONGEST model, where D denotes the diameter and Δ the maximum node degree. In networks whose vertex connectivity is not too large, our algorithm achieves a better running time than the fastest previously known algorithm by Censor-Hillel, Ghaffari and Kuhn [CHGK14a].

Visualizing a Departure Graph Using an ILP

  • Date: Tue, 28 Oct 2014, 14:00
  • Location: SR 301
  • Speaker: Anne Cathérine Jäger, Bachelorarbeitsvortrag
  • Inhalt: You may know this problem: you are traveling by train and miss your connecting train while changing. Now you would like to know which train to take next, when it departs, and where it goes. To solve this problem, Dibbelt et al. (2014) developed a so-called departure graph that contains alternative connections. This talk describes an approach to visualizing this departure graph using integer linear programming (ILP). A base model is presented as an integer linear program that represents the departure graph and its visualization. Various design decisions and quality criteria are incorporated into the base model and formulated as a minimization problem via an objective function. Building on the base model, certain model variations are introduced, and we discuss to what extent they can speed up the computation. Finally, the approach is evaluated and discussed using case studies.

Complex Network Backbones

  • Date: Thu, 23 Oct 2014, 13:15
  • Location: Room 010
  • Speaker: Gerd Lindner, Bachelorarbeitsvortrag
  • Inhalt: The goal of network reduction or graph sparsification is to reduce the size of networks while preserving those network properties that are of particular interest. We regard backbone calculation of a graph as edge filtering only, i.e., the set of nodes remains unaltered. We experimentally evaluate a variety of backbone algorithms, including Local Similarity as introduced by Satuluri et al. and the Simmelian Backbones presented by Nick et al. We propose a new backbone definition called Local Degree which is closely related to the approach of Satuluri et al. Our evaluation dataset consists of generated networks as well as social networks and other types of networks, and amongst other properties we analyze backbones regarding community structure, diameter, clustering coefficients, degree distribution and centrality measures. The backbone algorithms we considered can be reduced to simple edge filtering, an approach that is both modular and fast. We used it for an implementation in the high-performance network analysis toolkit NetworKit. We introduce the concept of quality measures for determining how well certain properties are preserved on one or multiple backbones, and we show how this approach can be used for convenient exploration of backbone qualities across the many possible combinations of backbone algorithms, networks and graph properties. We show that many graph properties can be preserved in backbones that retain only about 15% to 20% of the original set of edges. We conclude that Local Similarity performs excellently in terms of community structure, whereas our Local Degree exhibits desirable behaviour concerning the combination of small-world properties, local clustering coefficient and community structure.
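A Local Degree style filter can be sketched as follows; the exact parameterization is an assumption here: each node keeps the edges to its ceil(deg**alpha) highest-degree neighbors, and an edge survives if either endpoint keeps it.

```python
import math

def local_degree_backbone(adj, alpha=0.5):
    """Each node keeps edges to its ceil(deg**alpha) highest-degree neighbors."""
    kept = set()
    for u, nbrs in adj.items():
        quota = math.ceil(len(nbrs) ** alpha)
        ranked = sorted(nbrs, key=lambda v: -len(adj[v]))
        for v in ranked[:quota]:
            kept.add(frozenset((u, v)))   # an edge survives if either side keeps it
    return kept

# Hub 0 with a pendant vertex 1, joined to a clique {2, 3, 4, 5}.
adj = {
    0: [1, 2, 3, 4, 5],
    1: [0],
    2: [0, 3, 4, 5],
    3: [0, 2, 4, 5],
    4: [0, 2, 3, 5],
    5: [0, 2, 3, 4],
}
kept = local_degree_backbone(adj)
```

In this toy graph the filter keeps all hub edges (low-degree nodes always nominate their high-degree neighbor) but drops part of the peripheral clique, which is the bias toward hub connectivity that preserves small-world structure.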

presentation of research topic

  • Date: Mon, 20 Oct 2014, 13:00
  • Location: SR 301
  • Speaker: Yuya Hodokuma


  • Date: Fri, 17 Oct 2014, 14:00
  • Location: SR 301
  • Speaker: Thomas Bläsius, Benjamin Niedermann

PSE Final Presentation

  • Date: Wed, 15 Oct 2014, 15:45
  • Location: SR -101
  • Speaker: Martin Mahler, Maurice Rickfelder, Lukas Hennig, Patrick Firnkes, Dominik Eira Elias

Benchmarking of Topology-aware Mappings and a Bottom-Up Multilevel Mapping Algorithm

  • Date: Mon, 13 Oct 2014, 14:00
  • Location: SR -102
  • Speaker: Alexander Noe, Bachelorarbeitsvortrag
  • Inhalt: Graph algorithms can be parallelized to improve computation time. For this purpose, the application graph is partitioned into k equally sized blocks, each of which runs on a single processor. This partitioning is represented by the communication graph, in which vertices stand for application blocks and edges for communication between them. The purpose of topology-aware mapping is to map the communication graph onto a processor graph so that average dilation, maximum dilation and maximum congestion are as low as possible. The processor graph contains k processors and potentially additional non-processing nodes; its undirected edges stand for connections in the computing network. As minimizing congestion or dilation is NP-hard, heuristics have to be employed. This work describes some of the heuristics commonly used for topology-aware mapping and also introduces a new multilevel bottom-up approach to the problem, which represents the processor graph in a hierarchical fashion. For testing these heuristics, the benchmarking tool KarMa-Bench was created. KarMa-Bench creates a communication graph and a processor graph, employs a mapping heuristic to map the former onto the latter, and then evaluates the mapping in terms of quality and running time. This work also compares the multilevel bottom-up approach with established heuristics and shows which algorithm should be used under which conditions.
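The dilation measure can be made concrete: the dilation of a communication edge is the processor-graph distance between the processors its endpoints are mapped to. Illustrative code, not KarMa-Bench:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in the (unweighted) processor graph."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def dilations(comm_edges, proc_adj, mapping):
    """Per-edge dilation of a communication graph under a mapping."""
    dist = {p: bfs_dist(proc_adj, p) for p in proc_adj}
    return [dist[mapping[u]][mapping[v]] for u, v in comm_edges]

# Processor graph: path 0-1-2-3; communication graph: 4-cycle a-b-c-d.
proc = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
comm = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
mapping = {"a": 0, "b": 1, "c": 2, "d": 3}
d = dilations(comm, proc, mapping)   # → [1, 1, 1, 3], so maximum dilation is 3
```

Average and maximum dilation follow directly from this list; congestion would additionally require fixing a routing of each communication edge and counting traffic per processor link.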

Coloring Complex Networks

  • Date: Thu, 9 Oct 2014, 14:00
  • Location: SR 301
  • Speaker: Lukas Hübner, Bachelorarbeitsvortrag
  • Inhalt: This thesis gives an overview of existing graph coloring algorithms and their applicability to complex graphs representing social networks. Based on the observation that for small k the k-core of these graphs consists of very few vertices, two new algorithms to color social network graphs are proposed. Both try to color a k-core of the graph using an expensive heuristic and then color the remaining graph using a fast heuristic. The running time and number of colors used by the two proposed algorithms on social networks are then tested and compared to various other heuristics and a parallel version of LDF. LDF and its parallel version Parallel LDF performed very well: they were the fastest algorithms tested and used as many colors as Dsatur for most of the graphs.

Heuristics for Combined Location and Territory Planning with Fixed and Additional Freely Selectable Locations

  • Date: Mon, 6 Oct 2014, 15:00
  • Location: SR 301
  • Speaker: Tim Ulrich, Diplomarbeit

Spring 2014

Experimental study on the Wind Farm Substation Cable Installation Problem

  • Date: Thu, 18 Sep 2014, 14:00
  • Location: SR 301
  • Speaker: Christian Schmitz, Masterarbeitsvortrag
  • Inhalt: In order to connect wind farms to the power grid, their turbines first have to be connected to a substation via a cable installation. Given coordinates for a substation and for a set of turbines, the turbines' rating and a set of cable types, the objective of the Substation Cable Installation Problem (SCIP) is to find the cost-minimal cable installation such that all turbines are connected to the substation by an unsplittable flow and the capacity constraints imposed by the installation are met. In this talk we will look into exact and heuristic approaches for solving the SCIP, which we implemented and evaluated experimentally. Afterwards, we will present the results of the experiments.

Energy-Optimal Routing with Turn Costs for Electric Vehicles

  • Date: Tue, 26 Aug 2014, 11:00
  • Location: SR 301
  • Speaker: Alexandru Lesi, Bachelorarbeitsvortrag
  • Inhalt: This thesis deals with the integration of turn costs into route planning for electric vehicles. Suitable consumption values are derived from data of simulated car trips. Models are described that map the costs incurred by braking and acceleration onto turn costs. Two ways of extending graphs to store these values are shown, together with the corresponding methods for constructing such graphs. Furthermore, the necessary adaptations of the routing algorithms are presented. The resulting algorithms and graphs are then evaluated experimentally.

Bulk-Parallel Priority Queue in External Memory

  • Date: Wed, 6 Aug 2014, 11:30
  • Location: SR 236
  • Speaker: Thomas Keh, Bachelorarbeitsvortrag
  • Inhalt: We present a priority queue with support for external memory, with particular attention to exploiting parallel shared-memory architectures. It is the first parallel optimization of an external-memory priority queue. An additional interface for block-wise insertion speeds up longer sequences of operations of the same kind, as they occur especially in applications that process large amounts of data. The algorithm will be available as an extension to STXXL, a well-known C++ template library for very large data sets. For homogeneous, block-wise operations we obtain a clear improvement over the current STXXL external-memory priority queue. However, the high fixed cost of thread creation and the high overhead of cache synchronization in the global extract-min operation reveal the inherent limits to parallelizing priority queues.
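The benefit of a block-wise insertion interface can be illustrated with a toy in-memory analogue (an illustration, not the STXXL extension): a bulk insert sorts one block and keeps it as a run, and extract-min merges the run heads lazily instead of sifting each item individually.

```python
import heapq

class BulkPQ:
    """Toy priority queue: bulk inserts become sorted runs, merged on demand."""
    def __init__(self):
        self.runs = []          # heap of (head value, index into run, run) triples

    def bulk_push(self, items):
        run = sorted(items)     # one sort per block instead of one sift per item
        if run:
            heapq.heappush(self.runs, (run[0], 0, run))

    def extract_min(self):
        head, i, run = heapq.heappop(self.runs)
        if i + 1 < len(run):    # advance this run and re-register its new head
            heapq.heappush(self.runs, (run[i + 1], i + 1, run))
        return head

pq = BulkPQ()
pq.bulk_push([5, 1, 4])
pq.bulk_push([3, 2])
out = [pq.extract_min() for _ in range(5)]   # → [1, 2, 3, 4, 5]
```

In the external-memory setting the runs live on disk and the merge is what keeps I/O sequential; the bottleneck noted in the abstract is that extract-min remains a global, synchronization-heavy operation.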

Development of an Application-Oriented Packing Heuristic with Cavity Defragmentation

  • Date: Fri, 1 Aug 2014, 14:00
  • Location: SR 301
  • Speaker: Tilman Väth, Bachelorarbeitsvortrag
  • Inhalt: This thesis deals with the development of a three-dimensional packing heuristic for a special container loading problem, combining two different scientific papers. The basis is the first heuristic, which relies on a local search: it optimizes the solution quality solely by changing the loading sequences, since a simple, deterministic loading mechanism is used. Building on this, concepts from the second paper, formulated for a more general bin-packing problem, are applied; these aim above all to defragment empty spaces and thereby achieve a more compact loading.

Compact Indexes for Flexible Top-k Retrieval

  • Date: Wed, 30 Jul 2014, 11:30
  • Location: SR 236
  • Speaker: Simon Gog
  • Inhalt: We engineer a self-index based retrieval system capable of rank-safe evaluation of top-k queries. The framework generalizes the GREEDY approach of Culpepper et al. (ESA 2010) to handle multi-term queries, including phrase queries. We propose two techniques which significantly reduce the ranking time for a wide range of popular Information Retrieval (IR) relevance measures, such as TF×IDF and BM25. First, we reorder elements in the document array according to document weight. Second, we introduce the repetition array, which generalizes Sadakane’s (JDA 2007) document frequency structure to document subsets. Combining document and repetition array, we achieve attractive functionality-space trade-offs. We provide an extensive evaluation of our system on terabyte-sized IR collections. This is joint work with Matthias Petri (The University of Melbourne).

Non-Planar Square-Orthogonal Drawing with Few-Bend Edges

  • Date: Fri, 25 Jul 2014, 14:00
  • Location: SR 301
  • Speaker: Sheung-Hung Poon, National Tsing-Hua University, Taiwan
  • Inhalt: We investigate square-orthogonal drawings of non-planar graphs with vertices represented as unit grid squares. We present quadratic-time algorithms to construct the square-orthogonal drawings of 5-graphs, 6-graphs, and 8-graphs such that each edge in the drawing contains at most two, two, and three bends, respectively. In particular, the novel analysis method we use to split a vertex so as to build some specific propagation channels in our algorithms is an interesting technique and may be of independent interest. Moreover, we show that the decision problem of determining whether an 8-graph has a square-orthogonal drawing without edge-bends is NP-complete.

Online Route Planning - the Canadian Traveller Problem Revisited

  • Date: Tue, 15 Jul 2014, 14:00
  • Location: SR 301
  • Speaker: Chung-Shou Liao
  • Inhalt: This study revisits the Canadian Traveller Problem (CTP), which finds applications in dynamic navigation systems. Given a road network G=(V,E) with a source s and a destination t in V, a traveller knows the entire network in advance and wishes to travel as quickly as possible from s to t, but discovers online that some roads are blocked (e.g., by snow or accidents) once reaching them. The objective is to derive an adaptive strategy so that its competitive ratio, which compares the distance traversed with that of the static shortest s,t-path in hindsight, is minimized. The problem was introduced by Papadimitriou and Yannakakis in 1991, who proved that it is PSPACE-complete to obtain an algorithm with a bounded competitive ratio. Furthermore, if at most k roads can be blocked, the optimal competitive ratio for a deterministic online algorithm is 2k+1, while the only known randomized result is a lower bound of k+1. In this study, we show for the first time that a polynomial-time randomized algorithm can beat the best deterministic algorithms, surpassing the 2k+1 lower bound by an o(1) factor. Moreover, we prove that a randomized algorithm achieves an even better (1.7k+1)-competitive ratio in pseudo-polynomial time.
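
A natural baseline strategy for the CTP (not the randomized algorithm of the talk) is greedy replanning: follow the current shortest path and recompute whenever a blocked road is discovered. A minimal sketch:

```python
import heapq

def shortest_path(adj, src, dst, blocked):
    """Dijkstra on adj (dict: node -> {neighbor: length}), avoiding
    edges already known to be blocked."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if frozenset((u, v)) in blocked:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def greedy_replan(adj, s, t, actually_blocked):
    """Walk along the shortest path; on discovering a blocked edge,
    remember it and replan from the current position.
    Returns the total distance travelled (or None if t is unreachable)."""
    known, pos, travelled = set(), s, 0
    while pos != t:
        path = shortest_path(adj, pos, t, known)
        if path is None:
            return None
        for u, v in zip(path, path[1:]):
            if frozenset((u, v)) in actually_blocked:
                known.add(frozenset((u, v)))  # discovered on arrival at u
                break
            travelled += adj[u][v]
            pos = v
    return travelled
```

The competitive ratio then compares the returned distance with the offline shortest path that avoids all blocked edges from the start.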

Visualization of Planar Graphs with a Small Outer Face

  • Date: Fri, 11 Jul 2014, 14:30
  • Location: SR 301
  • Speaker: Adrian Hermann, Bachelorarbeitsvortrag
  • Inhalt: When visualizing planar graphs, a small outer face often has a negative effect on the drawing (poor angular resolution, large area requirements, many bends, ...). Gutwenger et al. have studied how to find, for a planar graph, an embedding with minimum face depth and a maximum outer face. But what if all faces (in every embedding) are small? This is the case, for example, for triangulated graphs. This talk is concerned with how to draw such graphs legibly. The approach is to delete edges temporarily in order to enlarge the outer face; one then hopes that, thanks to the large outer face, the graph can be drawn legibly. Afterwards the deleted edges are reinserted. A framework is presented that draws a graph using this method.

Experiments on Simmelian Backbones for Clustering Social Networks

  • Date: Fri, 11 Jul 2014, 14:00
  • Location: SR 301
  • Speaker: Michael Hamann, Masterarbeitsvortrag
  • Inhalt: When detecting communities in a social network, the very dense network structure poses a problem for some community detection algorithms. We explore ways to address this by applying the Louvain community detection algorithm on top of Simmelian Backbones, a concept introduced by Nick et al. that identifies strong connections in the network and thus allows filtering it. In our experimental evaluation we use a set of 100 Facebook networks from universities and colleges in the US. We compare the detected communities to the students' dormitories and graduation years using the Jaccard index and give some insights into the structure of these networks that can be identified by the Louvain and the Simmelian Backbone algorithms.
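
The Jaccard index used for the comparison is simply the overlap of two node sets relative to their union:

```python
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B|: 1.0 for identical sets, 0.0 for disjoint ones."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0
```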

Three Dimensional Dynamic Point Labelling

  • Date: Fri, 4 Jul 2014, 14:30
  • Location: SR 301
  • Speaker: Erik Prause, Masterarbeitsvortrag
  • Inhalt: Recently, much theoretical as well as practical research has been done on attaching labels to point features in dynamic 2D map applications. Investigations of dynamic 3D point labelling, however, have been rather heuristic and phenomenological and have not developed a theoretical groundwork. We take this circumstance as a reason to introduce a novel theoretical model for labelling three-dimensional point features under certain dynamics. In particular, we concentrate on zooming operations applied to a two-dimensional view port, onto which the data points are projected and their corresponding labels are attached using a discrete one-position labelling model. We aim for a theoretical foundation on whose basis we develop an approximation algorithm for the NP-hard active range optimization problem of placing as many labels as possible for a given zooming range. For this purpose, we introduce the novel concept of the radial stabbing tree to reduce the initial three-dimensional problem to several two-dimensional problems.

Optimization of Orthogonal Edge Routing for Biochips

  • Date: Fri, 4 Jul 2014, 14:00
  • Location: SR 301
  • Speaker: Ruben Gehring, Bachelorarbeitsvortrag
  • Inhalt: So-called biochips integrate one or more functions of a macroscopic laboratory onto a small plastic chip of only a few square centimeters. Since this is still a rather young field of research, chip designers currently have to design the layout of their biochips entirely by hand, which is very time-consuming, especially for more complex chips. The layout of such a biochip consists of functional components and orthogonal connections between them. The cost of a layout is largely determined by the number of crossings required when connecting the components. In this talk we consider the problem of finding, for given components with fixed placement, a routing with as few crossings as possible and, at the same time, connections as short as possible. We explain how the chip layout can be modeled as a graph, and how our algorithm then finds such an edge routing with orthogonal edges.

On layered drawings of planar graphs

  • Date: Fri, 27 Jun 2014, 14:00
  • Location: SR 301
  • Speaker: Sarah Luteropp, Bachelorarbeitsvortrag
  • Inhalt: If a graph has a planar y-monotone embedding in which each vertex is mapped to one of k parallel horizontal lines, it is called k-level planar. The set of vertices mapped to the same line is then called a layer. If the layering, i.e., the assignment of each vertex to a layer, is given, it can be tested in linear time whether there exists a planar y-monotone embedding of the graph with respect to the given layering. If no such embedding exists, the most commonly used method tries to minimize the number of crossings between the layers, which is known to be NP-complete. We introduce a new approach for embedding layered graphs. Given a planar graph and a layering of it, we want to perform as few modifications as possible to the layering in order to obtain a level planar embedding. We define several new optimization problems, provide some complexity results and give a heuristic for our newly defined problems.

Retrieval and Perfect Hashing using Fingerprinting

  • Date: Fri, 27 Jun 2014, 08:30
  • Location: SR 236
  • Speaker: Wei Zhou, rehearsal for SEA
  • Inhalt: Recent work has shown that perfect hashing and retrieval of data values associated with a key can be done in such a way that there is no need to store the keys and that only a few bits of additional space per element are needed. We present FiRe -- a new, very simple approach to such data structures. FiRe allows very fast construction and better cache efficiency. The main idea is to substitute keys by small fingerprints. Collisions between fingerprints are resolved by recursively handling those elements in an overflow data structure. FiRe is dynamizable, easily parallelizable and allows distributed implementation without communicating keys. Depending on implementation choices, queries may require close to a single access to a cache line, or the data structure needs as little as 2.58 bits of additional space per element.
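
The fingerprinting idea can be illustrated with a toy structure (illustrative only; the real FiRe resolves overflow by recursing into a smaller instance rather than by a key-indexed dict):

```python
import hashlib

def _hash(key, mod):
    """Deterministic hash of a string key into range(mod)."""
    h = hashlib.blake2b(key.encode(), digest_size=8).digest()
    return int.from_bytes(h, "big") % mod

class FingerprintRetrieval:
    """Map keys to values without storing the keys in the main table:
    each bucket keeps only a small fingerprint per element, and
    fingerprint collisions within a bucket are diverted to an overflow
    structure. Queries are only defined for keys that were inserted."""

    def __init__(self, nbuckets=64, fp_bits=8):
        self.buckets = [dict() for _ in range(nbuckets)]
        self.fp_mod = 1 << fp_bits
        self.overflow = {}

    def insert(self, key, value):
        bucket = self.buckets[_hash(key, len(self.buckets))]
        fp = _hash("fp:" + key, self.fp_mod)
        if fp in bucket:                 # fingerprint collision -> overflow
            self.overflow[key] = value
        else:
            bucket[fp] = value

    def query(self, key):
        if key in self.overflow:
            return self.overflow[key]
        bucket = self.buckets[_hash(key, len(self.buckets))]
        return bucket.get(_hash("fp:" + key, self.fp_mod))
```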

Tree-based Coarsening and Partitioning of Complex Networks

  • Date: Wed, 25 Jun 2014, 11:30
  • Location: SR 236
  • Speaker: Roland Glantz, SEA-Probevortrag
  • Inhalt: Many applications produce massive complex networks whose analysis would benefit from parallel processing. Parallel algorithms, in turn, often require a suitable network partition. For solving optimization tasks such as graph partitioning on large networks, multilevel methods are preferred in practice. Yet, complex networks pose challenges to established multilevel algorithms, in particular to their coarsening phase. One way to specify a (recursive) coarsening of a graph is to rate its edges and then contract the edges as prioritized by the rating. In this paper we (i) define weights for the edges of a network that express the edges' importance for connectivity, (ii) compute a minimum weight spanning tree $T^m$ with respect to these weights, and (iii) rate the network edges based on the conductance values of $T^m$'s fundamental cuts. To this end, we also (iv) develop the first optimal linear-time algorithm to compute the conductance values of all fundamental cuts of a given spanning tree. We integrate the new edge rating into a leading multilevel graph partitioner and equip the latter with a new greedy postprocessing for optimizing the maximum communication volume (MCV). Experiments on bipartitioning frequently used benchmark networks show that the postprocessing already reduces MCV by 11.3%. Our new edge rating further reduces MCV by 10.3% compared to the previously best rating with the postprocessing in place for both ratings. In total, with a modest increase in running time, our new approach reduces the MCV of complex network partitions by 20.4%.
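
Using the standard definition, the conductance of a cut (S, V∖S) is the cut weight divided by the smaller of the two side volumes. A plain-Python sketch of that formula (not the paper's linear-time algorithm over all fundamental cuts):

```python
def conductance(adj, S):
    """Conductance of the cut (S, V - S).

    adj: dict node -> {neighbor: weight} (undirected, both directions stored);
    S: one side of the cut. Returns w(cut) / min(vol(S), vol(V - S)),
    where vol(X) sums the weighted degrees of the nodes in X.
    """
    S = set(S)
    cut = sum(w for u in S for v, w in adj[u].items() if v not in S)
    vol_S = sum(w for u in S for w in adj[u].values())
    vol_rest = sum(w for u in adj if u not in S for w in adj[u].values())
    return cut / min(vol_S, vol_rest)
```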

Parallel Multiway LCP-Mergesort

  • Date: Wed, 11 Jun 2014, 11:30
  • Location: SR 236
  • Speaker: Andreas Eberle, Bachelorarbeitsvortrag
  • Inhalt: In this talk, multiway LCP-Merge is introduced and applied to create a fully parallel LCP-Mergesort as well as a NUMA-optimized pS^5. As an advancement of binary LCP-Mergesort, a multiway LCP-aware tournament tree is introduced and parallelized. For dynamic load balancing, one well-known and two new strategies for splitting merge work packages are utilized. Besides the introduction of fully parallel multiway LCP-Mergesort, further focus is put on NUMA architectures: parallel Super Scalar String Sample Sort (pS^5) is adapted to the special properties of these systems by utilizing the parallel multiway LCP-Merge. Moreover, this yields an efficient and generic approach for parallelizing arbitrary sequential string sorting algorithms and making parallel algorithms NUMA-aware. Comprehensive experiments on two current NUMA platforms are then reported and discussed. The experiments show the good scalability of the introduced algorithms and especially the great improvements of NUMA-aware pS^5 with real-world input sets on modern machines.
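
The core idea of an LCP-aware merge can be sketched for the binary, sequential case (the talk's contribution is the multiway, parallel version): track the longest common prefix of each input head with the last emitted string, so that the head with the strictly larger LCP wins without any character comparison.

```python
def lcp(a, b):
    """Length of the longest common prefix of two strings."""
    n = min(len(a), len(b))
    i = 0
    while i < n and a[i] == b[i]:
        i += 1
    return i

def lcp_merge(A, B):
    """Merge two sorted string lists. ha/hb hold the LCP of each head
    with the last output string; the head with the strictly larger LCP
    is smaller and wins with no character comparison at all. (A real
    LCP-mergesort keeps precomputed LCP arrays instead of calling lcp().)"""
    out, i, j, ha, hb = [], 0, 0, 0, 0
    while i < len(A) and j < len(B):
        if ha > hb:
            out.append(A[i]); i += 1
            ha = lcp(out[-1], A[i]) if i < len(A) else 0
        elif hb > ha:
            out.append(B[j]); j += 1
            hb = lcp(out[-1], B[j]) if j < len(B) else 0
        else:  # equal LCPs: compare, skipping the first ha known-equal chars
            k = ha + lcp(A[i][ha:], B[j][ha:])
            if A[i][k:] <= B[j][k:]:
                out.append(A[i]); i += 1
                hb = k  # the loser now shares exactly k chars with out[-1]
                ha = lcp(out[-1], A[i]) if i < len(A) else 0
            else:
                out.append(B[j]); j += 1
                ha = k
                hb = lcp(out[-1], B[j]) if j < len(B) else 0
    return out + A[i:] + B[j:]
```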

Distributed Node Coloring Algorithms for Directed Graphs

  • Date: Fri, 6 Jun 2014, 14:00
  • Location: SR 301
  • Speaker: Manuel Schweigert, Bachelorarbeitsvortrag
  • Inhalt: Distributed node coloring algorithms are at the heart of creating efficient, collision-free communication schemes in ad hoc wireless communication networks such as sensor networks. As such, there is great practical importance in understanding these algorithms and in creating models which are as close to reality as possible, such as the Signal to Interference and Noise Ratio (SINR) communication model. In this thesis, we analyze the implications of different transmission ranges of nodes on coloring algorithms, resulting in an underlying directed graph problem. We show that a chain of unidirectional communication links affects the running times of the algorithms, but otherwise does not prevent a valid coloring from being computed. We present randomized algorithms roughly based on Luby’s algorithm [Lub86], which produce (Δ+1)-colorings in O(Δ + l log n) and (2Δ+1)-colorings in O(l log n) broadcast rounds. We also present observations from our simulations that l depends on the deviation in transmission ranges.
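
A round-based sketch in the spirit of Luby-style coloring for the symmetric case (the thesis's SINR model and unidirectional links are not modeled here; rounds are simulated sequentially):

```python
import random

def randomized_coloring(adj, seed=0):
    """Compute a (Delta+1)-coloring in synchronous rounds: every
    uncolored node proposes a random color not used by an already
    colored neighbor, and keeps it if no neighbor made the same
    proposal in this round. adj: dict node -> list of neighbors
    (undirected)."""
    rng = random.Random(seed)
    delta = max(len(nbrs) for nbrs in adj.values())
    color, uncolored = {}, set(adj)
    while uncolored:
        proposal = {}
        for u in uncolored:
            taken = {color[v] for v in adj[u] if v in color}
            proposal[u] = rng.choice([c for c in range(delta + 1)
                                      if c not in taken])
        for u in list(uncolored):
            if all(proposal.get(v) != proposal[u] for v in adj[u]):
                color[u] = proposal[u]
                uncolored.discard(u)
    return color
```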

Analysis of human tissue-specific protein-protein interaction networks

  • Date: Tue, 27 May 2014, 14:00
  • Location: SR 301
  • Speaker: Patrick Flick, Masterarbeitsvortrag
  • Inhalt: Proteins are the core machinery of all living cells, and protein interactions determine the inner workings of life itself. Insights into the nature of these interactions are important for learning about how and why cells work. The interactions between all proteins in a cell compose a so-called protein-protein interaction (PPI) network, in the form of a graph. Not all proteins are present in all cell and tissue types, hence protein interactions are restricted to cell and tissue types where both interacting proteins exist. These tissue-dependent interactions form tissue-specific PPI (TSPPI) networks. In this thesis, we construct and analyze TSPPI networks from different data sources. Our goal is to gain insights into the structure of interactions as well as into the properties of specific groups of proteins inside the TSPPI networks. To that end, we implement an analysis pipeline and develop efficient analysis algorithms, which operate on our graph representation for TSPPI networks. Moreover, we study the basic properties of TSPPI networks and investigate properties of certain classes of proteins. Then, we provide a method to identify proteins which gain in importance through cellular specialization. Furthermore, we re-evaluate prior research results on a large set of TSPPIs and demonstrate that some previous conclusions have to be reconsidered. Finally, we employ clustering algorithms with the objective of identifying tissue-specific functional modules within TSPPIs. In addition to using available clustering methods, we pursue two further approaches.
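
The basic construction of a TSPPI described above is straightforward: an interaction survives only if both endpoints are expressed in the tissue. Schematically:

```python
def tissue_subgraph(ppi_edges, expressed):
    """Restrict a global PPI edge list to one tissue: keep an
    interaction iff both interacting proteins are expressed there."""
    expressed = set(expressed)
    return [(u, v) for u, v in ppi_edges
            if u in expressed and v in expressed]
```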

Approximation of Shortest Flat Paths on Terrains

  • Date: Fri, 16 May 2014, 14:00
  • Location: SR 301
  • Speaker: Leonie Sautter, Vortrag Studienarbeit
  • Inhalt: A path on a polygonal terrain is called flat if it does not ascend or descend too steeply, that is, if its slope stays below a given bound at every point. For the problem of finding shortest flat paths on terrains, it is not yet known whether it is NP-hard. In this talk I will present a (1 + epsilon)-approximation algorithm for the problem and discuss the question of NP-completeness.
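
The flatness condition on a straight path segment between two terrain points can be checked directly (a hypothetical helper for illustration; the talk is about finding shortest paths subject to this constraint):

```python
import math

def is_flat(p, q, max_slope):
    """p, q: (x, y, z) terrain points. The segment p-q is flat if the
    absolute height change per unit of horizontal distance stays
    within max_slope."""
    dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    horiz = math.hypot(dx, dy)
    return abs(dz) <= max_slope * horiz
```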

Engineering Parallel Algorithms

  • Date: Mon, 28 Apr 2014, 14:00
  • Location: SR -120
  • Speaker: Prof. Dr. Peter Sanders, Probevortrag Parallel 2014
  • Inhalt: Algorithms are at the heart of every nontrivial computer application. With the ubiquity of parallel processing this means that we need efficient parallel algorithms for all performance critical applications. The talk presents algorithm engineering as an approach for developing such methods: Modelling, design, analysis, implementation and experiments work together and lead to high performance applications and widely usable software libraries. Parallel sorting algorithms are used as an example throughout the talk. Further examples include parallel graph algorithms.

NetworKit: An Interactive Tool Suite for High-Performance Network Analysis

  • Date: Fri, 25 Apr 2014, 14:00
  • Location: SR 301
  • Speaker: Christian Staudt
  • Inhalt: We introduce NetworKit, an open-source software package for high-performance analysis of large complex networks. Complex networks are equally attractive and challenging targets for data mining, and novel algorithmic solutions as well as parallelization are required to handle data sets containing billions of connections. Our goal for NetworKit is to package results of our algorithm engineering efforts and put them into the hands of domain experts. NetworKit is a hybrid combining the performance of kernels written in C++ with a convenient interactive interface written in Python. The package supports general multicore platforms and scales from notebooks to workstations to servers. In comparison with related software for network analysis, we propose NetworKit as the package which satisfies all three important criteria: high performance (partly enabled by parallelism), interactive workflows and integration into an ecosystem of tested tools for data analysis and scientific computation. The current feature set includes standard network analytics kernels such as degree distribution, connected components, clustering coefficients, community detection, k-core decomposition, degree assortativity and centrality. Applying these to massive networks is enabled by efficient algorithms, parallelism or approximation. Furthermore, the package comes with a collection of graph generators and has basic support for visualization. With the current release, we present and open up the project to a community of both algorithm engineers and domain experts.
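
As an example of the simplest analytics kernel mentioned, a degree distribution is just a histogram over node degrees (a plain-Python stand-in for illustration, not the NetworKit API):

```python
from collections import Counter

def degree_distribution(adj):
    """adj: dict node -> iterable of neighbors.
    Returns a Counter mapping degree -> number of nodes with that degree."""
    return Counter(len(list(nbrs)) for nbrs in adj.values())
```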