Although most graph theory textbooks focus on graphs $G=(V, E)$ where $V$ and $E$ remain unchanged, most real-world applications of graph algorithms run on graphs whose topology changes: edges can be added or removed, and vertices can enter and leave. So, how can we define a dynamic graph?
Model: We define a dynamic graph $\mathcal{G}$ to be a sequence of static graphs $G_1, G_2, \dots$ where $G_i=(V, E_i)$ for a fixed vertex set $V$. The sequence $\mathcal{G}$ is the complete description of how the dynamic graph evolves, round by round: in round $i$, the topology is given by $G_i$.
We assume that computation occurs in synchronous distributed rounds: in round $r$, each node first performs some computation based on its internal state, then the graph topology changes to $G_r$, and finally each node sends a message of size $O(\log{n})$ to each of its neighbors in $G_r$.
The way that $\mathcal{G}$ evolves can be controlled by an adversary. In the offline setting, the adversary decides the sequence of graphs $G_1, G_2,...$ before the first round begins. This is called an oblivious adversary. In the online setting, for a given round $r$, the adversary decides $G_{r}$ based on the internal states of all nodes in round $r$. This is an adaptive adversary.
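To make the model concrete, here is a minimal sketch (in Python) of the round structure and the two kinds of adversary, assuming graphs are given as edge sets over a fixed vertex set $\{0,\dots,n-1\}$. The `Node` class, the adversary signatures, and the toy adaptive strategy are my own illustrative choices, not part of the model itself.

```python
# A minimal sketch of the synchronous round model, assuming graphs are edge sets
# over a fixed vertex set V = {0, ..., n-1}. The Node class and adversary
# signatures are illustrative only.

class Node:
    def __init__(self, uid):
        self.uid = uid
        self.state = {"seen": {uid}}   # internal state (visible to an adaptive adversary)

    def compute(self):
        """Local computation at the start of the round (placeholder)."""
        pass

    def message(self):
        """The O(log n)-bit message sent to all current neighbours (placeholder)."""
        return self.uid

    def receive(self, msgs):
        """Process the messages received from neighbours in G_r (placeholder)."""
        self.state["seen"].update(msgs)


def run(nodes, adversary, rounds):
    """Simulate synchronous rounds: compute, then the topology changes to G_r,
    then messages are exchanged over G_r.

    `adversary(r, nodes)` returns the edge set E_r of G_r. An oblivious adversary
    ignores `nodes`; an adaptive one may inspect their internal states.
    """
    for r in range(1, rounds + 1):
        for v in nodes:                      # 1. local computation
            v.compute()
        edges = adversary(r, nodes)          # 2. adversary fixes G_r
        for (u, w) in edges:                 # 3. message exchange over G_r
            nodes[u].receive([nodes[w].message()])
            nodes[w].receive([nodes[u].message()])


def oblivious_adversary(schedule):
    """The whole sequence G_1, G_2, ... is fixed before round 1."""
    return lambda r, nodes: schedule[r - 1]


def adaptive_adversary(r, nodes):
    """G_r may depend on the nodes' states in round r: here, a star centred
    on the node that currently knows the fewest identifiers."""
    centre = min(nodes, key=lambda v: len(v.state["seen"])).uid
    return {(centre, v.uid) for v in nodes if v.uid != centre}
```

For instance, `run([Node(i) for i in range(5)], adaptive_adversary, 10)` plays ten rounds against the adaptive star adversary.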
Of course, we can restrict the class of dynamic graphs that our algorithms run on. As an example, we introduce the idea of $T$-interval-connectedness.
Property 1: A dynamic graph $\mathcal{G}$ is $T$-interval-connected if, for every round $t$, the graphs $G_t, G_{t+1},\dots, G_{t+T}$ share a connected spanning subgraph that is unchanged throughout the rounds $t\leq r\leq t+T$.
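To make the definition concrete, here is a small sketch of how one might check $T$-interval-connectedness, assuming the dynamic graph is given as a finite list of edge sets over $\{0,\dots,n-1\}$; the function names are my own. The key observation is that a stable connected spanning subgraph for a window exists if and only if the intersection of the window's edge sets is connected.

```python
def is_connected(n, edges):
    """DFS connectivity check on the static graph (V = {0,...,n-1}, edges)."""
    adj = {v: [] for v in range(n)}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n


def is_T_interval_connected(n, graphs, T):
    """Check every window G_t, ..., G_{t+T} shares a stable connected spanning subgraph.

    Any edge of such a stable subgraph must appear in every graph of the window, so it
    suffices to check that the intersection of the window's edge sets is connected.
    """
    normalised = [{tuple(sorted(e)) for e in G} for G in graphs]
    for t in range(len(graphs) - T):
        stable = set.intersection(*normalised[t:t + T + 1])
        if not is_connected(n, stable):
            return False
    return True
```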
We can define the class of dynamic graphs by whatever properties we like, and this may lead to more specialized dynamic algorithms for that class. For example, another class of dynamic graphs might require every $G_i$ to be $d$-regular.
Some problems on dynamic graphs have strong lower bounds implying poor performance. However, this is not what we observe in practice. How can we explain this? Perhaps the instances that achieve the lower bound are extremely rare, so that if we ignored them the algorithm would actually be "efficient".
For example, the hitting time of a pair of vertices $u, v$ is the expected number of rounds a random walk takes to go from $u$ to $v$. On dynamic graphs, the hitting time can be lower bounded by $\Omega (2^n)$, using a construction based on the star graph with a self-loop added at each vertex. However, is this bound really meaningful in practice? We want to explore such questions by looking at smoothed lower bounds.
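As a sanity check on what "hitting time on a dynamic graph" means operationally, here is a quick Monte Carlo sketch: in each round the walker moves to a uniformly random neighbour in the current graph $G_r$ (staying put if it has none). The callable `graphs(r)` supplying the edge set of $G_r$, and the cutoff and trial counts, are assumptions for illustration.

```python
import random

def hitting_time(graphs, u, v, max_rounds=10**6):
    """Rounds taken by one random walk to reach v from u, walking on G_1, G_2, ...

    `graphs(r)` is assumed to return the edge set of G_r; the walk is cut off
    at max_rounds, which matters precisely for the exponential worst cases.
    """
    pos = u
    for r in range(1, max_rounds + 1):
        if pos == v:
            return r - 1
        nbrs = []
        for a, b in graphs(r):
            if a == pos:
                nbrs.append(b)
            elif b == pos:
                nbrs.append(a)
        if nbrs:
            pos = random.choice(nbrs)
    return max_rounds


def estimate_hitting_time(graphs, u, v, trials=100):
    """Average the walk length over independent trials to estimate the hitting time."""
    return sum(hitting_time(graphs, u, v) for _ in range(trials)) / trials
```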
Def 1: Given a graph $G=(V, E)$, a $k$-smoothed graph of $G$ is any graph in $\{G' \mid G' \text{ has edit distance at most } k \text{ from } G\}$, where the edit distance between $G$ and $G'$ is the minimum number of edge additions/removals needed to turn $G$ into $G'$. We can thus $k$-smooth a dynamic graph $H$ by $k$-smoothing each of its graphs $G_i\in H$.
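One concrete way to realise this is sketched below, under the assumption that smoothing means toggling $k$ uniformly random vertex pairs; the precise smoothing distribution (e.g. uniform over all graphs within edit distance $k$, or restricted to an allowed graph class) varies between papers, so treat this as illustrative only.

```python
import random
from itertools import combinations

def k_smooth(n, edges, k):
    """Toggle k distinct vertex pairs chosen uniformly at random: add the edge if
    absent, remove it if present. The result has edit distance at most k from G."""
    smoothed = {tuple(sorted(e)) for e in edges}
    for e in random.sample(list(combinations(range(n), 2)), k):
        if e in smoothed:
            smoothed.remove(e)
        else:
            smoothed.add(e)
    return smoothed


def k_smooth_dynamic(n, graphs, k):
    """k-smooth a dynamic graph by independently k-smoothing each G_i."""
    return [k_smooth(n, G, k) for G in graphs]
```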
The idea behind smoothing is that by randomly perturbing our graph by a small amount, we "ruin" the specific instances that achieve the worst-case runtime, and we can then analyse the expected running time of our algorithms on these "ruined" worst-case graphs. If even small perturbations of the worst-case graphs weaken the lower bounds significantly, then those instances must be very artificial and unlikely to occur in practice.
Smoothed analysis has been applied to the hitting time on dynamic graphs, giving a $k$-smoothed lower bound of $\Omega (n^{5/2}/(\sqrt{k}\log{n}))$.
Over the following $20$ weeks or so, I will attempt to explore other problems on dynamic graphs whose worst-case lower bounds differ vastly from their $k$-smoothed lower bounds. Many different smoothing models exist, so I will also explore those. Finally, I may specialize the dynamic graph class depending on the problem being studied.