Detecting communities, and labeling nodes according to which community they belong to, is a ubiquitous problem in the study of social and biological networks. One of the best algorithms for this task (and many others) is Belief Propagation, which updates probability distributions on nodes and edges until they reach a fixed point. In addition to being of practical use, these algorithms can be studied analytically, revealing phase transitions in the ability of any algorithm to solve this problem. Specifically, there is a detectability transition in the stochastic block model, below which no algorithm can label nodes better than chance.
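To make the message-passing concrete, here is a minimal sketch of Belief Propagation for a two-group assortative model. The toy graph, the parameters, and all the names are my own illustrative assumptions, not the talk's implementation. Each directed message is one node's belief about its own group, computed while ignoring the recipient's influence; non-edges contribute factors too, which is what prevents the trivial fixed point where every node joins the same group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: 8 nodes, two planted groups {0,1,2,3} and
# {4,5,6,7}, with edges mostly inside the groups and one cross edge.
edges = {(0, 1), (0, 2), (0, 3), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 7), (6, 7),
         (3, 4)}
n, q = 8, 2
p_in, p_out = 0.9, 0.1
# Pairwise factors: probability of an edge given same/different groups,
# and its complement for non-edges.
f_edge = np.array([[p_in, p_out], [p_out, p_in]])
f_none = 1.0 - f_edge

def factor(i, j):
    return f_edge if (min(i, j), max(i, j)) in edges else f_none

# msg[(i, j)][t]: node i's belief that it is in group t, excluding j.
msg = {(i, j): rng.dirichlet(np.ones(q))
       for i in range(n) for j in range(n) if i != j}

for _ in range(200):  # iterate the update rule toward a fixed point
    new = {}
    for (i, j) in msg:
        m = np.ones(q)
        for k in range(n):
            if k in (i, j):
                continue
            m *= factor(i, k) @ msg[(k, i)]
        m /= m.sum()
        new[(i, j)] = 0.5 * msg[(i, j)] + 0.5 * m  # damping for stability
    msg = new

# One-node marginals combine all incoming messages.
marginals = np.zeros((n, q))
for i in range(n):
    m = np.ones(q)
    for k in range(n):
        if k != i:
            m *= factor(i, k) @ msg[(k, i)]
    marginals[i] = m / m.sum()

labels = marginals.argmax(axis=1)
print(labels)
```

With this much structure the messages polarize and the argmax of the marginals recovers the planted partition (up to swapping the two group labels, since the model is symmetric under relabeling).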
I'll explain this transition, and give an accessible introduction to Belief Propagation and the analogy with the cavity method of statistical physics. We'll see that the consensus of many good solutions is a better labeling than the "best" solution — a property that holds for many real-world optimization problems. While many algorithms overfit, and find "communities" even in random graphs where none exist, our method lets us focus on statistically significant communities. In physical terms, we focus on the free energy rather than the ground state energy.
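A small demonstration of the overfitting point (my own toy experiment, not from the talk): a brute-force search over bipartitions finds a split with clearly positive modularity even on an Erdős–Rényi random graph, which has no community structure by construction.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# An Erdos-Renyi random graph: every pair is an edge independently
# with probability p, so there are no planted communities.
n, p = 12, 0.3
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T
deg = A.sum(axis=1)
two_m = deg.sum()  # twice the number of edges

def modularity(labels):
    # Standard Newman modularity: observed minus expected within-group
    # edge fraction under the configuration model.
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(deg, deg) / two_m) * same).sum() / two_m

# Exhaustively search all 2^n two-group labelings.
best = max(modularity(np.array(bits)) for bits in product([0, 1], repeat=n))
print(round(best, 3))
```

The maximizing partition scores well above zero — "communities" found purely by overfitting to noise, which is why a method that averages over many good solutions, rather than chasing the single best one, is safer.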
I'll then turn to dynamic networks, where nodes change which community they belong to at a certain rate. Analogous to the static case, there is a precise relationship between this rate and the strength of the community structure that tells us whether we can detect communities or not; and there is a Belief Propagation algorithm that passes messages in both space and time, succeeding precisely up to this point. This approach does significantly better than treating the network as a series of static snapshots, or simply integrating the network over time.
This is joint work with Aaron Clauset, Aurelien Decelle, Amir Ghasemian, Florent Krzakala, Elchanan Mossel, Joe Neeman, Leto Peel, Allan Sly, Lenka Zdeborova, and Pan Zhang.