1 Introduction
Individual decision-making in many domains is driven by personal as well as social factors. When one decides how much time, money, or effort to exert on a task, the behaviors of one's friends or neighbors can be powerful influencing factors. We can view these settings as games in which agents in a network each try to maximize their individual utility as a function of their "standalone value" for action as well as their neighbors' actions. The actions of agents who are "central" in a network can have large ripple effects. Identifying and understanding the role of central agents is of high importance for tasks ranging from microfinance [7] and vaccinations [6] to tracking the spread of information throughout a community [8]. We view our work as providing theoretical support for heuristic approaches to intervention in these settings.
A model for such a setting is studied in recent work by Galeotti, Golub, and Goyal [20], where they ask the natural question of how a third party should "intervene" in the network to maximally improve social welfare at the equilibrium of the game. Interventions are modeled by assuming that the third party can pay some cost to adjust the standalone value parameter of any agent, and must decide how to allocate a fixed budget. This may be interpreted as subjecting targeted agents to advertising, monetary incentives, or some other form of encouragement. For their model, they provide a general framework for computing the optimal intervention subject to any budget constraint, which can be expressed in terms of the spectral decomposition of the graph. For large budgets, the optimal intervention is approximately proportional to the first eigenvector of the adjacency matrix of the graph, a common measure of network centrality.
While this method is optimal, and computable in polynomial time when the adjacency matrix is known, in practice we can rarely hope to map all connections in a large network. For physical networks, edges representing personal connections may be far harder to map than the set of agents is to identify, and for large digital networks we may be bound by computational or data access constraints. However, real-world networks are often well-behaved in the sense that their structure can be approximately described by a simple generating process. If we cannot afford to map an entire network, is optimal targeted intervention feasible at all? A natural target would be to implement interventions which are competitive with the optimal intervention, i.e. obtain almost the same increase in social welfare, without access to the full adjacency matrix. Under what conditions can we use knowledge of the distribution a graph is drawn from to compute a near-optimal intervention without observing the realization of the graph? Without knowledge of the distribution, how much information about the graph is necessary to find such an intervention? Can we ever reach near-optimality with no information about the graph? These are the questions we address.
1.1 Contributions
Our main result shows that for random graphs with independent edges, the first eigenvector of the "expected adjacency matrix", whose entries give the probability of each edge being included in the graph, constitutes a near-optimal intervention simultaneously for almost all generated graphs, when the budget is large enough and the expected matrix satisfies basic spectral conditions. We further explore graphs with given expected degrees, Erdős–Rényi graphs, power law graphs, and stochastic block model graphs as special cases for which our main result holds. In these cases, the first eigenvector of the expected matrix can often be characterized by a simple expression in the parameters of the graph distribution.
Yet in general, this approach still assumes a fair amount of knowledge about the distribution, and that we can map agents in the network to their corresponding roles in the distribution. We give several sampling-based methods for approximating the first eigenvector of a graph in each of the aforementioned special cases, which do not assume knowledge of agent identities or distribution parameters, other than group membership in the stochastic block model. These methods assume different query models for accessing information about the realized graph, such as the ability to query the existence of an edge or to observe a random neighbor of an agent. Using the fact that the graph was drawn from some distribution, we can reconstruct an approximation of the first eigenvector more efficiently than we could reconstruct the full matrix. The lower-information settings we consider can be viewed as assumptions about qualitative domain-specific knowledge, such as a degree pattern which approximately follows an (unknown) power law distribution, or the existence of many tight-knit disjoint communities.
We evaluate our results experimentally on both synthetic and real-world networks for a range of parameter regimes. We find that our heuristic interventions can perform quite well compared to the optimal intervention, even at modest budget and network sizes. These results further illustrate the comparative efficacies of interventions requiring varying degrees of graph information under different values for distribution parameters, budget sizes, and degrees of network effects.
On the whole, our results suggest that explicit mapping of the connections in a network is unnecessary to implement near-optimal targeted interventions in strategic settings, and that distributional knowledge or limited queries will often suffice.
1.2 Related Work
Recent work by Akbarpour, Malladi, and Saberi [3] has focused on the challenge of overcoming network data barriers in targeted interventions under a diffusion model of social influence. In this setting, for Erdős–Rényi and power law random graphs, they derive bounds on the additional number of "seeds" needed to match optimal targeting when network information is limited. A version of this problem where network information can be purchased is studied in [19]. A similar model is employed by Candogan, Bimpikis, and Ozdaglar [11], who study optimal pricing strategies to maximize the profit of a monopolist selling a service to consumers in a social network with positive local network effects; notions of centrality play a key role in their analysis. Similar targeting strategies are considered in [18], where the planner tries to maximize aggregate action in a network with complementarities. [23] studies the efficacy of blind interventions in a pricing game for the special case of Erdős–Rényi graphs. In [26], targeted interventions are also studied for "linear-quadratic games", quite similar to those from [20], in the setting of infinite-population graphons, where a concentration result is given for near-optimal interventions.
Our results can be viewed as qualitatively similar findings in the model of [20]. While they showed that exact optimal interventions can be constructed on a graph with full information, we propose that limited local information is enough to construct an approximately optimal intervention for many distributions of random graphs. It is argued in [10] that collecting data of this kind (aggregate relational data) is easier in real networks than obtaining full network information. We make use of concentration inequalities for the spectra of random adjacency matrices; there is a great deal of work studying spectral properties of random graphs (see e.g. [13, 15, 14, 2, 16]). Particularly relevant to us is [16], which characterizes the asymptotic distributions of various centrality measures for random graphs. There is further relevant literature studying centrality in graphons, see e.g. [4]. Of relevance to our sampling techniques, a method for estimating eigenvector centrality via sampling is given in [27], and the task of finding a "representative" sample of a graph is discussed in [25].
2 Model and Preliminary Results
Here we introduce the "linear-quadratic" network game setting from [20], also studied in e.g. [26], which captures the dynamics of personal and social motivations for action in which we are interested.
2.1 Setting
Agents are playing a game on an undirected graph with adjacency matrix $G$. Each agent $i$ takes an action $a_i$ and obtains individual utility given by:
$$u_i(a) = b_i a_i - \tfrac{1}{2} a_i^2 + \beta a_i \sum_j G_{ij} a_j.$$
Here, $b_i$ represents agent $i$'s "standalone marginal value" for action. The parameter $\beta$ controls the effect of strategic dynamics, where a positive sign promotes complementary behavior with neighbors and a negative sign promotes acting in opposition to one's neighbors. In this paper we focus on the case where each value $b_i$ is in $[0, 1]$ and $\beta > 0$. The assumption that $\beta > 0$ corresponds to the case where agents' actions are complementary, meaning that an increase in action by an agent will increase their neighbors' propensities for action.^{1} (When $\beta < 0$, neighbors' actions act as substitutes, and one obtains less utility when neighbors increase levels of action. In that case, the optimal intervention for large budgets is approximated by the last eigenvector of the graph, which measures its "bipartiteness".) We assume that $a_i \geq 0$ for each agent as well.
The matrix $(I - \beta G)$ can be used to determine the best response for each agent given their opponents' actions. The best response vector, given current actions $a$, can be computed as:
$$\mathrm{BR}(a) = b + \beta G a.$$
Upon solving for $a = b + \beta G a$, we get that $a^* = (I - \beta G)^{-1} b$, giving us the Nash equilibrium for the game, as all agents are simultaneously best responding to each other. We show in Appendix 0.D that when agents begin with null action values, repeated best responses will converge to this equilibrium, and further that the new equilibrium is likewise reached after intervention.
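To make the equilibrium concrete, here is a minimal numerical sketch (the function name and toy inputs are ours, not the paper's) showing that iterated best responses converge to the closed-form equilibrium $a^* = (I - \beta G)^{-1} b$:

```python
import numpy as np

def equilibrium_by_best_response(G, b, beta, iters=200):
    """Iterate the best-response map a <- b + beta * G @ a from a = 0.

    Converges to the Nash equilibrium a* = (I - beta*G)^{-1} b whenever
    the spectral radius of beta*G is below 1.
    """
    a = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        a = b + beta * (G @ a)
    return a

# Toy example: a path on three vertices.
G = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
b = np.array([1., 1., 1.])
beta = 0.2  # spectral radius of beta*G is 0.2*sqrt(2) < 1

a_iter = equilibrium_by_best_response(G, b, beta)
a_closed = np.linalg.solve(np.eye(3) - beta * G, b)
```

With $\beta\lambda_1(G) < 1$ the iteration is a contraction, so `a_iter` and `a_closed` agree to machine precision.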
Our results will apply to cases where all eigenvalues of $(I - \beta G)$ are almost surely positive, ensuring invertibility.^{2} (If $\beta \lambda_1(G) \geq 1$ and $(I - \beta G)$ is not invertible, equilibrium actions will be infinite for all agents in some component of the graph.) The social welfare of the game can be computed as a function of the equilibrium actions:
$$W(b) = \sum_i u_i(a^*) = \tfrac{1}{2} \|a^*\|^2 = \tfrac{1}{2} \|(I - \beta G)^{-1} b\|^2.$$
Given the above assumptions, equilibrium actions will always be nonnegative.
2.2 Targeted Intervention
In this game, a central authority has the ability to modify agents' standalone marginal values from $b_i$ to $\hat{b}_i$ by paying a cost of $(\hat{b}_i - b_i)^2$, and their goal is to maximize social welfare subject to a budget constraint $C$:
$$\max_{\hat{b}} \; W(\hat{b}) \quad \text{subject to} \quad \sum_i (\hat{b}_i - b_i)^2 \leq C.$$
Here, an intervention is a vector $y = \hat{b} - b$ such that $\|y\|^2 \leq C$. Let $W(b + y)$ denote the social welfare at equilibrium following an intervention $y$.^{3} (Unless specified otherwise, $\|\cdot\|$ refers to the $\ell_2$ norm. When the argument is a matrix, this denotes the associated spectral norm.) It is shown in [20] that the optimal budget-constrained intervention for any $C$ can be computed using the eigenvectors of $G$, and that in the large-budget limit as $C$ tends to infinity, the optimal intervention approaches $\sqrt{C} \cdot u_1(G)$. Throughout, we assume $u_k(M)$ is the unit norm eigenvector associated with $\lambda_k(M)$, the $k$th largest eigenvalue of a matrix $M$. We also define $\alpha_k = (1 - \beta \lambda_k(G))^{-2}$, which is the square of the corresponding eigenvalue of $(I - \beta G)^{-1}$. Note that we do not consider eigenvalues to be ordered by absolute value; this is done to preserve the ordering correspondence between eigenvalues of $G$ and $(I - \beta G)^{-1}$. $G$ may have negative eigenvalues, but all eigenvalues of $(I - \beta G)^{-1}$ will be positive when $\beta \lambda_1(G) < 1$, as we will ensure throughout.
The key result we use from [20] states that when the spectral gap $\lambda_1(G) - \lambda_2(G)$ is positive, as the budget $C$ increases, the cosine similarity between the optimal intervention $y^*$ and the first eigenvector of the graph, which we denote by $u_1(G)$,^{4} approaches 1 at a rate depending on the (inverted) spectral gap of the adjacency matrix.^{5} (^{4} The cosine similarity of two nonzero vectors $v$ and $w$ is $\rho(v, w) = \frac{v \cdot w}{\|v\| \|w\|}$. For unit vectors, by the law of cosines, $\|v - w\|^2 = 2 - 2\rho(v, w)$, and so $\rho(v, w) = 1 - \tfrac{1}{2}\|v - w\|^2$. Thus $\|v - w\| \leq \epsilon$ if and only if $\rho(v, w) \geq 1 - \epsilon^2 / 2$.) (^{5} It will sometimes be convenient for us to work with what we call the inverted spectral gap of a matrix $M$, which is the smallest value $\gamma$ such that $\lambda_2(M) \leq \gamma \lambda_1(M)$.) Our results will involve quantifying the competitive ratio of an intervention $y$, which we define as $W(b + y) / W(b + y^*)$, where $W(b + y)$ denotes the social welfare at equilibrium after an intervention vector $y$ is applied, and where $y^* = \arg\max_{\|z\|^2 \leq C} W(b + z)$. This ratio is at most 1, and maximizing it will be our objective for evaluating interventions.
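The welfare function and competitive ratio above can be evaluated numerically. The sketch below (helper names are ours; it uses the large-budget benchmark $\sqrt{C}\,u_1(G)$ in place of the exact finite-budget optimum) compares a blind uniform intervention against the eigenvector benchmark on a star graph:

```python
import numpy as np

def welfare(G, b, beta):
    """Equilibrium welfare W(b) = 0.5 * ||(I - beta*G)^{-1} b||^2."""
    n = len(b)
    a_star = np.linalg.solve(np.eye(n) - beta * G, b)
    return 0.5 * a_star @ a_star

def competitive_ratio(G, b, beta, y, C):
    """Welfare of intervention y relative to the large-budget benchmark
    sqrt(C) * u1(G), which the optimal intervention approaches."""
    _, vecs = np.linalg.eigh(G)
    u1 = vecs[:, -1]
    u1 = u1 * np.sign(u1.sum())        # orient toward the positive cone
    benchmark = np.sqrt(C) * u1
    return welfare(G, b + y, beta) / welfare(G, b + benchmark, beta)

# A star graph: the hub is the most central vertex.
G = np.zeros((5, 5))
G[0, 1:] = G[1:, 0] = 1.0
b = np.full(5, 0.5)
C = 100.0
uniform = np.sqrt(C / 5) * np.ones(5)  # blind intervention, ||y||^2 = C
ratio = competitive_ratio(G, b, beta=0.2, y=uniform, C=C)
```

The uniform intervention spends the same budget but spreads it away from the hub, so its ratio falls strictly below 1.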
2.3 Random Graph Models
We introduce several families of random graph distributions which we consider throughout. All of these models generate undirected graphs whose edges are drawn independently.
Definition 1 (Random Graphs with Independent Edges)
A distribution of random graphs with independent edges is specified by a symmetric matrix $P \in [0, 1]^{n \times n}$. A graph is sampled by including each edge $(i, j)$ independently with probability $P_{ij}$.
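A sampler for this model is straightforward; the following sketch (function name ours) draws a symmetric 0/1 adjacency matrix from $P$, omitting self-loops:

```python
import numpy as np

def sample_graph(P, rng):
    """Sample a symmetric 0/1 adjacency matrix with independent edges:
    edge (i, j), i < j, is included with probability P[i, j].
    Self-loops are omitted here; see the note on self-loops below."""
    n = P.shape[0]
    coin_flips = rng.random((n, n)) < P
    A = np.triu(coin_flips, k=1)       # keep strict upper triangle
    return (A + A.T).astype(float)     # symmetrize

rng = np.random.default_rng(0)
A = sample_graph(np.full((4, 4), 1.0), rng)  # P = all-ones: complete graph
```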
Graphs with given expected degrees ($G(w)$, or Chung–Lu graphs) and stochastic block model graphs, often used as models of realistic "well-behaved" networks, are notable cases of this model which we will additionally focus on.
Definition 2 ($G(w)$ Graphs)
A $G(w)$ graph is an undirected graph with an expected degree sequence given by a vector $w$, whose length (which we denote by $n$) defines the number of vertices in the graph. For each pair of vertices $i$ and $j$ with respective expected degrees $w_i$ and $w_j$, the edge $(i, j)$ is included independently with probability $w_i w_j / \sum_k w_k$.
Without loss of generality, we impose an ordering on values so that $w_1 \geq w_2 \geq \cdots \geq w_n$. To ensure that each edge probability as described above is in $[0, 1]$, we assume throughout that for all vectors $w$ we have $w_1^2 \leq \sum_k w_k$.
$G(n, p)$ graphs and power law graphs are well-studied examples of graphs which can be generated by the $G(w)$ model.^{6} (There are several other well-studied models of graphs with power law degree sequences, such as the Barabási–Albert preferential attachment model, as well as the fixed-degree model involving a random matching of "half-edges". Like the $G(w)$ model, the latter model can support arbitrary degree sequences. We restrict ourselves to the independent edge model described above.) For $G(n, p)$ graphs, $w$ is the uniform vector with $w_i = np$ for each $i$. Power law graphs are another notable special case, where $w$ is a power law sequence such that $w_i = c (i + i_0)^{-1/(\alpha - 1)}$ for some exponent $\alpha > 2$, some constant $c$, and some integer offset $i_0$. In such a sequence, the number of elements with value $x$ is asymptotically proportional to $x^{-\alpha}$.
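The expected adjacency matrix of a $G(w)$ graph, together with a power law degree sequence, can be constructed as in the following sketch (function names and the particular parameterization, with offset `i0`, are illustrative):

```python
import numpy as np

def chung_lu_matrix(w):
    """Expected adjacency matrix P[i, j] = w_i * w_j / sum(w) for G(w).
    Requires max(w)^2 <= sum(w) so that entries are valid probabilities."""
    w = np.asarray(w, dtype=float)
    assert w.max() ** 2 <= w.sum()
    return np.outer(w, w) / w.sum()

def power_law_weights(n, alpha, c, i0=0):
    """Expected degrees w_i = c * (i + i0)^(-1/(alpha - 1)), i = 1..n."""
    i = np.arange(1, n + 1)
    return c * (i + i0) ** (-1.0 / (alpha - 1))

w = power_law_weights(50, alpha=2.5, c=5.0)
P = chung_lu_matrix(w)
```

When self-loops are kept, each row of `P` sums exactly to the corresponding expected degree, i.e. `P.sum(axis=1)` equals `w`.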
Definition 3 (Stochastic Block Model Graphs)
A stochastic block model graph with $n$ vertices is undirected and has $k$ groups for some $k \leq n$. Edges are drawn independently according to a matrix $B \in [0, 1]^{k \times k}$, and the probability of an edge between two agents depends only on their group membership. For any two groups $s$ and $t$, there is an edge probability $B_{st}$ such that $P_{ij} = B_{st}$ for any agent $i$ in group $s$ and agent $j$ in group $t$.^{7} (If $k = n$, the stochastic block model can express any distribution of random graphs with independent edges, but it will be most interesting when there are few groups.)
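The expected adjacency matrix of a stochastic block model is determined entirely by the group labels and the block matrix, as in this small sketch (function name ours):

```python
import numpy as np

def sbm_expected_matrix(groups, B):
    """Expected adjacency matrix P[i, j] = B[g_i, g_j] from group labels.

    `groups` is a length-n vector of labels in {0, ..., k-1};
    `B` is the symmetric k x k matrix of inter-group edge probabilities.
    """
    groups = np.asarray(groups)
    return np.asarray(B, dtype=float)[np.ix_(groups, groups)]

# Two groups: dense within groups, sparse across.
P = sbm_expected_matrix([0, 0, 1], [[0.9, 0.1],
                                    [0.1, 0.8]])
```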
For each graph model, one can choose to disallow self-loops by setting $P_{ii} = 0$ for each $i$, as is standard for simple graphs. Our results will apply in both cases.
3 Approximately Optimal Interventions
The main idea behind all of our intervention strategies is to target effort proportionally to the first eigenvector of the expected adjacency matrix $P$. Here we assume that this eigenvector is known exactly. In Section 4, we discuss cases where an approximation of the eigenvector can be computed with zero or minimal information about the graph. Our main theorem for random graphs with independent edges shows conditions under which an intervention proportional to the first eigenvector of the expected matrix is near-optimal.
We define a property for random graphs, which we call concentration, that will ensure the expected first eigenvector constitutes a near-optimal intervention. In essence, this is an explicit quantification of the asymptotic properties of "large enough eigenvalues" and "non-vanishing spectral gap" for sequences of random graphs from [16]. Intuitively, this captures graphs which are "well-connected" and not too sparse. One can think of the first eigenvalue as a proxy for density, and the (inverse) second eigenvalue as a proxy for regularity or degree of clustering (it is closely related to a graph's mixing time). Both are important in ensuring concentration, and they trade off with each other (via the spectral gap condition) for any fixed values of and .
Definition 4 (Concentration)
A random graph with independent edges specified by $P$ satisfies concentration for if:
1. the largest expected degree is at least ;
2. the inverted spectral gap of $P$ is at most ;
3. the quantity is at least .
Theorem 3.1
If $P$ satisfies concentration, then with probability at least , the competitive ratio of $u_1(P)$ for a graph drawn from the distribution is at least for a sufficiently large budget, provided that the spectral radius of $\beta G$ for the sampled graph is less than 1.
The concentration conditions are used to show that the relevant spectral properties of generated graphs are almost surely close to their expectations, and the constraint on the spectral radius is necessary to ensure that actions and utilities are finite at equilibrium.^{8} (The spectral radius condition holds with probability when is at least ; this follows from e.g. [15], see Section 0.E.2 for details.) The sufficient budget will depend on the size of the spectral gap, as well as the standalone marginal values. For example, if holds in the realized graph for all , then a budget of will suffice. Intuitively, a large would mean more correlation between neighbors' actions at equilibrium. A large would mean a denser graph (more connections between agents) in expectation, and a large would mean that the realized graph is more likely to be close to expectation. All of these conditions reduce the required budget, because a small intervention gets magnified by agent interaction. Further, the smaller the magnitude of the initial $b$, the easier it is to change its direction.
The proof of Theorem 3.1 is deferred to Appendix 0.C. At a high level, our results proceed by first showing that the first eigenvector of the realized graph is almost surely close to $u_1(P)$, then showing that the spectral gap is almost surely large enough that the first eigenvector is close to the optimal intervention for appropriate budgets. A key lemma for completing the proof shows that interventions which are close to the optimal intervention in cosine similarity have a competitive ratio close to 1.
Lemma 1
Let be the vector of standalone values, and assume that . For any where and for some , the competitive ratio of is at least .
The main idea behind this lemma is a smoothness argument for the welfare function. When considering interventions as points on the sphere of radius $\sqrt{C}$, small changes to an intervention cannot change the resulting welfare by too much. This additionally implies that when a vector is close to the optimal intervention, the exact utility of the optimal intervention for some budget can be achieved by an intervention proportional to that vector with a budget which is not much larger.
In Appendix 0.A, we give a specialization of Theorem 3.1 to the case of $G(w)$ graphs. There, the expected first eigenvector is proportional to $w$ when self-loops are not removed. We give more explicit characterizations of the properties for $G(w)$, $G(n, p)$, and power law graphs which ensure the above spectral conditions (i.e. without relying on eigenvalues), as well as a budget threshold for near-optimality. We discuss the steps of the proof in greater detail; they are largely symmetric to the steps required to prove Theorem 3.1.
4 Centrality Estimation
The previous sections show that interventions proportional to $u_1(P)$ are often near-optimal simultaneously for almost all graphs generated by the distribution. While we often may have domain knowledge about a network which helps characterize its edge distribution, we still may not be able to precisely approximate the first eigenvector of $P$ a priori. In particular, even if we believe our graph comes from a power law distribution, we may not know which vertices have which expected degrees.
In this section, we discuss approaches for obtaining near-optimal interventions without initial knowledge of $P$. We first observe that "blind" interventions, which treat all vertices equally in expectation, fail to approach optimality. We then consider statistical estimation techniques for approximating the first eigenvector which leverage the special structure of $G(w)$ and stochastic block model graphs. In each case, we identify a simple target intervention, computable directly from the realized graph, which is near-optimal when concentration is satisfied. We then give efficient sampling methods for approximating these target interventions. Throughout Section 4, our focus is to give a broad overview of these techniques rather than to present them as concrete algorithms, and we frequently omit constant-factor terms with asymptotic notation.
4.1 Suboptimality of Blind Interventions
Here we begin by showing that when the spectral gap is large, all interventions which are far from the optimal intervention in cosine similarity will fail to be nearoptimal even if the budget is very large.
Lemma 2
Assume that is sufficiently large such that the role of standalone values is negligible. For any where and , the competitive ratio is bounded by
where is the square of the th largest eigenvalue of .
This tells us that if one were to design an intervention without using any information about the underlying graph, the intervention is unlikely to do well compared to the optimal one for the same budget unless eigenvector centrality is uniform, as in the case of $G(n, p)$ graphs. Thus, there is a need to learn graph information in order to design a close-to-optimal intervention. We discuss methods for this next.
4.2 Degree Estimation in Graphs
For $G(w)$ graphs, we have seen that expected degrees suffice for near-optimal interventions, and we show that realized degrees can suffice as well.
Lemma 3
If a $G(w)$ graph specified by $w$ satisfies concentration, then with probability at least ,
where $d$ is the empirical degree vector, and the intervention proportional to $d$ obtains a competitive ratio of when the other conditions for Theorem 3.1 are satisfied.
Thus, degree estimation is our primary objective in considering statistical approaches. As we can see from the analysis in Theorems 0.A.1 and 3.1, if we can estimate the unit-normalized degree vector to within distance, our competitive ratio for the corresponding proportional intervention will be . Our approaches focus on different query models, representing the types of questions we are allowed to ask about the graph; these query models are also studied for the problem of estimating the average degree in a graph [22, 17]. If we are allowed to query agents' degrees, near-optimality follows directly from the above lemma, so we consider more limited models.
Edge Queries.
Suppose we are allowed to query whether an edge exists between two vertices. We can then reduce the task of degree estimation to the problem of estimating the mean of biased coins, where for each vertex, we “flip” the corresponding coin by picking another vertex uniformly at random to query. By Hoeffding and union bounds, total queries suffice to ensure that with probability , each degree estimate is within additive error. Particularly in the case of dense graphs, and when is not too small compared to , this will be considerably more efficient than reconstructing the entire adjacency matrix. In particular, if , the above error bound on additive error for each degree estimate directly implies that the estimated degree vector is within (and thus ) distance of .
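A sketch of this coin-flipping reduction follows (function name and query budget are illustrative; each lookup of `A[i, j]` stands in for one edge query):

```python
import numpy as np

def estimate_degrees(A, queries_per_vertex, rng):
    """Estimate each vertex's degree from uniform edge queries.

    For vertex i we query q uniformly random other vertices j and count
    hits; deg(i) is estimated as (n - 1) * hits / q.
    """
    n = A.shape[0]
    est = np.empty(n)
    for i in range(n):
        # Draw q vertices uniformly from {0, ..., n-1} \ {i}.
        others = rng.integers(0, n - 1, size=queries_per_vertex)
        others[others >= i] += 1
        est[i] = (n - 1) * A[i, others].mean()  # one edge query per lookup
    return est

# On a complete graph every query hits, so estimates are exact.
n = 6
A = np.ones((n, n)) - np.eye(n)
est = estimate_degrees(A, queries_per_vertex=50,
                       rng=np.random.default_rng(0))
```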
Random Neighbor Queries.
Suppose instead we are restricted to queries which give us a uniformly random neighbor of a vertex. We give an approach wherein queries are used to conduct a random walk in the graph. The stationary distribution is equivalent to the first eigenvector of the diffusion matrix $D^{-1} G$, where $D$ is the diagonal matrix of degrees.^{9} (The stationary distribution of a random walk on a simple connected graph is $\pi_v = \deg(v) / \sum_u \deg(u)$ for all vertices $v$.) While $G(w)$ graphs may fail to be connected, in many cases the vast majority of vertices will belong to a single component, and we can focus exclusively on that component. We show this in Lemma 4. We can then learn estimates of degree proportions by sampling from the stationary distribution via a random walk.
The mixing time of a random walk on a graph determines the number of steps required such that the probability distribution over states is close to the stationary distribution in total variation distance. We can see that for $G(w)$ graphs satisfying concentration with a large enough minimum degree, mixing times will indeed be fast.
Lemma 4
For $G(w)$ graphs satisfying concentration and with , the mixing time of a random walk to within total variation distance of the stationary distribution is . Further, the largest connected component contains vertices in expectation.
If a random walk on our graph has some mixing time to an approximation of the stationary distribution, we can simply record our location after every steps to generate a sample. Using standard results on learning discrete distributions (see e.g. [12]), samples from approximate stationary distributions suffice to approximate within distance of with probability , directly giving us the desired bound. Joining this with Lemma 4, our random walk takes a total of steps (and thus queries) to obtain our target intervention, starting from an arbitrary vertex in the largest connected component.
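The random-walk procedure can be sketched as follows (function name and the toy graph are ours; `gap` plays the role of the mixing time between recorded samples):

```python
import numpy as np

def random_walk_degree_profile(A, steps, gap, rng, start=0):
    """Estimate the stationary distribution of a random walk (which is
    proportional to vertex degrees) by recording the walk's location
    every `gap` steps."""
    n = A.shape[0]
    counts = np.zeros(n)
    v = start
    for t in range(1, steps + 1):
        nbrs = np.flatnonzero(A[v])   # one random-neighbor query
        v = rng.choice(nbrs)
        if t % gap == 0:
            counts[v] += 1
    return counts / counts.sum()

# Triangle 0-1-2 with a pendant vertex 3 attached to vertex 2;
# degrees are (2, 2, 3, 1), so the stationary distribution is
# (2, 2, 3, 1) / 8.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
profile = random_walk_degree_profile(A, steps=40000, gap=4,
                                     rng=np.random.default_rng(0))
```

The intervention proportional to `profile` then serves as an estimate of the degree-proportional target.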
4.3 Matrix Reconstruction in SBM Graphs
There is a fair amount of literature on estimation techniques for stochastic block model graphs, often focused on cases where group membership is unknown [31, 1, 30, 28]. The estimation of eigenvectors is discussed in [5], where they consider stochastic block model graphs as a limit of a convergent sequence of “graphons”. Our interest is primarily in recovering eigenvector centrality efficiently from sampling, and we will make the simplifying assumption that group labels are visible for all vertices. This is reasonable in many cases where a close proxy of one’s primary group identifier (e.g. location, job, field of study) is visible but connections are harder to map.
In contrast to the $G(w)$ case, degree estimates no longer suffice for estimating the first eigenvector. We assume that there are $k$ groups and that we know each agent's group. Our aim will be to estimate the relative densities of connection between groups. When there are not too many groups, the parameters of a stochastic block model graph can be estimated efficiently with either edge queries or random neighbor queries. From here, we can construct an approximation of $B$ and compute its first eigenvector directly. In many cases, the corresponding intervention is near-optimal.
A key lemma in our analysis shows that the “empirical block matrix” is close to its expectation in spectral norm. We prove this for the case where all groups are of similar sizes, but the approach can be generalized to cover any partition.
Lemma 5
For a stochastic block model graph generated by with groups, each of size , let denote the empirical block matrix of edge frequencies for each group. Each entry per block in will contain the number of edges in that block divided by the size of the block. With probability at least ,
The same bound will then apply to the difference of the first eigenvectors, rescaled by the first eigenvalues (which will also be close). Similar bounds can also be obtained when group sizes may vary, but we stick to this case for simplicity.
Edge Queries.
If we are allowed to use edge queries, we can estimate the empirical edge frequency for each of the pairs of groups by repeatedly sampling a vertex uniformly from each group and querying for an edge. This allows reconstruction of the empirical frequencies up to error for each group pair, with probability , with samples. For the block matrix of edge frequencies for all group pairs, Lemma 5 implies that this will be close to its expectation when there are not too many groups, and so our estimate will be close in spectral norm as well. If the distribution satisfies concentration and the bound from Lemma 5 is small compared to the norm of , then the first eigenvectors of the expected, empirical, and estimated block matrices will all be close, and the corresponding proportional intervention will be near-optimal.
When all group pairs may have unique probabilities, this will only provide an advantage over a naive graph reconstruction with queries in the case where . If we know that all out-group probabilities are the same across groups, our dependence on becomes linear, as we can treat all pairs of distinct groups as one large group. If in-group probabilities are the same across groups as well, the dependence on vanishes, as we only have two probabilities to estimate.
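The edge-query estimation of the block matrix, followed by lifting its first eigenvector to a per-vertex intervention direction, can be sketched as follows (names ours; each lookup of `A[i, j]` stands in for one edge query):

```python
import numpy as np

def estimate_block_matrix(A, groups, k, queries_per_pair, rng):
    """Estimate the edge frequency for each pair of groups by repeatedly
    sampling a random vertex from each group and querying for the edge."""
    members = [np.flatnonzero(np.asarray(groups) == g) for g in range(k)]
    B_hat = np.zeros((k, k))
    for s in range(k):
        for t in range(s, k):
            hits = 0.0
            for _ in range(queries_per_pair):
                i = rng.choice(members[s])
                j = rng.choice(members[t])
                hits += A[i, j]        # one edge query
            B_hat[s, t] = B_hat[t, s] = hits / queries_per_pair
    return B_hat

def block_eigenvector_intervention(B_hat, groups):
    """Lift the first eigenvector of the estimated block matrix to a
    unit-norm per-vertex intervention direction."""
    _, vecs = np.linalg.eigh(B_hat)
    u = vecs[:, -1]
    u = u * np.sign(u.sum())           # orient toward the positive cone
    x = u[np.asarray(groups)]
    return x / np.linalg.norm(x)

# Two planted communities: all edges within groups, none across.
groups = [0, 0, 0, 1, 1, 1]
A = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        if i != j and groups[i] == groups[j]:
            A[i, j] = 1.0
rng = np.random.default_rng(0)
B_hat = estimate_block_matrix(A, groups, k=2, queries_per_pair=300, rng=rng)
x = block_eigenvector_intervention(B_hat, groups)
```

Note the small downward bias in the in-group estimates here, since the two sampled vertices may coincide; for a sketch this is acceptable, and it vanishes as groups grow.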
Random Neighbor Queries.
We can also estimate the empirical group frequency matrix with random neighbor queries. For each group, the row in the block matrix corresponding to the edge frequencies with other groups can be interpreted as a distribution over groups. samples per row suffice to get additive error at most for all of the relative connection probabilities for our chosen group. This lets us estimate each of the rows up to scaling, at which point we can use the symmetry of the matrix to recover an estimate of the whole matrix up to scaling by some factor. Again, when concentration holds and the bound from Lemma 5 is small, the first eigenvector of this estimated matrix will give us a near-optimal intervention.
5 Experiments
Our theoretical results require graphs to be relatively large in order for the obtained bounds to be nontrivial. It is natural to ask how well the heuristic interventions we describe will perform on relatively small random graphs, as well as on real-world graphs which do not come from a simple generative model (and may not have independent edges). Here, we evaluate our described interventions on real and synthetic network data, by adjusting standalone values and computing the resulting welfare at equilibrium, and find that performance can be quite good even on small graphs. Our experimental results on synthetic networks are deferred to Appendix 0.B.
5.1 Real Networks
To test the usefulness of our results for real-world networks which we expect to be "well-behaved" according to our requirements, we simulate the intervention process using network data collected from villages in South India, for purposes of targeted microfinance deployments, from [7]. In this context, we can view actions as indicating levels of economic activity, which we wish to stimulate by increasing individual propensities for spending and creating network effects. The dataset contains many graphs for each village using different edge sets (each representing a different kind of social connection), as well as graphs where nodes are households rather than individuals. We use the household dataset containing the union of all edge sets. These graphs have node counts ranging from 77 to 365, and our experiments are averaged over 20 graphs from this dataset. We plot competitive ratios while varying the budget (scaled by network size) and the spectral radius of $\beta G$, fixing the standalone value for each agent.
The expected degree intervention is replaced by an intervention proportional to exact degree. We also fit a stochastic block model to the graphs using a version of the approach described in Section 4.3, using exact connectivity frequencies rather than sampling. Our group labels are obtained by running the Girvan–Newman clustering algorithm [21] on the graph, pruning edges until there are either at least 10 clusters with 5 or more vertices, or 50 clusters total. We evaluate the intervention proportional to the first eigenvector of the reconstructed block matrix. All interventions are compared to a baseline where no change is applied to $b$, to demonstrate the relative change in social welfare.
In Figure 1, we find that degree interventions perform quite well, and are only slightly surpassed by first eigenvector interventions. The stochastic block model approach performs better than uniform when the spectral radius is sufficiently large, but is still outperformed by the degree and first eigenvector interventions. Upon inspection, the end result of the stochastic block model intervention was often uniform across a large subgraph, with little or no targeting of other vertices, which may be an artifact of the clustering method used for group assignment. On the whole, we observe that minimal-information approaches can indeed perform quite well on both real and simulated networks.
Acknowledgments.
We thank Ben Golub, Yash Kanoria, Tim Roughgarden, Christos Papadimitriou, and anonymous reviewers for their invaluable feedback.
References
 [1] (2018) Community detection and stochastic block models: recent developments. Journal of Machine Learning Research 18 (177), pp. 1–86.
 [2] (2000) A random graph model for massive graphs. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, STOC ’00, New York, NY, USA, pp. 171–180.
 [3] (2018) Diffusion, seeding, and the value of network information. In Proceedings of the 2018 ACM Conference on Economics and Computation, EC ’18, New York, NY, USA, pp. 641–641.
 [4] (2017) Centrality measures for graphons. CoRR abs/1707.09350.
 [5] (2017) Centrality measures for graphons. CoRR abs/1707.09350.
 [6] (2019) Using gossips to spread information: theory and evidence from two randomized controlled trials. The Review of Economic Studies 86 (6), pp. 2453–2490.
 [7] (2013) The diffusion of microfinance. Science 341 (6144), p. 1236498.
 [8] (2014) Gossip: identifying central individuals in a social network.
 [9] (1999) Emergence of scaling in random networks. Science 286 (5439), pp. 509–512.
 [10] (2017) Using aggregated relational data to feasibly identify network structure without network data. arXiv:1703.04157.
 [11] (2012) Optimal pricing in networks with externalities. Operations Research 60 (4), pp. 883–905.
 [12] (2020) A short note on learning discrete distributions. arXiv:2002.11457.
 [13] (2003) Eigenvalues of random power law graphs. Annals of Combinatorics 7 (1), pp. 21–33.
 [14] (2002) Connected components in random graphs with given expected degree sequences. Annals of Combinatorics 6 (2), pp. 125–145.
 [15] (2011) On the spectra of general random graphs. Electronic Journal of Combinatorics 18.
 [16] (2017) Distributions of centrality on networks. Papers, arXiv.org.
 [17] (2014) On estimating the average degree. In Proceedings of the 23rd International Conference on World Wide Web, WWW ’14, New York, NY, USA, pp. 795–806.
 [18] (2017) Optimal targeting strategies in a network under complementarities. Games and Economic Behavior 105, pp. 84–103.
 [19] (2019) Seeding with costly network information. arXiv:1905.04325.
 [20] (2017) Targeting interventions in networks. CoRR abs/1710.06026.
 [21] (2002) Community structure in social and biological networks. Proceedings of the National Academy of Sciences 99 (12), pp. 7821–7826.
 [22] (2008) Approximating average parameters of graphs. Random Structures & Algorithms 32 (4), pp. 473–493.
 [23] (2019) The value of price discrimination in large random networks. In Proceedings of the 2019 ACM Conference on Economics and Computation.
 [24] (1999) The web as a graph: measurements, models, and methods. In Computing and Combinatorics, T. Asano, H. Imai, D. T. Lee, S. Nakano, and T. Tokuyama (Eds.), Berlin, Heidelberg, pp. 1–17.
 [25] (2006) Sampling from large graphs. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, New York, NY, USA, pp. 631–636.
 [26] (2018) Graphon games: a statistical framework for network games and interventions. arXiv:1802.00080.
 [27] (2019) Sampling on networks: estimating eigenvector centrality on incomplete graphs. arXiv:1908.00388.
 [28] (2019) Blind identification of stochastic block models from dynamical observations. arXiv:1905.09107.
 [29] (1992) Improved bounds for mixing rates of Markov chains and multicommodity flow. In LATIN ’92, I. Simon (Ed.), Berlin, Heidelberg, pp. 474–487.
 [30] (2017) Variational inference for stochastic block models from sampled data. arXiv:1707.04141.
 [31] (2019) Optimal sampling and clustering in the stochastic block model. In Advances in Neural Information Processing Systems 32, pp. 13422–13430.
Appendix 0.A Graphs with Given Expected Degrees
In this section, we show a method for obtaining near-optimal interventions in graphs generated by the given expected degrees model. We show that the first eigenvector of such a graph is almost surely close to the expected degree vector, and our intervention will simply be proportional to it. This indicates that degree estimates are often sufficient for near-optimal intervention. We assume that the expected degree vector is sorted in descending order and that each entry is strictly positive. Our main theorem for this section holds for all distributions which satisfy the following specialization of concentration. The second condition corresponds to requiring a sufficiently large first eigenvalue, as was the case for general random graphs; these graphs do not exhibit clustering on average, and so we do not need an additional condition for the second eigenvalue.
Definition 5 (Concentration for Graphs)
A graph satisfies concentration if:

1. The largest expected degree is at least and at most .
2. The second-order average of the expected degree sequence is at least .
Theorem 0.A.1
For distributions satisfying concentration, and for budgets of at least , with probability at least ,
where is the vector of standalone marginal values and is the optimal intervention for a budget , provided the spectral radius of is less than 1. (Footnote 10: the spectral radius of is less than 1 with probability when .)
Our conditions ensure that the first eigenvalue of the graph is not too small, and that the other eigenvalues are not too large in magnitude. It is worth noting that is small unless the network effects in the game are negligible, in which case we should not expect eigenvector centrality to be important for small budgets. In Section 0.A.3, we consider applications to and power law graphs. Here we assume the vector which parameterizes the graph distribution is known, and in fact our intervention will simply be proportional to .
In the proof of Theorem 0.A.1, we proceed by observing that the first eigenvector of is proportional to , and then prove that when the spectral conditions hold, the first eigenvector of is nearly proportional to with high probability. We then determine sufficient budget sizes such that nearoptimality of the intervention follows from Lemma 1.
0.A.1 Proportionality of Eigenvector Centrality and Degree
Here we show that the first eigenvector of the adjacency matrix is almost surely close to the unit vector rescaling of the expected degree vector. We first observe that this holds in the standard version of the model, which allows for self-loops, and then we show that the first eigenvector does not change by much upon pruning loops.
Lemma 6
For graph distributions, any eigenvector of corresponding to a nonzero eigenvalue is proportional to .
It can be checked that, up to scaling, this is the unique vector which satisfies the eigenvalue equation for a nonzero eigenvalue. As all rows and columns of the expected matrix are proportional to the expected degree vector, there is only one nonzero eigenvalue. From the eigenvalue equation it is simple to check that the nonzero eigenvalue is equal to the second-order average degree.
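This proportionality is easy to verify numerically. The sketch below builds the rank-one expected matrix P[i, j] = w_i w_j / Σw of the model with self-loops, for an arbitrary positive weight vector w of our choosing, and confirms both the eigenvector direction and the eigenvalue value:

```python
import numpy as np

# Expected adjacency of the given-expected-degrees model (self-loops allowed):
# a rank-one matrix P[i, j] = w[i] * w[j] / sum(w).
w = np.array([5.0, 3.0, 2.0, 2.0, 1.0])
P = np.outer(w, w) / w.sum()

vals, vecs = np.linalg.eigh(P)
top = np.argmax(np.abs(vals))

# The unique nonzero eigenvalue equals the second-order average degree
# sum(w^2) / sum(w), and its eigenvector is proportional to w.
second_order_avg = (w ** 2).sum() / w.sum()
v = vecs[:, top]
cos = abs(v @ w) / np.linalg.norm(w)   # |cos| of angle between v and w (v is unit)
print(vals[top], second_order_avg, cos)
```

The cosine similarity comes out to 1 and the top eigenvalue matches the second-order average degree exactly, as the lemma asserts.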
In this formulation of the model, agents are allowed to have self-loops with positive probability. We note now that even if we remove the possibility of self-loops by setting the diagonal entries to 0, which in turn will remove the rank-deficiency, the spectral norm of the difference between the expected matrix (with loops) and the realized matrix (with or without loops) will be small. This in turn implies that the first eigenvectors of these matrices are close.
A key tool in this proof is a bound on the difference in first eigenvectors of matrices which are close in norm when one of them has a small second eigenvalue. We only make use of this in the case where the second eigenvalue of one matrix is zero, but a more general version of the result, used to prove Theorem 3.1, is included in Appendix 0.C.
Lemma 7
Let be a symmetric matrix with largest absolute eigenvalue (with multiplicity 1) and all other eigenvalues equal to 0. Let be a symmetric matrix with largest absolute eigenvalue , and suppose . Then,
This follows from considering a decomposition of the first eigenvector of into a component proportional to and one proportional to some vector orthogonal to . Given that has a small spectral norm, the image of in will be large, showing that the orthogonal component is small, which we can use to show that the eigenvectors are close.
We can then show that the norm difference of the expected and realized matrices is almost surely small using a matrix concentration bound from [15], allowing us to apply Lemma 7 to bound the difference in their eigenvectors, as the second eigenvalue of the expected matrix is 0.
Lemma 8
If the above assumptions about the distribution are satisfied, then with probability it holds that:
As the first eigenvector of the expected matrix is proportional to the expected degree vector, this shows that the first eigenvector will be nearly proportional to it regardless of whether we allow self-loops, and so our intervention will be close to the true graph’s first eigenvector. Next, we will see that this implies near-optimality for a sufficiently large budget.
0.A.2 Bounding Suboptimality of Interventions
The previous results indicate that the first eigenvector will be close to its expectation when our assumptions hold, even upon removing self-loops. We now give a similar result for the first and second eigenvalues, which allows us to guarantee a sufficiently large spectral gap for . First, we give a bound on the second eigenvalue of with the diagonal removed.
Lemma 9
Let be the matrix which is equal to along the diagonal and 0 elsewhere, and let denote the secondlargest absolute eigenvalue of . Then,
This follows from a similar orthogonal decomposition approach to the proof of Lemma 7, as well as another direct application of Lemma 7. We can then get an absolute bound on the first and second eigenvalues of .
Lemma 10
With probability at least , the second eigenvalue of is at most
and the first eigenvalue of is at least
This follows from applying the triangle inequality to Lemma 9 and Theorem 1 from [15], as well as from our observation about the first expected eigenvalue of . To complete the proof of Theorem 0.A.1, we can combine the previous results to show that when the stated conditions hold. Proposition 2 from [20] allows us to use this fact to show that when the budget is above our lower bound, the first eigenvector is close to the optimal intervention in cosine similarity. We can then show that and the optimal intervention are close in cosine similarity, using Lemma 8 as a key step. Plugging this into Lemma 1 gives us the theorem.
0.A.3 Examples: and Power Law Graphs
graphs are perhaps the most well-studied family of random graphs, and we can interpret them as a special case of the given expected degrees model and give explicit conditions for when concentration holds. These conditions are lower bounds on and which guarantee the requirements for applying Theorem 0.A.1, and for clarity we restate the eigenvector similarity and near-optimality results for the case of graphs.
Lemma 11
For graphs with and at least ,
Theorem 0.A.2
For graphs with and at least , with probability at least , the uniform intervention achieves utility within of the optimal intervention for budgets at least .
The constant factor for the lower bound on we obtain in the proof of Lemma 11 is large, but can likely be optimized, and our empirical results in Section 5 indicate that the uniform intervention is close to optimal on reasonably small graphs.
This theorem also applies directly to stochastic block model graphs where all blocks are the same size and each has the same in-group and out-group probabilities. Here, all agents are equally central in expectation, and it is simple to check that the first eigenvector of the expected matrix will be uniform. This allows a near-optimal intervention for sufficiently large and dense graphs without any knowledge of the edges, connectivity probabilities, group memberships, or degrees.
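The uniformity claim is quick to check numerically on the expected matrix of such a block model (the block count and connection probabilities below are illustrative values of ours):

```python
import numpy as np

n_blocks, block_size = 4, 25
p_in, p_out = 0.4, 0.1
labels = np.repeat(np.arange(n_blocks), block_size)

# Expected adjacency: p_in within blocks, p_out across blocks.
P = np.where(labels[:, None] == labels[None, :], p_in, p_out).astype(float)

vals, vecs = np.linalg.eigh(P)
v = vecs[:, np.argmax(np.abs(vals))]

# The first eigenvector is uniform, so eigenvector targeting coincides with
# the uniform intervention in expectation.
print(v.std() / np.abs(v).mean())
```

The top eigenvalue here is block_size * p_in + (n − block_size) * p_out, and its eigenvector is constant; the remaining eigenvalues are strictly smaller, so the uniform direction is indeed the leading one.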
We can also show that our approach is near-optimal for many power law graphs, as introduced in Section 2. Power law graphs are a notable special case of the given expected degrees model, and are often studied as models of real-world networks. Our results hold for power law graphs where , a range containing many observed examples [24, 9, 2].
Lemma 12
For power law graphs with , if we have that is at least , then with probability at least ,
Theorem 0.A.3
For power law graphs with , if is at least and if is small enough to ensure that , then with probability at least , the intervention achieves utility within a factor of the optimal intervention for budgets of at least , where .
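A sketch of sampling from the given expected degrees model with a power-law weight sequence may help make the setup concrete. The degree caps (maximum 25, minimum 1) mirror the experimental setup described later; the exponent value and all names are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_weights(n, beta, w_min=1.0, w_max=25.0):
    """Power-law expected degrees: w_i proportional to rank^(-1/(beta-1)),
    with the maximum fixed at w_max and small values clipped up to w_min
    (mirroring the degree caps used in the experiments)."""
    ranks = np.arange(1, n + 1, dtype=float)
    return np.maximum(w_max * ranks ** (-1.0 / (beta - 1.0)), w_min)

def sample_graph(w, rng):
    """Sample a simple undirected graph with P(edge ij) = min(w_i w_j / sum(w), 1)."""
    n = len(w)
    p = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    upper = np.triu(rng.random((n, n)) < p, k=1)   # each pair sampled once, no loops
    return (upper | upper.T).astype(int)

w = power_law_weights(500, beta=2.5)
adj = sample_graph(w, rng)
print(adj.sum(), w.sum())   # realized vs. expected total degree
```

With 500 vertices the realized total degree lands close to the expected total, up to the removal of self-loops and any clipping of edge probabilities at 1.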
Appendix 0.B Additional Experiments
0.B.1 Random Networks
We evaluate our results in simulated and power law graphs, analyzing the competitive ratio; we compare a baseline (no intervention), expected degree interventions, the first eigenvector intervention (computed from the realized graph), and the optimal intervention for each graph (computed via quadratic programming).
For all experiments, we fix and for each agent. For both graph families, we experiment by independently varying the budget size , a distribution parameter ( or ), and the spectral radius of . We plot the competitive ratio of each heuristic intervention with the optimal intervention as parameters are varied, and each parameter specification is averaged over 10 graph samples. We generate power law graphs according to the model with a power law sequence, where the maximum expected degree is fixed at 25 and the minimum is fixed at 1 for all exponent values.
In graphs (see Figure 2), we see that the first eigenvector intervention is close to optimal in almost all cases, and that the uniform intervention is quite competitive as well, particularly as the graph becomes denser. When the spectral radius is small, the uniform intervention outperforms the first eigenvector intervention as expected; it is optimal when the spectral radius is 0. The small baseline values at most points indicate that the change to social welfare from our interventions is indeed quite drastic, even when .
In power law graphs (see Figure 3), the uniform intervention does not perform nearly as well unless the spectral radius is small, where it outperforms other approaches. The expected degree intervention does considerably better in general. When , where our theoretical results do not hold, we see that heuristics still perform well. We expect that this is an artifact of the fixed expected degree bounds and the small graph size. In large power law graphs, larger exponents correspond to the graph having a smaller “core” of dense connectivity, which can be quite important for influencing the rest of the network. Our experiments suggest that smaller cores are less important in graphs of this size, which more closely resemble sparse graphs with a few wellconnected vertices.
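The welfare comparisons in these experiments can be reproduced in a few lines. The sketch below is hedged: it assumes the standard linear-quadratic setup in which equilibrium actions are a = (I − G)⁻¹ b for a scaled network-effects matrix G, and takes equilibrium welfare proportional to ‖a‖² (our normalization, not necessarily the paper's exact one); the graph density, budget, and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def welfare(G, b):
    """Equilibrium social welfare, proportional to ||a||^2 with a = (I - G)^{-1} b."""
    a = np.linalg.solve(np.eye(len(b)) - G, b)
    return 0.5 * a @ a

# Random symmetric graph, scaled so the network-effects matrix has spectral radius 0.5.
n = 50
A = np.triu((rng.random((n, n)) < 0.1).astype(float), k=1)
A = A + A.T
G = 0.5 * A / np.abs(np.linalg.eigvalsh(A)).max()

b = np.ones(n)                     # standalone values
budget = 10.0

# Uniform vs. first-eigenvector intervention, both spending ||y||^2 = budget.
uniform = np.sqrt(budget / n) * np.ones(n)
v = np.linalg.eigh(G)[1][:, -1]
eig = np.sqrt(budget) * np.abs(v)  # Perron vector is nonnegative; abs for safety

base = welfare(G, b)
print(base, welfare(G, b + uniform), welfare(G, b + eig))
```

Since G is nonnegative with spectral radius below 1, (I − G)⁻¹ is entrywise nonnegative, so any nonnegative intervention strictly increases welfare; the relative gains of the two heuristics depend on the spectral radius, as in the figures.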
Appendix 0.C Omitted Proofs
First, we prove a result about general matrices that we will use later in the proofs.
Lemma 13
For any two symmetric matrices and , let and be the largest (in magnitude) eigenvalues of , and let be the largest (in magnitude) eigenvalue of (note that, by definition, and ). Let be the inverted spectral gap, i.e. . Then,
where is the unit eigenvector with the largest absolute eigenvalue of the matrix , and .
Proof
First, we observe that,
(1) 
by the triangle inequality. Also, we can write, where is a unit vector, and . Since , its image under can at most have a magnitude of . That is,
From our assumption, we can write
Combining with (1), we get
Using the fact that , we get
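Lemma 13 has the flavor of a Davis–Kahan perturbation bound. Since the displayed inequality did not survive extraction, the sketch below checks the classical sin-θ bound, sin θ ≤ 2‖M̂ − M‖ / (λ₁(M) − λ₂(M)), as a stand-in on a synthetic example (the matrices and scales are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric matrix with a clear spectral gap, plus a small symmetric perturbation.
n = 30
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
M = Q @ np.diag([10.0, 4.0] + [1.0] * (n - 2)) @ Q.T

E = rng.standard_normal((n, n))
E = 0.05 * (E + E.T) / 2
M_hat = M + E

v = np.linalg.eigh(M)[1][:, -1]        # top eigenvector of M
v_hat = np.linalg.eigh(M_hat)[1][:, -1]

sin_theta = np.sqrt(1.0 - min(1.0, (v @ v_hat) ** 2))
bound = 2 * np.linalg.norm(E, 2) / (10.0 - 4.0)
print(sin_theta, bound)
```

The realized angle is typically far below the bound; the bound degrades, as in the lemma, when the spectral gap shrinks or the perturbation norm grows.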
Proof (Proof of Theorem 3.1)
From Lemma 13, for matrices and , we have that
where is the first eigenvalue of , for all , is the first eigenvalue of , and . By the triangle inequality , and so
Since , we have, from the proof of Theorem 1 in [15], that . If , we have that
and so . For a sufficiently large budget, by Proposition 2 in [20], , where is the optimal intervention. By the triangle inequality, this gives us that . Applying Lemma 1 and the law of cosines gives a competitive ratio of .
Proof (Proof of Lemma 1)
First we give a general expression of the change in welfare observed at equilibrium after some intervention .
By the law of cosines, there is some vector with norm such that . Let . We can then give an additive bound on the utility loss :
(Cauchy–Schwarz)
(spectral radius definition)
(law of cosines)
From our lower bound on we then have:
Further, we have that
as this is obtainable even if by letting be the first eigenvector of associated with the spectral radius . Recall that social welfare at equilibrium is increasing in . We can then give a multiplicative bound on utility loss:
and so
Proof (Proof of Lemma 6)
Let be an eigenvector of , and let be the corresponding eigenvalue. It suffices to show that for any and any , if , then:
(2) 
In graphs, an edge between vertices and is constructed with a probability proportional to . As such, for any it holds that:
(3) 
By the eigenvalue equation, for any ,
and so for any , by (3),
which implies (2) when is nonzero.
Proof (Proof of Lemma 7)
This follows from Lemma 13 with . We also give a simpler direct alternate proof here, which may be of independent interest.
First, we observe that
(4) 
by the triangle inequality. We can write , where is a unit vector orthogonal to , and , for some values and . Since , its image under can at most have a magnitude of . That is,
From our assumption, we can write