Games Networks Play

Aimal Tariq Rextin & Zahid Irfan

 (01030042 & 01030037)@lums.edu.pk

Lahore University of Management Sciences,

Lahore, Pakistan.

12 May 2003

 

Abstract:

 


Traditional network protocols rely on the cooperation of sources to achieve optimal performance. Unfortunately this assumption does not always hold, as the frequent problems faced by the Internet show. The game theoretic approach to network problems is a ray of hope: it tries to design protocols and mechanisms that provide an efficient and fair distribution of resources even in the presence of rogue agents. This survey provides the theoretical background of the game theoretic approach to networks, proposed schemes for achieving performance, pricing mechanisms, as well as routing solutions using game theory. We show that pricing is the only currently viable solution to the congestion control problem in networks, but we believe that, given the current work and potential, the future lies in Mechanism Design.

Key Words:

Networks, Game Theory, Mechanism Design, Congestion Control, Routing

 


1         Introduction:

 

Resource optimization is one of the most important issues in computer networks, and a great deal of research has been conducted to find optimal solutions. Resource allocation can broadly be divided into three areas: flow control, routing and congestion control.

 

Congestion control is a set of techniques used to ensure that a network performs at an acceptable level even when the demand is near or exceeds the capacity of the network. In the absence of congestion control, routers tend to drop large numbers of packets when the arriving traffic exceeds the capacity of the network. This results in packet retransmissions, and the performance of the network drops to an unacceptable level.

 

A number of approaches have been developed to solve the problem of network congestion, such as the pioneering work by Nagle on congestion control [3], Jacobson’s slow start and exponential back-off [4], and Random Early Detection (RED) [5]. All these techniques worked quite well in restoring confidence in networks through effective congestion control. RED in particular is a very effective technique; it has gained recognition and is implemented on routers throughout the Internet. Research in congestion avoidance and control was extended through Explicit Congestion Notification (ECN) [6], which controls flow at the routers but depends heavily on the end users behaving responsibly. Active Queue Management techniques developed over the years have also improved congestion control and congestion avoidance [7]. These congestion control and avoidance techniques depend heavily on the assumption that an end user will lower its rate once it learns, through probing or explicit notification, that there is congestion in the network. It is possible that a source will ignore these notifications and continue trying to acquire additional network resources; in fact such a source can obtain higher throughput by adopting a selfish policy [14]. Sources that deviate from the agreed technique are called rogue sources, and game theory provides a framework through which we can study the behavior of the network in the presence of rogue sources.

 

Game theory is a mature field of mathematics that provides a framework in which to model and analyze conflict and cooperation among independent decision makers [1, 9]. It analyzes the decision making process when two or more players are involved. Each player has a payoff for the action it takes, and the payoff can depend on the actions of the other players. The action a player takes depends on the moves (or strategies) available to it, called its strategy space; the information it has about the other players, called its information sets; and how much satisfaction it derives from a particular outcome, called its payoff or utility function. In 1950 John Nash Jr. introduced a solution concept for such games [12], now known as the Nash Equilibrium. It represents a state in which no user can gain by changing its strategy given the strategies of all other users.
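
To make the definition concrete, the following small sketch (our own illustration, with hypothetical payoff numbers rather than anything from the surveyed papers) checks every pure-strategy profile of a two-player game for the Nash property: a profile is an equilibrium if neither player can improve its payoff by a unilateral deviation.

```python
# Minimal illustration (hypothetical payoffs): find the pure-strategy
# Nash equilibria of a 2-player game by brute force.  A profile (i, j)
# is a Nash Equilibrium if neither player can gain by unilaterally
# switching its own strategy.

# payoffs[(i, j)] = (payoff to player 1, payoff to player 2)
payoffs = {
    (0, 0): (3, 3),   # both send at a moderate rate
    (0, 1): (1, 4),   # player 2 behaves aggressively
    (1, 0): (4, 1),   # player 1 behaves aggressively
    (1, 1): (2, 2),   # both aggressive: congestion hurts everyone
}

def is_nash(i, j):
    u1, u2 = payoffs[(i, j)]
    best_1 = all(payoffs[(k, j)][0] <= u1 for k in (0, 1))  # player 1 cannot improve
    best_2 = all(payoffs[(i, k)][1] <= u2 for k in (0, 1))  # player 2 cannot improve
    return best_1 and best_2

equilibria = [(i, j) for i in (0, 1) for j in (0, 1) if is_nash(i, j)]
print(equilibria)   # the only equilibrium is (1, 1): mutual aggression,
                    # even though (0, 0) would make both players better off
```

The toy payoffs also preview a theme of the survey: the selfish equilibrium (1, 1) is worse for everyone than the cooperative outcome (0, 0).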

 

Game theory has been applied to a number of areas in computer networks such as congestion control, flow control and multicasting [1][11][27]. In this report we focus on the game theoretic approach to congestion control. This approach looks at the network from the user’s perspective, where the user is a source or host computer. It models the network as a game whose players (also called users) are the sources, routers and destinations; each player uses its strategies to maximize its own payoff. Unlike the traditional approach, where each user is assumed to follow a mandated protocol, the game theoretic approach makes no such assumption. In fact, it goes to the other extreme and considers all users to be selfish, acting only in their own self-interest. Scott Shenker in his seminal paper [14] identifies the basic premises on which the game theoretic perspective is based: all users are selfish, and the performance of a system is evaluated only by the amount of satisfaction it delivers to the users, where a user’s satisfaction is directly proportional to the amount and quality of service it gets from the system. The challenge in the game theoretic approach is to design algorithms so that the selfish motives and actions of individual users translate into desired results for the whole system.

 

The rest of the report is organized as follows. Section 2 describes the theoretical aspects of game theory, consisting of the game theoretic model, learning in networks, mechanism design and complexity analysis. Section 3 describes schemes that achieve Nash Equilibrium. Section 4 describes the use of pricing techniques to control congestion. We believe that routing strategies are a determinant of congestion control and congestion avoidance, so Section 5 describes the application of game theoretic analysis to routing. Concluding remarks are given in Section 6.

 

2         Theoretical Aspects of Game Theory:

2.1      The Game Theoretic Model of a Network

The game theoretic model of a network considers n users, each with private information. The private information can be its throughput, drop rate or the average delay of its packets. A user selects a strategy from its set of possible strategies so as to increase the benefit it gains from the system; this benefit is called the user’s utility and is known only to the user. The utility of a user determines its preferences over its own strategies. Since a user’s gain depends on the strategies adopted by all other users, a user does not attain a utility-maximizing strategy immediately; instead all users keep changing their strategies until a stable operating point is reached. This equilibrium point for the network is known as the Nash Equilibrium, a state in which no user can gain by changing its strategy given the strategies of all other users.

 

A system can have multiple Nash Equilibrium points, each with its own properties, and there can be an equilibrium point at which no user gains any benefit. Hence achieving Nash Equilibrium alone is not desirable; Shenker [14] argues that only Nash Equilibrium points with certain valuable properties, such as efficiency and fairness, are desirable. An efficient Nash Equilibrium is one that produces good results in terms of the users’ utility functions, i.e. the utility of all the agents should be high. A standard measure of efficiency is Pareto optimality, which means that no user in the system can increase its payoff without decreasing another user’s payoff; the network is performing at its best when it is at a Pareto optimal Nash equilibrium. Another desirable property at Nash Equilibrium is that the benefit gained from the system should be more or less equal for all users. A standard measure of fairness is the max-min fair rate, which requires that a link’s limited bandwidth be allocated in a way that maximizes the minimum rate received by any user. Shenker argues that since a network is not always at equilibrium, a better criterion for fairness is that the system be considered fair as long as a user is able to increase its utility.
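
As an illustration of the max-min criterion, the following sketch (our own, not from [14]) computes a max-min fair allocation of a single link’s bandwidth using the usual progressive-filling idea: the remaining capacity is split equally, users whose demand is below their share are fully satisfied, and the leftover is redistributed among the rest.

```python
# Illustrative sketch (not from the surveyed papers): max-min fair
# allocation of one link's capacity among users with given demands,
# via progressive filling.

def max_min_fair(capacity, demands):
    allocation = {user: 0.0 for user in demands}
    remaining = dict(demands)              # users still wanting more
    cap_left = float(capacity)
    while remaining and cap_left > 1e-9:
        share = cap_left / len(remaining)  # equal split of what is left
        satisfied = []
        for user, want in remaining.items():
            grant = min(share, want)
            allocation[user] += grant
            cap_left -= grant
            remaining[user] = want - grant
            if remaining[user] <= 1e-9:
                satisfied.append(user)
        if not satisfied:                  # nobody was capped this round
            break
        for user in satisfied:
            del remaining[user]
    return allocation

# Example: a 10 Mb/s link, demands in Mb/s (hypothetical numbers).
print(max_min_fair(10, {"a": 2, "b": 4, "c": 8}))
# approximately a: 2, b: 4, c: 4 -- the smallest rate is as large as possible
```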

 

Another point that must be understood is that the system has to converge towards a Nash Equilibrium, and the faster this convergence the better. Convergence is very similar to simple hill climbing, in which the algorithm always takes the best immediate, or local, step while searching for an answer. Users thus converge to a Nash Equilibrium point by selecting, based on the utility obtained from the previous strategy, the strategy that gives the best possible result in the immediate future. However, this is a simplified model of the system, since there can be users who are relatively more sophisticated; for example, a user with information about the other users’ utility functions can allow the others to equilibrate before varying its rate, thereby becoming a leader that the other users follow. The equilibrium reached in this situation is called the Stackelberg Equilibrium. The utility of the leader at a Stackelberg equilibrium is never less than at the corresponding Nash equilibrium; the leader can influence the equilibrium point finally attained, which from the system’s fairness point of view is not desirable. Therefore the mechanism should ensure that the Stackelberg equilibrium and the Nash equilibrium coincide, so that less sophisticated users are protected from the sophisticated ones.
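
The convergence process can be pictured with a toy game (our own construction, not taken from [14]): two users share a link of capacity C, each user’s utility rewards sending more until the total load congests the link, and each user repeatedly plays a best response (a hill-climbing step) to the other’s last rate.

```python
# Toy best-response dynamics (our own illustrative game, not from [14]).
# Two users share a link of capacity C.  User i sends at rate x_i and
# receives utility u_i = x_i * (C - x_1 - x_2): sending more helps until
# the link becomes loaded, after which congestion hurts everyone.
# The best response to the other's rate y is x = (C - y) / 2.

C = 12.0
x = [0.0, 10.0]                   # arbitrary starting rates

for step in range(20):
    for i in (0, 1):                          # each user hill-climbs in turn
        other = x[1 - i]
        x[i] = max(0.0, (C - other) / 2.0)    # its current best response
    print(step, [round(v, 3) for v in x])

# The rates converge to x_1 = x_2 = C / 3 = 4.0, the unique Nash
# Equilibrium of this toy game: once both send at C/3 neither user can
# gain by changing its rate unilaterally.
```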

 

We also want the convergence to Nash Equilibrium to be robust; by robustness we mean that as long as the users apply some kind of self-optimizing technique, Nash equilibrium will eventually be reached. Self-optimizing techniques are the means by which users observe their utility and change their strategy accordingly with the aim of maximizing that utility. Shenker [14] proved that convergence can only be robust if there is a unique Nash equilibrium. A unique Nash equilibrium is also desirable because with multiple Nash equilibrium points the users get involved in super games, in which the players try to influence which Nash equilibrium the system ultimately reaches. Another option, discussed in [16], is to eliminate self-optimization and hill climbing altogether by letting users report their utility functions directly to the switch. The allocation function, which quantifies the allocation of resources to the users, then becomes directly related to the reported utilities. The problem in this case is that users would be motivated to lie about their true utilities in order to gain more resources; the challenge is to design mechanisms that encourage truth revelation, called revelation mechanisms. The revelation principle works by ensuring that the benefit gained by lying is always less than the benefit of reporting truthfully. Moreover, the Nash equilibrium should be attained quickly and the mechanism should protect a user from the actions of other users.

2.2      Learning in Networks

Shenker [24] argues that the assumption of Nash Equilibrium as the primary solution concept in a network game is not correct, because Nash Equilibrium requires that all players have perfect knowledge about the game and the strategies employed by the other players, which is not true in networks. Moreover, a network game is a form of repeated game, in which the players learn a best response over repeated plays of the game. The basic model is that each player has a number of possible actions; the payoff function is unknown to the player, but it can observe the result of an action. A player acts in the manner that appears most profitable in the current situation, given the history of strategies and their results; after the action is taken and its result observed, the player updates this history.

 

Shenker [24] describes in detail why the network context is different from the traditional game theoretic context. He gives four reasons:

·         Users have very limited information; they do not know the number of other users, nor their own or other players’ payoff functions.

·         In a network game, the payoff function and the number of users change over time; hence if a user detects a change in performance, it does not know whether this is due to a strategy change by another user or a change in the underlying network.

·         Since optimization is carried out by software, e.g. the TCP congestion control algorithm, the software must be generic in nature. This implies that we cannot attach prior probabilities to different strategies, as is done for mixed strategies in classical game theory.

·         Since the Internet is geographically distributed, it is impossible to synchronize the activities of the different users. A user can update its strategy immediately without considering other users. Hence there is no concept of rounds, which classical game theory requires for repeated games.

 

Learning algorithms for this setting therefore need random experimentation to find good strategies, responsiveness so that they can react to changes in bounded time, and a probability assignment over strategies such that the most promising strategy receives the highest probability and is the most likely to be selected.
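
A minimal sketch of such a learner follows (a generic probability-updating scheme of our own, not an algorithm from [24]): each strategy keeps a weight, strategies are sampled in proportion to their weights, and the weight of the chosen strategy is nudged toward the payoff it produced, so the most promising strategy becomes the most likely to be selected while occasional exploration still tracks change.

```python
import random

# Illustrative probability-weighted learner (our sketch, not from [24]).
# The payoff function is unknown to the player; it only observes the
# result of the action it actually took.

strategies = ["cautious", "moderate", "aggressive"]
weights = {s: 1.0 for s in strategies}           # start indifferent

def hidden_payoff(strategy):
    # Unknown to the learner; hypothetical numbers for illustration.
    base = {"cautious": 0.4, "moderate": 0.7, "aggressive": 0.3}[strategy]
    return base + random.uniform(-0.1, 0.1)      # noisy observation

def choose():
    total = sum(weights.values())
    r = random.uniform(0, total)
    for s in strategies:                         # sample proportionally to weight
        r -= weights[s]
        if r <= 0:
            return s
    return strategies[-1]

step = 0.2          # learning rate: bounds how quickly the learner responds
for _ in range(500):
    s = choose()
    observed = hidden_payoff(s)
    # Nudge the chosen strategy's weight toward the payoff it just produced.
    weights[s] += step * (observed - weights[s])

print(weights)      # "moderate" ends up with the largest weight
```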

2.3      Mechanism Design

Algorithmic Mechanism Design (AMD) aims to design systems in which the agents’ selfish behavior results in system-wide goals [10]. The system-wide goals are quantified by a Social Choice Function (SCF), which maps each instantiation of the agents’ preferences to a particular outcome from a universal space of all possible outcomes. An SCF is called strategy-proof if the agents in the system have no incentive to lie and the desired social goals are achieved by asking the agents to reveal their utility functions. Such mechanisms are called direct mechanisms.

 

There is a class of mechanisms called Vickrey-Clarke-Groves (VCG) mechanisms [10] in which the utilities are quasi-linear: they are represented as the simple sum of a payment and the agent’s valuation of the outcome. In such cases the mechanism tries to maximize the total valuation over all of the agents.

 

In contrast to direct mechanisms there is another class of mechanisms in which the agents do not have to reveal their utility functions; rather, they select a strategy from a strategy space. Such mechanisms, called indirect mechanisms, attempt to optimize system performance while agents attempt to maximize their own utilities by selfishly choosing strategies. The mechanism acts in a way such that no agent has any incentive to change its behavior unilaterally. This type of centralized mechanism design cannot be implemented in the Internet, where the agents and resources are distributed. This calls for a distributed approach, Distributed Algorithmic Mechanism Design (DAMD), which reflects the distributed nature of the Internet.

 

A DAMD system comprises the following types of entities.

·         Obedient Nodes are correctly functioning agents that act as they are required or programmed to act.

·         Faulty Nodes have developed faults for some reason, e.g. hardware, software or configuration errors, and do not have any independent strategic goals.

·         Strategic Nodes implement strategies intended to maximize their own goals.

·         Adversarial Nodes are the most interesting and are categorized as having ‘malicious’ intentions. They are either innocently malicious or Byzantine enemies, which deviate from prescribed algorithms or behavior in an arbitrary manner.

 

The most challenging part of DAMD is to design a protocol or mechanism in which adversarial behavior can be detected without any modifications such as the installation of public keys.

2.4      Applications

Mechanism design has been successfully applied to the simple task allocation problem, and DAMD to BGP-based low-cost routing and multicast cost sharing. These successful applications increase confidence that the techniques can also be applied to congestion control.

2.5      Complexity Analysis

Game theoretic modeling in most of economic theory is concerned with how systems react to various incentives. When computer scientists adopted it as a tool, they treated it the way they treat their own problems: they used the notions of computability and complexity to analyze games and then devised algorithms that can be used to solve them. Nisan and Ronen [15] did seminal work in this regard; they showed that such games are not only computable but also analyzed their complexity. Feigenbaum’s [9] analysis of a simple 2-player game with mixed strategies, where the game is probabilistic, shows that it is computable in NEXP (nondeterministic exponential time).

 

The complexity of mechanism design is found to be NP-complete [24]. For some games [23], such as pure-strategy Bayesian games in which the players have private information about their preferences, it is NP-hard even to determine whether a Nash Equilibrium exists. This makes the application of pure game-theoretic models to real-life problems hard.

3         Nash Equilibrium Achieving Schemes:

3.1      CHOKe+

Akella, Seshan, et al. [11] modeled TCP as a game. In a TCP-based system, a user starts its transmission at a predefined input rate. The input rate is increased by a parameter called the additive increase parameter (α) whenever congestion is not detected in the network. When congestion is detected, the user decreases its transmission rate by a parameter known as the multiplicative decrease parameter (β). Note that congestion is detected when the user observes that its packets have been dropped. The two parameters together are called the congestion parameters (α, β). The objective is to find (αE, βE), the values of (α, β) at Nash Equilibrium. It must be remembered that the end users are selfish: they adjust α and β to maximize their utility, which the authors take to be the goodput of a user, the average number of packets delivered per unit time per flow. The users change α and β until the system stabilizes at a Nash Equilibrium, when no source has an incentive to modify its AIMD (additive increase, multiplicative decrease) flow parameters to increase its utility. The authors analyzed the efficiency of the Nash equilibrium by simulation under three variants of the game:

-          Varying Additive Increase parameter of flow

-          Varying Multiplicative Decrease parameter of flow

-          Varying Both the parameters

 

Three different TCP implementations, Reno, SACK (selective acknowledgement) and Tahoe, are evaluated on routers employing a drop-tail or RED policy.

 

The primary difference between the three implementations of TCP lies in their penalty schemes.

1.       Severe Penalty Scheme: implemented in TCP-Tahoe, where transmission is stopped as soon as congestion is sensed.

2.       Gentle Penalty Scheme: implemented in TCP-SACK, where the observed losses determine how much penalty is imposed.

3.       Hybrid Penalty Scheme: implemented in TCP-Reno, where the penalty is gentle until a specific threshold, after which the severe penalty is imposed.

 

The analysis is based on the maximization of the goodput Gi (the ratio of output to input) for each flow Fi (the transmission rate of each source). Each agent selects a parameter set (αi, βi) such that Gi is maximized. When no user can improve its goodput by changing its own parameters, the TCP game is said to be at Nash Equilibrium. The key quantities in the analysis of the TCP game are (αE, βE) and the network’s efficiency.
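
The flavor of this game can be reproduced with a very small simulation of our own (a caricature of [11], which uses full TCP simulations; the link model and parameter values below are made up): each flow increases its rate by α per round and multiplies it by β when the shared link overflows, and its goodput is the traffic that actually gets through.

```python
# Tiny AIMD caricature (our own sketch; the real study [11] simulates
# full TCP).  Two flows share a link of capacity C; flow i uses the
# parameters (alpha_i, beta_i).

def goodputs(params, C=100.0, rounds=10000):
    rates = [1.0 for _ in params]
    delivered = [0.0 for _ in params]
    for _ in range(rounds):
        total = sum(rates)
        if total > C:                            # congestion: losses observed
            scale = C / total                    # only a C-share gets through
            for i, r in enumerate(rates):
                delivered[i] += r * scale
            rates = [r * b for r, (a, b) in zip(rates, params)]   # multiplicative decrease
        else:
            for i, r in enumerate(rates):
                delivered[i] += r
            rates = [r + a for r, (a, b) in zip(rates, params)]   # additive increase
    return [d / rounds for d in delivered]

# A standard flow (1, 0.5) against a more aggressive one (5, 0.9):
print(goodputs([(1, 0.5), (5, 0.9)]))
# The aggressive flow captures most of the goodput, which is why selfish
# players keep raising alpha and beta until the game settles at (alphaE, betaE).
```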

 

The following results were observed through simulations [11]:

 

 

                Drop Tail              RED
             αE        βE          αE        βE
Tahoe        49        0.98        34        0.98
Reno          1        0.98         9        0.98
SACK         40        0.98        48        0.98

 

The preceding table shows that the Nash Equilibrium point is efficient only in the case of TCP-Reno, but it is also unfair, since with βE ≈ 0.98 the multiplicative decrease is so mild that the behavior effectively reduces to additive increase with an almost additive decrease. For the other cases the Nash Equilibrium is inefficient as well.

 

The simulation results above show that TCP-Reno together with FIFO drop-tail not only attains a Nash equilibrium but is also efficient. The most important result is that RED is tolerant of aggressive users, because the packet drop probability is uniform across all users; FIFO drop-tail, on the other hand, is effective in discouraging aggressive behavior because the number of packets dropped for a source is directly proportional to the bandwidth it consumes. Since FIFO drop-tail routers are not used exclusively these days, and because RED is widely deployed along with TCP-SACK, the authors concluded that a preferential drop mechanism is needed to manage congestion effectively.

 

CHOKe (CHOose and KEep) [13] is a simple mechanism with a preferential drop policy. CHOKe works in the following fashion: for every arriving packet it takes k samples from the queue, and the arriving packet is dropped if its source flow matches one of the sampled packets. This ensures that a user who is not aggressive is not punished.

 

 

Figure 1: CHOKe Algorithm (reproduced from [13])

 

The authors suggest CHOKe+, which works according to the following algorithm when a new packet P arrives from a flow (a code sketch follows the list).

 

-          Pick k packets at random from queue

-          Let m be #packets from the same flow as P

-          Let 0 <= γ2 < γ1 <= 1 be constants that indicate the ratio limits between which the packets are dropped or kept.

-          If m > γ1k, P and the m packets are dropped

-          Else if γ2k <= m < γ1k, drop P and the m packets only if RED were to drop P.

-          Else just drop P according to RED.
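
A direct transcription of these rules into code might look as follows (our sketch; the red_would_drop helper and the flow_id field are hypothetical stand-ins for the router’s RED logic and packet format):

```python
import random
from collections import namedtuple

Packet = namedtuple("Packet", "flow_id")          # hypothetical packet format

def choke_plus(queue, P, k, gamma1, gamma2, red_would_drop):
    """Return the list of packets to drop when packet P arrives (CHOKe+ rules above)."""
    sample = random.sample(queue, min(k, len(queue)))       # pick k packets at random
    matches = [q for q in sample if q.flow_id == P.flow_id]
    m = len(matches)
    if m > gamma1 * k:                                      # clearly aggressive flow
        return [P] + matches
    if gamma2 * k <= m < gamma1 * k:                        # borderline: defer to RED
        return [P] + matches if red_would_drop(P) else []
    return [P] if red_would_drop(P) else []                 # otherwise plain RED behaviour

# Tiny demo with a stub RED decision (always admit) and a queue dominated by flow 7.
queue = [Packet(7)] * 8 + [Packet(1), Packet(2)]
print(choke_plus(queue, Packet(7), k=5, gamma1=0.6, gamma2=0.2,
                 red_would_drop=lambda p: False))
```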

 

Performance results of CHOKe+ with TCP-SACK are very encouraging: an efficient Nash equilibrium is attained, and its most interesting property is that the equilibrium has βE < 1 and αE = 3, which is very reasonable and makes aggressive behavior increasingly unattractive at Nash Equilibrium. Moreover the loss rate is low and the goodput is high. Thus at Nash equilibrium CHOKe+ is fair and very desirable.

3.2      VLRED & EN-AQM

D. Dutta et al. [1] studied the active queue management (AQM) schemes employed in routers to keep input queues from overflowing. The schemes fall into two categories: stateful schemes and oblivious schemes. A stateful scheme considers the flow a packet belongs to before dropping it, while an oblivious scheme drops a packet without considering its flow. Fair Queuing, a stateful scheme, performs exceptionally well, but oblivious schemes like RED and FIFO drop-tail are easier to deploy.

 

The Markovian Internet game [1] comprises players who are end-point traffic sources. These users are selfish and are concerned only with their own good. Each player has a strategy for controlling its average traffic rate. The rules of the game are determined by the AQM schemes deployed at the routers. An oblivious router has a drop probability p resulting from an average aggregate load of λ and an average service time of unity. A symmetric Nash Equilibrium, if it exists, is one in which all users have the same goodput at equilibrium.

 

The Nash Equilibrium is exhibited under the following conditions.

-          No user can increase its goodput by either increasing or decreasing its throughput.

-          At Nash Equilibrium, all flows have the same utility, or goodput.

 

A Nash Equilibrium is called efficient if the goodput of a selfish agent is bounded from below when its throughput (offered load) is bounded from above.

 

Drop-tail and RED routers do not attain a Nash Equilibrium point. The reason RED does not attain a Nash equilibrium is that it punishes all flows with the same probability, making the drop mechanism gentle; hence no source has an incentive to stop behaving aggressively.

 

VLRED is a variant of RED that uses a virtual queue of infinite length to decide whether to drop a packet. This makes it more punishing for flows with high rates: its drop probability is directly proportional to the virtual queue length, which depends on the arrival rate of the flow.

 

It has been proved in [1] that VLRED attains a Nash Equilibrium, but the goodput asymptotically falls to 0 and hence utilization is low (shown in Figure 2). The reason for this behavior is the harsh drop probability, which needs to be made a bit gentler. This is done in the EN-AQM scheme presented below.

Figure 2: Total offered load and goodput for VLRED (reproduced from [1])

 

The EN-AQM scheme [1] ensures the existence of an efficient Nash equilibrium. Its drop probability is given in [1] by an equation in terms of the average aggregate load λ.

 

This enforces a bound on the throughput and goodput (as shown in Figure 3): the offered load increases but the goodput becomes fairly constant. This behavior arises because the drop probability in the equation of [1] depends both directly and inversely on the aggregate load.

 

Figure 3: Total offered load, goodput and drop probability for EN-AQM (reproduced from [1])

 

There is still a problem with the EN-AQM scheme: its equilibrium point is sensitive to the number of users. It is therefore difficult to deploy oblivious AQM schemes without any help from the protocol.

3.3      Diminishing Weight Scheduling

Garg, Kamra and Khurana argue that the voluntary congestion control mechanisms have worked so far, but that they may not work in the future because of the Internet’s growth, commercialization and diversity. They believe these factors indicate that we need a model of congestion control that accounts for selfish users. In [17] they introduce the idea of Diminishing Weight Schedulers (DWS), which handle congestion control in the presence of selfish users. The basic premise behind DWS is that cutting down the rates of users who try to send data at an unfair rate encourages well-behaved users. The punishment inflicted on a user depends on the steepness of the diminishing weight function: the steeper the function, the more punishment is inflicted for misbehaving.

 

There can be a number of diminishing weight functions, but the authors focus mainly on one particular class of DWS called Rate Inverse Scheduling (RIS), in which the diminishing weight function is the inverse of the input rate, so that the more a particular flow tries to grab extra bandwidth, the more its rate is reduced. It was shown that this mechanism encourages congestion-avoiding behavior and punishes behavior that leads to congestion.
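
A rough sketch of the RIS idea follows (our reading of [17], with made-up numbers): the scheduler weights each flow by the inverse of its measured input rate, so the service a flow receives shrinks as it pushes harder.

```python
# Illustrative sketch of Rate Inverse Scheduling (our reading of [17]).
# Each flow's scheduling weight is the inverse of its input rate, so a
# flow that raises its rate above the fair share sees its service cut.

def ris_service_shares(input_rates, capacity):
    weights = {f: 1.0 / r for f, r in input_rates.items() if r > 0}
    total = sum(weights.values())
    # Each flow is served in proportion to its weight, capped by what it sends.
    return {f: min(input_rates[f], capacity * w / total)
            for f, w in weights.items()}

# Hypothetical numbers: three flows on a 30 Mb/s link; flow "c" is greedy.
print(ris_service_shares({"a": 10, "b": 10, "c": 40}, capacity=30))
# Flow c's weight (1/40) is the smallest, so grabbing extra bandwidth
# only reduces the share the scheduler gives it (about 3.3 Mb/s here).
```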

 

DWS has several desirable properties, such as a unique max-min fair rate; at this fair rate the users attain a Nash Equilibrium that is also a Stackelberg equilibrium, which is highly desirable from the system’s point of view. More importantly, it was proved that when DWS is deployed even selfish users will try to estimate their max-min fair rate and send data at that rate; it is, however, not clear how the users would estimate their max-min fair rates.

4         Pricing:

A number of researchers have attempted to control congestion by adopting the idea of pricing from economics. Essentially this paradigm puts the responsibility for congestion control on the shoulders of the end users by charging them for their share in contributing to congestion; this is called congestion pricing.

4.1      The Model

The basic model considers a computer network as an open market, in which sellers offer their products to consumers. An open market has the property of being self-regulating: when an item’s supply is depleted, its price moves up and fewer consumers are willing to buy it, so an equilibrium is maintained. The purpose of this model is to treat the network as an economy in order to take advantage of these self-regulating properties. The mapping from a computer network to an open market is done in the following manner:

 

·         The main item of interest is the link bandwidth and hence it is the product that is sold in a network economy.

·         Routers are assumed to own the bandwidth and hence are considered as the sellers.

·         The end users or end stations need the link bandwidth for their use and are considered as the buyers.

·         The price of the bandwidth is set to reflect the current supply and demand situation.

4.2      Distributed Pricing Scheme

This scheme, introduced by Fulp and Reeves [26], considers routers as the owners of the resources; since there are multiple routers in a network, there are multiple economies in a single network. Each router maintains a separate microeconomy for each output port and sets the price for each output port based on the current supply and demand. This yields a decentralized economy in which a single failure does not affect the whole economy, which was a problem in earlier attempts to introduce pricing into computer networks in a centralized manner.

The price is calculated at each router by dividing time into intervals; the price for the current interval is based on the demand during the previous interval and the previous price. The new price is calculated with the following formula [26]:

p_i^(n+1) = p_i^n + c · (d_i^n − αS_i) / (αS_i)

 

This equation shows that the price for link i in interval n+1 equals its previous price plus a correction term scaled by the constant c. The correction depends on the total demand d_i^n at the end of interval n and the supply S_i, which is time-invariant. It can be seen from the equation that the price starts to increase once demand exceeds an α fraction of the total bandwidth; moreover, the constant c dictates how quickly the price rises or falls. This is important because a user consuming a large portion of the bandwidth might quit, lowering the price considerably; many users might then rush in to take advantage of the lower price, resulting in bandwidth scarcity. Hence we want the price to decrease gradually.
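
The price-update rule can be expressed directly in code (a sketch under our reading of [26]; the demand trace and constants below are hypothetical):

```python
# Sketch of the per-link price update of [26] (our transcription; the
# demand trace below is made up for illustration).

def next_price(price, demand, supply, alpha=0.8, c=0.05):
    """Price for the next interval, given demand observed in this one."""
    target = alpha * supply                    # demand level at which price starts rising
    return max(0.0, price + c * (demand - target) / target)

price, supply = 1.0, 100.0
for demand in [60, 70, 85, 95, 90, 75, 60]:    # hypothetical demand per interval
    price = next_price(price, demand, supply)
    print(round(price, 4))
# The price climbs while demand exceeds 80% of the supply and falls back
# gradually afterwards; the constant c controls how fast it moves.
```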

In this scheme [26] the authors assume that users run applications requiring a constant bandwidth bm; applications with varying bandwidth requirements are modeled by dividing time into intervals during which the demand can be considered constant. Users may not be able to purchase the required bandwidth because of limited wealth. The utility function of a user is captured through a two-dimensional graph called its QoS profile, in which the horizontal axis gives the percentage of the desired bandwidth bm that the user managed to buy and the vertical axis gives a QoS score from five to one, where five represents excellent QoS; the authors assume that each user should get a score of at least three, which they call acceptable QoS. The slope of the graph gives the bandwidth scalability of the application; hence a video application, which requires high bandwidth and does not perform well below its desired bandwidth, has a steep slope.

There is a network broker who controls the admission of a user to the network; it is logically located between the edge of the network and the user. The network broker ensures that the user has enough wealth to buy an acceptable QoS; this is necessary to ensure that the network is not overwhelmed by users, since the authors believe the greater social benefit lies in having a smaller number of users. The network broker gathers information about the desired bandwidth and budget from the users and gets the latest prices from all the switches along the path; based on this information it decides how much bandwidth to buy for the user. It is possible that the user cannot buy the minimum acceptable bandwidth, in which case the user is required either to terminate its session or to increase its budget.

Simulation results show that this scheme was able to provide 95% utilization of the network and better QoS than max-min allocation for a network with a large number of users. Moreover, it was also shown that a competitive market solution provides a Pareto optimal allocation of resources. The 95% utilization is achieved because at equilibrium supply equals demand, and if the situation changes the price is adjusted to regain the equilibrium.

4.3      Paris Metro Pricing

This scheme [25] takes its inspiration from the Paris Metro system, which had two service classes with exactly the same number and quality of seats, but one carried a higher price tag than the other. The advantage of this scheme was that when the upper class became crowded, some passengers decided it was not worth paying extra and moved to the lower class. Hence the system ensured an expected level of service for passengers traveling in the upper class without any outside intervention. Along the same lines, the Paris Metro Pricing (PMP) scheme advocates that a computer network be divided into multiple logical networks, each with a different price. It should be noted that PMP does not guarantee a particular QoS; it works on a best-effort basis, but unlike traditional networks the high-end PMP subnetworks would have fewer users and hence a higher expected QoS. The authors argue in [25] that PMP networks should have 2 to 4 subnets, because economic research on railways and airlines has shown that the economic benefit decreases with a greater number of service classes. PMP can be implemented by partitioning the bandwidth of the network between the subnets, but this has the disadvantage that we lose the benefit of statistical multiplexing in large networks. A better way is to utilize the three unused priority bits in the IP datagram header, so that packets with higher priority are always treated before lower-priority packets at the routers. The paper [25] does not mention any simulation to verify the usefulness of this scheme.

4.4      Leader Follower Pricing

Service providers play an important role in the Internet. The model presented in [21] is a hierarchical network with multiple links, a single service provider and a large number of users belonging to multiple classes. Each class of users has a different entry point and exit point in the network. The service provider charges the users a fixed price per unit of bandwidth. Each user chooses a level of usage, and hence of congestion, and pays the service provider accordingly. The service provider’s concern is the maximization of revenue. This situation is a leader-follower (Stackelberg) game with a single leader and a large number of followers. The pricing strategy is to charge the user according to the bandwidth consumed: the leader sets the prices of the various links, and the followers react to these prices by choosing links so as to maximize their own benefit. This game is proved to attain a unique Nash Equilibrium (called a Stackelberg Equilibrium, since it is a leader-follower model). The results indicate that the service provider has an incentive to increase the available capacity in response to an increase in users. Thus the leader-follower game prescribes an increase in network capacity, and hence the quality of service (QoS) also increases.

4.5      Progressive Second Price Auction Mechanism

The Progressive Second Price auction mechanism suggested in [22] tries to address the “tragedy of the commons” vulnerability of the network. The network is vulnerable to fast depletion of its resources because of the Internet practice of pricing by physical capacity instead of charging according to actual resource utilization. Progressive Second Price (PSP) auctions are an efficient mechanism for allocating network resources using the second-best price. In the network setting, all players bid for the network resource: each player asks for a quantity of the resource at some price. The auctioneer’s job is to allocate the resource to the highest bidder according to its demand, but at the price of the second-highest bidder; this is called a second-price auction. The complexity of implementing this is O(N^2), where N is the number of users, which makes it a very attractive mechanism. It is also proved in [22] that this scheme attains a Nash Equilibrium under the assumption that the users bid according to their valuation of the resource they are demanding.
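
A much-simplified caricature of the allocation rule described above is sketched below (our own illustration with hypothetical bids; the actual PSP mechanism of [22] computes exclusion-compensation prices rather than this plain ordering).

```python
# Much-simplified caricature of the auction described above (the real
# PSP mechanism of [22] is more involved; the bids here are hypothetical).
# Each bid is (bidder, quantity, unit_price).

def simple_second_price_allocation(bids, capacity):
    ordered = sorted(bids, key=lambda b: b[2], reverse=True)   # highest price first
    allocation, left = {}, capacity
    for rank, (bidder, qty, price) in enumerate(ordered):
        if left <= 0:
            break
        granted = min(qty, left)
        left -= granted
        # Pay the unit price of the next-highest bid (second-price flavour).
        pay = ordered[rank + 1][2] if rank + 1 < len(ordered) else 0.0
        allocation[bidder] = (granted, pay)
    return allocation

bids = [("u1", 40, 5.0), ("u2", 30, 7.0), ("u3", 50, 6.0)]     # hypothetical bids
print(simple_second_price_allocation(bids, capacity=80))
# u2 and u3 are served first; each winner pays the unit price asked by the
# bid just below it, which blunts the incentive to overstate valuations.
```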

 

Decentralized PSP auctioning allocates network resources according to the progressive second price auction, with the difference that the allocations depend only on local information, namely the bids for a link and the quantity of the resource available at that link. The only problem with this mechanism is that it is the user’s responsibility to coordinate bidding for the links that are of interest to it. This makes it somewhat more computationally intricate, but this scheme is also proved to attain a Nash Equilibrium [22].

5         Game Theory in Routing:

Routing is very closely connected to congestion control: if an end host has multiple paths to its destination, each carrying a different level of traffic, then a source that selects a path which is already overloaded will not only delay the delivery of its packets but may also cause congestion. The models discussed in this section assume that the source decides on the route its packets take; although a source generally does not choose the route for its traffic, this is achievable through the source routing option of IPv4 and IPv6.

 

5.1      Selfish Routing

 

Theoretically there are two types of routing: selfish routing, in which each user decides its route by considering only its own benefit, and coordinated routing, in which the routes for all users are decided together so that maximum social benefit is attained. Tim Roughgarden [20] assumes that the objective of each user is to select a route so that its packets are routed as quickly as possible, i.e. with minimum latency.

 

It is important to understand that in selfish routing each user selects its path independently, and the path a particular user chooses affects the latency of all the other users; a user may therefore decide to move to a different path because the latency on its current path has increased. Users are thus involved in a game whose Nash Equilibrium is reached when all used paths from a given source to a given destination have the same latency, because if a flow does not have this property some user would be inclined to change its strategy. Hence Nash equilibrium is the natural operating point of a network where traffic is routed selfishly.

 

Roughgarden explains that selfish routing is suboptimal compared to coordinated routing; the objective of his work [20] is to quantify the performance degradation when selfish routing is used. The criterion he uses is the ratio of the latency experienced by users when selfish routing is at Nash equilibrium to the latency under optimal routing; Roughgarden calls this ratio the price of anarchy. This ratio was derived for two types of latency functions, a latency function being the delay experienced on a link as a function of the traffic on that link. The first type is a linear function of the form ax + b, where x is the traffic on the link and a and b are nonnegative constants. Roughgarden [20] found that the worst-case price of anarchy for this type of latency function is 4/3. The second type corresponds to the waiting time in an M/M/1 queue and has the form (u − x)^(-1), where u is the link capacity. The price of anarchy is unbounded in this case unless the maximum traffic rate Rmax through the links is limited to less than the minimum allowable edge capacity umin, in which case the price of anarchy comes out to be [20]

(1/2) (1 + √(umin / (umin − Rmax))).

However, because of the importance of M/M/1-style latency functions in computer networks, Roughgarden also gives them a meaningful bicriteria bound: he shows [20] that the total latency of traffic at a Nash equilibrium of selfish routing with M/M/1 latency functions is at most the latency experienced by optimal routing when it routes twice as much traffic between the same source-destination pairs.
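
The 4/3 figure for linear latencies is easiest to see on Pigou’s two-link example, a standard textbook instance (worked out here by us, not reproduced from the survey): one unit of traffic chooses between a fixed-delay link and a load-dependent link.

```latex
% Pigou's example: one unit of traffic, two parallel links with
% latency functions l_1(x) = 1 (fixed) and l_2(x) = x (load-dependent).

% Selfish routing: every packet prefers link 2 (its latency never exceeds 1),
% so all traffic uses it and the total latency is
\[ C_{\text{Nash}} = 1 \cdot \ell_2(1) = 1. \]

% Optimal routing splits the traffic, sending a fraction x over link 2:
\[ C(x) = (1-x)\cdot 1 + x \cdot x, \qquad C'(x) = -1 + 2x = 0 \;\Rightarrow\; x = \tfrac12, \]
\[ C_{\text{opt}} = \tfrac12 + \tfrac14 = \tfrac34. \]

% Hence the price of anarchy on this instance is
\[ \frac{C_{\text{Nash}}}{C_{\text{opt}}} = \frac{1}{3/4} = \frac{4}{3}, \]
% matching the worst-case bound for linear latency functions [20].
```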

5.2      Achieving Network Optima

As discussed, the goal of a successful mechanism is to engineer the game so that the Nash equilibrium achieved is always optimal; different strategies are used for this purpose, e.g. pricing [25, 26] or the design of service disciplines [11, 17]. The common point of these strategies is that they require a priori design decisions. Lazar et al. discuss in [19] a scheme in which the optimal equilibrium point is achieved during the operation of the network. Their approach [19] aims at optimizing the routing decisions of the network so that the delay experienced by a user is minimized.

 

Lazar's model [19] assumes two kinds of entities. The first are non-cooperative users who share a number of parallel links and aim to route their own traffic efficiently to a common destination. The other entity is called the Manager, which can monitor the non-cooperative behavior of the users and whose goal is to optimize the overall network performance. The Manager is able to predict the responses of the users and selects a strategy such that the overall network performance is optimized. The Manager therefore becomes a leader and the other users become its followers: the leader fixes a routing strategy that is optimal for the network and the followers converge to it. As discussed in Section 2, an equilibrium attained in this fashion is called a Stackelberg equilibrium; it is a special case of Nash Equilibrium.

The model considered in [19] is a set of I users that share a set of L parallel links between a common source and destination. The authors deliberately leave the meaning of the term user ambiguous, but indicate that it can be an individual connection that is controlled independently. Each user aims to minimize its cost function Ji, which is given by the following equation [19]:

Ji(f) = ∑(l ∈ L) fil Tl(fl)

Here i denotes the user, l a link, fil the flow of user i on link l, fl the total flow on link l, and Tl(fl) the average delay on link l as a function of that total flow. Each user divides its flow among the L links so as to minimize its cost function; the manner in which the flow is divided among the links is called its strategy profile.
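
To make the cost function concrete, the following small sketch (our own; the M/M/1-style delay assumed for Tl and all the numbers are illustrative choices, not prescribed by [19]) evaluates Ji for a given split of user i’s flow across the parallel links.

```python
# Sketch: evaluate user i's routing cost J_i(f) = sum_l f_il * T_l(f_l)
# for a given split of flows.  The M/M/1-style delay T_l(f) = 1/(c_l - f)
# is an assumption for illustration; [19] only requires T_l to be the
# average link delay as a function of the total flow on the link.

def delay(capacity, total_flow):
    assert total_flow < capacity, "link would be overloaded"
    return 1.0 / (capacity - total_flow)

def cost(user_flows, all_flows, capacities):
    """user_flows[l]: this user's flow on link l; all_flows[l]: total flow on l."""
    return sum(user_flows[l] * delay(capacities[l], all_flows[l])
               for l in range(len(capacities)))

capacities = [10.0, 6.0]          # two parallel links (hypothetical)
user_i     = [3.0, 1.0]           # user i's split across the links
totals     = [7.0, 4.0]           # total flow on each link, all users included
print(cost(user_i, totals, capacities))   # 3/(10-7) + 1/(6-4) = 1.5
```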

The routing strategy of the Manager is fixed as long as the routing strategies of the non-cooperative users do not change; the non-cooperative users, on the other hand, adjust their routing strategies according to the strategies of the other users and the Manager in order to minimize their own cost. The authors in [19] prove that the equilibrium attained in this game is always unique. It must be noted that Shenker [14] argues that a Stackelberg equilibrium is not desirable, since the super user can induce an equilibrium point that it desires; but here the Manager plays a social role, so this is acceptable.

The main results of this paper [19] are the following:

·                     If there is a single user the Manager can always enforce network optima, i.e. routing performance equals that of centrally controlled routing.

·                     If there are N users the Manager can induce network optima if its flow into the network is greater than a certain threshold r0.

·                     Network optimum can only be enforced if the total demand of the users is less than the network capacity.

·                     The Manager can easily enforce a network optimum in heavily loaded networks because the threshold r0 is small in this case.

·                     It is harder to enforce a network optimum as the number of users increase because the threshold r0 increases

6         Conclusions

Game theory has been used by economists for almost half a century, and researchers have been applying the game-theoretic paradigm to computer science problems for about 15 years. At the core of all this research lies the pursuit of Nash Equilibrium. Mechanism design revolves around designing mechanisms which ensure that a desirable Nash Equilibrium is attained. Nisan and Ronen’s work [15] led to applications of algorithmic mechanism design to various Internet problems, further refined through Distributed Algorithmic Mechanism Design [10]. Although this approach has so far been limited to a few problems, its application to congestion control is promising.

 

The analysis of TCP as a game [11] is instrumental in establishing the existence of Nash Equilibria under different network configurations, and CHOKe+ is proposed as a scheme that can attain an efficient Nash Equilibrium. The Markovian Internet game [1] is a more rigorous analysis and yields two schemes, VLRED and EN-AQM, that attain Nash Equilibria. The diminishing weight scheduler [17] gives real penalty functions which result in a fair distribution of network resources.

 

This paper also reviews a number of pricing schemes that were developed to control congestion; congestion control through pricing exploits the self-regulating property of open markets. The distributed pricing scheme considers each router as an independent market. In Paris Metro Pricing, multiple subnetworks are created that offer bandwidth at different rates, and the expected QoS on the expensive subnetwork is higher. In leader-follower pricing the ISP acts as a super user and fixes the price according to the congestion cost on the network; the game attains a Stackelberg equilibrium that is efficient for the current network situation. The Progressive Second Price scheme conducts an auction for the allocation of bandwidth among the users; the bandwidth is allocated to the highest bidder but at the price of the second-highest bid.

Routing strategies were also reviewed from the game theoretic point of view. First we looked at the price of selfish routing and found that, for linear latency functions, the ratio of the cost of routing traffic selfishly to that of centralized routing is at most 4/3. Then we showed that a network can attain optimal routing in the presence of a leader that converges the system to an efficient equilibrium.

The dominant solution concept used in the game theoretic analysis is the Nash Equilibrium. Classical game theory requires complete information about the number of users and their utility functions for a Nash Equilibrium; this is not true in computer networks, where a user has no information about the number of other users or their utility functions. We must keep in mind that the work we surveyed was done primarily to understand the behavior of computer networks from a game theoretic perspective; hence the use of Nash Equilibrium, although it does not depict real-world computer networks exactly, is very useful for understanding networks from a theoretical point of view.

In the end we would like to say that game theory is a useful tool for analyzing computer networks, but it has not yet been properly explored. We feel that there is a lot of potential in applying game theory to networks and hope that this area is explored further by network researchers.

 

 

 


7         References:

[1] D. Dutta, A. Goel and J. Heidemann, Oblivious AQM and Nash Equilibria, ACM SIGCOMM Computer Communications Review, Volume 32, No. 3, July 2002.

[2] J Nagle, On Packet Switches with Infinite Storage, RFC 970, December 1985.

[3] J Nagle, Congestion Control in IP/TCP Networks, Computer Communications Review Volume 14, No 4, 1984.

[4] V. Jacobson, Congestion Avoidance and Control, ACM SIGCOMM, 1988.

[5] S Floyd and V. Jacobson, Random Early Detection Gateways for congestion avoidance, IEEE /ACM Transactions on Networking, Volume 1, No 4, 1993.

[6] K Ramakrishnan and S. Floyd, A Proposal to add Explicit Congestion Notification (ECN) to IP, RFC 2481 Experimental, January 1999.

[7] B Braden, D Clark, J Crowcroft, B Davie, S Deering, D Estrin, S Floyd, V Jacobson, G Minshall, C Partridge, L Peterson, K Ramakrishnan, S Shenker, J Worclawski and L Zhang, Recommendations on Queue Management and Congestion Avoidance in the Internet, RFC 2309 April 1998.

[8] J Nash, Equilibrium Points in N-Person Games, Proceedings of National Academy of Sciences, 1950.

[9] J. Feigenbaum, Games, Complexity Classes and Approximation Algorithms, Proceedings of the International Congress of Mathematicians, Volume III, Documenta Mathematica, Journal der Deutschen Mathematiker-Vereinigung, 1998.

[10] J Feigenbaum and S Shenker, Distributed Algorithmic Mechanism Design: Recent Results and Future Directions, ACM SIGCOMM 2002.

[11] Aditya Akella, Srinivasan Seshan, Richard Karp, S. Shenker and C. Papadimitriou, Selfish Behavior and Stability of the Internet: A Game-Theoretic Analysis of TCP, ACM SIGCOMM 2002.

[12] J Nash, Non-Cooperative Games, October 11, 1950, Princeton University.

[13] R. Pan, B. Prabhakar and K. Psounis, CHOKe, a stateless active queue management scheme for approximating fair bandwidth allocation, Proceedings of the Conference on Computer Communications (IEEE INFOCOM), March 2000.

[14] S. Shenker, "Making Greed Work in Networks: A Game-Theoretic Analysis of Switch Service Disciplines," in SIGCOMM Symposium on Communications Architectures and Protocols, (London, UK), pp. 47-57, Sept. 1994.

[15] Noam Nisan, Algorithms for Selfish Agents In Proc. 16th Annual Symposium on Theoretical Aspects of Computer Science (STACS'99), pp.1--15, 1999.

[16] Parkes, D.C. Mechanism Design. Chapter 2 in PhD dissertation, ``Iterative Combinatorial Auctions: Achieving Economic and Computational Efficiency'', May 2001 Department of Computer and Information Science, University of Pennsylvania

[17] R. Garg, A Game Theoretic Approach Towards Congestion Control in Communication Networks, ACM SIGCOMM, July 2002.

[18] J Feigenbaum, D Koller and P Shor, A Game-Theoretic classification of interactive complexity classes, 1995,  Proceedings of the 10th Conference on Structure of Complexity Theory, IEEE Computer Society Press, Los Alamitos pp227-237.

[19] Y. A. Korilis, A. A. Lazar and A. Orda, Achieving Network Optima Using Stackelberg Routing Strategies, IEEE/ACM Transactions on Networking, Vol. 5, No. 1, February 1997.

[20] T. Roughgarden and E. Tardos, How Bad is Selfish Routing?, Journal of the ACM, 49(2): 236-259, 2002.

[21] T. Basar and R. Srikant, A Stackelberg Network Game with Large Number of Followers, Journal of Optimization Theory and Applications, 2002.

[22] Aurel A. Lazar and Nemo Semret, The Progressive Second Price Auction Mechanism for Network Resource Sharing, 8th International Symposium on Dynamic Games and Applications, Maastricht, The Netherlands, July 1998.

[23] Vincent Conitzer and Tuomas Sandholm, Complexity Results about Nash Equilibria, Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003.

[24] Vincent Conitzer and Tuomas Sandholm, Complexity of Mechanism Design,  Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pp. 103-110, Edmonton, Canada, 2002

[24] Eric J. Friedman and Scott Shenker, Learning and Implementation on the Internet, 1998.

[25] Andrew Odlyzko, A Modest Proposal for Preventing Internet Congestion, DIMACS Technical Report 97-68, October 1997.

[26] Errin W. Fulp and Douglas S. Reeves, Network Flow Control Based on Dynamic Competitive Markets, Proceedings of the International Conference on Network Protocols (ICNP'98), Austin, Texas, Oct. 13-16, 1998.

[27] C. H. Papadimitriou, Algorithms, Games, and the Internet, STOC 2001 (extended abstract in ICALP 2001).