In this paper, we investigate game-theoretic coverage control, whose objective is to lead agents
to optimal configurations over a mission space. In particular, the objective of this paper is to
achieve the control objective (i) in the absence of perfect prior knowledge of the importance of each
point and (ii) in the presence of action constraints. For this purpose, we first formulate coverage
problems with two different global objective functions as so-called potential games. Then, we present
a payoff-based learning algorithm that determines actions based only on past actual outcomes. A
distinguishing feature of the present algorithm is that it allows an agent to take an irrational action. We also clarify
the relation between a design parameter of the algorithm and the probability with which agents take the
optimal actions, and prove that this probability can be made arbitrarily high. Then, we demonstrate
the effectiveness of the present algorithm through experiments on a testbed.
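To illustrate the kind of payoff-based update rule described above, the sketch below shows a generic log-linear (Boltzmann) choice between an agent's previous action and a trial action, using only the payoffs the agent actually received. This is a minimal illustration under assumed names (`trial_probability`, `payoff_based_step`, temperature `tau`), not the paper's exact algorithm: the temperature plays the role of the design parameter, and as `tau` shrinks the probability of keeping the better action approaches one while a small chance of an "irrational" choice remains.

```python
import math
import random

def trial_probability(prev_payoff, trial_payoff, tau):
    """Boltzmann probability of adopting the trial action over the
    previous one; tau > 0 is the exploration temperature (the design
    parameter in this sketch)."""
    # Subtract the max payoff before exponentiating for numerical stability.
    m = max(prev_payoff, trial_payoff)
    w_prev = math.exp((prev_payoff - m) / tau)
    w_trial = math.exp((trial_payoff - m) / tau)
    return w_trial / (w_prev + w_trial)

def payoff_based_step(prev_action, prev_payoff, trial_action, trial_payoff, tau):
    """One payoff-based update: keep the previous action or switch to the
    trial action with log-linear probabilities. Uses only realized
    payoffs, not knowledge of other agents' actions or the global map."""
    if random.random() < trial_probability(prev_payoff, trial_payoff, tau):
        return trial_action, trial_payoff
    return prev_action, prev_payoff
```

With equal payoffs the agent is indifferent (probability 1/2 of switching), while at low temperature a clearly better trial action is adopted with probability close to one, matching the claim that the optimal actions can be taken with arbitrarily high probability.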