Reference:
L. Busoniu, B. De Schutter, and R. Babuska, "Multiagent reinforcement
learning with adaptive state focus," Proceedings of the 17th
Belgium-Netherlands Conference on Artificial Intelligence (BNAIC 2005)
(K. Verbeeck, K. Tuyls, A. Nowé, B. Manderick, and B. Kuijpers, eds.),
Brussels, Belgium, pp. 35-42, Oct. 2005.
Abstract:
In realistic multiagent systems, learning on the basis of complete
state information is not feasible. We introduce adaptive state
focus Q-learning, a class of methods derived from Q-learning that
start learning with only the state information that is strictly
necessary for a single agent to perform the task, and that monitor the
convergence of learning. If lack of convergence is detected, the
learner dynamically expands its state space to incorporate more state
information (e.g., the states of other agents). Learning is faster and
requires fewer resources than if the complete state were considered from
the start, while still handling situations where agents interfere with
one another in pursuing their goals. We illustrate our approach by
instantiating a simple version of such a method and showing that it
outperforms learning with full state information, without being hindered
by the deficiencies of learning on the basis of a single agent's state.
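
The abstract describes the algorithm only at a high level, so the
following is a minimal sketch of the idea in Python for a tabular,
two-agent setting. The class name, the TD-error-based convergence
monitor, and all parameters (window, threshold) are illustrative
assumptions, not the authors' exact formulation.

import numpy as np
from collections import deque

class AdaptiveStateFocusQLearner:
    """Illustrative sketch of adaptive state focus Q-learning.

    Learning starts with only the agent's own (local) state. If the
    convergence monitor suggests learning has stalled, the Q-table is
    expanded to the joint state that also includes the other agent's
    state. The monitor used here (mean TD-error magnitude over a
    sliding window) is an assumption, not the paper's criterion.
    """

    def __init__(self, n_local_states, n_other_states, n_actions,
                 alpha=0.1, gamma=0.95, window=500, threshold=0.01):
        self.n_other = n_other_states
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.threshold = threshold
        self.expanded = False                       # local state only, at first
        self.q = np.zeros((n_local_states, n_actions))
        self.deltas = deque(maxlen=window)          # recent TD-error magnitudes

    def _index(self, local_state, other_state):
        # Map (local, other) to a table row; the other agent's state is
        # ignored until the state space has been expanded.
        if self.expanded:
            return local_state * self.n_other + other_state
        return local_state

    def act(self, local_state, other_state, epsilon=0.1, rng=np.random):
        # Epsilon-greedy action selection over the current state view.
        s = self._index(local_state, other_state)
        if rng.random() < epsilon:
            return rng.randint(self.n_actions)
        return int(np.argmax(self.q[s]))

    def update(self, local_state, other_state, action, reward,
               next_local, next_other):
        # Standard Q-learning update on the current state representation.
        s = self._index(local_state, other_state)
        s2 = self._index(next_local, next_other)
        td = reward + self.gamma * self.q[s2].max() - self.q[s, action]
        self.q[s, action] += self.alpha * td
        self.deltas.append(abs(td))
        # Convergence monitor (assumed form): if TD errors remain large
        # over a full window, treat the local state as insufficient and
        # expand to the joint state space.
        if (not self.expanded and len(self.deltas) == self.deltas.maxlen
                and np.mean(self.deltas) > self.threshold):
            self._expand()

    def _expand(self):
        # Replicate the learned local-state values across the other
        # agent's states so learning continues from current estimates.
        self.q = np.repeat(self.q, self.n_other, axis=0)
        self.expanded = True
        self.deltas.clear()

The point of this structure is the one the abstract makes: the agent
pays for the n_local x n_other joint table only if interference between
agents actually shows up as persistent learning error, rather than from
the start.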