Self-learning power control in wireless sensor networks

Michele Chincoli, Antonio Liotta

Research output: Contribution to journal › Article › Academic › peer-review

52 Citations (Scopus)
132 Downloads (Pure)


Current trends in interconnecting myriad smart objects to monetize Internet of Things applications have led to high-density communications in wireless sensor networks. This aggravates the already over-congested unlicensed radio bands, calling for new mechanisms to improve spectrum management and energy efficiency, such as transmission power control. Existing protocols are based on simplistic heuristics that often approach interference problems (i.e., packet loss, delay and energy waste) by increasing power, leading to detrimental results. The aim of this work is to investigate how machine learning may be used to bring wireless nodes to the lowest possible transmission power level and, in turn, to respect the quality requirements of the overall network. Lowering transmission power has benefits in terms of both energy consumption and interference. We propose a protocol of transmission power control through a reinforcement learning process that we have set in a multi-agent system. The agents are independent learners using the same exploration strategy and reward structure, leading to an overall cooperative network. The simulation results show that the system converges to an equilibrium where each node transmits at the minimum power while respecting high packet reception ratio constraints. Consequently, the system benefits from low energy consumption and packet delay.
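The abstract's core idea — each node as an independent Q-learner that picks a transmission power level and is rewarded for keeping the packet reception ratio (PRR) high at the lowest possible power — can be sketched as below. This is a minimal illustration, not the authors' protocol: the power-level set, PRR quantization, reward shape, and all identifiers (`PowerAgent`, `PRR_BINS`, etc.) are assumptions made for the example.

```python
import random

POWER_LEVELS = [-25, -15, -10, -5, 0]  # candidate TX powers in dBm (hypothetical set)
PRR_BINS = 5                           # quantized packet-reception-ratio states
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate

class PowerAgent:
    """One independent Q-learner per node.

    State  = quantized PRR observed over a recent window.
    Action = index into POWER_LEVELS.
    All agents share the same exploration strategy and reward
    structure, which is what makes the multi-agent system cooperative.
    """

    def __init__(self):
        # Q-table initialized to zero: rows are PRR states, columns are actions.
        self.q = [[0.0] * len(POWER_LEVELS) for _ in range(PRR_BINS)]

    def state(self, prr):
        """Map a PRR in [0, 1] to a discrete state index."""
        return min(int(prr * PRR_BINS), PRR_BINS - 1)

    def choose(self, s):
        """Epsilon-greedy action selection over the Q-table row."""
        if random.random() < EPS:
            return random.randrange(len(POWER_LEVELS))
        row = self.q[s]
        return row.index(max(row))

    def reward(self, prr, action):
        # Reward high PRR, with a penalty growing with the power rank,
        # nudging the agent toward the lowest power that still meets
        # the reception-ratio constraint.
        return prr - 0.1 * action

    def update(self, s, a, r, s_next):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[s_next])
        self.q[s][a] += ALPHA * (r + GAMMA * best_next - self.q[s][a])
```

In a simulation loop, each node would observe its PRR after a batch of transmissions, compute the reward, and update its own table; no inter-agent communication is needed beyond the radio environment they share.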

Original language: English
Article number: 375
Number of pages: 29
Issue number: 2
Publication status: Published - 1 Feb 2018


  • Energy efficiency
  • Game theory
  • Multi-agent
  • Q-learning
  • Quality of service
  • Reinforcement learning
  • Transmission power control
  • Wireless sensor network

