IEEECIMDRL&Games2019: IEEE CIM Special Issue on Deep Reinforcement Learning & Games |
Website | https://cis.ieee.org/ieee-computational-intelligence-magazine/791-ieee-computational-intelligence-magazine-information-for-special-issues.html |
Submission link | https://easychair.org/conferences/?conf=ieeecimdrlgames2019 |
Submission deadline | October 1, 2018 |
IEEE Computational Intelligence Magazine
Special Issue on
Deep Reinforcement Learning and Games
Aims and Scope
Recently, there has been tremendous progress in artificial intelligence (AI), computational intelligence (CI), and games. In 2015, Google DeepMind published the paper “Human-level control through deep reinforcement learning” in Nature, showing the power of AI&CI in learning to play Atari video games directly from screen capture. In 2016, it followed with the Nature cover paper “Mastering the game of Go with deep neural networks and tree search,” which introduced the computer Go program AlphaGo. In March 2016, AlphaGo beat the world’s top Go player, Lee Sedol, 4:1. In early 2017, Master, a variant of AlphaGo, won 60 matches against top Go players. In late 2017, AlphaGo Zero, which learned only from self-play, beat the original AlphaGo without a single loss (Nature, 2017). This marks a new milestone in AI&CI history, at the core of which is the algorithm of deep reinforcement learning (DRL). Moreover, the achievements of DRL in games are manifold. In 2017, AIs beat expert players in Texas Hold’em poker (Science, 2017). OpenAI developed an AI that outperformed a champion player in 1v1 Dota 2. Facebook released a huge database of StarCraft I replays. Blizzard and DeepMind turned StarCraft II into an AI research environment with a more open interface. In all these games, DRL plays an important role.
The theoretical analysis of DRL, e.g., its convergence, stability, and optimality, is still in its early days. Learning efficiency needs to be improved by proposing new algorithms or by combining DRL with other methods. DRL algorithms also need to be demonstrated in more diverse practical settings.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Survey on DRL and games;
- New AI&CI algorithms in games;
- Learning forward models from experience;
- New algorithms of DL, RL and DRL;
- Theoretical foundation of DL, RL and DRL;
- DRL combined with search algorithms or other learning methods;
- Challenges of AI&CI in games, such as limitations in strategy learning;
- Applications of DRL or AI&CI in games to realistic and complex systems.
Guest Editors
D. Zhao, Institute of Automation, Chinese Academy of Sciences, China, Dongbin.zhao@ia.ac.cn
S. Lucas, Queen Mary University of London, UK, simon.lucas@qmul.ac.uk
J. Togelius, New York University, USA, julian.togelius@nyu.edu
Submission Instructions
- The IEEE CIM requires all prospective authors to submit their manuscripts in electronic format, as a PDF file. The maximum length for papers is typically 20 double-spaced typed pages in 12-point font, including figures and references. Submitted manuscripts must be written in English in single-column format. Authors should specify up to 5 keywords on the first page of the submitted manuscript. Additional submission guidelines and information for authors are provided on the IEEE CIM website. Submissions will be made via https://easychair.org/conferences/?conf=ieeecimdrlgames2019.
- Also send an email to guest editor D. Zhao (dongbin.zhao@ia.ac.cn) with the subject “IEEE CIM special issue submission” to notify the editors of your submission.
- Early submissions are welcome. We will start the review process as soon as we receive your contribution.