

Call for papers

Journal of Control Theory and Applications

Special Issue on

Approximate dynamic programming and reinforcement learning


Approximate dynamic programming (ADP) is a general and effective approach to solving optimal control and estimation problems by adapting to uncertain and nonconvex environments over time.  ADP optimizes the expected value of the control objectives accrued over a future time interval with respect to an adaptive control law, conditioned on prior knowledge of the system, its state, and the probability distributions of the uncertainties.  A numerical search over the present value of the control minimizes a stochastic Hamilton-Jacobi-Bellman (HJB) equation, providing a basis for real-time, approximate optimal control.
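In discrete time, the optimality principle behind this numerical search can be sketched as the Bellman equation below (generic notation, not taken from this call; here ℓ is a stage cost, γ a discount factor, and the expectation is over the process noise w_k):

```latex
V^*(x_k) \;=\; \min_{u_k}\; \mathbb{E}\!\left[\, \ell(x_k, u_k) \;+\; \gamma\, V^*(x_{k+1}) \,\right],
\qquad x_{k+1} = f(x_k, u_k, w_k).
```

ADP methods replace the optimal value function V* (and often the minimizing policy) with a parametric approximator that is updated online.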


A key feature of ADP is that it can approximate the optimal control law (or policy) online, with only partial knowledge of the value function or of the system dynamics.  To reduce computational complexity, the control law and value function have been represented by nonlinear function approximators, such as adaptive neural networks, giving rise to the field known as neurodynamic programming (NDP).  Much work in this area has gone toward adaptive control systems based on value iteration, policy iteration, or both (i.e., adaptive critics).  The ability to improve performance over time, subject to new or unexplored objectives or dynamics, has made ADP an attractive approach in a number of application domains and communities, including optimal control and estimation, operations research, and computer science.  In particular, with the advent of affordable automation and information technologies, recent years have seen a flurry of research activity on, and applications of, ADP.  Additionally, recent results in the neuroscience literature suggest that the sensorimotor system functions as a feedback controller that optimizes the neuronal representation of behavioral goals through a reinforcement signal resembling the temporal-difference error in value-iteration algorithms.
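The value-iteration idea underlying many of the methods above can be illustrated with a minimal tabular sketch.  The two-state MDP below is hypothetical and all numbers are illustrative, not from this call; the backup it iterates is the standard Bellman optimality update.

```python
# Minimal tabular value iteration on a hypothetical 2-state, 2-action MDP.
# P[s][a] lists (probability, next_state, reward) triples; all numbers
# here are illustrative.

GAMMA = 0.9  # discount factor

P = {
    0: {0: [(1.0, 0, 0.0)],                   # stay in state 0, no reward
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},   # try to move to state 1
    1: {0: [(1.0, 1, 2.0)],                   # stay in rewarding state 1
        1: [(1.0, 0, 0.0)]},                  # move back to state 0
}

def value_iteration(P, gamma=GAMMA, tol=1e-8):
    """Apply the Bellman optimality backup until the largest update < tol."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                 for outcomes in P[s].values()]
            v_new = max(q)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V

def greedy_policy(V, P, gamma=GAMMA):
    """Extract the greedy (approximately optimal) policy from V."""
    return {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                           for p, s2, r in P[s][a]))
            for s in P}

V = value_iteration(P)
pi = greedy_policy(V, P)
```

ADP and NDP methods depart from this exact tabular sweep by replacing the table V with a trained function approximator and by performing the backups incrementally, from data, while the system operates.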


This special issue aims to showcase recent findings from the automatic control and computational intelligence communities, highlighting emerging trends and frontiers in ADP methods and applications, such as continuous-time adaptive critics, dual control, nonlinear control, optimal decision making in advertising, distributed energy and power allocation, and neuroengineering.  Submissions are expected in, but not limited to, the following areas:


               Adaptive dynamic programming

               Reinforcement learning for system control

               Approximately optimal control

               Value function approximation for control system design

               Q-learning (action-dependent learning)

               Neural network applications in dynamic programming for feedback systems

               Fuzzy logic systems and fuzzy-neural systems for dynamic programming

               Direct policy search

               Actor-critic methods

               Learning rules and architectures for ADP

               Partially observable Markov decision processes

               Approximate solutions to nonlinear controller design equations

               Approximation-based optimal control systems

               Applications of ADP


All submitted papers will be subject to peer review in accordance with the standard review procedures of the Journal of Control Theory and Applications. Prospective authors are invited to submit manuscripts no later than May 1, 2010, to one of the guest editors below:


Silvia Ferrari

Laboratory for Intelligent Systems and Control (LISC)

Department of Mechanical Engineering & Materials Science

Box 90300

Duke University

Durham, NC 27708-0005

e-mail: sferrari@duke.edu

Tel: +1 919-660-5484; Cell: +1 919-597-9586; Fax: +1 919-660-8963


Frank L. Lewis

Automation and Robotics Research Institute

The University of Texas at Arlington

7300 Jack Newell Blvd. S

Ft. Worth, Texas, 76118-7115, USA

e-mail: lewis@uta.edu


Jagannathan Sarangapani

Department of Electrical & Computer Engineering

University of Missouri-Rolla

Rolla, MO 65401, USA

e-mail: sarangap@mst.edu


Submissions must be made electronically, in PDF format. The special issue is tentatively scheduled for May 2011.



Sponsored by: South China University of Technology & Academy of Mathematics and Systems Science, CAS      Address: Wushan 381, Guangzhou, P.R. China

Tel: 020-87111464   Fax: 020-87111464   Postcode: 510640   Email: aukzllyy@scut.edu.cn
