BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
PRODID:iCalendar-Ruby
BEGIN:VEVENT
CATEGORIES:
DESCRIPTION:Behzad Farzanegan\, a doctoral candidate in electrical engineer
 ing\, will defend their dissertation titled “Lifelong Safe Optimal Adaptive
  Tracking Control of a Class of Nonlinear Discrete-time Systems.” Their adv
 isor\, Dr. Jagannathan Sarangapani\, is a Curators' Distinguished Professor
  in the electrical and computer engineering department. The dissertation
  abstract is provided below.\n\nState constraints and uncertainties are inheren
 t in many autonomous and robotic systems\, posing significant challenges fo
 r ensuring safety\, adaptability\, and optimal control performance. The opt
 imal adaptive tracking control of nonlinear discrete-time (DT) systems\, pa
 rticularly in safety-critical applications\, necessitates reinforcement lea
 rning-based solutions that can continuously learn and adapt while maintaini
 ng safety. Therefore\, in this research\, a suite of novel lifelong safe op
 timal adaptive tracking control (LSOATC) techniques is developed for nonlin
 ear DT systems with uncertain dynamics\, leveraging deep reinforcement lear
 ning (DRL) and actor-critic neural network architectures.\n\nFirst\, the op
 timal tracking control of partially uncertain nonlinear DT systems is addre
 ssed using a zero-sum game (ZSG) formulation\, wherein an augmented system 
 is designed to incorporate the tracking error and its integral value. A bar
 rier function (BF) is incorporated into the cost function to enforce state 
 constraints\, ensuring that system trajectories remain within a safe set wh
 ile optimizing performance. The actor-critic framework is employed to appro
 ximate the optimal control policy and worst-case disturbance\, while a life
 long learning scheme is introduced to mitigate catastrophic forgetting.\n\n
 Next\, the optimal adaptive tracking problem is extended to multi-task safe
  optimal adaptive tracking (MSOAT) for nonlinear DT systems in strict-feedb
 ack form. A Hamilton-Jacobi-Bellman (HJB)-based reinforcement learning fram
 ework is developed\, integrating a time-varying control barrier function (C
 BF) for real-time safety enforcement. To address catastrophic forgetting in
  multi-task learning\, an online Elastic Weight Consolidation (EWC)-based r
 egularization term is introduced in the critic and actor update laws\, enha
 ncing stability and adaptability across multiple tasks.\n\nSubsequently\, a
 n explainable deep reinforcement learning-based safety-aware optimal adapti
 ve tracking (SOAT) framework is proposed for nonlinear DT affine systems su
 bject to state constraints. The higher-order control barrier function (HOCB
 F) and Karush-Kuhn-Tucker (KKT) conditions are incorporated into the optima
 l policy derivation to ensure safe exploration during both online learning 
 and steady-state operation. Furthermore\, the Shapley Additive Explanations
  (SHAP) method is utilized to provide interpretability\, identifying key fe
 atures that influence control decisions and enabling efficient neural netwo
 rk architecture design.\n\nTo enhance robustness\, a safe lifelong learning
  (SLL)-based trajectory tracking controller is developed for autonomous sur
 face vessels (ASV) using deep multilayer neural networks (MNN). The propose
 d controller employs an MNN-based observer to estimate uncertain system dyn
 amics and a singular value decomposition (SVD)-based tuning mechanism to mi
 tigate vanishing gradient issues. The lifelong learning-based approach impr
 oves adaptability in varying system dynamics\, ensuring sustained optimal p
 erformance.\n\nFinally\, a resilient DRL-based control strategy is introduc
 ed to counteract sensor\, actuator\, and reward adversarial attacks in autono
 mous systems. An MNN-based observer is designed to detect attack residuals\
 , enabling real-time anomaly detection in networked communication. Safety i
 s enforced using a CBF-constrained quadratic programming (QP) formulation\,
  while adaptive reward clipping and Gaussian-based forgetting factors mitig
 ate the impact of adversarial reward perturbations.\n\nThe proposed methodo
 logies are rigorously validated through extensive simulations on autonomous
  surface vessels (ASV)\, autonomous underwater vehicles (AUV)\, shipboard p
 ower systems (SPS)\, and rear-wheel-drive autonomous (RWDA) vehicles. The r
 esults demonstrate significant improvements in tracking accuracy\, safety e
 nforcement\, and robustness to adversarial perturbations compared to conven
 tional actor-critic-based controllers. This research advances the field of 
 lifelong safe optimal adaptive tracking control\, providing a foundation fo
 r the real-world deployment of reinforcement learning-based controllers.
DTEND:20250409T223000Z
DTSTAMP:20260420T181350Z
DTSTART:20250409T203000Z
LOCATION:
SEQUENCE:0
SUMMARY:Final Doctoral Defense for Behzad Farzanegan
UID:tag:localist.com\,2008:EventInstance_49285340527426
URL:https://calendar.mst.edu/event/final-doctoral-defense-for-behzad-farzan
 egan
END:VEVENT
END:VCALENDAR
