2 Citations (Scopus)
276 Downloads (Pure)

Abstract

This paper presents a kernel-based learning approach for black-box nonlinear state-space models, with a focus on enforcing model stability. Specifically, we aim to enforce a stability notion called convergence, which guarantees that, for any bounded input from a user-defined class, the model responses converge to a unique steady-state solution that remains within a bounded, user-defined positively invariant set. This form of model stability makes the learned models robust to inputs unseen during the training phase. The learning problem is cast as a convex optimization problem whose convex constraints enforce the targeted convergence property. The benefits of the approach are illustrated by a simulation example.
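The recipe described in the abstract, fitting a kernel-based predictor under convex constraints that promote a convergence (contraction-like) property, can be sketched in a few lines. The following is a minimal illustration only, not the formulation from the paper: the toy scalar system, the Gaussian kernel, the use of cvxpy, and the finite-sample contraction surrogate serving as the convex constraint are all assumptions made for this example.

```python
# Illustrative sketch only -- NOT the formulation from the paper. It shows the
# general recipe the abstract describes: fit a kernel-based one-step predictor
# for a nonlinear state-space model as a convex program, with convex
# constraints that promote a contraction-like (convergence) property.
# The toy system, kernel, constraint form and all names below are assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy data from an assumed unknown scalar system x+ = 0.8*tanh(x) + 0.5*u.
N = 60
x = rng.uniform(-2.0, 2.0, N)
u = rng.uniform(-1.0, 1.0, N)
x_next = 0.8 * np.tanh(x) + 0.5 * u + 0.01 * rng.standard_normal(N)

# Gaussian kernel on the joint regressor z = (x, u).
Z = np.column_stack([x, u])
def kernel(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

K = kernel(Z, Z)

# Predictor f(z) = sum_i alpha_i * k(z, z_i): linear in alpha, so the fit,
# the RKHS-norm penalty and the constraints below are all convex in alpha.
alpha = cp.Variable(N)
lam = 1e-3
L = np.linalg.cholesky(K + 1e-8 * np.eye(N))        # K = L @ L.T (jittered)
fit = cp.sum_squares(K @ alpha - x_next)            # data-fit term
reg = lam * cp.sum_squares(L.T @ alpha)             # alpha' K alpha penalty

# Crude convex surrogate of a contraction condition: for sampled state pairs
# driven by the same input, predicted next states must shrink by factor rho.
rho = 0.95
constraints = []
for _ in range(200):
    xa, xb = rng.uniform(-2.0, 2.0, 2)
    ui = rng.uniform(-1.0, 1.0)
    ka = kernel(np.array([[xa, ui]]), Z).ravel()
    kb = kernel(np.array([[xb, ui]]), Z).ravel()
    constraints.append(cp.abs((ka - kb) @ alpha) <= rho * abs(xa - xb))

prob = cp.Problem(cp.Minimize(fit + reg), constraints)
prob.solve()
rmse = np.sqrt(np.mean((K @ alpha.value - x_next) ** 2))
print("status:", prob.status, " training RMSE:", rmse)
```

Because the predictor is linear in the kernel weights, the regularized fit and the pairwise contraction bounds are all convex, so the whole problem is a single convex program. The finite set of pairwise constraints used here is only a heuristic surrogate; the convergence constraints developed in the paper are different and come with guarantees on a user-defined invariant set that this sketch does not provide.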
Original language: English
Title of host publication: 2023 62nd IEEE Conference on Decision and Control, CDC 2023
Publisher: Institute of Electrical and Electronics Engineers
Pages: 2897-2902
Number of pages: 6
ISBN (Electronic): 979-8-3503-0124-3
DOIs
Publication status: Published - 19 Jan 2024
Event: 62nd IEEE Conference on Decision and Control, CDC 2023 - Singapore, Singapore
Duration: 13 Dec 2023 - 15 Dec 2023
Conference number: 62

Conference

Conference: 62nd IEEE Conference on Decision and Control, CDC 2023
Abbreviated title: CDC 2023
Country/Territory: Singapore
City: Singapore
Period: 13/12/23 - 15/12/23

Funding

This research was partially supported by the Engineering and Physical Sciences Research Council (grant number: EP/W005557/1) and by the Eötvös Loránd Research Network (grant number: SA-77/2021).

Funders and funder numbers:
Engineering and Physical Sciences Research Council: EP/W005557/1
Eötvös Loránd Research Network: SA-77/2021
