QA498 : An application of least squares support vector optimization methods for solving a class of differential equations
Thesis > Central Library of Shahrood University > Mathematical Sciences > MSc > 2018
Authors:
Abstract: In this thesis a new approach based on Least Squares Support Vector Machines (LS-SVMs) is proposed for solving linear and nonlinear ordinary differential equations (ODEs). The approximate solution is presented in closed form by means of LS-SVMs, whose parameters are adjusted to minimize an appropriate error function. For the linear and nonlinear cases, these parameters are obtained by solving a system of linear and nonlinear equations, respectively. The method is well suited for solving mildly stiff, nonstiff and singular ordinary differential equations with initial and boundary conditions. Numerical results demonstrate the efficiency of the proposed method over existing methods.

Support vector machines have been very successful in pattern recognition and function estimation problems. Here we introduce the use of least squares support vector machines (LS-SVMs) for the optimal control of nonlinear systems. Linear and neural full static state feedback controllers are considered. The problem is formulated in such a way that it incorporates the N-stage optimal control problem as well as a least squares support vector machine approach for mapping the state space into the action space. The solution is characterized by a set of nonlinear equations. An alternative formulation as a constrained nonlinear optimization problem in fewer unknowns is given, together with a method for imposing local stability in the LS-SVM control scheme. The results are discussed for support vector machines with a radial basis function kernel. Advantages of LS-SVM control are that no number of hidden units has to be determined for the controller and that no centers have to be specified for the Gaussian kernels when applying Mercer's condition. The curse of dimensionality is avoided in comparison with defining a regular grid for the centers in classical radial basis function networks.
This is at the expense of taking the trajectory of state variables as additional unknowns in the optimization problem, whereas classical neural network approaches typically lead to parametric optimization problems. In the SVM methodology the number of unknowns equals the number of training data.
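To make the idea above concrete, the following is a minimal sketch of least-squares collocation with a Gaussian RBF basis for an ODE, in the spirit of the LS-SVM approach: in the linear case the collocation residuals and the initial condition yield one linear system in the weights, and in the nonlinear case they yield a nonlinear system, solved here by Gauss-Newton iteration. The primal weight-space form, the test problems, and all names and parameter values are illustrative assumptions, not the thesis's exact dual LS-SVM formulation.

```python
import numpy as np

sigma = 0.4                                   # RBF width (assumed)
t = np.linspace(0.0, 1.0, 20)                 # collocation points on [0, 1]
centers = t.copy()

def phi(x):
    """Gaussian RBF features, shape (len(x), len(centers))."""
    return np.exp(-(x[:, None] - centers[None, :])**2 / (2 * sigma**2))

def dphi(x):
    """Derivative of each feature with respect to x."""
    return -(x[:, None] - centers[None, :]) / sigma**2 * phi(x)

P, dP = phi(t), dphi(t)
P0 = phi(np.array([0.0]))                     # features at t = 0
ridge = 1e-8                                  # tiny regularizer for stability

# Linear case: y' = -y, y(0) = 1 (exact solution exp(-t)).
# Residual rows (dP + P) w = 0 plus one initial-condition row give a
# single linear least-squares system in the weights w.
A = np.vstack([dP + P, P0])
b = np.concatenate([np.zeros(len(t)), [1.0]])
w = np.linalg.solve(A.T @ A + ridge * np.eye(len(centers)), A.T @ b)
err_lin = np.max(np.abs(P @ w - np.exp(-t)))

# Nonlinear case: y' = -y**2, y(0) = 1 (exact solution 1/(1+t)).
# The same ansatz now yields nonlinear equations in w.
def residual(w):
    y = P @ w
    return np.concatenate([dP @ w + y**2, P0 @ w - 1.0])

def jacobian(w):
    y = P @ w
    return np.vstack([dP + 2.0 * y[:, None] * P, P0])

w = np.zeros(len(centers))
for _ in range(30):                           # Gauss-Newton iterations
    J, r = jacobian(w), residual(w)
    w -= np.linalg.solve(J.T @ J + ridge * np.eye(len(w)), J.T @ r)
err_nl = np.max(np.abs(P @ w - 1.0 / (1.0 + t)))

print(f"linear ODE    max error: {err_lin:.2e}")
print(f"nonlinear ODE max error: {err_nl:.2e}")
```

Both errors should be small on these mild test problems; the full LS-SVM treatment additionally works in the dual with a kernel and handles boundary conditions and stiff problems as described in the abstract.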
Keywords:
#Least squares support vector machines #Ordinary differential equations #Closed form approximate solution #Collocation method #Neural optimal control #Support vector machines #Radial basis functions.
Keeping place: Central Library of Shahrood University
Visitor: