Stability Results for Efficient Solutions of Vector Optimization Problems
Authors
Abstract
Using the additive weight method for vector optimization problems and the method of essential solutions, we study continuity properties of the set-valued mapping that assigns to each objective function f its set of efficient solutions S(f). The key point in establishing such properties is to analyze the stability of additive weight solutions and the relationship between efficient solutions and additive weight solutions.
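For orientation, the sketch below is a minimal, hypothetical illustration of the additive weight (weighted-sum scalarization) idea the abstract refers to; it is not taken from the paper. On a finite sample of feasible points, minimizing a strictly positively weighted sum of the objectives yields a Pareto-efficient point. The toy objectives, the sampled feasible set, and names such as `additive_weight_solution` are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch (assumption, not the paper's setting): a bi-objective
# problem on a finite sample of feasible points.
rng = np.random.default_rng(0)
feasible_points = rng.uniform(-1.0, 1.0, size=(200, 2))  # candidate decisions x

def f(x):
    """Toy objective vector f(x) = (f1(x), f2(x)); illustrative only."""
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = x[0] ** 2 + (x[1] + 1.0) ** 2
    return np.array([f1, f2])

values = np.array([f(x) for x in feasible_points])  # objective vectors, shape (200, 2)

def additive_weight_solution(weights):
    """Minimize the positively weighted sum w1*f1 + w2*f2 over the sample.
    With strictly positive weights, any minimizer is an efficient (Pareto)
    solution of the vector problem on this finite set."""
    scores = values @ np.asarray(weights)
    return int(np.argmin(scores))

def is_efficient(i):
    """Check Pareto efficiency: no other sampled point dominates values[i]."""
    others = np.delete(values, i, axis=0)
    dominated = np.all(others <= values[i], axis=1) & np.any(others < values[i], axis=1)
    return not dominated.any()

idx = additive_weight_solution([0.5, 0.5])
print("additive weight solution index:", idx, "efficient:", is_efficient(idx))
```

Varying the positive weight vector selects different efficient points, which is the kind of relationship between additive weight solutions and efficient solutions that the stability analysis in the abstract builds on.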
Similar resources
Duality for vector equilibrium problems with constraints
In this paper, we study duality for vector equilibrium problems using a concept of generalized convexity that involves the quasi-relative interior. Applications to optimality conditions for quasi-relative efficient solutions are then obtained. Our results extend several existing ones in the literature when the ordering cones in both the objective space and the constr...
Optimality conditions for approximate solutions of vector optimization problems with variable ordering structures
We consider nonconvex vector optimization problems with variable ordering structures in Banach spaces. Under certain boundedness and continuity properties we present necessary conditions for approximate solutions of these problems. Using a generic approach to subdifferentials we derive necessary conditions for approximate minimizers and approximately minimal solutions of vector optimizatio...
Stability of a majority efficient solution of a vector linear trajectorial problem
The multicriteria problem of majority choice on a system of subsets of a finite set with linear partial criteria (MINSUM) is considered. Necessary and sufficient conditions for an efficient trajectory to preserve majority efficiency under "small" perturbations of the vector criterion coefficients have been found. Lower and upper attainable estimates of the stability radius of a majority efficien...
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
On duality in nonconvex vector optimization in Banach spaces using augmented Lagrangians
This paper shows how the use of penalty functions in terms of projections onto the constraint cones, which are orthogonal in the sense of Birkhoff, makes it possible to establish augmented Lagrangians and to define a dual problem for a given nonconvex vector optimization problem. Weak duality then always holds. Using the quadratic growth condition together with the inf-stability or a kind of Rockafellar...