Численные методы решения негладких задач выпуклой оптимизации с функциональными ограничениями / Numerical Methods for Non-Smooth Convex Optimization Problems with Functional Constraints. Dissertation and abstract topic, VAK RF specialty 01.01.07, Candidate of Sciences Mohammad Alkousa

  • Mohammad Alkousa
  • Candidate of Sciences
  • 2020, Moscow Institute of Physics and Technology (National Research University)
  • VAK RF specialty: 01.01.07
  • Number of pages: 149
Alkousa, Mohammad. Численные методы решения негладких задач выпуклой оптимизации с функциональными ограничениями / Numerical Methods for Non-Smooth Convex Optimization Problems with Functional Constraints: Candidate of Sciences dissertation, specialty 01.01.07 (Computational Mathematics). Moscow Institute of Physics and Technology (National Research University), 2020. 149 p.


Each chapter of the thesis presents the results of numerical experiments illustrating, on selected examples, the advantages of the proposed and studied procedures.

Table of Contents


Acknowledgments

Abstract

List of Figures

List of Tables

Introduction

1 Some Basic Tools in Convex Analysis and Optimization

1.1 Convex Analysis Tools

1.1.1 Convex sets

1.1.2 Differentiable convex functions

1.1.3 Non-differentiable convex functions

1.1.4 Lipschitz continuity

1.2 Convex Optimization Tools

1.2.1 Properties of convex optimization problems

1.2.2 Numerical methods for convex optimization problems

1.2.3 Mirror Descent basics

2 Mirror Descent Algorithms for Deterministic and Stochastic Constrained Optimization Problems

2.1 Mirror Descent Algorithms for Deterministic Constrained Optimization Problems

2.1.1 Problem statement and standard Mirror Descent methods

2.1.2 The modification of an adaptive Mirror Descent algorithm to minimize the Lipschitz-continuous objective function

2.1.3 The modification of an adaptive Mirror Descent algorithm to minimize the objective function with Lipschitz gradient

2.1.4 Partial adaptive Mirror Descent algorithm to minimize the objective function with Lipschitz gradient

2.1.5 The estimates of the convergence rate of the proposed Mirror Descent algorithms

2.1.6 Numerical experiments

2.2 Mirror Descent Algorithms for Special Strongly Convex Optimization Problems

2.2.1 Adaptive Mirror Descent algorithm for Lipschitz-continuous strongly convex function

2.2.2 Adaptive Mirror Descent algorithm for strongly convex functions with Lipschitz gradient

2.2.3 Partial adaptive Mirror Descent algorithm for strongly convex objective function with Lipschitz gradient

2.2.4 Numerical experiments

2.3 Mirror Descent Algorithms for Stochastic Constrained Optimization Problems

2.3.1 Problem statement and related basics

2.3.2 Adaptive stochastic Mirror Descent algorithm for Lipschitz-continuous function

2.3.3 The modification of an adaptive stochastic Mirror Descent algorithm for Lipschitz-continuous function

2.3.4 Numerical experiments

2.3.5 Additional experiments: Fermat-Torricelli-Steiner problem

3 Mirror Descent and Constrained Online Optimization Problems

3.1 Deterministic Constrained Online Optimization Problems

3.1.1 Non-adaptive Mirror Descent algorithm for the case of nonnegative regret

3.1.2 Adaptive Mirror Descent algorithm for the case of non-negative regret

3.1.3 Numerical experiments

3.2 Stochastic Constrained Online Optimization Problems

3.2.1 Non-adaptive stochastic algorithm for constrained online optimization problems

3.2.2 Adaptive stochastic algorithm for constrained online optimization problems

3.2.3 Numerical experiments

4 Accelerated Methods for Saddle Point Problems

4.1 Accelerated Methods for Composite Non-Bilinear Saddle Point Problem

4.1.1 Problem statement and some related concepts

4.1.2 Some basic lemmas related to the considered problem

4.1.3 Fundamental results and the scheme of accelerated methods for the considered saddle point problem

4.2 On Some Algorithms for Strongly Convex Optimization Problems with One Functional Constraint

4.2.1 Problem statement and related basics

4.2.2 The proposed algorithm and the estimates of the accuracy of the solution

4.2.3 An estimate of the convergence rate for the Lipschitz-continuous functions

4.2.4 Numerical experiments

References

List of Publications

List of Figures

1.1 The best-known complexities of the first-order methods for several classes of problems

2.1 The results of Algorithms 1 (AMD-L), 2 (AMD-LG), 3 (Mod.AMD-L) and 4 (Mod.AMD-LG), with m = r = 50, n = 500 and different values of ε

2.2 The results of Algorithms 1 (AMD-L), 2 (AMD-LG), 3 (Mod.AMD-L) and 4 (Mod.AMD-LG), with m = 100, n = 1000, r = 50 and ε =

2.3 The results of Algorithms 1 (AMD-L), 2 (AMD-LG), 3 (Mod.AMD-L) and 4 (Mod.AMD-LG), with m = 100, n = 1000, r = 50 and ε =

2.4 The results of Algorithms 1 (AMD-L), 2 (AMD-LG), 3 (Mod.AMD-L) and 4 (Mod.AMD-LG), for Fermat-Torricelli-Steiner problem, with m = 50, n = 100, r = 25 and different values of ε

2.5 The results of Algorithms 2 (AMD-LG) and 5 (PAMD-LG), when Mg < 1, with m = 500, r = 100, n = 2000 and different values of ε

2.6 The results of Algorithms 2 (AMD-LG) and 5 (PAMD-LG), when Mg > 1, with m = 500, r = 100, n = 2000 and different values of ε

2.7 The results of Algorithms 2 (AMD-LG) and 5 (PAMD-LG), when Mg < 1, with m = 500, r = 100, n = 2000 and ε = 0.05. The logarithmic scale on both axes in all figures

2.8 The results of Algorithms 2 (AMD-LG) and 5 (PAMD-LG), when Mg > 1, with m = 500, r = 100, n = 2000 and ε = 0.05. The logarithmic scale on both axes in all figures

2.9 The results of Algorithms 1 (AMD-L) and 6 (AMD-SL), with different values of ε

2.10 The results of Algorithms 2 (AMD-LG) and 7 (AMD-SLG), with different values of ε

2.11 The results of Algorithms 9 (ASMD-L) and 10 (Mod.ASMD-L), for Fermat-Torricelli-Steiner problem, with m = 250, n = 1000, r = 100 and different values of ε

3.1 The results of Algorithms 11 (Non Adap-OMD), 12 (Adap-OMD) and 13 (Mod.Adap-OMD), with m = 300, n = 100 and different values of N

3.2 The results of Algorithms 14 (Non Adap-SOMD) and 15 (Adap-SOMD), for objective function (3.37) with m = 10, n = 20 and different values of N

3.3 The results of Algorithms 14 (Non Adap-SOMD) and 15 (Adap-SOMD), for Example 20, with m = 200, n = 500 and different values of N

3.4 The results of Algorithm 15 (Adap-SOMD), for Example 20, with m = and different values of n and N

List of Tables

2.1 The results of Algorithms 1-4, with constraints (2.35). m = 50 and n =

2.2 The results of Algorithms 1-4, with constraints (2.36). m = 100 and n =

2.3 The results of Algorithms 1-4, with constraints (2.37). m = 50 and n =

2.4 The results of Algorithms 2 and 5, when Mg < 1, with m = 500, r = 100, n =

2.5 The results of Algorithms 2 and 5, when Mg > 1, with m = 500, r = 100, n =

2.6 Results of Algorithms 9 and 10, with m = 50, n = 1500 and different values of r

2.7 The results of Algorithms 9 and 10, with m = 50, n = 100 and different values of r

3.1 The results of Algorithms 11 and 12, with m = 100 and n =

3.2 The results of Algorithm 13 (Mod.Adap-OMD), with m = 100 and n =

3.3 Results of Algorithms 14 and 15, with m = 10, n = 20 and different values of N

4.1 The best-known results regarding the complexity of solving the problem (4.6)

4.2 The results of Algorithms 20 and 21, for Example 21, with r = m = and n =

4.3 The results of Algorithms 20 and 21, for Example 22, with m = 50 and n =

4.4 The results of Algorithms 6, 20 and 21, for Example 23, with p = 10, m = 50 and n =

4.5 The results of Algorithms 6, 20 and 21, for Example 24, with r = 50, m = 50 and n =



Introduction

Optimization may be regarded as the cornerstone of many areas of applied mathematics, computer science, engineering, and a number of other scientific disciplines. This field was described by Roger Fletcher as "a fascinating blend of theory and computation, heuristics and rigor" [94].

Optimization problems arise naturally in many different fields, but unfortunately, for the majority of optimization problems there is no hope of finding a solution analytically (i.e., of writing down an optimal solution in explicit form). Therefore, in order to solve an optimization problem, we have to use numerical methods. There are different classifications and types of such methods; one class is the first-order methods, which go back to 1847 and the work of Cauchy on the steepest descent method. With the growing number of applications that can be modeled as large- or even huge-scale optimization problems (such applications arise in machine learning, deep learning, data science, control, signal processing, statistics, and so on), first-order methods, which require low iteration cost as well as low memory storage, have received much interest over the past few decades for solving convex optimization problems in both the smooth and non-smooth cases [17]. In particular, the optimization of non-smooth functions with several functional constraints attracts widespread interest in large-scale optimization and its applications [23, 111]. Continuous optimization with functional constraints has a long history of study; monographs in this area include [24, 94]. Recent works on first-order methods for convex optimization problems with convex functional constraints include [14, 43, 58, 71, 119, 123, 125] for the deterministic setting and [2, 3, 12, 66, 126] for the stochastic setting. However, the parallel development for problems with non-convex objective functions and non-convex constraints, especially for theoretically provable algorithms, remains limited; see [70] and the references therein.

There are various first-order methods for solving convex optimization problems with a non-smooth objective function. Some examples of these methods, to name but a few, are subgradient methods and subgradient projection methods [85, 101, 109], OSGA [93], the bundle-level method [85], the Lagrange multipliers method [26], and many others. Among them is the Mirror Descent method [18, 81], which is viewed as a simple method for non-smooth convex optimization problems.

The Mirror Descent method, which originated in [80, 81] and was later analyzed in [18], is considered the non-Euclidean extension of standard subgradient methods. This method is used in many applications; see [22, 59, 75, 76, 120] and the references therein. The standard subgradient methods employ the Euclidean distance function with a suitable step-size in the projection step. Mirror Descent extends the standard projected subgradient methods by employing a nonlinear distance function with an optimal step-size in the nonlinear projection step [74]. The Mirror Descent method not only generalizes the standard subgradient methods but also achieves a better convergence rate [40]. In addition, the Mirror Descent method is applicable to optimization problems in Banach spaces where gradient descent is not [40]. An extension of the Mirror Descent method to constrained problems was proposed in [19, 81]. Usually, the step-size and the stopping criterion for the Mirror Descent method require knowledge of the Lipschitz constant of the objective function and of the constraints, if any. Adaptive step-sizes, which do not require this information, are considered for unconstrained problems in [21] and for constrained problems in [19]. Recently, in [14], some optimal adaptive Mirror Descent methods were proposed for convex optimization problems with non-smooth functional constraints.
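
For illustration only, the following minimal sketch shows one Mirror Descent step on the probability simplex with the entropy prox-function; the function name entropy_md_step, the step-size rule and the toy objective are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

def entropy_md_step(x, g, h):
    """One Mirror Descent step on the probability simplex.

    With the entropy prox-function, the mirror step
    x_{k+1} = argmin_{x in simplex} { h*<g, x> + KL(x, x_k) }
    has the closed form of a multiplicative-weights update.
    x : current point (positive entries summing to 1)
    g : a subgradient of the objective at x
    h : step-size
    """
    w = x * np.exp(-h * g)
    return w / w.sum()

# Toy usage: minimize f(x) = max_i <a_i, x> over the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))
x = np.full(10, 0.1)
for k in range(200):
    i = np.argmax(A @ x)            # active component of the max
    g = A[i]                        # a subgradient of f at x
    x = entropy_md_step(x, g, h=0.1 / np.sqrt(k + 1))
```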

If we focus on problems of minimizing an objective function consisting of a large number of component functionals, then at each iteration of any iterative minimization procedure, computing even a single (sub)gradient of the full objective becomes very expensive. Therefore, instead of the deterministic (sub)gradient, there is an incentive to compute a stochastic (sub)gradient. This accelerates the solution process, with the gain from randomization growing with the problem size. Another approach to solving stochastic optimization problems is stochastic approximation (SA), which was initially proposed in a seminal paper by Robbins and Monro in 1951 [103]. An important improvement of this algorithm was developed by Polyak and Juditsky [98, 99]. More recently, Nemirovski et al. [82] presented a modified stochastic approximation method and demonstrated its superior numerical performance for solving a general class of non-smooth convex problems. The Mirror Descent method has also been widely used to solve optimization problems in the stochastic setting [64, 77, 78].
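
For example (in generic notation, not the specific problems of the thesis), if the objective is an average of m component functionals, an unbiased stochastic (sub)gradient is obtained by sampling one component uniformly at random:
\[
f(x)=\frac{1}{m}\sum_{i=1}^{m} f_i(x),\qquad
\mathbb{E}_{i\sim\mathrm{Uniform}\{1,\dots,m\}}\bigl[\nabla f_i(x)\bigr]=\nabla f(x),
\]
so that one iteration requires the (sub)gradient of a single component instead of all m of them.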

Also, in recent years, online convex optimization (OCO) has become a leading online learning framework, thanks to its powerful modeling capability for many problems from diverse domains; for example, selection for search engines and spam filtering can be modeled as special cases. OCO plays a key role in solving problems where statistical information is being updated [54, 55]. There are many examples of such problems, concerning Internet networks, consumer data sets, or financial markets, and in machine learning applications such as adaptive routing in networks, online display advertising [9, 58], dictionary learning, auctions, classification, and regression; see [127] and the references therein, and [54, 55] for more applications and background. In OCO, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in this convex set before seeing the objective function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed [131]. In recent years, methods for solving online optimization problems have been actively developed [29, 30, 46, 47, 54, 55, 58, 73]. In [58] the authors considered adaptive methods for the OCO problem with constraints, but only with the standard Euclidean prox-structure. In [56] the authors proposed an algorithm for OCO with stochastic constraints, where the objective function varies arbitrarily but the functional constraints vary independently and identically distributed (i.i.d.) over time.
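
In the standard OCO setting (the notation here is generic and serves only to illustrate the framework), the quality of the chosen points $x_1,\dots,x_N\in Q$ is measured by the regret
\[
\mathrm{Regret}_N=\sum_{t=1}^{N} f_t(x_t)-\min_{x\in Q}\sum_{t=1}^{N} f_t(x),
\]
where $f_t$ is revealed only after $x_t$ has been chosen; in the constrained setting one additionally controls the violation of the functional constraints along the produced sequence.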

For completeness of the exposition, it is useful to take into account saddle point problems, which are closely related to optimization problems: in the general case, a saddle point problem can be considered as a non-smooth convex optimization problem with a certain structure. Moreover, saddle point problems appear when applying the Lagrange multiplier method to convex optimization problems with functional constraints (linear equality constraints and nonlinear convex inequality constraints). Recently, many researchers have been actively working on accelerated methods for saddle point problems that exploit their structure. Over the past 15 years, Nesterov's acceleration scheme [89] has been successfully transferred to smooth constrained optimization problems and to problems with structure (in particular, the so-called composite problems). The acceleration scheme has also been successfully transferred to gradient-free methods, subgradient methods, directional descent, coordinate descent, and methods using higher derivatives. It has also been possible to accelerate randomized methods (for example, variance reduction methods for minimizing a sum of functions) and methods for solving smooth stochastic optimization problems. The success of the transfer mentioned above is understood as the achievement of the well-known lower bounds by the corresponding accelerated methods (up to a numerical factor). It is also important to note that in all the described cases, the basis of the acceleration scheme is the same. Details and a more extensive review of the literature can be found in [45]. Very recently, in [72], the authors generalized our results from [4] and resolved a longstanding open question pertaining to the design of near-optimal first-order algorithms for smooth and strongly-convex-strongly-concave minimax problems. Despite the noted achievements, as mentioned previously, there remains a number of problems that are quite important in practice for which it is not yet completely clear how to accelerate the available methods.
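
As an illustration of this connection (again in generic notation), applying the Lagrange multiplier method to a convex problem with inequality constraints yields a convex-concave saddle point problem:
\[
\min_{x\in Q}\bigl\{f(x):\ g_i(x)\le 0,\ i=1,\dots,m\bigr\}
\quad\longleftrightarrow\quad
\min_{x\in Q}\,\max_{\lambda\ge 0}\Bigl(f(x)+\sum_{i=1}^{m}\lambda_i g_i(x)\Bigr),
\]
where the Lagrangian is convex in $x$ and linear (hence concave) in the dual variables $\lambda$.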

The main theme of the thesis is non-smooth convex optimization problems with convex functional constraints and their connection with convex-concave composite saddle point problems.

We begin, in Chapter 1, by reviewing the fundamental concepts and tools of convex analysis and convex optimization, which will be useful in the remaining chapters of the thesis. We also review some basic and fundamental properties of convex optimization problems and numerical methods for solving them. We focus on first-order methods, with particular attention to the basics of the Mirror Descent method.

In Chapter 2, we consider the problem of minimizing a convex function under convex non-smooth functional constraints and propose some Mirror Descent algorithms to solve such problems, in the deterministic or stochastic (randomized) setting, and in different situations: smooth or non-smooth, convex or strongly convex objective function and constraints. It is demonstrated that the proposed algorithms are applicable to objective functions with various levels of smoothness: the Lipschitz condition holds either for the objective function itself or for its gradient or Hessian (while the function itself may fail to satisfy the Lipschitz condition). Section 2.1 (see [113]) is devoted to a new modification of an adaptive Mirror Descent algorithm (which does not require knowledge of the Lipschitz constants of either the objective function or the constraints), proposed in [14], for the deterministic setting of the optimization problem in the case when the objective function is Lipschitz-continuous. The proposed modification saves running time by considering not all functional constraints on non-productive steps (i.e., skipping some of the functional constraints). The class of problems with a non-smooth objective function equal to the maximum of smooth functionals with Lipschitz-continuous gradients was also considered. Estimates of the convergence rate for all proposed algorithms were obtained, depending on the level of smoothness of the objective function. These estimates show that the proposed algorithms are optimal from the viewpoint of lower bounds [81]. In Section 2.2, we present some adaptive and partially adaptive Mirror Descent algorithms, proposed in [114], for strongly convex minimization problems with strongly convex constraints. The idea of restarting an algorithm for convex problems was used in these algorithms in order to obtain a faster convergence rate. Section 2.3 (see [3]) is devoted to a new modification of a recently proposed adaptive stochastic Mirror Descent algorithm (Algorithm 4 in [14]). Estimates of the convergence rate of the proposed modified algorithm were obtained. The proposed modified algorithm, with accuracy ε of the approximate solution to the problem, is optimal in terms of lower bounds. At the end of each section of this chapter, numerical experiments demonstrating the effectiveness of the proposed modification, and the advantages of using the restart technique in the case of a strongly convex objective function, were carried out for certain examples.
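
A heavily simplified sketch of the productive/non-productive step logic behind these algorithms is given below. It assumes the Euclidean prox-structure and a single constraint g; the step-size rule and the averaging of productive points are illustrative only and are not a verbatim transcription of the algorithms from [14, 113].

```python
import numpy as np

def constrained_md(f_grad, g, g_grad, x0, eps, n_iter=1000):
    """Illustrative Mirror Descent scheme for min f(x) s.t. g(x) <= 0.

    Productive step: the constraint is nearly satisfied (g(x) <= eps),
    so we move along a subgradient of the objective f.
    Non-productive step: the constraint is violated, so we move along
    a subgradient of g instead. With the Euclidean prox-structure the
    mirror step reduces to an ordinary (sub)gradient step.
    """
    x = np.asarray(x0, dtype=float)
    productive = []
    for _ in range(n_iter):
        if g(x) <= eps:                              # productive step
            d = f_grad(x)
            productive.append(x.copy())
        else:                                        # non-productive step
            d = g_grad(x)
        h = eps / max(float(np.dot(d, d)), 1e-16)    # adaptive step-size (illustrative)
        x = x - h * d
    return np.mean(productive, axis=0) if productive else x
```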

In Chapter 3, we consider a non-smooth convex online optimization problem (OOP) with convex functional constraints, in both deterministic and stochastic settings [2, 119]. This problem can be summarized as follows: assume that N convex Lipschitz-continuous non-smooth functionals are given on a closed set of a vector space. The problem is to minimize the arithmetic mean of these functionals subject to convex Lipschitz-continuous non-smooth functional constraints. In addition, the (sub)gradient of each functional may be calculated only once. For solving this problem, in both deterministic and stochastic settings (see Sections 3.1 and 3.2, respectively), non-adaptive and adaptive algorithms were proposed for an arbitrary proximal structure. In the case of non-negative regret, we find that the number of non-productive steps is O(N), which indicates the optimality of the proposed algorithms. At the end of each section of this chapter, to show the advantages of the adaptive variants of the proposed algorithms, numerical experiments are also given for some examples.
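
In generic notation (only to illustrate the setting described above), the goal is to generate points $x_1,\dots,x_N\in Q$, querying the (sub)gradient of each $f_t$ at most once and keeping the constraints approximately satisfied, so that
\[
\frac{1}{N}\sum_{t=1}^{N} f_t(x_t)\;-\;\min_{x\in Q,\ g(x)\le 0}\ \frac{1}{N}\sum_{t=1}^{N} f_t(x)\ \le\ \varepsilon,
\]
which is a constrained analogue of bounding the regret introduced earlier.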

Finally, in Chapter 4, in Section 4.1 we consider a more general class of saddle point problems, namely non-bilinear convex-concave smooth composite saddle point problems. It is proposed to reduce the considered saddle point problem to a combination of auxiliary smooth strongly convex optimization problems, separately for each group of variables. The application of Nesterov's accelerated gradient method then leads to improved convergence estimates. We obtained an analogue of Lan's results [67] and showed that the best-known bounds for bilinear convex-concave smooth composite saddle point problems [67] remain true for the non-bilinear problems [4]. In Section 4.2, for the non-smooth case of saddle point problems, as a special case, we consider the classical optimization problem of minimizing a non-smooth, Lipschitz-continuous, strongly convex function with one non-smooth functional constraint. For solving this problem, we propose an analogue of an adaptive algorithm proposed by F. S. Stonyakin in [115]. The approach in the proposed algorithm is based on the transition to a strongly convex dual problem; in this case, the dual function depends on one dual variable, which allows us to use the dichotomy (bisection) method and to solve an auxiliary one-dimensional problem at each iteration. The stopping criterion of the proposed algorithm can speed up its work compared with another optimal algorithm for some examples of non-smooth problems. In the proposed algorithm, strong convexity of the functional constraint is not required, and there is also no need to know the value of the strong convexity parameter of the objective function, which is not the case for some optimal algorithms, such as Mirror Descent (see Section 2.2), which require strong convexity and knowledge of the strong convexity parameter of both the objective function and the functional constraints. Theoretical estimates of the convergence rate of the proposed algorithm were also obtained. Results of numerical experiments illustrating the comparison of the proposed algorithm with other algorithms, including an optimal adaptive algorithm (Mirror Descent) for non-smooth strongly convex functions, are also given for some examples.
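
A minimal sketch of the dichotomy (bisection) step on the dual variable is given below. It assumes a single constraint g, an oracle solve_inner(lam) that minimizes f(x) + lam*g(x) over Q for a fixed lam, and a known upper bound lam_max on the optimal dual variable; the names and the stopping rule are illustrative and do not reproduce the exact algorithm of Section 4.2.

```python
def dual_bisection(solve_inner, g, lam_max, tol=1e-8):
    """Bisection (dichotomy) on the single dual variable lambda >= 0.

    The dual function phi(lam) = min_x { f(x) + lam * g(x) } is concave,
    and g(x(lam)) is non-increasing in lam, so one can search for lam
    with g(x(lam)) close to 0 by bisection.
    solve_inner(lam) must return a minimizer x(lam) of the inner problem.
    """
    x = solve_inner(0.0)
    if g(x) <= 0:                  # constraint inactive: lambda = 0 is optimal
        return x, 0.0
    lo, hi = 0.0, lam_max          # assume g(solve_inner(lam_max)) <= 0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        x = solve_inner(lam)
        if g(x) > 0:               # constraint still violated: increase lambda
            lo = lam
        else:                      # constraint satisfied: decrease lambda
            hi = lam
    return x, 0.5 * (lo + hi)
```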

The goals of the thesis.

1. Development of adaptive Mirror Descent algorithms for solving convex optimization problems with functional constraints, in both deterministic and stochastic settings, with different levels of smoothness of the convex or strongly convex objective function (the objective function itself, its gradient, or its Hessian is Lipschitz-continuous).

2. Development of some Mirror Descent algorithms for online optimization problems with functional constraints, in both deterministic and stochastic settings.

3. Development of accelerated methods for a more general class of convex-concave composite saddle point problems and obtaining optimal estimates of the convergence rate of accelerated methods for the considered class of problems.

The tasks of the thesis.

1. Developing modifications of adaptive Mirror Descent algorithms for solving the problem of minimizing a convex function with convex functional constraints, in deterministic and stochastic settings.

2. Developing some algorithms for online optimization problems with functional constraints, in both deterministic and stochastic settings.

3. Developing adaptive algorithms for solving strongly convex optimization problems with one functional constraint.

4. Studying a more general class of convex-concave composite saddle point problems and obtaining estimates of the convergence rate of accelerated methods for non-bilinear convex-concave smooth composite saddle point problems.

Scientific novelty.

A new modification of adaptive deterministic and stochastic Mirror Descent algorithms was proposed for solving convex optimization problems with non-smooth functional constraints in the case when the objective function is Lipschitz-continuous. The proposed modification saves running time by considering not all functional constraints on non-productive steps. Specific estimates of the convergence rate were obtained for some adaptive and partially adaptive Mirror Descent algorithms in the case when the objective function is Lipschitz-continuous and when its Hessian is Lipschitz-continuous. Also, adaptive and non-adaptive algorithms for stochastic online optimization problems with functional constraints were proposed. For solving non-smooth, strongly convex optimization problems with one functional constraint, an adaptive algorithm was proposed whose stopping criterion can speed up its work compared with another optimal algorithm for some examples of non-smooth strongly convex problems. Also, a more general class of convex-concave saddle point problems was studied, and estimates of the convergence rate were obtained for accelerated methods for non-bilinear convex-concave smooth composite saddle point problems. Furthermore, numerical experiments were carried out for some examples to show the advantages of the proposed algorithms, in deterministic and stochastic settings of the considered optimization problems, and the advantages of using the restart technique for solving optimization problems with functional constraints in the case when the objective function and the functional constraints are strongly convex.

The theoretical and practical value of the work in the thesis.

The algorithms proposed in the thesis for convex optimization and online optimization problems with functional constraints, in both deterministic and stochastic settings, are adaptive. The adaptive adjustment is with respect to the Lipschitz constant of the objective function itself, or of its gradient or Hessian, as well as the Lipschitz constant of the functional constraints. This adaptivity of the proposed algorithms is very important in practice and in many applications, such as machine learning scenarios, large-scale optimization problems, and their applications. Also, the proposed modified algorithms are applicable to objective functions with various levels of smoothness (the Lipschitz condition holds either for the objective function itself or for its gradient or Hessian, while the function itself may fail to satisfy the Lipschitz condition), and they save running time. In the proposed algorithms, we consider an arbitrary proximal structure, which allows us to solve optimization problems with a non-Euclidean distance. Also, for solving the classical problem of minimizing a strongly convex function with one non-smooth functional constraint, the proposed algorithm provides adaptive adjustment with respect to the strong convexity parameter: strong convexity of the functional constraint is not required, and there is no need to know the value of the strong convexity parameter of the objective function, which is not the case for some optimal algorithms, such as Mirror Descent. The studied approach of accelerated methods for saddle point problems covers a more general class of convex-concave composite saddle point problems. Such problems arise, for example, in image processing and in solving various inverse problems. The obtained results can be generalized to more general settings, such as the stochastic setting. Also, instead of the Euclidean norm, one could consider more general norms and proximal structures; finally, one could try to consider more than two terms in the structure of the objective function.

Statements to be defended.

The statements to be defended in the thesis can be enumerated as follows:

1. A new modification of adaptive deterministic and stochastic Mirror Descent algorithms for solving convex optimization problems with non-smooth functional constraints in the case when the objective function is Lipschitz-continuous.

2. Specific estimates of the convergence rate for some adaptive and partially adaptive Mirror Descent algorithms in the case when the objective function is Lipschitz-continuous and when its Hessian is Lipschitz-continuous.

3. Adaptive and non-adaptive algorithms for stochastic online optimization problems with functional constraints.

4. An adaptive algorithm for solving non-smooth, strongly convex optimization problems with one functional constraint, and its comparison with an optimal Mirror Descent algorithm.

5. The study of a more general class of convex-concave saddle point problems and estimates of the convergence rate of accelerated methods for non-bilinear convex-concave smooth composite saddle point problems.

Presentations and validation of research results.

The results of the thesis were reported and discussed at the following scientific conferences and seminars:

1. 61st Scientific Conference MIPT. Moscow, 24.11.2018;

2. Conference on graphs, networks, and their applications (Workshop Network Optimization). Moscow, MIPT, 16.05.2019;

3. 18th International Conference on Mathematical Optimization Theory and Operations Research (MOTOR-2019), Ekaterinburg, Russia, July 8-12, 2019;

4. International conference "Equation of Convolution Type in Science and Technology" ECTST-2019, Simferopol, Russian Federation, September 25-28, 2019;

5. 62nd Scientific Conference MIPT. Moscow, 23.11.2019;

6. Fifth international conference Quasilinear Equations, Inverse Problems and Their Applications. Moscow, MIPT, 02.12.2019;

7. Scientific seminar in the laboratory of numerical methods of applied structural optimization (under the guidance of Professor Yu. G. Evtushenko). Moscow, MIPT, 29.05.2019 and 12.02.2020;

8. Scientific seminar of the Department of Algebra and Functional Analysis, Faculty of Mathematics and Computer Science, Taurida Academy, V. I. Vernadsky Crimea Federal University (under the guidance of Professor I. V. Orlov). Simferopol, Russia, 22.09.2019.

Publications

The results of the thesis are presented in five published papers (1)-(5) and in two papers accepted for publication, (6) and (7). Published papers:

(1) F. S. Stonyakin, M. S. Alkousa, A. N. Stepanov, M. A. Barinov: Adaptive mirror descent algorithms in convex programming problems with Lipschitz constraints. Trudy Instituta Matematiki i Mekhaniki URO RAN, 24(2), pp. 266-279, 2018. http://journal.imm.uran.ru/node/287 "Web of Science"

(2) A. A. Titov, F. S. Stonyakin, A. V. Gasnikov, M. S. Alkousa: Mirror Descent and Constrained Online Optimization Problems. In: Optimization and Applications. 9th International Conference OPTIMA-2018. Communications in Computer and Information Science, 974, pp. 64-78, 2019. https://link.springer.com/chapter/10.1007/978-3-030-10934-9_5 "Scopus"

(3) F. S. Stonyakin, M. S. Alkousa, A. N. Stepanov, A. A. Titov: Adaptive Mirror Descent Algorithms for Convex and Strongly Convex Optimization Problems with Functional Constraints. Journal of Applied and Industrial Mathematics, 13(3), pp. 557-574, 2019. https://link.springer.com/article/10.1134/S1990478919030165 "Scopus"

(4) M. S. Alkousa: On Some Stochastic Mirror Descent Methods for Constrained Online Optimization Problems. Computer Research and Modeling, 11(2), pp. 205-217, 2019. DOI: 10.20537/2076-7633-2019-11-2-205-217. http://crm-en.ics.org.ru/journal/article/2775/ "Scopus"

(5) F. S. Stonyakin, M. S. Alkousa, A. A. Titov, V. V. Piskunova: On Some Methods for Strongly Convex Optimization Problems with One Functional Constraint. In book: Mathematical Optimization Theory and Operations Research, 18th International Conference, MOTOR 2019, LNCS 11548, pp. 82-96, 2019. https://doi.org/10.1007/978-3-030-22629-9_7 "Scopus"

Papers accepted for publication:

(6) M. S. Alkousa: On Modification of an Adaptive Stochastic Mirror Descent Algorithm for Convex Optimization Problems with Functional Constraints. Accepted for publication as a chapter in the forthcoming book: Communications in Mathematical Computations and Applications, Springer. https://arxiv.org/pdf/1904.09513.pdf

(7) M. S. Alkousa, D. M. Dvinskikh, F. S. Stonyakin, A. V. Gasnikov, D. Kovalev: Accelerated Methods for Saddle Point Problems. Accepted for publication in the journal Comput. Math. and Math. Phys. https://arxiv.org/pdf/1906.03620.pdf



References

[1] M. Ahookhosh: Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity. Mathematical Methods of Operations Research, 89(3), pp. 319-353, 2019.

[2] M. S. Alkousa: On Some Stochastic Mirror Descent Methods for Constrained Online Optimization Problems. Computer Research and Modeling, 11(2), pp. 205-217, 2019.

[3] M. S. Alkousa: On Modification of an Adaptive Stochastic Mirror Descent Algorithm for Convex Optimization Problems with Functional Constraints. Accepted chapter for publication in the forthcoming book: Communications in Mathematical Computations and Applications, Springer. https://arxiv.org/pdf/1904.09513.pdf

[4] M. S. Alkousa, D. M. Dvinskikh, F. S. Stonyakin, A. V. Gasnikov, D. Kovalev: Accelerated Methods for Saddle Point Problems. Accepted for publication in the journal Computational Mathematics and Mathematical Physics. https://arxiv.org/pdf/1906.03620.pdf

[5] A. S. Antipin: Solution methods for variational inequalities with coupled constraints. Comput. Math. and Math. Phys., 40(9), pp. 1291-1307, 2000.

[6] A. S. Antipin, F. P. Vasil'ev: Regularized prediction method for solving variational inequalities with an inexactly given set. Comput. Math. and Math. Phys., 44(5), pp. 796-804, 2004.

[7] A. S. Antipin, V. Jacimovic, M. Jacimovic: Dynamics and variational inequalities. Comput. Math. Math. Phys. 57(5), pp. 784-801, 2017.

[8] A. Y. Aravkin, J. V. Burke, D. Drusvyatskiy: Convex Analysis and Nonsmooth Optimization. 2017. https://sites.math.washington.edu/~burke/crs/516/notes/graduate-nco.pdf

[9] B. Awerbuch, R. Kleinberg: Online linear optimization and adaptive routing. Journal of Computer and System Sciences. 74(1), pp. 97-114, 2008.

[10] W. Azizian, I. Mitliagkas, S. Lacoste-Julien, G. Gidel: A Tight and Unified Analysis of Extragradient for a Whole Spectrum of Differentiable Games. 2019. https://arxiv.org/pdf/1906.05945.pdf

[11] M. Baes, M. Bürgisser: An acceleration procedure for optimal first-order methods, Optimization Methods and Software, 9(3), pp. 610-628, 2014.

[12] K. Basu, P. Nandy: Optimal Convergence for Stochastic Optimization with Multiple Expectation Constraints. 2019. https://arxiv.org/pdf/1906.03401.pdf

[13] H. H. Bauschke, J. M. Borwein, P. L. Combettes: Bregman monotone optimization algorithms. SIAM Journal on Control and Optimization, 42(2), pp. 596-636, 2003.

[14] A. Bayandina, P. Dvurechensky, A. Gasnikov, F. Stonyakin, A. Titov: Mirror descent and convex optimization problems with non-smooth inequality constraints. In: Large-Scale and Distributed Optimization, Springer, Cham pp. 181-213, 2018.

[15] A. Bayandina, A. Gasnikov, E. Gasnikova, S. Matsievsky: Primal-dual mirror descent for the stochastic programming problems with functional constraints. Computational Mathematics and Mathematical Physics, 58(11), pp. 1728-1736, 2018

[16] A. Bayandina: Adaptive Stochastic Mirror Descent for Constrained Optimization. 2017 Constructive Nonsmooth Analysis and Related Topics (dedicated to the memory of V. F. Demyanov) (CNSA), pp. 40-43, 2017.

[17] A. Beck: First-Order Methods in Optimization. Society for Industrial and Applied Mathematics, Philadelphia 2017.

[18] A. Beck, M. Teboulle: Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett., 31(3), pp. 167-175, 2003.

[19] A. Beck, A. Ben-Tal, N. Guttmann-Beck, L. Tetruashvili: The comirror algorithm for solving nonsmooth constrained convex problems. Operations Research Letters, 38(6), pp. 493-498, 2010.

[20] A. Beck, M. Teboulle: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1), pp. 183-202, 2009.

[21] A. Ben-Tal, A. Nemirovski: Lectures on Modern Convex Optimization. Society for Industrial and Applied Mathematics, Philadelphia 2001.

[22] A. Ben-Tal, T. Margalit, A. Nemirovski: The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography. SIAM Journal on Optimization, 12(1), pp. 79-108, 2001.

[23] A. Ben-Tal, A. Nemirovski: Robust Truss Topology Design via semidefinite programming. SIAM Journal on Optimization, 7(4), pp. 991-1016, 1997.

[24] D. P. Bertsekas: Constrained optimization and Lagrange multiplier methods. Academic press, 2014.

[25] D. P. Bertsekas: Convex Optimization Theory. Athena Scientific, 2009.

[26] S. Boyd, L. Vandenberghe: Convex Optimization. Cambridge University Press, New York, 2004.

[27] L. M. Bregman: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR computational mathematics and mathematical physics 7(3), pp. 200-217, 1967.

[28] S. Bubeck: Convex optimization: algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4), pp. 231-357, 2015. https://arxiv.org/pdf/1405.4980.pdf

[29] S. Bubeck, R. Eldan: Multi-scale exploration of convex functions and bandit convex optimization. JMLR: Workshop and Conference Proceedings 49, pp. 1-7, 2016.

[30] S. Bubeck, N. Cesa-Bianchi: Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundation and Trends in Machine Learning, 5(1), pp. 1-122, 2012.

[31] A. Chambolle, T. Pock: A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. J. Math. Imaging Vis. 40(1), pp. 120-145, 2011.

[32] A. V. Chernov: Primal-dual methods to solve entropy-linear programming problems. PhD thesis at MIPT, 2017. (in Russian)

[33] A. R. Conn, K. Scheinberg, L. N. Vicente: Introduction to Derivative-Free Optimization. SIAM, Philadelphia, 2009.

[34] A. B. Conn, N. I. M. Gould, P. L. Toint: Trust Region Methods. SIAM, Philadelphia, 2000.

[35] J. M. Danskin: Theory of Max-Min. Sov. radio, 1970.

[36] V. F. Demyanov, V. N. Malozemov: Introduction to minimax. 1972.

[37] O. Devolder: Exactness, inexactness and stochasticity in first-order methods for large-scale convex optimization. PhD thesis. CORE UCL, 2013.

[38] O. Devolder, F. Glineur, Yu. Nesterov: First-order methods of smooth convex optimization with inexact oracle. Math. Program. Ser. A. 146(1-2), pp. 37-75, 2014.

[39] Z. Ding, Y. Chen, Q. Li, X. Zhu: Error Lower Bounds of Constant Step-size Stochastic Gradient Descent. 2019. https://arxiv.org/pdf/1910.08212.pdf

[40] T. T. Doan, S. Bose, D. H. Nguyen, C. L. Beck: Convergence of the Iterates in Mirror Descent Methods. IEEE Control Systems Letters, 3(1), pp. 114-119, 2019.

[41] J. C. Duchi: Introductory Lectures on Stochastic Convex Optimization. in Park City Mathematics Institute, Graduate Summer School Lectures, 2016.

[42] F. Facchinei, J. S. Pang: Finite-Dimensional Variational Inequality and Complementarity Problems. Vols. 1, 2. Springer-Verlag, New York, 2003.

[43] O. Fercoq, A. Alacaoglu, I. Necoara, V. Cevher: Almost surely constrained convex optimization. Proceedings of the 36th International Conference on Machine Learning, PMLR 97, pp. 1910-1919, 2019. http://proceedings.mlr.press/ v97/fercoq19a/fercoq19a.pdf

[44] A. V. Gasnikov: Searching for equilibria in large transport networks. 2016. https://arxiv.org/ftp/arxiv/papers/1607/1607.03142.pdf (Doctoral Dissertation, in Russian)

[45] A. V. Gasnikov: Modern numerical optimization methods. The method of universal gradient descent. 2018. https://arxiv.org/ftp/arxiv/papers/1711/1711.00394.pdf (in Russian)

[46] A. V. Gasnikov, A. A. Lagunovskaya, L. E. Morozova: On the relationship between simulation logit dynamics in the population game theory and a mirror descent method in the online optimization using the example of the shortest path problem. Proceedings of MIPT, 7(4), pp. 104-113, 2015. (in Russian)

[47] A. V. Gasnikov, A. A. Lagunovskaya, I. N. Usmanova, F. A. Fedorenko, E. A. Kry-mova: Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case. Automation and Remote Control, 78(2), pp. 224-234, 2017.

[48] A. V. Gasnikov, P. E. Dvurechensky, F. S. Stonyakin, A. A. Titov: Adaptive proximal method for variational inequalities. Comput. Math. Math. Phys. 59(5),pp. 836-841, 2019.

[49] A. V. Gasnikov, A. I. Tyurin: Fast Gradient Descent for Convex Minimization Problems with an Oracle Producing a (δ, L)-Model of Function at the Requested Point. Computational Mathematics and Mathematical Physics, 59(7), pp. 1085-1097, 2019.

[50] A. V. Gasnikov, P. E. Dvurechensky, D. I. Kamzolov, Y. E. Nesterov, V. G. Spokoiny, P. I. Stetsyuk, A. L. Suvorikova, A. V. Chernov: Searching for equilibriums in multistage transport models. Proceedings of MIPT, 7(8), pp. 143-155, 2015. (in Russian)

[51] A. V. Gasnikov, D. I. Kamzolov, M. A. Mendel: Basic design of convex optimization algorithms and their application to obtain new estimates for strongly convex problems. Proceedings of MIPT, 8(3), pp. 25-42, 2016. (in Russian)

[52] A. V. Gasnikov, P. E. Dvurechensky, Y. E. Nesterov: Stochastic gradient methods with inexact oracle. Proceedings of MIPT, 8(1), pp. 41-91, 2016. (in Russian)

[53] C. C. Gonzaga, E. W. Karas, D. R. Rossetto: An optimal algorithm for constrained differentiable convex optimization. SIAM Journal on Optimization, 23(4), pp. 1939-1955, 2013.

[54] E. Hazan, S. Kale: Beyond the regret minimization barrier: Optimal algorithms for stochastic strongly-convex optimization. JMLR. 15, pp. 2489-2512, 2014.

[55] E. Hazan: Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4), pp. 157-325, 2015.

[56] Y. Hao, M. J. Neely, W. Xiaohan: Online Convex Optimization with Stochastic Constraints. Published in NIPS, pp. 1427-1437, 2017.

[57] L. T. K. Hien, R. Zhao, W. B. Haskell: An Inexact Primal-Dual Smoothing Framework for Large-Scale Non-Bilinear Saddle Point Problems. 2019. https://arxiv.org/pdf/1711.03669v3.pdf

[58] R. Jenatton, J. Huang, C. Archambeau: Adaptive Algorithms for Online Convex Optimization with Long-term Constraints. Proceedings of The 33rd International Conference on Machine Learning, PMLR 48, pp. 402-411, 2016. http://proceedings.mlr.press/v48/jenatton16.pdf

[59] A. B. Juditsky, A. V. Nazin, A. B. Tsybakov, N. Vayatis: Recursive Aggregation of Estimators by the Mirror Descent Algorithm with Averaging. Problems of Information Transmission, 41(4), pp. 368-384, 2005.

[60] A. Juditsky, A. Nemirovski: First Order Methods for Non-smooth Convex Large-scale Optimization, I: General purpose methods, in Optimization for Machine Learning, S. Sra et al, Eds., Cambridge, MA: MIT Press, pp. 121-184, 2012.

[61] A. Juditsky, J. Kwon, E. Moulines: Unifying Mirror Descent and Dual Averaging. 2019. https://arxiv.org/pdf/1910.13742v1.pdf

[62] S. M. Kakade, S. Shalev-Shwartz, A. Tewari: On the duality of strong convexity and strong smoothness: learning applications and matrix regularization. 2009. http://ttic.uchicago.edu/~shai/papers/KakadeShalevTewari09.pdf

[63] A. Kalai, S. Vempala: Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71, pp. 291-307, 2005.

[64] G. Lan, A. Nemirovski, A. Shapiro: Validation analysis of mirror descent stochastic approximation method. Math. Program., 134(2), pp. 425-458, 2012.

[65] G. Lan: Gradient Sliding for Composite Optimization, Math. Program., 159(1-2), pp. 201-235, 2016.

[66] G. Lan, Z. Zhou: Algorithms for stochastic optimization with functional or expectation constraints. 2019. https://arxiv.org/pdf/1604.03887.pdf

[67] G. Lan: Lectures on Optimization Methods for Machine Learning. H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA. 2019.

[68] L. Lessard, B. Recht, A. Packard: Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization, 26(1), pp. 57-95, 2016.

[69] H. Lin, J. Mairal, Z. Harchaoui: Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice. Journal of Machine Learning Research, 18, pp. 1-54, 2018.

[70] Q. Lin, R. Ma, Y. Xu: Inexact Proximal-Point Penalty Methods for Non-Convex Optimization with Non-Convex Constraints. 2019. https://arxiv.org/pdf/1908.11518.pdf

[71] Q. Lin, R. Ma, T. Yang: Level-set methods for finite-sum constrained convex optimization. In International Conference on Machine Learning, pp. 3118-3127, 2018.

[72] T. Lin, C. Jin, M. I. Jordan: Near-Optimal Algorithms for Minimax Optimization. 2020. https://arxiv.org/pdf/2002.02417.pdf

[73] G. Lugosi, N. Cesa-Bianchi: Prediction, learning and games. New York, Cambridge University Press, 2006.

[74] D. V. N. Luong, P. Parpas, D. Rueckert, B. Rustem: A Weighted Mirror Descent Algorithm for Nonsmooth Convex Optimization Problem. J Optim Theory Appl 170(3), pp. 900-915, 2016.

[75] A. V. Nazin, B. M. Miller: Mirror Descent Algorithm for Homogeneous Finite Controlled Markov Chains with Unknown Mean Losses. Proceedings of the 18th World Congress The International Federation of Automatic Control Milano (Italy) August 28 - September 2, 2011.

[76] A. Nazin, S. Anulova, A. Tremba: Application of the Mirror Descent Method to Minimize Average Losses Coming by a Poisson Flow. European Control Conference (ECC) June 24-27, 2014.

[77] A. V. Nazin, A. S. Nemirovsky, A. B. Tsybakov, A. B. Juditsky: Algorithms of Robust Stochastic Optimization Based on Mirror Descent Method. Automation and Remote Control, 80(9), pp. 1607-1627, 2019.

[78] A. V. Nazin: Algorithms of Inertial Mirror Descent in Convex Problems of Stochastic Optimization. Automation and Remote Control, 79(1), pp. 78-88, 2018.

[79] A. Nemirovskii, Y. Nesterov: Optimal methods of smooth convex minimization. USSR Comput Math Math Phys, 25(2), pp. 21-30, 1985.

[80] A. Nemirovskii: Efficient methods for large-scale convex optimization problems. Ekonomika i Matematicheskie Metody, 1979. (in Russian)

[81] A. Nemirovsky, D. Yudin: Problem Complexity and Method Efficiency in Optimization. J. Wiley & Sons, New York 1983.

[82] A. Nemirovski, A. Juditsky, L. Guanghui, A. Shapiro: Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4), pp. 1574-1609, 2009.

[83] A. Nemirovski: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Philadelphia: SIAM, 2015. http://www2.isye.gatech.edu/~nemirovs/Lect_ModConvOpt.pdf

[84] A. Nemirovski: Prox-method with rate of convergence O(1/T) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1), pp. 229-251, 2004.

[85] Y. Nesterov: Introductory Lectures on Convex Optimization. Springer Optimization and Its Applications, 137, 2018.

[86] Y. Nesterov: Subgradient methods for convex functions with nonstandard growth properties. http://www.mathnet.ru:8080/PresentFiles/16179/growthbm_nesterov.pdf

[87] Y. Nesterov: Smooth minimization of non-smooth functions. Mathematical programming 103(1), 127-152, 2005.

[88] Y. Nesterov: Gradient methods for minimizing composite functions. Mathematical programming 140(1), 125-161, 2013.

[89] Y. Nesterov: A method of solving a convex programming problem with convergence rate O(1/k2). Soviet Mathematics Doklady, 27(2), pp. 372-376, 1983.

[90] Y. E. Nesterov: Algorithmic Convex Optimization. Dissertation for Doctor of Physical and Mathematical Sciences, 01.01.07. M.: MIPT, 2013. (in Russian)

[91] Y. Nesterov, A. Nemirovsky: Interior-Point Polynomial Methods in Convex Programming. volume 13 of Studies in Applied Mathematics, SIAM, Philadelphia, 1994.

[92] Y. Nesterov, B. Polyak: Cubic regularization of Newton's method and its global performance. Math. Program., 108(1), pp. 177-205, 2006.

[93] A. Neumaier: OSGA: a fast subgradient algorithm with optimal complexity, Mathematical Programming 158(1-2), pp. 1-21, 2016.

[94] J. Nocedal, S. J. Wright: Numerical Optimization. Springer, New York, 2006.

[95] B. O'Donoghue, E. Candès: Adaptive Restart for Accelerated Gradient Schemes. Found Comput Math, 15, pp. 715-732, 2015.

[96] Y. Ouyang, Y. Xu: Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems. Math. Program, pp. 1-35, 2019.

[97] I. B. Petrov, A. I. Lobanov: Lectures on computational mathematics. M: Internet University of Information Technology; BINOM. Knowledge lab, 2006. (in Russian)

[98] B. T. Polyak, A. B. Juditsky: Acceleration of stochastic approximation by averaging. SIAM J. Control and Optimization, 30(4), pp. 838-855, 1992.

[99] B. T. Polyak: New stochastic approximation type procedures. Automat. i Telemekh., 51(7), pp. 937-1008, 1990.

[100] B. T. Polyak: A general method of solving extremum problems. Soviet Mathematics Doklady, 8(3), pp. 593-597, 1967. (in Russian)

[101] B. T. Polyak: Introduction to Optimization, Optimization Software, Inc., Publications Division, New York, 1987.

[102] X. Qian, A. Sailanbayev, K. Mishchenko, P. Richtarik: MISO is Making a Comeback With Better Proofs and Rates. 2019. https://arxiv.org/pdf/1906.01474.pdf

[103] H. Robbins, S. Monro: A stochastic approximation method. Annals of Mathematical Statistics, 22(3), pp. 400-407, 1951.

[104] L. M. Rios, N. V. Sahinidis: Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56, pp. 1247-1293, 2013.

[105] D. Reem, S. Reich, A. De Pierro: Re-examination of Bregman functions and new properties of their divergences. Optimization, 68(1), pp. 279-348, 2019.

[106] D. Reem, S. Reich, A. De Pierro: A Telescopic Bregmanian Proximal Gradient Method Without the Global Lipschitz Continuity Assumption. Journal of Optimization Theory and Applications, 182, pp. 851-884, 2019.

[107] R. T. Rockafellar: Convex analysis. Princeton University Press, 1996.

[108] A. Shapiro, D. Dentcheva, A. Ruszczynski: Lectures on Stochastic Programming: Modeling and Theory. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2014.

[109] N. Z. Shor: Minimization Methods for non-differentiable functions. Springer Series in Computational Mathematics, Springer, 1985.

[110] N. Z. Shor: Generalized gradient descent with application to block programming. Kibernetika, 3(3), pp. 53-55, 1967. (in Russian)

[111] S. Shpirko, Y. Nesterov: Primal-dual subgradient methods for huge-scale linear conic problem. SIAM Journal on Optimization, 24(3), pp. 1444-1457, 2014.

[112] B. V. Scoy, R. A. Freeman, K. M. Lynch: The Fastest Known Globally Convergent First-Order Method for Minimizing Strongly Convex Functions. IEEE Control Systems Letters, 2(1), 2018.

[113] F. S. Stonyakin, M. S. Alkousa, A. N. Stepanov, M. A. Barinov: Adaptive mirror descent algorithms in convex programming problems with Lipschitz constraints. Trudy Instituta Matematiki i Mekhaniki URO RAN, 24(2), pp. 266-279, 2018. http://journal.imm.uran.ru/node/287

[114] F. S. Stonyakin, M. S. Alkousa, A. N. Stepanov, A. A. Titov: Adaptive Mirror Descent Algorithms for Convex and Strongly Convex Optimization Problems with Functional Constraints. Journal of Applied and Industrial Mathematics, 13(3), pp. 557-574, 2019. https://link.springer.com/article/10.1134/ S1990478919030165

[115] F. S. Stonyakin, M. S. Alkousa, A. A. Titov, V. V. Piskunova: On Some Methods for Strongly Convex Optimization Problems with One Functional Constraint. In: M. Khachay et al. (Eds.): Mathematical Optimization Theory and Operations Research, 18th International Conference, MOTOR 2019, Ekaterinburg, Russia, July 8-12, 2019, Proceedings. LNCS 11548, Springer, pp. 82-96, 2019. https://doi.org/10.1007/978-3-030-22629-9_7

[116] F. S. Stonyakin, A. A. Titov: One Mirror Descent Algorithm for Convex Constrained Optimization Problems with Non-Standard Growth Properties. In Proceedings of the School-Seminar on Optimization Problems and their Applications (OPTA-SCL 2018) Omsk, Russia, July 8-14, 2018. CEUR Workshop Proceedings, 2098, pp. 372-384, 2018.

[117] F. Stonyakin, A. Gasnikov, P. Dvurechensky, M. S. Alkousa, A. Titov: Generalized Mirror Prox for Monotone Variational Inequalities: Universality and Inexact Oracle. 2019. https://arxiv.org/pdf/1806.05140.pdf.

[118] F. S. Stonyakin, A. S. Ivanova, M. S. Alkousa, I. V. Baran: On an adaptive method for convex programming problems with one strongly convex functional constraint. Submitted paper to the Journal of Applied and Industrial Mathematics.

[119] A. A. Titov, F. S. Stonyakin, A. V. Gasnikov, M. S. Alkousa: Mirror Descent and Constrained Online Optimization Problems. In: Optimization and Applications. 9th International Conference OPTIMA-2018 (Petrovac, Montenegro, October 1-5, 2018). Revised Selected Papers. Communications in Computer and Information Science, 974, pp. 64-78, 2019. https://link.springer.com/chapter/10.1007/978-3-030-10934-9_5

[120] A. Tremba, A. Nazin: Extension of a saddle point mirror descent algorithm with application to robust PageRank. 52nd IEEE Conference on Decision and Control December 10-13, 2013.

[121] F. Vasilyev: Optimization Methods. Fizmatlit, Moscow, 2002. (in Russian)

[122] E. A. Vorontsova, A. V. Gasnikov, A. C. Ivanova, E. A. Nurminsky: The Walrasian equilibrium and centralized distributed optimization in terms of modern convex optimization methods on the example of resource allocation problem. Siberian J. Num. Math. Sib. Branch of Russ. Acad. of Sci., Novosibirsk, 22(4), pp. 415-436, 2019. (in Russian)

[123] X. Wei, H. Yu, Q. Ling, M. Neely: Solving non-smooth constrained programs with lower complexity than O(1/ε): A primal-dual homotopy smoothing approach. In Advances in Neural Information Processing Systems, pp. 3995-4005, 2018.

[124] M. Xiangrui, C. Hao: Accelerating Nesterov's Method for Strongly Convex Functions with Lipschitz Gradient. 2011. https://arxiv.org/pdf/1109.6058.pdf

[125] Y. Xu: Iteration complexity of inexact augmented lagrangian methods for constrained convex programming. Mathematical Programming, Series A, pp. 1-46, 2019.

[126] Y. Xu: Primal-dual stochastic gradient method for convex programs with many functional constraints. 2019. https://arxiv.org/pdf/1802.02724.pdf

[127] J. Yuan, A. Lamperski: Online convex optimization for cumulative constraints. Published in NIPS, pp. 6140-6149, 2018.

[128] V. G. Zhadan: Methods of optimization. Vol. 2, M.: MIPT, 2017. (in Russian)

[129] V. G. Zhadan: Methods of optimization. Vol. 3, M.: MIPT, 2017. (in Russian)

[130] J. Zhang, M. Hong, S. Zhang: On lower iteration complexity bounds for the saddle point problems. 2019. https://arxiv.org/pdf/1912.07481.pdf

[131] M. Zinkevich: Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In Proceedings of International Conference on Machine Learning (ICML), 2003.

List of Publications

1. F. S. Stonyakin, M. S. Alkousa, A. N. Stepanov, M. A. Barinov: Adaptive mirror descent algorithms in convex programming problems with Lipschitz constraints. Trudy Instituta Matematiki i Mekhaniki URO RAN, 24(2), pp. 266-279, 2018. http://journal.imm.uran.ru/node/287

2. A. A. Titov, F. S. Stonyakin, A. V. Gasnikov, M. S. Alkousa: Mirror Descent and Constrained Online Optimization Problems. In: Optimization and Applications. 9th International Conference OPTIMA-2018 (Petrovac, Montenegro, October 1-5, 2018). Revised Selected Papers. Communications in Computer and Information Science, 974, pp. 64-78, 2019. https://link.springer.com/chapter/10.1007/978-3-030-10934-9_5

3. F. S. Stonyakin, M. S. Alkousa, A. N. Stepanov, A. A. Titov: Adaptive Mirror Descent Algorithms for Convex and Strongly Convex Optimization Problems with Functional Constraints. Journal of Applied and Industrial Mathematics, 13(3), pp. 557-574, 2019. https://link.springer.com/article/10.1134/S1990478919030165

4. M. S. Alkousa: On Some Stochastic Mirror Descent Methods for Constrained Online Optimization Problems. Computer Research and Modeling, 11(2), pp. 205-217, 2019. DOI: 10.20537/2076-7633-2019-11-2-205-217. http://crm-en.ics.org.ru/journal/article/2775/

5. F. S. Stonyakin, M. S. Alkousa, A. A. Titov, V. V. Piskunova: On Some Methods for Strongly Convex Optimization Problems with One Functional Constraint. In: M. Khachay et al. (Eds.): Mathematical Optimization Theory and Operations Research, 18th International Conference, MOTOR 2019, Ekaterinburg, Russia, July 8-12, 2019, Proceedings. LNCS 11548, Springer, pp. 82-96, 2019. https://doi.org/10.1007/978-3-030-22629-9_7

6. F. S. Stonyakin, E. A. Vorontsova, M. S. Alkousa: New Version of Mirror Prox for Variational Inequalities with Adaptation to Inexactness. In: Jacimovic M., Khachay M., Malkova V., Posypkin M. (eds) Optimization and Applications. OPTIMA 2019. Communications in Computer and Information Science, vol 1145. Springer, Cham, pp. 427-442, 2020. https://link.springer.com/chapter/10.1007/978-3-030-38603-0_31

7. M. S. Alkousa: On Modification of an Adaptive Stochastic Mirror Descent Algorithm for Convex Optimization Problems with Functional Constraints. Accepted for publication as a chapter in the forthcoming book: Communications in Mathematical Computations and Applications, Springer. https://arxiv.org/pdf/1904.09513.pdf

8. M. S. Alkousa, D. M. Dvinskikh, F. S. Stonyakin, A. V. Gasnikov, D. Kovalev: Accelerated Methods for Saddle Point Problems. Accepted for publication in the journal Comput. Math. and Math. Phys. https://arxiv.org/pdf/1906.03620.pdf

9. F. Stonyakin, A. Gasnikov, P. Dvurechensky, M. S. Alkousa, and A. Titov: Generalized Mirror Prox for Monotone Variational Inequalities: Universality and Inexact Oracle. Submitted to SIAM, 2019. https://arxiv.org/pdf/1806.05140.pdf

10. F. S. Stonyakin, A. S. Ivanova, M. S. Alkousa, I. V. Baran: On an adaptive method for convex programming problems with one strongly convex functional constraint. The paper was submitted to the Journal of Applied and Industrial Mathematics.
