
FEDERAL EDUCATION AGENCY

STATE EDUCATIONAL INSTITUTION OF HIGHER PROFESSIONAL EDUCATION "SAMARA STATE AEROSPACE UNIVERSITY named after Academician S.P. KOROLEV"

Yu. Zabolotnov

OPTIMAL CONTROL OF CONTINUOUS DYNAMIC SYSTEMS

Approved by the University Editorial and Publishing Council as a teaching aid

SAMARA 2005


UDC 519.9+534.1

Reviewers: S.A. Ishkov, L.V. Kudyurov

Zabolotnov Yu.

Optimal control of continuous dynamic systems: a textbook / Yu. Zabolotnov; Samara State Aerospace University. Samara, 2005. 149 pp.: ill.

The manual includes a description of methods for optimal control of dynamic systems. Particular attention is paid to the optimal solution of the stabilization problem for linear dynamic systems. Along with the presentation of classical methods for optimal control of linear systems, based mainly on the Bellman principle of dynamic programming, approximately optimal control of oscillatory dynamic systems using the averaging method is considered.

The material of the manual is included in the course of lectures “Theoretical foundations of automated control”, given by the author for students of specialty 230102 - automated information processing and control systems at the departments of information systems and technologies, mathematics and mechanics of SSAU. However, the manual may be useful for students of other specialties when studying the theory of optimal control of dynamic systems.


PREFACE……………………………………………………. 5

1. BASIC THEORETICAL PROVISIONS OF OPTIMAL CONTROL OF DYNAMIC SYSTEMS………………………….………………………….. 8

1.1. Statement of the problem of optimal control of dynamic systems…………………………….…...8

1.2. Program optimal control and the stabilization problem ……………………………………………………. 11

1.3. Unperturbed and disturbed motions of a dynamic system…………………………………………….………….. 12

1.4. Statement of the problem of optimal motion stabilization for a linear dynamic system……………………………..… 14

2. CONTROLLABILITY AND OBSERVABILITY OF DYNAMIC SYSTEMS ………………………………….…. 16

2.1. Similarity transformations of linear dynamic systems …. 16

2.2. Controllability of dynamic systems.……………………….18

2.3. Observability of dynamic systems……………………….21

3. BELLMAN’S PRINCIPLE OF DYNAMIC PROGRAMMING AND LYAPUNOV’S THEORY OF STABILITY…….24

3.1. Bellman's principle of dynamic programming…….24

3.2. Optimal control of linear dynamic systems………………………………………………………..………… 29


3.3. Lyapunov's stability theory……………………………31

3.4. Connection of the dynamic programming method with Lyapunov’s theory of stability …………………………………………... 37

4. DETERMINATION OF OPTIMUM CONTROL FOR LINEAR DYNAMIC SYSTEMS……………………… 39

4.1. Solution of the Bellman equation for linear stationary dynamic systems..……………………………………………………………… 39

4.2. Solution of the Bellman equation for linear nonstationary dynamic systems..…………………………………………… 41

4.3. On the choice of optimality criterion when solving the stabilization problem……………………………………………………….43

4.4. An example of the optimal choice of controller coefficients when controlling a second-order linear system ….……….. 47

5. DYNAMIC OSCILLATORY SYSTEMS …………. 56

5.1. Small oscillations of dynamic systems…………………….…56

5.2. Controllability and observability of linear oscillatory dynamic systems………………………………………………………………. 65

5.3. Small parameter method..…………………………………….. 68

5.4. Averaging method..………………………………………….… 72

5.5. Averaging method for a system with one degree of freedom... 76

5.6. Averaging method for systems with several fast phases …………………………………………………………. 79

5.7. Averaging method for a system with two degrees of freedom ………………………………………………..…… 86

6. APPROXIMATE OPTIMAL CONTROL OF DYNAMIC OSCILLATORY SYSTEMS …. 93

6.1. Control of a linear oscillatory system with one degree of freedom………………………………………………………….… 93

6.2. Control of a linear oscillatory system with two degrees of freedom..……………………………………………………………………. 106

6.3. The influence of nonlinear disturbances on the solution of the optimal control problem …………………………… 115

LIST OF SOURCES USED…..…………127

APPENDIX 1. Similarity transformations of linear dynamic systems …………………………………………..… 129

APPENDIX 2. Qualitative study of linear dynamic systems on the phase plane …………………… 134

APPENDIX 3. Differentiation of functions with a vector argument………………………………………………………... 142

APPENDIX 4. Basic concepts of the theory of asymptotic series………………………………………………………………. 143

APPENDIX 5. Averaging of trigonometric functions ………………………………………..………………….. 148

PREFACE

Traditionally, classical control theory considers two main problems: the problem of determining the program motion of a dynamic system and the problem of designing controllers that implement a given program motion of the control object (stabilization problem). The main focus of the manual is on solving the stabilization problem, which is usually solved using linear dynamic models. Compared to static systems, in dynamic systems the process develops over time and control in the general case is also a function of time.

When solving the stabilization problem, various methods can be used. Here, first of all, it should be noted the classical methods of automatic control theory, based on the apparatus of transfer functions and frequency characteristics. However, the advent of high-speed computers led to the development of new methods that form the basis of modern control theory. In modern control theory, the behavior of a system is described in state space and control of the system comes down to determining the optimal, in a certain sense, control actions on the system at each moment in time. Moreover, mathematical models of continuous dynamic systems are usually systems of ordinary differential equations, in which time is the independent variable.

When solving a stabilization problem, control optimality is understood in the sense of the minimum of a certain optimality criterion (functional), which is written in the form of a definite integral. The optimality criterion can characterize various aspects of control quality: control costs (energy, fuel, etc.), control errors (for various state variables), etc. To determine the optimal control when solving the stabilization problem, the classical Bellman principle of dynamic programming is used.

The first section of the manual is introductory: it contains a mathematical formulation of problems solved in the control of continuous dynamic systems. The second section is devoted to issues that precede the construction of optimal control for linear systems: issues of controllability and observability. In the third section, the basic relations of the Bellman dynamic programming principle are derived, from which the optimal control for a linear dynamic system is further determined when solving the stabilization problem. In the same section it is shown that Bellman's principle of dynamic programming for linear systems is organically connected with the second Lyapunov method, the fulfillment of the theorems of which provides a solution to the stabilization problem. The fourth section of the manual outlines algorithms for determining optimal control when solving the stabilization problem for a given quadratic optimality criterion (the integrand of the functional is a quadratic form of the control and state variables of the system). An example is given of determining optimal control with a given optimality criterion for a specific linear system. The fifth section outlines the fundamentals of the theory of dynamic oscillatory systems. The basic relations of the averaging principle are derived, which in many cases makes it possible to significantly simplify the analysis and synthesis of oscillatory systems. The sixth section discusses a method for determining approximately optimal control for the problem of stabilization by oscillatory systems. Examples of control of oscillatory systems with one and two degrees of freedom are given. The issues of the possible influence of nonlinear disturbances on solving problems of stabilization of oscillatory systems are analyzed.

The methods presented in the manual make it possible to find optimal control for solving problems of stabilization of dynamic systems in the form of analytical functions depending on the state variables of the system. In this case, they say that the problem of control synthesis is being solved. These methods can be attributed to the theory of analytical design of regulators, which is one of the important directions in the development of modern control theory.

The material in the manual is based on works in the field of control theory that have become classics over time. Here, first of all, one should note the works of L.S. Pontryagin, A.M. Letov, B.P. Demidovich, D. Grop, R. Bellman, N.N. Moiseev, N.N. Bogolyubov, Yu.A. Mitropolsky, and other well-known Russian and foreign scientists.


1. BASIC THEORETICAL PROVISIONS OF OPTIMAL CONTROL OF DYNAMIC SYSTEMS

1.1. Statement of the problem of optimal control of dynamic systems

Mathematical models of dynamic systems can be constructed in various forms: systems of ordinary differential equations, partial differential equations, corresponding discrete models, etc. A distinctive feature of the mathematical description of any dynamic system is that its behavior develops in time and is characterized by functions x1(t), …, xn(t), which are called the state variables (phase coordinates) of the system. In what follows we consider systems with continuous time. The motion of a dynamic system can be controlled or uncontrolled. When controlled motion is implemented, the behavior of the dynamic system also depends on control functions u1(t), …, ur(t). Let us also assume that the behavior of the system is determined uniquely if the vector control function u(t) and the initial phase state x(t0) are given, where t0 is the initial time.

As a mathematical model of a dynamic system, we will consider a system of ordinary differential equations written in Cauchy normal form

dx/dt = f(x, u, t), (1.1)

where x = (x1, …, xn) is the state vector, u = (u1, …, ur) is the control vector, and f is a known vector function.

Various mathematical models of dynamic systems with continuous time are most often reduced to system (1.1). For example, if the behavior of a dynamic system is described by a system of partial differential equations and unfolds in space and time (mathematical models of continuum mechanics), then, by discretizing over space (the finite element approach), we arrive at a system of ordinary differential equations similar to (1.1), whose solution is sought as a function of time.

The previously introduced assumption about the uniqueness of the control process for system (1.1) is determined by the fulfillment of the conditions of the theorem on the existence and uniqueness of solutions to systems of ordinary differential equations in Cauchy form.
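Given a right-hand side f satisfying these existence and uniqueness conditions, system (1.1) can be integrated numerically. A minimal sketch in Python; the second-order plant and the simple feedback law are illustrative assumptions, not taken from the text:

```python
# Integrate dx/dt = f(x, u, t) (Cauchy normal form, cf. (1.1)).
# The plant and the feedback law below are assumed for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def u(t, x):
    return -x[0]                      # simple state feedback (assumed)

def f(t, x):
    # x = (position, velocity); a damped second-order plant
    return np.array([x[1], -2.0 * x[0] - 0.5 * x[1] + u(t, x)])

sol = solve_ivp(f, (0.0, 10.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])                   # state at the final time tf = 10
```

With this stable closed loop the state decays toward zero, which is exactly the behavior the stabilization problem discussed in this section aims to guarantee.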

Let us formulate the problem of optimal control of system (1.1). At the initial moment t0, system (1.1) is in the state x(t0) = x0; it is necessary to determine a control u(t) that will transfer the system to a given final state xf (different from the initial one) at the final time tf. It is usually required that the transition from the point x0 to the point xf (the transient process) be, in some sense, the best of all possible transitions. For example, if a certain technical system is considered, the transient process may be required to satisfy the condition of minimum expended energy or minimum transition time. Such a best transient process is usually called an optimal process.

The control function u(t) usually belongs to some control domain U, which is a set in r-dimensional Euclidean space. In technical applications it is assumed that U is a closed region, that is, a region that includes its boundary. An admissible control is any control that transfers the system from the point x0 to the point xf. For a quantitative comparison of different admissible controls, an optimality criterion is introduced, which, as a rule, is presented in the form of a functional

J = ∫_{t0}^{tf} f0(x, u, t) dt. (1.2)

The functional is calculated on the solutions of system (1.1) satisfying the conditions x(t0) = x0 and x(tf) = xf, for a given admissible control u(t).

Finally, the optimal control problem is formulated as follows: two points x0 and xf are given in the phase space; among all admissible controls that transfer the phase point from the position x0 to the position xf, it is required to find one for which the functional (1.2) takes the smallest value.

The control that solves the problem posed above is called the optimal control and is denoted u*(t); the corresponding trajectory x*(t) is the optimal trajectory.

Comment. If it is necessary to ensure the maximum of some criterion, this problem can be reduced to a minimization problem by formally changing the sign in front of the functional (1.2).

A special case of the stated optimal control problem is the case f0 ≡ 1. Then the functional (1.2) takes the form J = tf - t0, and optimality consists in realizing the minimum transition time from the point x0 to the point xf. This optimal control problem is called the time-optimal (performance) problem.


1.2. Program optimal control and the stabilization problem

Let us consider the motion of the dynamic system (1.1). Suppose the optimal control has been found for this system and the corresponding optimal trajectory obtained. When implementing an optimal trajectory in technical problems, one inevitably encounters significant difficulties: first, the impossibility of setting the real system (or control object) exactly to the initial state; second, the impossibility of implementing the optimal control itself exactly; and third, the impossibility of predicting in advance the external conditions of the system's operation (the original mathematical model is only approximate). All this leads to the need to correct the optimal control law during the operation of any technical system (or object). Thus, the optimal control problem in real conditions can be divided into two parts: 1) construction of a nominal optimal control of the original dynamic system under ideal conditions within the framework of the mathematical model (1.1); 2) construction of corrective control actions in order to implement the given nominal optimal control and optimal trajectory during the operation of the system. The first part is usually called the problem of constructing the optimal program control, and it is solved within the framework of a priori information known in advance about the system under consideration. The second part is called the problem of stabilization of the given nominal control program, and it must be solved during the operation of the system using information received from the measuring devices of the control system. The problem of stabilizing the nominal control program can itself be posed as a problem of finding the optimal control with respect to an appropriate criterion, which will be done below (see Section 1.4).

Comment. Obviously, not only optimal control can be used as a nominal control program, but also any other admissible control (if the program control optimization problem is not solved). In the simplest particular case, for example, the task of stabilizing a certain constant position of the system can be posed.

1.3. Unperturbed and perturbed motion of a dynamic system

Since the real motion of a system inevitably differs from the nominal program motion, this fact led A.M. Lyapunov to the concepts of unperturbed and perturbed motion. Thus, any program motion of system (1.1), regardless of whether it is optimal or merely admissible, is called the unperturbed motion. This motion corresponds to some particular solution of system (1.1). Perturbed motion is assessed by deviations from the unperturbed motion. Consequently, the perturbed motion will be described by the following variables

x(t) = x°(t) + Δx(t), u(t) = u°(t) + Δu(t), (1.3)

where the variables x°(t) and u°(t) characterize the nominal control program, and Δx(t) and Δu(t) are the deviations from the nominal program.

Substituting relations (1.3) into system (1.1), we obtain

By adding and subtracting the same term on the right side of system (1.4) and taking into account that

we obtain the system in deviations from the nominal movement

where , , and are determined as a result of solving system (1.5).

It is usually assumed that the deviations from the nominal motion are small. Therefore, if we expand the function f into a Taylor series and introduce the notation A(t) = (∂f/∂x)°, B(t) = (∂f/∂u)°, where the superscript (°) means that the partial derivatives are evaluated on the given nominal program, we obtain

Here the remainder function collects the terms of second and higher order in the deviations; the matrices A(t) and B(t) select the linear part of the series and have components ∂fi/∂xj and ∂fi/∂uj, respectively.

The equations written in deviations (1.7) are of great importance in control theory. Based on these equations, a large number of optimization problems of practical interest are formulated. One of these problems is the stabilization problem formulated above. When solving this problem, it is necessary to determine how corrective control actions should be selected in order to reduce deviations in some sense in the best way.
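When analytic differentiation of f is inconvenient, the matrices of the linear part of the deviation equations can be formed numerically. A minimal sketch; the pendulum-like plant and the finite-difference step are illustrative assumptions:

```python
# Sketch: forming A = (∂f/∂x)° and B = (∂f/∂u)° of the deviation
# equations by finite differences about a nominal point.
# The plant f below is assumed for illustration.
import numpy as np

def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])   # pendulum-like plant

def jacobians(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):                 # perturb each state component
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):                 # perturb each control component
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

A, B = jacobians(f, np.zeros(2), np.zeros(1))
print(A)   # ≈ [[0, 1], [-1, 0]] at the lower equilibrium
print(B)   # ≈ [[0], [1]]
```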

1.4. Statement of the problem of optimal motion stabilization for a linear dynamic system

Most often, when solving the problem of stabilizing the motion of a system or control object, a linear dynamic system in deviations is used, obtained from system (1.7) by discarding nonlinear terms. Then

where the matrices A and B in the general case are functions of time, since they depend on the nominal control program. The corrective control is sought in the form of a feedback law, as a function of the deviations and time; in this case one says that the problem of control synthesis is being solved. Let us consider the case when the matrix A does not have multiple (repeated) eigenvalues. In this case a similarity transformation reduces the matrix A to diagonal form, Λ = T⁻¹AT, where Λ is a diagonal matrix on whose main diagonal stand the eigenvalues of A (the proof is given in Appendix 1).
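The diagonalization just described can be checked numerically. A minimal sketch; the matrix A, with distinct eigenvalues, is an illustrative assumption:

```python
# Sketch: reducing a matrix with distinct eigenvalues to diagonal form
# by a similarity transformation, Λ = T⁻¹ A T (cf. Appendix 1).
# The matrix A is assumed for illustration.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # eigenvalues -1 and -2, distinct
eigvals, T = np.linalg.eig(A)         # columns of T are eigenvectors
Lam = np.linalg.inv(T) @ A @ T        # similarity transformation
print(np.round(Lam, 10))              # diagonal matrix of eigenvalues
```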

MINISTRY OF EDUCATION AND SCIENCE

RUSSIAN FEDERATION

MOSCOW STATE UNIVERSITY

FACULTY OF PHYSICS

Department of Physics and Mathematics Methods of Control

TASKS

for course work

"Optimal control of linear dynamic systems"

course "Optimal control"

Compiled by: Prof., Doctor of Technical Sciences Afanasyev V.N.

Moscow 2014

  1. GOAL OF THE WORK

Mathematical design of optimal linear control systems.

  2. CONTENT OF THE WORK
    1. Studying the necessary theoretical material from the sources;
    2. Obtaining an analytical solution to the problem;
    3. Drawing up a block diagram of the control system;
    4. Acquiring skills in mathematical modeling of a control system using the MatLab package.
  3. WORK TIME

VIII semester, 4th year.

Assignments are given in the 5th academic week.

Completed work is accepted in weeks 10 and 11.

BASIC THEORETICAL PROVISIONS.

FORMULATION OF THE PROBLEM

Many control objects can be described quite accurately by linear dynamic models. By judiciously choosing quadratic quality criteria and quadratic constraints, it is possible in this case to synthesize very successful control devices with linear feedback.

Let the controlled dynamic system be described by the linear differential equations

dx/dt = A(t)x + B(t)u, y = C(t)x, (1)

where x(t) is the n-dimensional system state, u(t) is the r-dimensional control input, and y(t) is the m-dimensional system output. Accordingly, the matrices A(t), B(t), C(t) have dimensions n x n, n x r, m x n. Let us assume that no restrictions are imposed on the control.

Let us state the purpose of the system from a physical point of view. Let z(t) be the “desired” output of the system. It is necessary to find a control u(t) for which the system error

e(t) = z(t) - y(t) (2)

would be “small”.

Since the control u(t) in the problem under consideration is not bounded, then in order to avoid large efforts in the control loop and high energy consumption, an appropriate term taking these factors into account can be introduced into the quality criterion.

It is often also important that the error be “small” at the final moment of the transient process.

The translation of these physical requirements into one or another mathematical functional depends on many factors. Here we consider a particular class of quality criteria of the following form:

J = eᵀ(T)F e(T) + ∫_{t0}^{T} [eᵀ(t)Q(t)e(t) + uᵀ(t)R(t)u(t)] dt, (3)

where F and Q(t) are positive semidefinite matrices of dimension m x m, and R(t) is a positive definite matrix of dimension r x r.

Let us consider each term of the functional (3), starting with eᵀ(t)Q(t)e(t). Obviously, since the matrix Q(t) is positive semidefinite, this term is nonnegative for any e(t) and equals zero at e(t) = 0. Since eᵀ(t)Q(t)e(t) = Σ q_ij(t) e_i(t) e_j(t), where q_ij(t) is an element of the matrix Q(t) and e_i(t), e_j(t) are components of the vector e(t), large errors are valued “more expensively” than small ones.

Let us consider the term uᵀ(t)R(t)u(t). Since R(t) is a positive definite matrix, this term is positive for any u(t) ≠ 0 and “punishes” the system more strongly for large control actions than for small ones.

Finally, consider the term eᵀ(T)F e(T). This term is often called the cost of the final state. Its purpose is to guarantee the “smallness” of the error at the final moment of the transient process.

Quality criterion (3) is mathematically convenient, and its minimization leads to optimal systems that turn out to be linear.

The optimal control problem is formulated as follows: a linear dynamic controlled system (1) and a functional (3) are given. It is required to find the optimal control, i.e., the control under whose influence system (1) moves in such a way that the functional (3) is minimized. Solutions will be sought both for problems with an open (unbounded) range of control actions and for problems in which the control actions belong to a given set.
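In the time-invariant special case with F = 0, T → ∞ and desired output z ≡ 0, this problem reduces to the algebraic Riccati equation. A minimal sketch; the double-integrator plant and the unit weights are illustrative assumptions, not one of the assignment variants:

```python
# Sketch: steady-state LQR special case of problem (1), (3)
# (time-invariant A, B; F = 0, T → ∞, z ≡ 0).
# Plant and weights are assumed for illustration.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (assumed)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state-error weight
R = np.array([[1.0]])                     # control weight
P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati solution
K = np.linalg.inv(R) @ B.T @ P            # optimal feedback u = -K x
print(K)                                  # [[1.0, 1.7320...]]
```

For this plant the gain works out analytically to K = [1, √3], and the closed-loop matrix A - BK is stable, so the sketch can be checked by hand.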

  4. EXERCISE
    1. Study the method for constructing optimal controls for linear dynamic systems;
    2. In accordance with the variant number, take the problem statement from the appendix;
    3. Check the controllability and observability properties;
    4. Build a Luenberger observer;
    5. Obtain an analytical solution to the problem;
    6. Draw a block diagram of the optimal control system;
    7. Study the influence of the weighting coefficients on the quality of transient processes and on the value of the quality functional;
    8. Carry out mathematical modeling of the control system using the MatLab package.
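Step 3 of the exercise (the Kalman rank tests) can be sketched as follows; the matrices A, B, C below are illustrative assumptions, not one of the assignment variants:

```python
# Sketch: Kalman rank tests for controllability and observability.
# The system matrices are assumed for illustration.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# controllability matrix [B, AB, ..., A^{n-1}B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
# observability matrix [C; CA; ...; CA^{n-1}]
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(ctrb) == n)   # controllable?
print(np.linalg.matrix_rank(obsv) == n)   # observable?
```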

APPENDIX

Control object:

Functional: .

Option No. 1

Consider when:

  1. ;

Option No. 2

Consider when:

  1. ;

Option No. 3

Consider when:

  1. ;

Option No. 4

Consider when:

  1. ;

Option No. 5

Consider when:

  1. ;

Option No. 6

Consider when:

  1. ;

Option No. 7

Consider when:

  1. ;

Option No. 8

Consider when:

  1. ;

Option No. 9

Consider when:

  1. ;

Option No. 10

Consider when:

  1. ;

Option No. 11

Consider when:

  1. ;

Option No. 12

Consider when:

  1. ;

Option No. 13

Consider when:

  1. ;

Option No. 14

Consider when:

14.1. ;

14.2. .

Option No. 15

Consider when

15.1. ;

15.2. .

LITERATURE

  1. Afanasyev V.N., Kolmanovsky V.B., Nosov V.R. Mathematical theory of design of control systems. M.: Higher School, 2003, 616 p.
  2. Afanasyev V.N. Theory of optimal control of continuous dynamic systems. Analytical design. M.: Physics Faculty of Moscow State University, 2011, 170 p.
  3. Afanasyev V.N. Optimal control systems. M.: RUDN University, 2007, 260 p.

Introduction. The market economy in Ukraine requires new approaches to management: economic and market efficiency criteria come to the fore. Scientific and technological progress and the dynamics of the external environment force modern manufacturing enterprises to transform into more complex systems that require new management methods. Strengthening the market orientation of enterprises and sudden changes in the external environment necessitate the development of competitive management systems designed to develop complex management decisions, and therefore more effective approaches and algorithms for solving large-scale problems.

The work was carried out in accordance with the state scientific and technical program 6.22 (advanced information technologies and systems) and the plans of scientific and scientific-technical activities of the Odessa Order of Lenin Institute of the Ground Forces for 2004, under the corresponding research topics.

Analysis of recent research. Currently, one of the main and most effective approaches to solving high-dimensional control problems is decomposition. This approach combines a group of methods based on decomposing the original high-dimensional problem into subproblems, each of which is significantly simpler than the original and can be solved independently of the others. The connection between the individual subproblems is maintained by a “coordinating” problem, which is also simpler than the original. To this end, the control problem is brought to a form that satisfies the requirements of decomposition, the main of which are: additivity (separability) of the objective function; block structure of the constraints; block couplings. However, when solving practical problems of synthesizing high-dimensional optimal control, it is often difficult to satisfy these requirements. For example, the quality of operation of a production system may be assessed by a criterion of a very general form, which may be non-separable with respect to the control problems of the individual subsystems. Therefore, when converting the original control problem to a form that satisfies the requirements of decomposition, various simplifications and approximations are inevitable, as are various options for dividing the problem into local subproblems, i.e., into blocks of constraints and interblock couplings. All these factors influence both the quality of the solution and the complexity of the calculations required to find the optimal solution.

Since no methods are available to date for qualitatively assessing the influence of the listed factors on the quality of the solution, it seems relevant to develop a method for solving the high-dimensional problem that would leave a certain freedom in choosing the structure of the local problems and would make it possible to assess the impact of various simplifications on the quality of the solutions.

From the analysis of literature sources it follows that acceptable numerical methods for solving nonlinear optimization problems are associated with significant costs of computer time and memory, and the use of linearization leads to losses in control quality. Therefore, it is advisable that the new method being developed for solving the problem preserves its nonlinear nature, and the optimal control is determined within the framework of a decentralized computing structure.

The object of research is algorithms for solving large-dimensional control problems.

The subject of research is the development of an approach based on the idea of equivalence or quasi-equivalence of the original high-dimensional problem and the corresponding block decomposition problem.

The scientific task is to develop algorithms, the use of which would ensure optimal control within a decentralized structure, without the need for iterative exchange of information between control levels.

The goal of the work is to develop and supplement elements of applied theory and problem-oriented tools for optimizing large-dimensional control problems.

The scientific novelty lies in the development of an approach to the synthesis of optimization algorithms for large-scale control problems within the framework of a decentralized computing structure, in which there is no need to organize an iterative process between control levels.

Main material. Let the problem of optimal control of the continuous dynamic system under consideration be determined by the differential equation

(1)

with the criterion

(2)

subject to

where x is the n-dimensional state vector and u is the m-dimensional control vector; f is an n-dimensional function whose components are continuously differentiable with respect to their arguments; the integrand of the criterion is a convex, differentiable scalar function; t0 and tf are the specified initial and final times, respectively.

In order to represent the control object (1) as a set of interacting subsystems, let us expand (1) in a Taylor series about the equilibrium point:

(3)

In expression (3), A and B are the block-diagonal parts of the matrices ∂f/∂x and ∂f/∂u, respectively, with blocks Ai and Bi, while the barred matrices are their off-diagonal parts.

By introducing a coupling vector in such a way that its i-th component is determined by the expression

(4)

we can write the equation of the i-th subsystem

where ui is the subsystem control vector, xi the subsystem state vector, and hi the n-dimensional coupling vector.

The proposed decomposition method for synthesizing optimal controls is as follows. A subsystem that takes its coupling with the other subsystems into account through the vector hi will be called isolated.

The composition of the subsystems i = 1, 2, …, P is represented by the model

(5)

where A and B are block-diagonal matrices with blocks Ai and Bi, respectively.

Let us formulate the criterion

, (6)

where Q is a positive semidefinite block-diagonal matrix with blocks Qi; R is a positive definite block-diagonal matrix with blocks Ri; u° is the optimal control.

We determine the matrices Q and R from the condition of quasi-equivalence of problems (1)–(2) and (5)–(6).

To determine matrix elements, we have a system of algebraic equations

. (7)

After solving equation (7), we have, owing to the block-diagonal structure of the matrices, P independent optimization problems.

The local optimal control has the form

(8)

where the matrix entering (8) satisfies the linear differential equation

(9)

The global solution is the composition of the local optimal solutions

. (10)

Conclusions. Thus, the synthesis of optimal control for the original high-dimensional problem (1)–(2) reduces to the following: formulation of the local optimization problems (5)–(6); determination of the parameters of the local problems by formulas (3) and (6); solution of the local problems according to (8)–(9); composition of the local solutions (10).
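The per-subsystem solve and the composition step can be sketched in a steady-state form. The sketch below replaces the time-varying equations (9) with their algebraic (infinite-horizon) counterpart, and the subsystem matrices and weights are illustrative assumptions:

```python
# Sketch: solve each isolated subsystem's LQ problem independently,
# then compose the local gains block-diagonally (cf. steps (8)–(10)).
# The algebraic Riccati form and all matrices are assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

subsystems = [
    (np.array([[0.0, 1.0], [0.0, 0.0]]), np.array([[0.0], [1.0]])),
    (np.array([[-1.0]]), np.array([[2.0]])),
]

gains = []
for A_i, B_i in subsystems:
    Q_i = np.eye(A_i.shape[0])                   # local weights (assumed)
    R_i = np.eye(B_i.shape[1])
    P_i = solve_continuous_are(A_i, B_i, Q_i, R_i)
    gains.append(np.linalg.inv(R_i) @ B_i.T @ P_i)   # local control gain

K = block_diag(*gains)    # composition of local solutions
print(K.shape)            # (2, 3): block-diagonal global gain
```

Because the subproblems share no variables, each Riccati solve can run independently, which is the point of the decentralized computing structure discussed above.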

Losses in quality under the approximate approach to the synthesis of optimal controls can be estimated using the formulas proposed in the literature.

A new approach to the solution of control problems is proposed, based on the idea of the equivalence of the initial high-dimensional problem and the corresponding block decomposition problem.

1. Mesarovic M., Macko D., Takahara Y. Theory of hierarchical multilevel systems. – M.: Mir, 1973.

2. Lasdon L.S. Optimization theory for large systems. – M.: Mir, 1975.

3. Albrecht E.G. On the optimal stabilization of nonlinear systems. – Applied mathematics and mechanics, 1961, vol. 25.

4. Zhivoglyadov V.P., Krivenko V.A. A method for decomposing large-dimensional control problems with a non-separable quality criterion. Abstracts of the II All-Union Interuniversity Conference “Mathematical, algorithmic and technical support of automated process control systems.” Tashkent, 1980.

5. Hassan Mohamed, Singh Madan G. The optimization of non-linear systems using a new two level method. – Automatica, 1976, 12, No. 4.

6. Mahmoud M.S. Dynamic multilevel optimization for a class of non-linear systems, “Int. J. Control”, 1979, 30, No. 6.

7. Krivenko V.A. Quasi-equivalent transformation of optimization models in problems of synthesis of control algorithms. – In the book: Adaptation and optimization in large systems. – Frunze, 1985.

8. Krivenko V.A. A method for synthesizing control algorithms using the idea of modifying the objective function. – Frunze, 1985.

9. Rumyantsev V.V. On optimal stabilization of controlled systems. – Applied Mathematics and Mechanics, 1970, issue 3.

10. Ovezgeldyev A.O., Petrov E.T., Petrov K.E. Synthesis and identification of multifactor evaluation and optimization models. – K.: Naukova Dumka, 2002.

Answers to questions

In the examples considered (the knapsack loading problem and the reliability problem), only one variable was used to describe the states of the system, and the control was also specified by one variable. In general, in dynamic programming models the states and controls can be described by several variables, which form state and control vectors.

An increase in the number of state variables causes an increase in the number of possible solutions associated with each of the stages. This can lead to the so-called “curse of dimensionality” problem, which is a serious obstacle when solving medium- and high-dimensional dynamic programming problems.

As an example, consider the knapsack loading problem, but under two restrictions (for example, restrictions on weight and on volume). In standard notation the model is: maximize

z = c_1 x_1 + … + c_n x_n

subject to

a_11 x_1 + … + a_1n x_n ≤ b_1,
a_21 x_1 + … + a_2n x_n ≤ b_2,   (1)
x_j ≥ 0 and integer, j = 1, …, n,

where c_j is the value of an item of type j, and a_1j, a_2j are its weight and volume. Since the problem has two types of resources, it is necessary to introduce two state parameters: the unused amount y_1 of the first resource and y_2 of the second. Restrictions (1) can then be written in the form 0 ≤ y_1 ≤ b_1, 0 ≤ y_2 ≤ b_2.

In the recurrence equations of the dynamic programming method for the knapsack problem with the two restrictions (1),

f_j(y_1, y_2) = max { c_j x_j + f_{j-1}(y_1 − a_1j x_j, y_2 − a_2j x_j) },

where the maximum is taken over feasible integer x_j, each of the functions f_j is a function of two variables. If each of the variables can take 10^2 values, then each f_j has to be tabulated at 10^4 points. In the case of three state parameters, under the same assumptions, 10^6 function values would have to be calculated.

So, the most serious obstacle to the practical application of dynamic programming is the number of parameters of the problem.
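To make the growth of the state space concrete, the recurrence above can be sketched in Python. The item values, weights, volumes and the two capacity limits below are illustrative assumptions, not data from the text:

```python
from functools import lru_cache

# Illustrative data (assumed): value, weight, volume of each item type
values  = [6, 10, 12]
weights = [1, 2, 3]
volumes = [2, 2, 1]
B1, B2  = 5, 5          # weight and volume capacities

@lru_cache(maxsize=None)
def f(j, y1, y2):
    """Best value using item types 0..j with y1 weight and y2 volume left."""
    if j < 0:
        return 0
    best = 0
    x = 0
    while x * weights[j] <= y1 and x * volumes[j] <= y2:
        best = max(best, x * values[j] + f(j - 1, y1 - x * weights[j],
                                           y2 - x * volumes[j]))
        x += 1
    return best

print(f(len(values) - 1, B1, B2))   # optimal value: 24
# Each f_j must in principle be tabulated over all (y1, y2) pairs:
print((B1 + 1) * (B2 + 1))          # states per stage: 36
```

Memoization computes each state (j, y_1, y_2) once; the last line is exactly the per-stage table size whose growth with the number of state variables constitutes the "curse of dimensionality".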

Inventory management problem.

The problem of inventory management arises when it is necessary to create a stock of material resources or consumer goods in order to satisfy demand over a given time interval (finite or infinite). Any inventory management task requires determining the quantity of products to be ordered and the timing of order placement. Demand can be satisfied by creating a one-time stock for the entire time period under consideration or by creating a stock for each time unit of this period. The first case corresponds to excess inventory in relation to a unit of time, the second - insufficient inventory in relation to the full period of time.

With excess inventory, higher specific (per unit time) capital investments are required, but shortages occur less frequently and the frequency of ordering is lower. On the other hand, when there is insufficient inventory, specific capital investment is reduced, but order frequency and the risk of stockouts increase. Any of these extreme cases is characterized by significant economic losses. Thus, decisions regarding the size of the order and the timing of its placement can be based on minimizing the corresponding total cost function, which includes costs due to losses from excess inventory and shortages.



These costs include:

1. Acquisition costs, which become a particularly important factor when volume discounts apply, i.e., when the unit price decreases as the order size grows.

2. Ordering costs, the fixed costs associated with placing an order. When demand over a given period is satisfied by placing smaller and therefore more frequent orders, these costs increase compared with placing larger and less frequent orders.

3. Inventory carrying costs, which are the costs of holding inventory in a warehouse (interest on invested capital, depreciation costs, and operating costs), generally increase as inventory levels increase.

4. Losses from shortages due to the lack of stock of necessary products. They are usually associated with economic sanctions from consumers and potential loss of profits. Figure 1 illustrates the dependence of the considered types of costs on the level of product inventory. In practice, a cost component may be ignored if it does not constitute a significant portion of the total costs. This leads to simplification of inventory management models.


Types of inventory management models.

A wide variety of inventory management models are determined by the nature of demand for products, which can be deterministic or probabilistic. Figure 2 shows the demand classification scheme adopted in inventory management models.

Deterministic static demand assumes that the intensity of consumption remains constant over time. Dynamic demand - demand is known but changes over time.

The nature of demand can be most accurately described through probabilistic non-stationary distributions. However, from a mathematical point of view, the model becomes significantly more complex, especially as the time period under consideration increases.

Essentially, the classification in Fig. 2 can be considered a representation of different levels of abstraction of the demand description.

At the first level, it is assumed that the demand probability distribution is stationary in time, i.e., the same probability distribution function is used for all time periods studied. Under this assumption, the influence of seasonal fluctuations in demand is not taken into account in the model.

The second level of abstraction takes into account changes in demand from one period to another. However, in this case, distribution functions are not applied, and the needs in each period are described by the average demand. This simplification means that the element of risk in inventory management is not taken into account. But it allows us to study seasonal fluctuations in demand, which, due to analytical and computational difficulties, cannot be taken into account in the probabilistic model.

At the third level of simplification, it is assumed that demand in any period equals the average of the known demand over all periods under consideration, i.e., it is estimated as having constant intensity.

The nature of demand is one of the main factors when constructing an inventory management model, but there are other factors that influence the choice of model type.

1. Late deliveries. Once an order is placed, it may be delivered immediately or may take some time to complete. The time interval between the moment an order is placed and its delivery is called delivery lag. This quantity can be deterministic or random.

2. Replenishment of stock. The replenishment process can be carried out instantly or evenly over time.

3. The time period defines the interval over which the stock level is regulated. Depending on how far ahead the stock can be reliably forecast, the period under consideration is assumed to be finite or infinite.

4. Number of stocking points. An inventory management system may include several stock storage points. In some cases, these points are organized in such a way that one acts as a supplier to the other. This scheme is sometimes implemented at different levels so that a consumer point at one level can become a supplier point at another. In this case, there is a control system with a branched structure.

5. Number of types of products. An inventory management system may contain more than one type of product. This factor is taken into account provided that there is some dependence between types of products. Thus, the same warehouse space may be used for different products, or their production may be carried out under restrictions on the general production assets.

Deterministic inventory management models.

1. Deterministic generalized model for determining the optimal size of a production batch when shortages are allowed.

An inventory management system is considered in which products are delivered to the warehouse directly from the production line at a constant rate of λ units of production per unit of time. Upon reaching a certain stock level Q, production stops. Production and delivery of products to the warehouse resume at the moment when the unsatisfied demand reaches a certain value G. The stock is consumed at a constant rate d (d < λ). The following parameters are known: C1, the cost of storing a unit of product in the warehouse per unit of time; C2, the cost of organizing an order (one batch of products); C3, the losses from a unit of unsatisfied demand (penalty). It is required to find the optimal volume of a product batch and the time interval between points of resumption of supply according to the criterion of minimum total costs of operating the inventory management system.

Graphically, the conditions of the problem are shown in Fig. 3.

The figure shows that replenishment and depletion of the stock occur simultaneously during the first interval of each cycle. The accumulated stock Q is then completely consumed. During the following interval demand is not satisfied but accumulates; the unsatisfied demand G is covered at the beginning of the next cycle.

The quantity T is called the full inventory management cycle, Q is the limiting stock of products, and G is the marginal shortage of products.

Obviously, the current level of product inventory is determined by the formula:

From triangle OAB it follows:

Similarly, the remaining time intervals of the cycle can be determined. (2)

From the similarity of triangles OAC and CEF a proportion can be written; from that equality it follows: (3)

Taking (1) into account, expression (3) can be rewritten as:

Then the total costs of replenishment, storage of the product stock, and the possible penalty for unsatisfied demand are determined by the expression:

If the costs are referred to a unit of time, the expression for the unit costs takes the form:

Thus we obtain a function of two arguments Q and T, whose optimal values are determined as the solution of the problem:

In order to find the minimum of a function of two arguments, it is necessary and sufficient to solve the system of equations:

This follows from the fact that the unit-cost function is convex in its arguments. Solving the system of equations (5) gives the following non-negative roots:

The minimum total costs per unit of time will be:

We can consider special cases.

1. Product shortages are not allowed. The solution of the problem in this case is obtained from formulas (6)-(8) by letting the penalty C3 tend to infinity. Then C1/C3 → 0, and the optimal values of the required quantities become:
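For this no-shortage special case, the classical economic production quantity (EPQ) result can be sketched in Python. The formula and the parameter values are standard-textbook assumptions, stated here because the explicit formulas (6)-(8) are omitted in the source:

```python
import math

def epq_no_shortage(d, lam, c1, c2):
    """Economic production quantity when shortages are forbidden.

    d   -- demand rate (units per unit time), must satisfy d < lam
    lam -- production rate (units per unit time)
    c1  -- holding cost per unit per unit time
    c2  -- fixed cost of organizing one production batch
    Classical result: n* = sqrt(2*c2*d / (c1*(1 - d/lam))), cycle T* = n*/d.
    """
    n_opt = math.sqrt(2 * c2 * d / (c1 * (1 - d / lam)))
    return n_opt, n_opt / d

n_opt, t_opt = epq_no_shortage(d=100, lam=200, c1=2, c2=50)
print(round(n_opt, 2), round(t_opt, 2))   # 100.0 1.0
```

As the production rate lam grows, the factor (1 − d/lam) tends to 1 and the result degenerates into Wilson's formula for instantaneous replenishment.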

This case corresponds to a graph of changes in the stock level over time:

2. Replenishment of stock is instantaneous. In this case the production rate is assumed to be infinitely large, and accordingly:

The stock level change chart looks like this:

3. Shortages are not allowed and stock is replenished instantaneously. Then it follows that

Q* = sqrt(2 C2 d / C1),  T* = Q*/d,

where d is the demand rate, C1 the unit holding cost per unit time, and C2 the fixed ordering cost. These formulas are called Wilson's formulas, and the quantity Q* is called the economic lot size.
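Wilson's formulas can be computed directly; the demand and cost figures below are illustrative assumptions:

```python
import math

def wilson(d, c1, c2):
    """Wilson's economic lot size: instantaneous replenishment, no shortages.

    d  -- demand rate, c1 -- holding cost per unit per unit time,
    c2 -- fixed ordering cost.  Returns (lot size Q*, cycle length T*).
    """
    q = math.sqrt(2 * c2 * d / c1)
    return q, q / d

q, t = wilson(d=400, c1=2, c2=25)
print(q, t)   # 100.0 0.25
```

Note the square-root dependence: quadrupling the demand rate only doubles the economic lot size.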

The graph for changing stock levels looks like this:


Dynamic models of inventory management.

In previous lectures, static problems of inventory management for one period were considered. In a number of such problems, analytical expressions for the optimal stock level were obtained.

If the operation of the system is considered over n periods, and demand is not constant, one comes to dynamic models of inventory management. These problems, as a rule, cannot be solved analytically, but optimal inventory levels for each period can be calculated using the dynamic programming method.

The inventory management problem is considered when the demand in the j-th period (j = 1, …, n) is known. The state in each period is the stock level at its beginning, and the decision is the volume of stock replenishment in that period. Replenishment occurs instantaneously at the beginning of the period, and product shortages are not permitted. Graphically, the conditions of the problem are shown in Fig. 1.

Let the total costs of storage and replenishment in the j-th period be given. The initial stock level is specified, and the final stock level is zero, because at the end of the system's operation the reserve is no longer needed.

It is required to determine the optimal volumes of orders in each period according to the criterion of minimum total costs.

The mathematical model of the problem will have the form

Here it is necessary to determine the replenishment volumes in each period that satisfy constraints (2)-(6) and minimize the objective function (1).

In this model the objective function is separable, and restrictions (2) have a recurrent form. This feature of the model suggests the possibility of using the dynamic programming method to solve it. Model (1)-(6) differs from the standard dynamic programming model by the presence of conditions (3)-(4); these conditions can be transformed as follows. From (2) and (3) follows a relation that can be written as: (7)

Then from (7), taking (4) into account, the range of possible values of the replenishment volume is determined, or finally: (8)

Thus, condition (3)-(4) is replaced by condition (8), and model (1),(2),(5)-(6),(8) has a standard form for the dynamic programming method.

In accordance with the dynamic programming method, solving this problem consists of the following steps:

The admissible range at each stage (j = 2, …, n) follows from the constraints above.

The computation is then carried out in the reverse direction, yielding the optimal values of the required variables. The minimum value of the objective function (1) is given by the value found at the last stage.
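The backward recursion can be sketched in Python. The cost structure below (a fixed ordering cost plus a linear holding cost on end-of-period stock, with zero initial and final stock) and the demand figures are illustrative assumptions, not data from the text:

```python
INF = float("inf")

def optimal_ordering(demands, setup_cost, holding_cost):
    """Backward dynamic programming for the dynamic inventory problem.

    demands[j]   -- demand in period j (replenishment is instantaneous,
                    shortages are forbidden, final stock must be zero)
    setup_cost   -- fixed cost of placing any positive order
    holding_cost -- cost of carrying one unit of stock to the next period
    Returns the minimum total cost starting from zero initial stock.
    """
    n = len(demands)
    max_stock = sum(demands)
    # f[j][x]: minimum cost of periods j..n-1 with starting stock x
    f = [[INF] * (max_stock + 1) for _ in range(n + 1)]
    f[n][0] = 0.0                       # no stock left after the last period
    for j in range(n - 1, -1, -1):
        for x in range(max_stock + 1):
            for z in range(max_stock - x + 1):      # candidate order size
                end = x + z - demands[j]
                if end < 0 or end > max_stock:
                    continue                        # shortage forbidden
                cost = (setup_cost if z > 0 else 0.0) \
                       + holding_cost * end + f[j + 1][end]
                f[j][x] = min(f[j][x], cost)
    return f[0][0]

# Ordering 5 units once (setup 5 + carrying 3 units = 8) beats two orders (10).
print(optimal_ordering([2, 3], setup_cost=5.0, holding_cost=1.0))   # 8.0
```

The trade-off the static models capture analytically reappears here numerically: a high setup cost favors one large order, a high holding cost favors ordering in every period.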


USING EXCESSIVE DIMENSIONAL CONTROLS FOR AUTONOMIZATION OF CONTROLLED OUTPUTS OF MULTIDIMENSIONAL CONTROL OBJECTS

A.M. Malyshenko

Tomsk Polytechnic University E-mail: [email protected]

Information on the influence of controls of excess dimensionality on the autonomization of outputs of stationary linear dynamic objects is systematized, and algorithms for the synthesis of precompensators and state and output feedbacks that provide a similar effect are proposed.

Introduction

The problem of autonomous (independent) control of the components of the controlled output of an object is one of the practically most important problems in the synthesis of automatic control systems (ACS) for the majority of objects with a multidimensional output. It is reflected in many publications, including monographs.

The issues of autonomy have been studied in most detail for linear stationary multidimensional objects. Most often, problems of autonomization (decoupling) of each of the outputs are posed and solved for an object that does not have an excessive dimension of the control vector. Since such a solution is in principle unattainable for many objects of the specified type, the problem is modified into the more general problem of row-by-row decoupling, defined as the Morgan problem: for an object with p outputs it is necessary to determine p sets of m > p controls and the corresponding control law such that each of the sets affects only one output. Thus, the solution is sought in the class of automatic control systems with an excessive dimension of the control vector (IRVU) compared to the dimension of the vector of controlled variables.

Along with the above statements, autonomization problems are also formulated as problems of block-by-block autonomy (decoupling), when independence is ensured only between the output coordinates belonging to different blocks, but not within these blocks (groups), and as cascade autonomy. In the latter case the dependence of the output coordinates on one another is of a "chain" nature: each subsequent coordinate depends only on the preceding ones, not on the following ones in the established order. In these cases, too, solving autonomization problems often requires redundancy in the dimension of the control vector compared to the number of controlled variables.

Conditions for the solvability of autonomization problems

Solutions to autonomization problems are usually sought in the class of linear precompensators or linear static or dynamic feedbacks; for these purposes both the apparatus of transfer matrices (most often) and state-space methods, structural and geometric approaches are used. The last two approaches are successfully complemented by the first, since in fact only with their help has it been possible to establish most of the known solvability conditions for autonomization problems and to give deeper interpretations of their solutions.

When a precompensator is used for autonomization (decoupling) of the outputs of a linear multidimensional object, i.e., a controller that implements open-loop control with respect to the reference μ(t) without feedback, its transfer matrix W_y(s) is selected from the condition

W_x(s) = W_0(s)·W_y(s),   (1)

where W_0(s) is the transfer matrix of the control object and W_x(s) is the desired transfer matrix of the synthesized system, satisfying the conditions of output decoupling.
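Condition (1) can be illustrated numerically in the purely static (constant-gain) case: if the object's gain matrix is invertible, the precompensator is simply W_y = W_0^{-1}·W_x. The 2×2 coupling matrix and the identity (fully decoupled) target below are illustrative assumptions:

```python
# Static illustration of condition (1): W_x = W_0 * W_y, so W_y = W_0^{-1} * W_x.
# Illustrative 2x2 coupled gain matrix (assumed); decoupled target = identity.
W0 = [[2.0, 1.0],
      [1.0, 3.0]]
Wx = [[1.0, 0.0],
      [0.0, 1.0]]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def matmul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Wy = matmul(inv2(W0), Wx)   # precompensator
check = matmul(W0, Wy)      # should reproduce the decoupled target (≈ identity)
print(check)
```

Multiplying the object gain by the computed precompensator reproduces the decoupled target, i.e., each reference channel now drives exactly one output; in the dynamic case the same inversion is performed on transfer matrices in s.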

The linear static feedback used for these purposes corresponds to the control algorithm

u(t) = F x(t) + G μ(t),   (2)

and the dynamic feedback to

u(s) = F(s) x(s) + G μ(s).   (3)

The indicated feedbacks can be implemented with both a regular (the matrix G is invertible) and an irregular transformation of the system reference μ(t).

According to the above, dynamic feedbacks can be regarded as a special case of dynamic extensions that augment the object described by a system of equations in input-state-output form

dx/dt = A x(t) + B u(t),  y(t) = C x(t)

with additional dynamic states x_a(t); such an extension can also be written as the generalized operator equation

u(s) = F(s) x(s) + G(s) μ(s).

Controlling an object with a model of this form according to the dynamic algorithm (3) gives the final transfer matrix of the system

W(s) = C(sI − (A + B F(s)))^{-1} B G = W_0(s)·(I − F(s)(sI − A)^{-1} B)^{-1} G = W_0(s)·H(s),   (4)

where W_0(s) = C(sI − A)^{-1} B and H(s) are, respectively, the transfer matrices of the object and of the precompensator equivalent in effect to the feedback; I is the identity matrix of order n.

The canonical Morse transformation g = (T, F, G, R, S) used in the geometric approach, with invertible T, G, S, maps the realization Σ_0(C, A, B) of the transfer matrix W_0(s) of the object as

(A, B, C) → (T^{-1}(A + BF + RC)T, T^{-1}BG, SCT)

and reduces W_0(s) to its left and right bicausal transformations of the form

W_0(s) → B_1(s)·W_0(s)·B_2(s),   (5)

where B_1(s) = S^{-1} and B_2(s) = G.

From (4) and (5) it follows that regular static (2) and dynamic (3) feedbacks can be interpreted as bicausal precompensations, i.e., they can be replaced by bicausal precompensators equivalent in effect. For the latter the converse statement is also true; however, a bicausal precompensator H(s) is implementable in the form of an equivalent linear static feedback only for an object with W_0(s) of minimal realization, and if and only if W_0(s) and H^{-1}(s) are polynomial matrices.

From (5) we can also conclude that bicausal precompensators and the corresponding regular static and dynamic feedbacks cannot change the structure of the system at infinity and the properties determined by it, in particular the minimum inertia (delay) of autonomous control channels. Such changes can be achieved only in the class of irregular control algorithms.

Conditions for the solvability of autonomization problems are associated with the structural properties of controlled objects, described by their lists of invariants; the set of invariants required is determined by which algorithm (compensator) is to be used for these purposes. Accordingly, to determine implementable decoupling dynamic feedbacks it is sufficient to know the input-output structure of the object contained in its transfer matrix or in the minimal part of its state-space description. The solvability of this problem by static state feedback is established from the internal structure of the control object, in particular by studying its Rosenbrock or Kronecker system matrices or its canonical Morse decomposition.

A row-by-row precompensator decoupling the outputs of the object can be determined from (1) if and only if m > p and the matrices [W_0(s) : W_x(s)] and W_0(s) have the same structure of the Smith-McMillan form at infinity.

If the transfer matrix of the object has full row rank (a necessary condition for row-by-row decoupling, satisfied only when m > p), then decoupling can be provided by a precompensator whose transfer matrix is constructed from a right inverse of W_0(s), where k is an integer chosen so that the precompensator's transfer matrix is proper.

It has been proven that decoupling by regular static feedback (2) is possible if and only if decoupling is possible by regular dynamic feedback (3). In turn, the latter is possible if and only if the infinite structure of the object's transfer matrix is the union of the infinite structures of its rows.

The regularity of feedback actually presupposes that the object has no redundancy in the dimension of the control vector (m = p). Therefore, if decoupling is not achievable in this case and the controlled object has a potential IRVU, then to achieve autonomous control of each of the output quantities it is advisable to exploit this redundancy, or to make design changes to the control object so as first to obtain an IRVU. It should also be borne in mind that in situations where m > p, regular feedback may fail to give the desired result, while in the class of irregular precompensators or irregular feedbacks it can be obtained. For example, there exist objects whose outputs are not decoupled by any regular feedback but are decoupled by a static precompensator.

Irregular feedbacks correspond to merely causal (strictly proper) precompensators. Therefore, the systems they form with the control object will, in general, not preserve the structure of the controlled object at infinity. This, in particular, can be used to ensure the stability of the synthesized system. Recall that it has been proven that decoupling and stability of the system can be achieved simultaneously by regular feedback if and only if the object has no unstable invariant coupling zeros. The latter are those invariant zeros of Σ_0(C, A, B) that are not simultaneously invariant zeros of the row subsystems Σ_i(c_i, A, B), where c_i, i = 1, …, p, is the i-th row of the matrix C of the object. According to the decoupling conditions, these zeros determine restrictions on the choice of poles of the synthesized system: the set of fixed (not arbitrarily assignable) poles of an output-decoupled system must necessarily include all invariant coupling zeros.

Thus, when the object has right-half-plane invariant coupling zeros, the control algorithm must be selected from the condition that it can make the correction to the structural properties of the system required by the stability conditions. As shown above, these can be algorithms with irregular feedback, which are in fact implemented in the class of systems with IRVU.

A complete solution to the problem of decoupling by feedback for objects with right-half-plane invariant coupling zeros has not yet been obtained. In particular, to achieve it with static feedback it is necessary to make the structure of the maximal controllability subspace contained in Ker C rich enough to raise the infinite structure to the list of essential orders of the object. The latter characterize the degree of dependence at infinity between each individual output and all the others and can be calculated by the formula

n_ei = Σ_{i=1}^{p} n_i − Σ n̄_i ,   (6)

where n_i is the order of the i-th infinite zero of the system in the Smith-McMillan form of the transfer matrix of the object. The first sum in (6) is taken for the system Σ_0(C, A, B) as a whole, and the second for Σ(C_i, A, B), where C_i is the matrix C without the i-th row. The essential orders define the minimal infinite structure obtainable for a decoupled system.

For dynamic irregular feedback only a decoupling condition has been established; it requires that the excess dimension of the control vector (m − p) be no less than the deficit of column rank at infinity of the interactor matrix of W_0(s), and that the latter have full row rank. The interactor of the transfer matrix W_0(s) is the matrix inverse of its Hermite form. In passing, note that the i-th essential order of the object can be determined through the interactor of its transfer matrix and equals the polynomial degree of its i-th column.

General solutions for the synthesis of control algorithms in the class of ACS with IRVU that provide autonomy of the outputs have not yet been obtained, even for linear objects. The use of controls of excess dimensionality in solving problems of row-by-row decoupling (output autonomy) is in fact a necessary condition in those cases when the controlled object does not satisfy the solvability conditions of this problem in the class of bicausal precompensators and the corresponding feedbacks.

BIBLIOGRAPHY

1. Wonham M. Linear multidimensional control systems. - M.: Nauka, 1980. - 375 p.

2. Rosenbrock H.H. State-space and multivariable theory. - London: Nelson, 1970. - 257 p.

3. Meerov M.V. Research and optimization of multiply connected control systems. - M.: Nauka, 1986. - 233 p.

4. Malyshenko A.M. Automatic control systems with excessive dimension of the control vector. - Tomsk: Tomsk Polytechnic University Publishing House, 2005. - 302 p.

5. Commault C., Lafay J.F., Malabre M. Structure of linear systems. Geometric and transfer matrix approaches // Kybernetika. - 1991. - V. 27. - No. 3. - P. 170-185.

6. Descusse J., Lafay J.F., Malabre M. Solution of Morgan's problem // IEEE Trans. Automat. Control. - 1988. - V. AC-33. - P. 732-739.

7. Morse A.S. Structural invariants of linear multivariable systems // SIAM J. Control. - 1973. - No. 11. - P. 446-465.

8. Aling H., Schumacher J.M. A nine-fold canonical decomposition for linear systems // Int. J. Control. - 1984. - V. 39. - P. 779-805.

9. Hautus M.L.J., Heymann H. Linear feedback. An algebraic approach // SIAM J. Control. - 1978. - No. 16. - P. 83-105.

10. Descusse J., Dion J.M. On the structure at infinity of linear square decouplable systems // IEEE Trans. Automat. Control. - 1982. -V. AC-27. - P. 971-974.

11. Falb P.L., Wolovich W. Decoupling in the design and synthesis of multivariable systems // IEEE Trans. Automat. Control. - 1967. - V. AC-12. - P. 651-669.

12. Dion J.M., Commault C. The minimal delay decoupling problem: feed-back implementation with stability // SIAM J. Control. -1988. - No. 26. - P. 66-88.

UDC 681.511.4

ADAPTIVE PSEUDO-LINEAR CORRECTORS OF DYNAMIC CHARACTERISTICS OF AUTOMATIC CONTROL SYSTEMS

M.V. Skorospeshkin

Tomsk Polytechnic University E-mail: [email protected]

Adaptive pseudolinear amplitude and phase correctors of the dynamic properties of automatic control systems are proposed. A study of the properties of automatic control systems with adaptive correctors was carried out. The effectiveness of using pseudolinear adaptive correctors in automatic control systems with non-stationary parameters is shown.

In systems for automatic control of objects whose properties change over time, it is necessary to ensure a targeted change in the dynamic characteristics of the control device. In most cases this is done by changing the parameters of proportional-integral-derivative controllers (PID controllers). Such approaches are described, for example, in the literature; however, their implementation involves either identification or the use of special methods based on calculations along the transient-process curve. Both approaches require significant tuning time.

This paper presents the results of a study of the properties of automatic control systems with a PID controller and series-connected adaptive amplitude and phase pseudolinear correctors of dynamic characteristics. This method of adaptation is characterized by the fact that during operation of the control system the controller parameters do not change and correspond to the settings made before the system was put into operation. During operation, depending on the type of corrector used, the corrector's transmission coefficient or the phase shift it creates changes. These changes occur only when oscillations of the controlled quantity arise due to changes in the properties of the control object or the impact of disturbances on it. This makes it possible to ensure the stability of the system and improve the quality of transient processes.

The choice of pseudolinear correctors for implementing an adaptive system is explained as follows. Correctors used to change the dynamic properties of automatic control systems can be divided into three groups: linear, nonlinear and pseudolinear. The main disadvantage of linear correctors is related to

 

