Optimization Technique


In the simplest case, this means solving problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This formulation, using a scalar, real-valued objective function, is probably the simplest example; the generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, it means finding “best available” values of some objective function given a defined domain, including a variety of different types of objective functions and different types of domains.
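As a minimal sketch of this simplest case, assuming a made-up one-variable objective, an iterative method can systematically search for the minimizer by repeatedly stepping against the derivative:

```python
# Minimal sketch: minimize a real-valued function of one real variable.
# The objective and step size here are illustrative assumptions, not from the text.

def f(x):
    return (x - 3.0) ** 2 + 1.0   # minimum value 1 attained at x = 3

def df(x):
    return 2.0 * (x - 3.0)        # derivative of f

def steepest_descent(x, step=0.1, iters=200):
    """Move against the gradient until the iterate is (approximately) stationary."""
    for _ in range(iters):
        x -= step * df(x)
    return x

print(steepest_descent(0.0))  # converges to approximately 3.0
```

With this quadratic objective the update is a contraction, so the iterate converges to the exact minimizer; for general functions, steepest descent finds only a local minimum.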

History

The first optimization technique, which is known as steepest descent, goes back to Gauss. Historically, the first term to be introduced was linear programming, which was invented by George Dantzig in the 1940s. The term programming in this context does not refer to computer programming (although computers are nowadays used extensively to solve mathematical problems). Instead, the term comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems that Dantzig was studying at the time. (Additionally, later on, the use of the term “programming” was apparently important for receiving government funding, as it was associated with high-technology research areas that were considered important.)

Major subfields

  • Convex programming studies the case when the objective function is convex and the constraints, if any, form a convex set. This can be viewed as a particular case of nonlinear programming or as a generalization of linear or convex quadratic programming.

    • Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the set of constraints is specified using only linear equalities and inequalities. Such a set is called a polyhedron or a polytope if it is bounded.

    • Second order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs.

    • Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.

    • Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.

    • Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program.

  • Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and is in general much more difficult than regular linear programming.

  • Quadratic programming allows the objective function to have quadratic terms, while the set A must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming.

  • Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, the convexity of the program affects the difficulty of solving more than the linearity.

  • Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.

  • Robust programming is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. This is not done through the use of random variables; instead, the problem is solved taking into account inaccuracies in the input data.

  • Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.

  • Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.

  • Heuristic algorithms

    • Metaheuristics

  • Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).

    • Constraint programming.

  • Disjunctive programming is used where at least one constraint must be satisfied but not all of them. It is of particular use in scheduling.

  • Trajectory optimization is the specialty of optimizing trajectories for air and space vehicles.
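To make the linear programming subfield concrete, the following sketch solves a small two-variable LP. The objective and constraint coefficients are made-up for illustration; the approach exploits the fact that a linear objective over a bounded polytope attains its optimum at a vertex, so it enumerates pairwise constraint intersections rather than using a production solver.

```python
from itertools import combinations

# Illustrative LP (coefficients are assumptions, not from the text):
# maximize 3x + 5y  subject to  x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
cons = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def solve_lp_2d(cons, objective):
    """Enumerate candidate vertices (intersections of constraint boundary lines),
    keep the feasible ones, and return (value, x, y) maximizing the objective."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:          # parallel boundary lines, no vertex
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):   # feasibility check
            val = objective(x, y)
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

print(solve_lp_2d(cons, lambda x, y: 3 * x + 5 * y))  # optimum 36 at (2, 6)
```

Vertex enumeration is exponential in general and only sensible for tiny instances; practical LP solvers use the simplex method or interior-point methods instead.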

In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):

  • Calculus of variations seeks to optimize an objective defined over many points in time, by considering how the objective function changes if there is a small change in the choice path.

  • Optimal control theory is a generalization of the calculus of variations.

  • Dynamic programming studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation.

  • Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities.
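The dynamic programming idea above can be sketched with a classic rod-cutting instance (the prices are illustrative assumptions): the Bellman equation expresses the best revenue for a rod of length n in terms of the best revenues of its smaller remainders.

```python
from functools import lru_cache

# Illustrative rod-cutting instance: price[i - 1] is the revenue for selling a
# piece of length i. The numbers are made up for the example.
price = [1, 5, 8, 9, 10, 17, 17, 20]

@lru_cache(maxsize=None)
def best_revenue(n):
    """Bellman equation: V(n) = max over first-cut lengths i of price_i + V(n - i),
    with V(0) = 0. Memoization ensures each subproblem is solved once."""
    if n == 0:
        return 0
    return max(price[i - 1] + best_revenue(n - i) for i in range(1, n + 1))

print(best_revenue(8))  # 22: cut the rod into lengths 2 and 6 (5 + 17)
```

Without the memoization the recursion would revisit the same subproblems exponentially many times; caching the subproblem values is exactly what makes the splitting strategy efficient.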

Multi-objective optimization 

Adding more than one objective to an optimization problem adds complexity. For example, if you wanted to optimize a structural design, you would want a design that is both light and rigid. Because these two objectives conflict, a trade-off exists. There will be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and stiffness. This set of trade-off designs is known as a Pareto set. The curve created by plotting weight against stiffness of the best designs is known as the Pareto frontier.

A design is judged to be Pareto optimal if it is not dominated by any other design: if a design is worse than another design in some respect and no better in any respect, then it is dominated and is not Pareto optimal.
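The dominance test behind a Pareto set can be sketched directly. The candidate designs below are hypothetical (weight, compliance) pairs, both to be minimized, since lower compliance means a stiffer structure; the numbers are illustrative only.

```python
# Hypothetical candidate designs as (weight, compliance) pairs, both minimized.
designs = [(1.0, 9.0), (2.0, 5.0), (3.0, 6.0), (4.0, 2.0), (5.0, 1.5)]

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives are minimized here)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_set(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(pareto_set(designs))  # (3.0, 6.0) drops out: (2.0, 5.0) dominates it
```

Every surviving design is a legitimate trade-off: improving it in one objective would require worsening it in the other, which is exactly the Pareto optimality criterion described above.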

Applications

Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since you can view rigid body dynamics as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as “these two points must always coincide”, “this surface must not penetrate any other”, or “this point must always lie somewhere on this curve”. Also, the problem of computing contact forces can be done by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem.

Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.

Economics also relies heavily on mathematical programming. An often studied problem in microeconomics, the utility maximization problem, and its dual problem, the expenditure minimization problem, are economic optimization problems. Consumers and firms are assumed to maximize their utility or profit. Also, agents are most frequently assumed to be risk-averse, thereby wishing to minimize whatever risk they might be exposed to. Asset prices are also explained using optimization, though the underlying theory is more complicated than simple utility or profit optimization. Trade theory also uses optimization to explain trade patterns between nations.

Another field that uses optimization techniques extensively is operations research.
