
Thursday, April 18, 2013

NSF Award: (Mixed) Integer and Combinatorial Optimization: New Convexification Techniques

A new NSF Award was recently given to Egon Balas on (Mixed) Integer and Combinatorial Optimization: New Convexification Techniques.

The objective of this research project is to investigate new ideas that could further accelerate the revolution in integer programming. The project will investigate more efficient convexification techniques: a new cutting plane paradigm that generates a pool of points from which cuts can be produced, and a theory of cut-generating functions, both aimed at finding deeper cuts at lower computational cost. Integer programming has experienced a revolution in the last two decades that has greatly advanced our ability to solve practical problems in engineering, manufacturing, transportation, telecommunication, finance, marketing and many other areas of economic activity. According to extensive recent testing, integer programming solvers are now close to a billion times faster than they were twenty years ago. Better integer programming algorithms (including their linear programming components) account for a speedup of about half a million times, with the rest of the improvement (a factor of about 1600) coming from faster computers. A key element of this transformation was a breakthrough in the use of cutting planes in the early nineties, including the design of lift-and-project cuts and the ensuing revival of Gomory mixed integer cuts, two projects carried out by the principal investigators with previous NSF support.
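
The two cut families named above are standard fare in modern solvers. As a point of reference (not the project's new techniques), here is a minimal sketch of how a Gomory mixed-integer (GMI) cut is read off a simplex tableau row; the function name, the example row, and all numbers are illustrative.

```python
# Hypothetical helper: derive a GMI cut from one tableau row
#   x_B + sum_j abar[j] * x_j = bbar,  x_B integer, nonbasic x_j >= 0.
# The resulting cut is  sum_j coef[j] * x_j >= 1.
def gmi_cut(abar, is_integer, bbar):
    f0 = bbar % 1.0                       # fractional part of the rhs
    assert 0 < f0 < 1, "row must have a fractional right-hand side"
    coefs = []
    for a, is_int in zip(abar, is_integer):
        if is_int:                        # integer nonbasic variable
            f = a % 1.0
            coefs.append(f / f0 if f <= f0 else (1 - f) / (1 - f0))
        else:                             # continuous nonbasic variable
            coefs.append(a / f0 if a > 0 else -a / (1 - f0))
    return coefs

# Example row: x_B + 0.3*x1 - 1.4*x2 = 3.7  (x1 integer, x2 continuous)
print(gmi_cut([0.3, -1.4], [True, False], 3.7))   # ~[0.4286, 4.6667]
```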


Progress in the ability to solve large-scale mixed integer linear programs will affect problem solving and improve efficiency of operations in an extremely broad range of activities that include industrial production, supply chain management, logistics, transportation, electricity production, airport operations, telecommunication networks, health care applications such as scheduling intensive care units and determining radiation dosage, combinatorial auctions, finance and economics. The widespread impact of tools developed in this project will contribute to technological excellence and strengthen US technological leadership.

NSF Award: Efficiently Computable Convex Approximation Methods for Non-Convex Optimization Problems


A recent NSF Award was given to Luis Zuluaga on Efficiently Computable Convex Approximation Methods for Non-Convex Optimization Problems.

This research project takes on the challenge of closing the large gap between the current ability to solve convex or integer-linear optimization problems and the capacity to solve non-convex optimization problems, by introducing novel methodologies based on polynomial optimization techniques. The research will focus on three main directions. The first is the introduction of new polynomial optimization techniques that, instead of solving non-convex problems exhaustively, do so dynamically. The second is the introduction and effective use of polynomial optimization results that go beyond semidefinite programming tools to make full use of the powerful convex optimization toolkit. The third is the use of polynomial optimization results related to the convex reformulation of low-dimensional or low-degree non-convex problems as the foundation for obtaining better reformulations of general non-convex problems. This project's practically oriented research is aimed at solving key and difficult decision-making problems that are now regularly faced in the communications, healthcare, electricity, and oil and gas industries. Often, the difficulty of solving these problems arises from the use of polynomials to closely model the behavior of real-world systems; modeling real-world systems more accurately is an overarching need in today's competitive and resource-constrained environment.
The results of this research will substantially improve computer-based solvers for this class of problems. The software developed to achieve this goal will use open source software tools that will be made available to the public. The corresponding positive impact on profits, safety, resource management, client satisfaction, etc., will further broaden the practical successes that make Operations Research the "Science of the Better". Furthermore, the challenging nature of this research will provide ample opportunity for the participating graduate students to excel, and its results will be incorporated into appropriate graduate-level courses.
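
At the heart of polynomial optimization is the link between nonnegativity and semidefinite programming. As a minimal illustration (not the project's methodology), the sketch below certifies that a univariate polynomial is a sum of squares by solving a small SDP with cvxpy (assuming cvxpy and its bundled conic solver are installed); the polynomial and basis size are toy choices.

```python
# Certify nonnegativity of p(x) = x^4 - 2x^2 + 2 via a sum-of-squares SDP:
# p is SOS iff p(x) = z(x)^T Q z(x) for some PSD Q, with z = (1, x, x^2).
import cvxpy as cp

coeffs = {0: 2.0, 2: -2.0, 4: 1.0}    # p(x) = 2 - 2x^2 + x^4
d = 2                                  # monomial basis up to degree d
Q = cp.Variable((d + 1, d + 1), PSD=True)

# Match the coefficient of each power x^k on both sides.
constraints = []
for k in range(2 * d + 1):
    expr = sum(Q[i, k - i] for i in range(max(0, k - d), min(k, d) + 1))
    constraints.append(expr == coeffs.get(k, 0.0))

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print("SOS certificate found:", prob.status == cp.OPTIMAL)
```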

Sunday, February 3, 2013

CAREER: Stochastic Multiple Time-Scale Co-Optimized Resource Planning of Future Power Systems with Renewable Generation, Demand Response, and Energy Storage

A recent NSF CAREER award was given to Dr. Wu at Clarkson University in the area of smart grid, combining optimization with power system planning and scheduling.


The PI will develop co-optimized generation, transmission, and DR planning solutions to cope with the impacts of short-term variability and uncertainty of renewable generation (RG) and demand response (DR), as well as the hourly chronological operation details of energy storage (ES) and generators. The approach is to (1) propose a stochastic multiple time-scale co-optimized planning model that explicitly integrates short-term variability and uncertainty as well as hourly chronological operation into long-term planning; and (2) develop efficient solution methodologies and implement them on high-performance parallel computing facilities. Intellectual Merit: The interaction among variability, uncertainty, and constraints from long-term planning and hourly chronological operation will be quantified for enhancing security and sustainability of power systems with significant RG, DR, and ES. This research can be used to evaluate the effective load carrying capability (ELCC) of variable energy sources, and to study policies on portfolios of energy production and storage techniques. This study is of practical importance since RG, DR, and ES are being implemented worldwide and their distinctive contributions to energy security and sustainability need to be well understood.
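
To make the "planning under uncertainty" idea concrete, here is a deliberately tiny two-stage stochastic planning model in its deterministic-equivalent LP form, solved with SciPy. All costs, demands, and probabilities are invented; the funded models are far richer (multiple time scales, transmission, storage).

```python
# First stage: build capacity x at unit cost c before demand is known.
# Second stage (per scenario s): cover any shortfall y_s at penalty q > c.
from scipy.optimize import linprog

c, q = 1.0, 3.0
demand = [50.0, 80.0, 120.0]          # three demand scenarios
prob = [0.3, 0.5, 0.2]                # scenario probabilities

# Variables: [x, y1, y2, y3]; minimize c*x + sum_s prob_s * q * y_s.
obj = [c] + [p * q for p in prob]
# Shortfall constraints: x + y_s >= d_s  <=>  -x - y_s <= -d_s.
A_ub = [[-1.0] + [-1.0 if j == s else 0.0 for j in range(3)] for s in range(3)]
b_ub = [-d for d in demand]

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("capacity:", res.x[0], "expected cost:", res.fun)
```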

Broader Impacts: This project has profound impacts on the sound deployment of RG, DR, ES, and smart grid. For example, it allows better treatment of investment options which require transmission and generation together, in order to exploit favorable sites for wind or solar. The research and educational findings would help educate engineers to meet challenges of the secure and sustainable electricity infrastructure. The project will increase public awareness and understanding of the complexity of power system planning, and appeal to researchers and educators with interests in power systems-based research and education.

Wednesday, January 16, 2013

CAREER: Reduced-order Methods for Big-Data Challenges in Nonlinear and Stochastic Optimization


A recent NSF CAREER Award was given to Dr. Lan at the University of Florida. Details of the award are below.

The objective of this Faculty Early Career Development (CAREER) Program project is to develop a set of new reduced-order algorithms to tackle the big-data challenges in optimization. The last several years have seen an unprecedented growth in the amount of available data. While nonlinear, especially convex programming (CP) models are important to extract useful knowledge from raw data, high problem dimensionality, large data volumes and inherent uncertainty present significant challenges to the design of optimization algorithms. This research aims to attack these challenges by investigating: (i) novel first-order methods for deterministic CP that converge faster, require little structural information and do not rely on line search, based on level methods; (ii) stochastic first-order methods that handle data uncertainty in an optimal manner, based on stochastic approximation; (iii) novel randomization schemes for solving certain challenging deterministic CP problems beyond the capability of first-order methods; and (iv) stochastic first- and zeroth-order methods for general, not necessarily convex, stochastic programs. The research focuses on two fundamental issues across these topics: (i) the study of complexity which provides guarantees on algorithmic performance; and (ii) the exploitation of structures that leads to the design of algorithms with stronger complexity and superior practical performance.
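
Item (ii) refers to stochastic approximation, whose simplest instance is (averaged) stochastic gradient descent. The toy sketch below, with invented data, shows the pattern: noisy gradients, O(1/sqrt(k)) step sizes, and iterate averaging for robustness.

```python
# Averaged SGD on a streaming least-squares objective E[0.5*(a@x - b)^2].
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])

def sample_gradient(x):
    a = rng.normal(size=3)                    # random measurement vector
    b = a @ x_true + 0.1 * rng.normal()       # noisy observation
    return (a @ x - b) * a                    # stochastic gradient

x = np.zeros(3)
x_avg = np.zeros(3)
for k in range(1, 10001):
    x -= (1.0 / np.sqrt(k)) * sample_gradient(x)   # O(1/sqrt(k)) steps
    x_avg += (x - x_avg) / k                        # running iterate average
print("averaged iterate:", np.round(x_avg, 3))      # ~[1.0, -2.0, 0.5]
```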

If successful, a set of new algorithmic schemes will advance the state-of-the-art in nonlinear and stochastic optimization, bringing many practically relevant data analysis problems within the range of tractability. Example applications include algorithms for faster and more accurate medical image reconstruction and classification, which will be beneficial to healthcare. In addition, in seismology, effective stochastic programming methods will help to build predictive models by measuring thousands of earthquakes detected at seismic stations. The project will also support the PI's educational goals to improve students' learning in operations research, broaden the representation of underrepresented groups in the PhD program, and contribute to open research infrastructure through the development of optimization solvers.

Thursday, September 13, 2012

An Effective and Time-efficient Approach to Solving Linear Discrete Optimization Problems using Discretized Network Flow

A recent NSF grant was awarded to Shantanu Dutt. The abstract below is from the NSF website.

Discrete optimization finds widespread use in almost all areas of human endeavor ranging from science to technology to business, and encompassing diverse applications such as chip design, power system design, robotics, bioinformatics, transportation, financial computing and industrial engineering. However, currently there is no general discrete optimization solver that can solve hard problems near-optimally in tractable runtimes (fast to moderate runtimes). To rectify this, this project will explore developing novel and efficient techniques for solving the class of 0/1 integer linear programming (ILP) problems that can be used to model a wide range of discrete optimization problems (DOPs). The approach used for this purpose is a solution technique termed discretized network flow (DNF), in which classical network flow (that solves a class of continuous linear programming problems), is constrained by special discrete requirements on the flow to yield valid solutions to 0/1 ILPs.
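
The DNF technique itself is not publicly available code; for reference, this is the problem class it targets — a 0/1 ILP, here a toy knapsack — solved with SciPy's generic branch-and-bound MILP solver (scipy.optimize.milp, available in SciPy 1.9+). All data are illustrative.

```python
# Toy 0/1 knapsack as a 0/1 ILP: max values@x s.t. weights@x <= 12, x binary.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

values = np.array([10.0, 13.0, 7.0, 11.0])
weights = np.array([5.0, 7.0, 3.0, 6.0])

res = milp(
    c=-values,                                   # milp minimizes, so negate
    constraints=LinearConstraint(weights, ub=12.0),
    integrality=np.ones(4),                      # all variables integer...
    bounds=Bounds(0, 1),                         # ...and binary
)
print("selection:", res.x.round().astype(int), "value:", -res.fun)
```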

A successful completion of this project will yield algorithms and a software tool for solving large 0/1 ILP problems near-optimally and much faster than current techniques. This will represent a significant advance in the state-of-the-art of such solvers, and can be used to solve large DOPs more accurately and faster in many application areas ranging from genomics to chip design to robotics. This can help in answering fundamental issues in these application areas that have not been attempted so far, and also lead to better products and services in these areas. For example, in the area of chip design, the use of our solver can lead to much lower power and higher quality chips (e.g., with good performance and reliability) than are possible with current CAD tools, and thereby result in better and greener electronic products in several consumer and commercial application areas.

Optimization without Round-off Errors

Recently, an NSF grant was awarded to Erick Moreno-Centeno on Optimization without Round-off Errors. This is exciting news for the community, since round-off errors have been a serious concern in solving large-scale optimization problems, especially those with integer variables.

This Early-concept Grant for Exploratory Research (EAGER) provides funding to further develop an algorithm to solve systems of linear equations with a specific advantage over Gaussian elimination: the algorithm does not have round-off errors. The algorithm will be tailored and adapted to be used within existing optimization algorithms. The key property of the algorithm being further developed is that it maintains the integrality of the numbers throughout its execution. In particular, although the algorithm does contain divisions (which is crucial for its polynomial-time complexity), all the divisions have a remainder of zero. A detailed complexity analysis of the algorithm will be performed; specifically, both its running time and the number of bits required to represent the numbers throughout the algorithm's execution will be investigated. A modification of the algorithm to perform LU factorizations will also be studied. Finally, the algorithm will be adapted to enable its use for the basis updates performed in state-of-the-art implementations of the revised simplex method.
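
The property described (divisions that always come out exact) is exactly what fraction-free elimination in the style of Bareiss achieves; whether the funded algorithm coincides with it is not stated, so treat this as an illustrative analogue with toy data.

```python
# Fraction-free (Bareiss-style) elimination: every // below is exact,
# so all intermediate numbers stay integers.
def bareiss(M):
    """Eliminate in place on a copy; the last pivot equals det(M)."""
    A = [row[:] for row in M]          # integer entries assumed
    n = len(A)
    prev = 1                           # previous pivot (divisor), starts at 1
    for k in range(n - 1):
        assert A[k][k] != 0, "pivoting omitted in this sketch"
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Key identity: this integer division is always exact.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A

A = [[2, 3, 1], [4, 7, 5], [6, 18, 22]]
print(bareiss(A)[-1][-1])   # -16, the determinant of A
```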

If successful, the outcomes of this research will lead to a deeper understanding of computational linear algebra free of round-off errors and the associated applications within optimization algorithms. In the linear programming (LP) context, this research will enable us to solve large-scale LPs exactly; this, in turn, will significantly improve the accuracy and speed of integer programming solvers. In addition, since solving systems of linear equations is a critical subroutine of many numerical algorithms, this research will impact many application areas where linear systems frequently arise, including economics, physics, chemistry, and engineering.

Tuesday, March 20, 2012

NSF Award on Polynomial Optimization: Solution Methods and Applications

Recently, an NSF grant was awarded to Shuzhong Zhang on polynomial optimization.
The research objective of this project is to develop numerical tools for solving polynomial and tensor optimization models arising from engineering applications such as design of radar waveforms in signal processing, information extraction in magnetic resonance imaging, and gene expression data analysis in bioinformatics. An integrated approach for polynomial and tensor optimization models will be studied and tested in this project. The key components of the integrated approach include relaxation techniques, approximations to optimal solutions, randomized sampling techniques utilizing known structure of the model, and disciplined local improvements of feasible solutions. Numerical tools to be developed under the framework of the integrated approach will be investigated experimentally and validated using the data obtained from practical engineering applications including signal processing and bioinformatics. Performance of the algorithms developed from the project will be measured and compared against currently existing benchmarks. 
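
One ingredient mentioned above, "disciplined local improvements of feasible solutions", has a classic instance in tensor optimization: the symmetric higher-order power method, which locally maximizes a symmetric tensor form over the unit sphere. A toy sketch with a rank-1 test tensor (illustrative data only, not the project's algorithms):

```python
# Locally maximize T(x, x, x) over the unit sphere by power iteration.
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=5)
T = np.einsum('i,j,k->ijk', v, v, v)          # symmetric rank-1 test tensor

x = rng.normal(size=5)
x /= np.linalg.norm(x)
for _ in range(50):
    y = np.einsum('ijk,j,k->i', T, x, x)      # ascent direction T(., x, x)
    x = y / np.linalg.norm(y)                 # project back to the sphere
print("objective T(x,x,x):", np.einsum('ijk,i,j,k->', T, x, x, x))
print("recovered v/|v|:", np.allclose(np.abs(x), np.abs(v) / np.linalg.norm(v)))
```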

If successful, the results of this research will enhance the effectiveness of methods for solving nonlinear optimization models where polynomial functions are found in the objective as well as in the constraints. In particular, such algorithms will lead to substantial practical improvements in radar waveform design, image processing, and data co-clustering analysis. As polynomial functions are known to be pervasive in the context of engineering design and decision making, success of the project will lead to broad societal benefits. Moreover, the proposed research addresses fundamental challenges in optimization theory, because most current research has not gone beyond polynomial models of degree 2; thus, results extending the theory beyond these models would constitute a major advance on the theory side, as well.

NSF Award : Practical Algorithms for Applied Submodular Optimization

Recently, an NSF grant was awarded to Jon Lee on submodular optimization.
This award provides funding for the development and analysis of practical exact and approximation algorithms for general submodular optimization problems, by generalizing and extending mathematical-optimization and approximation methods that have been successful in specific application areas. Important application areas include redesigning environmental monitoring networks and Boolean quadratic optimization. Thus far, general techniques for submodular optimization have been developed with the aim of providing provably good worst-case behavior. On the other hand, practical algorithms, not encumbered by theoretical requirements, have been developed and implemented for several special cases. This project is aimed at extending such success to the general case.
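
For context on the "provably good worst-case behavior" mentioned above: the textbook example is the greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, which achieves a (1 - 1/e) approximation guarantee (Nemhauser, Wolsey and Fisher) and accesses the function only through a black-box evaluation oracle. A minimal sketch on a toy coverage function:

```python
# Greedy for monotone submodular maximization: pick k elements, each time
# adding the element with the largest marginal gain under the oracle f.
def greedy(f, ground_set, k):
    S = frozenset()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        S = S | {max(gains, key=gains.get)}
    return S

# Coverage functions are submodular: f(S) = size of the union of the sets.
universe = {1: {'a', 'b'}, 2: {'b', 'c', 'd'}, 3: {'d', 'e'}, 4: {'a', 'e'}}
f = lambda S: len(set().union(*(universe[e] for e in S))) if S else 0
print(greedy(f, set(universe), 2))   # {2, 4}: covers all five items
```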

Algorithms will be instantiated and distributed as open-source software, thus providing practical, general-purpose tools for attacking this ubiquitous problem class. With particular special cases defined through a black box (i.e., a function-evaluation subroutine), such software will have broad applicability across a variety of application areas, such as economics, machine learning, biodiversity conservation and statistics. Moreover, the project is within the broader area of nonlinear discrete optimization, and it is natural to expect it to have a broader influence within this important current topic.

Wednesday, February 1, 2012

NSF CAREER AWARD: Stochastic Optimization for Water Resources Management

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1151226


The research objective of this Faculty Early Career Development (CAREER) project is to investigate stochastic optimization models and methodologies that will assist in managing and adapting water resources and infrastructures in an effort to ensure a sustainable future. The project will focus on two problems. The first is a large-scale multi-period water allocation problem under uncertainty (Lower Colorado River Basin) and the second is a regional-scale water reuse infrastructure design and operation problem. The proposed research will address these water resources management problems in a comprehensive manner by: (1) formulating mathematical optimization models under known and ambiguous uncertainties; (2) investigating decomposition-based solution methods that exploit the special structures of these models; and (3) advancing theory and methodology to evaluate the resulting solutions and other multi-period adaptation and mitigation policies.

The project will have a significant impact on the well-being of the 25-30 million people who reside in the southwestern United States. The models and methods developed can be applied wherever water is scarce. The project also addresses two of the grand challenges identified by the National Academy of Engineering: to help restore and improve urban infrastructure and to provide access to clean water. While the project focuses on water resources management, if successful, it will result in modeling, algorithmic and computational advances in stochastic optimization. These advances may help solve other important optimization problems. The curricular materials and tutorials developed will serve to educate the next generation of engineers on the methods and real-world applications of stochastic optimization. The project will also promote and increase the visibility of women students through a "Women in Industrial and Systems Engineering Research (WISER)" program.

Sunday, January 22, 2012

NSF CAREER AWARD for Santanu Dey: Non-traditional Cutting-Plane Algorithms for Mixed-Integer Programs

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1149400

The research objective of this Faculty Early Career Development (CAREER) project is to improve the current generation of mixed-integer programming (MIP) solvers by devising new general-purpose cutting-plane methods. A great deal of useful information that could be used in deriving and selecting cutting-planes is often left unused by state-of-the-art MIP techniques. This project will investigate some of the following non-traditional paradigms for incorporating more information into cutting planes: (i) Use information from multiple constraints simultaneously to derive cutting planes, instead of using a single implied constraint. (ii) Design suitable cutting planes and then verify their validity before use, instead of deriving cutting planes without any control over their quality. (iii) Incorporate information from explicit enumeration of integer points to guide the choice of cutting planes and improve their strength. Since many mathematical challenges need to be overcome in order to tap the potential of these non-traditional paradigms, results from this project could significantly enhance the mathematical toolkit used by integer programmers for the generation and analysis of cutting planes.
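
Paradigm (ii) above (design a candidate cut first, then verify it) can be made concrete on a toy instance: an inequality is valid for a mixed-integer set exactly when maximizing its left-hand side over that set cannot exceed its right-hand side. The check below uses SciPy's MILP solver on invented data; a practical verifier would use much cheaper certificates than an exact solve.

```python
# Is c^T x <= d valid for the integer points of P = {x >= 0 : A x <= b}?
# Yes iff max{ c^T x : x integer in P } <= d.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])

def cut_is_valid(c, d):
    res = milp(c=-np.array(c),                       # maximize c^T x
               constraints=LinearConstraint(A, ub=b),
               integrality=np.ones(2),
               bounds=Bounds(0, np.inf))
    return -res.fun <= d + 1e-9

# x1 + x2 <= 2 is valid for the integer points (it even cuts off the
# fractional LP vertex (1.2, 1.6)); x1 + x2 <= 1 is not valid.
print(cut_is_valid([1.0, 1.0], 2.0))   # True
print(cut_is_valid([1.0, 1.0], 1.0))   # False: (1, 1) violates it
```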

If successful, this project will not only make theoretical advances in mathematical programming, but also lead to significant improvements in the performance of MIP solvers, leading to huge gains in a broad spectrum of applications of MIP models in areas such as health care, forestry, finance, supply-chain design, and chemical engineering. The key educational objective of this award is to develop an outside-classroom operations research puzzle competition to foster and enrich an environment for undergraduate research. Moreover a new graduate course will be designed with the aim of dissemination of research results directly to future practitioners and to bring students from different engineering communities together, thus providing opportunities for new research directions and collaborations.

Saturday, July 16, 2011

NSF AWARD: Models and Algorithms for Risk Adjusted Optimization with Robust Utilities

Recently, NSF funding was awarded to Sanjay Mehrotra. The introduction of the project is below.

The research objective of this award is to study concepts of relaxed univariate and multivariate stochastic dominance motivated by the specification of utility functions. A major challenge in risk-adjusted decision making is the assessment of a decision maker's risk/utility function associated with uncertain outcomes. The stochastic dominance concept enables the definition of preferences of one random entity (variable, vector, matrix, process, etc.) over another, and therefore can be used to compare the risks of different decision alternatives. Stochastic dominance is related to utility theory, which works with a class of utility functions assuming only limited prior knowledge of the decision maker's utility function; this unfortunately results in very conservative models. Developing models, and the corresponding algorithms, that can systematically circumvent the challenges of traditional utility theory, especially when the decision is based on conflicting objectives and multiple random outcomes, is thus critically needed.
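
For readers new to the concept, here is second-order stochastic dominance (the variant tied to risk-averse utilities) checked on two tiny empirical distributions. The data are invented; for equally weighted samples it suffices to compare expected shortfalls at the sample points.

```python
# X dominates Y in SSD iff E[(t - X)+] <= E[(t - Y)+] for every threshold t,
# i.e. every risk-averse expected-utility maximizer weakly prefers X.
import numpy as np

def ssd_dominates(X, Y):
    ts = np.union1d(X, Y)                 # kinks of both shortfall functions
    shortfall = lambda Z, t: np.maximum(t - Z, 0).mean()
    return all(shortfall(X, t) <= shortfall(Y, t) + 1e-12 for t in ts)

X = np.array([2.0, 4.0, 6.0])    # same mean as Y but less dispersed
Y = np.array([0.0, 4.0, 8.0])
print(ssd_dominates(X, Y), ssd_dominates(Y, X))   # True False
```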

If successful, the results of this research will lead to the development of a new class of risk-adjusted decision models that incorporate risk in a more realistic way than currently existing techniques. Computationally efficient algorithms will be developed to solve these models.

Tuesday, July 12, 2011

NSF Award: Approximate Augmented Lagrangians, First-Order and Parallel Optimization Methods, with Applications to Stochastic Programming

A recent NSF Award funds a project led by Jonathan Eckstein. The introduction of the project is below.

Continuous optimization is a mathematical discipline with extensive applications in engineering design and business/logistical planning. Its currently most common solution techniques are difficult to adapt to newly evolving computer architectures comprising dozens to thousands of processing elements working in parallel. Combining several existing techniques with some recent results of the principal investigator, this project explores a means of solving continuous optimization problems that should adapt more readily to parallel computer architectures than present standard solvers, allowing the architectures' full power to be brought to bear on large, time-consuming problems. Without such new solution approaches, solution of critical design and planning problems may not benefit from most of the advances in computing power anticipated for the next decade. The project will also involve cooperative work with the Brazilian research community.

The technical approach is to capitalize on recent advances in augmented Lagrangian and conjugate gradient algorithms to produce a new kind of modular parallel continuous constrained optimization solver. The solver consists of a classical augmented Lagrangian outer loop, with subproblems solved by a state-of-the-art box-constrained conjugate gradient method terminated by a recently developed relative error criterion. The research consists of three stages: the goal of stage one is to create an object-oriented, modular serial implementation, test it extensively, and address some theoretical issues. Stage two aims to evolve the stage-one substrate into a parallel solver for which the user explicitly specifies how to map the problem structure to multiple processing elements. Stage three's goal is to automate the structure detection and mapping process. Stages two and three will use stochastic programming problems as test cases.
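
The "classical augmented Lagrangian outer loop" has a very small core, sketched below on a toy equality-constrained problem. The inner minimization here is a generic quasi-Newton call standing in for the project's box-constrained conjugate gradient solver; the penalty parameter, problem data, and tolerances are all illustrative.

```python
# Minimize f(x) s.t. Ax = b by repeatedly minimizing the augmented Lagrangian
#   f(x) + lam^T (Ax - b) + (rho/2)||Ax - b||^2  and updating lam.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2        # toy objective
A = np.array([[1.0, 1.0]]); b = np.array([1.0])        # constraint x1 + x2 = 1

x, lam, rho = np.zeros(2), np.zeros(1), 10.0
for _ in range(20):                                    # outer loop
    aug = lambda x: f(x) + lam @ (A @ x - b) + 0.5 * rho * np.sum((A @ x - b) ** 2)
    x = minimize(aug, x, method="BFGS").x              # inner subproblem
    lam = lam + rho * (A @ x - b)                      # multiplier update
print("x =", np.round(x, 4), "violation =", float(A @ x - b))   # x ~ (2, -1)
```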

Monday, June 13, 2011

NSF AWARD:Polynomial Optimization and Convex Algebraic Geometry

Recently, NSF funded a project led by Rekha R. Thomas on polynomial optimization and convex algebraic geometry.

This study focuses on problems from polynomial optimization and convex algebraic geometry. The latter is a new research area that concerns convex sets and convex hulls of sets that are described algebraically and arise in optimization. The key tool is the use of efficient algorithms in semidefinite programming, a branch of convex optimization that is used in polynomial optimization. The first set of questions studies the general phenomenon of when a given convex body is the linear projection of a slice (by an affine plane) of a closed convex cone. This phenomenon is central to all lift-and-project methods for discrete and polynomial optimization. This study provides a uniform view of all lift-and-project methods via new notions of cone factorizations of certain operators associated to the convex body. The investigator and collaborators have recently constructed a new hierarchy of convex relaxations for algebraic sets called theta bodies. Various open questions about these bodies are posed. The methods from polynomial optimization and convex algebraic geometry can be applied to problems from computer vision. Here the application is primarily to object reconstruction from images taken by multiple cameras.
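
The "projection of a slice of a cone" phenomenon is easy to see in miniature: the unit disk is the image of an affine slice of the 2x2 positive semidefinite cone, since [[1+x, y], [y, 1-x]] is PSD exactly when (1+x)(1-x) - y^2 >= 0, i.e. x^2 + y^2 <= 1. A quick numerical check (a toy example, unrelated to theta bodies themselves):

```python
# Membership in the unit disk via a 2x2 PSD condition.
import numpy as np

def in_disk_via_psd(x, y):
    M = np.array([[1 + x, y], [y, 1 - x]])
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-12))

print(in_disk_via_psd(0.6, 0.7), in_disk_via_psd(0.8, 0.7))  # True False
```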

The work of the PI with her collaborators improves our understanding of the algebraic and geometric structures that underlie optimization problems that involve polynomials. Such problems have a wide array of applications and admit methods from both the algebraic and analytic sides of mathematics. This research considers both improvements in our understanding of the theoretical aspects of polynomial optimization, and the application of these methods to problems in computer vision.

NSF AWARD:Interior-point algorithms for conic optimization with sparse matrix cone constraints

Recently, NSF funded a project led by Lieven Vandenberghe on interior-point algorithms for conic optimization with sparse matrix cone constraints.

Conic optimization is an extension of linear programming in which the componentwise vector inequalities are replaced by inequalities with respect to nonpolyhedral convex cones. The conic optimization model is widely used in the recent literature on convex optimization and provides an elegant framework for extending interior-point algorithms from linear programming to convex optimization. It is also the basis of popular modeling systems for convex optimization.

The research on algorithms for conic optimization has mainly focused on three types of inequalities, associated with the nonnegative orthant, the second-order cone, and the positive semidefinite cone. This restriction is motivated by symmetry properties that can be exploited to formulate symmetric primal-dual interior-point algorithms. However, large gaps in linear algebra complexity exist between the three types of conic constraints, and this can lead to inefficiencies when convex optimization problems are converted to the standard conic format.

This study considers approaches to improve the efficiency of conic optimization solvers by considering a larger class of conic constraints, defined by chordal sparse matrix cones, i.e., cones of positive semidefinite matrices with a given chordal sparsity pattern, and the associated dual cones of chordal sparse matrices that have a positive semidefinite completion. These cones include as special cases the three standard cones, but also several interesting non-self-dual cones. Moreover non-chordal sparsity patterns can often be efficiently embedded in chordal patterns and, as a consequence, sparse semidefinite programs can be solved as non-symmetric cone programs involving lower-dimensional cones than the positive semidefinite cone used in semidefinite programming methods. The choice for chordal matrix cones is further motivated by the existence of fast algorithms for evaluating the associated barrier functions and their derivatives.

The investigator and his collaborators study nonsymmetric interior-point algorithms for sparse matrix cones, building on techniques developed for large-scale sparse matrix computations, in particular, multifrontal and supernodal factorization algorithms and parallel sparse matrix algorithms. A wide variety of practical problems in engineering and science can be formulated as nonlinear convex optimization problems, and solved using algorithms developed over the last few decades.

The success of these techniques has created a demand for robust and efficient algorithms for very large convex optimization problems, especially for applications in machine learning, computer vision, electronic design automation, sensor networks, and combinatorial optimization. The problem sizes that arise in these fields often exceed the capabilities of general-purpose solvers. The work of the principal investigator with his collaborators considers approaches to improve the scalability of interior-point algorithms, an important class of convex optimization algorithms. Freely available high-quality software implementations of the techniques developed in the project are a product of the research.

NSF AWARD:Ranking and Clustering by Integer and Linear Optimization

Recently, NSF funded a project led by Amy N. Langville on ranking and clustering by integer and linear optimization.

The research component of this proposal is about ranking and clustering. A given collection of items is to be ordered according to some criterion (e.g., from most to least important). In clustering, the goal is to group items so that similar items appear together. Though both problems are well studied, ranking and clustering research has been dominated by heuristic methods that produce approximate results quickly. Optimization methods that produce exact optimal results have been far less studied, possibly because these methods require longer computational times, and the gains in accuracy do not outweigh the computational costs in many applications. This is unfortunate since optimization methods are based on a beautiful theoretical framework and can lead to novel and interesting results. For example, preliminary work by the PI on a ranking optimization method indicates that multiple optimal solutions can be linked to ties in the ranked list. On the other hand, heuristic methods are not designed to handle ties in the output ranking. Two main goals of the proposed research are: (1) to reveal interesting theoretical connections, and (2) to increase the size limits of optimally solvable problems using both classical and clever new relaxation techniques.

Ranking, also known as linear ordering (LOR), is close in spirit to the Traveling Salesman Problem (TSP): both are simple to state, yet hard to solve optimally. Over several decades, huge gains have been made in the size of solvable TSPs, which have resulted in new and unforeseen uses. Notable examples occur in the transportation industry (involving thousand-city TSPs) and in the microprocessor industry (with even larger TSPs routing copper wiring on circuit boards). Similar progress is expected for the LOR, whose current limit is a few hundred to a few thousand items in some cases. Yet applications for much larger LORs abound, e.g., rankings of genes, products, and webpages. Furthermore, breakthroughs for the TSP (and likewise the proposed LOR work) have not been solely related to scale; theoretical advances and progress in understanding connections to other problems and fields have been equally crucial in the development of general-purpose methods for integer programming, such as cutting plane and branch-and-cut techniques.
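
For concreteness, here is the linear ordering problem written as a 0/1 integer program: x_ij = 1 when item i is ranked above item j, with antisymmetry and triangle (transitivity) constraints, solved on random toy weights with SciPy's generic MILP solver. Serious LOR work uses specialized cutting planes rather than this off-the-shelf approach.

```python
# Linear ordering as a 0/1 ILP: maximize the total payoff of pairwise choices.
import numpy as np
from itertools import permutations
from scipy.optimize import milp, LinearConstraint, Bounds

n = 4
rng = np.random.default_rng(2)
w = rng.integers(0, 10, (n, n))            # w[i,j]: payoff for ranking i over j
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
idx = {p: k for k, p in enumerate(pairs)}

rows, lbs, ubs = [], [], []
for i in range(n):                         # antisymmetry: x_ij + x_ji = 1
    for j in range(i + 1, n):
        r = np.zeros(len(pairs)); r[idx[i, j]] = r[idx[j, i]] = 1
        rows.append(r); lbs.append(1); ubs.append(1)
for i, j, k in permutations(range(n), 3):  # transitivity: x_ij + x_jk + x_ki <= 2
    r = np.zeros(len(pairs))
    r[idx[i, j]] = r[idx[j, k]] = r[idx[k, i]] = 1
    rows.append(r); lbs.append(0); ubs.append(2)

res = milp(c=-np.array([w[p] for p in pairs]),   # maximize total payoff
           constraints=LinearConstraint(np.array(rows), lb=lbs, ub=ubs),
           integrality=np.ones(len(pairs)), bounds=Bounds(0, 1))
order = sorted(range(n), key=lambda i: -sum(res.x[idx[i, j]]
                                            for j in range(n) if j != i))
print("optimal ranking:", order, "objective:", -res.fun)
```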

The proposed research has broad impact. Ranking and clustering have become standard data analysis tools with many uses. For example, Google uses ranking to order webpages resulting from user queries, and also uses clustering for its "Find Similar Pages" function. Amazon uses clustering in its "Customers Who Bought X Also Bought Y" feature. Facebook and Twitter can generate ads and friend recommendations based on ranking and clustering algorithms. To ensure the results of the proposed research reach these consumers, the work will be widely disseminated through journal publications and two books. The project's novel student training plan, which includes peer mentoring and an international exchange program, will broaden participation of underrepresented groups. The educational component and its Calculus Activity Book, being aimed at changing students' attitudes towards the sciences and at instilling confidence in scientific ability, will have the strongest impact on those students who most commonly fail to be retained (likely a relatively large number coming from underrepresented groups), and thus will help further diversify and broaden participation in STEM disciplines.

Sunday, May 15, 2011

Computational optimization applied in electrical power market

Advanced computational optimization techniques developed in recent years have contributed greatly to cost reduction and automation in the electrical power market. Fittingly, Midwest ISO won the 2011 Edelman Award with a presentation titled "Midwest ISO Unlocks Billions in Savings through the Application of O.R. to Energy and Ancillary Services Markets". Midwest ISO developed a mixed-integer linear programming model and solved it by adding problem-specific cuts.

Also, NSF recently funded a project titled "Stochastic optimization and coordination control of demand response for enhancing the secure and economic operation of power systems". The PI is Lei Wu. The detailed project description is below.

The objective of this research is to explore novel demand response management models and robust solutions for both Aggregators of Retail Consumers and Independent System Operators, which enable demand response to become a resource for providing ancillary services and maintaining system security. The approach is to (1) establish an adaptive price-sensitive load forecasting model; (2) propose a risk-constrained stochastic demand response scheduling model for Aggregators of Retail Consumers; (3) construct a two-stage security-constrained stochastic scheduling model for Independent System Operators.

Intellectual Merit: The demand response integration will be analyzed and quantified for minimizing daily operating costs, satisfying hourly security constraints, and accommodating uncertainties. This project is of practical importance since demand response is being implemented worldwide, and consumers will have opportunities to make distinctive contributions to energy security and environmental improvements in power system operation. The project team is qualified to perform the study and the research and educational facilities at Clarkson and Illinois Institute of Technology are adequate.

Broader Impacts: This project has profound impacts with the increasing deployment of distributed generations and plug-in hybrid vehicles. The proposed integrated research and educational activities and the new course development on modern power system operation and control in smart grid will attract more students to seek careers in power engineering. Underrepresented students will be recruited to participate in various tasks of this research. This project will increase public awareness and understanding of the complexity of power system operation among ratepayers, regulators, politicians, utility executives, market participants, and electricity consumers.

Thursday, April 14, 2011

NSF AWARD : Distribution and Moment-Robust Optimization Models and Algorithms

Recently, an NSF grant was awarded to Sanjay Mehrotra.
Abstract:
Optimization models and methods are widely used in decision problems. Incorporating parameter uncertainty is important in constructing representative optimization models. This parameter uncertainty may be described by a probability distribution, by the moments of a probability distribution, or by bounds and confidence intervals on moments. When estimating uncertain parameters, information is also available on the error distribution of the estimated parameter. The research objective of this award is to study properties of, and develop algorithmic methods for solving, multivariate optimization models that incorporate parameter uncertainty using limited knowledge of its distribution, together with the parameter estimation errors. The optimization models will be based on parameter distribution moment estimates, and will incorporate moment estimation errors using a suitable penalty approach.
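
A tiny sketch of the moment-problem viewpoint behind such models: over all distributions supported on given points and matching a known mean, find the one maximizing an expected loss. On a finite support this is just a linear program over the probability weights (all numbers below are invented).

```python
# Worst-case expected loss over { p >= 0 : sum p = 1, sum p*support = mean }.
import numpy as np
from scipy.optimize import linprog

support = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # possible parameter values
mean = 1.5                                       # known first moment
loss = (support - 2.0) ** 2                      # loss at each outcome

res = linprog(-loss,                             # maximize expected loss
              A_eq=np.vstack([np.ones_like(support), support]),
              b_eq=[1.0, mean],
              bounds=[(0, None)] * len(support))
print("worst-case expected loss:", -res.fun)     # mass piles onto the extremes
print("worst-case distribution:", res.x.round(3))
```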

If successful, the results of this research will lead to the development of a new class of decision optimization modeling tools with associated algorithmic techniques for solving these models. This award will contribute to better handling of parameter uncertainty in single and two-stage inventory planning models, the classical least squares model, and models involving objectives described by quadratic functions. A better understanding of the algorithmic implications of parameter uncertainty in these problems will provide a foundation for handling parameter uncertainty in other application models that use planning, least squares, and quadratic objectives as basic building blocks. The award will also contribute to the development of computational tools implementing the algorithmic solution techniques. Experiments will be performed to validate the algorithms, and to compare the properties of the solutions generated from the models incorporating distributional knowledge with those that ignore this information.

Monday, March 28, 2011

Integrating Dynamic Programming within Mixed-Integer Programming Techniques

This NSF grant was awarded to J. Cole Smith and Joseph Hartman.

The objective of this award is to improve the solvability of combinatorial optimization problems by integrating information obtained from a partial application of dynamic programming (DP) within valid inequality generation schemes for mixed-integer programming (MIP) algorithms. Computationally, the advantage of this technique is that one can execute a partial DP algorithm (up to a tractable number of stages) for the problem at hand. The optimal state values obtained in this process can then be used to provide lower bounds (for minimization problems) on partial objective function values, as a function of a small number of key variables. A deeper analysis of the technique reveals that the process uses DP to project the MIP feasible region onto a key subset of MIP variables. The state information obtained from the truncated DP execution yields valid inequalities, which provide bounds on a portion of the problem objective as a function of these key variables. It is hoped that these investigations will shift focus from examining only facet-defining inequalities for MIP polyhedra to also generating inequalities that capture strong relationships between a set of designated key variables and partial objective function values.
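
The "partial DP" idea can be seen on a toy knapsack: running the dynamic program over only the first few stages yields, for every state, a bound on the objective contribution of the corresponding key variables, which is the raw material for the valid inequalities described above. Everything below is illustrative, not the project's implementation.

```python
# Truncated DP over the first `stages` items of a 0/1 knapsack.
def truncated_dp(values, weights, capacity, stages):
    """best[c] = max value from items 0..stages-1 using total weight <= c."""
    best = [0] * (capacity + 1)
    for v, w in list(zip(values, weights))[:stages]:
        for c in range(capacity, w - 1, -1):     # classic 0/1 knapsack sweep
            best[c] = max(best[c], best[c - w] + v)
    return best

values, weights, capacity = [10, 13, 7, 11], [5, 7, 3, 6], 12
partial = truncated_dp(values, weights, capacity, stages=2)
# partial[c] bounds the contribution of the first two (key) variables in any
# feasible solution that grants them at most c units of capacity:
#   10*x0 + 13*x1 <= partial[c]  whenever  5*x0 + 7*x1 <= c.
print(partial)
```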

A number of important problems in production, supply chain management and national security can be modeled as combinatorial optimization problems. For example, the capacitated lot-sizing problem (CLSP) is solved in numerous industries to make periodic inventory re-ordering and production scheduling decisions. Unfortunately, this and other combinatorial problems can be hard to solve. Preliminary analysis has shown the proposed solution method to be successful at solving the CLSP. If successful, we hope the method can improve the solvability of other hard, but important, problems in supply chain management (generalized assignment and prize-collecting routing), finance (knapsack) and security (node detection and network monitoring).

Tuesday, March 15, 2011

NSF AWARD:Complex Integer Rounding Cuts for Mixed Integer Programming

Principal Investigator: Kiavash Kianfar (kianfar@tamu.edu)

The research objective of this award is to create and evaluate new cutting plane methods for mixed integer programming using a new approach here called Complex Integer Rounding. Cutting planes are a crucial part of the algorithms used for solving mixed integer programming problems. Mixed integer programming is an optimization framework with numerous applications in science, engineering, and business. The proposed approach consists of deriving novel forms of three major elements and making innovative use of them: one or multiple facets of base polyhedra and/or one or multiple subadditive functions are utilized within a relaxation/combination procedure that is applied to the original constraints and a series of intermediate inequalities to eventually obtain a cut-generating function. Both single-constraint and multi-constraint cuts will be considered, and the facet-defining properties of the developed cuts will be investigated. The customization of the cuts to a collection of important special-structure problems will be studied. In order to evaluate the performance of the developed cuts, efficient separation methods will be developed and comprehensive computational experiments will be performed.
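
The proposed cuts build on the basic mixed integer rounding (MIR) step, which is simple enough to state in full. For a row sum_j a_j x_j <= b + y with integer x_j >= 0, continuous slack y >= 0, f = frac(b) and f_j = frac(a_j), the MIR inequality is sum_j (floor(a_j) + max(f_j - f, 0)/(1 - f)) x_j <= floor(b) + y/(1 - f). A small sketch with made-up numbers:

```python
# Basic MIR step for a row  sum_j a[j]*x_j <= b + y.
import math

def mir_cut(a, b):
    f = b - math.floor(b)
    assert 0 < f < 1, "needs a fractional right-hand side"
    coefs = [math.floor(aj) + max((aj - math.floor(aj)) - f, 0.0) / (1 - f)
             for aj in a]
    return coefs, math.floor(b), 1.0 / (1 - f)   # lhs coefs, rhs, slack weight

# Example row: 2.6*x1 + 1.2*x2 <= 3.4 + y
# MIR cut:     2.333*x1 + 1.0*x2 <= 3 + 1.667*y
print(mir_cut([2.6, 1.2], 3.4))
```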

Mixed integer programming is a powerful and flexible optimization paradigm with ubiquitous applications in science, engineering, and business ranging from flight crew scheduling to molecular biology. Yet solving mixed integer programs is generally very difficult. Through introduction of new strong cutting planes, this research, if successful, will result in faster solution algorithms for mixed integer programming and will increase the size of the problems that we are able to solve. Consequently, it will have a significant impact on all aforementioned areas. Moreover, the methodological developments in this research open doors to several new research avenues regarding cutting plane methods.

Friday, March 11, 2011

NSF Award: Stochastic Mixed-Integer Optimization: Polyhedral Theory, Large-Scale Algorithms and Computations

Recently, the National Science Foundation funded a research project on stochastic mixed-integer optimization, focusing on polyhedral theory, large-scale algorithms, and computations. The investigators are Suvrajeet Sen and Simge Kucukyavuz. The abstract of the award is below.

This award focuses on a class of constrained optimization problems in which data are uncertain, and some decisions need to be made before uncertainty about the data clears (first-stage). The remaining decisions are made once the data becomes more reliable (second-stage). In addition, these problems involve both discrete and continuous decisions, and hence are referred to as Two-stage Stochastic Mixed-Integer Programs (SMIP). The goal of this project is to integrate recently developed integer programming tools based on multi-term disjunctions, and stochastic programming ideas based on decomposition and coordination. These tools will provide the basis for sequential convexification of SMIP problems, and will allow their solution via a finite sequence of approximations. These algorithms will be implemented and rigorously tested on a wide variety of instances.
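
The "decomposition and coordination" ingredient is classically the L-shaped (Benders) method. The sketch below runs that loop on the simplest possible two-stage problem, with continuous recourse and invented data; the funded work targets the much harder mixed-integer recourse case via multi-term disjunctions.

```python
# L-shaped loop: master over (x, theta) plus optimality cuts from scenario
# subproblem duals. Model: build capacity x at cost c, pay q per unit of
# unmet scenario demand; the dual of each shortfall LP is q if demand is
# unmet at x, else 0.
from scipy.optimize import linprog

c, q = 1.0, 3.0
demand, prob = [50.0, 80.0, 120.0], [0.3, 0.5, 0.2]
cuts = []                                    # each cut: theta >= const + slope*x

for it in range(20):
    A = [[s, -1.0] for (k, s) in cuts]       # slope*x - theta <= -const
    b = [-k for (k, s) in cuts]
    res = linprog([c, 1.0], A_ub=A or None, b_ub=b or None,
                  bounds=[(0, None), (0, None)])
    x, theta = res.x
    lam = [q if d > x + 1e-9 else 0.0 for d in demand]     # scenario duals
    const = sum(p * l * d for p, l, d in zip(prob, lam, demand))
    slope = -sum(p * l for p, l in zip(prob, lam))
    if const + slope * x <= theta + 1e-6:    # no violated cut: optimal
        break
    cuts.append((const, slope))              # add the optimality cut

print("x* =", x, "total cost =", c * x + theta)   # x* = 80, cost = 104
```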

If successful, this project will allow engineers to add greater intelligence to software that is used in engineering design, contingency planning in manufacturing, military operations planning, and many more. For these and other real-world engineering problems, the exact setting of future operations is impossible to predict accurately, and SMIP provides a formal basis to cope with the uncertainty. While these issues are ubiquitous in most operations, there is a serious paucity of methodologies that can solve such computational problems. The widespread applicability of the proposed methodology is expected to transform the way in which discrete decisions are made in an uncertain environment. Moreover, results from this project will build a unifying theory for discrete and continuous optimization under uncertainty.