Wednesday, March 30, 2011

Computational Scientist Job Position

Job Description
Numerica Corporation seeks Ph.D. and M.S. graduates to work on its research and development programs in information science and software development. Numerica, a small business located in Loveland, Colorado with a development office in Pasadena, CA, is a leading provider of data fusion, tracking, surveillance, sensor and communications resource management, situation assessment, distributed systems, and other information science systems and solutions to the US Government and industry. Numerica offers a rich working environment for highly talented, motivated individuals who wish to apply their scientific and technical background to the development of innovative solutions to challenging scientific problems in a team atmosphere. Numerica develops both algorithms and software for a variety of problems in information systems and science and related fields. Areas of work include, but are not limited to, probability theory, statistics, random processes, controls, dynamical systems, estimation theory, signal processing, combinatorial and continuous optimization, Bayesian networks, multiple target tracking, data fusion, sensor and communications resource management, distributed algorithms and systems, GIS, chemical detection, numerical algorithms and their analysis, scientific computing, computer science, and software design and development using object-oriented principles. At Numerica, employees participate on small teams that conduct basic and applied research and development for both US Government and commercial customers.
Successful applicants should have a demonstrated ability, interest, and desire to learn new areas and to work on new, challenging scientific and engineering problems. Numerica strives to hire talented M.S. or Ph.D. graduates with a solid and broad background rather than specific subject training or experience. We encourage exceptionally strong applicants regardless of their specific background.
COMPUTATIONAL SCIENTIST AT NUMERICA
Computational scientists at Numerica work across the spectrum of numerical algorithms, their efficient and robust implementation, and software design and development. This is an indispensable position in support of delivering software solutions to the U.S. Government and commercial markets. Essential duties and responsibilities may include the following tasks:
• Develop state-of-the-art software algorithms, designs, and prototypes;
• Implement algorithms using object-oriented design principles;
• Develop, optimize, test, and deliver computationally rigorous (industrial-strength) scientific software;
• Evaluate and document algorithm and software design and performance;
• Prepare technical reports and publish papers describing algorithms and software designs.
Job Requirements
Successful applicants generally have the following backgrounds and experiences:
• M.S. or Ph.D. in Computer Science, Computational or Applied Mathematics, Statistics, Electrical Engineering, or a closely related field, with a focus on computational science/engineering or on software engineering;
• Significant experience in C++ programming in a technical or scientific environment using the Linux operating system, as evidenced by technical publications, released software, and/or work with large-scale scientific computations;
• Coursework in, or demonstrated in-depth working knowledge of, efficient implementations, software engineering, object-oriented design principles, and one of linear algebra, numerical analysis or optimization, or probability and statistics, with a record of academic excellence;
• Effective written and verbal communication skills, with the ability to clearly communicate technical and programmatic details to both colleagues and customers.
U.S. CITIZENSHIP AND SECURITY REQUIREMENTS 
Because of the nature of the work performed by Numerica, all applicants must be capable of obtaining a US security clearance. At a minimum, this requires that a candidate be a US citizen and have a background such that sufficient trustworthiness can be established (e.g., a clean criminal record, reasonable credit, and no use of illegal drugs).

ADDITIONAL INFORMATION ABOUT NUMERICA 
 Visit www.numerica.us for additional information and an on-line application.  Numerica is an equal opportunity employer.

Monday, March 28, 2011

Smart Pointer in IPOPT

The Smart Pointer Implementation: SmartPtr<T>

The SmartPtr class is described in IpSmartPtr.hpp. It is a template class that takes care of deleting objects for us, so we need not be concerned about memory leaks. Instead of pointing to an object with a raw C++ pointer (e.g. HS071_NLP*), we use a SmartPtr. Every time a SmartPtr is set to reference an object, it increments a counter in that object (see the ReferencedObject base class if you are interested). When a SmartPtr is done with the object, either by leaving scope or being set to point to another object, the counter is decremented. When the count of the object goes to zero, the object is automatically deleted. SmartPtrs are very simple; just use them as you would a standard pointer.
It is very important to use SmartPtr's instead of raw pointers when passing objects to IPOPT. Internally, IPOPT uses smart pointers for referencing objects. If you use a raw pointer in your executable, the object's counter will NOT get incremented. Then, when IPOPT uses smart pointers inside its own code, the counter will get incremented. However, before IPOPT returns control to your code, it will decrement as many times as it incremented earlier, and the counter will return to zero. Therefore, IPOPT will delete the object. When control returns to you, you now have a raw pointer that points to a deleted object.
This might sound difficult to anyone not familiar with the use of smart pointers, but just follow one simple rule: always use a SmartPtr when creating or passing an IPOPT object.
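As a minimal sketch of this rule in action, modeled on the hs071 example that ships with IPOPT (the class HS071_NLP and its header come from that example):

#include "IpIpoptApplication.hpp"
#include "hs071_nlp.hpp"

using namespace Ipopt;

int main()
{
   // Wrap the problem in a SmartPtr instead of a raw HS071_NLP*;
   // constructing the SmartPtr increments the object's reference count.
   SmartPtr<TNLP> mynlp = new HS071_NLP();

   // The application object is reference counted in the same way.
   SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
   app->Initialize();

   // IPOPT increments and decrements the count internally; because our
   // SmartPtr still holds a reference, the object survives the call.
   ApplicationReturnStatus status = app->OptimizeTNLP(mynlp);

   // When mynlp and app leave scope, the counts drop to zero and the
   // objects are deleted automatically -- no explicit delete, no leak.
   return (int) status;
}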

Integrating Dynamic Programming within Mixed-Integer Programming Techniques

This NSF grant was awarded to J. Cole Smith and Joseph Hartman.

The objective of this award is to improve the solvability of combinatorial optimization problems by integrating information obtained from a partial application of dynamic programming (DP) within valid inequality generation schemes for mixed-integer programming (MIP) algorithms. Computationally, the advantage of this technique is that one can execute a partial DP algorithm (up to a tractable number of stages) for the problem at hand. The optimal state values obtained in this process can then be used to provide lower bounds (for minimization problems) on partial objective function values, as a function of a small number of key variables. A deeper analysis of the technique reveals that the process uses DP to project the MIP feasible region onto a key subset of MIP variables. The state information obtained from the truncated DP execution yields valid inequalities, which provide bounds on a portion of the problem objective as a function of these key variables. It is hoped that these investigations will shift the focus from examining only facet-defining inequalities for MIP polyhedra to also generating inequalities that capture strong relationships between a set of designated key variables and partial objective function values.

A number of important problems in production, supply chain management and national security can be modeled as combinatorial optimization problems. For example, the capacitated lot-sizing problem (CLSP) is solved in numerous industries to make periodic inventory re-ordering and production scheduling decisions. Unfortunately, this, and other combinatorial problems, can be hard to solve. Preliminary analysis has shown the proposed solution method to be successful at solving the CLSP. If successful, we hope the method can improve the solvability of other hard, but important, problems in supply chain management (generalized assignment and prize-collecting routing), finance (knapsack) and security (node detection and network monitoring).

Rima 0.05: Math Programming for Lua

Geoff is happy to announce version 0.05 of Rima, a symbolic math modelling package for Lua [1] with bindings to CLP, CBC, lpsolve and ipopt.

Rima has a number of nice features:

- models are symbolic and functional rather than imperative
- Rima allows very rich interaction with data structures - dynamic objects and duck typing for math modelling
- models are very easily encapsulated and extended
- there's strong and flexible separation between models and data.  All data is late bound, and functions and expressions are just data
- you can compose models from parts

Rima's documentation starts at http://www.incremental.co.nz/projects/lua.html and development is hosted at github https://github.com/geoffleyland/rima/
You can get the tarball from
https://github.com/downloads/geoffleyland/rima/rima-latest.tar.gz

Changes since 0.04 are
 - support for ipopt and consequently nonlinear problems
 - symbolic differentiation
 - compilation of expressions to Lua functions
 - hosting moved to github

The symbolic differentiation and compilation are used to pass functions for evaluating objectives, constraints, gradients, and the Hessian to ipopt. It's quite cool: Rima differentiates the symbolic expressions, writes them out as a Lua string, and then compiles the string. With LuaJIT [2], you get native code for a symbolically differentiated function!

Rima is not part of COIN, but it's been in the review queue for half its life!

Any feedback would be much appreciated.


[1] http://www.lua.org/
[2] http://www.luajit.org/
 

Sunday, March 27, 2011

Coopr 2.5 Release announcement

William E. Hart is pleased to announce the release of Coopr 2.5 (2.5.3890). Coopr is a collection of Python software packages that supports a diverse set of optimization capabilities for formulating and analyzing optimization models.

The following are highlights of this release:

- Solvers
   * MIP solver interface updates to use appropriate objective names
   * Added support for suffixes in GUROBI solver interface
   * Improved diagnostic analysis of PH solver for the extensive form

- Usability enhancements
   * Improved robustness of coopr_install
   * Fixed Coopr installation problem when using easy_install
   * Added a script to launch the CooprAge GUI.
   * LP files now are written with the true objective name
   * Rework of pyomo command line to create a concise output
   * Many efficiency improvements during model generation!
   * Many improvements to diagnostic output and error handling
   * Expressions like "model.p > 1" can now be used within generation rules

- Modeling
   * Added support for generalized disjunctive programs (in coopr.gdp)
   * Constraints can now be specified in "compound" form:  lb <= expr <= ub
   * Significant robustness enhancements for model expressions
   * Improved error handling for constraint generation

- Other
   * Python 2.5 is deprecated due to performance issues
   * Python versions 2.6 and 2.7 are supported
   * New MS Windows installer is now available


See https://software.sandia.gov/trac/coopr/wiki/GettingStarted for instructions for getting started with Coopr.  Installers are available for MS Windows and Unix operating systems to simplify the installation of Coopr packages along with the third-party Python packages that they depend on.  These installers can also automatically install extension packages from Coin Bazaar.

Friday, March 25, 2011

Video Presentation on the Gurobi Interactive Shell

Basic capabilities: 
  • Reading and optimizing a model
  • Displaying results
  • Changing parameter settings

Advanced capabilities:
  • Model modification
  • Custom functions
  • Callbacks
 Go here to watch the video now.

Thursday, March 24, 2011

Generic Decomposition Algorithms for Integer Programs

Generic Decomposition Algorithm for Integer Programs is a research project conducted in Germany by Marco Lübbecke and Martin Bergner.

Here is the introduction to the project, quoted from its website:

There is no alternative to integer programming when it comes to computing solutions of proven quality, or even optimal solutions, to large-scale hard combinatorial optimization problems. In practical applications, matrices often have special structures exploitable in decomposition algorithms, in particular in branch-and-price. This opened the way to the solution of mixed integer programs (MIPs) of enormous size and complexity, both from industry and within mathematics, computer science, and operations research.
Yet, as the state of the art stands, branch-and-price is implemented ad hoc for every new problem. Various frameworks are very helpful in this respect; still, this requires solid expert knowledge. This project aims at making a significant contribution towards a generic implementation of decomposition algorithms. As a long-term vision, a MIP solver should be able to apply a decomposition algorithm without any user interaction. A precondition for such automation is the answer to important algorithmic and theoretical questions, among them:
  • recognition of decomposable structures in matrices and/or MIPs
  • development of a theory (and appropriate algorithms) for evaluating the quality of a decomposition
In this project we address these questions. From a mathematical point of view, there are interesting relations to polyhedral combinatorics, graph theory, and linear algebra. A generic implementation of our findings is planned to be provided to the research community. To this end we closely cooperate with developers of MIP solvers (such as SCIP) and modeling languages (such as GAMS).

Wednesday, March 23, 2011

A primer in Column Generation

"A primer in Column Generation" is the first chapter of the book Column Generation, which is a good tutorial for Column Generation Techniques for solving integer linear programming problems. 

The chapter is divided into four parts:

The first part provides an example to motivate column generation: a shortest path problem with a time constraint. It gives detailed solution steps using column generation within the branch-and-bound framework.

The second part provides the theory and introduces the Dantzig-Wolfe decomposition.

The third part gives a dual point of view of the Dantzig-Wolfe decomposition, namely the Lagrangian relaxation.

The fourth part hits a hot topic these days, how to exploit the block-diagonal structure of the matrix, under the title "on finding a good formulation".

Monday, March 21, 2011

configure output in COIN-Osi

checking build system type... i686-pc-linux-gnu
checking whether we want to compile in debug mode... yes
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
configure: C compiler options are: -g -pipe -pedantic-errors -Wimplicit -Wparentheses -Wsequence-point -Wreturn-type -Wcast-qual -Wall -Wno-unknown-pragmas
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking whether C++ compiler g++ works... yes
configure: C++ compiler options are: -g -pipe -pedantic-errors -Wimplicit -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas
configure: Trying to determine Fortran compiler name
checking for gfortran... gfortran
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether gfortran accepts -g... yes
configure: Fortran compiler options are: -g -pipe
checking for egrep... grep -E
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking dependency style of g++... gcc3
checking whether to enable maintainer-specific portions of Makefiles... no
checking host system type... i686-pc-linux-gnu
checking for a sed that does not truncate output... /bin/sed
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for /usr/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /usr/bin/nm -B
checking whether ln -s works... yes
checking how to recognise dependent libraries... pass_all
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking how to run the C++ preprocessor... g++ -E
checking the maximum length of command line arguments... 32768
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC
checking if gcc PIC flag -fPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking whether the gcc linker (/usr/bin/ld) supports shared libraries... yes
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... no
checking whether to build static libraries... yes
configure: creating libtool
appending configuration tag "CXX" to libtool
checking for ld used by g++... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking whether the g++ linker (/usr/bin/ld) supports shared libraries... yes
checking for g++ option to produce PIC... -fPIC
checking if g++ PIC flag -fPIC works... yes
checking if g++ static flag -static works... yes
checking if g++ supports -c -o file.o... yes
checking whether the g++ linker (/usr/bin/ld) supports shared libraries... yes
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
appending configuration tag "F77" to libtool
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... no
checking whether to build static libraries... yes
checking for gfortran option to produce PIC... -fPIC
checking if gfortran PIC flag -fPIC works... yes
checking if gfortran static flag -static works... yes
checking if gfortran supports -c -o file.o... yes
checking whether the gfortran linker (/usr/bin/ld) supports shared libraries... yes
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
configure: Build is "i686-pc-linux-gnu".
checking if library version is set... no
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking whether project Glpk is available... not given
checking whether project ThirdParty/Glpk needs to be configured... no
checking whether project Blas is available... no (but will check for system blas later)
checking whether project ThirdParty/Blas needs to be configured... no
checking whether project Lapack is available... no (but will check for system lapack later)
checking whether project ThirdParty/Lapack needs to be configured... no
checking whether project Sample is available... yes, source in Data/Sample
checking whether project Data/Sample needs to be configured... yes
checking whether project Netlib is available... not given
checking whether project Data/Netlib needs to be configured... no
checking whether project CoinUtils is available... yes, source in CoinUtils
checking whether project CoinUtils needs to be configured... yes
checking whether project Osi is available... yes
checking whether project Osi needs to be configured... yes
configure: configuring doxygen documentation options
checking for doxygen... no
checking for dot... YES
checking for doxygen doc'n for CoinUtils ... /home/jiw508/Osi-coin/build-debug/CoinUtils/doxydoc (tag)
checking for doxygen doc'n for Cgl ... NONE/share/coin/doc/Cgl/doxydoc (tag)
checking for doxygen doc'n for Clp ... NONE/share/coin/doc/Clp/doxydoc (tag)
checking for doxygen doc'n for DyLP ... NONE/share/coin/doc/DyLP/doxydoc (tag)
checking for doxygen doc'n for Vol ... NONE/share/coin/doc/Vol/doxydoc (tag)
checking for doxygen doc'n for SYMPHONY ... NONE/share/coin/doc/SYMPHONY/doxydoc (tag)
checking which command should be used to link input files... ln -s
configure: creating ./config.status
config.status: creating Makefile
config.status: creating doxydoc/doxygen.conf
config.status: executing depfiles commands
configure: configuring in Data/Sample
configure: running /bin/sh '../../../Data/Sample/configure' --prefix=/home/jiw508/Osi-coin/build-debug  '-enable-debug' --cache-file=/dev/null --srcdir=../../../Data/Sample
checking for svnversion... yes
checking for egrep... grep -E
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking whether this is a VPATH configuration... yes
checking build system type... i686-pc-linux-gnu
checking whether ln -s works... yes
configure: Creating links to the example files (*.mps)
configure: Creating links to the example files (input.130)
configure: Creating links to the example files (app0110.* app0110R.* bug.*)
checking which command should be used to link input files... ln -s
configure: creating ./config.status
config.status: creating Makefile
config.status: creating coindatasample.pc
config.status: creating coindatasample-uninstalled.pc
configure: Configuration of DataSample successful
configure: configuring in CoinUtils
configure: running /bin/sh '../../CoinUtils/configure' --prefix=/home/jiw508/Osi-coin/build-debug  '-enable-debug' --cache-file=/dev/null --srcdir=../../CoinUtils
checking build system type... i686-pc-linux-gnu
checking for svnversion... yes
checking whether we want to compile in debug mode... yes
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
configure: C compiler options are: -g -pipe -pedantic-errors -Wimplicit -Wparentheses -Wsequence-point -Wreturn-type -Wcast-qual -Wall -Wno-unknown-pragmas
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking whether C++ compiler g++ works... yes
configure: C++ compiler options are: -g -pipe -pedantic-errors -Wimplicit -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas
configure: Trying to determine Fortran compiler name
checking for gfortran... gfortran
configure: Trying to determine Fortran compiler name
checking for gfortran... gfortran
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether gfortran accepts -g... yes
configure: Fortran compiler options are: -g -pipe
checking how to get verbose linking output from gfortran... -v
checking for Fortran libraries of gfortran...  -L/usr/lib/gcc/i486-linux-gnu/4.3.2 -L/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/i486-linux-gnu/4.3.2/../../.. -lgfortranbegin -lgfortran -lm -lgcc_s
checking for dummy main to link with Fortran libraries... none
checking for Fortran name-mangling scheme... lower case, underscore, no extra underscore
checking for egrep... grep -E
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking dependency style of g++... gcc3
checking whether to enable maintainer-specific portions of Makefiles... no
configure: Using libtool script in directory ..
checking if library version is set... no
checking cmath usability... yes
checking cmath presence... yes
checking for cmath... yes
checking cfloat usability... yes
checking cfloat presence... yes
checking for cfloat... yes
checking cieeefp usability... no
checking cieeefp presence... no
checking for cieeefp... no
checking ieeefp.h usability... no
checking ieeefp.h presence... no
checking for ieeefp.h... no
checking cassert usability... yes
checking cassert presence... yes
checking for cassert... yes
checking whether finite is declared... yes
checking whether isnan is declared... yes
checking cinttypes usability... no
checking cinttypes presence... no
checking for cinttypes... no
checking inttypes.h usability... yes
checking inttypes.h presence... yes
checking for inttypes.h... yes
checking for int64_t... yes
checking for intptr_t... yes
checking windows.h usability... no
checking windows.h presence... no
checking for windows.h... no
checking endian.h usability... yes
checking endian.h presence... yes
checking for endian.h... yes
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking whether -lblas has BLAS... yes: -lblas
checking whether LAPACK is already available with BLAS library... no
checking whether -llapack has LAPACK... yes: -llapack
checking for COIN-OR package Glpk... not given: No package 'coinglpk' found
checking for COIN-OR package Sample... yes: 1.2
checking for COIN-OR package Netlib... not given: No package 'coindatanetlib' found
checking whether this is a VPATH configuration... yes
configure: configuring doxygen documentation options
checking for doxygen... no
checking for dot... YES
checking which command should be used to link input files... ln -s
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating test/Makefile
config.status: creating coinutils.pc
config.status: creating coinutils-uninstalled.pc
config.status: creating doxydoc/doxygen.conf
config.status: creating inc/config_coinutils.h
config.status: inc/config_coinutils.h is unchanged
config.status: executing depfiles commands
configure: Creating VPATH links for data files
configure: Configuration of CoinUtils successful
configure: configuring in Osi
configure: running /bin/sh '../../Osi/configure' --prefix=/home/jiw508/Osi-coin/build-debug  '-enable-debug' --cache-file=/dev/null --srcdir=../../Osi
checking build system type... i686-pc-linux-gnu
checking for svnversion... yes
checking whether we want to compile in debug mode... yes
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
configure: C compiler options are: -g -pipe -pedantic-errors -Wimplicit -Wparentheses -Wsequence-point -Wreturn-type -Wcast-qual -Wall -Wno-unknown-pragmas
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking whether C++ compiler g++ works... yes
configure: C++ compiler options are: -g -pipe -pedantic-errors -Wimplicit -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas
checking for egrep... grep -E
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking dependency style of g++... gcc3
checking whether to enable maintainer-specific portions of Makefiles... no
configure: Using libtool script in directory ..
checking if library version is set... no
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for COIN-OR package CoinUtils... yes: 2.8
checking for COIN-OR package Glpk... not given: No package 'coinglpk' found
checking for COIN-OR package Sample... yes: 1.2
checking for COIN-OR package Netlib... not given: No package 'coindatanetlib' found
checking if user provides library for Cplex... no
checking if user provides library for Mosek... no
checking if user provides library for Xpress... no
checking if user provides library for Gurobi... no
checking if user provides library for Soplex... no
configure: configuring doxygen documentation options
checking for doxygen... no
checking for dot... YES
checking for doxygen doc'n for CoinUtils ... /home/jiw508/Osi-coin/build-debug/share/coin/doc/CoinUtils/doxydoc (tag)
checking which command should be used to link input files... ln -s
configure: creating ./config.status
config.status: creating Makefile
config.status: creating examples/Makefile
config.status: creating src/Osi/Makefile
config.status: creating src/OsiCpx/Makefile
config.status: creating src/OsiGlpk/Makefile
config.status: creating src/OsiMsk/Makefile
config.status: creating src/OsiXpr/Makefile
config.status: creating src/OsiGrb/Makefile
config.status: creating src/OsiSpx/Makefile
config.status: creating src/OsiCommonTest/Makefile
config.status: creating test/Makefile
config.status: creating osi.pc
config.status: creating osi-uninstalled.pc
config.status: creating osi-unittests.pc
config.status: creating osi-unittests-uninstalled.pc
config.status: creating doxydoc/doxygen.conf
config.status: creating inc/config_osi.h
config.status: inc/config_osi.h is unchanged
config.status: executing depfiles commands
configure: Configuration of Osi successful
configure: Main configuration of Osi successful

Finite Disjunctive Programming Characterizations for General Mixed-Integer Linear Programs


In a recent issue of Operations Research, a paper titled "Finite Disjunctive Programming Characterizations for General Mixed-Integer Linear Programs" caught my attention.


The abstract of the paper is shown below:
In this paper, we give a finite disjunctive programming procedure to obtain the convex hull of general mixed-integer linear programs (MILP) with bounded integer variables. We propose a finitely convergent convex hull tree algorithm that constructs a linear program that has the same optimal solution as the associated MILP. In addition, we combine the standard notion of sequential cutting planes with ideas underlying the convex hull tree algorithm to help guide the choice of disjunctions to use within a cutting plane method. This algorithm, which we refer to as the cutting plane tree algorithm, is shown to converge to an integral optimal solution in finitely many iterations. Finally, we illustrate the proposed algorithm on three well-known examples in the literature that require an infinite number of elementary or split disjunctions in a rudimentary cutting plane algorithm.

Saturday, March 19, 2011

Column Generation in CPLEX

Column generation is not automated in CPLEX, but CPLEX provides some example files that users can follow as a guide when implementing column generation.

Decomposition example programs that use CPLEX

Friday, March 18, 2011

CVCR Poster Competition

Dear Friends,

We invite you to make a poster presentation at the forthcoming Center
for Value Chain Research (CVCR) Spring Symposium on May 11-12, 2011.
You will be able to present your projects, network with faculty and
industry representatives, and benefit from constructive suggestions
from colleagues and potential employers. We welcome any projects
related to supply chains or value chains.

This is an excellent networking opportunity. Several industry
representatives attend the symposium. Please find the attached list of
organizations that attended last year.

The posters will be displayed throughout the symposium, including the
Student Networking Reception on May 11. The designated Poster Session
will be held on May 12. The session will also serve as a competition
and symposium attendees will vote for the best poster. The first-place
award is $200 and the second-place award is $100.

All graduate students who would like to present their work are invited
to contact us at informs@lehigh.edu. Please include a short
description (approx. 100 words) of your projects. We are going to print
the posters at no cost to you, and make them ready at the symposium
venue. Below are a few guidelines for you:

 - Please remember that most of the attendees will be from industry,
so make sure that your posters are non-technical enough to be easily
understood.
 - The posters are going to be printed on 24in x 36in paper. Please
take this into account while preparing your poster. Resizing
afterwards can distort the page content seriously.
 - The poster submissions are going to be in PDF format. You are free
to use any software (e.g., PowerPoint) to prepare your poster as long
as you submit a PDF file in the end.
 - You can find several poster examples, templates and guidelines on
the internet. Below are some useful ones to start from
     http://www.swarthmore.edu/NatSci/cpurrin1/posteradvice.htm
     http://people.eku.edu/ritchisong/posterpres.html
     http://www.ncsu.edu/project/posters/NewSite/index.html

We look forward to receiving your posters.

Cheers,

INFORMS Student Chapter

Wednesday, March 16, 2011

glibc detected double free or corruption error

When this error occurs, you can try, in your bash session:

export MALLOC_CHECK_=0

Some programs attempt to free memory through delete that has already been deallocated. Setting MALLOC_CHECK_=0 suppresses the error by switching off glibc's consistency checks for this activity; note that the underlying bug remains.
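For illustration, a minimal (hypothetical) program that triggers the check looks like this:

int main()
{
    int* p = new int(42);
    delete p;   // first delete: fine
    delete p;   // second delete: glibc may abort with
                // "double free or corruption"
    return 0;
}

Running it as MALLOC_CHECK_=0 ./a.out silences the diagnostic, but this only hides the symptom; the real fix is to remove the duplicate delete (and, defensively, set the pointer to NULL after deleting).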

[DOS seminar Reminder] Today 12pm Network Congestion Control with Markovian Multipath Routing

Time: Wednesday, March 16, 2011, 12:00pm-1:00pm
Title: Network Congestion Control with Markovian Multipath Routing
Speaker: Cristóbal Guzmán, Georgia Tech ACO program
Location: ISyE executive classroom

Abstract:
Routing and congestion control are basic components of packet-switched communication networks. While routing is responsible for determining efficient paths along which the sources communicate to their corresponding receivers, congestion control manages the transmission rate of each source in order to keep network congestion within reasonable limits.

Mathematical modeling in network engineering copes with both of these problems, but usually in a separate manner, i.e., solving one problem when the variables of the other are fixed. One of the main models for rate control is the so called Network Utility Maximization (NUM), which is a convex optimization formulation for steady state flows. On the other hand, there has been some progress in the last 10 years in the design of distributed routing protocols for large networks, even for the multipath case.

In this work, we present a model that combines rate control and multipath routing, where rate control is based on the NUM model, and routing is based on discrete choice distribution models that lead to a Markovian Traffic Equilibrium. The combination of these models leads to a system of equations that corresponds to the optimality conditions of a strictly convex unconstrained program of low dimension, where the variables are link congestion prices. This characterization allows us to establish existence and uniqueness of equilibria.

If time allows, we will show an algorithm (the Method of Successive Averages) that solves this problem. Moreover, we show how this algorithm can be implemented in a distributed fashion by slight modifications to current internet protocols.

This is a joint work with Roberto Cominetti (Universidad de Chile).

Tuesday, March 15, 2011

NSF Award: Complex Integer Rounding Cuts for Mixed Integer Programming

Investigator(s): Kiavash Kianfar kianfar@tamu.edu (Principal Investigator) 

The research objective of this award is to create and evaluate new cutting plane methods for mixed integer programming using a new approach here called Complex Integer Rounding. Cutting planes are a crucial part of the algorithms used for solving mixed integer programming problems. Mixed integer programming is an optimization framework with numerous applications in science, engineering, and business. The proposed approach consists of deriving novel forms of three major elements and making innovative use of them: one or multiple facets of base polyhedra and/or one or multiple sub-additive functions are utilized within a relaxation/combination procedure which is applied on the original constraints and a series of intermediate inequalities to eventually obtain a cut generator function. Both single-constraint and multi-constraint cuts will be considered and facet-defining properties of the developed cuts will be investigated. The customization of the cuts to a collection of important special-structure problems will be studied. In order to evaluate performance of the developed cuts, efficient separation methods will be developed and comprehensive computational experiments will be performed.

Mixed integer programming is a powerful and flexible optimization paradigm with ubiquitous applications in science, engineering, and business ranging from flight crew scheduling to molecular biology. Yet solving mixed integer programs is generally very difficult. Through introduction of new strong cutting planes, this research, if successful, will result in faster solution algorithms for mixed integer programming and will increase the size of the problems that we are able to solve. Consequently, it will have a significant impact on all aforementioned areas. Moreover, the methodological developments in this research open doors to several new research avenues regarding cutting plane methods.

Monday, March 14, 2011

THE REPRESENTATION OF INTEGERS BY QUADRATIC FORMS

Spring 2011 Everett Pitcher Lectures

MANJUL BHARGAVA
Professor of Mathematics
Princeton University        

Monday, March 14, 2011
THE REPRESENTATION OF INTEGERS BY QUADRATIC FORMS
Lewis Lab 270 - 7:00pm Lobby reception at 6:30pm
The classical Four squares theorem of Lagrange asserts that any positive integer can be expressed as the sum of four squares; that is, the quadratic form a^2 + b^2 + c^2 + d^2 represents all (positive) integers. When does a general quadratic form represent all integers? When does it represent all odd integers? When does it represent all primes? We will show how all these questions turn out to have very simple and surprising answers.
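In symbols, the four squares theorem states:

\[
\forall\, n \in \mathbb{Z}_{>0} \;\; \exists\, a, b, c, d \in \mathbb{Z} \; : \; n = a^2 + b^2 + c^2 + d^2.
\]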

Friday, March 11, 2011

NSF Award Stochastic Mixed-Integer Optimization: Polyhedral Theory, Large-Scale Algorithms and Computations

Recently, the National Science Foundation funded a research project on stochastic mixed-integer optimization, focusing on polyhedral theory, large-scale algorithms, and computation. The investigators are Suvrajeet Sen and Simge Kucukyavuz. The abstract of the award is quoted below.

This award focuses on a class of constrained optimization problems in which data are uncertain, and some decisions need to be made before uncertainty about the data clears (first-stage). The remaining decisions are made once the data becomes more reliable (second-stage). In addition, these problems involve both discrete and continuous decisions, and hence are referred to as Two-stage Stochastic Mixed-Integer Programs (SMIP). The goal of this project is to integrate recently developed integer programming tools based on multi-term disjunctions, and stochastic programming ideas based on decomposition and coordination. These tools will provide the basis for sequential convexification of SMIP problems, and will allow their solution via a finite sequence of approximations. These algorithms will be implemented and rigorously tested on a wide variety of instances.

If successful, this project will allow engineers to add greater intelligence to software that is used in engineering design, contingency planning in manufacturing, military operations planning, and many more. For these and other real-world engineering problems, the exact setting of future operations is impossible to predict accurately, and SMIP provides a formal basis to cope with the uncertainty. While these issues are ubiquitous in most operations, there is a serious paucity of methodologies that can solve such computational problems. The widespread applicability of the proposed methodology is expected to transform the way in which discrete decisions are made in an uncertain environment. Moreover, results from this project will build a unifying theory for discrete and continuous optimization under uncertainty.

Thursday, March 10, 2011

Generating block file for MILPBlock application using Python in DIP

MILPBlock is a prototype of a generic black-box solver for block-diagonal mixed integer linear programming problems that fully automates the branch-and-price-and-cut algorithm without additional user input. The user only needs to provide the mps file of the problem and the block file, which indicates how many blocks are in the model, which rows belong to each block, and so on.

For a large-scale problem, writing the block file by hand is tedious, so we would rather generate it with a scripting language, which is easy and fast. Python is the right choice.

Here is an example that uses Python to create the block file for a bin packing problem.

The objective function and the constraints are described in the AMPL language as follows:

param numItems;
param Capacity;

set Items = 1 .. numItems;
set Bins  = 1 .. (numItems+1);

param Weight{Items};

var x{i in Items, j in Bins } binary ;

# minimize the number of bins

minimize num_Of_Bins : sum{i in Items} x[i,numItems+1];

subject to condition1 {i in Items}: sum{j in 1 .. numItems}x[i,j] = 1 ;

subject to condition2 {j in Items, i in Items}: x[i,j]<=x[j,numItems+1] ;

subject to condition3 {j in Items}: sum{i in Items}x[i,j]*Weight[i] <= Capacity*x[j,numItems+1];


Suppose we have 50 items and hence 50 blocks; each block has 51 rows corresponding to a fixed j (the 50 condition2 rows plus that j's condition3 row).

The Python code to create a block file in the 'list' format is shown below.

*************************************************************************
f = open('test.block', 'w')
# Row numbering in the mps file: condition1 occupies rows 0..49,
# condition2 occupies rows 50..2549 (50 rows per block), and
# condition3 occupies rows 2550..2599.
for i in range(50):
    # block header: block number and the number of rows in the block
    f.write(str(i) + ' ' + str(51) + '\n')
    # the 50 condition2 rows belonging to block i
    for j in range(50):
        f.write(str(50 + 50*i + j) + '\n')
    # the single condition3 row belonging to block i
    f.write(str(2550 + i) + '\n')
f.close()
*************************************************************************
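As a sanity check, block 0 in test.block gets the header line "0 51" followed by row indices 50 through 99 (its condition2 rows) and 2550 (its condition3 row).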

Wednesday, March 9, 2011

dump/restore function in AMPL by Bob Fourer

Solving a mathematical programming problem can be quite time consuming, so we may want to store the solution in a file and restore it the next time we load the model and data. Happily, AMPL is capable of doing that:

If you give the command of the form

  write bfilename;

before the solve command, then the solution file that the solver sends back
to AMPL will be a permanent file "filename.sol" rather than a temporary
file.  Then if you leave AMPL and later return, you can read back the model
and data files -- and whatever else you need to do to return the current
problem to the state it was in at the previous solve -- and then read back
the previous solution with the command

  solution filename.sol;

There is not a facility to dump the entire state of AMPL at a certain point,
however.

Bob Fourer
4er@ampl.com
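A minimal session sketch of the workflow Bob describes (the model, data, and file names are illustrative):

ampl: model diet.mod; data diet.dat;
ampl: write bdiet;        # the solver will leave a permanent diet.sol
ampl: solve;
ampl: quit;

# later, in a fresh session:
ampl: model diet.mod; data diet.dat;
ampl: solution diet.sol;  # restore the stored solution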

Tuesday, March 8, 2011

[DOS seminar Mar 10 12:00pm] Bad semidefinite programs: they all look the same

Time: Thursday, March 10, 2011 (12:00pm)
Title: Bad semidefinite programs: they all look the same
Speaker: Gabor Pataki, University of North Carolina at Chapel Hill
Location: ISyE executive classroom

Abstract:

Semidefinite Programming (SDP) is the problem of optimizing a linear objective function of a symmetric matrix variable, with the requirement that the variable also be positive semidefinite. SDP is vastly more general than LP, with applications ranging from engineering to combinatorial optimization, while it is still efficiently solvable.

Duality theory is a central concept in SDP, just like it is in linear programming, since in optimization algorithms a dual solution serves as a certificate of optimality. However, in SDP, unlike in LP, rather fascinating ``pathological'' phenomena occur: nonattainment of the optimal value, and positive duality gaps between the primal and dual problems.

This research was motivated by the curious similarity of pathological SDP instances appearing in the literature. We find an exact characterization of semidefinite systems that are badly behaved from the viewpoint of duality, i.e., we show that -- surprisingly -- ``all bad SDPs look the same''. We also prove an ``excluded minor'' type result: all badly behaved semidefinite systems can be reduced (in a well-defined sense) to a minimal such system with just one variable and two-by-two matrices. Our characterizations imply that recognizing badly behaved semidefinite systems is in NP and co-NP in the real number model of computing.

We prove analogous results for second order conic programs: in fact, the main results are derived from a fairly general result for conic linear programs. While the main tool we use is convex analysis, the results have a combinatorial flavor.

The URL of the short version of the paper is: http://www.optimization-online.org/DB_HTML/2010/11/2809.html

Monday, March 7, 2011

Mixed Integer Nonlinear Programming Solvers

While my current research is not directly on mixed integer nonlinear programming (MINLP), which is a hot topic these days, I have felt an urge to learn about, and a need to develop, fast algorithms for this kind of problem. Last week, one of my friends from Electrical Engineering asked me which solvers can handle mixed integer nonlinear programming problems. I told him several options:

1. Bonmin can solve MINLPs whose objective and constraint functions are convex (for nonconvex problems it acts only as a heuristic);

2. Couenne is similar to Bonmin, but extends to nonconvex problems, which it can solve to global optimality;

3. KNITRO, designed for nonlinear optimization problems, can also solve convex mixed-integer nonlinear problems.

The first two solvers are open-source projects hosted by COIN-OR; KNITRO is a commercial product.

Sunday, March 6, 2011

Post-doctoral position at Sandia National Laboratories

The Scalable Algorithms Department at Sandia National Laboratories has an open post-doctoral position.  A brief description is below.  All details and application forms can be found at the following link:


Postdoctoral Researcher: HPC Scalable Algorithms R&D
Sandia Corporation
Albuquerque, NM

You may also contact maherou@sandia.gov directly if you have questions.

Job Details
Algorithms R&D for Scalable Multicore Computers:
Candidates are sought with a strong background in high-performance numerical methods for PDE and particle methods, including parallel multilevel algorithms, algorithms for multicore architectures, and object-oriented scientific software engineering. Outstanding applicants in related areas will also be considered. This postdoctoral position is for motivated and enthusiastic individuals with excellent communication skills who have the ability to work in a collaborative research environment. Successful applicants will be expected to develop new ideas, publish in journals and conferences, and present at national and international venues.

Required
-- Applicants must have (or soon have) a Ph.D. in mathematics, computer science, engineering or related fields with a record of academic excellence
-- Programming experience (C++ required; Fortran & Matlab desired) with knowledge of parallel programming such as MPI, OpenMP, and/or threads.
-- Communication skills appropriate for participating in multi-disciplinary teams of mathematicians, engineers and computer scientists.
-- Research experience in your field of expertise as evidenced by presentations, technical publications, released software, and/or work with applications.

Desired
-- Experience with CUDA, OpenCL, Intel Threading Building Blocks, or other advanced parallel programming paradigms.
-- Background in parallel implicit and explicit mesh based PDE solution approaches.
-- Background in parallel particle methods such as Molecular Dynamics and Direct Simulation Monte Carlo.
-- Interest and experience in the use of advanced object-oriented software engineering practices and processes.
-- Experience in high-performance computing on distributed, parallel and/or other specialized architectures.
-- Experience with Trilinos software.
-- 3.5 GPA or higher

vpath && srcdir

If you want to build the project in a folder other than the one containing the original code:

1. mkdir build // create a new folder called build

2. run ../configure from inside the build folder

3. this creates config.status and a Makefile

4. but when you type "make", it may produce errors

5. in that case, change the VPATH and srcdir variables in the Makefile to indicate where the source code is located; in our case, it is in the code folder

6. typing make again may produce a new error:
fatal error: opening dependency file .deps/hello.Tpo: Permission denied

7. run sudo make and the build proceeds successfully

coral

rsync rsync.samba.org/

aptitude

For those of you who like a little more power behind your tools, you will certainly appreciate the Aptitude front-end for the apt package management system. Aptitude is based on the ncurses terminal library, so you know it's a pseudo-hybrid between console and GUI. Aptitude has a powerful search system as well as an outstanding ncurses-based menu system that allows you to move around selections with the tab key and the arrow keys.

dselect

dselect is one of the primary user interfaces for managing packages on a Debian system. At the dselect main menu, the system administrator can:
 - Update the list of available package versions,
 - View the status of installed and available packages,
 - Alter package selections and manage dependencies,
 - Install new packages or upgrade to newer versions.

uname -a 

logout 

df -h

/dev/sda2  /
/dev/sda7  /scratch
/dev/sda5  /user
/dev/sda6  /var

less
Less is a program similar to more (1), but which allows backward movement in the file as well as forward movement. Also, less does not have to read the entire input file before starting, so with large input files
it starts up faster than text editors like vi (1). Less uses termcap (or terminfo on some systems), so it can run on a variety of terminals. There is even limited support for hardcopy terminals. (On a hardcopy terminal, lines which should be printed at the top of the screen are prefixed with a caret.)
less xorg.conf

Ubuntu LTS long term support

ssh polyps p3 

beluga p1

sudo shutdown -h now

agpgart-serverworks can not determine aperture size

open science grid
*********************************************************************
The df command is used to show the amount of disk space that is free on file systems. In the examples, df is first called with no arguments. The default action is to display used and free file space in blocks. In this particular case, the block size is 1024 bytes, as indicated in the output.
The first column shows the name of the disk partition as it appears in the /dev directory. Subsequent columns show total space, blocks allocated, and blocks available. The capacity column indicates the amount used as a percentage of total file system capacity.
The final column shows the mount point of the file system. This is the directory where the file system is mounted within the file system tree. Note that the root partition will always show a mount point of /. Other file systems can be mounted in any directory of a previously mounted file system. In the example, there are two other file systems, the first mounted as /home and the second mounted as /p4.
In the second example, df is invoked with the -i option. This option instructs df to display information about inodes rather than file blocks. Even though you think of directory entries as pointers to files, they are just a convenience for humans. An inode is what the Linux file system uses to identify each file. When a file system is created (using the mkfs command), it is created with a fixed number of inodes. If all these inodes become used, the file system cannot store any more files even though there may be free disk space. The df -i command can be used to check for such a problem.
The df command allows you to select which file systems to display. See the man page for details on this capability.
www.linuxjournal.com/article/2747
****************************************************************************

fdisk -l

NAME

fdisk - Partition table manipulator for Linux

SYNOPSIS

fdisk [-u] [-b sectorsize] [-C cyls] [-H heads] [-S sects] device
fdisk -l [-u] [device ...]
fdisk -s partition ...
fdisk -v

DESCRIPTION

Hard disks can be divided into one or more logical disks called partitions. This division is described in the partition table found in sector 0 of the disk. In the BSD world one talks about `disk slices' and a `disklabel'.
Linux needs at least one partition, namely for its root file system. It can use swap files and/or swap partitions, but the latter are more efficient. So, usually one will want a second Linux partition dedicated as swap partition. On Intel compatible hardware, the BIOS that boots the system can often only access the first 1024 cylinders of the disk. For this reason people with large disks often create a third partition, just a few MB large, typically mounted on /boot, to store the kernel image and a few auxiliary files needed at boot time, so as to make sure that this stuff is accessible to the BIOS. There may be reasons of security, ease of administration and backup, or testing, to use more than the minimum number of partitions.

**********************************************************
 /etc/init.d/gdm stop

/etc/init.d/gdm start


NIS & NFS & AFS

The Network Information Service or NIS (originally called Yellow Pages or YP) consists of a client–server directory service protocol for distributing system configuration data such as user and host names between computers on a computer network. Sun Microsystems developed NIS and licenses this technology to virtually all other Unix vendors.
Network File System (NFS) is a network file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The Network File System is an open standard defined in RFCs, allowing anyone to implement the protocol.
The Andrew File System (AFS) is a distributed networked file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. It was developed by Carnegie Mellon University as part of the Andrew Project. It is named after Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.

coral these days

Over the past few days I finally finished installing Ubuntu LTS (long-term support) 10.04 on the machine.
Here is a summary of the key points.

1. Use df -h to inspect the partitions: their mount points and sizes.
2. Two primary partitions, / and swap; the rest are logical partitions.
3. Install NIS and NFS.
NIS:
First install nis, then run
dpkg-reconfigure nis
Enter Domainname as: fishy.domain

Edit /etc/hosts to include: <server-ip-address> server-name
128.180.35.235 shark.ie.lehigh.edu shark

Edit /etc/adduser.conf to change to: FIRST_UID=6000
so that new uids on the client dont conflict with nis-ids.
Edit /etc/passwd to add as the last line:
+::::::
Edit /etc/group to add as the last line:
+:::
Edit /etc/shadow to add as the last line:
+::::::::
Run ypcat passwd to see if NIS is functional.
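Two more checks that can help when debugging (ypwhich and ypmatch are standard NIS client tools; <user> is a placeholder):

ypwhich                  (shows which NIS server the client is bound to)
ypmatch <user> passwd    (looks up a single user in the NIS passwd map)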
NFS:

First move the /home directory to something like /localhome. Make the necessary changes in /etc/passwd so that locally created users are not affected.

In /etc/passwd, change beluga's home directory from /home to /localhome.
Edit /etc/auto.master to include:
/home /etc/auto.home --timeout=600

Edit /etc/auto.home to include:
* -rw,intr,hard,tcp shark.ie.lehigh.edu:/home/&
Now do umount /home, and then /etc/init.d/autofs start.
Install autofs with apt-get install autofs.
service autofs start/stop
tip:
apt-get remove nfs-common
apt-get install nfs-common
fstab -> file system table
shark.ie.lehigh.edu:/home  /home  nfs  rw,intr,hard,tcp  0  0
which means: mount shark.ie.lehigh.edu:/home at /home in the local file system. (Note that the mount options must be a single comma-separated field with no spaces.)
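Before relying on fstab or autofs, the export can be tested with a one-off manual mount (a sketch: the options mirror the fstab line above, and /mnt is just a scratch mount point):

mount -t nfs -o rw,intr,hard,tcp shark.ie.lehigh.edu:/home /mnt
ls /mnt
umount /mnt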
The next step is to install the AFS software.

coral software installation

1. First, install AFS.

Debian provides a way in which modules can be compiled from source and installed as Debian Packages (like RPMs) automatically, without manual compilation, editing of configuration files etc. Here is a summary of how to do this:
  1. apt-get install openafs-client module-assistant
  2. module-assistant update
  3. module-assistant prepare openafs-modules
  4. module-assistant auto-build openafs-modules
  5. This will download the source files and compile the modules for the running kernel, producing Debian packages which can then be installed using dpkg (see the sketch after this list).
  6. The configuration files are the same, but are located in /etc/openafs instead of /usr/vice/.
  7. Fedora doesn't seem to have a process for automating the compilation of kernel modules. Switch to Debian now.
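A sketch of the final install step from item 5 (module-assistant normally leaves the built packages under /usr/src; the exact filename depends on your kernel version):

dpkg -i /usr/src/openafs-modules-*.deb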

Installation is complete. Now we will configure the client to access AFS directories provided by Lehigh.

  1. Edit /usr/vice/etc/CellServDB. It should only have:

>cc.lehigh.edu          #lehigh university
128.180.39.25           #fs2.cc.lehigh.edu
128.180.2.13            #fs3.cc.lehigh.edu
128.180.2.10            #fs4.cc.lehigh.edu
128.180.2.11            #fs5.cc.lehigh.edu
  2. vi /usr/vice/etc/ThisCell
    It should only have one line:
    cc.lehigh.edu
  3. Check that the directory /afs exists. It should have been created when the openafs packages were installed.
  4. To start the client do
    /etc/init.d/openafs-client start
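To check that the client came up properly (fs is the OpenAFS command-line utility):

ls /afs/cc.lehigh.edu
fs checkservers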
**********************************************************************************************************************************

The content above comes from the coral wiki.

Below are some additional notes.

1. Under /afs there is only one directory: cc.lehigh.edu

2. The kernel version is i386_linux26

3. In /usr/local/bin: ln -s /afs/cc.lehigh.edu/com/i386_linux26/

4. cd cc.lehigh.edu/common/i386_linux26/mathematics

5. exim4 is the mail server

6. ilm is the license manager

7. Cannot resolve the trout hostname.
Solution: edit the hostname file (emacs hostname) and set it to trout.ie.lehigh.edu

8. rsync -a barrucuda.ie.lehigh.edu:/usr/local/ilm ......

9. rsync -a shark.ie.lehigh.edu:~/.ssh/*

10. The difference between bash.bashrc and .bashrc: bash.bashrc is the system-wide file,
while .bashrc is per user and is sourced when each user logs in

11. option solver cplex12

12. /usr/local/cplex/21/bin/x86-.....

13. Don't forget to add the file paths to PATH and export them

wordpress@coral

coral.ie.lehigh.edu/newcoral/seminar/wp-login

admin

welcome****


srv/www/newcoral/wp-content/files/coralseminar

How to copy one folder from one directory to another:

cp -R [source directory] [destination directory]
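For example (the paths are only illustrative):

cp -R ~/projects/coral ~/backup/coral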

coral condor prep

1. Install condor with aptitude.

2. In the condor_config file,
set condor_host=shark.ie.lehigh.edu
sudo dpkg --purge condor

3. sudo updatedb (update the locate database)

locate condor_config

4. /etc/init.d/condor restart

5. scp shark.ie.lehigh.edu: .......

emacs hot key

1. ctrl+a: go to the beginning of the line
ctrl+e: go to the end of the line

ctrl+p: move up one line (previous line)
ctrl+n: move down one line (next line)

ctrl+b: move back one character
ctrl+f: move forward one character

alt+b: move back one word
alt+f: move forward one word

2. ctrl+y: yank back the last thing killed
ctrl+k: kill from point to the end of the line

Used together, these two make a quick cut-and-paste.

3. select another buffer: ctrl-x b
list all buffers: ctrl-x ctrl-b
kill a buffer: ctrl-x k

4. ctrl-x ctrl-w: write the buffer to a specified file: like "save as"
ctrl-x ctrl-v: replace this file with the file you really want

ctrl-x ctrl-f: read a file into emacs
ctrl-x ctrl-s: save a file back to disk

some linux command tips

1.

To ignore upper/lower case distinctions, use the -i option, i.e. type
% grep -i science science.txt 
2.  wc command, short for word count.
3.
Some of the other options of grep are:
-v display those lines that do NOT match
-n precede each matching line with the line number
-c print only the total count of matched lines

4. In shell wildcard patterns, the character ? will match exactly one character.

5. whatis
gives a one-line description of a command, but omits any information about options etc.

6. To background a process, type an & at the end of the command line. For example, the command sleep waits a given number of seconds before continuing
sleep 10
sleep 10 &

7. jobs lists your background and suspended jobs.
8. fg. To restart (foreground) a suspended process, type
% fg %jobnumber
9. quota
To check your current quota and how much of it you have used, type
% quota -v
10. du
The du command outputs the number of kilobytes used by each subdirectory.
du -s *

11. zcat
zcat will read gzipped files without needing to uncompress them first.
12. file
file classifies the named files according to the type of data they contain, for example ascii (text), pictures, compressed data, etc. To report on all files in your home directory, type
file *
13. find
This searches through the directories for files and directories with a given name, date, size, or any other attribute you care to specify.
To search for all files with the extension .txt, starting at the current directory (.) and working through all sub-directories, then printing the name of the file to the screen, type
find . -name "*.txt" -print
To find files over 1Mb in size, and display the result as a long listing, type
% find . -size +1M -ls
14. history
The C shell keeps an ordered list of all the commands that you have entered
% history (show command history list)
If you are using the C shell, you can use the exclamation character (!) to recall commands easily.
% !! (recall last command)
% !-3 (recall third most recent command)
% !5 (recall 5th command in list)
% !grep (recall last command starting with grep)
15.
You can increase the size of the history buffer by typing
% set history=100

nohup linux command

When working with the UNIX operating system, there will be times when you want to run commands that are immune to logouts or unplanned login session terminations. This is especially true for UNIX system administrators. The UNIX command for handling this job is nohup (no hangup).

Normally when you log out, or your session terminates unexpectedly, the system will kill all processes you have started.  Starting a command with nohup counters this by arranging for all stopped, running, and background jobs to ignore the SIGHUP signal.

The syntax for nohup is:

nohup command [arguments]
You may optionally add an ampersand to the end of the command line to run the job in the background:

nohup command [arguments] &
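For example, to run a long job that survives logout and capture its output (the script name is hypothetical; the redirect is Bourne-shell syntax):

nohup ./long_job.sh > long_job.log 2>&1 &

Without an explicit redirect, nohup appends standard output (and standard error) to a file named nohup.out in the current directory.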

coral update

1. touch

    Change file access and modification time.
2. you can rsync all the folders to the place using :, :, :, :,
3. To do an Ubuntu upgrade on the command line:
  do-release-upgrade
4. ssh add root to dolphin.
               .ssh
5. To invoke Gurobi, we need to type gurobi.sh.

make debugging

make -rd

will provide additional debugging information: -d prints make's debugging output, and -r eliminates the built-in implicit rules so they do not clutter the trace.
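Since the -d output can run to thousands of lines, it is usually easier to page through it or save it, e.g. (Bourne-shell syntax):

make -rd 2>&1 | less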

linux command to obtain cpu information

cat /proc/cpuinfo | grep processor | wc -l

This counts the number of logical processors (cores) in the system.
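Two equivalent one-liners for the same count (nproc is part of GNU coreutils and may be missing on very old systems):

grep -c ^processor /proc/cpuinfo
nproc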

less /proc/cpuinfo

This shows the full CPU information for the machine.

quick commands for using emacs for latex

C-c C-v : Quick View

C-c ` : Next Error

C-c ; : comment or uncomment region

C-c C-e : insert begin end pair

HPC@lehigh

matlab -nojvm -nodisplay
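These flags start MATLAB without the Java VM and without the graphical desktop, which suits batch jobs on the cluster. A typical batch invocation might look like this (myscript is a hypothetical script name):

matlab -nojvm -nodisplay -r "myscript; exit"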

Coral Update

1. OSG: Open Science Grid

2. Tiger auditing report
http://www.nongnu.org/tiger/

3. root /.ssh/known_hosts
.grbd

4. rsync

5. piranha and stingray are both 64-bit

6. rc.local
goto-line

7.  .htaccess

************************************************************************************

rounding in AMPL

precision(x,n) rounds x to n significant digits
round(x,n) rounds x to n decimal places

How to Model x[k,i] != x[k,j] in AMPL

> > subject to riga {k in ELEMENTI,i in ELEMENTI,j in ELEMENTI: j != i} :
> > x[k,i] != x[k,j] ;

This is not correct

>
> The != is a logical relational operator, not a constraint relation.
> Put another way, constraints in a math program are limited to =, <=
> and >= (unless the model is going to a constraint solver, but I don't
> think AMPL supports constraint solvers yet).  So AMPL saw this as a
> logical constraint, tried to convert it to an indicator constraint (if
> logical condition then algebraic relation), failed and burped up the
> error message.
>
> Since x is integer in {0, ..., n}, you can enforce x[k, i] != x[k, j]
> by introducing a binary variable z[k, i, j] and adding constraints
>
> x[k, i] <= x[k, j] - 1 + (n + 1)*z[k, i, j]
> x[k, j] <= x[k, i] - 1 + (n + 1)*(1 - z[k, i, j])
>

AMPL:Tutorial

Installing AMPL

Windows users:
  1. Download the package amplcml.zip (this link is from the AMPL home page).
  2. Unzip the file amplcml.zip (double-click it).
  3. The unzipped amplcml/ folder contains, among others:
    • sw.exe: a command line interface;
    • ampl.exe: the true AMPL modeler;
    • cplex.exe: a solver for Linear and Integer Programming problems;
    • minos.exe: a solver for Nonlinear Programming problems.
  4. Create a new folder C:\Programs\AMPL (you may instead create the AMPL folder on the desktop, depending on your preference or if C:\Programs is read-only), and move all the files (the four above AND all the others) into the newly created AMPL folder.
  5. For your convenience, create a link to sw.exe and place it on your desktop.
  6. Double-click on sw.exe (either the link or the real file).
  7. A window appears with a prompt "sw:". Type "ampl". The AMPL prompt appears.
You can now enter your optimization model. Note: When using AMPL, the working directory is C:\Programs\AMPL or wherever you placed the sw.exe file. This means that, if you are using model files, you should place ALL these files in the same folder where sw.exe is located, otherwise you would have to specify the full path (e.g. model "C:\Documents and settings\pietro\Desktop\tin-can.mod").

Using AMPL

You can also create model files and data files with your favorite text editor (Notepad, WinEDT, etc.) and use the ampl prompt simply to specify model and data before solving the problem. Examples:
Notice: when you click on these .mod files, Windows may assume they are music files. You may want to right-click on them and select "Save link as..." instead. In order to solve these models, start AMPL and type, at the prompt:
 ampl: model tin-can.mod;
Then set the proper solver for each problem (Minos for tin-can.mod and all Nonlinear Programming problems, Cplex for all Linear and Integer Programming problems).
 ampl: option solver minos;
(notice the ";" at the end of each command). Finally, give the "solve;" command to solve it:
 ampl: solve;
At this point, AMPL calls the Minos solver, and when Minos is done you will see again the AMPL prompt. At this point, if the solver did not encounter errors, you should be able to visualize the value of the variables:
 ampl: display radius, height;
http://coral.ie.lehigh.edu/~belotti/teaching/ampl/ampl.html

ampl: include branch_and_bound.run
reset

Research Associate Position at University of Minnesota

The Supply Chain and Operations Research Laboratory (www.isye.umn.edu/labs/scorlab) invites applications for an immediate opening for a full-time research associate position. The position is to provide support for the Scientific Registry of Transplant Recipients (SRTR) contract. Salary is highly competitive.

This position is initially for a period of three months beginning July 1, 2010, and renewable in 6-month increments subject to satisfactory performance and availability of funds. The research associate will report to Professor Diwakar Gupta, Director of SCORLAB and Professor in the Industrial and Systems Engineering program. Candidates must have strong analytical and programming skills needed to run, maintain, update, and create the next generation of simulation models of the U.S. solid organ transplant allocation system. Candidates with prior experience programming in Object Pascal using Borland's Delphi development environment will be given preference, but other competencies including C++ and open-source package integration are also desirable. Duties of the research associate will include development of decision support tools based on advanced mathematical models and algorithms, analyzing and summarizing large data sets, organizing and summarizing simulation results, and communicating the results to a diverse mix of audiences. That is, candidates must be familiar with techniques used in statistical modeling, mathematical optimization, and quantitative data analysis. The project requires the ability to work in a team and excellent written and oral communication skills.

Required/Preferred Qualifications
Ideal candidates are expected to hold an advanced degree (PhD preferred) in either computer science or operations research, with background in the other field.

Application Instructions
Please apply online via the Employment System at
https://employment.umn.edu/applicants/Central?quickFind=92453 Applicants shall submit a cover letter, resume/curriculum vitae, proof of highest degree (attach as Additional Document 1), statement of research (attach as Additional Document 2), and complete contact information of three references with this online application. Review of applications will begin immediately and the position will remain open until filled.

The University of Minnesota is committed to the policy that all persons shall have equal access to its programs, facilities, and employment without regard to race, color, creed, religion, national origin, sex, age, marital status, disability, public assistance status, veteran status, or sexual orientation.

Research Positions in Vehicle Routing, Scheduling, OR, CP

National ICT Australia (NICTA) is Australia's ICT Research Center of Excellence.
We currently have a position for a researcher or a senior researcher to join the expanding Intelligent Fleet Logistics project, in Canberra.
Successful candidates will hold a PhD in computer science, mathematics, operations research, or a related discipline. They will have an interest in researching one or more of the following areas:

Routing and scheduling algorithms for transport optimisation
Mixed Integer Programming, Linear Programming
Constraint programming
Local Search and Meta-heuristics
Robustness in routing and scheduling
The ability to contribute new ideas and experience from previous projects, together with professionalism and good communication skills, is sought. Software development experience in C/C++ is required.

Appointments will be initially for three years with possibility of extension.
Closing date: February 21, 2011.
Applications: Please speak to Prof Toby Walsh for more details, and visit NICTA careers to apply online.