Operations Research Reports

 Continuous Optimization
ORR 2018-01 S. Asadi, H. Mansouri, Zs. Darvay, M. Zangiabadi, N. Mahdavi-Amiri, Large-neighborhood infeasible predictor-corrector algorithm for P-horizontal linear complementarity problems over Cartesian product of symmetric cones. May 2018. [pdf]

We present an infeasible interior-point predictor-corrector algorithm, based on a large neighborhood of the central path, for the general P-horizontal linear complementarity problem over the Cartesian product of symmetric cones. Polynomial convergence is shown for the commutative class of search directions. We specialize our algorithm further by prescribing some scaling elements, and we also consider the case of feasible starting points. We believe this to be the first interior-point method based on large neighborhoods for P-horizontal linear complementarity problems over the Cartesian product of symmetric cones.

ORR 2016-04 Miklós Újvári, Norm-sum optimization problems over hyperplanes in word spaces. October 2016. [pdf]

In the paper, after preliminary studies in word spaces (construction of bases in hyperplanes, local optimization for norm-sum problems), we present an algorithmic conjecture which, if true, provides a practically efficient solution for the long-standing Hadamard conjecture on orthogonal matrices.

ORR 2016-03 Petra-Renáta Takács, Zsolt Darvay, Infeasible interior-point method for symmetric optimization using a positive-asymptotic barrier. April 2016. [pdf]

We propose a new primal-dual infeasible interior-point method for symmetric optimization using Euclidean Jordan algebras. Different kinds of interior-point methods can be obtained by using search directions based on kernel functions. Some search directions can also be determined by applying an algebraically equivalent transformation on the centering equation of the central path. Using this method we introduce a new search direction, which cannot be derived from a usual kernel function.
For this reason, we use the new notion of a positive-asymptotic kernel function, which induces the class of corresponding barriers. In general, the main iterations of infeasible interior-point methods are composed of one feasibility step and several centering steps. We prove that in our algorithm it is enough to take only one centering step per main iteration in order to obtain a well-defined algorithm. Moreover, we conclude that the algorithm finds a solution in polynomial time and has the same complexity as the currently best known infeasible interior-point methods.

ORR 2016-01 Zsolt Darvay, Petra-Renáta Takács, New interior-point algorithm for symmetric optimization based on a positive-asymptotic barrier function. April 2016. [pdf]

We define a new interior-point method (IPM) suitable for solving symmetric optimization (SO) problems. The proposed algorithm is based on a new search direction. In order to obtain this direction we apply the method of algebraically equivalent transformation on the centering equation of the central path. We prove that the associated barrier cannot be derived from a usual kernel function. Therefore, we introduce a new notion, the concept of the positive-asymptotic kernel function. We conclude that this algorithm solves the problem in polynomial time and has the same complexity as the best known IPMs for SO.

ORR 2015-02 Tibor Illés, Richárd Molnár-Szipai, Strongly polynomial primal monotonic build-up simplex algorithm for maximal flow problems. September 2015. [pdf]

The maximum flow problem (MFP) is a fundamental model in operations research. The network simplex algorithm is one of the most efficient solution methods for the MFP in practice. The theoretical properties of established pivot algorithms for the MFP are less understood.
Variants of the primal simplex and dual simplex methods for the MFP have been proven strongly polynomial, but no similar result exists for other pivot algorithms like the monotonic build-up or the criss-cross simplex algorithm. The monotonic build-up simplex algorithm (MBU SA) starts with a feasible solution and fixes dual feasibility one variable at a time, temporarily losing primal feasibility. In the case of maximum flow problems, the pivots in one such iteration are all dual degenerate, bar the last one. Using a labelling technique to break these ties, we show a variant that solves the maximum flow problem in $2|V||A|^2$ pivots.

ORR 2015-01 Miklós Újvári, Counterpart results in word spaces. June 2015. [pdf]

In this paper, after algebraic and geometric preliminaries, we present a Gram-Schmidt-type algorithmic conjecture which, if true, settles the long-standing Hadamard conjecture concerning the existence of orthogonal matrices with elements of the same absolute value.

ORR 2014-04 Tibor Illés, Lineáris optimalizálás elmélete és belsőpontos algoritmusai [Theory and interior point algorithms of linear optimization; in Hungarian]. August 2014. [pdf]

The appearance of interior point methods in linear programming was the result of a longer process. According to our present knowledge, the first noteworthy result is due to Frisch, who in 1955 gave a talk at the econometrics seminar of the University of Oslo on the applicability of the logarithmic barrier method to linear programming. He named his method the multiplex algorithm and published it in 1957. The other result, which likewise went unnoticed, is due to Dikin and was published in 1967. Dikin introduced the ellipsoid named after him, with the help of which he could approximate a linear programming problem of special structure and solve the approximation. Repeated application of his method yields the primal affine scaling interior point algorithm. The first interior point result that immediately caught the attention of the research community was Karmarkar's projective scaling interior point algorithm, published in 1984 in issue 4 of volume 4 of the journal Combinatorica. Interior point methods spread in linear and quadratic programming in the 1990s, and in the second half of the 1990s they were further developed for solving linear complementarity and semidefinite programming problems. The backbone of these lecture notes is formed by the lecture slides I have prepared over the past 15 years, together with material related to my seminar and conference talks and to my papers. These notes are connected to my earlier lecture notes entitled Lineáris optimalizálás elmélete és pivot algoritmusai and can be regarded as their continuation. The aim of the notes is to treat the most basic and most important topics among the theoretical and algorithmic aspects of interior point methods for linear programming. The notes consist of six chapters. In the first chapter we introduce the skew-symmetric self-dual problem. In connection with this problem we define the notion of Newton directions and give the way to compute them. We introduce two kinds of level sets, which play an important role in proving the existence and uniqueness of the central path. In the second chapter we examine the properties of the optimal solution set. We prove the existence of a strictly complementary solution and prove the Goldman-Tucker theorem, whose original proof dates from 1956. Sonnevend's theorem presents an important property of the analytic center. The chapter closes with the presentation of the (B, N) partition and its properties, and with the introduction of the condition number of the problem. The third chapter presents and elaborates the primal-dual variant of Dikin's algorithm published in 1967. Chapter 4 treats a very important and little known fact, the strongly polynomial rounding procedure, with the help of which an exact solution can be produced from a special epsilon-optimal solution, essentially by solving a system of linear equations. Chapter 5 answers the questions of how an initial interior point can be given for a skew-symmetric self-dual problem, and why it is sufficient to deal with problems of this type. In Chapter 6 we discuss in detail a full Newton step, primal-dual logarithmic penalty function interior point method. This algorithm generalizes the ideas formulated in Frisch's method. We discuss the polynomial iteration bound of the algorithm. The notes close with an epilogue. DOI: 10.13140/2.1.5086.4004

ORR 2014-03 Miklós Újvári, Bounds on the stability number of a graph via the inverse theta function. June 2014. [pdf] [ps]

In the paper we consider degree, spectral, and semidefinite bounds on the stability number of a graph. The bounds are obtained via reformulations and variants of the inverse theta function, a notion recently introduced by the author in a previous work.

ORR 2014-02 Adrienn Nagy, Finiteness of the criss-cross method for the linear programming problem when s-monotone index selection rules are applied. May 2014. [pdf] [ps]

The traditional criss-cross method for the linear programming problem is shown to be finite when s-monotone index selection rules are used.

ORR 2014-01 Tibor Illés, Adrienn Nagy, Finiteness of the quadratic primal simplex method when s-monotone index selection rules are applied. May 2014.

This paper considers the primal simplex method for linearly constrained convex quadratic programming problems. Finiteness of the algorithm is proven when s-monotone index selection rules are applied.
The proof is rather general: it shows that any index selection rule that relies only on the sign structure of the reduced costs / transformed right-hand side vector, and for which the traditional primal simplex method is finite, is necessarily finite as well for the primal simplex method for linearly constrained convex quadratic programming problems.

ORR 2013-03 Tibor Illés, Lineáris optimalizálás elmélete és pivot algoritmusai [Theory and pivot algorithms of linear optimization; in Hungarian]. October 2013. [pdf]

The backbone of these lecture notes is formed by the lecture slides I have prepared over the past 15 years, together with material related to my seminar and conference talks and to my papers. The material is built up completely from the basics, so I hope that an interested and persistent secondary school student who has already learned about solving systems of linear equations can be guided through the interesting and beautiful topics of linear programming. The aim of the introduction to linear algebra is to lay down the common starting foundations, to highlight the role of the pivot technique and of the pivot tableau as a model, and to prepare the material of the subsequent chapters. The second chapter deals with the solvability and solution of systems of linear equations. The first alternative theorem, the Rouché-Kronecker-Capelli lemma, appears in this chapter. My favourite is the so-called Farkas-Minty pivot tableau version due to Emil Klafszky, which summarizes the statement concisely and visually, once the reader has become familiar with the world of pivot tableaus. The name of the lemma also comes from Emil Klafszky, and it becomes really understandable when, in the next chapter, we formulate the signed version of the lemma. In connection with this lemma, Emil Klafszky spoke several times about the signed Farkas-Minty lemma being a nice common formulation of the Farkas lemma and Minty's colouring lemma, provided the signs are interpreted appropriately in the case of the Minty lemma. The results of the second chapter concerning the size of solutions were not among the topics of the notes as originally envisioned with Emil Klafszky and Tamás Terlaky. Their role is connected first to the analysis of the MBU-simplex algorithm found in the third chapter, which also appeared among the topics of the notes over the past years. The geometric part of the third chapter was moved forward in the new structure. Thus the third chapter already differs considerably from the original plans, even though one of the most interesting parts of the notes comes from Emil Klafszky. Chapters 4 and 5, although they treat classical results, still offer a presentation that is novel to this day, since the constructive proof of the strong duality theorem is based on the finiteness proof of the wonderfully simple Terlaky criss-cross algorithm. The finiteness proof of the criss-cross algorithm is simpler than Tamás Terlaky's original proof from the mid-1980s and is based on an observation of Emil Klafszky.

ORR 2013-02 Tibor Illés, Gábor Lovics, Approximation of the Whole Pareto-Optimal Set for the Vector Optimization Problem. April 2013.

In multiobjective optimization problems several objective functions have to be minimized simultaneously. In this work we present a new computational method for the numerical solution of the linearly constrained convex multiobjective optimization problem. We propose techniques to find a joint decreasing direction in both the unconstrained and the linearly constrained case. Based on these results we introduce a method that uses a subdivision technique to approximate the whole Pareto-optimal set of the linearly constrained convex multiobjective optimization problem. Finally, we illustrate the computations of our algorithm by solving the Markowitz model on real data.

ORR 2013-01 Zsolt Csizmadia, Tibor Illés, Adrienn Nagy, The s-Monotone Index Selection Rule for Criss-Cross Algorithms of Linear Complementarity Problems. March 2013.
Accepted to Acta Universitatis Sapientiae, Informatica. [pdf] [ps]

In this paper we introduce the s-monotone index selection rules for the well-known criss-cross method for solving the linear complementarity problem (LCP). Most LCP solution methods require a priori information about the properties of the input matrix. One of the most general matrix properties often required for finiteness of pivot algorithms (or polynomial complexity of interior point algorithms) is sufficiency. However, there is no known polynomial time method for checking the sufficiency of a matrix (the classification of column sufficiency of a matrix is co-NP-complete). Following the ideas of Fukuda, Namiki and Tamura, and using Existentially Polynomial (EP)-type theorems, a simple extension of the criss-cross algorithm is introduced for LCPs with general matrices. Computational results obtained with the extended version of the criss-cross algorithm for bimatrix games and for the Arrow-Debreu market equilibrium problem with different market sizes are presented.

ORR 2012-02 Tibor Illés, Adrienn Nagy, Computational aspects of simplex and MBU-simplex algorithms using different anti-cycling pivot rules. Oct 2012. [pdf] [ps]

Several index selection rules for simplex type algorithms for linear programming, like the Last-In-First-Out or the Most-Often-Selected-Variable rule, are not only theoretically finite but also provide significant flexibility in choosing a pivot element. Based on an implementation of the primal simplex and the monotonic build-up (MBU) simplex method, the practical benefit of the flexibility of these anti-cycling pivot rules is evaluated using public benchmark LP test sets. Our results also provide numerical evidence that the MBU-simplex algorithm is a viable alternative to the traditional simplex algorithm.

ORR 2012-01 Tibor Illés, Richárd Molnár-Szipai, On Strongly Polynomial Variants of the MBU-Simplex Algorithm for a Maximum Flow Problem with Nonzero Lower Bounds. May 2012.
[pdf] [ps]

In this paper we are concerned with maximum flow problems with nonzero lower bounds. We show a strongly polynomial variant of the MBU (monotonic build-up) simplex algorithm for finding a feasible flow without embedding the problem into a bigger network. One can then maximize the flow using the same data structure with already existing methods. At the start of our algorithm all infeasible variables are in the basis, and we proceed by making them feasible one by one, never letting an already feasible variable turn infeasible in the process. We show that our algorithm terminates after at most $n^2 m$ pivots, which makes it the first strongly polynomial pivot algorithm that solves the problem without transforming the network.

ORR 2011-02 Miklós Újvári, Multiplically independent word systems. Sept 2011. [pdf] [ps]

Tressler's Theorem states that the long-standing Hadamard conjecture (concerning the existence of n by n orthogonal matrices with elements of the same absolute value, for n = 4k, k = 1, 2, ...) will be settled if we find n - 2 pairwise orthogonal words in a hyperplane of words. In this paper we prove the counterpart of Tressler's Theorem: the existence of n - 2 multiplically independent words in a hyperplane of words.

ORR 2011-01 Miklós Újvári, Applications of the inverse theta number in stable set problems. Feb 2011. [pdf] [ps]

Recently the author introduced a semidefinite upper bound on the square of the stability number of a graph, the inverse theta number, which is proved here to be multiplicative with respect to the strong graph product, hence to be an upper bound for the square of the Shannon capacity of the graph. We also describe a heuristic algorithm for the stable set problem based on semidefinite programming, Cholesky factorization, and eigenvector computation.

ORR 2010-03 Miklós Újvári, Four new upper bounds for the stability number of a graph. Dec 2010. [pdf] [ps]

In 1979, L.
Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which has several remarkable properties (e.g. it is exact for perfect graphs). The definition of Lovász's theta number relies on the notion of an orthonormal representation of the graph and, dually, it can be obtained by optimizing over the theta body. In the paper we describe counterparts of theorems due to Wilf, Hoffman, and Lovász: three spectral and one semidefinite bound on the stability number, obtained via optimizing over a polyhedron of nonnegative matrices, respectively over the inverse theta body.

ORR 2010-02 Tibor Illés, Zsolt Csizmadia, The s-Monotone Index Selection Rules for Pivot Algorithms of Linear Programming. Nov 2010. [pdf]

In this paper we introduce the concept of the s-monotone index selection rule for linear programming problems. We show that several known anti-cycling pivot rules, like the minimal index, last-in-first-out and most-often-selected-variable pivot rules, are s-monotone index selection rules. Furthermore, we show a possible way to define new s-monotone pivot rules. We prove that several known algorithms, like the primal (dual) simplex and MBU-simplex algorithms and the criss-cross algorithm, are finite methods with s-monotone pivot rules. Therefore, one possible research direction in the area of pivot algorithms might be to find s-monotone index selection rules that have interesting properties either from a theoretical or from a computational (for example, larger flexibility in pivot selection) viewpoint.

ORR 2010-01 Miklós Újvári, Strengthening weak sandwich theorems in the presence of inconnectivity. Jan 2010. [pdf] [ps]

In the paper we consider degree, spectral, and semidefinite bounds on the stability and chromatic numbers of a graph: so-called weak sandwich theorems.
We examine the additivity properties of the bounds (the sum of two graphs is their disjoint union), and as an application we tighten the bounds in the weak sandwich theorems when the graph or its complement is not connected.

ORR 2009-01 Miklós Újvári, On closedness conditions, strong separation, and convex duality. Oct 2009. [pdf] [ps]

In the paper, we describe various applications of the closedness and duality theorems of [On a closedness theorem, Pure Math. Appl. Vol. 15 Num. 4] and [On Abrams' theorem, Pure Math. Appl.]. First, the strong separability of a polyhedron and a linear image of a convex set is characterized. Then it is shown how stability conditions (known from the generalized Fenchel-Rockafellar duality theory) can be reformulated as closedness conditions. Finally, we present a generalized Lagrange duality theorem for Lagrange programs described with cone-convex/cone-polyhedral mappings.

ORR 2009-02 Miklós Újvári, Reformulation of the Hadamard conjecture via Hurwitz-Radon word systems. Oct 2009. [pdf] [ps]

The Hadamard conjecture (unsolved since 1867) states that there exists an orthogonal matrix with entries of the same absolute value if and only if the order of the matrix is one, two, or divisible by four. In the paper we reformulate this conjecture using Hurwitz-Radon word systems. (A Hurwitz-Radon word system is a system of words formed from an alphabet which is a Klein group, such that the letterwise product of any two different words from the system contains an odd number of the letter b.) We also present algorithms for calculating maximal orthogonal word systems.

ORR 2007-03: Tibor Illés, Marianna Nagy, Tamás Terlaky, Polynomial interior point algorithms for general LCPs. Feb 2007. [pdf] [ps]

Linear Complementarity Problems (LCPs) belong to the class of NP-complete problems. Therefore we cannot expect a polynomial time solution method for LCPs without requiring some special property of the coefficient matrix.
Our aim is to construct interior point algorithms which, according to the duality theorem in EP form, either give a solution of the original problem or detect the lack of the P*(kappa) property (with arbitrarily large but a priori fixed kappa) and give a polynomial size certificate of it in polynomial time (depending on the parameter kappa, the initial interior point, and the dimension of the LCP). We give the general idea of the modification of interior point algorithms and present three concrete methods: an affine scaling, a long-step path-following, and a predictor-corrector interior point algorithm.

ORR 2007-02: Tibor Illés, Marianna Nagy, Tamás Terlaky, An EP theorem for dual linear complementarity problems. Feb 2007. [pdf] [ps]

The linear complementarity problem (LCP) belongs to the class of NP-complete problems. Therefore we cannot expect a polynomial time solution method for LCPs without requiring some special property of the matrix of the problem. We show that the dual LCP can be solved in polynomial time if the matrix is row sufficient; moreover, in this case all feasible solutions are complementary. Furthermore, we present an existentially polytime (EP) theorem for the dual LCP with an arbitrary matrix.

ORR 2007-01: Miklós Újvári, On Abrams' theorem. Feb 2007. [pdf] [ps]

Abrams' theorem describes a necessary and sufficient condition for the closedness of a linear image of an arbitrary set. Closedness conditions of this type play an important role in the theory of duality in convex programming. In this paper we present generalizations of Abrams' theorem, as well as Abrams-type theorems characterizing other properties (such as relative openness or polyhedrality) of linear images of convex sets.

ORR 2006-02: Miklós Újvári, New descriptions of the Lovász number and a Brooks-type theorem. Oct 2006. [pdf] [ps]

In the seminal work [1] L.
Lovász introduced the concept of an orthonormal representation of a graph, and also a related value, now popularly known as the Lovász number of the graph. One of the remarkable properties of the Lovász number is that it lies sandwiched between the stability number and the chromatic number of the complement. This fact is called the sandwich theorem. In this paper, using new descriptions of the Lovász number and linear algebraic lemmas, we give three proofs for a weaker version of the sandwich theorem. A Brooks-type theorem is also presented, concerning a simple lower bound for the stability number. [1] Lovász, L., On the Shannon capacity of a graph, IEEE Trans. Inf. Theory, IT-25 1 (1979) 1-7.

ORR 2006-01: Miklós Újvári, Simplex-type algorithm for optimizing a pseudolinear quadratic fractional function over a polytope. Oct 2006. [pdf] [ps]

Recently Cambini and Carosi described a characterization of the pseudolinearity of quadratic fractional functions. A reformulation of their result was given by Rapcsák. Using this reformulation, in this paper we describe an alternative proof of the Cambini-Carosi Theorem. Our proof is shorter than the proof given by Cambini and Carosi and less involved than the proof given by Rapcsák. As an application we present a simplex-type algorithm for optimizing a pseudolinear quadratic fractional function over a polytope. Our algorithm works in a more general setting than the convex simplex algorithm adapted to the above problem.

ORR 2005-04: Tibor Illés, Márton Makai, Zsuzsanna Vaik, Combinatorial Optimization Model for Railway Engine Assignment Problem. Dec 2005. [pdf] [ps]

This paper presents an experimental study for the Hungarian State Railway Company (MÁV). The engine assignment problem was solved at MÁV by their experts without using any explicit operations research tool. Furthermore, the operations research model was not known at the company.
The goal of our project was to introduce and solve an operations research model for the engine assignment problem on a real data set. For the engine assignment problem we use a combinatorial optimization model. At this stage of the research, the case of a single type of train pulled by a single type of engine is modeled and solved on real data. There are two regions in Hungary where the methodology described in this paper can be used, and MÁV has started to use it regularly. There is a need to generalize the model for multiple types of trains and multiple types of engines.

ORR 2005-03: Filiz Bilen, Zsolt Csizmadia, Tibor Illés, Anstreicher-Terlaky type monotonic simplex algorithms for linear feasibility problems. May 2005. Accepted to Optimization Methods & Software.

We define a variant of Anstreicher and Terlaky's (1994) monotonic build-up (MBU) simplex algorithm for linear feasibility problems. Under a nondegeneracy assumption weaker than the usual one, the complexity of the algorithm can be given by $m\Delta$, where $\Delta$ is a constant determined by the input data of the problem and $m$ is the number of constraints. The constant $\Delta$ cannot be bounded in general by a polynomial of the bit length of the input data. Realizing an interesting property of degeneracy led us to construct a new recursive procedure to handle degenerate problems. The essence of this procedure is as follows. If a degenerate pivot tableau is obtained, we define a smaller problem, restricting the pivot position to a smaller part of the tableau. On this smaller problem we follow the same principles as before to choose the pivot position. The smaller problem is either solved completely or a new degenerate subproblem is identified. If the subproblem is solved, we return to the starting larger problem, where either we can make a nondegenerate pivot or we detect that the problem is infeasible. It is easy to see that the maximum depth of the recursively embedded subproblems is smaller than $2m$.
Upper bounds for the complexity of the linear programming version of MBU and of the first phase of the simplex algorithm can be obtained similarly under the nondegeneracy assumption.

ORR 2005-02: Tamás Faluközy, Béla Vizvári, Az árutőzsde gabona szekciójának árvárakozásai a kukorica kereskedésének tükrében [Price expectations of the grain section of the commodity exchange in the light of maize trading; in Hungarian]. March 2005. [pdf] Supplementary tableaus: [pdf]

The aim of the paper is to examine the prices of the grain section of the commodity exchange from the viewpoint of the price expectations of the market participants. We seek answers to whether the existing models are present on this market, how well the computed price expectation parameters fit these models, and how they are related to the percentage change of the price.

ORR 2005-01: Miklós Újvári, On a closedness theorem. March 2005. [pdf] [ps]

In [1] several conditions are described which imply the closedness of linear images of convex sets. Here we combine two such theorems into a common generalization. We give a proof of the general theorem which is simpler than the proof obtained by combining the proofs of the two theorems. The paper is almost self-contained, as we give transparent proofs of the separation theorems used. We also present an application of our general closedness theorem in the theory of duality in convex programming. [2] R. T. Rockafellar, Convex analysis, Princeton University Press, Princeton, 1970.

ORR 2004-01: Tibor Illés and Marianna Nagy, New variant of the Mizuno-Todd-Ye predictor-corrector algorithm for the sufficient matrix linear complementarity problem. October 2004. Accepted to the European Journal of Operational Research.

We analyze a version of the Mizuno-Todd-Ye predictor-corrector interior point algorithm for the $\mathcal{P}_*(\kappa)$-matrix linear complementarity problem (LCP). We assume the existence of a strictly positive feasible solution.
Our version of the Mizuno-Todd-Ye predictor-corrector algorithm generalizes Potra's (2002) results on the LCP with $\mathcal{P}_*(\kappa)$-matrices. To derive the complexity of this algorithm we use a $\|\mathbf{v}^{-1}-\mathbf{v}\|$ proximity measure, like Potra. Our algorithm differs from Miao's method (1995) both in the proximity measure used and in the way the centrality parameter is updated. Our analysis is simpler than the previously stated results. We also show that the complexity of our algorithm is $O((1+\kappa)^{\frac{3}{2}}\sqrt{n}L)$.

ORR 2003-02: Tibor Illés and Ádám B. Nagy, A sufficient optimality criteria for linearly constrained, separable concave minimization problems. December 2003. Accepted to Journal of Optimization Theory and Applications. (http://www.springerlink.com/content/u53652477202k205/)

Sufficient optimality criteria for linearly constrained concave minimization problems are given in this paper. Our optimality criteria are based on the sensitivity analysis of the relaxed linear programming problem. Our main result is similar to that of Phillips and Rosen (1993); however, our proofs are simpler and constructive. In their paper, Phillips and Rosen (1993) derived sufficient optimality criteria for a slightly different, linearly constrained concave minimization problem using exponentially many linear programming problems. We introduce special test points and, using these, for several cases we are able to show the optimality of the current basic solution. The sufficient optimality criteria described in this paper can be used as a stopping criterion for branch and bound algorithms developed for linearly constrained concave minimization problems.

ORR 2003-01: Zsolt Csizmadia and Tibor Illés, New criss-cross algorithm for linear complementarity problems with sufficient matrices. August 2003. Accepted to Optimization Methods & Software (http://journalsonline.tandf.co.uk/link.asp?id=kp7288340v308463).
We generalize the new criss-cross type algorithms for linear complementarity problems (LCPs) given with sufficient matrices. Most LCP solvers require a priori information about the input matrix. The sufficiency of a matrix is hard to check (no polynomial time method is known). Our algorithm is similar to Zhang's criss-cross type algorithm for linear programming and to Akkeleş, Balogh and Illés's criss-cross type algorithm for LCP-QP problems. We modify our basic algorithm in such a way that it can start with any matrix M, without having any information about the properties of the matrix (sufficiency, bisymmetry, positive definiteness, etc.) in advance. Even in this case, our algorithm terminates with one of the following outcomes in a finite number of steps: it solves the LCP, solves its dual problem, or gives a certificate that the input matrix is not sufficient, so cycling can occur. Although our algorithm is more general than that of Akkeleş and Illés, the finiteness proof has been simplified. Furthermore, the finiteness proof of our algorithm gives a new, constructive proof of Fukuda and Terlaky's LCP duality theorem as well.
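Many of the abstracts above revolve around the linear complementarity problem: given a matrix M and a vector q, find w, z >= 0 with w = Mz + q and w^T z = 0. As a minimal illustration of these conditions (not taken from any of the reports; M and q below are made-up data, and the brute-force enumeration over complementary index sets merely stands in for the pivot and interior point methods discussed), the following sketch finds and verifies a solution of a tiny LCP:

```python
import numpy as np
from itertools import combinations

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check the LCP conditions: w = M z + q, w >= 0, z >= 0, w^T z = 0."""
    w = M @ z + q
    return bool(np.all(w >= -tol) and np.all(z >= -tol) and abs(w @ z) <= tol)

# Made-up example data: M is positive definite, hence sufficient.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-5.0, -6.0])

# Brute force over complementary index sets (viable only for tiny n):
# for a chosen set B we require w_i = 0 for i in B and z_i = 0 elsewhere.
n = len(q)
solution = None
for k in range(n + 1):
    for B in map(list, combinations(range(n), k)):
        z = np.zeros(n)
        if B:
            # Solve M[B,B] z_B = -q[B] so that the w-components in B vanish.
            z[B] = np.linalg.solve(M[np.ix_(B, B)], -q[B])
        if is_lcp_solution(M, q, z):
            solution = z
            break
    if solution is not None:
        break

print(solution)  # the complementary solution z
```

For this M and q the enumeration stops at z = (4/3, 7/3), where w = Mz + q = 0; the point of the algorithms in the reports above is precisely to reach such complementary solutions without exhaustive search, whose cost grows as 2^n.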

 Department of Operations Research, Eötvös Loránd University, 1117 Budapest, Pázmány Péter sétány 1/C, Hungary. Contact: Adrienn Nagy - orr@cs.elte.hu