There are three forms of least squares adjustment: parametric, conditional, and combined:
In parametric adjustment, one can find an observation equation h(X) = Y relating observations Y explicitly in terms of parameters X (leading to the A-model below).
In conditional adjustment, there exists a condition equation g(Y) = 0 involving only observations Y (leading to the B-model below), with no parameters X at all.
Finally, in a combined adjustment, both parameters X and observations Y are involved implicitly in a mixed-model equation f(X, Y) = 0.
Clearly, parametric and conditional adjustments correspond to the more general combined case when f(X,Y) = h(X) - Y and f(X, Y) = g(Y), respectively. Yet the special cases warrant simpler solutions, as detailed below. Often in the literature, Y may be denoted L.
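For illustration: measuring the three interior angles of a plane triangle yields the condition equation g(Y) = y₁ + y₂ + y₃ - 180° = 0, which contains no parameters; fitting a straight line to values yᵢ measured at known epochs tᵢ yields observation equations h(X) = a + b·tᵢ = yᵢ with parameters X = (a, b); and fitting a circle of unknown centre and radius to points whose coordinates are all measured yields a mixed equation f(X, Y) = 0 in which parameters and observations cannot be separated.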
Solution
The equalities above only hold for the estimated parameters X̂ and observations Ŷ, thus f(X̂, Ŷ) = 0. In contrast, measured observations Ỹ and approximate parameters X⁰ produce a nonzero misclosure:

w = f(X⁰, Ỹ)
One can proceed to Taylor series expansion of the equations, which results in the Jacobians or design matrices: the first one,

A = ∂f/∂X,

and the second one,

B = ∂f/∂Y,

both evaluated at the approximate parameters X⁰ and the measured observations Ỹ.
The linearized model then reads:

w + A x̂ + B ŷ = 0,

where x̂ = X̂ - X⁰ are estimated parameter corrections to the a priori values, and ŷ = Ŷ - Ỹ are post-fit observation residuals.
In the parametric adjustment, the second design matrix is an identity, B = -I, and the misclosure vector can be interpreted as the pre-fit residuals, w = h(X⁰) - Ỹ, so the system simplifies to:

A x̂ = ŷ - w,

which is in the form of ordinary least squares.
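A minimal sketch of this parametric case in Python, assuming unit weights and an illustrative straight-line model h(X) = a + b·t (the data values and variable names are made up):

import numpy as np

# Parametric (A-model) sketch: observation equations h(X) = Y, linearized as
# w + A*x_hat - y_hat = 0, with w = h(X0) - Y_meas the pre-fit residuals.
def h(X, t):
    a, b = X
    return a + b * t

t = np.array([0.0, 1.0, 2.0, 3.0])
Y_meas = np.array([0.1, 1.9, 4.1, 5.9])        # measured observations (illustrative)
X0 = np.array([0.0, 1.0])                      # approximate parameter values

A = np.column_stack([np.ones_like(t), t])      # design matrix, the Jacobian dh/dX
w = h(X0, t) - Y_meas                          # misclosure = pre-fit residuals
x_hat = np.linalg.lstsq(A, -w, rcond=None)[0]  # least-squares solution of A*x = -w
X_hat = X0 + x_hat                             # adjusted parameters
y_hat = w + A @ x_hat                          # post-fit residuals

Since this model is linear, a single step suffices; for a nonlinear h the step is iterated, replacing X⁰ with X̂ each time.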
In the conditional adjustment, the first design matrix is null, A = 0.
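A corresponding sketch of the conditional case, using the classical example of the three measured angles of a plane triangle (the angle values and unit weights are illustrative):

import numpy as np

# Conditional (B-model) sketch: the measured angles must satisfy
# g(Y) = y1 + y2 + y3 - 180 = 0; linearized as w + B*y_hat = 0.
y = np.array([59.95, 60.03, 60.08])   # measured angles [deg]
B = np.array([[1.0, 1.0, 1.0]])       # Jacobian dg/dY
w = np.array([y.sum() - 180.0])       # misclosure
Q = np.eye(3)                         # cofactor matrix of the observations

k = np.linalg.solve(B @ Q @ B.T, w)   # Lagrange multiplier (correlate)
y_hat = -Q @ B.T @ k                  # residuals: here each angle receives -w/3
y_adj = y + y_hat                     # adjusted angles now sum exactly to 180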
For the more general cases, Lagrange multipliers are introduced to relate the two Jacobian matrices and to transform the constrained least squares problem into an unconstrained one (albeit a larger one). In any case, their manipulation leads to the X̂ and Ŷ vectors as well as to the a posteriori covariance matrices of the parameters and observations.
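One common way to carry out that manipulation for the linearized combined model is sketched below in Python, assuming an invertible observation cofactor matrix Q = P⁻¹ and full-rank design matrices (the function name and interface are illustrative):

import numpy as np

def combined_adjustment(A, B, w, Q):
    # Minimize y_hat' P y_hat subject to w + A*x_hat + B*y_hat = 0, with P = inv(Q).
    M = B @ Q @ B.T                           # cofactor matrix of the misclosures
    N = A.T @ np.linalg.solve(M, A)           # normal matrix A' M^-1 A
    x_hat = -np.linalg.solve(N, A.T @ np.linalg.solve(M, w))
    k = np.linalg.solve(M, w + A @ x_hat)     # Lagrange multipliers (correlates)
    y_hat = -Q @ B.T @ k                      # observation residuals
    # inv(N), scaled by the a posteriori variance factor, gives the cofactor matrix of x_hat.
    return x_hat, y_hat

With B = -I and Q = I this reduces to the ordinary least-squares step shown above, and with the parameter terms removed it reduces to the conditional solution.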
Computation
Given the matrices and vectors above, the solution is found via standard least-squares methods, e.g., forming the normal equations and applying Cholesky decomposition, applying QR factorization directly to the design matrix, or using iterative methods for very large systems.
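For instance, two of these solution paths look as follows in Python (the matrix and right-hand side are illustrative; the QR route avoids squaring the condition number and is therefore generally better conditioned):

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.1, 1.9, 4.1, 5.9])

# 1) Normal equations solved with a Cholesky factorization
N = A.T @ A
c = A.T @ b
L = np.linalg.cholesky(N)                             # N = L L'
x_chol = np.linalg.solve(L.T, np.linalg.solve(L, c))

# 2) QR factorization applied directly to the design matrix
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)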
If rank deficiency is encountered, it can often be rectified by the inclusion of additional equations imposing constraints on the parameters and/or observations, leading to constrained least squares.
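A small sketch of this remedy in Python, using a leveling network in which only height differences are observed (a datum defect of one), removed here by a constraint equation fixing the first height (the observation values are illustrative):

import numpy as np

# Only height differences between the three points are observed, so the normal
# matrix is singular: adding a constant to all heights changes nothing.
A = np.array([[-1.0,  1.0,  0.0],    # h2 - h1
              [ 0.0, -1.0,  1.0],    # h3 - h2
              [ 1.0,  0.0, -1.0]])   # h1 - h3
b = np.array([1.02, 0.98, -2.01])    # measured height differences
C = np.array([[1.0, 0.0, 0.0]])      # constraint: h1 = 0 fixes the datum
d = np.array([0.0])

N = A.T @ A                          # rank-deficient normal matrix
c = A.T @ b
K = np.block([[N, C.T],
              [C, np.zeros((1, 1))]])   # bordered system with a Lagrange multiplier
sol = np.linalg.solve(K, np.concatenate([c, d]))
heights, multiplier = sol[:3], sol[3:]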