A detailed presentation of the AVP was given in Chapter 3, including a derivation of the Lagrangian which, to my knowledge, has not been given in the context of the AVP before. In the following chapter, Chapter 4, the numerical aspects of the method were presented, covering both integration and optimization. Chapter 5 then contained the main part of this thesis, namely the implementation of the principle and the testing of the different optimizing methods. Two different systems of galaxies were used for this purpose: one with 120 variables to be determined and another with 660. The reason for using two such widely different systems is that the presentation of the optimizing methods in Section 5.1 indicated that they would behave quite differently when applied to systems of different sizes. This turned out to be the case: when applied to the smaller system, there were only small variations in the performance of the methods (with the exception of the method of Steepest Descent), while for the larger system the method making use of the second derivative of the action proved too elaborate, resulting in long computation times even though its convergence was rapid. This is the general trend resulting from the tests undertaken here, and it will most likely be even more pronounced for larger systems. When the different implementations of the optimizing methods are considered, however, the picture becomes more nuanced.
The simplest and most stable of the optimizing methods tested, the method of Steepest Descent, proved to be a poor choice for either system: each iteration is very fast, but the convergence is correspondingly slow, resulting in a long total computing time. As mentioned in the presentation of the method in Subsection 4.3.1, the method is generally only used for illustrative purposes and in conjunction with other methods.
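The behaviour described above — cheap iterations but many of them — can be seen in a minimal sketch of the method. This is not the thesis's implementation; the test function, step size, and tolerance are illustrative choices, and a fixed step is used where a practical code would determine the step adaptively:

```python
import numpy as np

def steepest_descent(grad, x0, step=0.05, tol=1e-8, max_iter=10000):
    """Minimise by repeatedly stepping against the gradient.

    Each iteration is trivially cheap, but many iterations are needed
    when the problem is ill-conditioned (illustrative sketch only).
    """
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x, i
        x = x - step * g  # fixed step; practical codes choose this adaptively
    return x, max_iter

# Hypothetical ill-conditioned quadratic standing in for the action:
# S(x) = x1^2 + 10 * x2^2, with gradient as below.
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_min, n_iter = steepest_descent(grad, [1.0, 1.0])
```

Even on this two-variable quadratic, well over a hundred iterations are needed along the shallow direction, which is the slow convergence noted in the text.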
The second method tested that considers only the first derivative of the action was the Conjugate Gradients method, with its two implementations. Its convergence was considerably faster than that of Steepest Descent, and the speed of each iteration was superior to most of the implementations of the Newton-Raphson, Secant, and Hybrid methods. The method therefore proved quite efficient, having the third shortest computation time. One major limitation is that it is only able to locate a limited number of stationary values, all of them being minima.
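The essence of the method — building search directions that are mutually conjugate rather than simply following the gradient — can be sketched for a quadratic model of the action. This is a generic linear-CG sketch, not the thesis's implementation, and the matrix below is an illustrative stand-in:

```python
import numpy as np

def conjugate_gradients(A, b, x0, tol=1e-10):
    """Minimise the quadratic 0.5 x^T A x - b^T x by Conjugate Gradients.

    For an n-dimensional quadratic this terminates in at most n steps,
    which is why its convergence beats plain Steepest Descent (sketch).
    """
    x = np.asarray(x0, dtype=float)
    r = b - A @ x            # residual = negative gradient
    p = r.copy()             # first search direction
    for _ in range(len(b)):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        p = r + (r @ r / rr) * p     # new direction, conjugate to the old
    return x

# Illustrative stand-in for the (positive definite) second-derivative structure:
A = np.array([[2.0, 0.0], [0.0, 20.0]])
b = np.array([2.0, 20.0])
x = conjugate_gradients(A, b, np.zeros(2))  # minimum at [1, 1]
```

The positive-definiteness assumed here is also why the method can only converge to minima, in line with the limitation noted above.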
A better choice of method, if one is only interested in minimum points, is the third method making use of only first-order derivatives of the action, namely the Secant method. Its two implementations, using backtracking and linear searches, have been shown to behave quite differently: backtracking resulted in long computing times but found a large number of minimum points (in the case of the larger system), while the linear searches located very few solutions, but did so very rapidly. In fact, the implementation using the secant stepsize determiner was the most efficient method for the larger system. Thus, if the main interest is speed, the Secant method using linear searches is preferable, but if the goal is to reveal as many minimum points as possible, the Secant method using backtracking is a good choice.
The Newton-Raphson method makes use of the information provided by the second derivatives of the action, and thereby has the quickest convergence of all the methods tested. The problem is that the calculation and handling of the Hessian matrix is rather time consuming. This is not particularly noticeable for the small system, but becomes a serious drawback for the larger one. On the other hand, the method has the advantage of being able to locate saddle points, which seem to be valid solutions. All in all, the Newton-Raphson method is an adequate method, especially for small systems, but will most likely be too slow for systems larger than the ones tested here. Of the three stepsize determiners tested, the crude one seems able to locate the most stationary values, while the linear searches resulted in the shortest computing time.
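The core of a Newton-Raphson iteration, and the source of its cost, is a linear solve against the Hessian, which scales roughly as the cube of the number of variables — cheap at 120 variables, burdensome at 660. A minimal sketch (with a hypothetical quadratic standing in for the action) is:

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton-Raphson iteration: solve H dx = -g.

    The O(n^3) solve against the Hessian is what makes each iteration
    expensive for large systems (illustrative sketch only). Because only
    the gradient is driven to zero, the iteration converges to any
    stationary point, including saddle points.
    """
    g = grad(x)
    H = hess(x)
    return x + np.linalg.solve(H, -g)

# For a quadratic action, a single Newton step lands on the stationary point:
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
x1 = newton_step(grad, hess, np.array([3.0, -2.0]))
```

The fact that the step only asks for a zero of the gradient, not a decrease of the action, is what gives the method its ability to locate saddle points as well as minima.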
A way to improve the performance of the Newton-Raphson method is to combine it with a method that initially has a fast convergence. This is the idea behind the Hybrid method: the method of Steepest Descent locates the solution to a certain accuracy, from where the Newton-Raphson method takes over and determines the solution to high accuracy with its rapid convergence. The Hybrid method therefore combines a relatively short computing time with the ability to locate a large array of solutions, including saddle points, to high accuracy.
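The two-phase idea can be sketched as a single loop with a switch-over criterion on the gradient norm. The switching threshold, step size, and test function are illustrative assumptions, not the values used in the thesis:

```python
import numpy as np

def hybrid_minimise(grad, hess, x0, switch_tol=1e-1, tol=1e-10,
                    sd_step=0.04, max_iter=1000):
    """Hybrid scheme (sketch): cheap Steepest-Descent iterations while the
    gradient is large, then Newton-Raphson for the final rapid convergence.
    switch_tol and sd_step are illustrative, not tuned values."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        if np.linalg.norm(g) > switch_tol:
            x = x - sd_step * g                    # robust, cheap phase
        else:
            x = x + np.linalg.solve(hess(x), -g)   # fast final phase
    return x

# Hypothetical quadratic standing in for the action:
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
x = hybrid_minimise(grad, hess, [1.0, 1.0])
```

The expensive Hessian solves are thus confined to the final few iterations, which is why the Hybrid method keeps the short computing time while retaining Newton-Raphson's accuracy and its reach to saddle points.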
As can be seen from the results above, none of the methods stands out when it comes to both finding the largest number of solutions and spending the least amount of time locating them. The conclusion of the tests of the optimizing methods is therefore somewhat open, and the choice must be based on one's intentions when using them. If the goal is simply to locate any solution as fast as possible, the Secant method using linear searches is the best choice; if one would like to locate as many minimum points as possible, the Secant method using backtracking is preferable; and if the largest number of stationary values is sought, the Hybrid method is to be preferred. If one is in need of a method for overall use, this study recommends the Hybrid method, it being fast, accurate, and able to locate a large number of solutions.
Apart from the testing of the optimizing methods, physical aspects of the larger system, consisting of 22 galaxies in the LG and LN, were discussed in Subsection 5.4.3. The distances to some of the more distant galaxies were adjusted in order to obtain a better fit between observed and predicted radial velocities. Most noticeable were the adjustments of the mass tracers IC 342, NGC 253, Dw1, NGC 45, and M101, which were all given larger distances than those observed. An introduction of more mass tracers in the LN, making it more complete, may very well change this picture.
The thesis was closed in Chapter 6 with a review of the AVP, examining its wide range of applications and different implementations.