(Particle) Swarm Intelligence (SI) is a powerful tool for solving optimization problems in a fixed search-space. SI is computationally appealing: it is simple to implement and robust with respect to local minima and maxima, provided enough iterations (generations) are performed. As an additional bonus, SI is inherently parallel and can be implemented in a massively parallel way (cf. Auslander et al. (1995), Fabiunke (2002)).
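The basic particle swarm scheme can be sketched in a few lines. The following is a minimal illustration, not the implementation used in the text: the function name `pso`, the parameter values (inertia `w`, acceleration coefficients `c1`, `c2`) and the sphere test function are all illustrative choices.

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: minimizes f over a fixed box."""
    lo, hi = bounds
    # Initialize positions uniformly in the search box, velocities at zero.
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]            # each particle's personal best
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # best found by the whole swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Example: minimize the sphere function in 5 dimensions.
sphere = lambda x: sum(c * c for c in x)
best, best_f = pso(sphere, dim=5)
```

Note that the fitness evaluations of the particles within one generation are independent of each other, which is exactly the property that makes the method embarrassingly parallel.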
Gradient (Grid) Search (GS) methods like hill-climbing are superior to random-guessing algorithms like SI if the search-space is, e.g., a sphere, but on highly multi-dimensional surfaces the gradient method gets stuck too often in local extreme points and therefore becomes computationally expensive, as one has to restart from many different starting points.
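The restart problem can be illustrated with a small sketch (the function names `hill_climb` and `rastrigin` and all parameter values are illustrative, not from the text): a greedy local search on a multimodal surface converges to whichever local minimum surrounds its starting point, so one must launch many restarts and keep the best run.

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=500):
    """Greedy local search: accept a random neighbour only if it improves f."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + random.uniform(-step, step) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

# Rastrigin function: a standard multimodal surface with many local minima
# (global minimum 0 at the origin) that routinely trap local search.
def rastrigin(x):
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

# Multi-start: restart from many random points and keep the best result.
starts = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
best_x, best_f = min((hill_climb(rastrigin, s) for s in starts),
                     key=lambda r: r[1])
```

Each restart typically lands in the local minimum nearest its starting point, which is why the cost of GS grows with the number of basins rather than with the cost of a single descent.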
In higher-dimensional problems, SI seems better suited than GS methods. However, one has to be careful with such statements: according to the No Free Lunch (NFL) theorem (cf. Wolpert and Macready, 1996), when performance is averaged over all possible search spaces, all search algorithms perform equally well.
Ultimately we decided to stick with SI algorithms, which seem to converge faster for real-life-sized portfolios. Combining these evolutionary algorithms with a Gradient Search applied to selected good intermediate solutions provides a further speed-up.
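The refinement step of such a hybrid can be sketched as follows: a promising intermediate solution from the swarm is polished with plain gradient descent, using a finite-difference gradient so that no analytic derivatives are needed. The function names `finite_diff_grad` and `refine`, the step size and the sphere objective are illustrative assumptions, not the authors' actual implementation.

```python
def finite_diff_grad(f, x, eps=1e-6):
    """Central-difference gradient estimate of f at x."""
    g = []
    for d in range(len(x)):
        xp, xm = x[:], x[:]
        xp[d] += eps
        xm[d] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def refine(f, x, lr=0.1, steps=200):
    """Polish a candidate solution with fixed-step gradient descent."""
    for _ in range(steps):
        g = finite_diff_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Example: polish a hypothetical intermediate swarm best on a smooth objective.
sphere = lambda x: sum(c * c for c in x)
x_refined = refine(sphere, [0.3, -0.2, 0.1])
```

The division of labour is the point of the hybrid: the swarm supplies globally promising basins, while the gradient step converges quickly once the surface is locally smooth.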
One caveat with all numerical solutions is that, without further assumptions about the search-space, there is no guarantee that the optimal solution is found. In practice one monitors the rate of convergence and dedicates enough search time. Given the number of idle PCs and workstations in the typical investment bank or insurance company, one can be on the safe side and farm the work out, in fractions of a second, to a large number of processors, a dedicated cluster, or a Global Grid.
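Farming out a generation of fitness evaluations can be sketched with Python's standard `concurrent.futures` pool; here threads stand in for the farm of processors or grid nodes, and the `fitness` function and candidate list are purely illustrative placeholders for a full portfolio valuation.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(candidate):
    """Stand-in objective; in practice, a full portfolio valuation."""
    return sum(c * c for c in candidate)

# A generation of candidate solutions (illustrative data).
candidates = [[i * 0.1, -i * 0.05] for i in range(100)]

# The evaluations are independent, so the whole generation can be scored
# concurrently; on a real cluster or grid, each worker would be a machine.
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(fitness, candidates))

best_idx = min(range(len(candidates)), key=scores.__getitem__)
```

Because the per-candidate valuations dominate the run time, the speed-up from distributing them is close to linear in the number of workers.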