9.3 Computational speed

The computational effort required by our method is an area of some concern, though more as a practical than as a theoretical issue. From a rigorous theoretical point of view, the vorticity redistribution method is in fact superior to particle methods with respect to computational time. After all, in the limiting process in which convergence is achieved, the particle methods must transfer their vorticity to infinitely many neighbors, requiring infinitely many computational operations. Although the vorticity redistribution method must elaborately compute the fractions to transfer to its neighbors, only a finite number of neighbors is involved, making the work asymptotically finite.

However, the situation is much less clear than this argument might suggest. The comparison above assumes that the particle and redistribution methods use the same number of vortices and time steps. Yet a particle method such as Fishelov's [78] can be exponentially accurate. While the vorticity redistribution method can have any fixed order of accuracy, at least for the Stokes equations, it cannot be exponentially accurate using a finite number of points. For infinitely smooth initial data, an exponentially accurate particle method would asymptotically need far fewer points than a fixed-order vorticity redistribution method, making the above comparison of times meaningless.
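Schematically, if the error of an exponentially accurate method behaves like $e^{-c/h}$ for a typical point spacing $h$, while a method of fixed order $p$ has an error like $C h^{p}$ (with $c$, $C$, and $p$ depending on the method and the smoothness of the data), then reaching an error level $\epsilon$ requires
\[
h_{\rm exp} \sim \frac{c}{\ln(1/\epsilon)} ,
\qquad
h_{\rm fixed} \sim \left( \epsilon / C \right)^{1/p} .
\]
Since the number of points grows like $h^{-d}$ in $d$ dimensions, the exponentially accurate method needs a point count that grows only as a power of $\ln(1/\epsilon)$, while the fixed-order method needs one that grows as a power of $1/\epsilon$.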

Furthermore, under realistic conditions the number of neighboring vortices affected in a particle method is not likely to be very large. Since it is significantly less work to transfer a vorticity fraction onto a vortex than to compute that fraction from a linear programming problem, finite or not, the asymptotic estimate is clearly misleading for practical applications. This is particularly so for the particle methods that remesh every few time steps, e.g. [119,170]: these may involve only on the order of 200 neighboring vortices.
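To give an impression of the work involved in computing such fractions, the sketch below sets up a linear programming problem of this kind for a single vortex. It is an illustration only: the $3\times 3$ stencil, the spacing, and the use of a generic library solver are assumptions made for this example, and only the generic first-order moment conditions for two-dimensional diffusion are imposed; these are not necessarily the stencil, conditions, or algorithm used in our computations.

```python
# Illustrative sketch only: the stencil, spacing, and solver below are assumptions
# made for this example, not the procedure used elsewhere in this work.
import numpy as np
from scipy.optimize import linprog

nu, dt = 1.0e-3, 0.1                       # viscosity and time step (arbitrary values)
s = 2.0 * np.sqrt(nu * dt)                 # assumed spacing of the neighboring vortices
# Candidate recipients: a 3 x 3 block of vortices centered on the vortex itself,
# so the offsets include (0, 0).
dx, dy = [g.ravel() for g in np.meshgrid(s * np.arange(-1, 2), s * np.arange(-1, 2))]

# Generic first-order moment conditions for two-dimensional diffusion:
# conservation, zero first moments, second moments equal to 2*nu*dt, zero cross moment.
A_eq = np.vstack([np.ones_like(dx), dx, dy, dx**2, dy**2, dx * dy])
b_eq = np.array([1.0, 0.0, 0.0, 2 * nu * dt, 2 * nu * dt, 0.0])

# Feasibility problem: any nonnegative fractions satisfying the conditions will do,
# so the linear programming objective is simply zero.
res = linprog(c=np.zeros(dx.size), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
fractions = res.x                          # one nonnegative fraction per candidate vortex
print(res.status == 0, fractions)          # status 0 means valid fractions were found
```

Even this small problem requires setting up and solving a constrained system for every vortex at every time step, whereas a remeshing particle method merely multiplies by precomputed interpolation weights.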

In any case, from a practical point of view the real question is whether the computational time for the redistribution step leads to an unacceptable increase in the total computational time. If the time for redistribution were much larger than the time needed to find the velocity field, it would significantly restrict the range of problems that could be addressed with the method. Our computational examples in chapter 7 show that this is not the case: the time for redistribution is roughly half of the total time. To put this in perspective, we may note that in order to resolve length scales only twice as small, a computation would need 16 times the computational effort in two dimensions and 32 times in three.
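These factors follow from the usual scaling argument if we assume, for the purpose of this estimate, that the work per time step is proportional to the number of vortices, that the number of vortices grows as $h^{-d}$ in $d$ dimensions when the resolved length scale $h$ is halved, and that the time step must be reduced diffusively as $\Delta t \propto h^{2}$:
\[
\frac{\mbox{work}(h/2)}{\mbox{work}(h)} \;\sim\; 2^{d} \times 2^{2}
\;=\; \left\{ \begin{array}{ll} 16, & d = 2, \\ 32, & d = 3. \end{array} \right.
\]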

Furthermore, as discussed in subsection 6.2.1, we have not yet made any serious attempt to reduce the time required for our method. Since there appears to be no theoretical limit to the reduction in computational effort that might be achievable, this is a promising area for further research.

A true saving of computational time compared to particle methods can occur if the initial vorticity is sparse. The vorticity redistribution method, with its capability to deal with randomly distributed, independent vortices, needs to use vortices only in regions where vorticity exists. New vortices are added automatically when the region with vorticity expands, as sketched below. For example, for the diffusion of a point vortex we started with a single vortex and let our method add vortices automatically. Particle methods typically start out with a large number of vortices, most of which are inactive at those early times. (An improvement suggested by Pépin [170] is to allow the number of particles to be increased during remeshing, so that fewer particles can be used during the first stages.)
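The automatic addition of vortices amounts to simple bookkeeping. The sketch below is a minimal illustration, assuming a hypothetical container that stores circulation per grid location and using made-up redistribution fractions; it is not the data structure or the fractions used in our computations.

```python
# Illustrative sketch only: a hypothetical container that creates vortices on demand
# when redistribution sends circulation to a location that does not yet exist.
from collections import defaultdict

class VortexField:
    def __init__(self):
        # circulation stored per (i, j) grid index; missing entries default to zero
        self.gamma = defaultdict(float)

    def redistribute(self, source, fractions):
        """Move the circulation of `source` onto its neighbors.

        `fractions` maps neighbor offsets (di, dj) to the fraction of the
        circulation each neighbor receives; new vortices appear automatically.
        """
        i, j = source
        g = self.gamma.pop(source)
        for (di, dj), f in fractions.items():
            self.gamma[(i + di, j + dj)] += f * g   # creates the entry if absent

# Start from a single point vortex and let the support grow by itself.
field = VortexField()
field.gamma[(0, 0)] = 1.0
stencil = {(0, 0): 0.25, (1, 0): 0.1875, (-1, 0): 0.1875,
           (0, 1): 0.1875, (0, -1): 0.1875}          # made-up fractions summing to one
field.redistribute((0, 0), stencil)
print(len(field.gamma))                              # five vortices now exist
```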