Performance tuning (e.g. batch geocoding or routing)



Postby Bernd Welter » Thu Nov 02, 2017 1:17 pm

Hi there,

From time to time we are asked how clients can improve the performance of tasks that involve a large number of transactions with identical parameter settings, e.g. routings with the same routing options, or geocodings with the same search options. This little post should give you some valuable hints... feel free to return feedback.

Here are several mechanisms that can be applied at the same time. I recommend that you try to understand the idea behind each of them, so you gain a generic understanding of where improvements can be made.

Let's use the following example scenario for a description - we deal with reverse geocoding (the same ideas apply to routing, too):
  • 1.000.000 given coordinates that are supposed to be reverse geocoded
  • I assume we have a load balanced cluster of xLocate modules (e.g. xServer INTERNET)
The poorest performance you can get is - as expected - with the most obvious call: sending 1.000.000 sequential xLocate.findLocation() transactions with default options. Each call opens an HTTP (or HTTPS) connection, gets a single result from the server and closes the connection - and this procedure happens 1.000.000 times.
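To make the baseline concrete, here is a minimal sketch of that naive loop. Note that `reverse_geocode` is just an illustrative stand-in for one xLocate.findLocation() round trip - it is not the real xServer client API:

```python
# Naive baseline: one request (and one HTTP/HTTPS connection) per coordinate.

def reverse_geocode(coord):
    """Stub for a single findLocation() round trip (connect, request, close)."""
    x, y = coord
    return {"coord": coord, "address": f"address near {x:.3f}/{y:.3f}"}

def naive_batch(coords):
    # 1.000.000 coordinates -> 1.000.000 sequential round trips
    return [reverse_geocode(c) for c in coords]

coords = [(8.4 + i * 0.001, 49.0) for i in range(5)]
results = naive_batch(coords)
print(len(results))  # one result per input coordinate
```

With a real server, the connection setup and teardown dominates here, which is exactly what the improvements below attack.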
Now let's look at the improvements:
  • Instead of a single thread, try to benefit from the cluster: slice the overall 1.000.000-coordinate task into several smaller tasks that can be handled in parallel. E.g. if you slice the cake into 10 pieces of 100.000 each, you can (theoretically) decrease the computation time from 100% to 10%. Of course this requires a proper reassembly of the 10 result packages, and of course this would still open/close 1.000.000 HTTP/HTTPS connections.
  • Furthermore, try to benefit from our bulk methods such as xLocate.findLocations(...): instead of sending a lot of small single transactions, send them in bulk packages (e.g. 200 coordinates per findLocations call). This reduces the effort spent on opening and closing HTTP/HTTPS connections.
  • Of course the mechanisms mentioned so far do not reduce the absolute effort of the transactions on the backend server modules. To achieve this third improvement, parametrize the requests so that you only compute the information you actually need, e.g. by limiting the reverse geocoding to a number of results == 1.
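The first two improvements can be combined in one sketch: slice the input into bulk packages and process the packages with several worker threads, so the load-balanced cluster is kept busy in parallel. Again, `reverse_geocode_bulk` is only an illustrative stand-in for xLocate.findLocations(), and the bulk size of 200 is just an example value:

```python
from concurrent.futures import ThreadPoolExecutor

BULK_SIZE = 200  # e.g. 200 coordinates per findLocations() request

def reverse_geocode_bulk(coords):
    """Stub for one findLocations() round trip: one connection,
    one result per coordinate (numberOfResults == 1 keeps it lean)."""
    return [{"coord": c, "address": "dummy address"} for c in coords]

def chunks(seq, size):
    # slice the overall task into bulk packages
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def tuned_batch(coords, workers=10):
    # several threads send bulk packages in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        packages = list(pool.map(reverse_geocode_bulk, chunks(coords, BULK_SIZE)))
    # reassembly: map() preserves package order, so the flattened
    # results line up with the input coordinates
    return [r for package in packages for r in package]

coords = [(8.4 + i * 0.001, 49.0) for i in range(1000)]
results = tuned_batch(coords)
print(len(results))
```

With 1.000 coordinates and a bulk size of 200, this sends 5 requests instead of 1.000 - and up to 10 of them can be in flight at the same time.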
For those who deal with routing tasks - especially in a DIMA (distance matrix) and 1:N (= one-to-many) context:
  • Use HighPerformanceRouting whenever possible.
  • Do not call a complex method (e.g. calculateExtendedRoute or calculateRoute) if it is not necessary: if you do not need the polygon or segment info, set those ResultListOption values to FALSE or use one of the "routeInfo" methods.
  • If you only need driving times and distances, check whether calculateBulkRouteInfo can replace a lot of calculateRouteInfo calls (reducing the number of HTTP/HTTPS connections).
  • Check whether you can even use 1:N routings (calcMatrixInfo) instead of calculateRouteInfo or calculateBulkRouteInfo.
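The gain of the last point is easy to see if you only count round trips. In this sketch, `route_info` and `matrix_info` are illustrative stand-ins for calculateRouteInfo and calcMatrixInfo - not the real signatures - and the call counters simulate the number of requests sent:

```python
# Counting round trips only: N pairwise calls vs. one 1:N matrix call.
calls = {"route_info": 0, "matrix_info": 0}

def route_info(start, dest):
    calls["route_info"] += 1
    return {"distance": 1, "time": 1}  # dummy values

def matrix_info(starts, dests):
    calls["matrix_info"] += 1
    return [[{"distance": 1, "time": 1} for _ in dests] for _ in starts]

depot = (8.4, 49.0)
customers = [(8.4 + i * 0.01, 49.0) for i in range(50)]

# 1:N expressed as N pairwise calls -> 50 requests
for c in customers:
    route_info(depot, c)

# 1:N expressed as one matrix call -> 1 request
matrix_info([depot], customers)

print(calls)  # {'route_info': 50, 'matrix_info': 1}
```

On top of the saved connections, the matrix method lets the backend reuse its search structures across all destinations instead of starting from scratch 50 times.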

So much for the moment,
best regards,
Bernd
Bernd Welter
Senior Technical Consultant Developer Components
PTV GROUP - Germany

Bernd at Youtube