
How to increase Performance of large xMapMatch call?

Posted: Thu Jul 06, 2017 6:58 am
by Bernd Welter
Hi Tobias,

I'm currently in touch with a customer who faces poor performance with a LARGE scenario (more than 14,000 coordinates) in a map matching call: it takes several minutes (100-180 seconds) to process the request.

Here are my questions:
  • Is it realistic to compute a request of this complexity in a shorter period (half the time or even less)? What would we have to do to achieve this?
  • Is it meaningful to slice the full set of polygon points into several requests that can be processed in parallel? For example, to replace [0,...,14000] by [0,...,5005], [4995,...,10005], [9995,...,14000] and then concatenate the results for [0,...,5000], [5001,...,10000], [10001,...,14000]? Can we recommend this for larger complexity?
Best regards from Cergy,

Bernd

Re: How to increase Performance of large xMapMatch call?

Posted: Thu Jul 06, 2017 8:25 am
by Tobias Bachmor
Hi Bernd,

without looking at the request, this is hard to tell. Granted, 14,000 coordinates make for a huge request that will take its time (serialising/deserialising/computing). But computation time largely depends on the profile used - and that depends on the input data.
Slicing certainly is an option, but you will then have to glue the single tracks back together - and depending on the kind of track (dense signal/sparse signal) slicing is not an easy job. So again, without looking at the request, there is no yes/no answer.
Maybe the 14,000 coordinates can be filtered before they are sent to xMapmatch (e.g. if there are long periods without movement). Or perhaps some features in the output can be switched off because they are not needed, thus reducing the time to serialise the result.
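The filtering idea (dropping positions recorded while the vehicle barely moved) could look roughly like this. `TrackFilter` and its minimal `Pos` record are hypothetical stand-ins; the real xMapmatch TrackPosition carries more fields:

```java
import java.util.ArrayList;
import java.util.List;

public class TrackFilter {
    /** Minimal stand-in for a track position: planar coordinate plus a timestamp. */
    public static class Pos {
        public final double x, y;
        public final long timeSec;
        public Pos(double x, double y, long timeSec) {
            this.x = x; this.y = y; this.timeSec = timeSec;
        }
    }

    /**
     * Keeps the first position and then only positions that moved at least
     * minDist (in the same units as x/y) away from the last kept one,
     * removing stretches where the vehicle stood still.
     */
    public static List<Pos> dropStationary(List<Pos> track, double minDist) {
        List<Pos> result = new ArrayList<>();
        for (Pos p : track) {
            if (result.isEmpty()) { result.add(p); continue; }
            Pos last = result.get(result.size() - 1);
            double dx = p.x - last.x, dy = p.y - last.y;
            if (Math.sqrt(dx * dx + dy * dy) >= minDist) result.add(p);
        }
        return result;
    }
}
```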

To make a long story short, requests with that amount of coordinates will take time to process. But by looking at the data, we surely can give some hints on how to tune the request.

Best,


Tobias

Re: How to increase Performance of large xMapMatch call?

Posted: Thu Jul 06, 2017 10:08 am
by Bernd Welter
Hello Tobias,

We will provide the request to you for internal analysis.
I think there is good potential to derive some generic hints here in the forum.

According to the attached benchmarks, our LARGE scenario is based on 5,000 coordinates.
So it sounds like the customer's experience is simply ordinary.
If that is the case, we can't solve the issue by parametrization but only by business logic such as slicing.

You mentioned "gluing" the results together. Is that a problem? Is there anything else we have to be careful with besides the parallel client calls? (I once faced a customer with a large number of backend modules who then sent all transactions in a single thread. Performance was poor, the machine wasn't loaded, so the customer wasn't happy.)

Best regards,
Bernd
Benchmark-xMapmatch-1.24.0.0.pdf
Benchmarks of xMapMatch 1.24

Re: How to increase Performance of large xMapMatch call?

Posted: Thu Jul 06, 2017 12:46 pm
by Bernd Welter
Just an idea: how about a recursive approach ;-)
based on matchTrackExtended (same signature)

Code:

public MatchedTrack myFake(TrackPosition[] inputTrackPositions)
{
   if (inputTrackPositions.length < 100)
      return matchTrackExtended(inputTrackPositions);

   // cut the original array into two overlapping halves (QuickSort-style recursion);
   // MatchedTrack, glue() and OVERLAP are placeholders to be filled in
   int mid = inputTrackPositions.length / 2;
   TrackPosition[] left  = Arrays.copyOfRange(inputTrackPositions, 0, mid + OVERLAP);
   TrackPosition[] right = Arrays.copyOfRange(inputTrackPositions, mid - OVERLAP, inputTrackPositions.length);

   // glue the partial results together, dropping the duplicated overlap
   return glue(myFake(left), myFake(right));
}
higher level: apply parallelization ;-)
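That higher-level parallelization could be sketched with a plain thread pool. The per-slice "matcher" below is a stand-in string, not the real matchTrackExtended call; the point is only that submitting all slices first and then collecting the futures in submission order keeps the results in track order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelMatcher {
    /** Applies a stand-in per-slice matcher to all slices in parallel, preserving slice order. */
    public static List<String> matchAll(List<int[]> slices, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (int[] s : slices) {
                // In a real client this would be the xMapmatch call for positions s[0]..s[1].
                futures.add(pool.submit(() -> "matched[" + s[0] + ".." + s[1] + "]"));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // iterating the futures in order preserves slice order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```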

Re: How to increase Performance of large xMapMatch call?

Posted: Thu Jul 06, 2017 3:42 pm
by Tobias Bachmor
Well, gluing the parts together basically will work - depending on the track ;)
There is a difference between matching a track completely in one run and using snippets of it and gluing them together afterwards - just remember that during matching we keep track of some kind of history, which is lost when the track is cut.

Regarding the performance, it really depends on the track. The tracks we use for benchmarking are okay to get some figures, but to be honest we would need different combinations of tracks and profiles to give more accurate performance numbers for different scenarios.

Re: How to increase Performance of large xMapMatch call?

Posted: Mon Aug 28, 2017 2:22 pm
by Joost
What you can also ask the customer: do you really need to match all 14,000 coordinates to get a result that satisfies his business rules? If he has a very dense signal, it can be an idea to reduce the density. My rule of thumb is to reduce the density to one position every 10 seconds. You still maintain enough detail for global matching.
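That 10-second rule of thumb amounts to simple time-based downsampling. A minimal sketch, with positions reduced to { timestampSec, id } pairs for illustration (a real client would carry full track positions):

```java
import java.util.ArrayList;
import java.util.List;

public class TrackDownsampler {
    /**
     * Keeps the first position of every intervalSec window, based on
     * timestamps in seconds. Entries are { timestampSec, id } stand-ins.
     */
    public static List<long[]> downsample(List<long[]> track, long intervalSec) {
        List<long[]> result = new ArrayList<>();
        long lastKept = 0;
        for (long[] p : track) {
            if (result.isEmpty() || p[0] - lastKept >= intervalSec) {
                result.add(p);
                lastKept = p[0];
            }
        }
        return result;
    }
}
```

For a one-position-per-second track, `downsample(track, 10)` cuts the request to roughly a tenth of its size before it is sent to xMapmatch.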