How to increase Performance of large xMapMatch call?


How to increase Performance of large xMapMatch call?

Postby Asterix » Thu Jul 06, 2017 6:58 am

Hi Tobias,

I'm currently in touch with a customer who faces poor performance with a LARGE scenario (more than 14,000 coordinates) in a map matching call: it takes 100-180 seconds to process the request.

Here are my questions:

  • Is it realistic to process such a large request in a shorter time (half the time or even less)? What would we have to do to achieve this?
  • Is it meaningful to slice the polygon points into several overlapping requests that can be processed in parallel? For example, to replace [0,...,14000] by [0,...,5005], [4995,...,10005], [9995,...,14000] and then concatenate the results of [0,...,5000], [5001,...,10000], [10001,...,14000]? Can we recommend this for even larger requests? (See the sketch right after this list.)
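
Here is a minimal sketch of the index arithmetic behind such an overlapping split. It uses plain C# without any xServer types; TrackSlicer is just a made-up name, and the chunk size of 5000 and the overlap of 5 points are only the values from the example above:

C# sketch (illustrative only):
using System;
using System.Collections.Generic;

public static class TrackSlicer
{
    // Splits the index range [0, total) into chunks of roughly chunkSize points,
    // each extended by `overlap` points towards its neighbours so the matcher
    // gets a bit of context at the cut points.
    public static List<(int Start, int End)> Slice(int total, int chunkSize, int overlap)
    {
        var ranges = new List<(int Start, int End)>();
        for (int start = 0; start < total; start += chunkSize)
        {
            int from = Math.Max(0, start - overlap);
            int to = Math.Min(total, start + chunkSize + overlap); // exclusive upper bound
            ranges.Add((from, to));
        }
        return ranges;
    }
}

Slice(14000, 5000, 5) yields [0,5005), [4995,10005) and [9995,14000), i.e. exactly the ranges from the example above.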
Best regards from Cergy,

Bernd
Bernd Welter
Manager Technical Consulting & Requirement Engineering
Senior Technical Consultant Developer Components
PTV GROUP - Germany

https://www.youtube.com/channel/UCgkUli9yGf0gwTDdxbMZ-Kg

Re: How to increase Performance of large xMapMatch call?

Postby Tobias Bachmor » Thu Jul 06, 2017 8:25 am

Hi Bernd,

without looking at the request, this is hard to tell. Granted, 14,000 coordinates are a huge request which will take its time (serialising/deserialising/computing). But computation time largely depends on the profile used - and that depends on the input data.
Slicing sure is an option, but then you will have to glue the single tracks back together - and depending on the kind of track (dense signal/sparse signal), slicing is not an easy job. So again, without looking at the request, there is no yes/no answer.
Maybe the 14,000 coordinates can be filtered before they are sent to xMapmatch (e.g. if there are long periods without movement). Or perhaps some features in the output can be switched off because they are not needed, thus reducing the time to serialise the result.
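
Just to illustrate the filtering idea, here is a very crude sketch. GpsPoint, the 5 m threshold and the haversine helper are assumptions for illustration only, not part of the xMapmatch API, and a real filter would probably keep the first and last fix of a standstill phase instead of dropping everything in between:

C# sketch (illustrative only):
using System;
using System.Collections.Generic;

// hypothetical raw GPS fix - not an xServer type
public record GpsPoint(double Lat, double Lon, DateTime Time);

public static class TrackFilter
{
    // Drops consecutive fixes that barely moved, e.g. recorded during a long stop.
    public static List<GpsPoint> DropStationaryPoints(IReadOnlyList<GpsPoint> track,
                                                      double minMoveMeters = 5.0)
    {
        var filtered = new List<GpsPoint>();
        foreach (var p in track)
        {
            if (filtered.Count == 0 || HaversineMeters(filtered[^1], p) >= minMoveMeters)
                filtered.Add(p); // keep only fixes that actually moved
        }
        return filtered;
    }

    private static double HaversineMeters(GpsPoint a, GpsPoint b)
    {
        const double R = 6_371_000; // mean earth radius in meters
        double dLat = (b.Lat - a.Lat) * Math.PI / 180;
        double dLon = (b.Lon - a.Lon) * Math.PI / 180;
        double h = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(a.Lat * Math.PI / 180) * Math.Cos(b.Lat * Math.PI / 180) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * R * Math.Asin(Math.Sqrt(h));
    }
}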

To make a long story short, requests with that number of coordinates will take time to process. But by looking at the data, we can surely give some hints on how to tune the request.

Best,


Tobias
Tobias Bachmor
Director Logistics Applications
PTV, Germany

Re: How to increase Performance of large xMapMatch call?

Postby Asterix » Thu Jul 06, 2017 10:08 am

Hello Tobias,

We will provide the request to you for internal analysis.
I think there is good potential to provide some generic hints here in the forum.

According to the attached benchmarks, our LARGE scenario is based on 5,000 coordinates.
So it sounds like the customer's experience is just ordinary.
If that is the case, we can't solve the issue by parametrization, but only by business logic such as slicing.

You mentioned "gluing" the results together. Is that a problem? Is there anything else we have to be careful with besides the parallel client calls? (I once had a customer with a mighty number of backend modules who then sent all transactions in a single thread. Performance was poor, the machine wasn't loaded, so the customer wasn't happy.)

Best regards,
Bernd

Benchmark-xMapmatch-1.24.0.0.pdf
Benchmarks of xMapMatch 1.24
(1.06 MiB)
Bernd Welter
Manager Technical Consulting & Requirement Engineering
Senior Technical Consultant Developer Components
PTV GROUP - Germany

https://www.youtube.com/channel/UCgkUli9yGf0gwTDdxbMZ-Kg

Re: How to increase Performance of large xMapMatch call?

Postby Asterix » Thu Jul 06, 2017 12:46 pm

Just an idea: how about a recursive approach? ;-)
It would be based on matchTrackExtended (same signature):

C# pseudocode:
public ... myFake(TrackPosition[] inputTrackPositions /* ... */)
{
    if (inputTrackPositions.Length < 100)
    {
        // small enough: hand the track to the service directly
        return matchTrackExtended(inputTrackPositions);
    }
    else
    {
        // cut the original array into overlapping pieces, e.g. at the half (like QuickSort),
        // call myFake recursively on each piece,
        // then glue the partial results together...
        return theCombi;
    }
}

At a higher level: apply parallelization ;-)
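
A minimal sketch of that higher level, assuming the overlapping slices have already been built. The generic matchChunkAsync delegate is a hypothetical wrapper around the blocking matchTrackExtended call (e.g. via Task.Run against whatever client proxy the stubs provide); TChunk would then be TrackPosition[] and TResult the result type of matchTrackExtended:

C# sketch (illustrative only):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class ParallelMatcher
{
    // Matches all slices concurrently and returns the partial results in slice order.
    public static async Task<TResult[]> MatchAllAsync<TChunk, TResult>(
        IReadOnlyList<TChunk> slices,
        Func<TChunk, Task<TResult>> matchChunkAsync)
    {
        // fire one request per slice; they can be served by several backend modules in parallel
        Task<TResult>[] pending = slices.Select(matchChunkAsync).ToArray();

        // Task.WhenAll preserves the input order, so the partial results can simply be glued afterwards
        return await Task.WhenAll(pending);
    }
}

This way the client really fires the requests concurrently instead of pushing everything through a single thread - the pitfall from my earlier post.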
Bernd Welter
Manager Technical Consulting & Requirement Engineering
Senior Technical Consultant Developer Components
PTV GROUP - Germany

https://www.youtube.com/channel/UCgkUli9yGf0gwTDdxbMZ-Kg

Re: How to increase Performance of large xMapMatch call?

Postby Tobias Bachmor » Thu Jul 06, 2017 3:42 pm

Well, gluing the parts together will basically work - depending on the track ;)
There is a difference between matching a track completely in one run and matching snippets of it and gluing them together afterwards - just remember that during matching we keep some kind of history, which is lost when the track is cut.
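
To make the role of the overlap concrete, here is one very simplified way to do the gluing: keep only each chunk's interior when concatenating, so the positions that were matched without history end up in the discarded margins. It assumes (as a simplification) one result element per input position and the same number of overlap positions on both sides of every chunk - it does not remove the history problem, it only pushes it away from the cut points:

C# sketch (illustrative only):
using System.Collections.Generic;

public static class ResultGluer
{
    // Concatenates the partial results, dropping the `overlap` elements that each
    // chunk shares with its neighbour (except at the very beginning and end).
    public static List<T> Glue<T>(IReadOnlyList<IReadOnlyList<T>> chunkResults, int overlap)
    {
        var glued = new List<T>();
        for (int i = 0; i < chunkResults.Count; i++)
        {
            var chunk = chunkResults[i];
            int from = (i == 0) ? 0 : overlap;                              // drop leading overlap
            int to = (i == chunkResults.Count - 1) ? chunk.Count            // keep the tail of the last chunk
                                                   : chunk.Count - overlap; // drop trailing overlap
            for (int j = from; j < to; j++)
                glued.Add(chunk[j]);
        }
        return glued;
    }
}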

Regarding the performance, it really depends on the track. The tracks we use for the benchmarks are okay to get some figures, but to be honest we would need different combinations of tracks and profiles to give more accurate performance numbers for different scenarios.
Tobias Bachmor
Director Logistics Applications
PTV, Germany

