Dima Computation times... (xDima1 / xTour1)

deals with computation of distance matrices
Bernd Welter
Site Admin
Posts: 2564
Joined: Mon Apr 14, 2014 10:28 am

Dima Computation times... (xDima1 / xTour1)

Post by Bernd Welter »

Hi there,

Every once in a while I get asked about the "performance of distance matrix calculation". Here are some simple results I produced on my local machine (Core i7, details below, 16 GB memory, xTour 1.24.0.3, HIGH PERFORMANCE ROUTING enabled). I calculated the distance matrices from scratch.
Left: times based on conventional routing
Right: times based on HPR
green lines: 60 seconds
yellow lines: 1000 rows
500 x 500 : 2.0 seconds
1.000 x 1.000 : 3.0 seconds
2.000 x 2.000 : 5.5 seconds
3.000 x 3.000 : 9.5 seconds
4.000 x 4.000 : 14.5 seconds
5.000 x 5.000 : 21.0 seconds
7.500 x 7.500 : 39.0 seconds
10.000 x 10.000 : 64.0 seconds
15.000 x 15.000 : 136 seconds
20.000 x 20.000 : 240 seconds
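As a very rough rule of thumb, and only as a fit of the HPR numbers above rather than an official formula: the runtime grows roughly quadratically with the number of locations. A simple model of "fixed overhead plus cost per relation", derived from the 10.000² and 20.000² measurements, reproduces the larger matrices reasonably well:

    // Rough fit of the HPR timings above: t(n) ~ A + B * n^2
    // A and B are derived from the 10,000 x 10,000 (64 s) and 20,000 x 20,000 (240 s) runs.
    public class DimaTimeEstimate {
        static final double A = 5.3;                               // fixed overhead in seconds
        static final double B = (240.0 - 64.0) / (4.0e8 - 1.0e8);  // seconds per relation, ~5.9e-7

        static double estimateSeconds(long n) {
            return A + B * (double) n * (double) n;
        }

        public static void main(String[] args) {
            for (long n : new long[] {5_000, 10_000, 20_000}) {
                System.out.printf("%,d x %,d -> ~%.0f s%n", n, n, estimateSeconds(n));
            }
        }
    }

Your own numbers will of course depend on hardware, map data and the distribution of the locations.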
Feedback is welcome.

Here's some feedback from Joost / Dutch office:
xDima 1 with CH: 100.000 random locations in Europe, just to see whether it could be done. It could, but I had to play around with the different timers so that the xServer framework didn't kill the module if it did not respond fast enough. Above 100.000 I ran into memory issues back then (IIRC it was on a machine with 16 GB).
And Jürgen says:
70.000 objects with xServer 2. The size of the dima was around 40 GB on a system with 32 GB of RAM. It is important to change some default parameters; I got this support e-mail from development because I had some problems:
Increasing the maximum Java memory should help as long as the system itself provides enough memory.
- wrapper.conf: wrapper.java.maxmemory
- xserver.conf: moduleRunCmd -> -Xmx
So it is possible to compute large matrices, but it requires time (and increased default timeouts) and FILESPACE (size(n) = n² * 6 bytes, i.e. 60 GB for a 100.000 x 100.000 dima).
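For illustration only, the two settings Jürgen mentions could look roughly like this; the values are made-up examples, not recommendations, and the exact syntax depends on your installation:

    wrapper.conf:  wrapper.java.maxmemory=24576      (maximum Java heap of the wrapped process, value in MB)
    xserver.conf:  moduleRunCmd = ... -Xmx24g ...     (raise the -Xmx option in the module's Java command line)

And a tiny sketch just to make the filespace arithmetic explicit:

    // Back-of-the-envelope check of the rule above: size(n) = n^2 * 6 bytes for an n x n dima.
    public class DimaFileSize {
        static long dimaBytes(long n) {
            return n * n * 6L;
        }

        public static void main(String[] args) {
            long n = 100_000;
            System.out.printf("%,d x %,d dima: ~%.0f GB%n", n, n, dimaBytes(n) / 1e9);
            // prints ~60 GB, as quoted above
        }
    }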

Best regards,
Bernd
OS Name: Microsoft Windows 10 Enterprise
OS Version: 10.0.10586 N/A Build 10586
OS Manufacturer: Microsoft Corporation
System Manufacturer: LENOVO
System Type: x64-based PC
Processor(s): 1 processor(s) installed.
[01]: Intel64 Family 6 Model 60 Stepping 3 GenuineIntel ~2494 MHz
BIOS Version: LENOVO GNET83WW (2.31), 03.05.2017
Total Physical Memory: 16,009 MB
Available Physical Memory: 3,209 MB
Virtual Memory: Max Size: 22,338 MB
Virtual Memory: Available: 2,902 MB
Virtual Memory: In Use: 19,436 MB
Bernd Welter
Technical Partner Manager Developer Components
PTV Logistics - Germany

Bernd at... The Forum, LinkedIn, YouTube, StackOverflow
I like the smell of PTV Developer in the morning... :twisted:
Joost
Posts: 307
Joined: Fri Apr 25, 2014 1:46 pm

Re: Dima Computation times...

Post by Joost »

For users who want to test this themselves: when benchmarking the performance of xServers, and especially of xDima, keep in mind that the first request after an xServer has started always takes much longer than subsequent requests, since all data has to be read from the hard drive and nothing is cached yet by either xDima or the OS.

Before benchmarking the performance of your local system, please do a few requests as a "warm-up phase" before actually measuring.
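A minimal sketch of that advice (sendDimaRequest() is a hypothetical placeholder for whatever xDima client call you want to benchmark, not an actual xServer API): the warm-up runs fill the caches and are excluded from the measurement.

    public class DimaBenchmark {
        static void sendDimaRequest() {
            // placeholder: issue your real xDima request here
        }

        public static void main(String[] args) {
            final int warmUpRuns = 3;
            final int measuredRuns = 5;

            for (int i = 0; i < warmUpRuns; i++) {
                sendDimaRequest();               // warms up xDima/OS caches, not timed
            }
            for (int i = 0; i < measuredRuns; i++) {
                long start = System.nanoTime();
                sendDimaRequest();
                long millis = (System.nanoTime() - start) / 1_000_000;
                System.out.println("run " + (i + 1) + ": " + millis + " ms");
            }
        }
    }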
Joost Claessen
Senior Technical Consultant
PTV Benelux
Bernd Welter
Site Admin
Posts: 2564
Joined: Mon Apr 14, 2014 10:28 am

Re: Dima Computation times... ( XSERS-944 )

Post by Bernd Welter »

Hello Joost,

imagine we have about 4-5 GB per search graph and about N graphs (profiles) altogether (required filespace = 5 GB x N).
How does a server react (with more than 5 GB x N of memory) when it switches from one profile/search graph to another (after the warm-up)?

Would we benefit from having 5 GB x N of memory? Or does the process free the memory when the graph changes, so that the required amount of memory only depends on the target number of backend modules (e.g. 8 backend modules, so only 40 GB plus core memory)?

Best regards,
Bernd
Bernd Welter
Technical Partner Manager Developer Components
PTV Logistics - Germany

Bernd at... The Forum, LinkedIn, YouTube, StackOverflow
I like the smell of PTV Developer in the morning... :twisted:
Joost
Posts: 307
Joined: Fri Apr 25, 2014 1:46 pm

Re: Dima Computation times...

Post by Joost »

There has been a lot of optimization in the use of search graphs in the latest versions of the engine. Currently I'm not completely up to date on what xDima (or any other xServer that uses high performance routing) actually needs to read in when switching graphs. Let's see what development has to say about this.
Joost Claessen
Senior Technical Consultant
PTV Benelux
Bernd Welter
Site Admin
Posts: 2564
Joined: Mon Apr 14, 2014 10:28 am

Re: Dima Computation times...

Post by Bernd Welter »

Here is some more info from DEV (Maximilian):
xServer 2 answer:
Search graphs/HPR networks are loaded and unloaded for each xRoute/xDima request, i.e. the xServer does not cache the files explicitly but uses memory-mapped files. Of course the file system or the operating system can, and hopefully will, cache the HPR networks when these files are loaded frequently. I'm no expert on FS/OS caching, but I assume that more RAM is beneficial. Falling back on OS/FS caching has another benefit: it doesn't matter which backend module loads the HPR network.
The required memory does not depend directly on the number of backend modules (because of the memory-mapped files). It is more important to have N times M of RAM, where N is the number of profiles and M the size of the search graph/HPR network.
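Just to illustrate the general memory-mapped-file mechanism described here (a generic Java sketch, not xServer source code; the file name is invented): the file is mapped into the process address space, pages are loaded lazily on access, and the OS page cache decides what stays in RAM, independent of which module maps the file.

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MappedGraphExample {
        public static void main(String[] args) throws IOException {
            Path graphFile = Path.of("hpr-network.bin");  // hypothetical file name
            try (FileChannel ch = FileChannel.open(graphFile, StandardOpenOption.READ)) {
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                byte first = buf.get(0);  // pages are faulted in by the OS on first access
                System.out.println("mapped " + ch.size() + " bytes, first byte = " + first);
            }
        }
    }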

xServer 1 answer:
Frank thinks there should be no fundamental difference.

Regarding your dima performance experiments:
Not only the number of locations impacts the performance, but also their distribution. A 2000x2000 dima within Bavaria is calculated faster than a 2000x2000 dima spread across Europe. The size of the area covered by the locations has a much stronger influence on the performance of conventional routing than on the performance of high-performance routing.
Bernd Welter
Technical Partner Manager Developer Components
PTV Logistics - Germany

Bernd at... The Forum, LinkedIn, YouTube, StackOverflow
I like the smell of PTV Developer in the morning... :twisted: