
Dima calculation times xDima2 (HPR)

Posted: Wed Dec 11, 2019 3:05 pm
by Bernd Welter
Hi there,

here is an overview of some sample computations based on a 40'000 test set...
Calculation times grow quadratically:
Attachment: DimaCalculation.PNG

Size       | HPR dima calculation time [sec] | Size [MB] | equi CONV      | Speedup
(500)²     | 0,897                           | 1,5       |                |
(1'000)²   | 2                               | 6         |                |
(2'500)²   | 9,1                             | 37,5      | 25² = 11 sec   |
(5'000)²   | 20                              | 150       | 50² = 16 sec   | (5'000)² / (50)² = 10'000
(7'500)²   | 41                              | 337,5     |                |
(10'000)²  | 65 = 1:05                       | 600       | 100² = 60 sec  |
(15'000)²  | 142 = 2:22                      | 1'350     | 200² = 180 sec |
(20'000)²  | 240 = 4:00                      | 2'400     |                |
(25'000)²  | 365 = 6:05                      | 3'750     | 300² = 384 sec | (25'000)² / (300)² ≈ 7'000
(30'000)²  | 507 = 8:27                      | 5'400     | 400² = 620 sec |
(35'000)²  | 710 = 11:50                     | 7'350     |                |
(40'000)²  | 945 = 15:45                     | 9'600     | 500² = 900 sec | (40'000)² / (500)² = 6'400
Calculated on a DELL Precision 5530, Intel i7-8850H (2.6 GHz), 32 GB RAM, Windows 10 64-bit

Best regards,
Bernd
  • Update 3.11.2023: I added the column "equi CONV" to the table. It contains a comparison value showing which dima size you can compute in the same time via CONVENTIONAL routing. The baseline is my current laptop (Dell, Intel i7-11850H / 64 GB), the GER PLZ5 codes, the car profile and xServer v2.30.
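
For quick back-of-the-envelope planning, the table can be turned into a small estimator. The sketch below (Python) is only my reading of the numbers above: the roughly 6 MB per million relations and the speedup ratio are taken from the table, the function names are made up, and this is not an official xServer formula.

Code:
# Rough estimates derived from the table above (illustrative only):
# a dima holds n² relations and needs roughly 6 MB per million relations,
# e.g. (40'000)² = 1.6e9 relations -> 9'600 MB.
MB_PER_MILLION_RELATIONS = 6.0  # read off the "Size [MB]" column

def dima_relations(n_locations: int) -> int:
    """Number of relations in a full n x n distance matrix."""
    return n_locations * n_locations

def dima_size_mb(n_locations: int) -> float:
    """Estimated dima size in MB, extrapolated from the table."""
    return dima_relations(n_locations) / 1_000_000 * MB_PER_MILLION_RELATIONS

def hpr_speedup(n_hpr: int, n_conv: int) -> float:
    """The Speedup column: ratio of relations HPR handles vs. conventional
    routing in roughly the same wall-clock time."""
    return (n_hpr / n_conv) ** 2

print(dima_relations(40_000))    # 1'600'000'000 relations
print(dima_size_mb(40_000))      # 9600.0 MB, matches the last table row
print(hpr_speedup(40_000, 500))  # 6400.0, as in the Speedup column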

Re: Dima calculation times xDima2

Posted: Wed Jun 22, 2022 3:31 pm
by Bernd Welter
Quick update: I have a new laptop:
  • Processor: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50 GHz
  • Memory: 64,0 GB (63,7 GB usable)
I computed a very large (57'xxx)² matrix with almost 3.5 billion relations in less than 20 minutes.

Size on disk: 26 GB

Re: Dima calculation times xDima2

Posted: Mon May 22, 2023 12:29 pm
by Bernd Welter
And another mega distance matrix, based on 62'434 pairwise different (x,y) locations and computed with an HPR based on
- car, no further truck attributes
- geographic restriction: allowed countries = ["IT"]
took me 10:48 minutes and occupies 29 GB of memory for its 3'898'004'356 relations...
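
Just as a plausibility check of these figures (my own arithmetic; whether "GB" counts as 10^9 or 2^30 bytes is an assumption, the numbers above do not distinguish):

Code:
# 62'434 pairwise different locations -> full matrix relation count,
# and the resulting per-relation memory footprint.
locations = 62_434
relations = locations ** 2
print(relations)  # 3'898'004'356, as reported above

for total_bytes in (29 * 10**9, 29 * 2**30):
    print(total_bytes / relations)  # ~7.4 to ~8.0 bytes per relation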

Re: Dima calculation times xDima2

Posted: Tue Jul 11, 2023 9:38 am
by Bernd Welter
Wow... I've just been told that one of my partners managed to calculate a (103'000)² matrix within 68 minutes...
That's very fast for more than 10 billion relations!

On the other hand, I asked him why he requires such a huge data set... probably such a "surrounding" problem can be simplified with some pre-aggregation... ;-)
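
Just to put that into perspective (my own quick arithmetic based only on the reported size and runtime):

Code:
# Throughput of the reported (103'000)² run.
relations = 103_000 ** 2    # 10'609'000'000 relations
seconds = 68 * 60           # 4'080 seconds
print(relations / seconds)  # roughly 2.6 million relations per second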