Benchmark and related
Message
From: 17/04/2020 16:30:22
To: Al Doman, 17/04/2020 16:24:39
M3 Enterprises Inc.
North Vancouver, British Columbia, Canada
General information
Forum: Data virtualization
Category: Performance, Miscellaneous
Thread ID: 01674069
Message ID: 01674088
Views: 51
>Any IT performance is ultimately limited by bottleneck(s). What bottlenecks have you identified in the 2 scenarios?
>
>Are data from various ISPs being (1) pushed to you as they're created, or (2) are you pulling them remotely from those ISPs?
>
>(1) As long as the connections from the ISPs to your site can keep up with the data generation, putting everything at your site would make no difference
>
>(2) If you can process data at full 1Gbps line speed, but the ISP connections are slower and that's limiting your processing performance, then it would be better to have the data at your site. However, you have to consider how those data would get to you, you might actually be in (1) on an ongoing basis
>
>As a general recommendation, try to keep all your processing on the same physical server, rather than spread it out amongst several machines on a 1 Gbps LAN. A gigabit connection can only transfer a theoretical maximum of 125 MB/s; in real-world performance you'd be lucky to see 100 MB/s. Within a single server, a single SATA SSD can transfer over 500 MB/s, and NVMe drives several times that speed. You can see speeds like that between virtual machines on the same physical server, connected by a virtual switch or SDN. As soon as you have to go outside the physical server, your speeds will drop to gigabit.
>
>Gigabit is very slow by modern data center standards; 10gig is mainstream, and faster speeds are increasingly common. Multiple links can be aggregated to give higher throughput between physical host computers. This is increasingly important as server computers run more and more VMs and therefore need faster network links.
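The throughput figures quoted above can be sanity-checked with quick arithmetic. A minimal sketch; the dataset size and link speeds below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: time to move a 100 GB dataset over different paths.
# Link speeds are nominal; real-world throughput is lower due to protocol
# overhead and contention, as noted for gigabit (~100 MB/s realistic).

paths_mb_per_s = {
    "1 Gbps LAN (theoretical)": 125,
    "1 Gbps LAN (realistic)": 100,
    "SATA SSD, same host": 500,
    "NVMe SSD, same host": 3000,
    "10 Gbps LAN (theoretical)": 1250,
}

dataset_gb = 100  # hypothetical dataset size
for path, mb_s in paths_mb_per_s.items():
    seconds = dataset_gb * 1000 / mb_s  # GB -> MB, divided by MB/s
    print(f"{path:26s} {seconds / 60:6.1f} min")
```

At 100 MB/s the 100 GB move takes under 17 minutes, but the same data stays on an NVMe drive in well under a minute, which is the quote's point about keeping processing on one physical server.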

We are still at the analysis stage. It is not clear to me how data virtualization can keep up performance-wise. Either the product caches a tremendous amount of data in memory, which requires a lot of RAM, or it caches everything on disk, which requires a lot of disk space. Data virtualization vendors claim they require neither, so I assume they mostly fetch the data over the network in real time. If so, the connection between Montreal and Toronto runs at whatever speed that link provides, which will of course be lower than a local Ethernet speed of, say, 10 Gbps.
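To illustrate why a real-time remote fetch worries me, here is a rough sketch comparing a hypothetical inter-city fetch with a local 10 Gbps read. The query size and link speeds are made-up assumptions for illustration only:

```python
# Hypothetical: a query that must read 2 GB of source data.
# Compare pulling it over an assumed Montreal-Toronto WAN link
# versus a local 10 Gbps Ethernet segment (nominal rates, no overhead).

query_bytes = 2 * 10**9   # assumed data volume touched by the query
wan_mbit = 500            # assumed inter-city link speed, Mbit/s
lan_mbit = 10_000         # local 10 Gbps Ethernet

def transfer_seconds(size_bytes: int, mbit_per_s: float) -> float:
    """Seconds to move size_bytes at a nominal line rate."""
    return size_bytes * 8 / (mbit_per_s * 1_000_000)

print(f"WAN fetch: {transfer_seconds(query_bytes, wan_mbit):5.1f} s")
print(f"LAN fetch: {transfer_seconds(query_bytes, lan_mbit):5.1f} s")
```

Under these assumptions the same query waits 32 s on the WAN versus 1.6 s locally, a 20x gap that caching in RAM or on disk would normally hide.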
Michel Fournier
Level Extreme Inc.
Designer, architect, owner of the Level Extreme Platform
Subscribe to the site at https://www.levelextreme.com/Home/DataEntry?Activator=55&NoStore=303
Subscription benefits https://www.levelextreme.com/Home/ViewPage?Activator=7&ID=52