New VFP version after 5815?
Message
From: 27/09/2015 16:34:11
To: 27/09/2015 14:33:43
General information
Forum: Visual FoxPro
Category: Installation and configuration
Environment versions
Visual FoxPro: VFP 9 SP2
OS: Windows Server 2012
Network: Windows 2008 Server
Database: MS SQL Server
Application: Web
Miscellaneous
Thread ID: 01624637
Message ID: 01625133
Views: 95
Likes: 1
For lurkers: Preferred terminology (in English, anyways) is:

Sockets: chip sockets on a physical server motherboard. These accept physical processor chips. A server may have 1, 2, 4 etc. sockets.

Cores: number of physical processor cores per socket

Logical processors: the hardware-level presentation of the physical cores as logical processors to the OS. At this time this is mainly of interest for Intel processors that support Hyper-Threading, which in a nutshell is a way to run 2 simultaneous threads on a single processor core. The actual real-world performance gain is about 30%, so a 4-physical-core Intel chip with HT will have 8 logical processors and overall performance roughly equal to about 5.2 non-HT physical cores.
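
As a quick sanity check of that arithmetic, here's a minimal Python sketch; the 30% gain is the rough rule-of-thumb figure quoted above, not a measured value:

# Back-of-the-envelope throughput estimate for a Hyper-Threaded chip,
# assuming HT buys roughly 30% over the same cores without HT.
physical_cores = 4
ht_gain = 0.30                                    # assumed real-world HT benefit

logical_processors = physical_cores * 2           # HT exposes 2 logical CPUs per core
effective_cores = physical_cores * (1 + ht_gain)  # 4 * 1.3 = 5.2

print(f"{logical_processors} logical processors, "
      f"roughly {effective_cores:.1f} non-HT cores' worth of throughput")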

See attached image for a real-world example of a low-end Xeon processor (4 cores w/HT) in a server.

Virtual processors as allocated by hypervisors are timesliced logical processors. Depending on the hypervisor you may be able to provision more total virtual processors to running VMs than there are logical processors, in which case the hypervisor will schedule/timeslice as required for best overall performance.
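
To make the overcommit idea concrete, a tiny illustration (the host size and per-VM vCPU counts here are invented, not taken from your setup):

# Hypothetical host with 8 logical processors and three VMs that together
# are provisioned with more virtual processors than physically exist.
host_logical_processors = 8
vcpus_per_vm = {"vm_app": 4, "vm_sql": 4, "vm_web": 4}

total_vcpus = sum(vcpus_per_vm.values())
overcommit_ratio = total_vcpus / host_logical_processors   # 12 / 8 = 1.5

print(f"{total_vcpus} vCPUs on {host_logical_processors} logical processors "
      f"-> {overcommit_ratio:.1f}:1 overcommit; the hypervisor timeslices to cover the gap")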

A process (such as an EXE COM server) consists of one or more threads, which may or may not run simultaneously. A logical (or virtual) processor can run only 1 thread at a time. The OS is responsible for scheduling/timeslicing the logical/virtual processors so that all processes and their threads running on that OS instance perform adequately.
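
A minimal Python sketch of that process/thread relationship, just to put names on the pieces (the worker and its sleep are placeholders, not your COM server's logic):

import os
import threading
import time

def worker(task_id):
    # One thread of the process servicing one (simulated) request.
    time.sleep(0.1)
    print(f"task {task_id} handled by {threading.current_thread().name}")

if __name__ == "__main__":
    # os.cpu_count() reports logical processors (cores x HT), not sockets or cores.
    print(f"logical processors visible to this OS instance: {os.cpu_count()}")

    # One process, several threads; the OS decides which logical processor
    # runs each thread, and when, via scheduling/timeslicing.
    threads = [threading.Thread(target=worker, args=(i,), name=f"worker-{i}")
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()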

***

Getting back to the thread topic, your table of results below is interesting. Your load-base function clearly processes the list of COM servers in the same order each time, i.e. it checks 5744 first, 5564 second, etc. For anyone who's interested, it's worth pointing out that this is not a true load-balancing scenario, which can have some surprising implications.

Your load-base, checking for the first available COM server, always checks:
COM1 first, then 2, 3, 4, 5, 6

Load balancing:
*Randomly* picks the starting COM process from the available pool. If, for example, COM4 was randomly chosen to be checked first, the checking order would be 4, 5, 6, 1, 2, 3 (see the sketch below).
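
A minimal sketch of the difference between the two probe orders, in Python; the pool names and functions are just illustrative, not your actual dispatch code:

import random

COM_POOL = ["COM1", "COM2", "COM3", "COM4", "COM5", "COM6"]

def load_base_order(pool):
    # Load-base: probe the pool in the same fixed order on every request,
    # so COM1 is always checked first.
    return list(pool)

def load_balanced_order(pool):
    # Load balancing: pick a random starting point, then wrap around,
    # e.g. starting at COM4 gives 4, 5, 6, 1, 2, 3.
    start = random.randrange(len(pool))
    return pool[start:] + pool[:start]

print(load_base_order(COM_POOL))       # always ['COM1', ..., 'COM6']
print(load_balanced_order(COM_POOL))   # e.g. ['COM4', 'COM5', 'COM6', 'COM1', 'COM2', 'COM3']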

The beauty of your load-base is that you get a good idea of how many total COM servers you need to service the load, at any desired service level agreement. If you test with 6 and decide 3 are enough, then when you actually run in production with only 3 you can be sure the performance will be good enough, as the overhead of running the other 3 COM servers will not be present.

On the other hand, in production the OS and/or hypervisor will notice that COM5 (424) and COM6 (4292) are hardly ever running and may page them out of memory to disk, remove their cached data files, etc. When they do get called on, it takes longer for them to spin back up before requests can be serviced. The more heavily the environment is virtualized/optimized, the more you will see this effect. In such environments you may find the average time to service requests is considerably higher from those occasionally-used COM server instances than from COM1 or COM2.

If instead you randomly select COM servers to service requests, then on average none will be shunted out to "cold storage" and request service time should be very consistent across all COM server instances.

So, overall the best strategy may be to combine the two:

- Start with load-base to figure out how many instances you need
- Then switch to true load balancing to make sure you get consistent performance from those instances
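
If anyone wants to see why the hit counts skew so heavily under load-base but even out under random selection, here is a rough Python simulation under made-up assumptions (about 1 request per second, invented service times, and requests that find every instance busy are simply skipped); it is not a model of your actual workload:

import random
from collections import Counter

SERVERS = 6
REQUESTS = 10_000

def simulate(probe_order):
    busy_until = [0.0] * SERVERS             # time at which each instance becomes free
    hits = Counter()
    now = 0.0
    for _ in range(REQUESTS):
        now += random.expovariate(1.0)       # ~1 request/second on average
        service = random.uniform(0.2, 0.6)   # invented per-request service time
        for i in probe_order():
            if busy_until[i] <= now:         # take the first free instance in probe order
                busy_until[i] = now + service
                hits[i] += 1
                break
    return [hits[i] for i in range(SERVERS)]

def fixed_order():
    return range(SERVERS)                    # load-base: always 1, 2, 3, ...

def random_start_order():
    start = random.randrange(SERVERS)        # load balancing: random start, then wrap
    return [(start + i) % SERVERS for i in range(SERVERS)]

print("load-base     :", simulate(fixed_order))
print("load balancing:", simulate(random_start_order))

Under these made-up assumptions, the fixed probe order piles most hits onto the first one or two instances, much like your table, while the random start spreads them roughly evenly across all six.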

>Hi Hank,
>
>Not sure about the difference between a process, a CPU and a core.
>
>The table below shows that with 6 COM servers (processes, hence the ProcessID in the leftmost column) running load-base (requests go to the first available process), the 6th server (PID 4292) runs only 3 requests out of nearly 10k (.03%). In this (very comfortable) situation no contention ever happens.
>
>If we had 3 processes instead, 102 requests (85+14+3, i.e. 1%) would be in contention for an available process, meaning that these users, in 1% of cases, would wait for an additional average response time of .6 sec. in this case.
>
>It means that, while 2 processors would theoretically be enough to serve our 15 concurrent users, at least 3 are necessary to ensure almost no contention (no wait queue) for processing capacity.
>
>The time frame analyzed (morning) is, for this business, dedicated to order entry with some printing: order confirmation for the patient, work order for the lab; data analysis comes mostly in the evening, when the day's statistics are drawn.
>
>Regarding this discussion, I would add that the number of users per processor is of very little importance for data analysis and reporting (BI): whether users wait for 20 or 25 seconds to get a report has very little impact on the perceived performance.
>
>By contrast, when performing order entry at the front desk, as in this case, with each patient (customer) requiring at least 20 hits on the system (20 user events), response time makes a lot of difference.
>
>Also, the cost of hardware available today is just not an issue. The server used by this application is 3-year-old, lower-end Intel technology;
>using the latest high-end Intel Xeon would simply cut all figures in half, making all this discussion of how many users per CPU just OT.
>
>>Do you mean by CPU a virtual CPU (i.e., one process)? That seems to be what you are describing; if so, 10 users per process is very good. I typically aim at 3 users per process (in a highly transactional system that does a lot of reports that need crunching).
>>
>>
>>>>>
>>>>>PID   COM server       hits
>>>>>5744  ctb.ctbServer    6996
>>>>>5564  ctb.ctbServer    2284
>>>>>4648  ctb.ctbServer     548
>>>>>4312  ctb.ctbServer      85
>>>>>424   ctb.ctbServer      14
>>>>>4292  ctb.ctbServer       3
>>>>>total                  9930
>>>>>
>>>>>
>>>>>around 1 hit per second
>>>>>
>>>>>10 users average
>>>>>250 hits per user per hour
>>>>
>>>>I don't really know what you want to say with these stats. Are you saying the server is under heavy load and thus justifies the 10 user / max CPU usage? I don't know what a "hit" entails, what resources a "hit" uses. But 10 users = max CPU seems very onerous imo.
>>>>
>>>>The server I described also acts as a web server serving remote requests for data directly feeding into end-user spreadsheets. It handles millions of data records in the charting databases, tens of thousands of records in multiple reporting databases, etc. It easily handles 20, 30 or more users. And I don't think it is a very high-end server; just a single CPU, 16 GB RAM. I could easily ramp that server up.
>>>>
>>>>What is consuming the CPU on your service? Is it FIC or the database engine or what?
>>>
>>>Frankly I'm not very keen on discussing whether it seems so or not.
>>>
>>>We have dozens of response-time figures to share for readers who like to analyze real facts and figures; are you one of them?
>>>
>>>Speaking of millions, the database has millions of records (order items).
>>>This application is an ERP that the whole company works with.
Regards. Al

"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov

Neither a despot, nor a doormat, be

Every app wants to be a database app when it grows up