Microsoft VFP practice exam
Message
From: Walter Meester, Hoogkarspel, Netherlands - 22/01/2004 10:48:00
To: 22/01/2004 09:46:23
General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00865956
Message ID: 00869504
Views: 63
Hi Kevin,

>>However, SQL Server is not meant to do this task. Actually, I would not burden the server with too many of these tasks. The server should do what it is built for: handling raw data. I find using SPs for munging data a misuse of a database server. You're wasting valuable resources in terms of CPU and memory for tasks that can be handled more efficiently on the client.

>Um, as far as I understood it, thin-client solutions were the intention when using back-end DBs; the idea was that the server would undertake the heavy data munging because

Most people's definition of a thin client is a web or TS/Citrix-based solution where you logically have a three-tier solution. The data munging and business logic go into the middle tier and should not be stored in the DB.

> • A server that the DB runs on will be at least twice as powerful as an average client PC; not only that, but it would be "built" as a server, to do what a server does best.

In reality the server is the bottleneck. If a server is, let's say, twice as powerful as the average client, it only takes three clients before data munging on the client pays off: three clients pushing their work to the server demand three clients' worth of processing from a machine that only has two clients' worth of power. If, say, 100 clients connect to the server, you're way better off processing as much as you can on the client instead of the server. Otherwise the server gets loaded with too many tasks that should be done on the client, with serious effects on the server, especially when you talk about massive reporting tasks.


> • If heavy processing were to take a few minutes or more, it would be able to run on the server, thus not hogging the client PC and holding up the end user's PC.

But at least then it is only that client PC. If you were doing that on the server, all connected clients would suffer from your reporting task and all would have serious performance problems. On average, VFP is more capable of doing data munging efficiently than SQL Server (for various reasons, like the omission of security checks, concurrency issues on the server, etc.), so it would certainly not surprise me that on a server with more than one or two clients attached, the VFP solution of downloading the data you need and processing it locally takes only one minute as opposed to two minutes on the server.

> • Due to the lack of a data engine in other MS app dev tools, SQL Server came with the capabilities that were lacking in those tools - that indicates to me that SQL Server "was" designed with this in mind.

What exact capabilities are you talking about? You can look at this from several angles, but if those capabilities have a serious effect on performance, then you'd better do these tasks on the client if your programming language allows for it.

My opinion is that every language that positions itself as a language for writing database applications SHOULD HAVE a decent local database engine to process data without deferring to the database server, for several reasons:

- The data munging capabilities of the RDBMS are generally weaker than those of the programming language.
- For data-intensive apps you might cause more network traffic than necessary.
- You avoid stressing the server with things that should be done on the client.

>>There is so much to gain in processing as much as you can on the client and using SQL Server for only the basic tasks: handling the raw data, maintaining security and integrity. This also makes your application more portable to, let's say, Oracle. It is just a pity that not many vendors (though there are a few) recognize this fact.

>I'm struggling to find a reason why you require so much data munging in the front-end; it sounds to me that this is the result of the design of the application - and it could suitably be re-worked in another dev tool, differently, to avoid all the client-side data munging.

I disagree. For example, if I want to report a calendar with the worked hours for each employee for the whole year, plus the hours of sickness, holidays and leave, with different working rosters, part-time work, pregnancy leave, etc., coloured so that the employee has an immediate, clear and full overview of their leave hours in a given year, you will see that the way the report presents this data is very different from the way the data is stored.

This routine, which I wrote several years ago, is very complex and processes data from about a dozen tables. The tables are all normalized so that they are stored in the most efficient way. This complex routine takes only five hundredths of a second to calculate the report for one employee. This would be a nightmare to program in any other language, as a lot of SEEKing and SCAN WHILEs are going on.
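To give an idea of the pattern, here is a minimal sketch; the table, tag and field names (hours, empdate, empid, workdate, hrsworked) are invented for illustration, and the real routine spans about a dozen tables:

* Walk one employee's rows via an index instead of filtering the
* whole table. Assumes the hours table has a tag EMPDATE on
* STR(empid, 6) + DTOS(workdate).
CREATE CURSOR curReport (workdate D, hrsworked N(5,2))
SELECT hours
SET ORDER TO TAG empdate
IF SEEK(STR(lnEmpId, 6))             && jump to this employee's first row
    SCAN WHILE STR(empid, 6) == STR(lnEmpId, 6)
        INSERT INTO curReport VALUES (hours.workdate, hours.hrsworked)
    ENDSCAN
ENDIF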

There are numerous examples to be found where your report structure is totally different from the source tables. In all these cases you'll have to do lots of data munging to get it right. I've got more than average experience in writing complex reports with graphs, crosstabs and complex calculations, enough to know that the idea that you can create just about every report directly from the base tables is a myth. If you want complex reporting, there is no way around data munging.

In about all of my applications I use internal cursors containing metadata. For example, they contain information about queryable fields, which is used by the application's query designer to provide a user-friendly front-end for writing user-defined queries.
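Such a metadata cursor can be as simple as this sketch (the field layout and values here are hypothetical):

* Fields the query designer may offer to the user, with friendly
* captions and a type code to pick the allowed operators.
CREATE CURSOR curQryFields (fldname C(30), caption C(40), fldtype C(1))
INSERT INTO curQryFields VALUES ("lastname", "Last name", "C")
INSERT INTO curQryFields VALUES ("hiredate", "Hire date", "D")
INSERT INTO curQryFields VALUES ("salary",   "Salary",    "N")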

Or they contain information about the object hierarchy on which security is based. SQL WHERE clauses for SQL commands that prohibit access to certain data are calculated on the fly by munging data from the internal tables together with the security access table on the server. The security access table is cached on the client in a cursor, so that for each object (form, record, button, function, etc.) the access rights can be determined in only one millisecond. The security access table is based on various groups and on users individually. Access rights are granted on objects in an object hierarchy in which child objects inherit the user rights of their parent.
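A minimal sketch of that caching idea, assuming the security table is reachable as secaccess (for example through a remote view) and has a hypothetical objectid/rights layout:

* Pull this user's rights down once and index the local copy.
SELECT objectid, rights ;
    FROM secaccess ;
    WHERE userid = lnUserId ;
    INTO CURSOR curSec READWRITE
INDEX ON objectid TAG objectid

* From here on, every check is a single indexed SEEK on the cursor.
FUNCTION GetRights
    LPARAMETERS tnObjectId
    IF SEEK(tnObjectId, "curSec")
        RETURN curSec.rights
    ENDIF
    RETURN 0        && no explicit right recorded: deny by default
ENDFUNC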

Even the whole menu structure, which is tightly bound to the security module, is stored in a built-in table to provide more flexibility than the native way of building menus (for example, menu items can be greyed out or left out depending on the user's access level for the attached function, and a menu item can be used in more than one menu).
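A rough sketch of how such a table can drive the native menu commands; the curMenu layout and the GetRights() helper from the previous sketch are assumptions:

* Build one popup from menu metadata; bars the user has no
* rights to are simply not defined.
DEFINE POPUP popReports
lnBar = 0
SELECT curMenu
SCAN FOR ALLTRIM(menuname) == "reports"
    IF GetRights(curMenu.objectid) > 0
        lnBar = lnBar + 1
        DEFINE BAR lnBar OF popReports PROMPT ALLTRIM(curMenu.prompt)
        lcCmd = "DO " + ALLTRIM(curMenu.funcname)
        ON SELECTION BAR lnBar OF popReports &lcCmd
    ENDIF
ENDSCAN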

Even the error reporting system uses metadata (stored in the exe) to determine the nature of the error that occurred, and whether the application is able to recover from the error or the error is so serious that the application should shut down.
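A sketch of the idea, assuming a hypothetical curErrMeta cursor shipped with the application, indexed on the error number and carrying a recoverable flag:

ON ERROR DO HandleError WITH ERROR(), MESSAGE()

PROCEDURE HandleError
    LPARAMETERS tnError, tcMessage
    * Classify the error from the metadata table.
    IF SEEK(tnError, "curErrMeta")
        IF curErrMeta.recoverable
            * Known and recoverable: log it and continue.
            RETURN
        ENDIF
    ENDIF
    * Unknown or fatal: shut down as cleanly as possible.
    QUIT
ENDPROC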

Not to mention all kinds of fixed lists that are stored in the executable and are used for display in comboboxes or lists.
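For example (names and values invented):

* A fixed list shipped in the exe, exposed straight to a combobox.
CREATE CURSOR curMarital (code C(1), descr C(20))
INSERT INTO curMarital VALUES ("S", "Single")
INSERT INTO curMarital VALUES ("M", "Married")
* On the form: THISFORM.cboMarital.RowSourceType = 2  (alias)
*              THISFORM.cboMarital.RowSource = "curMarital"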

But even more general problems can be solved more efficiently using local cursors: for example, finding out which combinations from a fixed set of numbers add up to a certain total, or finding prime numbers, can be done more efficiently with the local database engine than without, because the database engine is designed for use with large amounts of data. In .NET you'll have to do this with arrays: well, good luck...
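As an illustration of the pair-sum case, a sketch with local cursors; in .NET the same thing means hand-rolled nested loops over arrays:

* Which pairs from the set 1..9 add up to 10?
CREATE CURSOR curNums (num N(3))
FOR lnI = 1 TO 9
    INSERT INTO curNums VALUES (lnI)
ENDFOR
SELECT * FROM curNums INTO CURSOR curNums2   && second copy for the join
SELECT a.num AS n1, b.num AS n2 ;
    FROM curNums a, curNums2 b ;
    WHERE a.num < b.num AND a.num + b.num = 10 ;
    INTO CURSOR curPairs
* curPairs now holds (1,9), (2,8), (3,7) and (4,6)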

Well, I understand that less experienced VFP programmers who don't use such features or don't see the value in such an approach might not have much of a problem converting to another language. But for me, having a variety of complex, heavily data-driven applications, there is simply no alternative to VFP, because there is no animal that can process metadata that quickly.

Walter,