Which is best for Desktop Apps, VFP or .NET?

To: Walter Meester, Hoogkarspel, Netherlands
Date: 28/01/2004 01:40:00
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00860600
Message ID: 00872741
Views: 106
Walter,

Sorry for the late response to this. I'm on vacation and I promised myself I wouldn't spend more than an hour a day near the computer <g>...

> With all honesty, I don't see this. I have no use for flexible type checking at runtime. I don't see how this alters the discussion. It might help if you gave an example of when to use these features and how that compares to the VFP situation. But until now I cannot see the relevance of your argument.

Maybe I expressed myself a little vaguely in that post (which can happen when replying to messages at 2 in the morning <g>). My point here is that a) you get strong typing at design time, including compiler support for type checking, which makes it possible to catch many errors and typos at design time rather than at runtime, and b) you get the ability to use a clean object model to access information about the data you're working with at runtime. You can do the latter in VFP as well, but it's not nearly as complete nor as clean (laFields[x,2] to retrieve a type is not particularly clear, for example).
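To make point b) concrete, here is a minimal C# sketch of the kind of schema access I mean; the table and column names are made up for illustration:

```csharp
using System;
using System.Data;

class SchemaDemo
{
    static void Main()
    {
        // Hypothetical in-memory table; in a real app the schema would
        // come back from a DataAdapter.Fill() against the database.
        DataTable dt = new DataTable("Customers");
        dt.Columns.Add("Id", typeof(int));
        dt.Columns.Add("Name", typeof(string));

        // The schema is a real object model: each column exposes its
        // name and type as properties instead of array positions.
        foreach (DataColumn col in dt.Columns)
            Console.WriteLine("{0}: {1}", col.ColumnName, col.DataType.Name);
    }
}
```

Compare that to laFields[x,2]: the object model is self-describing, and both the compiler and IntelliSense know what ColumnName and DataType are.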

ADO.Net also provides a clean, abstracted data model that lets you pass data around, which is not easy to do with cursors, period. You can't take a business object that uses a cursor as its underlying data store and pass it over COM or a Web Service to another tier of an application. There are ways around this (as I have built into my own WWWC framework, for that matter), but it takes a lot more work, or a framework, to do it. The main point I'm trying to make is that ADO.Net is a set of consistent, related classes and an object model that gives you access to the data functionality.
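To illustrate why a DataSet can cross tier boundaries where a cursor cannot, here is a rough sketch (the table layout and file name are hypothetical): a DataSet serializes itself to XML and rehydrates on the other side, which is what happens under the covers when one is returned from a Web Service:

```csharp
using System.Data;

class DataSetXmlDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("Orders");
        DataTable dt = ds.Tables.Add("OrderItems");
        dt.Columns.Add("Sku", typeof(string));
        dt.Columns.Add("Qty", typeof(int));
        dt.Rows.Add(new object[] { "WIDGET-1", 3 });

        // The XML (with its schema) is what actually travels across
        // the COM, remoting or Web Service boundary.
        ds.WriteXml("orders.xml", XmlWriteMode.WriteSchema);

        DataSet received = new DataSet();
        received.ReadXml("orders.xml");   // rehydrated on the other tier
    }
}
```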

This is not to say that you can't do these things in VFP; rather, it's to say that you can do just about everything you do in VFP with ADO.Net, but you will do it differently. I think a lot of people who give .Net a bad rap are doing so based on doing things the VFP way, which, if you ask any non-VFP developer, is not a common practice... Just because things we do in VFP don't work the same way in .Net doesn't mean they can't be done, even if they sometimes require more code. It's just done differently.

> for anyone. However, even when you're considering switching to .NET, you've got to realize that you'll leave the luxury of a local database engine behind and that handling data is substantially different in many ways. You've got to decide for yourself whether this is something you're willing to give up.

Well, I agree on this and I do miss a local data engine. But that’s part of the choice you have to make as a developer comparing tools.

> 1. A database server should, in its essence, not be misused for such tasks. Data munging can be very resource-consuming and therefore should not take place on any database server; it greatly increases the risk of performance problems. I know a few DBAs who have to deal with those situations just about every day, so this certainly is not a non-issue. A database server is meant to serve your data in its raw form (just as a file server serves your raw files). Data munging is far better left to the client, *IF* the client is capable of handling it.

I don't agree with this statement at all. Name one other tool outside of xBase that ships with a local data engine. Read any relational database text and you'll find no mention of offloading data for 'local processing'. That's a silly notion; that's what the database is there for. Load balancing and data performance are real issues, but people who know what they're doing with the database server should be able to deal with them.

Even discounting that, here again you assume that you can't filter or traverse the data as you can in VFP. It's true that there are fewer options for looking at the data, but OTOH you're dealing with a simple structure that can be traversed extremely quickly, so 'filtering' data can be done simply by traversing the list and pulling out what you need into a custom view. This is no less efficient than filtering or sorting on the fly in VFP. And if you use the built-in high-level structures in .Net, such as DataViews, it doesn't even take more code than it does in VFP.
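As a quick sketch of what I mean (the column names are made up), a DataView filters and sorts the in-memory table without re-querying the server:

```csharp
using System;
using System.Data;

class DataViewFilterDemo
{
    static void Main()
    {
        DataTable dt = new DataTable("Customers");
        dt.Columns.Add("Name", typeof(string));
        dt.Columns.Add("Country", typeof(string));
        dt.Rows.Add(new object[] { "Alfreds", "DE" });
        dt.Rows.Add(new object[] { "Berglund", "SE" });

        // Roughly the equivalent of SET FILTER or SELECT ... WHERE
        // against a cursor, applied to the in-memory data.
        DataView dv = new DataView(dt);
        dv.RowFilter = "Country = 'SE'";
        dv.Sort = "Name ASC";

        foreach (DataRowView row in dv)
            Console.WriteLine(row["Name"]);
    }
}
```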

The final point I want to make about ADO.Net and an object-based data model is that you can extend it natively. You can subclass any of the .Net classes and override functionality or add new behavior, not only in ADO.Net but in almost everything in the framework. This is extremely powerful and allows for enormous flexibility, even if it comes with some complexity.
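As a small, hypothetical sketch of that extensibility (the class and its LoadedAt property are made up), you can inherit from DataTable directly and add what you need:

```csharp
using System;
using System.Data;

// Hypothetical subclass: a DataTable that remembers when it was created.
// Everything DataTable already does is inherited; the new member rides along.
public class TimestampedDataTable : DataTable
{
    private DateTime loadedAt = DateTime.Now;

    public DateTime LoadedAt
    {
        get { return this.loadedAt; }
    }
}
```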

> I don't care at what level the rubber meets the road. The DML has to fit somewhere. If DML commands are written, they should at least not avoid SQL because it is not possible. There are numerous problems in handling data in an OO way.

DML inside of a language is, outside of xBase, a myth. Just about every other programming environment uses an object approach to data access, where you pass SQL strings and get the result back in some sort of object. DML is a data engine thing, and only in xBase do the data engine and the 4GL language blend together...
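As a minimal sketch of that pattern (the connection string and query are hypothetical), this is what 'pass a SQL string, get an object back' looks like with the SQL Server provider:

```csharp
using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        // Hypothetical connection string and query
        SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;integrated security=true");
        SqlCommand cmd = new SqlCommand(
            "SELECT CompanyName FROM Customers WHERE Country = @Country", conn);
        cmd.Parameters.Add(new SqlParameter("@Country", "Germany"));

        conn.Open();
        SqlDataReader reader = cmd.ExecuteReader();
        while (reader.Read())                      // the result comes back as an object
            Console.WriteLine(reader["CompanyName"]);
        reader.Close();
        conn.Close();
    }
}
```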

> 1. It is a 3GL solution to a 4GL problem. In a 4GL you specify what you want rather than how it is implemented.
> 2. Iterating through collections is far less readable than a few solid SQL or xBase commands. Though I admit that setting up a good object model with useful method naming eases the pain a bit.
> 3. As a result of 2, writing bug-free code is much harder to do.
> 4. As a result of 2 and 3, readability suffers as well.

First, I'm not sure what you're billing as 3GL vs. 4GL. I think you're comparing views vs. a data object? Even if you do, the object approach gives you the same functionality; in fact, it has always been that way. Even ADO was able to do this.

The rest of the points are entirely subjective. Bug-free code, of all things, has no place in this discussion, as it's a completely separate issue. If anything, ADO.Net makes it much less of an issue through full IntelliSense support and compile-time validation of the code you write, including, if you choose, of your data (typed DataSets or typed DataRows, for example).
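Real typed DataSets are generated for you by the xsd.exe tool, but as a hand-written sketch (the CustomerRow wrapper below is made up), this is roughly what the generated code buys you: a misspelled field name becomes a compile error instead of a runtime one:

```csharp
using System.Data;

// Hand-written stand-in for a generated typed DataRow. With this wrapper,
// row.Name compiles while row.Nmae does not, whereas row["Nmae"] against
// a plain DataRow would only fail at runtime.
public class CustomerRow
{
    private DataRow row;

    public CustomerRow(DataRow row) { this.row = row; }

    public int Id      { get { return (int)this.row["Id"]; } }
    public string Name { get { return (string)this.row["Name"]; } }
}
```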

> I know looping and filtering is possible. But how about SETting an index ORDER and drilling down the index with a SCAN FOR ... WHILE? These are exactly the kind of DML commands I so highly appreciate, because they are the fine-grained building blocks that are key to success in data munging. And yes, I'm talking about cursors and views, not DBFs specifically.

A DataView provides this functionality. And lest you think this takes a lot of code: it does not. Creating a DataView requires two lines of code, after which you get a view that you can traverse or data-bind.
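As a sketch of the SET ORDER plus SEEK/SCAN WHILE equivalent (the table and key values are made up), DataView.Sort builds the index and FindRows drills into it:

```csharp
using System;
using System.Data;

class DataViewSeekDemo
{
    static void Main()
    {
        DataTable dt = new DataTable("Orders");
        dt.Columns.Add("CustId", typeof(string));
        dt.Columns.Add("Total", typeof(decimal));
        dt.Rows.Add(new object[] { "ALFKI", 100m });
        dt.Rows.Add(new object[] { "ALFKI", 250m });
        dt.Rows.Add(new object[] { "BERGS", 75m });

        // Roughly SET ORDER TO CustId ...
        DataView dv = new DataView(dt);
        dv.Sort = "CustId";

        // ... then SEEK 'ALFKI' / SCAN WHILE CustId = 'ALFKI':
        // FindRows uses the view's sort index to return just the matches.
        foreach (DataRowView row in dv.FindRows("ALFKI"))
            Console.WriteLine(row["Total"]);
    }
}
```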

> I remember you saying different things about ADO.NET, especially when it comes to size (which we did not discuss here). However, it comes down to what kind of data access you do. A simple SQL statement and displaying the results is not going to be the problem. Advanced report generation and heavy data munging are another matter. For example, I would not try to handle a 'shortest path' problem in ADO.NET, as the size of the data along with the record-oriented approach of the problem is not something ADO.NET can handle efficiently.

ADO.Net has problems with large DataSets, yes. There are ways around this, and very good ones in fact. For example, I just built a report that covers approximately 500,000 data items. But rather than pulling all the data up front into a huge resultset (or related resultsets), I use the report engine (third-party ActiveReports, but something similar could be done with Crystal too) to pull the data one item at a time through a business object. The business object provides all the calculations, and the report simply runs through a simple list of key values that cause the business object to be loaded. End result: a report that runs very fast and has next to no memory consumption (because only a single complex business object is in memory at any point). There are no super-complex queries to pull the data, just a relatively simple query to retrieve the base keys required for the report.
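A sketch of that key-driven pattern; the Invoice business object, its Load method, and the render call are all hypothetical stand-ins for what the actual report does:

```csharp
using System.Collections;

// Hypothetical stand-in for the business object described above.
class Invoice
{
    public decimal Total;
    public void Load(int key) { /* query just this invoice's rows */ }
}

class ReportDriverSketch
{
    static void Run(IList invoiceKeys)
    {
        // Only the flat key list is in memory up front; each business
        // object is loaded, handed to the report, and discarded in turn.
        foreach (int key in invoiceKeys)
        {
            Invoice inv = new Invoice();
            inv.Load(key);
            RenderReportItem(inv);   // hypothetical report engine callback
        }
    }

    static void RenderReportItem(Invoice inv)
    {
        /* hand inv.Total etc. to the report engine */
    }
}
```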

As I said before, there are different ways to accomplish certain tasks, and in this case I was able to overcome the slowness and ended up with a design that was actually much cleaner than trying to shoehorn the data being presented into a pure data representation.


Ultimately, developers have to make their own decisions and not just look at one or two bullet points that support their point of view to justify their choice. I know I use .Net because a) I like it, b) it works for the apps that I'm building, both internally and for customers, and c) I believe it is the future for development with Microsoft development tools. The road to get there hasn't been easy, and you can bet I still get frustrated at the things that don't work right in .Net. But heck, what dev tool works 'right' in the first place? VFP surely is one that has many, many quirks and odd behaviors. The trick is mastering the tweaks and peculiarities and taking advantage of them wherever possible.



+++ Rick ---

West Wind Technologies
Maui, Hawaii

west-wind.com/
West Wind Message Board
Rick's Web Log
Markdown Monster
---
Making waves on the Web

Where do you want to surf today?