Maradona on Messi
Message
From: 18/07/2014 04:11:23
To: Walter Meester, Hoogkarspel, Netherlands, 18/07/2014 02:18:40
General information
Forum: Sports
Category: Players, Miscellaneous
Thread ID: 01603642
Message ID: 01604002
Views: 44
>>Thanks for your replies. The one thing I was still curious about was whether any of the apps from your company had used .NET and typed datasets in an application.
>
>No, and I'm not sure why you think it matters given my reasoning behind it:
>
>I'm not driving a truck, but a normal car, because it is the most practical vehicle for me. You're trying to say: how would you know that a truck is impractical since you have never driven one? I'm not the type of guy who will drive dozens of different vehicles to see what suits me best. Just like any normal person I sit down, look at the pros and cons of the vehicles and make a choice. Only then will I take it for a test drive to see what is most comfortable for me. Why would it be any different in selecting an ORM solution?
>
>
>We went away from having the overhead of having to define data structures on both the server and client end in favour of dynamically determining and copying the data structure from the source. I do not see any compelling reason (on the contrary) to step away from that.
>
>In VFP there *IS* no compelling reason to do so. In .NET there are advantages in IntelliSense, automatic updates and easier access compared to untyped datasets, but any way you cut it, it's not easier and more straightforward than what we do now with CursorAdapters.
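(For illustration, a minimal C# sketch of the typed-versus-untyped access being contrasted here; the dataset, table and column names are made up:)

using System;
using System.Data;

class TypedVsUntyped
{
    static void Main()
    {
        // Untyped: column names are strings and values need casts, so mistakes
        // only show up at runtime and IntelliSense cannot help.
        var orders = new DataTable("Orders");
        orders.Columns.Add("OrderId", typeof(int));
        orders.Columns.Add("Total", typeof(decimal));
        orders.Rows.Add(1, 99.50m);
        decimal total = (decimal)orders.Rows[0]["Total"];
        Console.WriteLine(total);

        // Typed (as generated by the dataset designer; shown conceptually):
        //   OrdersDataSet ds = new OrdersDataSet();
        //   decimal typedTotal = ds.Orders[0].Total;   // compile-time checked, IntelliSense-friendly
    }
}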
>
>>Truthfully, I never really saw VFP/SQL views in the same light as I did typed datasets.
>
>It's like having a non-normalised database with uncontrolled redundancy. If you make a change, you need to make sure the changes are made in several places (an article by Celko about creating binary trees comes to mind). You either have to control it by having some semi-automatic mechanism (like SPs, triggers) that will take care of that, or you make sure that your database is properly normalised and changes can be made at one single point.
>
>With typed datasets and VFP's SQL views, just like having to deal with redundancy, you need to find a way to deal with the fact that if you make schema changes, the changes have to be cascaded to the definitions on the client end. Dealing with that might be easy or difficult, but no matter how well you try to deal with it (just like we did with the SQL views), errors will occur.
>
>And if for a single installation new columns need to be added (remember we are dealing with a large EMR where clients have their own wishes), the last thing we want is extra compiled code to deal with it. It's better to have a situation where there is only one authoritative point: the database itself.
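(A rough C# sketch of what "the database as the only authoritative point" can look like in practice: the client pulls whatever columns the site's table actually has instead of compiling them in. The connection string and table name here are hypothetical:)

using System.Data;
using System.Data.SqlClient;

class SchemaFromSource
{
    // Hypothetical connection string; each site's database is the single source of truth.
    const string ConnString = "Server=.;Database=Emr;Integrated Security=true";

    static DataTable LoadPatients()
    {
        using (var conn = new SqlConnection(ConnString))
        using (var da = new SqlDataAdapter("SELECT * FROM Patients", conn))
        {
            var table = new DataTable();
            da.Fill(table);   // columns, including site-specific ones, come from the database itself
            return table;
        }
    }
}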
>
>
>>>Yes, I agree - a structure change means pushing out a change. There are design time changes and runtime distribution changes. For market apps with a large distribution base, I can see what you're describing to be a potential problem.
>>
>>For ASP.NET and internal apps, this isn't as much of an issue, except for the possible problem of overhead. One of the biggest knocks I'll acknowledge on typed datasets is the overhead, especially when they are instantiated repeatedly. Some developers have sought out utilities on CodePlex that utilize a proxy class to generate them only once.
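(One way such a "generate once" proxy might look, sketched generically; this is not the actual CodePlex utility, just the idea of caching a prototype and cloning its schema:)

using System.Data;

static class DataSetProxy<T> where T : DataSet, new()
{
    // The expensive schema construction happens exactly once.
    static readonly T Prototype = new T();

    public static T Create()
    {
        // Clone() copies tables, columns and relations but no data.
        return (T)Prototype.Clone();
    }
}

// Usage (OrdersDataSet is a hypothetical designer-generated typed dataset):
//   var ds = DataSetProxy<OrdersDataSet>.Create();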
>
>I agree, it will not be so much of an issue if the application is in house in a controlled environment. It's like my professor at university said: one is like a pussycat and the other is a lion. The one-instance installation is like a pussycat: lots of resources are available to keep the pussycat happy and purring and to do whatever she desires (keeping the users happy with enhancements, support, etc.). The large-install-base app is like a lion: you raise it as best you can, but at some point you need to release it into the wild. You'd better prepare it well, so that it can survive on its own with minor (health)care (aka updates). The app needs to be flexible out of the box, and to a large extent able to deal with scenarios you did not think of beforehand. The users might want to use it in certain unanticipated ways. The last thing you want is to build customised solutions for them. The higher the ratio of common codebase between installations, the better.
>
>>Now, on the design time end, some installations do this (push out changes) pretty painlessly with automated scripts, and others do it poorly. Typed datasets can (and should) be abstracted to a shared DLL that most source control systems can handle. I would think any installation that finds this kind of change management difficult is probably facing other challenges.
>
>>This evening I went back and read some blog entries (some going back as far as 10 years) where developers voiced pros and cons on typed datasets. Truthfully, many of the arguments against them stem from huge abuse of typed datasets. I'm not saying typed datasets are the solution for every application but some of the cons represent worst case scenarios.
>
>>I used them quite heavily from 2004 to 2009 across different clients. The truth is that if they had been problematic across multiple release cycles, I would not have continued using them and generally advocating them. I've always found the argument of "custom collections vs typed datasets" to be a more fruitful discussion than "typed versus not typed".
>
>Again, my point is the less redundancy in (meta)data, the better. You'll never have to deal with the problem of keeping typed datasets in sync with the database. The problem, however, is that in .NET you'd lose some of the properties you value in typed datasets. And that is exactly my message here. It would not have to be that way if the programming tool were built up recognising this problem and dealt with it. There are ways to have good DML properties, and even for IntelliSense there are solutions. You'd however lose compile-time type checking, because just like the Office automation we talked about... the compiler has no way of knowing the exact types, and should not have to, nor care. Hence my statement "for the purpose of keeping the compiler happy".
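(A minimal C# sketch of that trade-off: a dynamic wrapper gives convenient field-style access with no client-side type definitions, but the compiler no longer checks field names or types; the class and column names are illustrative:)

using System;
using System.Data;
using System.Dynamic;

class DynamicRow : DynamicObject
{
    readonly DataRow _row;
    public DynamicRow(DataRow row) { _row = row; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = _row[binder.Name];   // resolved at runtime against the actual schema
        return true;
    }
}

class Demo
{
    static void Main()
    {
        var t = new DataTable();
        t.Columns.Add("LastName", typeof(string));
        t.Rows.Add("Meester");

        dynamic row = new DynamicRow(t.Rows[0]);
        Console.WriteLine(row.LastName);   // works, but a typo here only fails at runtime
    }
}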
>
>I can see people being happy with typed datasets because what I write is of very little value to them, but it does not take away the criticisms I have of using them in development.
>
>>On your ORM question. I'm actually not sure I understand it. I've used typed datasets more for complex reporting than any other aspect of an application, though they also have benefits for binding. A typed dataset, in that sense, is really a very simple form of an ORM, where there's no behavior per se, but rather a disconnected set of relational objects.
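(For what it's worth, a tiny C# illustration of "a disconnected set of relational objects": two in-memory tables with a relation and no connection back to the source; the table names are made up:)

using System;
using System.Data;

class DisconnectedRelations
{
    static void Main()
    {
        var ds = new DataSet();
        var orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderId", typeof(int));
        var lines = ds.Tables.Add("OrderLines");
        lines.Columns.Add("OrderId", typeof(int));
        lines.Columns.Add("Product", typeof(string));

        // The relation lives entirely in memory, independent of the database.
        ds.Relations.Add("Order_Lines", orders.Columns["OrderId"], lines.Columns["OrderId"]);

        orders.Rows.Add(1);
        lines.Rows.Add(1, "Widget");

        foreach (DataRow line in orders.Rows[0].GetChildRows("Order_Lines"))
            Console.WriteLine(line["Product"]);
    }
}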
>
>Your last line says it all to me: "A disconnected set of relational objects", which could (and will) get out of sync with the authoritative source: the database.

Doesn't the problem here lie in the fact that you are allowing datasets (whether typed or untyped) and, in the VFP case, cursors to percolate beyond the data layer? If data is returned from the back end as a set of objects, then changes to the DB structure only need to be dealt with in the data layer. And if the objects in .NET are strictly typed, the compiler virtually guarantees that you won't cause breaking changes outside the data layer.
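(A minimal sketch of that idea, with hypothetical names: the data layer maps rows to a plain typed object, so a structure change is absorbed inside the repository and the compiler flags any breaking change in the callers:)

using System.Collections.Generic;
using System.Data.SqlClient;

class Patient
{
    public int Id { get; set; }
    public string LastName { get; set; }
}

class PatientRepository
{
    readonly string _connString;
    public PatientRepository(string connString) { _connString = connString; }

    public IEnumerable<Patient> GetAll()
    {
        using (var conn = new SqlConnection(_connString))
        using (var cmd = new SqlCommand("SELECT Id, LastName FROM Patients", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Any DB structure change is handled here, not in the callers.
                    yield return new Patient
                    {
                        Id = reader.GetInt32(0),
                        LastName = reader.GetString(1)
                    };
                }
            }
        }
    }
}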