XSD & generated dataset - Base class
General information
Forum: ASP.NET
Category: ADO.NET, Miscellaneous
Thread ID: 00963350
Message ID: 00970261
Views: 15
You should definitely check out O/R tools and the object approach to creating entities. However, taking a data-based approach to your .NET applications is not the wrong way to go about things, depending on your skill set and the type of application you are writing. In fact, you can end up with a very successful distributed entity-based application using datasets. If you prefer writing the SQL statements used to define your entities (views on your data), and are comfortable with keeping the complex rules and validations in separate objects, then the entity-based approach using datasets may be the best choice for you. For instance, if you are familiar with building FoxPro applications, this will probably be the easiest approach.

Using a pure O/R mapper means that you're taking a totally OOP approach and will create custom entities (instead of using the dataset as the entity). As Bob points out, there is a compelling reason to do so: you can encapsulate your business logic right into the entity. It may also be easier to exchange your entities with non-.NET systems. However, there are MANY compelling reasons to use datasets directly, especially if you're sending them to WinForms clients:

1. Databinding is much easier with the dataset. This is especially true for WinForms apps, where you have two-way data binding. If you create custom entities you will need to implement a handful of interfaces to get it to work: IBindingList, IList, IListSource, etc. I'm not saying you can't do it manually, but there's some work involved.

2. Filters, sorting, and views are all made very easy with the dataset using the DataView (see the first sketch after this list).

3. Complex relationships between DataTables can be easily created and managed, and referential integrity can be enforced without having to go back to the database. Navigating from a parent row to the child rows in a related table is a snap.

4. AutoIncrement columns automatically assign new keys when records are added.

5. Easy data manipulation and persistence. This is huge. The dataset takes care of remembering current and original values for you. It handles row state very well: if you add a record and then delete it again inside the dataset, no changes are sent to the database. It persists relations, constraints, errors (row and column), calculated columns (expression-based), and has an extended properties collection in which you can put any additional information you need. The dataset also lends itself well to dynamic columns, because on a Merge you can specify that any additional columns it finds be added automatically. This is very powerful. Sure, you can code all of this into your own entity object, but you need to ask yourself whether it's worth it.

6. XML integration/serialization is a snap with the ReadXml/WriteXml methods.

7. Simple data validation is built in: AllowDBNull, MaxLength, referential integrity, uniqueness, data type. It also has an event model so that you can capture row/column changes. And with row and column errors (SetRowError/SetColumnError) you can easily indicate which rows/columns have problems and display them by databinding with the ErrorProvider. Complex validation or validator objects running on the middle tier can simply set the row and/or column errors and send the dataset back to the client for resolution (the second sketch after this list shows this pattern).

8. Strongly typed datasets are very easy to generate from an XSD file. That XSD, which contains all of the schema information for the entity, can be dynamically created by calling your middle-tier interfaces that return the datasets (just temporarily set the DataAdapter's MissingSchemaAction property to AddWithKey while you're generating them). Then you can run the xsd.exe utility to create the strongly typed dataset code.
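
To make a few of these points concrete, here is a minimal sketch of points 2 through 6 in plain ADO.NET (the Customers/Orders schema is invented for illustration):

using System;
using System.Data;

class DataSetFeaturesDemo
{
    static void Main()
    {
        // Build a tiny two-table dataset in memory (invented schema).
        DataSet ds = new DataSet("Sales");

        DataTable customers = ds.Tables.Add("Customers");
        DataColumn custId = customers.Columns.Add("CustomerID", typeof(int));
        custId.AutoIncrement = true;                       // point 4
        customers.Columns.Add("Name", typeof(string));
        customers.PrimaryKey = new DataColumn[] { custId };

        DataTable orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustomerID", typeof(int));
        orders.Columns.Add("Amount", typeof(decimal));

        // Point 3: a relation enforces referential integrity in memory,
        // no round trip to the database required.
        ds.Relations.Add("CustomerOrders",
            customers.Columns["CustomerID"], orders.Columns["CustomerID"]);

        DataRow cust = customers.NewRow();
        cust["Name"] = "Acme";
        customers.Rows.Add(cust);                          // key auto-assigned

        orders.Rows.Add(1, cust["CustomerID"], 99.95m);
        orders.Rows.Add(2, cust["CustomerID"], 12.50m);

        // Navigating parent -> child rows is a snap.
        foreach (DataRow order in cust.GetChildRows("CustomerOrders"))
            Console.WriteLine(order["OrderID"]);

        // Point 2: filter and sort through a DataView.
        DataView bigOrders = new DataView(orders, "Amount > 50", "Amount DESC",
            DataViewRowState.CurrentRows);
        Console.WriteLine(bigOrders.Count);                // prints 1

        // Point 5: row state is tracked; add-then-delete nets out to no change.
        DataRow temp = orders.Rows.Add(3, cust["CustomerID"], 1m);
        temp.Delete();

        // Point 6: one call serializes the data (and optionally the schema).
        ds.WriteXml("sales.xml", XmlWriteMode.WriteSchema);
    }
}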
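
And a short sketch of the point 7 pattern, with the point 8 command line in the trailing comment (the file, type, and namespace names are again invented):

using System;
using System.Data;

static class ValidationDemo
{
    // Middle-tier check: flag the offending column instead of throwing.
    // A WinForms client can surface the flagged rows by databinding the
    // table to a grid together with an ErrorProvider.
    public static void ValidateNames(DataSet ds)
    {
        foreach (DataRow row in ds.Tables["Customers"].Rows)
        {
            if (row["Name"] is DBNull || (string)row["Name"] == "")
                row.SetColumnError("Name", "Customer name is required.");
        }
    }
}

// For point 8, dump the schema once and feed it to xsd.exe:
//   ds.WriteXmlSchema("Customers.xsd");
//   xsd.exe Customers.xsd /dataset /language:CS /namespace:MyApp.Data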

Keep in mind that datasets are NOT business objects in the traditional sense: they are simply the business data. Data is separated from behavior. If you think of data as something that passes through your tiers and is validated, manipulated, twisted, and banged into other objects (or other pieces of data), then choosing to use datasets is the right way to go. However, if you are more comfortable with the data as a real business object which encapsulates its own rules and behavior, then use an O/R mapper and create custom business objects.

Of course, some O/R mappers work well with the entity-based approach too. I suggest you read this excellent post by Frans Bouma: http://weblogs.asp.net/fbouma/archive/2004/10/09/240225.aspx

-B


>A few items here.
>
>First, you may be able to do what you want by editing the XSD. I know there is a way to annotate the XSD so you can name the class and properties differently from the table/column names in the database. Then xsd.exe will read your annotations and generate code based on them.
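>
>A minimal sketch of such an annotation (the tbl_cust/cust_nm names are invented; codegen:typedName and codegen:typedPlural are the annotations xsd.exe understands):
>
><xs:schema id="CustomerDataSet"
>           xmlns:xs="http://www.w3.org/2001/XMLSchema"
>           xmlns:codegen="urn:schemas-microsoft-com:xml-msprop">
>  <!-- The table is tbl_cust in the database, but the generated
>       typed classes will use the friendlier Customer name. -->
>  <xs:element name="tbl_cust" codegen:typedName="Customer"
>              codegen:typedPlural="Customers">
>    <xs:complexType>
>      <xs:sequence>
>        <!-- The cust_nm column becomes a property called Name. -->
>        <xs:element name="cust_nm" codegen:typedName="Name" type="xs:string" />
>      </xs:sequence>
>    </xs:complexType>
>  </xs:element>
></xs:schema>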
>
>Another approach people take with datasets is to separate the rules from the dataset. Create a business object class which has the dataset as a property. This allows you to have a base business object with abstract CRUD methods which your BOs derive from. Then you can regen your datasets without losing all your business rules.
>
>Most of the business object's methods will accept or return the dataset. The dataset can be passed to the UI layer or the data access layer as needed.
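>
>A minimal sketch of that shape (the names and method signatures are just illustrative):
>
>using System.Data;
>
>public abstract class BusinessObject {
>    // The dataset is carried as a property; the rules live in the methods.
>    private DataSet data;
>    public DataSet Data {
>        get { return data; }
>        set { data = value; }
>    }
>
>    // Abstract CRUD methods that concrete business objects implement
>    // against the data access layer.
>    public abstract void Load(int id);
>    public abstract void Save();
>}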
>
>See http://www.mssql.org/Uploaded_Files/BizObj.pdf for a more in-depth treatment of this topic.
>
>But you may want to ask if you should really use the dataset. As you see, it is hard to encapsulate the rules and data using a dataset. You can generate a bizobj that derives from a base bizobj class. This generated class will make each property overridable. Then you derive from that class, which is where you put your rules. Something like:
>
>public class BusinessBase {
>    // This class will contain CRUD functions and implementation.
>}
>
>public abstract class EntitynameGen : BusinessBase {
>    // This class will be generated with overridable properties.
>    // It is marked abstract because it should never be instantiated directly.
>}
>
>public class Entityname : EntitynameGen {
>    // This class is where you put your business rules.
>}
>
>So, with the above, you can regen the Gen class whenever you change your schema, such as adding a column or whatever. Of course, if you remove a column you may still have to manually remove any code looking at that property. When you compile, you will get error messages to help you find those places.
>
>Finally, as Dave said, the ultimate is to use an Object/Relational mapping framework. This lets you map your entity/business objects to the database, which has a lot of advantages. The first is that schema changes don't mean you have to change your objects, just the mapping files. The other BIG advantage is that you are now creating an OBJECT-centric application rather than a data-centric one. Your object model usually won't map one-to-one to your data model, but using datasets makes you wedge it that way, so your object model is never optimal.
>
>O/R mapping tools also eliminate the need to write SQL; you write object queries instead. A lot of folks don't like this because it eliminates stored procedures and requires the system to have access to the tables, but I've never considered this a problem.
>
>A lot of proponents of O/R say that the programmers should create an optimal object model and the DBAs should create an optimal data model; then you just create the mapping files. NHibernate is a .NET implementation of Hibernate from the Java world, which is very widely used.
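>
>For instance, an NHibernate object query looks something like this (the session setup is omitted, and the Customer class and its City property are invented for illustration):
>
>using System.Collections;
>
>// HQL queries the object model, not the tables; NHibernate generates the SQL.
>IList customers = session
>    .CreateQuery("from Customer c where c.City = :city")
>    .SetString("city", "Seattle")
>    .List();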
>
>I hope this info helps. I didn't mean to open the whole architecture can of worms here, but it is important up front because you will live with your app's architecture for a long time to come.
>
>BOb
Beth Massi
Program Manager
Visual Studio Community
Microsoft Corporation
http://blogs.msdn.com/bethmassi
http://mdsn.com/vbasic