I have a table that basically contains "finishing" options: various values that dictate how a job should be finished. The table is completely normalized. However, for any given job only a few finishing options are selected, on average between 4 and 6 columns, while the finishing table has about 70 columns. So for each finishing record, the majority of the columns are empty.
I have looked at whether I could split this into a number of smaller tables containing more logically grouped finishing options, but each finishing option is generally individual and one-off. I could end up having to manage many small tables with no logical grouping of columns, so the current single-table design seems to be the obvious choice.
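To make the shape concrete, the table looks something like this (a sketch only; the column names here are invented, not the real ones):

```sql
-- Hypothetical sketch of the current single-table design.
-- Most option columns are nullable and are NULL for any given job.
CREATE TABLE Finishing (
    JobID       int          NOT NULL PRIMARY KEY,
    Laminate    varchar(20)  NULL,
    FoldType    varchar(20)  NULL,
    DrillHoles  int          NULL,
    Shrinkwrap  bit          NULL
    -- ... roughly 65 further one-off option columns, mostly NULL
)
```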
Using a remote view and ODBC, the table's metadata has to be passed between the client and server even though, for any given query, very little actual non-default, non-null data is being moved. Generally, with the combination of remote views, ODBC, SQL Server 2000 and VFP, can a table of 70 columns be handled easily? I have never had to create a table with this many columns before, so I would be somewhat wary even using vanilla DBFs. With SQL Server, however, I have no idea whether this is a large table or whether SQL Server will take it in its stride.
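For reference, the remote view is defined along these lines (a sketch; the connection name, view name and parameter are placeholders, not the real ones):

```foxpro
* Hypothetical sketch of the VFP remote view against SQL Server 2000.
* FinishConn, v_Finishing and nJobID are invented names.
CREATE SQL VIEW v_Finishing REMOTE CONNECTION FinishConn AS ;
    SELECT * FROM Finishing WHERE JobID = ?nJobID

* Usage: set the parameter, then USE the view to fetch one job's row.
nJobID = 1234
USE v_Finishing
```

So each open of the view describes all 70 columns over ODBC, even though only a handful carry data for the job in question.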
Any opinions on what constitutes a large SQL Server table would be appreciated, and by large I mean the number of columns rather than the number of rows.
Thanks!
-=Gary