>I have looked at the script for the column size change and see that SSMS creates temp tables and then renames them. In testing, in my - fairly small - database the change worked without a problem. But my customers have bigger tables. So my question is: what, in relative terms, would constitute a "big" table? For example, I estimate that among my customers the biggest table where such a change has to be done is about 500,000 rows. Is this the kind of "big" that could cause a problem? And if so, is there anything I can do in their SQL Server database to deal with it?
Scripts created by SSMS are often needlessly convoluted. SSMS gets a mild warning from SQL Server and then proceeds with this roundabout process of renaming the old table, creating a new one, inserting from the old, and dropping the old. I wrote some code to do the same thing, and extending a field is not a case where SQL Server will actually complain. There are a few other cases where it will (changing the collation of any field, changing the clustered index), in which case I simply re-run the same code with llRebuild=.t. and that's it.
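For the simple case of widening a column, a direct `ALTER TABLE` avoids the temp-table dance entirely. A minimal sketch, with a hypothetical table and column name; widening a `varchar(n)` to a larger `n` is generally a metadata-only change in SQL Server, so even a 500,000-row table should alter near-instantly:

```sql
-- Hypothetical names (dbo.Orders, CustomerNote) for illustration.
-- Widening varchar(50) -> varchar(500) does not rewrite existing rows.
ALTER TABLE dbo.Orders
    ALTER COLUMN CustomerNote varchar(500) NOT NULL;
```

One caveat: `ALTER COLUMN` does not preserve the column's nullability automatically, so restate `NULL` or `NOT NULL` explicitly to match the existing definition.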
SSMS is disappointing in so many ways. SQL Server itself is rather good, but its image is permanently stained by so many people not knowing which of the stupid behaviors are the fault of the server and which are attributable to SSMS. My guess is that about 80% goes to the latter.