Varbinary(max)
Message
General information
Forum: Microsoft SQL Server
Category: Other
Title: Varbinary(max)
Environment versions
SQL Server: SQL Server 2005
Miscellaneous
Thread ID: 01284653
Message ID: 01284655
Views: 21
>I have a field that was defined as varbinary(2400). Then there were some changes to the project, so now I need the field to be able to hold up to 7,700 bytes. If I change the field to varbinary(7700), I will violate the maximum row size (8,060 bytes), because the other fields in the table add up to a lot more than 360 bytes.
>
>So I guess I have to change the field type to varbinary(max). Before SQL 2005 I tried not to use the IMAGE or TEXT types unless I really had to, and that is kind of how I feel about varbinary(max).
>
>Is there anything I should watch out for when using varbinary(max)?
>Does varbinary(max) reduce performance a lot?
>
>Any other thoughts on the issue are also welcome.
>
>Thanks,
>Einar

Did you read Row-Overflow in BOL?
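
For the change itself, the column can be widened in place and the existing data is preserved. A minimal sketch, assuming a hypothetical table dbo.MyTable with a column BinData (substitute your real names):

    -- Widen the column in place; existing values are kept.
    -- Restate NULL/NOT NULL to match the current definition, otherwise
    -- the server applies its ANSI null default.
    ALTER TABLE dbo.MyTable
        ALTER COLUMN BinData varbinary(max) NULL;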

Row-Overflow Considerations
When you combine varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns that exceed 8,060 bytes per row, consider the following:

Surpassing the 8,060-byte row-size limit might affect performance, because the SQL Server 2005 Database Engine still maintains a limit of 8 KB per page. When a combination of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds this limit, the Database Engine moves the widest column to another page in the ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the original page. Moving large records to another page occurs dynamically as records are lengthened by update operations; update operations that shorten records may cause records to be moved back to the original page in the IN_ROW_DATA allocation unit. Also, querying and performing other select operations, such as sorts or joins, on large records that contain row-overflow data slows processing, because these records are processed synchronously instead of asynchronously.
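
You can watch this happen with sys.allocation_units. A small demo, with invented table and column names:

    -- Two varchar(5000) columns: each is legal on its own, but together
    -- a single row can exceed the 8,060-byte in-row limit.
    CREATE TABLE dbo.OverflowDemo
    (
        Id   int IDENTITY(1,1) PRIMARY KEY,
        ColA varchar(5000),
        ColB varchar(5000)
    );

    -- Fits in-row: well under 8,060 bytes.
    INSERT INTO dbo.OverflowDemo (ColA, ColB)
    VALUES (REPLICATE('a', 100), REPLICATE('b', 100));

    -- Exceeds 8,060 bytes: the widest column is pushed to the
    -- ROW_OVERFLOW_DATA allocation unit, leaving a 24-byte pointer.
    INSERT INTO dbo.OverflowDemo (ColA, ColB)
    VALUES (REPLICATE('a', 5000), REPLICATE('b', 5000));

    -- Expect used_pages > 0 for ROW_OVERFLOW_DATA after the second insert.
    SELECT au.type_desc, au.used_pages
    FROM sys.allocation_units AS au
    JOIN sys.partitions AS p ON au.container_id = p.hobt_id
    WHERE p.object_id = OBJECT_ID('dbo.OverflowDemo');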

Therefore, when you design a table with multiple varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns, consider the percentage of rows that are likely to flow over and the frequency with which this overflow data is likely to be queried. If there are likely to be frequent queries on many rows of row-overflow data, consider normalizing the table so that some columns are moved to another table. This can then be queried in an asynchronous JOIN operation.
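
Applied to this case, that normalization might look like the sketch below (names are made up): keep the narrow, frequently read columns in the main table and move the wide data to a 1:1 companion table that is joined only when needed.

    -- Hot, narrow columns stay in the main table.
    CREATE TABLE dbo.Orders
    (
        OrderId int PRIMARY KEY,
        Status  char(1),
        Amount  money
    );

    -- Wide, infrequently read columns move to a 1:1 companion table.
    CREATE TABLE dbo.OrderBlobs
    (
        OrderId int PRIMARY KEY
            REFERENCES dbo.Orders (OrderId),
        Payload varbinary(max)
    );

    -- JOIN only when the wide data is actually needed.
    SELECT o.OrderId, o.Status, b.Payload
    FROM dbo.Orders AS o
    JOIN dbo.OrderBlobs AS b ON b.OrderId = o.OrderId
    WHERE o.OrderId = 42;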


The length of individual columns must still fall within the limit of 8,000 bytes for varchar, nvarchar, varbinary, sql_variant, and CLR user-defined type columns. Only their combined lengths can exceed the 8,060-byte row limit of a table.
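
So, with illustrative names, varchar(8001) is rejected outright, while two varchar(8000) columns in the same table are fine, because only their combined length crosses the row limit:

    -- Fails: 8,001 exceeds the 8,000-byte cap for a non-max column.
    -- CREATE TABLE dbo.Bad (C1 varchar(8001));

    -- Succeeds: each column is within 8,000 bytes; a single row may still
    -- exceed 8,060 bytes, which row-overflow storage then handles.
    CREATE TABLE dbo.Wide
    (
        Id int PRIMARY KEY,
        C1 varchar(8000),
        C2 varchar(8000)
    );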


The sum of other data type columns, including char and nchar data, must fall within the 8,060-byte row limit. Large object data is also exempt from the 8,060-byte row limit.
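
Fixed-length columns get no such relief: a definition whose char columns alone exceed 8,060 bytes fails at CREATE time, whereas a varbinary(max) column does not count against the limit (again, invented names):

    -- Fails: the fixed-length columns alone require more than 8,060 bytes.
    -- CREATE TABLE dbo.TooWide (C1 char(5000), C2 char(4000));

    -- Succeeds: large-object data is exempt from the 8,060-byte row limit.
    CREATE TABLE dbo.FitsFine
    (
        C1  char(5000),
        Doc varbinary(max)
    );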


The index key of a clustered index cannot contain varchar columns that have existing data in the ROW_OVERFLOW_DATA allocation unit. If a clustered index is created on a varchar column and the existing data is in the IN_ROW_DATA allocation unit, subsequent insert or update actions on the column that would push the data off-row will fail. For more information about allocation units, see Table and Index Organization.
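
A sketch of that failure mode, under the assumption that the fixed-length column below can never move off-row, so the clustered key would be the column pushed out (hypothetical names):

    -- K is the clustered key; FixedPad is fixed-length and cannot move
    -- off-row, so K is the only candidate for row-overflow storage.
    CREATE TABLE dbo.KeyDemo
    (
        K        varchar(800) NOT NULL,
        FixedPad char(7500)   NOT NULL DEFAULT ('')
    );
    CREATE CLUSTERED INDEX CIX_KeyDemo ON dbo.KeyDemo (K);

    -- Fits: roughly 7,600 bytes, under the in-row limit.
    INSERT INTO dbo.KeyDemo (K) VALUES (REPLICATE('k', 100));

    -- Should fail per the rule above: the row would exceed 8,060 bytes
    -- and the only movable column is the clustered index key.
    -- INSERT INTO dbo.KeyDemo (K) VALUES (REPLICATE('k', 800));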


You can include columns that contain row-overflow data as key or nonkey columns of a nonclustered index.
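
By contrast, a column that can overflow is legal in a nonclustered index, for example as an included (nonkey) column on the dbo.Wide table from the sketch above:

    -- C1 as the key (individual key values are still capped at 900 bytes);
    -- C2 as a nonkey INCLUDE column, which may hold row-overflow data.
    CREATE NONCLUSTERED INDEX IX_Wide_C1
        ON dbo.Wide (C1)
        INCLUDE (C2);
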
If it's not broken, fix it until it is.


My Blog