>Yes, it turns out the solution provider isn't willing to support replication over low bandwidth, so it's going to have to be some sort of remote access setup.
Replication does tend to require fairly substantial bandwidth, so I can understand their point of view. Remote access is probably the better fit here, and that matches what I've usually seen recommended in discussions on this site for situations like yours.
>Embedding documents in the database doesn't happen every day. They might go for a while not doing any, and then have to put in one or two dozen.
Since this doesn't happen on a regular schedule, you'd want to estimate the payload over a long measurement window, which it sounds like you have. I would also double everything in the capacity estimate to cover the potential scenarios, just to be sure your expectations hold. Irregular, bursty workloads like these random document uploads are simply harder to predict, so build in a margin.
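For example, a rough back-of-the-envelope estimate of the monthly transfer (the document counts, sizes, and burst frequency below are assumptions for illustration; plug in your own measured numbers):

```python
# Rough capacity estimate for an irregular, bursty document workload.
# All figures below are assumptions for illustration; substitute your own.

avg_doc_size_mb = 1.0        # typical embedded document size
worst_batch = 24             # "one or two dozen" documents in a burst
bursts_per_month = 4         # assumed frequency of such bursts

monthly_mb = avg_doc_size_mb * worst_batch * bursts_per_month
padded_mb = monthly_mb * 2   # double everything as a safety margin

print(f"Estimated monthly payload: {monthly_mb:.0f} MB")
print(f"With 2x safety margin:     {padded_mb:.0f} MB")
```

The doubling is crude, but with bursty workloads a generous margin is usually cheaper than an undersized link.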
>Do you have any idea of the efficiency of replication in this sort of scenario? If a 1MB document is embedded, how large of a transaction log entry does that create - I wonder if there's significant bloat.
The transaction log can grow very fast from a variety of operations the database supports, such as deleting a large batch of records or adding a new column. It may not be a factor if you have daily maintenance that backs up the log and reclaims some of that space. However, if it keeps growing, you can end up with a very large log, up to whatever size limit is configured in SQL Server, and embedding documents into a table makes this worse: in the FULL recovery model an inserted document is fully logged, so each 1 MB document adds at least roughly 1 MB to the log, plus per-record overhead.
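As a back-of-the-envelope illustration of that bloat (the overhead factor and counts are assumptions, not measurements):

```python
# Back-of-envelope transaction-log growth from embedding documents.
# In the FULL recovery model an INSERT of a blob is fully logged, so the
# log grows by at least the blob size plus per-row logging overhead.
# The 10% overhead factor below is an assumption for illustration.

doc_size_mb = 1.0
docs_inserted = 24          # one burst of "a couple dozen" documents
log_overhead_factor = 1.1

log_growth_mb = doc_size_mb * docs_inserted * log_overhead_factor
print(f"Approximate log growth from one burst: {log_growth_mb:.1f} MB")
# Until the next log backup (FULL recovery model), that space cannot be reused.
```

So the log entry for a 1 MB document is not dramatically bloated beyond the document itself, but it is at least the full document size, and it stays in the log until it has been backed up and, with replication, delivered to the subscriber.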
I personally never embed such documents in a table. While that is good from a security standpoint, it rules out any manipulation of the files from outside the database, which could have helped you here with replication: file synchronization over a slow link is usually much easier to support than database replication. At least that part of the problem would be more easily resolved.
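A minimal sketch of that alternative (the directory layout, function, and in-memory "table" are hypothetical stand-ins): store only a pointer row in the database and keep the file on disk, where an ordinary file-sync tool can carry it over the slow link:

```python
# Sketch: keep documents outside the database so a plain file-sync tool
# can replicate them. The table row stores only metadata and a path.
# DOC_ROOT, store_document, and the list-based "table" are hypothetical.

import hashlib
import shutil
from pathlib import Path

DOC_ROOT = Path("doc_store")  # directory the sync tool mirrors to the remote site

def store_document(src: Path, table: list) -> dict:
    """Copy the file into the synced store and record a pointer row."""
    DOC_ROOT.mkdir(exist_ok=True)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    dest = DOC_ROOT / f"{digest}{src.suffix}"  # content-addressed name
    shutil.copyfile(src, dest)
    row = {"name": src.name, "sha256": digest, "path": str(dest)}
    table.append(row)  # in a real system this would be a small INSERT
    return row
```

The database row stays tiny (name, hash, path), so replicating the table is cheap, and the bulky files travel separately on whatever schedule the link can handle.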