Saving and accessing files to the cloud
Message
From: 30/01/2016 17:33:59
To: 30/01/2016 16:43:28
General information
Forum: Visual FoxPro
Category: Coding, syntax & commands

Environment versions
Visual FoxPro: VFP 9 SP2
OS: Windows 7
Network: Windows 2008 Server
Database: Visual FoxPro
Application: Desktop

Miscellaneous
Thread ID: 01630420
Message ID: 01630451
Views: 70
>>>- How quickly do you need to save files to the cloud, or retrieve them? Typical business LANs these days are gigabit i.e. 1,000 Mbit/sec. Typical asymmetric business broadband might be 20 Mbit/sec download, 5 Mbit/sec (or less) upload. In that example file transfer speed from cloud to business would be less than 2% of LAN, and upload speed from business to cloud less than 1%.
>>>
>>>- You may need or want to do the initial data transfer to the cloud via USB hard drive or tape. Uploading 150GB to the cloud over typical broadband upload speeds can take weeks or months.
>>
>>Having USB3 or ESDI disc access makes sense. But the upload estimate seems too pessimistic IMO: upload transfer should be ~500 kByte/sec, so about 2 weekends if there is no other traffic on weekends, or a week if the upload is not taxed heavily by normal business use. As .docx usually compresses quite a bit, an automated setup where each directory is compressed before transfer and decompressed cloud-side should speed the transfer up by a factor of 2 or 3.
>
>I recently set up a small business client with CrashPlan for backup (to a cloud account). We specified about 180GB to be backed up; I think it was about a million files. To avoid interfering with regular business operations, we configured it to back up only between 00:00 and 06:00 local time, 7 days a week. The client has 2.5 Mbit up, which is actually fast by local small business standards - some of my other clients only have between 0.5 and 1.0 Mbit up.
>
>The initial sync started November 16, 2015 and finished somewhere around January 8 - 9 of this year, so just under two months.
>
>CrashPlan does smart, incremental block-level backups with compression, which should be pretty efficient.
>
>So that's one real-world example; YMMV.
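The timeline above can be sanity-checked with some back-of-envelope arithmetic. This is a rough sketch, not from the original posts: it takes the 180GB, 6-hour window, and 2.5 Mbit/s figures quoted above and assumes the full line rate is usable with no protocol overhead.

```python
# Back-of-envelope check of the CrashPlan initial-sync timeline.
# Assumes the full 2.5 Mbit/s upstream is usable during each
# 00:00-06:00 backup window, with no protocol or backup-engine overhead.

def nights_to_upload(total_gb, mbit_per_s, window_hours):
    """Ideal number of nightly windows needed to push total_gb upstream."""
    bytes_per_s = mbit_per_s * 1_000_000 / 8           # line rate in bytes/s
    bytes_per_night = bytes_per_s * window_hours * 3600
    return total_gb * 1_000_000_000 / bytes_per_night

ideal = nights_to_upload(180, 2.5, 6)
print(f"Ideal lower bound: {ideal:.0f} nights")        # ~27 nights
```

The ideal figure comes out to roughly 27 nights; the real run took about twice that (~54 days), which is plausible once compression passes, versioning, and other traffic on the line are factored in.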

We move between 30 and 200 GB of .dbf files regularly. We tried some syncing programs billed as smart (compressing and sending), but found that we get much better speeds by compressing manually or by script. It could be that our file sizes exceed the estimates and buffers of those programs, and of course we use 2 different physical discs for the uncompressed and compressed data. We usually compress into .rar, as that can be automated from the command line and has hooks into Total Commander as well.

We have 2 lines via 2 different providers (cable and DSL, for failover security), and the usual transfer speed in MB/s is a bit better than the contracted Mbit/s divided by 10 (gauged by the speed shown in Total Commander transfers and by ballpark estimates of transfer times for known data amounts on uploads). Downloads do not fill the pipe, and we are uncertain where the bottleneck is; since download speed is more than 5 times that of upload, we do not really care.
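The "contracted Mbit/s divided by 10" rule of thumb can be written down explicitly. A small sketch, not from the original post: dividing by 8 would be the raw line rate, and the ~20% allowance for protocol and application overhead, as well as the 50 Mbit/s example line, are assumptions for illustration.

```python
# Rule of thumb from the post: real-world MB/s is roughly the
# contracted Mbit/s divided by 10. Dividing by 8 gives the raw
# line rate; the extra ~20% is an assumed allowance for TCP/IP
# and application overhead.

def payload_mb_per_s(contracted_mbit_s, overhead=0.20):
    """Estimated usable payload throughput in MB/s."""
    return contracted_mbit_s / 8 * (1 - overhead)

def hours_to_move(gigabytes, contracted_mbit_s):
    """Rough transfer time for a given amount of data."""
    return gigabytes * 1000 / payload_mb_per_s(contracted_mbit_s) / 3600

# 200 GB of .dbf over a hypothetical 50 Mbit/s upstream:
print(f"{payload_mb_per_s(50):.1f} MB/s")     # 5.0 MB/s, i.e. 50/10
print(f"{hours_to_move(200, 50):.1f} hours")  # ~11 hours
```

Compressing the .dbf files first, as described above, shortens these times roughly in proportion to the compression ratio achieved.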