>>>>>>>Hi
>>>>>>>
>>>>>>>In the last few days, the FPT file of one of my larger tables has bloated alarmingly, twice going from 75,000 to over 2 GB in a couple of hours. I can deal with it when it happens, but how can I quickly stop it happening?
>>>>>>>
>>>>>>>Thanks
>>>>>>>
>>>>>>>Colin
>>>>>>
>>>>>>Increase your blocksize to your maximum memo size. Copy your table to another table and use that new table in place of your current one.
>>>>>
>>>>>What do you mean by max? Are there limits, or good blocksize options?
>>>>>
>>>>>Thanks
>>>>>
>>>>>Colin
>>>>
>>>>It seems that is useless. Read this:
>>>>
>>>>VFP will put the memo back into the same place if it still fits. A lot depends on the SET BLOCKSIZE that was in effect when the table was created. If you know that your memo fields are going to go through significant additions over time, you should use a bigger blocksize; this will reduce the number of times that a memo actually needs to be moved to a new space allocation inside the FPT. -- df
>>>>
>>>>The above appears to be incorrect for tables USEd in shared mode (which I assume is how most of us use them). AlekseyT from Microsoft stated on UT today that an updated memo field is always re-written when a table is opened SHARED, and a simple test confirms this for me, unfortunately. He has been asked why, but so far there has been no answer.
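A minimal VFP sketch of that kind of test, assuming a table named mytable with a memo field mnotes (both names hypothetical):

```foxpro
* Rewrite a memo with identical content while the table is open SHARED,
* then compare the FPT file size before and after. If shared updates
* always abandon the old block, the file grows even though nothing changed.
USE mytable SHARED
=ADIR(laBefore, "mytable.fpt")
REPLACE mnotes WITH mnotes        && same content written back
=ADIR(laAfter, "mytable.fpt")
? laBefore[1,2], laAfter[1,2]     && column 2 of ADIR() is the file size
USE
```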
>>>>
>>>>
>>>>http://fox.wikis.com/wc.dll?Wiki~MemoBloat~VFP
>>>>Personally, I don't use memo fields in my .DBF files. I use one character field in place of a memo: I parse and save into it, and merge and read my memo text from that character field. That way I reuse the field and my memos don't get fat.
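A rough sketch of that reuse idea, assuming a fixed-width character field cnotes C(200) (field name and width hypothetical):

```foxpro
* Writing overwrites the same fixed-size slot inside the DBF itself,
* so there is no FPT to bloat; the trade-off is a fixed maximum length.
REPLACE cnotes WITH PADR(lcText, 200)   && save, padded to the field width
lcText = RTRIM(cnotes)                  && read back, trailing blanks trimmed
```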
>>>
>>>Metin
>>>
>>>Can I change the blocksize of an existing FPT file?
>>>
>>>Thanks a lot for your help
>>>
>>>Colin
>>
>>Yes, you can: create a new table and copy the old records into it... But according to that article (if I haven't misunderstood it, given my poor English), in a shared environment that solution is useless... :(
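For the record, a sketch of that rebuild, assuming a table named mytable (name hypothetical). SET BLOCKSIZE only affects tables created afterwards, which is why the copy is needed; values of 33 or more are taken as bytes per block:

```foxpro
USE mytable EXCLUSIVE
SET BLOCKSIZE TO 1024        && newly created tables get 1024-byte memo blocks
COPY TO mytable_new          && the new FPT is built with the new blocksize
USE
* To simply reclaim the wasted space without changing the blocksize:
* USE mytable EXCLUSIVE
* PACK MEMO
```

You can check a table's current memo block size with SYS(2012) while it is open.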
>
>OK Metin, thanks a lot for all your help.
You're welcome... :)