I feel compelled to explain the situation and why this has to happen, but it would take forever to do so. But yes, the ultimate reason this is a memo field is its length. Although today, with CAST(), a Memo memory variable would be just fine if the program were written to support that.
This single-record cursor is a representation of a record to be processed that comes from either a VFP database or a SQL database (in the latter case, the corresponding field is of the Text type).
Data within this particular memo field can be processed as a whole, or in parts (hence the replacement). The original engine never had to replace anything, since it only ever processed the string as a whole. Over the years the code was enhanced so that portions of it could be processed, with the program calling itself recursively on a smaller "chunk" of the data each time. While creating a new cursor that contains each "chunk of data" works without a hitch, it is sluggish, because that cursor must be created with the NOFILTER option since the routine can be called again and again down the stack. Since the only thing that needs to change in the cursor is the one field, that field is essentially saved off, the pared-down version of the memo field is shoved in, the recursive call is made, and then the field is restored to its original value. It works like a champ, save for this little bloat problem :) The engine doesn't know it's not processing the "entire" string -- it just sees a completely valid record to process the way it always does!
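For what it's worth, the save/swap/recurse/restore dance described above looks roughly like this. This is just a sketch; the cursor, field, and procedure names (csrWork, mData, ProcessEngine, lcChunk) are made up for illustration:

```foxpro
* Hypothetical sketch of the approach: save the memo value, swap
* in the pared-down chunk, recurse, then put the original back.
SELECT csrWork
lcOriginal = csrWork.mData                  && save off the full memo value
REPLACE mData WITH lcChunk IN csrWork       && shove in the smaller chunk
DO ProcessEngine                            && recursive call sees a completely valid record
REPLACE mData WITH lcOriginal IN csrWork    && restore the original value
```

The key point is that the engine itself is untouched; only the contents of the one field change around each recursive call.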