Windows systems - is file fragmentation bad?
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax - Miscellaneous
Thread ID: 00736741
Message ID: 00736836
>>>Hey Jim. Your four points seem, to me, to make the argument that fragmentation doesn't matter *as much* anymore because of system performance enhancements. But I am not sure that any of them negate the ol' "axiom"... in other words, I'd agree that the speed of modern systems just pushes off the manifestation of these issues until the disk is *really* fragmented (I'm the first to admit I have no quantitative definition of "really"), or until you run some sort of process -- grabbing values from disk in a long loop, for instance -- that really highlights the issue. But I'd imagine that there is still some effect.
>>
>>OK, let me try "why are contiguous files considered so good" compared to fragmented files -- again, keeping the initially listed points in mind.
>>
>>It seems to me that in a system with even a few concurrent processes underway (each using the HD for user and temp files), or even in a system running only a database application (using many tables, many of them largish), contiguous files cause greater head movement (slowness) than the same files fragmented amongst each other.
>>
>>Any new thoughts?
>
>Well, pretty much the same ol' thoughts... <bg>. I just read your original Chatter post, and the reply above, but I am not sure I get your point that fragmentation could be more likely to put two points of a file closer together. I'm sure it's possible, but I don't think it's more likely....
>
>Sure, if you need to jump from point A-->Z in a file (let's say it's a large file, megs of info, with sections labelled alphabetically) then that's a big jump. And in a contiguous file you are guaranteeing a 25-item jump, while in a fragmented file Z could end up closer than 25 items to A. But there's no guarantee it will end up closer, and it could just as well end up further apart. More importantly, it's rare that you jump from A-->Z. More commonly, you run through many pieces of data that are meant to be read contiguously... such as pages of a document, lines of a text file, or bits of data in a database record.
>
>On a really quick system you're unlikely to notice an impact during standard user-driven use, but if a user macro or a loop construct comes into play, at some point the very small differences add up... a .1 second increase over ten thousand hits comes out to a thousand seconds -- over a quarter of an hour... it adds up!
>
>(Am I following you correctly on this?)

Yes, I think you're following me well. But let me expand a bit more now...

Most of my applications have some larger, heavily used tables and lots of smaller ones. Those larger tables (well, at least a few of them) typically have new records added AT THE SAME TIME (e.g. a CustBase, a CustAddr and a CustMisc record for the same customer).
Now when those records are written, even though they're in separate tables (don't forget the .CDXs and possibly the .FPTs too), they will end up in VERY CLOSE PROXIMITY TO EACH OTHER, precisely because the writes interleave and each file grows by small fragments allocated near one another. So any time a specific customer record (set) is needed, all its pieces are close by, MINIMIZING head movement. I see this as good.
Now someone comes along and DEFRAGS the HD. Each table gets pulled back into its own contiguous run, so all EXISTING (related) records are now separated by the full length of the intervening tables! I don't see how that can be good. The little sketch below shows the geometry I mean.
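To put rough numbers on it, here is a minimal model (Python, just because it's quick to read -- the table count, the one-cluster records, and the random workload are all assumptions, not measurements). It treats the drive as a straight line of clusters and totals the head travel needed to fetch one customer's full record set under the two layouts:

import random

N_CUSTOMERS = 100_000   # records per table, one cluster each (assumed)
N_TABLES = 3            # e.g. CustBase, CustAddr, CustMisc
N_FETCHES = 10_000      # random customer lookups

def travel(position_of):
    # Total head movement to fetch N_FETCHES random customers' record sets.
    rng = random.Random(42)
    head, total = 0, 0
    for _ in range(N_FETCHES):
        cust = rng.randrange(N_CUSTOMERS)
        for table in range(N_TABLES):
            pos = position_of(table, cust)
            total += abs(pos - head)
            head = pos
    return total

# Interleaved writes: customer i's three records sit in adjacent clusters.
interleaved = travel(lambda table, cust: cust * N_TABLES + table)
# Defragmented: each table is one contiguous run, so a customer's related
# records are a whole table-length apart.
contiguous = travel(lambda table, cust: table * N_CUSTOMERS + cust)

print(f"interleaved writes: {interleaved:,} clusters of head travel")
print(f"after a defrag:     {contiguous:,} clusters of head travel")

On this toy model the interleaved layout wins by a wide margin, because every fetch in the defragmented layout must cross the full length of a table twice. Of course it ignores the file cache, read-ahead, controller scheduling and rotational latency, so treat it as the shape of the argument, not a benchmark.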

So I have a theory that says fragmentation of database tables is good; the problem is that it opposes conventional wisdom. So I'm trying to identify the facts that would confirm the veracity of the established convention.
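One way to gather those facts would be to look at where the bytes actually land. On Windows/NTFS you can dump a file's physical extent map with "fsutil file queryextents" (it needs an elevated prompt). Here's a sketch along those lines -- the table names are placeholders and the output parsing is an assumption about fsutil's print format, so adjust to taste:

import re
import subprocess

FILES = ["CustBase.dbf", "CustAddr.dbf", "CustMisc.dbf"]  # placeholder names

def last_extent_lcn(path):
    # fsutil prints one line per extent, something like
    # "VCN: 0x0  Clusters: 0x100  LCN: 0x2a4b8" (format may vary by version).
    out = subprocess.run(["fsutil", "file", "queryextents", path],
                         capture_output=True, text=True, check=True).stdout
    lcns = [int(h, 16) for h in re.findall(r"LCN:\s*0x([0-9a-fA-F]+)", out)]
    return lcns[-1] if lcns else None

positions = {name: last_extent_lcn(name) for name in FILES}
for name, lcn in positions.items():
    print(f"{name}: last extent starts at physical cluster {lcn}")

found = [lcn for lcn in positions.values() if lcn is not None]
if len(found) > 1:
    print(f"spread between last extents: {max(found) - min(found)} clusters")

If the theory holds, the last extents of the three files (the freshly appended records) should sit close together before a defrag and roughly a table-length apart afterwards.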

Got some? <s>