Jim,
It would depend on how the data is accessed. Breaking a wide table into multiple physical files is not a normalization issue; it is an implementation detail. The logical design would still recognize one relational entity, even though its data is stored in multiple physical files.
Since this is purely an implementation issue, the answer to its effectiveness will be found in the details of the system's implementation. Would breaking the table up give better performance, allow a simpler access mechanism, or allow most queries to use fewer than all of the subtables? If you find that most queries need to join all the tables back together, then you are probably not making anything better by breaking it up.
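To make the trade-off concrete, here is a minimal sketch using SQLite in Python as a stand-in for VFP tables (table and column names are hypothetical). The wide "customer" entity is split vertically into two physical tables sharing a key: a query that needs only one subtable's columns touches one file, but reassembling the full logical entity always costs a join.

```python
import sqlite3

# Hypothetical vertical split of one logical entity into two physical tables.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE cust_main (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE cust_extra (id INTEGER PRIMARY KEY, notes TEXT)")
cur.execute("INSERT INTO cust_main VALUES (1, 'Acme')")
cur.execute("INSERT INTO cust_extra VALUES (1, 'net 30 terms')")

# A query needing only columns from one subtable reads one table...
cur.execute("SELECT name FROM cust_main WHERE id = 1")
print(cur.fetchone())  # ('Acme',)

# ...but recovering the whole logical record requires a join every time.
cur.execute(
    "SELECT m.name, e.notes FROM cust_main m "
    "JOIN cust_extra e ON m.id = e.id WHERE m.id = 1"
)
print(cur.fetchone())  # ('Acme', 'net 30 terms')
con.close()
```

If that join appears in most of your queries, the split has bought you extra work rather than savings.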
From your numbers, 10,000 records at 100 bytes each is about 1 MB of DBF size. Adding 2,000 records per year at 100 bytes each adds 200,000 bytes per year. To reach anywhere near VFP's 2 GB file size limit would take roughly 10,000 years. To reach the one-billion-record maximum from 10,000 records at 2,000 per year would take approximately 500,000 years. So file size is not a problem here.
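The arithmetic behind those estimates can be checked with a quick back-of-the-envelope script (the 2 GB file size cap and one-billion-record cap are VFP's documented table limits):

```python
record_size = 100          # bytes per record
current_records = 10_000
growth_per_year = 2_000    # records added per year

current_size = current_records * record_size    # 1,000,000 bytes, ~1 MB
bytes_per_year = growth_per_year * record_size  # 200,000 bytes per year

size_limit = 2 * 1024**3        # 2 GB .dbf file size limit
record_limit = 1_000_000_000    # 1 billion records per table

years_to_size_limit = (size_limit - current_size) / bytes_per_year
years_to_record_limit = (record_limit - current_records) / growth_per_year

print(round(years_to_size_limit))    # 10732 -- on the order of 10,000 years
print(round(years_to_record_limit))  # 499995 -- roughly 500,000 years
```

Either way the table outlives us all, so splitting it for size reasons is unnecessary.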