General information
Category:
Databases, Tables, Views, Indexing and SQL syntax
Hi,
in addition to the previous answers:
>- Assuming an index on almost every column of the table for Rushmore
> optimization, is import performance a problem? (more indexes with the
> same key values -> more time to insert a record)
> Who has experience with this amount of data?
>- How long could a SQL statement take?
>- Would it be better to store the data on SQL Server?
> How does SQL Server performance compare with FoxPro's
> on large amounts of data?
Do you ***really*** need an index on every column?
That can actually slow Rushmore down. Hilmar has an article about
indexes on DELETED(); the same reasoning applies here.
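If only a handful of columns actually appear in WHERE / SEEK expressions, a trimmed tag set is usually enough. A sketch of the idea (table and column names are made up for the example):

```foxpro
* Hypothetical table: keep tags only on columns actually used for lookups
USE customers EXCLUSIVE
DELETE TAG ALL                   && drop the blanket one-tag-per-column setup
INDEX ON cust_id  TAG cust_id    && primary lookup key
INDEX ON order_dt TAG order_dt   && used in date-range filters
* No tag on DELETED() unless SET DELETED ON filtering is a proven bottleneck
```

Every tag you drop is one less .cdx node chain to maintain on each INSERT/UPDATE during the import.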
Also: updating an indexed column or inserting a new record flags the
.cdx as invalid and forces a reread of all buffered index info.
That can take a while if the index data travels over a network,
especially when many users are working at the same time.
With the data kept on the workstation this is less of a problem,
but this "index thrashing" can still slow the app down.
For working with .dbf's I think the old SEEK / SCAN WHILE pattern
is still best when you are moving data in these sizes
(and yes, I've been there, doing data-mining
optimizations with some tables exceeding 1 GB).
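For reference, the pattern looks like this: position on the key once, then read forward only while the key still matches, instead of letting Rushmore evaluate a filter over the whole table (table, tag and variable names here are hypothetical):

```foxpro
* Sketch: positioned read instead of a full-table Rushmore filter
USE orders ORDER TAG cust_id     && open with the index that matches the key
lnTotal = 0
SEEK m.lnCustId                  && jump straight to the first matching record
SCAN WHILE cust_id = m.lnCustId  && walk only the matching slice of the table
    lnTotal = lnTotal + order_amt
ENDSCAN
```

On a 1 GB+ table this touches only the records for that one key, so the amount of data read is proportional to the result, not to the table size.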
If you are doing batch updates this may not matter as much -
too many unknowns to make a definitive statement. Can you give us some more info from your side?
And yes, SQL Server can be very fast, especially if you use T-SQL.
The decision here should come from analyzing the topology of
the intended usage scenarios. With indexes of this size it is not only a matter
of bandwidth but also of disk access on the server in multi-user LAN scenarios.
Web queries or SOAP with VFP and .dbf's might actually be a good option here.
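The point of T-SQL is that the aggregation runs server-side and only the result set crosses the LAN. From VFP you would typically use SQL pass-through for that; a sketch (DSN, credentials and query are assumptions for the example):

```foxpro
* Sketch: push the heavy lifting to SQL Server via pass-through
lnHandle = SQLCONNECT("MyDSN", "someuser", "somepwd")
IF lnHandle > 0
    * GROUP BY / HAVING run on the server; only the totals travel the LAN
    SQLEXEC(lnHandle, ;
        "SELECT cust_id, SUM(amount) AS total FROM orders " + ;
        "GROUP BY cust_id HAVING SUM(amount) > 10000", "crsTotals")
    SQLDISCONNECT(lnHandle)
ENDIF
```

Compare that with pulling the whole orders table across the wire and aggregating locally - with row counts like yours the difference is usually dramatic.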
HTH
thomas