General information
Category:
Third party products
>I'm planning to use SQL Server as a back end to a client/server app with these LARGE (2 GB+) tables using FoxFire! - I figure if I send the query to SQL Server, it should handle it better than Fox can - don't you think?
If the tables are really larger than 2 GB, then VFP becomes problematic. But heed fair warning: SQL Server is a monolithic, cumbersome son of a bitch to use for analytics compared to VFP. Just getting data into SQL Server is a pain.
Some suggestions:
1. Use native SQL pass-through and send optimized Transact-SQL queries.
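For instance, a pass-through call from VFP looks like this (a minimal sketch - the connection string, table, and column names here are made up; substitute your own):

```foxpro
* Open a connection without a predefined DSN. The driver, server,
* and database names below are placeholders.
lnHandle = SQLSTRINGCONNECT("Driver=SQL Server;Server=MyServer;" + ;
    "Database=MyDB;Trusted_Connection=Yes")

IF lnHandle > 0
    * The T-SQL text goes straight to SQL Server, bypassing VFP's
    * own query engine entirely; results come back as a local cursor.
    lnResult = SQLEXEC(lnHandle, ;
        "SELECT fldField1, fldField2 FROM tblData " + ;
        "WHERE fldICD9DX1 = '878'", "crsResult")
    IF lnResult > 0
        BROWSE    && crsResult is a read-only VFP cursor
    ENDIF
    SQLDISCONNECT(lnHandle)
ENDIF
```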
2. Use optimizer hints in your Transact-SQL code... ESPECIALLY for specifying which index to use on a query. SQL Server's query optimizer loves to choose long, long table scans instead of using an index when the query becomes even marginally complicated. Here's an example. Note the optimizer hint in the FROM clause:
SELECT fldField1, fldField2, fldField3, etc...
INTO tempdb..MyTempTable
FROM tblData (INDEX=IDXtblIndex1)
WHERE fldICD9DX1 BETWEEN '885' AND '8877' OR
      fldICD9DX1 BETWEEN '8951' AND '8977' OR
      fldICD9DX1 IN ('878','9059')
blah...blah
3. You might want to experiment with the SELECT ... INTO technique (as used above): select into a temporary table, build your view on that temporary table, and use the ODBC driver to return the temporary table's results to a VFP view, where you can manipulate the data locally. This assumes you will not be updating any data in the original table.
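Sketched end to end, suggestion 3 looks something like this (again a hypothetical sketch - connection string and names are placeholders):

```foxpro
* Stage the heavy query in a server-side temp table, then pull
* only the results back to VFP.
lnHandle = SQLSTRINGCONNECT("Driver=SQL Server;Server=MyServer;" + ;
    "Database=MyDB;Trusted_Connection=Yes")

* Step 1: the heavy lifting runs entirely on the server -- the
* result set lands in tempdb, not on the wire.
SQLEXEC(lnHandle, "SELECT fldField1, fldField2 " + ;
    "INTO tempdb..MyTempTable " + ;
    "FROM tblData (INDEX=IDXtblIndex1) " + ;
    "WHERE fldICD9DX1 BETWEEN '885' AND '8877'")

* Step 2: fetch the (much smaller) staged result set into a local
* cursor you can index, browse, and report on with VFP tools.
SQLEXEC(lnHandle, "SELECT * FROM tempdb..MyTempTable", "crsLocal")
```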
4. At the end of a query run, you can issue a DROP TABLE command to SQL Server via SQL pass-through to immediately remove any temporary tables you explicitly created.
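The cleanup is one more pass-through call (assuming lnHandle is still the open connection from the earlier sketch):

```foxpro
* Drop the explicitly created temp table as soon as you're done,
* rather than waiting for the connection to close.
SQLEXEC(lnHandle, "DROP TABLE tempdb..MyTempTable")
SQLDISCONNECT(lnHandle)
```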