Creating a Primary key.
Forum: Visual FoxPro
Category: Coding, syntax & commands, Miscellaneous
Thread ID: 00322044
Message ID: 00322637
Views: 21
>Erik,
>
>Thank you for the response to the question. I'm sure it will help me. However, there's a little problem. When I run the function, an error pops up saying "Cannot Group By Aggregate field."
>
>Also, I do not understand the syntax of the command "COUNT(*)", and I am unable to find that function in the FoxPro online Help CD.
>Can you tell me what that command is doing?
>

Look under SELECT - SQL in the Help file - COUNT(*) represents the count of records summarized into each tuple of the result. IOW, if you had the following table (the field names here aren't legal - # isn't allowed in a field name, so don't actually try to name a field Inv# - and the rest of the columns aren't given):

INVITEM

Inv#, InvLine#, etc.
1, 1, ...
1, 2, ...
1, 3, ...
2, 1, ...
2, 2, ...
3, 1, ...
3, 2, ...
3, 3, ...
3, 4, ...
4, 1, ...

SELECT Inv#, COUNT(*) AS NumLines FROM INVITEM GROUP BY Inv#

Your result set would look like:

Inv#, NumLines
1, 3
2, 2
3, 4
4, 1
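
If you want to see that behavior for yourself, here's a minimal throwaway sketch - the cursor and field names (csrInvItem, cInvNo, nLineNo) are made up for illustration, since Inv# isn't a legal name:

* Build a scratch cursor that mimics the INVITEM example above
CREATE CURSOR csrInvItem (cInvNo C(3), nLineNo N(2))
INSERT INTO csrInvItem VALUES ("1", 1)
INSERT INTO csrInvItem VALUES ("1", 2)
INSERT INTO csrInvItem VALUES ("1", 3)
INSERT INTO csrInvItem VALUES ("2", 1)
INSERT INTO csrInvItem VALUES ("2", 2)
INSERT INTO csrInvItem VALUES ("3", 1)
INSERT INTO csrInvItem VALUES ("3", 2)
INSERT INTO csrInvItem VALUES ("3", 3)
INSERT INTO csrInvItem VALUES ("3", 4)
INSERT INTO csrInvItem VALUES ("4", 1)

* COUNT(*) counts the rows collapsed into each group - one result row
* per distinct cInvNo, carrying the number of line items it had
SELECT cInvNo, COUNT(*) AS NumLines ;
  FROM csrInvItem ;
 GROUP BY cInvNo ;
  INTO CURSOR csrCounts
BROWSE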

>Any help is greatly appreciated.

You can't GROUP BY a field whose content is created by counting, summing, or another aggregate operation - COUNT(*) is an aggregation, not a value, so it can't be the basis for grouping. Think about it for a minute: how could you form an aggregate (a group of common records) based on the result of the grouping operation itself? This is not the VFP Psychic Hotline, with the Great Carnac knowing the answer before figuring out how to gather sets of records together to form the result that you'd then use to group those same records together...do you see the problem?

Let's rework the SELECT into something that makes some sense:

SELECT COUNT(*), custid+upsino+DTOS(puldat)+STR(sampleno,2) FROM MyTable GROUP BY 1 HAVING COUNT(*) > 1

makes more sense as:
SELECT Custid, ;
       UpsIno, ;
       PulDat, ;
       SampleNo, ;
       COUNT(*) as NumOccurs ;
  FROM MyTable ;
 GROUP BY Custid, UPSIno, PulDat, SampleNo ;
HAVING NumOccurs > 1

OK. The result set has a unique record for each combination of Custid, UpsIno, PulDat and SampleNo - and you want to know how many records share each set of common values. You only care about tuples (sets of common things) that occur more than once, and you need to pull those record sets.

The order in which the key fields are combined and filtered to form the aggregation doesn't matter. We want VFP to take maximum advantage of any indexes that exist, and of any intelligence it has about the relative distribution of values and the cost of the compares, and we don't want to worry about a field turning out bigger than we planned - so we don't want to form a single composite value that prevents VFP from using its smarts. Pull the four grouping fields and their aggregate count, and let God (well, in this case VFP) sort 'em out. Don't force needless function invocations like DTOS() and STR() on every individual record just because that's the only way we could picture doing it in a purely procedural fashion.

SQL is designed to describe the desired result, not the method of forming it. The statement I wrote tells VFP what I want and lets it figure out the best way to do the compares; it can leverage the knowledge held in indexes to do the compares more quickly and aggregate better. What if the fastest order of compares is different from the one I'd have guessed? VFP can figure that out at runtime. Numeric compares are 'lower cost' than character compares, so we also speed up the handling of puldat and sampleno by eliminating the per-record evaluation of DTOS() and STR().
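
To actually run it and look at the offenders, something along these lines would do (csrDupes is just a made-up cursor name; the rest mirrors the statement above):

SELECT Custid, ;
       UpsIno, ;
       PulDat, ;
       SampleNo, ;
       COUNT(*) AS NumOccurs ;
  FROM MyTable ;
 GROUP BY Custid, UpsIno, PulDat, SampleNo ;
HAVING COUNT(*) > 1 ;
  INTO CURSOR csrDupes

* _TALLY holds the number of rows the query produced - zero means the
* four fields really are unique in combination
? _TALLY
BROWSE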

IOW, SQL is supposed to make your life easier - tell it what you want, and let it work out how to do it. And the individual field values are available in the result set directly rather than having to parse some dain-bread string whose only function is to group things...

>>>I am trying to create a primary key for my table out of the following index called "keydex" :
>>>
>>> custid+upsino+DTOS(puldat)+STR(sampleno,2)
>>>

FWIW, using a composite index expression like this as the primary key is about the best way to ensure a losing decision and build an error into the table's PK through an inadequate (and unnecessary) limitation. Make a separate primary key field. If there are in fact duplicated tuples, that's a clear indication it is not a valid key in the first place. Using a factless, arbitrary or surrogate primary key protects you - that way, if someone ever pulls 101 samples for the same custid+upsino+puldat, nothing breaks or truncates. At best, this is a poor candidate key, and it's risky because of the magnitude limit in STR(sampleno,2). And since the PK's job is not to sort the file but to uniquely identify a record (which can probably be done in a whole lot fewer characters, reducing bloat in the child tables that carry it), you can make the arbitrary ordering a regular key...
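
For what it's worth, a rough sketch of that arrangement might look like this (the table name, the surrogate field name SampleID, and the field types and widths are all invented for illustration; the table has to belong to a database for the PRIMARY KEY clause to be legal):

* Factless integer surrogate as the primary key
CREATE TABLE Samples ( ;
    SampleID I PRIMARY KEY, ;
    custid   C(6), ;
    upsino   C(10), ;
    puldat   D, ;
    sampleno N(3))

* The old composite expression becomes an ordinary index tag - fine for
* SET ORDER TO or SEEK, but it no longer carries the uniqueness burden,
* so an overflow in STR(sampleno,2) only garbles the ordering, never
* record identity
INDEX ON custid+upsino+DTOS(puldat)+STR(sampleno,2) TAG keydex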

You might want to do some reading on normalization and keys before making this kind of irrevocable, expensive and risky decision. Craig Berntson, who already replied to you, has an excellent reference on primary keys on his Web site. If you have access to it, Jim Booth and Steve Sawyer's book "Effective Techniques for Application Development with Visual FoxPro 6.0" is a great, well-written guide.

>>
>>First of all make sure SET DELETED is OFF.
>>
>>Then use this query to find dupes:
>>
EMail: EdR@edrauh.com
"See, the sun is going down..."
"No, the horizon is moving up!"
- Firesign Theater


NT and Win2K FAQ .. cWashington WSH/ADSI/WMI site
MS WSH site ........... WSH FAQ Site
Wrox Press .............. Win32 Scripting Journal
eSolutions Services, LLC

The Surgeon General has determined that prolonged exposure to the Windows Script Host may be addictive to laboratory mice and codemonkeys