Aggregate - Not in query

Is there a way to apply a "Not In" filter in an Aggregate?

I have a table and I want to select all entries that are not present in another table.
Tomas,
It depends on how you are joining the tables, but you can filter like this:
TableB.Id = NullIdentifier()

This assumes you want the records from TableA for which there is no matching TableB entry.
Yes,

For example, to find Issues that don't have an Engineer assigned using Aggregates, you can:
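The same anti-join pattern can be sketched in plain SQL, here run through Python's sqlite3. The Issue/Engineer schema below is an assumption invented for this sketch, not the actual OutSystems entities; in SQL, the "With or Without" join plus the NullIdentifier() filter becomes a LEFT JOIN with an IS NULL test:

```python
import sqlite3

# Hypothetical schema standing in for the entities in the example above:
# Issue(Id, Title, EngineerId) and Engineer(Id, Name) are assumed names.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Engineer (Id INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Issue (Id INTEGER PRIMARY KEY, Title TEXT, EngineerId INTEGER);
    INSERT INTO Engineer VALUES (1, 'Ana');
    INSERT INTO Issue VALUES (1, 'Login bug', 1),        -- assigned
                             (2, 'Crash on save', NULL); -- unassigned
""")

# "With or Without" join plus a null filter is the classic anti-join:
rows = con.execute("""
    SELECT Issue.Title
    FROM Issue
    LEFT JOIN Engineer ON Engineer.Id = Issue.EngineerId
    WHERE Engineer.Id IS NULL
""").fetchall()
print(rows)  # [('Crash on save',)]
```

Only the Issue with no matching Engineer survives the filter, which is exactly what the Aggregate version returns.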

Is there a way to compare against a list of strings in another table? Something like:

WHERE MyName Not In (SELECT name FROM table)

Not in an Aggregate; you have to use an Advanced Query for that.
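A minimal standalone sketch of that Advanced Query form, run against SQLite via Python. All names here are assumptions: MyTable/MyName follow the pseudo-SQL in the question, and Excluded stands in for the second table (since `table` is a reserved word in SQL):

```python
import sqlite3

# Invented sample data to demonstrate a NOT IN subquery.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MyTable (MyName TEXT);
    CREATE TABLE Excluded (name TEXT);
    INSERT INTO MyTable VALUES ('alpha'), ('beta'), ('gamma');
    INSERT INTO Excluded VALUES ('beta');
""")

# The Advanced Query form: keep rows whose MyName is not in the other table.
rows = con.execute("""
    SELECT MyName FROM MyTable
    WHERE MyName NOT IN (SELECT name FROM Excluded)
    ORDER BY MyName
""").fetchall()
print([r[0] for r in rows])  # ['alpha', 'gamma']
```

One caveat worth knowing: if the subquery can return NULLs, NOT IN matches nothing, which is another reason the LEFT JOIN / IS NULL form is often preferred.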

Hi Marcos,

Have you tried the approach Tiago and João suggested? You should be able to apply it by having a With or Without join with table on MyName = table.name, and a filter for table.Id = NullIdentifier().

It is probably not as efficient as if Aggregates supported NOT INs natively, though.

Cheers,

Paulo

Thank you, Paulo. Their suggestion works; I just didn't understand it until you restated it. I'll repeat it here once again with some additional context and a small change, for those who need to have things explained a few different times, like me!  :)

I didn't describe this earlier but I am pulling data in from a flat file (tblQuestion) and normalizing it. Some records in tblQuestion had a group name in the tblQuestion.GroupName field and others had something else in that field. I want to reference records that were using the tblQuestion.GroupName field for something other than group names. 

I created a table with the group names called tblGroup and wanted every record from tblQuestion that didn't have a Name from tblGroup.Name in the tblQuestion.GroupName field. 

tblGroup(ID, Name)

tblQuestion(ID, tblGroupID, GroupName)


So, in this context the solution is:

Sources: tblQuestion [With or Without] tblGroup (tblQuestion.GroupName = tblGroup.Name)

Filter: tblGroup.ID = NullIdentifier()
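That Aggregate can be sketched in plain SQL (via Python's sqlite3) using the schema above; the sample rows are invented for illustration. The key difference from a foreign-key join is that the match is on the *name* column:

```python
import sqlite3

# Schema from the post; the row contents are made-up examples.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tblGroup (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE tblQuestion (ID INTEGER PRIMARY KEY, tblGroupID INTEGER,
                              GroupName TEXT);
    INSERT INTO tblGroup VALUES (1, 'Safety');
    INSERT INTO tblQuestion VALUES (1, NULL, 'Safety'),         -- a real group name
                                   (2, NULL, 'free text note'); -- something else
""")

# tblQuestion [With or Without] tblGroup on GroupName = Name,
# filtered to tblGroup.ID = NullIdentifier() (IS NULL in plain SQL):
rows = con.execute("""
    SELECT tblQuestion.ID, tblQuestion.GroupName
    FROM tblQuestion
    LEFT JOIN tblGroup ON tblQuestion.GroupName = tblGroup.Name
    WHERE tblGroup.ID IS NULL
""").fetchall()
print(rows)  # [(2, 'free text note')]
```

Only the tblQuestion records whose GroupName matches nothing in tblGroup come back, which is the set of records that were using the field for something other than a group name.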


This is not at all efficient, but for a one-time import of under 1,000 records, this and the rest of the normalization steps ran in 26 seconds. Success!

Thanks,

Marcos