Hi,

A page contains a list of records in a table with 2,00,000 records, so it is taking a long time to load, and on clicking the table's pagination it again takes a long time to load the data. Could you please help me reduce the page load time?


Regards,

Divya

Hey,

How are you obtaining your data - are you using an Aggregate to feed the page, or an Advanced Query? Is the query in the preparation of the page, or encapsulated in its own action?

Have you validated where most of the time is being spent? Is it the page being rendered, or is it obtaining the data?

Afonso Carvalho wrote:

Hey,

How are you obtaining your data - are you using an Aggregate to feed the page, or an Advanced Query? Is the query in the preparation of the page, or encapsulated in its own action?

Have you validated where most of the time is being spent? Is it the page being rendered, or is it obtaining the data?

Hi Afonso,


I have used a SQL query in the Preparation. I think most of the time is being spent executing the SQL query.

Regards,

Divya


The platform will attempt to optimise Aggregates to only fetch the necessary fields and records, but will not perform these optimisations in an Advanced Query. There might be steps that you can take in your query that will increase performance. 

Can you share with us the SQL of your query?

Hi Divya,

Try using the Max Records property on the SQL element, which may help you reduce the screen load time.

On pagination click, increase the Max Records value accordingly, refresh the data source, and refresh the List Records widget.
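For illustration, the "increase Max Records on pagination" idea can be sketched in plain SQL via SQLite from Python. The table name and columns here are made up; in OutSystems the Max Records property plays roughly the role of the LIMIT below, fetching only a prefix of the result set instead of all rows on every load:

```python
import sqlite3

# Made-up table standing in for the thread's Entity.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Records (id INTEGER PRIMARY KEY, component TEXT)")
conn.executemany("INSERT INTO Records (component) VALUES (?)",
                 [("IF%d" % i,) for i in range(1000)])

PAGE_SIZE = 50

def fetch_page(max_records):
    # Analogue of setting Max Records: pull only the first N rows
    # instead of all 1,000 on every screen load.
    return conn.execute(
        "SELECT id, component FROM Records ORDER BY id LIMIT ?",
        (max_records,)).fetchall()

first_load = fetch_page(PAGE_SIZE)        # initial screen load
after_click = fetch_page(PAGE_SIZE * 2)   # pagination click: bump the limit, refetch
print(len(first_load), len(after_click))
```

This only sketches the data-fetching side; in the actual screen you would also refresh the widget after refetching, as described above.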

Regards,

Koushik.

Afonso Carvalho wrote:

The platform will attempt to optimise Aggregates to only fetch the necessary fields and records, but will not perform these optimisations in an Advanced Query. There might be steps that you can take in your query that will increase performance. 

Can you share with us the SQL of your query?

Hi Afonso,


Please find the query below.

SELECT {Table1}.[ID],
       {Table1}.[COMPONENT],
       {Table1}.[TIMESTAMP],
       CASE WHEN {Table1}.[STATUS] = 1 THEN "Processing"
            WHEN {Table1}.[STATUS] = 2 THEN "Success"
            WHEN {Table1}.[STATUS] = 3 THEN "Error"
            ELSE NULL
       END AS Processingstatus,
       CASE WHEN id = Table1_FK THEN COUNT(NULLIF(2, {Table2}.[STATUS])) ELSE NULL END AS Success,
       CASE WHEN id = Table1_FK THEN COUNT(NULLIF(1, {Table2}.[STATUS])) ELSE NULL END AS Error
FROM {Table1}
LEFT OUTER JOIN {Table2} ON {Table1}.[ID] = {Table2}.[Table1_FK]
WHERE ({Table1}.[COMPONENT] = @interfacename OR @interfacename = '')
  AND {Table1}.[COMPONENT] IN (@ListOfInterfaceNames)
GROUP BY {Table1}.[ID];



Regards,

Divya

I don't see anything egregious that you should immediately optimise, but I do have some ideas:

 - that IN in the WHERE clause could be a problem. Do you see any marked improvement if you remove it?

 - do the Entities have any indexes?

 - in your opening post you say you have an Entity with 2,00,000 records. Do you mean 200 000 records, or two million? If the Entity has two million records in it, can you consider archiving some of those records in a History table? Do you need all of them?
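The index question above is easy to check in miniature. The sketch below uses SQLite from Python with made-up tables mirroring the thread's Table1/Table2, and shows how an index on the join's foreign key changes the query plan from a full scan of Table2 per Table1 row to a direct index probe (the same principle applies to the database behind an OutSystems Entity, though the plan output looks different there):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (id INTEGER PRIMARY KEY, component TEXT)")
conn.execute("CREATE TABLE Table2 (id INTEGER PRIMARY KEY, "
             "table1_fk INTEGER, status INTEGER)")
conn.executemany("INSERT INTO Table1 (component) VALUES (?)",
                 [("IF%d" % i,) for i in range(100)])
conn.executemany("INSERT INTO Table2 (table1_fk, status) VALUES (?, ?)",
                 [(i % 100 + 1, 1) for i in range(500)])

def join_plan():
    # Returns SQLite's query plan for the thread's join as one string.
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM Table1 "
        "LEFT JOIN Table2 ON Table1.id = Table2.table1_fk").fetchall()
    return " ".join(row[-1] for row in rows)

plan_before = join_plan()   # without an index: Table2 is scanned in full

# An index on the foreign key lets the join look up Table2 rows directly.
conn.execute("CREATE INDEX idx_table2_fk ON Table2 (table1_fk)")
plan_after = join_plan()

print(plan_before)
print(plan_after)
```

With the index in place, the plan switches to a SEARCH of Table2 using idx_table2_fk, which is what you want to see on a foreign-key join over a large table.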

divya v wrote:

Afonso Carvalho wrote:

The platform will attempt to optimise Aggregates to only fetch the necessary fields and records, but will not perform these optimisations in an Advanced Query. There might be steps that you can take in your query that will increase performance. 

Can you share with us the SQL of your query?

Hi Afonso,


Please find the query below.

SELECT {Table1}.[ID],
       {Table1}.[COMPONENT],
       {Table1}.[TIMESTAMP],
       CASE WHEN {Table1}.[STATUS] = 1 THEN "Processing"
            WHEN {Table1}.[STATUS] = 2 THEN "Success"
            WHEN {Table1}.[STATUS] = 3 THEN "Error"
            ELSE NULL
       END AS Processingstatus,
       CASE WHEN id = Table1_FK THEN COUNT(NULLIF(2, {Table2}.[STATUS])) ELSE NULL END AS Success,
       CASE WHEN id = Table1_FK THEN COUNT(NULLIF(1, {Table2}.[STATUS])) ELSE NULL END AS Error
FROM {Table1}
LEFT OUTER JOIN {Table2} ON {Table1}.[ID] = {Table2}.[Table1_FK]
WHERE ({Table1}.[COMPONENT] = @interfacename OR @interfacename = '')
  AND {Table1}.[COMPONENT] IN (@ListOfInterfaceNames)
GROUP BY {Table1}.[ID];



Regards,

Divya

Oh geez, please try not to do CASEs when running queries :( they can be very slow. This looks like a mess.

Also, you are running a GROUP BY but you are selecting a lot of extra fields (which are wider than your grouping columns).

You are trying to set textual values in your aggregation process... this would even be faster with a separate for-each loop, but it might be wiser to just use a Status entity from which you can fetch the label.

Your counts are basically subqueries; such a weird CASE and NULLIF combination seems very over the top compared to a (SELECT COUNT(1) FROM Table2 WHERE X = Y)
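For illustration, the suggested subquery form could look like the sketch below, using SQLite from Python. The schema is made up to mirror the thread's Table1/Table2, and it assumes status 2 means Success and status 1 means Error, as the original aliases suggest:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (id INTEGER PRIMARY KEY, component TEXT);
CREATE TABLE Table2 (id INTEGER PRIMARY KEY, table1_fk INTEGER, status INTEGER);
INSERT INTO Table1 VALUES (1, 'InterfaceA'), (2, 'InterfaceB');
-- For Table1.id = 1: two successes (status 2) and one error (status 1).
INSERT INTO Table2 (table1_fk, status) VALUES (1, 2), (1, 2), (1, 1), (2, 2);
""")

# Correlated subqueries replace the CASE/NULLIF counting: each one
# counts Table2 rows for the current Table1 row with the given status.
rows = conn.execute("""
    SELECT t1.id,
           t1.component,
           (SELECT COUNT(1) FROM Table2 t2
             WHERE t2.table1_fk = t1.id AND t2.status = 2) AS success,
           (SELECT COUNT(1) FROM Table2 t2
             WHERE t2.table1_fk = t1.id AND t2.status = 1) AS error
      FROM Table1 t1
""").fetchall()
print(rows)   # [(1, 'InterfaceA', 2, 1), (2, 'InterfaceB', 1, 0)]
```

This also removes the need for the GROUP BY entirely, since the outer query stays one-row-per-Table1-record.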

Afonso Carvalho wrote:

I don't see anything egregious that you should immediately optimise, but I do have some ideas:

 - that IN in the WHERE clause could be a problem. Do you see any marked improvement if you remove it?

 - do the Entities have any indexes?

 - in your opening post you say you have an Entity with 2,00,000 records. Do you mean 200 000 records, or two million? If the Entity has two million records in it, can you consider archiving some of those records in a History table? Do you need all of them?

Hi,


Even if I remove the IN clause from the SQL query, I don't see any improvement.

Actually, the Entity contains 4,00,000 records. I applied some conditions here, after which 2,00,000 records are displayed on the front end.


Regards,

Divya