Hello,
We are using Outsystems as a front-end to a rather complex stored procedure (hosted on a SQL Server DB on our infrastructure, which Outsystems accesses through a VPN). We call this procedure from Advanced SQL; it returns a recordset of several thousand records, each containing a JSON object.
We noticed some performance issues with this procedure and we found out that – on the database side – returning a single JSON array instead of separate objects in separate records improves speed by at least a factor of five. We assume this is due to the transport mechanism – having multiple records probably requires multiple roundtrips on the network, as opposed to the situation where the dataset comprises a single (albeit huge) record.
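For illustration only (this is not our actual code, which is far more complex), the kind of change we made on the database side is going from one JSON object per record to a single-record JSON array, something like:

```sql
-- Simplified sketch, assuming SQL Server 2017+ and made-up table/column names.
-- Before: one JSON object per record
--   SELECT ItemJson FROM dbo.ProcedureResults;
-- After: the whole result set as a single JSON array in one record
SELECT N'[' + STRING_AGG(CAST(ItemJson AS NVARCHAR(MAX)), N',') + N']' AS FullJson
FROM dbo.ProcedureResults;
```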
However, when we try to parse this JSON array in Outsystems, we find that it has been truncated. The reason, according to a forum post we found, is that Outsystems runs the query asynchronously and stores the resulting dataset in an Outsystems DB where the maximum field length is 2000.
Any recommendations?
For the sake of clarity: the output of the procedure depends on the previous user input (so we can’t cache it in Outsystems) and drives the next user action (so performance is essential to usability); there is also no reasonable way to reimplement the stored procedure as an Outsystems server action (as it requires on-prem data and is rather high-code).
Thank you
Federica
Hi Federica,
How did you find out that the JSON is getting truncated (e.g. an error in Service Center, using LogMessage, etc.)? Looking at some forum posts and the documentation, there doesn't seem to be a direct limitation on the possible JSON size from a network standpoint. This makes me believe that there might be a misunderstanding somewhere; for example, there is this forum post that outlines truncation happening with the LogMessage action, which could be misleading in your situation.
The only constraint that I'm aware of is at the database level, if you intend to store the full JSON in the OutSystems DB before processing it. However, this isn't a hard limitation, just a design consideration: looking at the Database Data Types documentation, you can see that a Text attribute with a length beyond 2000 gets stored as an NVARCHAR(MAX) in a SQL Server database. There is also the strategy of converting to Binary Data before storing it in the database, as outlined in this forum post, in case the maximum length is variable/unknown.
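To make that mapping concrete, here's an illustrative sketch of how the generated SQL Server columns would look depending on the Text attribute's Length (the names are made up; the platform generates the actual DDL):

```sql
-- Illustrative only; OutSystems generates the real table definitions.
CREATE TABLE dbo.JsonStaging (
    Id        INT IDENTITY PRIMARY KEY,
    ShortJson NVARCHAR(2000),  -- Text attribute with Length = 2000
    FullJson  NVARCHAR(MAX)    -- Text attribute with Length > 2000
);
```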
Hopefully that helps at least a bit.
Hi Francisco, thanks for your reply. We found out that the JSON is getting truncated because we receive the following error: "Failed to deserialize JSON to ScegliElementiRelazioneList:Unterminated string. Expected delimiter: ". Path '[8].Colore_TipoNodo', line 1, position 2033.", where 2033 is the last character (while the whole payload is around 1,000,000 characters).
We also found that when the output has more than 2000 characters, it comes back as multiple records of 2034 characters each, so we need to concatenate them into a variable and then deserialize it, losing the speed improvement of the stored procedure.
We were having the same issue, and found out that the platform breaks the output of a stored procedure called from an Advanced SQL query into multiple records if it's too big.
So I'd say to just cycle through the list of results of the SQL query and concatenate them (without adding spaces or anything) to get the original large JSON back.
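The concatenation itself is just an OutSystems loop appending each record's text to a variable, so there's no query to show for that part. If you'd rather control the chunking yourself instead of relying on the platform's implicit splitting, a rough sketch of the database side could look like this (an assumption on my part: SQL Server, with @json already holding the full JSON built by the procedure):

```sql
-- Rough sketch only: split the JSON into ordered, fixed-size pieces so the
-- consuming loop can append them back together in ChunkIndex order.
DECLARE @json NVARCHAR(MAX) = N'...full JSON built by the procedure...';  -- placeholder
DECLARE @chunkSize INT = 2000;

WITH Chunks AS (
    SELECT 1 AS ChunkIndex,
           SUBSTRING(@json, 1, @chunkSize) AS ChunkText
    UNION ALL
    SELECT ChunkIndex + 1,
           SUBSTRING(@json, ChunkIndex * @chunkSize + 1, @chunkSize)
    FROM Chunks
    WHERE ChunkIndex * @chunkSize < LEN(@json)
)
SELECT ChunkIndex, ChunkText
FROM Chunks
ORDER BY ChunkIndex
OPTION (MAXRECURSION 0);
```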