Reading a text/pipe-delimited file
Application Type
Traditional Web

Hello, I'm requiring some expertise. I have researched and managed to get my app importing a pipe-delimited file, but it needs a little tweaking at the final hurdle which I can't seem to figure out. I am bringing the file in via an Upload widget, which I have assigned to a BinaryDataToText action. I then use a String_Split action with the Text field set to BinaryDataToText.Text and the Delimiter set to NewLine(). I then run a ForLoop on my String_Split.List, followed by another String_Split action where the Text field is set to BinaryDataToText.Text and the Delimiter set to "|". I then have an action to Create or Update my table, where I list each attribute within the Source and add the data via index, e.g.
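For reference, the intended logic of this flow roughly corresponds to the following Python sketch (the function name and the "Name" attribute are illustrative; the real logic lives in the OutSystems visual flow):

```python
# Rough Python equivalent of the flow above: split the file into rows,
# then split each row on "|" and map a field to an entity attribute.
# Assumes the second split runs on the *current row*, which is the intent.
def import_pipe_file(file_text):
    records = []
    for row in file_text.split("\n"):        # String_Split on NewLine()
        fields = row.split("|")              # String_Split on "|" (per row)
        records.append({"Name": fields[1]})  # CreateOrUpdate assignment
    return records
```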

Name     StringSplit_Record.List[1].Text.Value

This action then cycles back to the ForLoop.

This is working; however, it is only inserting one row of data. Before the start and at the end of each row in the pipe-delimited file there is no pipe/comma etc. This means that the contents of the final attribute consist of both the final entry of row 1 and the first entry of row 2. I am guessing that this is throwing things out of sync, so it is failing to upload all the data. Here is an example of the data file and my flow.

Can someone have a look through to see if they can spot the likely cause of this and help me remedy things?

Date is the first column and is included in the file, but it doesn't like it as it is just a set of numbers, i.e. 04062021, so I ignore it and hard-set it to CurrDate(); the same result is achieved. The final column is populated with the CCM contents plus the first entry of the next row. I.e. CCM is a person's name, so the contents of the CCM attribute are "Name 04062021", and then it does not proceed to insert row 2, likely because it doesn't know where row 1 ends.


Help is much appreciated. I am not a seasoned coder and am new to OutSystems.

Mike

Hi Mike,

Can you debug and check if String_Split.List has all the records?

An OML with the code would be great so I could debug it myself.

Regards

Hi Jose

OK, I have run the debugger and this is what I have found. (Apologies, I can't show you the actual screenshot as it contains secure data.) But the debugger is bringing back everything, and it looks like this on the String_Split after the ForLoop:

I stepped into the CreateOrUpdate action. I am ignoring the first attribute and hard-setting it to CurrDate as mentioned. That leaves 8 remaining attributes, and there are 4 rows in my sample data, meaning there are 32 items in total. The debugger returns all 32 attribute contents. Every 8th result (the final attribute) lists the value as both attribute 8 and attribute 1; it then continues sequentially. Now, in my Source entries I am listing each attribute 1 through 8, but the debugger continues to count, so instead of indexing attribute 1 of the next row as [1] it lists it as [9], [17], [25], etc., and I am not telling it to do anything with any indexes past [8]. Here are a couple of screenshots:

Thanks for Your help

Mike

Can you send me the file with only those 4 records that already have data visible?

Do you mean the original .txt file that is being imported? If so, I can't, as it contains real-world personal data. But it looks exactly the same as how it is debugged, shown above, but with names etc. changed.

Thanks

Mike

I need only 4 records like those displayed in the image. You can delete the rest.

I want to test with another flow to understand the issue.

Ah, I see. I've attached the file, but with dummy data.

agentInfoRatio3.txt

The problem is that String_Split reads NewLine() as a record ("") as well.


To overcome this, I added an If that excludes the 1st row and the empty records:


Moreover, there is a bug on String_Split2; the record to split is shown below. By default, OutSystems wrongly assigned it to BinaryDataToText.Text. :D
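Put another way, in Python terms the corrected loop would look something like this (a sketch of the logic only; the actual fix is in the visual flow):

```python
def parse_rows(file_text):
    parsed = []
    for i, row in enumerate(file_text.split("\n")):
        # The If: skip the 1st row and the empty records produced by NewLine()
        if i == 0 or row == "":
            continue
        # String_Split2 must split the current row, not BinaryDataToText.Text
        parsed.append(row.split("|"))
    return parsed
```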


Let me know if it works now.

Thanks for this, Jose. I'll play around with the If statement; however, I know that for bug 2, if I use String_Split.List.Current.Text.Value instead of the default BinaryDataToText equivalent, it doesn't upload the file and I get an error of "Index 1 is out of range".

I ran the debugger and can confirm no data is being fetched by the ForLoop.

I don't know how to overcome this.

Thanks

Mike

What's inside String_Split.List when you reach the 2nd String_Split?

No data, hence the index out of range; I get headers only.

In the original doc the new line is showing as CRLF. I believe this is a carriage return. Is there a different delimiter I should be using for new line?

Thanks

Mike

To me it is working fine. It should work...

Can you show me the code with the details on each action?

Perhaps my If statement is incorrect. Can I be a pain and ask what you put in your If statement? I can't access the actual code behind; I don't have access permissions, unfortunately, Jose.

String_Split.List.Current.Text.Value = ""
or
String_Split.List.CurrentRowNumber = 0

I owe you lots of beer!! My "empty" condition was different: I was using the built-in Empty function. It now works as it should. But there may be one other slight issue, as the actual file contains another row that I want to ignore; another condition telling it to ignore this row may be the answer. The actual file starts like this:


#fields:date|muID|muName|tvID|agentName|Site|Team|Team Manager ID|CCM

#sort:date,muID,muName,tvID,agentName,Site,Team,Team Manager ID,CCM


Followed by the empty row and then the data. So if I just tell it to ignore row 0 and row 1, I assume it will avoid any issues!
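For this layout, the skip logic could be sketched as follows (assuming the two "#"-prefixed header lines always come before the data):

```python
def data_rows(file_text):
    # Keep only real data rows: drop empty rows and the
    # "#fields:" / "#sort:" header rows at the top of the file.
    return [row for row in file_text.split("\n")
            if row != "" and not row.startswith("#")]
```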

Seriously though, you have been an amazing help. Thanks so much for teaching me and resolving my issue.

Mike


If the file always starts like that it will be easy to ignore, but I'm guessing the rows to ignore will be 0 and 4, plus the empties "".

Yup, I tested something similar. Carriage return counts as a record, and the empty row also counts as a record.
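A quick way to see both effects (using Python here purely to illustrate what the split produces):

```python
text = "a|b\r\nc|d\r\n"  # CRLF line endings plus a trailing newline
rows = text.split("\n")
# rows == ['a|b\r', 'c|d\r', ''] -- the trailing '' is the extra "record",
# and each row keeps a stray '\r' from the CRLF line ending.
clean = [row.strip("\r") for row in rows if row.strip("\r") != ""]
# clean == ['a|b', 'c|d']
```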


Ahhh, that's great. I have to add a whole bunch of other things into the flow to delete existing records that are not in the latest upload. There are actually 3 different files with different structures, totalling over 30,000 rows, so no doubt there will be more challenges along the way.

Thanks so much again Jose

Mike

If you have many records to exclude, you could consider only processing the rows that contain a pipe, using the Index() built-in function.
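In OutSystems, Index() returns -1 when the substring is not found, so the filter condition amounts to something like this (Python sketch):

```python
def is_data_row(row):
    # Equivalent to the condition Index(row, "|") <> -1:
    # process only rows that actually contain a pipe.
    return row.find("|") != -1
```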

Regards
