Azure_UploadFile
Input Parameters
file_path : Blob path (virtual directory) in the container where the file will be saved
storage_account_name : Azure storage account name
root_container : Root container of the blob
file_name : Filename to be uploaded to the Azure blob
file_content : Binary content of the file to be uploaded to Azure Blob Storage
azure_header_id : Azure header credential used for authentication
tenant_id : Azure account tenant ID
client_id : Azure account client ID
secret_id : Azure account secret ID
xms_version : Azure Storage service version; the latest is recommended. See https://learn.microsoft.com/en-us/rest/api/storageservices/versioning-for-the-azure-storage-services
xms_blobtype : Blob type of the file (for example, BlockBlob). For additional info, see https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
Output Parameters
message : Message describing the result of the process flow
file_url : Server URL of the uploaded file
Is_success : Flow status (True on success)
Sample Usage
The screenshot above is a sample of uploading an image to the blob. It uses the Upload widget to get the required filename and binary data; the other input parameters should already be set up in your Azure account.
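For reference, this action wraps the Put Blob REST operation. Below is a minimal Python sketch of that call; the placeholder values are assumptions, and in practice the action handles authentication for you.

    import requests

    # Placeholder values -- substitute your own Azure details.
    tenant_id = "<tenant_id>"
    client_id = "<client_id>"
    client_secret = "<secret_id>"
    account = "<storage_account_name>"
    container = "<root_container>"
    file_path = "images"     # blob "directory" inside the container
    file_name = "photo.png"

    # 1. Get an OAuth token via the client-credentials grant.
    token = requests.post(
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "https://storage.azure.com/.default",
        },
    ).json()["access_token"]

    # 2. Put Blob: upload the binary content.
    with open(file_name, "rb") as f:
        resp = requests.put(
            f"https://{account}.blob.core.windows.net/{container}/{file_path}/{file_name}",
            headers={
                "Authorization": f"Bearer {token}",
                "x-ms-version": "2021-08-06",   # xms_version
                "x-ms-blob-type": "BlockBlob",  # xms_blobtype
            },
            data=f.read(),
        )
    resp.raise_for_status()
    file_url = resp.url  # corresponds to the file_url output parameter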
Azure_DeleteBlob
file_path : Blob path where the file is saved
file_name : Filename of the file uploaded to the Azure blob
The screenshot above is a sample of deleting an image from the blob; the other input parameters should already be set up in your Azure account.
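For reference, the underlying Delete Blob REST call looks roughly like this (a sketch reusing the token and variables from the upload example above):

    # Delete Blob: same URL as the upload, but with the DELETE verb.
    resp = requests.delete(
        f"https://{account}.blob.core.windows.net/{container}/{file_path}/{file_name}",
        headers={
            "Authorization": f"Bearer {token}",
            "x-ms-version": "2021-08-06",
        },
    )
    resp.raise_for_status()  # the service replies 202 Accepted on success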
Azure_DownloadBlob
file_content : Binary content downloaded from the Azure blob
The screenshot above is a sample of downloading an image from the blob; it returns binary data that you can download or use. The other input parameters should already be set up in your Azure account.
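For reference, the underlying Get Blob REST call looks roughly like this (a sketch reusing the token and variables from the upload example above):

    # Get Blob: the response body is the binary file_content.
    resp = requests.get(
        f"https://{account}.blob.core.windows.net/{container}/{file_path}/{file_name}",
        headers={
            "Authorization": f"Bearer {token}",
            "x-ms-version": "2021-08-06",
        },
    )
    resp.raise_for_status()
    file_content = resp.content  # binary data you can save or reuse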
Azure_ListBlobs_With_Metadata
max_record : Maximum number of records to retrieve from Azure Blob Storage
next_mark : Continuation marker; when there is a next page, populate this with the "NextMarker" value from the previous JSON response
resource : F = Files only; D = Directories only; A = All
prefix : Filters the results to return only blobs whose names begin with the specified prefix
include blob metadata : Set to True to include each blob's metadata in the response
JSON_Output : JSON output converted from the XML response
The sample request above returns the file list from your root container, meaning all files uploaded to that container are returned, up to the max_record you set. next_mark is populated when there are more items to return than the current request included. This action also returns JSON text, so you can structure whatever output you want, for example the metadata of each blob. A sample JSON response is shown below.
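For reference, the underlying List Blobs REST call returns XML, which the action converts to JSON. Here is a rough Python sketch (reusing the token and variables from the upload example; the parameter values are placeholders):

    import xml.etree.ElementTree as ET

    params = {
        "restype": "container",
        "comp": "list",
        "maxresults": "100",    # max_record
        "prefix": "",           # prefix filter
        "include": "metadata",  # include blob metadata
        # "marker": next_mark,  # pass the NextMarker from the previous page
    }
    resp = requests.get(
        f"https://{account}.blob.core.windows.net/{container}",
        params=params,
        headers={"Authorization": f"Bearer {token}", "x-ms-version": "2021-08-06"},
    )
    root = ET.fromstring(resp.text)  # the raw response is XML
    names = [b.findtext("Name") for b in root.iter("Blob")]
    next_mark = root.findtext("NextMarker") or ""  # non-empty when more pages exist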
Azure_PostPut_Table_Storage
request_body: JSON string that acts as your Azure Table Storage "schema"-like format (your JSON request body; it should be a single JSON object, not a LIST)
is_update: Determines whether to POST or PUT your record
azure_table_name: The Azure table where the record will be stored
azure_account_name: Azure storage account name
row_key: The row key is a unique identifier for an entity within a given partition (you can use your entity's primary ID if you are syncing records from OutSystems to Azure) https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-the-table-service-data-model
partition_key: The partition key is a unique identifier for the partition within a given table (you can use the tenant name here, or whatever suits your flow's needs) https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-the-table-service-data-model
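For reference, a hedged Python sketch of the two REST calls this action chooses between (reusing the account and token from the earlier sketches; the table name and entity values are placeholders):

    import json

    table = "<azure_table_name>"
    entity = {                      # request_body: a single JSON object, not a list
        "PartitionKey": "TenantA",  # partition_key
        "RowKey": "1001",           # row_key, e.g. your entity's primary ID
        "Name": "Sample",
    }
    base = f"https://{account}.table.core.windows.net"
    headers = {
        "Authorization": f"Bearer {token}",
        "x-ms-version": "2021-08-06",
        "Accept": "application/json;odata=nometadata",
        "Content-Type": "application/json",
    }

    is_update = False
    if not is_update:
        # POST = Insert Entity
        resp = requests.post(f"{base}/{table}", headers=headers, data=json.dumps(entity))
    else:
        # PUT = Update Entity, addressed by PartitionKey and RowKey
        resp = requests.put(
            f"{base}/{table}(PartitionKey='TenantA',RowKey='1001')",
            headers={**headers, "If-Match": "*"},  # replace regardless of ETag
            data=json.dumps(entity),
        )
    resp.raise_for_status()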
Azure_BatchPOST_Table_Storage
is_initial: Flag indicating whether this is the initial call, which sets the BatchID and ChangeSet
is_final: Flag indicating whether this is the final call, which actually posts/puts the batch request
in_batch_id: A unique identifier for the batch request
in_changeset: A unique identifier for grouping multiple operations inside a batch
in_partial_request_body: The partial request body accumulated so far; the final request body for the batch process is built here
is_update: Determines whether to POST or MERGE your record
out_partial_request_body : Action output used for building the final request body; hold it in a temporary flow variable between iterations
out_changeset : Action output used for building the final request body; hold it in a temporary flow variable between iterations
out_batchid : Action output used for building the final request body; hold it in a temporary flow variable between iterations
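Under the hood, a batch is an Entity Group Transaction: a multipart/mixed request posted to the $batch endpoint. The sketch below shows roughly what gets assembled across the initial, partial, and final calls (placeholder GUIDs and entities, reusing the account and token from the earlier sketches):

    import json
    import uuid

    batch_id = f"batch_{uuid.uuid4()}"       # in_batch_id
    changeset = f"changeset_{uuid.uuid4()}"  # in_changeset
    table = "<azure_table_name>"

    def operation(entity, is_update=False):
        # One POST (insert) or MERGE (update) operation inside the changeset.
        verb = "MERGE" if is_update else "POST"
        url = f"https://{account}.table.core.windows.net/{table}"
        if is_update:
            url += f"(PartitionKey='{entity['PartitionKey']}',RowKey='{entity['RowKey']}')"
        return (
            f"--{changeset}\r\n"
            "Content-Type: application/http\r\n"
            "Content-Transfer-Encoding: binary\r\n\r\n"
            f"{verb} {url} HTTP/1.1\r\n"
            "Accept: application/json;odata=nometadata\r\n"
            "Content-Type: application/json\r\n\r\n"
            f"{json.dumps(entity)}\r\n"
        )

    # The initial call opens the envelope, each iteration appends one operation
    # (out_partial_request_body), and the final call closes and posts the body.
    body = (
        f"--{batch_id}\r\n"
        f"Content-Type: multipart/mixed; boundary={changeset}\r\n\r\n"
        + operation({"PartitionKey": "TenantA", "RowKey": "1", "Name": "A"})
        + operation({"PartitionKey": "TenantA", "RowKey": "2", "Name": "B"})
        + f"--{changeset}--\r\n--{batch_id}--\r\n"
    )
    resp = requests.post(
        f"https://{account}.table.core.windows.net/$batch",
        headers={
            "Authorization": f"Bearer {token}",
            "x-ms-version": "2021-08-06",
            "Content-Type": f"multipart/mixed; boundary={batch_id}",
        },
        data=body,
    )
    resp.raise_for_status()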
Action (BulkUpload)
temp_counter (int) : Counts how many records have been processed, because the maximum limit per batch is 100 rows
Temp_request_body (text) : Holds the output value of the Azure_BatchPOST_Table_Storage action while building the final request body
temp_changeset (text) : Holds the current iteration's change_set
temp_batchid (text) : Holds the current iteration's batch_id
TempUpload (text) : The structure I created for the upload (note: always include PartitionKey and RowKey in your structure)
is_initial (boolean) : Flag to check whether the flow is on its initial iteration or not
As seen in the screenshot above, the implementation is composed of multiple Assign widgets that build the final request body, plus some logic to handle the batching.
Let's start with the loop. What you iterate over depends on your business logic requirements; as a sample I used the User table. On entering the iteration flow, the first step is to assign the "is_initial" value.
We set the is_initial flag based on the temp_counter, which means on the first run it will always be True.
Next, assign the entity values based on your business criteria; in my example I am using the User table.
Next, serialize the structure into JSON text format.
Then comes the action that does the actual processing and builds the request. As seen in the assignments, most of them are straightforward; the only addition is the "is_final" input parameter, because there is some logic here: first we check whether temp_counter has hit 99, which means we are at the limit of the Azure batch process, and the other condition is whether the current row is the final row.
The final step in my sample assigns the temporary values. Most of these assignments are also straightforward, except for "temp_counter": we check whether we have hit the limit, and if not, we increment temp_counter by 1. Once the counter hits 99, the next call is treated as final and Azure_BatchPOST_Table_Storage does the actual posting. The whole loop is summarized in the sketch below.
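As a rough summary of this flow, here is hypothetical Python pseudocode. get_users, the serialization step, and azure_batch_post_table_storage are stand-ins for your aggregate, the Serialize node, and the Azure_BatchPOST_Table_Storage action; the request_body parameter is an assumed input carrying the serialized entity (the real action's inputs may differ):

    import json

    def get_users():
        # Stand-in for your aggregate over the User table.
        return [{"PartitionKey": "TenantA", "RowKey": str(i), "Name": f"User {i}"}
                for i in range(250)]

    def azure_batch_post_table_storage(**kw):
        # Stand-in: the real action appends to the partial body and, when
        # is_final is True, posts the accumulated batch to Azure.
        return kw["in_partial_request_body"] + kw["request_body"], "batch_id", "changeset"

    rows = get_users()
    temp_counter = 0
    temp_request_body, temp_batchid, temp_changeset = "", "", ""

    for i, row in enumerate(rows):
        is_initial = temp_counter == 0                       # first row of a new batch
        is_final = temp_counter == 99 or i == len(rows) - 1  # 100-row batch limit

        temp_upload_json = json.dumps(row)  # Serialize of the TempUpload structure

        temp_request_body, temp_batchid, temp_changeset = azure_batch_post_table_storage(
            is_initial=is_initial,
            is_final=is_final,
            in_batch_id=temp_batchid,
            in_changeset=temp_changeset,
            in_partial_request_body=temp_request_body,
            request_body=temp_upload_json,  # assumed parameter name
            is_update=False,
        )

        # Increment until the limit; reset after a batch has been posted.
        temp_counter = 0 if is_final else temp_counter + 1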