[Cool Data Mover] Cool Data Mover FAQ 


Q: How do you handle users or system tables with the Cool Data Mover? 

A: The Cool Data Mover does allow you to move the Users as well. When an entity from Service Center is added to the package, the Cool Data Mover adds it by default as a Lookup entity. However, the User entity can be set manually to Move, which will move/copy the users as well. This also applies to the Group entity. 

For this to work correctly you need to include the Tenant and Espace entities as Lookup in your package as well. 

  • Set the match attributes for User to: Tenant_Id, Username and Is_Active 

  • Set the match attributes for Tenant to: Name, Espace_Id and Is_Active 

  • Set the match attributes for Espace to: SS_Key and Is_Active 

We recommend creating a separate package for moving the users, in which the User entity is a Move entity, using the entities and match options specified above. This will copy the users correctly. 

In the application package you then include the same entities (User, Tenant, Espace) with the same match options; the only difference is that the User entity is included as a Lookup. 
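
For reference, the setup described above can be summarized as follows. This is only an illustrative sketch in Python (the dictionary layout and package names are not Cool Data Mover syntax); the entity modes and match attributes are the ones listed above.

    # Illustrative summary only; you configure this in the Cool Data Mover screens.
    user_package = {
        "User":   {"mode": "Move",   "match": ["Tenant_Id", "Username", "Is_Active"]},
        "Tenant": {"mode": "Lookup", "match": ["Name", "Espace_Id", "Is_Active"]},
        "Espace": {"mode": "Lookup", "match": ["SS_Key", "Is_Active"]},
    }

    application_package = {
        # Same entities and match options, but User is now a Lookup entity.
        **user_package,
        "User": {"mode": "Lookup", "match": ["Tenant_Id", "Username", "Is_Active"]},
    }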


Q: It’s important to have unique records to make sure they are correctly identified. However, when there are, for example, duplicate Users, how does the Cool Data Mover handle this? 

A: There are several ways to handle duplicate User records: 

  1. According to the match attributes, the users are identical (same Username, Tenant_Id, and Is_Active). This means the user will only ever log in with one of them: the active one. The others (probably all inactive) are never used anymore. You should be able to identify which of the duplicates are obsolete by looking at the last login values; the user with the most recent login date should be the only active one. All records pointing to an inactive duplicate should be re-linked/updated to the UserId of the active one (see the diagnostic sketch after this list). 

    This is the preferred solution because it keeps your user administration ‘clean’. Basically, it assumes that a username within a Tenant is unique and can be active or inactive. If an inactive user tries to register with the same username again, the inactive user must be reactivated. 

    If this is not an option, you have some other choices for exporting/importing them. 


  2. If you keep the match attributes as they are currently set (which do not uniquely identify the users), you can still export/import them with a special feature enabled. The first of the duplicate users will be imported, and the second, third, etc. will be matched to the first imported one. This means that only one user is imported, and all records linked to the duplicates will now point to that single imported user. Effectively, the CDM merges all duplicate users together. 
    This assumes that a duplicated user is the same person, which is very likely if an email address is used as a username. Be aware that if the duplicate users are not the same person, you end up with records from two different persons linked to the same username (login). 

    Since this modifies your data, it needs to be enabled separately on the import side. On the eSpace CDM_Datamover there is a site property called OverruleMatching. If you populate it with the value 99062, a checkbox becomes available next to the match validation button: validate the users and, when they are not unique, the checkbox appears so you can overrule the matching. 


  3. If you want to move the users as-is to the other environment, you can add the creation date as a match option. Because that will most likely result in unique users, you can proceed with the export/import. However, if you already have the same users in PRD and ACC with different creation dates, they will not match and you will end up with multiple entries for active and inactive users. 
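
The sketch below shows one way to list the duplicate users mentioned in option 1, so you can decide which UserId should survive before re-linking the other records. It is only a sketch and makes several assumptions: SQL Server as the platform database, the pyodbc driver, the system User table OSSYS_USER with TENANT_ID, USERNAME, IS_ACTIVE and LAST_LOGIN columns, and example connection details.

    # Lists duplicate (Tenant_Id, Username) combinations, most recent login first.
    # Assumptions: SQL Server, pyodbc, OSSYS_USER as the physical User table,
    # and an example connection string; adjust to your environment.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=mydbserver;DATABASE=outsystems;Trusted_Connection=yes"
    )
    rows = conn.cursor().execute(
        """
        SELECT u.TENANT_ID, u.USERNAME, u.ID, u.IS_ACTIVE, u.LAST_LOGIN
        FROM dbo.OSSYS_USER u
        JOIN (
            SELECT TENANT_ID, USERNAME
            FROM dbo.OSSYS_USER
            GROUP BY TENANT_ID, USERNAME
            HAVING COUNT(*) > 1
        ) d ON d.TENANT_ID = u.TENANT_ID AND d.USERNAME = u.USERNAME
        ORDER BY u.TENANT_ID, u.USERNAME, u.LAST_LOGIN DESC
        """
    ).fetchall()
    for r in rows:
        print(r.TENANT_ID, r.USERNAME, r.ID, r.IS_ACTIVE, r.LAST_LOGIN)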


Q: During the publishing of the Cool Data Mover solution an error is raised: ORA-01450: maximum key length (6398) exceeded. 

A: This occurs when the OutSystems platform runs on an Oracle installation that uses the default block size of 8 KB. Version 2.3.3 of the Cool Data Mover is compatible with this block size, so use at least that version. 
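
If you want to confirm which block size your Oracle database uses, the sketch below queries it. This assumes the python-oracledb driver, a user that is allowed to read V$PARAMETER, and example connection details.

    # Prints the database block size; 8192 corresponds to the 8 KB case above.
    # Connection details are examples; replace them with your own.
    import oracledb

    with oracledb.connect(user="outsystems", password="***", dsn="dbhost/ORCLPDB1") as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT value FROM v$parameter WHERE name = 'db_block_size'")
            print("db_block_size:", cur.fetchone()[0])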


Q: The export runs successfully, but we don't see any data in the output folder. The folder is empty. 

A: Please check that the IIS user has permission to write to the configured Main Data Mover directory. 
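
A quick way to verify this is to try writing a file to that directory while running under the same identity as the IIS application pool, for example with the small sketch below (the directory path is an example; use the one configured in the Cool Data Mover).

    # Simple write-access smoke test; run it under the IIS application pool identity.
    import os

    export_dir = r"D:\CDM\Export"  # example: use the configured Main Data Mover directory
    test_path = os.path.join(export_dir, "cdm_write_test.tmp")
    try:
        with open(test_path, "w") as f:
            f.write("ok")
        os.remove(test_path)
        print("Write access OK")
    except OSError as exc:
        print("No write access:", exc)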


Q: The import screen is stuck on "Preparing for import..." and no info is available on the logs... 

A: Make sure the OutSystems logging service is running on all front-ends. The import should then run and show up on the monitor page. 


Q: Where do I store the export files in an OutSystems PaaS environment? 

A: As part of the OutSystems PaaS offer, each front-end of your environments includes a folder with 2 GB of capacity where you can store and retrieve temporary files. 
Access to these folders is granted by default, and you and your apps can write temporary files to the specific folder at "D:\User\". 
 
Please note that this folder is emptied regularly. If you'd like to store files permanently, you can either integrate with a third-party storage system (e.g. AWS S3) or upload the contents of your files to the database.
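
As an example of the third-party storage option, the sketch below copies an export file from the temporary folder to an S3 bucket before the folder is cleaned up. It assumes boto3 with credentials already configured; the bucket name and file paths are examples.

    # Archive a Cool Data Mover export to S3 before the PaaS temp folder is emptied.
    # Bucket name and paths are examples; boto3 credentials must be configured.
    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        Filename=r"D:\User\CDM\export_20240101.zip",
        Bucket="my-cdm-exports",
        Key="cool-data-mover/export_20240101.zip",
    )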
 


Q: Where can I find more information about errors? 

A: The Cool Data Mover logs almost everything that happens during a data operation. You can find logs in the following places: 

  1. The General Log in Service Center; filter on module ‘CDM’. 

  2. The Error Log in Service Center; filter on the application Cool Data Mover. 

  3. Inside the Cool Data Mover, the monitor page lets you zoom in on the entity where the error occurred. 

  4. Inside the Cool Data Mover, the run logs give you extended logging of everything that happens during an export or import. (Note: make sure you disable the automatic cleanup option on that page and set the logging level to Progress.) 



Interesting tool. Congrats!

Q: The import operation times out every time it hits a certain large table.

A: During the import, the Cool Data Mover tries to match each record to see whether it already exists. All attributes marked as match attributes for that entity are used in a query on the table, which means the database server performs many scans of that table. Without the proper indexes, this can lead to timeouts when the table has a large number of records. To overcome this, create an index on the entity's table that covers all the matching attributes; a sketch follows below.
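
A sketch of creating such an index is shown below. It assumes SQL Server, the pyodbc driver, and an illustrative physical table name OSUSR_ABC_CUSTOMER whose match attributes are NAME and IS_ACTIVE; use the real physical table name and the match-attribute columns of your own entity.

    # Create a covering index on the match attributes of a large entity.
    # Table, column and index names are examples; adjust them to your entity.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=mydbserver;DATABASE=outsystems;Trusted_Connection=yes"
    )
    conn.autocommit = True  # DDL statement, no explicit transaction needed
    conn.cursor().execute(
        """
        CREATE NONCLUSTERED INDEX IX_OSUSR_ABC_CUSTOMER_CDM_MATCH
            ON dbo.OSUSR_ABC_CUSTOMER (NAME, IS_ACTIVE)
        """
    )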
