Hi Team,
I am using a yxdb file stored in S3 as a central database. My workflow runs on the server, and multiple users run it at the same time. Since there is no append option for a yxdb file, every time a user appends data I download the existing file, combine it with the new rows using a Union tool, and then overwrite the yxdb file in S3. So each append effectively creates a new file that replaces the old one. The problem is that I sometimes see data loss, and I am sure no data is lost anywhere in the workflow before the file is overwritten in S3. My question is: since multiple users run this workflow on the server in parallel and overwrite the same file, could that be the reason previous data is being lost from the yxdb file?
Appreciate any suggestions. Thanks in advance.
Yes, it's certainly possible that the timing of multiple users is causing data to be "lost", or more accurately, overwritten. Consider this sequence of events:

1. User A downloads the current yxdb file from S3.
2. User B downloads the same file a moment later, before A has finished.
3. User A unions their new rows with their download and overwrites the file in S3.
4. User B unions their new rows with their download (which does not contain A's rows) and overwrites the file again.

User A's rows are now gone, even though every individual workflow run completed correctly. This is a classic read-modify-write race condition.
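To make the timing concrete, here is a minimal Python sketch (an illustration only, not Alteryx or S3) that simulates two users doing the same download -> Union -> overwrite cycle on a shared in-memory "file" at the same time. One of the appended rows is silently lost:

```python
# Simulate two users doing a read-modify-write on the same "shared file".
# Each thread reads the current contents, appends its own row, then writes
# the whole thing back, exactly like download -> Union -> overwrite.
import threading
import time

shared_file = ["row-0"]              # stands in for the yxdb file in S3

def append_row(new_row):
    snapshot = list(shared_file)     # "download" the current file
    time.sleep(0.1)                  # both users are working at the same time
    snapshot.append(new_row)         # "Union" the new data with the download
    shared_file[:] = snapshot        # "overwrite" the file in S3

t1 = threading.Thread(target=append_row, args=("row-from-A",))
t2 = threading.Thread(target=append_row, args=("row-from-B",))
t1.start(); t2.start()
t1.join(); t2.join()

# Expected 3 rows, but one append is lost because the second writer never
# saw the first writer's row before overwriting.
print(shared_file)                   # e.g. ['row-0', 'row-from-B']
```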
Typically this type of activity would be better served by a hosted database of some kind instead of a file. If that is not an option, I'd suggest having each run upload a new file containing only its appended rows instead of overwriting the original. The full dataset then becomes the union of all the files in a given folder rather than one physical file; see the sketch below.
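Here is a rough sketch of that append-only pattern in Python with boto3, just to show the idea; the bucket name, prefix, and CSV format are hypothetical, and in Alteryx the equivalent is simply writing to an output path that includes a timestamp or run ID. Because every run creates a brand-new object, concurrent runs can never overwrite each other's rows:

```python
# Append-only pattern: each writer uploads a uniquely named object, and the
# "database" is the union of everything under the prefix. Bucket and prefix
# names below are placeholders.
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"                 # hypothetical bucket name
PREFIX = "central-db/appends/"       # hypothetical folder for append files

def append_rows(csv_rows: str) -> str:
    """Upload one run's rows as a new, uniquely named object."""
    key = f"{PREFIX}{datetime.now(timezone.utc):%Y%m%dT%H%M%S}-{uuid.uuid4().hex}.csv"
    s3.put_object(Bucket=BUCKET, Key=key, Body=csv_rows.encode("utf-8"))
    return key

def read_full_dataset() -> list[str]:
    """The full dataset is the union of every append file under the prefix."""
    rows = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            rows.extend(body.decode("utf-8").splitlines())
    return rows
```

If the folder eventually accumulates a large number of small files, a single scheduled off-hours job can consolidate them into one file, since at that point there is only one writer.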