This would be a game changer!
Thanks for the tip, Ned, we will give SQLite a try today. It's really great.
I think a few users were trying to read, write, and then append back to YXDB.
SQLite works like a charm! I really encourage other users to try this.
Thank you Ned! Been wondering about that!
Ned - Thanks for your response, and I understand where you are coming from. One difficulty we face (and one that I bet is quite common) is that setting up any additional software requires IT / admin rights for installation, maintenance, etc. I can't even download SQLite to try it out without IT's involvement. Hence, being able to append to the native Alteryx database would save us a fair amount of startup time.
That said, we will look into SQLite as a way of appending, as this is a function we need. Our files are too large to continue appending as CSV or XLSX, but our organization isn't really big enough to maintain a true enterprise data warehouse.
You don't need to download SQLite. It is built into the Alteryx Input and Output tools. No drivers or anything required.
Ned - correct me if I'm wrong, but I see about a 3x increase in file size if I go from YXDB to SQLite. Does that seem right? My input is simply a YXDB. No tools in between. Just a YXDB input and a SQLite output. But the file size goes from about 50MB to 150MB.
The YXDB file is compressed, which makes it smaller. That compression is part of what makes it so you can't append to one. The actual compression ratio will vary based on what data you write to it. SQLite files must stay appendable and queryable, which makes supporting compression very difficult or impossible.
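Outside of Alteryx, the appendable-file behavior described above can be sketched with Python's built-in sqlite3 module. The table and column names here are made up purely for illustration; the point is that a SQLite file can be reopened later and have rows added, with earlier data still queryable:

```python
import os
import sqlite3
import tempfile

# Use a fresh file in a temp directory for this sketch.
db_path = os.path.join(tempfile.mkdtemp(), "results.db")

# First "run" of a workflow writes some rows and closes the file.
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE IF NOT EXISTS results (run_id INTEGER, value TEXT)")
con.execute("INSERT INTO results VALUES (?, ?)", (1, "alpha"))
con.commit()
con.close()

# A later run reopens the SAME file and simply appends more rows --
# no rewrite of the existing data is needed.
con = sqlite3.connect(db_path)
con.execute("INSERT INTO results VALUES (?, ?)", (2, "beta"))
con.commit()

# Both runs' rows are present and queryable.
rows = con.execute("SELECT run_id, value FROM results ORDER BY run_id").fetchall()
con.close()
```

A compressed, block-oriented format like YXDB can't offer this cheaply, which is the trade-off Ned describes.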
Got it . . . thanks for all the info!
Very cool. Thanks, Ned!
Nice - glad I checked back in on this thread.
I was using a local MySQL instance the last few days so I could insert 200k parsed XML files... one at a time. It was way too slow doing this with the read-union-write YXDB method, so I started using MySQL and hated the fact that I needed to add another technology to the stack.
This will work great to build my data and then export as a YXDB for faster downstream processing for other processes.
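For what it's worth, if any of the loading ever happens outside Alteryx, the usual fix for one-row-at-a-time slowness is batching all the inserts into a single transaction. A minimal sketch with Python's sqlite3 module, using made-up file names and a hypothetical table standing in for the parsed XML fields:

```python
import os
import sqlite3
import tempfile

# Hypothetical parsed records standing in for fields extracted from XML files.
records = [("file_%d.xml" % i, i * 10) for i in range(1000)]

db_path = os.path.join(tempfile.mkdtemp(), "staging.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE parsed (source TEXT, value INTEGER)")

# One executemany inside one transaction, instead of a separate
# insert-and-commit round trip per file.
with con:
    con.executemany("INSERT INTO parsed VALUES (?, ?)", records)

count = con.execute("SELECT COUNT(*) FROM parsed").fetchone()[0]
con.close()
```

The per-commit disk sync is what dominates single-row inserts, so collapsing 200k commits into one transaction is where most of the speedup comes from.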