Error opening connect string: Microsoft Access Database Engine
Hi all,
I get the error below from the batch macro tool when I run the workflow.
Multiple Sheets Multiple Excel Files(60): Record #1: Tool #1: Error opening connect string: Microsoft Access Database Engine: External table is not in the expected format.\3274 = -328602519
There are 8 .xls files in a folder that I'm reading through a Directory tool. The workflow ran perfectly fine two days ago, but it suddenly started throwing this error today.
Please help me find a solution to get rid of this problem.
Thank you.
Best Regards,
Teja.
The issue could be that one of the files is not actually Excel but a CSV.
To avoid problems when you have mixed formats, filter according to the file type before the files reach the batch macro; a quick way to check is sketched below.
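If it helps, here is a rough Python sketch (standalone, or via the Alteryx Python tool) for checking whether each file in the folder really is a legacy .xls. The folder path is just a placeholder for your own. A genuine .xls is an OLE2 compound file starting with the bytes D0 CF 11 E0, whereas a renamed .xlsx/.xlsb starts with a ZIP signature, and a CSV is plain text:

```python
from pathlib import Path

# OLE2 compound-file signature shared by genuine legacy .xls files
OLE2_MAGIC = bytes.fromhex("D0CF11E0A1B11AE1")

folder = Path(r"C:\data\excel_inputs")  # placeholder: your input folder

for f in sorted(folder.glob("*.xls")):
    with open(f, "rb") as fh:
        header = fh.read(8)
    if header == OLE2_MAGIC:
        verdict = "OK (real .xls)"
    elif header[:4] == b"PK\x03\x04":
        verdict = "ZIP container (a renamed .xlsx/.xlsb?)"
    else:
        verdict = "not an Excel binary (CSV or other text?)"
    print(f"{f.name}: {verdict}")
```

Any file that doesn't report "OK" is a likely cause of the "External table is not in the expected format" error and can be excluded or converted before the macro runs.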
Hi All,
I know this is an old post, but I've started experiencing the same issue using Dynamic Input to process multiple (~30) large Excel binary (*.xlsb) files. The workflow had been working just fine until the last week or so, and the only change is additional data in some of the files. All files are in exactly the same standard format, with matching field names.
I've tried removing and re-installing 'AccessRuntime_x64_en-us.exe' (to enable the necessary drivers that allow Alteryx to read *.xlsb files) as well as removing and re-installing Alteryx.
I'm attaching screenshots of the error.
Has anyone else experienced the issue, and more importantly come up with a solution?
Many thanks,
Andrew
I may have solved the issue by taking the following steps:
1) Removing any non-standard alphanumeric characters from the input file headings. I also copied the headings from one file onto all the other files to make sure they were exactly the same (even though they "looked" and registered the same in Excel). The amended files remain in Excel binary format.
2) Splitting the workflow out by sales region to minimise the size of the data processed by each Dynamic Input tool, and then applying a Union tool at the end of the workflow to bring all the data back together (see attached screenshot example).
=> I have re-run my workflow several times now and, so far at least, the issue seems to have disappeared.
If interested, here is a bit more background on what I identified while working out my solution...
Step #1
- Initially, I tried converting the files to XLSX and amended the 'Directory' and 'Dynamic Input' tools accordingly. When I then tried to run the workflow, I got a "different schema" error. This made me think that, even though the file formats were exactly the same, Alteryx was interpreting them differently.
- The sales data inputs originated from a SAP query extracted into Excel, and several field names contained a full stop. The Excel binary (*.xlsb) connection interpreted this as a hash ("#"), whereas the standard Excel (*.xlsx) connection kept the field names with the full stop. A rough script for auditing headers across files follows below.
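For anyone wanting to check their own files, here is a Python sketch of the kind of header audit I did by hand. It assumes the pyxlsb package is installed (pandas uses it as the engine for *.xlsb files), and the folder path is a placeholder. It reads only the header row of each file, flags characters outside A-Z/0-9/space/underscore, and reports any file whose headers differ from the first file's:

```python
import re
from pathlib import Path
import pandas as pd  # requires the pyxlsb package for .xlsb support

folder = Path(r"C:\data\sales_inputs")    # placeholder: your input folder
flagged = re.compile(r"[^A-Za-z0-9_ ]")   # matches any character outside the allowed set

reference = None  # headers from the first file, used as the baseline
for f in sorted(folder.glob("*.xlsb")):
    # nrows=0 loads only the header row, so large files stay cheap to scan
    headers = list(pd.read_excel(f, engine="pyxlsb", nrows=0).columns)
    for h in headers:
        bad = flagged.findall(str(h))
        if bad:
            print(f"{f.name}: header {h!r} contains {sorted(set(bad))}")
    if reference is None:
        reference = headers
    elif headers != reference:
        print(f"{f.name}: headers differ from baseline {reference!r}")
```

In my case it was exactly these flagged characters (the full stops) that the *.xlsb connection rendered differently, so cleaning them up is what step 1 above amounts to.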
Step #2
- I started to look at connecting each of the 30+ files separately (very painful!), followed by a Union tool, but found a post suggesting a processing limit of around 30 connections, so I quickly abandoned this approach. It did get me thinking, though, that the Dynamic Input tool might be struggling with such large data volumes (all the files combined hold around 10 million records).
- Fortunately, my files were logically named by sales region, which made it easy to pick out each region separately in the 'Directory' tool. After amending the workflow for the first region, it was a simple copy and paste to apply the same approach to the other four regions. This also had the benefit of letting me validate smaller buckets of data should further errors occur. The same split-then-union idea is sketched below.
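Outside of Alteryx, the split-then-union approach looks roughly like the sketch below. The region prefixes, folder path, and column layout are placeholders, and pandas with the pyxlsb engine is assumed again; the point is simply reading each region's files as a separate, smaller batch and unioning everything at the end:

```python
from pathlib import Path
import pandas as pd  # .xlsb reading again assumes the pyxlsb package

folder = Path(r"C:\data\sales_inputs")             # placeholder: your input folder
regions = ["EMEA", "APAC", "NA", "LATAM", "ANZ"]   # placeholder region prefixes

region_frames = []
for region in regions:
    # Each region is read as its own small batch, like one Directory tool per region
    files = sorted(folder.glob(f"{region}*.xlsb"))
    batch = [pd.read_excel(f, engine="pyxlsb") for f in files]
    if batch:
        df = pd.concat(batch, ignore_index=True)
        df["Region"] = region  # tag rows so each batch stays traceable
        region_frames.append(df)
        print(f"{region}: {len(df)} rows from {len(files)} files")

# Final union of all regions, mirroring the Union tool at the end of the workflow
combined = pd.concat(region_frames, ignore_index=True)
```

The per-region row counts it prints are also a handy sanity check when an error only affects one bucket of data.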
