Is it possible to read Parquet files from Azure Spark on Databricks? I'm storing my files in a container in my Azure storage account and have created a table in Azure Databricks from those files. I can see the Parquet files in Alteryx, but I don't know how to read them.
Edit: I decided to create CSV files instead. Each Connect In-DB tool lets me select one CSV file, but Databricks splits a table into multiple files when I write it. What's the most efficient way to read all of those files at once (and then query the original file, since I assume it was split into smaller parts)?
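For anyone hitting the same split-file behavior: Spark (which backs Databricks) writes one table as a directory of `part-*` files rather than a single file. A minimal stdlib-only Python sketch of reading them back as one dataset is below; the `part-*.csv` naming and per-file headers are assumptions based on Spark's usual CSV output with `header=True`, so adjust the glob pattern to your container's actual layout.

```python
# Sketch: concatenate the CSV "part" files Databricks writes for one table
# back into a single dataset. Assumes each part file repeats the header row,
# which is Spark's behavior when CSV output is written with header=True.
import csv
import glob
import os


def read_split_csv(directory):
    """Read every part file in `directory` and return (header, all_rows)."""
    rows = []
    header = None
    # Sorted so rows come back in a stable part-file order.
    for path in sorted(glob.glob(os.path.join(directory, "part-*.csv"))):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)  # skip the repeated header
            if header is None:
                header = file_header
            rows.extend(reader)
    return header, rows
```

In Alteryx itself, pointing an Input Data tool at a wildcard path over the same directory achieves the equivalent without scripting; alternatively, coalescing the DataFrame to a single partition before writing produces one file, at the cost of funneling the write through one worker.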
Hi. Great guidance so far, thanks! I'm trying to do something similar and was able to create and test the ODBC connection. However, do you know what I need to put in the "Databricks URL" field when creating the "write driver" in the Alteryx In-DB connection? Unfortunately, you seem to need to set up the Write tab even when you only want to read.