Hi Alteryx team,
It would be great if it were possible to install, upgrade, and configure Alteryx Connect using scripts. This would enable us to deploy Alteryx Connect with script-based deployment tools such as UrbanCode Deploy. This functionality was recently requested by one of our customers, who has most of their application installations and upgrades automated this way.
Thank you very much for considering this idea.
Kind regards,
Jan
Hi Alteryx team,
In Connect, users can open workflows, open reports (such as Tableau), or use a data source in a workflow via the blue button on an asset page. With the new functionality for cataloguing APIs, would it be possible to implement this button for API endpoints as well, so that users could trigger the API directly from Connect?
Thank you very much.
Michal
Our Data Catalogue in Connect has about 2 million items (tables, views, columns).
I see the following issue:
Every query in this script that takes a column or table name as a parameter (e.g. src.TABLE_NAME='${query_table_name}' AND src.COLUMN_NAME='${query_column_name}') is executed once per column in the Data Catalogue, i.e. millions of times. This makes the process very slow, because it issues an enormous number of individual queries.
Could this process be optimized somehow?
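One possible optimization, sketched here in Python with the built-in sqlite3 module standing in for the real metadata database (the table layout and names are hypothetical, not Connect's actual schema), is to replace per-column lookups with a single query that fetches everything once:

```python
import sqlite3

# In-memory stand-in for the metadata source; schema and data are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src (TABLE_NAME TEXT, COLUMN_NAME TEXT, DESCRIPTION TEXT)")
con.executemany(
    "INSERT INTO src VALUES (?, ?, ?)",
    [("ORDERS", "ORDER_ID", "PK"), ("ORDERS", "AMOUNT", "total"), ("ITEMS", "SKU", "stock code")],
)

catalog_columns = [("ORDERS", "ORDER_ID"), ("ORDERS", "AMOUNT"), ("ITEMS", "SKU")]

# Slow pattern: one parameterized query per catalogued column
# (at 2 million items this means millions of round trips).
slow = [
    con.execute(
        "SELECT DESCRIPTION FROM src WHERE TABLE_NAME=? AND COLUMN_NAME=?", (t, c)
    ).fetchone()[0]
    for t, c in catalog_columns
]

# Faster pattern: fetch all rows in one query and match in memory
# (or, equivalently, do a set-based JOIN on the database side).
lookup = {
    (t, c): d
    for t, c, d in con.execute("SELECT TABLE_NAME, COLUMN_NAME, DESCRIPTION FROM src")
}
fast = [lookup[(t, c)] for t, c in catalog_columns]

assert slow == fast  # same results, far fewer round trips
```

The point is not the specific library but the access pattern: one bulk fetch (or join) scales with the size of the result set, while per-column queries scale with the number of catalogued columns.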
We had an idea: a yxdb file already contains all the metadata, including the field descriptions added by earlier Formula or Select tools. It would be very useful to have that in Connect, so that you could easily build a data dictionary from Alteryx Designer.
We have some scheduled workflows that use the Download tool for API calls. When we scrape them with Connect, there are no references to those calls in the "Relationships" or "Data Connections" areas.
Even if this is something that would be difficult for Alteryx to scrape from a workflow, I would love the ability to create entities like this and manually connect them as a data source. For example, we have some partners where 10 to 15 API calls are required to pull the entire data set. It would be great to know which workflows reference those APIs, so that if changes are made on their side we can easily identify which workflows are impacted.
I published a workflow today that scans a directory for files and then pushes them to a Dynamic Input. I noticed that on Connect there is no relationship there anywhere referencing that we are scraping a directory.
Connect has the DB inputs and outputs and the "File Input" that references what the Dynamic Input is originally set to find, but there is nothing referencing the directory other than any notes I have added to the description.
Here is why I think this may be important: we connect to a folder where FTP files are dumped by a PowerShell script, and we want Alteryx to go through that folder and pull and upload files as needed. However, the file that existed in the original input (when we created this workflow) no longer exists, so the visual relationship in Connect breaks as soon as that file is dropped. If there is no tool that captures this kind of connection to a directory, the ability to designate a dynamic connection in place of the original file would be a good alternative. We just want future users to be able to reference a location, rather than a file that has not existed in a while.
@OndrejC summarizes Connect as "a state-of-the-art Data Catalog with a social twist".
I would define it more broadly as a data analytics social network, a collective intelligence, or #datahive...
I would propose adding Analytics projects, their related documents, and the relevant relationship data into the picture, so that any team can track their Data Science project progress there...
Here is a nice process-flow view of a DS process:
docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/overview
A Microsoft Project view of the Analytics projects at hand...
As per the title, when selecting "Use in workflow" a user should have the option to connect with the In-DB tools when applicable, rather than being stuck with a green Input tool using an ODBC connection. Ditto when searching from the omnibox in Designer.
A lot of information is not captured when colleagues run numerous SQL queries on the server: Oracle, SQL Server, Azure, and others...
Would there be a clever way of capturing and archiving all of these executed queries?
It may be wise to collect these for several reasons.
So I found out that a similar feature is now available in SQL Server 2016 --> https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-usin...
Clicking the ‘Use in Workflow’ button in Connect downloads a workflow file with the table, which I can open in Alteryx Designer.
When I open it in Alteryx it asks for a user ID and password, but there is no option for SSO (single sign-on):
If I leave the username and password blank, the connection fails and I get an error.
In the case of SSO the connection string should be:
SSO: odbc:Driver={HDBODBC};SERVERNODE=saphXXX.europe.company.com:30415;Trusted_connection=yes;
instead of this:
UseridPW: odbc:Driver={HDBODBC};SERVERNODE=saphXXX.europe.company.com:30415;UID=USERNAME;PWD=__EncPwd1__
Please note that ALL of our users use SSO, so the current functionality is unusable for us.
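To make the difference between the two variants concrete, here is a minimal Python sketch that assembles both connection strings; the helper function is my own illustration (not any Alteryx or SAP API), and the server address is taken from the example above. The only substantive change in the SSO case is dropping UID/PWD in favour of Trusted_connection=yes:

```python
def hana_odbc_string(servernode, uid=None, pwd=None):
    """Build an ODBC connection string for the SAP HANA driver (HDBODBC).

    With no credentials supplied, fall back to SSO via Trusted_connection=yes;
    otherwise embed UID/PWD, which is what Designer currently emits.
    (Hypothetical helper for illustration only.)
    """
    base = f"Driver={{HDBODBC}};SERVERNODE={servernode}"
    if uid is None and pwd is None:
        return f"{base};Trusted_connection=yes;"
    return f"{base};UID={uid};PWD={pwd}"

# SSO variant (what Connect should generate for SSO-only sites):
print("odbc:" + hana_odbc_string("saphXXX.europe.company.com:30415"))
# Credential variant (what Connect generates today):
print("odbc:" + hana_odbc_string("saphXXX.europe.company.com:30415", "USERNAME", "__EncPwd1__"))
```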
I have raised this as a bug with support, but as usual they asked me to post it here.
This option should also ask whether the connection is an In-DB connection, per this post: