Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by showing the input dimension constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images).
Question: do you plan to let the user choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
Hello,
We use the pre-SQL statement of the Input tool to set some connection parameters. Sadly, we cannot do that in an In-DB workflow. This would be a total game-changing feature for us.
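As an illustration, here is the kind of pre-SQL we run today on an in-memory connection and would like to run In-DB (the parameter values are just examples, not our actual settings):
-- Session parameters set before the main query runs
SET hive.execution.engine=tez;
SET tez.queue.name=analytics;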
Best Regards,
Simon
Currently Alteryx does not support writing to SharePoint document libraries.
However, writes succeed sometimes and fail at other times.
Please see attachment where we ran into an issue.
See this link for additional information.
We need official support for reading and writing to SharePoint document libraries.
It is an important output target, and it will become more so as Alteryx enhances its reporting capabilities.
From Wikipedia:
In a database, a view is the result set of a stored query on the data, which the database users can query just as they would in a persistent database collection object. This pre-established query command is kept in the database dictionary. Unlike ordinary base tables in a relational database, a view does not form part of the physical schema: as a result set, it is a virtual table computed or collated dynamically from data in the database when access to that view is requested. Changes applied to the data in a relevant underlying table are reflected in the data shown in subsequent invocations of the view. In some NoSQL databases, views are the only way to query data.
Views can provide advantages over tables:
- Views can represent a subset of the data contained in a table. Consequently, a view can limit the degree of exposure of the underlying tables to the outer world: a given user may have permission to query the view, while denied access to the rest of the base table.
- Views can join and simplify multiple tables into a single virtual table.
- Views can act as aggregated tables, where the database engine aggregates data (sum, average, etc.) and presents the calculated results as part of the data.
- Views can hide the complexity of data. For example, a view could appear as Sales2000 or Sales2001, transparently partitioning the actual underlying table.
- Views take very little space to store; the database contains only the definition of a view, not a copy of all the data that it presents.
- Depending on the SQL engine used, views can provide extra security.
I would like to create a view instead of a table.
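For instance, instead of materializing a table, the Write Data In-DB tool could emit something like this (view and table names are illustrative):
CREATE VIEW sales_2001 AS
SELECT *
FROM sales
WHERE sale_year = 2001;
The database then stores only the query definition, not a copy of the data.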
Alteryx Designer is slow when using In-DB tools.
We use Alteryx 2019.1 on Hive/HortonWorks with the Simba ODBC driver configured with SSL enabled.
Here is a comparison of In-DB versus in-memory:
We found that Alteryx opens a new connection for each action:
- First link to the Join tool = 1 connection.
- Second link to the Join tool = 1 connection.
- Click on the canvas = 1 connection.
Each connection takes about 2.5 seconds... It really slows down the Designer.
Please keep the first connection alive instead of closing it and creating a new one for each action in the Designer.
Please could you enhance the Alteryx Download tool to support SFTP connections with private key authentication as well? This is not currently supported, and all of our SFTP use cases use private keys.
I would like to suggest a fix to allow the Connect In-DB tool's custom SQL to read common table expressions. As of 2018.2, the SQL fails because In-DB tools wrap everything in a SELECT * statement; since a CTE needs to start with WITH, the wrapped SQL errors out. Fixing this would be a huge help, instead of having to rewrite long, complex SQL as nested subselects!
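To illustrate the failure (table and column names here are made up), a query like this is valid on its own:
WITH recent_orders AS (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT * FROM recent_orders;
But once the In-DB tool wraps it as SELECT * FROM ( ... ) AS t, the WITH ends up inside a derived table, which databases such as SQL Server reject.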
Hello all,
As of today, you can populate the Drop Down tool in the Interface category with a query launched from an in-memory connection. I would really appreciate the ability to use an In-DB connection instead.
Why? Because today it means managing two connections instead of one, and finding ways to manage both of them on Server, etc. Simplicity is key.
Best regards,
Simon
Statistics are used by a lot of databases to improve query speed (Hive, Vertica, etc.). It may be interesting to have an option on the Write Data In-DB or Data Stream In tools to calculate the statistics (something like a check box for it).
Example on Hive:
ANALYZE TABLE {table} COMPUTE STATISTICS;
ANALYZE TABLE {table} COMPUTE STATISTICS FOR COLUMNS;
Alteryx can connect to data sources using fat clients and ODBC, but not JDBC. If JDBC support could be added to the product, it could remove the need to install fat clients.
I noticed through the ODBC driver log that Alteryx ignores the type of database I specify. It tests every single database type to find the right one, and THEN runs the queries to get the metadata.
Here is an example. I chose a Hive In-DB connection. Reading the Simba logs, I find these lines:
Mar 01 11:37:21.318 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select USER(), APPLICATION_ID() from system.iota
Mar 01 11:37:22.863 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select USER as USER_NAME from SYSIBM.SYSDUMMY1
Mar 01 11:37:23.454 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select * from rdb$relations
Mar 01 11:37:23.546 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select first 1 dbinfo('version', 'full') from systables
Mar 01 11:37:23.707 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select #01/01/01# as AccessDate
Mar 01 11:37:23.868 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: exec sp_server_info 1
Mar 01 11:37:24.093 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select top (0) * from INFORMATION_SCHEMA.INDEXES
Mar 01 11:37:24.219 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: SELECT SERVERPROPERTY('edition')
Mar 01 11:37:24.423 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select DATABASE() as `database`, VERSION() as `version`
Mar 01 11:37:24.635 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select * from sys.V_$VERSION at where RowNum<2
Mar 01 11:37:25.230 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select cast(version() as char(10)), (select 1 from pg_catalog.pg_class) as t
Mar 01 11:37:25.415 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select NAME from sqlite_master
Mar 01 11:37:25.756 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select xp_msver('CompanyName')
Mar 01 11:37:26.156 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select @@version
Mar 01 11:37:26.376 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: select * from dbc.dbcinfo
Mar 01 11:37:26.522 INFO 5264 HardyDataEngine::Prepare: Incoming SQL: SELECT @@VERSION;
I can understand Alteryx trying everything when it doesn't know the database type (e.g., the in-memory visual query builder), but here I have selected the Hive database, and I lose more than 5 seconds for nothing.
Hello all,
According to Wikipedia:
https://en.wikipedia.org/wiki/Join_(SQL)
CROSS JOIN returns the Cartesian product of rows from tables in the join. In other words, it will produce rows which combine each row from the first table with each row from the second table.[1]
Example of an explicit cross join:
SELECT *
FROM employee CROSS JOIN department;
Example of an implicit cross join:
SELECT *
FROM employee, department;
The cross join can be replaced with an inner join with an always-true condition:
SELECT *
FROM employee INNER JOIN department ON 1=1;
For us Alteryx users, it would be very similar to Append Fields, but In-DB.
Best regards,
Simon
It would be awesome if there were a Cross Tab In-DB option, because right now I have to stream out millions of records just to build a cross tab.
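For what it's worth, an In-DB cross tab could be pushed down to the database as conditional aggregation, something like this (table and column names are invented for the example):
SELECT customer_id,
       SUM(CASE WHEN sale_month = 'Jan' THEN amount ELSE 0 END) AS jan_amount,
       SUM(CASE WHEN sale_month = 'Feb' THEN amount ELSE 0 END) AS feb_amount
FROM sales
GROUP BY customer_id;
That keeps the pivot in the database instead of streaming the records out.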
As of today, for a full refresh, I can:
- create a new table
- overwrite a table (this drops and then creates the new table)
But sometimes the workflow fails, and the old table is dropped while the new one is not created. I then have to modify the tool (setting it to "create a new table") to launch it again, which may be a complex process in companies. After that, I have to change it back to "overwrite".
What I want:
- create a new table: error if the table already exists
- overwrite a table: error if the table doesn't exist
- overwrite a table: no error if the table doesn't exist (easy in SQL with DROP IF EXISTS; see the sketch below)
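A minimal sketch of that third option, assuming a generic SQL dialect and invented table names:
-- Never fails, whether or not the table is already there
DROP TABLE IF EXISTS sales_full;
CREATE TABLE sales_full AS
SELECT * FROM staging_sales;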
Thanks!
Hello all,
In the help, we can read that:
https://help.alteryx.com/current/designer/write-data-db-tool
Update/Delete is currently only supported for SQL Server ODBC connections.
I don't know about you, but while SQL Server is widely used for transactional workloads, in analytics... well... I have seen it used only once in several dozen contexts!
Maybe it would be cool to make this work on many more databases?
Best regards,
Simon
When I create a new table in an In-DB workflow, I want to specify some constraints, especially the primary key/foreign key (PK/FK).
For PK/FK, the UX could be either a selection of fields from the flow or a free-text field (to let the user enter a constant).
From Wikipedia:
In the relational model of databases, a primary key is a specific choice of a minimal set of attributes (columns) that uniquely specify a tuple (row) in a relation (table).[a] Informally, a primary key is "which attributes identify a record", and in simple cases are simply a single attribute: a unique id.
So, basically, PK/FK helps in two ways:
1/ Checking for duplicates, i.e. whether an inserted value is legitimate
2/ Improving the query plan, especially for joins
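As an illustration of what the tool could generate, here is a hypothetical CREATE TABLE with both kinds of constraint (all names invented):
CREATE TABLE orders (
    order_id    INT NOT NULL,
    customer_id INT NOT NULL,
    amount      DECIMAL(10,2),
    PRIMARY KEY (order_id),
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
);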
Currently, when one uses the Google BigQuery Output tool, the only options are to create a table or append data to an existing table. It would be more useful if there were a way to replace all the data in the table rather than appending. Having the option to overwrite an existing table in Google BigQuery would be optimal.
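For reference, BigQuery's standard SQL already expresses this in a single statement, so the tool could presumably generate something like the following (dataset and table names are illustrative):
CREATE OR REPLACE TABLE analytics.sales_snapshot AS
SELECT * FROM analytics.sales_staging;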
Hello,
Here a use case :
I work on projects A and B with Alteryx in In-DB mode.
My coworker works only on project B and has no rights to the data of project A.
When using temporary tables in Alteryx, we both create them in the default database. The issue is that my coworker can see my temporary data from project A, which is not safe.
Solution: allow me to specify the database/schema when I create my temporary table.
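A sketch of the difference, with invented schema and table names (the exact DDL Alteryx generates will differ by database):
-- Today: the temporary table lands in the shared default database
CREATE TABLE default.ayx_tmp_001 AS SELECT * FROM project_a.sales;
-- Wanted: a user-chosen schema that only project A members can read
CREATE TABLE project_a_tmp.ayx_tmp_001 AS SELECT * FROM project_a.sales;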
Hello,
According to Wikipedia:
A partition is a division of a logical database or its constituent elements into distinct independent parts. Database partitioning is normally done for manageability, performance or availability reasons, or for load balancing. It is popular in distributed database management systems, where each partition may be spread over multiple nodes, with users at the node performing local transactions on the partition. This increases performance for sites that have regular transactions involving certain views of data, whilst maintaining availability and security.
Well, basically, you split your table into several parts according to a field. It's very useful in terms of performance when your workflows load deltas or when all your queries are based on a date (e.g., my table helps me follow my sales month by month, so I partition the table by month).
So the idea is to support that in Alteryx; it would add good value, especially in In-DB workflows.
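For example, on Hive, a month-partitioned table could be declared like this (table and column names are invented):
CREATE TABLE sales (
    customer_id INT,
    amount      DECIMAL(10,2)
)
PARTITIONED BY (sale_month STRING);
Queries that filter on sale_month then read only the relevant partitions.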
Best regards,
Simon
Hello all,
As of today, an In-DB join can only be done with an equality operator.
Example: table1.customer_id = table2.customer_id
That is sufficient most of the time. However, sometimes you need to perform another kind of join operation (especially with calendars, period tables, etc.).
Here is an example of a clause you can find in existing SQL that solves that case:
inner join calendar on calendar.id_year_month between fact.start_period and fact.end_period
(The workaround I use today: I make a full Cartesian product with a join on 1=1 and then filter the rows for the BETWEEN, as in the sketch below.)
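Spelled out, that workaround looks like this (table names as in the example above):
-- Cross join via an always-true condition, then filter
SELECT *
FROM fact
INNER JOIN calendar ON 1=1
WHERE calendar.id_year_month BETWEEN fact.start_period AND fact.end_period;
A native non-equi join would avoid having to express this as a Cartesian product first.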
Other comparison operators (<, >, etc.) would also be useful.
This can be very useful for solving the most difficult issues. Note that a product like Tableau already offers this feature.
Best regards,
Simon
Hi,
Currently, loading large files (over 100 MB) to PostgreSQL takes an extremely long time. For example, writing a 1 GB file to PostgreSQL takes 27 minutes! This is seriously impacting our ability to use Alteryx as an ETL tool for loading our target Postgres data warehouse. We would really like to see bulk load capacity to Postgres supported by Alteryx to help alleviate the performance issues.
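For context, PostgreSQL's native bulk path is the COPY command, which is what a bulk loader would typically drive (the table name and file path here are just examples):
-- Server-side bulk load from a CSV file
COPY target_table FROM '/tmp/extract.csv' WITH (FORMAT csv, HEADER true);
COPY typically loads data far faster than row-by-row INSERTs.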
Thanks,
Vijaya