I am having trouble publishing workflows that use the Python tool to Alteryx Server.
I was able to publish a very simple workflow that just prints "Hello" successfully.
But as soon as the Python tool reads an incoming data stream, it throws an error.
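For reference, the Python tool in the workflow is essentially the standard read/process/write pattern. Here is a minimal sketch, reconstructed from the cell contents visible in the traceback below; the df.head() step is a placeholder for whatever processing the real workflow does:

```python
# Minimal sketch of the Python tool code (reconstructed from the traceback below).
from ayx import Alteryx

print("Hello")

# Convert the incoming data stream on connection "#1" to a pandas DataFrame
df = Alteryx.read("#1")

# Placeholder for the real processing
df.head()

# Send the DataFrame out on output anchor 1
Alteryx.write(df, 1)
```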
Attached is the workflow for which I get this error:
Message Type | Process or Tool Id | Details |
Information | -1 | Running at a Low Priority. |
Information | 10 | 1 record was output |
Information | 3 | Hello |
Information | 3 | Error: unable to read data (D:\Alteryx\Workspace\Service\Staging\1924_2f3dc8405af04eb2a7d8679132e681bf\__StageTemp\Engine_62712_88eb15595617416f8493e230acbc8407~\723f0afd-1f07-4d3c-aa9b-4786cda84b3f\0b0fa0b2280a09639e2059e56c8fa9328f4d76ab48e081dfdc80da7f0984f94e.yxdb) |
Information | 3 | ERROR: reading input data "#1" |
Error | 3 |
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-4077f55ec3cf> in <module>
----> 1 df = Alteryx.read("#1")

d:\alteryx\bin\miniconda3\envs\jupytertool_venv\lib\site-packages\ayx\export.py in read(incoming_connection_name, debug, **kwargs)
     33     When running the workflow in Alteryx, this function will convert incoming data streams to pandas dataframes when executing the code written in the Python tool. When called from the Jupyter notebook interactively, it will read in a copy of the incoming data that was cached on the previous run of the Alteryx workflow.
     34     """
---> 35     return __CachedData__(debug=debug).read(incoming_connection_name, **kwargs)
     36
     37

d:\alteryx\bin\miniconda3\envs\jupytertool_venv\lib\site-packages\ayx\CachedData.py in read(self, incoming_connection_name)
    304         try:
    305             # get the data from the sql db (if only one table exists, no need to specify the table name)
--> 306             data = db.getData()
    307             # print success message
    308             print("".join(["SUCCESS: ", msg_action]))

d:\alteryx\bin\miniconda3\envs\jupytertool_venv\lib\site-packages\ayx\Datafiles.py in getData(self, data, metadata)
    498         if data is None:
    499             # read in data as a list of numpy ndarrays
--> 500             data = self.connection.read_nparrays()
    501             # check if data is a list of numpy structs
    502             elif isinstance(data, list) and all(

RuntimeError: DataWrap2WrigleyDb::GoRecord: Attempt to seek past the end of the file

Error | 3 |
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-5-c42a15b2c7cf> in <module>
----> 1 df.head()

NameError: name 'df' is not defined

Error | 3 |
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-6-88dbeb60e977> in <module>
----> 1 Alteryx.write(df,1)

NameError: name 'df' is not defined
Hi @Ashish
I was able to run this workflow successfully both locally in Designer and on the Server.
However, I saw the same error you did when I had the validate box checked while saving.
It appears the workflow still runs successfully even without passing validation, so you can probably continue on for now, but please also forward this in an email to support@alteryx.com so it gets logged.
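One other thing worth noting from your log: only the first RuntimeError is the real failure; the two NameErrors are just knock-on effects because df never gets assigned when Alteryx.read fails. If that cascade is noisy during validation runs, a defensive guard like the sketch below can keep the later cells from erroring. This is just a workaround idea, not an official Alteryx recommendation, and the empty-DataFrame fallback is an assumption about what makes sense for your workflow:

```python
# Sketch of a defensive guard around the read (a workaround idea, not official behavior).
# If the staged/cached input cannot be read (as happens during validation here), fall back
# to an empty DataFrame so the downstream cells do not raise NameError.
from ayx import Alteryx
import pandas as pd

try:
    df = Alteryx.read("#1")
except RuntimeError as err:
    print("WARNING: could not read input #1; using an empty DataFrame instead:", err)
    df = pd.DataFrame()

# Only do the real work and write output when we actually got data
if not df.empty:
    df.head()
    Alteryx.write(df, 1)
```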