I'm looking for examples of how others manage the dependencies used by workflows that are built with the Python SDK and run on Server. So far I haven't found any posts here in the Community forums, or presentations from anyone who has gone down this road.
Our development process uses a venv so the workflow/app behaves consistently, but the packaging instructions in the SDK make clear that the venv itself is not shipped with the package. Instead, the requirements.txt is used at install time to pull in the dependencies. Does installing the YXI package create a new venv on the user's machine?
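For context, here is a minimal sketch of the mechanism a per-tool installation would involve: creating an isolated venv and (with pip bootstrapped) installing from the requirements.txt into it. The directory name is hypothetical, and this is only an illustration of the standard-library mechanism, not of what the YXI installer actually does.

```python
import venv
from pathlib import Path

# Hypothetical location; the actual YXI install path is decided by Alteryx.
env_dir = Path("example_tool_env")

# with_pip=True would also bootstrap pip, so an installer could then run
# `pip install -r requirements.txt` against this environment's interpreter.
venv.EnvBuilder(with_pip=False, clear=True).create(env_dir)

# Every venv carries a pyvenv.cfg pointing back at the base interpreter.
print((env_dir / "pyvenv.cfg").exists())  # → True
```

If the installer does something like this per tool, each tool's dependencies stay isolated; if it installs into one shared environment instead, that is exactly where the clashes I describe below would come from.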
Is the same true if the tool is deployed to Server? Could Server use a venv if needed or desired?
With potentially hundreds of custom Python workflows running on the same server, I can foresee dependency clashes as external libraries are updated and their behaviour changes. We can obviously track updates, implement testing (automated with Alteryx), and update the workflows/apps as needed, but some of these will be business critical and we can't risk leaving any apps 'behind'.
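One mitigation for the clash scenario above, regardless of how Server resolves environments, is to require exact version pins in every tool's requirements.txt so an upstream release can't silently change behaviour. A quick stdlib check for unpinned entries might look like this (the package names and versions are illustrative only):

```python
import re

# Example requirements.txt content; names/versions are made up for illustration.
requirements = """\
pandas==1.4.2
requests==2.27.1
numpy==1.22.3
"""

# Treat a line as "pinned" only if it uses an exact `==` specifier.
pin = re.compile(r"^[A-Za-z0-9._-]+==[^=]+$")
unpinned = [
    line for line in requirements.splitlines()
    if line.strip() and not pin.match(line.strip())
]
print(unpinned)  # → []
```

A check like this could run as part of the automated testing mentioned above, failing the build for any tool that leaves a dependency floating.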
To restate the key question: when a workflow or app is published to our internal Server, does Server use a venv specific to that workflow/app, or does it use the same Python environment for all apps/workflows?