Hi Team,
I have a requirement wherein we need to generate massive amounts of synthetic data for a particular data model. We already have the value ranges, expected field calculations, etc.
I expect to generate over 1B records. Is this achievable with Alteryx?
Sounds like a lot of data, but in general, yes, Alteryx is great for generating random data sets.
Use a "Generate Rows" tool to generate as many rows as you need, then a Formula tool to generate random data for each field. Use the Math > RandInt(n) or Math > Rand() functions in your Formula expression.
If you have ranges, something like [RangeBeginning] + RandInt([RangeEnd] - [RangeBeginning]) will give you an integer in your range.
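For anyone who wants to prototype the same logic outside Alteryx (for example in the Python tool), here's a minimal sketch of that range formula. It assumes, as the Alteryx formula does, that RandInt(n) behaves like a uniform integer draw from 0 to n:

```python
import random

# Python equivalent of the Alteryx expression:
#   [RangeBeginning] + RandInt([RangeEnd] - [RangeBeginning])
# random.randint(0, n) draws a uniform integer in [0, n],
# so adding the range start yields a value in [begin, end].
def random_in_range(range_begin, range_end):
    return range_begin + random.randint(0, range_end - range_begin)

samples = [random_in_range(10, 20) for _ in range(1000)]
```

Every sample will fall inside the inclusive range, including both endpoints over enough draws.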
Thanks.
Is there a Rand() function that allows me to set a range of values?
[Edit: removing duplicate solution - same as @jdunkerley79's]
@JohnJPS & @jdunkerley79 have given you the solution, but there is one thing that I would like to add to this due to the amount of data that you're dealing with.
Play around with the order of tools to increase speed. Under "Workflow Properties > Runtime" you can turn on Performance Profiling to see how long each tool takes. Generally, generate your categorical variables first, as that involves less data. You may also find that moving a join can significantly reduce your workflow's runtime. I.e. don't try to join a billion rows to a billion rows on a field; instead randomise/sort/organise each side and then join by record position.
Kane