I am Shawn and I work with CommScope, a telecommunications company. I am looking to connect with other internal audit teams to share use cases for Alteryx. We have been using Alteryx for a little over six months.
I've used it in a variety of ways to help clients review financials for the month-end close and annual audits based on account rules (must be a debit balance, variance over prior year, variance to budget, should not be zero, etc.). I've also used it to review transaction-level details and payroll data, and to create data tables for Excel-based financials that are fed from pivot tables. The sky is the limit. Feel free to reach out if you have any specific questions or needs.
Anyway, ... for the risk rank it is a simple 1-to-3 scale based on whether we know something is a stronger indicator (like being paid more than 30 days after termination or before being hired) versus a complement indicator (not much on its own, but important when other things are at play, like being paid by physical checks that can be endorsed).
The score each test contributes to the risk ranking may change (adjusted by hand) based on what we learn during the review/follow-up, for example the number of actual problems found by a test versus false positives. In principle the scale can go as high as needed for tests that become actual smoking guns with fingerprints! 😉
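To make the weighting idea concrete, here is a minimal sketch in plain Python. The indicator names and weights are my own illustrations, not the actual tests used; the point is simply that strong indicators carry more weight than complement indicators, and the weights can be adjusted by hand as follow-up results come in.

```python
# Hypothetical weights on the 1-to-3 scale described above:
# stronger indicators score high, complement indicators score low on their own.
INDICATOR_WEIGHTS = {
    "paid_after_termination": 3,   # strong indicator
    "paid_before_hire": 3,         # strong indicator
    "paid_by_physical_check": 1,   # complement indicator
}

def risk_score(flags):
    """Sum the weights of the indicator tests that fired for one record."""
    return sum(INDICATOR_WEIGHTS[f] for f in flags)

# A record that fired a strong indicator plus a complement indicator:
score = risk_score(["paid_after_termination", "paid_by_physical_check"])  # 3 + 1 = 4
```

Records can then be sorted by score so the review effort goes to the highest-ranked items first.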
On the predictive test, the idea was that the system would choose the factors of importance on its own (every time it ran). From the early results we found that the higher the complexity of the expense report, the higher the chances of approval (likely a rubber stamp). By complexity I mean things like: time separation between the first and last expense included, number of locations (one city vs. several countries), number of line items, etc.
Other interesting factors were the seniority of the submitter and the number of direct reports to the reviewer/approver.
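The complexity signals listed above are easy to derive once the line items of a report are in hand. A minimal sketch, assuming each line item is a dict with hypothetical keys `date` and `city` (the real field names will depend on your expense system):

```python
from datetime import date

def complexity_features(line_items):
    """Derive the complexity signals mentioned above from one expense report.

    Assumes each line item is a dict with hypothetical keys
    'date' (a datetime.date) and 'city'.
    """
    dates = [li["date"] for li in line_items]
    return {
        "span_days": (max(dates) - min(dates)).days,  # first-to-last expense gap
        "n_locations": len({li["city"] for li in line_items}),
        "n_lines": len(line_items),
    }

# Illustrative two-line report spanning two cities over eight days:
report = [
    {"date": date(2021, 3, 1), "city": "Dallas", "amount": 120.00},
    {"date": date(2021, 3, 9), "city": "London", "amount": 480.50},
]
features = complexity_features(report)
```

Features like these can then be fed to whatever model picks the factors of importance on each run.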
The example of Benford's Law was quite intriguing. A follow-up question on this: let's say I test my data against Benford's Law and my results are significantly off (say '1' is occurring 50% of the time, as opposed to the roughly 30% Benford's Law predicts). What conclusions does one draw next?
Greetings. If '1' is occurring at a 50% level, the next steps would be:
1) Look at the vendors that make up the 50%: can you isolate which ones are driving the outliers?
2) If you can isolate these vendors, ask yourself whether it makes sense that they are outliers...
a) maybe the vendor sequences all their invoice # to begin with '1'
b) maybe the invoice number isn't a 'real' invoice number (like a telecom vendor that begins the invoice # with 1(800)...)
3) If it doesn't make sense for a vendor to have a high number of invoice #'s beginning with '1'....take a look at some of the invoices, talk to AP, Google the vendor for more info. Dig deeper by asking around.
4) You can also fine-tune your workflow to run the analysis only on higher-risk vendors... say, vendors without a PO number, etc.
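For anyone wanting to run the comparison behind step 1, the Benford check itself is short. This is a sketch in plain Python (function names are my own), using the standard Benford expectation log10(1 + 1/d) for first digit d, which gives about 30.1% for digit 1:

```python
import math
from collections import Counter

def benford_expected(d):
    """Expected first-digit frequency under Benford's Law: log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def first_digit_profile(amounts):
    """Observed first-digit frequencies (digits 1-9) for positive amounts."""
    digits = [int(str(a).lstrip("0.")[0]) for a in amounts if a > 0]
    total = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Compare observed vs. expected for each digit; large gaps (like 50% vs.
# ~30% for digit 1) flag the population for the vendor-level digging above.
profile = first_digit_profile([123, 234, 150, 98, 1200])
```

Grouping the same profile by vendor (rather than over the whole population) is what lets you isolate which vendors are driving the excess of leading 1s.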