I'm building a test harness that runs a variety of algorithms against a variety of data sets for a specific problem I'd like to test, and this failure is causing incomplete results in my testing.
For example, I have three versions of the same data: a clean mixed dataset (numeric and categorical variables), a version I've converted to all numerics (using the vtreat R package), and an all-categorical dataset.
I'd like to run a variety of algorithms against each of those three sets of data, and Naive Bayes fails when using Cross Validation against my categorical and mixed datasets.
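For comparison, here is a minimal scikit-learn sketch of the behavior I'd expect (the data and names are hypothetical, and this is a different tool than the one failing for me): encoding the categorical columns to numerics inside the pipeline, roughly what vtreat does on the R side, lets cross-validation run Naive Bayes on mixed data.

```python
# Sketch: cross-validating Naive Bayes on a mixed dataset by one-hot
# encoding the categorical columns first. Toy data; all names hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy mixed dataset: one numeric and one categorical feature.
df = pd.DataFrame({
    "age": [25, 32, 47, 51, 38, 29, 44, 36],
    "color": ["red", "blue", "red", "green", "blue", "green", "red", "blue"],
    "label": [0, 1, 0, 1, 1, 0, 1, 0],
})

# One-hot encode the categorical column; pass the numeric one through.
# sparse_threshold=0.0 forces dense output, which GaussianNB requires.
encode = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), ["color"])],
    remainder="passthrough",
    sparse_threshold=0.0,
)
model = Pipeline([("encode", encode), ("nb", GaussianNB())])

# Cross-validation runs because the encoder is refit inside each fold.
scores = cross_val_score(model, df[["age", "color"]], df["label"], cv=4)
print(len(scores))  # one accuracy score per fold
```

The key point is that the encoding step lives inside the pipeline, so each cross-validation fold fits its own encoder rather than failing on raw categorical input.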
When can we expect Cross Validation to work across predictive algorithms that don't support CV natively?