These days, if you're following any of the "big data" trends, you've almost certainly encountered a dictionary's worth of new terms that seek to explain what big data is and why it matters to you: terms like "distributed", "MapReduce", "in-memory", and "shuffle", among many others. Taken together, these terms form a kind of standard language for describing the problem space that big data presents in a way that is universally meaningful. That's great, right? Except that if you're really new to the whole big data thing (or even just the regular data thing), the suggestion that any of these terms is "universally meaningful" is likely to annoy you, at the very least. I hear you, believe me.