
On August 26, Alteryx hosted its first Twitter chat, which featured our ACEs and involved many other Community members! This conversation revolved around six key questions that covered digital transformation, the role of Analytic Process Automation™, the democratization of data and the challenge of upskilling. You can check out the rich discussion by visiting the #AlteryxChat hashtag on Twitter, and read more about the first three questions in the chat on our INPUT blog.

 

Here on the Data Science blog, we thought it’d be worth digging into question #5 a bit more deeply: 



[Image: question #5 from the #AlteryxChat Twitter chat]

 

 

A research paper I read recently has also led me to ask: Could process automation empower humans not just by helping us avoid dull tasks, but also by fundamentally changing the way we think? Considering automated processes as collaborators with humans, not merely as simple replacements, opens up a whole new realm of possibilities for both humans and algorithms. 

 

Let’s cover the essentials, go for a drive 🚗 and get a little philosophical 🤔



Less Tedium, More Creativity

Everyone is rightly excited about the potential of automation to take over the dull bits of everyday work.

 

There’s another perk to changing the role of error-prone humans:

 

We humans have fancy brains capable of complex reasoning, but we still make mistakes. Tedious tasks can become so mind-numbing that bored humans start to blunder. Fortunately, algorithms don’t get fatigued; they continue cleaning, classifying and predicting as requested, row after row.






Full Autopilot vs. Human Involvement 🚗

Automation frees human brain power to focus on the tasks where it’s best applied.

 

But here’s an interesting dilemma: Could certain kinds of human involvement benefit everyone affected by the application of a predictive algorithm?

 

Imagine this (only slightly) futuristic scenario. You’re behind the steering wheel of a self-driving car. The car is on “autopilot” and is driving you on a familiar route. You’re supposed to stay alert as the car drives, but there’s also that email on your phone that you really need to answer. You pick up your phone and start writing, ignoring what the car is doing. Automation frees you from driving (a usually dull, non-intellectual task) so you can focus on a more human-appropriate task that involves critical thinking and language skills currently beyond AI’s capabilities. Automated driving may also reduce accidents, given that 94% of serious crashes are said to be caused by human error. 

 

But automated driving technology isn’t yet perfected or common. There’s still a “human in the loop” of driving decision-making, contributing critical thinking the technology can’t yet match.



[Image from the SAE]



If Automation is 💯, What is Lost?

Someday, however, we’ll have widespread, skilled automated driving technology. When mundane tasks like driving get automated, what incentive does the human watching from the sidelines have to stay engaged? Why not write that email instead of watching the road?

 

Even as AI handles more and more complex tasks, humans will still be, well, human. We still need to feel valued so that we stay motivated to engage with the outcomes of automated processes. If we could build a perfect self-driving car and ensure always-ideal driving conditions, there would be zero incentive for a human to watch the road. 

 

Similarly, if we could use AI to perfectly select applicants for mortgage loans, or the ideal placement of ads in various media to generate leads, or the location of new retail stores — why would the humans involved in these processes spend additional time thoroughly researching the applicants, media or locations? 




 


Humans might be tempted to just say, “Oh, I’m sure the AI got it right.” We could lose the important nuances and advanced critical thinking that humans bring to those processes.



Making Decisions in Collaboration with Automation 🤔

Researchers have been exploring this delicate balance. They’re considering whether, when and how to maintain humans’ role in processes and decisions that, in theory, could become perfectly automated, but that would still benefit from human collaboration. 

 

A recent paper, “The Allocation of Decision Authority to Human and Artificial Intelligence,” takes a hard look at what humans and AI both bring to the table in decision-making processes:

 

 … we consider a principal [an organizational leader] who faces a choice as to whether to give a human agent or an AI authority in making a decision. How does the introduction of the AI affect human effort? When AIs predict well, might humans decrease effort too much (‘fall asleep at the wheel’)? When should the AI or the human have the right to make the final decision? Are ‘better’ AIs in a statistical prediction sense necessarily more profitable for an organization?
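To make that “fall asleep at the wheel” idea concrete, here’s a toy simulation of my own (a sketch under simple assumptions, not the paper’s formal model). It assumes that once an AI looks reliable enough, the human grows complacent and rarely double-checks its output, so an error slips through whenever the AI is wrong and goes unchecked:

```python
# Toy model of "falling asleep at the wheel" (my sketch, not the
# paper's formal model). Assumption: once the AI seems reliable
# enough, the human becomes complacent and rarely checks its output.
# An error slips through only when the AI is wrong AND unchecked.

def uncaught_error_rate(ai_accuracy, trust_threshold=0.9,
                        catch_when_alert=0.9, catch_when_complacent=0.05):
    complacent = ai_accuracy >= trust_threshold  # "the AI's got this"
    p_catch = catch_when_complacent if complacent else catch_when_alert
    return (1 - ai_accuracy) * (1 - p_catch)

for acc in (0.80, 0.85, 0.89, 0.92, 0.95, 0.99):
    print(f"AI accuracy {acc:.2f} -> uncaught errors {uncaught_error_rate(acc):.3f}")
```

Under these made-up numbers, a 92%-accurate AI watched by a complacent human lets through more errors (about 7.6%) than an 89%-accurate AI watched by an alert one (about 1.1%). That’s exactly the flavor of the paper’s question about whether a statistically “better” AI is necessarily more profitable.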

 

These researchers do not conclude that humans ought to make all decisions. They aren’t anti-technology or fearmongering. If anything, they reveal skepticism about humans’ ability to set aside their biases and accept new perspectives, and they propose that AI can help us move beyond our mental habits.  

 

What’s also cool about this research is its different way of thinking about AI. The authors suggest we might not always want the most technically high-performing models that routinely offer near-perfect predictions and that might automate away the human role (what they call a “replacement AI”). Sometimes that’s fine, but “imperfect” suggestions from AI might be more beneficial in some situations, they say.




 


An “augmentation AI” that performs reasonably well and informs humans, but doesn’t take over the final decision-making, could be the most productive (and profit-maximizing) choice in many situations. Though imperfect from a technical standpoint, this collaboration keeps humans motivated to stay engaged in evaluating data from their uniquely insightful, valuable perspective. This human/AI “augmentation” preserves both the efficient, boredom-reducing automation of decisions and the human motivation to maintain awareness and catch errors. 
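What might that division of authority look like in practice? Here’s a minimal sketch, with invented interfaces, names and numbers (none of this comes from the paper): a replacement AI’s prediction is the decision, while an augmentation AI only recommends and a human keeps the final call.

```python
# A minimal sketch of "replacement" vs. "augmentation" designs for a
# loan decision. The interfaces and numbers are invented for
# illustration; they are not from the paper.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    label: str         # "approve" or "deny"
    confidence: float  # model's estimated repayment probability

def toy_model(credit_score: int) -> Recommendation:
    # Stand-in for a real predictive model.
    p_repay = min(credit_score / 850, 1.0)
    return Recommendation("approve" if p_repay > 0.7 else "deny", p_repay)

def replacement_ai(credit_score: int) -> str:
    # The model's output IS the decision; no human in the loop.
    return toy_model(credit_score).label

def augmentation_ai(credit_score: int,
                    review: Callable[[Recommendation], str]) -> str:
    # The model only recommends; a human keeps final authority.
    return review(toy_model(credit_score))

def human_reviewer(rec: Recommendation) -> str:
    # A human who double-checks borderline cases instead of rubber-stamping.
    if 0.6 < rec.confidence < 0.8:
        return "escalate for manual underwriting"
    return rec.label

print(replacement_ai(640))                   # model decides alone: "approve"
print(augmentation_ai(640, human_reviewer))  # human intervenes on the borderline case
```

The difference is small in code but large in practice: in the augmentation design, the human’s judgment still matters, so there’s a reason to stay engaged.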



🍦 or 🏋️? Annoying Humans on Purpose with AI 😡

Surprisingly, these researchers suggest that in some scenarios, it might even be productive and profit-maximizing to use what they call “unreliable AI” or “antagonistic AI.” 

 

Unreliable AI might be ideal, they say, when there’s a need to keep humans especially motivated to stay engaged. Imagine you’re back in the self-driving car, but now the car has a “feature” that sporadically turns off autopilot mode for one minute at unknown intervals. You don’t know when that will happen, so you keep your hands on the wheel and maintain situational awareness instead of writing that email. You might hate that feature of the car, but you’d have to admit that it keeps you alert. Similarly, an unreliable AI would ensure some human engagement and motivation by offering less-than-perfect performance, with human intervention required to correct its occasional errors.
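As a thought experiment in code, here’s what that sporadic-handoff “feature” might look like (a toy sketch with invented numbers, not any real autopilot system):

```python
# Toy sketch of an "unreliable AI" handoff schedule. Because the
# driver can't predict when autopilot will disengage, they have to
# stay alert for the whole trip. All numbers are invented.

import random

random.seed(7)  # reproducible demo

def unannounced_handoffs(trip_minutes=120, p_handoff_per_minute=0.02):
    """Return the minutes at which autopilot randomly hands control back."""
    return [t for t in range(trip_minutes)
            if random.random() < p_handoff_per_minute]

print("Autopilot disengages at minutes:", unannounced_handoffs())
```

The unpredictability is the point: the human’s expected engagement covers the whole trip, not just the handoff minutes.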

 

The “antagonistic AI” goes a step further by making decisions that actively “antagonize” or frustrate the humans who receive the AI’s results. These decisions would be known to conflict with the humans’ existing biases, forcing them to reconsider their preferences and think harder about why they want to make a certain decision. 

 

Imagine now that you’ve input your destination — the local ice cream parlor — into your self-driving car’s navigation system, and it responds, “Really? Are you sure? No, let’s go to the gym instead,” and starts driving you to the gym. You’d have to actively override the system to get back on track for your ice cream. While you might find the car’s response pretty annoying, you’d also (maybe reluctantly) have to ask yourself: Where should I be going right now: to get ice cream or to work out? 




 


The antagonistic AI makes decisions that humans may dislike so that the humans have to participate more deeply in decision-making — and, perhaps, will reevaluate their tendencies and biases in the process. The researchers use the example of a hiring manager whose AI tools suggest job candidates whose characteristics conflict with the manager’s biases (e.g., affinity bias, the common tendency to favor people with backgrounds similar to one’s own). The manager may find these suggestions frustrating, but they will have to make a more reasoned effort to explain to themselves and others why those less-similar candidates should or should not be considered for the job. Ultimately, antagonistic AI’s ability to help humans make stronger decisions based on more thorough reasoning could support organizational goals and increase profit.
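To make the hiring example tangible, here’s a hypothetical sketch of an “antagonistic” shortlist builder (the fields, weights and scores are all invented): it deliberately penalizes similarity to the manager so that strong-but-dissimilar candidates surface, and the manager has to articulate a reason to pass on them.

```python
# Hypothetical sketch of an "antagonistic" shortlist builder. It
# deliberately down-weights similarity to the hiring manager so that
# affinity bias can't quietly dominate the list. All fields, weights
# and scores are invented for illustration.

candidates = [
    {"name": "A", "fit_score": 0.90, "similarity_to_manager": 0.95},
    {"name": "B", "fit_score": 0.88, "similarity_to_manager": 0.20},
    {"name": "C", "fit_score": 0.85, "similarity_to_manager": 0.90},
    {"name": "D", "fit_score": 0.84, "similarity_to_manager": 0.15},
]

def antagonistic_shortlist(pool, size=2, similarity_penalty=0.3):
    ranked = sorted(
        pool,
        key=lambda c: c["fit_score"] - similarity_penalty * c["similarity_to_manager"],
        reverse=True,
    )
    return ranked[:size]

for c in antagonistic_shortlist(candidates):
    # Rejecting a surfaced candidate would require a documented reason.
    print(c["name"], "surfaced despite low similarity to the manager")
```

The friction is deliberate: like the gym-instead-of-ice-cream detour, it forces a moment of reflection before the human overrides.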

 

 

How Humans May Change with AI Complements 😃 🤖

We are only now starting to see how automation will free up time, creativity and innovation opportunities so humans can explore and design entirely new things.

 

But as the research discussed here shows, maybe there’s still another philosophical layer to these automated AI processes that we don’t usually consider, something beyond maximizing technical performance. These technologies can do more than just reduce tedium while they complement human effort. They may also have the potential to change the way we think, in ways simple and profound. Even less technically ideal models might help us explore our reasoning and decisions by offering a new perspective — and help us to become more insightful, thoughtful people along the way.

 


Thanks for the great insights on Twitter, @AbhilashR, @ThizViz, @dataMack, @HeatherMHarris, @estherb47, @RolandSchubert, and many others who contributed! 

 

 

Blog header image by Clem Onojeghuo on Unsplash.

Susan Currie Sivek
Senior Data Science Journalist

Susan Currie Sivek, Ph.D., is the data science journalist for the Alteryx Community. She explores data science concepts with a global audience through blog posts and the Data Science Mixer podcast. Her background in academia and social science informs her approach to investigating data and communicating complex ideas — with a dash of creativity from her training in journalism. Susan also loves getting outdoors with her dog and relaxing with some good science fiction. Twitter: @susansivek
