
Data Science

Machine learning & data science for beginners and experts alike.
SydneyF
Alteryx Alumni (Retired)

A few months ago, I sent my dad the article 20 Top Lawyers Were Beaten by Legal AI in a Controlled Study, which (as the title suggests) discusses a study on how AI can be applied to the field of law, and how it performs against professional lawyers. An implication of this article is the potential to replace lawyers with AI for many common legal needs, such as contract review or writing wills.

 

It's an interesting article and an application of AI I spend a lot of time thinking about. It might seem pretty innocent that I shared it with my dad, and it would be, except that my dad is a lawyer.

 

Yes, I was kind of trying to get a rise out of him (it’s all affectionate, I promise).

 

Right now, it seems like hardly a week goes by without encountering an article claiming that AI is about to take all our jobs (and then turn into Skynet and kill us all). AI is being applied across many diverse fields, and it can often feel like the ultimate goal of any AI project is to outperform humanity.

 

An explicit version of AI vs. Human is often seen in the gaming world, where advanced AI systems are pitted against professional gamers as a benchmark of AI's abilities and sophistication.

 

Why Are We Teaching AI to Play Games?

 

It seems like for as long as researchers have been exploring and developing artificial intelligence, they have also been teaching their AI to play games.

Despite what you might think, it is decidedly not a waste of time and money to teach AI to play games.

 

War Games (1983)

In games, there are inherent rules and rewards, which makes games a "scorable" proxy for complex tasks that is easy to teach to AI. Games help developers track the progress of AI, and they provide a safe (no real lives at stake) yet robust way to test an AI system, because its performance is quantifiable via score.

 

Often the pinnacle of testing a gaming AI is to have it play against a human champion. 

 

Human vs. Machine

 

One of the most notable early faceoffs between a human champion and AI happened in 1996 and 1997, when IBM's Deep Blue played world chess champion Garry Kasparov. Although Kasparov lost the first game of their first match in 1996, he was able to adapt to Deep Blue's approach to chess and win three of the six games, taking the match.

 

Kasparov agreed to a rematch against Deep Blue in 1997. This match also included six games. Deep Blue made an unexpected move in game two – it passed on the opportunity to capture an exposed pawn in favor of a move with more long-term benefits. Kasparov was perplexed by this very human-seeming move, and it threw him off his game (funny story: this move may have been the result of a bug). Regardless of what sparked the move, Kasparov ended up losing the match.

 

By modern standards, Deep Blue's AI was pretty rudimentary. To play chess, Deep Blue was coded to use a brute-force method, analyzing many different sequences of moves before making its own. When Deep Blue defeated Kasparov, people speculated that the next benchmark for AI excellence in gaming would be the Chinese game of Go, which would be difficult for a computer to play with the same methods, due to the much higher number of possible moves and sequences in Go. People speculated it would be hundreds of years before a Go-playing AI could defeat a human champion.
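
To give a sense of what that brute-force approach looks like, here is a minimal, hypothetical sketch of a depth-limited minimax search in Python. This is not Deep Blue's actual code (Deep Blue ran a far more sophisticated search on specialized hardware); the evaluate, legal_moves, and apply_move functions are placeholder assumptions standing in for a real chess engine.

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    # Exhaustively score every sequence of moves down to a fixed depth and
    # return the best evaluation the current player can force.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static evaluation of the position
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves)
    return max(scores) if maximizing else min(scores)

Exhaustive look-ahead like this is feasible in chess, where a typical position has roughly 35 legal moves, but Go's branching factor of around 250 legal moves per position makes the same approach blow up almost immediately.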

 

AI defeating human Go champions, comparable to Deep Blue vs. Kasparov, started to become mainstream in 2015 with DeepMind's AlphaGo. First, AlphaGo defeated European Go champion Fan Hui (2015); then in 2016 AlphaGo took on South Korean champion Lee Sedol (at the time ranked second in international titles) in a highly publicized match; and in 2017 it faced Chinese champion Ke Jie (then ranked number one in the world by multiple Go associations).

 

AlphaGo was based on a far more complex framework than its spiritual predecessor, Deep Blue. AlphaGo leveraged deep learning to learn how to play the game of Go. A more recent iteration, AlphaGo Zero, was trained entirely by being given the rules and then playing the game against itself. This framework has since been applied to the games of Chess and Shogi as well as Go in the newest iteration, AlphaZero.
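
As a rough illustration of the self-play idea only (AlphaGo Zero actually pairs a deep neural network with Monte Carlo tree search; this toy uses a plain lookup table on tic-tac-toe), here is a hypothetical sketch in Python. The agent is given nothing but the rules, plays games against itself, and nudges its value estimate for every position it visited toward the final outcome.

import random

# Winning lines on a 3x3 board, indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

values = {}                 # board string -> estimated value for player "X"
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate (arbitrary choices)

def choose(board, player):
    moves = legal_moves(board)
    if random.random() < EPSILON:            # occasionally explore a random move
        return random.choice(moves)
    best_score, best_move = None, None
    for m in moves:                          # otherwise exploit current estimates
        nxt = board[:m] + player + board[m + 1:]
        v = values.get(nxt, 0.0)
        score = v if player == "X" else -v
        if best_score is None or score > best_score:
            best_score, best_move = score, m
    return best_move

for _ in range(20000):                       # self-play training loop
    board, player, history = "." * 9, "X", []
    while winner(board) is None and legal_moves(board):
        m = choose(board, player)
        board = board[:m] + player + board[m + 1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for pos in history:                      # push each visited position toward the result
        old = values.get(pos, 0.0)
        values[pos] = old + ALPHA * (outcome - old)

After enough games, the values table encodes which positions the agent has learned to prefer, entirely from playing itself; swap the table for a deep network and add tree search, and you have the rough shape of the AlphaZero family.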

 

Chess and Go are two popular examples of AI winning against humans, but there are many more examples (like Jeopardy) of AI defeating humans in turn-based games where the AI only needs to process one thing at a time and is given adequate time to do so.

 

More recently, researchers have focused on video games as the next great arena for AI. Video games are a challenge because the players need to respond in real time, and often need to divide attention between many different aspects of the game. OpenAI and DeepMind have been featured in the news this year for their AI playing Dota 2 and Starcraft 2, respectively.

 

Although these video-game-playing AI systems have had some considerable success, they have not yet dominated their respective games in the same way their turn-based brethren have. For a more in-depth overview of gaming AI, past and present, read AIs are Better Gamers than Us, and That's Okay by Jamie Rigg at Engadget.

 

Recovering from Defeat at the Hands of AI

 

In each encounter between an AI and a human champion, to some extent, the champion fighting for humankind is defending the honor of our species. Losing is hard. Losing to an opponent that is not human is even harder.

 

In 1996 and 1997, the pressure was high for Kasparov, a highly respected and admired world chess champion. Ahead of the matches, publications ran headlines like "The Brain's Last Stand." Even though Kasparov went into the matches confident, he was visibly frustrated and even rattled by an unexpected move from Deep Blue.

 

My parents have often told me that when you start an argument with a two-year-old, you've already lost. Similarly, if you get emotional while facing an AI, you've already lost. Many people attribute Kasparov's loss to how Deep Blue was able to get into his head.

 

Credit: Stan Honda, AFP, Getty Images

Following his loss, Kasparov actually accused IBM of cheating, creating somewhat of a conspiracy theory around the match. Surely, AI wasn't developed enough to make these kinds of moves or develop novel strategy yet, right? The only explanation was human interference. At least in this scenario, Kasparov would have lost to another (devious) person, and not to an existentially threatening computer.

 

A person feels the pressure and context of a situation. When an AI takes on these matches, it isn't aware that this is anything more than another training match against another AI. The stakes in these face-offs have always been much higher for the human.

 

After the third game of his Go match with AlphaGo, Lee Sedol actually apologized for losing:

 

I don’t know how to start or what to say today, but I think I would have to express my apologies first. I should have shown a better result, a better outcome, and better content in terms of the game played, and I do apologize for not being able to satisfy a lot of people’s expectations. I kind of felt powerless. If I look back on the three matches, the first one, even if I were to go back and redo the first match, I think that I would not have been able to win, because I at that time misjudged the capabilities of AlphaGo.

Credit: AP

Lee Sedol ended up taking one of the five games played against AlphaGo. At the end of the match, despite having lost four of the five games, he said this:

 

Personally, I am regretful about the result, but would like to express my gratitude to everyone who supported and encouraged me throughout the match... I have questioned at some points in my life whether I truly enjoy the game of Go, but I admit that I enjoyed all five games against AlphaGo. After my experience with AlphaGo, I have come to question the classical beliefs a little bit, so I have more study to do.

Following his own loss to AlphaGo, champion Ke Jie, in a similar sentiment to Lee Sedol, has spent time learning from how AlphaGo plays. His takeaway from the matches was that people just don't know as much about Go as they thought they did, and there is still so much to learn. He has gone on to write books about the lessons learned from AlphaGo.

 

AlphaGo and AlphaZero have influenced the way games like Chess, Shogi, and Go are played. What people are realizing is that, despite what traditional wisdom would tell us, there is still a great deal that even Go masters don't know about the game. Having this outside perspective on play is changing how the game is played. AI is not bound by convention.

 

Twenty years after the match, Kasparov is able to reflect on his experience more dispassionately. He has rescinded, and apologized for, many of his accusations toward the Deep Blue team at IBM. He has also written a book about his experiences facing AI (Deep Thinking), and he is now an optimistic advocate for working with AI to enable a better future.

 

Losing to AI as an Academic

 

Every two years, there is a global experiment and competition for researchers in the field of protein structure modeling called the Critical Assessment of Structure Prediction (CASP). The goal of this event is to give researchers an objective way to test structure prediction methods on a global stage. As Sigal Samuel at Vox puts it, CASP is pretty much a "fancy science contest for grown-ups."

 

The winning entry at CASP13, held last year, was DeepMind's AlphaFold.

 

This type of research is the life's work of many biologists, including Dr. Mohammed AlQuraishi (of Harvard), who wrote a really interesting post on his personal blog about the experience of being bested by AI. In it, he writes about being afraid he had been outperformed by AI, feeling relieved when he realized that the insights from AlphaFold were in line with where research in the field already is, and then crediting the success of AlphaFold to the deep pockets of its parent company, Alphabet.

 

He also reflects that DeepMind’s ability to come into a field as an outsider and make significant progress indicates somewhat of a structural inefficiency in academia and big pharma. Academia’s competitive nature prevents information from being openly shared (causing each research group to have to rediscover things on their own, wasting time and effort), and pharmaceutical companies tend to focus on sales over novel research.

 

Ultimately, AlQuraishi concludes that the discoveries made by AlphaFold are a good thing – a major advance in one of biochemistry's most important problems, resulting in higher visibility for the field and intellectual advancement for all researchers involved. Who (or what) did the discovering is less important. His suggestions for researchers adapting to AI include focusing on problems that require conceptual breakthroughs and leaving more engineering-heavy problems to AI research groups.

 

What Should We Do When AI Comes for Us and Our Jobs?

 

AI will probably eliminate some jobs in the future. Which jobs, and when, is unclear. As Mohammed AlQuraishi explains in his interview with Vox's Sigal Samuel (yes, I've linked this article twice; it's really good, and I wish I had written it):

 

At one point, a lot of people thought there was a hierarchy among jobs — intellectual jobs would be the last to be replaced and mechanical jobs would be the first. But that’s actually unclear. It may well be that jobs that are mechanical will take a long time to replace because it’s actually hard to make robots that make certain gestures. And things in the higher echelons of intellect can maybe be more quickly replaced.

As we have learned from each of the human champions who have had to directly face the prospect of obsolescence at the hands of AI, we can choose to see this as an opportunity to learn and do something different. AI opens the door to new and exciting jobs. As AI gets better than us at some tasks, we will need to find new tasks to take on. We also need to play an active role in determining which AI systems are developed, and ensure that the jobs handed to AI are carried out in an ethical manner.

 


 

Ten years ago, well before AI applications in law were put into motion, my dad actually told me not to go to law school. AI applications for legal work are being explored because there is an opportunity to make something more efficient and effective, not just for fun or to spite lawyers (well, maybe a little).
 
Another thing my dad used to tell me when I was younger is that the job I would end up having as an adult hadn't been invented yet. In a sense, he was right. I don't think professionally blogging about data science and analytics for a software company had quite hit the mainstream in the nineties. In a less literal sense, I think he meant that the needs of today are not going to be the same as the needs of tomorrow, and if you're adaptable and open, you can end up doing something new that people haven't done before.
 

Every time we invent a new technology, jobs are disrupted. Think about how farms and factories look today compared to how they looked 100 years ago. The industries primed for AI are the ones where AI can improve efficiency, potentially making life better for end consumers and, hopefully, for society as a whole.

Sydney Firmin

A geographer by training and a data geek at heart, Sydney joined the Alteryx team as a Customer Support Engineer in 2017. She strongly believes that data and knowledge are most valuable when they can be clearly communicated and understood. She currently manages a team of data scientists that bring new innovations to the Alteryx Platform.

