
CristonS | Alteryx Alumni (Retired)

Year-round, mental health advocates fight stigma, provide support, educate the public, and advocate for policies that support people with mental illness and their families. May is Mental Health Month, and the best time to talk about possibilities is right now.

 

In one of her recent blog posts, Sydney argues that the process you follow doesn't matter as much, as long as your steps are thoughtful. As long as your steps are thoughtful. That really stuck with me. It's such good advice for walking through life: your relationships, your career, and your avocations. Be thoughtful, be contemplative, give consideration.

 

We talk a lot about data-driven decisions, but that's not enough. Pete Buttigieg (Democratic mayor of South Bend, Indiana; all politics aside) famously overlooked the human element in his 1,000 Houses in 1,000 Days initiative to address housing stock that was sitting vacant or abandoned. At the end of the initiative, the numbers showed overall economic growth and improvement from levying fines and demolishing abandoned homes, but the city had completely overlooked the income and ethnicity of the affected homeowners, and the effects on the community itself. The initiative is now associated with gentrification. Many of the demolished homes were owned by local people who had fallen on hard times, willing but unable to make repairs on their property.

 


 

Spreadsheets and statistics are not impartial. Data is not impartial. Because of this, it is critical to build human factors into your research process. The disproportionate impact of the initiative on communities of color was not intentional on Buttigieg's part; he had put his blind faith in a data-driven system, like many people do.

 

Amazon found out after the fact that its AI resume-screening tool had trained itself toward gender bias, and had to go back and edit the program after release to make it neutral. Its affect-recognition facial analysis software also exhibited gender and racial bias in gender classification, as did Google's and IBM's.
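
The kind of audit that surfaces this sort of problem can start very simply. Here's a minimal sketch (Python, with hypothetical column names and made-up audit data, not Amazon's actual process) that compares a model's selection rates across groups, a basic disparate-impact check:

```python
import pandas as pd

def selection_rates(df, group_col, outcome_col):
    """Share of positive model outcomes within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df, group_col, outcome_col):
    """Lowest group's selection rate divided by the highest group's.

    Values well below 1.0 (a common rule of thumb flags anything
    under 0.8) suggest the model treats groups very differently."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Hypothetical audit log: one row per screened resume
audit = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [0,   0,   1,   0,   1,   1,   0,   1],
})

print(selection_rates(audit, "gender", "selected"))
print(f"Disparate impact ratio: {disparate_impact_ratio(audit, 'gender', 'selected'):.2f}")
```

A check like this won't catch every kind of bias, but it takes five minutes and would have flagged the pattern described above long before release.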

 

This needs to change.

Instead of trying to explain (or defend) your black-box model after the fact, create models that are interpretable in the first place. Be thoughtful in your approach. Get other perspectives. Reconsider the way “it's always been done.” And know that your model is going to affect human beings' lives, not just revenue. Criminal justice and healthcare models are used by social workers, case managers, judges... not data scientists. Can they explain the results of your model to someone who has been unfairly sentenced based on your decision tree?
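
To make "interpretable in the first place" concrete, here's a minimal sketch (scikit-learn, with a stand-in dataset) of reaching for a shallow decision tree whose logic can be printed and read aloud, rather than an opaque model:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in dataset; a real risk model would use its own domain data
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A shallow tree trades some accuracy for rules a case manager can follow
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Dump the fitted tree as plain if/else rules anyone can walk through
print(export_text(model, feature_names=list(X.columns)))
```

A judge or social worker can't cross-examine a thousand-tree ensemble, but they can walk through three nested if/else branches with the person the decision affects.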

 



I mean, it's terrifying what we can expect from AI if these are the current results. Twitter's AI cannot yet distinguish true sincerity from satire. Politicians with strong personalities have had Twitter accounts suspended after the platform's algorithms mistook posts for parody. It's probably not intentional bias on the part of Twitter's algorithms; they just don't know how to accommodate an exaggerated personality's tone. Alexandria Ocasio-Cortez, Beto O'Rourke, and Donald Trump have all sounded a little too much like themselves to pass Twitter's sincerity test.

 


 

We're already well down the path with AI, and Stephen Hawking warned that AI could be the worst thing that has ever happened to humanity. In terms of being thoughtful, do you think Microsoft intended to release a vulgar, racist chatbot? Of course not. But internet toxicity turned it into one, on purpose. The positive angle is that it was a true teachable moment: there is no such thing as foolproof when it comes to algorithms. You don't have to focus on the human and cultural edge cases, but you cannot exclude them.

 

AI and Mental Health

 

One in five Americans suffers from a mental illness. Researchers estimate that only half of those people receive treatment, and going without it affects their quality of life (physical, mental, and emotional), their productivity, and their satisfaction. Mental illness is also one of the most expensive parts of health care.


Researchers are currently exploring how AI can be leveraged to help mental health. If you search the internet for “AI and mental health,” you get a lot of optimistic results and a couple of insidious ones. The concept of a chatbot as a listener, just having someone to talk to, is not new, but it has come a really long way. The new avenues for recognizing conditions are incredible: deep learning chatbots for diagnosing and treating depression, anxiety, and other disorders, and machine learning algorithms that can help detect depression in children based on their speech patterns.
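
This is not the researchers' actual pipeline, but as a rough sketch of how a speech-based screen could be structured (assuming a corpus of clinician-labeled audio clips on disk), one might extract acoustic features and feed them to a simple classifier:

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def acoustic_features(path):
    """Summarize a speech clip: mean MFCCs (vocal timbre) plus pitch statistics."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)   # spectral envelope
    f0 = librosa.yin(audio, fmin=65, fmax=400, sr=sr)        # fundamental frequency track
    return np.concatenate([mfcc.mean(axis=1), [f0.mean(), f0.std()]])

# Hypothetical corpus: (audio path, clinician-assigned label) pairs;
# a real study would need many labeled clips for cross-validation to work
labeled_clips = [("clips/child_001.wav", 0), ("clips/child_002.wav", 1)]

X = np.array([acoustic_features(path) for path, _ in labeled_clips])
y = np.array([label for _, label in labeled_clips])

# Simple, auditable baseline; always report out-of-sample performance
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```

Note the choice of a plain logistic regression baseline: for a screening tool that touches children's health, an auditable model you can explain to a clinician is worth a few points of accuracy.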

 


 

These algorithms are addressing constantly changing variables. Robot therapy pets can learn to read their humans' emotions fairly easily; happiness, for example, tends to look much the same across people. Kids on the autism spectrum, however, express emotion differently. "Researchers at the MIT Media Lab have now developed a type of personalized machine learning that helps robots estimate the engagement and interest of each child during these interactions." Being able to respond appropriately to each child helps the robots teach suitable interactions.
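
This is not MIT's actual method, but a minimal sketch of what "personalized" can mean in practice (with made-up stand-in data): start from a population-level model, then keep updating it on each child's own sessions:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Stand-in population data: engagement labels (0/1) from face/pose features
# gathered across many children; real features would come from video
X_pop = rng.normal(size=(500, 10))
y_pop = (X_pop[:, 0] > 0).astype(int)

# 1) Fit a shared, population-level engagement model
model = SGDClassifier(loss="log_loss", random_state=0)
model.fit(X_pop, y_pop)

# 2) Personalize: incrementally update the shared model on one child's session
X_session = rng.normal(size=(20, 10))      # features from this child's session
y_session = rng.integers(0, 2, size=20)    # labels observed in that session
model.partial_fit(X_session, y_session)

# 3) Estimate engagement for a new moment with this child
print(model.predict(rng.normal(size=(1, 10))))
```

The shared model captures what engagement looks like in general; the incremental updates let each child's own expressions gradually outweigh the population average.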


A feature released by Facebook uses cognitive behavioral therapy-style Messenger responses to help re-frame conditions or mindsets. Given how expensive and labor-intensive in-person therapy is, this is an attractive alternative.

Note to our InfoSec friends: My mind couldn't help but combine this with the mention that people would often disclose more embarrassing details to virtual therapists... and then the idea of hacked therapy logs.

AI algorithms can analyze brain scans to predict medication response. An incredible example of the need for balance between art and (data) science comes from new findings by researchers at the University of Cincinnati, who are using the same artificial intelligence that can dominate the sky in air-to-air combat to predict treatment outcomes for bipolar disorder.

"In psychiatry, treatment of bipolar disorder is as much an art as a science," David Fleck, co-author, said. "Patients are fluctuating between periods of mania and depression. Treatments will change during those periods. It's really difficult to treat them appropriately during stages of the illness."

 


 

And an algorithm that can outmaneuver humans in air-to-air combat can predict a reduction in manic symptoms.

 

Did you just read that? An air-to-air combat algorithm used to predict distributions within distributions to treat manic depression. I love math so much - it applies everywhere.


All of these algorithms are only as good as their training. And retraining. And reevaluation. Are you seeing a pattern here? These are exciting times; let's just keep thinking things through. It is so important to acknowledge the limitations of AI and data-driven decisions, and not rely on them exclusively over human insight and compassion. 
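
In code terms, "retraining and reevaluation" can be as simple as a scheduled loop that re-scores the deployed model on freshly labeled data and retrains when performance slips. A minimal sketch (the loader arguments and the threshold are hypothetical; wire in your own pipeline):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.85  # assumed acceptable threshold; set per application

def reevaluate_and_retrain(model, load_fresh_labeled_data, load_training_data):
    """Re-score a deployed model on newly labeled data; retrain if it drifted.

    The two loader arguments are hypothetical hooks into your own data
    pipeline, each returning a (features, labels) pair.
    """
    X_new, y_new = load_fresh_labeled_data()
    score = accuracy_score(y_new, model.predict(X_new))
    if score < ACCURACY_FLOOR:
        # Fold the fresh data into training and fit a replacement model
        X_train, y_train = load_training_data()
        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    return model, score
```

The point isn't the particular model class; it's that reevaluation is a standing process with a tripwire, not a one-time checkbox.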

Always, if you or someone you know needs help, it's out there. Even if it's not therapy, literally everyone can benefit from talking to a trained professional who will be on your side.

The Substance Abuse and Mental Health Services Administration (SAMHSA) has a National Helpline: a free, confidential, 24/7, 365-days-a-year treatment referral and information service (in English and Spanish) for individuals and families facing mental and/or substance use disorders. Call 1-800-662-HELP (4357). This service provides referrals to local treatment facilities, support groups, and community-based organizations.

 



Or PM me here on the Community to talk. Just reach out to someone! Let's be thoughtful to each other.