4 stages of learning:
1. Unconscious Incompetence
“I don’t know that I don’t know how to do this.” This is the stage of blissful ignorance before learning begins.
2. Conscious Incompetence
“I know that I don’t know how to do this, yet.” This is the most difficult stage, where learning begins and where the most judgments against the self are formed. It is also the stage at which most people give up.
3. Conscious Competence
“I know that I know how to do this.” This stage of learning is much easier than the second stage, but we are still a bit uncomfortable and self-conscious.
4. Unconscious Competence
“What, you say I did something well?” The final stage of learning a skill is when it has become a natural part of us; we don’t have to think about it.
Links:
https://www.google.com/search?q=machines+robots+doing+thearapy&oq=machines+robots+doing+thearapy+&aqs=chrome..69i57.11607j0j7&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=sensitive+issues+suiside&oq=sensitive+issues+suiside&aqs=chrome..69i57j33.7119j1j7&sourceid=chrome&ie=UTF-8
https://en.wikipedia.org/wiki/ELIZA
https://en.wikipedia.org/wiki/ELIZA#cite_note-:0-2
Article about Woebot:
We’re already seeing AI make some advancements here. Take, for example, a new program called Woebot from Stanford researchers. Woebot is essentially a chatbot therapist. It uses Facebook Messenger to administer a very common form of therapy called Cognitive Behavioral Therapy (CBT), which, as Megan Molteni at Wired explains it, “asks people to recast their negative thoughts in a more objective light.” Once they see their negative thoughts, they can start to recognize patterns and triggers, and try to stop them. Woebot checks in with users daily by sending messages, asking simple questions along the lines of “How do you feel today?” or “What are you doing right now and what’s your general mood?” Because it’s a robot, it remembers a user’s responses and “gets to know” them as time passes. It recognizes changes in mood and can tailor suggestions, the same way a real therapist might.
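To make those mechanics concrete, here is a minimal sketch of a rule-based check-in bot in the spirit of that description. This is an illustration only, not Woebot's actual code; the mood labels, the three-day window, and the replies are all my own assumptions.

```python
# Minimal sketch of a rule-based check-in bot in the spirit of the
# description above. NOT Woebot's actual code: mood categories,
# thresholds, and suggestions are illustrative assumptions.
from datetime import date


class CheckInBot:
    def __init__(self):
        self.history = []  # (date, mood) pairs: the bot "remembers" the user

    def check_in(self, mood: str) -> str:
        self.history.append((date.today(), mood))
        # Tailor the reply to the pattern so far, the way the article
        # says a therapist might.
        recent = [m for _, m in self.history[-3:]]
        if recent.count("low") >= 2:
            return ("I've noticed you've felt low a few days in a row. "
                    "Want to try reframing one negative thought together?")
        if mood == "low":
            return "Thanks for sharing. What thought is weighing on you?"
        return "Glad to hear it! What went well today?"


bot = CheckInBot()
print(bot.check_in("low"))
print(bot.check_in("low"))  # a repeated low mood triggers the pattern-based reply
```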
The Woebot is supposed to help people overcome fears of being judged or stigmatized, letting them obtain mental health help in a format that’s both familiar to them and anonymous. People may not admit to friends that they’re struggling, but might readily confide in Woebot — especially because they know the app won’t judge them. Of course, Woebot isn’t a complete replacement for face-to-face therapy, but not everyone has the time (or the money) to see a real therapist. And research suggests this kind of thing can really work. One recent study shows Woebot could reduce anxiety and depression in users.
This isn’t the first attempt at creating a robotic therapist. Researchers who helped create Ellie — a robot that helps veterans who are battling post-traumatic stress disorder — agree robots “listen” without judgment, and people may be more likely to confide in them more honestly. Ellie analyzes tone of voice, eye gaze, facial expressions, and head gestures to look for indicators of depression and post-traumatic stress disorder in patients. However, the robot’s developers stress she’s not a replacement for human therapists because she doesn’t try to offer any kind of treatment (unlike Woebot); she just gathers data.
But there’s another problem: Sometimes patients enter therapy but don’t stick with it, or don’t have the motivation to change their behavior or environment. It’s hard to identify who will or won’t follow through, but a team of Penn State engineers is working on ways to use machine learning to come up with customized mental and physical health plans that help patients stay motivated. It’s based on a gaming technique: Users are encouraged to move through virtual environments and perform certain tasks. As they do, scenarios get progressively harder and users have to exert more energy and greater motivation. The patient’s performance results could help researchers measure their personal level of motivation, and tailor mental health treatment accordingly to keep them interested and committed.
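One way to read the gaming technique described above is as a persistence measurement: how far does a user keep going as the tasks demand more effort? The sketch below is purely illustrative (my own scoring scheme, not the Penn State team's method).

```python
# Illustrative sketch (not the Penn State team's actual method):
# estimate a motivation score from how long a user persists as
# task difficulty ramps up.
def motivation_score(completed_levels: list[bool]) -> float:
    """Each entry says whether the user finished level i; difficulty
    increases with i, so later completions are weighted more heavily."""
    if not completed_levels:
        return 0.0
    weights = range(1, len(completed_levels) + 1)
    earned = sum(w for w, done in zip(weights, completed_levels) if done)
    return earned / sum(weights)  # 0.0 (gave up early) .. 1.0 (persisted)


# A user who quits once tasks get hard scores lower than one who persists:
print(motivation_score([True, True, False, False]))  # 0.3
print(motivation_score([True, True, True, True]))    # 1.0
```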
One of the most difficult parts of treating people suffering from mental health problems might be trying to identify those who are at the highest risk of self-harm. This is really hard to do. Suicidal thoughts are not usually rooted in a single, isolated incident such as a relationship breakup, job loss, or death of a close friend. This unpredictability is a problem for clinicians, but scientists are looking at how machine learning might be able to help. By examining huge quantities of data and pulling out patterns that humans might miss, robots could help spot potentially suicidal patients.
In one study, an algorithm predicted suicide attempts with surprising accuracy. The research particularly focused on improving clinicians’ predictive abilities about suicide. It found that today’s clinicians are no better able to definitively identify the factors that lead to suicide than mental health specialists of 50 years ago. Machine learning could be the missing link that leads to major advancements in reducing self-harm through prediction and prevention.
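In broad strokes, the technique such a study describes is standard supervised learning: fit a classifier on many historical patient records, then read off a risk probability for a new patient. The sketch below uses synthetic stand-in data and made-up features; it is not the study's actual model, variables, or results.

```python
# Hedged sketch of the general technique: train a classifier on many
# records and output a risk probability. The features and data here are
# SYNTHETIC stand-ins, not the study's actual variables or findings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic records: the three columns might stand for things like prior
# ER visits, a depression-scale score, and medication changes (all made up).
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, 1.0, 0.5]) + rng.normal(size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# The payoff: a per-patient risk estimate a clinician could use to
# prioritize follow-up, rather than relying on intuition alone.
new_patient = np.array([[2.0, 1.0, 0.0]])
print(f"estimated risk: {model.predict_proba(new_patient)[0, 1]:.2f}")
```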
All of this said, while robots are already supplementing the work therapists do, they can’t create genuine connections with clients, the kinds of connections needed to really help patients thrive. For now, that’s something only humans can do. So take comfort, doctor, you’re not out of a job just yet.
The essay should contain a brief abstract summarizing its contents and a clear thesis that states the position you are defending. It should offer arguments for the point of view you are defending and entertain objections to its thesis.
Two weeks ago, we discussed a thought-provoking topic in our class: the use of machines to deliver therapy.
It is remarkable how technology is capable of learning by itself and how quickly it can move up the skill levels. Artificial intelligence can surpass human intelligence in many ways, which is why many companies and industries have come to depend on automation for accuracy and efficiency of performance. This by itself raises a moral issue, since it increases the unemployment rate, especially in developed countries. However, since employers and other shareholders are morally obligated to create value for stakeholders, creating value by using automation to reduce costs and improve performance can be argued to be just as moral, especially if it means better products and services for customers and environmental safety procedures that can be monitored through computer progress reports. Robots have proven effective in many industries, but when it comes to counseling on a sensitive issue that requires careful participation, it is important to ask whether they are safe to use and whether using them is moral. One such case is ELIZA, which I plan to examine in my paper. In doing so, I will explain why I believe it is immoral to incorporate machines in the treatment of mental or psychological issues. My explanation will discuss issues 1, 2, and 3.
Machines can learn and adapt.
There is a good discussion >> “You’re very helpful… you’re very generous… very cute” (chat in application with Woebot).
Problems with Woebot:
First, I really loved that Woebot uses emojis, which instantly make it more fun and expressive. But I personally sometimes get impatient, and several problems stood out (a toy sketch of the design I am criticizing follows this list):
1. Woebot immediately wants to ask me questions. The first question it should ask is “Is there something bothering you?” Right away, I should be able to vent.
2. It does not let me type freely; instead, it only offers a fixed set of answers to choose from and responds based on them.
3. Sometimes I don’t want any of those options, and at that point Woebot just feels annoying and unhelpful.
4. Most jarring of all: the program introduces itself, unprompted, as an emotional assistant that is not human. Yet later, when it asks how I feel and I say I am neither happy nor sad, just in between, it replies, “I am feeling a little sluggish myself. I think it’s the weather.” Do you think I am that stupid? How can you relate to the weather if you cannot live in it? This is simply false information that makes the whole exchange feel fake, and it suggests poor programming; one sign of this is that it will not tell me about its own personality.
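To make the second complaint concrete, here is a toy sketch (my own, not Woebot's code) of an option-constrained dialogue turn: whatever the user actually feels, the bot can only route them through a pre-written menu, which is why the replies feel canned.

```python
# Toy illustration of the option-constrained design criticized above
# (my own sketch, not Woebot's code): the user can never type freely,
# only pick from a pre-written menu, so every reply is canned.
MENU = {
    "1": ("Happy", "Great! Let's build on that."),
    "2": ("Sad", "I'm sorry to hear that. Want to look at the thought behind it?"),
    "3": ("In between", "I am feeling a little sluggish myself."),  # the canned line
}


def turn() -> None:
    print("How do you feel today?")
    for key, (label, _) in MENU.items():
        print(f"  {key}. {label}")
    choice = input("> ").strip()
    # Anything outside the menu is forced back into it, which is
    # exactly why the interaction can feel fake.
    _, reply = MENU.get(choice, ("?", "Please pick one of the options above."))
    print(reply)


if __name__ == "__main__":
    turn()
```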