These are chat archives for FreeCodeCamp/DataScience

Sep 2018
Alice Jiang
Sep 08 2018 00:20 UTC
It was probably me. I'm a slightly obsessive fangirl of Rana, who co-founded Affectiva, and of the company overall, so I tend to talk about them a lot
Ayo Philip
Sep 08 2018 14:39 UTC
@becausealice2 Nice Alice. If I may ask, what are your highlights or takeaways from the event?
Alice Jiang
Sep 08 2018 16:54 UTC
I was sick all day so I missed quite a lot, but they are posting recordings of all the sessions, so I'll re-watch everything when I'm feeling better and will have more highlights. In the meantime, some things I do remember: there was a lot of talk about the human-AI relationship, and the idea that AI needs to be able to trust humans just as much as humans need to be able to trust AI. Like, can we trust that AI delivered the right prescription to the right patient? If not, are we sure the nurse who made the order entered the correct prescription? Would AI being able to detect the nurse's emotional state (such as upset, distracted, or drowsy) improve the rate at which correct prescription deliveries are made?
In a similar line of thought, if a car can tell that the driver is distracted, angry, or drowsy, it can provide feedback and suggest that the driver pull over for a nap, take some deep breaths, or stop for coffee
AI is being used at Boston Children's Hospital to improve surgical outcomes by requiring that all major surgeries first be performed in a simulation (on a dummy) in the minutes, hours, or days before the actual operation. The vital signs of the surgical team are measured during the simulation, and once they feel calm and comfortable with the procedure, that's when they can call for the surgery to be performed
it's also being used in games designed to help children learn to understand and control their emotions at a time in their lives when that area of the brain just isn't developed yet. At best, they get an hour-long session with a therapist where they don't learn much about controlling emotions, and then they're sent back into the world and expected to fend for themselves
Alice Jiang
Sep 08 2018 17:00 UTC
As for key takeaways: people--especially the people building the AI systems--need to think of AI not as replacing human beings, but as augmenting them. People err on the side of privacy, even if it limits the quality of their experience, so be respectful of what data you collect and how you use it. Also, if you build good AI from the start, there will be less need for "AI for good" in the long run.
"good AI" meaning AI that is inclusive of people of all colors, nationalities, genders, linguistic needs, physical and mental abilities.
One of the panelists said, "People solve the problems they know," so it's important to maintain diversity in teams