Last Day: Final Presentations, Poster Session, and Staying Involved

The final day of the program began with a few hours for campers to prepare for their presentations by rehearsing and adding finishing touches to their posters.

IMG_0482.JPG
A student in the computational biology group rehearses the presentation.

Then it was time for the presentations! The first group to present was the computational biology group, whose students worked to detect various cancers using public data on the human genome. They explained the methods they tried, including decision tree and k-nearest-neighbors classifiers as well as k-means clustering. The group noted that their work could help identify cancer-causing genes that could potentially be eliminated with new technology.
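For readers curious what trying those methods might look like in practice, here is a minimal sketch using scikit-learn, with synthetic stand-in data rather than the group's real genome dataset:

```python
# A minimal sketch (not the group's actual code) comparing two of the
# classifiers mentioned above on synthetic "gene expression" data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# 200 samples x 50 genes; label 1 ("malignant") shifts a few genes upward.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)
X[y == 1, :5] += 1.5  # make the first 5 genes informative

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(max_depth=3),
              KNeighborsClassifier(n_neighbors=5)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```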

Next up was the computer vision group, which worked to map poverty in Uganda. The group used public satellite images, transfer learning, and convolutional neural networks to extract features from the images. These features are then fed into a logistic regression model, which produces a “poverty score” for each image. The group explained that in the future, this research could also help predict droughts and farm productivity.
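A hedged sketch of that kind of pipeline, using a pretrained ResNet-18 as a stand-in for whatever backbone the group used and random tensors in place of real satellite imagery, might look like this:

```python
# Transfer-learning sketch: a pretrained CNN extracts image features,
# and a logistic regression maps features to a binary poverty label.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Older torchvision versions use resnet18(pretrained=True) instead.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep features
backbone.eval()

images = torch.randn(32, 3, 224, 224)  # stand-in for satellite image tiles
labels = torch.randint(0, 2, (32,))    # stand-in poverty labels

with torch.no_grad():
    features = backbone(images).numpy()  # 32 x 512 feature vectors

clf = LogisticRegression(max_iter=1000).fit(features, labels.numpy())
# clf.predict_proba(...) would then yield a "poverty score" per image.
```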

The computer vision group was followed by the NLP group, which applied NLP to disaster relief by sorting tweets from the Haiti earthquake and Hurricane Sandy into five categories based on the kind of aid involved – food, water, medical, energy, or none. Students described the data pre-processing methods they used, such as reducing words to their roots via stemming or lemmatization and removing stop words (“a”, “the”, “and”) that might slow down the model. They also explained how a Naive Bayes classifier works and why they chose it.
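As a rough illustration of that pipeline (with invented example tweets, not the group's dataset), stemming plus stop-word removal followed by a Naive Bayes classifier might look like this with NLTK and scikit-learn:

```python
# Minimal sketch of the preprocessing + Naive Bayes pipeline described above.
from nltk.stem import PorterStemmer  # requires: pip install nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

stemmer = PorterStemmer()

def preprocess(text):
    # Stemming maps e.g. "needing"/"needed" -> "need".
    return " ".join(stemmer.stem(w) for w in text.lower().split())

tweets = ["We are needing clean water urgently",
          "Medical supplies required at the shelter",
          "Sending food and meals to survivors",
          "Power is out, need generators and energy"]
labels = ["water", "medical", "food", "energy"]

# The built-in English stop-word list removes "a", "the", "and", etc.
vectorizer = CountVectorizer(preprocessor=preprocess, stop_words="english")
X = vectorizer.fit_transform(tweets)

clf = MultinomialNB().fit(X, labels)
print(clf.predict(vectorizer.transform(["shelter needs water"])))
```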

Last but not least, the robotics group presented their work on autonomous vehicles. Students described how they used proportional-integral-derivative (PID) controllers to keep the car on the line, and how they implemented Dijkstra’s algorithm so that the robot could determine the shortest path between two locations.
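A PID controller combines three terms: one proportional to the current error, one to its accumulated integral, and one to its rate of change. Below is a toy sketch; the gains and the error value are invented, and a real line follower would compute its error from a line sensor.

```python
# Toy PID controller sketch (illustrative, not the group's actual code).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Steering correction: proportional + integral + derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.8, ki=0.05, kd=0.2)
# error = hypothetical distance of the line from the sensor's center
steering = controller.update(error=0.3, dt=0.02)
```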

After a catered lunch with their mentors, campers had a poster session, where they answered questions from their peers, alumni, graduate students, AI4ALL staff, and other community members who stopped by.

IMG_0642.JPG
Computational biology group.
IMG_0774.JPG
Computer vision group.
IMG_0794.JPG
NLP group.
IMG_0807.JPG
Robotics group.

Several presentations on how to stay involved in computer science and artificial intelligence followed the poster session. Campers heard from representatives for Stanford Pre-Collegiate Studies, Stanford Online High School, the National Center for Women & Information Technology (NCWIT), and Girls Teaching Girls to Code. They then took a post-program survey to reflect on how their views had changed since the start of the program.

After the girls thoughtfully completed the survey, they headed back to their dorms, where they had a delicious dinner with several AI4ALL board members. During dinner, campers learned about the many opportunities available to them through the alumni program. Stephanie Tena, an AI4ALL alum from last year, presented the research she conducted as part of the alumni fellowship program, in which she used k-means clustering to assess the water quality of a river.

IMG_0894.JPG

Girls then went to their rooms to pack up their things before one final house meeting, where they appreciated one another and said their goodbyes.

We are incredibly grateful to all the research mentors, undergraduates, graduate students, postdocs, professors, guest speakers, alumni, staff, sponsors, parents, and of course, students, for making the Stanford AI4ALL program possible. We hope the campers have had an amazing three weeks learning about AI, finding role models in guest speakers and research mentors, and making new friends. Although we’re sad to see these 30 incredible young women leave their dorms, we’re confident that they are leaving with a strong support network, and we’re so excited to see how they continue to use their knowledge for good!

Blog post and all photos by Anna Wong.

Day 16: Haptics and Soft Robotics, Generalizable Autonomy in Robots, and Growth Mindset

Campers kicked off the day in research groups. The robotics group finished getting their robots to follow the shortest path between two points. The computer vision and NLP groups both planned out their posters and worked on their slides. The computational biology group met research scientist Alborz, who spoke about how to obtain data and apply computational biology in the real world.

IMG_0366
The NLP group discusses their poster.
IMG_0368.JPG
Alborz speaks to the computational biology group.

Today’s guest lecturer was Professor Allison Okamura, the principal investigator of the Stanford CHARM (Collaborative Haptics and Robotics in Medicine) Lab. Prof. Okamura showed a video of a surgeon using haptic technology to make stitches. This method is less invasive than conventional surgery because the haptic device is small, so the incision can also be smaller, resulting in less blood loss, faster recovery, and a reduced likelihood of infection. Prof. Okamura noted that haptic feedback can also be used for palpation, searching for lumps and potential tumors. She then discussed soft robotics, which involves building robots from soft materials rather than rigid pieces. For example, Prof. Okamura and her lab made snake-like pneumatic growing robots. These softer, more flexible robots can get into small spaces, making them useful for medical applications and rescue missions.

IMG_0380.JPG

In the afternoon, students got to see a demo led by postdoc Animesh Garg. Dr. Garg explained that some robots trained to perform a certain task in one environment may be unable to complete the same task in a different environment. For example, a robot might be trained to beat eggs with a whisk, but then not understand how to beat an egg with a different tool such as a fork or knife.

IMG_0415.JPG

Dr. Garg described how, to create robots that can generalize their knowledge, researchers presented robots with a variety of objects and had them figure out how to use each one as a hammer, allowing the robots to learn the concept of hammering and how to grasp and use unfamiliar objects. The robots were trained in simulation rather than in the physical world to speed up the training process. Campers got to see the result of this training and watched a real robot pick up a hammer.

IMG_0399.JPG

After the demo, students continued working on their presentations for tomorrow.

IMG_0428.JPG
Computational biology group member works on the poster.

In the evening, the girls had a personal growth session led by Sara and Kristine, members of the Clayman Institute for Gender Research and leaders of the Seeds of Change initiative, a program that provides training and support to women in STEM. Campers watched a short video demonstrating the differences between fixed and growth mindsets, then broke into groups, where they completed worksheets to reflect on their own mindsets and discussed their reflections together.

IMG_0455.JPG
A camper discusses the growth mindset.

The campers have been working hard these past few weeks, and we can’t wait to see all of the presentations tomorrow!

Blog post and all photos by Anna Wong.

Day 15: Translational Bioinformatics, Industry Panel, and Haptic Robotics Demo

Day 15 began with more work in research groups. The computer vision group used cross validation to find the best parameters for their final model. The robotics group wrote and tested code to get their robots to compute and follow the shortest path between two points. The NLP group learned how regular expressions and neural networks can be used in the NLP field, and the computational biology group began implementing the k-means clustering algorithm to separate benign and malignant gene expression profiles.
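As an illustration of that clustering step, here is a minimal scikit-learn sketch on synthetic data standing in for real gene expression measurements:

```python
# Hedged k-means sketch (not the group's code) on synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two synthetic "expression" clusters standing in for benign vs. malignant.
benign = rng.normal(loc=0.0, scale=1.0, size=(100, 20))
malignant = rng.normal(loc=2.0, scale=1.0, size=(100, 20))
X = np.vstack([benign, malignant])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Each sample is assigned to one of two clusters; with well-separated data,
# the clusters should largely line up with benign vs. malignant.
print(np.bincount(kmeans.labels_[:100]), np.bincount(kmeans.labels_[100:]))
```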

Next, students heard from UCSF Assistant Professor Marina Sirota, who described her work as part of the Bakar Computational Health Sciences Institute. Prof. Sirota explained that currently, it takes about 15 years and hundreds of millions of dollars to create and approve a new medical drug, and 90% of drugs fail in early development. A lot of this time and money could be saved with drug repurposing, because a drug that has already been approved by the FDA for one purpose might also be effective for another. To identify which drugs might be effective against which diseases, Prof. Sirota and her fellow researchers use translational bioinformatics and public gene expression data. Through this process, they found a potential treatment for Crohn’s disease, and they are now applying the approach to other diseases such as dermatomyositis and peripheral artery disease, as well as to predicting preterm births.
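To make the signature-matching intuition behind drug repurposing concrete, here is a highly simplified sketch with made-up expression values; real pipelines compare signatures across large public databases:

```python
# Simplified signature-matching idea: if a drug's gene expression signature
# is anti-correlated with a disease's signature, the drug may reverse the
# disease state. All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
disease_signature = rng.normal(size=100)  # up/down-regulation per gene

drug_signatures = {
    "drug_A": -0.8 * disease_signature + rng.normal(scale=0.5, size=100),
    "drug_B": rng.normal(size=100),
}

for name, sig in drug_signatures.items():
    corr = np.corrcoef(disease_signature, sig)[0, 1]
    print(name, round(corr, 2))  # strongly negative -> repurposing candidate
```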

IMG_0166.JPG

Students continued to work with their research mentors in the afternoon.

IMG_0175.JPG
Computational biology group.

Then, campers attended an industry panel. The six panelists – Mohamad, Prashant, Kate, Meghana, Vidya, and Fei – work for various companies that utilize AI. They briefly introduced themselves as technical experts, data scientists, senior directors, and more, then distributed themselves among the students. The students got to spend several minutes talking to and learning from each panelist.

After the panel, campers headed over to see a haptic robotics demo, presented by Stanford Professor Oussama Khatib.

IMG_0278.JPG

Prof. Khatib explained that with many human-controlled robots, it’s difficult for the human operator to feel what the robot feels. For example, the human cannot sense how heavy or fragile an object is or how tightly the robot is gripping the object. This is where haptic robotics comes in. Students got to test out a controller that let them feel resistance as though they were passing a ball through a membrane.

IMG_0254.JPG

Prof. Khatib described his exciting work with OceanOne, a humanoid robot that was sent to recover artifacts from the 17th-century shipwreck of La Lune. The shipwreck was too deep a dive for humans, but any robot sent down to recover artifacts would need to be gentle when handling the objects, so as not to break them. In 2016, OceanOne was sent down to the shipwreck while being controlled by a human operator using a haptic device to feel what the robot “felt” in its hands, and OceanOne was able to recover a Catalan vase from the wreckage.

IMG_0285.JPG
Campers watch Prof. Khatib demo OceanOne.
IMG_0295.JPG
Campers pose around OceanOne with Prof. Khatib.

Blog post and all photos by Anna Wong.

Day 14: Research Groups, Ambient Intelligence in Healthcare, and a Self-Driving Car

As usual, the day began with students working in their research groups. The computer vision group learned about logistic regression and how it can be used to predict labels, and discussed stochastic gradient descent (see the sketch below). The robotics research group continued to test their robots, improving their line-following code and working to apply Dijkstra’s algorithm to find the optimal path. The NLP group went over more applications of NLP, such as chatbots and predictive text on phones, as well as NLP techniques such as language models. They also explored Stanford CoreNLP, a natural language processing toolkit that can tag parts of speech, resolve pronoun references, and recognize named entities. The computational biology group learned about decision trees and looked at k-means clustering, an unsupervised machine learning algorithm.
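As a concrete illustration of stochastic gradient descent for logistic regression, here is a from-scratch NumPy sketch trained one random sample at a time; the data is random and purely illustrative, not the group's code:

```python
# Logistic regression trained with stochastic gradient descent (SGD).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
true_w = rng.normal(size=10)
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(float)

w = np.zeros(10)
lr = 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):        # one random sample at a time
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # sigmoid prediction
        w -= lr * (p - y[i]) * X[i]          # gradient of the log loss

accuracy = np.mean(((1 / (1 + np.exp(-X @ w))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```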

IMG_0037
Computational biology group.
IMG_0042
NLP group.

Campers then heard from Dr. Serena Yeung, who recently finished her PhD and will soon be a professor at Stanford. Dr. Yeung explained how computer vision can be used to improve hospitals. Artificial intelligence is already being used in medicine for the treatment and diagnosis of diseases and for medical devices; however, not much has been done with AI to improve the physical space of healthcare.

IMG_0044.JPG

The goal of Dr. Yeung’s research is to add ambient intelligence to healthcare spaces so that the environment will be able to notice and respond to events such as patients falling down or doctors not sanitizing their hands. Depth and thermal sensors can be placed around the hospital to sense actions while still preserving the privacy of patients and hospital workers. A human activity recognition framework can then be trained with convolutional neural networks to identify these actions. Dr. Yeung and her team tested these systems in a few hospitals to check whether workers were properly cleaning their hands before working with patients, since proper hand hygiene could prevent thousands of deaths per year from hospital-acquired infections.

IMG_0049.JPG

After lunch, students continued to work with their research mentors, before heading off to a self-driving car demo.

Campers got to see and sit inside a self-driving car created by a company called Zoox. Students learned about the lidar sensors, radar sensors, and cameras placed around the car, which allow it to “see” its surroundings and navigate safely.

IMG_0125.JPG

IMG_0118.JPG

IMG_0129.JPG

Students ended the work day by making progress on their research projects.

Blog post and all photos by Anna Wong.

Day 13: Capitola Beach Day!

Cover photo from Diana Guzman.

Today, campers spent the day at Capitola Beach! They waded into the waters, relaxed on the sand, and played beach volleyball with one another. Many campers also wandered around the surrounding village to get boba and explore the shops.

Image from iOS (1).jpg
Photo by Samprikta Basu.
Image uploaded from iOS (3).jpg
Photo by Hari Bhimaraju.
IMG_20180708_130121-edit.jpg
Photo by Hannah Zhou.
Image from iOS (1).jpg
Photo from Ria Doshi.
IMG_20180708_105006.jpg
Photo by Hannah Zhou.

The campers had so much fun today, and we hope they’re excited for their last week at AI4ALL!

Blog post by Anna Wong.

Day 12: Farm Day, Free Time, and CS in College Panel

This weekend began with farm day! Campers spent the morning at the Stanford Farm removing weeds from the area and trimming bushes, in addition to visiting the chicken coop. Campers then had several hours of free time in the afternoon. They used this time to visit the Stanford Shopping Center, work on their research projects, and relax in their dorms.

In the evening, students heard from women who have studied or are currently studying computer science in college. The panelists – Jessica, Jaimie, Rachel, Michelle, and Lauren – ranged from rising college sophomores to recent graduates. They offered advice for both high school and university, discussed the transition to college, described how they de-stress and bounce back from disappointments, and explained how to handle envious or competitive peers and create a more collaborative environment.

IMG_0029.JPG

Blog post and all photos by Anna Wong.

Day 11: Research Groups, Computer Vision for Robotic Motion, and Banquet!

Today, campers started off in their research groups. All of the groups worked on their presentations for the dinner banquet later in the day, then continued to learn new material. The computer vision group learned about convolutional neural networks, dot products, and matrix multiplication, while the robotics group worked on implementing Dijkstra’s algorithm in code (sketched below). The NLP group defined probability terms such as prior and posterior distributions and went over how they could use probability to implement a Naive Bayes classifier. The computational biology group learned about the k-nearest neighbors algorithm.
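For reference, a textbook version of Dijkstra's algorithm looks something like the following sketch, using Python's heapq on a small made-up graph; the robotics group's actual implementation may differ:

```python
# Standard Dijkstra's algorithm with a priority queue.
import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, edge_weight), ...]}
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already found a shorter path
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {"A": [("B", 1), ("C", 4)],
         "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)],
         "D": []}
print(dijkstra(graph, "A"))  # shortest distances from A to every node
```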

IMG_7891.JPG
NLP group mentor helps a student with her Naive Bayes classifier.
IMG_7897.JPG
Computational biology group mentor helps a student use the k-nearest-neighbors algorithm.

Assistant Professor Jeannette Bohg then gave a guest lecture to the campers on computer vision and robotic manipulation. Students looked at optical illusions, mirrors, and the selective attention test to learn about the nuances of human vision.

IMG_7927.JPG
Students use mirrors to learn about the eye’s stabilizing mechanism.

Prof. Bohg went over possible methods that a computer might use to detect a specific object in an image. For example, a model can be trained with lots of photos of the object to look for patterns and features in new photos that would indicate the presence of this object. Prof. Bohg explained that a big challenge in computer vision is that humans can do things that robots can’t, and we don’t yet know the underlying principles that allow us to do these things.

IMG_7919.JPG

In the afternoon, students spent some time with their research mentors to finalize their presentations for that evening.

IMG_7949.JPG
Students in the computer vision group run through their presentation.

The banquet kicked off with welcoming remarks from AI4ALL co-founder Fei-Fei Li, who described the organization’s goal of encouraging students to create responsible and human-centered AI.

IMG_7967.JPG

Next, academic program director Juan Carlos Niebles and director of university partnerships Tiffany Shumate made a few opening remarks. They welcomed the audience of campers, research mentors, alumni, AI4ALL board members, and Stanford Pre-Collegiate Studies staff, and reminded the girls that they belong in the AI4ALL program and in the tech fields.

IMG_7975.JPG

IMG_7990

After a delicious dinner, Assistant Professor Emma Brunskill gave a keynote presentation on her work in using artificial intelligence for human impact. Prof. Brunskill explained that although some people fear that AI will replace humans, the goal of her research is to use AI to help humans achieve their full potential by personalizing education. Machines can use reinforcement learning to determine the best way to teach each individual student, such as by deciding which questions and problems are most effective at helping students understand and retain their knowledge.
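One simple flavor of this idea is a multi-armed bandit: treat each candidate problem as an arm, and balance exploring new problems with exploiting the ones that seem to help students most. The sketch below is a toy epsilon-greedy bandit with a simulated student; it is a drastic simplification for illustration, not Prof. Brunskill's actual method.

```python
# Toy epsilon-greedy bandit: learn which of 3 practice problems most often
# helps a simulated student answer correctly. All numbers are invented.
import random

true_help_prob = [0.3, 0.6, 0.5]         # hidden effectiveness per problem
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]                 # running estimate per problem
epsilon = 0.1

for step in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(3)        # explore a random problem
    else:
        arm = values.index(max(values))  # exploit the best estimate so far
    reward = 1 if random.random() < true_help_prob[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print("estimated effectiveness:", [round(v, 2) for v in values])
```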

IMG_7994.JPG

The banquet concluded with each of the four research groups giving a presentation on the goal of their projects and explaining what methods they’ve been using in their research. The computational biology group presented first, describing how they are using gene expression data to help detect cancer. Next was the computer vision group, which is using satellite images to help detect poverty, followed by the NLP group, which is working on a Naive Bayes classifier to help classify tweets for disaster relief. Finally, the robotics group explained how they’ve been programming autonomous cars to determine and follow the shortest path between two points.

DSC_9967.JPG

The groups all thanked their mentors for their guidance thus far.

IMG_9980.JPG
A camper poses for a picture with one of her mentors.

We’re so impressed with all that the campers have done in these past two weeks, and we hope they’re ready for their final week at AI4ALL!

Blog post and all photos by Anna Wong.