Today is the final day of AI4ALL 2019! The girls have been hard at work for the past three weeks and they’re excited to present all the things they’ve learned.
During our final lectures, research mentors ran through presentations with the girls, put finishing touches on the posters, and calmed some nerves!
After office hours, it was time to present! Professor Fei-Fei Li, Professor Juan Carlos Niebles, and Program Director Rick Sommer came to watch the presentations and support the research groups. All four groups gave comprehensive, eloquent presentations on what they’ve learned this summer, answered questions skillfully, and engaged the audience. We all had a lot of fun learning and listening to the interesting presentations!
After presentations, we had a catered lunch in the AT&T Pavilion. There, we hung out with our research mentors, and took lots of pictures in the shade. Everyone was really proud of their work on the presentation!
After lunch, we congregated in the Gates Computer Science building lobby to attend a poster gallery. Graduate students, undergraduates, parents, and professors mingled with the students and their research mentors to learn more about the work the campers did this summer. Everyone was very impressed with how much we got done in 3 short weeks!
Finally, after the poster gallery, we had a panel from AI4ALL alumni on how to stay involved with the AI4ALL program. From funding for community AI projects to starting clubs at schools, we learned a lot about how to keep AI4ALL with us, even if we’re not on campus.
Today is the last day, and tomorrow we’ll be driving or flying back home. We take with us the memories from our three weeks at AI4ALL, and we’ll apply all the knowledge we’ve learned to impact our communities!
Blog post by Vivian Liu. Photos from AI4ALL 2019 can be found on Flickr (6/23-7/9) and Stanford’s Box account (7/10-7/11).
Today, we started with our second-to-last lectures with our research mentors! Tomorrow, we have our final presentations, so we prepared for that today.
In computer vision, we continued learning about Convolutional Neural Networks, and played around with changing the learning rate of our model. We also worked on our presentations for tomorrow!
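To give a feel for why playing with the learning rate matters, here’s a tiny sketch. This isn’t our actual model — it’s just one-variable gradient descent on a made-up function — but the same trade-off applies to training a CNN: too small a learning rate is slow, too large and it diverges.

```python
# Gradient descent on f(w) = (w - 3)^2 with different learning rates, to show
# why the learning rate matters: too small is slow, too large diverges.
def descend(lr, steps=20, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of (w - 3)^2
        w -= lr * grad
    return w

print(descend(0.01))  # still far from the minimum at 3: too slow
print(descend(0.4))   # essentially 3: converged
print(descend(1.1))   # huge magnitude: each step overshoots further
```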
In computational biology, we continued to learn about K-Nearest Neighbor in classifying whether a cell is cancerous or not. We also continued designing our research poster and organized the presentation.
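The idea behind K-Nearest Neighbor classification fits in a few lines of Python. The features and labels below are invented for illustration (not the real cell data): a new point gets the majority label of its k closest training points.

```python
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k=3):
    """Label a query point by majority vote of its k nearest training points."""
    dists = sorted(
        (math.dist(p, query), label) for p, label in zip(train_points, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Made-up features: (cell size, texture score); labels are illustrative only
cells = [(1.0, 1.2), (1.1, 0.9), (3.0, 3.2), (2.9, 3.5), (1.2, 1.1)]
labels = ["healthy", "healthy", "cancerous", "cancerous", "healthy"]
print(knn_predict(cells, labels, (1.05, 1.0)))  # "healthy"
```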
In natural language processing, we ran through our presentations together in preparation for tomorrow. We also continued coding to reach the optimal accuracy for our model.
Finally, in robotics, we reviewed Dijkstra’s Algorithm and prepared our speaking parts for the presentation. We’re all ready for our presentations tomorrow!
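For the curious, here’s a minimal Python sketch of Dijkstra’s Algorithm on a toy road network. The graph is invented for illustration, not taken from our project, but this is the core idea behind finding shortest paths for a self-driving car.

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start to every node in a weighted graph."""
    dist = {node: float("inf") for node in graph}
    dist[start] = 0
    queue = [(0, start)]  # (distance so far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist[node]:
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            if d + weight < dist[neighbor]:
                dist[neighbor] = d + weight
                heapq.heappush(queue, (dist[neighbor], neighbor))
    return dist

# Toy road network: node -> [(neighbor, travel cost), ...]
roads = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

The priority queue always expands the closest unsettled node first, which is what guarantees the shortest path when all edge weights are non-negative.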
Next, guest speaker Serena Yeung came to talk about advances of AI in healthcare. From senior care facilities to open heart surgery, the applications of AI have permeated America’s healthcare system.
Dr. Yeung talked about her experiences working with Stanford Hospitals’ Research Center to incorporate her knowledge in computer science in a medical setting.
After lunch, we had a fun demo from Dylan Losey and Gleb Shevchuk over in the Stanford Robotics Lab. We watched as they demoed their haptic robotics device. We were able to see and interact with two of the robotic arms that were designed and developed right there in the lab. It was super cool to watch the arm in action!
After the demo, we had our last office hours with our research mentors! All the campers were hard at work reviewing the concepts we’ve been studying all camp and preparing the presentation materials.
Tomorrow is the big day, and we’re so excited to share all that we’ve learned here at AI4ALL!
Blog post by Vivian Liu. Photos can be found on the Flickr site.
Today is our third-to-last day of the AI4ALL camp! We were busy all day, from preparing presentations to attending great guest lectures.
We started off the day with more lectures from our research mentors. In addition, we started discussing our poster format.
Today, we had a guest lecture and demo from Oussama Khatib of Stanford’s Robotics Lab. We learned more details about the OceanOne project that Olivia had presented to us during the first week.
OceanOne is a robot that can be controlled remotely, designed to do deep sea diving, reducing the risk for humans. Recently, the OceanOne robot was able to dive up to 100 meters underwater and safely recover centuries-old artifacts from a sunken ship. We got to see this robot firsthand! It was really cool to see the exact technology that had been to the bottom of the ocean and executed a dangerous deep-diving mission in place of a human.
After lunch, we had office hours with our mentors. In addition to reviewing the concepts we’ve learned over the past few weeks, we also worked on our posters for Thursday. Our homework was to have a first draft of the poster by tomorrow.
At the end of the day, we had a cross-camp Q&A with Professor Fei-Fei Li. We connected with other AI4ALL camps across the nation to participate in a guest lecture from Dr. Li. Other camps attending included Arizona State University and University of Michigan. AI4ALL Educational Manager Wells Santos moderated the talk.
Dr. Li shared her experiences as, for a very long time, the only woman faculty member in Stanford’s AI department, and how perseverance and grit carried her to success in the CS world. She talked about her two biggest worries with AI: that there was a lack of diversity, and that people feared that AI would take over the world and enslave humanity. In actuality, these two fears are fundamentally linked; diversity in AI widens the range of humanity that will shape such world-changing technology.
After her presentation, we opened the floor for questions from students from all camps. Everyone was very enthusiastic, and asked her lots of thoughtful and salient questions.
Tomorrow, we look forward to more amazing guest speakers and continuing work on our posters and presentations for Thursday!
Blog post by Vivian Liu. Photos will be coming soon!
This week, we again started off the day with lecture time with our research mentors.
In computer vision, we continued to learn about visual classification and started learning about k-nearest neighbor classification. We also reviewed some of last week’s concepts.
In NLP, we started to use naive Bayes classifiers after a week of learning about Bayes’ Rule and other statistical tools. We used this in the context of deciphering the tweets in our data set, and started to code a basic naive Bayes classifier.
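The core of a naive Bayes classifier can be sketched like this. The toy tweets below are invented for illustration (not our real dataset): the classifier picks whichever label maximizes log P(label) plus the sum of log P(word | label), with Laplace smoothing so unseen words don’t zero everything out.

```python
from collections import Counter
import math

def train(docs):
    """docs: list of (word_list, label) pairs -> per-label word and doc counts."""
    word_counts, label_counts = {}, Counter()
    for words, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(words)
    return word_counts, label_counts

def classify(words, word_counts, label_counts):
    """Pick the label with the highest (smoothed) log-probability."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best = None
    for label, n_docs in label_counts.items():
        score = math.log(n_docs / total_docs)  # log prior P(label)
        n_words = sum(word_counts[label].values())
        for w in words:
            # Laplace-smoothed log likelihood P(word | label)
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if best is None or score > best[1]:
            best = (label, score)
    return best[0]

# Toy tweets, invented for illustration
tweets = [
    ("flood water rising help".split(), "disaster"),
    ("earthquake damage downtown".split(), "disaster"),
    ("great concert last night".split(), "other"),
    ("lunch with friends today".split(), "other"),
]
wc, lc = train(tweets)
print(classify("flood damage help".split(), wc, lc))  # "disaster"
```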
In computational biology, we continued coding in Jupyter Notebook to decode the DNA in our dataset.
In robotics, we started to prepare the presentation for Thursday, and continued learning about automobile automation.
Next, we heard from Dr. Jonathan H. Chen on advances of Artificial Intelligence in Medicine. Dr. Chen is a doctor in internal medicine at Stanford Hospital, and is also pursuing a PhD in computer science, specializing in Artificial Intelligence.
Dr. Chen talked about all the advances of AI in medicine, including the classic example of classifying moles as either cancerous or not. He provided a warning against expecting too much from AI too soon, but also encouraged us to get into the field to advance it ourselves. He was very passionate about the topic, and gave an engaging presentation!
After lunch, we had office hours with our mentors. We worked on the homework they assigned and asked questions when we encountered something new.
Next, we had an Industry Panel in SlavDom, featuring five successful women in AI industry jobs. We heard about their experiences, from wielding AI for good to entering the industry.
From heading Intel’s AI for Social Good to leading Micron Technology’s Engineering sector, the panel gave sage advice and told inspiring stories to the campers during today’s industry panel!
Today is the day of our Summer Banquet! The research groups were hard at work to prepare and review for the presentations later in the day.
In the Decoding DNA/Computational Bio research group, we got an in-depth lecture on statistics. We learned about p-values, hypothesis testing, and other useful statistical tests when dealing with medical accuracy.
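One way to build intuition for p-values is with a quick simulation; the numbers here are made up for illustration, not from our project. The question: if a classifier labels 60 of 100 test cases correctly, could that plausibly happen by 50/50 guessing?

```python
import random

# Simulate the "pure guessing" null hypothesis and count how often it
# does at least as well as the observed 60/100 correct.
random.seed(0)
observed_correct = 60
trials = 10_000
at_least_as_good = sum(
    sum(random.random() < 0.5 for _ in range(100)) >= observed_correct
    for _ in range(trials)
)
p_value = at_least_as_good / trials
print(p_value)  # around 0.03: guessing rarely does this well
```

A p-value near 0.03 is below the conventional 0.05 threshold, so we’d reject the idea that the classifier is just guessing.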
In computer vision, we did the Keras tutorial and reviewed statistics in preparation for the presentation.
In natural language processing, we continued to study probability and used Jupyter Notebook to continue coding exercises.
Finally, in robotics, we learned a little about differential equations to better understand how to optimize decision-making. We also continued learning about velocity and angular velocity, and how these factors affect automobile decision making.
After the lecture, we heard from guest speaker Greg Zaharchuk, professor in radiology at Stanford Medical School. He spoke about his experiences using Artificial Intelligence in analyzing x-rays and MRI’s. This was especially interesting for those in the Decoding DNA group!
Next, we had office hours with our research mentors. We spent the bulk of this time putting some finishing touches on the presentations, and making sure we felt confident going into the banquet!
Finally, the girls dressed up for the highlight of the week, the AI4ALL 2019 Summer Banquet, which consisted of remarks from AI4ALL educational manager Wells Santos and program directors Alivia Shorter and Juan Carlos Niebles, a presentation from keynote speaker Dr. Michelle K. Lee, a networking dinner, and presentations from the campers about the work they’ve done so far.
We kicked off the banquet with an introduction from program director and Stanford CS professor Juan Carlos Niebles, in which he emphasized the importance of having women in AI.
Next, we heard from AI4ALL Educational Manager Wells Lucas Santos. He talked about his experiences coming out as queer despite the stigma about it within the computer science community, and how he overcame it to graduate at the top of his class. Since then, he has continued his activism for minorities in tech fields, and is now continuing his work with the national AI4ALL foundation. He led us in a mantra: “AI will change the world. Who will change AI?”
Next, we heard from keynote speaker Michelle K. Lee. Dr. Lee chronicled her academic roots, from her beginning as a tech-minded girl growing up in Silicon Valley, to her days as an MIT undergraduate. Then she talked about how fascinated she was with the patent laws that dictated so much of how technology was developing. After getting a law degree from Stanford University, Dr. Lee launched herself into an extremely successful career in patent law, serving tech titans such as Google and becoming the first Asian-American woman partner of law firm Fenwick & West.
She then left her lucrative position with Google to begin a career of public service as the director of the US Patent and Trademark Office, appointed by Barack Obama and confirmed unanimously by the Senate Judiciary Committee. Her hard work, perseverance, and technical prowess broke barrier after barrier in both Silicon Valley and Washington DC, and helped pave the way for more Asian-American women to follow in her footsteps. We were so lucky to have the chance to hear first-hand about her incredible story!
After the inspiring presentation, we had a networking dinner, with attendance from all the AI4ALL staff, alums, the students, and their families. In addition to the delicious catered Mediterranean food, students got the opportunity to meet with previous AI4ALL campers and receive advice and hear about their experiences.
Finally, the campers presented on their work so far in the AI4ALL camp. Research mentors proudly looked on as students recounted their experiences and talked about the work that they’ve done and concepts that they’ve learned.
Computer Vision Group, led by Boxiao Pan and Andrew Kondrich
Robotics (Self-Driving Cars) Group, led by Peter Zachares and Ali Mottaghi
Decoding DNA (Computational Biology) Group, led by Shantao Li and Tess Rinaldo
Disaster Relief (Natural Language Processing) Group, led by Lucy Li and Christina Yuan
Program Director Alivia Shorter concluded the banquet by emphasizing the uniqueness of AI4ALL and all the opportunities that the campers get from the program.
Next week is our final week together at AI4ALL. We can’t wait to learn even more with our mentors!
We started the day off again with another lecture from our research mentors. In addition to learning the content of our projects, we also all planned and practiced our presentations for the banquet tomorrow.
In Computer Vision, we started off the day with an open-note quiz about what we’ve learned so far. From logistic regression to parameter specifications, we reviewed all the difficult concepts we’ve learned in our week together. After the quiz, we went over its trickier questions in order to prepare for the banquet tomorrow. After some discussion, one of our fellow campers helped us reach a breakthrough in our understanding of fully connected neural networks.
In Computational Biology, we learned about Random Forest classifiers and K-Nearest Neighbors classification in the context of cell classification. In addition, we prepared scripts for the presentation.
With Natural Language Processing, we continued studying the probability distributions that determine how predictive text tries to guess our next words when we type or text. We also polished the presentation, and it’s ready for tomorrow!
In robotics, we learned more about nodes, connections, and graphs to further our understanding of path optimization with self-driving cars.
After the morning lecture, we met Dr. Chris Manning, professor at Stanford and director of Stanford AI Lab. He explained Natural Language Processing in terms of syntax and grammar structure, and we discussed our experiences using voice recognition technology such as Google Home, Siri, and Alexa.
We looked at the game Zork, a primitive version of natural language processing, as a case study, breaking down sentences into verbs, nouns, and adjectives. We also got to see his research in action—the analogy predictions were particularly impressive. We inputted words into the algorithm and got surprisingly accurate responses! This talk was especially relevant to the NLP/disaster relief research group.
We had lunch on the Oval with our research mentors. While munching on pizza, we chatted with mentors from different groups and soaked in the California sun. The rest of the pictures are on the AI4ALL Flickr. We had a fun time!
After lunch, we had office hours with our research mentors. We worked more on Friday’s banquet, assigning speaking positions and polishing presentations.
After office hours, we got to see a demo of RoboTurk presented by one of our research mentors, Andrew Kondrich. RoboTurk is a robot that can be operated remotely, and can hopefully one day help scientists collect data in dangerous environments.
We had a demo in which Andrew attempted to fold a pair of jeans with one of RoboTurk’s robots—the pair of jeans was 3 stories below us in the basement! He did this with the aid of an app on his phone. It was very interesting to see the robot move in real time, directed by his movements on his smartphone!
Later in the day, we had some 4th of July festivities! We decorated cookies, fountain-hopped, and watched some fireworks!
We’re presenting tomorrow at the AI4ALL banquet, and we’re excited to share what we’ve learned at camp so far with the professors and parents!
Today, we continued the lectures from our research mentors in the morning.
The Computer Vision group continued to learn about deep learning, from Convolutional Neural Networks to logistic regression.
Computational Biology was hard at work, continuing to learn about algorithmic analysis of DNA sequences, but also preparing a presentation for the banquet on Friday.
The Robotics/Self-Driving Cars group learned about velocity and angular velocity to aid in their understanding of car motion. We lightly covered some calculus and introduced the concept of differential equations.
Finally, Natural Language Processing in Disaster Relief continued to learn about Bayes’ Law to understand the probability distribution of certain words given the presence of other words. We also got a head start on preparing for Friday’s banquet presentation!
Next, we heard from guest speaker Ning Zhang, whose work in computer vision and AI at Snapchat has brought us great filters like the gender-swap filter. She talked about her educational journey from China’s Tsinghua University to Berkeley’s EECS program.
We heard about how she decided not to pursue academia and instead pursue her passion for innovation in industry. From her work with Snapchat to founding her own AI startup, Dr. Zhang has contributed a lot to her field. Her innovations in Computer Vision and continued advocacy for AI education are very inspiring for the campers!
After this, we had Office Hours with our mentors. We asked questions about review, but we didn’t have homework today because of the Lake Lagunita firepit later in the night.
At the end of the day, we all went to Lake Lagunita (which by now is less of a lake and more of a field) to roast marshmallows and bond. We came around sunset and left when it was almost completely dark.
We ate delicious s’mores and sang campfire songs like Riptide and Over the Rainbow. One camper brought her ukulele and another brought her singing skills, and together they led us in songs. It was super fun to spend time with the campers outside of Gates!
Today, we had a really fun day, meeting cool innovators, learning more about our research projects, and spending time at Lake Lagunita!
We started the day with a lecture from our project mentors. We split up into our research groups to learn more about our research from experts in the field!
In the Computer Vision group with mentors Andrew and Boxiao, we learned more about linear models for classification. We worked in Jupyter Notebook to understand neural networks and 2-D linear regressions.
In Computational Biology with mentors Shantao and Tess, we started doing some coding and learned more about dictionaries and arrays within Python. We cross-applied computer science to biology in order to do things like extract blood types from a large data sample and translate a DNA sequence.
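Codon-by-codon DNA translation with a Python dictionary looks roughly like this. The codon table below is deliberately tiny (the real one has 64 entries) and covers just enough for the demo sequence:

```python
# A tiny, incomplete codon table just for this demo ("*" marks a stop codon)
codon_table = {
    "ATG": "M", "TTT": "F", "AAA": "K", "TAA": "*",
}

def translate(dna):
    """Translate a DNA sequence into a protein string, codon by codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):  # step through 3-letter codons
        amino = codon_table[dna[i:i + 3]]
        if amino == "*":  # stop codon ends translation
            break
        protein.append(amino)
    return "".join(protein)

print(translate("ATGTTTAAATAA"))  # "MFK"
```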
Next, at Robotics with Ali and Peter, we discussed the ethics of self-driving cars. Although media discourse about the future of autonomy is frequently negative, we opened the conversation to the many positive implications of self-driving cars, from reduced car accident rates to cleaner transportation.
Finally, at Natural Language Processing with mentors Lucy and Christina, we learned more about precision, recall, and F1 in order to better analyze the accuracy of our NLP algorithm. We also started programming in Jupyter Notebook and constructing our own algorithms!
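For reference, precision, recall, and F1 can be computed from scratch in a few lines. The labels below are toy values invented for illustration, with 1 standing for a disaster-related tweet:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels: 1 = disaster-related tweet, 0 = not
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(precision_recall_f1(y_true, y_pred))  # precision = recall = F1 = 2/3 here
```

F1 is the harmonic mean of precision and recall, which is why it only gets high when both are high.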
After the lecture, we attended a guest talk with Stanford computer scientist Emma Brunskill. She talked about her experiences in the Stanford AI for Human Impact Lab, and how she’s furthered her field.
After lunch, we had Office Hours with our mentors. We spent more time learning about the research we’re doing and started coding and applying the things we’ve learned in the first week.
Finally, we heard from Jeannette Bohg on computer vision. We played around with optical illusions to give perspective to the truly difficult task of merging AI with vision. We learned about the difference between merely identifying objects from images and drawing a conclusion based on these findings—the latter is much harder and requires much more data.
We then tried to experience the world through a robot’s physical perspective by tying our fingers and blindfolding ourselves. Essentially, this is how robots must operate! It was very difficult to perform easy tasks such as writing our names and picking up objects. However, some of us learned fairly quickly to adapt to these new conditions.
We had a fun day today, spending a lot of time with our mentors and meeting prominent members of Stanford’s AI lab.
Today, we kickstarted the week with a field trip to Zoox headquarters in Foster City, California. Zoox is one of the leading players in the race for fully-autonomous cars, and is one of the few corporations with approval from California for public testing. Before entering the building, we all signed non-disclosure agreements, so we can’t get into the details about what we saw. However, we had a great time learning about all the complex components of AI in driving, and got a sneak peek into Zoox’s future plans and business model.
After getting a tour of the manufacturing station and having the opportunity to see a self-driving car first-hand, we had a panel with some female engineers at Zoox. They talked about their experiences as the only woman on a project, their pathways to STEM, and how to deal with being in the minority. We asked lots of questions, and they all had great answers for us.
Four lucky campers, chosen through a raffle, got to ride in one of Zoox’s cars! They got to take a fully autonomous ride (with a Zoox employee behind the wheel just in case!) and watched the image analysis detect humans and other vehicles and analyze traffic signs. Although I can’t talk about exactly how they do this, we definitely had a lot of fun experiencing auto-pilot.
After coming back to campus, we were treated to guest speakers Dr. Ayanna Howard and Dr. Tamara Pearson from Georgia Tech. Their experiences as not only the gender minority but also the racial minority were truly inspiring. We all learned a lot through their engaging stories and sage advice.
From their experiences lunching with Jeff Bezos to applying AI to special education, we learned a lot from their talk!
We had a really fun day today! We’ll be doing more research and working with our research mentors more in the coming days.