Decades of computer vision research, one ‘Swiss Army knife’

[vc_row][vc_column][vc_column_text]


By Allison Linn as written on blogs.microsoft.com


When Anne Taylor walks into a room, she wants to know the same things that any person would.
Where is there an empty seat? Who is walking up to me, and is that person smiling or frowning? What does that sign say?
For Taylor, who is blind, there aren’t always easy ways to get this information. Perhaps another person can direct her to her seat, describe her surroundings or make an introduction.
There are apps and tools available to help visually impaired people, she said, but they often only serve one limited function and they aren’t always easy to use. It’s also possible to ask other people for help, but most people prefer to navigate the world as independently as possible.
That’s why, when Taylor arrived at Microsoft about a year ago, she immediately got interested in working with a group of researchers and engineers on a project that she affectionately calls a potential “Swiss Army knife” of tools for visually impaired people.
“I said, ‘Let’s do something that really matters to the blind community,’” said Taylor, a senior project manager who works on ways to make Microsoft products more accessible. “Let’s find a solution for a scenario that really matters.”
That project is Seeing AI, a research project that uses computer vision and natural language processing to describe a person’s surroundings, read text, answer questions and even identify emotions on people’s faces. Seeing AI, which can be used as a cell phone app or via smart glasses from Pivothead, made its public debut at the company’s Build conference this week. It does not currently have a release date.
Taylor said Seeing AI provides another layer of information for people who also are using mobility aids such as white canes and guide dogs.
“This app will help level the playing field,” Taylor said.
At the same conference, Microsoft also unveiled CaptionBot, a demonstration site that can take any image and provide a detailed description of it.

Very deep neural networks, natural language processing and more
Seeing AI and CaptionBot represent the latest advances in this type of technology, but they are built on decades of cutting-edge research in fields including computer vision, image recognition, natural language processing and machine learning.
In recent years, a spate of breakthroughs has allowed computer vision researchers to do things they might not have thought possible even a few years before.
“Some people would describe it as a miracle,” said Xiaodong He, a senior Microsoft researcher who is leading the image captioning effort that is part of Microsoft Cognitive Services. “The intelligence we can say we have developed today is so much better than six years ago.”
The field is moving so fast that it’s substantially better than even six months ago, he said. For example, Kenneth Tran, a senior research engineer on his team who is leading the development effort, recently figured out a way to make the image captioning system more than 20 times faster, allowing people who use tools like Seeing AI to get the information they need much more quickly.
A major a-ha moment came a few years ago, when researchers hit on the idea of using deep neural networks, which roughly mimic the biological processes of the human brain, for machine learning.
Machine learning is the general term for a process in which systems get better at a task as they are given more training data about it. For example, a computer scientist who wants to build an app that helps bicyclists recognize when cars are coming up behind them would feed the computer thousands of pictures of cars, so that the app learns to recognize the difference between a car and, say, a sign or a tree.
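The "more data, better recognition" idea can be sketched in a few lines. This is a toy illustration, not Microsoft's actual system: a nearest-centroid classifier over two hypothetical, hand-picked image features, where more labeled examples make the class centroids more reliable.

```python
# Toy sketch (hypothetical features and labels, not a real vision system):
# a nearest-centroid classifier that separates "car" from "tree" images.

def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the feature vector."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)

# Invented features: (width/height ratio, edge density)
training = [
    ([2.1, 0.8], "car"), ([1.9, 0.7], "car"),
    ([0.5, 0.3], "tree"), ([0.6, 0.2], "tree"),
]
centroids = train(training)
print(predict(centroids, [2.0, 0.75]))  # → car
```

Real systems like the ones described below learn the features themselves from raw pixels instead of relying on hand-picked ones, which is exactly what deep neural networks contribute.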
Computer scientists had used neural networks before, but not in this way, and the new approach resulted in big leaps in computer vision accuracy.
Several months ago, Microsoft researchers Jian Sun and Kaiming He made another big leap when they unveiled a new system that uses very deep neural networks – called residual neural networks – to correctly identify photos. The new approach to recognizing images resulted in huge improvements in accuracy. The researchers shocked the academic community and won two major contests, the ImageNet and Microsoft Common Objects in Context challenges.
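The core residual idea can be shown in a minimal sketch. This is a simplified illustration rather than the actual ResNet code: instead of learning a mapping y = f(x) directly, each block learns a residual f(x) and adds the input back through a shortcut, y = f(x) + x.

```python
# Minimal residual-block sketch (illustrative, not the real ResNet):
# the shortcut connection adds the block's input to its output.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(weights, v):
    return [sum(w * x for w, x in zip(row, v)) for row in weights]

def residual_block(weights, v):
    # Output is the learned transformation plus the input itself.
    return [fx + x for fx, x in zip(relu(linear(weights, v)), v)]

# With zero weights the block reduces to the identity mapping. That is
# what makes very deep stacks easy to optimize: an extra block can start
# out as "do nothing" and only learn what it needs to add.
zero = [[0.0, 0.0], [0.0, 0.0]]
print(residual_block(zero, [1.0, -2.0]))  # → [1.0, -2.0]
```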
Tools to recognize and accurately describe images
That approach is now being used by Microsoft researchers who are working on ways to not just recognize images but also write captions about them. This research, which combines image recognition with natural language processing, can help people who are visually impaired get an accurate description of an image. It also has applications for people who need information about an image but can’t look at it, such as when they are driving.
The image captioning work also has received accolades for its accuracy compared with other research projects, and it is the basis for the capabilities in Seeing AI and CaptionBot. Now, the researchers are working on expanding the training set so the tools can give users a deeper sense of the world around them.


Margaret Mitchell, a Microsoft researcher who specializes in natural language processing and has been one of the industry’s leading researchers on image captioning, said she and her colleagues also are looking at ways a computer can describe an image in a more human way.
For example, while a computer might accurately describe a scene as “a group of people that are sitting next to each other,” a person may say that it’s “a group of people having a good time.” The challenge is to help the technology understand what a person would think was most important, and worth saying, about the picture.
“There’s a separation between what’s in an image and what we say about the image,” said Mitchell, who also is one of the leads on the Seeing AI project.
Other Microsoft researchers are developing ways that the latest image recognition tools can provide more thorough explanations of pictures. For example, instead of just describing an image as “a man and a woman sitting next to each other,” it would be more helpful for the technology to say, “Barack Obama and Hillary Clinton are posing for a picture.”
That’s where Lei Zhang comes in.
When you search the Internet for an image today, chances are high that the search engine is relying on text associated with that image to return a picture of Kim Kardashian or Taylor Swift.
Zhang, a senior researcher at Microsoft, is working with researchers including Yandong Guo on a system that uses machine learning to identify celebrities, politicians and public figures based on the elements of the image rather than the text associated with it.
Zhang’s research will be included in the latest vision tools that are part of Microsoft Cognitive Services. That’s a set of tools that is based on Microsoft’s cutting-edge machine learning research, and which developers can use to build apps and services that do things like recognize faces, identify emotions and distinguish various voices. Those tools also have provided the technical basis for Microsoft showcase apps and demonstration websites such as how-old.net, which guesses a person’s age, and Fetch, which can identify a dog’s breed.
Microsoft Cognitive Services is an example of what is becoming a more common phenomenon – the lightning-fast transfer of the latest research advances into products that people can actually use. The engineers who work on Microsoft Cognitive Services say their job is a bit like solving a puzzle, and the pieces are the latest research.
“All these pieces come together and we need to figure out, how do we present those to an end user?” said Chris Buehler, a software engineering manager who works on Microsoft Cognitive Services.
From research project to helpful product
Seeing AI, the research project that could eventually help visually impaired people, is another example of how fast research can become a really helpful tool. It was conceived at last year’s //oneweek Hackathon, an event in which Microsoft employees from across the company work together to try to make a crazy idea become a reality.
The group that built Seeing AI included researchers and engineers from all over the world who were attracted to the project because of the technological challenges and, in many cases, also because they had a personal reason for wanting to help visually impaired people operate more independently.
“We basically had this super team of different people from different backgrounds, working to come up with what was needed,” said Anirudh Koul, who has been a lead on the Seeing AI project since its inception and became interested in it because his grandfather is losing his ability to see.
For Taylor, who joined Microsoft to represent the needs of blind people, it was a great experience that also resulted in a potential product that could make a real difference in people’s lives.
“We were able to come up with this one Swiss Army knife that is so valuable,” she said.

[/vc_column_text][/vc_column][/vc_row]

The SwimTrain exergame makes swim workouts fun again

[vc_row][vc_column][vc_column_text]


By Miran Lee as written on blogs.msdn.microsoft.com

To many who swim for exercise, workouts come down to the monotony of doing laps—swimming back and forth in a pool. Over and over. Unlike other exercisers, who can make their routines less of a chore by adding a social component—working out with friends, family, or in groups—swimmers really haven’t had many options, because coordinating a group of swimmers is difficult. The Korea Advanced Institute of Science and Technology (KAIST) and Microsoft Research Asia (MSRA) are happy to report that with SwimTrain, their new cooperative “exergame” research project, you’ll never have to swim alone again.
SwimTrain is the result of a research collaboration between KAIST and MSRA. The project targets something we can all relate to: exercise boredom. Swimming, while one of the best ways to get fit, can be tedious. The SwimTrain team thinks they have a way to make swimming a lot more exciting.


How does SwimTrain work? First, you slip your phone into a waterproof case and plug in some waterproof headphones. Then, you jump in. Players get matched up as a team to form a virtual “train,” with each player controlling the speed of a single train compartment. Go too fast or too slow, and the game warns you of bumping into other compartments. Featuring narration, vibration feedback, spatialized sound effects, and background music, the immersive experience takes players through different modes of gameplay based on an interval training workout plan.
Each SwimTrain round consists of three phases:
Phase 1: Compartment ordering
Compartments race against other compartments. A compartment is ranked based on a swimmer’s average stroke speed during the race.
Phase 2: Train running
Compartments are placed along the same track and run in a circle (like a merry-go-round). To earn points, each swimmer must match their current stroke rate to the target stroke rate established in the previous phase. A compartment shifts forward or backward as the swimmer's stroke rate rises above or falls below the target, and the goal is to travel without crashing into adjacent compartments.
Phase 3: Train stop
The virtual train stops. Every swimmer takes a short rest. The game narrates the final ranking of the current round and information for the next round, such as the duration of each phase and recommended stroke types.
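The round structure above can be sketched in code. This is a hypothetical reconstruction based only on the description here, not SwimTrain's actual implementation; the function names and the proportional shift rule are invented.

```python
# Hypothetical sketch of the round logic described above (SwimTrain's
# real internals are not public; names and constants are invented).

def rank_compartments(avg_stroke_rates):
    """Phase 1: a higher average stroke rate earns a better (lower) rank."""
    order = sorted(avg_stroke_rates, key=avg_stroke_rates.get, reverse=True)
    return {swimmer: rank for rank, swimmer in enumerate(order, start=1)}

def shift_compartment(position, current_rate, target_rate, gain=0.1):
    """Phase 2: drift forward when swimming faster than target, back when slower."""
    return position + gain * (current_rate - target_rate)

ranks = rank_compartments({"ann": 32.0, "bo": 28.5, "cy": 30.1})
print(ranks)  # → {'ann': 1, 'cy': 2, 'bo': 3}
```

In the game, a compartment whose position drifts too far in either direction would bump into its neighbors, which is what triggers the warning feedback.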
SwimTrain accomplishes immersive gameplay by relying on advanced tech packed into a mobile phone. The barometer, accelerometer, gyroscope, and magnetometer track swimming activities, determining swimming periods, stroke, style, speed, and other events. This information is fed to a Network Manager based on the Microsoft Azure cloud, and is then delivered back to the game as rank and round data, determining the status of the player in relation to the train. It’s also passed to a Feedback Manager, which provides the auditory and sensory feedback that make SwimTrain unique.
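The sensing step can be illustrated with a simple sketch. This is a hypothetical example, not SwimTrain's actual signal processing: it estimates strokes per minute from accelerometer magnitudes by counting upward threshold crossings, one crude way a phone's motion sensors could yield a stroke rate.

```python
# Hypothetical stroke-rate estimator (illustrative only; the threshold
# and the peak-counting rule are invented, not SwimTrain's algorithm).

def stroke_rate(samples, sample_hz, threshold=1.5):
    """samples: accelerometer magnitudes sampled at sample_hz.
    Counts upward crossings of the threshold as strokes and
    returns strokes per minute."""
    strokes = 0
    above = False
    for magnitude in samples:
        if magnitude > threshold and not above:
            strokes += 1
            above = True
        elif magnitude <= threshold:
            above = False
    minutes = len(samples) / sample_hz / 60.0
    return strokes / minutes if minutes > 0 else 0.0

# 10 seconds of data at 5 Hz containing 6 stroke peaks: roughly 36 strokes/min
window = ([1.0] * 7 + [2.0]) * 6 + [1.0] * 2
print(stroke_rate(window, sample_hz=5))
```

A real implementation would fuse the gyroscope and magnetometer as well to distinguish stroke style and turns, as the article describes.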
Preliminary feedback from users is positive—SwimTrain makes you feel like you’re not alone in the pool. According to one test user, “Although [SwimTrain] didn’t provide any visual feedback, I felt like I was swimming with others.” Feedback also indicates that SwimTrain provides an immersive and enjoyable experience that’s an intense workout, too.

The project team’s research is getting noticed in the world of human-computer interaction (HCI). CHI 2016, the world’s top conference for HCI, has accepted the team’s research for inclusion in the CHI 2016 Notes and Papers Program.
This collaboration with KAIST is a great example of how Microsoft values symbiotic relationships with partners in academia. “Not only do we have the ability to shape the future of Microsoft products, we have the chance to support and learn from some of the top professors in computer science,” said Darren Edge, lead researcher at MSRA. Many of these collaborations lead to internships. “When a student makes a particularly promising contribution to a joint project, we can also invite them to spend time at Microsoft as a research intern. Everybody wins from such internships: we get some of the brightest PhD students to work on our projects, and the students develop new expertise and skills that they can apply to their university work with their professor.”
Darren explains that this recently happened as a result of his ongoing collaboration with Professor Uichin Lee at KAIST. Following the completion of work on SwimTrain, Professor Lee’s PhD student Jeungmin Oh joined Darren at MSRA for a six-month internship, working in another area. “We are all now collaborating on multiple projects in parallel. If any of them are as successful as SwimTrain, which won the third place award at the recent Microsoft Korea and Japan Day and has two accepted papers pending publication, I will be very happy indeed,” he states.
The MSRA HCI group has in fact had a longstanding collaboration with academia: In recent years, MSRA has supported principal investigators for projects published at CHI 2014, CSCW 2015, and CHI 2016.
In the future, SwimTrain will focus on measuring more data, such as heart rate and maximal oxygen uptake, to determine the exertion level of a player’s swimming. Also, the method might be applied to other group exercises, such as group jogging and group cycling. We look forward with anticipation to what SwimTrain might inspire.

[/vc_column_text][/vc_column][/vc_row]

Sparking opportunity for all youth around the globe


By Mary Snapp as written on blogs.microsoft.com


Sometimes all it takes is a spark: that one class, that one teacher, that one project that makes a difference. It can change the lives of young students who may have had little opportunity to excel, or perhaps even to complete high school, enabling them to become successful engineers, entrepreneurs or computer scientists. This is the inspiration behind our global YouthSpark initiative.
Last September, Satya Nadella announced a three-year, $75 million YouthSpark investment to help every young person get the opportunity to learn computing skills and computer science.
Today we are providing an update by announcing YouthSpark grants to 100 nonprofit partners in 55 countries. In turn, our partners will leverage the power and energy of local schools, businesses and community organizations to create new and engaging opportunities for students to explore computer science. These partners will teach students valuable skills to help them prepare for and succeed in jobs that are open today across industries, along with new jobs that will be created. Our partners will build upon the work that Microsoft already has underway, including our commitments to computer science education through programs like Hour of Code with Code.org, BBC micro:bit and TEALS.
Still, much more progress must be made. Despite the need for basic computational thinking skills across all subject areas, in the U.S. less than 25 percent of high schools offer computer science classes. Only 2.5 percent of U.S. high school graduates go on to study computer science in college, and of this small percentage, only 1 in 5 computer science graduates is female. Globally, some countries have made computer science a mandatory subject in secondary schools, but we know firsthand through our own work that far too few schools around the world provide courses in computing. We also recognize that governments play a critical role in continued progress on this important issue. We continue to work with policymakers around the world to support the policy and funding necessary to expand computer science into public education. In the U.S., we’re proud to support Computer Science for All, a national effort created by President Barack Obama to give all American students the opportunity to learn computer science in school.
We know that no single organization or company can close the global computer science education skills gap. That is why we are committed to working in partnership with others. Our efforts have focused on leveraging the longstanding community relationships of more than 100 nonprofit partners around the world to create access to computer science, and also to break down the barriers and stereotypes that keep large populations of youth out of computer science education — even when the opportunities are available.
Later this month, we will bring together some of our local nonprofit partners from around the world during a YouthSpark Summit at the Microsoft campus in Redmond. We’ll learn, discuss, share ideas and develop action plans so that, together with our partners, we can continue to improve and bring better knowledge and expertise to local communities.
Every young person should have an opportunity, a spark, to realize a more promising future. Together with our nonprofit partners, we are excited to take a bold step toward that goal today. Learn more about our nonprofit partners here, and visit YouthSpark.com for more information on our global initiative to make computer science education accessible for all young people.

Predicting ocean chemistry using Microsoft Azure



Shellfish farmer Bill Dewey remembers the first year he heard of ocean acidification, a phrase that means a change in chemistry for ocean water. It was around 2008, and Dewey worked for Taylor Shellfish, a company that farms oysters in ocean waters off the coast of Washington. That year, thousands of tiny “seed” oysters died off suddenly. Today, a cloud-based predictive system from the University of Washington (UW) and Microsoft Research may help the shellfish industry survive changing conditions by providing forecasts about ocean water.
Dewey, director of Public Affairs for Taylor Shellfish, vividly remembers walking into a conference room where an audience of shellfish farmers first heard that ocean acidification might threaten their industry profoundly. They learned that increased carbon dioxide in the atmosphere is making ocean water more acidic. In 2013, the Washington legislature stepped in and asked the UW to study the problem and build a predictive forecast model, aptly named LiveOcean.
Just like a numerical weather forecast model, LiveOcean will soon provide a forecast that predicts the acidity of water in a specific bay, part of Puget Sound or other coastal regions, days in advance.
Parker MacCready, a professor of physical oceanography at UW, leads the LiveOcean team and used Microsoft Azure to create the cloud-based storage system. The system holds enormous amounts of data from his regional ocean model, the Regional Ocean Modeling System (ROMS), which helps feed the LiveOcean forecasts. The Azure component uses Python and the Django web framework to provide these forecasts in an easy-to-consume format. To produce them, the LiveOcean system relies on other sources: US Geological Survey data (for river flow), atmospheric forecasts, and another ocean model, the HYbrid Coordinate Ocean Model (HYCOM).
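The forecast-lookup step can be sketched in plain Python. This is a simplified, hypothetical illustration: the article says the real service is built on Azure with Django, but the data, names, and pH threshold below are invented for the example.

```python
# Hypothetical sketch of serving a per-bay forecast (invented names and
# values; the real LiveOcean service and its data are not shown here).

forecast_store = {
    # (region, date) -> predicted pH, standing in for ROMS model output
    ("samish_bay", "2016-05-01"): 7.9,
    ("willapa_bay", "2016-05-01"): 7.6,
}

def get_forecast(region, date):
    """Return the stored pH prediction, or None if no forecast exists."""
    return forecast_store.get((region, date))

def is_safe_for_seed_oysters(region, date, threshold_ph=7.8):
    # Baby oysters need carbonate ions to build their first shell; in
    # more acidic (lower-pH) water, shell-building costs too much energy.
    # The threshold here is purely illustrative.
    ph = get_forecast(region, date)
    return ph is not None and ph >= threshold_ph

print(is_safe_for_seed_oysters("samish_bay", "2016-05-01"))   # → True
print(is_safe_for_seed_oysters("willapa_bay", "2016-05-01"))  # → False
```

In the real system, a web view would answer such queries for hatchery operators like Dewey, telling them when and where it is safe to hatch and plant oysters.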

Dewey needs information on the acidity levels because a baby oyster needs to create a shell immediately to survive, and needs carbonate ions in the water to make that first tiny shell. If the water is too acidic, the baby oyster must use too much energy and dies in its attempt to make that first shell. Taylor Shellfish has hatcheries for the baby oysters and “planting” beds where young oysters are carried to grow to full size. Forecasts of water acidity in both places would help the company know when it was safe to hatch the babies, and where (and when) it is safe to plant them.
Ocean acidification is an emerging global problem, according to the National Oceanic and Atmospheric Administration (NOAA). Scientists are just starting to monitor ocean acidification worldwide, so it is impossible to predict exactly in what ways it will affect the marine environment. In a report, NOAA wrote, “There is an urgent need to strengthen the science as a basis for sound decision making and action.”
Azure tools make the system open to anybody. MacCready is eager to see how others develop sites pulling data on water currents for kayakers, for example, or information for salmon fishers. He is particularly excited about “particle tracking,” which helps him see where individual particles in the ocean move. That tracking could predict where an oil spill might move, for example. Using the cloud is “the way of the future” from his scientific perspective. “It gives the ability to create and use different resources without having to go out and buy hardware yourself.”
Fine-tuning and testing are essential to the reliability of the predictions. In recent years, MacCready and others have been validating the forecasts LiveOcean makes, pairing real observations from physical instruments with predictions. Within months, he hopes to refine forecasts down to the level of individual bays, so that he can tell Dewey whether Samish Bay or Willapa Bay, for example, is “safe” for the new oysters.
LiveOcean has impacts far beyond the shellfish industry. Jan Newton, principal oceanographer at the Applied Physics Laboratory and co-director of the Washington Ocean Acidification Project (WOAP), believes it may change how the public sees climate change and ocean chemistry.
“Data portals and models like LiveOcean can really make a bridge [of understanding] because even if people don’t understand the chemistry, they’ll look at the color-coding and see how this changes with location and season,” she said. Dewey believes that these tools for the Pacific Ocean chemistry will be adopted by others for oceans worldwide.

Governors Launch Bipartisan Partnership to Expand Access to Computer Science Education



On February 21, 2016, Governors Asa Hutchinson (R-Ark.) and Jay Inslee (D-Wash.) announced a new partnership to promote K-12 computer science education at the state level at the National Governors Association Winter Meeting.
Currently, only one in four schools offers computer science instruction — teaching students to create technology, not just use it. Demand for increased and earlier access to computer science is growing among educators, parents and employers. In a recent survey, 90 percent of parents said they want computer science taught in schools. Today, there are more than 600,000 open computing jobs across the U.S. in every industry, and these are among the fastest-growing, highest-paying jobs in the country.
“There are few jobs today that don’t require some degree of technology or computer use, whether it’s auto mechanics, fashion design or engineering. A big part of our children’s success in the 21st century economy will be to ensure every student feels confident in front of a computer,” said Governor Inslee. “In Washington state we’ve had great bipartisan success promoting stronger computer science education, including teacher training and learning standards. I’m hopeful that governors around the country will join us in making computer science one of the basic skills every child learns.”
To address the education gap, governors joining the Partnership for K-12 Computer Science will work toward three key policy goals in their states.
Governors Asa Hutchinson and Jay Inslee will serve as the bipartisan co-chairs for the initiative; they are calling on their colleagues to join them. Participating governors will also share best practices for expanding access to computer science, and advocate for federal policies to support computer science instruction.
“I’m delighted to join fellow governors to promote computer science education in schools across the country. I strongly believe this is paramount to the future of the American economy, and a critical step in preparing the next generation for the fastest growing field in the world,” said Governor Asa Hutchinson. “This time last year, our state passed the most comprehensive computer science education law in the country and appropriated significant funding to train teachers. And we’re not done yet. I look forward to collaborating with my colleagues in other states.”
The Partnership builds on increasing nationwide momentum for computer science education. In January, President Obama proposed $4.1 billion in his budget to support K-12 computer science. More than 20 states have proposed policies to expand access to computer science instruction, and districts are investing time and resources in preparing tens of thousands of educators to teach the subject. Last year, one of every three schools in the U.S. participated in the Hour of Code, a global campaign designed to address misperceptions about computer science.
“It’s amazing to see computer science sweeping across the nation's K-12 public schools, to provide a better future for our children,” said Hadi Partovi, CEO of Code.org. “Washington and Arkansas have led the way, but other states like Idaho, Utah, Massachusetts, Georgia and Alabama are also making this a priority. This new partnership will help expand that groundswell across the US.”
Code.org will provide the Partnership with resources related to best practices in policy and programs, and will facilitate collaboration among governors and their staff.

Source: http://www.governorsforcs.org

Microsoft YouthSpark expands youth programs for computer science education



by Suzanne Choney, Microsoft News Center Staff as written on Microsoft.com
It gave Victoria Tran confidence and a possible career path. It’s giving Joey Cannon the opportunity he has long wanted. They’re among the high school students around the country benefiting from computer science education, something in short supply around the world and in the U.S., where less than a quarter of high schools teach it.
On Wednesday, Microsoft CEO Satya Nadella announced an expansion of the YouthSpark program to increase access to computer science education for all youth worldwide, and especially for those from under-represented backgrounds, with a $75 million commitment in community investments over the next three years.
In the U.S., where the TEALS (Technology Education and Literacy in Schools) program brings computer science education to high school students and teachers, this flagship program of YouthSpark will increase mightily, going from 131 schools in 18 states to nearly 700 schools in 33 states in the next three years.
“If we are going to solve tomorrow’s global challenges, we must come together today to inspire young people everywhere with the promise of technology,” said Nadella. “We can’t leave anyone out.”

Excerpts from an interview conducted by computer science students, as part of Microsoft YouthSpark’s support of Roadtrip Nation.
YouthSpark is a global initiative to increase access for all youth to learn computer science, empowering them to achieve more for themselves, their families and their communities.
The TEALS program needs tech industry volunteers to team-teach computer science to 30,000 students, a message Nadella shared during the annual Dreamforce conference hosted by Salesforce, where he called on thousands of tech professionals to serve as TEALS volunteers.
Those TEALS volunteers create a ripple effect – you could even call it a tidal wave, really – with what they do. They teach not only students but also prepare teachers to lead their own computer science classes in subsequent years.
Since the program began in 2009, “we’ve had an amazing response from schools and teachers, as well as volunteers from across the industry, without which none of this would be possible,” says Kevin Wang, the founder of TEALS and its first volunteer, who works for Microsoft.
Boston technology teacher Ingrid Roche says TEALS volunteers are superb. “Either that, or they’re magically phenomenal,” she says. “They have a ton of knowledge, and they are able to share it with me and the students without being condescending in any way, or judgmental.”
Roche teaches at the Boston Latin Academy, part of Boston Public Schools. The school has about 1,700 students in grades 7-12. The student population reflects the area’s diversity – including students who are black, white, Latino and Asian.
TEALS volunteers, Roche says, “have a sensitivity” to students from different backgrounds.
“That’s really helpful with TEALS having that on their radar,” Roche says. “If it’s always the student who went to computing summer camp raising his hand in class every single time, if he always gets called on – he’s going to get better, and the others won’t.”
By taking a TEALS class, Victoria Tran of San Francisco, and the daughter of Vietnamese immigrants, learned about career paths she might not otherwise have considered – or been self-assured enough to try.
“I’m getting better and better. I feel more confident in my abilities. I’ve never been so sure of myself and what I can do,” says Tran, who is contemplating majoring in computer science or electrical engineering in college.
In the state of Washington, TEALS is in its second year at the public International School in Bellevue, where teacher Janet Roberts has been working with TEALS volunteers to teach computer science.
“I have some ancient programming experience in Fortran, and without the expertise of the TEALS people, it would have been very difficult to launch this course,” says Roberts, who also teaches math.
The TEALS volunteers, Roberts says, “have been amazing, dedicated and patient” in working with both her and the students. Last year, there were two classes in AP Computer Science offered, with 55 students in both classes at the high school, which has about 300 students.
Joey Cannon, 18, took the TEALS class last year at the International School and is helping with it this year as a teacher’s assistant.
Without the TEALS program, the school’s AP Computer Science class “wouldn’t be possible,” Joey says. “Even if we just had one teacher who knew all of it really well, it would have been hard for them to be able to provide all the help that students needed.”
Making computer science education the rule and not the exception is achievable, but it takes everyone’s help.
Says TEALS founder Wang: “We need to get to a point where computer science classes are offered alongside other classes like biology, chemistry and physics. We have a long road ahead of us. Microsoft is sort of seeding this investment, but in order for this problem to be solved, we’ve got to have everybody joining us.
“You can think of this as a movement, and Microsoft is supporting a movement,” Wang says. TEALS “is really there to help our local schools, all of our local schools, wherever they may be, to prepare our kids to make sure they’re ready for what comes next, no matter what they do with their lives.”
Microsoft extends this commitment to the workplace as well through its Explore Microsoft 12-week summer internship program, specifically designed for college students. Explore Microsoft offers a rotational experience that enables the students to gain experience in different software engineering roles.
That’s where Chuma Kabaghe, a native of Zambia attending the University of Illinois, got her start.
“The Explore program exposed me to the three core technology disciplines at Microsoft. I also learned about various areas in the technology sector,” said Kabaghe. “The Explore program allowed me to see what is possible and gave me an opportunity to prove to myself that I really do have what it takes to be a successful software engineer.”
And she does. Kabaghe has now joined Microsoft in that role.
Lead image: Victoria Tran at a TEALS coding event in San Francisco.

Source: http://news.microsoft.com/features/microsoft-youthspark-expands-youth-programs-for-computer-science-education/