For decades, areas such as computer vision, deep learning, speech, and natural language processing have challenged the field's top experts; yet today, computer scientists seem to be making progress in these areas (among many others) every day.

Because of these breakthroughs, tools such as Microsoft Translator are coming to life; until recently, capabilities like these were the stuff of fantasy and science fiction. In turn, these tools are helping many people in many ways, for example by breaking down language barriers and facilitating communication.

Just the beginning

Last September, Microsoft announced the creation of Microsoft AI and Research, a new group that brings together approximately 7,500 computer scientists, researchers and engineers from the company’s research labs and product groups such as Bing, Cortana and Azure Machine Learning.

Microsoft Research AI, a research and incubation hub within Microsoft's research organization, is focused on solving some of AI’s most difficult challenges. The team of scientists and engineers will work closely with colleagues across Microsoft’s research labs and product groups in order to tackle some of the hardest problems in AI and accelerate the integration of the latest AI advances into products and services that benefit customers and society.

A core goal of Microsoft Research AI is to reunite AI research endeavors, such as machine learning, perception and natural language processing, that have evolved over time into separate fields of research. This integrated approach will allow the development of sophisticated understandings and tools that can help people do complex, multifaceted tasks.

Conclusion

Microsoft believes AI will be even more helpful when tools can be created that combine those functions and add some of the abilities that come naturally to people, like applying our knowledge of one task to another task or having a commonsense understanding of the world around us.

As AI moves from research to product, Microsoft is maintaining its commitment to foundational, open, and collaborative research, in addition to its dedication to solving society’s toughest problems in partnership with all members of society. All the while, Microsoft is actively pursuing a mission in common with us here at Managed Solution: to empower every person and organization on the planet to achieve more.

Conversations on AI

As written on news.microsoft.com
Microsoft has been investing in the promise of artificial intelligence for more than 25 years — and this vision is coming to life with new chatbot Zo, Cortana Devices SDK and Skills Kit, and expansion of intelligence tools.
“Across several industry benchmarks, our computer vision algorithms have surpassed others in the industry — even humans,” said Harry Shum, executive vice president of Microsoft’s Artificial Intelligence (AI) and Research group, at a small gathering on AI in San Francisco on Dec. 13.
“But what’s more exciting to me is that our vision progress is showing up in our products like HoloLens and with customers like Uber building apps to use these capabilities.”
When Bill Gates created Microsoft Research in 1991, he had a vision that computers would one day see, hear and understand human beings — and this notion attracted some of the best and brightest minds to the company’s labs.
Last month, Microsoft became the first in the industry to reach parity with humans in speech recognition. There has also been groundbreaking work with Skype Translator, now available in nine languages, an example of accelerating the pipeline from research to product. With Skype Translator, Microsoft has enabled people to understand each other in real time while talking across all corners of the world. But what about the dream of face-to-face, real-time translation?
Using this new intelligent language and speech recognition capability, Microsoft Translator can now simultaneously translate between groups speaking multiple languages in person and in real time, connecting people and overcoming barriers.

[vc_row][vc_column][vc_column_text]

Four Green Tech Predictions for 2017

Written by Rob Bernard as seen on blogs.microsoft.com
The end of the calendar year is a traditional time to reflect on the ups and downs of the past year and to think about what to expect in 2017. As Microsoft’s chief environmental strategist, I am encouraged by the progress made on some environmental issues in the past year, but there is still much work to be done. Despite the increasing urgency around many environmental issues, I remain optimistic about the future.
Perhaps the most notable breakthrough this past year was that the Paris Agreement on climate change entered into force. Cities, countries and companies around the world are now focusing their efforts on how to set and execute their plans to reduce carbon emissions. We also saw growth in corporate procurement of renewable energy in 2016, both in the U.S. and around the globe, another encouraging sign. At Microsoft, we put forth our own goal to source 50% of our datacenter electricity from wind, solar and hydropower. At the end of this year, we’re happy to report that we are on pace not only to meet our goal, but also to create new financial and technology models that can further accelerate the adoption of renewable energy.
As we look towards 2017, I expect that we will see both continued progress on energy and an increasing focus on non-energy areas. As we at Microsoft think about 2017, we expect to see some shifts in approaches and investments happening across the world.

1. IoT and Cloud Computing will begin to transform utility energy management

Aging infrastructure is already under pressure, and the addition of more renewable energy will only compound the stress on existing infrastructure. As more clean energy comes online, along with distributed resources like electric vehicles and rooftop solar, utilities are facing a big challenge: how to manage a more complex network of energy-creating and energy-storing devices. 2017 will see increased investment by utilities in technology to leverage data, through IoT solutions and cloud computing, to make energy management more predictable, flexible and efficient.
In developing nations, we are seeing a different trend, but one that is also accelerated by IoT and cloud computing. In these markets, data is being used to accelerate distribution, sales and management of micro-solar grids to enable households to get both power and connectivity to the internet. 2017 should be an exciting year with even more growth as capital investments in these markets increase and solar and battery storage prices decline.

2. Water will begin to emerge as the next critical world-scale environmental challenge

Water scarcity is increasing in many areas around the world. Our oceans are under threat from pollution, acidification and warming temperatures. We are already seeing the devastating effects on iconic landmarks like the Great Barrier Reef. And these trends are putting people's food, water and livelihoods at risk. In 2017, awareness of this challenge will increase. We will begin to better understand what is happening to our water through the use of sensors and cloud computing. Our ability to leverage technologies like advanced mapping and sensors will expand our understanding of what is driving the decline of many of our critical water systems.

3. Data will increasingly be used to try to better understand our planet

Data is essential for monitoring and managing the earth’s resources and fragile ecosystems. There is much we do not understand about the planet, but we see a growing number of companies and investments flowing toward tools and platforms that enable better mapping and understanding of earth’s carbon storage, airborne gases, and ecosystems and the value they provide. We expect to see data applied more proactively to create a more actionable understanding of how we can better manage food, water, biodiversity and climate change.

4. Organizations and policy makers will start leveraging cloud-based technologies

This area is perhaps the most difficult to predict. While countries will begin implementing their action plans under the Paris Agreement, it is not easy to predict which methods each country will use and prioritize to make progress against those commitments. And the changes will happen not just at the national level. Increasingly, we will see cities and local governments moving ahead with technology implementation to drive efficiencies and accountability, along with policy changes as well. We’re already leveraging machine learning and artificial intelligence to better model and understand the potential impact of legislative changes, in addition to offering our technologies to our public sector partners as they work towards their plans. While this will likely take several years to take hold, 2017 should see increased awareness of the role of technology in environmental policy.
While there are many challenges ahead across so many areas of sustainability, I remain optimistic. The number of people and organizations focusing on these and many other areas of sustainability ensures that we will continue to make progress in 2017. At Microsoft, we are committed to working with our customers and partners to help them achieve more through the use of our technologies. Everyone – companies, countries, individuals – has much work to do. We believe that by working together, we can help develop effective solutions to address environmental challenges that will benefit our business, our customers and the planet. And we are up to the challenge.

[/vc_column_text][/vc_column][/vc_row]

Historic milestone: Microsoft researchers achieve human parity in conversational speech recognition

Microsoft has made a major breakthrough in speech recognition, creating a technology that understands a conversation as well as a person does.
In a paper published Monday, a team of researchers and engineers in Microsoft Artificial Intelligence and Research reported a speech recognition system that makes the same or fewer errors than professional transcriptionists. The researchers reported a word error rate (WER) of 5.9 percent, down from the 6.3 percent WER the team reported just last month.
The 5.9 percent error rate is about equal to that of people who were asked to transcribe the same conversation, and it’s the lowest ever recorded against the industry standard Switchboard speech recognition task.
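Word error rate, the metric behind these milestones, is conventionally computed as the word-level edit distance (insertions, deletions and substitutions) between a system's transcript and a reference transcript, divided by the number of reference words. A minimal sketch of that calculation (the exact scoring pipeline used for the Switchboard task involves additional normalization not shown here):

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level Levenshtein distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of six: WER of about 16.7 percent.
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```

On this scale, 5.9 percent means roughly one word in seventeen is transcribed differently from the reference.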

 

Newly updated Microsoft Cognitive Toolkit can help speed advances in deep learning

By Athima Chansanchai as written on blogs.microsoft.com
Microsoft has released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and NVIDIA GPUs.
The toolkit, previously known as CNTK, was initially developed by computer scientists at Microsoft who wanted a tool to do their own research more quickly and effectively. It quickly moved beyond speech and morphed into an offering that customers, including a leading international appliance maker and Microsoft’s flagship product groups, depend on for a wide variety of deep learning tasks.
“We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of the Microsoft Cognitive Toolkit.
With the latest version of the toolkit, which is available on GitHub, developers can use the Python or C++ programming languages in working with the toolkit. With the new version, researchers also can do a type of artificial intelligence work called reinforcement learning.

[vc_row][vc_column][vc_column_text]

Using artificial intelligence to create invisible UI

By Martin Legowiecki as written on techcrunch.com
Interaction with the world around us should be as easy as walking into your favorite bar and getting your favorite drink in hand before your butt hits the bar stool. The bartender knows you, knows exactly what drink you like and knows you just walked through the door. That’s a lot of interaction, without any “interaction.”
We’re redefining how we interact with machines and how they interact with us. Advances in AI help make new human-to-machine and machine-to-human interaction possible. Traditional interfaces get simplified, abstracted, hidden — they become ambient, part of everything. The ultimate UI is no UI.
Everyone’s getting in the game, but few have cracked the code. We must fundamentally change the way we think.

Cross-train your team

Our roles as technologists, UX designers, copywriters and designers have to change. What and how we build — scrolling pages, buttons, taps and clicks — is based on aging concepts. These concepts are familiar, proven and will still remain useful. But we need a new user interaction model for devices that listen, “feel” and talk to us.
Technologists need to become more like UX designers and vice versa. They must work much closer together and mix their roles, at least until some standards, best practices and new tools are established.

No decision trees

The bartender in the example above is where more of the UI is starting to reside. On one hand, that means more responsibility to create transparent experiences that tend to be based on hidden rules and algorithms. On the other, it gives us incredible latitude to create open-ended experiences in which only important and viable information is presented to the user.
For example, to command our AI assistant, “Tell my wife I am going to be late,” the system needs to be smart enough not only to understand the intent, but also to know who the wife is and the best way to contact her. No extraneous information is necessary, no option list, no follow-up questions. We call this Minimum Viable Interaction (MVI).
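A hypothetical sketch of what resolving that command might look like: the assistant pulls the recipient and preferred channel from stored context, so no option list or follow-up question is needed. The contact store, field names and command grammar here are all illustrative assumptions, not any real assistant's API:

```python
# Illustrative contact store the assistant would maintain from prior context.
CONTACTS = {
    "wife": {"name": "Anna", "preferred_channel": "sms", "number": "+1-555-0100"},
}

def handle_utterance(utterance):
    """Resolve a 'tell [my] <relation> <message>' command in one step."""
    words = utterance.split()
    if len(words) < 3 or words[0].lower() != "tell":
        return None                      # not an intent this sketch understands
    idx = 1
    if words[idx].lower() == "my":       # tolerate "tell my wife ..."
        idx += 1
    contact = CONTACTS.get(words[idx].lower())
    if contact is None:
        return None                      # unknown relation: a real system would ask
    return {
        "action": "send_message",
        "channel": contact["preferred_channel"],
        "to": contact["number"],
        "body": " ".join(words[idx + 1:]),
    }

print(handle_utterance("tell my wife I am going to be late"))
```

The point of the MVI idea is visible in the return value: everything the messaging action needs is filled in from context, and nothing else is surfaced to the user.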

Your interface is showing

We’ve started talking to our machines — not with commands, menus and quirky key combinations — but using our own human language. Natural language processing has seen incredible advances, and we finally don’t need to be a machine to talk to one. We chat with the latest chatbots, search using Google Voice or talk to Siri. The accuracy of speech recognition has improved to an incredible 96 percent.


The last few percentage points might not seem like a lot, but it’s what makes or breaks the perfect experience. Imagine a system that can recognize what anyone says 100 percent of the time, no matter how we say things (whether you have an accent, pause between words or say a bunch of inevitable “uhhs” and “umms”). Swap a tap or a click for the Amazon Echo’s far-field recognition, and the UI melts away. It becomes invisible, ubiquitous and natural.
We aren’t there yet. For now, we can devise smart ways of disguising the capability gap. A lot of time goes into creating programming logic and clever responses to make the machine seem smarter than it really is. Make one mistake where the UI shows and the illusion will break.

Contextual awareness

The system needs to know more about us for invisible UI to become reality. Contextual awareness today is somewhat limited. For example, when asking for directions via Google Maps, the system knows your location and will return a different result if you are in New York versus California.
Our phones, watches and other mobile devices are loaded with a ton of sensors. They make us humans the cheap sensors machines need today. We gather the knowledge and data that the system needs to do its work.
But even with all the sensors and data, the machine needs to know more about us and what is going on in our world in order to create the experiences we really need. One solution is combining the power of multiple devices/sensors to gather more information. But this usually narrows down and limits the user base — not an easy thing to sell to a client. You have to think on your feet. Change, tweak, iterate. This space is way too dynamic to be married to an original creative concept.
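One way to picture "combining the power of multiple devices/sensors" is a simple context merge: each device contributes the signals it has, later or higher-priority readings fill gaps, and an ambiguous query is then resolved against the merged context. This is a sketch under assumed device fields, not any real platform's sensor API:

```python
def merge_context(*sensor_readings):
    """Fold per-device readings into one context dict.

    Readings are applied in order; a device that lacks a signal reports
    None for it, so it never overwrites another device's value.
    """
    context = {}
    for reading in sensor_readings:
        context.update({k: v for k, v in reading.items() if v is not None})
    return context

phone = {"location": "New York", "moving": True}
watch = {"heart_rate": 72, "location": None}   # watch has no GPS fix

ctx = merge_context(phone, watch)

def resolve(query, context):
    """Answer a location-dependent query using the merged context."""
    if query == "directions to the office":
        return f"route from {context.get('location', 'unknown')}"
    return "unsupported"

print(resolve("directions to the office", ctx))
```

The same query would produce a different route for a user whose merged context says "California", which is exactly the Google Maps behavior described above.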
What wasn’t possible just yesterday is becoming mainstream today as we develop new experiences, explore new tech, topple old paradigms and continue to adapt.

[/vc_column_text][/vc_column][/vc_row]

Contact us Today!

Chat with an expert about your business’s technology needs.