Revolutionizing the Financial Sector With Artificial Intelligence

Today's technological revolutions are changing the business environment almost beyond recognition. In the financial sector, artificial intelligence (AI) is finally addressing some long-standing compliance issues.

Of the $35.8 billion in projected AI expenditures across all industries in 2019, banks and other financial institutions are investing $5.6 billion. This sum will go into areas such as prevention systems, fraud analysis, investigations, and automated threat intelligence. Alongside retail, manufacturing, and healthcare, banking is among the top spenders on AI.

This investment isn't without merit, either: the McKinsey Global Institute estimates that the financial sector could generate more than $250 billion in value over the coming years as a result of improved decision making, better risk management, and personalized services. Despite these projections, many financial firms remain cautious about implementing AI. But those that want a competitive advantage need to overcome this instinct and take advantage of what artificial intelligence has to offer.

Improving the Sales Process

When it comes to lead handling and distribution, most banks employ a "round robin"-type system in which every loan officer is assigned an equal number of leads, in circular order and without any prioritization. But NBKC Bank, a midsized financial institution based in Kansas, introduced AI into the process.

They realized that some loan officers performed better in the morning while others performed better in the evening. To that end, they implemented a platform that distributes leads based on the officers' peak efficiency times. A quarter of leads are still assigned randomly; the rest are assigned by this intelligent system, which also takes individual workloads into account so that everyone receives an equal number. With this approach, NBKC Bank improved its loan officers' performance by 65% and its closing rates by 10 to 15%.
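To make the mechanics concrete, here is a minimal sketch of what such a scheduler might look like. The officer names, peak-hour windows, and routing logic are illustrative assumptions, not NBKC's actual platform, which would learn peak times from historical performance data.

```python
# Illustrative sketch of peak-time lead routing (not NBKC's actual system).
# The 25% random share and workload balancing mirror the description above.
import random
from collections import Counter

OFFICERS = {
    "alice": range(8, 12),   # hypothetical morning performer
    "bob": range(16, 20),    # hypothetical evening performer
}

assigned = Counter()  # running workload per officer

def route_lead(hour: int) -> str:
    """Assign a lead: 25% at random, otherwise to an at-peak officer,
    always preferring whoever has the lightest current workload."""
    if random.random() < 0.25:
        pool = list(OFFICERS)
    else:
        pool = [o for o, peak in OFFICERS.items() if hour in peak] or list(OFFICERS)
    officer = min(pool, key=lambda o: assigned[o])  # keep workloads balanced
    assigned[officer] += 1
    return officer

print(route_lead(9))   # usually "alice" during her peak hours
```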

Better Risk Analysis

Financial institutions have used statistical models to evaluate risk for some time. The most significant difference today is that such algorithms are used far more extensively than in the past, and the amount and variety of available data have grown considerably. Taken together, and coupled with the introduction of AI and machine learning (ML), these developments can solve many long-standing problems.

Fraud analysis is one such example. Using AI, banks and other financial institutions can spot fraud faster by detecting unusual activity in real time. Similarly, AI can detect and filter out fraudulent or otherwise high-risk applications, so agents only have to review those that make it past the system, significantly increasing their overall effectiveness.
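As an illustration of the underlying technique, here is a hedged sketch of real-time anomaly flagging using unsupervised learning (scikit-learn's IsolationForest). The transaction features, data, and contamination rate are invented for the example; no bank's production system is being described.

```python
# Sketch: flag unusual transactions with an unsupervised anomaly detector,
# one common technique behind the real-time detection described above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transaction history: [amount_usd, hour_of_day]
normal = np.column_stack([rng.lognormal(3, 0.5, 1000), rng.normal(14, 3, 1000)])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

incoming = np.array([[45.0, 13.0],     # ordinary afternoon purchase
                     [9000.0, 3.0]])   # large transfer at 3 a.m.
flags = model.predict(incoming)        # -1 = anomalous, 1 = normal
for txn, flag in zip(incoming, flags):
    print(txn, "FLAG FOR REVIEW" if flag == -1 else "ok")
```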

AI can also draw on alternative sources of data, allowing banks to offer lending products to new groups of people. In the future, AI is predicted to take on even more complex tasks, such as deal organization or financial contract reviews.

Enhancing Customer Service

Sumitomo Mitsui Banking Corp (SMBC), a global financial organization, is one institution deploying AI for customer service. It uses IBM Watson, a question-answering computer system that monitors all call center conversations, automatically recognizing questions and providing operators with real-time answers.

With the introduction of Watson, the cost of each call dropped by 60 cents, which equates to over $100,000 in annual savings for the bank. The system also increased customer satisfaction by 8.4%.

SMBC also uses IBM Watson for employee-facing interactions, answering questions that staff members may have about internal operations. The AI system is also used to deal with a variety of cybersecurity issues.

Takeaway

Investing in AI should be on every financial institution's priority list going forward. Nevertheless, knowing how to navigate all implementations and compliance issues can prove to be a challenge. With Managed Solution, you can find the application that will best suit your needs. Contact us today for more information.

Meet the Tech Exec: Claire Weston, Founder and CEO, Reveal Biosciences

To download the full magazine and read the full interviews, click here.

Dr. Claire Weston is an accomplished and dedicated scientific leader with a track record of success in cancer research. She was awarded a PhD from Cambridge University in the UK and has led teams and projects focused on cancer biomarkers in both large pharma and start-up environments. Claire founded Reveal Biosciences in 2012 and has since demonstrated strong year-on-year growth. She has authored numerous peer-reviewed publications in leading journals, including Science, and is a respected member of multiple professional organizations, including the Digital Pathology Association.

Reveal Biosciences is a computational pathology company focused on tissue-based research. 

Why did you decide to explore biotechnology?

When I was a child, I went to a local science day and watched a scientist pour liquid nitrogen onto the floor. The liquid nitrogen changed from liquid to gas, something I’d never seen before, and I thought it was amazing! It really sparked my interest in science. I love biotechnology because it's at the interface of science and technology, and it solves real-world problems.

How was the idea of Reveal Biosciences born?

Several years ago I was working at a different company developing a biomarker-based test for breast cancer. As part of that test, we sent a set of 150 patient slides to three different pathologists to review and provide a diagnosis. We then compared those results to our quantitative biomarker test. What really struck me at the time was the variation in the results that we got back from the pathologists. These are all very qualified, experienced pathologists, yet they didn't agree on the results for all the different patients. This is important because the way the patients are treated is often dependent on the way that the pathologist reviews the slide. It became clear that taking a quantitative, computational approach could help provide more accurate and reproducible data to benefit patients. This became one of the driving missions of our company.

In simple words, how do you help people?

We provide data from microscope slides or pathology samples that can benefit research, clinical trials, and patients.  For example, we generate quantitative pathology data to help pharmaceutical companies develop therapeutic drugs, we use it for clinical trials to increase precision and stratify patient groups, and we're also in the process of building pathology data applications to help pathologists diagnose disease in a way that will ultimately benefit patients.


How are your services different from other similar companies in the market?

We are fairly unique in that we have a scientific team in the lab doing pathology and a computational team of data scientists and software engineers developing our AI-based platform. Our ImageDx platform includes models that generate highly quantitative data and diagnostic outputs applicable to many different diseases. The products we are working on are unique and differentiate us, but the main driver is the quantitative pathology data that we generate.

How did you marry artificial intelligence with pathology?

We've been using traditional machine learning to identify and quantify cells in images for a while, but in the last few years AI has advanced significantly. It's impressive to see how well it works on pathology images. We've made the natural evolution from more traditional machine learning into AI. Compute power is now more readily available, which means we can generate data from one patient slide in minutes rather than the days or weeks it used to take. This sea change in computational speed means the data we generate is more meaningful and relevant to routine pathology workflows.

Video: https://www.youtube.com/watch?v=apAy6ZRi11w

You are planning to use cloud-based technology to deliver accurate diagnosis and to address medical needs worldwide. How does that work?

There's a huge shortage of pathologists worldwide. Even in the US, where we have very highly qualified pathologists, we're heading for a retirement cliff, and fewer pathologists are coming through residency to maintain their numbers. This is particularly evident in rural areas, where there's a real shortage of expertise. A cloud-based approach will help address some of those problems.

I'm excited by the potential for AI in a cloud-based platform to bring advanced pathology expertise anywhere with internet access. Hospitals or pathology labs throughout the world could upload an image from a microscope slide to the cloud, where it can be analyzed to generate advanced diagnostics. Countries with limited resources can often produce a basic microscope slide but sometimes lack the ability to do more advanced diagnostics. The possibility to do so is going to revolutionize pathology and be impactful for healthcare globally. It should also benefit patients in the US by helping to lower the cost of healthcare.

How is AI impacting pathology?

The application of AI in pathology is very new. We've been developing this for a while, and we're launching the first products in the clinic for patients in 2019. We are also building more enhanced pathology models by integrating other data sources. We're finding that we can use AI to detect aspects of cancer that are not obvious just by looking down a microscope. For example, we're detecting small changes in the texture of the nucleus of cells, or small cellular changes that you wouldn't necessarily notice by eye but that can be predictive or prognostic of disease. I think this is going to be really impactful for personalized medicine.

The Intersection of AI, People, and Society - Microsoft's Role

For decades, areas such as computer vision, deep learning, speech, and natural language processing have challenged the field's top experts. Yet today, computer scientists seem to be making more progress every day in these areas, among many others.

Because of these breakthroughs, tools such as Microsoft Translator are coming to life; until recently, things like this were the stuff of fantasy and science fiction. In turn, these tools are helping many people in many ways, for example by breaking down language barriers and facilitating communication.

Just the beginning

Last September, Microsoft announced the creation of Microsoft AI and Research, a new group that brings together approximately 7,500 computer scientists, researchers and engineers from the company’s research labs and product groups such as Bing, Cortana and Azure Machine Learning.

Microsoft Research AI, a research and incubation hub within Microsoft's research organization, is focused on solving some of AI’s most difficult challenges. The team of scientists and engineers will work closely with colleagues across Microsoft’s research labs and product groups in order to tackle some of the hardest problems in AI and accelerate the integration of the latest AI advances into products and services that benefit customers and society.

A core goal of Microsoft Research AI is to reunite AI research endeavors, such as machine learning, perception and natural language processing, that have evolved over time into separate fields of research. This integrated approach will allow the development of sophisticated understandings and tools that can help people do complex, multifaceted tasks.

Conclusion

Microsoft believes AI will be even more helpful when tools can be created that combine those functions and add some of the abilities that come naturally to people, like applying our knowledge of one task to another task or having a commonsense understanding of the world around us.

As AI moves from research to product, Microsoft is maintaining their commitment to foundational, open, and collaborative research in addition to their dedication to solving society’s toughest problems in partnership with all members of society. All the while, Microsoft is actively pursuing a mission in common with us here at Managed Solution, to empower every person and organization on the planet to achieve more.

Conversations on AI


As written on news.microsoft.com
Microsoft has been investing in the promise of artificial intelligence for more than 25 years — and this vision is coming to life with new chatbot Zo, Cortana Devices SDK and Skills Kit, and expansion of intelligence tools.
“Across several industry benchmarks, our computer vision algorithms have surpassed others in the industry — even humans,” said Harry Shum, executive vice president of Microsoft’s Artificial Intelligence (AI) and Research group, at a small gathering on AI in San Francisco on Dec. 13.
“But what’s more exciting to me is that our vision progress is showing up in our products like HoloLens and with customers like Uber building apps to use these capabilities.”
When Bill Gates created Microsoft Research in 1991, he had a vision that computers would one day see, hear and understand human beings — and this notion attracted some of the best and brightest minds to the company’s labs.
Last month, Microsoft became the first in the industry to reach parity with humans in speech recognition. There's also been groundbreaking work with Skype Translator — now available in 9 languages — an example of accelerating the pipeline from research to product. With Skype Translator, Microsoft has enabled people to understand each other in real time while talking to others in all corners of the world. But what about the dream of face-to-face, real-time translation?
Using this new intelligent language and speech recognition capability, Microsoft Translator can now simultaneously translate between groups speaking multiple languages in-person, in real-time, connecting people and overcoming barriers.

Four Green Tech Predictions for 2017


Written by Rob Bernard as seen on blogs.microsoft.com
The end of the calendar year is a traditional time to reflect on the ups and downs of the past year and to think about what to expect in 2017. As Microsoft's chief environmental strategist, I am encouraged by the progress made on some environmental issues in the past year, but there is still much work to be done. Despite the increasing urgency around many environmental issues, I remain optimistic about the future.
Perhaps the most notable breakthrough this past year was that the Paris Agreement on climate change entered into force. Cities, countries and companies around the world are now focusing on how to set and execute their plans to reduce carbon emissions. We also saw growth in corporate procurement of renewable energy in 2016, both in the U.S. and around the globe, another encouraging sign. At Microsoft, we set our own goal to source 50% of our datacenter electricity from wind, solar and hydropower. At the end of this year, we're happy to report that we are on pace not only to meet that goal but also to create new financial and technology models that can further accelerate the adoption of renewable energy.
As we look towards 2017, I expect we will see both continued progress on energy and an increasing focus on non-energy areas. At Microsoft, we expect to see some shifts in approaches and investments happening across the world.

1. IoT and cloud computing will begin to transform utility energy management

Aging infrastructure is already under pressure, and the addition of more renewable energy will only compound the stress. As more clean energy comes online, along with distributed resources like electric vehicles and rooftop solar, utilities face a big challenge: how to manage a more complex network of energy-creating and energy-storing devices. 2017 will see increased investment by utilities in technology that leverages data, through IoT solutions and cloud computing, to make energy management more predictable, flexible and efficient.
In developing nations, we are seeing a different trend, but one that is also accelerated by IoT and cloud computing. In these markets, data is being used to accelerate distribution, sales and management of micro-solar grids to enable households to get both power and connectivity to the internet. 2017 should be an exciting year with even more growth as capital investments in these markets increase and solar and battery storage prices decline.

2. Water will begin to emerge as the next critical world-scale environmental challenge

Water scarcity is increasing in many areas around the world. Our oceans are under threat from pollution, acidification and warming temperatures. We are already seeing the devastating effects on iconic landmarks like the Great Barrier Reef. And these trends are putting people's food, water and livelihoods at risk. In 2017, awareness of this challenge will increase. We will begin to better understand what is happening to our water through the use of sensors and cloud computing. Our ability to leverage technologies like advanced mapping and sensors will expand our understanding of what is driving the decline of many of our critical water systems.

3. Data will increasingly be used to try to better understand our planet

Data is essential for monitoring and managing the earth's resources and fragile ecosystems. There is much we do not understand about the planet, but we see a growing number of companies and investments flowing toward tools and platforms that enable better mapping and understanding of the earth's carbon storage, airborne gases, and ecosystems and the value they provide. We expect to see data applied more proactively to create an actionable understanding of how we can better manage food, water, biodiversity and climate change.

4. Organizations and policy makers will start leveraging cloud-based technologies

This area is perhaps the most difficult to predict. While countries will begin implementing their action plans under the Paris Agreement, it is not easy to predict the methods each country will use and prioritize to make progress against its commitments. And the changes will happen not just at the national level: increasingly, we will see cities and local governments moving ahead with technology implementations to drive efficiencies and accountability, along with policy changes as well. We're already leveraging machine learning and artificial intelligence to better model and understand the potential impact of legislative changes, in addition to offering our technologies to our public sector partners as they work towards their plans. While this will likely take several years to take hold, 2017 should see an increased awareness of the role of technology in environmental policy.
While there are many challenges ahead across so many areas of sustainability, I remain optimistic. The number of people and organizations focusing on these and many other areas of sustainability ensures that we will continue to make progress in 2017. At Microsoft, we are committed to working with our customers and partners to help them achieve more through the use of our technologies. Everyone – companies, countries, individuals – has much work to do. We believe that by working together, we can help develop effective solutions to address environmental challenges that will benefit our business, our customers and the planet. And we are up to the challenge.


Historic milestone: Microsoft researchers achieve human parity in conversational speech recognition


Microsoft has made a major breakthrough in speech recognition, creating a technology that understands a conversation as well as a person does.
In a paper published Monday, a team of researchers and engineers in Microsoft Artificial Intelligence and Research reported a speech recognition system that makes the same or fewer errors than professional transcriptionists. The researchers reported a word error rate (WER) of 5.9 percent, down from the 6.3 percent WER the team reported just last month.
The 5.9 percent error rate is about equal to that of people who were asked to transcribe the same conversation, and it’s the lowest ever recorded against the industry standard Switchboard speech recognition task.
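For readers unfamiliar with the metric: WER counts the word substitutions, deletions, and insertions needed to turn the system's transcript into the reference transcript, divided by the number of words in the reference. A minimal sketch of that computation (real evaluations use dedicated scoring tools, but the arithmetic is the same):

```python
# Word error rate (WER), the metric behind the 5.9 percent figure:
# WER = (substitutions + deletions + insertions) / words in the reference.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i ref words and first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion: ~0.167
```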

Newly updated Microsoft Cognitive Toolkit can help speed advances in deep learning

 


By Athima Chansanchai as written on blogs.microsoft.com
Microsoft has released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and NVIDIA GPUs.
The toolkit, previously known as CNTK, was initially developed by computer scientists at Microsoft who wanted a tool to do their own research more quickly and effectively. It quickly moved beyond speech and morphed into an offering that customers, including a leading international appliance maker and Microsoft’s flagship product groups, depend on for a wide variety of deep learning tasks.
“We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of the Microsoft Cognitive Toolkit.
With the latest version of the toolkit, which is available on GitHub, developers can work with it using the Python or C++ programming languages. With the new version, researchers can also do a type of artificial intelligence work called reinforcement learning.
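For a sense of what that looks like in practice, here is a minimal sketch of defining and training a tiny network, assuming the CNTK 2.x Python API and synthetic data; it is an illustration, not an example drawn from the article itself.

```python
# Minimal sketch using the Microsoft Cognitive Toolkit (CNTK) 2.x Python API.
# Assumes `pip install cntk`; function names are from the 2.x releases.
import cntk as C
import numpy as np

features = C.input_variable(2)
labels = C.input_variable(1)

# Two-layer feed-forward network built from the layers library.
model = C.layers.Sequential([
    C.layers.Dense(4, activation=C.relu),
    C.layers.Dense(1),
])(features)

loss = C.squared_error(model, labels)
learner = C.sgd(model.parameters, lr=C.learning_parameter_schedule(0.1))
trainer = C.Trainer(model, (loss, loss), [learner])

# Train on one toy minibatch: learn y = x1 + x2.
x = np.random.rand(16, 2).astype(np.float32)
y = x.sum(axis=1, keepdims=True)
trainer.train_minibatch({features: x, labels: y})
print(trainer.previous_minibatch_loss_average)
```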

Using artificial intelligence to create invisible UI


By Martin Legowiecki as written on techcrunch.com
Interaction with the world around us should be as easy as walking into your favorite bar and getting your favorite drink in hand before your butt hits the bar stool. The bartender knows you, knows exactly what drink you like and knows you just walked through the door. That’s a lot of interaction, without any “interaction.”
We’re redefining how we interact with machines and how they interact with us. Advances in AI help make new human-to-machine and machine-to-human interaction possible. Traditional interfaces get simplified, abstracted, hidden — they become ambient, part of everything. The ultimate UI is no UI.
Everyone’s getting in the game, but few have cracked the code. We must fundamentally change the way we think.

Cross-train your team

Our roles as technologists, UX designers, copywriters and designers have to change. What and how we build — scrolling pages, buttons, taps and clicks — is based on aging concepts. These concepts are familiar, proven and will remain useful. But we need a new user interaction model for devices that listen, “feel” and talk to us.
Technologists need to become more like UX designers and vice versa. They must work much more closely together and mix their roles, at least until standards, best practices and new tools are established.

No decision trees

The bartender from the example above is where more of the UI is starting to reside. On one hand, that represents a lot more responsibility to create transparent experiences that tend to be based on hidden rules and algorithms. On the other, it gives us incredible latitude to create open-ended experiences in which only important and viable information is presented to the user.
For example, to command our AI assistant, “Tell my wife I am going to be late,” the system needs to be smart enough not only to understand the intent, but also to know who the wife is and the best way to contact her. No extraneous information is necessary, no option list, no follow-up questions. We call this Minimum Viable Interaction (MVI).
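A toy sketch of that idea, with a made-up contact book and a regular expression standing in for a real natural-language-understanding model:

```python
# Toy sketch of "Minimum Viable Interaction": resolve intent and entities
# without follow-up questions. The patterns and contact data are invented
# for illustration; a real assistant would use a trained NLU model.
import re

CONTACTS = {"wife": {"name": "Dana", "preferred_channel": "sms"}}  # hypothetical

def handle(utterance: str) -> str:
    m = re.search(r"tell my (\w+) (?:that )?(.+)", utterance, re.IGNORECASE)
    if not m:
        return "Sorry, I didn't catch that."
    relation, message = m.groups()
    contact = CONTACTS.get(relation.lower())
    if contact is None:
        return f"I don't know who your {relation} is."
    # One interaction, no option lists, no follow-ups:
    return f"Sending {contact['preferred_channel']} to {contact['name']}: {message!r}"

print(handle("Tell my wife I am going to be late"))
```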

Your interface is showing

We’ve started talking to our machines — not with commands, menus and quirky key combinations — but using our own human language. Natural language processing has seen incredible advances, and we finally don’t need to be a machine to talk to one. We chat with the latest chatbots, search using Google Voice or talk to Siri. Speech recognition accuracy has improved to an incredible 96 percent.


The last few percentage points might not seem like a lot, but it’s what makes or breaks the perfect experience. Imagine a system that can recognize what anyone says 100 percent of the time, no matter how we say things (whether you have an accent, pause between words or say a bunch of inevitable “uhhs” and “umms”). Swap a tap or a click for the Amazon Echo’s far-field recognition, and the UI melts away. It becomes invisible, ubiquitous and natural.
We aren’t there yet. For now, we can devise smart ways of disguising the capability gap. A lot of time goes into creating programming logic and clever responses to make the machine seem smarter than it really is. Make one mistake where the UI shows and the illusion will break.

Contextual awareness

The system needs to know more about us for invisible UI to become reality. Contextual awareness today is somewhat limited. For example, when asking for directions via Google Maps, the system knows your location and will return a different result if you are in New York versus California.
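A tiny illustration of that point: the same underspecified query resolves differently depending on device context. The function and data here are hypothetical, not any real mapping API.

```python
# Toy illustration of contextual awareness: context supplied by the device
# disambiguates an otherwise underspecified request. All names are made up.
CONTEXT = {"location": "New York, NY"}  # e.g., supplied by the phone's GPS

def directions(query: str, context: dict) -> str:
    # A real assistant would geocode and compute a route; this only shows
    # how context changes the interpretation of the same words.
    return f"Directions to {query}, starting from {context['location']}"

print(directions("Springfield", CONTEXT))
# A user in California would be routed to a different Springfield entirely.
```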
Our phones, watches and other mobile devices are loaded with a ton of sensors. They make us humans the cheap sensors machines need today. We gather the knowledge and data that the system needs to do its work.
But even with all the sensors and data, the machine needs to know more about us and what is going on in our world in order to create the experiences we really need. One solution is combining the power of multiple devices/sensors to gather more information. But this usually narrows down and limits the user base — not an easy thing to sell to a client. You have to think on your feet. Change, tweak, iterate. This space is way too dynamic to be married to an original creative concept.
What wasn’t possible just yesterday is becoming mainstream today as we develop new experiences, explore new tech, topple old paradigms and continue to adapt.
