Understanding cloud forests through the power of cloud computing


The Brazilian Cloud Forest Sensing Project is studying how cloud forests function in response to climatic variability. The project deployed more than 700 Internet-enabled sensors connected to Microsoft Azure and is gathering integrated data on physical and biological processes within the study site. Through its partnership with Microsoft Research, the Brazilian Cloud Forest Sensing Project has created a repeatable Internet-of-Things (IoT) solution that revolutionizes how research can benefit from a wireless sensor network, cloud technology, and automated data stream processing.
Researching cloud forests at work
Brazil is one of the most forested countries in the world. More than 60 percent of Brazil is covered by forest, including many cloud forests—moist forests characterized by persistent low cloud cover. Cloud forests help provide clean water because their trees intercept water from clouds. That water then drips onto the soil and feeds rivers, lakes, and irrigation systems, even during periods of low rainfall.
The Brazilian Cloud Forest Sensing Project is an initiative of the São Paulo Research Foundation (FAPESP) Biodiversity Research Program, supported by the Microsoft Research-FAPESP Joint Research Center. Rafael Oliveira, a professor of Ecology from the University of Campinas (Unicamp), conducts research for the project in the cloud forests of Campos do Jordão, Brazil.
The goal of Oliveira’s investigation is to understand how cloud forests work and then measure the impact of microclimatic variation on several ecosystem processes. The research project focuses on a fragmented forest—which most forests in the world are—in contrast to a continuous forest such as the ones found in the Amazon. Most fragmented forests are in proximity to urban areas and are critical to the water supply of those communities.
Managing sensor data in the cloud
Collaborators from the Brazilian Cloud Forest Sensing Project partnered with Microsoft Research to define research questions and develop software to analyze streams of data, which the Sensing Project team gathered every 15 minutes from a unified ensemble of more than 700 sensors on plants, in soil, and above tree canopies throughout the forest.
To manage and process high volumes of complex data, the Sensing Project team uses Microsoft Azure to store, process, and visualize the data coming in from the sensors. The sensors themselves are connected to Azure as an instance of the Internet of Things: devices embedded in the physical world sending data to the cloud and changing their behavior based on the directives assigned to them.
The sensors are networked to and communicate directly with the Microsoft Azure cloud platform. Because researchers have a constant stream of real-time data, they can quickly observe what's happening and, if necessary, remotely change how the sensors collect data. This ability to make fast adjustments provides the researchers with high-fidelity data for specific time periods.
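To make that data flow concrete, here is a minimal sketch of how a field sensor might push a reading to a cloud ingestion endpoint every 15 minutes and pick up new collection directives in the reply. The endpoint URL, sensor identifier, and payload fields are hypothetical placeholders, not the project's actual schema.

import json
import time
from datetime import datetime, timezone

import requests

INGEST_URL = "https://example-ingest.azurewebsites.net/api/readings"  # hypothetical endpoint
SENSOR_ID = "canopy-017"  # hypothetical sensor identifier


def read_sensor():
    """Stand-in for reading the physical instrument (soil moisture, air temperature, etc.)."""
    return {"soil_moisture": 0.31, "air_temp_c": 14.2, "leaf_wetness": 0.87}


while True:
    payload = {
        "sensor_id": SENSOR_ID,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "values": read_sensor(),
    }
    response = requests.post(
        INGEST_URL,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    # The cloud side can reply with new directives (for example, a faster sampling
    # rate), which is how researchers remotely change how the sensors collect data.
    directives = response.json().get("directives", {})
    time.sleep(directives.get("interval_seconds", 15 * 60))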
Providing a model for future research
Using a network of 700 sensors to study a forest was a new concept, and the team had to determine all the pertinent details, such as how to manage the sensors, which sensors were needed, and what kinds of data to measure. The researchers at the Sensing Project have created a repeatable technical solution, and researchers worldwide will be able to learn from it and apply these practices to their own sensor-based studies. The methodologies that are being developed will help Brazil’s investigations and other global research projects.


SQL Server 10% Cash Back partner incentive ends June 30, 2016

Just a few weeks are left to take advantage of the cash back incentive for sales of SQL Server Standard and SQL Server Premium Open licenses to SMB Commercial customers, which runs from March 1, 2016 through June 30, 2016. Incentives will only be paid on sales that occur after you successfully register, and the program is limited to the first 400 partners to apply, so register before you sell and don't leave money on the table. WW Open Premier partners are not eligible for this incentive.

Key dates to be aware of:

March 1, 2016: Incentive start date and portal live date for registration
June 30, 2016: Incentive end date
August 2016: Payouts begin, approximately two months after the end of the promotion

MAX PAYOUTS ON FUNDING DOLLARS
SPRINT TO THE FINISH TO GET THE MOST MICROSOFT FUNDING BEFORE THE DEADLINE

Review eligibility for offers. We are ready to help you through the technical stages necessary to execute and to ensure the service is set up successfully.


 

Managed Solution is a full-service technology firm that empowers businesses by delivering, maintaining, and forecasting the technologies they’ll need to stay competitive in their marketplace. Founded in 2002, the company quickly grew into a market leader and is recognized as one of the fastest-growing IT companies in Southern California.

We specialize in providing full Microsoft Solutions to businesses of every size, industry, and need.


Microsoft research project puts cloud in ocean for the first time

By Athima Chansanchai as written on news.microsoft.com


In 2015, starfish, octopus, crabs and other Pacific Ocean life stumbled upon a temporary addition to the seafloor, more than half a mile from the shoreline: a 38,000-pound container. At 10 feet by 7 feet, it is quite small by ocean standards. The shrimp exploring the seafloor made more noise than the datacenter inside the container, which consumed computing power equivalent to 300 desktop PCs.
But the knowledge gained from the three months this vessel was underwater could help make future datacenters more sustainable, while at the same time speeding data transmission and cloud deployment. And yes, maybe even someday, datacenters could become commonplace in seas around the world.
The technology to put sealed vessels underwater with computers inside isn’t new. In fact, it was one Microsoft employee’s experience serving on submarines that carry sophisticated equipment that got the ball rolling on this project. But Microsoft researchers do believe this is the first time a datacenter has been deployed below the ocean’s surface. Going under water could solve several problems by introducing a new power source, greatly reducing cooling costs, closing the distance to connected populations and making it easier and faster to set up datacenters.

A little background gives context for what led to the creation of the vessel. Datacenters are the backbone of cloud computing, and contain groups of networked computers that require a lot of power for all kinds of tasks: storing, processing and/or distributing massive amounts of information. The electricity that powers datacenters can be generated from renewable power sources such as wind and solar, or, in this case, perhaps wave or tidal power. When datacenters are closer to where people live and work, there is less “latency,” which means that downloads, Web browsing and games are all faster. With more and more organizations relying on the cloud, the demand for datacenters is higher than ever – as is the cost to build and maintain them.
All this combines to form the type of challenge that appeals to Microsoft Research teams who are experts at exploring out-of-the-box solutions.
Ben Cutler, the project manager who led the team behind this experiment, dubbed Project Natick, is part of a group within Microsoft Research that focuses on special projects. “We take a big whack at big problems, on a short-term basis. We take a look at something from a new angle, a different perspective, with a willingness to challenge conventional wisdom.” So when a paper about putting datacenters in the water landed in front of Norm Whitaker, who heads special projects for Microsoft Research NExT, it caught his eye.
“We’re a small group, and we look at moonshot projects,” Whitaker says. The paper came out of ThinkWeek, an event that encourages employees to share ideas that could be transformative to the company. “As we started exploring the space, it started to make more and more sense. We had a mind-bending challenge, but also a chance to push boundaries.”
One of the paper’s authors, Sean James, had served in the Navy for three years on submarines. “I had no idea how receptive people would be to the idea. It’s blown me away,” says James, who has worked on Microsoft datacenters for the past 15 years, from cabling and racking servers to his current role as senior research program manager for the Datacenter Advanced Development team within Microsoft Cloud Infrastructure & Operations. “What helped me bridge the gap between datacenters and underwater is that I’d seen how you can put sophisticated electronics under water, and keep it shielded from salt water. It goes through a very rigorous testing and design process. So I knew there was a way to do that.”
James recalled the century-old history of cables in oceans, evolving to today’s fiber optics found all over the world.
“When I see all of that, I see a real opportunity that this could work,” James says. “In my experience the trick to innovating is not coming up with something brand new, but connecting things we’ve never connected before, pairing different technology together.”
Building on James’s original idea, Whitaker and Cutler went about connecting the dots.


Cutler’s small team applied science and engineering to the concept. A big challenge involved people. People keep datacenters running. But people take up space. They need oxygen, a comfortable environment and light. They need to go home at the end of the day. When they’re involved, you have to think about things like landscaping and security.
So the team moved to the idea of a “lights out” situation. A very simple place to house the datacenter, very compact and completely self-sustaining. And again, drawing from the submarine example, they chose a round container. “Nature attacks edges and sharp angles, and it’s the best shape for resisting pressure,” Cutler says. That set the team down the path of trying to figure out how to make a datacenter that didn’t need constant, hands-on supervision.
This initial test vessel wouldn’t be too far off-shore, so they could hook into an existing electrical grid, but being in the water raised an entirely new possibility: using the hydrokinetic energy from waves or tides for computing power. This could make datacenters work independently of existing energy sources, located closer to coastal cities, powered by renewable ocean energy.
That’s one of the big advantages of the underwater datacenter scheme – reducing latency by closing the distance to populations and thereby speeding data transmission. Half of the world’s population, Cutler says, lives within 120 miles of the sea, which makes it an appealing option.
This project also shows it’s possible to deploy datacenters faster, turning deployment from a construction project – which requires permits and other time-consuming steps – into a manufacturing one. Building the vessel that housed the experimental datacenter took only 90 days. While every datacenter on land is different and needs to be tailored to varying environments and terrains, these underwater containers could be mass-produced for the very similar conditions found underwater, where the water is consistently colder at depth.


Cooling is an important aspect of datacenters, which normally run up substantial costs operating chiller plants and the like to keep the computers inside from overheating. The cold environment of the deep seas automatically makes datacenters less costly and more energy efficient.
Once the vessel was submerged last August, the researchers monitored the container from their offices in Building 99 on Microsoft’s Redmond campus. Using cameras and other sensors, they recorded data like temperature, humidity, the amount of power being used for the system, even the speed of the current.
“The bottom line is that in one day this thing was deployed, hooked up and running. Then everyone is back here, controlling it remotely,” Whitaker says. “A wild ocean adventure turned out to be a regular day at the office.”
A diver would go down once a month to check on the vessel, but otherwise the team was able to stay constantly connected to it remotely – even after they observed a small tsunami wave pass.
The team is still analyzing data from the experiment, but so far, the results are promising.
“This is speculative technology, in the sense that if it turns out to be a good idea, it will instantly change the economics of this business,” says Whitaker. “There are lots of moving parts, lots of planning that goes into this. This is more a tool that we can make available to datacenter partners. In a difficult situation, they could turn to this and use it.”


Christian Belady, general manager for datacenter strategy, planning and development at Microsoft, shares the notion that this kind of project is valuable for the research gained during the experiment. It will yield results, even if underwater datacenters don’t start rolling off assembly lines anytime soon.
“At first I was skeptical, and I had a lot of questions. What were the costs? How do we power it? How do we connect it? But at the end of the day, I enjoy seeing people push limits,” Belady says. “The reality is that we always need to be pushing limits and trying things out. The learnings we get from this are invaluable and will in some way manifest into future designs.”
Belady, who came to Microsoft from HP in 2007, is always focused on driving efficiency in datacenters – it’s a deep passion for him. It takes a couple of years to develop a datacenter, but it’s a business that changes hourly, he says, with demands that change daily.
“You have to predict two years in advance what’s going to happen in the business,” he says.
Belady’s team has succeeded in making datacenters more efficient than they’ve ever been. He founded an industry metric, power usage effectiveness (PUE), and in that regard, Microsoft is leading the industry. Datacenters are also using next-generation fuel cells – something James helped develop – and wind power projects like Keechi in Texas to improve sustainability through alternative power sources. Datacenters have also evolved to save energy by using outside air instead of refrigeration systems to control temperatures inside. Water consumption has also gone down over the years.
Belady, who says he “loved” this project, says he can see its potential as a solution for latency and quick deployments.
“But what was really interesting to me, what really surprised me, was to see how animal life was starting to inhabit the system,” Belady says. “No one really thought about that.”
Whitaker found it “really edifying” to see the sea life crawling on the vessel, and how quickly it became part of the environment.
“You think it might disrupt the ecosystem, but really, it’s just a tiny drop in an ocean of activity,” he says.
The team is currently planning the project’s next phase, which could include a vessel four times the size of the current container with as much as 20 times the compute power. The team is also evaluating test sites for the vessel, which could be in the water for at least a year, deployed with a renewable ocean energy source.
Meanwhile, the initial vessel is now back on land, sitting in the lot of one of Microsoft’s buildings. But it’s the gift that keeps giving.
“We’re learning how to reconfigure firmware and drivers for disk drives, to get longer life out of them. We’re managing power, learning more about using less. These lessons will translate to better ways to operate our datacenters. Even if we never do this on a bigger scale, we’re learning so many lessons,” says Peter Lee, corporate vice president of Microsoft Research NExT. “One of the things that’s so fun about a CEO like Satya Nadella is that he’s hard-nosed, business savvy, customer obsessed, but the other half of his brain is a dreamer who loves moonshots. When I see something like Natick, you could say it’s a moonshot, but not one completely divorced from Microsoft’s core business. I’m really tickled by it. It really perfectly fits the left brain/right brain combination we have right now in the company.”


Donuts and Data: Celebrating National Donut Day and SQL Server 2016

Happy National Donut Day! People all over the country are celebrating this delicious day by eating a variety of donuts.  Managed Solution joined in on the fun by providing donuts for the office, including cream filled, old fashioned, and even some donuts with colorful sprinkles. What's a sweet treat that won't make you run on the treadmill all night? Data made easy with the new SQL Server 2016.
SQL Server 2016 is much more than a database – it is the data management and business analytics platform for intelligent applications that can tackle any data project for any application. SQL Server 2016 is the only database that is born in the cloud. It is the summation of capabilities we released first in Microsoft Azure, where they are already being used in millions of production databases.
Watch this video to learn more about how SQL Server 2016 is transforming businesses with data:

Now that you've learned a little bit more about SQL Server 2016 and data, it's time to treat yourself to a donut - after all it is a holiday!

Managed Solution’s team has the experience and expertise to architect SQL database and reporting systems tailored for your environment. Contact us for more information: 800-307-0296.



All that RaaS: saving lives and transforming healthcare economics

Stuart, a 66-year-old man with diabetes, felt lousy—constantly fatigued, nauseated, and short of breath after just the slightest exertion. His daughter, worried by his increasing frailty, took him to the emergency room at the local hospital. Her concern was amply justified: Stuart was suffering from heart failure. Like 5.1 million other Americans each year who suffer from heart failure, he was admitted to the hospital to treat this serious, often life-threatening condition. The caring medical team stabilized his condition, and Stuart left the hospital after 10 days with words of advice and a few medications, glad to be home. Within a month he was back, once again fatigued, and facing a second episode.


Stuart’s story is far from rare. Hospital readmissions for chronic conditions such as diabetes, chronic obstructive pulmonary disease (COPD), and congestive heart failure (CHF) are both common and very costly. Studies conducted in the United States indicate that nearly 20% of Medicare patients who are hospitalized for chronic conditions are readmitted within 30 days. Experts at Edifecs estimate that these readmissions cost Medicare—and US taxpayers—about $26 billion a year, and that a large majority of them are considered avoidable with accurate prioritization and personalized care protocols. Readmission-related costs have become so onerous that the Affordable Care Act includes financial rewards and penalties to deal with the readmission problem. Hospitals that reduce their readmission rates receive financial incentives; those that do not lose reimbursement and are penalized.

Holistic tools that can reliably predict heart-failure readmissions—taking into account all aspects of each patient’s condition and risk factors—would significantly help patients and hospitals. The growth in the use of electronic patient records has recently offered the potential for such analysis, but little had been done to harness the collective intelligence contained in hospital patient records augmented with other data sources.
By introducing cloud computing technology and applying some of the latest advances in machine learning techniques, researchers are rapidly changing this situation.
One leading example of this is RaaS (Readmission Score as a Service), a platform that was developed by the University of Washington (UW) Tacoma’s Center for Data Science. RaaS compares a patient’s medical information to a database of heart-failure outcomes, using advanced machine learning techniques to arrive at a risk-of-readmission factor as well as corresponding actionable guidelines for the patient-provider team. Those patients identified with a high risk receive additional treatment: the goal is to reduce their likelihood of readmission and produce overall healthier outcomes across all stages of the patient care continuum.
RaaS’s hundreds of machine learning models are developed using both the R language and Microsoft Azure Machine Learning. This chronic care management predictive platform relies on historical patient data from multiple sources, including anonymized electronic medical records, claims, labs, medications, and psycho-social factors, all labeled with observed outcomes. The machine learning models access and share this data in sync to provide continuous monitoring and personalized patient alerts.
RaaS is available as an on-premises service as well as via the cloud by using Azure Machine Learning web services and the Azure-based Zementis Adapa scoring engine to make predictions for patients. When deployed using Azure Cloud Services, RaaS performs data preparation at scale.
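As a concrete illustration of the scoring flow, the sketch below sends one patient's features to a deployed risk-scoring web service, such as an Azure Machine Learning web service, and reads back a readmission-risk score. The endpoint URL, API key, request schema, and feature names are hypothetical placeholders; the real RaaS platform defines its own inputs.

import json

import requests

SCORING_URL = "https://example-workspace.azureml.net/score"  # hypothetical endpoint
API_KEY = "<api-key>"  # hypothetical key

# Hypothetical feature vector for one patient; real RaaS inputs span EMR,
# claims, labs, medications, and psycho-social factors.
patient_features = {
    "age": 66,
    "primary_diagnosis": "CHF",
    "prior_admissions_12mo": 2,
    "ejection_fraction": 35,
    "lives_alone": True,
}

response = requests.post(
    SCORING_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    data=json.dumps({"Inputs": {"input1": [patient_features]}}),
    timeout=30,
)
response.raise_for_status()
print("Predicted 30-day readmission risk:", response.json())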
The UW Center for Data Science team began developing initial models in collaboration with MultiCare Health System in March 2012, using just two on-premises servers. The maintenance, frequent updates, and down times of these on-premises servers posed an ongoing problem, and scalability issues limited the scope of the project by affecting the speed of data exploration and machine learning.
About a year and a half ago, the team applied for and was awarded an Azure for Research grant, taking advantage of the Microsoft Research program that offers training and awards of computing resources to qualified institutions that use the cloud to advance scientific discovery. The award enabled the Center for Data Science team to scale up the project and create a robust prediction engine that generates a readmission risk factor score for patients at every stage of their hospital care: post-admission, pre-discharge, and post-discharge.
The RaaS platform at MultiCare Health enables the care management team to view an electronic dashboard that shows heart-failure patients’ risks of readmission. UW Medicine Cardiology is now collaborating with the Center for Data Science team to study the efficacy of predictive models for augmenting care management guidelines by using machine learning.
—Daron Green, Deputy Managing Director, Microsoft Research
—Gregory Wood, MD, UW Medicine Cardiology

Understanding ocean chemistry through the power of cloud computing


Ocean temperatures and chemistry are changing dramatically and posing a risk to certain life forms, including shellfish such as oysters grown and harvested in Washington state. Microsoft Research teamed up with University of Washington scientists to take data from a complex modeling system run on supercomputers and bring it to the cloud. Soon, it will be widely available to help predict growing conditions for the shellfish industry and may help other industries adapt to ocean changes.
Fluctuations in ocean chemistry driven by climate change have been linked to die-offs of baby oysters along the Northeast Pacific Ocean coast. The gradual change known as ocean acidification is making it difficult for shellfish, corals, sea urchins, and other creatures to form their calcium-based shells and other structures. The Northwest region’s thriving oyster hatcheries were struck by high mortality rates. While climate is changing worldwide, the Northwest is particularly vulnerable to ocean acidification because of the upwelling of colder, more acidic water into the bays and estuaries of Washington, Oregon, and British Columbia.
Modeling complex currents and chemistry
The Washington state legislature asked the University of Washington to study ocean acidification through the Washington Ocean Acidification Center (WOAC). Microsoft Research joined with Parker MacCready, professor of physical oceanography, to bring complex information from a variety of data sources into a system called LiveOcean, which would provide a model of currents and chemistry and predict a few days into the future.
Just like a numerical weather forecast model, LiveOcean will soon provide a forecast that predicts the acidity of water in a specific bay or other coast region three to seven days in advance. Bill Dewey, Director of Public Affairs for Taylor Shellfish, needs that prediction to know when and where it is safe to plant oyster larvae and raise juvenile oysters.
Estimates are that the West Coast oyster industry generates 3,000 jobs and makes an annual economic impact of about $207 million.
More than 30 percent of Puget Sound’s marine species are vulnerable to ocean acidification by virtue of their dependence on the mineral calcium carbonate to make shells, skeletons and other body parts.
A baby oyster uses carbonate ions in the water to make its first shell. If the water is too acidic, the baby oyster uses too much energy and dies. Taylor Shellfish has hatcheries for the baby oysters and separate “planting” beds where young oysters are moved to continue growing. Forecasts would help Taylor Shellfish know the safest planting locations.
Dewey remembers hearing the phrase “ocean acidification” for the first time in 2007, with other industry leaders at a meeting, and knows the extreme challenges from ocean acidification. Besides working for Taylor Shellfish, Dewey also farms his own shellfish beds and has a personal stake in the challenge.
Making a model open to everyone
MacCready is the scientist leading the LiveOcean team that stepped up to help the shellfish industry, but he also looks forward to other scientists and industries drawing their own insights from the same model. The LiveOcean design is built for open use by others.
MacCready was a visiting scientist who spent four months at Microsoft Research in Redmond, Washington, collaborating with scientists there, including Rob Fatland. Fatland is now the Director of Cloud and Data Solutions at the University of Washington.
MacCready’s team used Azure tools to draw data from a large model, the Regional Ocean Modeling System (ROMS), run on a high-performance computing cluster. They push data out of Azure with Python and then write scripts for websites. Using the cloud is “the way of the future” for complex systems like this one, he said. “It gives the ability to create and use different resources without having to go out and buy hardware yourself.”
MacCready and Microsoft researchers built a forecast system open to everyone. Using the cloud, a non-scientist will be able to reach into ROMS forecast data and pull out information through a smartphone, laptop, tablet, or other device. The Azure component uses Python and the Django web framework to provide these forecasts in an easy-to-consume format. To produce the forecasts, the LiveOcean system relies on other sources: US Geological Survey data for river flow, atmospheric forecasts, and another ocean model called HYCOM.
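As a rough illustration of the kinds of data pulls involved, the sketch below fetches recent river-flow observations from the public USGS water services API and then queries a hypothetical LiveOcean-style forecast endpoint. The LiveOcean URL, parameters, and response fields are illustrative placeholders only, and the USGS query should be checked against current USGS documentation.

import requests

# Recent river discharge (USGS parameter 00060) for one gauge site over the
# last seven days; the site number here is only an example.
usgs = requests.get(
    "https://waterservices.usgs.gov/nwis/iv/",
    params={"format": "json", "sites": "12200500",
            "parameterCd": "00060", "period": "P7D"},
    timeout=30,
)
usgs.raise_for_status()
points = usgs.json()["value"]["timeSeries"][0]["values"][0]["value"]
print("Latest discharge reading:", points[-1]["dateTime"], points[-1]["value"])

# Hypothetical LiveOcean-style request: acidity forecast for one bay, three
# days ahead. The endpoint and parameters are placeholders, not a real API.
forecast = requests.get(
    "https://example-liveocean.azurewebsites.net/api/forecast",
    params={"region": "willapa_bay", "variable": "pH", "days": 3},
    timeout=30,
)
forecast.raise_for_status()
print(forecast.json())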


One crucial element of LiveOcean is the careful validation of its results. MacCready and others have spent years validating the modeling system by direct observation from physical instruments paired to predictions.
Policy and public understanding
Predictions will be vital for both policy-makers and scientists, according to Dewey. The impact of discoveries from the model could be vast. It will show a bigger and better picture to every part of society as decisions loom due to challenges from climate change. Beyond the industries and legislators, MacCready also sees LiveOcean as providing a new and important window on the coastal ocean globally as scientists and others adopt the model and begin to use it.
Dewey sees the model waking people up to changes. “The ocean acidification issue has really come to light here on the Pacific, where we have these upwelling events. But we are the tip of the spear for this. It has woken up the industry across the country,” he said.

Marketing agency improves technology, saves $87,000 with cloud-based telephony

For BDSmktg, its field staff is the core of its business, with only a small percentage of employees at headquarters. BDSmktg is using Skype for Business Online in Microsoft Office 365 to knit these two groups more closely together, accelerate business, and save bundles of money. With Skype for Business Online, BDSmktg will save US$87,000 annually in personal phone charge reimbursements, audio conferencing fees, and PBX maintenance, and avoid the need to spend $250,000 on a new PBX system.

Flustered by phones

James Metcalfe never imagined that the most troublesome technology in his company would be the most mundane: phones.
James Metcalfe is Director of IT Network Infrastructure for BDSmktg, an agency that provides retail marketing services for world-class brands by representing their products and services in stores. The Irvine, California–based agency provides thousands of representatives each year to some of the biggest names in retail.
James Metcalfe had already outfitted several hundred of the agency’s full-time employees with Microsoft Office 365 to give them anytime, anywhere, any-device access to email, document storage, document sharing, and web conferencing. Employees used the latest PCs, laptops, tablets, and smartphones.
But old-fashioned phone communications posed a growing problem. Only a small percentage of BDSmktg employees work at the Irvine headquarters, while thousands work in the field—from home or on the road—because their jobs require that they be near the stores they service.
A significant portion of the company’s large recruiting team and extensive field staff used their personal phones to conduct business, and BDSmktg reimbursed them for the charges. But this was expensive and problematic. When job candidates returned calls to recruiters, they could end up talking to a recruiter’s family member. Or, if recruiters or field operations managers left BDSmktg and went to work for a competitor, they took job candidates’ phone numbers with them.
“There were delays in tracking down phone numbers to reach colleagues, which slowed down the business,” Metcalfe says.
In the Irvine office, the company’s private branch exchange (PBX) system was old, out of date, and hemorrhaging money. “Every time we had budget talks, the PBX system came up, but sticker shock ended the discussion,” Metcalfe says. “The timing was never right to make the large investment to replace or upgrade it.”

One way to connect everyone

In late 2015, BDSmktg asked to be part of a Microsoft early adopter program for a new version of Skype for Business Online (part of Microsoft Office 365) that included significant telephony enhancements. Cloud PBX and PSTN Calling provide software-based PBX functionality with a bank of Voice over Internet Protocol (VoIP) phone numbers. PSTN Conferencing allows people invited to a Skype for Business Online meeting to join by dialing in over a landline or mobile phone (rather than the Internet).
BDSmktg gave Skype for Business Online to about 300 of its employees, and adoption was instant and enthusiastic. “We’ve been using Lync Online for years, so our staff already had experience with chat, screen share, and video and web conferencing,” Metcalfe says. “Adding PSTN Conferencing and PSTN Calling just makes communications even simpler. With Skype for Business Online, we have one way to connect everyone, wherever they are, whatever device they’re using, and whether they’re connected to the Internet or not.”

More professional, more accountable

Today, BDSmktg employees who work from home have an assigned Skype for Business Online phone number that they use for work calls; no more giving out personal phone numbers. When an employee leaves BDSmktg, there’s no longer the worry that a personal phone number is a contact’s only link to the company. BDSmktg simply reassigns the Skype for Business Online phone number to a new employee, maintaining continuity with client and job candidate communications.
“With PSTN Calling, we can track every inbound and outbound call, see the number called, and the duration of the call,” Metcalfe says. “We have much better accountability around a critical part of our business.”

Work effectively from anywhere

Employees working from home now feel better connected to the company because they can connect quickly with colleagues. “We’re able to provide more seamless communication for our employees who work from home,” Metcalfe says. “People are blown away by the quality of the HD Voice in Skype for Business Online. They don’t want to go back to regular phones.”
BDSmktg management likes the flexibility that the new features provide. “With Skype for Business Online, we have more freedom to place people wherever the business needs them to be, rather than having technology limitations determine employee access,” says Ken Kress, President of BDSmktg.

Huge savings

Management also likes the savings. By using Skype for Business Online for field staff telephony, BDSmktg eliminates the need to reimburse employees for calls made from personal devices—a US$12,000 annual savings.
By replacing the $8,000-a-month licenses from its current conferencing provider with a $1,700-a-month Skype for Business Online subscription, BDSmktg will save $75,000 annually.
And by replacing its physical PBX with Cloud PBX, BDSmktg will avoid a $250,000 replacement cost and ongoing maintenance costs of $35,000 a year.
Last but not least is the real estate cost avoidance that BDSmktg could realize by using Skype for Business Online. “We’ll avoid significant costs to expand our office as our company grows as we enable more people and roles to work from home,” Metcalfe says.

Easy to manage

From Metcalfe’s perspective, having telephony functionality bundled with Office 365 makes his life easier. He eliminates the work and expense of a physical phone infrastructure. It’s far easier to move employees around the office and to move them from office to home. “Scaling up and creating additional phone numbers with PSTN Calling is very straightforward,” Metcalfe says.
There are fewer vendors and bills to manage. More services on user desktops are connected and interoperable, making support easier. “Giving employees new capabilities and saving money is what a successful IT department strives for,” Metcalfe says. “I’ve been championing a new phone system for three years, and to finally find a solution that is affordable, easy to implement, and easy to use is a game changer.”

Next, extend to every field employee

Metcalfe’s vision is for all the company’s thousands of field staff representatives to have access to Skype for Business Online and other Office 365 services. The above-mentioned savings could well make this possible.
“It would be ideal for our field operations managers to easily and instantly connect with the representatives that they manage,” Metcalfe says. “Everyone would have the Skype for Business Online mobile app on their smartphones. As our field programs ramp up and down, we adjust our Office 365 subscriptions as required using a central admin portal. It would make us more nimble, more responsive, and more competitive than ever.”

Centralizing national flood data in the cloud


Researchers from the University of Texas collaborated with other researchers, federal agencies, commercial partners, and first responders to create the National Flood Interoperability Experiment (NFIE). They used Microsoft Azure to help build a prototype for a national flood data-modeling and mapping system with the potential to provide life- and cost-saving information to the public. The goals of the NFIE include standardizing data, demonstrating a scalable solution, and helping to close the gap between national flood forecasting and local emergency response.
Sharing flood information for better prediction and response
In October 2013, the Onion Creek area near Austin, Texas, faced a particularly destructive flood. While onsite studying the flood, David Maidment, professor of civil engineering at the University of Texas, spoke with Harry Evans, chief of staff for the Austin Fire Department. They realized that they had similar goals for improving flood prediction and response, and could collaborate well with their different areas of expertise.
Maidment, who specializes in hydrology and flooding at the Center for Research and Water Resources, brought together participants from academia, federal agencies, commercial partners, and first responders to create the National Flood Interoperability Experiment (NFIE). He wanted a technology infrastructure that would allow flood information to flow in from various agencies and academia, and then flow out to allow citizens and first responders to better understand what was happening.
“What we're trying to do in the National Flood Interoperability Experiment is to prototype a set of infrastructure and services that can communicate with one another and with the public in a uniform and open way,” says Maidment.


Microsoft Azure for data analysis, storage, and sharing in the cloud
Microsoft Research helped the NFIE find the computational power it needed in Microsoft Azure. The NFIE uses Azure to perform statistical analysis of present and past flood data to help design a prototype for a national flood data-modeling and mapping system with the potential to provide life- and cost-saving information to the public.
By using Azure, the NFIE can standardize and store data in the cloud. Maidment and colleagues at the University of Texas developed a new language that provides both a common way to store time-value pairs, such as river flow over time, and a standard way of communicating that information over the Internet. The US Geological Survey adopted this language to publish its time-series data on water observations, and the National Weather Service will also use it to publish forecasts as part of the NFIE. When this common language is implemented operationally, those organizations will be able to communicate and collaborate more efficiently with one another.
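The sketch below illustrates the basic idea behind such a shared time-series format: each observation is a time-value pair tagged with the site, variable, and units, so any agency can publish or consume it over the Internet. The field names and gauge identifier are hypothetical, not the actual standard adopted by these agencies.

import json

# Hypothetical time-series payload: a gauge site, the variable measured, its
# units, and a list of time-value pairs. Publishing a common structure like
# this is what lets agencies exchange observations and forecasts directly.
series = {
    "site": "USGS-00000000",      # placeholder gauge identifier
    "variable": "streamflow",
    "units": "m^3/s",
    "points": [
        {"time": "2015-05-25T12:00:00Z", "value": 14.6},
        {"time": "2015-05-25T12:15:00Z", "value": 15.1},
        {"time": "2015-05-25T12:30:00Z", "value": 15.9},
    ],
}

print(json.dumps(series, indent=2))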
More flood information provides potential for improved public safety
NFIE uses Azure to deliver more forecasts than any one agency could. Currently, the National Weather Service makes forecasts at about 3,600 locations on rivers in the country. The NFIE expects to demonstrate delivery of specific and actionable data for 2.67 million locations nationally, including smaller streams. It also expects to increase the spatial density of flood forecast locations by a factor of more than 700, compared to the current National Weather Service system.
Ultimately, the NFIE has demonstrated that information with a greater level of detail has the potential to increase real-time responsiveness that can improve public safety and save lives. Working closely with the Austin Fire Department, the NFIE shows how data can be used to improve decision-making. Evans notes that this work will help the NFIE develop a template that agencies can use nationwide, along with their threat and risk analyses, to help communities better protect themselves from the risks of flooding.

Contact us Today!

Chat with an expert about your business’s technology needs.