
Greener datacenters for a brighter future: Microsoft’s commitment to renewable energy

By Brad Smith on blogs.microsoft.com

 


As the world increasingly races to a future based on cloud computing, a host of new and important public issues are emerging. One of these issues involves the energy and sustainability practices of the datacenters that power the cloud. Over the past year we’ve spent considerable time focusing on our work in this area at Microsoft, and I wanted to share today where we’re heading.
When it comes to sustainability, we’ve made important progress as a company since the start of this decade, but even more important work lies ahead. Across the tech sector we need to recognize that, by the middle of the next decade, datacenters will rank among the largest users of electrical power on the planet. We need to keep working on a sustained basis to build and operate greener datacenters that will serve the world well.
For Microsoft, this means moving beyond datacenters that are already 100 percent carbon neutral to also having those datacenters rely on a larger percentage of wind, solar and hydropower electricity over time. Today roughly 44 percent of the electricity used by our datacenters comes from these sources. Our goal is to pass the 50 percent milestone by the end of 2018, top 60 percent early in the next decade, and then to keep improving from there.
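To make the arithmetic behind these milestones concrete, here is a minimal sketch; the total consumption figure is a hypothetical placeholder, and only the 44, 50 and 60 percent figures come from the text above.

```python
# Sketch of the renewable-mix arithmetic behind the 50 and 60 percent milestones.
# total_kwh is a hypothetical placeholder; the percentages come from the post.

total_kwh = 10_000_000_000   # hypothetical annual datacenter consumption, in kWh
current_share = 0.44         # wind, solar and hydropower today

for target in (0.50, 0.60):
    additional_kwh = total_kwh * (target - current_share)
    print(f"Reaching {target:.0%} requires roughly {additional_kwh:,.0f} more kWh "
          f"of wind, solar and hydropower at this consumption level")
```

In practice the gap is larger than this simple subtraction suggests, because total consumption keeps growing as datacenters expand.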
Especially given the magnitude of datacenter expansion that will continue throughout this time period, this is not a small goal. It requires that we understand where we’ve come from over the past few years and take a principled approach to our work in the future. We need to translate these principles into clear and concrete goals that we can use to hold ourselves accountable in a responsible way. And it will require work with many other important stakeholders and institutions, from environmental groups to utilities to governments themselves.
Important work to date
Our focus on sustainability is not new. We’ve been tracking and reducing emissions since 2007, and we’ve been operating our datacenters and the rest of the company at 100 percent carbon neutrality since 2012. We’ve achieved this progress by driving efficiencies, charging our business units a fee on carbon, and investing in sustainable energy projects and technologies. When we’re not able to eliminate our energy use or directly power our operations with green energy, we purchase renewable energy certificates to reduce carbon emissions. When we include the use of these certificates, 100 percent of our consumption has been powered by renewable energy since 2014.
Microsoft was in fact one of the first large enterprises to implement a global internal carbon fee model, charging each business unit a fee based on the carbon emissions of its business operations. This provides a powerful incentive to find carbon-saving alternatives and invest in carbon-reducing innovations. Thanks in part to this program, Microsoft was ranked by the U.S. Environmental Protection Agency as the second largest user of green power in the United States. Earlier this year we were honored to receive an EPA Climate Leadership Award and to be recognized by the United Nations and World Economic Forum for our carbon fee model.
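As a rough illustration of how an internal carbon fee charge-back can work (the post does not disclose the actual fee rate or mechanics, so every figure below is hypothetical), a business unit's charge is simply its estimated emissions multiplied by an internal price per metric ton:

```python
# Minimal sketch of an internal carbon fee charge-back.
# The fee rate and emissions figures are hypothetical illustrations,
# not Microsoft's actual numbers.

FEE_PER_METRIC_TON_USD = 15.0  # hypothetical internal price per metric ton of CO2e

business_unit_emissions = {    # estimated metric tons of CO2e per business unit
    "datacenters": 120_000,
    "business_air_travel": 45_000,
    "office_buildings": 30_000,
}

for unit, tons in business_unit_emissions.items():
    charge = tons * FEE_PER_METRIC_TON_USD
    print(f"{unit}: {tons:,} t CO2e -> ${charge:,.0f} charged back")
```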
As part of our commitment to carbon neutrality, we also offset the carbon impact of our air travel. We do this by investing in community projects that focus on issues such as clean cookstoves, habitat protection and restoration, and solar power and lighting – impacting more than 7 million people worldwide.
As a result of this work, we’ve reduced carbon emissions by 9.5 million metric tons, purchased 14 billion kilowatt hours of green energy, and cut energy consumption by 10 percent at our 125-building main campus in Redmond, Washington. All of this represents important progress and creates a strong foundation on which to build.
A principled approach to the future
While we’re proud of our progress, we readily recognize that even bigger steps will be needed in the future. In part this is because of the unique and increasingly important role that datacenters will play in the decades ahead.
Over the past 250 years a few select inventions have served as the fundamental catalysts for human progress. In the First Industrial Revolution that began in the late 1700s, steam and the steam engine played this role. A century later, the Second Industrial Revolution was based on electricity and electrical power plants, and then gasoline and the combustion engine. The Third Industrial Revolution relied on the microprocessor. The Fourth Industrial Revolution we’re now entering will feature major technology advances in physical materials, biological processes and digital technologies. But the fundamental cornerstones for all of these advances will be data – the electricity of our age – and the datacenters that will make this massive use of data possible.
Datacenters have become the engine of transformation. The good news is that public cloud datacenters operated by companies like Microsoft are more energy efficient than the private server facilities run by individual companies or governments. This is natural, given the degree to which this has become a core competency, and it reflects our focus on both world-leading R&D and large capital investments to drive datacenter energy efficiency.
But there is no room for complacency. The largest tech companies today may each consume as much electrical power as a small American state. There may come a point in just a few decades when each of us consumes as much power as a mid-sized nation. This creates an obvious responsibility that we need to take seriously.
To help us live up to this responsibility, we have established three principles to guide our environmental sustainability work:
Transparency. We’re committed to the type of transparency that will hold us accountable and share our track record with the public. We’ll report annually on our total energy consumption, our consumption across regions, the mix of sources for the power that we use, the impact of our internal carbon fee model and the investments we make. We will also be transparent about where we are investing in renewable energy certificates or international equivalents, and about our investments in carbon offset projects around the world.
Help to accelerate the transition to a clean energy infrastructure. We’re committed to using more clean energy every year. We will be mindful about siting datacenters and other facilities where renewable energy sources are readily available or can be made available during ramp-up or operational phases. Wherever we operate, we will work to bring new renewable energy sources online through investments in new projects, by engaging on policy changes that help accelerate the availability of more clean energy, and by working with utilities to increase the availability of renewable energy on the grid.
Investments in research. Finally, we’re committed to cutting-edge research and development investments to advance our energy responsibilities. We will continue, in particular, to focus on R&D that will lead to further improvements in the efficiency of computing infrastructure, datacenters, servers and software performance. We will also work to advance sustainability through the products, platforms, and capabilities we use to run our business and offer our customers and partners, and we will invest in new technologies that have the capability to create more clean energy at scale.
Practical energy commitments
We recognize the importance of translating these principles into action. We’ve concluded that this requires making five clear and concrete commitments, to ourselves and to the public:
1. Improving our energy mix. First, we need to focus on our datacenters’ sourcing of electricity. Today, although 100 percent of the electricity used by our datacenters is renewable based on a mixture of direct projects and renewable energy certificates (or the equivalent), only about 44 percent of that power is generated by wind, solar and hydropower sources. Some of that electricity comes from projects where Microsoft directly procures renewable energy, such as the Keechi wind farm in Texas and the Pilot Hill wind farm in Illinois, while the rest is supplied by wind, solar and hydropower sources on the grid. But this means that we have a large opportunity to address the remaining 56 percent, either with our own direct purchases or by encouraging the addition of new wind, solar, or hydropower capacity to the grid.
We recognize that both the volume and percent of energy from these renewable sources needs to be higher. As we move forward, we will continue to purchase renewable energy certificates to ensure we reduce our carbon emissions to zero. But more important, we are setting goals to grow the percent of wind, solar, and hydropower energy we purchase directly and through the grid to 50 percent by 2018, 60 percent early in the next decade, and to an ongoing and higher percentage in future years beyond that. As we make progress, we’ll report on it and share how we’re thinking about our next milestone on this path.
2. Maintaining carbon neutrality. Through investments in energy efficiency, direct purchases of renewable energy, renewable energy certificates or the equivalent, and carbon offsets, we will continue to be 100 percent carbon neutral in our operations and business air travel.
3. Retiring all green attributes from projects in which we invest. Any time we purchase green energy, we will not sell the renewable energy certificates or any other green “attributes” for others to claim.
4. Investing in new energy technologies. We will continue to invest in new energy technologies, such as our biogas and fuel cell work, that have the potential to accelerate the availability of new types of energy and drive new efficiencies.
5. Supporting public policies that help enable new renewable energy sources. Finally, we will support public policies that accelerate the availability of additional renewable energy in markets where we operate. We believe this is an imperative not only for our ability to meet our own commitments, but for the energy improvements that are needed by the tech sector more broadly.
Toward a broader conversation about a sustainable future
The more we’ve focused on these issues, the more apparent it has become that the world needs an ever-broadening conversation to make sustained progress. Much has been accomplished in this regard in recent years, including a new global commitment to address these issues. But we’ll all need to work together to translate this into the types of practical steps that are needed.
Earlier this year we learned a great deal about the types of practical steps that can make a difference, when Microsoft joined an innovative public-private partnership with Dominion Power and the State of Virginia. Dominion will build a 20-megawatt solar energy plant to bring new, additional clean energy to the grid in Virginia; Microsoft helped fund it and will claim and retire the green attributes. Partnering with utilities and governments in these ways can impact the grid beyond our own operational needs and can help accelerate the transition to a cleaner energy economy.
The progress that’s needed will not come easily. The issues are complex and the steps that are needed are varied. Real progress will require that groups across the non-governmental, business and governmental communities find new ways to work together.
At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. In a world of more than 7 billion people, this plainly comes with a responsibility to advance sustainability in our operations, including our datacenters, and to deliver innovative solutions that will help address the environmental challenges and opportunities that lie ahead.



The 6 not-so-obvious reasons a project plan fails

By Maddie Murray


 

Read more about Microsoft's findings here


Eco-testing a building before it is even built


Architects and engineers all over the world are inventing new ways to reduce the time, cost, and risk of constructing energy-efficient, high-performance buildings. Data-intensive analysis plays a key role in the design of “green” buildings—but the high cost of such analysis can be prohibitive to these eco-pioneers. Microsoft Azure provides a way to help building designers perform complex data analysis cost-efficiently—and quickly—facilitating the design of energy-efficient buildings.
Using pre-fabricated parts and fast computers
Despite the global demand for sustainably designed buildings, many design businesses face practical implementation challenges, such as the time-intensive process of running computer simulations and the cost of the powerful hardware needed to reduce execution time, of sustainable design specialists, and of computer-aided design (CAD) software. The good news: cloud computing has tremendous potential to change all of that.
Green Prefab, a small startup company in Italy, is working with Microsoft Research Connections and the Royal Danish Academy to develop next-generation tools that will one day allow in-depth simulations of a building’s performance—before it's built. This innovative approach is made possible by Microsoft Azure, Microsoft’s open and powerful public cloud platform, which provides inexpensive data-intensive analysis.
Green Prefab is developing a library of prefabricated green building components that can be used to design eco-friendly buildings. Architects will be able to access civil engineering services, via the cloud, to produce energy efficiency reports, conduct in-depth structural analysis, and view photo-realistic preview images of the building. Green Prefab has teamed up with Microsoft Research Connections to develop some of the first tools for Microsoft Azure.
One essential ingredient of Green Prefab’s industrialized approach is its use of a data model that was developed for the construction industry in the 1990s by an international consortium known as buildingSMART. The buildingSMART model is an open format that makes it easy to exchange and share building information modeling (BIM) data between applications that were developed by different software vendors.
The open format of buildingSMART's data model has made it easier for Green Prefab to model prefabricated green building components. New tools that use massive computational power to simulate building performance will help the sustainable building industry.
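As a rough illustration of what consuming an open buildingSMART (IFC) model looks like in practice, here is a minimal sketch that lists wall elements from an IFC file. It assumes the open-source ifcopenshell Python library and a local sample file; this is illustrative, not Green Prefab's actual toolchain.

```python
# Minimal sketch: reading building elements from an IFC (buildingSMART) file.
# Assumes the open-source `ifcopenshell` package (pip install ifcopenshell)
# and a local sample file; the file path is hypothetical.
import ifcopenshell

model = ifcopenshell.open("sample_building.ifc")   # hypothetical file path

# IFC exposes typed entities; IfcWall is one of the standard building elements.
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)
```

Because the format is vendor-neutral, the same file can be produced by one CAD tool and analyzed by another, which is exactly the kind of interoperability the buildingSMART model was designed for.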
Developing energy simulations
With the goal of making it possible for engineers and architects to analyze complex building scenarios extremely quickly, Green Prefab and the Institute of Architectural Technology of the Royal Danish Academy collaborated to validate the potential usefulness of building performance energy simulations in the cloud.
The Royal Danish Academy conducted an experiment that used Green Prefab’s prototype web-based tools with a supercomputer in Barcelona, Spain, to execute parametric energy simulations of buildings using the power of cloud computing.
The design of the test building reflected the floor space, occupancy, and environmental setting of a standard office in Copenhagen, Denmark. In order to understand the advantages of the proposed service, in comparison to conventional ways of using simulations, a parallel experiment was conducted. Starting from the same building design, the same architect conceived and tested 50 design options with a standard dual-core PC.
The cloud-based approach achieved approximately twice the potential energy savings, 33 percent, compared to only 17 percent for the conventional approach. It also reduced computing time significantly. Running the 220,184 parametric simulations on a standard dual-core PC would have taken 122 days; running those same energy simulations in the cloud took only three days.
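The reported speed-up is easy to sanity-check with back-of-the-envelope arithmetic using only the figures quoted above (220,184 simulations, 122 days on a dual-core PC, three days in the cloud):

```python
# Back-of-the-envelope arithmetic for the parametric-simulation experiment,
# using only the figures quoted in the text.

simulations = 220_184
pc_days = 122          # estimated runtime on a standard dual-core PC
cloud_days = 3         # measured runtime in the cloud

seconds_per_sim_on_pc = pc_days * 24 * 3600 / simulations
speedup = pc_days / cloud_days

print(f"~{seconds_per_sim_on_pc:.1f} s per simulation on the dual-core PC")
print(f"The cloud run was ~{speedup:.0f}x faster overall")
```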
Reducing building time, cost, and risk
The wide adoption of cloud-based civil engineering tools could radically reshape the green building industry; Green Prefab's photo-realistic, 3-D illustrations of buildings in design are just the first step. By producing fully detailed digital models of a building, Green Prefab expects to be able to guarantee its appearance and performance, save construction time, and reduce costs by as much as 30 percent.
As cloud-based civil engineering tools become available to small and medium-sized architecture and engineering firms around the world, even the smallest firms will be able to control costs in the pre-construction phase and reduce uncertainties during construction.
Microsoft’s collaboration with Green Prefab presents an optimistic picture of a future in which new cloud-based tools help reduce the energy consumption of buildings substantially. Such scientific breakthroughs will facilitate a shift towards building more environmentally friendly buildings that use energy and water efficiently, reduce waste, and provide a healthy environment for working and living.

Understanding cloud forests through the power of cloud computing


The Brazilian Cloud Forest Sensing Project is studying how cloud forests function in response to climatic variability. The project deployed more than 700 Internet-enabled sensors connected to Microsoft Azure and is gathering integrated data on physical and biological processes within the study site. Through its partnership with Microsoft Research, the Brazilian Cloud Forest Sensing Project has created a repeatable Internet-of-Things (IoT) solution that revolutionizes how research can benefit from the use of a wireless sensor network, cloud technology, and automated data stream processing.
Researching cloud forests at work
Brazil is one of the most forested countries in the world. More than 60 percent of Brazil is covered by forest, including many cloud forests—moist forests characterized by persistent low cloud cover. Cloud forests help provide clean water because their trees intercept water from clouds. That water then drips onto the soil and feeds rivers, lakes, and irrigation systems, even during periods of low rainfall.
The Brazilian Cloud Forest Sensing Project is an initiative of the São Paulo Research Foundation (FAPESP) Biodiversity Research Program, supported by the Microsoft Research-FAPESP Joint Research Center. Rafael Oliveira, a professor of Ecology from the University of Campinas (Unicamp), conducts research for the project in the cloud forests of Campos do Jordão, Brazil.
The goal of Oliveira’s investigation is to understand how cloud forests work and then measure the impact of microclimatic variation on several ecosystem processes. The research project focuses on a fragmented forest—which most forests in the world are—in contrast to a continuous forest such as the ones found in the Amazon. Most fragmented forests are in proximity to urban areas and are critical to the water supply of those communities.
Managing sensor data in the cloud
Collaborators from the Brazilian Cloud Forest Sensing Project partnered with Microsoft Research to define research questions and develop software to analyze streams of data, which the Sensing Project team gathered every 15 minutes from a unified ensemble of more than 700 sensors on plants, in soil, and above tree canopies throughout the forest.
To manage and process high volumes of complex data, the Sensing Project team uses Microsoft Azure to store, process, and visualize the data coming in from the sensors. The sensors themselves are connected to Azure as an instance of the Internet of Things: devices embedded in the physical world sending data to the cloud and changing their behavior based on the directives assigned to them.
The sensors are networked to and communicate directly with the Microsoft Azure cloud platform. Because researchers have a constant stream of real-time data, they can quickly observe what's happening and, if necessary, remotely change how the sensors collect data. This ability to make fast adjustments provides the researchers with high-fidelity data for specific time periods.
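For a sense of what a single sensor's telemetry loop might look like, here is a minimal sketch assuming the Azure IoT Device SDK for Python; the connection string, sensor name and payload schema are hypothetical, since the article does not describe the project's actual firmware or message format.

```python
# Minimal sketch of a sensor pushing readings to the cloud every 15 minutes.
# Assumes the Azure IoT Device SDK for Python (pip install azure-iot-device);
# the connection string, sensor id and payload schema are hypothetical.
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=...;DeviceId=soil-probe-017;SharedAccessKey=..."  # hypothetical

def read_soil_moisture() -> float:
    return 0.31  # placeholder for a real sensor driver

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
try:
    while True:
        payload = {
            "sensor_id": "soil-probe-017",       # hypothetical device name
            "soil_moisture": read_soil_moisture(),
            "timestamp": time.time(),
        }
        client.send_message(Message(json.dumps(payload)))
        time.sleep(15 * 60)   # the project sampled every 15 minutes
finally:
    client.shutdown()
```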
Providing a model for future research
Using a network of 700 sensors to study a forest was a new concept, and the team had to determine all the pertinent details, such as how to manage the sensors, which sensors were needed, and what kinds of data to measure. The researchers at the Sensing Project have created a repeatable technical solution, and researchers worldwide will be able to learn from it and apply these practices to their own sensor-based studies. The methodologies that are being developed will help Brazil’s investigations and other global research projects.

How Microsoft Conjured Up Real-Life Star Wars Holograms

By Brian Barrett as written on www.wired.com

Help me, HoloLens. You’re my only hope.
OK, so maybe it’s not time to write R2-D2 out of Star Wars quite yet. But Microsoft researchers have created something that brings one of the droid’s best tricks to our present-day lives. It’s called holoportation, and it could change how we communicate over long distances forever. Also, it makes for one hell of a demo.
It started, though, as a response to homesickness, says project lead Shahram Izadi. His Cambridge (UK)-based team, which focuses on 3D-sensor technologies and machine learning among other next-generation computing concerns, had spent two and a half years embedded with the HoloLens team in Redmond, Washington. Izadi is a father—that’s his daughter in the demo video—and when the time came to dream up the next challenge, they turned to the one that most affected their personal and professional lives during that stretch: communication.
“We have two young children, and there was really this sense of not really being able to communicate as effectively as we would have liked,” Izadi says. “Tools such as video conferencing, phone calls, are just not engaging enough for young children. It’s just not the same as physically being there.”
So they created something that, in several key ways, is. Holoportation, as the name implies, projects a live hologram of a person into another room, where they can interact with whomever’s present in real time as though they were actually there. In this way, it actually one-ups the classic Star Wars version, in which a recorded message appears in hologram form. Holoportation can do that, too, but the real magic is in what basically amounts to a holographic livestream.
As you might imagine, that takes a lot of horsepower.
I’ll Take You There
The Holoportation system starts with high-quality 3D-capture cameras placed strategically around a given space. Think of each of them as a Kinect camera with a serious power-up. “Kinect is designed to track the human skeleton,” says Izadi. “We’re really about capturing high quality detail of the human body, to reconstruct every feature. That has required a rethinking of the 3D sensor from the ground up.”
Once those cameras have captured every possible viewpoint, custom software stitches them together into one fully formed 3D model. This process is ongoing, Izadi says, as more frames of data make for a higher-quality model. The accumulated data results in an incredibly lifelike hologram that can be transported anywhere in the world that has a receiving system, like, say, a HoloLens. And it can do it fast.
“We want to do all of this processing in a tiny window, around 33 milliseconds to process all the data coming from all of the cameras at once, basically, and also create a temporal model, and then stream the data,” says Izadi, whose team leans on a small army of off-the-shelf (but high-end) Nvidia GPUs to crunch the relevant numbers.
But wait, you’re saying, that must be an insane amount of data to transmit. You’re right! Not only does the holoportation process generate mountains of data, Izadi points out that most streaming video codecs aren’t particularly 3D-friendly. That makes compression, which in this case transforms gigabytes into megabytes, a huge part of making holoportation work.
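A quick back-of-the-envelope calculation shows why compression matters so much here. Only the 33-millisecond window and the gigabytes-to-megabytes ratio come from the article; the camera count and per-camera payload below are hypothetical.

```python
# Rough arithmetic for the holoportation real-time budget.
# The 33 ms window and the GB-to-MB compression come from the article;
# the camera count and per-camera payload are hypothetical illustrations.

frame_window_ms = 33
frames_per_second = 1000 / frame_window_ms          # roughly 30 fps end-to-end

cameras = 8                                          # hypothetical capture rig
raw_mb_per_camera_per_frame = 12                     # hypothetical depth + color payload
raw_mb_per_second = cameras * raw_mb_per_camera_per_frame * frames_per_second

compression_ratio = 1000                             # "gigabytes into megabytes"
streamed_mb_per_second = raw_mb_per_second / compression_ratio

print(f"{frames_per_second:.0f} fps -> {raw_mb_per_second:,.0f} MB/s raw, "
      f"~{streamed_mb_per_second:.1f} MB/s after compression")
```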
Aligning Worlds
To be clear, what you’re seeing in this video is real. It actually does work. There are still some hurdles to overcome before holoportation becomes a part of our everyday lives, though. You’ll notice, for instance, that the furniture in the two rooms that Microsoft uses is identical, making interaction much more seamless than it would be with the furniture from two rooms overlapping, or people walking through desks, and so on. Fortunately, there’s a straightforward solution: Train the cameras to only focus on the items you want to holoport, rather than an entire room.
“The user could potentially decide that they don’t want to replicate any furniture,” says Izadi. “We have this notion of background segmentation, where you capture the room just with the furniture in it, and that means that only the foreground object that goes into the room after will be of interest for the stream.” You could also strategically incorporate certain pieces of furniture by deciding how the two rooms align. Take, as an example, two grandparents holoporting in from their couch so that they can experience Christmas morning. Rather than let them float in mid-air, one could decide to orient their seated holograms onto the couch in their own living room.
This gets to be fairly heady stuff. For now, it’s probably enough to know that Izadi’s team is aware of potential spatial problems, and sees them instead as opportunities. After all, Izadi ultimately sees the project as a consumer device. We already have dedicated home theaters; why not dedicated home holoportation rooms, as well?
“The end goal and vision for the project is really to boil this down to something that’s as simple as a home cinema system,” says Izadi. “You walk in to a number of these almost speaker-like units, in a way that you would set up surround sound, but this is giving you surround-vision.”
Long before then, maybe even within a couple of years, Izadi expects that you might find holoportation rigs in meeting rooms. And while it would be expensive—that’s a lot of cameras, and a lot of GPUs—he points out that global business travel costs about a trillion dollars a year. Holoportation starts to look a lot less expensive next to a round trip ticket to Shanghai.
Also intriguing? While Izadi’s team has worked closely with HoloLens, their system doesn’t play hardware favorites. All you’d really need at the receiving site is a virtual reality or augmented reality headset.
“We’re very much agnostic to what we call the ‘viewer’ technology,” says Izadi. “Obviously we feel like there are some unique scenarios with HoloLens, but we would like to leverage as many display technologies as possible.”
Including, one suspects, plucky little droids.

Microsoft research project puts cloud in ocean for the first time

By Athima Chansanchai as written on news.microsoft.com


In 2015, starfish, octopus, crabs and other Pacific Ocean life stumbled upon a temporary addition to the seafloor, more than half a mile from the shoreline: a 38,000-pound container. At 10 feet by 7 feet, it is quite small by ocean standards. The shrimp exploring the seafloor made more noise than the datacenter inside the container, which had computing power equivalent to 300 desktop PCs.
But the knowledge gained from the three months this vessel was underwater could help make future datacenters more sustainable, while at the same time speeding data transmission and cloud deployment. And yes, maybe even someday, datacenters could become commonplace in seas around the world.
The technology to put sealed vessels underwater with computers inside isn’t new. In fact, it was one Microsoft employee’s experience serving on submarines that carry sophisticated equipment that got the ball rolling on this project. But Microsoft researchers do believe this is the first time a datacenter has been deployed below the ocean’s surface. Going under water could solve several problems by introducing a new power source, greatly reducing cooling costs, closing the distance to connected populations and making it easier and faster to set up datacenters.

A little background gives context for what led to the creation of the vessel. Datacenters are the backbone of cloud computing, and contain groups of networked computers that require a lot of power for all kinds of tasks: storing, processing and/or distributing massive amounts of information. The electricity that powers datacenters can be generated from renewable power sources such as wind and solar, or, in this case, perhaps wave or tidal power. When datacenters are closer to where people live and work, there is less “latency,” which means that downloads, Web browsing and games are all faster. With more and more organizations relying on the cloud, the demand for datacenters is higher than ever – as is the cost to build and maintain them.
All this combines to form the type of challenge that appeals to Microsoft Research teams who are experts at exploring out-of-the-box solutions.
Ben Cutler, the project manager who led the team behind this experiment, dubbed Project Natick, is part of a group within Microsoft Research that focuses on special projects. “We take a big whack at big problems, on a short-term basis. We take a look at something from a new angle, a different perspective, with a willingness to challenge conventional wisdom.” So when a paper about putting datacenters in the water landed in front of Norm Whitaker, who heads special projects for Microsoft Research NExT, it caught his eye.
“We’re a small group, and we look at moonshot projects,” Whitaker says. The paper came out of ThinkWeek, an event that encourages employees to share ideas that could be transformative to the company. “As we started exploring the space, it started to make more and more sense. We had a mind-bending challenge, but also a chance to push boundaries.”
One of the paper’s authors, Sean James, had served in the Navy for three years on submarines. “I had no idea how receptive people would be to the idea. It’s blown me away,” says James, who has worked on Microsoft datacenters for the past 15 years, from cabling and racking servers to his current role as senior research program manager for the Datacenter Advanced Development team within Microsoft Cloud Infrastructure & Operations. “What helped me bridge the gap between datacenters and underwater is that I’d seen how you can put sophisticated electronics under water, and keep it shielded from salt water. It goes through a very rigorous testing and design process. So I knew there was a way to do that.”
James recalled the century-old history of cables in oceans, evolving to today’s fiber optics found all over the world.
“When I see all of that, I see a real opportunity that this could work,” James says. “In my experience the trick to innovating is not coming up with something brand new, but connecting things we’ve never connected before, pairing different technology together.”
Building on James’s original idea, Whitaker and Cutler went about connecting the dots.


Cutler’s small team applied science and engineering to the concept. A big challenge involved people. People keep datacenters running. But people take up space. They need oxygen, a comfortable environment and light. They need to go home at the end of the day. When they’re involved you have to think about things like landscaping and security.
So the team moved to the idea of a “lights out” situation. A very simple place to house the datacenter, very compact and completely self-sustaining. And again, drawing from the submarine example, they chose a round container. “Nature attacks edges and sharp angles, and it’s the best shape for resisting pressure,” Cutler says. That set the team down the path of trying to figure out how to make a datacenter that didn’t need constant, hands-on supervision.
This initial test vessel wouldn’t be too far off-shore, so they could hook into an existing electrical grid, but being in the water raised an entirely new possibility: using the hydrokinetic energy from waves or tides for computing power. This could make datacenters work independently of existing energy sources, located closer to coastal cities, powered by renewable ocean energy.
That’s one of the big advantages of the underwater datacenter scheme – reducing latency by closing the distance to populations and thereby speeding data transmission. Half of the world’s population, Cutler says, lives within 120 miles of the sea, which makes it an appealing option.
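The latency argument can be made concrete with a simple propagation estimate: light in optical fiber covers roughly 200 kilometers per millisecond, so round-trip time grows with the distance to the datacenter. The 120-mile figure comes from the article; the longer distance is a hypothetical contrast.

```python
# Rough fiber propagation estimate for the latency argument.
# 120 miles comes from the article; the second distance is a hypothetical contrast.
# This counts propagation delay only; real latency adds routing and processing time.

FIBER_SPEED_KM_PER_MS = 200.0   # light in fiber travels at roughly 200,000 km/s
MILES_TO_KM = 1.609

for miles in (120, 2000):
    km = miles * MILES_TO_KM
    round_trip_ms = 2 * km / FIBER_SPEED_KM_PER_MS
    print(f"{miles} miles: ~{round_trip_ms:.1f} ms round-trip propagation delay")
```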
This project also shows it’s possible to deploy datacenters faster, turning it from a construction project – which requires permits and other time-consuming steps – into a manufacturing one. Building the vessel that housed the experimental datacenter took only 90 days. While every datacenter on land is different and needs to be tailored to varying environments and terrains, these underwater containers could be mass produced for very similar conditions underwater, which is consistently colder the deeper it is.


Cooling is an important aspect of datacenters, which normally run up substantial costs operating chiller plants and the like to keep the computers inside from overheating. The cold environment of the deep seas automatically makes datacenters less costly and more energy efficient.
Once the vessel was submerged last August, the researchers monitored the container from their offices in Building 99 on Microsoft’s Redmond campus. Using cameras and other sensors, they recorded data like temperature, humidity, the amount of power being used for the system, even the speed of the current.
“The bottom line is that in one day this thing was deployed, hooked up and running. Then everyone is back here, controlling it remotely,” Whitaker says. “A wild ocean adventure turned out to be a regular day at the office.”
A diver would go down once a month to check on the vessel, but otherwise the team was able to stay constantly connected to it remotely – even after they observed a small tsunami wave pass.
The team is still analyzing data from the experiment, but so far, the results are promising.
“This is speculative technology, in the sense that if it turns out to be a good idea, it will instantly change the economics of this business,” says Whitaker. “There are lots of moving parts, lots of planning that goes into this. This is more a tool that we can make available to datacenter partners. In a difficult situation, they could turn to this and use it.”


Christian Belady, general manager for datacenter strategy, planning and development at Microsoft, shares the notion that this kind of project is valuable for the research gained during the experiment. It will yield results, even if underwater datacenters don’t start rolling off assembly lines anytime soon.
“At first I was skeptical, with a lot of questions. What would it cost? How do we power it? How do we connect it? But at the end of the day, I enjoy seeing people push limits,” Belady says. “The reality is that we always need to be pushing limits and trying things out. The learnings we get from this are invaluable and will in some way manifest in future designs.”
Belady, who came to Microsoft from HP in 2007, is always focused on driving efficiency in datacenters – it’s a deep passion for him. It takes a couple of years to develop a datacenter, but it’s a business that changes hourly, he says, with demands that change daily.
“You have to predict two years in advance what’s going to happen in the business,” he says.
Belady’s team has succeeded in making datacenters more efficient than they’ve ever been. He helped create an industry metric, power usage effectiveness (PUE), and in that regard, Microsoft is leading the industry. Datacenters are also using next-generation fuel cells – something James helped develop – and wind power projects like Keechi in Texas to improve sustainability through alternative power sources. Datacenters have also evolved to save energy by using outside air instead of refrigeration systems to control temperatures inside. Water consumption has also gone down over the years.
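PUE has a simple definition: the total energy a facility draws divided by the energy delivered to the IT equipment, so a value of 1.0 would be ideal. A minimal sketch with hypothetical meter readings:

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# The meter readings below are hypothetical.

total_facility_kwh = 1_250_000   # everything: IT, cooling, lighting, power losses
it_equipment_kwh = 1_000_000     # servers, storage, network gear

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}  (1.00 would mean every watt goes to IT equipment)")
```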
Belady, who says he “loved” this project, says he can see its potential as a solution for latency and quick deployments.
“But what was really interesting to me, what really surprised me, was to see how animal life was starting to inhabit the system,” Belady says. “No one really thought about that.”
Whitaker found it “really edifying” to see the sea life crawling on the vessel, and how quickly it became part of the environment.
“You think it might disrupt the ecosystem, but really, it’s just a tiny drop in an ocean of activity,” he says.
The team is currently planning the project’s next phase, which could include a vessel four times the size of the current container with as much as 20 times the compute power. The team is also evaluating test sites for the vessel, which could be in the water for at least a year, deployed with a renewable ocean energy source.
Meanwhile, the initial vessel is now back on land, sitting in the lot of one of Microsoft’s buildings. But it’s the gift that keeps giving.
“We’re learning how to reconfigure firmware and drivers for disk drives, to get longer life out of them. We’re managing power, learning more about using less. These lessons will translate to better ways to operate our datacenters. Even if we never do this on a bigger scale, we’re learning so many lessons,” says Peter Lee, corporate vice president of Microsoft Research NExT. “One of the things that’s so fun about a CEO like Satya Nadella is that he’s hard-nosed business savvy, customer obsessed, but another half of this brain is a dreamer who loves moonshots. When I see something like Natick, you could say it’s a moonshot, but not one completely divorced from Microsoft’s core business. I’m really tickled by it. It really perfectly fits the left brain/right brain combination we have right now in the company.”

FaST-LMM and Windows Azure Accelerate Genetics Research


Today, researchers can collect, store, and analyze tremendous volumes of data; however, technological and storage limitations can severely impede the speed at which they can analyze these data. A new algorithm that was developed by Microsoft Research, called FaST-LMM (Factored Spectrally Transformed Linear Mixed Models), runs on Windows Azure in the cloud and expedites analysis time—reducing processing periods from years to just days or hours. An early application of FaST-LMM and Windows Azure helps researchers analyze data for the genetic causes of common diseases.
Searching for DNA Clues to Disease
The Wellcome Trust in Cambridge, England, is researching the genetic causes of seven diseases—including hypertension, rheumatoid arthritis, and diabetes. The project involves searching for combinations of genomic information to gain insight into an individual’s likelihood to develop one of these diseases. With a database containing genetic information from 2,000 people and a shared set of approximately 13,000 controls for each of the seven diseases, they needed both massive storage and powerful computation capacity.
They are storing their vast database of genetic information in the Windows Azure cloud, instead of traditional hardware storage, which represents a profound shift in how big data are stored. “We are taking on the challenge of taking what would be traditional high-performance computing, one of the hardest workloads to move to the cloud, and moving it to the cloud,” observes Jeff Baxter, development lead in the Windows HPC team at Microsoft. “There’s a variety of both technical and business challenges, which makes it exciting and interesting.”
Exploring the Power of the Cloud
Resource management is one of the primary issues associated with big data: not only determining how many resources are required for the project, but also identifying the right type of resources—within the available budget. For example, running a large project on fewer machines might save on hardware costs but result in substantial project delays. Researchers must find a balance that will keep their project on track while working with available resources.
The FaST-LMM algorithm can analyze enormous datasets in less time than existing alternatives. Microsoft Research also has the infrastructure that is required to perform the computations, explains David Heckerman, distinguished scientist at Microsoft Research. With more CPUs dedicated to a job, computations that would ordinarily take years to finish can be completed in just hours.
For the Wellcome Trust project, the team’s available resources included a combination of Windows HPC Server, Windows Azure, and the FaST-LMM algorithm. The team knew that they had a powerful set of technologies. The question was, could they achieve the required results in the desired timeline?
“For this project, we would need to do about 125 compute years of work. We wanted to get that work done in about three days,” explains Baxter. By running FaST-LMM on Windows Azure, the team had access to tens of thousands of computer cores and an improved algorithm that was able to expedite the work. “You’re still doing hundreds of compute years of work,” he explains, “but with these resources, we can actually do hundreds of compute years in a couple of days.”
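The scale of that job is easier to grasp as core-count arithmetic. The sketch below uses only the figures quoted above and assumes perfect parallel scaling, which real workloads never quite achieve:

```python
# Back-of-the-envelope core count for "125 compute years in about three days",
# assuming perfect parallel scaling (an idealization).

compute_years = 125
wall_clock_days = 3

core_days_needed = compute_years * 365
cores_required = core_days_needed / wall_clock_days
print(f"~{cores_required:,.0f} cores running flat out for {wall_clock_days} days")
```

The result lands in the same "tens of thousands of cores" range that Baxter describes.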
While the results were impressive, there was something that had an even bigger impact. “The most impressive thing was how quickly we could take this project from inception to actually completing it and generating new science,” Baxter notes. “This is stuff that, without both the improvements in the algorithms that the Microsoft Research guys had come up with and the ability for us to provide the tens to hundreds of thousands of cores, would have been infeasible.”
The Future for Big Data Research
The Wellcome Trust project is just the beginning of what could be a major shift in how research databases are stored and analyzed. “With this new, huge amount of data that’s coming online, we’re now able to find connections between our DNA and who we are that we could never find before,” Heckerman says. The ability to analyze that data more quickly, and with greater depth, could help scientists make faster breakthroughs in critical genetic research. The FaST-LMM algorithm running on Windows Azure is helping to accelerate just such breakthroughs.

All that RaaS: saving lives and transforming healthcare economics

Stuart, a 66-year-old man with diabetes, felt lousy—constantly fatigued, nauseated, and short of breath after just the slightest exertion. His daughter, worried by his increasing frailty, took him to the emergency room at the local hospital. Her concern was amply justified: Stuart was suffering from heart failure. Like the 5.1 million other Americans who suffer from heart failure, he was admitted to the hospital to treat this serious, often life-threatening condition. The medical team stabilized his condition, and Stuart left the hospital after 10 days with words of advice and a few medications, glad to be home. Within a month he was back, once again fatigued, and facing a second episode.


Stuart’s story is far from rare. Hospital readmissions for chronic conditions such as diabetes, chronic obstructive pulmonary disease (COPD), and congestive heart failure (CHF) are both common and very costly. Studies conducted in the United States indicate that nearly 20% of Medicare patients hospitalized for chronic conditions are readmitted within 30 days. Experts at Edifecs estimate that these readmissions cost Medicare—and US taxpayers—about $26 billion a year, and that a large majority of them are avoidable with accurate prioritization and personalized care protocols. Readmission-related costs have become so onerous that the Affordable Care Act includes financial rewards and penalties to deal with the readmission problem. Hospitals that reduce their readmission rates receive financial incentives; those that do not are penalized with reduced reimbursement.

Holistic tools that can reliably predict heart-failure readmissions—taking into account all aspects of each patient’s condition and risk factors—would significantly help patients and hospitals. The growth in the use of electronic patient records has recently offered the potential for such analysis, but little had been done to harness the collective intelligence contained in hospital patient records augmented with other data sources.
By introducing cloud computing technology and applying some of the latest advances in machine learning techniques, researchers are rapidly changing this situation.
One leading example of this is RaaS (Readmission Score as a Service), a platform that was developed by the University of Washington (UW) Tacoma’s Center for Data Science. RaaS compares a patient’s medical information to a database of heart-failure outcomes, using advanced machine learning techniques to arrive at a risk-of-readmission factor as well as corresponding actionable guidelines for the patient-provider team. Those patients identified with a high risk receive additional treatment: the goal is to reduce their likelihood of readmission and produce overall healthier outcomes across all stages of the patient care continuum.
The hundreds of machine learning models behind RaaS are developed using both the R language and Microsoft Azure Machine Learning. This chronic care management predictive platform relies on historical patient data from multiple sources, including anonymized electronic medical records, claims, labs, medications, and psycho-social factors, all labeled with observed outcomes. The models access and share these data in sync to provide continuous monitoring and personalized patient alerts.
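As an illustration of the kind of model such a platform aggregates (the actual RaaS models, features and training data are not described in this article), here is a hedged sketch that trains a single logistic-regression readmission classifier on synthetic records using scikit-learn:

```python
# Illustrative readmission-risk classifier, not the actual RaaS models.
# Uses synthetic data and scikit-learn; the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(40, 90, n),          # age
    rng.integers(0, 6, n),            # prior admissions in the past year
    rng.normal(35, 10, n),            # ejection fraction (%)
    rng.integers(0, 2, n),            # diabetes comorbidity flag
])
# Synthetic outcome: higher risk with more prior admissions and lower ejection fraction.
logits = 0.6 * X[:, 1] - 0.05 * X[:, 2] + 0.4 * X[:, 3] - 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]   # risk-of-readmission scores
print("AUC on held-out synthetic data:", round(roc_auc_score(y_test, risk_scores), 3))
```

In a real deployment, scores like these would be recomputed at each stage of care (post-admission, pre-discharge, post-discharge) and surfaced to the care team, as the article describes.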
RaaS is available as an on-premises service as well as via the cloud by using Azure Machine Learning web services and the Azure-based Zementis Adapa scoring engine to make predictions for patients. When deployed using Azure Cloud Services, RaaS performs data preparation at scale.
The UW Center for Data Science team began developing initial models in collaboration with MultiCare Health System in March 2012, using just two on-premises servers. The maintenance, frequent updates, and down times of these on-premises servers posed an ongoing problem, and scalability issues limited the scope of the project by affecting the speed of data exploration and machine learning.
About a year and a half ago, the team applied for and was awarded an Azure for Research grant, taking advantage of the Microsoft Research program that offers training and awards of computing resources to qualified institutions that use the cloud to advance scientific discovery. The award enabled the Center for Data Science team to scale up the project and create a robust prediction engine that generates a readmission risk factor score for patients at every stage of their hospital care: post-admission, pre-discharge, and post-discharge.
The RaaS platform at MultiCare Health enables the care management team to view an electronic dashboard that shows heart-failure patients’ risks of readmission. UW Medicine Cardiology is now collaborating with the Center for Data Science team to study the efficacy of predictive models for augmenting care management guidelines by using machine learning.
—Daron Green, Deputy Managing Director, Microsoft Research
—Gregory Wood, MD, UW Medicine Cardiology

Contact us Today!

Chat with an expert about your business’s technology needs.