Understanding cloud forests through the power of cloud computing

The Brazilian Cloud Forest Sensing Project is studying how cloud forests function in response to climatic variability. The project deployed more than 700 Internet-enabled sensors connected to Microsoft Azure and is gathering integrated data on physical and biological processes within the study site. Through its partnership with Microsoft Research, the Brazilian Cloud Forest Sensing Project has created a repeatable Internet-of-Things (IoT) solution that revolutionizes how research can benefit from a wireless sensor network, cloud technology, and automated data stream processing.
Researching cloud forests at work
Brazil is one of the most forested countries in the world. More than 60 percent of Brazil is covered by forest, including many cloud forests—moist forests characterized by persistent low cloud cover. Cloud forests help provide clean water because their trees intercept water from clouds. That water then drips onto the soil and feeds rivers, lakes, and irrigation systems, even during periods of low rainfall.
The Brazilian Cloud Forest Sensing Project is an initiative of the São Paulo Research Foundation (FAPESP) Biodiversity Research Program, supported by the Microsoft Research-FAPESP Joint Research Center. Rafael Oliveira, a professor of Ecology from the University of Campinas (Unicamp), conducts research for the project in the cloud forests of Campos do Jordão, Brazil.
The goal of Oliveira’s investigation is to understand how cloud forests work and then measure the impact of microclimatic variation on several ecosystem processes. The research project focuses on a fragmented forest—which most forests in the world are—in contrast to a continuous forest such as the ones found in the Amazon. Most fragmented forests are in proximity to urban areas and are critical to the water supply of those communities.
Managing sensor data in the cloud
Collaborators from the Brazilian Cloud Forest Sensing Project partnered with Microsoft Research to define research questions and develop software to analyze streams of data, which the Sensing Project team gathered every 15 minutes from a unified ensemble of more than 700 sensors on plants, in soil, and above tree canopies throughout the forest.
To manage and process high volumes of complex data, the Sensing Project team uses Microsoft Azure to store, process, and visualize the data coming in from the sensors. The sensors themselves are connected to Azure as an instance of the Internet of Things: devices embedded in the physical world sending data to the cloud and changing their behavior based on the directives assigned to them.
The sensors are networked to and communicate directly with the Microsoft Azure cloud platform. Because researchers have a constant stream of real-time data, they can quickly observe what's happening and, if necessary, remotely change how the sensors collect data. This ability to make fast adjustments provides the researchers with high-fidelity data for specific time periods.
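The article does not describe the ingestion pipeline in detail, but a minimal sketch of the pattern it outlines, with field devices pushing timestamped readings to a cloud endpoint on a 15-minute cycle, might look like the following Python. The endpoint URL, payload fields, and sensor IDs are hypothetical placeholders, not the project's actual implementation.

```python
# Minimal sketch of a sensor gateway pushing readings to a cloud endpoint.
# The ingestion URL, payload fields, and sensor IDs are illustrative assumptions.
import json
import time
from datetime import datetime, timezone

import requests

INGEST_URL = "https://example-ingest.azurewebsites.net/api/readings"  # hypothetical endpoint

def read_sensor(sensor_id):
    """Placeholder for reading one field sensor (soil moisture, sap flow, etc.)."""
    return {"sensor_id": sensor_id, "value": 0.42}  # dummy value

def push_reading(reading):
    payload = dict(reading, timestamp=datetime.now(timezone.utc).isoformat())
    resp = requests.post(INGEST_URL, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"}, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    while True:
        for sensor_id in ("soil-moisture-001", "sap-flow-017"):
            push_reading(read_sensor(sensor_id))
        time.sleep(15 * 60)  # the project collects data every 15 minutes
```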
Providing a model for future research
Using a network of 700 sensors to study a forest was a new concept, and the team had to determine all the pertinent details, such as how to manage the sensors, which sensors were needed, and what kinds of data to measure. The researchers at the Sensing Project have created a repeatable technical solution, and researchers worldwide will be able to learn from it and apply these practices to their own sensor-based studies. The methodologies that are being developed will help Brazil’s investigations and other global research projects.

Predicting ocean chemistry using Microsoft Azure

Shellfish farmer Bill Dewey remembers the first year he heard of ocean acidification, a phrase describing the changing chemistry of ocean water. It was around 2008, and Dewey worked for Taylor Shellfish, a company that farms oysters in ocean waters off the coast of Washington. That year, thousands of tiny “seed” oysters died off suddenly. Today, a cloud-based predictive system from the University of Washington (UW) and Microsoft Research may help the shellfish industry survive changing conditions by providing forecasts about ocean water.
Dewey, director of Public Affairs for Taylor Shellfish, vividly remembers walking into a conference room where an audience of shellfish farmers first heard that ocean acidification might threaten their industry profoundly. They learned that increased carbon dioxide in the atmosphere is making ocean water more acidic. In 2013, the Washington legislature stepped in and asked the UW to study the problem and build a predictive forecast model, aptly named LiveOcean.
Just like a numerical weather forecast model, LiveOcean will soon provide forecasts that predict the acidity of water in a specific bay, part of Puget Sound, or other coastal regions, days in advance.
Parker MacCready, a professor of physical oceanography at UW, is the scientist leading the LiveOcean team and used Microsoft Azure to create the cloud-based storage system. The system holds enormous amounts of data from his regional ocean model, the Regional Ocean Modeling System (ROMS), which helps feed the LiveOcean models. The Azure component uses Python and the Django web framework to provide these forecasts in an easy-to-consume format. To produce the forecasts, the LiveOcean system relies on other sources: US Geological Survey data (for river flow), atmospheric forecasts, and another ocean model, the HYbrid Coordinate Ocean Model (HYCOM).
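The article notes that the Azure component uses Python and the Django web framework to serve forecasts in an easy-to-consume format. As a rough illustration only, a minimal Django view serving a per-bay forecast as JSON could look like the sketch below; the URL pattern, field names, and values are assumptions, not LiveOcean's real schema.

```python
# Minimal sketch of a Django view serving a forecast as JSON. The URL pattern,
# field names, and values are illustrative assumptions, not LiveOcean's real schema.
from django.http import JsonResponse
from django.urls import path

def bay_forecast(request, bay_name):
    # A real deployment would read model output (e.g., ROMS-derived fields
    # stored in Azure) instead of returning a hard-coded record.
    forecast = {
        "bay": bay_name,
        "valid_time": "2016-06-01T12:00:00Z",   # hypothetical forecast time
        "surface_ph": 7.9,                      # hypothetical value
        "aragonite_saturation": 1.4,            # hypothetical value
    }
    return JsonResponse(forecast)

# urls.py
urlpatterns = [
    path("forecast/<str:bay_name>/", bay_forecast),
]
```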

Dewey needs information on acidity levels because a baby oyster must build a shell immediately to survive, and it needs carbonate ions in the water to make that first tiny shell. If the water is too acidic, the baby oyster expends too much energy and dies in its attempt to make that first shell. Taylor Shellfish has hatcheries for the baby oysters and “planting” beds where young oysters are carried to grow to full size. Forecasts of water acidity in both places would help the company know when it is safe to hatch the babies, and where (and when) it is safe to plant them.
Ocean acidification is an emerging global problem, according to the National Oceanic and Atmospheric Administration (NOAA). Scientists are just starting to monitor ocean acidification worldwide, so it is impossible to predict exactly in what ways it will affect the marine environment. In a report, NOAA wrote, “There is an urgent need to strengthen the science as a basis for sound decision making and action.”
Azure tools make the system open to anybody. MacCready is eager to see how others develop sites pulling data on water currents for kayakers, for example, or information for salmon fishers. He is particularly excited about “particle tracking,” which helps him see where individual particles in the ocean move. That tracking could predict where an oil spill might move, for example. Using the cloud is “the way of the future” from his scientific perspective. “It gives the ability to create and use different resources without having to go out and buy hardware yourself.”
Fine-tuning and testing are essential to the reliability of the predictions. In recent years, MacCready and others have been validating the forecasts that LiveOcean makes by pairing real observations from physical instruments with predictions. Within months, he hopes to refine forecasts down to the level of individual bays, so that he can tell Dewey whether Samish Bay or Willapa Bay, for example, is “safe” for the new oysters.
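One common way to carry out this kind of validation is to align each instrument observation with the nearest forecast time and compute simple error statistics. The sketch below is illustrative only; the times, pH values, and matching tolerance are assumptions rather than the LiveOcean team's actual workflow.

```python
# Minimal sketch of pairing observed and predicted values and scoring them.
# The times, pH values, and 30-minute matching tolerance are invented for illustration.
import numpy as np
import pandas as pd

obs = pd.DataFrame({
    "time": pd.to_datetime(["2016-06-01 00:05", "2016-06-01 01:02", "2016-06-01 02:10"]),
    "ph":   [7.92, 7.88, 7.85],
})
pred = pd.DataFrame({
    "time": pd.to_datetime(["2016-06-01 00:00", "2016-06-01 01:00", "2016-06-01 02:00"]),
    "ph":   [7.95, 7.90, 7.83],
})

# Align each observation with the nearest earlier forecast time (within 30 minutes).
paired = pd.merge_asof(obs.sort_values("time"), pred.sort_values("time"),
                       on="time", suffixes=("_obs", "_pred"),
                       tolerance=pd.Timedelta("30min")).dropna()

error = paired["ph_pred"] - paired["ph_obs"]
print("bias:", round(error.mean(), 3))
print("RMSE:", round(float(np.sqrt((error ** 2).mean())), 3))
```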
LiveOcean has impacts far beyond the shellfish industry. Jan Newton, principal oceanographer at the Applied Physics Laboratory and co-director of the Washington Ocean Acidification Project (WOAP), believes it may change how the public sees climate change and ocean chemistry.
“Data portals and models like LiveOcean can really make a bridge [of understanding] because even if people don’t understand the chemistry, they’ll look at the color-coding and see how this changes with location and season,” she said. Dewey believes that these tools for the Pacific Ocean chemistry will be adopted by others for oceans worldwide.

Centralizing national flood data in the cloud

Researchers from the University of Texas collaborated with other researchers, federal agencies, commercial partners, and first responders to create the National Flood Interoperability Experiment (NFIE). They used Microsoft Azure to help build a prototype for a national flood data-modeling and mapping system with the potential to provide life- and cost-saving information to the public. The goals of the NFIE include standardizing data, demonstrating a scalable solution, and helping to close the gap between national flood forecasting and local emergency response.
Sharing flood information for better prediction and response
In October 2013, the Onion Creek area near Austin, Texas, faced a particularly destructive flood. While onsite studying the flood, David Maidment, professor of civil engineering at the University of Texas, spoke with Harry Evans, chief of staff for the Austin Fire Department. They realized that they had similar goals for improving flood prediction and response, and could collaborate well with their different areas of expertise.
Maidment, who specializes in hydrology and flooding at the Center for Research in Water Resources, brought together participants from academia, federal agencies, commercial partners, and first responders to create the National Flood Interoperability Experiment (NFIE). He wanted a technology infrastructure that would allow flood information to flow in from various agencies and academia, and then flow out so that citizens and first responders could better understand what was happening.
“What we're trying to do in the National Flood Interoperability Experiment is to prototype a set of infrastructure and services that can communicate with one another and with the public in a uniform and open way,” says Maidment.

Microsoft Azure for data analysis, storage, and sharing in the cloud
Microsoft Research helped the NFIE find the computational power it needed in Microsoft Azure. The NFIE uses Azure to perform statistical analysis of present and past flood data to help design a prototype for a national flood data-modeling and mapping system with the potential to provide life- and cost-saving information to the public.
By using Azure, the NFIE can standardize and store data in the cloud. Maidment and colleagues at the University of Texas developed a new language that provides both a common way to store time-value pairs, such as river flow over time, and a standard way of communicating that information over the Internet. The US Geological Survey adopted this language to publish its time-series data on water observations, and the National Weather Service will also use it to publish forecasts as part of the NFIE. When this common language is implemented operationally, those organizations will be able to communicate and collaborate more efficiently with one another.
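To make the idea concrete, the sketch below shows what exchanging time-value pairs in a shared structure can look like. It is not the actual language the NFIE adopted, and the gauge identifier and field names are invented, but it conveys the kind of interoperability a common format makes possible.

```python
# Illustrative sketch only: publishing and consuming a time series of
# time-value pairs as JSON. This is NOT the actual standard the NFIE adopted;
# it just shows the shape of the problem a common language solves.
import json

series = {
    "site": "USGS-example-gauge",     # hypothetical gauge identifier
    "variable": "streamflow",
    "unit": "m^3/s",
    "values": [
        {"time": "2013-10-31T06:00:00Z", "value": 212.4},
        {"time": "2013-10-31T06:15:00Z", "value": 389.1},
    ],
}

payload = json.dumps(series)

# Any agency publishing in the same structure can be read the same way:
decoded = json.loads(payload)
for point in decoded["values"]:
    print(decoded["site"], point["time"], point["value"], decoded["unit"])
```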
More flood information provides potential for improved public safety
NFIE uses Azure to deliver more forecasts than any one agency could. Currently, the National Weather Service makes forecasts at about 3,600 locations on rivers in the country. The NFIE expects to demonstrate delivery of specific and actionable data for 2.67 million locations nationally, including smaller streams. It also expects to increase the spatial density of flood forecast locations by a factor of more than 700, compared to the current National Weather Service system.
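A quick back-of-the-envelope check shows where the factor of more than 700 comes from:

```python
# Ratio of NFIE forecast locations to current National Weather Service locations.
current_locations = 3_600
nfie_locations = 2_670_000
print(nfie_locations / current_locations)  # ≈ 742, i.e. a factor of more than 700
```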
Ultimately, the NFIE has demonstrated that information with a greater level of detail has the potential to increase real-time responsiveness that can improve public safety and save lives. Working closely with the Austin Fire Department, the NFIE shows how data can be used to improve decision-making. Evans notes that this work will help the NFIE develop a template that agencies can use nationwide, along with their threat and risk analyses, to help communities better protect themselves from the risks of flooding.

Untangling airports using open source tools on Microsoft Azure

Scientists from the Universities of Stirling and Nottingham in the United Kingdom tackled the knotty problem of delays on airport taxiways, where planes are entering or leaving runways. Sandy Brownlee, PhD, and Jason Atkin, PhD, collaborated with Manchester Airport to use cloud computing to model the complex data from many airports worldwide. The team created open-source tools using Linux on Microsoft Azure to expand these insights and create new algorithms, sharing them on GitHub. The team is helping Manchester Airport reduce delays, save money, and lessen its environmental impact.
Tim Walmsley helps Manchester Airport, the third-largest airport in the United Kingdom, manage an estimated 23 million passengers per year. To successfully plan airport operations and growth, he asked university researchers for data science help; they specifically sought insights by modeling movements on taxiways, to and from the runways. “Aviation is an industry that’s growing. So there are lots of ways that the industry is trying to tackle the impacts that growth could bring. The Airport Optimization Project feeds into that,” Walmsley, Environment Manager for Manchester, explained.
Sandy Brownlee, a senior research assistant at the University of Stirling, began helping Manchester Airport by searching for specific data on what is sometimes called “ground movement,” or taxiing, to populate a model. At first, he was frustrated because individual airports did not want to share everything with him. What he discovered, however, is that he could access public data from Flight Radar 24 and Open Street Map for dozens of airports worldwide. Jason Atkin, PhD, of the University of Nottingham, partnered with Brownlee to model how taxiways can be used to make airports more efficient.

Taxiways connect everything
The time aircraft spend getting to and from runways is one of the understudied choke points at airports. “Taxiing is a really critical problem because it connects everything else,” Brownlee explained. Many people are familiar with strategies for aligning takeoffs or landings to improve safety or efficiency, but that slow crawl toward the gate (called a stand in the UK) can be a crucial link in the chain of events.
“The computing power we’ve got now allows us to understand and analyze data in different ways and pull out different information so we can better understand the true uncertainty in taxiing. We can understand which aircraft take a long time to get there, which aircraft get there quickly, and under what circumstances this is happening,” Atkin said.
Public data sources
Using Microsoft Azure, Brownlee could run Linux virtual machines and develop his methods using OpenJDK. By leveraging these open-source tools on Azure, he completed his work in about one-tenth the time it would have taken on his desktop computer alone. “So rather than spending several months waiting for my data to be ready so that I could get on and do things, I had it within a couple of weeks,” he said.
The team created three main tools to share on GitHub. TaxiGen reads taxiway and runway information from Open Street Map and automatically writes it out in a usable format. SnapTracks reads raw GPS coordinates with timings and combines them with the taxiway layouts produced by TaxiGen. GM2KML generates helpful visualizations from the output of the other two tools.
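The GitHub repositories are the authoritative reference for these tools. As a rough illustration of the core snapping idea behind a tool like SnapTracks, the sketch below assigns each GPS fix to its nearest taxiway segment; the segment IDs and planar coordinates are made up, and the real tools work from richer Open Street Map data.

```python
# Illustrative sketch of the "snapping" idea: assign each timed GPS fix to its
# nearest taxiway segment. Coordinates are treated as planar for brevity.
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def snap(fix, segments):
    """Return the id of the taxiway segment closest to a GPS fix."""
    return min(segments, key=lambda s: point_segment_distance(fix, s[1], s[2]))[0]

# Hypothetical taxiway segments: (id, start, end)
segments = [("A1", (0, 0), (100, 0)), ("B2", (100, 0), (100, 80))]
print(snap((40, 3), segments))   # -> "A1"
print(snap((98, 50), segments))  # -> "B2"
```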
“Researchers rely on open tools and platforms to be able to develop and share their work. The ability to use the cloud for access to computing power not available on the desktop can act like a time machine, shrinking the time to results from months to weeks. This is a transformational way of thinking about research computing,” explained Kenji Takeda from Microsoft Research, who was supporting the project. Brownlee’s work on analysis of ground movement was funded by the Sandpit for Integrating and Automating Airport Operations and DAASE grants from the Engineering and Physical Sciences Research Council (EPSRC).
“By getting better predictions, you can start improving the rest of the airport system,” Atkin said. One pilot can take longer than another to cover the same ground, traffic congestion can be heavy at busy times, and mechanical delays of any sort can throw off predictions. Taxiing delays ripple through the entire system. Modeling and predicting that taxi time helps airports change when and where they direct planes and can yield big savings. Brownlee estimates modeling could help cut bottlenecks at Manchester in half.
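As an illustration of what modeling and predicting taxi time can mean in practice, the sketch below fits a simple regression to a handful of made-up historical movements. The features, numbers, and choice of a linear model are assumptions for demonstration; the team's actual models are richer.

```python
# Minimal sketch of learning a taxi-time predictor from historical ground movements.
# The features and values are invented; a linear model stands in for richer methods.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: taxi distance (m), number of turns, aircraft queued ahead
X = np.array([
    [1200, 3, 0],
    [2500, 5, 2],
    [1800, 4, 1],
    [3000, 6, 4],
    [900,  2, 0],
])
y = np.array([240, 540, 380, 720, 180])  # observed taxi times in seconds

model = LinearRegression().fit(X, y)
print("predicted taxi time (s):", model.predict([[2000, 4, 2]])[0])
```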
Open source benefits
Because the tools created by the team are available to anyone, both Brownlee and Atkin foresee that other airports around the world will use them. “The work that Sandy’s doing is going to provide a lot of public domain data and the ability to analyze this for a lot of different airports. And we should be able to see these multi-million-pound savings at airports worldwide,” Atkin said.
Brownlee also hopes models will help guide decisions in weather emergencies or when a runway must be closed. Airports worldwide can use the modeling to understand what to do about a sudden change. “By getting more researchers worldwide involved … we could get a lot more benefit from different areas of knowledge all coming from the same problem,” he said.
No matter what the world does with the open-source tools, for Walmsley the great impact is at Manchester, where he expects “a much better experience for the customer and for the airlines using the airport.”

How real businesses are using machine learning

By Lukas Biewald as written on techcrunch.com
There is no question that machine learning is at the top of the hype curve. And, of course, the backlash is already in full force: I’ve heard that old joke “Machine learning is like teenage sex; everyone is talking about it, no one is actually doing it” about 20 times in the past week alone.
But from where I sit, running a company that enables a huge number of real-world machine-learning projects, it’s clear that machine learning is already forcing massive changes in the way companies operate.
It’s not just futuristic-looking products like Siri and Amazon Echo. And it’s not just being done by companies that we normally think of as having huge R&D budgets like Google and Microsoft. In reality, I would bet that nearly every Fortune 500 company is already running more efficiently — and making more money — because of machine learning.
So where is it happening? Here are a few behind-the-scenes applications that make life better every day.

Making user-generated content valuable

The average piece of user-generated content (UGC) is awful. It’s actually way worse than you think. It can be rife with misspellings, vulgarity or flat-out wrong information. But by identifying the best and worst UGC, machine-learning models can filter out the bad and bubble up the good without needing a real person to tag each piece of content.
A similar thing happened a while back with spam email. Remember how bad spam used to be? Machine learning helped identify spam and, basically, eradicate it. These days, it’s far less common to see spam in your inbox each morning. Expect the same to happen with UGC in the near future.
Pinterest uses machine learning to show you more interesting content. Yelp uses machine learning to sort through user-uploaded photos. NextDoor uses machine learning to sort through content on their message boards. Disqus uses machine learning to weed out spammy comments.
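As a simplified picture of what such a filter involves, the sketch below trains a tiny bag-of-words classifier to separate useful comments from junk. The examples and labels are invented, and production systems train on vastly more data, but the pattern of vectorizing text, fitting a classifier, and scoring new content is the same.

```python
# Minimal sketch of a UGC quality filter: a bag-of-words classifier trained on
# a handful of hand-labeled examples. Real systems use far more data and features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = [
    "Great detailed review, the sizing chart was spot on",
    "Helpful photos and an honest description of the battery life",
    "BUY CHEAP PILLS NOW click here!!!",
    "worst ever!!!! u all suck",
]
labels = ["good", "good", "bad", "bad"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(comments, labels)
print(clf.predict(["Click here for free followers!!!"]))            # classifier's guess
print(clf.predict(["Clear instructions and accurate measurements"]))  # classifier's guess
```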

Finding products faster

It’s no surprise that as a search company, Google was always at the forefront of hiring machine-learning researchers. In fact, Google recently put an artificial intelligence expert in charge of search. But the ability to index a huge database and pull up results that match a keyword has existed since the 1970s. What makes Google special is that it knows which matching result is the most relevant; the way that it knows is through machine learning.
But it’s not just Google that needs smart search results. Home Depot needs to show which bathtubs in its huge inventory will fit in someone’s weird-shaped bathroom. Apple needs to show relevant apps in its app store. Intuit needs to surface a good help page when a user types in a certain tax form.
Successful e-commerce startups from Lyst to Trunk Archive employ machine learning to show high-quality content to their users. Other startups, like Rich Relevance and Edgecase, employ machine-learning strategies to give their commerce customers the benefits of machine learning when their users are browsing for products.
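Here is a toy illustration of relevance ranking: score catalog items against a query and sort by the score. Real systems learn that score from user behavior; plain TF-IDF cosine similarity stands in for it here, and the catalog entries are invented.

```python
# Illustrative sketch of relevance ranking: score catalog items against a query
# and sort by similarity. TF-IDF cosine similarity stands in for a learned score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = [
    "compact corner bathtub 48 inch acrylic",
    "freestanding soaking tub 67 inch",
    "walk-in shower kit with glass door",
]
query = "small corner tub for tight bathroom"

vec = TfidfVectorizer().fit(catalog + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(catalog))[0]
for score, item in sorted(zip(scores, catalog), reverse=True):
    print(f"{score:.2f}  {item}")
```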

Engaging with customers

You may have noticed “contact us” forms getting leaner in recent years. That’s another place where machine learning has helped streamline business processes. Instead of having users self-select an issue and fill out endless form fields, machine learning can look at the substance of a request and route it to the right place.
That seems like a small thing, but ticket tagging and routing can be a massive expense for big businesses. Having a sales inquiry end up with the sales team or a complaint end up instantly in the customer service department’s queue saves companies significant time and money, all while making sure issues get prioritized and solved as fast as possible.
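The routing itself can reuse the text-classification pattern sketched earlier, with predicted labels mapped to department queues. The requests, labels, and queue names below are invented for illustration.

```python
# Sketch of routing free-text requests to the right queue: the same
# text-classification pattern as above, with labels mapped to departments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests_text = [
    "I'd like a quote for 50 licenses",
    "Can someone walk me through pricing tiers?",
    "My invoice was charged twice, please fix it",
    "The app crashes every time I open settings",
]
queues = ["sales", "sales", "billing", "support"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(requests_text, queues)
print(router.predict(["How much would an enterprise plan cost?"]))  # classifier's guess
```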

Understanding customer behavior

Machine learning also excels at sentiment analysis. And while public opinion can sometimes seem squishy to non-marketing folks, it actually drives a lot of big decisions.
For example, say a movie studio puts out a trailer for a summer blockbuster. They can monitor social chatter to see what’s resonating with their target audience, then tweak their ads immediately to surface what people are actually responding to. That puts people in theaters.
Another example: A game studio recently put out a new title in a popular video game line without a game mode that fans were expecting. When gamers took to social media to complain, the studio was able to monitor and understand the conversation. The company ended up changing their release schedule in order to add the feature, turning detractors into promoters.
How did they pull faint signals out of millions of tweets? They used machine learning. And in the past few years, this kind of social media listening through machine learning has become standard operating procedure.
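At its simplest, that kind of social listening can be sketched as scoring posts against a sentiment lexicon and tracking the balance. The toy example below uses a hand-made word list and three invented posts; real pipelines apply trained models to millions of messages.

```python
# Toy sketch of social listening: tag posts mentioning a feature as positive or
# negative with a tiny hand-made lexicon, then track the overall balance.
POSITIVE = {"love", "great", "awesome", "glad"}
NEGATIVE = {"disappointed", "missing", "refund", "angry"}

posts = [
    "really disappointed the split-screen mode is missing",
    "love the new maps, awesome update",
    "missing feature = no purchase, want a refund",
]

def score(post):
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

balance = sum(score(p) for p in posts)
print("net sentiment:", balance)  # negative here, flagging a problem to investigate
```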

What’s next?

Dealing with machine-learning algorithms is tricky. Normal algorithms are predictable, and we can look under the hood and see how they work. In some ways, machine-learning algorithms are more like people. As users, we want answers to questions like “why did The New York Times show me that weird ad” or “why did Amazon recommend that funny book?”
In fact, The New York Times and Amazon don’t really understand the specific results themselves any more than our brains know why we chose Thai food for dinner or got lost down a particular Wikipedia rabbit hole.
If you were getting into the machine-learning field a decade ago, it was hard to find work outside of places like Google and Yahoo. Now, machine learning is everywhere. Data is more prevalent than ever, and it’s easier to access. New products like Microsoft Azure ML and IBM Watson drive down both the setup cost and ongoing cost of state-of-the-art machine-learning algorithms.
At the same time, VCs have started funds — from WorkDay’s Machine Learning fund to Bloomberg Beta to the Data Collective — that are completely focused on funding companies across nearly every industry that use machine learning to build a sizeable advantage.
Most of the conversation about machine learning in popular culture revolves around AI personal assistants and self-driving cars (both applications are very cool!), but nearly every website you interact with is using machine learning behind the scenes. Big companies are investing in machine learning not because it’s a fad or because it makes them seem cutting edge. They invest because they’ve seen positive ROI. And that’s why innovation will continue.

Buckled up and ready to go? Untangling airports using open source tools on Microsoft Azure

By Kenji Takeda, Solution Architect and Technical Manager, Microsoft Research as written on blogs.msdn.microsoft.com
Nobody likes a delay at the airport. Many of us have spent time buckled up, ready for takeoff, wondering why our plane is stuck on its way between the gate and the runway. Scientists in the United Kingdom are working hard to help untangle these airport operations, to help save fuel, money, and impact on the environment. Cloud computing is empowering the research team to parse the anatomy of these snarls and create a model that will one day recommend better paths for every plane.
Tim Walmsley, Environment Manager for Manchester Airport, which is the third-largest airport in the United Kingdom and handles over 23 million passengers per year, explains:
“Aviation is an industry that’s growing. So there are lots of ways that the industry is trying to tackle the impacts that growth could bring to the climate. The Airport Optimization Project feeds into that because we want to reduce the amount of CO2 that’s generated when an aircraft lands, taxis, and ultimately departs.”
One of the great understudied choke points at airports is the time aircraft spend taxiing to and from the runway. Sandy Brownlee, PhD, is a senior research assistant at the University of Stirling in Scotland who turned his computer science expertise toward this problem. He used Microsoft Azure to store data on thousands of taxiways at different airports and create open tools, now available to anyone on GitHub, to model and improve aircraft taxiing to reduce pollution and improve efficiency. Jason Atkin, assistant professor at the University of Nottingham, is Brownlee’s partner in their project with Manchester Airport, and he also developed systems that are now streamlining operations at London Heathrow Airport. “One of the things cloud computing does is bring the power and data processing ability of huge machines to any researcher’s desk,” Atkin says.
Modeling required Brownlee to bring together data for dozens of airports from publicly available sources, including Flight Radar 24 and Open Street Map. He was pleasantly surprised to find how easy it was to use open-source tools on Microsoft Azure, such as Linux virtual machines, and to develop his methods using OpenJDK. Processing speeds on Azure enabled him to complete his work in one-tenth the time it would have taken using just his desktop computer. “So rather than spending several months waiting for my data to be ready so that I could get on and do things, I had it within a couple of weeks,” he says.
“Taxiing is a really critical problem because it connects everything else,” Brownlee explains. Many are familiar with strategies for aligning takeoffs and landings to improve safety or efficiency, but just imagine how the time spent slowly taxiing to or from the gate adds quite another layer of complexity to the puzzle.
One pilot can take longer than another to cover the same ground, traffic congestion can be heavy at busy times, and mechanical delays of any sort can throw off predictions. Taxiing delays ripple through the entire system, throwing off other timing. Modeling and predicting that taxi time can help airports change when and where they direct planes and yield big savings. Brownlee estimates modeling could help cut bottlenecks at Manchester in half.
But beyond bottlenecks, he also hopes someday the model will help guide decisions in weather emergencies or when a runway must be closed. Airports worldwide can use the modeling to understand what to do about a sudden change. “By getting more researchers worldwide involved … we could get a lot more benefit from different areas of knowledge all coming from the same problem,” he says.
The team applied for an Azure for Research award that let them explore how cloud computing could speed up their work, and it paid off by accelerating their analysis by several months. This enabled them to develop their open-source tools on Microsoft Azure, apply their data science expertise to bigger datasets, investigate many more airports, and help set up models to make air travel more efficient and environmentally friendly for us all. As a result, Tim Walmsley is confident that the future is bright for Manchester Airport: “As we embark on the 1-billion-pound project to transform our airport, the Airport Optimization Project … will make sure we maintain efficiency, safety and the passenger experience.”

The Mythical, Magical Cloud
By Kate Frasure, Customer Development Manager

I was standing in a Verizon Wireless store the other day to upgrade my phone. The salesman was describing the process of transferring the data from my current phone to the new one via the cloud backup.
When I started to speak, I noticed my hand making a gesture as if I were talking about a physical ‘cloud’ in the sky. It’s amazing how data that simply sits in a big, offsite data center has been branded to seem like a mythical, magical world where our data lives.
As one of our engineers pointed out to me, ‘the cloud’ has been around longer than you may think. If you ever set up a Hotmail or Yahoo email account, or if you have a Gmail account today, you are using the cloud for your mail because the email data is housed on a server in a data center somewhere in the United States.
Of course, today it is not just email anymore. You can now set up your entire business network infrastructure in the cloud, and there are various services to choose from. So which provider should you choose?
Unfortunately, there is not a one-size-fits-all solution. Each service offers you a variety of options and it is up to you to determine which mix of services best fits your business needs.
Lucky for you, we have put together a quick side-by-side comparison to help you get started. While there may be a variety of options out there, we decided to look at three of the best-known: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
Amazon Web Services (AWS) vs. Microsoft Azure vs. Google Cloud Platform (GCP)

Year entered market
  • AWS: 2006
  • Azure: 2010
  • GCP: 2014

Hours of downtime in 2014
  • AWS: 2.69 hours
  • Azure: 50.74 hours
  • GCP: under 5 hours

Linux OS support
  • AWS: Red Hat Enterprise Linux (RHEL) and Fedora; Ubuntu; CentOS Linux; SUSE Linux Enterprise (SLES) and openSUSE
  • Azure: Canonical Ubuntu; CentOS by OpenLogic; CoreOS; Oracle Linux; SUSE Linux Enterprise; openSUSE
  • GCP: not currently available

Pricing & Models*
  • AWS: per hour – rounded up; on demand, reserved, spot
  • Azure: per minute – rounded up; on demand, with short-term commitments (pre-paid or monthly)
  • GCP: per minute – rounded up (10-minute minimum); on demand, sustained use

Compliance
  • AWS: GovCloud – meets ITAR (International Traffic in Arms Regulations), HIPAA, SOC 1-3, ISO 27001, FIPS 140-2 compliant endpoints, etc.
  • Azure: Azure Government – still very new

Pros
  • AWS: scalability; auto-scaling offered at additional cost through CloudWatch; large partner ecosystem, having been in the market the longest; larger offering of third-party applications; archiving capability through Glacier
  • Azure: seamless integration for heavily invested Microsoft users; a more modern, familiar, and easy-to-use interface for those used to Windows; vast hybrid capabilities; primarily targets PaaS**; single sign-on (SSO) option for many applications
  • GCP: better networking, with each instance living on its own network; instant auto-scaling at no additional cost; data storage and analytic tool capabilities

Cons
  • AWS: requires cloud architecture knowledge
  • Azure: has experienced significantly more downtime than AWS and GCP in the last year
  • GCP: not as much support for Linux, especially Red Hat; not as geographically widespread; not as many offerings
*Pricing models: on demand – customers pay for what they use without any upfront cost; reserved – customers reserve instances for 1 or 3 years with an upfront cost that is based on the utilization; spot – customers bid for the extra capacity available. The short sketch after these footnotes illustrates how the rounding rules differ.
**PaaS (Platform as a service): The vendor provides the infrastructure and an application development platform that generally includes the operating system, database, and web server. Customers manage only their applications.
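To make the pricing rows above concrete, the short sketch below applies the different rounding rules to the same workloads using a made-up hourly rate. Only the rounding and minimums differ; the rate itself is hypothetical.

```python
# Toy comparison of the billing-granularity differences in the table above.
# The hourly rate is a made-up number; only the rounding rules differ.
import math

RATE_PER_HOUR = 0.10  # hypothetical price for a comparable VM size

def per_hour_rounded_up(minutes):               # "per hour – rounded up"
    return math.ceil(minutes / 60) * RATE_PER_HOUR

def per_minute_rounded_up(minutes, minimum=0):  # "per minute – rounded up"
    return max(math.ceil(minutes), minimum) / 60 * RATE_PER_HOUR

for minutes in (4, 61):
    print(f"{minutes:>3} min  hourly: ${per_hour_rounded_up(minutes):.4f}  "
          f"per-minute: ${per_minute_rounded_up(minutes):.4f}  "
          f"10-min minimum: ${per_minute_rounded_up(minutes, minimum=10):.4f}")
```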
About the author:
Kate Frasure is a Texas-born, Colorado-raised project manager. In her role as Customer Development Manager at Managed Solution, she oversees the process of bringing new clients on board, along with various other IT projects. Her diverse communications background and attention to detail fuel her passion for improving processes that help businesses succeed. She is continuously looking for the organization and flow that accompany a streamlined business.

Contact us today!

Chat with an expert about your business’s technology needs.