CloudWatch Metrics for Amazon WorkSpaces via @jeffbarr

By Jeff Barr, Amazon WorkSpaces
AWS customers are deploying Amazon WorkSpaces at scale in medium and large organizations. For example, health care company Johnson & Johnson is using WorkSpaces to realize the long-promised security and efficiency benefits of virtual desktops, in a world populated by a diverse workforce that would like to use their own computing devices if possible (also known as BYOD – Bring Your Own Device). You can view their recent presentation, Deploying Amazon WorkSpaces at Scale, to learn more about what they did and how they now support BYOD for 16,000 contractors and employees, along with zero clients for another 8,000 users.

New Metrics

To help our customers monitor their WorkSpaces deployments, we recently added additional Amazon CloudWatch metrics for WorkSpaces. These metrics are designed to provide administrators with additional insight into the overall health and connection status of individual WorkSpaces and of all of the WorkSpaces that belong to a particular directory.
Like all CloudWatch metrics, these metrics can be viewed in the AWS Management Console, accessed via the CloudWatch APIs, and monitored by CloudWatch Alarms and third-party tools.


The new metrics are enabled by default and are available to you at no extra charge.

Here’s what you get: metrics covering the health and connection status of each WorkSpace, plus aggregate metrics for all of the WorkSpaces in a directory.
Here’s how you create an alarm that will fire if a user cannot connect to their WorkSpace:
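You can do this in the CloudWatch console or through the CloudWatch API. Here is a minimal programmatic sketch using boto3; it assumes the AWS/WorkSpaces namespace and its ConnectionFailure metric, and the WorkSpace ID and SNS topic ARN shown are placeholders:

    # Minimal sketch: alarm on connection failures for a single WorkSpace.
    # The WorkSpace ID and SNS topic ARN are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="workspace-connection-failure",
        Namespace="AWS/WorkSpaces",            # WorkSpaces metrics namespace
        MetricName="ConnectionFailure",        # count of failed connection attempts
        Dimensions=[{"Name": "WorkspaceId", "Value": "ws-xxxxxxxxx"}],
        Statistic="Sum",
        Period=300,                            # evaluate in 5-minute windows
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",   # fire on any failure in the window
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:workspaces-alerts"],
    )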

Available Now
The new metrics are available now and you can start monitoring them today!

Source: https://aws.amazon.com/blogs/aws/new-cloudwatch-metrics-for-amazon-workspaces/

Case Study: The State of Arizona Runs its DNS Solution on AWS and Saves 75% Annually

About The State of Arizona

The State of Arizona consists of more than 130 federated government agencies and 32,000 employees serving more than 6 million residents. The organization decided to begin migrating its IT infrastructure to AWS after recognizing that more than half of its 2,600 servers were aging and needed to be replaced. During its first phase, the State of Arizona migrated its DNS solution to the AWS Cloud. By using AWS, the State now saves 75% in annual operating costs on its DNS solution when compared to its previous on-premises IT infrastructure.
The State of Arizona Runs its DNS Solution on AWS and Saves 75% Annually (3:31)
To learn more, visit https://managedsolut.wpengine.com/amazon-web-services/.
Read more customer success stories or search by industry to learn how Managed Solution helps businesses implement technology productivity solutions.
Source: http://aws.amazon.com/solutions/case-studies/state-of-arizona/

Introducing AWS Device Farm: Test your app on real phones & tablets in the #AWS cloud!


AWS Device Farm

Available on July 13, 2015!

How it works
    1. Upload your Android or Fire OS app to AWS Device Farm
    2. AWS Device Farm tests your app against your choice of real devices
    3. Get results in minutes that pinpoint bugs and performance problems
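
As a rough illustration of that three-step flow, here is a minimal boto3 sketch; the project name, app file name, and device pool ARN are placeholders, and the built-in fuzz test is just one example of a test type:

    import boto3
    import requests  # used to PUT the app binary to the pre-signed upload URL

    devicefarm = boto3.client("devicefarm", region_name="us-west-2")

    # Step 1: create a project and upload the Android app package (placeholder names)
    project_arn = devicefarm.create_project(name="example-app-tests")["project"]["arn"]
    upload = devicefarm.create_upload(projectArn=project_arn,
                                      name="example-app.apk",
                                      type="ANDROID_APP")["upload"]
    with open("example-app.apk", "rb") as f:
        requests.put(upload["url"], data=f)   # Device Farm returns a pre-signed upload URL
    # (in practice, poll get_upload until the upload status is SUCCEEDED)

    # Step 2: schedule a run against a device pool (placeholder ARN)
    run = devicefarm.schedule_run(projectArn=project_arn,
                                  appArn=upload["arn"],
                                  devicePoolArn="arn:aws:devicefarm:us-west-2::devicepool/example",
                                  name="smoke-test",
                                  test={"type": "BUILTIN_FUZZ"})["run"]

    # Step 3: check results once the run completes
    print(devicefarm.get_run(arn=run["arn"])["run"]["result"])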

Learn more

Case Study Video: US FDA Brings Scale, and Cost Effective Innovation to New Programs Utilizing AWS Cloud

About the US Food and Drug Administration (FDA)
The US Food and Drug Administration (FDA) protects and promotes public health. The agency, which receives 100,000 handwritten reports of adverse drug effects each year, needed a way to make the data entry process more efficient and reduce costs. By using AWS, the FDA and AWS Partner Captricity quickly turn manual reports into machine-readable information with 99.7% accuracy, reducing costs from $29 per page to $0.25 per page.
US FDA Brings Scale, and Cost Effective Innovation to New Programs Utilizing AWS Cloud
Managed Solution partners with IT staff to offer consulting and government solutions to improve IT infrastructure to reflect the latest technology. Read more customer success stories or search by industry to learn how Managed Solution helps businesses implement technology productivity solutions.

Case Study: Airbnb Scales Infrastructure Automatically Using AWS


About Airbnb

Airbnb is a community marketplace that allows property owners and travelers to connect with each other for the purpose of renting unique vacation spaces around the world. The Airbnb community users’ activities are conducted on the company’s Website and through its iPhone and Android applications. The San Francisco-based Airbnb began operation in 2008 and currently has hundreds of employees across the globe supporting property rentals in nearly 25,000 cities in 192 countries.

The Challenge

A year after Airbnb launched, the company decided to migrate nearly all of its cloud computing functions to Amazon Web Services (AWS) because of service administration challenges experienced with its original provider. Nathan Blecharczyk, Co-founder & CTO of Airbnb says, “Initially, the appeal of AWS was the ease of managing and customizing the stack. It was great to be able to ramp up more servers without having to contact anyone and without having minimum usage commitments. As our company continued to grow, so did our reliance on the AWS cloud and now, we’ve adopted almost all of the features AWS provides. AWS is the easy answer for any Internet business that wants to scale to the next level.”

Why Amazon Web Services

Airbnb has grown significantly over the last 3 years. To support demand, the company uses 200 Amazon Elastic Compute Cloud (Amazon EC2) instances for its application, memcache, and search servers. Within Amazon EC2, Airbnb is using Elastic Load Balancing, which automatically distributes incoming traffic between multiple Amazon EC2 instances. To easily process and analyze 50 Gigabytes of data daily, Airbnb uses Amazon Elastic MapReduce (Amazon EMR). Airbnb is also using Amazon Simple Storage Service (Amazon S3) to house backups and static files, including 10 terabytes of user pictures. To monitor all of its server resources, Airbnb uses Amazon CloudWatch, which allows the company to easily supervise all of its Amazon EC2 assets through the AWS Management Console, Command Line Tools, or a Web services API.
In addition, Airbnb moved its main MySQL database to Amazon Relational Database Service (Amazon RDS). Airbnb chose Amazon RDS because it simplifies much of the time-consuming administrative tasks typically associated with databases. Amazon RDS allows difficult procedures, such as replication and scaling, to be completed with a basic API call or through the AWS Management Console. Airbnb currently uses Multi-Availability Zone (Multi-AZ) deployment to further automate its database replication and augment data durability.
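As a sketch of what such an API call looks like (using boto3; the instance identifier is a placeholder), converting an existing database instance to a Multi-AZ deployment is a single request:

    import boto3

    rds = boto3.client("rds")

    # Enable Multi-AZ on an existing instance (placeholder identifier); RDS then
    # provisions and maintains a synchronous standby replica in another Availability Zone.
    rds.modify_db_instance(
        DBInstanceIdentifier="example-mysql-db",
        MultiAZ=True,
        ApplyImmediately=True,   # apply now rather than waiting for the maintenance window
    )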
Airbnb was able to complete its entire database migration to Amazon RDS with only 15 minutes of downtime. This quick transition was very important to the fast-growing Airbnb because it did not want its community of users to be shut out of its marketplace for an extended period of time. Tobi Knaup, an engineer at Airbnb says, “Because of AWS, there has always been an easy answer (in terms of time required and cost) to scale our site.”

The Benefits

Airbnb believes that AWS saved it the expense of at least one operations position. Additionally, the company states that the flexibility and responsiveness of AWS is helping it to prepare for more growth. Knaup says, “We’ve seen that Amazon Web Services listens to customers’ needs. If the feature does not yet exist, it probably will in a matter of months. The low cost and simplicity of its services made it a no-brainer to switch to the AWS cloud.”
To learn more, visit https://managedsolut.wpengine.com/amazon-web-services/. Read more customer success stories or search by industry to learn how Managed Solution helps businesses implement technology productivity solutions.
Airbnb Scales Infrastructure Automatically Using AWS (3:18)

Source: http://aws.amazon.com/solutions/case-studies/airbnb/

Cloud Computing, Server Utilization, & the Environment With Amazon Web Services By @jeffbarr


Cloud Computing, Server Utilization, & the Environment by Jeff Barr, Amazon Web Services

After reading the Greenpeace, Renewable Energy, and Data Centers blog entry from my colleague James Hamilton a couple of weeks back, I took a look at the Greenpeace report on data center power consumption and noted that it’s pretty unusual for an environmental report not to feature energy conservation as a primary evaluation criterion.
It seems to me that any analysis of the climate impact of a data center should take into consideration resource utilization and energy efficiency, in addition to power mix. Carbon emissions are driven by three items: the number of servers running, the total energy required to power each server, and the carbon intensity of energy sources used to power these servers. Using fewer servers and powering them more efficiently are at least as important to reducing the carbon impact of a company’s data center as its power mix. I thought it would be interesting to run the numbers on this and take a look at how these three factors interact when it comes to overall carbon emissions from compute activity.
I’ll get to the math in a minute but what I ended up with is the following:
On average, AWS customers use 77% fewer servers, 84% less power, and utilize a 28% cleaner power mix, for a total reduction in carbon emissions of 88% from using the AWS Cloud instead of operating their own data centers.
Let’s take a closer look at these numbers to get a better sense of the efficiency and power conservation gains that are possible through cloud computing.

Cloud Customers Consume 77% Fewer Servers

Let’s first look at server utilization and the number of servers required to support a given group of workloads. On-premises data centers typically have fairly low server utilization rates. This is because companies can’t afford to run out of server capacity. Without sufficient capacity, applications fail, sales don’t get completed, customers don’t get served, and critical business data doesn’t get tracked. Servers and related IT resources are required for the company to maintain high service quality through peak load periods.
This peak capacity is only rarely used and, consequently, average server utilization levels are often under 20%. In contrast, large-scale cloud infrastructure operators have a much larger pool of customers and applications allowing them to smooth out peaks and run at much higher overall utilization levels. In addition, innovations that are made possible by the scale and dynamic nature of the cloud, such as the EC2 Spot Market, help to drive utilization even higher and lead to additional efficiency improvements.
The 2014 Data Center Efficiency Assessment from the NRDC has cloud server utilization at 65% and on-premises utilization running 12 to 18%, which is consistent with other estimates I’ve come across over the years. So, with approximately 65% server utilization for the typical large-scale cloud provider versus 15% on-premises, applications that move to the cloud can be supported with only about 23% of the server resources; in other words, companies typically provision fewer than a quarter of the servers that they would need on-premises. This alone is a material gain, but there are significant power efficiency differences as well!

Cloud Customers Consume 84% Less Power

A common measure of infrastructure efficiency is Power Usage Effectiveness (PUE). This is the total power that is brought into a data center (total power) divided by the power that is delivered to the servers, storage, and networking gear (critical power). The difference between total power and critical power is the power lost to data center power distribution, cooling and, to a lesser extent, lighting and other power-consuming overhead items. Lower is better when looking at PUE.
The annual Uptime Institute survey has found average data center PUE to be 1.7 (Industry Average Data Center PUE Stays Nearly Flat Over Four Years). Large-scale cloud providers run at scale and invest deeply in efficiency since, at scale, these investments can have real and rapid paybacks. Some megascale operators report PUE numbers as low as 1.07. Google reports an impressive PUE of 1.12. Some of the smaller cloud providers may invest less in efficiency improvements so I’ll use a more conservative 1.2 as the cloud industry average PUE, with the understanding that some operators including AWS do run more efficiently.
Using this data, we have a prospective customer moving from on-premises to a cloud deployment going from an average PUE of 1.7 down to 1.2, which means, for like-powered servers, the power consumption in the cloud would be 29% lower than on-premises data centers.
So, multiplying the two effects: the cloud requires only 23% of the servers needed for the same workloads, and each of those servers draws only 71% of the power (a PUE of 1.2 versus 1.7). Customers therefore need only 16% (23% x 71%) of the power compared to on-premises infrastructure. This represents an 84% reduction in the amount of power required.
To put this into perspective, the Natural Resources Defense Council (NRDC) estimates that total US data center power consumption was 91 billion kilowatt hours (kWh) in 2013. If all of the workloads in these data centers were migrated to the cloud, we would see a reduction in annual power consumption of more than 76 billion kWh. That would be equivalent to the combined annual residential power consumption of the states of New York and Kentucky.
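The arithmetic behind these figures is easy to reproduce; here is a back-of-the-envelope sketch using the utilization, PUE, and consumption numbers quoted above:

    # Back-of-the-envelope reproduction of the figures quoted above.
    onprem_utilization = 0.15   # roughly the midpoint of the 12-18% on-premises range
    cloud_utilization = 0.65    # NRDC estimate for large-scale cloud providers
    server_fraction = onprem_utilization / cloud_utilization     # ~0.23 -> 77% fewer servers

    onprem_pue = 1.7            # Uptime Institute industry average
    cloud_pue = 1.2             # conservative cloud-industry average used here
    pue_fraction = cloud_pue / onprem_pue                         # ~0.71 power per server

    energy_fraction = server_fraction * pue_fraction              # ~0.16 -> 84% less power

    us_dc_consumption_2013 = 91e9                                 # kWh, NRDC estimate
    savings_kwh = us_dc_consumption_2013 * (1 - energy_fraction)  # > 76 billion kWh

    print(f"{server_fraction:.0%} of the servers, {energy_fraction:.0%} of the power, "
          f"{savings_kwh / 1e9:.0f} billion kWh saved")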

Cloud Customers Reduce Their Carbon Emissions by 88%

The massive improvement in energy efficiency drives a huge reduction in climate impact because less energy consumed means fewer carbon emissions. The climate impact improvements get even better when you factor in that the average corporate data center has a dirtier power mix than the typical large-scale cloud provider.
A popular way to look at the climate impact of power mix is carbon intensity (grams of carbon emissions per kWh of energy used). Using data from the International Energy Agency report Key World Energy Statistics 2014, the global power source average is 545 grams per kWh.
As a cloud example, the June 2015 AWS average power mix carbon intensity is 393 grams per kWh. Measured this way, large-scale cloud providers use a power mix that is 28% less carbon-intensive than the global average.
Combining the fraction of energy required (16%) with the fraction of carbon intensity of power mix (72%), you end up with only 12% of the carbon emissions. This represents an 88% reduction in carbon emissions for customers when they use AWS vs. the typical on-premises data center.
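Continuing the same back-of-the-envelope sketch with the carbon-intensity figures quoted above:

    # Carbon step of the same back-of-the-envelope calculation.
    energy_fraction = 0.16      # from the power calculation above
    global_intensity = 545      # grams of CO2 per kWh, IEA world average
    aws_intensity = 393         # grams of CO2 per kWh, June 2015 AWS power mix

    intensity_fraction = aws_intensity / global_intensity   # ~0.72 -> 28% cleaner mix
    carbon_fraction = energy_fraction * intensity_fraction  # ~0.12 -> 88% reduction

    print(f"{intensity_fraction:.0%} of the carbon intensity, "
          f"{carbon_fraction:.0%} of the emissions")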
To show just how large a role energy efficiency plays here versus power mix, let’s take a look at how carbon emissions change if we adjust the power mix. Although it would never happen, cloud providers could use a power mix with six times the carbon intensity of on-premises data centers and still achieve the same net carbon impact as on-premises data centers. That’s how much more energy efficient cloud computing is than on-premises data centers, given the factors mentioned above!

Working Toward 100%

AWS remains focused on working towards our long-term commitment to 100% renewable energy usage. In the last year, we’ve taken several significant steps to achieve this goal, including teaming with Pattern Development to build and operate the 150 megawatt Amazon Wind Farm (Fowler Ridge) in Indiana.
In May 2015, we updated our Sustainable Energy webpage to announce that the AWS global infrastructure is powered by approximately 25% renewable energy today, and that we expect to reach 40% by the end of 2016. We have several additional developments planned in the next 12-18 months to help us get there and encourage our customers to check back on our sustainability page often to watch our progress.
The environmental argument for cloud computing is already surprisingly strong and I expect that the overall equation will just continue to improve going forward.

Source: https://aws.amazon.com/blogs/aws/cloud-computing-server-utilization-the-environment/