In this video, Jessica talks about the cloud and how it might be the answer for supporting your remote workforce.

While the cloud may seem intangible, it actually runs on warehouses around the world filled with servers dedicated to specific companies, applications, and business needs. The infrastructure lives in physical buildings; it's the service layered on top that makes the cloud feel intangible.

Here are the five use cases Jessica discusses for using the cloud in your business:

  1. Data backup and storage: if you're still backing up data manually or offloading extra files onto an external hard drive, the cloud offers a far more efficient and secure way to store that data. On top of that, cloud storage often comes with a built-in backup solution. For example, if your data is stored on the east coast and your backup on the west coast, then even if something happens to the east coast data center, you still have everything you need (a minimal sketch of this geo-redundancy follows this list).
  2. If you have on-site servers that you don't need for compliance or legal reasons, then a cloud-based system is a great option. Whatever you own or manage on-premises, your IT team has to keep patching and updating so it stays as secure as it can be. In the cloud, this is done for you automatically, so your team gets that time back.
  3. Does your company use a VPN? A VPN lets you connect remotely to all the files and applications you need, but it often comes with login issues and latency. The cloud eliminates many of these issues, delivers a more pleasant user experience, and provides more bandwidth.
  4. Scalability. Scaling usually happens quickly, and it can put a lot of weight on infrastructure that isn't cloud-based, not to mention the cost (capital expenditures). When you're cloud-based and add new employees, the equipment you need is minimal and everything else is an operating expense. Additionally, a new employee needs only a few logins to have access to everything they need. Again, this is a huge time saver.
  5. Cost. Anything physically on-premises is a capital expenditure, whereas the cloud is an operating expense. With operating expenses, you have a more predictable spend.
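To make the geo-redundancy in use case 1 concrete, here is a minimal sketch using the classic AzureRM PowerShell module. All names below are hypothetical and exact parameters can vary across module versions; the point is simply that a geo-redundant (GRS) storage account replicates your data to a distant paired region automatically.

# All names are hypothetical; assumes the AzureRM module and a signed-in session.
Login-AzureRmAccount

# Standard_GRS asynchronously replicates data to a paired region hundreds of miles
# away, so a regional outage does not take your backups with it.
New-AzureRmStorageAccount -ResourceGroupName "BackupRG" -Name "contosogeobackup" `
    -Location "eastus" -SkuName "Standard_GRS"

# Upload a backup file into a container in the new account.
$ctx = (Get-AzureRmStorageAccount -ResourceGroupName "BackupRG" -Name "contosogeobackup").Context
New-AzureStorageContainer -Name "backups" -Context $ctx
Set-AzureStorageBlobContent -File ".\payroll.bak" -Container "backups" -Context $ctx

With Standard_GRS, the secondary copy lives in the region's designated pair (East US pairs with West US, for example), which is exactly the east coast/west coast scenario described above.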

Hybrid cloud approaches have seen a steady rise in popularity among organizations. A hybrid cloud approach combines company-owned private clouds, public cloud services, and more traditional dedicated hosting. Each of these has its pros and cons, and companies are starting to take advantage of what each has to offer while minimizing the risks that come attached.

Concern over the security of public clouds is what led many to turn to hybrid models in the first place. For instance, businesses make use of the privacy and security that private clouds offer, while still benefiting from the flexibility and easy scalability of public clouds. Below are some of the main reasons why organizations are steadily moving toward hybrid cloud approaches.

More Flexibility

One of the most immediate benefits of such a hybrid system is that it allows companies to manage their applications and databases more effectively. On the one hand, they can host their important data on private clouds and/or dedicated servers, where they have absolute control. On the other hand, they can use available public cloud space for faster and easier scalability, for example to test new applications and determine their feasibility before committing resources.

So, as you steadily approach the limits of your private infrastructure, you can easily and seamlessly migrate entire services and applications to the public cloud. If you need to scale down, you can take the same action in reverse. This enhanced flexibility, and the ability to mix these capabilities based on your own needs, is what draws so many companies to hybrid clouds.

Enhanced Security

When you use a third-party public cloud to store all of your sensitive data, you leave yourself exposed to all sorts of risks. You need to account for security problems, compliance issues, and performance requirements, which is why private clouds remain a good alternative for sensitive workloads. With a hybrid cloud, you can choose which services run on the public cloud and which remain on the private one. In addition, because the public cloud absorbs less sensitive workloads, you're not overcrowding your private, secure space.

Lower Costs 

An organization that uses a hybrid cloud approach will almost always see lower costs than one that relies exclusively on either option alone. Hybrid clouds let you pay as you go, meaning you have much tighter control over your IT expenses. Keeping your backups in the cloud further reduces costs. And with public cloud services, such as Azure or AWS, you can size your environment to your exact requirements without overpaying for capacity you won't use.

Innovation Opportunities

With the ability to test and develop new applications on the public cloud, you can focus more effort on this process without worrying about exceeding your limits. This reduces the potential cost of failure and gives you access to immense scalability. In such an environment, the opportunities for innovation are greatly enhanced, without sacrificing privacy or security. You won't have to rearrange your infrastructure to test out a new service when using a hybrid cloud.

Migrating to the cloud can be an incredibly lucrative endeavor for any business looking to streamline its day-to-day operations and optimize its processes, so knowing the first steps of a successful cloud migration is very important. To begin, note that there are plenty of benefits to cloud migration, but three quickly come to mind.

First, cloud computing lowers data storage costs by a significant margin. Second, flexibility and scalability are all but guaranteed, allowing you to scale your storage whenever you need to. Third, moving to the public cloud lets you expand your IT team without any extra costs attached: once you've partnered with a provider, they become an extension of your IT department, since they will be the ones managing and maintaining the data center.

Nevertheless, the cloud migration process can quickly become a hassle without the proper knowledge, planning, and execution. Only slightly above a quarter of businesses engaged in cloud migration reported being "extremely satisfied" with their overall experience. Despite this low percentage, with the proper information the benefits are well worth the investment.

Below are the first steps for successful and seamless cloud migration.

Choosing the Right Cloud Provider

The selection of your future cloud provider should not be taken lightly. There are plenty to choose from, and each has its strengths and weaknesses. While some focus on scalability, others offer more personalized application management options. Among the most popular choices are Amazon Web Services, Microsoft Azure, Google, and IBM, and the list goes on.

When making this decision, don't merely go for the market leader; take a moment to consider whether a provider's services align with your own business goals. Also take into account the long-term relationship you will have with this provider. It's neither time- nor cost-effective to choose one provider only to switch later.

The Level of Cloud Integration

There are two ways of migrating your applications: shallow cloud integration and deep cloud integration.

The shallow cloud integration (sometimes called "lift and shift") - You move your on-premises applications to the cloud with minimal or no changes to the servers that run them. The only changes made are those needed for the applications to function in the new environment; you don't use any cloud-native services.

The deep cloud integration - You modify your application during the migration process to make use of key cloud capabilities, such as dynamic load balancing, auto-scaling, serverless computing, or cloud-specific data stores (a hedged sketch of one such capability follows).
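To illustrate what "deep" integration can look like in practice, here is a hedged sketch that attaches a CPU-based auto-scale rule to an Azure virtual machine scale set using the classic AzureRM PowerShell cmdlets. Every name and ID below is hypothetical, and parameter names may differ slightly between module versions.

# Hypothetical resource ID of a VM scale set to auto-scale.
$vmssId = "/subscriptions/<subscription-id>/resourceGroups/AppRG/providers/Microsoft.Compute/virtualMachineScaleSets/webScaleSet"

# Add one instance whenever average CPU exceeds 70% over a five-minute window.
$scaleOut = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId $vmssId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 70 `
    -TimeGrain 00:01:00 -TimeWindow 00:05:00 -ScaleActionCooldown 00:05:00 `
    -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount -ScaleActionValue 1

# A profile keeps the tier between 2 and 10 instances, defaulting to 2.
$asProfile = New-AzureRmAutoscaleProfile -Name "cpuProfile" -DefaultCapacity 2 `
    -MinimumCapacity 2 -MaximumCapacity 10 -Rule $scaleOut

Add-AzureRmAutoscaleSetting -Name "webAutoscale" -ResourceGroup "AppRG" -Location "eastus" `
    -TargetResourceId $vmssId -AutoscaleProfile $asProfile

A shallow migration would simply run the same fixed number of servers in the cloud; the auto-scale setting is what makes the integration "deep."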

Cloud KPIs

You will also have to establish your Key Performance Indicators (KPIs): metrics about your application or service that measure how it performs against your expectations. These KPIs help you determine how your cloud migration is going, surface any problems with your application, and show when the migration is complete and successful.

Some cloud KPIs relate to user experience, such as page load time, lag, or session duration. For infrastructure, KPIs may include disk performance, memory usage, or CPU usage. And as far as business engagement is concerned, there are conversion and engagement rates, as well as cart adds (a trivial measurement sketch follows).
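As a trivial example of sampling one of these KPIs, the PowerShell snippet below times a single page request. The URL is hypothetical, and a real monitoring setup would sample continuously and aggregate, but the idea is the same.

# Hypothetical URL; one sample of the "page load time" KPI.
$elapsed = Measure-Command { Invoke-WebRequest -Uri "https://app.contoso.com/login" -UseBasicParsing }
Write-Output ("Page load time: {0:N0} ms" -f $elapsed.TotalMilliseconds)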

Conclusion

While these are the first steps to consider when migrating to the cloud, other issues may interest you as well. In the end, it all boils down to what your business needs and hopes to achieve from the migration. For more information on cloud migration and cloud computing in general, feel free to visit our website or contact us directly.

 

In 2018, the question isn’t whether or not you should move to the cloud, but rather which cloud — public or private? While there can certainly be benefits to both, it truly depends on your organization and business model. Here, we’ll discuss some of the benefits and why you should consider moving to the public cloud.

According to Forbes, Microsoft and Amazon Web Services are two of the top vendors in the cloud today, and I think most in the industry would agree. There are a few things to consider when selecting a public cloud provider, starting with location. To ensure low latency, you’ll want to make sure the data center you select is in the right location. While most, if not all, public cloud providers offer multiple locations, Microsoft Azure has more regions than any other cloud provider. Another important aspect of choosing a public cloud provider is data security. While all cloud vendors will claim to be secure, you should always consider the risks. Check your cloud provider for regulation and compliance coverage; Azure, for example, has over 70 compliance certifications.

With all that said, whichever public cloud you choose, here are some great reasons to think about moving to the public cloud, regardless of the provider.

 

Reduced Costs

Having your data onsite can become quite costly. With traditional storage vendors, you typically get locked into long-term contracts, which pushes users to over-provision their storage and pay for unnecessary resources. With public cloud vendors such as Microsoft and AWS, however, you pay as you go, only paying for what you need. An OpEx-based storage solution lets you budget easily for services and data. Cloud-based storage also won’t take up as much real estate onsite, allowing you either to (a) lease smaller office space, significantly reducing rent, or (b) free up more room for offices and cubicles, which means more employees driving the business and revenue. Cloud-based storage services also eliminate power and cooling costs, which can be substantial for many companies.

 

IT Team Expansion

When you move to the public cloud, your data is stored at your provider’s data center. That provider then becomes an extension of your IT team, since they are the ones managing and maintaining the data center. This can be beneficial from multiple perspectives. First, you get assigned a representative (account manager) and/or a team of people to help manage your data. Not only does this give your team a dedicated specialist with additional knowledge and skills, it can also cut costs: by moving away from on-site private storage, you don’t need to hire additional people to manage your data center. It also frees up your IT staff’s workload, allowing them to work on more productive tasks that actually increase revenue.

 

Flexibility & Scalability

As alluded to under Reduced Costs, it doesn’t matter how much storage you need today, tomorrow, next month, or next year: with the public cloud, you can scale your storage as needed. In some cases, you can even hibernate data when you don’t need to access it (a minimal sketch follows), providing an additional layer of flexibility for your business. You aren’t locked into a long-term fixed contract. Furthermore, the public cloud opens up the possibility of a hybrid cloud experience, connecting to multiple clouds at once, which can be especially beneficial for disaster recovery (DR) and business continuity (BC) strategies.
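As a hedged sketch of the "hibernate" idea, the snippet below moves a rarely used blob to Azure's Archive tier using the classic AzureRM and Azure.Storage modules. Names are hypothetical; an archived blob remains durable but must be rehydrated before it can be read again.

# Hypothetical names; assumes an authenticated AzureRM session.
$ctx = (Get-AzureRmStorageAccount -ResourceGroupName "StorageRG" -Name "contosodata").Context
$blob = Get-AzureStorageBlob -Container "projects" -Blob "2016-archive.zip" -Context $ctx

# Move the blob to the Archive tier: cheapest storage, offline until rehydrated.
$blob.ICloudBlob.SetStandardBlobTier("Archive")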

In the end, the public cloud offers almost everything ‘as-a-service’, effectively expanding your internal team and its knowledge. This also helps cut costs by not requiring your organization to hire additional IT team members. Plus, with the public cloud’s OpEx business model, you only pay for what you use and what you need, decreasing total cost of ownership and increasing ROI.

Cloud migration and disaster recovery of load balanced multi-tier applications

Support for Microsoft Azure virtual machine availability sets has been a highly anticipated capability for many Azure Site Recovery customers who use the product for either cloud migration or disaster recovery of applications. Today, I am excited to announce that Azure Site Recovery now supports creating failed-over virtual machines in an availability set. This in turn means you can configure an internal or external load balancer to distribute traffic between multiple virtual machines in the same tier of an application. With the Azure Site Recovery promise of cloud migration and disaster recovery of applications, this first-class integration with availability sets and load balancers makes it simpler for you to run your failed-over applications on Microsoft Azure with the same guarantees you had while running them on the primary site.
In an earlier blog in this series, you learned about the importance and complexity involved in recovering applications: Cloud migration and disaster recovery for applications, not just virtual machines. The next blog was a deep dive on recovery plans, describing how you can do a One-click cloud migration and disaster recovery of applications. In this blog, we look at how to fail over or migrate a load-balanced multi-tier application using Azure Site Recovery.
To demonstrate real-world usage of availability sets and load balancers in a recovery plan, we use a three-tier SharePoint farm with a SQL Always On backend. A single recovery plan orchestrates failover of the entire SharePoint farm.
Disaster Recovery of three tier SharePoint Farm
Here are the steps to set up availability sets and load balancers for this SharePoint farm when it needs to run on Microsoft Azure:
  1. Under the Recovery Services vault, go to Compute and Network settings of each of the application tier virtual machines, and configure an availability set for them.
  2. Configure another availability set for the web tier virtual machines.
  3. Add the two application tier virtual machines and the two web tier virtual machines to Group 1 and Group 2 of a recovery plan, respectively.
  4. If you have not already done so, click the following button to import the most popular Azure Site Recovery automation runbooks into your Azure Automation account.

    DeployToAzure

  5. Add script ASR-SQL-FailoverAG as a pre-step to Group 1.
  6. Add script ASR-AddMultipleLoadBalancers as a post-step to both Group 1 and Group 2.
  7. Create an Azure Automation variable using the instructions outlined in the scripts. For this example, these are the exact commands used.
$InputObject = @{
    "TestSQLVMRG"   = "SQLRG";
    "TestSQLVMName" = "SharePointSQLServer-test";
    "ProdSQLVMRG"   = "SQLRG";
    "ProdSQLVMName" = "SharePointSQLServer";
    "Paths" = @{
        "1" = "SQLSERVER:\SQL\SharePointSQL\DEFAULT\AvailabilityGroups\Config_AG";
        "2" = "SQLSERVER:\SQL\SharePointSQL\DEFAULT\AvailabilityGroups\Content_AG"
    };
    "406d039a-eeae-11e6-b0b8-0050568f7993" = @{ "LBName" = "ApptierInternalLB"; "ResourceGroupName" = "ContosoRG" };
    "c21c5050-fcd5-11e6-a53d-0050568f7993" = @{ "LBName" = "ApptierInternalLB"; "ResourceGroupName" = "ContosoRG" };
    "45a4c1fb-fcd3-11e6-a53d-0050568f7993" = @{ "LBName" = "WebTierExternalLB"; "ResourceGroupName" = "ContosoRG" };
    "7cfa6ff6-eeab-11e6-b0b8-0050568f7993" = @{ "LBName" = "WebTierExternalLB"; "ResourceGroupName" = "ContosoRG" }
}
$RPDetails = New-Object -TypeName PSObject -Property $InputObject | ConvertTo-Json
New-AzureRmAutomationVariable -Name "SharePointRecoveryPlan" -ResourceGroupName "AutomationRG" -AutomationAccountName "ASRAutomation" -Value $RPDetails -Encrypted $false
You have now completed customizing your recovery plan and it is ready to be failed over.
Azure Site Recovery SharePoint Recovery Plan
Once the failover (or test failover) is complete and the SharePoint farm runs in Microsoft Azure, it looks like this:
SharePoint Farm on Azure failed over using Azure Site Recovery
Watch this demo video to see all of this in action: using the built-in constructs that Azure Site Recovery provides, you can fail over a three-tier application with a single-click recovery plan. The recovery plan automates the following tasks:
  1. Failing over the SQL Always On availability group to the virtual machine running in Microsoft Azure
  2. Failing over the web and app tier virtual machines that were part of the SharePoint farm
  3. Attaching an internal load balancer on the application tier virtual machines of the SharePoint farm that are in an availability set
  4. Attaching an external load balancer on the web tier virtual machines of the SharePoint farm that are in an availability set
 
With a relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery is the one-stop shop for all your disaster recovery and migration needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure, enabling not just elite tier-1 applications to have a business continuity plan, but offering a compelling solution that empowers you to set up a working end-to-end disaster recovery plan for 100% of your organization's IT applications.
You can check out additional product information and start protecting and migrating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.

Azure Data Factory March new features update

Hello, everyone! In March, we added a lot of great new capabilities to Azure Data Factory, including highly demanded features like loading data from SAP HANA, SAP Business Warehouse (BW), and SFTP, a performance enhancement for loading directly from Data Lake Store into SQL Data Warehouse, data movement support for the first region in the UK (UK South), and a new Spark activity for rich data transformation. We can’t wait to share more details with you; the following is a complete list of Azure Data Factory’s new March features:
  • Support data loading from SAP HANA and SAP BW
  • Support data loading from SFTP
  • Performance enhancement of direct loading from Data Lake Store to Azure SQL Data Warehouse via PolyBase
  • Spark activity for rich data transformation
  • Max allowed cloud Data Movement Units increase
  • UK data center now available for data movement

Support data loading from SAP HANA and SAP Business Warehouse

SAP is one of the most widely used enterprise software systems in the world. We hear you: it’s crucial for Microsoft to empower customers to integrate their existing SAP systems with Azure to unlock business insights. We are happy to announce that we have enabled loading data from SAP HANA and SAP Business Warehouse (BW) into various Azure data stores for advanced analytics and reporting, including Azure Blob storage, Azure Data Lake, and Azure SQL Data Warehouse.

SAP HANA and SAP BW connectors in Copy Wizard

For more information about connecting to SAP HANA and SAP BW, refer to Azure Data Factory offers SAP HANA and Business Warehouse data integration.
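As a hedged sketch of what the configuration looks like outside the Copy Wizard, the snippet below registers an SAP HANA linked service with the Data Factory (v1) PowerShell cmdlets. The property names follow the documented SapHana connector, but the factory, server, and credentials are hypothetical.

# Hypothetical names; assumes an existing (v1) data factory and the AzureRM.DataFactories module.
@"
{
    "name": "SapHanaLinkedService",
    "properties": {
        "type": "SapHana",
        "typeProperties": {
            "server": "saphanaserver:30015",
            "authenticationType": "Basic",
            "username": "hanauser",
            "password": "<password>"
        }
    }
}
"@ | Set-Content "SapHanaLinkedService.json"

New-AzureRmDataFactoryLinkedService -ResourceGroupName "ADFRG" -DataFactoryName "ContosoADF" -File "SapHanaLinkedService.json"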

Support data loading from SFTP

You can now use Azure Data Factory to copy data from SFTP servers into various data stores in Azure or on-premises environments, including Azure Blob storage, Azure Data Lake, and Azure SQL Data Warehouse. A full support matrix can be found in Supported data stores and formats. You can author the copy activity using the intuitive Copy Wizard (screenshot below) or JSON scripting (a sketch follows the screenshot). Refer to the SFTP connector documentation for more details.

SFTP connector in Copy Wizard
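If you prefer JSON scripting over the wizard, here is a comparable hedged sketch for an SFTP linked service; the host and credentials are hypothetical, with property names per the documented Sftp connector.

# Hypothetical names; assumes an existing (v1) data factory and the AzureRM.DataFactories module.
@"
{
    "name": "SftpLinkedService",
    "properties": {
        "type": "Sftp",
        "typeProperties": {
            "host": "sftp.contoso.com",
            "port": 22,
            "authenticationType": "Basic",
            "username": "sftpuser",
            "password": "<password>"
        }
    }
}
"@ | Set-Content "SftpLinkedService.json"

New-AzureRmDataFactoryLinkedService -ResourceGroupName "ADFRG" -DataFactoryName "ContosoADF" -File "SftpLinkedService.json"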

Performance enhancement of direct data loading from Data Lake Store to Azure SQL Data Warehouse via PolyBase

Data Factory Copy Activity now supports loading data from Data Lake Store into Azure SQL Data Warehouse directly via PolyBase. When you use the Copy Wizard, PolyBase is turned on by default and your source file’s compatibility is checked automatically. You can see whether PolyBase was used in the activity run details.
If you are currently not using PolyBase (or staged copy plus PolyBase) for copying data from Data Lake Store to Azure SQL Data Warehouse, we suggest checking your source data format and updating the pipeline to enable PolyBase and remove the staging settings for a performance improvement (a sketch follows). For more detailed information, refer to Use PolyBase to load data into Azure SQL Data Warehouse and Azure Data Factory makes it even easier and convenient to uncover insights from data when using Data Lake Store with SQL Data Warehouse.
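For pipelines authored in JSON, the switch lives on the copy activity’s SQL Data Warehouse sink. Below is a hedged, minimal (v1) pipeline sketch with hypothetical dataset and factory names; "allowPolyBase" is the documented SqlDWSink property.

# Hypothetical names; a minimal (v1) pipeline with PolyBase enabled on the SQL DW sink.
@"
{
    "name": "CopyAdlsToSqlDw",
    "properties": {
        "activities": [ {
            "name": "CopyWithPolyBase",
            "type": "Copy",
            "inputs": [ { "name": "AdlsInputDataset" } ],
            "outputs": [ { "name": "SqlDwOutputDataset" } ],
            "typeProperties": {
                "source": { "type": "AzureDataLakeStoreSource" },
                "sink": { "type": "SqlDWSink", "allowPolyBase": true }
            }
        } ],
        "start": "2017-03-01T00:00:00Z",
        "end": "2017-03-02T00:00:00Z"
    }
}
"@ | Set-Content "CopyAdlsToSqlDw.json"

New-AzureRmDataFactoryPipeline -ResourceGroupName "ADFRG" -DataFactoryName "ContosoADF" -File "CopyAdlsToSqlDw.json"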

Spark activity for rich data transformation

Apache Spark for Azure HDInsight is built on an in-memory compute engine that enables high-performance querying on big data. Azure Data Factory now supports a Spark activity against Bring-Your-Own HDInsight clusters, so users can operationalize Spark job execution through the Spark activity in Azure Data Factory.
Since a Spark job may have multiple dependencies, such as jar packages (placed in the Java CLASSPATH) and Python files (placed on the PYTHONPATH), you will need to follow a predefined folder structure for your Spark script files. For more detailed information about JSON scripting of the Spark activity (sketched below), refer to Invoke Spark programs from Azure Data Factory pipelines.
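A hedged sketch of the activity JSON, written as an entry for a (v1) pipeline’s "activities" array: "rootPath" points at the container/folder holding your script files in the layout mentioned above, and "entryFilePath" is relative to it. All names here are hypothetical.

# Hypothetical fragment for a (v1) pipeline's "activities" array.
$sparkActivity = @"
{
    "name": "SparkTransform",
    "type": "HDInsightSpark",
    "linkedServiceName": "HDInsightSparkLinkedService",
    "typeProperties": {
        "rootPath": "adfspark",
        "entryFilePath": "transform.py",
        "getDebugInfo": "Always"
    }
}
"@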

Max allowed cloud Data Movement Units increase

Cloud Data Movement Units (DMUs) reflect the amount of compute that powers your cloud-to-cloud copy. When copying many large files from Blob storage, Data Lake Store, Amazon S3, or cloud FTP/SFTP into Blob storage, Data Lake Store, or Azure SQL Database, higher DMU values usually give you better throughput. You can now specify up to 32 DMUs for large copy runs (a sketch follows). Learn more from cloud data movement units and parallel copy.
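In JSON scripting, raising the DMU count is a one-property change on the copy activity’s typeProperties; names below are hypothetical, and "cloudDataMovementUnits" is the documented property.

# Hypothetical copy-activity fragment requesting the new 32-DMU maximum.
$copyActivity = @"
{
    "name": "LargeBlobCopy",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "BlobSource" },
        "sink": { "type": "BlobSink" },
        "cloudDataMovementUnits": 32
    }
}
"@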

UK data center now available for data movement

The Azure Data Factory data movement service is now available in the UK, in addition to the existing 16 data centers. With that, you can leverage Data Factory to copy data from cloud and on-premises data sources into the various supported Azure data stores located in the UK. Learn more about globally available data movement and how it works from Globally available data movement, and from the Azure Data Factory’s Data Movement is now available in the UK blog post.


Microsoft Products Reaching End of Life

End of life is a key moment to transition to a cloud-first, mobile-first environment, and Managed Solution can help you with this transition. Key dates for Office products approaching the end of support:
  • April 11, 2017: Exchange Server 2007
  • October 10, 2017: Office 2007, Project Server 2007, SharePoint Server 2007
  • October 31, 2017: Outlook 2007 connectivity to Office 365
  • January 9, 2018: Office Communications Server 2007

A closer look at end of support for Exchange Server 2007

On April 11, 2017, extended support for Exchange Server 2007 will end. Updating to Office 365 will provide:
  • Continued support
  • Security updates
  • Better hardware utilization
  • Improved connectivity to Outlook and OWA
  • Easier and more complete compliance
Microsoft recommends migrating to current product versions prior to the support end date to get the latest product innovations and ensure uninterrupted support. If assistance with migration is needed, contact Managed Solution for more information.




Marketplace Monday: SurPaaS Analyzer is a smart decision-enabling tool that analyzes your application for an effective cloud migration. Check it out!

Contact us Today!

Chat with an expert about your business’s technology needs.