Integrating your on-premises identities with Azure Active Directory

By Billmath

 

Today, users want to access applications both on-premises and in the cloud, and to do so from any device, be it a laptop, smartphone, or tablet. To make this possible, you and your organization need a way for users to access these apps, but moving entirely to the cloud is not always an option.


With the introduction of Azure Active Directory Connect, providing access to these apps and moving to the cloud has never been easier. Azure AD Connect provides the following benefits:
  • Your users can sign on with a common identity both in the cloud and on-premises. They don't need to remember multiple passwords or accounts and administrators don't have to worry about the additional overhead multiple accounts can bring.
  • A single tool and guided experience for connecting your on-premises directories with Azure Active Directory. Once installed, the wizard deploys and configures all components required to get your directory integration up and running, including sync services, password sync or AD FS, and prerequisites such as the Azure AD PowerShell module.

Why use Azure AD Connect?

Integrating your on-premises directories with Azure AD makes your users more productive by providing a common identity for accessing both cloud and on-premises resources. With this integration users and organizations can take advantage of the following:
    • Organizations can provide users with a common hybrid identity across on-premises and cloud-based services by leveraging Windows Server Active Directory and connecting it to Azure Active Directory.
    • Administrators can provide conditional access based on application resource, device and user identity, network location, and multi-factor authentication.
    • Users can use their common identity, through accounts in Azure AD, to access Office 365, Intune, SaaS apps, and third-party applications.
    • Developers can build applications that leverage the common identity model, integrating with Active Directory on-premises or Azure AD for cloud-based applications.
Azure AD Connect makes this integration easy and simplifies the management of your on-premises and cloud identity infrastructure.
Download Azure AD Connect and Learn More Here

Source:
https://azure.microsoft.com/en-us/documentation/articles/active-directory-aadconnect/

How To Share a Word Document Through SharePoint Online & OneDrive

Managed Solution’s In The TechKnow is a Web Tech Series featuring how-to video tutorials on technology.

This series is presented by Jennell Mott, Business Operations Manager, and provides a resource for quick technical tips and fixes. You don’t need to be a technical guru to brush up on tech tips!
Don’t see the technology that you would like to learn? Submit a suggestion to inthetechknow@managedsolution.com and we will be sure to cover it in our upcoming webcast series.
Other #inTheTechKnow videos: https://managedsolut.wpengine.com/inthetechknow/

Sign up for the newsletter so you can be informed of the latest technology webcasts.

New CloudWatch Metrics for Amazon WorkSpaces

By Jeff Barr, Amazon Web Services
AWS customers are deploying Amazon WorkSpaces at scale in medium and large organizations. For example, health care company Johnson & Johnson is using WorkSpaces to realize the long-promised security and efficacy benefits of virtual desktops, in a world populated by a diverse workforce that would like to use their own computing devices if possible (also known as BYOD – Bring Your Own Device). You can view their recent presentation, Deploying Amazon WorkSpaces at Scale, to learn more about what they did and how they now support BYOD for 16,000 contractors and employees, along with zero clients for another 8,000 users.

New Metrics

To help our customers monitor their WorkSpaces deployments, we recently added additional Amazon CloudWatch metrics for WorkSpaces. These metrics are designed to provide administrators with additional insight into the overall health and connection status of individual WorkSpaces, and of all of the WorkSpaces that belong to a particular directory.
Like all CloudWatch metrics, these metrics can be viewed in the AWS Management Console, accessed via the CloudWatch APIs, and monitored by CloudWatch Alarms and third-party tools.


The new metrics are enabled by default and are available to you at no extra charge.

Here’s what you get:
  • Available – WorkSpaces that respond to a status check are counted in this metric.
  • Unhealthy – WorkSpaces that do not respond to the same status check are counted in this metric.
  • ConnectionAttempt – The number of connection attempts made to a WorkSpace.
  • ConnectionSuccess – The number of successful connection attempts.
  • ConnectionFailure – The number of unsuccessful connection attempts.
  • SessionLaunchTime – The amount of time taken to initiate a session, as measured by the WorkSpaces client.
  • InSessionLatency – The round trip time between the WorkSpaces client and WorkSpaces, as measured and reported by the client.
  • SessionDisconnect – The number of user-initiated and automatically closed sessions.
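Because these are standard CloudWatch metrics, you can also pull them programmatically. As a minimal sketch (not from the original post), here's how you might read the Unhealthy count for a directory with boto3 in Python; the directory ID is a hypothetical placeholder:

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

# Fetch the per-directory Unhealthy count over the past hour in 5-minute buckets.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/WorkSpaces",
    MetricName="Unhealthy",
    Dimensions=[{"Name": "DirectoryId", "Value": "d-1234567890"}],  # hypothetical directory ID
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
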
Here’s how you create an alarm that will fire if a user cannot connect to their WorkSpace:
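The original post walked through this step in the CloudWatch console. As a rough programmatic equivalent, here's a minimal boto3 sketch that alarms whenever ConnectionFailure is non-zero for a WorkSpace; the WorkSpace ID and SNS topic ARN are hypothetical placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Fire whenever any connection attempt to this WorkSpace fails within
# a 5-minute window, and notify an SNS topic for the on-call engineer.
cloudwatch.put_metric_alarm(
    AlarmName="workspace-connection-failures",
    Namespace="AWS/WorkSpaces",
    MetricName="ConnectionFailure",
    Dimensions=[{"Name": "WorkspaceId", "Value": "ws-abc123def"}],  # hypothetical WorkSpace ID
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical SNS topic
)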

Available Now
The new metrics are available now and you can start monitoring them today!

Source: https://aws.amazon.com/blogs/aws/new-cloudwatch-metrics-for-amazon-workspaces/


#intheTechKnow Tips Sheet: Windows 8.1

Microsoft Windows 8.1 is designed to work seamlessly between touchscreen and keyboard. This reimagined operating system was designed to make everyday tasks easier and more enjoyable. Windows 8.1 features a fully customizable tile interface, instant search capabilities through the charms, access to apps through the user-friendly Windows Store, and much more.

Master Windows 8.1 before Windows 10 is launched on July 29, 2015.

Want to learn more?


View Managed Solution’s In The TechKnow – a Web Tech Series featuring how-to video tutorials on technology. This series is presented by Jennell Mott, Business Operations Manager, and provides a resource for quick technical tips and fixes. You don’t need to be a technical guru to brush up on tech tips!


Inside Azure Search: Chaos Engineering

By Heather Nakama, Software Engineer, Azure Search
A central truth in cloud computing is that failure is inevitable. As systems scale, we expect nodes to fail ungracefully in random and unexpected ways, networks to experience sudden partitions, and messages to be dropped at any time.
Rather than fight this truth, we embrace it. We plan for failure and design our systems to be fault-tolerant, resilient, and self-stabilizing. But once we’ve finished designing and building, how do we verify that our beautiful, fault-tolerant systems actually react to failures as they should?
Functional tests can only do so much. Distributed systems are complex ecosystems of moving parts. Each component is subject to failure, and more than that, its interactions with other system components will also have their own unique failure modes. We can sit around and armchair-theorize all we like about how these components will respond to imagined failures, but finding every possible combination of failure is just not feasible. Even if we do manage to exhaustively account for every failure mode our system can encounter, it’s not sustainable or practical to re-verify system responses in this way every time we make a change to its behavior.

Chaos Engineering

Azure Search uses chaos engineering to solve this problem. As coined by Netflix in an excellent recent blog post, chaos engineering is the practice of building infrastructure that enables controlled, automated fault injection into a distributed system. To accomplish this, Netflix has created the Netflix Simian Army, a collection of tools (dubbed “monkeys”) that inject failures into customer services.
Inspired by the first member of the Netflix Simian Army, Chaos Monkey, we’ve created our own “Search Chaos Monkey” and set it loose to wreak havoc against a test environment.
The environment contains a search service that is continuously and randomly changing topology and state. Service calls are made against this service on a regular basis to verify that it is fully operational.
Even just setting up this target environment for the Search Chaos Monkey to play in has been incredibly helpful in smoking out issues with our provisioning and scaling workflows. When the Search Chaos Monkey is dormant, we expect the test service to operate smoothly. Any errors coming from it can therefore be assumed to be caused by bugs in existing workflows or false positives from the alerting system. We’ve caught several bugs this way before they had a chance to escape into production.

Quantifying Chaos

After the test service was stabilized, we unleashed the Search Chaos Monkey and gave it some tools of destruction to have fun with. It runs continuously and will randomly choose an operation at regular intervals to run in its test environment.
The set of possible operations the monkey can choose from depends on the level of chaos we’ve told it to cause in our test environment.
Low chaos refers to failures that our system can recover from gracefully with minimal or no interruption to service availability. Accordingly, while the Search Chaos Monkey is set to run only low chaos operations, any alerts raised from the test service are considered to be bugs.
Medium chaos failures can also be recovered from gracefully, but may result in degraded service performance or availability, raising low priority alerts to engineers on call.
High chaos failures are more catastrophic and will interrupt service availability. These will cause high priority alerts to be sent to on-call engineers and often require manual intervention to fix.
High chaos operations are important for ensuring that our system can fail in a graceful manner while maintaining the integrity of customer data. Along with medium chaos operations, they also function as negative tests that verify alerts are raised as expected, enabling engineers to respond to the problem.
All operations can also be run on demand with a manual call to the Search Chaos Monkey.
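
To make this concrete, here's a minimal Python sketch of a scheduler in this spirit; the operation names, grouping, and API are hypothetical illustrations, not Azure Search's actual tooling:

import random
import time

# Hypothetical fault-injection operations, grouped by chaos level.
OPERATIONS = {
    "low": ["restart_replica", "drop_single_message"],
    "medium": ["kill_node", "brief_network_partition"],
    "high": ["take_down_primary", "exhaust_disk_space"],
}
LEVELS = ["low", "medium", "high"]

def run_chaos_monkey(max_level="low", interval_seconds=600):
    """Continuously run a random operation at or below max_level."""
    allowed = [op for level in LEVELS[: LEVELS.index(max_level) + 1]
               for op in OPERATIONS[level]]
    while True:
        operation = random.choice(allowed)
        print(f"injecting fault: {operation}")
        # A real monkey would call the fault-injection API against the
        # test environment here, then record the outcome for alert checks.
        time.sleep(interval_seconds)

# Example: allow low and medium chaos operations only.
# run_chaos_monkey(max_level="medium")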

Downgrading Failures from Exceptional to Expected

These levels of chaos provide a systematic and iterative path for an error to become incorporated into our infrastructure as a known and expected failure.
Any failure that we encounter or add to our system is assigned a level of chaos to measure how well our system reacts to it. The failure could be a theoretical one that we want to train our system to handle, a bug found by an existing chaos operation, or a reproduction of a failure experienced by a real search service. In any case, we will ideally have first automated the new failure so that it can be easily triggered by a single call to the Search Chaos Monkey.
In addition to the low, medium and high levels described above, the failure can be classified as causing an “extreme” level of chaos.
Extreme chaos operations are failures that cause ungraceful degradation of the service, result in data loss, or that simply fail silently without raising alerts. Since we cannot predict what state the system will be in after running an extreme chaos failure, an operation with this designation would not be given to the Search Chaos Monkey to run on a continuous basis until a fix was added to downgrade it to high chaos.
Driving chaos level as low as possible is the goal for all failures we add to our system, extreme or otherwise.
If it’s an extreme chaos failure that puts our system on the floor, then it’s essential we at least enable our infrastructure to preserve service integrity and customer data, letting it be downgraded to a high chaos failure. Sometimes this means that we sacrifice service availability until a real fix can be found.
If it’s a high chaos failure that requires a human to mitigate, we’ll try to enable our infrastructure to auto-mitigate as soon as the error is encountered, removing the need to contact an engineer with a high priority alert.
We like it any time the Search Chaos Monkey breaks our system and finds a bug before customer services are affected, but we especially like it when the monkey uncovers high and extreme level chaos failures. It’s much more pleasant to fix a bug at leisure before it is released than it is to fix it in a bleary-eyed panic after receiving a middle-of-the-night phone call that a customer’s service is down.
Medium chaos errors are also welcome, if not as urgent. If the system is already capable of recovery, we can try to improve early detection so steps can be taken before engineers are notified that service availability is impacted. The less noise the engineers on call need to deal with, the more effective they can be at responding to real problems.
Automation is the key to this process of driving down chaos levels. Being able to trigger a specific failure with minimal effort allows us to loosely follow a test-driven development approach towards reduction of chaos levels. And once the automated failure is given to the Search Chaos Monkey, it can function as a regression test to ensure future changes to our system do not impact its ability to handle failure.
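
As a sketch of what such a regression test might look like (the chaos and service clients below are hypothetical stand-ins, not Azure Search's real interfaces):

# Hypothetical stand-ins for the chaos tooling and the test search service.
class ChaosClient:
    def run_operation(self, name: str) -> None:
        # Would trigger the named, previously automated failure.
        pass

class ServiceClient:
    def is_healthy(self) -> bool:
        # Placeholder; would issue a service call and verify the response.
        return True

def test_recovers_from_stalled_external_calls():
    chaos, service = ChaosClient(), ServiceClient()
    chaos.run_operation("stall_external_calls")  # hypothetical operation name
    # A low chaos failure should be absorbed with no loss of availability.
    assert service.is_healthy()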

Chaos Engineering in Action

To illustrate how this works, here’s a recent example of a failure that was driven from extreme chaos to low chaos using this model.
Extreme chaos: Initial discovery. A service emitted an unexpected low-priority alert. Upon further investigation, it ended up being a downstream error signifying that at least one required background task was not running. Initial classification put this error at extreme chaos – it left the cluster in an unknown state and didn’t alert correctly.
High chaos: Mitigation. At this point, we were not able to automate the failure since we were not aware of the root cause. Instead, we worked to drive the failure down to a level of high chaos. We identified a manual fix that worked, but impacted availability for services without replicas. We tuned our alerting to the correct level so that the engineer on call could perform this manual fix any time the error occurred again. Two unlucky engineers were woken up to do so by high priority alerts before the failure was fixed.
Automation. Once we were sure that our customer services were safe, we focused our efforts on reproducing the error. The root cause ended up being unexpected stalls when making external calls that were impacting unrelated components. Finding this required the addition of fault injection to cause artificial latency in the component making the calls.
Low chaos: Fix and verification. After the root cause was identified, the fix was straightforward. We decoupled the component experiencing latency in calls from the rest of the system so that any stalls would only affect that component. Some redundancy was introduced into this component so that its operation was no longer impacted by latency, or even isolated stalls, only prolonged and repeated stalls (a much rarer occurrence).
We were able to use the automated chaos operation to prove that the original failure was now handled smoothly without any problems. The failure that used to wake up on-call engineers to perform a potentially availability-impacting fix could now be downgraded to a low chaos failure that our system could recover from with no noise at all.
At this point, the automated failure could be handed off to the Search Chaos Monkey to run regularly as a low chaos operation and continually verify our system’s ability to handle this once-serious error.

Chaos Engineering and the Cloud

At Azure Search, chaos engineering has proven to be a very useful model to follow when developing a reliable and fault tolerant cloud service. Our Search Chaos Monkey has been instrumental in providing a deterministic framework for finding exceptional failures and driving them to resolution as low-impact errors with planned, automated solutions.
Source: http://azure.microsoft.com/blog/2015/07/01/inside-azure-search-chaos-engineering/


Technical: How much time do base configurations save? By Terry Danner, Network Infrastructure Engineer

In the networking world, as in the systems world, we have base configurations that can save many hours of time. Some things are standard on almost all networking equipment. While syntax may change between vendors, your best practices as a network engineer are your standards and do not change. Some things always need to be configured for the network to work correctly; otherwise it becomes a “not”-work.
Below are examples:

Hostname:

This is the name of the switch that will register on the network. Once this command is entered, the prompt also changes to this name at each command level. For example, ManagedSolution> would be the initial login prompt, and ManagedSolution# would be the enable-level prompt.

SNTP Servers:

SNTP = Simple Network Time Protocol. These settings are configured so that the switch syncs its time with an outside reliable source (or sources). This is very important when troubleshooting issues. If you have a failure at 8:00 a.m. on a server that is more than likely using a reliable time source, you will want to look at the switch or firewall log for the same timestamp. If all systems sync their clocks using SNTP, this is much easier.

DNS Servers:

DNS = Domain Name System. These entries tell the piece of network equipment which servers to use to resolve common names to IP addresses, or IP addresses to common names. This really helps with troubleshooting common network problems.

VLAN Names:

This is really an engineer’s standard, but if you are supporting multiple clients and every switch you log into is configured with similar VLANs, you will spend less time learning about the environment and more time fixing it.

IP Helper:

The DHCP server may change, but it is best to enable this on the switch. Remember that the VLAN the DHCP server sits on does not need an IP Helper statement, but clients on other VLANs normally cannot reach the server without one, because the switch will not forward their DHCP requests.

IP Routing:

This really is for Layer 3 purposes only, but if you plan on using it and configuring it, then turn it on. This is your roadmap showing your computers where to go.

Recommendations:

Take each equipment vendor that you configure and support and build a base configuration that includes the above information in a text file: one for Cisco, one for Dell, one for Foundry, and one for any other vendor you may support. Remember to include the configure command, and close each group of interface commands with exit followed by ! to keep the script flowing between sections. An example is below:
configure terminal
interface vlan 4
ip address 10.1.4.1 255.255.255.0
exit
!
interface vlan 8
ip address 10.1.8.1 255.255.255.0
exit
!
By building out a base configuration and executing the script, I have saved countless hours of time configuring the standard settings for switches.
Oh, and remember... once the switch is configured or the problem is solved, the most important command to execute before moving on to the next client/customer issue is: copy running-config startup-config. This way you won’t be reconfiguring the changes to the environment after the next power outage or reboot of the switch.

What is Azure Backup?

Azure Backup, an enabling technology of Availability on Demand, is a scalable solution that protects your application data with zero capital investment and minimal operating costs.

Save up to 80% with cloud backup

*Based on the Gartner study, published in February 2014, comparing TCO of cloud backup to tape backups.
Access full report

Power of backing up to Azure

Data is the heart of any organization and backing up this data is a key part of a business strategy. Azure Backup is a scalable solution with zero capital investment and minimal operational expense.

Protect your critical assets wherever they are

Your data and applications are everywhere—on servers, clients and in the cloud. Azure Backup can protect your critical applications, including SharePoint, Exchange, and SQL Server, files and folders, Windows servers, Windows clients, and Azure IaaS VMs.

Compelling alternative to tape

Due to business or compliance requirements, organizations are required to protect their data for years, and over time this data grows exponentially. Traditionally, tape has been used for long-term retention. Azure Backup provides a compelling alternative to tape with significant cost savings, shorter recovery times and up to 99 years of retention.

Secure and reliable

Your backup data is secure over the wire and at rest. The backup data is stored in geo-replicated storage, which maintains six copies of your data across two Azure datacenters. With 99.9% service availability, the solution provides operational peace of mind.

Efficient and flexible

This solution is efficient over the network and on your disk. Once the initial seeding is complete, only incremental changes are sent at a defined frequency. Built-in features such as compression, encryption, longer retention, and bandwidth throttling help boost IT efficiency.

 

Want to learn more? Contact one of our experts here and get started on your cloud backup journey today.


How To Sync OneDrive For Business on your Desktop

Managed Solution’s In The TechKnow is a Web Tech Series featuring how-to video tutorials on technology.

This series is presented by Jennell Mott, Business Operations Manager, and provides a resource for quick technical tips and fixes. You don’t need to be a technical guru to brush up on tech tips!
Don’t see the technology that you would like to learn? Submit a suggestion to inthetechknow@managedsolution.com and we will be sure to cover it in our upcoming webcast series.
Other #inTheTechKnow videos: https://managedsolut.wpengine.com/inthetechknow/

Sign up for the newsletter so you can be informed of the latest technology webcasts.



Contact us Today!

Chat with an expert about your business’s technology needs.