
Do You Like To Sleep At Night? - By Mike Clinger, Senior Systems Engineer

For years, RAID 5 has been the choice of SAN administrators looking for a high-capacity, fault-tolerant RAID solution. With its N-1 capacity formula, RAID 5 offers a lot of usable storage, which is attractive when capacity is your main focus in choosing a RAID configuration. RAID 5 is also fault tolerant, with a single drive's worth of parity, and this has given SAN administrators a sense that their data is safe even if a hard drive fails. That sense of safety grows when a hot spare is set up in the SAN array: the online hot spare starts a rebuild operation as soon as the array detects that a drive in the RAID 5 set has failed.
It is during this rebuild process that the data is at its most vulnerable. As the drives used in RAID 5 sets grow larger, rebuild times grow longer, sometimes taking several days to complete. During the rebuild, the RAID 5 set has no fault tolerance at all: if a second drive fails before the rebuild finishes, the data is lost. That would cause me some restless nights while the RAID 5 set was rebuilding.
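To get a feel for how long that exposure window can be, here is a rough back-of-the-envelope estimate in Python. The 50 MB/s rebuild rate is an assumption; real rates vary widely with the controller, the drive type, and how busy the array is.

    # Rough estimate of the RAID 5 rebuild window, i.e. how long the array
    # runs with no fault tolerance after a drive failure. The 50 MB/s
    # sustained rebuild rate is an assumption for a busy production array.

    def rebuild_hours(drive_size_tb: float, rebuild_mb_per_sec: float) -> float:
        """Hours to rebuild one failed drive at a sustained rebuild rate."""
        drive_size_mb = drive_size_tb * 1_000_000  # decimal TB -> MB
        return drive_size_mb / rebuild_mb_per_sec / 3600

    for size_tb in (2, 8, 16):
        print(f"{size_tb} TB drive: ~{rebuild_hours(size_tb, 50):.0f} hours exposed")

At 16 TB, that window works out to nearly four days, which lines up with the multi-day rebuilds mentioned above.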

This is why RAID 6 with a hot spare is a much better high-availability configuration than RAID 5 with a hot spare. Because RAID 6 uses two drives' worth of parity, it can withstand a double hard drive failure, which gives the SAN administrator a higher level of comfort. If a RAID 6 set does suffer a drive failure, a rebuild is initiated with the hot spare, but the array is not left in a vulnerable state: thanks to the second parity drive, the RAID 6 set can withstand another drive failure during the rebuild cycle and still keep the data intact.
Sure, RAID 6 does consume an additional hard drive with its N-2 capacity formula, reducing usable storage by the size of one more drive compared to RAID 5. Performance is comparable to RAID 5, though, and the higher availability of the RAID 6 set more than makes up for a little less capacity.
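The capacity trade-off is easy to quantify. Here is a minimal sketch, using illustrative drive counts and sizes:

    # Usable capacity under the N-1 (RAID 5) and N-2 (RAID 6) formulas.
    # The 8-drive, 4 TB figures are illustrative, not a recommendation.

    def usable_tb(drives: int, drive_tb: float, parity_drives: int) -> float:
        """Usable capacity: (N - parity) x drive size."""
        return (drives - parity_drives) * drive_tb

    drives, drive_tb = 8, 4.0
    print(f"RAID 5: {usable_tb(drives, drive_tb, 1):.0f} TB usable")  # 28 TB
    print(f"RAID 6: {usable_tb(drives, drive_tb, 2):.0f} TB usable")  # 24 TB

In this example, RAID 6 costs 4 TB of capacity: the price of surviving a second drive failure mid-rebuild.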
So if you like to sleep at night even when your SAN array has encountered a hard drive failure, make RAID 6 your choice for a high-capacity, fault-tolerant RAID configuration.
About the author:
Mike Clinger has over thirty years of experience working in the information technology field. As a Senior Systems Engineer at Managed Solution, Mike contributes strong technical expertise to projects focusing on storage, cloud, and virtualization solutions.



Application Compatibility Part 2: There Are Answers - By Jason M. Donahue, MCITP, Systems Engineer

In my last article, I discussed some of the obstacles you face when dealing with application compatibility, such as upgrading an existing application or operating system in your enterprise. Whether you're installing the latest version of Windows, running into a new application that breaks one of your existing apps, or watching a new printer decide not to play well with others, there are a number of pitfalls to avoid.
The good news is you’re not alone, and there are solutions.
For those of us looking to upgrade to a new version of Windows, a good place to start is always the Windows Compatibility Center, which can tell you whether your critical business application or piece of hardware has been tested, and what issues you may run into.
The Compatibility Center is only the first step, though. Certainly, it can tell you if something’s going to work or not, but it’s not going to give you an in-depth look at everything your users may run into. The next step, then, is to run Microsoft’s Upgrade Assistant on one of your PCs, which will give you a more in-depth look at potential issues you might face. The Upgrade Assistant looks at three main things:
  1. Will your computer’s hardware support the new version of Windows you’re looking to install?
  2. Are your applications or devices going to be supported?
  3. What features of the new OS won’t be supported on your PC?
This automates a lot of the work you might otherwise have done using the Windows Compatibility Center, and can give you a much more customized, in-depth look at potential compatibility issues you may have with a new operating system upgrade.
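To illustrate the kind of hardware check the Upgrade Assistant automates (item 1 above), here is a toy sketch against the published Windows 8.1 64-bit minimums. It assumes the third-party psutil package is installed, and a real readiness check would cover far more than RAM and disk.

    # Toy hardware-readiness check against Windows 8.1 64-bit minimums
    # (2 GB RAM, 20 GB free disk). Requires: pip install psutil
    import shutil
    import psutil

    MIN_RAM_GB, MIN_DISK_GB = 2, 20

    ram_gb = psutil.virtual_memory().total / 1024**3
    free_gb = shutil.disk_usage("/").free / 1024**3

    print(f"RAM:  {ram_gb:.1f} GB  -> {'OK' if ram_gb >= MIN_RAM_GB else 'below minimum'}")
    print(f"Disk: {free_gb:.1f} GB free -> {'OK' if free_gb >= MIN_DISK_GB else 'below minimum'}")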
While both of these are good solutions for operating system upgrades, neither will really tell you about the pitfalls of incompatibilities between applications.
One example I mentioned last time is Java: while the goal behind Java might have been “write once, run anywhere”, it’s really not that simple. I’ve seen more than one case of one application requiring a version of Java that will break another application on the PC. That’s not something you want to find out after you’ve rolled out a new Java update to all your workstations, and may well result in your end users gathering torches and pitchforks.
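One cheap safeguard is to record the active Java version on a lab machine before and after a rollout, so a silent version change surfaces in testing instead of on user desktops. Here is a minimal sketch, assuming java is on the PATH (note that `java -version` writes its banner to stderr):

    # Capture the active Java version on this machine.
    # `java -version` prints its banner to stderr, not stdout.
    import subprocess

    def java_version() -> str:
        try:
            result = subprocess.run(
                ["java", "-version"], capture_output=True, text=True, timeout=10
            )
            # First banner line looks like: java version "1.8.0_281"
            return result.stderr.splitlines()[0] if result.stderr else "unknown"
        except FileNotFoundError:
            return "java not installed"

    print(java_version())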
This, then, is when you need to test on an actual PC that duplicates your work environment. Whether you're using a single virtual machine or a bank of physical PCs, your test lab should match your production environment as closely as possible. Depending on your environment, this could easily extend to duplicating a portion of your server infrastructure to test upgrades against.
Depending on your current environment, you might be breaking into a cold sweat right about now. Maybe you don't have the physical resources to set up a test lab. Maybe you simply don't have the staffing required to do it. Don't worry: at Managed Solution, we're here to help, and we can steer you clear of the potential pitfalls you may encounter when doing an upgrade.

“The One Thing”: Executing a Successful Migration to Office 365 - By Jeff Lizerbram, MCSA, Solutions Architect

When it comes to self-improvement and time management, I am a bookworm. My latest favorite, by Gary Keller and Jay Papasan, is entitled "The One Thing: The Surprisingly Simple Truth Behind Extraordinary Results" (look for it on Amazon.com). In a nutshell, the authors guide the reader through small actions that can make a huge impact. Whether applied to business, finance, personal life, or other goals, it seems to be a method all successful individuals use, but until recently, how they use it had never really been articulated. Lately, I have been putting this mindset to the test, and during some recent Office 365 migrations I have had the opportunity to lead, The One Thing mantra has led to more seamless implementations than ever before. I want to take this opportunity to write about how I use The One Thing to successfully execute an Office 365 migration for a large enterprise.
The authors explain that the core of "The One Thing" is asking yourself a single question: "What is the ONE thing I can do, such that by doing it, everything else will be easier or unnecessary?" Applying this question to the task at hand, I was able to put together an Office 365 migration plan that was efficient and effective and that helped an organization get from point A to point B with minimal effort in less time. As an added bonus, the plan became a template that could be reused for many other projects down the line.
Migrating an organization to Office 365 can seem like a daunting task, especially when the organization involves thousands of users, thousands more shared and resource mailboxes, and hundreds of thousands of documents that need to be moved to OneDrive for Business. By asking the question "What is the ONE thing I can do to migrate [your company name here] to Office 365, such that by doing it, everything else will become easier or unnecessary?", the tasks that needed to be put in place, and their priority, became evident. Given a definitive time window to complete the project (say, three months), the question can be broken down even further to help prioritize the One Thing that needs to be done at any given time:
“What is the One thing I can do this quarter?”
“What is the One thing I can do this month?”
“What is the One thing I can do this week?”
“What is the One thing I can do TODAY?
...that by doing it, everything else will become easier or unnecessary?”
Thinking through the project this way lets me prioritize what is needed to plan, prepare, and migrate to Office 365. For example, the One Thing I can do this quarter is get my project plan signed off, on time. This month's One Thing might be to complete fully accurate end-user and administrator documentation, so my customer has the information to support their users after the migration is complete. This week's One Thing might be to establish the Office 365 endpoint connectors so that I can test and pilot email and file migrations. Today's One Thing is to gather all of the administrator accounts needed to set up the environment; if you think about it, just having the right login credentials in hand makes everything else easier, doesn't it?
So with this mindset, I'm off to do another Office 365 migration. I hope this helps not only other Office 365 solution architects out there, but perhaps people in general, as a way to organize life!
About the author:
Jeff Lizerbram has over 20 years of experience working in the information technology field. As a solutions architect for Managed Solution with an emphasis on Microsoft Cloud products, Jeff contributes strong technical expertise designing and implementing scopes of work that migrate email, SharePoint, local file storage, Skype for Business, and other critical business systems from on-premises or other hosted platforms to Microsoft's Office 365. Jeff actively plans and designs projects, writes technical and end-user instructional guides, and trains staff and clients. When not working on projects with his Managed Solution team, Jeff loves spending time with his family and exploring the great San Diego outdoors.

Data Loss: Risk Management, Backups, and Disaster Recovery - By George Fedoseyev, Server Engineer, MCSE: Server Infrastructure

Throughout my consulting career, I have viewed business continuity as one of the main areas of focus. It does not matter whether we are working with a home user or a large corporation: the customer's data integrity has always been, and will always be, priority number one. It is imperative to pause your day-to-day operations and find the time to think about the potential consequences of ignoring your backups, not having disaster recovery, or performing risky activities.
It always surprises me how many unnecessary risks IT departments take. When performing system cutovers, migrations, and upgrades, it does not take much to check the status of the backups for the systems involved. In the unfortunate circumstance that the backups are not adequate before a scheduled migration, the migration, or any risky activity, should be postponed until the backup situation is resolved. IT departments are often under heavy pressure from other business units to deliver under strict deadlines; however, data loss should never be allowed to occur. As a Server Engineer at Managed Solution, I can say that we strive to provide the highest quality work and to minimize risk. I am proud to state that Managed Solution has solid guidelines for mitigating backup risk with our BDRs, practices for checking backups prior to cutovers, and overall awareness of the situation. I am happy to see engineers back out of risky projects and put a hold on cutovers that risk data loss until the underlying issue is resolved.
Every business should view its backups as the cornerstone of IT. Go to your IT department right now and ask how confident they are in their backups and how much backup history your existing system can provide. Are you satisfied with the response? Whether you are a small, medium, or even large business, the answer might be no. Why is this the case? Why do businesses so often ignore the most crucial aspect of IT operations, or fail to invest enough to make sure the business is resilient to data loss? This is the question we often ask our clients when planning to build out their environments. Under certain conditions, a poorly timed backup and a server failure can cause irreparable damage to the business or even force it to close its doors for good. Re-evaluate your backup and disaster recovery plans as often as possible and ensure they are aligned with the business needs.
“Hey, I’m mitigating risk according to best practices and my backups are in good shape. I’ve got my bases covered.” I’m sure we are all happy to hear that from any client. There is one caveat, however. Where will the backups be restored to? Are there enough server resources available to absorb the full load of another failed server? If not, how long will it take a vendor to deliver another server to your location? What are your options if your primary file server with 5 TB of data goes down? How long will it take to restore all that data? All of these questions should be considered and discussed with stakeholders before disaster strikes.
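To make the last two questions concrete, here is a back-of-the-envelope restore-time estimate for that 5 TB file server. The throughput figures are assumptions; run a test restore to measure your own.

    # Restore-time estimate for a 5 TB file server at assumed throughputs.

    def restore_hours(data_tb: float, mb_per_sec: float) -> float:
        """Hours to restore a data set at a sustained throughput."""
        return data_tb * 1_000_000 / mb_per_sec / 3600

    for label, rate in [("local disk backup", 200),
                        ("1 Gbps network", 100),
                        ("offsite/cloud link", 25)]:
        print(f"5 TB via {label}: ~{restore_hours(5, rate):.0f} hours")

Even the optimistic case is the better part of a business day, and the offsite case is more than two days of downtime.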
IT professionals who follow risk management, backup, and disaster recovery best practices can come out looking like heroes in an otherwise grim situation. These practices reduce lost productivity, financial losses, stress, and overnight repairs. Do not wait until it is too late; learn from the mistakes of others, and prove your system is run by a true professional!
About the author:
George Fedoseyev is a server engineer with over ten years of experience working in the Information Technology field. For most of his professional career, George has worked for managed service providers, focusing on infrastructure, messaging, and SharePoint design and implementation. Originally from Saint Petersburg, Russia, George moved to sunny San Diego in the early 2000s.


Formalized Change Management – Is It Needed? - By Josh Hendrickson, Systems Engineer

As IT infrastructure grows, so does the need for more administrative staff to manage it. With the increased number of staff members and servers comes an ever-increasing level of system interoperation and complexity. It is at these high levels of growth and complexity that an organization must start to consider a more formalized change management process and procedure.
First, though, let’s back up and talk about what a formal change management process consists of. It generally requires a change management governing body, a change execution entity or member, and the infrastructure itself.
A governing body can be as simple as the next few organizational levels above the infrastructure administrators (generally a department manager and the director or VP of IT). It can also be a complex organizational entity (also known as a steering committee) that includes many different internal departments in addition to a larger IT department. The goal of this governing body is to oversee change requests, ensure clear direction for any infrastructure changes, and ensure there will be little to no unforeseen impact on end-user production systems. The change execution entity can consist of a single IT administrator or a group of IT members working collectively to execute the change as approved by the governing body.
If executed properly, a formal change management process can greatly reduce the risk of service outages, data loss, or critical revenue loss on production systems that must adhere to strict service level agreements (SLAs). Failure to comply with change management process and procedure generally comes with professional consequences: in some cases, non-complying IT staff members can be formally reprimanded or, even worse, terminated.
Some organizations implement change management only on production environments, while others require it on development, QA, and production environments. Any level of formalized change management ensures there is a written record of infrastructure changes, the intended impact (if any) of each change, and the entity that requested and/or executed it.
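To make that written record concrete, here is a minimal sketch of what a change request entry might capture. The field names and sample values are illustrative, not taken from any particular change-management product.

    # A minimal change-record structure: what changed, where, the intended
    # impact, and who requested, approved, and executed it. Illustrative only.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ChangeRequest:
        summary: str           # what is changing in the infrastructure
        environment: str       # "production", "QA", or "development"
        intended_impact: str   # expected end-user impact, if any
        requested_by: str
        approved_by: str       # governing body / steering committee sign-off
        executed_by: str
        scheduled_for: date
        rollback_plan: str

    cr = ChangeRequest(
        summary="Apply firmware update to SAN controller A",
        environment="production",
        intended_impact="None expected; controller B carries I/O during update",
        requested_by="Storage team",
        approved_by="IT steering committee",
        executed_by="Systems engineering",
        scheduled_for=date(2016, 6, 1),
        rollback_plan="Revert to prior firmware image",
    )
    print(f"{cr.summary} [{cr.environment}] approved by {cr.approved_by}")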
Whether driven by an external compliance regulation, such as HIPAA or Sarbanes-Oxley, or by an internal company practice and procedure, the benefits of implementing a formal change management process can prove extremely valuable to an organization. So, as a company executive or a valued member of an organization’s IT staff, it may be time to stand up a formal change control system to mitigate company risks related to Information Technology.
About the author:
Josh Hendrickson is a detail-oriented IT professional who focuses on core infrastructure systems. He was born and raised in Indiana and migrated to California in 2003. He has worked in an array of industries, such as healthcare, marketing, and IT services.
