The Best New Year's Resolutions for IT Departments in 2024

 

As we bid farewell to another year, it's the perfect time to reflect on the past and set our sights on the future. For IT departments, embracing the new year often involves reevaluating strategies, streamlining processes, and leveraging innovative solutions.

As a passionate team of IT experts who champion the many ways bolstering IT can benefit businesses everywhere, we're excited to guide you through some New Year's resolutions that can revitalize your IT approach and bring success in 2024.

 

AI Integration for IT Advancement

Resolution: Embrace the integration of artificial intelligence (AI) in our IT operations to enhance efficiency and decision-making processes.

Why: AI technologies, such as machine learning and predictive analytics, can revolutionize how we manage and optimize IT resources. By leveraging AI, we can automate routine tasks, gain insights from data, and make proactive decisions that contribute to the overall success of our IT initiatives.

 

Automation for Streamlined Operations

Resolution: Embrace automation to streamline repetitive tasks and enhance operational efficiency.

Why: Automation can significantly reduce manual efforts, minimize errors, and accelerate processes. By identifying opportunities for automation in routine tasks, we can free up valuable time for our IT teams to focus on more strategic initiatives, leading to a more agile and responsive IT environment.

 

Embrace Cloud Optimization

Resolution: In 2024, commit to optimizing our cloud infrastructure for efficiency and cost-effectiveness.

Why: Cloud technology is dynamic and ever-evolving. Ensuring that our cloud services are optimized will enhance performance, reduce costs, and allow us to take full advantage of the latest features.

 

Enhance Cybersecurity Measures

Resolution: Strengthen our cybersecurity posture to safeguard against evolving threats.

Why: As cyber threats become more sophisticated, prioritizing cybersecurity is crucial. Implementing robust measures, such as regular security audits and employee training, will fortify our defenses.

 

Implement Proactive Monitoring

Resolution: Transition to proactive monitoring for early issue detection and swift resolution.

Why: Reactive approaches can lead to downtime and disruptions. Proactive monitoring ensures that potential issues are identified and addressed before they impact operations.

 

Upgrade Legacy Systems

Resolution: Develop a plan to systematically upgrade legacy systems to modern, efficient solutions.

Why: Outdated systems pose security risks and hinder performance. Upgrading to the latest technologies ensures we stay competitive, secure, and aligned with industry standards.

 

Optimize IT Budgets

Resolution: Conduct a thorough review of IT budgets to identify cost-saving opportunities without compromising performance.

Why: Efficient budget allocation is essential for achieving business objectives. Identifying and eliminating unnecessary expenses will optimize our IT spend.

Interested in learning more? Check out our blog on Software Sprawl.

 

Promote Collaboration Tools

Resolution: Implement or enhance collaboration tools to boost team productivity.

Why: Effective communication and collaboration are cornerstones of success. Integrating advanced collaboration tools will empower our teams to work seamlessly, irrespective of location.

You can learn more by reading our blog on Microsoft Viva, or click here to see all of the powerful collaboration tools and services we offer to amplify your team’s engagement and productivity.

 

Invest in Employee Training

Resolution: Prioritize ongoing training to keep our IT teams well-versed in the latest technologies.

Why: The tech landscape evolves rapidly. Investing in continuous training ensures that our teams are equipped with the skills needed to navigate emerging trends.

For internal training resources, you can access our past webinars for expert walkthroughs on various tools and technologies that all IT teams should know.

 

Explore New Microsoft Solutions

Resolution: Stay abreast of the latest Microsoft solutions and integrate them into our IT ecosystem.

Why: Microsoft offers a suite of powerful solutions. Regularly exploring and adopting new tools can enhance productivity and keep us at the forefront of technological innovation.

Learn more about Microsoft tools and services that you can access through our trusted team.

As we step into 2024, let's embark on a journey of IT excellence. These resolutions serve as a roadmap for a successful and technologically advanced year. If you're ready to turn these resolutions into reality, our team at Managed Solution is here to support you every step of the way. Here's to a year of innovation, efficiency, and IT success!

 

More Resources

It's no surprise by now that IT leaders have ever-greater responsibilities concerning strategic business tasks and various digital transformation initiatives. But with added responsibility come other operational challenges, such as hiring top talent with the skills needed to put those initiatives into practice.

Increasing operational efficiency, strengthening cybersecurity, leveraging IoT devices, and managing cloud adoption and integration are some of the most pressing issues that CIOs and other IT leaders need to address. These initiatives, among others, are dictating the current trends in the talent market.

What Are the Toughest Roles to Fill in IT?

Delivering on most IT initiatives these days requires a heavy focus on cybersecurity and risk management, and it's precisely in these areas that CIOs struggle most to find qualified personnel. Next come cloud services, integration, multi-cloud management, and overall cloud architecture. Cloud computing, as a whole, provides tremendous potential for flexibility and scalability, particularly in a disrupted or uncertain global economy.

Other areas that have a tough time hiring top talent are enterprise architecture, DevOps/Agile processes, automation, and IoT implementation. Below is a list of the hardest senior roles to fill in IT in 2019.

  1. Security and risk management
  2. Cloud services and integration
  3. IoT (connected devices, sensors)
  4. Enterprise architecture
  5. Multi-cloud management
  6. Automation and robotic processes
  7. Cloud architecture
  8. DevOps/Agile processes

The Scramble for IT “Unicorns”

With so many organizations undergoing digital transformation, it's no wonder that there's fierce competition for top talent or IT “unicorns,” as we like to call them. Companies have started adopting all sorts of strategies to attract these unicorns with varying degrees of success.

Some companies have adopted extreme hiring strategies in this attempt. And while they may make sense on paper, in practice they can prove disastrous. On the one hand, some businesses create an endless stream of applicants going in and out of the organization, hoping that this massive influx will inevitably yield a few worth keeping. The problem is that this high turnover rate will not only increase costs but also lower company morale.

Conversely, others have done the complete opposite. Their approach is to look at every hire as a diamond in the rough that, with enough time and investment, will turn into a top performer. This strategy, however, often results in precious resources being squandered on employees who don't have the capability or willingness to excel.

There is no one-size-fits-all approach, but neither these strategies nor higher salaries and benefits alone are the way to go. All of these can prove to be quite costly and ineffective in an overly competitive job market.

When to Hire and When to Outsource

CIOs need to take the idea of outsourcing very seriously if they wish to remain competitive. Some areas should be staffed with full-time, in-house hires as much as possible, particularly data analytics, AI/ML, and CX/UX design, or any other function that proves vital to an organization's success.

Other fields, however, are changing and innovating too fast to keep up with, and upskilling employees may be too costly an endeavor. Such areas include security, automation, and cloud architecture. Even though they are essential, they are better off in the hands of trained professionals, who have access to a large body of skill sets through an entire help desk and a team of engineers and solution architects. And let's not forget the costs of salaries, benefits, and other in-house expenses such as workstations.

IT leaders should decide which parts of their processes should be in-house and which should be outsourced, depending on the company's individual needs and goals.

With the almost incredible strides that technology has made over the past two decades, IT leaders are quickly becoming critical players in fueling business transformation. But with the nearly overwhelming amount of technology introduced daily, it's even hard for CIOs to know where to invest their attention.

Even though the tech market is expected to slow from 4.5% growth in 2019 to 3.8% in 2020 due to economic uncertainty, the growth gap between business and back-office tech is expected to narrow. This trend is due, in large part, to many companies' efforts to reduce operating costs and preserve profit margins.

Also, the driving forces behind increased tech spending still exist. These include things like the adoption of cloud, artificial intelligence, improved analytics tools, back-office tech, as well as CX-oriented technologies.

If you haven't invested already, here is how IT creates business value for your organization.

Tackling IoT Data

Many companies today can capture and analyze data, but very few have the necessary infrastructure to deploy and manage IoT applications based on that data. As edge computing becomes more sophisticated, businesses require efficient storage and processing capabilities that can collect, analyze, and act on that data. A priority of every IT team is to develop the right platform that can do this and maximize the business value of IoT.

Eliminating Old and Irrelevant Data Metrics

A lot of organizations still operate on siloed data, the ever-present spreadsheets, and multiple versions of the same data. By defining and using a data analytics platform that can deliver actionable, data-driven intelligence across the business, companies can retire half, if not more, of their legacy reports.

Adopting AI and ML

AI and machine learning are vital aspects of digital transformation. A comprehensive and integrated data environment is an essential component for robotics, AI, and IoT, one that can get all the information out of disparate documents and into a single source that's always up to date. Such an environment makes it possible to get real-time information on schedules and budgets, as well as to make the company more flexible and resilient to future economic uncertainties.

Improve Customer Experience

Customers today judge a company based on the experiences it offers; CX is more important than the price or the product itself. IT teams should focus on using data to create an easy-to-read customer profile, which can then be used to generate a unified customer experience for every individual. This provides customers with a consistent and contextualized experience across all channels and touchpoints, which in turn delivers higher rates of trust, customer satisfaction, retention, and, down the line, higher profitability.

Catering to the Mobile Workforce

Customers, partners, and employees all expect anytime-anywhere access to information, yet very few businesses are prepared to embrace the proliferation of wireless devices in the modern workplace. Telecommuting and cross-departmental collaboration are at an all-time high and are only expected to grow, which demands more flexibility in when, where, and from which devices employees do their work. Unfortunately, many companies do not have the necessary wireless infrastructure to cater to this mobile workforce. The IT department will be vital in keeping up with this demand.

Security and Costs 

Data breaches and other forms of cybercrime have never been more prevalent. Then there's the matter of compliance with various governmental regulations such as the EU's GDPR, the California Consumer Privacy Act (CCPA), and the Health Insurance Portability and Accountability Act (HIPAA). IT leaders and departments can implement solutions and leverage technologies that tackle security threats and manage regulatory compliance, saving companies money.

IT can also help manage spend across the organization by deploying the right technologies. One of the most significant issues in this regard is the cost of added services when businesses buy their own SaaS tools. IT can determine what's needed and what's not to mitigate unnecessary spending.

To download the full magazine and read the full interviews, click here.

Michael Scarpelli acts as Director, IT, Technical Support Manager at La Jolla Institute for Immunology (LIAI). He oversees LIAI’s team of support technicians and assists in managing the day-to-day flow of the Information Technology Department. With assistance from Senior Information Technology Manager John Stillwagen, Michael is an integral part of making sure that business, both administrative and research, runs smoothly at LIAI.

Michael joined LIAI in 2002 as a Tech Support Specialist. A Writing major from UCSD, Michael brings a broad skill-set to the Information Technology Department, and is continually looking to advance and expand the functions of the IT Department at La Jolla Institute for Immunology.

What did you want to grow up to be when you were a kid?

When I was a little kid I wanted to be a scientist. Then I hit high school chemistry and there was just some disconnect between my brain and what they were teaching me. Same thing with math. I was very good at math up until calculus level and then natural ability ran out and the subject matter sort of caught up with me. Then it was, "No buddy, this is not working anymore."

At that point I made a hard left into Bachelor of Arts, literature kind of stuff. I'm actually a writing major so I have nothing applicable to what I'm doing in any way. That’s how a lot of my friends are. They went for astronomy or their passions and then they end up doing something totally different.

Growing up I'd always been interested in computer stuff because they were cool toys to play with and I liked getting into them. I wasn't afraid to sort of poke around on them. I had a Mac computer growing up and then, at college, one of my friends, John Stillwagen, he's our Management Information Services director now, was working here at the institute and he said, "Do you want a job? Go apply." I showed up and essentially my interview was my boss shaking my hand and saying, "Do you know how to use a Mac computer?" And I said, "Yes." And he said, "Great, you're hired."  So initially what got me into it was not necessarily my saying, "Hey, I want to do IT." It was more that it was already something that I was doing on my own.

This is where the communication aspect finally comes into play: I like to try to translate and explain ideas to people, and I like to answer questions. That's a lot of what support is like, especially in the early stages. You have to just sort of walk people through: not only is this how you do things, but let me explain to you how it works so that you maybe understand it the next time and we don't have this problem repeatedly forever. So that's sort of how I got into IT.

I like being able to fix problems, so being able to do that is what's kept me in the position for as long as I've been here.

What super power do you want most?

Being able to stop time, slow down time, affect time. I think there's a lot of different applications for that. There's a lot of flexibility to it. I feel like you can get creative if you can stop time.

If you were on an island, what three things would you bring?

Well, I feel like I'd probably want some sort of board game. I'm not a chess player, but that's one of those games that I can think of where there are so many solutions. Checkers has been solved. A computer has figured out how to win any game of checkers, right? Chess, I think they haven't quite gotten there yet. I think I read something like there's supposed to be more combinations of chess games available than there are stars in the universe kind of a thing. The number of permutations you can have is so vast, that you could always be coming up with new ways to play a game. I feel like that would be good.  I'd have to pick a book, I just don't know what book it would be. Maybe some anthology book.

Maybe something like a soccer ball or something, something you can entertain yourself with, something that will keep yourself in shape and entertained, but also like you could draw a face on it, make it a new Wilson or whatever.

What’s the area of focus that you're concentrated on?

Generally, my focus is still sort of where I started out, which is end user support.  The primary goal is always how is what we're doing affecting the researchers and their ability to do the research, essentially. We try to really focus on that more than adherence to any particular sort of IT standard or methodology. I feel like it's really common that it's IT's job to enforce a specific set of restrictions or standards on the company. Whereas for us, unless there's a really compelling reason to say no, like it physically cannot be done or it's a really dangerous idea, we aim to say, yes, and here's how we'll help you get there.

We’re trying to bring the most minimal amount of friction to the way research gets done. As a result, we have a pretty good rapport with the general user population. I think users trust us to handle data and solve problems effectively, so I don't believe we have a lot of shadow IT issues, where people are buying things on their own because IT can't solve it, or they've tried to fix it through the official channels and it didn't work so they did their own thing, or went out and bought their own software or pulled in an external hard drive from home. I feel like people generally know that if they come to us, they'll get the help that they need. They don't have to go and look on their own to do something.

Are you a part of the executive conversation with growing the business forward?

Yes. The structure for our leadership is going to be a little different from the standard company. We have an executive vice president and chief operating officer. And then above him is the president and scientific director. When I started, the president was just a head of a lab and also a division, which is like a logical grouping of labs. I feel like he still primarily thinks of things like a scientist. So, for him, research is paramount and protecting the unique structure and feel of the institute is really important.

The COO, who is my direct boss, has been my direct boss since I got hired, used to be the IT manager and is now the COO of the company. He understands the IT side of things. He was also a researcher himself, has his PhD and had been an immunologist at some point. He understands both sides of the equation very well and can back initiatives that we are pushing forward, knowing that we're making the right choices. I'd say it's pretty easy to feel like we're part of that discussion and the driving goals of IT are aligning well with what the organization on the whole is looking to accomplish.

We've developed a level of mutual trust. The COO knows that we are going to try to do our best to assist the research, and we know that if we really need him to come to bat for us, that if it's something that really matters, we know what can happen.

What does this year look like for you?

I've been focused on security. We have a simple site, we have a simple network. So, it's pretty easy to protect the perimeter. Single firewall, single site. We're able to say, “Let's just not let all this stuff in unless it's through the VPN or is a service that we specifically allow.” The network security side of things is fairly straightforward. Obviously, I'm sure security experts anywhere wince when someone says that to them. Where it gets tricky is the end users. Especially where we're at because we have a lot of visiting scientists. We've got 400 people in a given year and you might have 100 people turn over in the course of a year. It’s because you have people who are grad students, who maybe graduate or get another job or go somewhere else. You'll have post-doctoral candidates, and they're there to be doing lab work, but their goal is to have their own lab, so they're going to leave one day. That's sort of the ideal scenario, that they're doing so well, they have projects of their own, they go off and start their own gig.

You’ll have visiting scientists, people from China, Japan, Spain, the Netherlands, anywhere, all over the world. And they're here for months and then they leave. Having very user-focused rules is important because we're not their home institution, we're not even their home country for many of them. So, it's difficult to instill the sense of corporate culture responsibility when they're just going to be leaving soon. You really want to have easy options available to them.

The labs are all sort of their own little companies; they all bring their own funding, and I don't simply control what they're going to buy. We have some labs that are all PC, some labs are all Mac. Some labs, everybody gets their own stuff; some labs it's bring your own from home. That’s all based on their funding level, but also the comfort level of the PI, of the lab head, the principal investigator. Which is also why we have to be flexible in our department. I can't say to anyone, you have to be using this particular hardware and it can't be more than four years old and it has to be running all this stuff, because if they don't have the money to pay for that, I can't be like, “Well then you can't do your research.”

The other issue that we have used to be storage. The storage in research nonprofits was a big deal for a while; it was the topic of a lot of conferences. But that's sort of been "solved" now. It's tricky because we generate a ton of data but we don't have Fortune 500 budgets. We just have nonprofit institute budgets. But, the data output is excessive. The question's not how do you store it, it's how do you find it again. We're now looking at solutions that will enable us to do a lot more metadata management and a lot more automated assistance with tagging and sorting and collecting that data.

Analytics is probably downstream of that. Data management is what's really critical, and it's something that most file systems don't really do well natively. Especially not at the volume we would need. We're dealing with 150-200 million files. Research data is frequently write once, read seldom if ever. You may have people that just collect data, and they should be, it's their job. But it may be data that they didn't really need to analyze at the time because it didn't quite get them the results that their research needed, but you still want to keep it. It also means that people aren't touching it very often, so it's really easy to lose track of it.

A lot of places cover this with data librarians, and we may move in that direction. Larger institutions solve it just through raw manpower. They throw interns at it, they throw grad students at it, which is not a luxury we have.

The next initiative will be probably coming up in two years from now, which is then moving that data quickly. Especially as more collaborations are being done between institutions and those datasets are not small. You're going to be dealing with stacked TIF images. Maybe one image file you're working with is actually a stack of hundreds of other images compressed to give a 3D model of a cell structure. That's going to be a 50 gigabyte single file that you might need to send to somebody, and maybe they don't want to wait seven hours to get it. Having a network that's sort of hardened to do large transfers like that and systems that will help chaperone transfers across the land is going to be big for us.

What's the greatest mistake that you learned from?

Coming up through the ranks as a part-time, hourly help desk guy I think was very helpful because you make a lot of mistakes doing that stuff. I can think of a time where I lost somebody's data. I can think of times where I didn't give someone a good answer or I didn't follow up with them. It helps me understand the help desk that I'm managing and the work that they're doing, because it's work that I did for a long time. I can tell what flies and what doesn't. Somebody tells me, “ I didn't have time to do this,” I can be like, “Well, you did. You just didn't do it.” Or I can be like, “Yeah, I can tell it's been crazy, I've seen the tickets. I know what's been going on.”

I would say more specifically for me on that, I learned a long time ago to sort of separate my ego from the process. In IT, you don't often have great conversations with people. It's usually like why is my stuff broken? What happened, what did you do wrong? People don't generally come to you to say everything is great today. If nothing is broken, people wonder why we have so much IT staff. But the answer is the reason it's not broken is because you have so much IT. You need to learn to be not defensive about that or to realize that it has no bearing on you as a person necessarily, as long as you are doing your best in the job. Being able to just have someone unload on me and let them know we'll fix it for you and we'll make it better, I think goes a long way.

You always hear horror stories about people's bosses and the way they handle conflict, and I feel like a lot of that is tied with their sense of ownership and power in the organization. I think not having that is crucial for this kind of service-oriented role. I still think of it as a service. It doesn't matter that I've been doing it as long as I have, it doesn't matter that I have a director title, I still consider myself to be a service employee. I solve tickets regularly in the system. I'm still one of the guys that goes out to a microscope to install new software.

There are things where the help desk has surpassed my knowledge set at this point because they're doing it every day. There are times I don't know the method they're using anymore. Also realizing that it's sort of like working in retail. You're going to see some ugly stuff out of people, and sometimes realizing that they're not mad at you, they're mad at a situation. Being able to grasp that is important.

What are the hiring challenges, and how do you hire?

I’m actually usually not super focused on an applicant’s resume. I look for certain key words, but for the help desk especially, I actually prefer a resume that is not a six pager or super dense. I am willing to hire people who are pretty green as long as it seems like they have the willingness to pick stuff up and run with it. We tend to hire for who's going to fit best for both the group and the personality and structure of the organization. So, if we get a sense that someone's a self-starter, that they are a good communicator, that they are going to be able to be relaxed in a stressful situation, those are things we tend to hire for more than this person has three pages of certifications and has worked at a hundred tech companies and has probably seen it all before.

Now obviously, sometimes those candidates are great because if they can come in and know everything and hit the ground running, then awesome. Perfect. But I also feel like sometimes you come in with a lot of preconceived methods about ways of handling situations. I hire based on fit for the team more than I do for the resume. You could have a pretty low amount of actual resume experience, but as long as it seems like you'll catch on quick, that'll work out pretty well for you. We want someone who can communicate well to our end users rather than someone who just knows it all already.

I’d say the biggest challenge there is that, since it's sort of an entry-level-ish kind of job, depending on the tier of tech, you tend to have people who shotgun resumes out. So, we'll get people who are fresh out of school, and you'll also get people who are clearly looking for high-end, six-figure-salary sort of jobs. And they'll say that specifically in our recruiting system. It appears they didn't apply for this job, a robot applied for this job, or they just applied to anything that had a keyword. We get a lot of that.

If you could give guidance to any IT manager about how they position their careers, what would you tell them?

I think it's really important to be part of the greater organization and to work a lot with other groups where possible. When another department has a technology problem they need to solve or a problem you can solve with software, really work with them on it. Be involved in that process because it helps you both understand the business. It will help you immerse faster, but it also makes you more valuable. You'll be involved in more discussions because people will have learned that they can bring something to you and you'll be willing to sit down with them and solve the problem.

I find that the more up front you are about stuff, the less things can come back and bite you in the ass later on, and the more people believe in you and depend on you.

Final Question:

Top concern. Choose from: security, mobility, IoT, analytics, DevOps, advanced systems architecture, cloud, automation. Pick your top three and rank them.

  1. Security
  2. Mobility
  3. Cloud

Tech Spotlight Interviews: IT is a journey, not a destination. We want to hear about YOUR journey!
Are you a technology innovator or enthusiast?
We would love to highlight you in the next edition of our Tech Spotlight. Learn more: http://info.managedsolution.com/c-level-interview-registration


All IT Jobs Are Cybersecurity Jobs Now

By Christopher Mims as written on wsj.com
The rise of cyberthreats means that the people once assigned to setting up computers and email servers must now treat security as top priority
In the Appalachian mountain town of West Jefferson, N.C., on an otherwise typical Monday afternoon in September 2014, country radio station WKSK was kicked off the air by international hackers.
Just as the station rolled into its afternoon news broadcast, a staple for locals in this hamlet of about 1,300, a warning message popped up on the screen of the program director’s Windows PC. His computer was locked and its files—including much of the music and advertisements the station aired—were being encrypted. The attackers demanded $600 in ransom. If station officials waited, the price would double.
The station’s part-time IT person, Marty Norris, was cruising in his truck when he got the call that something was amiss. He rushed to the station. “I immediately pulled the plug on his computer,” says Mr. Norris.
In a quick huddle, the possibility of paying the ransom was raised, but the idea didn’t get far. “We’re a little bit stubborn in the mountains,” says General Manager Jan Caddell. “It’s kind of like being held up. We thought if we paid, they’d just ask for more.”
Security experts believe this particular strain of ransomware has netted criminals at least $325 million in extorted payments so far, but the real figure could easily be twice that.
The global “WannaCry” ransomware attack that peaked last week, and has affected at least 200,000 computers in 150 countries, as well as the growing threat of Adylkuzz, another new piece of malware, illustrate a basic problem that will only become more pressing as ever more of our systems become connected: The internet wasn’t designed with security in mind, and dealing with that reality isn’t cheap or easy.
Despite all the money we’ve spent—Gartner estimates $81.6 billion on cybersecurity in 2016—things are, on the whole, getting worse, says Chris Bronk, associate director of the Center for Information Security Research and Education at the University of Houston. “Some individual companies are doing better,” adds Dr. Bronk. “But as an entire society, we’re not doing better yet.”
Ever greater profits from cyberattacks mean cybercriminals have professionalized to the point where they are effectively criminal corporations, says Matthew Gardiner, a cybersecurity strategist for Mimecast, which manages businesses’ email in the cloud. Instead of hackers fumbling their way through complicated financial transactions, or money whizzes fumbling their way through malware design, there is true division of labor. As in any other industry, specialization begets efficiency.
Large (legitimate) corporations have the resources to hire talent to protect their digital assets, but for small- and medium-size businesses, it’s harder. There’s no shortage of good advice on how to perform basic security hygiene, but who’s there to implement it? The solution is resource management, with a focus on cybersecurity. Dr. Bronk lays it out like this:
1. Retrain IT staff on security—or replace them. In today’s world of ever-multiplying threats and dependence on connected assets, all IT staff must now be cybersecurity staff first. “The good news is that you don’t need that dedicated person to run your email server anymore—they can run security,” says Dr. Bronk.
2. Push everything to the cloud. It used to be that the job of IT personnel was to build and maintain the tools employees need. Now, pretty much anything can be done better with a cloud-based service. “I mean, even the CIA uses Amazon’s web services,” says Dr. Bronk. “If there’s a best of breed, why not use it? If you want a safe car, go buy a Volvo.”
Marty Norris tests program back up at WKSK in West Jefferson, N.C. Photo: Andy McMillan for The Wall Street Journal
 3. New IT investment will need baked-in security. Data from the Bureau of Labor Statistics indicates jobs in IT security are one of the fastest-growing categories in tech, up 33% in the past four years alone. That’s probably due to companies simply catching up on investing in cybersecurity after years of under-investment, says Mr. Gardiner.
Diana Kelley, global executive security adviser at IBM Security, a division of International Business Machines Corp., compares the current state of network security to graphical user interfaces in their earliest days, when they weren’t particularly intuitive. Collectively, designers and engineers learned to prioritize and improve them. “Security can be like that, too,” she adds. “We can think about it upfront and weave it into the process in a much more effective way.”
The cloud isn’t perfect, of course. A breach, disclosed last week, exposed customer email addresses, allowing attackers to target them with convincing emails that included a malware attachment disguised as a Microsoft Word doc. And then there’s the fact that massive denial-of-service attacks like Mirai can make the cloud inaccessible at critical times.
WannaCry is a good example of how increasing cybersecurity can be relatively simple—thwarting it was as simple as keeping Windows up-to-date. On the other hand, it used a sophisticated exploit lifted from a hack of National Security Agency tools that allowed it to spread directly from one computer to another, infecting systems in companies that might have been prepared for other kinds of attacks. These kinds of systemic weaknesses employed by or stolen from governments have led Microsoft to plead for a “Geneva Convention” on cyber weapons.
President and general manager Jan Caddell, program director Nathan Roland and IT staffer Marty Norris monitor things at radio station WKSK in West Jefferson, N.C., on Friday. Photo: Andy McMillan for The Wall Street Journal
 
As for West Jefferson’s own WKSK, the station was lucky. Mr. Norris, its IT consultant, had backed up the computers. He was able to wipe the slate clean and get everyone back on the air in a few hours. It’s a good illustration of how prioritizing even the most basic cybersecurity practices can be a life-saver.
Since then, he has implemented offline backups of the station’s computers, just in case. He’s also become a keen student of the kind of attacks, such as WannaCry, that can affect small organizations. As soon as he read that it could hit older systems, he rushed to protect them at his day job—as the IT person for the local school district.
Appeared in the May 22, 2017, print edition as 'All IT Jobs Are Security Jobs Now.'

Looking for a technology partner to assist with a specific project? Call Managed Solution at 800-208-3617  or contact us to schedule a full analysis on the performance of your network.

Network Assessment & Technology Roadmap


Talking DevOps, hardcore air hockey and more with Donovan Brown


Written by Vanessa Ho as found on blogs.microsoft.com

Donovan Brown was a new technical seller at Microsoft struggling with a demo when he sent the email that changed his life.
“I had completely hosed the VM [virtual machine] I was using,” Brown recalled. He sent a desperate, cold-call email to a technical evangelist for help, which led to an invite for Brown to demo on stage, which led to a meteoric career rise.
Three years later, the once-unknown salesman has become one of Microsoft’s top presenters, with Brown now the senior program manager in charge of the company’s vision for DevOps, an approach to software development that incorporates Agile methodologies. DevOps calls for development and operations teams to step out of their traditional silos and collaborate in a system that emphasizes automation, testing, monitoring and continuous delivery.
Microsoft Build 2016, San Francisco.

Many organizations are interested in DevOps for its potential to deliver products faster in evolving markets, but aren’t sure how to build a supply pipeline or adopt new ways of working. A longtime developer who is also passionate about car-racing and air hockey, Brown has risen as a leader at a critical time for the industry, demystifying DevOps for thousands of IT pros around the world.
“DevOps is here. It is how you succeed. It is how you beat the competition. Why should you do DevOps? Because your competition already is,” Brown said recently in a demo for developers at Microsoft’s Ignite New Zealand conference. It was one of his many high-profile appearances in 2016, which included keynotes at Microsoft’s enormous Build and Ignite events.
Along the way, Brown has become known for his quirky personal brand as a gifted public speaker who also has killer technical chops. His winking catchphrase, “I’m going to rub a little DevOps on it and make it better,” has spawned the memorable hashtag #RubDevOpsOnIt. He has become so recognized in dev circles that he’s now known as “The Man in the Black Shirt,” a reference to the polos he wears on stage.
Read the full story here.

SQL Server as a Machine Learning Model Management System

By Rimma Nehme as written on blogs.technet.microsoft.com

Machine Learning Model Management

If you are a data scientist, business analyst or a machine learning engineer, you need model management – a system that manages and orchestrates the entire lifecycle of your learning model. Analytical models must be trained, compared and monitored before deploying into production, requiring many steps to take place in order to operationalize a model’s lifecycle. There isn’t a better tool for that than SQL Server!

SQL Server as an ML Model Management System

In this blog, I will describe how SQL Server can enable you to automate, simplify and accelerate machine learning model management at scale – from build, train, test and deploy all the way to monitor, retrain and redeploy or retire. SQL Server treats models just like data – storing them as serialized varbinary objects. As a result, it is largely agnostic to the analytics engine used to build a model, making it a good model management tool not only for R models (because R is now built into SQL Server 2016) but for other runtimes as well.
SELECT * FROM [dbo].[models]


Figure 1: Machine Learning model is just like data inside SQL Server.
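To make the "models are just like data" idea concrete, here is a minimal sketch of what a model catalog table behind the query above might look like. Everything beyond the dbo.models name is an illustrative assumption (the column names, the stored procedure), not a schema prescribed by SQL Server.

CREATE TABLE dbo.models (
    model_id      INT IDENTITY(1,1) PRIMARY KEY,
    model_name    NVARCHAR(256)  NOT NULL,
    model_version INT            NOT NULL DEFAULT 1,
    created_at    DATETIME2      NOT NULL DEFAULT SYSUTCDATETIME(),
    model         VARBINARY(MAX) NOT NULL  -- the serialized model object (e.g., a serialized R model)
);
GO

-- Store a trained model handed in as a varbinary parameter, just like any other row of data.
CREATE PROCEDURE dbo.save_model
    @model_name NVARCHAR(256),
    @model      VARBINARY(MAX)
AS
BEGIN
    INSERT INTO dbo.models (model_name, model)
    VALUES (@model_name, @model);
END;
GO

Because the model lives in an ordinary varbinary column, backup, security and auditing features apply to it with no extra machinery.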

The SQL Server approach to machine learning model management is an elegant one. While there are existing tools that provide some capabilities for managing models and deployment, using SQL Server keeps the models “close” to the data, making nearly all the capabilities of a management system for data transferrable to machine learning models (see Figure 2). This can simplify the process of managing models tremendously, resulting in faster delivery and more accurate business insights.

Figure 2: Publishing intelligence to where data lives. By pushing machine learning models inside SQL Server 2016 (on the right), you get throughput, parallelism, security, reliability, compliance certifications and manageability, all in one. It’s a big win for data scientists and developers – you don’t have to build the management layer separately. Furthermore, just like data in databases can be shared across multiple applications, you can now share the predictive models. Models and intelligence become “yet another type of data”, managed by SQL Server 2016.

Why Machine Learning Model Management?

Today there is no easy way to monitor, retrain and redeploy machine learning models in a systematic way. In general, data scientists collect the data they are interested in, prepare and stage the data, apply different machine learning techniques to find a best-of-class model, and continually tweak the parameters of the algorithm to refine the outcomes. Automating and operationalizing this process is difficult. For example, a data scientist must code the model, select parameters and a runtime environment, train the model on batch data, and monitor the process to troubleshoot errors that might occur. This process is repeated iteratively on different parameters and machine learning algorithms, and after comparing the models on accuracy and performance, the model can then be deployed.
Currently, there is no standard method for comparing, sharing or viewing models created by other data scientists, which results in siloed analytics work. Without a way to view models created by others, data scientists leverage their own private library of machine learning algorithms and datasets for their use cases. As models are built and trained by many data scientists, the same algorithms may be used to build similar models, particularly if a certain set of algorithms is common for a business’s use cases. Over time, models begin to sprawl and duplicate unnecessarily, making it more difficult to establish a centralized library.


Figure 3: Why SQL Server 2016 for machine learning model management.

In light of these challenges, there is an opportunity to improve model management.

Why SQL Server 2016 for ML Model Management?

There are many benefits to using SQL Server for model management. Specifically, you can use SQL Server 2016 for the following:
  • Model Store and Trained Model Store: SQL Server can efficiently store a table of “pre-baked” models of commonly used machine learning algorithms that can be trained on various datasets (already present in the database), as well as trained models for deployment against a live stream of real-time data (a scoring sketch follows this list).
  • Monitoring service and Model Metadata Store: SQL Server can provide a service that monitors the status of the machine learning model during its execution on the runtime environment for the user, as well as any metadata about its execution that is then stored for the user.
  • Templated Model Interfaces: SQL Server can store interfaces that abstract the complexity of machine learning algorithms, allowing users to specify the inputs and outputs for the model.
  • Runtime Verification (for External Runtimes): SQL Server can provide a runtime verification mechanism using a stored procedure to determine which runtime environments can support a model prior to execution, helping to enable faster iterations for model training.
  • Deployment and Scheduler: Using SQL Server’s trigger mechanism, automatic scheduling and an extended stored procedure you can perform automatic training, deployment and scheduling of models on runtime environments, obviating the need to operate the runtime environments during the modeling process.
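As a hedged illustration of the model store and deployment ideas above, the sketch below pulls a stored model out of the hypothetical dbo.models table and scores new rows with sp_execute_external_script, the procedure that runs R in-database in SQL Server 2016 (R Services must be installed and enabled). The model name, the dbo.new_customers input table and the prediction call are assumptions for the example, not a prescribed pipeline.

DECLARE @model VARBINARY(MAX) =
    (SELECT TOP (1) model
     FROM dbo.models
     WHERE model_name = N'churn_logistic_regression'
     ORDER BY model_version DESC);

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'
        mdl <- unserialize(model)                                # rebuild the R object from varbinary
        OutputDataSet <- data.frame(score = predict(mdl, InputDataSet))',
    @input_data_1 = N'SELECT * FROM dbo.new_customers',          -- rows to score (assumed table)
    @params = N'@model VARBINARY(MAX)',
    @model = @model
WITH RESULT SETS ((score FLOAT));

Wrapping a call like this in a stored procedure is what makes the trigger- and scheduler-based deployment described above possible.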
Here is the list of specific capabilities that make the above possible:

ML Model Performance:

  • Fast training and scoring of models using operational analytics (in-memory OLTP and in-memory columnstore).
  • Monitor and optimize model performance via Query store and DMVs. Query store is like a “black box” recorder on an airplane. It records how queries have executed and simplifies performance troubleshooting by enabling you to quickly find performance differences caused by changes in query plans. The feature automatically captures a history of queries, plans, and runtime statistics, and retains these for your review. It separates data by time windows, allowing you to see database usage patterns and understand when query plan changes happened on the server.
  • Hierarchical model metadata (that is easily updateable) using native JSON support: Expanded support for unstructured JSON data inside SQL Server enables you to store properties of your models in JSON format and then process that JSON just like any other data inside SQL (see the sketch after this list). It enables you to organize collections of your model properties, establish relationships between them, combine strongly-typed scalar columns stored in tables with flexible key/value pairs stored in JSON columns, and query both scalar and JSON values in one or multiple tables using full Transact-SQL. You can store JSON in in-memory or temporal tables, you can apply Row-Level Security predicates on JSON text, and so on.
  • Temporal support for models: SQL Server 2016’s temporal tables can be used for keeping track of the state of models at any specific point in time. Using temporal tables in SQL Server you can: (a) understand model usage trends over time, (b) track model changes over time, (c) audit all changes to models, (d) recover from accidental model changes and application errors.
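A short sketch of the JSON and temporal points above, continuing with the hypothetical dbo.models table from earlier. The metadata properties, constraint names and the AS OF date are made up for illustration, and the last query assumes the table was created with system versioning (temporal) enabled.

-- Attach flexible, hierarchical metadata to each model as JSON.
ALTER TABLE dbo.models
    ADD metadata NVARCHAR(MAX) NULL
        CONSTRAINT chk_models_metadata CHECK (metadata IS NULL OR ISJSON(metadata) = 1);

-- Query scalar properties straight out of the JSON, alongside relational columns.
SELECT model_name,
       JSON_VALUE(metadata, '$.algorithm')   AS algorithm,
       JSON_VALUE(metadata, '$.metrics.auc') AS auc
FROM dbo.models
WHERE JSON_VALUE(metadata, '$.algorithm') = 'logistic_regression';

-- With system versioning on, ask what a model looked like at any point in time.
SELECT model_name, model_version
FROM dbo.models
    FOR SYSTEM_TIME AS OF '2016-06-01'
WHERE model_name = N'churn_logistic_regression';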

ML Model Security and Compliance:

  • Sensitive model encryption via Always Encrypted: Always Encrypted can protect a model at rest and in motion by requiring client applications to use an Always Encrypted driver when communicating with the database, so data is transferred in an encrypted state.
  • Transparent Data Encryption (TDE) for models. TDE is the primary SQL Server encryption option. TDE enables you to encrypt an entire database that may store machine learning models. Backups for databases that use TDE are also encrypted. TDE protects the data at rest and is completely transparent to the application and requires no coding changes to implement.
  • Row-Level Security enables you to protect models in a table row by row, so a particular user can only see the models (rows) to which they are granted access (see the sketch after this list).
  • Dynamic model (data) masking obfuscates a portion of the model data from anyone unauthorized to view it, returning masked data (for example, partial credit card numbers) to non-privileged users.
  • Change model capture can be used to capture insert, update, and delete activity applied to models stored in tables in SQL Server, and to make the details of the changes available in an easily consumed relational format. The change tables used by change data capture contain columns that mirror the column structure of a tracked source table, along with the metadata needed to understand the changes that have occurred.
  • Enhanced model auditing. Auditing is an important checks-and-balances mechanism for many organizations. SQL Server 2016 includes new auditing features that support model auditing: you can implement user-defined audit, audit filtering and audit resilience.
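Here is a minimal Row-Level Security sketch for the model table referenced above, so each data scientist sees only their own models. The owner_name column, the predicate function name and the db_owner exception are assumptions made for the example.

-- Record which database user owns each model (illustrative column).
ALTER TABLE dbo.models
    ADD owner_name SYSNAME NOT NULL CONSTRAINT df_models_owner DEFAULT USER_NAME();
GO

-- Inline table-valued predicate: a row is visible to its owner (or to db_owner members).
CREATE FUNCTION dbo.fn_model_access_predicate (@owner_name AS SYSNAME)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @owner_name = USER_NAME() OR IS_MEMBER('db_owner') = 1;
GO

-- Bind the predicate to dbo.models so the filter is applied transparently to every query.
CREATE SECURITY POLICY dbo.model_access_policy
    ADD FILTER PREDICATE dbo.fn_model_access_predicate(owner_name) ON dbo.models
    WITH (STATE = ON);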

ML Model Availability:

  • AlwaysOn for model availability and champion-challenger. An availability group in SQL Server supports a failover environment. An availability group supports a set of primary databases and one to eight sets of corresponding secondary databases. Secondary databases are not backups. In addition, you can have automatic failover based on DB health. One interesting thing about availability groups in SQL Server with readable secondaries is that they enable “champion-challenger” model setup. The champion model runs on a primary, whereas challenger models are scoring and being monitored on the secondaries for accuracy (without having any impact on the performance of the transactional database). Whenever a new champion model emerges, it’s easy to enable it on the primary.

ML Model Scalability:

  • Enhanced model caching can facilitate model scalability and high performance. SQL Server enables caching with automatic, multiple TempDB files per instance in multi-core environments.
In summary, SQL Server delivers top-notch data management with performance, security, availability, and scalability built into the solution. Because SQL Server is designed to meet security standards, it has a minimal attack surface and database software that is inherently more secure. Enhanced security, combined with built-in, easy-to-use tools and controlled model access, can help organizations meet strict compliance policies. Integrated high availability solutions enable faster failover and more reliable backups – and they are easier to configure, maintain, and monitor, which helps organizations reduce the total cost of model management (TCMM). In addition, SQL Server supports complex data types and non-traditional data sources, and it handles them with the same attention – so data scientists can focus on improving model quality and outsource the model management to SQL Server.

Conclusion

Using SQL Server 2016, you can do model management with ease. SQL Server differs from other machine learning model management tools because it is a database engine, optimized for data management. The key insight here is that “models are just like data” to an engine like SQL Server, and as such we can leverage most of the mission-critical data management features built into SQL Server for machine learning models. Using SQL Server for ML model management, an organization can create an ecosystem for harvesting analytical models, enabling data scientists and business analysts to discover the best models and promote them for use. As companies rely more heavily on data analytics and machine learning, the ability to manage, train, deploy and share models that turn analytics into action-oriented outcomes is essential.

Managed Solution is a full-service technology firm that empowers businesses by delivering, maintaining and forecasting the technologies they’ll need to stay competitive in their marketplace. Founded in 2002, the company quickly grew into a market leader and is recognized as one of the fastest-growing IT companies in Southern California.

We specialize in providing full managed services to businesses of every size, industry, and need.


3 urgent truths about cloud computing in a mobile world in 2016

As written on blogs.office.com
In a constantly evolving world transformed by cloud, social and mobile technologies, companies are focused on strategic platforms for streamlining processes, reaching customers and expanding sales. Cloud computing has been—and will continue to be—a big player in the game, impacting every aspect of our business processes. Global spending on Infrastructure as a Service (IaaS) is expected to reach $16.5 billion this year, while the global Software as a Service (SaaS) market is projected to grow to $67 billion by 2018. So what lies ahead for 2016 and beyond? Here are three facts you should know about cloud computing:
  1. Going hybrid is not just for cars

    —Companies looking for enterprise cloud solutions no longer want to be forced to choose between their datacenter and the cloud. Businesses require a flexible IT infrastructure that can scale on demand. A hybrid cloud solution offers both. With a private cloud in a company’s datacenter, it can be more agile and manage resources more effectively. Given this fact, it’s no surprise that private cloud adoption increased from 63 percent to 77 percent, driving hybrid cloud adoption up from 58 percent to 71 percent year-over-year. This has allowed companies to take advantage of service providers that offer cloud storage, backup and recovery options with increased efficiency and reduced cost.
  2. Internet of Things (IoT) = cloud-based services

    —From vehicles and security systems, to refrigerators and washing machines, every “thing” is now connected. According to International Data Corporation (IDC), by 2018, there will be 22 billion IoT devices installed, driving the development of over 200,000 new apps and services. This has produced a new generation of platforms, which will eventually all communicate via the cloud. From pre-configured solutions that accelerate IoT projects, to the ability to connect devices to efficiently manage multiple assets with expandable scalability, to gathering valuable IoT data and capturing insights that integrate with existing business systems—there are a wealth of possibilities when it comes to using cloud-based services to jumpstart and manage IoT offerings.
  3. There’s a native app for that

    —Experts estimate cloud apps will account for a whopping 90 percent of worldwide mobile data traffic by 2019. That means time savings and efficiency are the name of the game when it comes to the application development process. With cloud-native apps reducing overall development time by 11.6 percent, many companies are choosing cloud-development strategies to streamline processes and boost collaboration. Growing alongside this trend are cloud app containers. They’ve emerged as an attractive way for developers to quickly and efficiently build and deploy these “born-in-the cloud” applications. By using containers, developers and IT professionals can deploy applications from a workstation to a server in mere seconds. And they can select from Windows Server containers, Linux containers and Hyper-V containers—both in the cloud and on-premises.
There’s no question—the cloud has been a game changer. With innovative security measures and a better understanding of cloud computing, more and more companies are relying heavily on cloud-based apps and platforms to boost customer demand strategies. Today’s cloud provides layers of security features and operational best practices, not to mention enterprise-grade user and admin controls to further secure an environment, beckoning companies to take advantage of a holistic, agile platform that is also secure.


Contact us Today!

Chat with an expert about your business’s technology needs.