IT Trends for 2019 That You Should Incorporate in Next Year's Budget

Our fast-paced society is moving so quickly that it's almost impossible to keep up with all the future trends that may pop up around the corner. That is, of course, unless you have specialists and trend analysts inside your company studying anything that could be the next big thing in IT in 2019 and in the years to come.

In the fashion industry, trends can appear at the beginning of a year and be yesterday's news by the second quarter. And they usually are. But that's how the fashion world works. In industries like healthcare, biotech, financial services, and a few others, trends can be identified in time. They are usually analyzed well in advance, and the predictions tend to be spot on if the right people are handling the data.

2019 is a very promising year for IT trends, and here are some that should be included in your company’s budget:

 

Digital Transformation and Cloud

We cannot talk about a company undergoing a digital transformation process without talking about cloud platforms. The two go hand in hand, and together they can make any company more productive, more efficient, and more cost-effective.

As part of your business's digital transformation process, switching over to the cloud will allow your company to be more agile in handling data, messaging, and business portfolios from anywhere in the world you, your employees, or your clients may be.

Cloud-based infrastructure is essential to bringing your company into the world of the future, and it’s vital in delivering flexible and on-demand access to resources worldwide, to both your employees and clients.

Want to learn more about IT trends to look out for in 2019? Sign up to watch our webinar to hear from our Founder Sean Ferrel and Solutions Architect Richard Swaisgood on what you can expect in 2019.

 

Machine Learning, AI and Automation

The age of automation is upon us, and companies like Google, Apple, and Tesla are at the forefront of this new industrial revolution. With driverless cars and self-service supermarkets, large-scale automation is just around the corner.

Machine learning and AI (artificial intelligence) are the fields shaping how companies will work in the future. Most businesses will undergo complex automation processes to become more productive, and this will be done mainly through machine learning and AI.

Although this shift will cost some people their jobs, it will also create value in other areas of our daily lives. Automation is not just a trend for 2019; it will be part of day-to-day life from now on.

While most people are familiar with the idea of AI (artificial intelligence) from sci-fi movies, not everyone is acquainted with the role machine learning plays within AI. Machine learning is a field of AI that uses statistics to help computers recognize patterns in behavior, allowing them to learn from data and replicate what they have learned.
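To make that idea concrete, here is a minimal sketch in Python (using the scikit-learn library; the data and the "purchase" scenario are invented purely for illustration) of a model learning a behavior from past observations:

# A toy model that "learns" a behavior from historical data.
from sklearn.linear_model import LogisticRegression

# Invented observations: [hour_of_day, pages_viewed] for past website visitors
X = [[9, 3], [14, 1], [20, 7], [22, 8], [8, 2], [19, 6]]
y = [0, 0, 1, 1, 0, 1]              # whether each visitor made a purchase

model = LogisticRegression()
model.fit(X, y)                     # the model infers statistical patterns from the data
print(model.predict([[21, 5]]))     # predict the behavior of a new, unseen visitor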

Machine learning will improve the daily processes of any company, small, medium, or large, because it cuts costs dramatically and takes the random factor out of many equations, leaving little (close to zero) room for error.

 

Preparing for the California Consumer Privacy Act

The EU has already gone through the rollout of the GDPR (Europe's General Data Protection Regulation), which took effect in May 2018, and around a third of companies could not continue their activities in the European Union without making significant changes.

The same will happen with most companies in the US once the California Consumer Privacy Act comes into play. The CCPA takes effect on January 1, 2020, so there's still time to prepare your company to become compliant with the new set of regulations, especially when it comes to managing data.

Any security breach could lead to severe financial losses: under the new act, any eligible consumer can demand up to $750 for each violation. If your company is in charge of handling private data and suffers a cyber-attack or any other security breach, you will be held accountable and have to absorb the losses.
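To put that figure in perspective, here is a rough, purely hypothetical back-of-the-envelope calculation in Python; the number of affected consumers is invented for illustration:

# Hypothetical CCPA exposure estimate (illustrative numbers only).
affected_consumers = 10_000          # assumed number of consumers whose records were breached
max_damages_per_violation = 750      # upper end of the statutory range cited above

potential_exposure = affected_consumers * max_damages_per_violation
print(f"Potential statutory exposure: ${potential_exposure:,}")   # $7,500,000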

When companies work with professional data management partners, much of that responsibility shifts to them, making it easier for you to focus on your core business without the threat of losing significant amounts of money in newly filed lawsuits under the CCPA.

 

Four Green Tech Predictions for 2017



Written by Rob Bernard as seen on blogs.microsoft.com
The end of the calendar year is a traditional time to reflect on the ups and downs of the past year and to think about what to expect in 2017. As Microsoft's chief environmental strategist, I am encouraged by the progress made on some environmental issues in the past year, but there is still much work to be done. Despite the increasing urgency around many environmental issues, I remain optimistic about the future.
Perhaps the most notable breakthrough this past year was that the Paris Agreement on climate change entered into force. Cities, countries and companies around the world are now focusing their efforts on how to set and execute their plans to reduce carbon emissions. We also saw growth in corporate procurement of renewable energy in 2016, both in the U.S. and around the globe, another encouraging sign. At Microsoft, we put forth our own goal to source 50% of our datacenter electricity from wind, solar and hydropower. At the end of this year, we're happy to report that we are on pace not only to meet our goal, but also to create new financial and technology models that can further accelerate the adoption of renewable energy.
As we look towards 2017, I expect that we will see both continued progress on energy and an increasing focus on non-energy areas. At Microsoft, we expect to see some shifts in approaches and investments happening across the world.

1. IoT and Cloud Computing will begin to transform utility energy management:

Aging infrastructure is already under pressure, and the addition of more renewable energy will only compound the stress on existing infrastructure. As more clean energy comes online, along with distributed resources like electric vehicles and rooftop solar, utilities are facing a big challenge: how to manage a more complex network of energy-creating and energy-storing devices. 2017 will see increased investment by utilities in technology to leverage data, through IoT solutions and cloud computing, to make energy management more predictable, flexible and efficient.
In developing nations, we are seeing a different trend, but one that is also accelerated by IoT and cloud computing. In these markets, data is being used to accelerate distribution, sales and management of micro-solar grids to enable households to get both power and connectivity to the internet. 2017 should be an exciting year with even more growth as capital investments in these markets increase and solar and battery storage prices decline.

2. Water will begin to emerge as the next critical world-scale environmental challenge

Water scarcity is increasing in many areas around the world. Our oceans are under threat from pollution, acidification and warming temperatures. We are already seeing the devastating effects on iconic landmarks like the Great Barrier Reef. And these trends are putting people's food, water, and livelihoods at risk. In 2017, awareness of this challenge will increase. We will begin to better understand what is happening to our water through the use of sensors and cloud computing. Our ability to leverage technologies like advanced mapping and sensors will increase and expand our understanding of what is driving the decline of many of our critical water systems.

3. Data will increasingly be used to try to better understand our planet

Data is essential for monitoring and managing the earth's resources and fragile ecosystems. There is much we do not understand about the planet, but we see an increasing number of companies and investments flowing toward tools and platforms that enable better mapping and understanding of earth's carbon storage, airborne gases, and ecosystems and the value they provide. We expect to see data being applied more proactively to create a more actionable understanding of how we can better manage food, water, biodiversity and climate change.

4. Organizations and policy makers will start leveraging cloud-based technologies

This area is perhaps the most difficult to predict. While countries will begin implementing their action plans under the Paris Agreement, it is not easy to predict the methods each country will use and prioritize to make progress against its commitments. And the changes will happen not just at the national level. Increasingly we will see cities and local governments moving ahead with technology implementation to drive efficiencies and accountability, along with policy changes as well. We're already leveraging machine learning and artificial intelligence to better model and understand the potential impact of legislative changes, in addition to offering our technologies to our public sector partners as they work towards their plans. While this will likely take several years to take hold, 2017 should see increased awareness of the role of technology in environmental policy.
While there are many challenges ahead across so many areas of sustainability, I remain optimistic. The number of people and organizations focusing on these and many other areas of sustainability ensures that we will continue to make progress in 2017. At Microsoft, we are committed to working with our customers and partners to help them achieve more through the use of our technologies. Everyone, from companies to countries to individuals, has much work to do. We believe that by working together, we can help develop effective solutions to address environmental challenges that will benefit our business, our customers and the planet. And we are up to the challenge.


How mixed reality and machine learning are driving innovation in farming


By Jeff Kavanaugh as written on techcrunch.com
Farming is, by far, the most mature industry mankind has created. Dating back to the dawn of civilization, farming has been refined, adjusted and adapted — but never perfected. We, as a society, always worry over the future of farming. Today, we even apply terms usually reserved for the tech sector — digital, IoT, AI and so on. So why are we worrying?
The Economist, in its Q2 Technology Quarterly issue, proclaims agriculture will soon need to become more manufacturing-like in order to feed the world’s growing population. Scientific American reports crops will soon need to become more drought resistant in order to effectively grow in uncertain climates. Farms, The New York Times writes, will soon need to learn how to harvest more with less water.
And they’re right. If farms are to continue to feed the world’s population they will have to do so in manners both independent of, and accommodating to, the planet’s changing and highly variable climes. That necessitates the smart application of both proven and cutting-edge technology. It necessitates simplified interfaces. And, of course, it necessitates building out and applying those skills today.
Fortunately, the basics for this future are being explored today. For example, vertical farming, a technique allowing farmers to grow and harvest crops in controlled environments, often indoors and in vertical stacks, has exploded in both popularity and potential. In fact, this method has been shown to grow some crops 20 percent faster with 91 percent less water. Genetically modified seeds, capable of withstanding droughts and floods, are making harvests possible even in the driest of conditions, like those found in Kenya.

If farms are to continue to feed the world’s population they will have to do so in manners both independent of, and accommodating to, the planet’s changing and highly variable climes.

But managing such progress, whether indoors or in the field, is a challenge unto itself. Monitoring acidity, soil nutrients and watering time for each plant for optimal growth is, at best, guesswork or, at worst, an afterthought. But it's here that new interactive technologies may shine. A small family of sensors can monitor a plant's vitals and provide real-time updates to a remote server. Artificial intelligence's younger cousin, machine learning, can study these vitals and the growth of some crops to anticipate future needs. Finally, augmented reality (AR), where informative images overlay or augment everyday objects, can help both farmers and gardeners monitor and manage crop health.
Plant.IO* is one system that shows how it can be done: A cube of PVC pipes provides the frame for sensors, grow-lights, cameras and more. A remote server dedicated to machine learning analyzes growth and growing conditions and anticipates future plant needs. A set of AR-capable glasses provides the user with an image, or a representation, of the plant, regardless of location. If the AR device is capable, like the Microsoft HoloLens, it also provides a means to interact with the plant by adjusting fertilizer, water flow, grow lights and more.
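As a rough illustration of that loop (sensor readings feeding a learned model that recommends an action), here is a hypothetical Python sketch; the sensor values, features, and model are invented for illustration and are not taken from the Plant.IO project:

# Hypothetical sketch: predict whether a plant needs watering from recent sensor readings.
from sklearn.ensemble import RandomForestClassifier

# Invented historical readings: [soil_moisture_pct, acidity_ph, light_hours]
readings = [[55, 6.5, 10], [30, 6.8, 12], [25, 7.1, 14], [60, 6.4, 9], [20, 6.9, 13]]
needed_water = [0, 1, 1, 0, 1]       # whether the plant actually needed water afterwards

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(readings, needed_water)

latest = [[28, 6.7, 12]]             # the newest reading streamed from the sensors
if model.predict(latest)[0] == 1:
    print("Recommend watering")      # in a real system this could drive an AR overlay or a valve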
This methodology, when paired with gamification, may lend itself to a new, simplified form of crop management. Together, AI and AR make it simple and fun for everyone from adults to adolescents to monitor and manage their own gardens from home and afar. This idea is at the heart of Plant.IO: a fun, workable solution for an agriculture-based scenario where digital information can overlay a physical object or area without losing context.
In fact, this sort of management system could extend beyond gardens and farms. Any scenario where a physical environment exists alongside measurable data could, potentially, benefit from an AR/AI deployment. Industrial operations, such as warehouse management, are a promising area. Industrial farming, where AI combined with infrared cameras can measure a field's health, is another.
With the right formula of AR and AI, users can monitor and nurture plants from virtually anywhere in the world. It doesn’t matter if they’re growing plants on their kitchen counter, or preparing for their next harvest. Better yet, they can do this with the latest information on a plant’s acidity, nutrient, watering levels and more in an environmentally sound manner.
The first industrial revolution helped us go from the fields to the cities with the productivity gains from machine farming. This industrial revolution is using machine learning and other digital “implements” to take farming even further — and to feed the world.
*Disclosure: Plant.IO is an open-source digital farming project created by Infosys.

SQL Server as a Machine Learning Model Management System


By Rimma Nehme as written on blogs.technet.microsoft.com

Machine Learning Model Management

If you are a data scientist, business analyst or a machine learning engineer, you need model management: a system that manages and orchestrates the entire lifecycle of your machine learning models. Analytical models must be trained, compared and monitored before deploying into production, requiring many steps to take place in order to operationalize a model's lifecycle. There isn't a better tool for that than SQL Server!

SQL Server as an ML Model Management System

In this blog, I will describe how SQL Server can enable you to automate, simplify and accelerate machine learning model management at scale, from build, train, test and deploy all the way to monitor, retrain and redeploy or retire. SQL Server treats models just like data, storing them as serialized varbinary objects. As a result, it is largely agnostic to the analytics engine used to build a model, making it a good model management tool not only for R models (R is now built into SQL Server 2016) but for other runtimes as well.
SELECT * FROM [dbo].[models]


Figure 1: Machine Learning model is just like data inside SQL Server.
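A minimal sketch of the "models are just data" idea, written in Python with the scikit-learn and pyodbc libraries; the table schema, column names, and connection string below are assumptions made for illustration, not the article's actual setup:

import pickle
import pyodbc
from sklearn.linear_model import LogisticRegression

# Train a toy model (a stand-in for any serializable model object).
model = LogisticRegression().fit([[0], [1], [2], [3]], [0, 0, 1, 1])
blob = pickle.dumps(model)           # serialize the model to bytes

# Assumed table: CREATE TABLE dbo.models (name NVARCHAR(100), model VARBINARY(MAX))
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=yes")
cur = conn.cursor()
cur.execute("INSERT INTO dbo.models (name, model) VALUES (?, ?)", "demo_model", blob)
conn.commit()

# Later, load the model back and use it like any other piece of data.
row = cur.execute("SELECT model FROM dbo.models WHERE name = ?", "demo_model").fetchone()
restored = pickle.loads(row[0])
print(restored.predict([[2.5]]))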

SQL Server's approach to machine learning model management is an elegant solution. While there are existing tools that provide some capabilities for managing models and deployment, using SQL Server keeps the models "close" to the data, so nearly all the capabilities of a data management system carry over seamlessly to machine learning models (see Figure 2). This can simplify the process of managing models tremendously, resulting in faster delivery and more accurate business insights.


Figure 2: Pushing machine learning models inside SQL Server 2016 (on the right), you get throughput, parallelism, security, reliability, compliance certifications and manageability, all in one. It's a big win for data scientists and developers: you don't have to build the management layer separately. Furthermore, just like data in databases can be shared across multiple applications, you can now share the predictive models. Models and intelligence become "yet another type of data", managed by SQL Server 2016.

Why Machine Learning Model Management?

Today there is no easy way to monitor, retrain and redeploy machine learning models in a systematic way. In general, data scientists collect the data they are interested in, prepare and stage the data, apply different machine learning techniques to find a best-in-class model, and continually tweak the parameters of the algorithm to refine the outcomes. Automating and operationalizing this process is difficult. For example, a data scientist must code the model, select parameters and a runtime environment, train the model on batch data, and monitor the process to troubleshoot errors that might occur. This process is repeated iteratively with different parameters and machine learning algorithms, and after comparing the models on accuracy and performance, the best model can be deployed.
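The iterative loop described above (train several candidate models, compare them on held-out data, promote the winner) might look roughly like the following Python sketch; scikit-learn, the synthetic data, and the pair of candidate algorithms are all assumptions made for illustration:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the prepared and staged data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Train each candidate, score it on held-out data, and keep the best one for deployment.
scores = {}
for name, candidate in candidates.items():
    candidate.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, candidate.predict(X_test))

best = max(scores, key=scores.get)
print(scores, "-> deploy:", best)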
Currently, there is no standard method for comparing, sharing or viewing models created by other data scientists, which results in siloed analytics work. Without a way to view models created by others, data scientists leverage their own private library of machine learning algorithms and datasets for their use cases. As models are built and trained by many data scientists, the same algorithms may be used to build similar models, particularly if a certain set of algorithms is common for a business’s use cases. Over time, models begin to sprawl and duplicate unnecessarily, making it more difficult to establish a centralized library.


Figure 3: Why SQL Server 2016 for machine learning model management.

In light of these challenges, there is an opportunity to improve model management.

Why SQL Server 2016 for ML Model Management?

There are many benefits to using SQL Server for model management. Specifically, SQL Server 2016 offers capabilities in the following areas:

ML Model Performance:

ML Model Security and Compliance:

ML Model Availability:

ML Model Scalability:

In summary, SQL Server delivers top-notch data management with performance, security, availability, and scalability built into the solution. Because SQL Server is designed to meet security standards, it has a minimal total surface area and database software that is inherently more secure. Enhanced security, combined with built-in, easy-to-use tools and controlled model access, can help organizations meet strict compliance policies. Integrated high availability solutions enable faster failover and more reliable backups, and they are easier to configure, maintain, and monitor, which helps organizations reduce the total cost of model management (TCMM). In addition, SQL Server supports complex data types and non-traditional data sources, and it handles them with the same attention, so data scientists can focus on improving model quality and leave the model management to SQL Server.

Conclusion

Using SQL Server 2016 you can do model management with ease. SQL Server differs from other machine learning model management tools because it is a database engine, optimized for data management. The key insight here is that "models are just like data" to an engine like SQL Server, so most of the mission-critical data management features built into SQL Server can be leveraged for machine learning models. Using SQL Server for ML model management, an organization can create an ecosystem for harvesting analytical models, enabling data scientists and business analysts to discover the best models and promote them for use. As companies rely more heavily on data analytics and machine learning, the ability to manage, train, deploy and share models that turn analytics into action-oriented outcomes is essential.

Managed Solution is a full-service technology firm that empowers businesses by delivering, maintaining and forecasting the technologies they'll need to stay competitive in their marketplace. Founded in 2002, the company quickly grew into a market leader and is recognized as one of the fastest-growing IT companies in Southern California.

We specialize in providing full managed services to businesses of every size, industry, and need.

Amazon Puts Machine Learning In Reach


Amazon Machine Learning gives data science newbies easy-to-use solutions for the most common problems

By Martin Heller as written on infoworld.com
As a physicist, I was originally trained to describe the world in terms of exact equations. Later, as an experimental high-energy particle physicist, I learned to deal with vast amounts of data with errors and with evaluating competing models to describe the data. Business data, taken in bulk, is often messier and harder to model than the physics data on which I cut my teeth. Simply put, human behavior is complicated, inconsistent, and not well understood, and it's affected by many variables.
If your intention is to predict which previous customers are most likely to subscribe to a new offer, based on historical patterns, you may discover there are non-obvious correlations in addition to obvious ones, as well as quite a bit of randomness. When graphing the data and doing exploratory statistical analyses don’t point you at a model that explains what’s happening, it might be time for machine learning.


Amazon’s approach to a machine learning service is intended to work for analysts to understand the business problem being solved, whether or not they understand data science and machine learning algorithms. As we’ll see, that intention gives rise to different offerings and interfaces than you’ll find in Microsoft Azure Machine Learning (click for my review), although the results are similar.
With both services, you start with historical data, identify a target for prediction from observables, extract relevant features, feed them into a model, and allow the system to optimize the coefficients of the model. Then you evaluate the model, and if it’s acceptable, you use it to make predictions. For example, a bank may want to build a model to predict whether a new credit card charge is legitimate or fraudulent, and a manufacturer may want to build a model to predict how much a potential customer is likely to spend on its products.
In general, you approach Amazon Machine Learning by first uploading and cleaning up your data; then creating, training, and evaluating an ML model; and finally by creating batch or real-time predictions. Each step is iterative, as is the whole process. Machine learning is not a simple, static, magic bullet, even with the algorithm selection left to Amazon.

 


Data sources

Amazon Machine Learning can read data -- in plain-text CSV format -- that you have stored in Amazon S3. The data can also come to S3 automatically from Amazon Redshift and Amazon RDS for MySQL. If your data comes from a different database or another cloud, you’ll need to get it into S3 yourself.
When you create a data source, Amazon Machine Learning reads your input data; computes descriptive statistics on its attributes; and stores the statistics, the correlations with the target, a schema, and other information as part of the data source object. The data is not copied. You can view the statistics, invalid value information, and more on the data source’s Data Insights page.
The schema stores the name and data type of each field; Amazon Machine Learning can read the name from the header row of the CSV file and infer the data type from the values. You can override these in the console.
You actually need two data sources for Amazon Machine Learning: one for training the model (usually 70 percent of the data) and one for evaluating the model (usually 30 percent of the data). You can presplit your data yourself into two S3 buckets or ask Amazon Machine Learning to split your data either sequentially or randomly when you create the two data sources from a single bucket.
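Outside the Amazon Machine Learning console, the same 70/30 split can be reproduced locally; here is a short sketch using pandas and scikit-learn (the CSV file name and the choice of a random seed are placeholders, not part of the Amazon workflow):

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")            # placeholder path for data exported from S3

# Random split, as Amazon Machine Learning does when asked to split randomly.
train_df, eval_df = train_test_split(df, test_size=0.30, random_state=42)

# Sequential split: first 70 percent for training, last 30 percent for evaluation.
cutoff = int(len(df) * 0.70)
train_seq, eval_seq = df.iloc[:cutoff], df.iloc[cutoff:]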
As I discussed earlier, all of the steps in the Amazon Machine Learning process are iterative, including this one. What happens to data sources over time is that the data drifts, for a variety of reasons. When that happens, you have to replace your data source with newer data and retrain your model.

Training machine learning models

Amazon Machine Learning supports three kinds of models -- binary classification, multiclass classification, and regression -- and one algorithm for each type. For optimization, Amazon Machine Learning uses Stochastic Gradient Descent (SGD), which makes multiple sequential passes over the training data and updates feature weights for each sample mini-batch to try to minimize the loss function. Loss functions reflect the difference between the actual value and the predicted value. Gradient descent optimization only works well for continuous, differentiable loss functions, such as the logistic and squared loss functions.
For binary classification, Amazon Machine Learning uses logistic regression (logistic loss function plus SGD). For multiclass classification, Amazon Machine Learning uses multinomial logistic regression (multinomial logistic loss plus SGD). For regression, it uses linear regression (squared loss function plus SGD). It determines the type of machine learning task being solved from the type of the target data.
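Those pairings of loss function plus SGD can be mimicked locally with scikit-learn's SGD estimators. The snippet below is a rough analogue for illustration, not Amazon's implementation, and it uses synthetic data:

from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import SGDClassifier, SGDRegressor

# Binary classification: logistic loss minimized with stochastic gradient descent.
Xc, yc = make_classification(n_samples=500, random_state=0)
binary_model = SGDClassifier(loss="log_loss")            # named "log" in older scikit-learn releases
binary_model.fit(Xc, yc)

# Regression: squared loss minimized with stochastic gradient descent.
Xr, yr = make_regression(n_samples=500, noise=5.0, random_state=0)
regression_model = SGDRegressor(loss="squared_error")    # "squared_loss" in older releases
regression_model.fit(Xr, yr)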
While Amazon Machine Learning does not offer as many choices of model as you’ll find in Microsoft’s Azure Machine Learning, it does give you robust, relatively easy-to-use solutions for the three major kinds of problems. If you need other kinds of machine learning models, such as unguided cluster analysis, you need to use them outside of Amazon Machine Learning -- perhaps in an RStudio or Jupyter Notebook instance that you run in an Amazon Ubuntu VM, so it can pull data from your Redshift data warehouse running in the same availability zone.

Recipes for machine learning

Often, the observable data do not correlate with the goal for the prediction as well as you’d like. Before you run out to collect other data, you usually want to extract features from the observed data that correlate better with your target. In some cases this is simple, in other cases not so much.
To draw on a physical example, some chemical reactions are surface-controlled, and others are volume-controlled. If your observations were of X, Y, and Z dimensions, then you might want to try to multiply these numbers to derive surface and volume features.
For an example involving people, you may have recorded unified date-time markers, when in fact the behavior you are predicting varies with time of day (say, morning versus evening rush hours) and day of week (specifically workdays versus weekends and holidays). If you have textual data, you might discover that the goal correlates better with bigrams (two words taken together) than unigrams (single words), or that the input data is in mixed case and should be converted to lowercase for consistency.
Choices of features in Amazon Machine Learning are held in recipes. Once the descriptive statistics have been calculated for a data source, Amazon will create a default recipe, which you can either use or override in your machine learning models on that data. While Amazon Machine Learning doesn’t give you a sexy diagrammatic option to define your feature selection the way that Microsoft’s Azure Machine Learning does, it gives you what you need in a no-nonsense manner.
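The kinds of transformations a recipe expresses (deriving time-of-day features, lowercasing text, switching to bigrams) can be prototyped locally; here is a sketch with pandas and scikit-learn in which the column names and sample values are invented:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2016-11-07 08:15", "2016-11-12 19:40"]),
    "comment": ["Great Product", "never BUYING again"],
})

# Derive time-of-day and day-of-week features from a unified date-time column.
df["hour"] = df["timestamp"].dt.hour
df["is_weekend"] = df["timestamp"].dt.dayofweek >= 5

# Lowercase the text and extract bigrams instead of single words.
vectorizer = CountVectorizer(lowercase=True, ngram_range=(2, 2))
bigrams = vectorizer.fit_transform(df["comment"])
print(vectorizer.get_feature_names_out())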

Evaluating machine learning models

I mentioned earlier that you typically reserve 30 percent of the data for evaluating the model. It’s basically a matter of using the optimized coefficients to calculate predictions for all the points in the reserved data source, tallying the loss function for each point, and finally calculating the statistics, including an overall prediction accuracy metric, and generating the visualizations to help explore the accuracy of your model beyond the prediction accuracy metric.
For a regression model, you’ll want to look at the distribution of the residuals in addition to the root mean square error. For binary classification models, you’ll want to look at the area under the Receiver Operating Characteristic curve, as well as the prediction histograms. After training and evaluating a binary classification model, you can choose your own score threshold to achieve your desired error rates.


For multiclass models the macro-average F1 score reflects the overall predictive accuracy, and the confusion matrix shows you where the model has trouble distinguishing classes. Once again, Amazon Machine Learning gives you the tools you need to do the evaluation in parsimonious form: just enough to do the job.
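The same evaluation metrics are easy to compute yourself with scikit-learn if you have the held-out labels and predictions in hand; a compact sketch (all arrays below are invented for illustration):

import numpy as np
from sklearn.metrics import mean_squared_error, roc_auc_score, f1_score, confusion_matrix

# Regression: root mean square error plus a quick look at the residuals.
y_true_reg = np.array([3.0, 5.0, 2.5, 7.0])
y_pred_reg = np.array([2.8, 5.4, 2.0, 6.5])
rmse = np.sqrt(mean_squared_error(y_true_reg, y_pred_reg))
residuals = y_true_reg - y_pred_reg

# Binary classification: area under the ROC curve, then pick a score threshold.
y_true_bin = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc_score(y_true_bin, scores)
labels_at_threshold = (scores >= 0.5).astype(int)        # choose your own cut-off

# Multiclass: macro-averaged F1 and the confusion matrix.
y_true_mc = [0, 1, 2, 2, 1]
y_pred_mc = [0, 2, 2, 2, 1]
macro_f1 = f1_score(y_true_mc, y_pred_mc, average="macro")
print(rmse, auc, macro_f1, confusion_matrix(y_true_mc, y_pred_mc), sep="\n")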

Interpreting predictions

Once you have a model that meets your evaluation requirements, you can use it to set up a real-time Web service or to generate a batch of predictions. Bear in mind, however, that unlike physical constants, people’s behavior varies over time. You’ll need to check the prediction accuracy metrics coming out of your models periodically and retrain them as needed.
As I worked with Amazon Machine Learning and compared it with Azure Machine Learning, I constantly noticed that Amazon lacks most of the bells and whistles in its Azure counterpart, in favor of giving you merely what you need. If you’re a business analyst doing machine learning predictions for one of the three supported models, Amazon Machine Learning could be exactly what the doctor ordered. If you’re a sophisticated data analyst, it might not do quite enough for you, but you’ll probably have your own preferred development environment for the more complex cases.


How real businesses are using machine learning



By Lukas Biewald as written on techcrunch.com
There is no question that machine learning is at the top of the hype curve. And, of course, the backlash is already in full force: I’ve heard that old joke “Machine learning is like teenage sex; everyone is talking about it, no one is actually doing it” about 20 times in the past week alone.
But from where I sit, running a company that enables a huge number of real-world machine-learning projects, it’s clear that machine learning is already forcing massive changes in the way companies operate.
It’s not just futuristic-looking products like Siri and Amazon Echo. And it’s not just being done by companies that we normally think of as having huge R&D budgets like Google and Microsoft. In reality, I would bet that nearly every Fortune 500 company is already running more efficiently — and making more money — because of machine learning.
So where is it happening? Here are a few behind-the-scenes applications that make life better every day.

Making user-generated content valuable

The average piece of user-generated content (UGC) is awful. It’s actually way worse than you think. It can be rife with misspellings, vulgarity or flat-out wrong information. But by identifying the best and worst UGC, machine-learning models can filter out the bad and bubble up the good without needing a real person to tag each piece of content.
It’s not just Google that needs smart search results.
A similar thing happened a while back with spam emails. Remember how bad spam used to be? Machine learning helped identify spam and, basically, eradicate it. These days, it's far less common to see spam in your inbox each morning. Expect the same to happen with UGC in the near future.
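Under the hood, a spam filter or UGC-quality filter is usually just a text classifier. Here is a toy Python sketch with scikit-learn; the example messages and labels are invented:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["WIN cash now!!! click here", "Meeting moved to 3pm",
            "cheap meds no prescription", "Thanks for the helpful review"]
is_spam = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(messages, is_spam)
print(classifier.predict(["free prize, click now"]))     # flag likely spam before a human sees it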
Pinterest uses machine learning to show you more interesting content. Yelp uses machine learning to sort through user-uploaded photos. NextDoor uses machine learning to sort through content on their message boards. Disqus uses machine learning to weed out spammy comments.

Finding products faster

It’s no surprise that as a search company, Google was always at the forefront of hiring machine-learning researchers. In fact, Google recently put an artificial intelligence expert in charge of search. But the ability to index a huge database and pull up results that match a keyword has existed since the 1970s. What makes Google special is that it knows which matching result is the most relevant; the way that it knows is through machine learning.
But it’s not just Google that needs smart search results. Home Depot needs to show which bathtubs in its huge inventory will fit in someone’s weird-shaped bathroom. Apple needs to show relevant apps in its app store. Intuit needs to surface a good help page when a user types in a certain tax form.
Successful e-commerce startups from Lyst to Trunk Archive employ machine learning to show high-quality content to their users. Other startups, like Rich Relevance and Edgecase, employ machine-learning strategies to give their commerce customers the benefits of machine learning when their users are browsing for products.

Engaging with customers

You may have noticed “contact us” forms getting leaner in recent years. That’s another place where machine learning has helped streamline business processes. Instead of having users self-select an issue and fill out endless form fields, machine learning can look at the substance of a request and route it to the right place.
Big companies are investing in machine learning … because they’ve seen positive ROI.
That seems like a small thing, but ticket tagging and routing can be a massive expense for big businesses. Having a sales inquiry end up with the sales team or a complaint end up instantly in the customer service department’s queue saves companies significant time and money, all while making sure issues get prioritized and solved as fast as possible.

Understanding customer behavior

Machine learning also excels at sentiment analysis. And while public opinion can sometimes seem squishy to non-marketing folks, it actually drives a lot of big decisions.
For example, say a movie studio puts out a trailer for a summer blockbuster. They can monitor social chatter to see what’s resonating with their target audience, then tweak their ads immediately to surface what people are actually responding to. That puts people in theaters.
Another example: A game studio recently put out a new title in a popular video game line without a game mode that fans were expecting. When gamers took to social media to complain, the studio was able to monitor and understand the conversation. The company ended up changing their release schedule in order to add the feature, turning detractors into promoters.
How did they pull faint signals out of millions of tweets? They used machine learning. And in the past few years, this kind of social media listening through machine learning has become standard operating procedure.

What’s next?

Dealing with machine-learning algorithms is tricky. Normal algorithms are predictable, and we can look under the hood and see how they work. In some ways, machine-learning algorithms are more like people. As users, we want answers to questions like “why did The New York Times show me that weird ad” or “why did Amazon recommend that funny book?”
In fact, The New York Times and Amazon don’t really understand the specific results themselves any more than our brains know why we chose Thai food for dinner or got lost down a particular Wikipedia rabbit hole.
If you were getting into the machine-learning field a decade ago, it was hard to find work outside of places like Google and Yahoo. Now, machine learning is everywhere. Data is more prevalent than ever, and it’s easier to access. New products like Microsoft Azure ML and IBM Watson drive down both the setup cost and ongoing cost of state-of-the-art machine-learning algorithms.
At the same time, VCs have started funds — from WorkDay’s Machine Learning fund to Bloomberg Beta to the Data Collective — that are completely focused on funding companies across nearly every industry that use machine learning to build a sizeable advantage.
Most of the conversation about machine learning in popular culture revolves around AI personal assistants and self-driving cars (both applications are very cool!), but nearly every website you interact with is using machine learning behind the scenes. Big companies are investing in machine learning not because it’s a fad or because it makes them seem cutting edge. They invest because they’ve seen positive ROI. And that’s why innovation will continue.