[vc_row][vc_column][vc_column_text][vc_single_image image="8188" img_size="800x600" alignment="center"][/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

Instagram’s big redesign goes live with a colorful new icon, black-and-white app and more

By Sarah Perez as written on techcrunch.com
Instagram’s new icon is pink. Well, it’s pink and purple and yellow and orange. It’s definitely different. And that’s not all the company has changed today. Instagram this morning is rolling out a radical redesign of its mobile application, which not only includes this new, brightly colored app icon but also a revamped user interface that does away with color in favor of a black-and-white look and feel.
You may remember that screenshots of this redesign leaked last month, prompting many to wonder if such a change was actually in the works.
As it turns out, it was.
Instagram says it updated the icon to better reflect how its community has changed over time.
“When Instagram was founded over five years ago, it was a place for you to easily edit and share photos. Over those five years, things have changed,” says Ian Spalter, Instagram’s Head of Design. “Instagram is now a diverse community of interests where people are sharing more photos and videos than ever before, using new tools like Boomerang and Layout, and connecting in new ways through Explore.”
The new icon, however, still references Instagram’s history: a simplified, softer camera that now appears against a much more colorful design.

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text][vc_single_image image="8200" img_size="500x300" alignment="center"][/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

In addition, the colors that blend and blur from purple to pink to orange and yellow are also supposed to reference Instagram’s iconic rainbow in its older design. (This isn’t entirely obvious, but we can see how the designer would want to make that connection.)
Meanwhile, where Instagram’s icon is now filled with color, the app itself has had the color removed. Instead of using blue and white in the app’s chrome, the new black-and-white design allows the color in the app to come from the community and what’s being shared. The user interface is no longer competing for attention.

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text][vc_single_image image="8201" img_size="600x500" alignment="center"][/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

Though this design change will impact users the most, given that the app is what people actually interact with on a regular basis, it somehow feels less jarring — at least, initially — than the change to the app icon.
Perhaps that’s because nothing has been fundamentally changed with regard to the app’s workflow. The buttons remain in the same positions, and pops of color are still shown to highlight things like notifications, for example. And there are some slight under-the-hood changes. For instance, Instagram now uses standard iOS and Android components, fonts and patterns. But the app itself is simply a cleaner, more modern version of the Instagram we know and love.
That’s not to say it doesn’t take some getting used to. Seeing the editing tools laid out in black-and-white simplicity will prompt a double take the first few times you use them. But the process of using the tools has not been changed.

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text][vc_single_image image="8202" img_size="500x400" alignment="center"][/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

However, the icon’s update feels as dramatic as iOS 7 once did when Apple’s Jony Ive unveiled the operating system’s newer, flatter look-and-feel and its brighter color gradients. This initially prompted some user backlash among Apple fans who had trouble adjusting. (Remember the Jony Ive Redesigns Things Tumblr, anyone?)
What’s funny is that the iOS revamp years ago eventually prompted Instagram’s user base to call for the company to update its look as well. The older app icon began to feel out of place on the iPhone home screen, as other app icons were updated to better fit Apple’s new design language.

Then, when Google rolled out its own take on flat design with Material Design, Instagram’s icon began to feel a little out of place there, too.[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text][vc_single_image image="8203" img_size="500x200" alignment="center"][/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

Besides the icon change and black-and-white revamp, Instagram’s larger suite of apps, including Layout, Hyperlapse and Boomerang, has also received new icons. In some cases, the new icons better reflect what each app does; the collage maker Layout, for example, has gone from a square to a grid. They also now match the new Instagram icon’s color scheme.
While the makeover is dramatic, it’s not tied to the other forthcoming changes, like the rollout of Business Profiles due in a few months.
Instagram has been working on this redesign since last summer and ended up testing more than 300 icons before arriving at a lead candidate in late November. The company then worked on the user interface update, which had been tested internally since the beginning of the year.
Those tests finally made it out into the wild in the past couple of weeks, which is when users spotted them and the news of the redesign was leaked. The company doesn’t share details on its internal tests or how the changes impacted key metrics like user engagement.
However, with 400 million users worldwide who share more than 80 million photos and videos daily, it’s not likely that the company would roll out an update of this magnitude if it were worried the changes could negatively impact any of its numbers.

[/vc_column_text][/vc_column][/vc_row]

[vc_row][vc_column][vc_column_text]

Welcome to the Invisible Revolution

As written on blogs.microsoft.com
Think of your favorite pieces of technology. These are the things that you use every day for work and play, and pretty much can’t live without.
Chances are, at least one of them is a gadget – your phone, maybe, or your gaming console.
But if you really think about it, chances also are good that many of your most beloved technologies are no longer made of plastic, metal and glass.
Maybe it’s a streaming video service you use to binge-watch “Game of Thrones,” or an app that lets you track your steps and calories so you can fit into those jeans you wore back in high school. Maybe it’s a virtual assistant that helps you remember where your meetings are and when you need to take your medicine, or an e-reader that lets you get lost in your favorite book via your phone, tablet or even car speakers.
Perhaps, quietly and without even realizing it, your most beloved technologies have gone from being things you hold to services you rely on, and that exist everywhere and nowhere. Instead of the gadgets themselves, they are tools that you expect to be able to use on any type of gadget: Your phone, your PC, maybe even your TV.
They are part of what Harry Shum, executive vice president in charge of Microsoft’s Technology and Research division, refers to as an “invisible revolution.”
“We are on the cusp of creating a world in which technology is increasingly pervasive but is also increasingly invisible,” Shum said.

Read the full story here.[/vc_column_text][/vc_column][/vc_row]

[vc_row][vc_column][vc_column_text]

The Future of Mobile App Development

By Nat Friedman as written on blogs.microsoft.com
It is incredible how much has happened since Xamarin joined Microsoft just over a month ago, starting with Scott Guthrie’s Build 2016 announcements that Xamarin is now part of all editions of Visual Studio at no additional charge — from Community to Enterprise — and our plans to open source the Xamarin SDK. It is a dream come true for us to be able to put the power of Xamarin into the hands of all developers.
In just the first two weeks since Build, we helped nearly 3.5 times more developers get started building great apps with Xamarin than ever before in our history as a company.
Now we are at Xamarin Evolve 2016, the world’s largest cross-platform mobile development conference, in Orlando. This morning we open sourced the Xamarin SDK and launched new ways to make Visual Studio the most complete mobile development environment.  We also launched new ways to build native, cross-platform apps faster than ever using our popular cross-platform UI framework, Xamarin.Forms.
This is our third Evolve conference, but the first time we are showing the comprehensive developer experience that only Microsoft and Xamarin together can deliver.

Open source Xamarin: Ready for you!

We have officially open sourced and contributed to the .NET Foundation the Xamarin SDK for Android, iOS and Mac under the same MIT license used for the Mono project. This includes native API bindings for iOS, Android and Mac, the command-line tools necessary to build for these platforms, and Xamarin.Forms, our popular cross-platform UI framework.
Watching Xamarin co-founder and open source pioneer Miguel de Icaza announce this onstage was a proud moment for all of us. The future of native cross-platform mobile development is now in the hands of every developer. We look forward to seeing your contributions; go to open.xamarin.com to get involved.

Visual Studio: Your complete mobile development environment

Today we launched new ways to connect Visual Studio to your Mac to make it even easier for C# developers to create native iOS apps, and new ways to auto-generate mobile app test scripts in Visual Studio.
Our iOS Simulator remoting lets you simulate and interact with your iOS apps in Visual Studio — even supporting multi-touch interactions on Windows machines with capable touch screens. We also unveiled our iOS USB remoting, which makes it possible to deploy and debug apps from Visual Studio to an iPad or iPhone plugged into your Windows PC.
In addition, our Test Recorder Visual Studio Plugin now brings Test Recorder’s ability to generate test scripts to Visual Studio users. Simply interact with your app on a device or in the simulator, and Test Recorder automatically generates scripts that can be run on thousands of devices with Xamarin Test Cloud’s automated app testing.

Xamarin.Forms: Faster and easier mobile app development

We launched Xamarin.Forms a few years ago to help developers build mobile apps faster, maximizing UI code-sharing while still delivering fully native experiences.
Today, we showed three key new features that will be coming to Xamarin.Forms.  Data Pages and Themes make it easy to connect apps to common entities and data sources, and create beautiful, native user interfaces with just a few lines of code. The Forms Previewer makes it easy to iterate on your Xamarin.Forms UI designs by providing real-time previewing of Xamarin.Forms user interfaces composed in XAML.

The new, mobile-optimized development lifecycle

We were able to show today the most streamlined mobile lifecycle available anywhere through our combined product lineup, including integrations between Visual Studio Team Services, HockeyApp and Xamarin Test Cloud. Through our combined mobile lifecycle solution, you now have a complete solution to build great mobile apps at scale, tackling the unique challenges of mobile DevOps.


We’ve heard great enthusiasm from our customers. Bryan Hooper, senior director enterprise architecture at Bloomin’ Brands, talked about how they have “paired Xamarin with Microsoft’s Azure technology, and we’re really excited about the new partnership between the two organizations.” Darrell Thompson, vice president of information system services at Coca-Cola Consolidated, says that “Xamarin and Microsoft have been excellent partners and brought our mobile development to a whole new level.”

[/vc_column_text][/vc_column][/vc_row]

[vc_row][vc_column][vc_column_text]

The Key to Global Empowerment is Technology

By Tanner Taddeo as written on techcrunch.com
With the exponential growth in technology, the world has seen not only profound change in various industries, but also a fundamental shift in the structure of our global society.
To unwrap this bit of jargon, let’s look at the intersection of human rights and technology. The fundamental nature of human rights is to allow individuals to exercise their autonomy, liberty and free will, insofar as it doesn’t infringe upon the rights and liberties of others. Broadly speaking, governments are supposed to provide the protection under which the citizenry can freely exercise such will.
Historically, sovereignty has been the golden rule that must not be violated, regardless of what actions take place within the confines of a given territory. This has given authoritarian leaders the freedom to rule as they please. But with the advent of the Responsibility to Protect, deriving from the Rome Statute, the emerging customary law opens the door for countries to forfeit their sovereign rights if they fail to uphold and protect basic human rights.
While this is a monumental leap for international law and human rights, it still raises a more practical question: Outside of rhetoric and tough talk, how can we empower individuals living in countries that lack adequate civil societies to bolster state institutions, have a say in the national dialogue, usher in an era of accountability and transparency to the political system(s) and exercise their human rights? The answer seems to reside in technology.

Technology can … empower individuals through networks, information and digital trade.

Take, for example, Ushahidi, a company that runs an open-source tech platform developed to map outbreaks of violence in Kenya. Here, technology serves as an emergency tool that lets individuals report, monitor and evaluate violence in their communities. Such technology is helping facilitate a decline in community violence and abuse toward women.
In countries where access to capital is lacking because of inadequate financial institutions, micro-loans and peer-to-peer money transfers have allowed small businesses not only to spring up, but also to stimulate local economies. To put the potential in perspective, the International Finance Corporation estimates that “up to 84% of small and medium-sized enterprises (SMEs) in Africa are either un-served or underserved, representing a value gap in credit financing of US$140 to 170 billion.”
In countries where systemic subjugation and deprivation is run-of-the-mill, individuals using the power of social media are showcasing to the world the gross negligence of their government(s) and forcing world leaders to respond.
While civil society, rule of law and regulatory mechanisms surely cannot spring up overnight, the world does not have the luxury to wait and watch its slow evolution. Technology can circumvent traditional processes and empower individuals through networks, information and digital trade. Technology emboldens the notion of human rights, quite literally, with the touch of a hand.
The question is, will governments around the world back the inevitable tide of technology or will they cling to tradition?

[/vc_column_text][/vc_column][/vc_row]

[vc_row][vc_column][vc_column_text]

Amazon Machine Learning gives data science newbies easy-to-use solutions for the most common problems

By Martin Heller as written on infoworld.com
As a physicist, I was originally trained to describe the world in terms of exact equations. Later, as an experimental high-energy particle physicist, I learned to deal with vast amounts of error-laden data and to evaluate competing models for describing it. Business data, taken in bulk, is often messier and harder to model than the physics data on which I cut my teeth. Simply put, human behavior is complicated, inconsistent, and not well understood, and it's affected by many variables.
If your intention is to predict which previous customers are most likely to subscribe to a new offer, based on historical patterns, you may discover there are non-obvious correlations in addition to obvious ones, as well as quite a bit of randomness. When graphing the data and doing exploratory statistical analyses don’t point you at a model that explains what’s happening, it might be time for machine learning.

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

Amazon’s approach to a machine learning service is intended to work for analysts to understand the business problem being solved, whether or not they understand data science and machine learning algorithms. As we’ll see, that intention gives rise to different offerings and interfaces than you’ll find in Microsoft Azure Machine Learning (click for my review), although the results are similar.
With both services, you start with historical data, identify a target for prediction from observables, extract relevant features, feed them into a model, and allow the system to optimize the coefficients of the model. Then you evaluate the model, and if it’s acceptable, you use it to make predictions. For example, a bank may want to build a model to predict whether a new credit card charge is legitimate or fraudulent, and a manufacturer may want to build a model to predict how much a potential customer is likely to spend on its products.
In general, you approach Amazon Machine Learning by first uploading and cleaning up your data; then creating, training, and evaluating an ML model; and finally by creating batch or real-time predictions. Each step is iterative, as is the whole process. Machine learning is not a simple, static, magic bullet, even with the algorithm selection left to Amazon.

 


Data sources

Amazon Machine Learning can read data -- in plain-text CSV format -- that you have stored in Amazon S3. The data can also come to S3 automatically from Amazon Redshift and Amazon RDS for MySQL. If your data comes from a different database or another cloud, you’ll need to get it into S3 yourself.
When you create a data source, Amazon Machine Learning reads your input data; computes descriptive statistics on its attributes; and stores the statistics, the correlations with the target, a schema, and other information as part of the data source object. The data is not copied. You can view the statistics, invalid value information, and more on the data source’s Data Insights page.
The schema stores the name and data type of each field; Amazon Machine Learning can read the name from the header row of the CSV file and infer the data type from the values. You can override these in the console.
You actually need two data sources for Amazon Machine Learning: one for training the model (usually 70 percent of the data) and one for evaluating the model (usually 30 percent of the data). You can presplit your data yourself into two S3 buckets or ask Amazon Machine Learning to split your data either sequentially or randomly when you create the two data sources from a single bucket.
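To make these two steps concrete, here is a small pure-Python sketch of what the service does for you: inferring a simple schema from a CSV header and its values, then carving one data set into a 70/30 training/evaluation split, either sequentially or shuffled. The field names and toy churn data are invented for illustration.

```python
import csv
import io
import random

def infer_type(values):
    """Crudely infer a field's type the way a schema-inference step might:
    NUMERIC if every value parses as a float, CATEGORICAL otherwise."""
    try:
        for v in values:
            float(v)
        return "NUMERIC"
    except ValueError:
        return "CATEGORICAL"

def load_and_split(csv_text, train_fraction=0.7, shuffle=False, seed=0):
    """Read a header-row CSV, infer a simple schema, and split the rows
    into a training set and an evaluation set."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    schema = {name: infer_type([r[name] for r in rows]) for name in rows[0]}
    if shuffle:  # a "random" split; the default mimics a sequential split
        random.Random(seed).shuffle(rows)
    cut = round(len(rows) * train_fraction)
    return schema, rows[:cut], rows[cut:]

# Toy churn data; the field names are invented for illustration.
data = ("age,plan,churned\n"
        "34,basic,0\n51,pro,1\n29,basic,0\n44,pro,1\n38,basic,0\n"
        "61,pro,1\n27,basic,0\n55,pro,1\n40,basic,0\n33,pro,0\n")
schema, train_rows, eval_rows = load_and_split(data)
print(schema)                           # {'age': 'NUMERIC', 'plan': 'CATEGORICAL', 'churned': 'NUMERIC'}
print(len(train_rows), len(eval_rows))  # 7 3
```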
As I discussed earlier, all of the steps in the Amazon Machine Learning process are iterative, including this one. What happens to data sources over time is that the data drifts, for a variety of reasons. When that happens, you have to replace your data source with newer data and retrain your model.

Training machine learning models

Amazon Machine Learning supports three kinds of models -- binary classification, multiclass classification, and regression -- and one algorithm for each type. For optimization, Amazon Machine Learning uses Stochastic Gradient Descent (SGD), which makes multiple sequential passes over the training data and updates feature weights for each sample mini-batch to try to minimize the loss function. Loss functions reflect the difference between the actual value and the predicted value. Gradient descent optimization only works well for continuous, differentiable loss functions, such as the logistic and squared loss functions.
For binary classification, Amazon Machine Learning uses logistic regression (logistic loss function plus SGD). For multiclass classification, Amazon Machine Learning uses multinomial logistic regression (multinomial logistic loss plus SGD). For regression, it uses linear regression (squared loss function plus SGD). It determines the type of machine learning task being solved from the type of the target data.
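As a rough illustration of the logistic-loss-plus-SGD combination described above, here is a toy mini-batch SGD trainer for logistic regression in plain Python. The data, learning rate and epoch count are made up, and Amazon's implementation is, of course, far more sophisticated; the point is only to show the mechanics of sequential passes updating feature weights against a loss.

```python
import math
import random

def sgd_logistic(samples, epochs=200, lr=0.5, batch_size=2, seed=0):
    """Minimize logistic loss with mini-batch stochastic gradient descent.
    samples: list of (feature_vector, 0-or-1 label) pairs."""
    rng = random.Random(seed)
    n_features = len(samples[0][0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):                      # sequential passes over the data
        rng.shuffle(samples)
        for i in range(0, len(samples), batch_size):
            batch = samples[i:i + batch_size]    # one sample mini-batch
            gw, gb = [0.0] * n_features, 0.0
            for x, y in batch:
                z = sum(wj * xj for wj, xj in zip(w, x)) + b
                p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
                for j in range(n_features):
                    gw[j] += (p - y) * x[j]      # gradient of the logistic loss
                gb += p - y
            w = [wj - lr * gj / len(batch) for wj, gj in zip(w, gw)]
            b -= lr * gb / len(batch)
    return w, b

# Linearly separable toy data: label is 1 when the first feature dominates.
data = [([1.0, 0.0], 1), ([0.9, 0.2], 1), ([0.8, 0.1], 1),
        ([0.0, 1.0], 0), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]
w, b = sgd_logistic(list(data))
predict = lambda x: 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))
print(predict([0.95, 0.05]) > 0.5, predict([0.05, 0.95]) < 0.5)  # True True
```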
While Amazon Machine Learning does not offer as many choices of model as you’ll find in Microsoft’s Azure Machine Learning, it does give you robust, relatively easy-to-use solutions for the three major kinds of problems. If you need other kinds of machine learning models, such as unguided cluster analysis, you need to use them outside of Amazon Machine Learning -- perhaps in an RStudio or Jupyter Notebook instance that you run in an Amazon Ubuntu VM, so it can pull data from your Redshift data warehouse running in the same availability zone.

Recipes for machine learning

Often, the observable data do not correlate with the goal for the prediction as well as you’d like. Before you run out to collect other data, you usually want to extract features from the observed data that correlate better with your target. In some cases this is simple, in other cases not so much.
To draw on a physical example, some chemical reactions are surface-controlled, and others are volume-controlled. If your observations were of X, Y, and Z dimensions, then you might want to try to multiply these numbers to derive surface and volume features.
For an example involving people, you may have recorded unified date time markers, when in fact the behavior you are predicting varies with time of day (say, morning versus evening rush hours) and day of week (specifically workdays versus weekends and holidays). If you have textual data, you might discover that the goal correlates better with bigrams (two words taken together) than unigrams (single words), or the input data is in random cases and should be converted to lowercase for consistency.
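A minimal sketch of the three kinds of feature extraction just described, with invented inputs: surface- and volume-style features from raw dimensions, time-of-day and weekday signals from a unified timestamp, and lowercased unigrams plus bigrams from text.

```python
from datetime import datetime

def derived_features(x, y, z):
    """Surface- and volume-style features from raw dimensions."""
    return {"surface": 2 * (x*y + y*z + x*z), "volume": x * y * z}

def time_features(iso_timestamp):
    """Split a unified timestamp into the parts behavior actually varies with."""
    t = datetime.fromisoformat(iso_timestamp)
    return {"hour": t.hour, "is_weekend": t.weekday() >= 5}

def text_features(text):
    """Lowercase the input and emit unigrams plus bigrams."""
    words = text.lower().split()
    return words + [" ".join(pair) for pair in zip(words, words[1:])]

print(derived_features(2, 3, 4))             # {'surface': 52, 'volume': 24}
print(time_features("2016-05-11T08:30:00"))  # {'hour': 8, 'is_weekend': False}
print(text_features("Machine Learning"))     # ['machine', 'learning', 'machine learning']
```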
Choices of features in Amazon Machine Learning are held in recipes. Once the descriptive statistics have been calculated for a data source, Amazon will create a default recipe, which you can either use or override in your machine learning models on that data. While Amazon Machine Learning doesn’t give you a sexy diagrammatic option to define your feature selection the way that Microsoft’s Azure Machine Learning does, it gives you what you need in a no-nonsense manner.

Evaluating machine learning models

I mentioned earlier that you typically reserve 30 percent of the data for evaluating the model. It’s basically a matter of using the optimized coefficients to calculate predictions for all the points in the reserved data source, tallying the loss function for each point, and finally calculating the statistics, including an overall prediction accuracy metric, and generating the visualizations to help explore the accuracy of your model beyond the prediction accuracy metric.
For a regression model, you’ll want to look at the distribution of the residuals in addition to the root mean square error. For binary classification models, you’ll want to look at the area under the Receiver Operating Characteristic curve, as well as the prediction histograms. After training and evaluating a binary classification model, you can choose your own score threshold to achieve your desired error rates.
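To make the score-threshold choice concrete, here is a hypothetical Python sketch: the same held-out scores produce different false-positive and false-negative rates depending on where you set the cutoff. The scores and labels below are invented.

```python
def binary_metrics(scores_and_labels, threshold):
    """Confusion counts and error rates for one choice of score threshold."""
    tp = fp = tn = fn = 0
    for score, label in scores_and_labels:
        predicted = score >= threshold
        if predicted and label:
            tp += 1
        elif predicted and not label:
            fp += 1
        elif not predicted and label:
            fn += 1
        else:
            tn += 1
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn,
            "false_positive_rate": fp / (fp + tn),
            "false_negative_rate": fn / (fn + tp)}

# Made-up (model score, true label) pairs from a held-out evaluation set.
held_out = [(0.95, 1), (0.80, 1), (0.65, 0), (0.55, 1), (0.30, 0), (0.10, 0)]
loose = binary_metrics(held_out, 0.5)   # more positives, more false alarms
strict = binary_metrics(held_out, 0.9)  # fewer positives, more misses
print(loose["false_positive_rate"], strict["false_negative_rate"])
```

Sliding the threshold trades one error rate for the other, which is exactly the decision the prediction histograms help you make.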


For multiclass models the macro-average F1 score reflects the overall predictive accuracy, and the confusion matrix shows you where the model has trouble distinguishing classes. Once again, Amazon Machine Learning gives you the tools you need to do the evaluation in parsimonious form: just enough to do the job.
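As an illustration of how a macro-average F1 score summarizes a confusion matrix, here is a small Python sketch with an invented three-class matrix in which the model confuses classes B and C:

```python
def macro_f1(confusion):
    """Macro-average F1 from a confusion matrix laid out as
    confusion[actual_class][predicted_class] = count."""
    classes = list(confusion)
    f1_scores = []
    for c in classes:
        tp = confusion[c][c]
        fp = sum(confusion[a][c] for a in classes if a != c)
        fn = sum(confusion[c][p] for p in classes if p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)   # unweighted mean over classes

# Invented matrix: rows are actual classes, columns predicted.
# The model does well on A but mixes up B and C.
cm = {"A": {"A": 8, "B": 1, "C": 1},
      "B": {"A": 0, "B": 6, "C": 4},
      "C": {"A": 0, "B": 3, "C": 7}}
print(round(macro_f1(cm), 3))  # 0.708
```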

Interpreting predictions

Once you have a model that meets your evaluation requirements, you can use it to set up a real-time Web service or to generate a batch of predictions. Bear in mind, however, that unlike physical constants, people’s behavior varies over time. You’ll need to check the prediction accuracy metrics coming out of your models periodically and retrain them as needed.
As I worked with Amazon Machine Learning and compared it with Azure Machine Learning, I constantly noticed that Amazon lacks most of the bells and whistles in its Azure counterpart, in favor of giving you merely what you need. If you’re a business analyst doing machine learning predictions for one of the three supported models, Amazon Machine Learning could be exactly what the doctor ordered. If you’re a sophisticated data analyst, it might not do quite enough for you, but you’ll probably have your own preferred development environment for the more complex cases.

[/vc_column_text][/vc_column][/vc_row]


Don't Tell This Robotics Team That STEM Is For Boys

By Sarah Hedgecock as written on forbes.com
At the Javits Center in Manhattan last Saturday, hundreds of teenagers milled about in sneakers and safety goggles, tinkering with the robots they had brought to the FIRST Robotics Competition New York City Regional. Requests for parts (dowel rods, PVC pipe) boomed out over the PA system. Parents lingered near each team’s staging area, sporting their children’s team colors.
Robotics competitions this large haven’t been a standard part of high school for very long. The ability of so many schools to support teams that build semi-autonomous machines–or find enough kids to even build a team–is a fairly new phenomenon. One thing about the competition will be familiar to anyone who participated in a particularly nerdy hobby in high school: It was very dude-heavy.
But the team FORBES had come to see was busting that trend: The Fe Maidens–pronounced “Iron Maidens”–is one of two robotics teams from the Bronx High School of Science, a magnet school in New York City. It’s made up of 42 girls. The only male members of the team are coaches and mentors. (The high school’s other team is coed, and it’s neck-and-neck with the Fe Maidens when it comes to competitive wins.)
“It wasn’t until I came here that I realized that STEM fields are more than just a career,” says team captain Violet Killy. “I thought you could just start them after college, or during college, and I’d have to wait to get my hands dirty. And then I saw kids driving robots at Bronx Science, and I was like, ‘I want to do that.’”
The team was founded in late 2006, expressly to encourage girls to get into STEM and break down the gender stereotypes that are, nearly a decade later, still rampant in technical fields. Even the students who make up the Fe Maidens regularly hear people saying they’re pretty good at this–for girls. “We’re trying to get girls to realize that this is something they can do, this is what’s out there, it’s available to them, it’s fun,” says Killy.
And the name? The team’s first captain was a fan of the band Iron Maiden. “We’re a group of girls, we’re tough as iron, we’re building what the guys are building,” explains the team’s PR chief, Luz Jimenez. “So we just went with it.”
At the competition, the team was tinkering with its robot for the first time in several weeks. Per competition rules, each team gets six weeks to build its robot (they start with a basic kit of parts provided by FIRST, the STEM education nonprofit that sponsors the competition, but can add parts as needed). The robot must then go into a bag until competition day.
At the competition itself, teams compete in two-and-a-half-minute rounds of a game that changes every year. This year the theme was castles. Students compete in two three-team “alliances,” each defending a castle at one end of the playing field. Among other things, robots were tasked with operating autonomously for the first 15 seconds of each round, clearing certain barriers on the playing field and scoring goals by sending balls through holes in the opposing alliance’s castle.


How students around the world are being empowered to achieve more

By Suzanne Choney as written on blogs.microsoft.com
Microsoft is committed to “building immersive and inclusive learning experiences for students and teachers around the world, experiences that build 21st-century skills including communication, collaboration, critical thinking, creativity and computational thinking,” writes Tony Prophet, corporate vice president of Education Marketing.
In the classroom, OneNote, Skype, Sway and Minecraft empower teachers and students to create and share in entirely new ways, teach and learn through doing and exploring, accommodate any learning style, and focus the classroom experience on learning outcomes, Prophet says.
Yusuf Mehdi, Microsoft corporate vice president of Windows and Devices Marketing, writes that with new Windows 10 devices that are “tailor made for education and perfect for students, we are seeing strong demand for Windows 10 in the classroom,” including Windows 10 devices that start at $199.


How real businesses are using machine learning

By Lukas Biewald as written on techcrunch.com
There is no question that machine learning is at the top of the hype curve. And, of course, the backlash is already in full force: I’ve heard that old joke “Machine learning is like teenage sex; everyone is talking about it, no one is actually doing it” about 20 times in the past week alone.
But from where I sit, running a company that enables a huge number of real-world machine-learning projects, it’s clear that machine learning is already forcing massive changes in the way companies operate.
It’s not just futuristic-looking products like Siri and Amazon Echo. And it’s not just being done by companies that we normally think of as having huge R&D budgets like Google and Microsoft. In reality, I would bet that nearly every Fortune 500 company is already running more efficiently — and making more money — because of machine learning.
So where is it happening? Here are a few behind-the-scenes applications that make life better every day.

Making user-generated content valuable

The average piece of user-generated content (UGC) is awful. It’s actually way worse than you think. It can be rife with misspellings, vulgarity or flat-out wrong information. But by identifying the best and worst UGC, machine-learning models can filter out the bad and bubble up the good without needing a real person to tag each piece of content.
It’s not just Google that needs smart search results.
A similar thing happened a while back with spam emails. Remember how bad spam used to be? Machine learning helped identify spam and, basically, eradicate it. These days, it’s far less common to see spam in your inbox each morning. Expect the same to happen with UGC in the near future.
Pinterest uses machine learning to show you more interesting content. Yelp uses machine learning to sort through user-uploaded photos. NextDoor uses machine learning to sort through content on their message boards. Disqus uses machine learning to weed out spammy comments.
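The filtering these companies do is far more sophisticated than anything that fits in a blog post, but a toy naive Bayes text classifier conveys the basic idea of learning to separate useful content from junk. All training examples and words below are invented for illustration.

```python
import math
from collections import Counter

class TinyNaiveBayes:
    """Minimal naive Bayes text classifier of the kind that might separate
    useful user-generated content ("good") from spammy content ("junk")."""

    def fit(self, labeled_texts):
        self.word_counts = {"good": Counter(), "junk": Counter()}
        self.class_counts = Counter()
        for text, label in labeled_texts:
            self.class_counts[label] += 1
            self.word_counts[label].update(text.lower().split())
        self.vocab = set(self.word_counts["good"]) | set(self.word_counts["junk"])
        return self

    def classify(self, text):
        scores = {}
        total = sum(self.class_counts.values())
        for label, counts in self.word_counts.items():
            score = math.log(self.class_counts[label] / total)  # class prior
            denom = sum(counts.values()) + len(self.vocab)
            for word in text.lower().split():
                score += math.log((counts[word] + 1) / denom)   # Laplace smoothing
            scores[label] = score
        return max(scores, key=scores.get)

# Invented training examples for a review site.
reviews = [
    ("great food and friendly staff", "good"),
    ("the pasta was delicious", "good"),
    ("helpful review with real detail", "good"),
    ("click here for free money", "junk"),
    ("buy cheap pills free offer", "junk"),
    ("free free free click now", "junk"),
]
model = TinyNaiveBayes().fit(reviews)
print(model.classify("delicious food and great staff"))  # good
print(model.classify("free money click here"))           # junk
```

Production systems layer far richer features and models on top of this, but the shape of the problem is the same: score each piece of content, bubble up the good, filter out the bad.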

Finding products faster

It’s no surprise that as a search company, Google was always at the forefront of hiring machine-learning researchers. In fact, Google recently put an artificial intelligence expert in charge of search. But the ability to index a huge database and pull up results that match a keyword has existed since the 1970s. What makes Google special is that it knows which matching result is the most relevant; the way that it knows is through machine learning.
But it’s not just Google that needs smart search results. Home Depot needs to show which bathtubs in its huge inventory will fit in someone’s weird-shaped bathroom. Apple needs to show relevant apps in its app store. Intuit needs to surface a good help page when a user types in a certain tax form.
Successful e-commerce startups from Lyst to Trunk Archive employ machine learning to show high-quality content to their users. Other startups, like Rich Relevance and Edgecase, employ machine-learning strategies to give their commerce customers the benefits of machine learning when their users are browsing for products.

Engaging with customers

You may have noticed “contact us” forms getting leaner in recent years. That’s another place where machine learning has helped streamline business processes. Instead of having users self-select an issue and fill out endless form fields, machine learning can look at the substance of a request and route it to the right place.
That seems like a small thing, but ticket tagging and routing can be a massive expense for big businesses. Having a sales inquiry end up with the sales team or a complaint end up instantly in the customer service department’s queue saves companies significant time and money, all while making sure issues get prioritized and solved as fast as possible.
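That routing step can be sketched very simply: represent each department by the combined bag-of-words of its past tickets, then send a new ticket to whichever department it most resembles by cosine similarity. The departments and example tickets below are made up; a production system would use a trained classifier over much larger history.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words feature vector as a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def route(ticket, history):
    """history: dept -> list of past ticket texts. Returns the best department."""
    centroids = {}
    for dept, texts in history.items():
        c = Counter()
        for t in texts:
            c.update(bow(t))
        centroids[dept] = c
    return max(centroids, key=lambda d: cosine(bow(ticket), centroids[d]))

history = {
    "sales": ["pricing for enterprise plan", "request a product demo"],
    "support": ["my order never arrived", "refund for broken item"],
}
print(route("can I get a demo and pricing info", history))  # → "sales"
```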

Understanding customer behavior

Machine learning also excels at sentiment analysis. And while public opinion can sometimes seem squishy to non-marketing folks, it actually drives a lot of big decisions.
For example, say a movie studio puts out a trailer for a summer blockbuster. It can monitor social chatter to see what’s resonating with its target audience, then tweak its ads immediately to surface what people are actually responding to. That puts people in theaters.
Another example: A game studio recently put out a new title in a popular video game line without a game mode that fans were expecting. When gamers took to social media to complain, the studio was able to monitor and understand the conversation. The company ended up changing its release schedule in order to add the feature, turning detractors into promoters.
How did they pull faint signals out of millions of tweets? They used machine learning. And in the past few years, this kind of social media listening through machine learning has become standard operating procedure.
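Pulling those signals out comes down to a trained sentiment classifier. Here is a bare-bones sketch: a perceptron over bag-of-words features, learned from a handful of hand-labeled posts. The labels and example posts are invented for illustration; real social-listening systems train far larger models on far more data.

```python
from collections import Counter, defaultdict

def features(text):
    """Bag-of-words features: word -> count."""
    return Counter(text.lower().split())

def train(examples, epochs=10):
    """examples: list of (text, +1 or -1). Returns learned per-word weights."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in examples:
            feats = features(text)
            score = sum(w[t] * c for t, c in feats.items())
            if score * label <= 0:  # misclassified: nudge weights toward label
                for t, c in feats.items():
                    w[t] += label * c
    return w

def sentiment(text, w):
    score = sum(w[t] * c for t, c in features(text).items())
    return "positive" if score > 0 else "negative"

examples = [
    ("love the new trailer", +1),
    ("this looks amazing", +1),
    ("worst game mode removal ever", -1),
    ("really disappointed no mode", -1),
]
w = train(examples)
print(sentiment("worst removal ever", w))  # → "negative"
```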

What’s next?

Dealing with machine-learning algorithms is tricky. Normal algorithms are predictable, and we can look under the hood and see how they work. In some ways, machine-learning algorithms are more like people. As users, we want answers to questions like “why did The New York Times show me that weird ad” or “why did Amazon recommend that funny book?”
In fact, The New York Times and Amazon don’t really understand the specific results themselves any more than our brains know why we chose Thai food for dinner or got lost down a particular Wikipedia rabbit hole.
If you were getting into the machine-learning field a decade ago, it was hard to find work outside of places like Google and Yahoo. Now, machine learning is everywhere. Data is more prevalent than ever, and it’s easier to access. New products like Microsoft Azure ML and IBM Watson drive down both the setup cost and ongoing cost of state-of-the-art machine-learning algorithms.
At the same time, VCs have started funds — from Workday’s Machine Learning fund to Bloomberg Beta to the Data Collective — that are completely focused on funding companies across nearly every industry that use machine learning to build a sizeable advantage.
Most of the conversation about machine learning in popular culture revolves around AI personal assistants and self-driving cars (both applications are very cool!), but nearly every website you interact with is using machine learning behind the scenes. Big companies are investing in machine learning not because it’s a fad or because it makes them seem cutting edge. They invest because they’ve seen positive ROI. And that’s why innovation will continue.
