A Great Simian or just a Monkey

Welcome to the unintended Digital Transformation

Bad things often have positive sides, and so does Corona / Covid-19. Companies have struggled for many years with the digital transformation, hiring tons of expensive consultants and making huge investments in tools and products, all to make the digital transformation happen. It has been slow, expensive and tough. Few large enterprises have succeeded at scale.

Suddenly the Corona pandemic arrives and the transformation happens by itself. One could wonder why it did not happen without a pandemic. Probably the answer is simple:

We as humans and as businesses are not very adaptable to change unless we are forced.

With Corona, we are forced to change our behaviour to stay healthy and to assist in not spreading the virus. Digital is a necessary infrastructure for both our society and business to stay safe and function.

Corona is not positive in any way, but some consequences are definitely positive.

Stay safe out there and welcome to the digital transformation.

Photo: Private, taken in Livigno 22 Feb 2020, a time when Italy (and the rest of Europe) had not yet fully realised what they were facing. The emptiness in the system is because it is Saturday and not due to the virus. Saturdays are always empty in Livigno.

Remote Work

Remote Work – How to become a remote work company

Definitions, principles, tips and tools for remote work. Given the global COVID-19 / Corona situation, more and more companies today advise their employees to work from home. I will not write about Corona itself; there are far more qualified professionals for that. But remote work and work-from-home (WFH) is something I have some experience with, so I thought I would summarise my perspective on remote work.

What is Remote Work?

Work from a place that is best suited for us to get our work done.

That is my definition of remote work. I know others might define it differently, but on any normal day I would say that this is a pretty clear and general definition of what the term “Remote Work” is all about.

Note that “remote” does not have to be at home. It can be at a client, at a cafe, on a different continent, on a plane…or at home.

By definition, I feel the term remote is flawed, since it gives the perception that remote is the edge case and in-the-office is the norm.

Principles

There are a few things that become more important than when everyone is in the office. These principles are not in any way limited to companies that go the remote-work way; they apply to all companies today.

A company today is based on trust, transparency, accountability and accessibility.

  1. Trust not control
    To be able to work remotely we need to trust our co-workers. This is rather logical, but many companies still apply control before trust. In a remote work environment it is important that every co-worker feels empowered by the company and senses that the company trusts him / her with the task at hand.
  2. Accountability
    With trust comes accountability. If the company trusts you with important tasks, you are also accountable for those tasks. It is not so much about getting hanged if you do not finish on time, but rather about doing everything in your power to deliver the task. That can mean asking for help in time, making appropriate changes for better results and so on, but most importantly that we do not expect “others” to do things, say “it was not my responsibility” or simply let things slip. This is not only an individual topic; it applies to teams as well.
  3. Transparency
    The company needs to apply a public-first philosophy to all information. It is also important that information is written / recorded and not informally delivered by the coffee machine. Not everything can be public due to business risks or legal consequences, but those are exceptions, not a reason to keep everything else non-public.
    Being transparent is not the same as pushing every snippet of information to all co-workers in the #general channel in Slack or a company-all email; it is about making information accessible to everyone.
  4. Accessibility
    We need, to a larger extent, to keep everyone updated on what we do and also be accessible to co-workers who want to get in touch. This is not an “always-on” requirement, but about being open about when we are accessible and when we are not.
  5. Asynchronous communication (bonus bullet 1)
    This is a very important practical bullet. The internal culture of many companies relies on an almost real-time expectation of replies in chat, email or similar. In a remote work environment it is important that all communication is async-first. We cannot expect everyone to be constantly sitting by the computer waiting to reply to co-workers. An even more important part is that we can no longer expect everyone to have the same working hours; everyone works when it suits them best from a business and private perspective.
  6. Family first (bonus bullet 2)
    With remote work, companies can also send an important signal to their employees: family comes first. This works from several perspectives. In most cases, remote work means more time with family and a possibility to join more of your kids' activities or other family activities; simply put, you become a more present family member. Secondly, it also integrates the family into what you do at work, giving co-workers the opportunity to meet at each other's homes and meet family members.

Myths about not working in a company office

There are so many myths and wrong assumptions about remote working and working from home. Many people I have talked to over the years simply say either…

“My work can only be done from the office”

….or…

“My company does not allow us to work remotely”

I understand that some roles at certain companies are tough, but most information-workers today can work remotely without any disruption or problem at all. This is solely a relic from bad management or a culture that needs a significant upgrade.

Forbes recently put together a list of remote-work myths. Worth a browse.

Are tools the solution?

I currently see tons of posts about companies trying to promote their product as the solution to remote work. A tool is never the solution to a change in work procedures. It might help to have a great screwdriver, but to build a house you need a great architecture and reliable construction, regardless of how great your screwdriver is. The real solution lies in aligning around values / culture, updating every process to support remote work, and writing those processes down in a format that everyone can adapt to regardless of where they work.

Given my background in collaboration and online communication, I see a lot of similarities. No tool will make a company better at corporate collaboration or online communication; it is about enabling every necessary process with the right features.

A post like this one from Slack, “Adapting the way we work when offices need to close”, lacks a lot in regard to processes and values. Slack is, imho, not a tool that increases the results of remote work in any significant way out of the box. In 2016 I wrote a post, “Cutting the Slack”, that describes why I consider Slack more of a productivity-hostile tool.

Slack can definitely be a tool that helps when working remotely, but it is not in itself the solution. A better company to look at is GitLab, which is a truly remote work company, or all-remote as they call it.

GitLab

GitLab takes a more all-in perspective and describes everything very publicly. It is extensive, detailed and very good. They seem to truly live up to most of my principles above, without a doubt the trust and transparency parts as well as the two bonus bullets, async communication and family first.

Just go to their all-remote section and dive as deep as you want.

Basecamp

Basecamp is another company that is all-remote and uses simplicity and simple logic when describing their way of working. I have nothing but admiration for them. Below are some resources to take a closer look at if you are interested. There is a lot of worthwhile reading on their blog Signal v. Noise as well as in their podcast Rework.

They have also written a book on the topic, Remote. I read the book a few years ago; it is a good summary built on tips & tricks that will help you as an organisation become better at remote work. It is not a deep dive, but it gives you the essentials.

Top 3 things to focus on for co-workers

All companies want to improve their business when making a decision like this. It could be to avoid losing employees, increase revenue or productivity, decrease sick days, cut travel costs and so on.

So, when the values are in place, people are working from wherever they want at what hours they need, what is left?

We need to make sure our co-workers are happy, deliver and evolve. What do I mean by that?

  • Happy
    Since we do not meet our co-workers on an everyday basis, as we do when everyone is in the office, we need a way to find out how they are feeling. Are they happy, sad, stressed or angry?

    Solution: A simple way to catch the sentiment of all company co-workers on a daily basis.
  • Deliver
    There are usually all kinds of project management and task solutions in a company, but many of those are cumbersome and become more of a formal tool to update when tasks are solved. How can we, as co-workers and peers, in an easy way follow what tasks we are working on today and during this week?

    Solution: Daily and weekly checkins. Every co-worker submits what they will do during the day, plus a weekly checkin for the bigger picture (naturally with follow-up). This gives everyone in the company a glimpse of what everyone does on a daily basis as well as a weekly overview. These checkins are visible to everyone and should not take more than 10 seconds to fill in.
  • Evolve (feedback)
    Feedback is generally overlooked in my view. It is given too rarely and often by the wrong person in the wrong manner, but that is a post for another day. To be able to stay motivated in a remote work environment, constant feedback is necessary. We need to know if we did great things and if we did things that could be improved. Feedback also needs to be given in the right manner, by the right person at the right time. Feedback is also a constant loop and should be done regularly, not only in the yearly manager-review.

    Solution: Update the personal development process to support more iterative feedback over time that is driven by task, team and personal development. Feedback could be triggered automatically, by peers or requested by yourself. This topic relies heavily on the core principles trust, transparency and accountability as well as company and personal values.

…from a management perspective then?

Well, this should naturally not differ much from what we already do, but some things are worth highlighting.

  1. Goals and KPIs
  2. Dashboards
  3. Employees

There is a huge benefit of remote work that is often missed: the fact that most information is now in a written or recorded format. This gives us the opportunity to follow up on our business, both operationally and in terms of productivity. With all information digitally available, we can in a fairly simple way (well) get an almost real-time overview of our operational efficiency and sentiment.

Since most companies have business goals, and many trickle those business objectives down to business units and often individual levels, it is now rather easy to implement KPIs on all levels and present them in dashboards.

If we share what we are working on (openly), how we enjoy our day (personal only, but aggregated in dashboards for management to catch trends in departments etc) and constantly evaluate our performance (person-to-person, so not open; only the overall rating would be aggregated in management dashboards), it also gives us the opportunity to get a better view of our own performance and thereby evolve as individuals. This must be one of the most important benefits of remote work, and it would be hard to get with values and processes that are not remote-work ready.

Photo credit: My private photo, taken in Chiang Mai in Thailand. I was not remote working there, but I could have been.

Corporate Collaboration is still broken

After all these years and with new tools constantly being released, corporate collaboration is still not solved. I would actually go as far as to state that almost nothing has happened in the last 5 years. Collaboration in large corporations is unstructured and scattered all over (both tools and information), without any form of measurement of productivity, information or impact.

Let us start with why we are using corporate collaboration tools within our organisation.

  1. Make better and faster decisions based on the right information from experts in the area.
  2. Become more productive.
  3. Tear down corporate structure and connect the right expertise / people and information without corporate structures in the way.
  4. As well as softer things, such as a sense of belonging and being part of a team.

…but at the end of the day, for a corporation, it boils down to being a more efficient organisation that researches, produces, markets, sells and supports its products in a more efficient way, so that the company performs better in terms of its corporate objectives.

Is there a corporate collaboration tool today that can prove it measures the impact and / or result of its collaboration product when used in the right way within a corporation? NO!

Why is Corporate Collaboration not evolving?

This is naturally a hard question to answer and I do not have a definitive opinion, but a brief look at how things have evolved in the last few years might give us some guidance.

First, most of the tools corporations use sneak their way in through the backdoor via small teams that are tired of Microsoft SharePoint's document-centric collaboration. These teams simply want a smoother and more efficient way of communicating and getting things done, so tools like Slack and Trello find their way in. A manager of a small team adds the monthly fee to his credit card as an expense report, and suddenly these tools have made their way through the corporate door without passing the IT department and other slow, backward-looking departments. These tools are great in many ways and provide clear value to the team, but they do not provide any measurable value from the company perspective.

Secondly, these tools are built to assist small teams, or multiple small teams. Multiple small teams does not mean that these teams are connected in any way. A company that suddenly has hundreds of small teams in e.g. Slack does not gain any immediate measurable benefit from all these teams and channels full of one-sentence communication and integrations with other tools. Furthermore, it is hard to re-use the information in those teams as best practices or similar for others to learn from.

What we have now is multiple consumer-centric social apps that have entered the company without providing any measurable benefit to the company (they do provide value, don't get me wrong). On the other end we have lots of old-fashioned corporate software vendors trying their best to keep up; their software is either too boring and complex for people to actually use (or implement), or too simple, just trying to replicate shiny new toys like Slack or Trello et al and thereby falling into the same category of non-measurable tools. Microsoft Teams (Yammer) is one of those. Teams creates the same value as Slack, but it is still not built to support a large corporation; it is for teams without connection to other teams, or aggregation for corporate review to learn from, track, or simply give praise when a team is outperforming and support when it has a challenge.

Is there a future for Corporate Collaboration?

Absolutely. Even though the first paragraphs in this post are a bit pessimistic, I am fully convinced we will see lots of innovation in this space. AI (let us stick with this very general term for now) is one enabler, and now that social finally seems to be a natural part of the corporate infrastructure in one way or another, we can start to build real value, from a corporate perspective, from these tools or new ones.

The ideal corporate collaboration tool

A dangerous headline for a paragraph, but conceptually we are now mature enough to take the next step and implement a corporate collaboration tool that is measurable and can prove the value of using the tool for the company as well as for the individual, the team and the department, all without compromising simplicity and ease of use.

Measurable

A modern tool needs to be able to measure the benefits of its usage in real time and present them to everyone.

  1. Individuals
    Are your contributions in discussions helping improve the task at hand, and how do you measure against your goals? How are your conversations evolving over time? Are you considered a person who makes others smarter, or makes others smile?
  2. Teams
    Is your team solving tasks according to the timeline and its dependencies with other teams / projects? Are there people in your team who provide value in these tools but do not get the right appreciation from their manager? How are the sentiment and tone in the team, and how do the topics discussed map to company objectives? How does your team rate against other teams? Does your team provide value to other teams?
  3. Management
    Dashboards for everything, with a real-time view of the operational status of the company. Find which teams, individuals and departments are most productive. See how topic, activity, tone, sentiment etc map to productivity. Get a clear view of well-performing teams and turn them into best practices. Find individuals who usually do not get credit for their work, since they work in the quiet, but often provide the information that makes the greatest impact on projects or similar (as new input to the yearly review with the boss).

Suddenly we have KPIs and real-time dashboards that can measure productivity and collaboration in a way that has not been done yet.

Simply put, it is all about connecting the dots. That can partly mean letting AI work on the unstructured text, but also programmatically connecting the dots (either with user assistance or by code). A challenge, YES; impossible, NO. And still with the same usability as Slack or Trello et al.

Are we going to see a product like this soon? I certainly hope so; I know many organisations would love it.

Or maybe it is just me that thinks corporate collaboration is broken and a tool like above would provide great value to every corporation?

Why is unstructured data so important?

Your business is making decisions on only 20% of the information it has access to, since 80% of your information is unstructured and, until now, has not been possible to fully utilize. It is about time we start to make decisions for our companies based on all the information we have, not only 20%. Anything else would be quite stupid, wouldn't it?

Companies have tried to make sense of unstructured data for ages, but 78% state that they have little or no insight into their unstructured data.

It is an understatement to say that most of the world's information is completely unutilized and hidden in the dark.

You might think “I know my data” or “We can search our documents”, but that is not the same as getting value from the information.

What is unstructured data?

It might be clear to many, but just so we are all on the same page, unstructured data is images, video, sound and documents like blogs, news articles and Word documents et al.

An image is most often stored with only some metadata attached to it. That data only tells us the time and date the photo was taken, and sometimes (if the camera has the feature) where it was taken. But what is in the photo? The most important information is completely hidden from us, and we need to manually look at the picture to decide what it contains… it is the same with video.

For text, it is hard to find entities, sentiment, emotions, categories and also how they actually relate to each other. Which information is the most significant in a text and how does that relate to a target entity etc.

But I have Google Search?

Yes, we all do, but let us try an example. If we let Google's algorithm read all the Harry Potter books and at the same time let a cognitive system like Watson read the books, what will the difference be?

A simple yet powerful result is that one of them will be able to answer this question:

“Which house in the Harry Potter books is evil?”

Image courtesy of Warner Brothers

It is not stated in the books that the evil house is Slytherin, but we all know it, since we have read, reasoned and decided that Slytherin is all bad and Gryffindor is good. That is an advanced example, but it still puts the finger on the difference.

Google can deliver this result as well, but only if someone actually has written that Slytherin is evil in a text.

 

In a company context then?

If we translate this to a company, we could have thousands of reviews of our products stored in documents, but not actually know which one is most appreciated and why (most reviews rely on stars, numbers etc to create a coarse rating, but that does not tell us anything about context).

If those reviews would be enriched with cognitive information, the answer is only a search away.

Other examples are accident reports for insurance companies, customer support, legal (laws, regulations etc), social media, integrating unstructured data into business analytics and predictive analysis, medical research, product information, marketing and communication etc.

Examples – Getting value from your unstructured data

Example of getting value from unstructured text: a customer survey, customer feedback or similar. You receive a 2-star review, and that is not good. What you miss when you are not working with your unstructured data in a way that lets you pull value from it is that the comment says “The product was a broken unit, but Julia really went the extra mile to fix it”, or that a 3-star review comes with the comment “Your opening hours make it impossible for me to contact you in any way, even though I love your product, I actually just bought 2 new ones”.

What a traditional system misses is the following:

  1. Julia did a great job
  2. The reason for the 2 stars was that the product was broken
  3. Opening hours are bad, which pulls down the stars
  4. The average review was not connected to the product itself, which seemed to be a 5-star experience.

That is without a doubt important information for any company working with customer experience.

Example of getting value from unstructured data in images: your ad agency has taken a bunch of photos for your new products and it is time to add those to the product information data. Often the data from photos and the product data are disconnected, but no more; now the process can be streamlined. If we put this in an online perspective, it will not only make your process more efficient, it will also increase conversion and sales. Why?

  1. If a potential customer is looking for a yellow chair, he / she will find it immediately, instead of browsing through pages of chairs in different colors and sizes.
  2. Since the unstructured data has become structured, the Google results will improve significantly and your customers will find the yellow chair much faster.
  3. Value-add and up-sell. Since we know the image contains a yellow chair, we can now automatically add value by showing products that fit well with the yellow chair, and not only additional chairs as is often the case today.

Example with internal company documents: you have thousands of customer reviews, but they are only available in document format and poorly tagged; you only get product, date and some other basic metadata. You cannot get an overview of which products are having problems and with what, which products are highly appreciated and why, or whether a specific issue is recurring. If you enrich all your reviews with cognitive capabilities you will get the following (please note that this is not a huge effort):

  1. Dashboards with a clear overview of all products and how they are perceived, with a score.
  2. If there are problems with products, the actual problem is identified on the affected products.
  3. Attached images can be analyzed; products, color, model, issues etc can now be identified.
  4. …well, you get it.

How do I start to take control over my unstructured data?

All this must be complex, expensive and take ages to get up and running? Not really, the actual enrichment is very straightforward. For text, use the Watson Natural Language Understanding service: send text through the API and enrich the document with the response. You can also bulk upload documents (in many different formats, incl. .doc, .pdf and HTML) to the Watson Discovery Service if you want the service to manage the processing for you (ingesting, converting, enriching, storing and also querying). Watson Discovery Service uses NLU for enrichment but adds an end-to-end solution; the actual enrichment is the same. Using WDS is a bit more complex, but on the other hand you get your own cognitive search engine in a box (incl. a powerful query language) and intuitive tooling.
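
To make the text part concrete, here is a minimal sketch of such an enrichment call, assuming the ibm-watson Python SDK and a Watson NLU instance; the API key and the exact feature mix are placeholders, not a prescription.

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, CategoriesOptions, SentimentOptions, EmotionOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials for an assumed NLU instance
nlu = NaturalLanguageUnderstandingV1(
    version='2019-07-12',
    authenticator=IAMAuthenticator('YOUR_API_KEY'))

review_text = "The product was a broken unit, but Julia really went the extra mile to fix it."

# Ask for the enrichments discussed in this post: entities, categories, sentiment, emotion
result = nlu.analyze(
    text=review_text,
    features=Features(
        entities=EntitiesOptions(sentiment=True),
        categories=CategoriesOptions(),
        sentiment=SentimentOptions(),
        emotion=EmotionOptions())).get_result()

# Store the JSON response next to the original review as its enrichment
print(result['sentiment']['document'], result['entities'])
```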

If you want to enrich documents with domain-specific information, like your own products, domain language etc, it is possible to add custom ML models to both Watson NLU and WDS through Watson Knowledge Studio, which is an easy-to-use interface for building a custom ML model (done by subject matter experts, not programmers).

For images, it is a similar approach: send the image to the Watson Visual Recognition API and enrich it with the response. It is also possible to build your own domain-specific classifiers so that Watson can recognize your products etc.
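
A minimal sketch of the image side, assuming the ibm-watson Python SDK and a Visual Recognition instance; the key and the image URL are placeholders.

```python
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials for an assumed Visual Recognition instance
visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    authenticator=IAMAuthenticator('YOUR_API_KEY'))

# Classify a product photo and keep the returned classes (e.g. "chair", "yellow color")
# as metadata on the image, just like the text enrichment above
result = visual_recognition.classify(
    url='https://example.com/products/chair-123.jpg',
    threshold=0.6).get_result()

for image in result['images']:
    for classifier in image['classifiers']:
        for cls in classifier['classes']:
            print(cls['class'], cls['score'])
```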

Conclusion

There are vast amounts of value hidden in unstructured information, and in this post I have tried to give a few simple examples. In each organization there will be easy wins but, as with everything, also more complex ones.

The best value is naturally gained when all information is integrated and put in context.

Today, only 20% of the information in companies is accessible. That is not the most reliable foundation for a business to rely on, so it is time to get started and gain value from ALL your information.

Business Benefits of Natural Language Understanding

Yesterday I wrote a post about Google Natural Language vs Watson Natural Language Understanding. Given that it was 2,178 words, you might think it contained a lot of info on why understanding natural language from unstructured text matters… but no, so I thought it was appropriate to write one.

I wanted to elaborate on some business examples in this post, and below you will find two: Product Information Cognitive Enrichment and how to utilize Natural Language in Customer Relations.

What is Natural Language in this context?

About 80% of all data in the world is unstructured (dark data): information in text (like documents, blogs, intranets, social media etc), video and sound. This information has been difficult to draw value from, up until now.

The concrete result of utilizing a natural language service like Watson NLU or Google NL is that you can surface information that previously was hidden and not accessible to draw valuable and actionable insight from.

Among other things, you can enrich your information with the following:

  1. Is it a positive or negative text?
  2. What emotions are present in the text? Is the writer angry, and what is he / she angry about? Is it a product or a person?
  3. What products, persons, companies etc are mentioned in the post.
  4. What category does the text belong to? Is it a text about sports, business, tech etc.?
  5. How do things relate to each other? Is a person angry about a product from a company in a specific city?

But what about classic search, like intranet search or Google search? Can't Google Search already do this?

Neither of these is good at managing these types of questions; they are good at relevancy, but not at understanding the meaning of the text.

Try imagining the result of this question in a Google search or intranet search:

“Show me the 10 most positive reviews of hotels in Stockholm that are close to the Royal Castle”.

If a service like Watson Natural Language Understanding has enriched all those reviews, an answer is returned in a jiffy, with ranking and everything.
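
As a rough illustration, here is a minimal sketch of how such a question could be asked once the reviews have been ingested and enriched, assuming Watson Discovery (v1) and the ibm-watson Python SDK; the environment and collection IDs are placeholders and the filter syntax assumes the standard sentiment enrichment.

```python
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials for an assumed Discovery instance
discovery = DiscoveryV1(
    version='2019-04-30',
    authenticator=IAMAuthenticator('YOUR_API_KEY'))

# "The 10 most positive reviews of hotels in Stockholm close to the Royal Castle"
response = discovery.query(
    environment_id='YOUR_ENVIRONMENT_ID',
    collection_id='YOUR_COLLECTION_ID',
    natural_language_query='hotels in Stockholm close to the Royal Castle',
    filter='enriched_text.sentiment.document.label::positive',
    sort='-enriched_text.sentiment.document.score',  # most positive first
    count=10).get_result()

for doc in response['results']:
    print(doc['enriched_text']['sentiment']['document']['score'], doc.get('title'))
```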

Other examples can be:

  • “Which of our products are mentioned in negative terms the last week that is related to our new shop in Amsterdam”
  • “Which of our products are most mentioned in business-related articles “
  • “Which brands are most related to our product X”
  • “How does our product compare against our competitor X in terms of sentiment in the financial sector”

What about measuring mentions in Social Media etc?

It is also important to note that a simple question like “Which brands are most related to our product X” is not always a simple search for mentions of the two entities (brand and product name); two things differ from a “dumb” keyword search.

  1. Are the product and the brand actually related to each other in the text?
  2. Is the product or brand really a product name or brand name? Let us take two Swedish companies, Ericsson and IKEA, and use the following two sentences as examples:

    “The politician Peter Ericsson and Ivar Andersson are traveling to Kivik on Tuesday for meetings”.

    “Ericsson has signed a new agreement with IKEA to implement their technology in the Kivik product range starting 2019”.

    You can replace Ericsson with Apple or many other companies: same frustration. The IKEA product can be replaced with most companies' products, since many have names that already exist.

What a natural language service does is understand that in the first example Peter Ericsson is not the company Ericsson, but a person. It also understands that Kivik is a city and not a product. Therefore the sentence does not show up in a cognitive query, but it would most certainly show up in a classic social media mention search.

In the other sentence, it understands that Ericsson is a company and Kivik is a product. That is natural language understanding, and it will have a great impact on the value we get from information.
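
A minimal sketch of the disambiguation described above, assuming a Watson NLU instance and the ibm-watson Python SDK (placeholder credentials); the expected entity types in the comments reflect the reasoning in this post, the actual output may of course differ.

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, EntitiesOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version='2019-07-12',
    authenticator=IAMAuthenticator('YOUR_API_KEY'))

sentences = [
    "The politician Peter Ericsson and Ivar Andersson are traveling to Kivik on Tuesday for meetings.",
    "Ericsson has signed a new agreement with IKEA to implement their technology in the Kivik product range starting 2019.",
]

for text in sentences:
    result = nlu.analyze(text=text, features=Features(entities=EntitiesOptions())).get_result()
    # First sentence: "Peter Ericsson" should come back as a Person and "Kivik" as a Location.
    # Second sentence: "Ericsson" and "IKEA" should come back as Companies.
    print([(e['text'], e['type']) for e in result['entities']])
```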

Integrate cognitive enriched information in your business applications

Now let's put this in a wider perspective, where this information is combined with other sources such as already existing structured data (BI / analytics etc). Most companies can predict their product sales, but unexpected things always appear and most of the time it is hard to know why, or at least to prove the gut feeling you have. By combining enriched unstructured data with existing structured data, new insights surface.

Example: suddenly and unexpectedly, sales drop for a product, without any logical explanation. Predictive analytics can only make predictions from existing structured data; the unexpected is hard to predict and to find an answer to. No more. By using customer service conversations and external sources like weather, news and social media, we can suddenly see that the product is mentioned in negative terms, and that it is due to a new feature that was launched a few months ago; because of the change in weather seasons, the reaction did not surface until now. This information was found in customer service conversations as well as in social media and weather data, and when it was combined, the root cause was found, the product could be fixed and sales returned.

It might sound like a far-fetched scenario, but I did not want to make it too simple, since most companies are complex and want to draw insight from reality: not only a simple “What products are mentioned in negative terms on Twitter?”, but rather put it in context and connect it with sales and the “why” question as well. The example above surfaces actionable insight that was not possible before.

Product Information Cognitive Enrichment

Product information tends to be very static and does not always match how customers refer to the product. Often product data uses internal terms, and a lot of the unstructured data is impossible to find.

Let us say you are looking for a yellow chair from a furniture store. We head to Google and enter a search for a yellow chair for bedrooms. I have tried this search with a few furniture companies and it is very similar: high in the ranking you first get the chair category and then the bedroom category (or the other way around). When you click a link and move on to the site, it is often not the yellow chair you find, but a category with 30 pages of chairs or similar; somewhere we have lost the connection to the bedroom as well as to the colour yellow.

I know every company's product data is different, as is how they work with Google, but if you work at a large company that sells products, I think you can relate to the challenge.

What I discovered when I tried several alternatives is that Pinterest surfaces at the top for these types of searches on Google. Why?

Simple: Pinterest has user-created product information. To put this in the context of this post, the information on Pinterest is by default already cognitively enriched; the product descriptions and data come from humans, not from a product database with only structured data.

So, if our product information had been enriched with cognitive capabilities, how would that look?

Product data that is enriched with cognitive capabilities would connect the customer with the right product in one click instead of endless scrolling and headache.

Since the enriched product info knows that the chair in the image is yellow and also knows which chairs are well suited for bedrooms, it is a simple query to answer.

The result is higher conversion, more sales and happier customers.

Nice and dandy, absolutely, but is it really that simple? Overall I would say yes! If we break it down, there are a few actions needed.

  1. Enrich existing product information with cognitive information from unstructured data such as editorial content, product images and existing product descriptions. This could also be combined with behavioral data from Google Analytics or similar.
  2. To gain full value, subject matter experts (in this case probably interior designers, product salespeople or similar) need to train a machine learning model that understands which pieces of furniture fit together and which colors match other colors. We need to teach the algorithm to act as a designer, to replicate the inspirational feeling we get when walking into a furniture store, and not to put a pink toilet brush by the bedside.
  3. Access to data. Sounds easy, but can be a challenge.

Natural Language in Customer Relations

Enriching all the customer interactions we have is another great example of where natural language creates value. Which customer service ticket should I start with? Is the customer angry or happy (or angry when he / she started and happy when the ticket was closed)? Is it a critical business problem? Does it concern a high-priority product? Does the answer already exist in our knowledge base, so we can cut down the time spent on the ticket?

And equally important, all the information that was previously hidden is now available for us to use in dashboards for insights, customer satisfaction (without surveys!), product problems, time saved etc.

Enriching unstructured information with cognitive capabilities is of value for most companies, and I dare to state that every single company can benefit from investigating how it could benefit their organisation.

So, let's start to empower your organisation with the real value in your unstructured information.

 

Top image from my favorite coffeeshop, Koppi in Helsingborg

Google Natural Language vs Watson Natural Language Understanding

The competition in understanding natural language from unstructured text is thickening. Google just launched two new features for their Google Natural Language API, categories and sentiment. Those have been in the Watson Natural Language Understanding API for a while now, but let us see how the two APIs compare to each other overall.

Let us start with a head to head comparison with a real example.

Google Natural Language API vs Watson Natural Language Understanding Head-to-Head

I thought an article about another player in the game could be in order, so I used an article from Fast Company about Microsoft CEO Satya Nadella, “Satya Nadella Rewrites Microsoft's Code”.

I will use the demo interfaces for both services; they can be found here for Google NL and here for Watson NLU. The only difference I could find in how you post information to the two services is that in the case of Watson, you can just post a URL to the API and Watson does the rest. It is a simple feature that makes analyzing web pages much easier, but the result is the same, and someone has probably already built something similar for Google NL and put it on GitHub. If you do try the services, I suggest looking at the actual API results as well, not only the demo interfaces, since those only show parts of the results.
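
A minimal sketch of that difference, assuming the ibm-watson and google-cloud-language Python clients; credentials and the article URL are placeholders.

```python
import requests
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions, CategoriesOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from google.cloud import language_v1

article_url = 'https://example.com/satya-nadella-article'  # placeholder URL

# Watson NLU can fetch and analyze the page directly from its URL
nlu = NaturalLanguageUnderstandingV1(
    version='2019-07-12',
    authenticator=IAMAuthenticator('YOUR_API_KEY'))
watson_result = nlu.analyze(
    url=article_url,
    features=Features(sentiment=SentimentOptions(), categories=CategoriesOptions())).get_result()

# Google NL expects the content itself, so the page has to be fetched first
page_html = requests.get(article_url).text
google_client = language_v1.LanguageServiceClient()
document = language_v1.Document(content=page_html, type_=language_v1.Document.Type.HTML)
google_result = google_client.analyze_sentiment(request={'document': document})

print(watson_result['sentiment']['document'])
print(google_result.document_sentiment.score, google_result.document_sentiment.magnitude)
```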

So, how did they compare?

Document Sentiment: 

Watson NLU returns a 0.19 positive sentiment on document level
Google NL returns a 0 neutral sentiment

…so very similar; I would have considered anything else very strange, given the length and depth of the article.

Winner: Shared victory

Sentiment breakdown

Both services provide a breakdown on sentiment so it is possible to determine sentiment on entities etc, but Google NL also provides sentiment on sentences, which can come in handy since it puts the sentiment in context immediately in the result from the API.

Entities

I started by listing a few entities to compare, but that does not give a great perspective of the capabilities, since those numbers need context; do run it yourself and check the results for details. Overall the two are very similar, naturally with some differences. Watson NLU provides slightly better granularity, but Google NL has the sentence-level results, which are very good, so overall very similar.

Winner: Shared victory

Categories

Again, very similar. The major difference is that Google added news and business categories, while Watson was a bit more rigid and stuck to tech and software. Even though the entities in the article are mainly tech-related, I did like that Google NL classified the article as Business / Industrial with a 0.89 score, while Watson NLU did not include any business-related category, but classified the major category as /technology and computing/software with a 0.67 score.

Winner: Google

Entities

This one was a bit peculiar. Entity identification is naturally a difficult thing, but I was a bit surprised by the results from Google NL, while Watson NLU was quite solid. Let us just look at the top 4 from each:

Watson NLU (the score is relevance score)

  1. Microsoft, Company, 0.87
  2. Satya Nadella, Person, 0.81
  3. CEO, JobTitle, 0.55
  4. Steve Ballmer, Person, 0.38

Google NL (the score is salience score)

  1. Satya Nadella, Person, 0.47
  2. Microsoft, Organisation, 0.42
  3. learner, Person, 0.02
  4. CEO, Person, 0.01

The two things that surprised me were the drop in salience score already after the second entity (it stayed near 0 for all the remaining entities) and the types given to “learner” and CEO… and also that “learner” was classified as an entity at all. Looking through the entire list from Google NL, I can't completely get my head around it.

It also seems like Watson NLU is a bit more capable with business-related types, while Google NL is a bit more focused on consumer types. Watson NLU is clearly more structured.

Winner: Watson

Conclusions of the test

The main difference between the two is that Watson NLU supports more features, like emotion, as well as the possibility to apply custom ML models. This gives Watson NLU the capability of learning entities and relations in your specific domain.

Google NL has the benefit of being straightforward, supporting all its features in all supported languages, and having a bit more granularity in its scores (salience and magnitude).

Is it actually working? I would say that both services are good at what they do, but at this stage I would give the win to Watson, due to the more extensive features as well as the capability of adding custom models. This is from an enterprise perspective; if you are in the consumer space, it might be worth doing a POC on both. I like how IBM has started to take a more modern approach with Watson, and I think the APIs work very similarly. They are open, well documented and easy to work with (please note that I am not a developer).

Also, it is worth noting that much of Watson NLU has been around for a few years now (through IBM's acquisition of AlchemyAPI in 2015). Google has been in the game for many years as well, but not in the enterprise space with a packaged service for natural language. If Google continues to focus on this space, I think they will be a real threat to IBM, especially if IBM does not keep its pace up (which I see as a risk, given that it is IBM).

As of this date I would say Watson NLU is the winner of the test, but I think Google is working at a high pace to package its deep knowledge in the space, and I expect a lot of progress quickly. So, even if Watson is the leader today, it might not be tomorrow. The difference seems to be in the packaging, not the domain expertise.

For a bit of breakdown on pricing, terminology etc, keep reading.

What is Natural Language in this context?

Simply put it is the capability to do text analysis through natural language processing. It gives us the possibility to extract the following:

  • Entities
    Extract people, companies, places, landmarks, organisations etc etc
  • Categories
    Automatic categorization of the text. Both Google NL and Watson NLU have an impressive list of categories. Google states a total of 700, and I have not counted Watson's, but it seems to be about the same.
    List of categories for Google NL.
    List of categories for Watson NLU.
  • Sentiment
    Is a text positive or negative? Nowadays it does not stop there; it is also possible to break it down further and target the sentiment at specific entities or words (this differs between Google and Watson, more on that later in the post).
  • Syntax / Semantic Roles
    Linguistic analysis of the text, splitting it into parts and identifying nouns and verbs as well as subject, action and object. The Google Cloud Natural Language Syntax feature seems to be a bit more extensive than Watson Natural Language's Semantic Roles.
  • Keywords, emotions, and concepts (Watson only)
    Emotions are… well, emotions, like joy, anger, sadness etc. A great feature for customer service or similar products.
    Keywords are words that are important in the text.
    Concepts are words that might or might not appear in the text but reflect a concept.

Terminology

The two services use similar terminology. Google uses Syntax where Watson uses Semantic Roles; otherwise the terminology is very similar.

In Watson NLU all results are returned with a confidence score. Google has added two additional things to consider: magnitude and salience. Personally, I like the simplicity of only using the confidence score, but naturally the two other values can provide additional value in some cases.

Confidence Score: a score between 0 and 1; the closer to 1, the more confident the result. Usually above 0.75 is considered confident, but that naturally depends on the subject and domain: you do not want a car to be only 75% sure that it is ok to do something, but if a customer service representative gets a ticket that is classified as a Lost Password ticket with 75% confidence, that will do.

Sentiment Score: a score between -1 and +1. Close to 0 is fairly neutral, the closer to +1 the more positive, and close to -1 is pretty negative. Watson actually sends the positive/neutral/negative label in the API response; Google sends only the score. Google Natural Language also sends a Magnitude parameter, a score that complements the sentiment score by telling us how strong the sentiment is.

Salience: shows how central an entity is in the entire provided text or document. It is a score between 0 and 1. This is a good feature if you need to see how “heavy” an entity is in a text. Only available in Google Natural Language.
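
Since Watson returns a label while Google only returns the score, here is a minimal sketch of how one might map Google's score to a label; the thresholds are my own assumption, not an official mapping.

```python
def sentiment_label(score: float, neutral_band: float = 0.25) -> str:
    """Map a -1..+1 sentiment score to a positive/neutral/negative label."""
    if score >= neutral_band:
        return 'positive'
    if score <= -neutral_band:
        return 'negative'
    return 'neutral'

# With this band, the 0.19 document score from the test above would come out as
# 'neutral', while Watson itself labeled it positive; the cut-off is a judgment call.
print(sentiment_label(0.19))   # neutral
print(sentiment_label(-0.7))   # negative
```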

To see explanations of Google Natural Language terminology as well as examples of JSON results for each of the above, do visit Google Natural Language Basics.

To see explanations of Watson Natural Language Understanding terminology as well as examples of JSON results for each of the above, do visit the Watson Natural Language Understanding API reference documentation. There is also an API Explorer if you want to play with the API.

Custom ML-models?

If you are an enterprise, this feature is usually very important, since it makes it possible to extract domain-specific entities and relations. If you have built an ML model, it is very easy to deploy it to Watson Natural Language Understanding, but I could not find a way to do it with Google Natural Language. Since I am not entirely familiar with the Google APIs, I might be mistaken here, so feel free to correct me and point me in the right direction.

It might also be as simple as IBM coming from the enterprise angle, where applying custom models is more of a prerequisite, while Google comes from the consumer space.

Supported Languages

In terms of AI / cognitive / machine learning, language is always a tricky beast. I have written extensively about what languages Watson understands, and will in this context only compare Watson NLU vs Google NL. I would say they are on par with each other on this topic. Watson supports Arabic and Russian, while Google NL supports Chinese (both traditional and simplified). As a Swede, I will give Watson the victory, since Watson NLU actually partially supports Swedish as well, but that is a very biased Watson victory.

Additionally, the comparison here is a bit difficult. My interpretation is that Google NL supports the listed languages for all features in the API, which is very good. Watson NLU has more features but does not support all features in all languages, so depending on your task, one or the other might support it.

Supported Languages for Watson Natural Language Understanding

Supported Languages for Google Natural Language

What is the price for Google Natural Language

Monthly prices are per 1,000 text records, where one text record can contain up to 1,000 Unicode characters. It might seem complicated, but if you have followed my prior posts on pricing, it is clear that they are all equally complicated. Full details are available at the Google NL pricing site.

google nl pricing

What is the price for Watson Natural Language Understanding

Watson NLU is also charged on a per-block-per-month price model; they call the blocks units, and a unit is about 10,000 characters, so bigger blocks. IBM also charges for enrichment features. As an example: if you want an 18,000-character text analysed for entities and categories, it is 4 NLU units (independent of how many categories or entities are returned): two units for the text and two units for the features. If you are looking for pricing for the rest of the Watson APIs, I have a post with a spreadsheet with the cost of all Watson APIs.

Watson NLU pricing

Given that the prices for Watson NLU are labeled in Swedish krona (since it is my live Bluemix account I have taken the screenshot from), I also attached a simplified model so it is easy to compare to USD as well.

Conclusion on pricing

This is a tough one, since these models are hard to interpret before you have worked with them live and actually been invoiced, which I have not been by Google, but have by IBM Watson.

Nevertheless, I get the impression that you get more bang for the buck with Watson in this case. I sense that the free tier is more generous as well. But this is a tough one for me to come to a clear conclusion on, so it is more of a sense than a fact that Watson gives more bang for the buck. The day I receive an invoice from Google with NL on it, I might update this.

Disclaimer: I have been working with the Watson APIs for many years and know them pretty well; I am not as deep into Google's APIs. With that said, I am open to others complementing my analysis and / or conclusions.

Top Image: The image is a wallpaper from the game Crysis 2.

Short-tail, long-tail and human-tail chatbot

I am not that overwhelmed by the hype of chatbots as a buzzword for AI. I see chatbots as an interface. It might be considered an evolution in terms of UI / UX, but as an example of AI, I am not convinced. So, what is the use-case for a chatbot then, in terms of AI? This is how I see it.

I have written about my thoughts on why I think a chatbot is a stupid example of AI, so I will not go into that much further.

I am dividing the chatbot use-case scenarios into three different stages:

  1. Short-tail
  2. Long-tail
  3. Human-tail

Chart: short-tail, long-tail and human-tail chatbot

This chart simplifies my description. As seen in the chart, a well-implemented chatbot can save vast amounts of time and help people focus on quality work instead of assisting with simple tasks that recur very frequently.

All of the above can have a chatbot as an interface, but can also be integrated into other existing software, a classic webpage or an app; it does not matter, but for me this describes the use case for a chatbot pretty clearly.

What is a chatbot?

This is also a term that is up for interpretation, but for me, a chatbot is software that can understand human language, understand the meaning and intention of what is said, identify entities and then respond in a way we understand, and with the appropriate language for the domain.

Short-tail of a chatbot

This is the most common use case, and the one with the least AI in it. Short-tail handles simple, repeatable tasks that are common and easy to foresee. Examples are:

  • What are your opening hours?
  • Can I book a table for 2, tomorrow at 8pm?
  • Who plays Harvey Specter in Suits?

From a customer service perspective, short-tail bots are often replacements for FAQs (internal or external) or the most prominent features on your company site.
Examples:

  • What is the wifi password
  • How do I configure the printer at 5th floor?
  • Show me product X for women in red.

As you can see from the above examples, there is not that much AI here, except that the bot needs to be able to understand the intent of your text and potentially identify a few entities (like colors, names, hours, dishes, sizes, product names etc).

Most chatbots we see today are in this category. Not all have the ability to understand the meaning and identify entities, but those more “stupid” bots also fall into this category.

Short-tail chatbots are essentially the replacement for site-search and forms on sites.
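
A minimal sketch of the intent-plus-entity detection a short-tail bot relies on, assuming Watson Assistant (the successor of the Conversation service mentioned elsewhere on this blog) and the ibm-watson Python SDK; the workspace ID and the intent / entity names are placeholders for whatever you have trained.

```python
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(
    version='2019-02-28',
    authenticator=IAMAuthenticator('YOUR_API_KEY'))  # placeholder credentials

response = assistant.message(
    workspace_id='YOUR_WORKSPACE_ID',
    input={'text': 'Show me product X for women in red.'}).get_result()

# A trained workspace would return something like an intent "show_product"
# plus entities such as @color:red and @audience:women
print(response['intents'], response['entities'])
```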

Long-tail of a chatbot

Now we are starting to touch AI (or augmented intelligence), and the chatbot might provide more value than just being a more productive interface. The reason is that long-tail chatbots can answer questions that are not common, questions that might be buried deep in all the unstructured data we have (80% of all data in the world is unstructured) and that are usually impossible to find, since up until now our search features have not been able to understand, reason and learn knowledge in specific domains. Today that is possible, and that is what we tend to call AI.

This is a chatbot that actually tells us things we do not already know.

A short-tail chatbot only makes a process a bit more effective and streamlined in a simple user interface. A long-tail chatbot actually provides real knowledge and makes it available on-the-glass for us.

A long-tail chatbot takes much longer to implement, given that we have to train the bot on the domain it is going to work in. This is done with subject-matter experts. Often a new ML model is needed for the bot to fully grasp the domain and be able to understand, learn and reason. The ML model is often also needed for the bot to understand the more detailed and niche questions that might be asked, since we still need the bot to understand the meaning and intention of what the user is asking, and long-tail bots are usually applied in a narrow field, with depth in that narrow field.

Human-tail of a chatbot

Remember the last call you had with a call center? As soon as a question is not solved quickly, you tend to end up in one of two scenarios: either you get angry, or you are transferred to the manager (or you are informed that this is above the operator's pay grade and they need to talk to the manager). Let's put this in the bot scenario.

  1. You get angry!
    A bot can today sense emotions and notice that you are either using bad language (which we tend to use more frequently with a bot than on the phone) or simply starting to show frustration and irritation.
  2. The question requires manager assistance
    At a certain stage the bot might be given a question that is simply above its authority. What should the bot do?

In both of the above cases, it is hard to train a bot to act appropriately, since emotions are very hard to communicate in a chat, and even harder if you are a bot.

This is where the human-tail comes in. Human-tail is simply when a bot senses that it can no longer manage the conversation with a positive outcome; it is time to hand it over to a human. Some tasks are simply better suited for humans (still).

Natural human-tail scenarios need to be implemented in the bot as well. This can be done by alerting a human to take over the discussion and, when the issue is solved, handing it back to the bot. The human can see the entire conversation as well as the emotions and all the different products, agreements and other details that have been either collected or pulled from internal sources. Another scenario is that for certain topics you get the option to be transferred to a human instead of the bot, by choice of the user, not automatically.
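
A minimal sketch of such a handover rule, assuming the bot has access to an emotion score (e.g. from Watson NLU) and its own intent confidence; the thresholds and names are illustrative assumptions, not a prescription.

```python
def should_hand_over(anger_score: float, intent_confidence: float,
                     user_requested_human: bool) -> bool:
    """Decide whether to route the conversation to a human agent."""
    if user_requested_human:          # the user explicitly asked for a person
        return True
    if anger_score > 0.6:             # frustration is building up
        return True
    if intent_confidence < 0.4:       # the bot no longer understands the request
        return True
    return False

# The human agent gets the full transcript plus the collected details,
# and can hand the conversation back to the bot once the issue is solved.
print(should_hand_over(anger_score=0.7, intent_confidence=0.9, user_requested_human=False))  # True
```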

Personally, I think the human-tail is equally important for building a great bot, from an end-user perspective.

Augmented Intelligence

I have written about augmented intelligence many times, but most AI and cognitive solutions are implemented to complement and elevate humans, not replace them. Therefore I like the term Augmented Intelligence better than Artificial Intelligence, which often insinuates that AI is replacing humans.

In the above three, this is very clear.

  • Short-tail
    The bot simply removes the easy to solve scenarios from our lives and lets us focus on the scenarios that require more cognition.
  • Long-tail
    We, as humans, cannot remember everything and cannot learn everything. In the long-tail scenario, the bot helps us with the things we do not know or have simply forgotten about, elevating us as humans.
  • Human-tail
    The bot acts as first-line support, a meat wall (or a computer wall) to put it bluntly. We only get the calls / chats that specifically require human capabilities. We are still better suited to managing situations where emotions play a large role, or to calming an angry person down. We also tend to be better at making an angry person happy again, by explaining etc; a bot can occasionally be a bit rigid when it comes to what is right and wrong.

Photo taken August 2017 by me, in Vägerödsdalar on Skaftö, Sweden. It shows a direction pole for Bohusleden near our summer house.

20 years ago, IBM's Deep Blue beat Kasparov in chess

https://youtu.be/wdns2fVUEeE

On the 11th of May 1997, IBM's supercomputer Deep Blue beat the reigning world champion, Garry Kasparov, in chess. IBM Research released a little recap of the event and what has happened over the years up until today and Watson, which was launched to fame in a similar manner by winning Jeopardy in 2011.

Deep Blue actually lost the first match in 1996, 4-2 in favor of Garry Kasparov, but in the re-match in 1997, Deep Blue won.


Watson Language and Price Update

Since publishing the posts on what languages Watson supports and how much Watson actually costs, those two posts have generated by far the most visits on this blog. Since Watson is constantly updated, I thought it was time to update them: the Watson language post is from December 2015 and the Watson price / cost post is from August 2016.

In this post I will just point out some major updates and differences; the complete tables of languages, prices etc. are in the respective original posts.

New, discontinued and merged Watson APIs


Today there are 13 APIs available, with a lot of merging happening. Well, there are actually 14 listed today, but Tradeoff Analytics is already discontinued, so 13 is the correct number. Just recently a few APIs have either been merged or discontinued. Dialog is now only available through Conversation (no more XML horror), Alchemy is fully integrated, and all the visual / image APIs are merged into one. I like this change, even though it is kind of the opposite of what IBM told us a year ago when they stated there would be 50 APIs released. To be honest, it is a lot easier to work with 14 than 50, so it is great to see this merge happening. It might naturally lead to the notion that you pay for more features per API than you actually need, but overall IBM has lowered the prices, so that is not currently a risk. I only found one service where things had changed in a negative way, and then it was only the free tier for Language Translation, which has decreased from 1.000.000 free characters to 250.000.

Update: What does Watson cost?

Below are some notable changes to the Watson pricing.

Natural Language Understanding: Compared to Alchemy Language, the entry level has decreased from $0.007 to $0.003 per call, which is a significant decrease in price. Secondly, customized models have decreased from $3500 to $800, also a price decrease. Otherwise a very similar structure.

Conversation: A price decrease as well, from $0.0089 per call to $0.0025 per call (a quick example of what that means in practice follows after this rundown).

Language Translation: Primarily a 75% decrease in free translations, from 1.000.000 to 250.000 characters. The only service that is updated in a negative way.

Visual Recognition: More than 50% reduction in price for Custom Classifier Training per image. This is great since that is a key feature in Visual Recognition that no one else is offering. IBM also removed the fee for storing the custom model.

Discovery News: This is the old Alchemy News. The fee model is now integrated into the Discovery service instead of, as before, the Alchemy Language service.

Discovery: A new search engine service; I have updated the table with the pricing for this service.

No change to the rest.
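To put the Conversation price change in perspective, here is a quick back-of-the-envelope calculation; the monthly call volume is just a made-up example.

```python
# Back-of-the-envelope impact of the Conversation price change (example volume).
calls_per_month = 100_000
old_cost = calls_per_month * 0.0089   # $890
new_cost = calls_per_month * 0.0025   # $250
print(f"old: ${old_cost:,.0f}, new: ${new_cost:,.0f}, saving: {1 - new_cost / old_cost:.0%}")
```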

Head over to the updated post “What does Watson cost? What is the price?” to see the updates.

Update: What languages does Watson support?

Unfortunately there are not as many updates as one would have hoped for during the 1.5 years that have passed since my initial post on the topic, but there are still some changes. First, the documentation is now a lot better and most services have a "supported languages" section available; not all, but most. I assume the merging of some services has enforced some structuring of both the APIs and the documentation, which is very notable in the Natural Language Understanding documentation. Previously it was scattered all over and documented in so many places it was hard to keep track; now it is all displayed in a nice table (which is included in my post as well). Outside of that, there are just a few languages added to the APIs.

In the table, I have tried my best to provide accurate links as well, so it is easier to find updates on the languages and to read more if needed.

Now, head over to the updated post “What languages does Watson support” to see the updates.


Commercial use of Chef Watson with Watson Ads

Is Chef Watson included in IBM's new Watson Ads product? Chef Watson is the most publicly talked about Watson use-case, after Jeopardy I assume. Chef Watson takes a scientific approach to cooking and creates dishes that should fit together on a molecular level as well as a cognitive level. It creates some pretty interesting recipes that nonetheless mostly taste really good (yes, I have tasted dishes made by Chef Watson). But where did Chef Watson go?

Can I buy Chef Watson? Can I use an API and consume the brilliance of Chef Watson? Or was it just a gimmick to be used as marketing for IBM?

Well, none of the above, it seems, even though the API option might be close and marketing naturally plays a part as well. But no, none of them seems fully correct.

Nonetheless, Chef Watson is still available for us all to play with at IBMChefWatson.com.

Is Chef Watson commercially available?

The initial and most interesting question must be: can a company pay for access to Chef Watson and integrate its capabilities into their business applications?

The answer seems to be NO. I have been involved in discussions where companies (large global ones) have tried to acquire access but been denied by IBM. The reason given has been that the Chef Watson team has been focused on the Watson Ads initiative. Watson Ads?

What does Chef Watson have to do with Watson Ads... and more importantly, what is Watson Ads? Does IBM nowadays produce ads or ad-tech?

What is Watson Ads?


An example is Campbell Soup, an early advertiser using Watson Ads. If you visit a site with a Watson Ad from Campbell, you can start to chat and ask about recipes etc. Naturally, the answers will be recipes based on Campbell's products. You can play with the ads on watsonads.com. As an ad product it is actually pretty cool, and I hope many companies start to use this format instead of dumb banners; these ads are both in context and have a way higher level of engagement, which probably leads to better conversions.

Watson Ads seems to be a product brought to life by the Weather Company (acquired by IBM a while ago and now part of Watson). The product is an ad format that acts like a chatbot. The chatbot listens to your questions and replies contextually, suggesting ways to consume the products of the company whose ad you are chatting with.
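Purely as an illustration of the idea (and not of how Watson Ads is actually built), a conversational ad is essentially a tiny chatbot scoped to one brand's products. The product list and the matching below are completely made up.

```python
# Toy illustration of a conversational ad: a tiny chatbot scoped to one brand's
# products, answering recipe questions with suggestions that feature those products.
# The product list and matching are made up; this is not how Watson Ads is built.
PRODUCTS = {
    "tomato soup": "Try a quick tomato soup risotto: simmer rice in the soup for 20 minutes.",
    "chicken broth": "Use the broth as a base for a weeknight ramen with egg and spring onion.",
}

def ad_reply(question: str) -> str:
    q = question.lower()
    for product, suggestion in PRODUCTS.items():
        if any(word in q for word in product.split()):
            return suggestion
    return "Tell me what ingredients you have at home and I will suggest a recipe."

print(ad_reply("What can I cook with tomato soup tonight?"))
```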

So, finally, I have understood what I believe is the reason for the Chef Watson team being involved with the Watson Ads product.

I want Chef Watson APIs

I am not entirely sure that ad-tech is a great fit for IBM, and I am not entirely convinced that directing the Chef Watson team towards an ad product is the best use of those brilliant people. But now that the Watson Ads product is out there, they might go back to providing the capabilities of Chef Watson to others. Hopefully, the components of Chef Watson can become part of the Watson APIs, just like the capabilities from the Weather Company and the other Watson APIs that are already available. I would love to do some interesting things with a Chef Watson API.
