Transparency will be the key to success in the era of artificial intelligence and cognitive computing. Every new technology brings new challenges, and in AI and cognitive computing one of the main challenges is how to manage privacy and integrity.
Artificial intelligence and cognitive products are currently the hot, trendy topic, and the segment also seems to be the new hot area for startups.
Disclaimer: The company I have co-founded, Monies, is one of them. Our goal is to unify your financial life and bring meaning and consciousness to it. That puts us squarely in the eye of the needle of this post, and transparency will be one of a few keys to our success.
Since these systems and technologies are largely about emotions, about learning how we as humans work, and about learning, reasoning and holding a conversation based on the input you provide, it is essential that we trust the technology and know with certainty that our information is secure, not shared, and private to us alone. Already at this stage there are many risks, and many will fail.
In my opinion, we will also need firm assurance that our data is not mixed with other people's or organizations' data. This is important since most AI / cognitive platforms are… platforms. That means all data goes into "one" place rather than onto your own dedicated instance or server. Most platforms, whether from Facebook, Google, Amazon or IBM, are physically based in huge data centers, and all data is physically stored in the same environment as other individuals' or organizations' data. For these organizations to build the necessary trust that our data is secure and private to us or our organization, transparency will be of utmost importance.
If we push the topic, it is the same as when you share information with another human: you need to trust that individual before sharing sensitive information, and these technologies replicate a lot of our human behavior. Would you share your private information if you knew the other person would immediately pass it on to the rest of their company, or even to other companies they do business with? I would not think so.
Even though machines replicate a lot of our behavior today, some things are hard to establish, like trust. Enter the revenge of transparency. Transparency has long been pushed down the value chain in favor of capitalizing on our private data by companies like Facebook and Google, whose business model is to re-sell your data to companies that want to reach you with their message. Their business model is to sell your data; you are the product.
This will not compute in the era of AI and cognitive computing. We will, in my opinion, be much more restrictive in what we share about ourselves if we do not feel safe and secure.
AI and cognitive technologies are already here to stay, so this is not a post against them. On the contrary, I believe strongly in them, so much so that I have started a company in the segment.
We and everyone else in the segment need to be fully transparent about the following:
- How we store the data
- How we keep it safe
- That personal information is not shared and remains truly personal
- The reasoning behind the answers we provide
- The evidence that backs those answers
Only then can artificial intelligence and cognitive technologies become as successful as they are expected to be.
I will end this post with a quote from Rob High, IBM Fellow and CTO of IBM Watson, from a recent Mashable article on the topic:
> They’re subject to the human condition — that is, all of the forms of expression that we leverage to communicate our thoughts, ideas and knowledge, and all of the experiences that we’re exposed to that shapes those thoughts — cognitive systems don’t behave like other deterministic (mathematically modeled) computing systems. They are subject to the same ambiguities, nuances, subtleties and lack of universal truth that we as humans are subject to. They, like other human experts, are only really held up as an expert when we develop trust in them. Cognitive systems, like other human experts, have to establish that trust by being transparent about why they believe what they believe — answer what they answer. And in doing so, they will reveal whether they are acting nefariously or not.
Photo: The top photo was taken from a train entering Stockholm on a rainy winter day in December. I thought the wet window could symbolize some transparency.