Why We Need Clear Boundaries and Guidelines for AI

Next Visions
#NextLevelGermanEngineering
4 min read · Jul 16, 2020


Artificial intelligence is more present in our lives than ever before and powers numerous services we use, including voice assistants, chatbots, and video-streaming recommendation systems. But can we really trust these applications? In the Next Visions Podcast, experts on AI and ethics discuss the need for clear guidelines for emerging technologies.

Photo by Robynne Hu on Unsplash

AI has not come off well in recent news. Companies like IBM and Microsoft just announced that they will stop selling facial recognition technology, one area of AI, with immediate effect. The real-life implications can be devastating: while the technology may be well trained to identify white faces, it often fails to distinguish Black faces. When used by law enforcement, this could lead to false accusations against People of Color.

Critical press coverage, a lack of transparency, and unethical business practices have led to distrust of emerging technologies around the world. The Edelman Trust Barometer reported that trust in tech is down globally by four percent compared to the previous year. 61% of respondents said that the pace of technological change is too fast and that their governments do not understand new technologies well enough to regulate them effectively. Yet even though trust in tech is declining, technology is more present in our lives than ever before: more than five billion people worldwide own mobile devices, and we are surrounded by technology in our daily lives.

How can we regain trust in technology?

Robots and algorithms are already deeply integrated into our lives, yet we still need to figure out how to make them more trustworthy. That’s one of the many matters we discussed in season one of our Next Visions Podcast, where we invited visionary thinkers and makers to talk about how they imagine the future and what challenges they anticipate. Several of our guests work on the implications of advancing AI technologies, including author John C. Havens, creative director Florian Schmitt, Sophie Kleber, Head of Spaces UX at Google, and Maria Kolitsida, founder of the AI company winningminds.ai.

In this episode, John C. Havens and Florian Schmitt talk about ethics in AI

All four of them agree that we need guidelines for AI in order to regain trust and promote technological literacy. Voice assistants are one example of how opaque AI-powered services can be. These programs have been criticized for several reasons. For instance, the conversations users have with their virtual assistants are recorded and analyzed in order to improve the systems further, often without users knowing or actively agreeing to it.

“The data is already being collected, and we have not been explicitly asked whether we’re ok with that” — Sophie Kleber, Head of Spaces UX at Google

For Maria, who co-founded an AI startup, ethical and thoughtful principles should always be the basis when working with intelligent technology. For her, the responsibility lies with the engineer and with the user: “At the end of the day, you have binary systems: 0 and 1. Whatever you [the engineer] do with them is really up to you. And whatever the recipient will do with them is really up to them.” In her view, we tend to put the fault on the machines instead of blaming the people who developed them. Since machine learning programs are fed with existing data, AI tends to be biased, as Sophie explains: “AI is racist and sexist because the only thing it learns from is the past and the society that we have created up until now, I think we all agree, is far from perfect.”

Maria Kolitsida and Sophie Kleber talk about the emotional intelligence of machines

Artificial intelligence under criticism

Faulty data is not the only criticism of AI. In the discussion between John and Florian, Florian brings up the example of chatbots, which appear on many websites these days. He explains: “A lot of times you’re not exactly sure anymore if you’re talking to a chatbot or if you’re talking to a real person.” In research he conducted, most participants said they want to be informed whether they are talking to a real person or to an AI.

The implementation of new standards would be one way to regain consumer trust. To drive this forward, John serves as an Executive Director at the Institute of Electrical and Electronics Engineers (IEEE), leading an initiative whose mission is to integrate ethics into the design and development of autonomous systems. The initiative deals not only with complex philosophical questions but also with consumer-directed ones like: “Is it safe to bring this robot into my home?” It has also developed general principles for creating autonomous and intelligent systems, which include the protection of human rights, data agency, transparency, and awareness of misuse.

Building trustworthy products

Building trust is not an easy task, but it’s essential if we want to further develop our digital infrastructure. Even though all of us have to use digital products carefully, the responsibility lies primarily with the companies that handle vast amounts of data and develop the tools. Many companies are already working on their own AI guidelines while at the same time calling on governments to establish rules that provide a level playing field.

When implemented with care, tools like AI can be used to enhance trust instead of arousing suspicion. Implementing higher standards for technology is a Herculean task, but winning back lost trust is even harder.

Christian Knörle and Tim Leberecht host the Next Visions Podcast
The Next Visions Podcast is brought to you by Porsche in collaboration with the House of Beautiful Business. The hosts are Christian Knörle (Porsche Digital) and Tim Leberecht (HoBB).

You can find the full conversations of Maria and Sophie, and of John and Florian, on all audio streaming platforms, including Spotify, Google Podcasts, Apple Podcasts, and Deezer.

