You may remember a very curious case that spread across the Internet several years ago, when rumors arose in the United States that Amazon was working with the CIA and that Alexa was spying on American users. Users argued that Alexa's lights would suddenly turn on in a strange way for a couple of seconds, which supposedly indicated that it was recording conversations without anyone noticing.
To support their theory, users asked Alexa whether she was connected to the CIA, to which the Amazon assistant did not respond at all; in fact, she did not even respond with the classic “I don’t know that.” Many interpreted this as a way of admitting that Alexa did spy on users, since, in their understanding, the Amazon assistant could not lie because of Asimov's famous laws of robotics, which they assumed also apply to artificial intelligence products such as smart assistants.
Obviously, this situation generated a lot of concern for some people and was genuinely funny for others, because the fact that Alexa could not answer did not mean she was a spy; it simply showed that she was an artificial intelligence that still needed to keep learning and improving. In fact, although Alexa, Google Assistant and Siri have improved remarkably over the past three years, they still have enormous room for improvement before they become the assistants that science fiction movies have shown us.
So, can smart assistants lie?
To answer this question, we reached out to the experts and managers behind Alexa and Google Assistant in Mexico, two of the most widely used smart assistants on both mobile devices and home products.
The first question we asked was whether Alexa and Google Assistant can really lie. Their answers were as follows:
At Amazon, customer trust is at the core of everything we do and Alexa is designed to help and provide accurate information to customers. It is very important for us to build experiences that provide timely, relevant and accurate information to customers. We want Alexa to be an objective consultation tool for them, so we integrate various credible sources of information. Alexa gets its information from a variety of trusted sources selected by us, such as Wikipedia and many others.
We can see the Google Assistant as a new way to assist people and interact with Google products through a conversation that allows them to solve problems and find information. For information that comes from Google Search, each year we publish hundreds of improvements to our algorithms to ensure that they display high-quality content in response to user inquiries. While there is no magic bullet for solving the problem of incorrect information, we have taken a number of actions to combat misinformation, unrepresentative content, and models trying to mislead users.
The short answer is … no. Neither Alexa nor Google Assistant nor other assistants like Siri can lie, which is not the same as saying they cannot give incorrect information. We cannot consider an assistant's response a lie, because it is offering an answer taken from somewhere; it is not giving false information on purpose. In fact, that is why assistants often cite the source they are getting the data from before answering a question.
This means that what can happen is that they give wrong or false information, but that is because the source they are taking it from is wrong; in fact, many of the questions we ask are answered directly from Wikipedia.
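The behavior described above can be sketched in a few lines: an assistant that only answers from a curated table of trusted sources, cites where the answer came from, and otherwise admits it does not know rather than inventing anything. This is a minimal illustrative sketch; the data and function names are hypothetical assumptions, not any vendor's actual code.

```python
# Minimal sketch of a source-backed Q&A lookup (hypothetical, for illustration).
TRUSTED_SOURCES = {
    # normalized question -> (answer text, source it was taken from)
    "how high is mount everest": ("8,849 meters", "Wikipedia"),
}

def answer(question: str) -> str:
    key = question.lower().strip().rstrip("?")
    if key in TRUSTED_SOURCES:
        text, source = TRUSTED_SOURCES[key]
        # The assistant cites its source: it repeats what the source says,
        # right or wrong, rather than fabricating an answer on purpose.
        return f"According to {source}, {text}."
    # No trusted source available: admit ignorance instead of guessing.
    return "Sorry, I don't know that."

print(answer("How high is Mount Everest?"))
print(answer("Are you connected to the CIA?"))
```

Note that in this sketch a wrong answer can only come from a wrong entry in the source table, which mirrors the point above: the assistant passes along its source's mistake, it does not lie.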
This led us to another question about the assistants, namely …
Does anyone check the assistants' responses and verify that the information is real?
In the case of Amazon, they told us the following:
The selection of information depends on several factors, including the type of question being asked and whether the customer is interacting with a third-party skill. Amazon does not provide the content for third-party skills. When a customer asks a straightforward fact-based question (“Alexa, how high is Mount Everest?”), we will typically rely on our repository of facts and sources like Wikipedia.
On the other hand, Google says the following:
We focus on providing access to reference sources that prioritize the quality of information and that can be trusted.
Our experience has shown us that to combat misinformation we must partner with qualified sources. For example, during the COVID-19 health emergency, the WHO and the ministries of health have been fundamental partners in offering quality, authoritative information. We recognize that this is not enough, which is why we provide support through different initiatives. For example, through the Google News Initiative, we created the Emergency Relief Fund for Journalism, aimed at small and medium-sized media in Latin America. With this fund of 10 million USD we were able to support more than 1,050 media outlets throughout Latin America. We also announced funds to combat misinformation for fact-checkers and non-profit organizations.
As a company whose mission is focused on making information universally accessible, we share a common cause and are committed to helping journalism not only during these unprecedented times, but also to helping it prosper beyond the crisis. Together with other companies, governments and civil society groups, we are helping to make a better future possible for news and information in general.
In summary, there is no team listening all the time to what virtual assistants say. In fact, on more than one occasion, assistants taking information directly from Wikipedia have said very funny and incorrect things, such as what happened a few months ago with Siri when asked “who was the poop”.
When it is quickly identified that the information being taken from Wikipedia is incorrect, it is promptly corrected at the source (as in Siri's case); otherwise, that source will probably be disabled so that the assistant stops taking information from it.