Amazon Alexa Skills have glaring security lapses and poorly vetted sourcing of information for questions asked of the smart assistant, alleges new report

Is Amazon Alexa Skills too liberal about its sourcing of info and vetting process? Pic credit: Amazon

Amazon Alexa, the smart, always-on, voice-activated virtual assistant, is littered with serious security lapses, claims a new report. Determined, malicious developers can apparently take advantage of lax security and scrutiny policies. They can also harvest large amounts of data by masquerading as big tech corporations within Amazon Alexa Skills, the report alleges.

Malicious code writers can allegedly trick Amazon Alexa, the highly popular virtual assistant. The serious claim comes from a new report, the result of a joint collaboration between Germany’s Ruhr-University Bochum, North Carolina State, and an independent researcher.

Amazon Alexa Skills can source data from almost anywhere on an already-activated platform:

After analyzing 90,194 Alexa skills, academics at Germany’s Ruhr-University Bochum, together with colleagues from North Carolina State and an independent researcher, raised serious concerns about Amazon Alexa Skills.

Dr. Martin Degeling, who was part of the research team, stated: “A first problem is that Amazon has partially activated skills automatically since 2017. Previously, users had to agree to the use of each skill. Now they hardly have an overview of where the answer Alexa gives them comes from and who programmed it in the first place.”

Simply put, Amazon Alexa Skills can pull the information users seek from virtually anywhere. Apparently, Amazon hasn’t implemented strict authentication and authorization for processing the information that users mention or request in their questions.

The second glaring security loophole is impersonation. In Amazon Alexa Skills’ case, this means assuming the identity of a large tech company.

“When a skill is published in the skill store, it also displays the developer’s name. We found that developers can register themselves with any company name when creating their developer’s account with Amazon. This makes it easy for an attacker to impersonate any well-known manufacturer or service provider.”

Notably, the researchers tested their claims. They report having successfully “published skills in the name of a large company,” prompting one of the researchers to comment, “valuable information from users can be tapped here.”

Amazon is aware of possible security oversights and is addressing them:

The authors of the revealing report have reportedly approached Amazon and presented their findings. One of the researchers noted, “Amazon has confirmed some of the problems to the research team and says it is working on countermeasures.”

Amazon will likely amend some or all of its security policies pertaining to Alexa Skills. In the meantime, owners and users of Amazon Alexa-enabled devices, most notably the Amazon Echo series, should exercise caution.

It seems Amazon would prefer that users monitor, change, or manage their own permissions. However, doing so correctly requires users to read manuals and user guides extensively, which is often not the case.

Incidentally, the academics reportedly tried to upload 234 policy-breaking Alexa skills last year. Surprisingly, Amazon’s review process approved all of them.

Many buyers eagerly procure or activate Amazon Alexa, Google Assistant, and a few other always-on platforms. However, these platforms come with their own set of security challenges, especially when they host solutions provided by third-party developers.
