Giraffe Security

Hat Trick: AWS introduced the same RCE vulnerability three times in four years

December 29, 2024

Background

Almost three years ago, in April 2022, Giraffe Security discovered a security vulnerability in Amazon’s AWS Neuron SDK, a set of Python libraries for running machine learning workloads on specialized hardware in AWS. The issue was not in the libraries themselves, but rather in how Amazon instructs users to install them. In many tutorials, and even in the official documentation[1], Amazon tells users to install its packages with a command like this:

pip install transformers-neuronx --extra-index-url=https://pip.repos.neuron.amazonaws.com

Now, this command may look innocent: we want pip to install the package “transformers-neuronx”, and we want it downloaded from the PyPi index hosted at https://pip.repos.neuron.amazonaws.com instead of the default index. But that is not exactly what pip does, and the difference leads to a security vulnerability.

The Bug

As we have previously written on our blog (Remote Code Execution Vulnerability in Google They Are Not Willing To Fix), Python’s pip package manager has some quirks that make it difficult to install packages from private package registries correctly. Very often, people discover the “extra-index-url” parameter and use it when they want to install a package from another package registry.

This parameter works, but it has a corner case that must be considered: it does not guarantee that the package is downloaded from the specified index. pip collects candidates from every configured index, including the default PyPi registry, and installs whichever has the highest version. This is a problem if you distribute packages that exist only on your private index and not on the default PyPi index, because a malicious actor could upload a package with the same name (and a higher version number) to the default index, making pip download the malicious package instead. This can lead to remote code execution when the malicious package is installed or imported.
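The selection behaviour described above can be modelled in a few lines. This is only a sketch of the relevant rule, not pip’s actual resolver code, and the version numbers are made up for illustration:

```python
def pick_candidate(candidates):
    """Model of pip's behaviour with --extra-index-url: candidates from
    ALL configured indexes are pooled, and the highest version wins,
    regardless of which index it came from."""
    return max(candidates, key=lambda c: c[0])

# The legitimate package on the private index ...
private = ((1, 2, 0), "https://pip.repos.neuron.amazonaws.com")
# ... and an attacker's same-named package, with a higher version, on PyPi.
malicious = ((99, 0, 0), "https://pypi.org/simple")

winner = pick_candidate([private, malicious])
print(winner[1])  # the attacker's package from the public index is chosen
```

This is exactly why attackers publish packages with inflated version numbers: as long as the public copy outranks the private one, pip prefers it.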

This exact issue is what we discovered back in 2022: Amazon was uploading its packages to https://pip.repos.neuron.amazonaws.com, but did not upload the same packages to PyPi, leaving these package names up for grabs by anyone on the Internet. We ended up claiming a few of them (“mx-neuron”, for example) and reported the issue through Amazon’s bug bounty program. Amazon then promptly fixed the problem by uploading dummy packages to PyPi under the same names, preventing malicious users from claiming them.

Repeating the same mistake?

As I did my research into this topic in 2022, I noticed that I had not been the first to spot this problem: some of the Neuron SDK packages on PyPi had an interesting version history. Below is an image from libraries.io, a service that scrapes data about open source packages on the Internet. Although no longer publicly visible on PyPi, libraries.io shows that in 2020 a large set of versions of the package “torch-neuron” (one of the libraries in the Neuron SDK) was published. As these versions were uploaded at the same time and with such a variety of version numbers, it leads me to believe that another security researcher stumbled upon the same problem we did, and uploaded all these versions to prove that their version from PyPi would be downloaded instead when installing the package.

torch-neuron releases

Based on this information, I believe Amazon was already aware of this problem in 2020. Despite that, they did not fix it at its source, nor did they set up a process to ensure that any new packages are also claimed on the default PyPi index, given that I was able to claim “mx-neuron” and a few other libraries about two years later.

I expected that after my report they would close this hole for good, either by getting rid of the flawed install instructions or at least by setting up a process to claim any new package names on PyPi.

Hat Trick

Now, in December 2024, I happened to visit Amazon’s Neuron SDK private package index again and noticed that they had expanded their set of packages quite a bit, introducing many new ones. Out of curiosity, I checked whether they had claimed all these new packages on PyPi, and stumbled upon several that were still unclaimed, which I was able to register under my own PyPi account. This means that more than two years after my 2022 report, they have still not addressed the problem by removing “extra-index-url” from their documentation, and they still do not have a foolproof process for claiming package names on PyPi.
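Such a process could be as simple as a script that checks every private-index package name against PyPi before release. A minimal sketch of that check, using PyPi’s public JSON API (the `fetch` parameter is an illustrative hook for testing, not part of any AWS tooling):

```python
from urllib import request, error

def is_claimed_on_pypi(name, fetch=None):
    """Return True if `name` is registered on the public PyPi index.

    PyPi's JSON API (https://pypi.org/pypi/<name>/json) answers 200 for
    registered names and 404 for unclaimed ones. `fetch` can be swapped
    out for testing or rate-limited batch runs.
    """
    if fetch is None:
        def fetch(url):
            try:
                with request.urlopen(url, timeout=10) as resp:
                    return resp.status
            except error.HTTPError as exc:
                return exc.code
    return fetch(f"https://pypi.org/pypi/{name}/json") == 200
```

Run over the list of names served by the private index, any name for which this returns False is open to exactly the dependency confusion described here.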

Conclusion

To recap, Amazon has introduced the same dependency confusion issue on at least three separate occasions when adding new packages to the Neuron SDK registry. They have been notified at least twice, but they have yet to come up with a permanent fix.

While it is impossible to guard against every attack, it is remarkable to see Amazon repeat the same mistake over and over again here. Large tech companies like Amazon usually take security seriously, so I wonder what exactly happened that they dropped the ball three times in a row. When I initially reported the issue, they took action immediately, which suggests they considered it a serious problem.

On the other hand, the Neuron SDK team could be treating this issue as a “configuration mistake made by the customer”: technically, a knowledgeable AWS customer should know about the dangers of “extra-index-url” and should install the Neuron SDK some other way, for example by using the “index-url” parameter instead, or by using a package manager such as Poetry that handles multiple indexes more safely. From this perspective, they may not consider it a serious problem at all: documentation and tutorials are just guidelines, and the customer is responsible for keeping their application secure, no matter what Amazon’s documentation says.
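A sketch of that safer configuration, as a command and an equivalent pip.conf fragment. Note the assumption: with “index-url”, pip consults only the private index, so every dependency must be available there, which is not guaranteed:

```shell
# Use --index-url so pip queries ONLY the private index; unlike
# --extra-index-url, the public PyPi is never consulted, so a
# same-named public package cannot be substituted.
pip install transformers-neuronx --index-url=https://pip.repos.neuron.amazonaws.com

# The same setting can be persisted in pip.conf (~/.config/pip/pip.conf):
# [global]
# index-url = https://pip.repos.neuron.amazonaws.com
```

The trade-off is availability: any dependency that lives only on the public PyPi will fail to install under this configuration.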

I think the lesson from this story is not to assume that code on the Internet is secure, even when it comes from a reputable source like AWS documentation. Always understand what a command actually does before using it in production.

As usual, if you have any comments about this article, contact us at [email protected] or DM us @GiraffeSecurity. GiraffeSecurity hopes that its readers had a great 2024 and wishes you all an even better 2025!

GiraffeSecurity has reached out to Amazon for a comment but has not yet received a response.

Happy New Year

  1. To list a few examples: a blog post from November 2024 on Amazon’s official blog, as well as the official documentation (the “tensorflow-neuron (inf1)” tab).