Tricks To Navigate Supply Chain Attacks

Fig. 1. The Chain
Source: Adapted from [1]

In the last few weeks we’ve witnessed a few notable supply chain attacks, the most prominent being LiteLLM and, more recently, Axios. I had to halt development and testing of the SCAIRE project due to the recent LiteLLM library attack. While my local testing environment was safe thanks to the quick action of the repository’s developers, the attack prevented me from testing the project with other sysadmins, stalling progress.

LiteLLM claims the issues are resolved, but since this isn’t a production tool, I’m going to wait a few more days while the dust settles in case any further minor changes are needed.

You can read more about that here: LiteLLM Supply Chain Updates 🔗.

Some Grounding

The world hasn’t significantly changed in terms of how systems are exploited. Most modern attacks follow the same overall themes they always have: a threat actor finds some way to get malicious code executed on a victim’s system, by having the system or its user run that code with privileges.

What differs in every case is the method of delivery. In supply chain cases, the first attack vector is often the developer’s own computer, previously compromised in a way that exposes their trusted keys to places like GitHub repositories. This hands an attacker the keys to the kingdom for that particular application: they can manipulate the source of development and do whatever they want.

What’s frustrating about supply chain attacks is that anyone else working from that supply and updating these packages is exploited as well. Developers may use hundreds of dependencies within their projects, and any one of them could come from a compromised developer’s repository that has given up its keys to the kingdom without anyone noticing for a period of time. That window could be minutes, hours, days, or even weeks, depending on the activity of the project, the repercussions of the exploit, or anything else that causes others to look a little more closely.

Upgrading is Necessary Though

Ideally, we want to do our best to prepare for, stay aware of, and avoid supply chain attacks where possible. When considering upgrades, we should upgrade with intent. I always ask myself a few questions before upgrading.

It’s worth mentioning that if you’re following a framework with a more bureaucratic process, the decision making in these situations is formal. But if you’re on your own? Here are some things to consider.

I am upgrading to:

  • Fix a known CVE.
  • Add features that are necessary for operation.
  • Reach version compliance for necessary audits.
  • Stay up to date so future upgrades are less problematic.
  • Solve a bug that has had some form of impact.

Being able to articulate documented reasons why an upgrade was necessary, beyond “the releases page had a new version bump”, shows we are thinking through our process with intent.

Where to Start

Supply chain attacks are sophisticated and require working backwards through layers, where you undoubtedly end up at a developer’s exploited MacBook somewhere. We aren’t alone, though: security researchers, tech news outlets, developers, and just about everyone else has a vested interest in understanding how to remediate these situations. But we can be better prepared than just waiting for the latest AI-boosted rag to bury the hyperlink we need to see the source’s response or remediation steps.

mindmap
  root((Supply Chain Defense))
    Information Sources
      RSS Feeds
      Social Media
      Security Advisories
    Technical Practices
      Version Pinning
      Package Managers
      SIEM Integration
    Process
      Change Management
      Documentation
      Team Communication
    Mindset
      Intentional Upgrades
      Preparation
      Self-respect for process

Use Multiple Trusted Sources for Information

In the first minutes and hours, make sure you are using trusted sources of information. Since many supply chain attacks also compromise the repository itself, information there may be manipulated by threat actors. It can take time for developers to regain access to their own repositories, so it’s important to have multiple trusted sources of information available.

Learn the Package / Dependency Managers Developers Use

Whether it’s npm, Yarn, PyPI, Docker, APT, YUM, pacman, Portage, or winget, the list goes on, and yes, it’s a tall order. The good news is that most package managers follow principles that become familiar over time. Each new package manager you learn makes the next one easier.

How this prepares us is by letting us recognize and use good security practices ourselves, such as version pinning: fixing a dependency at a specific version between deliberate updates. It’s also great for deeper debugging and troubleshooting.
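As a minimal sketch of the idea in plain Python (the package names, versions, and the pinned-requirements text here are all made up for illustration), a small script can compare a pinned requirements file against what is actually installed and flag drift:

```python
# Hypothetical drift check: compare "name==version" pins against the
# versions actually installed, before (and after) an upgrade.
def check_pins(requirements_text, installed):
    """Return {name: (pinned, actual)} for every pin that doesn't match.

    requirements_text: contents of a pinned requirements file.
    installed: dict of {package_name: version} for the environment.
    """
    drift = {}
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or "==" not in line:
            continue
        name, pinned = (part.strip() for part in line.split("==", 1))
        actual = installed.get(name)
        if actual != pinned:
            drift[name] = (pinned, actual)
    return drift

# Illustrative data, not real releases:
pins = """
# pinned with intent: upgraded for a security fix
requests==2.32.3
litellm==1.61.0
"""
installed = {"requests": "2.32.3", "litellm": "1.58.2"}
print(check_pins(pins, installed))  # only the drifted package is reported
```

In a real project the `installed` mapping would come from the environment itself (for example, the output of `pip freeze`) rather than a literal dict.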

Staying Informed

One of the more valuable ways I spend my time in IT is staying informed about what’s going on in various spheres (including infosec), but we need to be efficient about this since there is too much information to observe every day. There are levels to staying informed that help us make good use of our time.

For example:

  • Level 1: You check all your feeds and glance at headlines at various points throughout the day. This is an ongoing habit and, I would say, a first line of defense that avoids a lot of security problems in the first place. Know what’s going on in the world.
  • Level 2: You are planning an upgrade and take extra time looking at repo discussions, the program’s website, and any news you missed in the headlines, further scrutinizing dependencies, and their dependencies where possible. You should be doing all of this to follow change management processes anyway. You’re doing that, right? Right?
  • Level 3: You are actively tracking a major issue affecting many systems to ensure yours are not affected.

In cases like supply chain poisoning, IT may not be able to tell you whether a particular package is in use in their systems. Only the most well-documented, change-managed, ITIL-Kool-Aid-drinking shops can: large corporations, banks, some government institutions (hopefully). Unfortunately, you won’t see that often enough out in the world, where IT is considered a cost centre more than a risk-reducing production amplifier. Where does that leave you? Somewhere in between, I imagine, as I have been at various points throughout my career.

Tools I Use Daily for Finding Fast Infosec Details

This is not an exhaustive list but it’s a good launch point:

  • RSS Feed Manager: A good RSS reader is a tool every sysadmin should be using. Yes, it’s old-school cool, but there’s no faster way to consume a lot of curated content at once. It’s still my favourite way to find out about software updates and security concerns, and to keep a beat on the world. Create categories and subscribe to as many security news sites as you can, such as Bleeping Computer 🔗, Dark Reading 🔗, and Google Workspace Updates 🔗, whatever is relevant and supports RSS. Keep your lists curated and check them often.
  • X: Not all social media is valuable, but sites like X can often be the first place you see a developer say “Something weird is going on with our project”.
  • Reddit: Set up a separate account that only holds a curated list of your IT communities. This helps reduce noise in the algorithm.
  • Lemmy & Mastodon: These are federated, Reddit-style communities that also support RSS. They have fewer users than X or Reddit, but the users they do have are mostly fed-up Reddit expats. How to find where the truly smart people have gone on the Internet is an entire other post I want to write one day; for now, just know there is a very exciting transformation happening with federated services, and brilliant people are there.
  • Alerts and Advisories Sites: Security advisory lists and searchable sites are excellent as well. Sites like Canada’s Alerts and Advisories 🔗 are well curated, often with exactly the quick points we need to understand before moving on, should something come up.
  • Security Dashboards: If you have access to security dashboards, these are great to review as well, though I would hope you also have notifications set up.
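To show how little machinery the RSS habit actually needs, here is a stdlib-only sketch that scans feed items for watch keywords. The feed content, keywords, and URLs are invented for the example; in practice you would fetch the XML from a real feed URL.

```python
# Minimal RSS keyword scan using only the standard library.
import xml.etree.ElementTree as ET

def headlines(rss_xml, keywords=("supply chain", "npm", "pypi")):
    """Return (title, link) pairs whose title matches a watch keyword."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if any(k in title.lower() for k in keywords):
            hits.append((title, link))
    return hits

# In practice: rss_xml = urllib.request.urlopen(feed_url).read()
# Fabricated feed content for illustration:
sample = """<rss version="2.0"><channel>
  <item><title>Supply chain attack hits popular package</title>
        <link>https://example.com/1</link></item>
  <item><title>Unrelated driver update</title>
        <link>https://example.com/2</link></item>
</channel></rss>"""
print(headlines(sample))  # only the matching headline is returned
```

A real reader adds persistence, categories, and deduplication, but the core of “many feeds, fast triage” is just this loop.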

Document the Upgrade Process

It is becoming more important to know exactly which versions of which packages and dependencies are on a given system at any time. We can typically use our SIEM systems to keep track; systems like Wazuh 🔗 can make quick work of having those answers once fully configured.
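When no SIEM answer is at hand, even a quick local inventory helps. A minimal sketch for a Python environment using only the standard library; the `affected` names are illustrative, not taken from a real advisory:

```python
# Quick local inventory: list installed Python distributions and their
# versions, then spot-check against names from an advisory.
from importlib.metadata import distributions

def inventory():
    """Return {package: version} for every installed distribution."""
    return {d.metadata["Name"]: d.version for d in distributions()}

inv = inventory()

# Illustrative advisory names; compare case-insensitively.
affected = {"litellm", "axios"}
present = {name: ver for name, ver in inv.items()
           if name and name.lower() in affected}
print(present)  # empty dict if none of the affected packages are installed
```

The same question, asked per ecosystem (`npm ls`, `dpkg -l`, and so on), is what a SIEM agent automates across a fleet.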

However, planning your upgrades with explicit expectations of the versions you are moving from and the versions you are moving to is also detailed information that can be extremely important once the chips are down. A little preparation goes a long way here. Document your upgrade process via either a ticket or a wiki. Outline the expectations with clear version numbers and brief descriptions of what you can identify. If you are working on a team, review it briefly with your team first, and follow the service value system frameworks available where possible.
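One possible shape for such a record, sketched as a small Python helper. The field names, package versions, and ticket ID are all illustrative, not a standard:

```python
# Capture upgrade intent as a small, ticket-attachable record.
import json
from datetime import date

def upgrade_record(package, from_version, to_version, reason, ticket=None):
    """Build a plan entry documenting what is changing and why."""
    return {
        "package": package,
        "from": from_version,
        "to": to_version,
        "reason": reason,    # e.g. "fixes a known CVE", "audit compliance"
        "ticket": ticket,
        "planned": date.today().isoformat(),
    }

# Hypothetical example values:
plan = upgrade_record("litellm", "1.58.2", "1.61.0",
                      reason="post-incident security release",
                      ticket="OPS-1234")
print(json.dumps(plan, indent=2))
```

Pasted into the change ticket or wiki page, a record like this answers “what were we running, and why did we move” months later without archaeology.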

Combined with a good change management process, tickets, documentation, and a solid project plan, we can get a foot on the ground in a world that spins out of control, even if you are a lone sysadmin with the weight of the world on your shoulders. Good tool usage, documented upgrades, and fast access to knowledge can bring stress levels down and improve decision response in almost any situation.

Remediation Is Nine-Tenths Preparation

When remediation hits, you want all of this preparation ready ahead of time. Everyone needs information quickly to react appropriately. All of this relevant data shows that we take our role seriously, plan appropriately, and deserve to maintain these systems. In other words, have some self-respect for your process.

Every remediation situation will look different, depending on many systems outside our control. It’s important to recognize how far a remediation reaches outside the IT department, touching an entire organization and the departments involved.

Good documentation and good sources won’t stop the next attack, but they’ll help you figure out what to do about it.

References

[1] P. V. D. Veide, "The Chain," Wikimedia Commons, 2017. [Online]. Available: https://commons.wikimedia.org/wiki/File:The_Chain_(35308539015).jpg. Accessed: Mar. 30, 2026.

[2] Sailpoint and D. Research, "LiteLLM Supply Chain Incident," LiteLLM, 2026. [Online]. Available: https://docs.litellm.ai/blog/security-update-march-2026. Accessed: Mar. 30, 2026.

[3] Bleeping Computer, "Bleeping Computer RSS Feed," Bleeping Computer, 2026. [Online]. Available: https://www.bleepingcomputer.com/feed/. Accessed: Mar. 30, 2026.

[4] Dark Reading, "Dark Reading RSS Feed," Dark Reading, 2026. [Online]. Available: https://www.darkreading.com/rss.xml. Accessed: Mar. 30, 2026.

[5] Google, "Google Workspace Updates RSS Feed," Google, 2026. [Online]. Available: https://feeds.feedburner.com/GoogleAppsUpdates. Accessed: Mar. 30, 2026.

[6] Canadian Centre for Cyber Security, "Alerts and Advisories," Cyber.gc.ca, 2026. [Online]. Available: https://www.cyber.gc.ca/en/alerts-advisories. Accessed: Mar. 30, 2026.

[7] Wazuh, "Wazuh - Open Source Security Platform," Wazuh, 2026. [Online]. Available: https://wazuh.com/. Accessed: Mar. 30, 2026.