← back

The Rise of Invisible Package Attacks and the Next Wave of Software Supply Chain Exploits

By Justina | April 5, 2026 | 18 min read


How Stealth Dependency in Ecosystems Like npm and PyPI Reshape the Threat Landscape for Developers and Global Software Infrastructure

These days, the entry point for most supply chain attacks is not just a zero day vulnerability–it is a compromised account that goes unnoticed. Package maintainers on registries like npm and PyPI hold high leverage over global software infrastructure–but with this, one single stolen token can poison packages with hundreds of millions of weekly downloads, granting threat actors immediate execution inside enterprise build environments and developer workstations. What has changed in package attacks are new methods and techniques that come from the rise of AI. Threat actors now combine AI powered phishing, automated slopsquatting, and using supply chain worms as initial access brokers to scale operations that used to require more significant manual effort, all while making their attacks invisible toward victims.

AI Phishing and the Maintainer Credential Problem

Artificial intelligence has fundamentally changed how credential harvesting works against high value package maintainers. The Microsoft Digital Defense Report 2025 found that AI automated phishing campaigns achieved a 54% click through rate compared to 12% for non AI attempts, with the potential profitability of phishing operations increasing by up to 50 times. The Register reported a 45x improvement in phishing effectiveness as a result. Nation state actors have rapidly adopted these techniques, escalating from zero documented samples of AI generated content from nation state actors in July 2023 to approximately 225 samples by July 2025.

To bypass enterprise security measures, attackers abuse trusted cloud infrastructure signals. Recent callback phishing campaigns abuse Microsoft Azure Monitor alerts by inserting malicious lures into the customizable description field of alert rules. Because the emails originate from azure-noreply@microsoft.com, they pass standard SPF and DKIM checks.

These techniques directly fuel supply chain compromises. In September 2025, attackers impersonated npm support using the fraudulent domain support@npmjs.help and deployed an adversary in the middle framework to intercept the credentials and live TOTP of a high profile npm maintainer named Qix. The resulting account takeover allowed attackers to publish malicious updates to 18 popular packages which were downloaded more than 2.5 million times, deploying cryptocurrency draining malware into downstream browsers. The Shai-Hulud 2.0 campaign later used a similar targeted phishing operation to compromise additional maintainer accounts, then automatically backdoored every package those maintainers maintained, affecting up to hundreds of packages simultaneously.

Automated Slopsquatting and AI Toolchain Poisoning

As developers increasingly rely on AI for code generation and agentic workflows, threat actors have found ways to weaponize the AI layer itself rather than the developer. Slopsquatting is one vector: AI coding tools sometimes hallucinate plausible but nonexistent package names, attackers register those names in advance, and developers install them because their AI assistant suggested them. No phishing required. But the research now points to something structurally more serious than opportunistic name squatting. When an AI assistant suggests a nonexistent package, the invisible attack is pre-staged by the attacker who anticipated that specific hallucination.

Malice in Agentland, a paper from ServiceNow Research, demonstrated that AI agents can be compromised at the supply chain level through poisoned training data, and that once a backdoor is embedded, it is nearly impossible to remove. The researchers formalized three attack vectors: direct poisoning of finetuning data, pre-backdoored base models distributed through public repositories, and a novel mechanism they call environment poisoning, where an attacker embeds hidden prompt injection instructions into a webpage or tool output. When an AI agent browses that page or calls that tool during unsupervised data collection, it generates a poisoned training trace. That trace flows into the finetuning dataset. The resulting agent learns to execute the attacker's chosen action whenever a specific trigger appears silently, and without the developer ever seeing anything unusual. Poisoning as little as 2.3% of training data was sufficient to achieve over 91% attack success rate.

The finding that makes this relevant to the future of invisible package attacks is persistence. The researchers tested whether finetuning a backdoored model on entirely clean data would remove the implanted behavior. It did not. Attack success rates remained above 90% on τ-Bench and at 100% on WebArena even after training on thousands of clean samples. They also tested four state of the art guardrail models against the poisoned datasets–all four failed to reliably detect the malicious traces. The backdoor is baked into the model's learned policy, invisible to standard performance monitoring. As the paper puts it, the attack presents itself as enhanced utility. This sets the stage for various malware campaigns.

Supply Chain Worms as Access Brokers

The most dangerous characteristic of modern supply chain attacks is the fact that supply chain worms are becoming more common, which automatically harvest thousands of credentials that become raw data for secondary campaigns, which could be executed by entirely different threat actors.

Glassworm demonstrates this precisely by using invisible Unicode characters to embed malicious payloads inside what appear to be empty strings within GitHub commits, npm packages, and VS Code extensions. After bypassing manual review and standard linting tools, the JavaScript runtime decodes the hidden payload bytes and passes the result for execution. The decoded payloads hidden in documentation tweaks and minor bug fixes in over 151 GitHub repositories included npm tokens, and cryptocurrency wallet data, with command and control infrastructure maintained via the Solana blockchain.

The credentials harvested by Glassworm directly fueled ForceMemo, a secondary campaign disclosed by StepSecurity in March 2026. Attackers used the stolen GitHub tokens to force push obfuscated malware into more than 240 distinct Python repositories targeting Django applications, machine learning research code, and Flask APIs. The ForceMemo injection was designed for absolute stealth. The actor would rebase and force push malicious packages with obfuscated malware using the original author name and commit message, which seemed completely invisible in standard GitHub activity feeds. StepSecurity researchers confirmed ForceMemo utilized the exact same Solana blockchain wallet address as Glassworm, effectively linking the two operations.

Even more recently, the LiteLLM PyPI library was compromised. The popular LLM interface is a popular dependency in a large number of AI application stacks, with 95 million monthly downloads. The initial access appears to be credentials stolen from TeamPCP's Trivy compromise.

Registries have begun responding with security measures. PyPI enforced mandatory two factor authentication and rolled out email verification for TOTP logins. GitHub and npm established automated token revocation systems that cut off the self replicating pattern of the Shai-Hulud worm by blocking uploads containing known indicators of compromise. Despite these defenses, attackers continue to pivot and grow.

The evolution of these threats reveals that AI powered phishing and callback lures acquire the initial high privilege credentials for large scale supply chain attacks. Slopsquatting and obfuscated payloads extend that harvest silently into the developer's local environment, weaponizing their own AI assistant against them. Automated, self propagating campaigns like Glassworm, ForceMemo, and most recently, Canisterworm, fuel this system by acting as initial access brokers. The entire software ecosystem serves as the attack surface, and every new tool developers adopt to accelerate their workflows is simultaneously a new vector for an invisible compromise.