Ghost Dependencies: An Emerging Supply Chain Security Threat in the Agentic Coding Paradigm
Author: Tianchu Chen of Tencent Xuanwu Lab
0x00 Introduction
As the capabilities of Large Language Models (LLMs) continue to advance, AI-assisted software development is evolving from the “Copilot” paradigm—where humans write code and AI provides completions—to the “Agentic Coding” paradigm, where AI autonomously makes decisions and executes actions. In this new paradigm, AI is no longer merely a code-completion assistant but an intelligent agent capable of independently planning tasks, selecting technology stacks, manipulating the file system, and even executing commands.
However, this transfer of control introduces new attack surfaces: AI Agents make decisions on behalf of users, and those decisions are not always secure. Through extensive testing and analysis of mainstream Agentic Coding tools and their underlying LLMs, we have identified several prevalent AI decision-making risks. Among them, one category of risks related to the software supply chain can have a persistent and covert impact. We have termed this phenomenon “Ghost Dependencies.”