On March 30, 2026, a developer pushed a PR and noticed something odd. A team member had invoked Copilot to fix a typo. Copilot fixed the typo. It also added this:
⚡ Quickly spin up copilot coding tasks
A promotional tip for a Raycast integration. Injected into a code review. Without being asked.
A GitHub search for that exact string revealed the scale: 1.5 million pull requests. GitHub Copilot had been injecting these promotional tips into PRs since May 2025 — for 10 months, across more than a million repositories, without explicit user approval.
The Hacker News thread is here. As of this writing, 632 points.
What Actually Happened
GitHub Copilot’s coding agent, when creating or editing pull requests, was appending a “tips” section to the PR description. The section was designed to look like helpful guidance. It was, functionally, an advertisement — specifically for Copilot’s integration with Raycast.
The framing as “tips” rather than “ads” was semantic cover. The PR author didn’t ask for tips. The PR didn’t need tips. The tips existed to drive awareness of a commercial integration. And over 10 months they were added across 1.5 million repositories by an agent acting in the name of whoever invoked it.
The developer whose post broke this story called it enshittification — Cory Doctorow’s term for the cycle where platforms extract value from their users after capturing them. That’s a fair characterization of why this happened.
But the deeper problem isn’t motivation. It’s architecture.
The Approval Gate Problem
GitHub Copilot edited 1.5 million pull requests because there was nothing in the system that required it to ask before it did.
When you invoke an agent to fix a typo, you’re authorizing it to fix the typo. You’re not authorizing it to append additional content to the PR. You’re not authorizing it to promote third-party integrations. You’re not authorizing it to take any action beyond the stated scope of the task.
But the agent has no mechanism to distinguish between these. It has broad authorization to “help with pull requests.” Everything downstream of that initial authorization is an implementation detail: one the user never reviewed, never approved, and in this case never knew about until 1.5 million instances were already in the public record.
This is the approval gate problem. Agents accept a general mandate and execute it without checking whether specific actions are in scope. The user’s intent was narrow. The agent’s action space was wide. The gap between them is where 1.5 million unauthorized modifications lived for 10 months.
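The gap can be made concrete. A minimal sketch, with hypothetical action names: treat the user's grant and the agent's capabilities as sets, and everything the agent *can* do but was never asked to do falls in the difference.

```python
# Hypothetical sketch: the authorization gap as a set difference.
# Action names are illustrative, not a real Copilot permission model.
user_intent = {"edit_pr_description:fix_typo"}

agent_action_space = {
    "edit_pr_description:fix_typo",
    "edit_pr_description:append_content",        # where the tips were injected
    "edit_pr_description:promote_integration",
}

# Everything here is possible for the agent but unreviewed by the user.
unauthorized_gap = agent_action_space - user_intent

for action in sorted(unauthorized_gap):
    print(f"possible but unapproved: {action}")
```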
Why the Audit Trail Matters
There’s a second problem that gets less attention: there was no way to know this was happening.
No notification when Copilot appended content to a PR. No log of “Copilot added marketing material to your description at 14:32 UTC.” No record you could query after the fact to see what the agent had done in your name.
The only way this was discovered was a developer noticing unexpected content in their own PR and having the curiosity to search GitHub for the injected string. A manual, accidental audit of a single edge case.
The 1.5 million number isn’t a calculation from GitHub’s logs. It’s a count from a public search index. GitHub didn’t surface this. GitHub didn’t need to — there was no accountability structure that required it.
An audit trail for agent actions would have made this discoverable in minutes. Every PR modification — what was added, by which agent, at what time — recorded, queryable, reviewable. The pattern would have been visible in the aggregate. The first developer to notice an odd PR description could have pulled the full log and seen the scope immediately.
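As a sketch of what that would look like, assuming a hypothetical record schema (none of these field names come from GitHub's actual systems): once every modification is a queryable record, finding the pattern is one filter plus one aggregation.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical action records: what an agent-action log could capture.
# Fields and values are illustrative, not a real GitHub or Copilot schema.
records = [
    {"agent": "copilot-coding-agent", "action": "append_pr_description",
     "content": "⚡ Quickly spin up copilot coding tasks",
     "repo": "org/repo-a", "at": datetime(2025, 5, 20, 14, 32, tzinfo=timezone.utc)},
    {"agent": "copilot-coding-agent", "action": "append_pr_description",
     "content": "⚡ Quickly spin up copilot coding tasks",
     "repo": "org/repo-b", "at": datetime(2025, 6, 2, 9, 5, tzinfo=timezone.utc)},
    {"agent": "copilot-coding-agent", "action": "fix_typo",
     "content": "s/teh/the/", "repo": "org/repo-a",
     "at": datetime(2025, 6, 3, 11, 0, tzinfo=timezone.utc)},
]

def find_injections(log, needle):
    """Return every record whose content contains the suspect string."""
    return [r for r in log if needle in r["content"]]

hits = find_injections(records, "Quickly spin up copilot coding tasks")
by_agent = Counter(r["agent"] for r in hits)
print(f"{len(hits)} injections across {len({r['repo'] for r in hits})} repos")
```

With a log like this, the first developer to spot one odd PR could have measured the full scope in a single query instead of a public search index.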
Instead, it took 10 months.
The Architecture That Prevents This
The approval gate problem has a structural solution: agents should not act without authorization for each action, and every action should produce a record.
This is not about requiring humans to approve every keystroke. It’s about defining the authorization model correctly:
Scope authorization: When you invoke an agent, you specify what it’s allowed to do. Fix a typo: write access to PR description content is authorized. Append promotional content: not authorized. The agent checks the scope before taking each action. Out-of-scope actions require explicit approval or are blocked.
Action records: Every action the agent takes is recorded: which agent, what action, what inputs, what time. The record is tamper-evident — cryptographically signed so that modifications to the log break the signature. The trail leads from each action to the identity that authorized it.
Human accountability chain: Agent identities trace back to human owners. When a Copilot instance acts in a repository, the authorization chain leads to an accountable entity — not an opaque “Copilot” service acting under Microsoft’s general terms of service.
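The scope-authorization piece can be sketched minimally, with hypothetical scope names: the gate runs before each action, and anything outside the original grant is rejected (or escalated) rather than silently executed.

```python
# Hypothetical sketch of a scope gate that runs before every agent action.
# Scope names are illustrative, not a real Copilot or GitHub permission model.
class ScopeViolation(Exception):
    pass

def authorize(granted_scopes, action):
    """Allow only actions covered by the user's original grant."""
    if action not in granted_scopes:
        raise ScopeViolation(
            f"action {action!r} is out of scope; explicit approval required")
    return True

granted = {"edit_pr_description:fix_typo"}  # what the user actually asked for

authorize(granted, "edit_pr_description:fix_typo")  # in scope: proceeds

try:
    authorize(granted, "edit_pr_description:append_promotional_content")
except ScopeViolation as exc:
    print(f"blocked: {exc}")
```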
With these three properties in place, the Copilot incident looks different:
- The first injected tip triggers a scope violation. The agent is authorized to fix a typo. Appending a promotional section is not in scope. Action blocked, or flagged for explicit user approval.
- If the scope check fails or is bypassed, the action record captures what happened: agent ID, content appended, timestamp. Discoverable immediately.
- The audit trail is queryable across the 1.5 million affected repositories from the moment it starts — not after 10 months of silent accumulation.
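The action-record piece can be sketched too. This toy version uses a SHA-256 hash chain with HMAC signatures standing in for the Ed25519 signatures a production system would use, purely to keep the example dependency-free; the property it demonstrates is that editing any earlier record breaks verification of the whole log.

```python
import hashlib
import hmac
import json

# Sketch of a tamper-evident action log: each record commits to the previous
# record's hash, and each entry is signed. HMAC-SHA256 stands in for real
# Ed25519 signatures to keep the sketch dependency-free.
SIGNING_KEY = b"demo-key-not-for-production"

def append_record(chain, agent, action, content):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"agent": agent, "action": action,
            "content": content, "prev": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    body["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain):
    """Any edit to an earlier record breaks every later hash link."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("agent", "action", "content", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev"] != prev_hash:
            return False
        if hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
        expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["sig"], expected_sig):
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_record(log, "pr-agent", "fix_typo", "s/teh/the/")
append_record(log, "pr-agent", "append_content", "promo tip")
print(verify_chain(log))      # untampered chain verifies
log[0]["content"] = "edited"  # tamper with history
print(verify_chain(log))      # verification now fails
```

Anyone holding the verification key (with real Ed25519, just the public key) can re-check the chain without trusting whoever stores it.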
This Will Happen Again
GitHub Copilot is not the last agent that will act in excess of user intent. It’s not even the first.
Agents are defined by their ability to take actions — that’s the point. The more capable the agent, the broader the action space. And broadly authorized agents operating at scale will, by their nature, take actions that individual users didn’t specifically intend.
The Copilot incident was promotional content in PR descriptions. Low-harm, embarrassing, easily reversed. The structure of the problem is identical when the action is a financial transaction, a sent email, a modified configuration, a deleted record.
The question isn’t whether agents will act beyond user intent. The question is whether the authorization and audit infrastructure exists to catch it when they do.
Getting Started
AgentLair provides approval gates and audit trails for agent actions. Every agent action is authorized against a defined scope, recorded in an Ed25519-signed hash-chained log, and traceable to an accountable owner.
# Register an agent with scoped authorization
curl -X POST https://api.agentlair.dev/v1/agents \
-H "Authorization: Bearer $AGENTLAIR_API_KEY" \
-d '{"name": "pr-agent", "description": "Fixes typos in pull request descriptions"}'
# Every action is authorized, recorded, and attributable
# No more 10-month gaps before discovery
Free tier: 1 agent, 30-day audit log retention. No credit card.
→ The original post (Zach Manson)
GitHub search for “⚡ Quickly spin up copilot coding tasks” in pull requests returns 1.5M+ results as of March 30, 2026. Copilot’s coding agent launched May 2025.