
TL;DR
VS Code 1.118 makes Copilot a Git co-author by default for chat and agent commits. The argument is not really about one trailer line. It is about consent, audit signals, and who controls developer workflow metadata.
VS Code 1.118 shipped a small source-control default with a much bigger trust problem.
The official VS Code 1.118 release notes say Git AI co-authoring is now enabled by default for chat and agent workflows. When Copilot changes files, VS Code can automatically add Copilot as a co-author on the commit. The source-control docs explain the underlying setting, git.addAICoAuthor, and the available modes: off, chatAndAgent, and all.
That sounds tidy. It is just a Co-authored-by: trailer.
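Concretely, the feature appends a standard Git trailer line to the commit message. The identity string below is a placeholder; the exact name and email VS Code writes may differ:

```
Fix pagination off-by-one in results view

Co-authored-by: Copilot <copilot@users.noreply.github.com>
```

GitHub already parses this trailer format to render co-authors on commits, which is why one extra line carries real weight.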
But the reaction on Hacker News, Reddit, and GitHub-adjacent forums is not really about one line of commit metadata. Developers are arguing about who gets to write into the permanent record of a repo, whether AI usage should be disclosed, and whether a tool default should silently become team policy.
This is a good argument to have because every coding agent is about to run into the same boundary.
For the broader Copilot platform shift, read GitHub Copilot Coding Agent and CLI. For the workflow-trust layer behind this story, pair it with The Agent Reliability Cliff and What Hacker News Gets Right About AI Coding Agents.
VS Code introduced the setting earlier with off as the default. In 1.118, the release notes say the default is enabled for chat and agent workflows. The behavior applies when Copilot makes changes to files and the commit is created through VS Code's built-in Git flow.
The docs matter because the scope is narrower than some angry summaries imply:
- `off` adds no AI co-author trailer.
- `chatAndAgent` adds the trailer for Copilot Chat or agent-mode generated code.
- `all` extends the behavior to inline completions.

The practical fix is simple:
```json
{
  "git.addAICoAuthor": "off"
}
```
That solves the local annoyance. It does not solve the policy question.
Git history is not decorative UI.
Commit metadata feeds code review, blame, release notes, compliance systems, security audits, dashboards, and future debugging. Once a default tool setting writes into that layer, it stops being a personal preference and starts acting like workflow policy.
That is why this landed badly. Developers are not only objecting to AI attribution. Some people actively want AI-generated work labeled. The deeper objection is that the default changed in a place where the user expected authorship and commit hygiene to remain under their control.
There are three separate concerns getting mashed together:

- whether AI usage should be disclosed at all,
- whether a `Co-authored-by` trailer is the right place for that disclosure,
- and whether a tool default should be allowed to silently become team policy.

The third one is the real issue.
If a team requires AI attribution, that should be explicit. If a team bans AI attribution in commit trailers and tracks usage elsewhere, that should also be explicit. A surprise editor default is the worst possible place to make the decision.
There is a real argument for labeling agent-written commits.
AI-generated changes often need different review pressure. A reviewer might want to inspect edge cases more closely, ask for stronger tests, or look for familiar failure modes: broad refactors, invented APIs, missing migrations, fake confidence, or accidental changes outside the requested scope.
Attribution can also help teams measure what is happening: which commits were agent-assisted, and how much of that work survives review.
That is useful operational data. In the same way AI coding tools pricing is shifting from sticker price to usage accounting, agent productivity will shift from vibes to accepted-change telemetry.
There is also precedent for structured trailers. Open-source projects already use trailers like Reviewed-by, Signed-off-by, Co-authored-by, and Reported-by. The idea that Git metadata can carry workflow signals is not new.
So the pro-attribution case is not silly. The weak version is "the AI deserves credit." The strong version is "reviewers and teams need machine-readable provenance signals."
That distinction matters.
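"Machine-readable" is literal here: trailers follow a simple `Key: value` grammar in the final paragraph of a commit message, the same convention Git's own `interpret-trailers` tooling consumes. A minimal sketch of extracting provenance trailers (the commit message is made up for illustration):

```python
# Minimal trailer parser: trailers are "Key: value" lines in the
# final paragraph of a commit message.
def parse_trailers(message: str) -> dict[str, list[str]]:
    paragraphs = message.strip().split("\n\n")
    trailers: dict[str, list[str]] = {}
    for line in paragraphs[-1].splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            trailers.setdefault(key, []).append(value)
    return trailers

msg = (
    "Fix pagination off-by-one\n\n"
    "Adjust the page index math in the results view.\n\n"
    "Co-authored-by: Copilot <copilot@example.invalid>\n"
    "Reviewed-by: Jane Doe <jane@example.invalid>\n"
)
print(parse_trailers(msg)["Co-authored-by"])
```

A real pipeline would read messages from `git log` and handle subject-only commits, but the point stands: a trailer is trivially machine-consumable in a way a PR comment is not.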
The counterargument is also strong: tools are not authors.
Developers already use compilers, IDEs, formatters, linters, autocomplete, generators, snippets, Stack Overflow, docs, and internal templates. We do not list all of them as co-authors. The human who chooses, reviews, commits, and ships the change owns the outcome.
That ownership point is not philosophical fluff. It is the accountability model software teams actually use.
If a production bug ships, the answer cannot be "Copilot co-authored it." The responsible party is the human and organization that accepted the change. A trailer that makes accountability feel shared with a vendor-owned tool can muddy the signal instead of clarifying it.
The other problem is false precision. A commit may include:

- code the human wrote from scratch,
- AI suggestions the human heavily rewrote,
- output from more than one tool or model,
- and boilerplate nobody meaningfully authored.

Flattening all of that into a co-author trailer can imply more certainty than the tool really has. If multiple agents touched the code, the tool that happened to run the commit command may get the attribution even if another model did the meaningful work.
That is not provenance. That is accidental bookkeeping.
The right answer is not "always add AI co-author trailers" or "never disclose AI use."
The right answer is: teams should choose the attribution layer deliberately.
For small personal repos, a VS Code setting is probably enough. If you like the signal, leave it on. If you hate it, turn it off.
For teams, decide this in the repo:
```markdown
## AI attribution policy

- Humans remain accountable for every commit they push.
- AI-generated code must be reviewed to the same standard as human code.
- We do not use `Co-authored-by` trailers for AI tools.
- Significant agent-generated work should be disclosed in the PR description under "AI assistance".
- Agent sessions that modify security, auth, billing, or data migrations require extra review.
```
Or choose the opposite:
```markdown
## AI attribution policy

- Commits with substantial agent-generated code should include an AI provenance trailer.
- The trailer should name the tool only when it materially generated the committed diff.
- The PR description must still name the human owner and summarize verification.
- The human committer remains responsible for the final change.
```
Either policy is better than drift.
Mixed histories are the bad outcome: one developer commits through VS Code with the trailer, another uses the CLI without it, another disables the setting, another uses Claude Code, another uses Codex, and now the repo has a provenance signal nobody can interpret.
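A back-of-envelope check shows why mixed histories are uninterpretable. A sketch with hypothetical commit messages (in practice you would feed in `git log` output) measuring trailer coverage; anywhere strictly between 0% and 100%, the absence of a trailer tells you nothing:

```python
# Hypothetical commit messages from one team's repo.
messages = [
    "Add retry logic\n\nCo-authored-by: Copilot <copilot@example.invalid>\n",
    "Fix flaky test",        # committed via CLI, trailer never added
    "Refactor auth module",  # setting disabled, AI still used
]

tagged = sum("Co-authored-by: Copilot" in m for m in messages)
coverage = tagged / len(messages)
print(f"{tagged}/{len(messages)} commits tagged ({coverage:.0%})")
# Partial coverage means "no trailer" cannot be read as "no AI involved".
```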
I would turn the VS Code default off for team repos unless the team has explicitly decided to use commit trailers as the AI provenance layer.
Then I would add AI disclosure to the pull request template instead.
PRs are a better place for this signal because they can carry context:

- which tool or agent was involved,
- what it generated versus what the human wrote,
- and how the result was reviewed and verified.
A commit trailer can say an AI touched the work. A PR section can explain how.
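A minimal sketch of such a section in a `.github/pull_request_template.md` (the wording and fields are illustrative, not a standard):

```markdown
## AI assistance

- Tools/models used: <!-- e.g. Copilot agent mode, none -->
- What the AI generated: <!-- files, functions, or "n/a" -->
- What I changed or rewrote:
- How I verified it: <!-- tests run, manual checks, review notes -->
```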
For agent-heavy teams, go further. Store session logs, prompts, tool traces, and model metadata in the agent system itself. Link the useful audit artifact from the PR. Do not try to stuff the whole provenance story into one Git footer.
That is also the lesson from agent reliability work: serious agent workflows need verification artifacts, not just generated output.
AI coding tools are moving from helpers to actors.
That means defaults matter more. A default that edits code is one thing. A default that edits workflow metadata is another. A default that writes into PR descriptions, commit messages, authorship fields, issue comments, or release notes is operating in the social layer of software development.
That layer is sensitive because it encodes trust.
This is where the Hacker News skepticism is useful. Developers are not rejecting attribution because they want to hide AI use. Many are rejecting vendor-controlled attribution because they want the human workflow boundary to stay clear.
Copilot, Claude Code, Codex, Cursor, and every other agent platform should treat this as a design rule:
AI tools can suggest workflow metadata, but they should not silently claim workflow identity.
Make it visible. Make it configurable. Let teams set policy. Keep the human accountable.
That is how AI attribution becomes useful instead of becoming another reason developers distrust the tools.
Add this to your VS Code settings:
```json
{
  "git.addAICoAuthor": "off"
}
```
The official VS Code source-control docs list the supported values as off, chatAndAgent, and all.
VS Code's docs say the trailer is added only when committing from inside VS Code. Commits made with external Git tools or the command line do not include the trailer from this VS Code feature.
Should AI use be disclosed at all? Often yes, but disclosure should be policy-driven. For most teams, a PR template or agent-session log is clearer than a blanket co-author trailer because it explains what the tool did and what the human verified.
This post is not legal advice. In practical engineering terms, the human and organization accepting the change remain accountable for the code. Teams with compliance requirements should decide the policy with legal and security stakeholders rather than inheriting an editor default.