CNCF Survey: Nearly 50% of Open-Source Contributors Use AI Assistants, 2/3 of Projects Have No Formal Guidelines
On April 29, 2026, the CNCF TAG Developer Experience published the first results of a survey on AI tool usage in CNCF projects, covering 133 participants from nearly 100 projects. Nearly half actively use AI assistants in their IDE (Claude Code and GitHub Copilot dominate), about two-thirds of projects have no formal AI guidelines, and more than half of participants believe AI contributions should always be disclosed.
The survey, conducted by Graziano Casto, Hélia Barroso, Alessandro Pomponio, and Sonali Srivastava, drew 133 participants from nearly 100 different open-source CNCF projects.
What Does the Survey Show?
The headline finding: nearly 50% of participants actively use AI assistants within their IDE or command-line interface. Only about 10% still rely on manually copy-pasting from web chatbots, suggesting that IDE integration has matured. Roughly the same share (~10%) have already implemented advanced integrations for PR review and issue triaging.
The most popular tools, according to the authors, are Claude Code and GitHub Copilot — explicitly identified as “clear leaders in the space.” Primary use cases: writing and refactoring code, improving documentation, debugging, understanding unfamiliar codebases, and analyzing pull requests.
The Governance Gap
According to the results, two-thirds of projects either have no formal AI guidelines or their participants are unaware of them. “The vast majority of projects do not mention AI use in their publicly available documentation,” the CNCF team writes. Only a few pioneering projects have established clear policies. A third of participants report that AI use is permitted; fewer than 4% report an explicit ban.
Transparency and Concerns
The question of disclosing AI contributions shows a clear trend: more than 50% believe disclosure should always be required, and a further 20% support requiring it in specific cases. Key concerns include security vulnerabilities in AI-generated code, license compliance (known issues with third-party code in training sets), and the burden that low-effort PRs place on maintainers.
The survey remains open until May 18, 2026 (End of Day, Anywhere on Earth), and CNCF is seeking additional responses for a more representative view of the cloud-native ecosystem.
Frequently Asked Questions
- Which AI tools are most popular in open-source projects?
- Claude Code and GitHub Copilot stand out as “clear leaders in the space” for IDE and CLI scenarios. Only about 10% of contributors still rely on manual copy-paste from web chatbots.
- Are there AI guidelines in open-source projects?
- Mostly no — roughly two-thirds of projects either have no formal AI guidelines or their participants are unaware of them. “The vast majority of projects do not mention AI use in their publicly available documentation.”
- What do participants think about transparency?
- More than 50% believe AI-assisted contributions should always be disclosed, and a further 20% support disclosure in specific cases. The main concerns are security vulnerabilities, license compliance, and the burden of “low-effort PRs” on maintainers.
This article was generated using artificial intelligence from primary sources.