
AI Covenant for LinkML

This covenant establishes community norms for responsible AI use in the LinkML project. It aims to maintain trust, quality, and accountability while embracing AI as a useful tool.

It applies to the repositories central to the LinkML mission.

Core Principle: You Own Your Contributions

Everything you contribute is yours—regardless of what tools helped create it.

When you submit code, documentation, issues, or comments with AI assistance, you are the author. You are responsible for:

  • Understanding what you are submitting
  • Verifying correctness and appropriateness
  • Defending and explaining your choices during review
  • Ensuring it meets project standards

Do not submit anything you cannot fully stand behind.

AI-Assisted Code Reviews

AI review tools (Claude, Copilot, CodeRabbit, etc.) provide automated quality checks, not human reviews.

  • AI comments are suggestions, not requirements
  • PR owners may close AI comments without response
  • Human reviewers may use AI feedback to inform their own review
  • A PR still requires human approval regardless of AI feedback

AI-Assisted Discussions

AI tools can be helpful thinking aids when preparing to participate in LinkML discussions and issues.

They may be used to:

  • Clarify your own thinking before engaging
  • Explore alternative framings or options
  • Help draft your contribution for clarity and structure

However, discussions exist to surface, negotiate, and consolidate human judgement. They are not a channel for autonomous or proxy AI participation.

AI systems MUST NOT be used to directly post comments, replies, or messages in:

  • GitHub issues or discussions
  • Shared Slack channels
  • Project mailing lists or email threads

All discussion contributions must reflect a human position that the author is prepared to explain, revise, and defend. Posting AI-generated commentary as an independent “voice” undermines trust, accountability, and the purpose of deliberation.

In short:

  • AI may support participation
  • Humans must own participation

When to Disclose AI Assistance

Required disclosure:

  • When proposing bug fixes or changes to code you don't fully understand, attribute the idea to AI so reviewers can assess appropriately.

Appreciated transparency:

  • When brainstorming solutions, distinguish between "AI suggests X" and "I recommend X based on my expertise". This helps the community prioritize ideas.

Not required:

  • Routine use of AI for writing code, issues, or PR descriptions.
  • Adding AI co-authorship trailers to commit messages; this is actively discouraged.
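
For illustration, the kind of commit trailer being discouraged looks like the following (the subject line, body, and author email here are hypothetical, not from any real LinkML commit):

```text
Fix range validation for multivalued slots

Tighten the check so that each value in a multivalued slot
is validated against the declared range.

Co-authored-by: Claude <noreply@anthropic.com>
```

The `Co-authored-by:` trailer is the standard GitHub mechanism for crediting additional authors; under this covenant, you are the sole author of AI-assisted work, so such trailers for AI tools should be left out.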

What This Means in Practice

| Situation | Guidance |
| --- | --- |
| Writing code with Copilot/Claude | No disclosure needed; you own the result |
| Submitting AI-suggested fix you fully understand | No disclosure needed |
| Submitting AI-suggested fix in unfamiliar code | Disclose AI origin for reviewer context |
| Drafting issue or PR description with AI | No disclosure needed; ensure it's accurate |
| Brainstorming in discussions | Be clear about AI-generated vs. expert ideas |
| Receiving AI review comments | Address or close at your discretion |

Trust and Accountability

This covenant is built on trust. By contributing to LinkML, you agree that:

  1. You will not submit AI-generated content without reviewing it
  2. You will take responsibility for any issues arising from your contributions
  3. You will be honest about the origins of ideas when it matters for review quality

This covenant may evolve as AI tools and community needs change. Feedback and suggestions are welcome.