AGENTS.md for this repository

Purpose

  • This document guides agent-like contributors (human or AI) on how to build, lint, test, and style the codebase.
  • It also captures preferences or constraints used by the team and by automated agents operating in this repo.

Scope

  • Build, test, and lint commands for common environments (Node/TS, Python, Go, Rust, Java).
  • Code style guidelines: imports, formatting, types, naming, error handling, tests, and documentation.
  • Inclusion of Cursor and Copilot rules, if present in the repo.
  • How to handle running a single test and how to extend tests locally.
  1. Quick start: common commands
  • Build (preferred entry point):
    • Node/TypeScript: npm run build (or yarn build)
    • Python: python -m build or your project-specific build script
    • Go: go build ./...
    • Rust: cargo build
  • Lint:
    • Node/TypeScript: npm run lint (or yarn lint)
    • Python: ruff check . (or flake8), or your project's linter
    • Go: golangci-lint run
    • Rust: cargo clippy
  • Test:
    • Node/TS: npm test (or yarn test)
    • Python: pytest
    • Go: go test ./...
    • Rust: cargo test
  • Note: If your repo uses a mixed tech stack, prefer the language-specific scripts in package.json or the equivalent task-runner configuration.
  2. Run a single test (typical patterns)
  • Node / Jest
    • Run a specific test by name: npm test -- -t "should render the component"
    • Run a specific file: npm test -- path/to/file.test.js
  • Python / Pytest
    • Run tests matching a keyword: pytest -k "test_name_substring" -q
    • Run a specific file: pytest tests/test_module.py -q
  • Go
    • Run a single test by name: go test -run TestName ./...
  • Rust / Cargo
    • Run an exact test by name: cargo test test_name -- --exact
    • Run tests matching a substring: cargo test pattern
  • Java / Maven or Gradle
    • Maven: mvn -Dtest=MyTest#testMethod test
    • Gradle: ./gradlew test --tests "com.example.MyTest.testMethod"
  3. Code style guidelines

<A) Imports and modules>

  • Group imports into three blocks: standard library, third-party, and first-party modules.
  • Order blocks alphabetically within each group; separate blocks with a newline.
  • Avoid wildcard imports; prefer explicit imports.
  • For TS/JS, prefer absolute/alias imports over relative when it improves clarity.
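
As a sketch of this grouping in TypeScript (the third-party and first-party module names below are hypothetical placeholders, not packages this repo necessarily uses):

```typescript
// Standard library (Node builtins), alphabetized within the block.
import * as fs from "node:fs";
import * as path from "node:path";

// Third-party block would follow, alphabetized (hypothetical example):
// import { z } from "zod";

// First-party block last, via an alias if one is configured (hypothetical):
// import { loadConfig } from "@app/config";

// Both stdlib imports are exercised below.
export function fileNameExists(p: string): boolean {
  return fs.existsSync(path.basename(p));
}
```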

<B) Formatting and tooling>

  • Use the project's formatter (Prettier, gofmt, black, etc.) with the configured settings.
  • Respect the repository's line length (commonly 100-120 chars). Break long lines at logical points.
  • Use semicolons consistently if the project enforces them; otherwise adhere to the established style.
  • Enable and respect lint rules; fix all autofixable issues during code edits.

<C) Types and APIs>

  • In TypeScript, enable strict type checking; prefer interfaces for public APIs and type aliases for unions and internal shapes.
  • Use readonly modifiers where possible to express intent and strengthen immutability guarantees.
  • Prefer explicit return types for exported functions and public APIs.
  • Avoid any where possible; if necessary, use unknown with proper checks.
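
A minimal sketch of these points; the `User` shape is invented purely for illustration:

```typescript
// Public API: an interface with readonly fields and an explicit return type.
interface User {
  readonly id: number;
  readonly name: string;
}

// Narrow `unknown` with runtime checks instead of reaching for `any`.
function parseUser(input: unknown): User | null {
  if (typeof input !== "object" || input === null) return null;
  const obj = input as Record<string, unknown>;
  if (typeof obj.id !== "number" || typeof obj.name !== "string") return null;
  return { id: obj.id, name: obj.name };
}
```

Callers get a fully typed `User` or `null`, with the unsafe cast confined to one well-checked spot.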

<D) Naming conventions>

  • Variables and functions: camelCase
  • Classes and types: PascalCase
  • Constants: UPPER_SNAKE_CASE
  • File and module names: kebab-case or snake_case, consistent with project convention

<E) Error handling>

  • Do not swallow errors; attach context when rethrowing (e.g., ``throw new Error(`Context: ${err.message}`)``).
  • Propagate errors to callers with meaningful messages.
  • Use try/catch around IO-bound or network-bound operations and ensure resources are released in finally or via finally-like blocks.
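
One way these rules can look together; `FakeConnection` stands in for a real IO handle and is hypothetical:

```typescript
// FakeConnection stands in for a real IO resource (hypothetical).
class FakeConnection {
  closed = false;
  close(): void { this.closed = true; }
}

function readRecord(conn: FakeConnection, fail: boolean): string {
  try {
    if (fail) throw new Error("connection reset");
    return "record";
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    // Context attached; the original message is preserved for callers.
    throw new Error(`readRecord: ${msg}`);
  } finally {
    conn.close(); // runs on both the success and the failure path
  }
}
```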

<F) Async/Promises>

  • Prefer async/await syntax for readability.
  • Handle rejections at the call site when possible; avoid unhandled promises.
  • Use Promise.all when performing independent async tasks, but catch and handle failures gracefully.
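
A sketch of the Promise.all pattern with the rejection handled at the call site (`fetchPart` is a stand-in for any independent async task):

```typescript
// fetchPart is a stand-in for any independent async task (hypothetical).
async function fetchPart(name: string, fail = false): Promise<string> {
  if (fail) throw new Error(`${name} failed`);
  return `${name}-data`;
}

async function loadAll(): Promise<string[]> {
  try {
    // Promise.all rejects as soon as any task rejects...
    return await Promise.all([fetchPart("a"), fetchPart("b")]);
  } catch (err) {
    // ...so catch here and rethrow (or recover) with context, keeping
    // the rejection handled rather than leaving it unobserved.
    throw err instanceof Error ? err : new Error(String(err));
  }
}
```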

<G) Tests>

  • Tests should be fast, deterministic, and hermetic.
  • Use descriptive test names and structure (Arrange-Act-Assert patterns where helpful).
  • Isolate external dependencies; mock/stub network/db calls effectively.
  • Include tests for error paths and boundary conditions.
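
A sketch of a hermetic test in Arrange-Act-Assert form, framework-free for brevity (`getTitle` and the stub are invented for illustration):

```typescript
// The external dependency is expressed as a function type so it can be stubbed.
type Fetcher = (url: string) => Promise<string>;

async function getTitle(fetcher: Fetcher, url: string): Promise<string> {
  const body = await fetcher(url);
  return body.trim().toUpperCase();
}

async function testGetTitleNormalizesBody(): Promise<void> {
  // Arrange: stub the network call so the test is fast and deterministic.
  const stub: Fetcher = async () => "  hello  ";
  // Act
  const result = await getTitle(stub, "https://example.invalid/title");
  // Assert
  if (result !== "HELLO") throw new Error(`expected HELLO, got ${result}`);
}
```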

<H) Documentation and comments>

  • Document non-obvious logic with concise comments; avoid obvious boilerplate.
  • Public APIs should have JSDoc / TSdoc-style comments describing inputs, outputs, and side effects.
  • Update or add READMEs where necessary to reflect changes in behavior.

<I) Security and compliance>

  • Do not log sensitive data; mask secrets in logs.
  • Validate inputs and sanitize outputs where appropriate.
  • Treat untrusted data carefully; avoid code paths that execute untrusted input without validation.
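
Two small sketches of these points; both helpers and their rules are hypothetical, not a vetted security library:

```typescript
// Mask all but the last few characters of a secret before it reaches a log.
function maskSecret(value: string, visible = 4): string {
  if (value.length <= visible) return "*".repeat(value.length);
  return "*".repeat(value.length - visible) + value.slice(-visible);
}

// Allowlist validation for untrusted input: letters, digits, underscore, 3-32 chars.
function isSafeUsername(input: string): boolean {
  return /^[A-Za-z0-9_]{3,32}$/.test(input);
}
```

Prefer allowlists (what is permitted) over denylists (what is forbidden) when validating untrusted data.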
  4. Cursor and Copilot rules
  • Cursor rules: If this repo uses Cursor tooling, its rules can live under .cursor/rules/ or .cursorrules. Copy or adapt them into this document when agents are created.
  • Copilot rules: If there is .github/copilot-instructions.md, follow its guidance and ensure code generation adheres to the outlined constraints.
  • If the repo contains these files, consider linking to them here and summarizing any special constraints applicable to agents.
  5. Git workflow and contribution notes
  • Do not modify dependencies without explicit approval.
  • Keep commits small and focused; write 1-2 sentence messages describing why a change was made, not just what changed.
  • For AGENTS.md updates, include a short rationale in the commit message.
  • Prefer small, reviewed edits over sweeping rewrites.
  6. Local integration guidance
  • Run: npm install or yarn to install deps before building/tests.
  • For multi-language repos, ensure the local environment has language runtimes and tooling installed (Node, Python, Go, Rust, etc.).
  • Use a clean environment (e.g., nvm, virtualenv) to avoid cross-project contamination.
  7. Example usage scenarios
  • A single-file feature: create a minimal unit test, run npm test -- -t "feature X" to validate.
  • A refactor: run lint and then a subset of tests; fix issues flagged by lints before merging.
  • A CI-like dry run: run npm ci && npm run build && npm test, and report failures with minimal noise.

Notes

  • This file is a language-agnostic baseline. Tailor it to this repository by folding in any pre-existing AGENTS.md content and syncing with the Cursor/Copilot rules described above.