Designing effective prompt files is key to obtaining accurate and useful responses from GitHub Copilot. Prompt files (*.prompt.md) let you define reusable prompts that generate code or documentation in a structured way. Here we discuss best practices for structuring your prompt files and note the differences in functionality between Visual Studio Code and Visual Studio.
## Table of Contents

- 📋 YAML Frontmatter
- ✍️ Compose Clear and Structured Content
- 🎨 Advanced Prompt Structuring Patterns
- 📁 Organize Supporting Materials
- ⚙️ Environment-Specific Considerations
- 🎯 Conclusion
- 📚 References
- Appendix A: YAML Frontmatter Metadata Reference
- Appendix B: Tools and Capabilities Reference

## 📋 YAML Frontmatter

Prompt files may start with YAML frontmatter enclosed by `---`. This header configures how the prompt appears in the Chat UI and how it executes:
- `name`: Identifier for the slash/hashtag command. If omitted, Copilot uses the filename.
- `description`: Shown when selecting the prompt in the picker. Provides context to your team.
- `agent`: Sets the chat mode (ask, edit, agent, or a custom agent name). When referencing a custom agent, the prompt inherits that agent's default tools and behavior. See How to Structure Content for Copilot Agent Files for details on custom agents.
- `model`: Chooses a specific LLM; otherwise Copilot uses the default.
- `tools`: Restricts which tools (e.g., fetch, codebase, specific MCP servers) the prompt can access. Tool priority: tools specified in the prompt override tools from the referenced agent, which override default tools (Prompt > Agent > Default).
- `argument-hint`: Suggests how to provide arguments when running the prompt (visible in the input field).

These metadata fields are supported in VS Code and in Visual Studio 17.10+; however, not all features (such as custom agent names or specific tools) may be fully supported by Visual Studio yet. Check the release notes for your version to see which fields are functional.
Here’s a sample YAML header:
```yaml
---
name: react-form
agent: ask
model: GPT-4
description: "Generate a React form component from a list of fields."
tools: ['codebase', 'fetch']
argument-hint: 'fields=field1:string,field2:number...'
---
```

## ✍️ Compose Clear and Structured Content

The body of a prompt file contains the actual instructions. Use concise, direct language to convey the task. Organize content with headings and bullet points to make it easy for both humans and the LLM to follow.
### Define the Role and Objective

Start by stating the persona and mission. For example: "You are a senior software engineer preparing a code scaffold for a new feature. Generate a file structure, include doc comments, mark TODOs where logic should be implemented, and create supporting files (e.g., package.json)."
### Use Bullet Points

Enumerate requirements or tasks clearly. LLMs process bullet lists effectively, which results in more organized responses.
For example:
- Include comments explaining each function.
- Add a TODO placeholder in functions that need implementation.
- Create `.env` placeholders for environment variables.
- Generate dependency files if needed.

### Example: Scaffolding with TODO and .env Placeholders
When generating scaffolds, combine .env configuration templates with TODO markers to guide implementation:
File: .env.template
```
# Copy this file to .env and replace placeholder values

# Azure OpenAI Configuration
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_API_KEY=your_api_key_here
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4

# Database Configuration
DATABASE_URL=postgresql://user:password@localhost:5432/myapp
```

File: `app.py`
```python
import os

from dotenv import load_dotenv

# Load environment variables
load_dotenv()


class AIService:
    """Service for interacting with Azure OpenAI."""

    def __init__(self):
        # TODO: Initialize Azure OpenAI client with credentials from environment
        # TODO: Add connection validation and error handling
        pass

    def generate_response(self, prompt: str) -> str:
        """
        Generate AI response for the given prompt.

        Args:
            prompt: User's input text

        Returns:
            Generated response text
        """
        # TODO: Call Azure OpenAI API with prompt
        # TODO: Implement retry logic for transient failures
        # TODO: Add response validation and sanitization
        pass


class DatabaseService:
    """Service for database operations."""

    def __init__(self):
        db_url = os.getenv("DATABASE_URL")
        # TODO: Establish database connection using db_url
        # TODO: Implement connection pooling
        pass

    def save_interaction(self, prompt: str, response: str) -> None:
        """Save user interaction to database."""
        # TODO: Insert prompt and response into interactions table
        # TODO: Add transaction handling and rollback on error
        pass
```

This approach creates a clear implementation roadmap where developers use GitHub Copilot to fill in the TODO sections while the `.env.template` file documents all required configuration.
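A dependency file rounds out the scaffold. Here is a minimal sketch of what `requirements.txt` might contain, assuming the scaffold uses `python-dotenv` for configuration, the `openai` package for Azure OpenAI access, and a PostgreSQL driver; swap in whatever your project actually uses and pin versions as appropriate:

```
# requirements.txt (illustrative)
python-dotenv      # loads variables from .env
openai             # Azure OpenAI client library
psycopg2-binary    # PostgreSQL driver for DATABASE_URL
```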
### Gather Input Information Dynamically

Modern prompt files can collect input from multiple sources in a flexible, intelligent way rather than requiring manual template filling. This approach enables more natural workflows and reduces friction for users.
Available Input Sources:
1. **Explicit User Input** (manual templates with `{placeholders}`)
   - User fills in a structured template before submission
   - Highest priority when conflicts occur
   - Best for: complex requirements gathering, initial project setup
2. **Active File/Selection**
   - Automatically use content from the currently open file or selected text
   - No manual input needed if the user has a file open
   - Best for: refactoring, reviewing, transforming existing content
3. **Attached Files** (using `#file`)
   - User attaches files directly in chat: `#file:path/to/document.md`
   - Supports multiple files simultaneously
   - Best for: processing specific documents, comparing files
4. **Workspace Context**
   - Automatically discover files by common patterns or content analysis
   - Search for configuration files, detect project structure
   - Best for: project-aware operations, smart defaults
5. **Chat Variables (VS Code)**
   - `${workspaceFolder}`, `${file}`, `${selection}`, `${activeEditor}`
   - Dynamically inject current context into prompts
   - Best for: context-aware code generation
6. **Tool Integration**
   - `#codebase` - semantic search across the repository
   - `#fetch` - pull external documentation
   - Web search for the latest information
   - Best for: research, documentation lookup, external resources

### Example: Multi-Source Input Strategy
Here’s how a prompt can intelligently combine multiple sources:
```markdown
## Input Sources (Collect from all available sources)

**Gather information from ALL available sources:**
- User-provided information in chat (structured sections or `{placeholders}`)
- Active file or selection (detect content type automatically)
- Attached files with `#file` (analyze content, don't rely solely on filenames)
- Workspace context files (search by common names or content patterns)
- Explicit file paths provided as arguments

**Content Detection (intelligent analysis):**
- Analyze file structure and content to determine type
- Look for metadata patterns (dates, speakers, timestamps)
- Identify language, framework, or technology from imports/syntax
- Detect purpose from content structure (config, source, docs)

**Information Priority (when conflicts occur):**
1. **Explicit user input** - Overrides everything
2. **Active file/selection** - Current workspace context
3. **Attached files** - Explicitly provided resources
4. **Workspace context** - Discovered automatically
5. **Inferred/derived** - Calculated from other sources

**Workflow Example:**
1. Check for explicit user input (highest priority)
2. Check active file - analyze to identify type/purpose
3. Check attached files - analyze content
4. Search workspace for related files by pattern
5. Merge information using priority rules
6. Ask user for clarification only if critical info is missing
```

### Practical Workflow Example: Session Summary Generation
A real-world prompt (like article-generate-techsession-summary.prompt.md) demonstrates this multi-source approach:
```markdown
**Scenario A: User has files open**
1. User opens `SUMMARY.md` in editor
2. Runs `/techsession-summary`
3. Prompt detects the open file contains session metadata
4. Auto-searches workspace for `transcript.txt`
5. Generates summary, outputs to `SUMMARY.md` (overwrites existing)

**Scenario B: User provides partial info**
1. User types: `/techsession-summary {{session title: "AI Agents Workshop"}}`
2. Prompt uses title from user input (priority 1)
3. Searches workspace for summary/transcript files
4. Finds files, extracts remaining metadata
5. Generates summary with user-specified title

**Scenario C: User attaches files**
1. User types: `/techsession-summary #file:session-notes.md #file:recording.txt`
2. Prompt analyzes attached files (detects types by content)
3. Merges metadata from both files
4. Generates new summary with descriptive filename

**Scenario D: Nothing available**
1. User runs `/techsession-summary` in an empty folder
2. Prompt lists current directory contents
3. Asks user to either:
   - Attach files with `#file:`
   - Provide file paths as arguments
   - Navigate to the correct folder
```

**Benefits of Multi-Source Strategy:**
- Flexibility: Works with various user workflows
- Efficiency: Reduces manual input when context is available
- Intelligence: Makes smart decisions based on available data
- Graceful degradation: Falls back to asking the user when needed
- Priority-based conflict handling: Clear rules for resolving duplicates

**When to Use Each Approach:**
| Approach | When to Use | User Effort | Flexibility |
|---|---|---|---|
| Manual Template | Complex requirements, initial setup | High | Low |
| Active File | Refactoring, reviewing existing code | None | High |
| Attached Files | Specific documents, multiple inputs | Medium | High |
| Workspace Context | Project-aware operations | None | High |
| Chat Variables | Context-aware generation | None | Medium |
| Tool Integration | Research, external docs | Low | High |
| Combined Strategy | Production prompts | Low-Medium | Very High |

Modern prompts should default to the combined strategy for the best user experience, using manual templates only as a fallback or for complex scenarios where auto-detection isn't sufficient.
### Provide an Input Template

For prompts that require user input, include a user-editable template with placeholders. Wrap variable sections in double braces to signal that they should be replaced. Microsoft's AI Prompt Book demonstrates an effective pattern where each placeholder includes both a field description AND a concrete example:
```markdown
## Use Case Description
{{Briefly describe the idea, challenge, or opportunity. e.g., "We want to use generative AI to streamline our employee onboarding process by automating answers to policy questions."}}

## Target Users
{{Who will use the prototype? e.g., "New employees at a large enterprise, HR support staff."}}

## Expected Inputs
{{What will the system take as input? e.g., "Natural language questions from employees about policies or processes."}}

## Expected Outputs
{{What should the system return? e.g., "Helpful answers, links to internal documents, or a checklist of onboarding tasks."}}

## Constraints or Assumptions
{{Are there any limitations or technical context to be aware of? e.g., "Must work with existing SharePoint knowledge base; IT has approved Azure OpenAI."}}

## Goal
Generate a well-structured set of requirements to guide the rapid prototyping of this solution.
```

This dual-layer approach (instruction + example) helps users understand both what to provide and how to format it, reducing ambiguity and improving prompt effectiveness.
### Include Examples (Optional)

Providing example input and output demonstrates the expected result. For instance, when instructing Copilot to create a README, show a sample section to illustrate tone and structure, as in the sketch below.
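A minimal sketch of how such a sample section might be embedded in a prompt body; the repository name and section content are illustrative, not from any real project:

```markdown
Generate a README for this project. Match the tone and structure of this sample section:

### Getting Started
Clone the repository and install dependencies:

    git clone https://github.com/your-org/your-project.git
    cd your-project
    npm install

Keep sections short, action-oriented, and free of marketing language.
```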
### Reference Tools, Files, and Variables

In VS Code, you can leverage chat variables and tools directly within prompts:
- Use `${workspaceFolder}`, `${file}`, or `${selection}` to embed context about the current workspace, file, or selection.
- Use `#fetch` to pull content from a URL, or `#codebase` to search your repository.
- These features are fully supported in VS Code Chat and may be only partially available in Visual Studio Chat (depending on version).

Visual Studio currently supports fewer chat variables/tools; check the official docs for the latest list of supported chat commands.
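A minimal sketch of a prompt file that combines these references; the prompt name, style-guide URL, and instructions are illustrative assumptions rather than a working recipe:

```markdown
---
name: refactor-selection
agent: edit
tools: ['codebase', 'fetch']
description: "Refactor the selected code to match project conventions."
---

Refactor the following code from ${file}:

${selection}

- Use #codebase to find existing helpers in this workspace before adding new ones.
- Use #fetch to consult the team style guide at https://example.com/style-guide if needed.
- Keep the public API unchanged and summarize each change.
```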
## 🎨 Advanced Prompt Structuring Patterns

Building on Microsoft's AI Prompt Book approach, sophisticated prompt files often follow a three-part architecture that separates concerns and maximizes reusability:
### The Three-Part Prompt Architecture

#### 1. System Message

The System Message defines the AI's role, expertise level, mission, and operational guidelines. It sets the context for how the AI should behave throughout the interaction.
Key elements to include:
- Persona definition: “You are a senior solution architect…”
- Core mission: “…tasked with gathering and structuring solution requirements…”
- Step-by-step process: Numbered instructions for systematic execution
- Deliverable specification: Clear description of the expected output format
- Quality criteria: Standards the output must meet

**Example System Message:**
```markdown
## System Message

You are a senior software engineer preparing code scaffolding to support a rapid prototype for a developer using GitHub Copilot.

Your mission:
- Generate a code scaffold (file structure, function/method definitions, class stubs)
- Include clear, concise doc-comments describing intended behavior, input/output, and edge cases
- Add TODO markers signaling where GitHub Copilot should implement logic
- Include .env placeholders and dependency files (requirements.txt, package.json, csproj)
- Reflect the use case, technologies, data access patterns, and requirements as specified
- Format each file using appropriate code fences and filenames
- Output **only** scaffold code - no implementation - so developers can use GitHub Copilot to build the full solution

This ensures Copilot has the right structural context to generate meaningful code while keeping architecture aligned with the developer's intent.
```

#### 2. User Prompt Template

The User Prompt Template provides a structured input form with semantic sections and placeholder syntax. Each field should include:
- A section heading describing the information category
- A placeholder with instruction in {double braces}
- An inline example showing concrete input (using the e.g., "..." format)

**Example User Prompt Template:**
```markdown
## User Prompt Template

## Use Case
{{Describe the prototype use case and goal. e.g., "Search and summarize HR policy documents via natural language queries."}}

## Target Functionality
{{High-level behavior to enable via GitHub Copilot. e.g., "Accept user question, retrieve relevant policy docs, generate short summaries, support follow-ups."}}

## Technologies / Frameworks
{{List languages, frameworks, and services. e.g., "Python, Semantic Kernel, Azure OpenAI, Azure AI Search"}}

## Data Access
{{Describe data source or access methods. e.g., "Local JSON/CSV file of policy documents, no external APIs"}}

## Goal
Generate scaffold code optimized for use with GitHub Copilot.
```

#### 3. Example Usage

The Example Usage section demonstrates proper prompt execution by showing the User Prompt Template filled out with realistic, concrete values. This serves multiple purposes:
- Training users on the appropriate level of detail
- Validating template design by testing with real scenarios
- Providing copy-paste starting points for common use cases

**Example Usage Section:**
```markdown
## Example Usage

## Use Case
Prototype for searching and summarizing HR policy documents via natural language queries.

## Target Functionality
- Accept a user question
- Search across a set of HR policy documents
- Retrieve relevant documents and generate brief summaries
- Support follow-up queries (e.g., "What is the maternity leave policy?")

## Technologies / Frameworks
Python, Semantic Kernel, Azure OpenAI, Azure AI Search

## Data Access
- Local JSON/CSV file containing policy documents
- No external APIs required for data access

## Goal
Generate scaffold code for use with GitHub Copilot.
```

### Organizing Prompts by Categories

Microsoft's AI Prompt Book organizes prompts into engagement-stage categories that align with the solution development lifecycle:
- 🔍 Discovery: Use case ideation, evaluation, research, resource gathering
- ⚡ Rapid Prototyping: Requirements definition, data generation, code scaffolding, code generation
- 🚚 Delivery: Architecture design, deployment planning, webinar content
- 💻 GitHub Copilot: Repository-specific prompts for in-IDE workflows

Consider organizing your `.github/prompts/` directory with subdirectories matching your team's workflow stages:
```
.github/prompts/
├── discovery/
│   ├── use-case-ideation.prompt.md
│   └── requirements-gathering.prompt.md
├── development/
│   ├── code-scaffolding.prompt.md
│   └── code-generation.prompt.md
├── quality/
│   ├── grammar-review.prompt.md
│   └── security-review.prompt.md
└── documentation/
    ├── article-writing.prompt.md
    └── api-docs-generation.prompt.md
```

### Recommended Model Specification

Include a Recommended Model field in your prompt documentation (it can be in YAML frontmatter or a Markdown section) to guide users on model selection:
```yaml
---
name: code-scaffolding
model: o3 # or "gpt-5", "claude-sonnet-4.5"
description: "Generate code scaffolds for GitHub Copilot implementation"
---
```

Or as a Markdown section:
```markdown
### Recommended Model
- o3 or GitHub Copilot (for in-IDE scaffolding)
- gpt-5 (for Azure AI Foundry Chat Playground)
```

## 📁 Organize Supporting Materials

Complex prompts often require reusable snippets or deeper context. Organize these resources strategically:
- Prompt snippets: Create a folder such as `.github/prompt-snippets/` for reusable sections (e.g., code review guidelines, test boilerplates) that you reference from multiple prompts via Markdown links (see the sketch below).
- Custom agents: For reusable personas that multiple prompts can reference, create `.agent.md` files in `.github/agents/`. See How to Structure Content for Copilot Agent Files for detailed guidance on agent design.
- Project documentation: Use an optional `.copilot/context/` folder to store rich information—API contracts, data schemas, domain terms, architecture decisions and diagrams—which the Copilot engine can search. VS Code and Visual Studio both index these files to improve the relevance of suggestions.
- Example outputs: Including example outputs (e.g., a table of tests to generate) can guide the model's formatting and structure. When adding examples, clearly mark them so readers know they're illustrative.
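A minimal sketch of a prompt file that pulls in a shared snippet via a relative Markdown link; the snippet path and guideline file are illustrative:

```markdown
---
name: code-review
description: "Review the selected changes against our shared guidelines."
---

Review the selected code for correctness, readability, and security.

Apply the shared checklist in
[code-review-guidelines](../prompt-snippets/code-review-guidelines.md)
and report findings as a Markdown task list.
```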
### Targeted Instructions with .instructions.md Files

In addition to prompt files (`.prompt.md`), Visual Studio 17.12+ supports targeted instruction files that automatically apply context based on file patterns. These `.instructions.md` files provide more flexibility than a single global `copilot-instructions.md` file.

Key Features:
- Multiple instruction files for different contexts (languages, frameworks, file types)
- Automatic application based on glob patterns
- YAML frontmatter for configuration

File Structure:
--- description: "C# coding standards for this project" applyTo: "**/*.cs" --- # C# Instructions - Write clear and concise comments for each function. - Use PascalCase for component names, method names, and public members. - Use camelCase for private fields and local variables. - Add a newline before the opening curly brace of any code block. - Ensure that the final `return` statement of a method is on its own line.Usage Pattern:
1. Create a `.github/instructions/` directory
2. Add `*.instructions.md` files for different contexts
3. Use `applyTo` glob patterns to target specific files
4. Enable in Visual Studio via Tools > Options > GitHub > Copilot > Copilot Chat

Example Organization:
```
.github/instructions/
├── csharp-backend.instructions.md      (applyTo: "src/backend/**/*.cs")
├── typescript-frontend.instructions.md (applyTo: "src/frontend/**/*.ts")
├── python-ml.instructions.md           (applyTo: "ml/**/*.py")
└── sql-migrations.instructions.md      (applyTo: "db/migrations/**/*.sql")
```

When Copilot processes your request, it automatically detects and applies relevant instruction files based on your current context. The applied instructions are listed in the References section of Copilot's response.
For more examples, see the instruction samples on GitHub.
## ⚙️ Environment-Specific Considerations

| Feature or Recommendation | VS Code (1.106+ Preview) | Visual Studio 17.10+ |
|---|---|---|
| `.prompt.md` support | Yes. Slash commands `/promptName` run workspace or user prompts. | Yes, since version 17.10. Reference with `#prompt:promptName` in chat input. |
| Prompt invocation syntax | `/promptName` slash commands invoke prompts directly | `#prompt:promptName` references prompts as context in chat |
| `.instructions.md` support | Support varies; check VS Code release notes for current status | Yes, since version 17.12. Files in `.github/instructions/` with `applyTo` glob patterns. |
| User prompt files | Supported. Stored in `~/.config/Code/User/prompts` (Linux) or `%APPDATA%\Code\User\prompts` (Windows). Appear across all workspaces as slash commands. | Not supported. Only workspace prompts are recognized. |
| Tools and variables | Extensive support for `#fetch`, `#codebase`, `${file}`, `${selection}`, etc. | Limited support; see Visual Studio docs for current tools and variables. |
| Custom agents | `.agent.md` files in `.github/agents` define specialized personas. Available in VS Code 1.106+ (Preview). | Agent mode available in VS 17.14+. Custom agent profiles can be defined, but the mechanism differs from VS Code. |

## 🎯 Conclusion

Effective prompt-file design combines a well-crafted YAML header with a clear, structured body and often includes templates or examples. For sophisticated, reusable prompts, consider adopting the three-part architecture (System Message, User Prompt Template, Example Usage) demonstrated by Microsoft's AI Prompt Book—this pattern separates concerns, maximizes reusability, and provides clear guidance for both AI models and human users. By respecting the official file locations (e.g., `.github/prompts/` for prompt files), organizing prompts by workflow categories, and understanding the differences between VS Code and Visual Studio capabilities, you can create prompt libraries that provide consistent and high-quality results across your development environments.
## 📚 References

### Official GitHub Copilot Documentation

GitHub Copilot Prompt Engineering Guide [📘 Official] This comprehensive guide from GitHub provides foundational strategies for crafting effective prompts when working with GitHub Copilot. It covers general prompt engineering principles that apply across different Copilot interfaces and is essential reading for understanding how to communicate effectively with the AI assistant.
Customize Chat Responses and Set Context (Visual Studio) [📘 Official] The official Visual Studio documentation explains how to create and use .prompt.md files, custom instructions, and targeted .instructions.md files in your workspace. This reference details the YAML frontmatter options, file locations, and prompt invocation syntax for Visual Studio.
GitHub Copilot in Visual Studio [📘 Official] Microsoft’s documentation for GitHub Copilot in Visual Studio provides specific information about prompt file support (available from version 17.10+) and explains the differences in functionality between Visual Studio and VS Code, which is crucial for understanding the environment-specific considerations discussed in this article.
### Prompt Engineering Best Practices

OpenAI Prompt Engineering Guide [📘 Official] While this guide is focused on OpenAI’s models, the prompt engineering principles it discusses (clarity, specificity, providing examples, and iterative refinement) are universally applicable to GitHub Copilot. This reference helps readers understand the underlying LLM behavior that makes structured prompts effective.
Anthropic’s Prompt Engineering Tutorial [📘 Official] This tutorial offers insights into how large language models interpret instructions, including the importance of clear role definition, structured formatting, and providing context—all concepts that directly support the best practices outlined in this article for creating effective prompt files.
### Community Resources and Examples

Microsoft AI Prompt Book for Architects [📘 Official] A curated collection of production-ready prompts organized by solution development lifecycle stages (Discovery, Rapid Prototyping, Delivery). This repository demonstrates the three-part prompt architecture (System Message, User Prompt Template, Example Usage) discussed in this article and provides concrete examples of prompts for requirements gathering, code scaffolding, architecture design, and more. Essential reference for understanding enterprise-grade prompt structuring.
GitHub Awesome Copilot Repository [📘 Official] GitHub’s official curated list of resources, instruction examples, and prompt patterns for GitHub Copilot. This repository provides real-world examples and best practices for custom instructions files that complement the guidance in this article.
### Related Articles in This Series

**How GitHub Copilot Uses Markdown and Prompt Folders** Foundational article explaining the file locations and basic structure of prompts, agents, and instructions. Read this first to understand where files should be stored and how Copilot discovers them.
**How to Name and Organize Prompt Files** Best practices for organizing GitHub Copilot files in your repository, including naming conventions, folder structure, and the distinction between workspace and user-scope files.
**How to Structure Content for Copilot Agent Files** Comprehensive guide to creating custom agents with `.agent.md` files. Learn how to design agent personas, configure tools, create handoff workflows, and understand how agents interact with prompts and instructions.
## Appendix A: YAML Frontmatter Metadata Reference

This appendix provides comprehensive documentation of all metadata fields supported in `.prompt.md` YAML frontmatter across GitHub Copilot implementations and related tools.
Core Metadata Fields - Complete specifications for the 6 essential YAML fields:
- `name` - Command identifier for invoking prompts
- `description` - UI display text shown in the prompt picker
- `agent` - Execution mode (ask, edit, agent, or custom)
- `model` - LLM selection for specific capabilities
- `tools` - Capability restrictions for security/focus
- `argument-hint` - Usage guidance displayed to users

Extended Metadata Fields - Experimental and custom fields for advanced use cases:
- `version`, `author`, `tags`, `category` for organization and tracking

Custom Agent Metadata - Additional fields for `.agent.md` files defining specialized personas
Platform-Specific Differences - Distinctions between VS Code and Visual Studio implementations
Complete Examples - Full YAML headers showing all fields working together
Validation & Best Practices - Essential guidance including:
- Required vs optional fields checklist
- Common mistakes and how to avoid them
- Recommended configurations for different scenarios

Future Proposed Fields - Community-requested capabilities planned for future releases
Each field entry includes: type, support status, purpose, code examples, and best practices. Use this appendix as your go-to reference when authoring or troubleshooting prompt files.
### Core Metadata Fields

#### name
- Type: String
- Required: No (defaults to filename without extension)
- Purpose: Defines the command name used to invoke the prompt
- VS Code: Invoked as `/name` (slash command)
- Visual Studio: Invoked as `#name` (hashtag command)
- Example: `name: react-form` → use `/react-form` in VS Code
- Best Practice: Use lowercase with hyphens for multi-word names

```yaml
name: code-review
```

#### description
- Type: String
- Required: No (but highly recommended)
- Purpose: Human-readable explanation shown in prompt picker/autocomplete
- Visibility: Appears in UI when selecting prompts
- Character Limit: Keep under 100 characters for best display
- Best Practice: Write clear, action-oriented descriptions

```yaml
description: "Review code for security vulnerabilities and best practices"
```

#### agent
- Type: String (enum)
- Required: No (defaults to user's current chat mode)
- Purpose: Specifies the execution mode for the prompt
- Supported Values:
  - `ask` - Research/analysis mode (no file edits)
  - `edit` - Direct file modification mode
  - `agent` - Autonomous multi-step agent mode
  - Custom agent names (if `.agent.md` files defined)
- VS Code: Fully supported (1.106+)
- Visual Studio: Limited support (check version docs)

```yaml
agent: agent # Use autonomous agent mode
```

Agent Mode Comparison:
| Mode | File Edits | Multi-step | Tool Access | Best For |
|---|---|---|---|---|
| ask | No | Limited | Yes | Analysis, Q&A, research |
| edit | Yes | No | Limited | Direct code modifications |
| agent | Yes | Yes | Full | Complex workflows, automation |

#### model
- Type: String
- Required: No (uses user's default model)
- Purpose: Specifies which LLM to use for this prompt
- Common Values: `gpt-4` or `GPT-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, `claude-sonnet-4.5`, `o1-preview`, `o3`
- Availability: Depends on the user's Copilot subscription and enabled models
- Best Practice: Only specify when the prompt requires specific model capabilities

```yaml
model: claude-sonnet-4.5 # Use Claude for this prompt
```

#### tools
- Type: Array of strings
- Required: No (all tools available by default)
- Purpose: Restricts which tools/capabilities the prompt can use
- Supported Values:
  - `codebase` - Semantic search across repository
  - `editor` - File read/write operations
  - `filesystem` - Directory listing, file operations
  - `fetch` - Web content retrieval
  - `web_search` - Internet search capabilities
  - MCP server names (e.g., `github`, `azure`)
- Use Case: Security, performance, or focus constraints
- VS Code: Full support
- Visual Studio: Limited support

```yaml
tools: ['codebase', 'editor', 'filesystem'] # Restrict to local operations only
```

Tool Access Patterns:
```yaml
# Minimal tools (fast, focused)
tools: ['editor']

# Code-focused (no external access)
tools: ['codebase', 'editor', 'filesystem']

# Research-enabled (includes external data)
tools: ['codebase', 'fetch', 'web_search']

# Full access (all available tools)
tools: [] # or omit the field entirely
```

#### argument-hint
- Type: String
- Required: No
- Purpose: Shows a usage hint in the chat input field
- Visibility: Displayed as placeholder text when invoking the prompt
- Best Practice: Use concise syntax examples
- Format: Suggest argument patterns users should provide

```yaml
argument-hint: 'fields=name:string,age:number,email:string'
```

Effective Argument Hints:
```yaml
# File path argument
argument-hint: 'path/to/file.ts'

# Key-value pairs
argument-hint: 'component=Button props=variant,size'

# Optional parameters
argument-hint: '[language] [framework]'

# Multiple files
argument-hint: 'file1.ts file2.ts ...'
```

### Extended Metadata Fields

#### version
- Type: String (semantic version)
- Status: Not officially documented but supported by some tools
- Purpose: Track prompt file versions for compatibility
- Format: Follow semantic versioning (major.minor.patch)

```yaml
version: "1.2.0"
```

#### author
- Type: String or array
- Status: Not officially documented
- Purpose: Document prompt creator(s)
- Use Case: Team attribution, maintenance responsibility

```yaml
author: "Development Team"
# or
author: ["Alice Smith", "Bob Jones"]
```

#### tags
- Type: Array of strings
- Status: Experimental/custom
- Purpose: Categorize prompts for searching/filtering
- Use Case: Large prompt libraries with organizational needs

```yaml
tags: ['security', 'code-review', 'python']
```

#### category
- Type: String
- Status: Custom/organizational
- Purpose: High-level prompt classification
- Use Case: Aligns with prompt organization structure

```yaml
category: "quality-assurance"
```

### Custom Agent Metadata (.agent.md files)

When defining custom agents in `.github/agents/*.agent.md`, additional fields are available:
#### instructions
- Type: Markdown content (in body, not YAML)
- Purpose: Define the agent's system instructions and behavior
- Location: Main content after YAML frontmatter

#### functions (Proposed)
- Type: Array of function definitions
- Status: Experimental
- Purpose: Define custom capabilities for specialized agents

### Platform-Specific Metadata

VS Code Specific:

```yaml
# VS Code Preview features (1.106+)
name: my-prompt
agent: custom-agent-name # Reference .agent.md file
tools: ['codebase', '@mcp-server-name'] # MCP server integration
```

Visual Studio Specific:

```yaml
# Visual Studio 17.10+
name: my-prompt
# Invoked as #my-prompt (not /my-prompt)
# Note: Limited tool/agent support compared to VS Code
```

### Complete Example with All Common Fields

```markdown
---
name: comprehensive-code-review
description: "Perform security audit and best practices review with detailed report"
agent: agent
model: claude-sonnet-4.5
tools: ['codebase', 'editor', 'fetch']
argument-hint: '[focus=security|performance|style]'
version: "2.1.0"
author: "Security Team"
tags: ['security', 'code-review', 'audit']
category: "quality-assurance"
---

# Comprehensive Code Review Prompt

[Prompt content here...]
```

### Validation and Best Practices

Required Fields Checklist:
- ✅ `name` (or rely on the filename)
- ✅ `description` (strongly recommended)

Optional but Recommended:
- `agent` - Specify if the prompt needs a specific mode
- `model` - Specify if the prompt requires specific capabilities
- `tools` - Restrict for security/performance
- `argument-hint` - Guide users on usage

Common Mistakes:
- ❌ Using spaces in `name` (use hyphens: `code-review`, not `code review`)
- ❌ Overly long `description` (keep under 100 chars)
- ❌ Specifying unavailable models (check the user's subscription)
- ❌ Over-restricting `tools` (limits functionality unnecessarily)
- ❌ Vague argument hints (be specific about the expected format)
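For a quick sanity check, here is a minimal sketch of a frontmatter that satisfies the checklist and avoids the mistakes above; the name, description, and tool choices are illustrative:

```yaml
---
name: sql-review                       # hyphenated, no spaces
description: "Review SQL migrations for destructive changes."   # short, action-oriented
agent: ask                             # analysis only, no file edits
tools: ['codebase', 'filesystem']      # restricted, but not over-restricted
argument-hint: 'path/to/migration.sql' # concrete expected format
---
```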
### Future Metadata Fields (Proposed)

Based on community requests and tool evolution:

```yaml
# Proposed future fields
requires: ['extension-id']   # Extension dependencies
min-version: "1.95.0"        # Minimum VS Code version
max-tokens: 8000             # Token limit for responses
temperature: 0.7             # Model temperature override
context-files: ['docs/**']   # Auto-include file patterns
```

These fields are not currently supported but represent potential future capabilities.
## Appendix B: Tools and Capabilities Reference

Tools are capabilities that prompts can use to gather information, interact with code, access external resources, and perform operations. The `tools` YAML field controls which capabilities are available to a specific prompt, enabling fine-grained control over prompt behavior for security, performance, or focus reasons.
This appendix provides comprehensive documentation of all tools that can be specified in the tools YAML field, their capabilities, use cases, and access patterns.
Core Built-in Tools:
- `codebase` - Semantic search across the workspace for code patterns, symbols, and implementations
- `editor` - File read/write operations including creating, modifying, and deleting files
- `filesystem` - Directory navigation, file queries, and metadata access (read-only)
- `fetch` - Retrieve content from web URLs and REST APIs
- `web_search` - Search the internet for current information and documentation

MCP (Model Context Protocol) Server Tools:
- `@github` - GitHub API integration for repository data, issues, and pull requests
- `@azure` - Azure resource management, queries, and documentation access
- Custom servers - Organization-specific tools (e.g., `@company-wiki`, `@internal-api-docs`)

Common Tool Combinations:
- Local only: `['codebase', 'editor', 'filesystem']` - No external network access
- Research-enabled: `['codebase', 'editor', 'fetch', 'web_search']` - Includes external resources
- Full access: `[]` or omit the field - All available tools enabled

### Core Built-in Tools

#### codebase
- Purpose: Semantic search across the entire workspace/repository
- Capabilities:
  - Search for code patterns, functions, classes, and symbols
  - Find related code implementations
  - Locate definitions and references
  - Understand project structure and relationships
- Use Cases:
  - Code review prompts needing context from multiple files
  - Refactoring operations requiring cross-file analysis
  - Documentation generation from existing code
  - Finding similar patterns or implementations
- Performance: Moderate (indexes the workspace on first use)
- Security: Low risk (read-only access to workspace)

```yaml
tools: ['codebase'] # Enable semantic code search
```

Example Query Patterns:
- “Find all implementations of the UserService interface”
- “Locate error handling patterns in this project”
- “Search for security vulnerabilities in authentication code”

#### editor
- Purpose: File read/write operations
- Capabilities:
  - Read file contents
  - Create new files
  - Modify existing files
  - Delete files
  - Rename/move files
- Use Cases:
  - Code generation prompts that create new files
  - Refactoring prompts that modify multiple files
  - Scaffolding prompts that build project structures
- Performance: Fast (direct file system operations)
- Security: Moderate risk (can modify workspace files)

```yaml
tools: ['editor'] # Enable file operations
```

Best Practices:
- Always preview changes before applying
- Use with agent mode for multi-file operations
- Combine with `codebase` for context-aware edits

#### filesystem
- Purpose: Directory navigation and file system queries
- Capabilities:
  - List directory contents
  - Check file/directory existence
  - Get file metadata (size, modified date)
  - Traverse directory structures
  - Search for files by pattern
- Use Cases:
  - Project structure analysis
  - Finding configuration files
  - Discovering test files
  - Validating project setup
- Performance: Fast (file system queries)
- Security: Low risk (read-only operations)

```yaml
tools: ['filesystem'] # Enable directory operations
```

Example Operations:
- List all `.json` config files
- Find test files matching pattern `*.test.ts`
- Check if a `.env` file exists
- Get the workspace folder structure

#### fetch
- Purpose: Retrieve content from web URLs
- Capabilities:
  - Download web pages
  - Access REST APIs
  - Retrieve documentation from URLs
  - Fetch external resources
- Use Cases:
  - Documentation research prompts
  - API integration verification
  - External resource validation
  - Pulling in reference materials
- Performance: Variable (depends on network and remote server)
- Security: Moderate risk (external network access)

```yaml
tools: ['fetch'] # Enable web content retrieval
```

Supported Protocols:
- `https://` (recommended)
- `http://` (use with caution)

Limitations:
- May be rate-limited by remote servers
- Authentication not supported for private APIs
- Subject to CORS and other web restrictions

#### web_search
- Purpose: Search the internet for information
- Capabilities:
  - Find current information online
  - Locate documentation and tutorials
  - Research best practices
  - Discover recent developments
- Use Cases:
  - Research prompts needing the latest information
  - Finding solutions to errors or issues
  - Discovering new libraries or tools
  - Validating current best practices
- Performance: Variable (depends on search provider)
- Security: Moderate risk (external network access)

```yaml
tools: ['web_search'] # Enable internet search
```

Best Practices:
- Use for information not available in the workspace
- Verify results from authoritative sources
- Consider that results may change over time

### MCP (Model Context Protocol) Server Tools

MCP servers extend GitHub Copilot with additional capabilities through standardized protocols. Reference MCP tools using `@server-name` notation.
#### Built-in MCP Servers

**@github**
- Purpose: GitHub API integration
- Capabilities:
  - Access repository information
  - Read issues and pull requests
  - Get commit history
  - Query GitHub metadata
- Availability: VS Code with the GitHub Copilot extension

```yaml
tools: ['codebase', '@github']
```

**@azure**
- Purpose: Azure resource management and queries
- Capabilities:
  - List Azure resources
  - Query resource properties
  - Access Azure documentation
  - Azure service integration
- Availability: With the Azure GitHub Copilot extension

```yaml
tools: ['codebase', '@azure']
```

#### Custom MCP Servers

Organizations can create custom MCP servers for proprietary tools and services:
```yaml
tools: ['codebase', '@company-wiki', '@internal-api-docs']
```

Configuration: Custom MCP servers must be registered in VS Code settings or workspace configuration, as in the sketch below.
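As one possible workspace-level registration, here is a minimal sketch assuming the `.vscode/mcp.json` mechanism; the server name, package, and exact schema are assumptions, so check the VS Code MCP documentation for your version:

```json
{
  "servers": {
    "company-wiki": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@your-org/company-wiki-mcp-server"]
    }
  }
}
```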
### Tool Access Patterns

#### Minimal Access (Fast, Focused)

Best for simple, targeted operations with no external dependencies.
```yaml
tools: ['editor']
```
Use Cases:

- Simple file edits
- Code formatting
- Comment generation

#### Local Workspace Access (Code-Focused)

Enables comprehensive workspace analysis without external network access.
```yaml
tools: ['codebase', 'editor', 'filesystem']
```
Use Cases:

- Refactoring operations
- Project-wide code analysis
- Internal documentation generation
- Cross-file consistency checks

#### Research-Enabled (External Resources)

Allows access to external information while maintaining workspace capabilities.
```yaml
tools: ['codebase', 'editor', 'fetch', 'web_search']
```
Use Cases:

- Documentation research
- API integration
- Best practices lookup
- Technology evaluation

#### Full Access (All Capabilities)

Provides complete tool access for complex, autonomous operations.
```yaml
tools: [] # or omit the field entirely
# Enables all available tools by default
```

Use Cases:
- Complex agent workflows
- Multi-source research tasks
- Comprehensive code generation
- Advanced automation

### Tool Combinations by Use Case

**Code Review Prompt**

```yaml
tools: ['codebase', 'filesystem', 'fetch']
```

- Search code for patterns (`codebase`)
- Find related test files (`filesystem`)
- Check external coding standards (`fetch`)

**Documentation Generator**

```yaml
tools: ['codebase', 'editor', 'filesystem', 'web_search']
```

- Analyze code structure (`codebase`)
- Create documentation files (`editor`)
- Discover existing docs (`filesystem`)
- Research API documentation formats (`web_search`)

**Scaffolding Prompt**

```yaml
tools: ['editor', 'filesystem', 'fetch']
```

- Create project files (`editor`)
- Check for existing structure (`filesystem`)
- Download templates/boilerplates (`fetch`)

**Security Audit Prompt**

```yaml
tools: ['codebase', 'filesystem', 'web_search']
```

- Search for security patterns (`codebase`)
- Find configuration files (`filesystem`)
- Look up CVEs and vulnerabilities (`web_search`)

### Tool Restrictions and Security

**Why Restrict Tools?**

- Security: Limit external network access
- Performance: Reduce tool overhead for simple tasks
- Focus: Prevent the prompt from using irrelevant capabilities
- Cost: Some tools may have usage costs or limits

**Security Considerations by Tool**

| Tool | Risk Level | Considerations |
|---|---|---|
| codebase | Low | Read-only workspace access |
| editor | Moderate | Can modify files; review changes |
| filesystem | Low | Read-only; limited metadata access |
| fetch | Moderate | External network; validate URLs |
| web_search | Moderate | External network; results may vary |
| MCP Servers | Variable | Depends on server implementation |

**Recommended Restrictions**

Public/Shared Prompts:
```yaml
tools: ['codebase', 'editor', 'filesystem'] # No external access
```

Enterprise/Internal Prompts:
```yaml
tools: ['codebase', 'editor', 'filesystem', '@internal-docs']
```

Research-Heavy Prompts:
```yaml
tools: ['codebase', 'fetch', 'web_search'] # External research OK
```

### Tool Availability by Platform

| Tool | VS Code 1.106+ | Visual Studio 17.10+ | Notes |
|---|---|---|---|
| codebase | ✅ Full support | ✅ Full support | Core functionality |
| editor | ✅ Full support | ✅ Full support | Core functionality |
| filesystem | ✅ Full support | ✅ Full support | Core functionality |
| fetch | ✅ Full support | ⚠️ Limited | Check version docs |
| web_search | ✅ Full support | ⚠️ Limited | May require settings |
| @github | ✅ With extension | ❌ Not supported | VS Code only |
| @azure | ✅ With extension | ⚠️ Different impl. | Platform-specific |
| Custom MCP | ✅ Preview feature | ❌ Not supported | VS Code 1.106+ |

### Tool Usage Best Practices

**Start Minimal, Expand as Needed**

```yaml
# Start with minimal tools
tools: ['editor']

# Add tools only when required
tools: ['codebase', 'editor'] # Added codebase for context

# Enable research when needed
tools: ['codebase', 'editor', 'fetch'] # Added fetch for docs
```

**Explicit is Better Than Implicit**

```yaml
# ❌ Unclear intent
tools: [] # All tools enabled

# ✅ Clear intent
tools: ['codebase', 'editor', 'filesystem'] # Local operations only
```

**Document Tool Requirements**

```markdown
---
name: security-audit
tools: ['codebase', 'filesystem', 'web_search']
description: "Audit code for vulnerabilities (requires internet for CVE lookup)"
---

# Security Audit Prompt

**Required Tools:**
- `codebase`: Search code for security patterns
- `filesystem`: Find configuration files
- `web_search`: Look up CVEs and best practices
```

**Test with Restricted Tools**

Before deploying prompts, test with restricted tool access to ensure graceful handling of missing capabilities:
```yaml
# Development version
tools: ['codebase', 'editor', 'fetch', 'web_search']

# Production version (more restrictive)
tools: ['codebase', 'editor']
```

### Future Tool Capabilities (Proposed)

Based on community feedback and tool evolution:
```yaml
# Proposed future tools
tools: [
  'codebase',
  'editor',
  'terminal',        # Execute commands
  'debugger',        # Debugging integration
  'git',             # Version control operations
  'package-manager', # npm/pip/maven operations
  'database',        # Database queries
  'cloud-provider'   # Cloud resource management
]
```

These tools are not currently supported but represent potential future extensions to the GitHub Copilot tools ecosystem.