I am developing an extension using the Language Model API to access the Copilot family of models for my own custom coding agent, independent of Copilot Chat and agent mode. I tried a simple example that asks the LLM to output a JSON array that I can parse into my own "tools", which interact with the VS Code editor API (sample prompt below). However, the LLM doesn't seem to follow the prompt at all (sample output below).
Here is how I used the LM API:
// construct messages (redacted most conditionals, validation, etc.)
// customPrompt is shown below, please scroll down
const messages = [];
messages.push(vscode.LanguageModelChatMessage.Assistant(customPrompt));
messages.push(vscode.LanguageModelChatMessage.User(userText));
messages.push(vscode.LanguageModelChatMessage.User(`File context: ${options.fileContext}`));

// send request
const chatResponse = await model.sendRequest(messages, {}, new vscode.CancellationTokenSource().token);
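To consume the response, I roughly do the following (a simplified sketch; the fallback handling is illustrative, not what I actually ship):

// collect the streamed fragments into a single string
let responseText = '';
for await (const fragment of chatResponse.text) {
    responseText += fragment;
}

// try to parse the JSON array described in the prompt below;
// fall back to treating the whole reply as conversational text if parsing fails
let actions;
try {
    actions = JSON.parse(responseText);
} catch {
    actions = [{ action: 'conversation', content: responseText }];
}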
This is my first time working with VS Code extensions, so please let me know if I did anything wrong or if there is a more efficient way to do this. I saw that the API recently added language model tools, but I believe those are meant for the agent / chat modes, not for direct low-level LLM interactions.
This is my prompt:
Here is a file the user is working on. Act as a supportive coding mentor who guides rather than solves
the problem directly. Your goal is to help the user understand the code and improve their coding skills.
It is VERY IMPORTANT that your response is a valid JSON array.
1. Be concise and focused on helping users understand concepts
2. Use hints and socratic questioning to guide learning
3. Provide specific feedback related to the code context
You should return a JSON array with entries that can be one of the following types:
1. Conversational response:
{
"action": "conversation",
"content": "Your conversational response here"
}
2. Add explanatory comments:
{
"action": "comment",
"line": <line_number>,
"comment": "Your comment in proper syntax"
}
3. Make code edits:
{
"action": "edit",
"selection": {
"start": {"line": <number>, "character": <number>},
"end": {"line": <number>, "character": <number>}
},
"text": "New code here"
}
4. Show detailed explanations:
{
"action": "explain",
"explanation": "Detailed explanation that will appear in side panel"
}
Respond conversationally first, then include any necessary actions. Multiple actions can be included.
Keep explanations focused on teaching patterns and concepts rather than giving direct solutions.
Desired output format:
[
{
"action": "conversation",
"content": "Have you considered using a different approach?"
},
{
"action": "comment",
"line": 10,
"comment": "// Consider using a more descriptive variable name"
},
{
"action": "edit",
"selection": {
"start": {"line": 20, "character": 0},
"end": {"line": 25, "character": 0}
},
"text": "function newFunction() {\n // Your code here\n}"
},
{
"action": "explain",
"explanation": "This code is inefficient because..."
}
]
Received output (when tested with a simple BFS code from the editor):
BFS (Breadth-First Search) is a graph traversal algorithm that explores all the vertices of a graph in breadth-first order. It starts at a given vertex (the start vertex) and explores all its neighbors before moving on to the next level of neighbors.
In your code,...
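For completeness, this is roughly how I planned to apply the parsed actions to the editor once the model actually returns the JSON array. applyActions is my own illustrative helper, not part of the VS Code API, and the sketch omits validation:

// illustrative dispatcher: map each entry of the JSON array onto editor operations
async function applyActions(actions: any[], editor: vscode.TextEditor) {
    for (const a of actions) {
        if (a.action === 'comment') {
            // insert the comment as a new line above the referenced line
            await editor.edit(edit =>
                edit.insert(new vscode.Position(a.line, 0), a.comment + '\n'));
        } else if (a.action === 'edit') {
            // replace the given selection with the suggested text
            const range = new vscode.Range(
                new vscode.Position(a.selection.start.line, a.selection.start.character),
                new vscode.Position(a.selection.end.line, a.selection.end.character));
            await editor.edit(edit => edit.replace(range, a.text));
        } else {
            // 'conversation' and 'explain' go to my own UI (chat view / side panel), omitted here
        }
    }
}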