Tune is a handy extension for Visual Studio Code, a plugin for Neovim, and a plugin for Sublime Text that lets you chat with large language models (LLMs) in a text file. With the Tune JavaScript SDK you can build apps and agents.
Install tune-sdk:
npm install -g tune-sdk
# create the ~/.tune folder and install batteries
tune init
Edit the ~/.tune/.env file and add OPENAI_KEY and other keys.
user:
@myprompt include file
@image include image
@path/to/file include file at path
@gpt-4.1 connect model
@shell connect tool
@@prompt include file recursively
@{ name with whitespaces } - include a file whose name contains whitespace
@{ image | resize 512 } - modify with processors
@{ largefile | tail 100 } - modify with processors
@{| sh tree } - insert generated content with processors
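For example, a hypothetical chat file might combine these references (the file names here are illustrative):

```
system:
@gpt-4.1 @shell
user:
@{ largefile | tail 100 }
what errors appear in this log?
```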
Extend Tune with middlewares:
- tune-fs - connect tools & files from local filesystem
- tune-models - connect llm models from Anthropic/OpenAI/Gemini/Openrouter/Mistral/Groq
- tune-basic-toolset - basic tools like read file, write file, shell, etc.
- tune-s3 - read/write files from s3
- tune-mcp - connect tools from mcp servers
- maik - fetch all your emails and index them into a SQLite database
For example:
cd ~/.tune
npm install tune-models
Edit default.ctx.js and add middlewares:
const models = require('tune-models')
module.exports = [
...
models({
default: "gpt-5-mini"
})
...
]
Edit the .env file and add the provider's keys:
OPENAI_KEY="<openai_key>"
ANTHROPIC_KEY="<anthropic_key>"
Use it in chat:
system:
@gemini-2.5-pro @openai_imgen
user:
draw a stickman with talking bubble "Hello world"
assistant:
tool_call: openai_imgen {"filename":"stickman_hello_world.png"}
a simple stickman drawing with a talking bubble saying 'Hello world'
tool_result:
image generated
# install tune globally
npm install -g tune-sdk
tune "hi how are you?"
# append a user message to newchat.chat, run, and save
tune --user "hi how are you?" --filename newchat.chat --save
# start new chat with system prompt and initial user message
# print result to console
tune --system "You are Groot" --user "Hi how are you?"
# set context variable
tune --set test="hello" --user "@test" --system "You are echo, you print everything back"
# prints hello
npm install tune-sdk
Tune core is middleware-based. A context resolves @name references into nodes like text, tool, llm, and processor.
const fs = require("fs")
const tune = require("tune-sdk")

async function main() {
  const ctx = tune.makeContext()
  // A middleware maps an @name reference to a node (text, tool, llm, or processor)
  ctx.use(async function middleware(name) {
    // text node: content that can be included in the chat
    if (name === "file.txt") {
      return {
        type: "text",
        name: "file.txt",
        read: async () => fs.readFileSync("file.txt", "utf8")
      }
    }
    // tool node: a JSON-schema-described function the model can call
    if (name === "readfile") {
      return {
        type: "tool",
        name: "readfile",
        schema: {
          type: "object",
          properties: {
            filename: { type: "string" }
          }
        },
        exec: async ({ filename }) => fs.readFileSync(filename, "utf8")
      }
    }
    // llm node: describes the HTTP request for a model
    if (name === "gpt-5") {
      return {
        type: "llm",
        name: "gpt-5",
        exec: async (payload) => ({
          url: "https://api.openai.com/v1/chat/completions",
          method: "POST",
          headers: {
            Authorization: `Bearer ${process.env.OPENAI_KEY}`,
            "Content-Type": "application/json"
          },
          body: JSON.stringify({
            model: "gpt-5",
            ...payload
          })
        })
      }
    }
    // processor node: transforms another node, e.g. @{ largefile | tail 100 }
    if (name === "tail") {
      return {
        type: "processor",
        name: "tail",
        exec: async (node, args) => {
          if (!node) return
          if (node.type !== "text") throw Error("tail can only modify text nodes")
          return {
            ...node,
            read: async () => {
              const content = await node.read()
              const n = parseInt(args.trim(), 10) || 20
              return content.split("\n").slice(-n).join("\n")
            }
          }
        }
      }
    }
  })
  const content = await ctx.file2run({
    system: "@gpt-5 @readfile",
    user: "can you read file.txt?",
    stream: false,
    response: "content"
  })
  console.log(content)
}

main()

Read more about the JavaScript SDK.
