Convert rich text into deterministic, token-compact prompts optimized for LLM usage.
PromptFold is a production-ready web application that transforms rich text input (with formatting, lists, paragraphs) into clean, token-efficient prompts suitable for Large Language Models. It strips visual styling while preserving document structure, ensuring consistent and optimized output.
Live Demo | Documentation
- Rich Text Input: Paste formatted text with paragraphs, numbered lists, bullet lists, and mixed formatting
- Deterministic Transformation: The same input always produces the same output
- Token Optimization: Removes redundant whitespace and normalizes formatting
- Accurate Token Counter: Real-time token counting using the GPT tokenizer
- One-Click Copy: Copy the optimized prompt to the clipboard instantly
- Responsive Design: Works seamlessly on desktop, tablet, and mobile devices
- No Backend Required: Runs entirely in the browser
- Lightweight: Minimal dependencies for fast performance
When working with LLMs, token efficiency matters. PromptFold helps you:
- Reduce costs by minimizing token usage
- Improve clarity by removing visual noise
- Ensure consistency with deterministic transformations
- Save time with instant copy-paste workflow
The application applies the following rules to optimize your text:
- Whitespace Normalization
  - Trim leading and trailing spaces per line
  - Collapse multiple spaces inside a line into one
  - Remove empty lines completely
- Structure Preservation
  - Each paragraph becomes a single line
  - Paragraphs are separated by exactly one `\n`
  - Meaningful line breaks are preserved using `\n`
- List Handling
  - Numbered lists remain numbered (`1.` `2.` `3.` format)
  - Bullet lists are converted to `- item` format
  - Nested lists with different types: the main item keeps its format, sub-items use a dash
  - Nested lists with the same type: converted to `main: sub1, sub2, sub3` format
- Visual Formatting Ignored
  - Colors, font sizes, bold, italics, and underline are removed
  - Only document structure matters
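These rules can be expressed as small pure functions. The snippet below is a hypothetical sketch for illustration only; the helper names `normalizeWhitespace` and `flattenSameTypeNested` are assumptions, not the actual `transformer.ts` API:

```typescript
// Hypothetical helpers illustrating the rules above; names and details
// are assumptions, not the real transformer.ts implementation.

// Whitespace normalization: trim each line, collapse runs of spaces,
// and drop empty lines entirely.
function normalizeWhitespace(text: string): string {
  return text
    .split('\n')
    .map((line) => line.trim().replace(/ {2,}/g, ' '))
    .filter((line) => line.length > 0)
    .join('\n');
}

// Same-type nested lists collapse to "main: sub1, sub2, sub3".
function flattenSameTypeNested(main: string, subs: string[]): string {
  return `${main}: ${subs.join(', ')}`;
}
```

Because both functions are pure, the same input string always maps to the same output, which is what keeps the transformation deterministic.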
- Framework: Next.js 15.1 (App Router)
- Language: TypeScript 5.7
- Styling: Tailwind CSS 3.4
- Runtime: React 19
- Token Counting: gpt-tokenizer for accurate GPT-3.5/GPT-4 token estimation
- Node.js 18.x or higher
- npm, yarn, or pnpm
- Clone the repository:

```bash
git clone https://github.com/Pakeetharan/PromptFold.git
cd PromptFold
```

- Install dependencies:

```bash
npm install
```

- Run the development server:

```bash
npm run dev
```

- Open your browser and navigate to http://localhost:3000

To create a production build:

```bash
npm run build
npm start
```

Input (Rich Text):
Welcome to PromptFold
Here are the features:
1. Token optimization
   • Fast processing
   • Low cost
2. Deterministic output
3. Real-time processing
Key benefits:
• Faster processing
• Lower costs
• Better results
Output (Optimized):
Welcome to PromptFold
Here are the features:
1. Token optimization: Fast processing, Low cost
2. Deterministic output
3. Real-time processing
Key benefits:
- Faster processing
- Lower costs
- Better results
```
PromptFold/
├── src/
│   ├── app/
│   │   ├── layout.tsx         # Root layout with metadata
│   │   ├── page.tsx           # Main application page
│   │   └── globals.css        # Global styles and Tailwind
│   ├── components/
│   │   ├── RichTextInput.tsx  # Editable rich text input
│   │   └── PromptOutput.tsx   # Read-only output with copy
│   └── utils/
│       ├── transformer.ts     # Core transformation logic
│       └── tokenCounter.ts    # Token estimation algorithm
├── package.json
├── tsconfig.json
├── tailwind.config.ts
└── next.config.js
```
- transformer.ts: Pure utility functions for text transformation
- tokenCounter.ts: Accurate GPT token counting using the gpt-tokenizer library
- RichTextInput: ContentEditable div with paste handling
- PromptOutput: Read-only textarea with copy button and live stats
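To illustrate the pure-function design, here is a heavily simplified, hypothetical stand-in for the transformation pipeline (the real `transformer.ts` walks the pasted DOM structure and preserves lists, which this regex sketch does not):

```typescript
// Hypothetical sketch only: strip tags, then apply the whitespace rules.
// Not the actual transformer.ts logic.
function transformSketch(html: string): string {
  const text = html
    .replace(/<\/(p|div|li|h[1-6])>/gi, '\n') // end of a block element -> line break
    .replace(/<[^>]+>/g, '');                 // drop all remaining tags
  return text
    .split('\n')
    .map((line) => line.trim().replace(/ {2,}/g, ' '))
    .filter((line) => line.length > 0)
    .join('\n');
}
```

Since the function has no state or I/O, the same HTML string always yields the same prompt, which is what makes snapshot-style testing straightforward.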
- Push to GitHub (see instructions below)
- Connect to Netlify:
- Visit netlify.com
- Click "Add new site" → "Import an existing project"
- Select your GitHub repository
- Build settings are auto-detected
- Click "Deploy"
```bash
npm install -g vercel
vercel
```

- AWS Amplify: Connect your GitHub repository
- Docker: Use the included configuration
- Static hosting: Run `npm run build` and deploy the `out` folder
We welcome contributions! Here's how you can help:
- Report bugs: Open an issue describing the problem
- Suggest features: Share your ideas in the discussions
- Improve documentation: Fix typos or add examples
- Submit PRs: Fix bugs or add new features
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes
- Test thoroughly: ensure all existing functionality still works
- Commit with clear messages: `git commit -m 'Add amazing feature'`
- Push to your fork: `git push origin feature/amazing-feature`
- Open a Pull Request
- Follow TypeScript strict mode
- Use Prettier for formatting
- Write clear, descriptive comments
- Keep transformation logic deterministic
- Avoid unnecessary dependencies
The transformation logic is implemented as pure functions, making testing straightforward:
```typescript
import { transformToPrompt } from './utils/transformer';

// Test deterministic output
const input = '<p>Hello World</p>';
const output = transformToPrompt(input);
expect(output).toBe('Hello World');
```

- Very deeply nested lists may not format perfectly
- Browser compatibility requires modern JavaScript features
- Exact token counting using gpt-tokenizer ✅
- Dark mode support
- Export to file functionality
- Undo/redo functionality
- Custom transformation rules
- Batch processing
- API endpoint option
This project is licensed under the MIT License - see the LICENSE file for details.
Your Name
- GitHub: @Pakeetharan
- Twitter: @PakeetharanB
- Built with Next.js
- Styled with Tailwind CSS
- Inspired by the need for efficient LLM prompts
If this project helps you, please give it a ⭐ on GitHub!
Made with ❤️ for the AI community