From bc027a1dfe9c248796a6b9c826d930b3dca91255 Mon Sep 17 00:00:00 2001
From: Ceither
Date: Thu, 5 Mar 2026 09:19:27 +0800
Subject: [PATCH] Create prompt-repetition-improves-non-reasoning-llms.md

---
 ...mpt-repetition-improves-non-reasoning-llms.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 app/docs/CommunityShare/Amazing-AI-Tools/prompt-repetition-improves-non-reasoning-llms.md

diff --git a/app/docs/CommunityShare/Amazing-AI-Tools/prompt-repetition-improves-non-reasoning-llms.md b/app/docs/CommunityShare/Amazing-AI-Tools/prompt-repetition-improves-non-reasoning-llms.md
new file mode 100644
index 00000000..fa936a94
--- /dev/null
+++ b/app/docs/CommunityShare/Amazing-AI-Tools/prompt-repetition-improves-non-reasoning-llms.md
@@ -0,0 +1,17 @@
+---
+title: "Prompt Repetition Improves Non-Reasoning LLMs"
+description: "Repeating the prompt may improve large language model performance"
+date: "2026-03-05"
+tags:
+  - "AI"
+  - "LLMs"
+  - "arXiv"
+---
+
+# Prompt Repetition Improves Non-Reasoning LLMs
+
+When not using reasoning, repeating the input prompt improves performance for popular models (Gemini, GPT, Claude, and Deepseek) without increasing the number of generated tokens or latency.
+
+## 1. Prompt Repetition
+
+LLMs are often trained as causal language models, i.e. past tokens cannot attend to future tokens. Therefore, the order of the tokens in a user’s query can affect prediction performance. For example, a query of the form “<options> <question>” often performs differently from a query of the form “<question> <options>” (see options-first vs. question-first in Figure 1). We propose to repeat the prompt, i.e. transform the input from “<prompt>” to “<prompt> <prompt>”. This enables each prompt token to attend to every other prompt token, addressing the above. When not using reasoning, prompt repetition improves the performance of LLMs (Figure 1) without increasing the lengths of the generated outputs or latency.
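The prompt transformation described in the patch is a one-line string operation before the model call. A minimal sketch (the helper name `repeat_prompt` and the separator choice are ours, not from the source):

```python
def repeat_prompt(prompt: str, times: int = 2, sep: str = "\n\n") -> str:
    """Return `prompt` repeated `times` times, separated by `sep`.

    Under causal attention each token in the second copy can attend to
    the full first copy, which is the effect the technique relies on.
    """
    return sep.join([prompt] * times)


# Example: duplicate an options-first multiple-choice query before
# sending it to the model (the query text here is illustrative).
query = "(A) Paris (B) London (C) Rome\nWhich is the capital of France?"
doubled = repeat_prompt(query)
assert doubled == query + "\n\n" + query
```

The doubled string is then passed to the model API in place of the original prompt; because only the input grows, the number of generated tokens is unchanged.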