Prompting-guidelines

From CLAIF Wiki
Revision as of 23:08, 22 January 2026 by ArthurWolf (talk | contribs)

Librecode Prompting Guidelines for Students

Prompting guidelines given to students as they start working on the Librecode project, or whenever they need to write prompts for any part of it.

Prompt Engineering Guides / Documentation

* https://platform.openai.com/docs/guides/prompt-engineering
* https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api
* https://platform.claude.com/docs/en/build-with-claude/p...

If you have to read just one, read the Anthropic one.

The Big Important Pieces

  1. Use XML-like tags to structure the document, including nested tags on multiple levels. You can use Markdown inside a tag's contents, but avoid structuring the document itself with Markdown headers; models tend to understand structure better when it is expressed with XML-like tags.
  2. Provide examples of both good output and bad output, clearly delimited by XML-like tags, and give several of each. Providing multiple examples is called the "few-shot" or "multi-shot" technique, and it can sometimes completely replace fine-tuning.
  3. Research what format/style was used to train the model you are using or fine-tuning; it often gives major insights and can solve otherwise puzzling issues.
  4. Unless you have a good reason not to, it's generally a good idea to use a temperature of 0 (or equivalently restrictive top-k / top-p settings), i.e. greedy decoding.
  5. Keep your system prompt simple and short. The place to put instructions is the prompt itself. Putting too much in the system prompt is a common beginner mistake.
  6. Templating formats like Handlebars or Jinja make for nicer, readable prompt templates/files.
  7. Provide context: explain in the prompt what the prompt is "for", what the project the prompt is being used for is all about, and any other useful context you can think of.
  8. Use an LLM to rewrite your prompts. In particular, give it these links, these rules, and any other rules you can think of, with instructions to rewrite the prompt accordingly, and make it clear that it is writing text that will be read by a machine, not by a human, so it can write compact text without any pleasantries. This generally results in much better prompts.
  9. Use coding agents to work on your prompt templates. Gemini is free.
  10. Beyond examples, describe the output: its length, format, style, etc.
  11. If using a thinking model, actually instruct it to think, and even give it examples of how to think; examples of what a useful chain-of-thought looks like for a specific input.
  12. It's stupid, but giving the model a "persona", telling it it's a "world expert" at whatever you're asking it to do, still reliably increases performance by a noticeable amount (there are studies on this). Threatening the model also increases performance, but I personally can never get around to doing it...
  13. Prefill answers: after the end of your prompt, write the beginning of the answer you want the model to give, and let it continue from there. This can help prevent some formatting issues, but note it won't work for thinking models.
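Point 1 above can be sketched in Python: a prompt assembled with XML-like tags for structure, with Markdown allowed inside tag contents. The tag names here are illustrative conventions, not anything a model requires.

```python
# Assemble a prompt structured with XML-like tags (tag names are arbitrary).
# Markdown is fine *inside* the tags; the top-level structure stays XML-like.
def build_prompt(task: str, context: str, rules: list[str]) -> str:
    rules_md = "\n".join(f"- {r}" for r in rules)  # Markdown list inside a tag
    return (
        f"<context>\n{context}\n</context>\n\n"
        f"<rules>\n{rules_md}\n</rules>\n\n"
        f"<task>\n{task}\n</task>"
    )

prompt = build_prompt(
    task="Summarize the report in three bullet points.",
    context="The report covers Q3 sales figures for the EU region.",
    rules=["Keep each bullet under 20 words.", "Do not invent numbers."],
)
print(prompt)
```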
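The few-shot technique from point 2, sketched as plain string assembly: several worked examples, good and bad, each in its own delimiting tags (again, the tag names are just a convention), followed by the real input.

```python
# Few-shot prompt: multiple examples, good and bad, in delimiting tags.
GOOD = [
    ("2026-01-22", "22 January 2026"),
    ("2025-12-01", "1 December 2025"),
]
BAD = [
    ("2026-01-22", "01/22/26"),  # wrong target format
]

parts = ["<task>\nRewrite each date as 'D Month YYYY'.\n</task>"]
for inp, out in GOOD:
    parts.append(
        f"<good_example>\n<input>{inp}</input>\n<output>{out}</output>\n</good_example>"
    )
for inp, out in BAD:
    parts.append(
        f"<bad_example>\n<input>{inp}</input>\n<output>{out}</output>\n</bad_example>"
    )
parts.append("<input>2026-03-05</input>")
prompt = "\n\n".join(parts)
print(prompt)
```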
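For point 4, temperature is just a sampling parameter on the request. A hedged sketch of a request body for an OpenAI-style chat completions endpoint (field names follow that API; the model name is illustrative, so check your provider's docs):

```python
import json

# temperature 0 makes the model pick the highest-probability token at each
# step (greedy decoding), which makes outputs as repeatable as the API allows.
payload = {
    "model": "gpt-4o-mini",  # illustrative model name
    "temperature": 0,        # greedy decoding; the default is usually 1
    "messages": [
        {"role": "system", "content": "You are a date-format converter."},
        {"role": "user", "content": "Rewrite 2026-01-22 as 'D Month YYYY'."},
    ],
}
print(json.dumps(payload, indent=2))
```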
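Point 6, sketched with Jinja2 (this assumes the `jinja2` package is installed; a Handlebars template would look similar). The template keeps the prompt readable and makes the variables obvious:

```python
from jinja2 import Template  # pip install jinja2

# A prompt template kept as readable text; variables are filled at call time.
PROMPT_TEMPLATE = Template(
    "<context>\n{{ project_description }}\n</context>\n\n"
    "<task>\n{{ task }}\n</task>\n\n"
    "<examples>\n"
    "{% for ex in examples %}<example>{{ ex }}</example>\n{% endfor %}"
    "</examples>"
)

prompt = PROMPT_TEMPLATE.render(
    project_description="Librecode: an open-source coding-agent project.",
    task="Write a commit message for the attached diff.",
    examples=["fix: handle empty diff", "feat: add --dry-run flag"],
)
print(prompt)
```

In a real project the template would usually live in its own file and be loaded with a Jinja environment, which is what makes prompts easy to review and diff.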
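Point 11, instructing a thinking model how to think, can be as simple as embedding a sample chain of thought for one input. The tag names and the reasoning steps below are illustrative, not a required format:

```python
# Show the model what a useful chain of thought looks like for this task.
THINKING_EXAMPLE = """\
<example_thinking>
Input: "refund order #1293, item arrived broken"
1. Classify intent: refund request.
2. Check required fields: order id present, reason present.
3. Decide route: refunds queue, normal priority.
</example_thinking>"""

prompt = (
    "<task>\nRoute each support ticket. Think step by step before answering.\n</task>\n\n"
    + THINKING_EXAMPLE
    + "\n\n<ticket>cancel my subscription, too expensive</ticket>"
)
print(prompt)
```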
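Point 13's prefill, sketched as the message list for an Anthropic-style messages API: the conversation ends with a partial assistant turn, and the model continues from that text (and, as noted, this won't work when extended thinking is enabled):

```python
# Prefill: end the conversation with the start of the answer you want;
# the model's completion continues from that partial assistant message.
messages = [
    {"role": "user", "content": "List three risks of the plan as JSON."},
    {"role": "assistant", "content": '{"risks": ['},  # prefilled answer start
]
print(messages[-1]["content"])
```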