In this lesson, we are going to explore the importance of consistent formatting and organization when crafting prompts for Large Language Models (LLMs). You might wonder how something as seemingly simple as prompt formatting can significantly impact the responses you receive from an AI. Just as in human communication, clarity and structure play crucial roles in ensuring that your requests are understood and accurately fulfilled. Let's dive into how you can apply these principles to make your interactions with LLMs more effective and predictable.
Formatting your prompts consistently is not just about making them look neat; it's about making your intentions clear to the AI. Imagine you are giving someone instructions for baking a cake, but instead of listing the steps in order, you jumble them all up. The result? Confusion and, most likely, a not-very-tasty cake. The same principle applies to LLMs: by presenting your prompts in a clear, structured manner, you greatly increase the chances of receiving the desired output.
While there are many approaches to structuring your prompts, in this course we'll teach you the Markdown Prompts Framework (MPF), developed by prompt engineers and AI experts at CodeSignal.
MPF is an approach to creating highly readable, maintainable, and effective prompts, and it is at the core of many aspects of Cosmo.
Throughout this course, we'll see many examples of MPF in action, but for now, here is a high-level summary:
- Split your prompts into Markdown sections, each introduced by a marker like `__SECTION__` (a short example follows this list).
- This not only helps LLMs better understand your prompts, but also makes them easy to skim (especially when rendered in Markdown, since these markers show up in bold), allowing your fellow AI engineers to quickly find and read the relevant sections when your prompts get large.
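
To make this concrete, here is a minimal sketch of what an MPF-style prompt might look like. The section names below (`__CONTEXT__`, `__TASK__`, `__CONSTRAINTS__`) are illustrative placeholders chosen for this example; the lessons that follow will show many MPF prompts in full.

```markdown
<!-- Illustrative sketch only: these section names are placeholders, not a prescribed MPF list. -->
__CONTEXT__
You are helping a support team triage incoming customer tickets.

__TASK__
Summarize the ticket below in two sentences, then suggest one next step.

__CONSTRAINTS__
- Keep the summary under 50 words.
- Do not include any personal data in your answer.

__TICKET__
{ticket_text}
```

Notice how each `__SECTION__`-style marker renders in bold, so even a long prompt can be skimmed section by section.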
