Almost every discussion about using LLMs seems to eventually lead to people mentioning the ways they've adjusted their writing style, prompting style, and expectations to get the most out of their LLM.
So it makes me wonder if the LLMs are training us as much as we are training them; there is plenty of precedent for new communications media creating new rules for communication.
From smileys to emoji
Then emoji came along. At first, people used them exactly how smileys were used: as punctuation. But then they started to be used as adjectives and adverbs. Sending someone a chat message of "Let's meet up" could be followed up in a few different ways:
- Let's meet up 🇺🇸
- Let's meet up ☕️ 🥯
- Let's meet up 🍿🎥
In each case, the emoji are used as part of the message to express, in a collection of images, what might have taken many more words. And it isn't hard to see those eventually evolving into just sending the emoji, when you have a relationship where a lot of the context can be gathered from the last exchange you had.
- 🇺🇸
- ☕️ 🥯
- 🍿🎥
So how are people communicating with LLMs?
LLM power users all have their own ways of building prompts, but they seem to converge on a basic format:
- An "Act as a..." or "You are a..." opener to set how the LLM should respond
- A Build/Construct/Create/Reply statement that describes the basic expectation
- Refinements as positive or negative restrictions
- Details and edge cases
- Rules if this is to be a draft or template for manual refinement
- Acceptance criteria ("must mention energy use", for example)
- Output expectations
- Size and scope
- Tone and voice, whether an argument for a position or a balanced discussion
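Put together, a prompt following that format might look something like this. This is a made-up illustration; the role, topic, and criteria are all hypothetical, not taken from any particular guide:

```
You are a technical writer for a developer blog.
Write a short article comparing on-device and cloud LLM inference.
Do not recommend specific vendors; avoid marketing language.
Cover latency, privacy, and cost trade-offs, including the fully-offline edge case.
Treat this as a draft for manual refinement.
Acceptance criteria: the article must mention energy use.
Output: Markdown, roughly 600 words, written as a balanced discussion.
```

Each line maps to one of the parts above: role, request, restrictions, details and edge cases, draft rules, acceptance criteria, and output expectations.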
Common variations seem to involve breaking the request into a chain of related refinements, which I describe as improv coding, as they coincidentally follow the basic rules of improv: embracing "Yes, and" and making statements. All of these show up in more modern guides to writing better prompts.
This looks to me like a natural language variation of a 4GL, and it seems like a natural refinement of the first guide on prompting I read a few years back. It's largely an organic response to what works when using LLMs. So, did the LLMs train us to be better prompters?