WTF is Prompt Engineering? And Should You Care?
On how Prompt Engineering remains a niche gig, plus some prompting tips!
It has been a quiet few months for my newsletters and updates. Top of mind have been way too many events in my personal life: House renovations and moving! Contractors! Keeping up with the kids! Our first dog!
Work has not been easy either (the hustle of seeking consulting clients). The combo has kept me away from properly journaling my work at the intersection of publishing and AI.
That work, however, continues apace, at least as much as the priorities above leave time for. As we are documenting in our jointly authored newsletter, Paolo & Praveen on Tech, it is also producing some initial prototypes following the Storya app experiment. More on that in upcoming updates.
I believe it is also important to acknowledge that the mental health and creative difficulties that followed the shutdown of Storya, which I have written about extensively, have only partially eased in recent months. Finding my creative and professional stride continues to be an exercise in patience and resilience.
Prompt Engineering (profession or fad?)
One way I have tried to leverage my learnings from the startup experience has been through the consulting work I mentioned. I have been working on and pitching prompt engineering projects for clients in the publishing space for the past half year or so. It has given me some interesting opportunities to test and refine my AI skills in the “real world”.
That being said, my overall feeling (at this admittedly early stage of prompt engineering as an actual profession) is that most companies approaching AI still struggle. There is still reluctance to dive deep into how generative AI applications can help businesses. A very understandable combination of legal and employee concerns about AI replacement and compliance is preventing a lot of experimentation from happening in any official capacity.
I say “official” because, as many surveys and studies are finding, employees and managers are already secretly using chatbots and other AI tools for all kinds of professional tasks, often without notifying their bosses, in light of the concerns above.
With that in mind, I thought I’d share some key prompt engineering lessons here, something I have touched on previously without going into much detail. So, what is all the fuss about prompting?
Two preliminary recommendations, first:
- Only use the best models: right now I would name Llama 3 among the open-source crowd and Opus from Anthropic among the proprietary models.
- Stick to the source models and their providers; don’t get scammed by the many “products” that are essentially prettier interfaces over less capable models at a higher price! Most of these so-called wrappers are just useless.
Now, when it comes to the actual prompting experience:
- ALWAYS break the task down by asking the model to PLAN its response by “THINKING STEP BY STEP”.
- Anytime it is possible, provide the model with (good) SAMPLES: whether it is your own work or work you admire, help the AI focus on the task at hand by giving it two or three examples of the kind of output you want to inspire its responses.
- Roleplay: whenever appropriate to the task, ask the AI to “Act as” a specific role. This is another proven and effective way to keep the AI on track, even better when combined with the samples just mentioned.
- Model-hop: this one is fresh out of the oven from the latest rounds of academic research. Get the AI to create the steps, then start executing. If the output of one of the steps is underwhelming, ask a different model to execute only that step, THEN plug the better outcome back into the original model and ask it to take it from there. Fancy, huh? (A rough sketch of how these tips combine follows right after this list.)
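To make the four tips above a bit more concrete, here is a minimal sketch in Python of how they might be combined. Everything in it is illustrative: `call_model` is a hypothetical stand-in for whichever chat interface you actually use, and the example task and sample texts are placeholders of my own, not a client template.

```python
# Illustrative sketch only: call_model() is a hypothetical stand-in for
# whichever chat interface you actually use (a Llama 3 deployment, Opus, etc.).
def call_model(model_name: str, prompt: str) -> str:
    # Replace this placeholder with a real API call to your provider of choice.
    return f"[{model_name} would respond to a {len(prompt)}-character prompt here]"


# 1. Roleplay: give the model a persona suited to the task.
persona = "Act as an experienced developmental editor for literary fiction."

# 2. Samples ("few-shot"): two or three examples of the kind of output you admire.
samples = (
    "Example blurb 1: <paste a back-cover blurb you admire>\n"
    "Example blurb 2: <paste another blurb in the tone you want>"
)

# 3. Break the task down ("chain-of-thought"): ask for a plan before the answer.
task = (
    "Write a back-cover blurb for the manuscript summary below.\n"
    "First, THINK STEP BY STEP and plan the hook, the stakes, and the closing line.\n"
    "Then write the blurb.\n\n"
    "Manuscript summary: <paste summary here>"
)

prompt = f"{persona}\n\n{samples}\n\n{task}"
draft = call_model("model-A", prompt)

# 4. Model-hop: if one step comes back weak, re-run only that step on a
#    different model, then hand the better outcome back to the original model.
closing_line = call_model("model-B", "Rewrite only the closing line of this blurb:\n" + draft)
final = call_model(
    "model-A",
    f"Here is an improved closing line:\n{closing_line}\n\nRevise the full blurb accordingly:\n{draft}",
)
```

The "model-A" and "model-B" labels are generic on purpose; the point is simply that nothing stops you from mixing models within the same workflow.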
These techniques are talked about with fancy names (“few-shot”, “chain-of-thought”, “personas”, etc.), but the summary above really gives enough context to start getting better results without having to go through dozens of research papers.
Beyond these simple frameworks, the magic of prompt engineering is really about iteration, which lets you build longer “master” prompts for specific use cases, especially when you come back to similar tasks daily.
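As a purely hypothetical illustration of what such a master prompt might look like, here is a sketch for a recurring publishing-flavoured task (cleaning up catalogue metadata); the wording and field names are mine, invented for the example.

```python
# Sketch of a reusable "master" prompt for a recurring task (illustrative only).
MASTER_PROMPT = """Act as a metadata specialist for a trade publisher.

You will receive one raw book record. THINK STEP BY STEP:
1. List the fields that are missing or malformed.
2. Propose corrected values, flagging anything you had to guess.
3. Return the cleaned record as a bulleted list.

Here is an example of a well-formed record:
{good_example}

Raw record to clean:
{raw_record}"""


def build_prompt(good_example: str, raw_record: str) -> str:
    # Each time the task comes around, the instructions can be tightened based
    # on where the previous outputs fell short; that iteration is the real work.
    return MASTER_PROMPT.format(good_example=good_example, raw_record=raw_record)
```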
If you are keen to see more detailed, real-world examples, let me know in the comments and we can cover those in future articles!
Looking ahead, I think that, while prompt engineering remains an interesting new opportunity for consulting work, it may struggle to take off under this exact guise. Perhaps new service providers will emerge, packaging the idea of prompt engineering as something different and more palatable to companies?
Time will tell.
Peace,
Paolo