How I Won Singapore's GPT-4 Prompt Engineering Competition – Towards Data Science
Sheila Teo
Towards Data Science
Last month, I had the incredible honor of winning Singapore’s first ever GPT-4 Prompt Engineering competition, organised by the Government Technology Agency of Singapore (GovTech), which brought together over 400 prompt-ly brilliant participants.
Prompt engineering is a discipline that blends both art and science — it requires as much technical understanding as it does creativity and strategic thinking. This is a compilation of the prompt engineering strategies I learned along the way that push any LLM to do exactly what you need and more!
Author’s Note:
In writing this, I sought to steer away from the traditional prompt engineering techniques that have already been extensively discussed and documented online. Instead, my aim is to bring fresh insights that I learned through experimentation, and a different, personal take in understanding and approaching certain techniques. I hope you’ll enjoy reading this piece!
This article covers the following, with 🔵 marking beginner-friendly prompting techniques and 🔴 marking advanced strategies:
1. [🔵] Structuring prompts using the CO-STAR framework
2. [🔵] Sectioning prompts using delimiters
3. [🔴] Creating system prompts with LLM guardrails
4. [🔴] Analyzing datasets using only LLMs, without plugins or code — with a hands-on example of analyzing a real-world Kaggle dataset using GPT-4
Effective prompt structuring is crucial for eliciting optimal responses from an LLM. The CO-STAR framework, a brainchild of GovTech Singapore’s Data Science & AI team, is a handy template for structuring prompts: it accounts for all the key aspects that influence the effectiveness and relevance of an LLM’s response.
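As a sketch, the six CO-STAR sections (Context, Objective, Style, Tone, Audience, Response) can be assembled into a single prompt string programmatically. The helper function name and the placeholder section contents below are hypothetical, not part of the framework itself:

```python
def build_costar_prompt(context, objective, style, tone, audience, response):
    """Assemble the six CO-STAR sections into one structured prompt string.

    Each section is given its own labelled header so the LLM can clearly
    distinguish the different aspects of the task.
    """
    return "\n\n".join([
        f"# CONTEXT #\n{context}",
        f"# OBJECTIVE #\n{objective}",
        f"# STYLE #\n{style}",
        f"# TONE #\n{tone}",
        f"# AUDIENCE #\n{audience}",
        f"# RESPONSE #\n{response}",
    ])

# Illustrative placeholder values for each section:
prompt = build_costar_prompt(
    context="I want to advertise my company's new product.",
    objective="Create a social media post that drives people to buy it.",
    style="Follow the writing style of successful ad copy.",
    tone="Persuasive.",
    audience="Budget-conscious shoppers.",
    response="The post text only, kept concise.",
)
print(prompt)
```

The labelled headers are one common way to keep each section visually distinct inside the prompt; the exact delimiter style is a free choice.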
Data Scientist, https://www.linkedin.com/in/sheila-teo/