"Trust me, I'm a Prompt Engineer" - one of our most popular designs
Prompt engineering is a very new career (or potential career) that has been gaining popularity and recognition over the past year or so. As technology advances and human-computer interaction becomes increasingly important through AI tools, the role of the prompt engineer is becoming quite relevant in some fields - and yes, the term "engineering" is used a bit loosely; read on for more details. In this blog post, I will discuss what a prompt engineer is, the novelty of the career, and why no one could have foreseen it 5 years ago.
A prompt engineer is a professional who specializes in designing and crafting prompts - the instructions given to AI tools such as ChatGPT, Midjourney, and others - to get the most accurate results from the tool for the intended purpose.
If that makes no sense to you, I encourage you to check out our previous post, or run a few searches on how new AI tools are changing many aspects of well-known professions. Do it all in a separate window, of course, then come back. Our SEO manager (aka me) will thank you.
Long story short: many of these AI tools use "generative AI", which, at a high level, uses tons of training material (images, text, video, and a long list of other formats) to mash something up based on that material. That something is the output, and to interact with the tool and give it instructions in natural language, you use a prompt. Something like: "write a 1,500-word blog post about what a prompt engineer is and why no one could have seen this coming 5 years ago, use snarky, sarcastic, ironic language" - that didn't work with ChatGPT, btw. This is all manual and out of my troubled mind, baby!
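If you'd rather see that interaction as code, here's a minimal sketch of sending a natural-language prompt to a generative model. It assumes the pre-1.0 `openai` Python SDK and an API key; the model name and prompt text are just examples, not what actually powers this blog.

```python
# A minimal sketch of sending a natural-language prompt to a generative
# model, using the pre-1.0 `openai` Python SDK. Model name and prompt are
# illustrative; swap in whatever you actually use.
import openai

openai.api_key = "sk-..."  # your API key here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Write a 1,500-word blog post about what a prompt engineer is "
            "and why no one could have seen this coming 5 years ago. "
            "Use snarky, sarcastic, ironic language."
        ),
    }],
)

# The "mash-up" of training material comes back as plain text.
print(response.choices[0].message.content)
```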
Some call prompt engineering an art. Whether art or science, at the end of the day, it is becoming more and more relevant as AI tools continue to penetrate more industries. As an engineering school drop-out who's built a 15-year career in Analytics and Data Science by learning how to learn, I prefer the original meaning of the word engineering: to use ingenuity to contrive, devise.
Before 2017, large computer models, such as the ones behind modern AI tools, were trained on relatively small sets of high-quality tagged data through supervised or semi-supervised methods - both rely on high-quality labeled data or some level of human interaction, hence the limitation.
Then in June 2017 (yes, we're using "5 years ago" loosely; "5 years, 7 months ago" doesn't sound as compelling), computer scientists at Google introduced the transformer architecture, which revolutionized the way software can interpret natural language, getting closer to how humans do.
Since its publication, the now-famous "Attention Is All You Need" paper has influenced numerous projects, many of which became open source, leading some brilliant minds to capitalize on these new ideas and develop some of the AI tools we now hear about so often - most notably, ChatGPT.
The new transformer architecture could be used for unsupervised training, which means the human interaction needed for tagging data is no longer required, so these models could be trained with as much data as you could feed them - provided you can afford the computing power: these things won't run very well on your laptop, no matter how beefed up it is. They need the large-scale computing power only cloud computing can provide.
Bottom line: before the introduction of the transformer architecture, scaling AI models required a roughly proportional increase in human intervention. The transformer architecture allowed us to scale these models way, way faster.
Time to put on your tinfoil hat - if you're into that, no judging, but please do share a pic on our Instagram :).
Remember what the prompt is: the natural language text with which you give instructions to these models.
Here comes the freaky part: no one designed these AI tools to interact with humans through natural language prompts.
Ok, if you've shared your pic with us on Instagram, you can pull that tinfoil hat off; the truth is quite mundane, though technical. I'll simplify.
Since transformers train themselves on raw, unlabeled data, they develop their own way of interpreting human language. So once you have a model large enough, it will understand an instruction given in natural language: the prompt is born.
Although an unintended feature, interacting with a computer model using natural language has become arguably the most relevant feature of these new tools, since now anyone can give them instructions without any training in computer science.
But if it's so natural, then why do you need to engineer these prompts?
In a nutshell, and very well worth remembering: AI is NOT, I repeat, NOT intelligent. At least not in the way you would define the intelligence of a human being - rather long topic, won't go any further here. As I mentioned in my previous post, these generative AI models are nothing more than a glorified paraphrasing machine on steroids. Every time you ask something of one, it reaches into its massive training data and puts together a mash-up of its contents in the way it deems most relevant. That relevance is directly defined by your prompt.
So, if you ask one of these tools to "tell me a brief history of computer science", it'll likely come back with a pretty generic, Wikipedia-like text. Not necessarily bad, but you can do better - or just go directly to Wikipedia. What if you ask instead: "write a 3,000-word SEO-optimized article for an online science magazine about the history of computer science. Focus on the milestones that have directly influenced modern AI and expand on popular theories for future developments. Use journalistic language in the style of National Geographic magazine. Include the following keywords:..."
To the point: the more specific you are, the more you narrow down the body of knowledge at the tool's disposal, so you get results closer to your original intention. And this is the engineering part: if you get too specific, the body of knowledge becomes too narrow and can't provide enough depth to the output; if you stay too broad, you'll get generic outputs.
Prompt engineering is all about optimizing prompt instructions around the limitations of the AI tool you are using and the specificity of your needs.
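To make that trade-off concrete, here's a sketch of the same question asked two ways, through a small hypothetical `ask` helper built on the pre-1.0 `openai` SDK. Only the wording changes between the two calls - and with it, the slice of training material the model mashes up.

```python
# A sketch of the specificity trade-off: the same question, asked two ways.
# The `ask` helper is a hypothetical convenience wrapper, not a real API.
import openai

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Too broad: expect a generic, Wikipedia-like summary.
generic = ask("Tell me a brief history of computer science.")

# Narrower: length, audience, focus, and style all constrain the output.
specific = ask(
    "Write a 3,000-word SEO-optimized article for an online science "
    "magazine about the history of computer science. Focus on milestones "
    "that directly influenced modern AI. Use journalistic language in the "
    "style of National Geographic magazine."
)
```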
It's hard to say where this profession will go next. In some cases, you can even have one AI tool write prompts for another - a typical example is asking ChatGPT to generate prompt recommendations for Midjourney. So there are likely still many evolutionary steps ahead.
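Here's a hedged sketch of that chaining idea. The meta-prompt and model name are illustrative, and since Midjourney is driven through Discord's /imagine command rather than a public API, the generated prompt is simply printed for you to paste over.

```python
# A sketch of one tool prompting another: ask ChatGPT to draft a prompt,
# then hand the result to Midjourney by hand (Midjourney runs on Discord,
# so there's no API call to make here).
import openai

meta_prompt = (
    "Act as a Midjourney prompt expert. Write a single detailed Midjourney "
    "prompt for a cinematic photo of a prompt engineer at work. Include "
    "style, lighting, and camera keywords. Return only the prompt."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": meta_prompt}],
)

midjourney_prompt = response.choices[0].message.content.strip()
print(midjourney_prompt)  # paste into Midjourney's /imagine command
```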
I've already discussed the implications for search in my previous post. As search engines react, so will these tools, and it will likely become a constant feedback loop that accelerates the evolution of both.
In some fields, the use of AI tools can greatly accelerate learning. Take software development, where many devs, even senior ones, are leveraging ChatGPT to move faster - albeit at the risk of producing really bad quality, or downright buggy, code if the prompt is not fine-tuned.
Similar challenges and novel solutions will appear in more industries and specialties as usage becomes more widespread.
Once again, my recommendation: ride the wave now before it washes you out.