Ever since the rapid rise of large language models (LLMs) like GPT-4 and Llama, and LLM-powered chatbots like ChatGPT, Bing Chat, and Bard, there have been multiple reports about a new AI job that pays a six-figure salary without requiring a degree in computer engineering or even advanced coding skills.
About three months back, I, too, wrote excitedly about this new emerging career option called ‘prompt engineering’ and how it was turning out to be a blessing for those feeling threatened by the rapid march of generative artificial intelligence (AI) tools. You may read the whole article ‘How to become a prompt engineer’ if you wish, but here’s a summary if you’re not inclined to do so.
However, unlike a prompt writer (like most of us who type prompts into the chat box of AI-powered chatbots such as ChatGPT, Bing Chat, Bard, HuggingChat, etc.), a prompt engineer would need a combination of technical skills, industry knowledge, and experience. The reason: prompt engineers create and refine prompts to generate the desired results.
Prompt engineering, for instance, can be used to analyze customer sentiment and behaviours, enabling businesses to make informed decisions and drive innovation. It can also help researchers to extract valuable information from large language models by breaking down tasks, specifying information requirements, and using specific vocabulary. Prompt engineering is also applicable in fields like healthcare, education, and finance, where it can help extract relevant information, analyze trends, and make informed decisions.
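To illustrate what breaking down tasks and specifying information requirements can look like in practice, here is a hypothetical customer-feedback prompt of my own; it is not drawn from any of the reports cited above:

```text
You are analysing customer feedback for an online retailer.
Step 1: Summarize the review below in one sentence.
Step 2: Label its sentiment as Positive, Negative, or Neutral.
Step 3: List any issues mentioned, using only these categories:
        Shipping, Pricing, Quality, Support, Other.

Review: "The jacket arrived two weeks late and the zip broke on day one,
but the refund was processed quickly."
```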
Prompt engineers program in prose, sending plain-text commands to the AI model, which does the actual work. A more technical area of prompt engineering is fine-tuning the input data used to train AI models, which involves carefully selecting and structuring that data to maximize its usefulness for training.
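As a minimal sketch of what programming in prose means in practice (assuming the OpenAI Python SDK, v1 or later, and an illustrative model name, neither of which this article specifies), the entire "program" is the plain-text prompt:

```python
# A minimal sketch of "programming in prose": the prompt is plain text,
# and the model does the actual work. Assumes the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Classify the sentiment of the following customer review as "
    "Positive, Negative, or Neutral, and justify your answer in one sentence.\n\n"
    "Review: The checkout flow kept timing out, but support resolved it quickly."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```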
A prompt engineer would also need some hands-on experience with Big Data technologies such as Hadoop and Apache Spark, since working on an AI model means working with very large amounts of data. Experts suggest that knowledge of programming languages like Java, C++, and Python will help you understand how AI models work and tweak your prompts to get the output you want.
Even the World Economic Forum lists prompt engineering as one of the estimated 97 million new jobs that will emerge by 2025 to enable humans and machines to work together.
However, I would now caution that AI is likely to automate prompt engineering itself within the next one to two years, so it may not be wise to place all your bets on this job as a long-term career option.
A 6 June article in Harvard Business Review is similarly sceptical about the prospects of prompt engineering, noting that, regardless of the buzz around the term, it isn’t the future. “The prominence of prompt engineering may be fleeting. A more enduring and adaptable skill will keep enabling us to harness the potential of generative AI. It is called problem formulation — the ability to identify, analyze, and delineate problems,” reads the summary of the article.
The author makes a valid point. To begin with, there are already “templated prompts” that cover broad categories. Microsoft’s Semantic Kernel SDK (software development kit), for instance, makes it simpler to manage complex prompts and get focused results from LLMs.
According to Microsoft, you can use the Semantic Kernel to create natural language prompts, generate responses, extract information, invoke other prompts or perform any other task that can be expressed with plain text. “You don’t need to write any code or import any external libraries; just use the curly braces {{…}} to embed expressions in your prompts. Semantic Kernel will parse your template and execute the logic behind it. This way, you can easily integrate AI into your apps with minimal effort and maximum flexibility,” reads the tutorial.
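To give a feel for what such a template looks like, here is an illustrative example written against the curly-brace syntax the tutorial describes; the variable name $feedback is my own assumption, not something taken from Microsoft’s documentation:

```text
Summarize the customer feedback below in one sentence, then label its
sentiment as Positive, Negative, or Neutral.

Feedback: {{$feedback}}

Answer in the format: <summary> | <sentiment>
```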
Further, Microsoft has already proposed an Automatic Prompt Optimization (APO) framework. You may read the paper here. According to the authors, automatic or semi-automatic procedures can help humans write the best prompts and reduce manual effort. The results suggest that the proposed algorithm can improve the performance of the initial prompt input by up to 31%, exceeding state-of-the-art prompt learning baselines by an average of 4-8% while relying on fewer LLM application programming interface (API) calls.
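Very loosely, and glossing over the paper’s actual machinery (which uses textual “gradients” and beam search over candidate prompts), the idea behind such automatic optimization can be sketched as a feedback loop like the one below; the llm helper and the critique and rewrite prompts are hypothetical, not the paper’s:

```python
# A loose, hypothetical sketch of automatic prompt optimization, not the
# APO paper's actual algorithm: score a prompt on labelled examples, ask an
# LLM to critique it against its failures, rewrite it, and keep the best version.

def score(prompt, examples, llm):
    """Fraction of examples where the model's answer matches the expected label."""
    hits = sum(llm(prompt + "\n\nInput: " + x).strip() == y for x, y in examples)
    return hits / len(examples)

def optimize_prompt(initial_prompt, examples, llm, rounds=5):
    """`llm(text) -> text` is an assumed helper wrapping any chat model;
    `examples` is a list of (input, expected_output) pairs."""
    best_prompt, best_score = initial_prompt, score(initial_prompt, examples, llm)
    for _ in range(rounds):
        failures = [(x, y) for x, y in examples
                    if llm(best_prompt + "\n\nInput: " + x).strip() != y]
        if not failures:
            break  # the current prompt already gets every example right
        critique = llm("This prompt failed on these input/expected pairs:\n"
                       f"{failures}\n\nPrompt:\n{best_prompt}\n\n"
                       "Explain briefly why it fails.")
        candidate = llm("Rewrite the prompt below to address these issues:\n"
                        f"{critique}\n\nPrompt:\n{best_prompt}")
        candidate_score = score(candidate, examples, llm)
        if candidate_score > best_score:
            best_prompt, best_score = candidate, candidate_score
    return best_prompt
```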