
Shalom Yiblet

Prompt Engineering Isn't Permanent

March 26, 2023 | 3 minutes

The era of prompt engineering, I believe, will be a transient one, lasting only a few years. It exists because the current generation of AI models can't always capture our intentions and deliver the results we want.

The Emergence of Large Language Models

Before the advent of GPT-3, we relied on fine-tuning to achieve the required performance from our AI models. This process can be thought of as a more intensive form of prompt engineering, wherein we embed our desired outcomes directly into the model's weights.
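To make "embedding outcomes into the weights" concrete, here is a minimal sketch of a fine-tuning loop, assuming a generic PyTorch classifier and a dataloader of labeled examples (both placeholders): every optimizer step nudges the parameters toward the behavior we want.

```python
import torch

def fine_tune(model, dataloader, epochs=3, lr=5e-5):
    """Sketch of fine-tuning: gradient steps write the desired behavior
    into the model's weights (model and dataloader are placeholders)."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, labels in dataloader:
            logits = model(inputs)                           # forward pass
            loss = torch.nn.functional.cross_entropy(logits, labels)
            loss.backward()                                  # gradients of the loss
            optimizer.step()                                 # update the weights
            optimizer.zero_grad()
    return model
```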

However, the introduction of large language models (LLMs) has rendered this level of customization largely unnecessary. LLMs, with their remarkable ability to infer the desired output from a mere sentence prompt, have revolutionized the AI landscape. To illustrate, we once had to fine-tune BERT models for question-answering tasks by providing hundreds of thousands of training samples. In contrast, GPT-3 can now answer questions from passages without the need for such extensive training data.
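To make the contrast concrete, here is a hedged sketch: the first snippet queries a BERT-style model that was already fine-tuned on the roughly 100k-example SQuAD dataset, while the second asks GPT-3 the same question with nothing but a one-line instruction. The model names and the legacy openai.Completion call reflect what was available around the time of writing and are illustrative assumptions.

```python
from transformers import pipeline
import openai  # assumes openai.api_key is configured

passage = "The Eiffel Tower was completed in 1889 and stands in Paris."
question = "When was the Eiffel Tower completed?"

# 1) Extractive QA with a BERT-style model fine-tuned on ~100k SQuAD examples.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
print(qa(question=question, context=passage)["answer"])  # e.g. "1889"

# 2) The same task with GPT-3: no task-specific training data, just a prompt.
prompt = (
    "Answer the question using only the passage.\n\n"
    f"Passage: {passage}\nQuestion: {question}\nAnswer:"
)
response = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=20)
print(response.choices[0].text.strip())
```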

Fine-tuning to Single Prompts

As we trace the trajectory of this progress, we observe an exponential decrease in the amount of information required for the model to discern our intended output. When GPT-3 was released with fine-tuning support, the documentation advised users to provide a few input-output examples for training. Fast forward to ChatGPT, and the average person can utilize GPT-3's capabilities without even needing to supply sample inputs and outputs.
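For a single task such as sentiment labeling, that progression looks roughly like the sketch below; the JSONL-style fine-tuning records and the exact prompt wording are illustrative assumptions, not a fixed format.

```python
# Three generations of telling a model what you want (illustrative sketch).

# 1) Fine-tuning era: hundreds or thousands of prompt/completion training pairs.
finetune_examples = [
    {"prompt": "Review: I loved it ->", "completion": " positive"},
    {"prompt": "Review: total waste of money ->", "completion": " negative"},
    # ... many more pairs like these
]

# 2) Few-shot era: a handful of examples packed directly into the prompt.
few_shot_prompt = (
    "Review: I loved it -> positive\n"
    "Review: total waste of money -> negative\n"
    "Review: the plot dragged but the acting was great ->"
)

# 3) Single-prompt era: one plain-language instruction, no examples at all.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative: "
    "'The plot dragged but the acting was great.'"
)
```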

The evolution of AI has taken us from thousands of training examples to tens, and now to a single prompt. Even so, optimizing the output still means experimenting with numerous prompts, comparing the results, and selecting the best one.

Future of Prompt Engineering in AI

Nevertheless, I don't expect this way of eliciting desired outcomes from AI models to last long. As we continue to refine these models and improve their ability to understand and act on our intentions, the need for prompt engineering will likely wane.

As the AI field advances, we might witness models that can seamlessly grasp our objectives without the need for any complex manipulation of prompts. The AI systems of the future could potentially understand the user's requirements with minimal input, perhaps even intuiting the desired output based on the context of the interaction.

Shalom Yiblet
follow @syiblet