    If at First You Don’t Succeed: Tips to Get Better Results from Your GenAI Prompts

    In last month’s issue, we shared several tips from a recent NACVA webinar on effective GenAI “prompt engineering.” Presenter Colin Brown, the founder and chief technology officer of Syncnet, which helps consultants leverage AI and software, also provided some advice on how to revise prompts to obtain better results when you’re not satisfied with the initial round.

    Addressing Common Problems
    GenAI tools, such as ChatGPT, Copilot, Claude, Perplexity.ai, and Gemini, aim to please the user, but they don’t always succeed the first time around. What seemed like an effective prompt can sometimes need a little (or a lot of) tweaking. Here are some common problems you might see in your results and how to address them:

    1. A general disconnect. We’ve all gotten results from GenAI that don’t match what we expected or wanted. In such cases, Brown recommended adding the following: “Before answering, ask me any questions you need to understand the request better.” He’ll also use an alternative AI tool to improve a prompt. For example, to improve a prompt for ChatGPT, he’ll go to Claude and enter: “Here’s a prompt: [prompt]. Make it better.” (For readers who work with these tools through code rather than a chat window, a sketch of this pattern appears after this list.)

    2. Results that differ in format or tone from what you want. Say, for example, that you prompted Gemini to prepare a report for litigation purposes or to provide an industry overview, but the resulting report or overview is in a different format than you desired. Brown suggested modifying your request by adding an “anchor” — an example of a report or overview that illustrates what you need. This gives the AI tool a frame of reference.

    3. Biased results. The results produced by GenAI tools can be biased for several reasons, ranging from the underrepresentation of certain groups or perspectives in training data to the implicit bias a software engineer brought to the design process. If you think your results are biased, Brown said, you can ask the tool to provide multiple perspectives or opinions. You also can request citations so you can check the data it’s relying on in its response.

    4. Hallucinations. AI hallucinations occur when GenAI tools spit out results that are nonsensical, misleading, inaccurate, or entirely fabricated. This can obviously pose a significant problem for valuators. Brown advised adding the following phrase to your prompt if you suspect you’ve received a hallucination: “Include sources with links for any facts mentioned.” You also can follow up with: “For each fact mentioned, tell me how confident you are and why.” But don’t stop there — you also must check those sources to confirm they do indeed support the tool’s assertions.
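    For practitioners who reach these tools through their programming interfaces rather than a chat window, Brown’s refinement loop can be scripted. The sketch below is only an illustration, not part of the webinar: it assumes the official anthropic and openai Python packages are installed, that API keys are set in the environment, and that the model names and the sample valuation prompt (both invented here) are swapped for your own. It chains item 1’s “make it better” step with the source and confidence requests from items 3 and 4.

        # A minimal sketch of the prompt-refinement loop described above.
        # Assumptions: anthropic and openai packages installed; the environment
        # variables ANTHROPIC_API_KEY and OPENAI_API_KEY are set; model names
        # below are illustrative and should be replaced with current ones.
        import anthropic
        import openai

        claude = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY
        chatgpt = openai.OpenAI()        # reads OPENAI_API_KEY

        # A hypothetical starting prompt, with item 1's clarifying phrase appended.
        draft_prompt = (
            "Summarize the key value drivers for a family-owned manufacturing "
            "business. Before answering, ask me any questions you need to "
            "understand the request better."
        )

        # Item 1: ask Claude to improve the prompt before sending it elsewhere.
        refinement = claude.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=500,
            messages=[{"role": "user",
                       "content": f"Here's a prompt: {draft_prompt}. Make it better."}],
        )
        improved_prompt = refinement.content[0].text

        # Items 3 and 4: send the improved prompt to ChatGPT, asking for sources
        # and per-fact confidence so a human reviewer can verify the answer.
        answer = chatgpt.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user",
                       "content": improved_prompt
                       + " Include sources with links for any facts mentioned."
                       + " For each fact mentioned, tell me how confident you are and why."}],
        )
        print(answer.choices[0].message.content)

    Even scripted this way, the final step in item 4 still applies: a person must open each cited source and confirm it actually supports the tool’s assertion.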

    Start Here
    Wading into the GenAI waters for valuation purposes can understandably seem daunting. The good news is that you’re not on your own. NACVA’s AI Data University offers a growing library of prompts and other AI tools you can search, filter, and save to use again and again to help you streamline your valuation practice — and leave you more time for the critical judgment and analysis that only a human can perform.
