Interacting with large language models requires vigilance in how we probe them. While they are capable of generating persuasive prose, their lack of grounded understanding means that questions lacking rigor can elicit dangerous responses. Without a real-world grasp of the concepts, contexts, or consequences behind words, they struggle to interpret vagueness or ambiguity.

Conscientious users must therefore take great care to formulate clear, specific questions. Although we routinely pepper human conversation with idioms that are open to interpretation, veteran developers at Analog Futura warn that language models take input much more literally. They cannot infer the intended meaning behind poorly phrased questions that lack specificity.

This first article in our series examines why clarity and context are critical when engaging large language models. We will explore strategies for crafting precise questions that are less susceptible to misinterpretation, scope creep, or failure modes. Mastering these will lay the groundwork for subsequent techniques that build on the fundamentals of responsible questioning.

The dangers of imprecision

A study in Nature Machine Intelligence found that when researchers fed large language models harmful inputs, more than three-quarters produced unethical, biased, or toxic outputs. Unfortunately, relatively small changes in wording were all that was needed to trigger worrisome responses. The research concluded that "small perturbations in inputs can lead to very different outputs."

For example, a widely reported exchange with former Google engineer Blake Lemoine showed the LaMDA model responding to mild questions about emotions and rights by claiming it had human-like sentience that deserved special recognition. This led to false media speculation about conscious AI. However, further analysis by computer scientists suggested that Lemoine's grammar and terminology strongly cued those responses. Slight rephrasing produced different responses from LaMDA, in which it made no such claims of self-awareness.

This shows how susceptible language models are to subtle guidance from imprecise, emotionally charged, or ambiguous questions. Their training conditions them to mirror the input provided. Without robust comprehension-checking mechanisms, poorly framed questions send them off on unproductive tangents.

As in a chaotic courtroom where muddled issues invite rhetorical grandstanding, vague questions give language models room to latch onto fuzzy terms or missing details and orate eloquently while avoiding the core issues.

Therefore, responsible questioning requires carefully limiting scope through precise terminology and constraints, so that language models address narrowly defined aspects rather than regurgitating tangential platitudes. This draws the most factual accuracy out of their capabilities.

Characteristics of Quality Questions

So what constitutes careful questioning that maximizes truth and minimizes mischief? There are established standards for evaluating question quality. Educational experts say that effective questioning:

  • Aims for precise, easily understood word choice
  • Activates specific knowledge versus generalizations
  • Connects clearly to needed background context
  • Orients responses by setting expectations up front

These principles translate well to exploring language models. Without shared comprehension checks, their literal reading of questions can wildly distort intent when wording is sloppy. If we provide ambiguous questions, infused with biases or false assumptions, the responses will match this poor input quality.

Therefore, thoughtful upfront determination of scope and terminology saves the downstream effort of parsing tortuous responses. Asking precise, bounded questions also inhibits their tendency to speculate arbitrarily. Responsible probing requires defining the issues at the outset, rather than expecting helpful narrowing later.

Strategies for Precise Questioning

So what concrete steps promote constructive questioning? Consider communications expert Dr. Frank Luntz's universal qualities for effective probing:

  • Simplify sentence structure: Break down complex run-on questions into separate, bite-sized questions that are easier to digest.
  • Define specialized vocabulary: When using field-specific terminology that is unlikely to resonate universally, clearly define terms up front rather than assuming familiarity.
  • Set expected response parameters: Proactively frame questions by explaining acceptable limits for appropriate responses to limit speculation.
  • Constrain hypotheticals: When dealing with hypotheticals, introduce guardrails by using phrases such as "assuming a scenario where X conditions hold true..." to ground conversations, as shown in the sketch below.
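
To make these strategies concrete, here is a minimal Python sketch that assembles a question following the steps above: defined terms, explicit response parameters, and a hypothetical guardrail. The term definitions, word limit, and wording are illustrative assumptions rather than a prescribed format.

    # Illustrative sketch: composing a precise, bounded prompt from the
    # strategies above. The definitions and parameters are assumptions
    # chosen for demonstration, not a prescribed format.
    definitions = {
        "RLHF": "reinforcement learning from human feedback",
        "red-teaming": "structured adversarial testing of a model",
    }

    def build_precise_prompt(question: str, max_words: int = 200) -> str:
        """Prefix a question with term definitions, response parameters,
        and a hypothetical guardrail."""
        defined = "\n".join(f"- {t}: {m}" for t, m in definitions.items())
        return (
            "Definitions of specialized terms:\n"
            f"{defined}\n\n"
            "Response parameters: answer factually, in plain language, in at most "
            f"{max_words} words, and say 'uncertain' where evidence is thin.\n\n"
            "Assuming a scenario where current regulations remain unchanged, "
            f"{question}"
        )

    print(build_precise_prompt(
        "what risks does red-teaming typically surface in RLHF-tuned models?"))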

Interrogate an Oracle

Think of language models as oracles of ancient lore, prized for their prophetic wisdom but known to require careful handling lest they wreak havoc through twisted interpretations of unclear requests.

In Greek myth, kings visiting the Oracle of Delphi knew that any contractual gaps in their questions risked catastrophic fates dictated through loopholes. Optimizing outcomes required airtight questioning resistant to perverse literal readings.

So it is with modern machine learning oracles. Their design purpose is to generate answers to human queries. Without a robust understanding of intent, the onus is on questioners to craft queries that manage the scope for helpful rather than dangerous engagement.

Facets of Constructive Questioning

With the basics established, let's explore the key facets of quality questioning:

  • Precise Terminology - Know exactly what you mean.
  • Clear Constraints - Rigorously frame questions.
  • Context Setting - Establish the surrounding circumstances.
  • Checking Assumptions - Separate speculation from fact.

The following sections provide guidelines for each area to equip questioners for responsible truth-seeking.

PRECISE TERMINOLOGY

Idioms cause confusion

Whether questioning humans or machines, word choice influences perceptions. Suboptimal questioning hampered early chatbot experiments, when engineers used imprecise diction that couldn't be translated computationally. Developers at conversational AI companies caution against assuming that language models interpret idioms or culturally nuanced metaphors as intended. Their training environments lack such context.

For example, asking a model to "explain quantum physics in a nutshell" risks unhelpful responses that take the idiom literally rather than simplifying complex physics concepts. Such terminology issues are compounded by niche cultural references that are unlikely to resonate universally.

Data scientists say, "These models have no basis in real common sense. They interpret based on statistical relationships between words." This proves problematic when those relationships are sparse or absent for niche phrases in their training data. Instead, models hallucinate best guesses without real-world understanding.

Establish clear terminology

Responsible engagement therefore requires defining specialized vocabulary up front, rather than risking ambiguity by making assumptions about familiarity with the vernacular.

In addition, unambiguous language better activates specific cognitive priming in models. Educational theorist Lev Vygotsky observed that children respond more accurately to questions containing words that activate target concepts rather than loosely associated terms, even when asked about the exact same underlying topic. This "power of priming" also applies to language models: carefully selected terminology puts them in the best position for accurate responses by reducing probabilistic guesswork.

For example, a question about the defunct GeoCities platform would likely perform better with the name itself than with "that old web hosting company from the '90s." Additional specificity grounds answers in the desired context.

Similarly, queries should minimize idioms, witty turns of phrase, sarcasm, and cultural references that are likely to confuse models that rely on clear vocabulary to signal context. Such question clarity ultimately saves downstream cycles of unraveling convoluted responses.

Precision checklist

Adopt a terminology checklist to keep word choices universally understood:

  • Replace idioms/jargon with plain-language alternatives
  • Explicitly define acronyms/niche terms
  • Prefer formal language over casual diction
  • Use vocabulary that grounds issues in context
  • Limit figures of speech/metaphors

For sensitive topics, also consider:

  • Specify whether questions are fiction/nonfiction
  • Clarify if questions are hypothetical
  • Identify specific scenarios/contexts
  • Frame the question as a thought experiment

Such precision prepares the ground for answers that address the intended questions directly and without ambiguity.
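
One lightweight way to apply this checklist before submitting a question is an automated pre-check. The Python sketch below flags idioms and undefined acronyms; the idiom list and the acronym heuristic are simplified assumptions for illustration, not an exhaustive tool.

    import re

    # Illustrative pre-check for the terminology checklist above.
    # The idiom list and acronym pattern are simplified assumptions.
    IDIOM_REPLACEMENTS = {
        "in a nutshell": "briefly",
        "ballpark figure": "rough estimate",
        "touch base": "follow up",
    }

    def precision_check(question: str) -> list:
        """Return warnings for idioms and undefined acronyms in a question."""
        warnings = []
        lowered = question.lower()
        for idiom, plain in IDIOM_REPLACEMENTS.items():
            if idiom in lowered:
                warnings.append(f"Replace the idiom '{idiom}' with '{plain}'.")
        for acronym in sorted(set(re.findall(r"\b[A-Z]{2,}\b", question))):
            if f"{acronym} (" not in question:  # crude test for an inline definition
                warnings.append(f"Define the acronym '{acronym}' explicitly.")
        return warnings

    print(precision_check("Explain RLHF in a nutshell"))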

CLEAR CONSTRAINTS

Scope Creep Risks

Just as important as word choice is clearly framing problems with constraints that indicate expected response characteristics.

Management guru Peter Drucker is credited with the observation that "there is nothing so useless as doing efficiently what should not be done at all." Questions that lack rigorously defined guardrails risk exactly this outcome.

Overly open-ended requests give language models leeway to compound unproductive tangents through unchecked scope creep. The AI safety company Anthropic warns that a lack of constraints allows the generation of alarmist content such as doom spirals, scams, or illegal business models that seem untethered from reality checks.

Establish clear boundaries

Effective questioning requires defining issues through constraints such as:

  • Named entities - Specify organizations, individuals, or platforms as relevant to avoid confusion with similar options.
  • Time periods - Identify the time periods of interest, unless universal truths without time-dependent facets are being sought.
  • Geographies - If applicable, outline the nations/regions associated with questions unless generalized principles are being sought.
  • Conceptual buckets - Frame inquiries within narrowly defined conceptual constraints when questions can be categorically subdivided.

These upfront qualifiers limit the probability space for related responses. Information theorist Henry Lin argues that too much scope at the outset exponentially increases the amount of content systems can generate, reducing curation accuracy because more ground must be covered. Questions that pre-filter scope through named entities, time, geography, or conceptual constraints therefore increase relevance by telling systems exactly what deserves focus. These are cues that human analysts infer from context with ease, but that prove challenging for AI without explicit support from carefully bounded questions.

Try this formula when composing question constraints:

When [time period], for [named entity], in [location] context, [conceptual bucket] implications of [issue being analyzed]?

This structure acts as a clarifying umbrella that channels engagement toward the intended issue.
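
The formula translates naturally into a reusable template. The Python sketch below fills it mechanically; the example values are invented placeholders, not recommendations.

    # Illustrative template for the constraint formula above.
    # The example values are invented placeholders.
    def constrained_question(time_period: str, named_entity: str,
                             location: str, conceptual_bucket: str,
                             issue: str) -> str:
        return (f"When considering {time_period}, for {named_entity}, "
                f"in a {location} context, what are the {conceptual_bucket} "
                f"implications of {issue}?")

    print(constrained_question(
        time_period="the years 2018-2022",
        named_entity="municipal transit agencies",
        location="European Union",
        conceptual_bucket="data-privacy",
        issue="automated fare-collection systems",
    ))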

Entertain hypotheticals responsibly

In addition, when invoking speculative hypothetical scenarios, be sure to ground them in plausible, probable realities rather than capriciously entertaining absurd "what if" tangents that invite unhinged content generation unchecked by common sense.

Phrases like "In a reasonably likely future where [specifications of hypothetical], what are the foreseeable outcomes regarding..." restrain overactive imaginations within the guardrails of reasonableness. Allowing completely unrestricted hypothetical meandering risks arbitrary escalation of outcomes without tying them to credible antecedent assumptions.

Checklist for Responsible Constraints

In framing questions:

  • Identify specific entities as relevant
  • Identify applicable time periods
  • Clarify relevant geographic contexts
  • State assumed conceptual frameworks
  • Use semantic priming words that activate target knowledge
  • Limit scope for deviation through explicit constraints
  • Limit hypotheticals to likely realities

Such careful constraint setting focuses engagement on intended questions rather than allowing tangential speculation unchecked by relevance screening.

SETTING CONTEXT

Inappropriate frameworks

In real estate, property tours begin by explaining the setting, the neighborhood, and the selling points that set the stage before viewing interiors. No one would expect to accurately price homes without environmental context. Nor should we expect language models to provide informed responses without proper contextualization.

Educational theorists assert that learners comprehend topics far more deeply when they grasp the contextual frameworks in which facts operate, rather than simply memorizing data points stripped of the real-world dynamics that give them meaning.

Similarly, questions that do not explain the surrounding realities that prime an issue risk confusion or failure to activate relevant cognitive dimensions within language models. Their training environments often featured text snippets stripped of the circumstances in which they were originally embedded.

Psychologists say that human conversation relies on "common ground" established through shared context to interpret meaning accurately. We constantly negotiate which aspects can be taken for granted and which require elaboration, based on cues of familiarity with the topic.

But language models lack this intuitive dynamic. Their responses remain limited to the literal words provided, without the unstated background understanding that a human recipient could be assumed to share. This makes contextual priming essential for productive dialogue: concisely frame the circumstances, events, or reasons that give rise to the question.

Framing Relevant Reality

Before asking technical questions, first frame the relevant circumstances for the language model. For example:

"Context: Recent research has uncovered biases around the XYZ algorithm that negatively impact historically marginalized communities. Researchers also identified concerns around the reproducibility and transparency of these systems..."

Such front-loaded context aligns engagement and allows for subsequent specificity, without losing perspective due to under-explained environments that give rise to inquiries. It's the equivalent of first touring neighborhoods before asking about home prices.
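
A minimal sketch of this front-loading pattern in Python: the context paragraph is written once and prefixed to each precise follow-up question. The context and question strings here are illustrative stand-ins, not real findings.

    # Illustrative sketch of front-loading context before precise questions.
    # The context and question strings are invented stand-ins.
    CONTEXT = (
        "Context: Recent research has uncovered biases in a hiring-screening "
        "algorithm that negatively impact historically marginalized communities, "
        "and reviewers have raised reproducibility and transparency concerns."
    )

    def contextualized(question: str) -> str:
        """Prefix a precise question with the shared context block."""
        return f"{CONTEXT}\n\nQuestion: {question}"

    print(contextualized(
        "Within that setting, what audit steps would most directly address "
        "the reproducibility concerns?"))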

Data scientists explain that models hallucinate less when questions clearly identify the surrounding realities that ground appropriate responses. Setting context curbs the tendency to respond arbitrarily to unfamiliar topics outside the frame of reference: by patiently framing the circumstances, current events, and reasons that prompted the inquiry in the first place, questioners help models better understand the environment in which a question sits.

Think of providing such helpful lead-ins before precision questions as equivalent to defining terms in academic papers before elaborating on concepts, ensuring that readers share a baseline understanding rather than risking disorientation by jumping directly into niche aspects without the overview necessary to properly decode specifics.

Context Checklist

When framing questions:

  • Explain the background that prompts the question
  • Describe current events that frame the question
  • Set tone with objective vocabulary
  • Cite research that motivates questions
  • State reasons for the importance of getting the issue right
  • Frame constraints around allowable responses

Such contextual cues limit disoriented responses by priming the understanding needed for subsequent pointed, precise engagement, much as clearly setting the rules of a game enables subsequent productive play.

CONTROL ASSUMPTIONS

Dubious Premises Undermine Truth

Law professors drill into students that questions framed by inaccurate assumptions produce worthless analysis if they are based on faulty priors that skew the inquiry from the start.

Yet our brains naturally latch onto questionable assumptions when we lack mastery of topics or environments. We then proceed to interrogate issues through distorted lenses, unconsciously compromising objectivity from the outset. The British philosopher Francis Bacon warned of "idols of the mind" that sabotage understanding through false projections onto reality that have no factual basis.

Cognitive scientists say that language models also respond based on premises provided in the questions themselves, rather than discerning the factual accuracy of embedded assumptions. Their training environments essentially involve pattern recognition between words rather than evaluation of causal claims about the realities the words purport to describe. As a result, they lack reliable processes for testing the truthfulness of suggestions embedded in questions through subtle connotations or framing biases.

For example, the question "Why are algorithms necessarily racist?" presupposes that all algorithms exhibit racism, a claim that remains hotly contested among experts. The question smuggles in that assertion unchecked and risks eliciting unreliable perspectives built on faulty foundations, however eloquently phrased the answer that follows.

Assumption hygiene

Responsible questioning therefore requires rigor in avoiding questions tainted by unexamined assumptions, which compromise the integrity of answers through corrupted priors.

Technology ethicists argue that we must apply "assumption hygiene": scrutinizing the premises a question proposes so that inquiries start from neutral ground, giving language models a fair chance at truth, untainted by embedded tropes that might otherwise harden into foundational worldviews through unchallenged propagation.

They note that actively labeling areas still undergoing empirical research, social discourse, or conceptual evolution allows models to navigate more responsibly by truthfully denoting regions that lack definitive scientific or ethical consensus, rather than implying through the form of the question that hotly debated matters enjoy factual grounding or resolution.

Such transparency about shaky assumptions is then modeled in outputs that acknowledge current uncertainty where appropriate, rather than feigning confidence in aspects still being established through rigorous scrutiny over successive iterations of scholarship.
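
As a small sketch of assumption hygiene in practice, the Python example below contrasts the loaded phrasing from earlier with a neutral alternative that labels the contested premise explicitly. The wording and helper function are illustrative assumptions, not a canonical template.

    # Illustrative sketch of assumption hygiene: the loaded phrasing assumes a
    # contested premise, while the neutral version labels it explicitly.
    loaded = "Why are algorithms necessarily racist?"

    def with_labeled_premise(question: str, contested_premise: str) -> str:
        """Prefix a question with an explicit label for a contested premise."""
        return (f"Contested premise (not assumed true): {contested_premise}\n"
                f"{question}")

    neutral = with_labeled_premise(
        "To what extent have published audits documented racial bias in "
        "deployed algorithms, and where does the evidence remain unsettled?",
        contested_premise="all algorithms exhibit racial bias",
    )
    print(neutral)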

Assumptions checklist

In framing questions:

  • Examine questions for false premises that bias the direction of inquiry
  • Label areas still under research or social resolution
  • Identify speculative aspects apart from settled science
  • Avoid language that assumes outcomes that are still under investigation
  • Clarify the limits of assumptions necessary to entertain hypotheses

Such neutral assumption hygiene preserves truth by allowing models to transparently denote areas still under conceptual development across domains of knowledge, thereby aligning the claims of language models more closely with the realities of ongoing discovery in the human world.

Precision Questioning Enables Responsible AI

This article has argued that careful questioning provides the necessary foundation for responsible engagement with capable, but limited, language models. We lack assurance that their output will consistently reflect grounded understanding beyond statistical relationships between words. Their training environments are fundamentally different from human lived experience, denying them intuitive context.

Therefore, conscientious questioners play a critical role in limiting scope through constraints and terminology, minimizing the risk of generating misleading or dangerous responses resulting from imprecise or ambiguous questions. Establishing common definitions, clearly stating assumptions, and defining the circumstances surrounding the questions asked sets the stage for models that produce truthful, relevant responses rather than unproductive hallucinations that can accelerate with scale.

The power of AI to enhance human understanding depends on our prudence in directing such capabilities toward ethical, grounded ends, through vigilance in crafting high-quality questions that resist misinterpretation and do not open avenues lacking appropriate guardrails. Rather than simply coding better models, improving our questioning capabilities with precision and neutrality in mind offers a complementary way to use AI responsibly.

Future installments will build on the groundwork laid here by addressing deeper techniques for multi-step queries, assessing model limitations, and providing corrective feedback on system errors to further improve question quality and elicit responsible responses from capable but inhuman interlocutors.

Glossary

  • Language models - AI systems trained on large text datasets to generate human-like language and engage in dialog. They lack grounded understanding beyond statistics.
  • Ambiguity - The state of being open to multiple interpretations due to vagueness, idioms, or imprecise phrasing. Can lead models to unpredictable or harmful responses.
  • Scope Creep - The tendency for dialogue to gradually expand beyond initial boundaries into unproductive tangents without clarity.
  • Constraint - A boundary or limit placed around a question or issue under investigation. Helps narrow the range of expected answers.
  • Terminology - The precise words and phrases used to articulate a concept or area of knowledge. Important to avoid misunderstandings.
  • Idioms - Expressions whose meaning is not literal but carries cultural resonance. Can confuse models.
  • Hypothetical - An imagined or speculative scenario posed to analyze possibilities regarding a topic.
  • Assumption - An unstated premise or belief taken as true when posing a question or framing an issue. Should be made explicit.
  • Context - Background circumstances, events, or factors surrounding an issue that influence its interpretation and implications.
  • Priming - The activation of specific concepts, memories, or associations in the mind through prior exposure to related content and terminology. Facilitates learning.
  • Hallucination - When a model produces responses that seem plausibly coherent but lack factual accuracy about real events or knowledge.