textgrad.autograd.llm_ops#

Classes

FormattedLLMCall(engine, format_string, fields)

This class is responsible for handling the formatting of the input before calling the LLM.

LLMCall(engine[, system_prompt])

The simple LLM call function.

LLMCall_with_in_context_examples(engine[, ...])

The LLM call function with in-context examples.

class textgrad.autograd.llm_ops.FormattedLLMCall(engine, format_string, fields, system_prompt=None)#

Bases: LLMCall

This class is responsible for handling the formatting of the input before calling the LLM. It inherits from the LLMCall class and reuses its backward function.

Parameters:
  • engine (EngineLM) – The engine to use for the LLM call.

  • format_string (str) – The format string to use for the input. For instance, “The capital of {country} is {capital}”. For a format string like this, the fields dictionary is expected to have the keys “country” and “capital”. Similarly, in the forward pass, the input variables are expected to have the keys “country” and “capital”.

  • fields (dict[str, str]) – The fields to use for the format string. For the above example, this would be {“country”: {}, “capital”: {}}. This is currently a dictionary in case we’d want to inject more information later on.

  • system_prompt (Variable, optional) – The system prompt to use for the LLM call. Default value depends on the engine.

forward(inputs, response_role_description='response from the language model')#

The LLM call with formatted strings. This function will format the input variables using the format string, call the LLM, return the response, and register the grad_fn for backpropagation.

Parameters:
  • inputs (dict[str, Variable]) – Variables to use for the input. This should be a mapping of the fields to the variables.

  • response_role_description (str, optional) – Role description for the response variable, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE

Returns:

Sampled response from the LLM

Return type:

Variable
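
Example (a hedged usage sketch; the format string, field names, and role descriptions below are illustrative and not part of the API, but the call follows the constructor and forward signatures documented above):

>>> from textgrad import Variable, get_engine
>>> from textgrad.autograd.llm_ops import FormattedLLMCall
>>> engine = get_engine("gpt-3.5-turbo")
>>> # Illustrative format string; the fields dictionary must mirror its placeholders.
>>> format_string = "Question: {question}\nAnswer the question concisely."
>>> fields = {"question": {}}
>>> formatted_call = FormattedLLMCall(engine=engine, format_string=format_string, fields=fields)
>>> question = Variable("What is the capital of France?", role_description="question to the LM")
>>> response = formatted_call({"question": question})
# This returns something like Variable(data=Paris., grads=)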

class textgrad.autograd.llm_ops.LLMCall(engine, system_prompt=None)#

Bases: Function

The simple LLM call function. This function will call the LLM with the input, return the response, and register the grad_fn for backpropagation.

Parameters:
  • engine (EngineLM) – engine to use for the LLM call

  • system_prompt (Variable, optional) – system prompt to use for the LLM call, default depends on the engine.

backward(response, prompt, system_prompt, backward_engine)#

Backward pass through the LLM call. This will register gradients in place.

Parameters:
  • response (Variable) – The response variable.

  • prompt (str) – The prompt string that will be used as input to an LM.

  • system_prompt (str) – The system prompt string.

  • backward_engine (EngineLM) – The backward engine that will do the gradient computation.

Returns:

None
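
This method is registered as the grad_fn during the forward pass and is normally triggered by calling backward() on a downstream Variable rather than being invoked directly. A hedged sketch of the typical flow (assuming the top-level textgrad.set_backward_engine helper is used to configure the backward engine; exact setup may differ):

>>> import textgrad as tg
>>> from textgrad.autograd.llm_ops import LLMCall
>>> tg.set_backward_engine("gpt-4o", override=True)  # assumed setup; any backward-capable engine works
>>> llm_call = LLMCall(tg.get_engine("gpt-3.5-turbo"))
>>> prompt = tg.Variable("What is the capital of France?", role_description="prompt to the LM")
>>> response = llm_call(prompt)
>>> response.backward()
# Textual gradients are now registered in place on `prompt` by this backward pass.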

forward(input_variable, response_role_description='response from the language model')#

The LLM call. This function will call the LLM with the input, return the response, and register the grad_fn for backpropagation.

Parameters:
  • input_variable (Variable) – The input variable (aka prompt) to use for the LLM call.

  • response_role_description (str, optional) – Role description for the LLM response, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE

Returns:

response sampled from the LLM

Return type:

Variable

Example:

>>> from textgrad import Variable, get_engine
>>> from textgrad.autograd.llm_ops import LLMCall
>>> engine = get_engine("gpt-3.5-turbo")
>>> llm_call = LLMCall(engine)
>>> prompt = Variable("What is the capital of France?", role_description="prompt to the LM")
>>> response = llm_call(prompt)
# This returns something like Variable(data=The capital of France is Paris., grads=)

class textgrad.autograd.llm_ops.LLMCall_with_in_context_examples(engine, system_prompt=None)#

Bases: LLMCall

The LLM call function with in-context examples. This function will call the LLM with the input and the provided in-context examples, return the response, and register the grad_fn for backpropagation.

Parameters:
  • engine (EngineLM) – engine to use for the LLM call

  • system_prompt (Variable, optional) – system prompt to use for the LLM call, default depends on the engine.

backward(response, prompt, system_prompt, in_context_examples, backward_engine)#

Backward pass through the LLM call. This will register gradients in place.

Parameters:
  • response (Variable) – The response variable.

  • prompt (str) – The prompt string that will be used as input to an LM.

  • system_prompt (str) – The system prompt string.

  • backward_engine (EngineLM) – The backward engine that will do the gradient computation.

  • in_context_examples (List[str]) – In-context examples that were included in the LLM call.

Returns:

None

forward(input_variable, response_role_description='response from the language model', in_context_examples=None)#

The LLM call. This function will call the LLM with the input and the provided in-context examples, return the response, and register the grad_fn for backpropagation.

Parameters:
  • input_variable (Variable) – The input variable (aka prompt) to use for the LLM call.

  • response_role_description (str, optional) – Role description for the LLM response, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE

  • in_context_examples (List[str], optional) – In-context examples to include in the LLM call, defaults to None.

Returns:

response sampled from the LLM

Return type:

Variable

Example:

>>> from textgrad import Variable, get_engine
>>> from textgrad.autograd.llm_ops import LLMCall_with_in_context_examples
>>> engine = get_engine("gpt-3.5-turbo")
>>> llm_call = LLMCall_with_in_context_examples(engine)
>>> prompt = Variable("What is the capital of France?", role_description="prompt to the LM")
>>> examples = ["Q: What is the capital of Germany? A: Berlin."]
>>> response = llm_call(prompt, in_context_examples=examples)
# This returns something like Variable(data=The capital of France is Paris., grads=)