textgrad.autograd.llm_ops#
Classes

- FormattedLLMCall – Handles the formatting of the input before calling the LLM.
- LLMCall – The simple LLM call function.
- LLMCall_with_in_context_examples – The simple LLM call function, extended to accept in-context examples.
- class textgrad.autograd.llm_ops.FormattedLLMCall(engine, format_string, fields, system_prompt=None)#
Bases: LLMCall
This class is responsible for handling the formatting of the input before calling the LLM. It inherits from the LLMCall class and reuses its backward function.
- Parameters:
engine (EngineLM) – The engine to use for the LLM call.
format_string (str) – The format string to use for the input. For instance, "The capital of {country} is {capital}". For a format string like this, the fields dictionary is expected to have the keys "country" and "capital", and, in the forward pass, the input variables are expected to have the same keys.
fields (dict[str, str]) – The fields to use for the format string. For the above example, this would be {"country": {}, "capital": {}}. This is currently a dictionary in case we want to inject more information later on.
system_prompt (Variable, optional) – The system prompt to use for the LLM call. Default value depends on the engine.
- forward(inputs, response_role_description='response from the language model')#
The LLM call with formatted strings. This function calls the LLM with the formatted input, returns the response, and registers the grad_fn for backpropagation.
- Parameters:
inputs (dict[str, Variable]) – Variables to use for the input. This should be a mapping of the fields to the variables.
response_role_description (str, optional) – Role description for the response variable, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE
- Returns:
Sampled response from the LLM
- Return type:
Variable
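- Example:
A minimal usage sketch; the engine name, format string, and field names below are illustrative, and a configured API key for the chosen engine is assumed.
>>> from textgrad import Variable, get_engine
>>> from textgrad.autograd.llm_ops import FormattedLLMCall
>>> engine = get_engine("gpt-3.5-turbo")
>>> # The placeholders in format_string must match the keys of `fields`
>>> # and of the `inputs` dict passed to forward().
>>> format_string = "Country: {country}\nQuestion: {question}"
>>> fields = {"country": {}, "question": {}}
>>> formatted_call = FormattedLLMCall(engine=engine, format_string=format_string, fields=fields)
>>> inputs = {
...     "country": Variable("France", role_description="country of interest"),
...     "question": Variable("What is its capital?", role_description="question for the LM"),
... }
>>> response = formatted_call(inputs)  # a Variable wrapping the sampled LLM response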
- class textgrad.autograd.llm_ops.LLMCall(engine, system_prompt=None)#
Bases: Function
The simple LLM call function. This function calls the LLM with the input, returns the response, and registers the grad_fn for backpropagation.
- Parameters:
engine (EngineLM) – The engine to use for the LLM call.
system_prompt (Variable, optional) – The system prompt to use for the LLM call. Default value depends on the engine.
- backward(response, prompt, system_prompt, backward_engine)#
Backward pass through the LLM call. This will register gradients in place.
- forward(input_variable, response_role_description='response from the language model')#
The LLM call. This function calls the LLM with the input, returns the response, and registers the grad_fn for backpropagation.
- Parameters:
input_variable (Variable) – The input variable (aka prompt) to use for the LLM call.
response_role_description (str, optional) – Role description for the LLM response, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE
- Returns:
response sampled from the LLM
- Return type:
Variable
- Example:
>>> from textgrad import Variable, get_engine
>>> from textgrad.autograd.llm_ops import LLMCall
>>> engine = get_engine("gpt-3.5-turbo")
>>> llm_call = LLMCall(engine)
>>> prompt = Variable("What is the capital of France?", role_description="prompt to the LM")
>>> response = llm_call(prompt)  # returns something like Variable(data=The capital of France is Paris., grads=)
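backward() is not usually invoked directly: it is registered as the grad_fn of the response and runs when a downstream loss backpropagates through the call. Below is a sketch of that flow, continuing from the example above; it assumes the top-level textgrad helpers tg.set_backward_engine and tg.TextLoss, and the engine name and loss instruction are illustrative.
>>> import textgrad as tg
>>> tg.set_backward_engine("gpt-4o", override=True)  # engine used to generate textual gradients (illustrative)
>>> prompt = Variable("Write a one-line poem.", role_description="prompt to the LM", requires_grad=True)
>>> response = llm_call(prompt)
>>> loss_fn = tg.TextLoss("Evaluate whether the poem rhymes.")  # textual loss over the response
>>> loss = loss_fn(response)
>>> loss.backward()  # walks the graph; LLMCall.backward registers gradients on `prompt` in place
>>> prompt.gradients  # textual feedback accumulated for the prompt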
- class textgrad.autograd.llm_ops.LLMCall_with_in_context_examples(engine, system_prompt=None)#
Bases: LLMCall
The LLM call function with in-context examples. This function calls the LLM with the input and any provided in-context examples, returns the response, and registers the grad_fn for backpropagation.
- Parameters:
engine (EngineLM) – The engine to use for the LLM call.
system_prompt (Variable, optional) – The system prompt to use for the LLM call. Default value depends on the engine.
- backward(response, prompt, system_prompt, in_context_examples, backward_engine)#
Backward pass through the LLM call. This will register gradients in place.
- Parameters:
- Returns:
None
- forward(input_variable, response_role_description='response from the language model', in_context_examples=None)#
The LLM call. This function will call the LLM with the input and return the response, also register the grad_fn for backpropagation.
- Parameters:
input_variable (Variable) – The input variable (aka prompt) to use for the LLM call.
response_role_description (str, optional) – Role description for the LLM response, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE
in_context_examples (List[str], optional) – In-context examples to include in the LLM call, defaults to None
- Returns:
response sampled from the LLM
- Return type:
Variable
- Example:
>>> from textgrad import Variable, get_engine
>>> from textgrad.autograd.llm_ops import LLMCall_with_in_context_examples
>>> engine = get_engine("gpt-3.5-turbo")
>>> llm_call = LLMCall_with_in_context_examples(engine)
>>> prompt = Variable("What is the capital of France?", role_description="prompt to the LM")
>>> examples = ["Q: What is the capital of Germany? A: Berlin."]  # illustrative in-context example
>>> response = llm_call(prompt, in_context_examples=examples)  # returns something like Variable(data=The capital of France is Paris., grads=)