textgrad.autograd.functional#
Functions

| Function | Description |
| --- | --- |
| aggregate(variables) | WIP - Aggregates a list of variables. |
| formatted_llm_call(inputs, response_role_description, engine, format_string, fields[, system_prompt]) | A functional version of the LLM call with formatted strings. |
| llm_call(input_variable, engine[, response_role_description, system_prompt]) | A functional version of the LLMCall. |
| sum(variables) | Represents a sum operation on a list of variables. |
- textgrad.autograd.functional.aggregate(variables)#
WIP - Aggregates a list of variables. In TextGrad, the forward pass of aggregation is simply the concatenation of the variables' values. The backward pass performs a reduction over the variables' gradients; this reduction is currently an LLM call that summarizes the gradients.
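A minimal usage sketch (the variable contents and role descriptions are illustrative, and since aggregate is marked WIP its behavior may change):

```python
from textgrad import Variable
from textgrad.autograd.functional import aggregate

# Two feedback snippets to combine into a single variable.
a = Variable("The answer is concise.", role_description="feedback")
b = Variable("The answer lacks citations.", role_description="feedback")

# Forward pass: the values are simply concatenated into one Variable.
combined = aggregate([a, b])
print(combined.value)
```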
- textgrad.autograd.functional.formatted_llm_call(inputs, response_role_description, engine, format_string, fields, system_prompt=None)#
A functional version of the LLM call with formatted strings; a thin wrapper around the FormattedLLMCall class.
This function calls the LLM with the formatted inputs, returns the response, and registers the grad_fn for backpropagation.
- Parameters:
inputs (dict[str, Variable]) – Variables to use for the input. This should be a mapping of the fields to the variables.
response_role_description (str, optional) – Role description for the response variable, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE
engine (EngineLM) – The engine to use for the LLM call.
format_string (str) – The format string to use for the input, e.g. "The capital of {country} is {capital}". For this format string, the fields dictionary must contain the keys "country" and "capital", and in the forward pass the inputs mapping must be keyed by "country" and "capital" as well.
fields (dict[str, str]) – The fields used by the format string. For the example above, this would be {"country": {}, "capital": {}}. This is currently a dictionary so that more information can be injected later on.
system_prompt (Variable, optional) – The system prompt to use for the LLM call. Default value depends on the engine.
- Returns:
Sampled response from the LLM
- Return type:
Variable
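A minimal sketch of a call, based on the parameters above; the question/answer fields, prompt text, and role descriptions are illustrative:

```python
from textgrad import Variable, get_engine
from textgrad.autograd.functional import formatted_llm_call

engine = get_engine("gpt-3.5-turbo")

# The placeholders in the format string must match the keys of both
# `fields` and `inputs`.
format_string = (
    "Here is a question: {question}\n"
    "Here is a draft answer: {answer}\n"
    "Critique the draft answer."
)
fields = {"question": {}, "answer": {}}
inputs = {
    "question": Variable("What is the capital of France?",
                         role_description="question to the LM"),
    "answer": Variable("The capital of France is Paris.",
                       role_description="draft answer"),
}

response = formatted_llm_call(
    inputs=inputs,
    response_role_description="critique of the draft answer",
    engine=engine,
    format_string=format_string,
    fields=fields,
)
```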
- textgrad.autograd.functional.llm_call(input_variable, engine, response_role_description=None, system_prompt=None)#
A functional version of the LLMCall class. This function calls the LLM with the input variable, returns the response, and registers the grad_fn for backpropagation.
- Parameters:
input_variable (Variable) – The input variable (i.e., the prompt) to use for the LLM call.
engine (EngineLM) – The engine to use for the LLM call.
response_role_description (str, optional) – Role description for the LLM response, defaults to VARIABLE_OUTPUT_DEFAULT_ROLE.
system_prompt (Variable, optional) – The system prompt to use for the LLM call. Default value depends on the engine.
- Returns:
response sampled from the LLM
- Return type:
Variable
>>> from textgrad import Variable, get_engine
>>> from textgrad.autograd.functional import llm_call
>>> engine = get_engine("gpt-3.5-turbo")
>>> prompt = Variable("What is the capital of France?", role_description="prompt to the LM")
>>> response = llm_call(prompt, engine=engine)
>>> # This returns something like Variable(data=The capital of France is Paris., grads=)
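Because llm_call registers a grad_fn, the sampled response can take part in textual backpropagation. A minimal sketch, assuming the usual top-level textgrad API (set_backward_engine, TextLoss); the evaluation instruction is illustrative:

```python
import textgrad as tg
from textgrad.autograd.functional import llm_call

engine = tg.get_engine("gpt-3.5-turbo")
tg.set_backward_engine(engine)  # engine used to generate textual gradients

prompt = tg.Variable(
    "What is the capital of France?",
    role_description="prompt to the LM",
    requires_grad=True,
)
response = llm_call(prompt, engine=engine)

# Score the response with a textual loss, then backpropagate feedback
# onto the prompt.
loss_fn = tg.TextLoss("Evaluate this answer for correctness and concision.")
loss = loss_fn(response)
loss.backward()
print(prompt.gradients)  # textual feedback accumulated on the prompt
```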
- textgrad.autograd.functional.sum(variables)#
Represents a sum operation on a list of variables. In TextGrad, the sum is simply the concatenation of the variables' values.
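A minimal usage sketch (contents illustrative); the import is aliased to avoid shadowing Python's built-in sum:

```python
from textgrad import Variable
from textgrad.autograd.functional import sum as tg_sum

parts = [
    Variable("First piece of feedback.", role_description="feedback"),
    Variable("Second piece of feedback.", role_description="feedback"),
]

# Forward pass: the result's value is the concatenation of the parts' values.
total = tg_sum(parts)
print(total.value)
```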