textgrad.variable#

Classes

Variable([value, predecessors, requires_grad])

A node in the computation graph, and the core data structure of textgrad.

class textgrad.variable.Variable(value='', predecessors=None, requires_grad=True, *, role_description)#

Bases: object

A node in the computation graph, and the core data structure of textgrad. A Variable holds a string value, a description of its role, and the textual gradients computed for it during backpropagation.

Parameters:
  • role_description (str) – The role of this variable. We find that this has a huge impact on optimization performance, and being specific often helps quite a bit!

  • value (str, optional) – The string value of this variable, defaults to “”. In the future, we’ll go multimodal, for sure!

  • predecessors (List[Variable], optional) – Predecessors of this variable in the computation graph, defaults to None. For instance, for a prompt -> response pair produced by an LLM call, the prompt is the predecessor and the response is the successor.

  • requires_grad (bool, optional) – Whether this variable requires a gradient, defaults to True. If False, gradients are not computed for this variable.
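
A minimal, self-contained sketch of the structure these parameters describe (MiniVariable is an illustrative stand-in, not the real textgrad class):

```python
# Illustrative mock of a computation-graph node (NOT textgrad's actual
# implementation): a string value, a role description, predecessors, and
# a list where textual gradients would accumulate during backward().
class MiniVariable:
    def __init__(self, value="", predecessors=None, requires_grad=True, *, role_description):
        self.value = value
        self.predecessors = predecessors or []
        self.requires_grad = requires_grad
        self.role_description = role_description
        self.gradients = []

# A prompt -> response pair: the prompt is the predecessor of the response.
prompt = MiniVariable("Explain photosynthesis.", role_description="prompt to the LLM")
response = MiniVariable("Photosynthesis is ...", predecessors=[prompt],
                        role_description="response from the LLM")
```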

backward(engine=None)#

Backpropagate gradients through the computation graph starting from this variable.

Parameters:

engine (EngineLM, optional) – The backward engine to use for gradient computation. If not provided, the global engine will be used.

Raises:
  • Exception – If no backward engine is provided and no global engine is set.

  • Exception – If both an engine is provided and the global engine is set.
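
The two exceptions imply that backward() needs exactly one source of a backward engine. A pure-Python sketch of that resolution rule (resolve_backward_engine is an illustrative helper, not textgrad's API; the real method reads the global engine from module state rather than a parameter):

```python
# Sketch of the engine-resolution rule described above (assumed behavior,
# not the library's actual code): exactly one of {explicit engine,
# global engine} must be set.
def resolve_backward_engine(engine=None, global_engine=None):
    if engine is None and global_engine is None:
        raise Exception("No backward engine provided and no global engine set.")
    if engine is not None and global_engine is not None:
        raise Exception("Both an explicit engine and the global engine are set.")
    return engine if engine is not None else global_engine
```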

generate_graph(print_gradients=False)#

Generates a computation graph starting from the variable itself.

Parameters:

print_gradients (bool) – A boolean indicating whether to print gradients in the graph.

Returns:

A visualization of the computation graph.
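
generate_graph walks backward from this variable through its predecessors. A hypothetical sketch of that traversal (attribute names are illustrative; the real method renders a richer visualization and can include gradients):

```python
# Illustrative depth-first walk from a variable back through its
# predecessors -- the direction generate_graph traverses. Returns one
# indented line per node rather than a rendered graph.
def walk_graph(variable, depth=0, lines=None):
    lines = [] if lines is None else lines
    lines.append("  " * depth + variable.role_description)
    for pred in getattr(variable, "predecessors", None) or []:
        walk_graph(pred, depth + 1, lines)
    return lines
```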

get_grad_fn()#
get_gradient_and_context_text()#

Aggregates and returns (i) the gradients on this variable and (ii) the context in which those gradients were computed.

Returns:

A string containing the aggregated gradients and their corresponding context.

Return type:

str

get_gradient_text()#

Aggregates and returns the gradients on this variable.

Return type:

str
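
Since gradients in textgrad are textual feedback, aggregation here amounts to joining strings. A sketch of the assumed behavior (gradient_text is an illustrative helper, not the method itself):

```python
# Assumed aggregation of textual gradients into a single string
# (illustrative; the real method formats its stored gradient objects).
def gradient_text(gradients):
    return "\n".join(str(g) for g in gradients)
```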

get_role_description()#
Return type:

str

get_short_value(n_words_offset=10)#

Returns a short version of the value of the variable. We sometimes use it during optimization when we want to see the variable’s value but don’t want to print it in full, either to save tokens or to avoid repeating very long values, such as code or solutions to hard problems.

Parameters:

n_words_offset (int) – The number of words to show from the beginning and the end of the value.

Return type:

str
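
A sketch of the truncation described above (exact formatting in textgrad may differ): keep n_words_offset words from each end of the value and elide the middle.

```python
# Illustrative truncation: values short enough are returned unchanged;
# longer values keep n_words_offset words from each end.
def short_value(value, n_words_offset=10):
    words = value.split(" ")
    if len(words) <= 2 * n_words_offset:
        return value
    return " ".join(words[:n_words_offset]) + " (...) " + " ".join(words[-n_words_offset:])
```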

get_value()#
reset_gradients()#
set_grad_fn(grad_fn)#
set_role_description(role_description)#
set_value(value)#