textgrad.optimizer.optimizer#
Classes

| Optimizer | Base class for all optimizers. |
| TextualGradientDescent | TextualGradientDescent optimizer |
- class textgrad.optimizer.optimizer.Optimizer(parameters)#
Bases:
ABC
Base class for all optimizers.
- Parameters:
parameters (List[Variable]) – The list of parameters to optimize.
- Methods:
zero_grad(): Clears the gradients of all parameters.
step(): Performs a single optimization step.
- abstract step()#
Performs a single optimization step.
- zero_grad()#
Clears the gradients of all parameters.
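The abstract base class above can be mirrored by a minimal, stdlib-only sketch. Note that the `Variable` stub here is a simplified stand-in for textgrad's real `Variable` type, shown only to make the `zero_grad()`/`step()` contract concrete:

```python
from abc import ABC, abstractmethod
from typing import List


class Variable:
    """Simplified stand-in for textgrad's Variable: a value plus textual feedback."""
    def __init__(self, value: str):
        self.value = value
        self.gradients: List[str] = []  # textual feedback accumulated during backward


class Optimizer(ABC):
    """Base class: holds parameters, clears feedback, defers updates to step()."""
    def __init__(self, parameters: List[Variable]):
        self.parameters = parameters

    def zero_grad(self):
        # Clear the accumulated textual gradients on every parameter.
        for p in self.parameters:
            p.gradients = []

    @abstractmethod
    def step(self):
        ...  # subclasses generate improved text and overwrite parameter values
```

Concrete optimizers subclass this and implement `step()`; `zero_grad()` is typically called between optimization iterations.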
- class textgrad.optimizer.optimizer.TextualGradientDescent(parameters, verbose=0, engine=None, constraints=None, new_variable_tags=['<IMPROVED_VARIABLE>', '</IMPROVED_VARIABLE>'], optimizer_system_prompt='You are part of an optimization system that improves text (i.e., variable). You will be asked to creatively and critically improve prompts, solutions to problems, code, or any other text-based variable. You will receive some feedback, and use the feedback to improve the variable. The feedback may be noisy, identify what is important and what is correct. Pay attention to the role description of the variable, and the context in which it is used. This is very important: You MUST give your response by sending the improved variable between {new_variable_start_tag} {{improved variable}} {new_variable_end_tag} tags. The text you send between the tags will directly replace the variable.\n\n\n### Glossary of tags that will be sent to you:\n# - <LM_SYSTEM_PROMPT>: The system prompt for the language model.\n# - <LM_INPUT>: The input to the language model.\n# - <LM_OUTPUT>: The output of the language model.\n# - <FEEDBACK>: The feedback to the variable.\n# - <CONVERSATION>: The conversation history.\n# - <FOCUS>: The focus of the optimization.\n# - <ROLE>: The role description of the variable.', in_context_examples=None, gradient_memory=0)#
Bases:
Optimizer
TextualGradientDescent optimizer
- Parameters:
engine (EngineLM) – the engine to use for updating variables
parameters (List[Variable]) – the parameters to optimize
verbose (int, optional) – whether to print iterations, defaults to 0
constraints (List[str], optional) – a list of natural language constraints, defaults to []
optimizer_system_prompt (str, optional) – system prompt to the optimizer, defaults to textgrad.prompts.OPTIMIZER_SYSTEM_PROMPT. Needs to accept new_variable_start_tag and new_variable_end_tag
in_context_examples (List[str], optional) – a list of in-context examples, defaults to []
gradient_memory (int, optional) – the number of past gradients to store, defaults to 0
new_variable_tags (List[str])
- property constraint_text#
Returns a formatted string representation of the constraints.
- Returns:
A string containing the constraints in the format “Constraint {index}: {constraint}”.
- Return type:
str
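The format described above can be reproduced with a short sketch (this is not the library's exact implementation, and whether numbering starts at 0 or 1 is an assumption here):

```python
def constraint_text(constraints):
    # Number each natural-language constraint, one per line.
    return "\n".join(
        f"Constraint {i}: {c}" for i, c in enumerate(constraints, start=1)
    )

print(constraint_text(["Keep it under 50 words.", "Use a formal tone."]))
```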
- step()#
Perform a single optimization step. This method updates the parameters of the optimizer by generating new text using the engine and replacing each parameter's value accordingly. It also logs the optimizer response and the updated text.
- Returns:
None
- class textgrad.optimizer.optimizer.TextualGradientDescentwithMomentum(engine, parameters, momentum_window=0, constraints=None, new_variable_tags=['<IMPROVED_VARIABLE>', '</IMPROVED_VARIABLE>'], in_context_examples=None, optimizer_system_prompt='You are part of an optimization system that improves text (i.e., variable). You will be asked to creatively and critically improve prompts, solutions to problems, code, or any other text-based variable. You will receive some feedback, and use the feedback to improve the variable. The feedback may be noisy, identify what is important and what is correct. Pay attention to the role description of the variable, and the context in which it is used. This is very important: You MUST give your response by sending the improved variable between {new_variable_start_tag} {{improved variable}} {new_variable_end_tag} tags. The text you send between the tags will directly replace the variable.\n\n\n### Glossary of tags that will be sent to you:\n# - <LM_SYSTEM_PROMPT>: The system prompt for the language model.\n# - <LM_INPUT>: The input to the language model.\n# - <LM_OUTPUT>: The output of the language model.\n# - <FEEDBACK>: The feedback to the variable.\n# - <CONVERSATION>: The conversation history.\n# - <FOCUS>: The focus of the optimization.\n# - <ROLE>: The role description of the variable.')#
Bases:
Optimizer
- Parameters:
engine (EngineLM) – the engine to use for updating variables
parameters (List[Variable]) – the parameters to optimize
momentum_window (int, optional) – the number of past variable values to retain for momentum, defaults to 0
constraints (List[str], optional) – a list of natural language constraints, defaults to []
new_variable_tags (List[str])
in_context_examples (List[str], optional) – a list of in-context examples, defaults to []
optimizer_system_prompt (str, optional) – system prompt to the optimizer, defaults to textgrad.prompts.OPTIMIZER_SYSTEM_PROMPT. Needs to accept new_variable_start_tag and new_variable_end_tag
- property constraint_text#
Returns a formatted string representation of the constraints.
- Returns:
A string containing the constraints in the format “Constraint {index}: {constraint}”.
- Return type:
str
- step()#
Performs a single optimization step.
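The momentum_window parameter implies the optimizer keeps a bounded history of past variable values to show the engine alongside the current feedback. The bookkeeping can be sketched as below; the class name, the `record`/`as_prompt_section` methods, and the "Past iteration" wording are assumptions for illustration, not textgrad's API:

```python
from collections import deque

class MomentumBuffer:
    """Keeps the last `window` values of a variable so they can be appended
    to the optimizer prompt as extra context (a hedged sketch)."""
    def __init__(self, window: int):
        self.past = deque(maxlen=window)  # oldest entries fall off automatically

    def record(self, value: str):
        self.past.append(value)

    def as_prompt_section(self) -> str:
        return "\n".join(
            f"Past iteration {i}: {v}" for i, v in enumerate(self.past, start=1)
        )
```

With window=0 the deque stays empty, which matches the default momentum_window=0 (no momentum).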