textgrad.loss#
Classes

MultiChoiceTestTime – The test-time loss to use when working on a response to a multiple choice question.
MultiFieldEvaluation – A module to compare two variables using a language model.
MultiFieldTokenParsedEvaluation – A module to compare two variables using a language model.
TextLoss – A vanilla loss function to evaluate a response.
- class textgrad.loss.MultiChoiceTestTime(evaluation_instruction, engine=None, system_prompt=None)#
Bases:
Module
The test-time loss to use when working on a response to a multiple choice question.
- Parameters:
evaluation_instruction – Instruction to use as prefix for the evaluation, specifying how the multiple choice response should be judged.
engine (EngineLM, optional) – The language model to use for the evaluation.
system_prompt (Variable, optional) – System prompt to use for the evaluation.
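A minimal construction sketch (the instruction text below is illustrative, and the forward call is omitted because it requires a configured LLM engine and API key):

```python
from textgrad import get_engine
from textgrad.loss import MultiChoiceTestTime

# Building a test-time loss for multiple choice answers.
# The instruction string is a hypothetical example, not a library default.
engine = get_engine("gpt-4o")
loss_fn = MultiChoiceTestTime(
    "Evaluate whether the response selects the correct option for the question.",
    engine=engine,
)
# Calling loss_fn on Variables issues a request to the engine, so it is not shown here.
```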
- class textgrad.loss.MultiFieldEvaluation(evaluation_instruction, role_descriptions, engine=None, system_prompt=None)#
Bases:
Module
A module to compare two variables using a language model.
- Parameters:
evaluation_instruction (Variable) – Instruction to use as prefix for the comparison, specifying the nature of the comparison.
engine (EngineLM) – The language model to use for the comparison.
role_descriptions (List[str]) – Role descriptions for the variables being compared, e.g. ["prediction to evaluate", "correct result"].
system_prompt (Variable, optional) – System prompt to use for the comparison, defaults to "You are an evaluation system that compares two variables."
- Example:
TODO: Add an example
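A minimal usage sketch for the TODO above (the instruction text and values are illustrative; the forward call is commented out because it requires an LLM API key):

```python
from textgrad import get_engine, Variable
from textgrad.loss import MultiFieldEvaluation

engine = get_engine("gpt-4o")
# Hypothetical evaluation instruction; adapt to your task.
evaluation_instruction = Variable(
    "Compare the prediction to the correct result and judge its accuracy.",
    requires_grad=False,
)
loss_fn = MultiFieldEvaluation(
    evaluation_instruction,
    role_descriptions=["prediction to evaluate", "correct result"],
    engine=engine,
)
prediction = Variable("Paris", requires_grad=True)
reference = Variable("Paris", requires_grad=False)
# loss = loss_fn([prediction, reference])  # issues an engine request; needs API access
```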
- class textgrad.loss.MultiFieldTokenParsedEvaluation(evaluation_instruction, role_descriptions, engine=None, system_prompt=None, parse_tags=None)#
Bases:
MultiFieldEvaluation
A module to compare two variables using a language model, parsing the evaluation out of the model output between the given tags.
- Parameters:
evaluation_instruction (Variable) – Instruction to use as prefix for the comparison, specifying the nature of the comparison.
engine (EngineLM) – The language model to use for the comparison.
role_descriptions (List[str]) – Role descriptions for the variables being compared, e.g. ["prediction to evaluate", "correct result"].
system_prompt (Variable, optional) – System prompt to use for the comparison, defaults to "You are an evaluation system that compares two variables."
parse_tags (List[str]) – Tags delimiting the portion of the model output to parse as the evaluation result.
- Example:
TODO: Add an example
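A minimal usage sketch for the TODO above. The instruction text and the <ACCURACY> tags are illustrative placeholders, and the forward call is commented out because it requires an LLM API key:

```python
from textgrad import get_engine, Variable
from textgrad.loss import MultiFieldTokenParsedEvaluation

engine = get_engine("gpt-4o")
# Hypothetical instruction asking the evaluator to wrap its verdict in tags.
evaluation_instruction = Variable(
    "Compare the prediction to the correct result. "
    "Report your verdict between <ACCURACY> and </ACCURACY> tags.",
    requires_grad=False,
)
loss_fn = MultiFieldTokenParsedEvaluation(
    evaluation_instruction,
    role_descriptions=["prediction to evaluate", "correct result"],
    engine=engine,
    parse_tags=["<ACCURACY>", "</ACCURACY>"],  # hypothetical tag pair
)
prediction = Variable("Paris", requires_grad=True)
reference = Variable("Paris", requires_grad=False)
# loss = loss_fn([prediction, reference])  # issues an engine request; needs API access
```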
- class textgrad.loss.TextLoss(eval_system_prompt, engine=None)#
Bases:
Module
A vanilla loss function to evaluate a response. In particular, this module is used to evaluate any given text object.
- Parameters:
eval_system_prompt – The system prompt (or evaluation instruction) used to evaluate the response.
engine (EngineLM, optional) – The language model to use for the evaluation.
- Example:
>>> from textgrad import get_engine, Variable
>>> from textgrad.loss import TextLoss
>>> engine = get_engine("gpt-4o")
>>> evaluation_instruction = Variable("Is this a good joke?", requires_grad=False)
>>> response_evaluator = TextLoss(evaluation_instruction, engine)
>>> response = Variable("What did the fish say when it hit the wall? Dam.", requires_grad=True)
>>> response_evaluator(response)