Creating Encodable For Callable Type #456
base: staging-llm
Conversation
effectful/handlers/llm/encoding.py (Outdated)

```diff
 @classmethod
 @abstractmethod
-def decode(cls, vl: U) -> T:
+def decode(cls, vl: U, template: typing.Any = None) -> T:
```
`encode` and `decode` are intended mainly to deal with types that have special meaning for the LLM API. We shouldn't extend the interface in this way.
effectful/handlers/llm/synthesis.py (Outdated)

```python
# Start with provided context or empty dict
# Include collections module for type hints in synthesized code
exec_globals: dict[str, typing.Any] = {"collections": collections}
```
The synthesized code should include imports for the types it uses, or those types should come from the lexical context.
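A minimal sketch of the suggestion above: instead of hard-coding `{"collections": collections}`, the `exec` globals start from the caller's lexical context, and the synthesized code carries its own imports. `build_exec_globals` is a hypothetical helper name, not part of the PR.

```python
import typing


def build_exec_globals(context: dict[str, typing.Any]) -> dict[str, typing.Any]:
    """Seed exec globals from the lexical context; synthesized code
    is expected to import any other modules it needs itself."""
    return dict(context)


# Synthesized code that imports what it needs rather than relying on
# modules being pre-injected into its globals.
code = (
    "import collections\n"
    "def counts(xs):\n"
    "    return collections.Counter(xs)\n"
)
exec_globals = build_exec_globals({})
exec(code, exec_globals)
assert exec_globals["counts"]("aab") == {"a": 2, "b": 1}
```

This keeps the synthesis handler agnostic about which modules generated code happens to use.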
effectful/handlers/llm/synthesis.py (Outdated)

```python
_decode_counter: typing.ClassVar[int] = 0

@classmethod
def decode(cls, vl: SynthesizedFunction, template: typing.Any = None) -> Callable:
```
This could be a `decode_response` handler, rather than changing the `decode` interface.
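A hedged sketch of that indirection, with illustrative names and signatures (not the PR's actual code): a separate `decode_response` handler normalizes the raw LLM response first, so `decode` keeps its original one-argument shape.

```python
import json


def decode_response(raw: str) -> dict:
    """Handler sitting between the LLM response and decode:
    turns the raw response text into the value decode expects."""
    return json.loads(raw)


def decode(vl: dict) -> list:
    """Unchanged decode interface: consumes the already-normalized
    value and never sees the raw response or a template parameter."""
    return vl["items"]


raw_response = '{"items": [1, 2, 3]}'
assert decode(decode_response(raw_response)) == [1, 2, 3]
```

The design point is separation of concerns: response handling can vary per backend while the `decode` contract stays stable.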
Thanks! It makes sense. So we add an indirection between the LLM response and `decode`?
I implemented it, but I'm not sure it's the cleanest implementation, mainly in how we pass context to `decode`, since the context has to be bound before executing the returned code.
effectful/handlers/llm/synthesis.py (Outdated)

```python
context: LexicalContext | None = None
if template is not None and hasattr(template, "__context__"):
    ctx = template.__context__
    context = ctx if isinstance(ctx, LexicalContext) else LexicalContext(ctx)
```
We should assume there is always a lexical context.
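To illustrate the point, here is a hypothetical simplification (class and helper names are made up for the sketch): if every template is guaranteed to carry a `__context__`, the `None`/`hasattr` guards shown in the diff collapse to a single lookup.

```python
class LexicalContext(dict):
    """Stand-in for the PR's lexical-context mapping."""


class Template:
    def __init__(self, context: dict):
        # Invariant per the review comment: a context is always attached.
        self.__context__ = LexicalContext(context)


def get_context(template) -> LexicalContext:
    # No None check, no hasattr: the lexical context always exists.
    ctx = template.__context__
    return ctx if isinstance(ctx, LexicalContext) else LexicalContext(ctx)


t = Template({"x": 1})
assert get_context(t)["x"] == 1
```

Enforcing the invariant at template construction time pushes the defensive checks out of `decode` entirely.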
Thanks! I've updated it
Summary of changes:
- Added a `template` parameter to `decode`; the template is used to gather the lexical context for decoding synthesized functions.