
[FEATURE REQUEST] Bad words list #43


Closed
Peter-Devine opened this issue Nov 5, 2021 · 2 comments

@Peter-Devine

I don't have access to the GPT-3 API yet (a guy can dream, eh?), but I have been reading through the docs, and it seems like the completion module would be perfect for my use case except that it lacks a "bad words list" feature.

This feature would prevent certain words from being generated in the completion output. I am aware of the logit_bias argument, but that only stops individual tokens from being generated.
My idea would take an arbitrary string (or list of token IDs) as input and then block the completion of that string given the tokens before it.
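
For context, a rough sketch of what the existing logit_bias argument can already do, i.e. banning individual token IDs; the token IDs, engine name, and API key below are placeholders for illustration, not a recommendation:

```python
import openai  # legacy openai-python style from when this issue was filed

openai.api_key = "sk-..."  # placeholder

# logit_bias maps individual token IDs (as strings) to a bias in [-100, 100];
# -100 effectively bans that single token. A multi-token word cannot be
# banned as a unit this way, which is the gap this request is about.
banned_token_ids = [31373, 50256]  # hypothetical token IDs, illustration only

response = openai.Completion.create(
    engine="davinci",  # engine name is illustrative
    prompt="Write a short product description:",
    max_tokens=64,
    logit_bias={str(tid): -100 for tid in banned_token_ids},
)
print(response["choices"][0]["text"])
```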

I successfully requested this feature for the Hugging Face .generate API many moons ago. Please see my feature request for a fuller run-down of how it could be implemented (link: huggingface/transformers#3061).

It would be a useful feature for customers because it would give peace of mind that the models they are serving are not going to output any unsavoury language. An alternative would be to train the model not to output generally bad language (e.g. overly aggressive or xenophobic language) through thoughtful use of training data, but since everyone's definition of bad language is different, it would be nice to be able to customise the model accordingly.

Thanks!

@monsieurpooh commented Nov 21, 2022

Bump. Another way to think about it is conditional token biasing; e.g. sometimes you want to disallow "on" but only after a specific word. There doesn't seem to be a way to do this currently with the OpenAI or Goose AI APIs, but Hugging Face Transformers offers it via bad_words_ids.
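
For comparison, a rough sketch of the Hugging Face bad_words_ids approach; the model name and banned phrases are purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each banned phrase is encoded to its token-ID sequence; generate() then
# refuses to emit that exact sequence, so a token can be disallowed only
# when it follows specific preceding tokens. Note the leading spaces,
# which matter for GPT-2's BPE tokenization.
bad_words_ids = tokenizer(
    [" on fire", " unsavoury"], add_special_tokens=False
).input_ids  # list of token-ID lists, the format bad_words_ids expects

inputs = tokenizer("The building is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    bad_words_ids=bad_words_ids,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```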

@rattrayalex (Collaborator)

Thanks for the suggestion!

This sounds like a feature request for the underlying OpenAI API and not the Python library, so I'm going to go ahead and close this issue.

Would you mind reposting at community.openai.com?

@rattrayalex closed this as not planned on Dec 30, 2023.
safa0 pushed a commit to safa0/openai-agents-python that referenced this issue Apr 27, 2025
…enai#60)

This PR introduces a `strict_mode: bool = True` option to
`@function_tool`, allowing optional parameters when set to False. This
change enables more flexibility while maintaining strict JSON schema
validation by default.

resolves openai#43 

## Changes:

- Added `strict_mode` parameter to `@function_tool` and passed it to
`function_schema` and `FunctionTool`.
- Updated `function_schema.py` to respect `strict_mode` and allow
optional parameters when set to False.
- Added unit tests to verify optional parameters work correctly,
including multiple optional params with different types.

## Tests:

- Verified function calls with missing optional parameters behave as
expected.
- Added async tests to validate behavior under different configurations.
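
For illustration, a rough usage sketch of the option described above; the import path and the tool itself are assumptions based on the PR description, not code taken from the repository:

```python
from agents import function_tool  # import path assumed from openai-agents-python

# With strict_mode=False, the generated JSON schema no longer requires every
# field, so the model may omit `units` and the default value is used instead.
@function_tool(strict_mode=False)
def get_weather(city: str, units: str = "celsius") -> str:
    """Return a short, hypothetical weather summary for a city."""
    return f"The weather in {city} is 21 degrees {units}."
```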