
[BUG]: Getting TypeError: 'method' object is not iterable while using the LLM predictor #7

Open
shivam-4-14 opened this issue Sep 8, 2023 · 1 comment

@shivam-4-14

Issue Description

Summary:
I encountered a TypeError while using the LLMPredictor class with a custom FlanLLM class. It appears to be related to the _identifying_params method.

Details:
When trying to create an instance of LLMPredictor with llm=FlanLLM(), I received the following error:

TypeError: 'method' object is not iterable

Expected Behavior
I expected to be able to create an instance of LLMPredictor using my custom FlanLLM class without encountering any errors.

Actual Behavior
I received a TypeError when attempting to create an instance of LLMPredictor. The error message indicates that there's an issue with the _identifying_params method in the FlanLLM class.

Steps to Reproduce

  1. Define the custom FlanLLM class shown in the Code Snippet section below.

  2. Create an instance of FlanLLM and attempt to construct an LLMPredictor with it:

llm_instance = FlanLLM()
llm_predictor = LLMPredictor(llm=llm_instance)

Code Snippet

# Imports assumed from context: LLM is LangChain's base class and
# LLMPredictor comes from LlamaIndex.
import torch
from transformers import pipeline
from langchain.llms.base import LLM
from llama_index import LLMPredictor

class FlanLLM(LLM):
    model_name = "google/flan-t5-large"
    pipeline = pipeline("text2text-generation", model=model_name, device=0,
                        model_kwargs={"torch_dtype": torch.bfloat16})

    def _call(self, prompt, stop=None):
        return self.pipeline(prompt, max_length=9999)[0]["generated_text"]

    # Defined here as plain methods; the error message points at these.
    def _identifying_params(self):
        return {"name_of_model": self.model_name}

    def _llm_type(self):
        return "custom"

llm_instance = FlanLLM()
llm_predictor = LLMPredictor(llm=llm_instance)  # TypeError raised here
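
For what it's worth, the same TypeError can be reproduced without LLMPredictor at all, which suggests something downstream is iterating over _identifying_params and receiving a bound method instead of a mapping. A minimal standalone check (hypothetical, just to illustrate the mechanism):

llm_instance = FlanLLM()

# Because _identifying_params is defined as a plain method, this expression
# evaluates to a bound method object, not a dict:
params = llm_instance._identifying_params

# Anything that tries to iterate it fails the same way:
# TypeError: 'method' object is not iterable
dict(params)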

Additional Information

  • I have ensured that the FlanLLM class correctly inherits from the LLM class.
  • The error occurs at the line where LLMPredictor is instantiated.
  • I am using the appropriate versions of the required libraries and packages.

Screenshots or Log Output

[If applicable, include screenshots or log output that may help diagnose the issue.]

Possible Solutions

I'm not sure what is causing this TypeError. It appears to be related to the _identifying_params method, but I'm unsure how to resolve it. Any guidance or suggestions would be greatly appreciated.
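
One thing I noticed while digging: in LangChain's LLM base class, _identifying_params and _llm_type are defined as properties, not plain methods. If that is the cause here, the fix might be as simple as adding the @property decorator. A sketch under that assumption (not a confirmed fix):

from typing import Any, Mapping

class FlanLLM(LLM):
    model_name = "google/flan-t5-large"
    pipeline = pipeline("text2text-generation", model=model_name, device=0,
                        model_kwargs={"torch_dtype": torch.bfloat16})

    def _call(self, prompt, stop=None):
        return self.pipeline(prompt, max_length=9999)[0]["generated_text"]

    # With @property, self._identifying_params evaluates to the mapping
    # itself rather than a bound method, so it can be iterated.
    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"name_of_model": self.model_name}

    @property
    def _llm_type(self) -> str:
        return "custom"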

Steps Taken to Resolve

I have reviewed my code, checked for typos, and ensured that the method names and parameters match the expected format. However, I have not been able to resolve this issue on my own.

Note: Please let me know if you need any additional information or if there are specific steps I should take to troubleshoot this issue further.


[Screenshot attached: Screenshot (224)]

@Manan-Santoki

Hi. Did you manage to find a fix? I am facing the same issue as well.
