
chore: Add support for more Groq models #294


Closed
leehuwuj wants to merge 4 commits

Conversation

leehuwuj (Collaborator) commented Sep 17, 2024

Summary by CodeRabbit

  • New Features
    • Introduced new model mappings for enhanced data processing capabilities, including various versions of the "llama," "gemma," and "llava" models.
    • Expanded options for model selection, allowing for more tailored usage based on specific needs and capabilities.

These updates aim to improve the versatility and functionality of the application for end-users.


changeset-bot bot commented Sep 17, 2024

🦋 Changeset detected

Latest commit: 3a107e3

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:

  • create-llama (Patch)



coderabbitai bot commented Sep 17, 2024

Warning

Rate limit exceeded

@leehuwuj has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 3 minutes and 57 seconds before requesting another review.


Commits: files that changed from the base of the PR and between 2d3a8d3 and 3a107e3.

Walkthrough

The pull request introduces a new patch named "create-llama," which adds several new Groq models to enhance the system's capabilities. It modifies various files to include additional model mappings, expanding the available options for users. The changes maintain the existing control flow while enriching the model selection process, allowing for more tailored interactions with the data.

Changes

  • .changeset/giant-trees-cheer.md: added the "create-llama" patch entry to introduce the new Groq models.
  • templates/components/settings/python/settings.py: added multiple model mappings to the model_map dictionary, including new entries for the Llama and Gemma models.
  • templates/types/streaming/express/src/controllers/engine/settings.ts: introduced new key-value pairs for model configurations in the settings object within initGroq (see the sketch after this list).
  • templates/types/streaming/nextjs/app/api/chat/engine/settings.ts: added new mappings for model identifiers and configurations in the initGroq function.
  • helpers/providers/groq.ts: expanded the MODELS constant to include additional models, broadening the selection.
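To make the settings changes concrete, here is a minimal, hypothetical sketch of the kind of name-to-model-ID mapping an initGroq-style helper might gain. The actual key names, Groq model IDs, and object shape used in this PR's settings.ts files may differ.

```ts
// Hypothetical sketch only: extending a friendly-name -> Groq model ID map
// inside an initGroq-style helper. The exact keys, IDs, and structure in the
// PR's settings.ts files are assumptions, not copied from the diff.
const GROQ_MODEL_MAP: Record<string, string> = {
  // existing entries ...
  "llama-3.1-70b": "llama-3.1-70b-versatile",
  "llama-3.1-8b": "llama-3.1-8b-instant",
  "llama3-groq-70b-tool-use": "llama3-groq-70b-8192-tool-use-preview",
  "llama3-groq-8b-tool-use": "llama3-groq-8b-8192-tool-use-preview",
  "gemma2-9b": "gemma2-9b-it",
};

function resolveGroqModel(name: string): string {
  // Fall back to the raw name so unrecognized models pass through unchanged.
  return GROQ_MODEL_MAP[name] ?? name;
}
```

Keeping the friendly names decoupled from provider-specific IDs like this lets CLI prompts stay stable even if Groq renames or versions the underlying models.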


🐇 In the garden where models bloom,
New Llamas dance, dispelling gloom.
With patches bright, they leap and play,
Enhancing queries in a joyful way.
Hop along, dear friends, and see,
How rich our data can truly be! 🌼



@@ -62,7 +62,7 @@ export const supportedTools: Tool[] = [
     dependencies: [
       {
         name: "duckduckgo-search",
-        version: "6.1.7",
+        version: "6.2.11",
leehuwuj (Collaborator, Author) commented:

small update for duckduckgo-search
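For context, the bumped dependency sits inside a supportedTools entry. The sketch below is a hypothetical reconstruction of that shape; only the "duckduckgo-search" package name and the version strings come from the diff, while the Tool interface and the remaining fields are illustrative assumptions.

```ts
// Hypothetical reconstruction of the tool entry touched by the diff above.
// Only "duckduckgo-search" and the versions are from the PR; the interfaces
// and other fields are assumptions for illustration.
interface ToolDependency {
  name: string;
  version: string;
}

interface Tool {
  name: string;
  dependencies: ToolDependency[];
}

const duckduckgoTool: Tool = {
  name: "duckduckgo",
  dependencies: [
    {
      name: "duckduckgo-search",
      version: "6.2.11", // bumped from 6.1.7
    },
  ],
};
```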

"llama-3.1-70b",
"llama-3.1-8b",
"llama3-groq-70b-tool-use",
"llama3-groq-8b-tool-use",
leehuwuj (Collaborator, Author) commented:

llama3-groq-8b-tool-use: this model does not work well with the ReAct agent.
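Based on the excerpt above, the expanded MODELS constant in helpers/providers/groq.ts presumably looks something like the sketch below; whether it is a plain string array or carries extra metadata is an assumption.

```ts
// Sketch of the expanded MODELS list in helpers/providers/groq.ts.
// Only the model names come from the excerpt above; the plain string-array
// shape (and the export) is assumed.
export const MODELS: string[] = [
  "llama-3.1-70b",
  "llama-3.1-8b",
  "llama3-groq-70b-tool-use",
  // Flagged in review: this model did not work well with the ReAct agent.
  "llama3-groq-8b-tool-use",
];
```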

leehuwuj (Collaborator, Author) commented:

INFO:     Started server process [31412]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Creating chat engine with filters: filters=[MetadataFilter(key='private', value='true', operator=<FilterOperator.NE: '!='>)] condition=<FilterCondition.AND: 'and'>
> Running step 2e32ecc0-a18b-49ac-a4c9-180ddf9ca65e. Step input: What is the current status of Long Thanh airport? Use the tools
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
> Running step b2e926fa-5e74-4ddd-98b6-d34e04adbee5. Step input: None
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
> Running step 13ff9ef8-dc41-4ef2-8ba0-c00bf4194497. Step input: None
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
> Running step ff0b6ac4-623b-4352-8937-b96fddfc39a8. Step input: None
INFO:     127.0.0.1:60539 - "POST /api/chat HTTP/1.1" 200 OK
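For context on the repeated "Could not parse output" lines: a ReAct agent expects each model turn to follow a thought-action-input structure, roughly as in the illustrative parser below. This is not LlamaIndex's actual implementation, only a sketch of the format the model failed to produce.

```ts
// Illustrative only: a minimal parser for the thought/action/action-input
// structure a ReAct loop expects. Models that reply with free-form text
// instead of this structure trigger errors like the ones logged above.
interface ReActStep {
  thought: string;
  action: string;
  actionInput: string;
}

function parseReActStep(output: string): ReActStep {
  const match = output.match(
    /Thought:\s*(?<thought>[\s\S]*?)\n+Action:\s*(?<action>.*)\n+Action Input:\s*(?<actionInput>[\s\S]*)/,
  );
  const groups = match?.groups;
  if (!groups) {
    throw new Error(
      "Could not parse output. Please follow the thought-action-input format.",
    );
  }
  return {
    thought: groups.thought.trim(),
    action: groups.action.trim(),
    actionInput: groups.actionInput.trim(),
  };
}
```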

leehuwuj (Collaborator, Author) commented:

Closing this in favor of #278.

leehuwuj closed this on Sep 17, 2024.