Commit 8ad1b6a

Author: Jet Xu (committed)

- Optimized `simple_mode`:
  - Removed dependencies on `Torch` and `Transformers` libraries
  - Reduced memory footprint
  - Eliminated related imports
  - Enhanced compatibility with AWS Lambda environment

1 parent e2b463e · commit 8ad1b6a

File tree: 4 files changed, +16 −8 lines


CHANGELOG.md (10 additions, 0 deletions)

```diff
@@ -5,6 +5,15 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.1.4] - 2024-10-14
+
+### Improved
+- Optimized `simple_mode`:
+  - Removed dependencies on `Torch` and `Transformers` libraries
+  - Reduced memory footprint
+  - Eliminated related imports
+  - Enhanced compatibility with AWS Lambda environment
+
 ## [0.1.3] - 2024-10-14
 
 ### Added
@@ -63,6 +72,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Basic functionality for retrieving context from GitHub repositories
 - Integration with LLM for processing and generating responses
 
+[0.1.4]: https://github.com/JetXu-LLM/llama-github/compare/v0.1.3...v0.1.4
 [0.1.3]: https://github.com/JetXu-LLM/llama-github/compare/v0.1.2...v0.1.3
 [0.1.2]: https://github.com/JetXu-LLM/llama-github/compare/v0.1.1...v0.1.2
 [0.1.1]: https://github.com/JetXu-LLM/llama-github/compare/v0.1.0...v0.1.1
```

llama_github/llm_integration/initial_load.py (4 additions, 6 deletions)

```diff
@@ -1,5 +1,4 @@
 # initial_load.py
-import torch
 from typing import Optional, Any
 from threading import Lock
 from langchain_openai import ChatOpenAI
@@ -10,11 +9,6 @@
 from llama_github.config.config import config
 from llama_github.logger import logger
 
-from transformers import AutoModel
-from transformers import AutoModelForSequenceClassification
-from transformers import AutoTokenizer
-
-
 class LLMManager:
     _instance_lock = Lock()
     _instance = None
@@ -74,6 +68,10 @@ def __init__(self,
             self.model_type = "Hubgingface"
 
         if not self.simple_mode:
+            import torch
+            from transformers import AutoModel
+            from transformers import AutoModelForSequenceClassification
+            from transformers import AutoTokenizer
             # initial model_kwargs
             if torch.cuda.is_available():
                 self.device = torch.device('cuda')
```
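The hunk above moves the heavy `torch` and `transformers` imports from module level into the `if not self.simple_mode:` branch, so deployments that only use `simple_mode` (such as AWS Lambda) never pay the import cost. The general pattern can be sketched as follows; `SimpleManager` is a hypothetical class for illustration (the real class is `LLMManager`), and the deferred `torch` import is only reached on the non-simple path:

```python
class SimpleManager:
    """Minimal sketch of lazy-loading heavy dependencies."""

    def __init__(self, simple_mode: bool = True):
        self.simple_mode = simple_mode
        self.device = None
        if not self.simple_mode:
            # Deferred import: torch is loaded only when the heavy
            # path is actually taken, keeping simple_mode lightweight.
            import torch
            self.device = torch.device(
                'cuda' if torch.cuda.is_available() else 'cpu')
```

With this shape, constructing `SimpleManager()` in its default mode never touches `torch`, so the library can run in environments where it is not installed at all.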

llama_github/version.py (1 addition, 1 deletion)

```diff
@@ -1 +1 @@
-__version__ = '0.1.3'
+__version__ = '0.1.4'
```

setup.cfg (1 addition, 1 deletion)

```diff
@@ -1,6 +1,6 @@
 [metadata]
 name = llama-github
-version = 0.1.3
+version = 0.1.4
 author = Jet Xu
 author_email = [email protected]
 description = Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and Auto-dev Agents to conduct Retrieval from actively selected GitHub public projects. It Augments through LLMs and Generates context for any coding question, in order to streamline the development of sophisticated AI-driven applications.
```
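After the version bump above, a downstream environment can confirm which release of the package is installed. This is a minimal check using the standard-library `importlib.metadata` (not part of this commit), written so it degrades gracefully when the package is absent:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_llama_github_version() -> str:
    # Returns the installed version string of llama-github
    # (e.g. "0.1.4"), or a placeholder when the package is
    # not present in the current environment.
    try:
        return version("llama-github")
    except PackageNotFoundError:
        return "not installed"
```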
