Commit a265f31

Documentation for LangChain integration (#1310)
* Documentation for LangChain integration
* Update langchain-integration.md
1 parent 6afc0ec commit a265f31

File tree

1 file changed: +76 -0 lines changed

Diff for: docs/llms/langchain-integration.md

<!--
Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# **DeepSparse LangChain Integration**

[DeepSparse](https://github.com/neuralmagic/deepsparse) has an official integration within [LangChain](https://python.langchain.com/docs/integrations/llms/deepsparse).
This guide is broken into two parts: installation, then examples of DeepSparse usage.

## Installation and Setup

- Install the Python packages with `pip install deepsparse-nightly langchain`
- Choose a [SparseZoo model](https://sparsezoo.neuralmagic.com/?useCase=text_generation) or export a supported model to ONNX [using Optimum](https://github.com/neuralmagic/notebooks/blob/main/notebooks/opt-text-generation-deepsparse-quickstart/OPT_Text_Generation_DeepSparse_Quickstart.ipynb)
- Models hosted on Hugging Face are also supported by prepending `"hf:"` to the model ID, such as [`"hf:mgoin/TinyStories-33M-quant-deepsparse"`](https://huggingface.co/mgoin/TinyStories-33M-quant-deepsparse), as shown in the sketch below
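
As a minimal sketch of that last step (assuming the packages above are installed; the prompt text is illustrative), a Hugging Face-hosted model loads the same way as a SparseZoo stub:

```python
from langchain.llms import DeepSparse

# The "hf:" prefix tells DeepSparse to fetch the ONNX model from the Hugging Face Hub.
llm = DeepSparse(model="hf:mgoin/TinyStories-33M-quant-deepsparse")
print(llm("Once upon a time"))
```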

## Wrappers

There is a DeepSparse LLM wrapper, which you can import with:

```python
from langchain.llms import DeepSparse
```

It provides a simple, unified interface for all models:

```python
from langchain.llms import DeepSparse

llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none')
print(llm('def fib():'))
```

It also supports per-token output streaming:

```python
from langchain.llms import DeepSparse

llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base_quant-none",
    streaming=True,
)
for chunk in llm.stream("Tell me a joke", stop=["'", "\n"]):
    print(chunk, end="", flush=True)
```
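
Because the wrapper is a standard LangChain LLM, it also composes with the rest of the framework. Here is a minimal sketch using LangChain's `PromptTemplate` and `LLMChain`; the model choice and prompt template are illustrative, not part of the integration itself:

```python
from langchain.chains import LLMChain
from langchain.llms import DeepSparse
from langchain.prompts import PromptTemplate

# Illustrative template; any prompt with one input variable works the same way.
prompt = PromptTemplate.from_template("Tell me a short story about {topic}.")
llm = DeepSparse(model="hf:mgoin/TinyStories-33M-quant-deepsparse")

# The chain formats the prompt, then runs it through the DeepSparse LLM.
chain = LLMChain(prompt=prompt, llm=llm)
print(chain.run(topic="a brave little robot"))
```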

## Configuration

The wrapper exposes arguments that control which model is loaded, how it is loaded (`model_config`), how tokens are generated (`generation_config`), and whether to return all tokens at once or stream them one by one (`streaming`):

```python
model: str
"""The path to a model file or directory or the name of a SparseZoo model stub."""

model_config: Optional[Dict[str, Any]] = None
"""Keyword arguments passed to the pipeline construction.
Common parameters are sequence_length, prompt_sequence_length."""

generation_config: Union[None, str, Dict] = None
"""GenerationConfig dictionary consisting of parameters used to control
sequences generated for each prompt. Common parameters are:
max_length, max_new_tokens, num_return_sequences, output_scores,
top_p, top_k, repetition_penalty."""

streaming: bool = False
"""Whether to stream the results, token by token."""
```
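
As a sketch of how these arguments fit together (the specific key values below are illustrative, not recommendations):

```python
from langchain.llms import DeepSparse

# sequence_length and max_new_tokens are examples of the model_config and
# generation_config keys documented above; the values are illustrative.
llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base_quant-none",
    model_config={"sequence_length": 512},
    generation_config={"max_new_tokens": 64},
)
print(llm("def fib():"))
```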
