
Commit 75ec68e

Create example Node.js application
1 parent 20b6e6d commit 75ec68e

8 files changed: +372 −2 lines


README.md

+1-1
```diff
@@ -112,8 +112,8 @@ Want to jump straight in? Get started with one of our sample applications/templa
 | React | Multilingual translation website | [link](./examples/react-translator/) |
 | Browser extension | Text classification extension | [link](./examples/extension/) |
 | Electron | Text classification application | [link](./examples/electron/) |
+| Node.js | Sentiment analysis API | [link](./examples/node/) |
 | Next.js | *Coming soon* | [link](./examples/next/) |
-| Node.js | *Coming soon* | [link](./examples/node/) |
 
 
 ## Custom usage
```

docs/snippets/3_examples.snippet

+1-1
```diff
@@ -5,5 +5,5 @@ Want to jump straight in? Get started with one of our sample applications/templa
 | React | Multilingual translation website | [link](./examples/react-translator/) |
 | Browser extension | Text classification extension | [link](./examples/extension/) |
 | Electron | Text classification application | [link](./examples/electron/) |
+| Node.js | Sentiment analysis API | [link](./examples/node/) |
 | Next.js | *Coming soon* | [link](./examples/next/) |
-| Node.js | *Coming soon* | [link](./examples/node/) |
```

docs/source/_toctree.yml

+2
```diff
@@ -15,6 +15,8 @@
     title: Building a Browser Extension
   - local: tutorials/electron
     title: Building an Electron Application
+  - local: tutorials/node
+    title: Server-side Inference in Node.js
   title: Tutorials
 - sections:
   - local: api/transformers
```

docs/source/tutorials/node.md

+218
# Server-side Inference in Node.js

Although Transformers.js was originally designed to be used in the browser, it's sometimes necessary to run inference on the server. In this tutorial, we will design a simple Node.js API that uses Transformers.js for sentiment analysis.

We'll also show you how to use the library in both CommonJS and ECMAScript modules, so you can choose the module system that works best for your project:

- [ECMAScript modules (ESM)](#ecmascript-modules-esm) - The official standard format to package JavaScript code for reuse. It's the default module system in modern browsers, with modules imported using `import` and exported using `export`. Fortunately, starting with version 13.2.0, Node.js has stable support for ES modules.
- [CommonJS](#commonjs) - The default module system in Node.js. In this system, modules are imported using `require()` and exported using `module.exports`.

<Tip>

Although you can always use the [Python library](https://github.com/huggingface/transformers) for server-side inference, using Transformers.js means that you can write all of your code in JavaScript (instead of having to set up and communicate with a separate Python process).

</Tip>

**Useful links:**
- Source code ([ESM](https://github.com/xenova/transformers.js/tree/main/examples/node/esm/app.js) or [CommonJS](https://github.com/xenova/transformers.js/tree/main/examples/node/commonjs/app.js))
- [Documentation](https://huggingface.co/docs/transformers.js)

## Prerequisites

- [Node.js](https://nodejs.org/en/) version 16+
- [npm](https://www.npmjs.com/) version 7+


## Getting started

Let's start by creating a new Node.js project and installing Transformers.js via [NPM](https://www.npmjs.com/package/@xenova/transformers):

```bash
npm init -y
npm i @xenova/transformers
```

Next, create a new file called `app.js`, which will be the entry point for our application. Depending on whether you're using [ECMAScript modules](#ecmascript-modules-esm) or [CommonJS](#commonjs), you will need to do some things differently (see below).

We'll also create a helper class called `MyClassificationPipeline` to control the loading of the pipeline. It uses the [singleton pattern](https://en.wikipedia.org/wiki/Singleton_pattern) to lazily create a single instance of the pipeline when `getInstance` is first called, and reuses this pipeline for all subsequent calls:
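The behavior we want can be sketched in isolation. In this minimal sketch, the hypothetical `loadModel` function stands in for the real `pipeline()` call; the key point is that the *promise* is cached, so concurrent callers all share a single load:

```javascript
// Stand-in for the expensive pipeline() call (hypothetical; loads no real model).
const loadModel = async () => {
    console.log('loading model...'); // runs only once
    return { name: 'stub-model' };
};

class MyPipelineSketch {
    static instance = null;

    static async getInstance() {
        if (this.instance === null) {
            // Cache the promise itself, so concurrent callers share one load.
            this.instance = loadModel();
        }
        return this.instance;
    }
}

// Both calls resolve to the same object; "loading model..." prints once.
Promise.all([MyPipelineSketch.getInstance(), MyPipelineSketch.getInstance()])
    .then(([a, b]) => console.log(a === b)); // true
```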

### ECMAScript modules (ESM)

To indicate that your project uses ECMAScript modules, you need to add `"type": "module"` to your `package.json`:

```json
{
  ...
  "type": "module",
  ...
}
```

Next, you will need to add the following imports to the top of `app.js`:

```javascript
import http from 'http';
import querystring from 'querystring';
import url from 'url';
```

Following that, let's import Transformers.js and define the `MyClassificationPipeline` class.

```javascript
import { pipeline, env } from '@xenova/transformers';

class MyClassificationPipeline {
    static task = 'text-classification';
    static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
    static instance = null;

    static async getInstance(progress_callback = null) {
        if (this.instance === null) {
            // NOTE: Uncomment this to change the cache directory
            // env.cacheDir = './.cache';

            this.instance = pipeline(this.task, this.model, { progress_callback });
        }

        return this.instance;
    }
}
```

### CommonJS

Start by adding the following imports to the top of `app.js`:

```javascript
const http = require('http');
const querystring = require('querystring');
const url = require('url');
```

Following that, let's import Transformers.js and define the `MyClassificationPipeline` class. Since Transformers.js is an ESM module, we will need to dynamically import the library using the [`import()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import) function:

```javascript
class MyClassificationPipeline {
    static task = 'text-classification';
    static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
    static instance = null;

    static async getInstance(progress_callback = null) {
        if (this.instance === null) {
            // Dynamically import the Transformers.js library
            let { pipeline, env } = await import('@xenova/transformers');

            // NOTE: Uncomment this to change the cache directory
            // env.cacheDir = './.cache';

            this.instance = pipeline(this.task, this.model, { progress_callback });
        }

        return this.instance;
    }
}
```

## Creating a basic HTTP server

Next, let's create a basic server with the built-in [HTTP](https://nodejs.org/api/http.html#http) module. We will listen for requests made to the server (using the `/classify` endpoint), extract the `text` query parameter, and run it through the pipeline.

```javascript
// Define the HTTP server
const server = http.createServer();
const hostname = '127.0.0.1';
const port = 3000;

// Listen for requests made to the server
server.on('request', async (req, res) => {
    // Parse the request URL
    const parsedUrl = url.parse(req.url);

    // Extract the query parameters
    const { text } = querystring.parse(parsedUrl.query);

    // Set the response headers
    res.setHeader('Content-Type', 'application/json');

    let response;
    if (parsedUrl.pathname === '/classify' && text) {
        const classifier = await MyClassificationPipeline.getInstance();
        response = await classifier(text);
        res.statusCode = 200;
    } else {
        response = { 'error': 'Bad request' };
        res.statusCode = 400;
    }

    // Send the JSON response
    res.end(JSON.stringify(response));
});

server.listen(port, hostname, () => {
    console.log(`Server running at http://${hostname}:${port}/`);
});
```

<Tip>

Since we use lazy loading, the first request made to the server will also be responsible for loading the pipeline. If you would like to begin loading the pipeline as soon as the server starts running, you can add the following line of code after defining `MyClassificationPipeline`:

```javascript
MyClassificationPipeline.getInstance();
```

</Tip>

To start the server, run the following command:

```bash
node app.js
```

The server should be live at http://127.0.0.1:3000/, which you can visit in your web browser. You should see the following message:

```json
{"error":"Bad request"}
```

This is because we aren't targeting the `/classify` endpoint with a valid `text` query parameter. Let's try again, this time with a valid request. For example, you can visit http://127.0.0.1:3000/classify?text=I%20love%20Transformers.js and you should see:

```json
[{"label":"POSITIVE","score":0.9996721148490906}]
```

Great! We've successfully created a basic HTTP server that uses Transformers.js to classify text.

## (Optional) Customization

### Model caching

By default, the first time you run the application, it will download the model files and cache them on your file system (in `./node_modules/@xenova/transformers/.cache/`). All subsequent requests will then use this model. You can change the location of the cache by setting `env.cacheDir`. For example, to cache the model in the `.cache` directory in the current working directory, you can add:

```javascript
env.cacheDir = './.cache';
```

### Use local models

If you want to use local model files, you can set `env.localModelPath` as follows:

```javascript
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
```

You can also disable loading of remote models by setting `env.allowRemoteModels` to `false`:

```javascript
// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;
```

examples/node/commonjs/app.js

+63
```javascript
const http = require('http');
const querystring = require('querystring');
const url = require('url');


class MyClassificationPipeline {
    static task = 'text-classification';
    static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
    static instance = null;

    static async getInstance(progress_callback = null) {
        if (this.instance === null) {
            // Dynamically import the Transformers.js library
            let { pipeline, env } = await import('@xenova/transformers');

            // NOTE: Uncomment this to change the cache directory
            // env.cacheDir = './.cache';

            this.instance = pipeline(this.task, this.model, { progress_callback });
        }

        return this.instance;
    }
}

// Comment out this line if you don't want to start loading the model as soon as the server starts.
// If commented out, the model will be loaded when the first request is received (i.e., lazily).
MyClassificationPipeline.getInstance();

// Define the HTTP server
const server = http.createServer();
const hostname = '127.0.0.1';
const port = 3000;

// Listen for requests made to the server
server.on('request', async (req, res) => {
    // Parse the request URL
    const parsedUrl = url.parse(req.url);

    // Extract the query parameters
    const { text } = querystring.parse(parsedUrl.query);

    // Set the response headers
    res.setHeader('Content-Type', 'application/json');

    let response;
    if (parsedUrl.pathname === '/classify' && text) {
        const classifier = await MyClassificationPipeline.getInstance();
        response = await classifier(text);
        res.statusCode = 200;
    } else {
        response = { 'error': 'Bad request' };
        res.statusCode = 400;
    }

    // Send the JSON response
    res.end(JSON.stringify(response));
});

server.listen(port, hostname, () => {
    console.log(`Server running at http://${hostname}:${port}/`);
});
```

examples/node/commonjs/package.json

+12
```json
{
  "name": "commonjs",
  "version": "1.0.0",
  "description": "Server-side inference with Transformers.js (CommonJS)",
  "main": "app.js",
  "keywords": [],
  "author": "Xenova",
  "license": "ISC",
  "dependencies": {
    "@xenova/transformers": "^2.0.0"
  }
}
```

examples/node/esm/app.js

+62
```javascript
import http from 'http';
import querystring from 'querystring';
import url from 'url';

import { pipeline, env } from '@xenova/transformers';

class MyClassificationPipeline {
    static task = 'text-classification';
    static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
    static instance = null;

    static async getInstance(progress_callback = null) {
        if (this.instance === null) {
            // NOTE: Uncomment this to change the cache directory
            // env.cacheDir = './.cache';

            this.instance = pipeline(this.task, this.model, { progress_callback });
        }

        return this.instance;
    }
}

// Comment out this line if you don't want to start loading the model as soon as the server starts.
// If commented out, the model will be loaded when the first request is received (i.e., lazily).
MyClassificationPipeline.getInstance();

// Define the HTTP server
const server = http.createServer();
const hostname = '127.0.0.1';
const port = 3000;

// Listen for requests made to the server
server.on('request', async (req, res) => {
    // Parse the request URL
    const parsedUrl = url.parse(req.url);

    // Extract the query parameters
    const { text } = querystring.parse(parsedUrl.query);

    // Set the response headers
    res.setHeader('Content-Type', 'application/json');

    let response;
    if (parsedUrl.pathname === '/classify' && text) {
        const classifier = await MyClassificationPipeline.getInstance();
        response = await classifier(text);
        res.statusCode = 200;
    } else {
        response = { 'error': 'Bad request' };
        res.statusCode = 400;
    }

    // Send the JSON response
    res.end(JSON.stringify(response));
});

server.listen(port, hostname, () => {
    console.log(`Server running at http://${hostname}:${port}/`);
});
```

examples/node/esm/package.json

+13
```json
{
  "name": "esm",
  "version": "1.0.0",
  "description": "Server-side inference with Transformers.js (ESM)",
  "type": "module",
  "main": "app.js",
  "keywords": [],
  "author": "Xenova",
  "license": "ISC",
  "dependencies": {
    "@xenova/transformers": "^2.0.0"
  }
}
```
