This repository was archived by the owner on Jul 21, 2023. It is now read-only.

@noble/ed25519 and BigInts #212

Closed
achingbrain opened this issue Nov 27, 2021 · 18 comments

Comments

@achingbrain
Member

achingbrain commented Nov 27, 2021

#202 switched this library over from using node-forge for Ed25519 key operations to @noble/ed25519.

The benchmarks introduced in #213 show @noble/ed25519 is faster than node-forge, but it's got an order of magnitude more variance in the time taken to run each iteration of the benchmark:

noble ed25519 x 193 ops/sec ±33.95% (79 runs sampled)
node-forge ed25519 x 60.75 ops/sec ±3.36% (68 runs sampled)
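For reference, those numbers come from a benchmark.js-style suite; a minimal sketch of such a suite for the noble case could look like the following (the key/message setup is illustrative and assumed, not the actual code from #213):

const Benchmark = require('benchmark')
const ed = require('@noble/ed25519')

const privateKey = ed.utils.randomPrivateKey()
const message = new Uint8Array(32).fill(1)

new Benchmark.Suite()
  .add('noble ed25519', {
    // sign() returns a Promise, so use benchmark.js deferred mode
    defer: true,
    fn: async (deferred) => {
      await ed.sign(message, privateKey)
      deferred.resolve()
    }
  })
  .on('cycle', (event) => console.log(String(event.target)))
  .on('complete', function () {
    console.log('fastest is ' + this.filter('fastest').map('name'))
  })
  .run({ async: true })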

Looking into the code, @noble/ed25519 uses the native BigInt type in its calculations instead of bn.js or long.js or similar - this is not recommended:

Cryptography
The operations supported on BigInt values are not constant-time, and are thus open to timing attacks. JavaScript BigInts are therefore not well-suited for use in cryptography.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt#cryptography

If operations are not constant-time, this could explain why the variance is so large: the time taken to run an iteration depends on the input, which means an observer can derive information about the input from the runtime behaviour of the system.
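As a toy illustration of that data dependence (just a sketch, not code from either library): multiplying BigInts of different bit lengths modulo the Ed25519 prime takes measurably different amounts of time.

const { performance } = require('perf_hooks')

const P = 2n ** 255n - 19n // the Ed25519 field prime

// BigInt arithmetic is not constant-time, so operand size shows up in the runtime
function timeMul (a, b, iterations = 200000) {
  let acc = 1n
  const start = performance.now()
  for (let i = 0; i < iterations; i++) {
    acc = (acc * a * b) % P // keep the result live so the loop isn't optimised away
  }
  return performance.now() - start
}

const large = 2n ** 250n + 12345n
console.log('large * small:', timeMul(large, 3n).toFixed(1), 'ms')
console.log('large * large:', timeMul(large, large).toFixed(1), 'ms')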

What do you think @hugomrdias @vasco-santos @dignifiedquire? Is this something to be concerned about and should we consider reverting #202?

@dapplion

What do you think @hugomrdias @vasco-santos @dignifiedquire? Is this something to be concerned about and should we consider reverting #202?

Worth getting expert crypto advice on the topic too

@achingbrain
Member Author

They do mention this in the readme:

We're using built-in JS BigInt, which is "unsuitable for use in cryptography" as per official spec. This means that the lib is potentially vulnerable to timing attacks. But, JIT-compiler and Garbage Collector make "constant time" extremely hard to achieve in a scripting language. Which means any other JS library doesn't use constant-time bigints. Including bn.js or anything else. Even statically typed Rust, a language without GC, makes it harder to achieve constant-time for some cases. If your goal is absolute security, don't use any JS lib — including bindings to native ones. Use low-level libraries & languages.

Some of which may even be true, but the level of benchmark variance of this specific library still gives me significant pause.

@achingbrain
Member Author

As an alternative, the wasm version benchmarked in #211 looks very promising, though it'll obviously have an impact on the bundle size.

@sneaker1
Contributor

I also have a problem with noble/ed25519 and BigInts.

In my project I use libp2p in the browser with React (create-react-app).
After updating to the newest libp2p version, which uses the newest [email protected], which in turn uses noble/ed25519, my webapp is broken.

My page isn't even loading anymore, and I get the error:
Uncaught TypeError: Cannot convert a BigInt value to a number


This only happens when I create a production build of my React app (npm run build), which is then served with an express app.
If I don't create a production build and run it with npm start, it works without errors.

@achingbrain
Member Author

achingbrain commented Nov 29, 2021

@sneaker1 your problem is unrelated to this one, please open a separate issue.

In brief, you need to disable the babel-plugin-transform-exponentiation-operator plugin somehow or otherwise fix your babel config - see facebook/create-react-app#6907 (comment)
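If it helps, the workaround discussed in that thread boils down to stopping Babel from transpiling the ** operator into Math.pow (which cannot handle BigInts), e.g. by narrowing the production browserslist targets in package.json to browsers that support ** natively. The exact target list below is an assumption - adjust it to your own support matrix:

{
  "browserslist": {
    "production": [
      "last 2 chrome versions",
      "last 2 firefox versions",
      "last 2 safari versions"
    ]
  }
}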

@hugomrdias
Member

IMO it's ok to use noble - noble-secp256k1 also uses BigInts and has been audited: https://github.com/paulmillr/noble-secp256k1#security

refs:
https://github.com/paulmillr/noble-ed25519#security
https://paulmillr.com/posts/noble-secp256k1-fast-ecc/#js-bigints-jit-gc-and-other-scary-words

@paulmillr if you could give us some feedback here we would be grateful.

@dapplion

dapplion commented Nov 29, 2021

IMO it's ok to use noble - noble-secp256k1 also uses BigInts and has been audited: https://github.com/paulmillr/noble-secp256k1#security

refs: https://github.com/paulmillr/noble-ed25519#security https://paulmillr.com/posts/noble-secp256k1-fast-ecc/#js-bigints-jit-gc-and-other-scary-words

@paulmillr if you could give us some feedback here we would be grateful.

The second point

Just-in-time compilation. Modern JS engines have unsurmountable amount of optimizations in their JIT compilers. Our code would be optimized and deoptimized all over the place. Which means it would take different time to run depending on a context.

is defeatable, since most consumers of this library are open source and an attacker can figure out the context. An attacker can also assume that everyone is running the latest Chrome / Firefox (they auto-update), and so predict the JIT optimizations each engine will apply. But I agree with the assessment afterwards that it's not the highest-impact risk consumers will be facing.

@paulmillr
Contributor

paulmillr commented Nov 29, 2021

Noble specifically benchmarks inputs of different bit length to ensure there is no difference in algorithmic complexity. See benchmark file. So, not sure why you're having this variance. Native bigints are no more insecure than bn.js, or other non-native stuff.

Try making node-forge run first, and noble ed last, and see what happens.

As for "using wasm": since wasm binaries are unsigned, you cannot be sure you aren't running a malware.

The npm has been allowing anyone to upload update to any package for 2+ years, including the unlogged period. This has been fixed 2 weeks ago. Anyone could have uploaded malware to any wasm package. Or they could upload it any any moment in the future by hacking someone's npm account.

@paulmillr
Contributor

paulmillr commented Nov 29, 2021

The variance is the same on my machine, macos with M1, node v16.13.0:

@noble/ed25519 x 501 ops/sec ±0.26% (90 runs sampled)
@stablelib/ed25519 x 153 ops/sec ±0.37% (90 runs sampled)
node-forge/ed25519 x 148 ops/sec ±0.26% (87 runs sampled)
supercop.wasm x 5,639 ops/sec ±0.20% (91 runs sampled)
ed25519 (native module) x 6,828 ops/sec ±0.14% (91 runs sampled)
node.js web-crypto x 5,307 ops/sec ±0.76% (86 runs sampled)
fastest is ed25519 (native module)

@achingbrain
Member Author

achingbrain commented Nov 29, 2021

Try making node-forge run first, and noble ed last, and see what happens.

I removed all other non-@noble/ed25519 requires and code from the suite but I still see wild variance:

$ node index.js 
@noble/ed25519 x 116 ops/sec ±27.86% (76 runs sampled)
fastest is @noble/ed25519

node v16.13.0
2.3 GHz Quad-Core Intel Core i7
32 GB 3733 MHz LPDDR4X

macos with M1, node v16.13.0:
@noble/ed25519 x 501 ops/sec ±0.26% (90 runs sampled)

Your machine is definitely faster than mine, maybe I should upgrade 😆

Anyone could have uploaded malware to any wasm package. Or they could upload it any any moment in the future by hacking someone's npm account.

To be fair, this is not a vulnerability exclusive to wasm, it applies to every single package on npm, @noble/ed25519 included.

@paulmillr
Contributor

Any npm package could get malwared, but for noble you could do a diff against its GitHub release, which is signed with a PGP key, and see if there are any differences.

@paulmillr
Contributor

As for requires, make noble the second or third. I've seen performance issues in benchmarks that only happen on first tests. This is because of JIT.

@achingbrain
Member Author

As for requires, make noble the second or third.

Even as the last require and the last benchmark to run, the result is still the same:

$ node index.js 
@stablelib/ed25519 x 57.89 ops/sec ±2.50% (69 runs sampled)
node-forge/ed25519 x 56.69 ops/sec ±1.54% (68 runs sampled)
supercop.wasm x 3,425 ops/sec ±1.64% (82 runs sampled)
ed25519-wasm-pro x 3,583 ops/sec ±0.88% (83 runs sampled)
ed25519 (native module) x 4,608 ops/sec ±0.93% (84 runs sampled)
node.js web-crypto x 3,223 ops/sec ±4.73% (78 runs sampled)
@noble/ed25519 x 204 ops/sec ±25.72% (83 runs sampled)
fastest is ed25519 (native module)

@paulmillr
Contributor

paulmillr commented Nov 30, 2021

Can you change the benchmark to test only one thing at a time? That will help pin down which operation is causing the variance on your machine: sign / verify / getPublicKey etc.

Also, can you do the Buffer.from(Math.random) message generation before the tests? The code could look like this:

let i = 0
const messages = new Array(1000).fill(0).map(() =>
  Buffer.from(Array.from({ length: 32 }, () => Math.floor(Math.random() * 256))))

// in test
const msg = messages[i++ % 1000]

@paulmillr
Contributor

Investigating this by myself now, using an Intel VPS.

@paulmillr
Contributor

On the first invocation of getPublicKey, noble precomputes a wNAF table for scalar multiplication. It takes 15 milliseconds or so.

noble.utils.precompute(8);

If you execute the code above before the tests, everything will be fine. And the number is not variance, it's the relative margin of error.
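In benchmark terms that means a warm-up step before the suite runs - a minimal sketch, assuming the benchmark file imports the library as noble:

const noble = require('@noble/ed25519')

// Build the wNAF precomputation table once, up front, so the one-off ~15ms cost
// doesn't land inside a measured iteration and inflate the margin of error
noble.utils.precompute(8)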

@paulmillr
Contributor

As for the real difference between invocations, I'm using my micro-bmark to test it. This simple commit in noble-secp256k1 removes array-destructuring assignments, which removes quite a few opcodes and reduces GC time by a huge amount. See the 1.3.0 and post-commit results for the fastest and slowest invocations:

1.3.0
getPublicKey(utils.randomPrivateKey()) x 6,058 ops/sec @ 165μs/op, min: 148μs, max: 1205μs ± 811%
sign x 4,631 ops/sec @ 215μs/op, min: 200μs, max: 1157μs ± 576%
signSync x 4,349 ops/sec @ 229μs/op, min: 212μs, max: 2ms ± 1080%
verify x 900 ops/sec @ 1110μs/op, min: 1068μs, max: 1690μs ± 158%
recoverPublicKey x 476 ops/sec @ 2ms/op, min: 2ms, max: 3ms ± 178%

a214e84229c0d324551902a49aa6e998bed7e529
getPublicKey(utils.randomPrivateKey()) x 6,036 ops/sec @ 165μs/op, min: 149μs, max: 801μs ± 536%
sign x 4,694 ops/sec @ 213μs/op, min: 203μs, max: 977μs ± 481%
signSync x 4,437 ops/sec @ 225μs/op, min: 211μs, max: 2ms ± 999%
verify x 901 ops/sec @ 1108μs/op, min: 1070μs, max: 1818μs ± 169%
recoverPublicKey x 477 ops/sec @ 2ms/op, min: 2ms, max: 2ms ± 132%

@dapplion this is just one example of GC

you can also predict the JIT optimizations that each engine will apply
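To illustrate the kind of change that commit makes (a hypothetical example, not noble's actual code): replacing an array-destructuring assignment with plain variable assignments removes a short-lived allocation from the hot path, which is what cuts the GC pressure.

// Before: allocates a throwaway array on every call, just to destructure it
function splitBefore (n) {
  const [hi, lo] = [n >> 128n, n & ((1n << 128n) - 1n)]
  return hi ^ lo
}

// After: the same computation with no intermediate array, so nothing for the GC to collect
function splitAfter (n) {
  const hi = n >> 128n
  const lo = n & ((1n << 128n) - 1n)
  return hi ^ lo
}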

@achingbrain
Member Author

noble.utils.precompute(8);

Aha! @noble/ed25519 is within the same order of magnitude as all the others now, much better - thanks for your help:

$ node index.js 
@noble/ed25519 x 240 ops/sec ±2.14% (80 runs sampled)
@stablelib/ed25519 x 58.51 ops/sec ±2.66% (69 runs sampled)
node-forge/ed25519 x 52.87 ops/sec ±7.09% (64 runs sampled)
supercop.wasm x 3,656 ops/sec ±1.41% (85 runs sampled)
ed25519-wasm-pro x 3,709 ops/sec ±1.63% (84 runs sampled)
ed25519 (native module) x 4,739 ops/sec ±1.57% (83 runs sampled)
node.js web-crypto x 3,506 ops/sec ±3.24% (80 runs sampled)
fastest is ed25519 (native module)

I've fixed up the benchmark in #217
