@@ -2130,67 +2130,17 @@ Q. Is the CPython implementation fast for large numbers?
A. Yes. In the CPython and PyPy3 implementations, the C/CFFI versions of
the decimal module integrate the high speed `libmpdec
<https://www.bytereef.org/mpdecimal/doc/libmpdec/index.html>`_ library for
- arbitrary precision correctly-rounded decimal floating point arithmetic [#]_.
+ arbitrary precision correctly-rounded decimal floating point arithmetic.
``libmpdec`` uses `Karatsuba multiplication
<https://en.wikipedia.org/wiki/Karatsuba_algorithm>`_
for medium-sized numbers and the `Number Theoretic Transform
<https://en.wikipedia.org/wiki/Discrete_Fourier_transform_(general)#Number-theoretic_transform>`_
- for very large numbers.
+ for very large numbers. However, to realize this performance gain, the
+ context needs to be set for unrounded calculations.
- The context must be adapted for exact arbitrary precision arithmetic. :attr:`Emin`
- and :attr:`Emax` should always be set to the maximum values, :attr:`clamp`
- should always be 0 (the default). Setting :attr:`prec` requires some care.
+ >>> c = getcontext()
+ >>> c.prec = MAX_PREC
+ >>> c.Emax = MAX_EMAX
+ >>> c.Emin = MIN_EMIN
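+
+ For example, assuming the context above is active, exact results on very
+ large integers are returned at full precision::
+
+ >>> x = Decimal(2) ** 256
+ >>> x / 128
+ Decimal('904625697166532776746648320380374280103671755200316906558262375061821325312')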
- The easiest approach for trying out bignum arithmetic is to use the maximum
- value for :attr:`prec` as well [#]_::
-
- >>> setcontext(Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN))
- >>> x = Decimal(2) ** 256
- >>> x / 128
- Decimal('904625697166532776746648320380374280103671755200316906558262375061821325312')
-
-
- For inexact results, :attr:`MAX_PREC` is far too large on 64-bit platforms and
- the available memory will be insufficient::
-
- >>> Decimal(1) / 3
- Traceback (most recent call last):
- File "<stdin>", line 1, in <module>
- MemoryError
-
- On systems with overallocation (e.g. Linux), a more sophisticated approach is to
- adjust :attr:`prec` to the amount of available RAM. Suppose that you have 8GB of
- RAM and expect 10 simultaneous operands using a maximum of 500MB each::
-
- >>> import sys
- >>>
- >>> # Maximum number of digits for a single operand using 500MB in 8 byte words
- >>> # with 19 (9 for the 32-bit version) digits per word:
- >>> maxdigits = 19 * ((500 * 1024**2) // 8)
- >>>
- >>> # Check that this works:
- >>> c = Context(prec=maxdigits, Emax=MAX_EMAX, Emin=MIN_EMIN)
- >>> c.traps[Inexact] = True
- >>> setcontext(c)
- >>>
- >>> # Fill the available precision with nines:
- >>> x = Decimal(0).logical_invert() * 9
- >>> sys.getsizeof(x)
- 524288112
- >>> x + 2
- Traceback (most recent call last):
- File "<stdin>", line 1, in <module>
- decimal.Inexact: [<class 'decimal.Inexact'>]
-
- In general (and especially on systems without overallocation), it is recommended
- to estimate even tighter bounds and set the :attr:`Inexact` trap if all calculations
- are expected to be exact.
-
-
- .. [#]
- .. versionadded:: 3.3
-
- .. [#]
- .. versionchanged:: 3.9
- This approach now works for all exact results except for non-integer powers.
2196
- Also backported to 3.7 and 3.8.
+ .. versionadded:: 3.3