Is JavaScript's BigInt broken?
JavaScript's bigint is quite an unusual and controversial primitive. It was introduced and standardized in ECMAScript in 2020. Its purpose is to easily handle integers of arbitrary size, since the number safe integer range is limited to [-(2^53 - 1), 2^53 - 1].
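To see why that limit matters: past the boundary, number silently rounds, while bigint stays exact. A quick illustration:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true, precision is silently lost
console.log(9007199254740992n === 9007199254740993n); // false, bigint keeps them distinct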
Despite its virtues, many people still criticize it over issues of convenience and its lack of integration with existing JS standard APIs and libraries.
The virtues of BigInt
Let's first understand what the virtues and purpose of bigint are.
bigint allows us to handle and operate on integers that are beyond the safe integer range of number. bigint has no fixed size, so it's not some 64-bit or 128-bit integer. Its size has no defined limits (in ECMAScript, though JS engines certainly have some limit). And this is what makes it so powerful: it (almost) guarantees that an integer overflow will never occur:
console.log(1000_000_000_000_000_000_000n ** 6n); // 1 followed by 126 zeroes
bigint numbers support many of the integer operations (including integer arithmetic and bitwise operations, except the unsigned right shift >>>, shown after the examples below) that you would expect from normal integers:
console.log(1n + 10n); // 11n
console.log(10n - 2n); // 8n
console.log(10n * 3n); // 30n
console.log(11n / 2n); // 5n, integer division
console.log(1000_000_000_000_000_000n << 2n); // 4000000000000000000n
console.log(3n & 5n); // 1n
console.log(3n | 5n); // 7n
console.log(3n ^ 5n); // 6n
console.log(~5n); // -6n
console.log(1000n ** 5n); // 1000000000000000n
console.log(1000n / 0n); // RangeError: division by zero
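As mentioned, the one unsupported bitwise operator is the unsigned right shift >>>. It throws because a bigint has no fixed width, so there is no defined position to shift zero bits in from:
console.log(4n >>> 1n); // TypeError, >>> is not defined for bigint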
The problems with BigInt
Despite this, bigint has several inconveniences, which I'll discuss here.
Mixed BigInt and Number expressions are illegal
JavaScript doesn't allow mixed bigint and number expressions (the exceptions are comparisons: ==, ===, <, etc.). If an expression contains both number and bigint, an error will occur:
console.log(1n * 1); // TypeError: cannot mix BigInt and other types
This means bigint numbers won't be automatically converted to floating point numbers, as happens in many other programming languages when an expression mixes integers and floating point numbers. This decision was made because of potential precision loss and also because bigint numbers can have larger absolute values than the maximum finite number value (2^1024 - 2^971, or ~1.7976931348623157e+308).
Fortunately, you can still manually convert bigint to number and vice versa by using the Number() and BigInt() functions:
console.log(Number(10n) * 4); // 40
console.log(BigInt(6) * 4n); // 24n
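Both conversions deserve care, though: Number() silently loses precision beyond the safe range, and BigInt() throws on non-integers. A couple of examples:
console.log(Number(9007199254740993n)); // 9007199254740992, precision lost
console.log(BigInt(1.5)); // RangeError, 1.5 is not an integer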
No Math support
Another inconvenience is that Math functions / utilities largely don't work with bigint numbers. Even Math.max() doesn't work with bigint:
console.log(Math.max(1n, 2n)); // TypeError: cannot convert a BigInt to a number
There is a proposal to add a math standard library for BigInt. Hopefully it will become a reality one day.
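In the meantime, such helpers are easy to write by hand. Here's a minimal sketch (bigintMax is a made-up name, not a standard API):
// A hand-rolled max() for bigint values, since Math.max() rejects them.
function bigintMax(...values) {
  if (values.length === 0) throw new RangeError("bigintMax requires at least one argument");
  let max = values[0];
  for (const value of values) {
    if (value > max) max = value;
  }
  return max;
}

console.log(bigintMax(1n, 5n, 3n)); // 5n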
No JSON support
The lack of JSON support is another pain point. There is no bigint type in JSON, and if you try to call JSON.stringify() on something that contains a bigint value, an exception will occur:
const obj = {
  prop: 1n,
};
console.log(JSON.stringify(obj)); // TypeError: Do not know how to serialize a BigInt
One workaround for this is to convert bigint numbers to strings by passing the replacer callback to JSON.stringify():
const obj = {
  prop: 1n,
};
console.log(
  JSON.stringify(obj, (key, value) => (typeof value === "bigint" ? value.toString() : value))
); // {"prop":"1"}
When parsing the JSON with JSON.parse(), we can pass the reviver callback, where we can convert the stringified bigint back to bigint for known properties:
console.log(JSON.parse(
  '{"prop":"138","otherProp":"789"}',
  (key, value) => (key === "prop" && typeof value === "string" ? BigInt(value) : value)
)); // { prop: 138n, otherProp: "789" }
This is not ideal and gets complicated for deeply nested objects, but it works.
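For deeply nested objects, a common pattern is to tag stringified bigint values so a generic reviver can recognize them anywhere in the tree. A sketch (the "#bigint:" prefix is an arbitrary convention of this example, not a standard):
// Round-trip sketch: encode every bigint as a tagged string.
const stringifyWithBigInt = (obj) =>
  JSON.stringify(obj, (key, value) =>
    typeof value === "bigint" ? `#bigint:${value}` : value
  );

const parseWithBigInt = (json) =>
  JSON.parse(json, (key, value) =>
    typeof value === "string" && value.startsWith("#bigint:")
      ? BigInt(value.slice(8)) // strip the 8-character "#bigint:" tag
      : value
  );

const json = stringifyWithBigInt({ a: 1n, nested: { b: 2n } });
console.log(json); // {"a":"#bigint:1","nested":{"b":"#bigint:2"}}
console.log(parseWithBigInt(json)); // { a: 1n, nested: { b: 2n } }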
Worse performance
Yes, unfortunately. In addition to these inconveniences, bigint operations are generally slower than number operations. The reasons are:
- Arbitrary precision. number fits in a 64-bit float, and most operations map to single CPU instructions. bigint values are variable-length "bignums" stored on the heap, and operations scale with the number of "limbs" (O(N) for addition/subtraction, super-linear (up to O(N^2)) for multiplication and division).
- Allocation & GC. Many bigint operations create new heap objects. In contrast, number values are typically unboxed registers.
- Fewer JIT fast paths. Engines have decades of tricks for regular numbers and typed arrays. bigint paths are more specialized and can't leverage hardware FP units.
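A naive micro-benchmark can make the difference visible. The sketch below just sums a loop counter both ways; absolute timings vary wildly by engine and optimization state, so treat the numbers as indicative only:
const N = 1_000_000;

let numSum = 0;
let start = performance.now();
for (let i = 0; i < N; i++) numSum += i;
console.log(`number sum: ${(performance.now() - start).toFixed(1)} ms`);

let bigSum = 0n;
start = performance.now();
for (let i = 0n; i < N; i++) bigSum += i; // mixed i < N comparison is allowed
console.log(`bigint sum: ${(performance.now() - start).toFixed(1)} ms`);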
Conclusion
So is bigint broken and poorly designed? I would say "largely no". number has a very large safe integer range of [-(2^53 - 1), 2^53 - 1], which covers most general real-world problems. Yes, it would be better if it were a true signed 64-bit integer, but number's safe integer range is only about ~1000 times smaller than the range of a 64-bit signed integer (on a logarithmic scale that's not bad at all). So just don't use bigint if you don't really need it; bigint is not designed for general usage.
bigint absolutely has valid (and growing) use cases (such as 64-bit IDs from databases and services, timestamps and durations with nanosecond precision, cryptography, finance, etc.). It's just not a drop-in replacement for number; rather, it shines in specific domains where correctness and arbitrary precision are the highest priority.
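For instance, Node.js exposes process.hrtime.bigint(), which returns a nanosecond-resolution monotonic timestamp as a bigint, precisely because such values quickly outgrow the safe number range:
// Nanosecond-resolution timing with bigint (Node.js)
const begin = process.hrtime.bigint();
// ... some work ...
const finish = process.hrtime.bigint();
console.log(`took ${finish - begin} ns`); // the difference stays exact, no float rounding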