# Number Extensions
# Binary and Octal Notation
ES6 provides new ways to write binary and octal numbers, using the prefixes 0b (or 0B) and 0o (or 0O) respectively.
0b111110111 === 503 // true
0o767 === 503 // true
Starting from ES5, in strict mode, the prefix 0 is no longer allowed for representing octal numbers. ES6 further clarifies that the prefix 0o should be used.
// Non-strict mode
(function(){
  console.log(0o11 === 011);
})() // true

// Strict mode
(function(){
  'use strict';
  console.log(0o11 === 011);
})() // Uncaught SyntaxError: Octal literals are not allowed in strict mode.
To convert strings with 0b and 0o prefixes to decimal numbers, use the Number method.
Number('0b111') // 7
Number('0o10') // 8
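Note that Number() understands the 0b and 0o prefixes, while parseInt() with an explicit radix expects bare digits. A quick comparison (parsing simply stops at the first character that is invalid for the radix):

```javascript
// Number() recognizes the prefix
Number('0b111')      // 7
// parseInt() with a radix expects digits only
parseInt('111', 2)   // 7
parseInt('0b111', 2) // 0 (parsing stops at the invalid "b")
```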
# Number.isFinite() and Number.isNaN()
ES6 provides two new methods on the Number object: Number.isFinite() and Number.isNaN().
Number.isFinite() checks whether a value is a finite number, i.e., neither Infinity, -Infinity, nor NaN.
Number.isFinite(15); // true
Number.isFinite(0.8); // true
Number.isFinite(NaN); // false
Number.isFinite(Infinity); // false
Number.isFinite(-Infinity); // false
Number.isFinite('foo'); // false
Number.isFinite('15'); // false
Number.isFinite(true); // false
Note that if the argument type is not a number, Number.isFinite always returns false.
Number.isNaN() checks whether a value is NaN.
Number.isNaN(NaN) // true
Number.isNaN(15) // false
Number.isNaN('15') // false
Number.isNaN(true) // false
Number.isNaN(9/NaN) // true
Number.isNaN('true' / 0) // true
Number.isNaN('true' / 'true') // true
For any argument other than NaN, Number.isNaN always returns false.
The difference between these methods and the traditional global methods isFinite() and isNaN() is that the traditional methods first call Number() to convert non-numeric values to numbers before making the determination. The new methods only work on numeric values: Number.isFinite() returns false for all non-numeric values, and Number.isNaN() returns true only for NaN and false for everything else.
isFinite(25) // true
isFinite("25") // true
Number.isFinite(25) // true
Number.isFinite("25") // false
isNaN(NaN) // true
isNaN("NaN") // true
Number.isNaN(NaN) // true
Number.isNaN("NaN") // false
Number.isNaN(1) // false
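This difference also suggests a simple ES5 fallback for environments that lack the new methods: a sketch that puts a typeof guard in front of the global functions.

```javascript
// Sketch of ES5 fallbacks: non-numbers are rejected before the globals run
Number.isFinite = Number.isFinite || function (value) {
  return typeof value === 'number' && isFinite(value);
};
Number.isNaN = Number.isNaN || function (value) {
  return typeof value === 'number' && isNaN(value);
};

Number.isFinite('15') // false
Number.isNaN('NaN')   // false
```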
# Number.parseInt() and Number.parseFloat()
ES6 moves the global methods parseInt() and parseFloat() onto the Number object, with behavior remaining completely unchanged.
// ES5 syntax
parseInt('12.34') // 12
parseFloat('123.45#') // 123.45
// ES6 syntax
Number.parseInt('12.34') // 12
Number.parseFloat('123.45#') // 123.45
The purpose of this is to gradually reduce global methods and make the language progressively more modular.
Number.parseInt === parseInt // true
Number.parseFloat === parseFloat // true
# Number.isInteger()
Number.isInteger() determines whether a value is an integer.
Number.isInteger(25) // true
Number.isInteger(25.1) // false
Internally in JavaScript, integers and floating-point numbers use the same storage format, so 25 and 25.0 are considered the same value.
Number.isInteger(25) // true
Number.isInteger(25.0) // true
If the argument is not a number, Number.isInteger returns false.
Number.isInteger() // false
Number.isInteger(null) // false
Number.isInteger('15') // false
Number.isInteger(true) // false
Note that because JavaScript uses the IEEE 754 standard and stores numbers as 64-bit double-precision format, the maximum numeric precision is 53 binary digits (1 hidden bit plus 52 significant bits). If a number's precision exceeds this limit, bits from the 54th position onward are discarded. In such cases, Number.isInteger may return incorrect results.
Number.isInteger(3.0000000000000002) // true
In the code above, the argument to Number.isInteger is clearly not an integer, but it returns true. This is because the decimal has 16 decimal places of precision, which exceeds 53 binary digits when converted, causing the trailing 2 to be discarded.
A similar situation occurs when a number's absolute value is less than Number.MIN_VALUE (5E-324) — i.e., less than the smallest value JavaScript can distinguish — it is automatically converted to 0. In this case, Number.isInteger also gives an incorrect result.
Number.isInteger(5E-324) // false
Number.isInteger(5E-325) // true
In the code above, 5E-325 is automatically converted to 0 because the value is too small, so it returns true.
In summary, if high data precision is required, it is not recommended to use Number.isInteger() to determine whether a number is an integer.
# Number.EPSILON for Floating-Point Error Checking
ES6 adds a very small constant Number.EPSILON to the Number object. According to the specification, it represents the difference between 1 and the smallest floating-point number greater than 1.
For 64-bit floating-point numbers, the smallest floating-point number greater than 1 is equivalent to the binary 1.00..001, with 51 consecutive zeros after the decimal point. Subtracting 1 from this value gives 2 to the power of -52.
Number.EPSILON === Math.pow(2, -52)
// true
Number.EPSILON
// 2.220446049250313e-16
Number.EPSILON.toFixed(20)
// "0.00000000000000022204"
Number.EPSILON is effectively the smallest precision that JavaScript can represent. If an error is smaller than this value, it can be considered meaningless — i.e., there is no error.
The purpose of introducing such a small quantity is to set an error margin for floating-point calculations. We know that floating-point calculations are imprecise.
0.1 + 0.2
// 0.30000000000000004
0.1 + 0.2 - 0.3
// 5.551115123125783e-17
5.551115123125783e-17.toFixed(20)
// '0.00000000000000005551'
The code above explains why comparing 0.1 + 0.2 with 0.3 returns false.
0.1 + 0.2 === 0.3 // false
Number.EPSILON can be used to set an "acceptable error margin." For example, if we set the error margin to 2 to the power of -50 (i.e., Number.EPSILON * Math.pow(2, 2)), two floating-point numbers whose difference is less than this value are considered equal.
5.551115123125783e-17 < Number.EPSILON * Math.pow(2, 2)
// true
Therefore, the essence of Number.EPSILON is an acceptable minimum error margin.
function withinErrorMargin (left, right) {
  return Math.abs(left - right) < Number.EPSILON * Math.pow(2, 2);
}
0.1 + 0.2 === 0.3 // false
withinErrorMargin(0.1 + 0.2, 0.3) // true
1.1 + 1.3 === 2.4 // false
withinErrorMargin(1.1 + 1.3, 2.4) // true
The code above deploys an error checking function for floating-point arithmetic.
# Safe Integers and Number.isSafeInteger()
The range of integers that JavaScript can accurately represent is between -2^53 and 2^53 (exclusive of both endpoints). Beyond this range, values cannot be represented precisely.
Math.pow(2, 53) // 9007199254740992
9007199254740992 // 9007199254740992
9007199254740993 // 9007199254740992
Math.pow(2, 53) === Math.pow(2, 53) + 1
// true
In the code above, once a number exceeds 2 to the 53rd power, it becomes imprecise.
ES6 introduced the constants Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER to represent the upper and lower bounds of this range.
Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1
// true
Number.MAX_SAFE_INTEGER === 9007199254740991
// true
Number.MIN_SAFE_INTEGER === -Number.MAX_SAFE_INTEGER
// true
Number.MIN_SAFE_INTEGER === -9007199254740991
// true
In the code above, you can see the limits of precise representation in JavaScript.
Number.isSafeInteger() is used to determine whether an integer falls within this range.
Number.isSafeInteger('a') // false
Number.isSafeInteger(null) // false
Number.isSafeInteger(NaN) // false
Number.isSafeInteger(Infinity) // false
Number.isSafeInteger(-Infinity) // false
Number.isSafeInteger(3) // true
Number.isSafeInteger(1.2) // false
Number.isSafeInteger(9007199254740990) // true
Number.isSafeInteger(9007199254740992) // false
Number.isSafeInteger(Number.MIN_SAFE_INTEGER - 1) // false
Number.isSafeInteger(Number.MIN_SAFE_INTEGER) // true
Number.isSafeInteger(Number.MAX_SAFE_INTEGER) // true
Number.isSafeInteger(Number.MAX_SAFE_INTEGER + 1) // false
The implementation of this function is straightforward — it simply compares against the two boundary values of safe integers.
Number.isSafeInteger = function (n) {
  return (typeof n === 'number' &&
    Math.round(n) === n &&
    Number.MIN_SAFE_INTEGER <= n &&
    n <= Number.MAX_SAFE_INTEGER);
}
When using this function in practice, be careful. When verifying whether a computation result falls within the safe integer range, you should verify not only the result but also each operand involved in the computation.
Number.isSafeInteger(9007199254740993)
// false
Number.isSafeInteger(990)
// true
Number.isSafeInteger(9007199254740993 - 990)
// true
9007199254740993 - 990
// Returns 9007199254740002
// The correct answer should be 9007199254740003
In the code above, 9007199254740993 is not a safe integer, but Number.isSafeInteger shows the computation result as safe. This is because the number exceeds the precision range and is internally stored as 9007199254740992.
9007199254740993 === 9007199254740992
// true
Therefore, verifying only whether the result is a safe integer may yield incorrect results. The following function verifies both operands and the result simultaneously.
function trusty (left, right, result) {
  if (
    Number.isSafeInteger(left) &&
    Number.isSafeInteger(right) &&
    Number.isSafeInteger(result)
  ) {
    return result;
  }
  throw new RangeError('Operation cannot be trusted!');
}
trusty(9007199254740993, 990, 9007199254740993 - 990)
// RangeError: Operation cannot be trusted!
trusty(1, 2, 3)
// 3
# Math Object Extensions
ES6 adds 17 math-related methods to the Math object. All of these are static methods that can only be called on the Math object.
# Math.trunc() — Removes the Decimal Part and Returns the Integer Part
Math.trunc removes the decimal part of a number and returns the integer part.
Math.trunc(4.1) // 4
Math.trunc(4.9) // 4
Math.trunc(-4.1) // -4
Math.trunc(-4.9) // -4
Math.trunc(-0.1234) // -0
For non-numeric values, Math.trunc first uses the Number method to convert them to numbers.
Math.trunc('123.456') // 123
Math.trunc(true) // 1
Math.trunc(false) // 0
Math.trunc(null) // 0
For empty values and values that cannot be truncated to integers, it returns NaN.
Math.trunc(NaN); // NaN
Math.trunc('foo'); // NaN
Math.trunc(); // NaN
Math.trunc(undefined) // NaN
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.trunc = Math.trunc || function(x) {
  return x < 0 ? Math.ceil(x) : Math.floor(x);
};
# Math.sign() — Determines Whether a Number is Positive, Negative, or Zero
Math.sign determines whether a number is positive, negative, or zero. Non-numeric values are first converted to numbers.
It returns five possible values:
- Positive argument: returns +1
- Negative argument: returns -1
- Argument is 0: returns 0
- Argument is -0: returns -0
- Other values: return NaN
Math.sign(-5) // -1
Math.sign(5) // +1
Math.sign(0) // +0
Math.sign(-0) // -0
Math.sign(NaN) // NaN
For non-numeric arguments, they are automatically converted to numbers. Values that cannot be converted to numbers return NaN.
Math.sign('') // 0
Math.sign(true) // +1
Math.sign(false) // 0
Math.sign(null) // 0
Math.sign('9') // +1
Math.sign('foo') // NaN
Math.sign() // NaN
Math.sign(undefined) // NaN
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.sign = Math.sign || function(x) {
  x = +x; // convert to a number
  if (x === 0 || isNaN(x)) {
    return x;
  }
  return x > 0 ? 1 : -1;
};
# Math.cbrt() — Calculates the Cube Root
Math.cbrt calculates the cube root of a number.
Math.cbrt(-1) // -1
Math.cbrt(0) // 0
Math.cbrt(1) // 1
Math.cbrt(2) // 1.2599210498948734
For non-numeric values, Math.cbrt also first uses the Number method to convert them to numbers.
Math.cbrt('8') // 2
Math.cbrt('hello') // NaN
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.cbrt = Math.cbrt || function(x) {
  var y = Math.pow(Math.abs(x), 1/3);
  return x < 0 ? -y : y;
};
# Math.clz32()
Math.clz32() converts the argument to a 32-bit unsigned integer and returns the number of leading zeros in that 32-bit value.
Math.clz32(0) // 32
Math.clz32(1) // 31
Math.clz32(1000) // 22
Math.clz32(0b01000000000000000000000000000000) // 1
Math.clz32(0b00100000000000000000000000000000) // 2
In the code above, the binary form of 0 is all zeros, so it has 32 leading zeros. The binary form of 1 is 0b1, occupying only 1 bit, so there are 31 leading zeros among the 32 bits. The binary form of 1000 is 0b1111101000, totaling 10 bits, so there are 22 leading zeros among the 32 bits.
The function name clz32 comes from the abbreviation of "count leading zero bits in 32-bit binary representation of a number."
The left shift operator (<<) is directly related to Math.clz32.
Math.clz32(0) // 32
Math.clz32(1) // 31
Math.clz32(1 << 1) // 30
Math.clz32(1 << 2) // 29
Math.clz32(1 << 29) // 2
For decimals, Math.clz32 only considers the integer part.
Math.clz32(3.2) // 30
Math.clz32(3.9) // 30
For empty values or other types of values, Math.clz32 first converts them to numbers before calculation.
Math.clz32() // 32
Math.clz32(NaN) // 32
Math.clz32(Infinity) // 32
Math.clz32(null) // 32
Math.clz32('foo') // 32
Math.clz32([]) // 32
Math.clz32({}) // 32
Math.clz32(true) // 31
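For environments that have not implemented this method, a loop-based fallback can be sketched: coerce the argument to a 32-bit unsigned integer, then shift left until the top bit is set.

```javascript
Math.clz32 = Math.clz32 || function (value) {
  var x = value >>> 0;      // coerce to a 32-bit unsigned integer
  if (x === 0) return 32;   // all 32 bits are zero
  var n = 0;
  while ((x & 0x80000000) === 0) { // while the top bit is still clear
    n++;
    x <<= 1;
  }
  return n;
};

Math.clz32(1)    // 31
Math.clz32(1000) // 22
```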
# Math.imul()
Math.imul returns the result of multiplying two numbers as 32-bit signed integers, and the result is also a 32-bit signed integer.
Math.imul(2, 4) // 8
Math.imul(-1, 8) // -8
Math.imul(-2, -2) // 4
Considering only the last 32 bits, in most cases Math.imul(a, b) produces the same result as a * b — i.e., the method is equivalent to (a * b)|0 (bits beyond 32 overflow). This method is needed because JavaScript has precision limitations: values exceeding 2 to the 53rd power cannot be represented precisely. This means that for multiplication of very large numbers, the lower bits are often inaccurate. Math.imul can return the correct lower bits.
(0x7fffffff * 0x7fffffff)|0 // 0
The multiplication above returns 0. However, since the lowest bits of both binary numbers are 1, this result is clearly incorrect — the lowest bit of the product should also be 1 according to binary multiplication. This error occurs because their product exceeds 2 to the 53rd power, and JavaScript cannot preserve the extra precision, so it zeroes out the lower bits. Math.imul can return the correct value of 1.
Math.imul(0x7fffffff, 0x7fffffff) // 1
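A common fallback technique (a sketch, not the specification's algorithm) splits each operand into 16-bit halves, so every partial product stays within the exact precision of a Number; the cross terms only affect the high 16 bits, and | 0 truncates the result back to a signed 32-bit integer.

```javascript
Math.imul = Math.imul || function (a, b) {
  var aHi = (a >>> 16) & 0xffff;
  var aLo = a & 0xffff;
  var bHi = (b >>> 16) & 0xffff;
  var bLo = b & 0xffff;
  // aLo * bLo is exact (at most 32 bits); the cross terms are shifted into
  // the high half, and | 0 truncates to a signed 32-bit integer.
  return (aLo * bLo + (((aHi * bLo + aLo * bHi) << 16) >>> 0)) | 0;
};

Math.imul(2, 4)  // 8
Math.imul(-1, 8) // -8
```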
# Math.fround()
Math.fround returns the 32-bit single-precision floating-point representation of a number.
For the 32-bit single-precision format, numeric precision is 24 binary bits (1 hidden bit plus 23 significant bits). Therefore, for integers between -2 to the 24th power and 2 to the 24th power (exclusive of both endpoints), the result is identical to the argument.
Math.fround(0) // 0
Math.fround(1) // 1
Math.fround(2 ** 24 - 1) // 16777215
If the absolute value of the argument exceeds 2 to the 24th power, the result begins to lose precision.
Math.fround(2 ** 24) // 16777216
Math.fround(2 ** 24 + 1) // 16777216
The primary purpose of Math.fround is to convert a 64-bit double-precision floating-point number to a 32-bit single-precision floating-point number. If the decimal precision exceeds 24 binary bits, the return value differs from the original; otherwise, the return value is unchanged (i.e., identical to the 64-bit double-precision value).
// No precision loss
Math.fround(1.125) // 1.125
Math.fround(7.25) // 7.25
// Precision loss
Math.fround(0.3) // 0.30000001192092896
Math.fround(0.7) // 0.699999988079071
Math.fround(1.0000000123) // 1
For NaN and Infinity, this method returns the original value. For other non-numeric types, Math.fround first converts them to numbers, then returns the single-precision floating-point number.
Math.fround(NaN) // NaN
Math.fround(Infinity) // Infinity
Math.fround('5') // 5
Math.fround(true) // 1
Math.fround(null) // 0
Math.fround([]) // 0
Math.fround({}) // NaN
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.fround = Math.fround || function (x) {
  return new Float32Array([x])[0];
};
# Math.hypot()
Math.hypot returns the square root of the sum of squares of all its arguments.
Math.hypot(3, 4); // 5
Math.hypot(3, 4, 5); // 7.0710678118654755
Math.hypot(); // 0
Math.hypot(NaN); // NaN
Math.hypot(3, 4, 'foo'); // NaN
Math.hypot(3, 4, '5'); // 7.0710678118654755
Math.hypot(-3); // 3
In the code above, 3 squared plus 4 squared equals 5 squared.
If any argument is not a number, Math.hypot will convert it to a number. If any argument cannot be converted, it returns NaN.
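For environments that have not implemented this method, a naive fallback can be sketched as follows. Note that, unlike a careful implementation, the straightforward sum of squares can overflow or underflow for extreme inputs.

```javascript
Math.hypot = Math.hypot || function () {
  var sum = 0;
  for (var i = 0; i < arguments.length; i++) {
    var n = Number(arguments[i]); // non-numbers are converted, like the native method
    sum += n * n;                 // may overflow for very large inputs
  }
  return Math.sqrt(sum);
};

Math.hypot(3, 4) // 5
Math.hypot()     // 0
```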
# Logarithmic Methods
ES6 adds 4 logarithm-related methods.
(1) Math.expm1()
Math.expm1(x) returns e to the power of x, minus 1, i.e., Math.exp(x) - 1.
Math.expm1(-1) // -0.6321205588285577
Math.expm1(0) // 0
Math.expm1(1) // 1.718281828459045
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.expm1 = Math.expm1 || function(x) {
  return Math.exp(x) - 1;
};
(2) Math.log1p()
Math.log1p(x) returns the natural logarithm of 1 + x, i.e., Math.log(1 + x). If x is less than -1, it returns NaN.
Math.log1p(1) // 0.6931471805599453
Math.log1p(0) // 0
Math.log1p(-1) // -Infinity
Math.log1p(-2) // NaN
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.log1p = Math.log1p || function(x) {
  return Math.log(1 + x);
};
(3) Math.log10()
Math.log10(x) returns the base-10 logarithm of x. If x is less than 0, it returns NaN.
Math.log10(2) // 0.3010299956639812
Math.log10(1) // 0
Math.log10(0) // -Infinity
Math.log10(-2) // NaN
Math.log10(100000) // 5
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.log10 = Math.log10 || function(x) {
  return Math.log(x) / Math.LN10;
};
(4) Math.log2()
Math.log2(x) returns the base-2 logarithm of x. If x is less than 0, it returns NaN.
Math.log2(3) // 1.584962500721156
Math.log2(2) // 1
Math.log2(1) // 0
Math.log2(0) // -Infinity
Math.log2(-2) // NaN
Math.log2(1024) // 10
Math.log2(1 << 29) // 29
For environments that have not implemented this method, the following code can be used as a polyfill.
Math.log2 = Math.log2 || function(x) {
  return Math.log(x) / Math.LN2;
};
# Hyperbolic Function Methods
ES6 adds 6 hyperbolic function methods.
- Math.sinh(x): returns the hyperbolic sine of x
- Math.cosh(x): returns the hyperbolic cosine of x
- Math.tanh(x): returns the hyperbolic tangent of x
- Math.asinh(x): returns the inverse hyperbolic sine of x
- Math.acosh(x): returns the inverse hyperbolic cosine of x
- Math.atanh(x): returns the inverse hyperbolic tangent of x
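A few sample values (the familiar identities sinh(0) = 0, cosh(0) = 1, acosh(1) = 0, and so on carry over directly):

```javascript
Math.sinh(0)  // 0
Math.cosh(0)  // 1
Math.tanh(0)  // 0
Math.asinh(0) // 0
Math.acosh(1) // 0
Math.atanh(0) // 0
```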
# Exponentiation Operator
ES2016 introduced an exponentiation operator (**).
2 ** 2 // 4
2 ** 3 // 8
One characteristic of this operator is right-associativity, as opposed to the more common left-associativity. When multiple exponentiation operators are chained, calculation proceeds from right to left.
// equivalent to 2 ** (3 ** 2)
2 ** 3 ** 2
// 512
In the code above, the second exponentiation operator is calculated first, not the first.
The exponentiation operator can be combined with the assignment operator to form a new assignment operator (**=).
let a = 1.5;
a **= 2;
// equivalent to a = a * a;
let b = 4;
b **= 3;
// equivalent to b = b * b * b;
Note that the V8 engine's implementation of the exponentiation operator differs slightly from Math.pow — for very large results, there may be minor differences.
Math.pow(99, 99)
// 3.697296376497263e+197
99 ** 99
// 3.697296376497268e+197
In the code above, the last significant digit of the two results differs.
# The BigInt Data Type
# Introduction
All numbers in JavaScript are stored as 64-bit floating-point numbers, which imposes two major limitations on numeric representation. First, numeric precision is limited to 53 binary bits (equivalent to 16 decimal digits). Integers beyond this range cannot be represented precisely, making JavaScript unsuitable for exact scientific and financial calculations. Second, numbers greater than or equal to 2 to the 1024th power cannot be represented — JavaScript returns Infinity.
// Numbers exceeding 53 binary digits cannot maintain precision
Math.pow(2, 53) === Math.pow(2, 53) + 1 // true
// Numbers exceeding 2 to the 1024th power cannot be represented
Math.pow(2, 1024) // Infinity
ES2020 introduced a new data type, BigInt (big integer), to solve this problem. BigInt is used exclusively to represent integers, with no limit on the number of digits; any integer can be represented precisely.
const a = 2172141653n;
const b = 15346349309n;
// BigInt maintains precision
a * b // 33334444555566667777n
// Regular integers cannot maintain precision
Number(a) * Number(b) // 33334444555566670000
To distinguish from the Number type, BigInt values must have the suffix n.
1234 // regular integer
1234n // BigInt
// BigInt arithmetic
1n + 2n // 3n
BigInt can also use various base notations, all requiring the n suffix.
0b1101n // binary
0o777n // octal
0xFFn // hexadecimal
BigInt and regular integers are two different types of values — they are not equal.
42n === 42 // false
The typeof operator returns bigint for BigInt values.
typeof 123n // 'bigint'
BigInt can use the minus sign (-) but not the plus sign (+), as it would conflict with asm.js.
-42n // correct
+42n // error
Previously, JavaScript could not calculate 70 factorial (70!) because it exceeded the representable precision.
let p = 1;
for (let i = 1; i <= 70; i++) {
  p *= i;
}
console.log(p); // 1.197857166996989e+100
Now with BigInt support, this can be computed. Running the following code in a browser's developer tools will work.
let p = 1n;
for (let i = 1n; i <= 70n; i++) {
  p *= i;
}
console.log(p); // 11978571...00000000n
# The BigInt Object
JavaScript natively provides the BigInt object, which can be used as a constructor to generate BigInt values. The conversion rules are basically the same as Number(), converting values of other types to BigInt.
BigInt(123) // 123n
BigInt('123') // 123n
BigInt(false) // 0n
BigInt(true) // 1n
The BigInt() constructor requires an argument, and the argument must be convertible to a number. The following usages will all throw errors.
new BigInt() // TypeError
BigInt(undefined) //TypeError
BigInt(null) // TypeError
BigInt('123n') // SyntaxError
BigInt('abc') // SyntaxError
In the code above, note particularly that the string 123n cannot be parsed as a Number type, so it throws an error.
If the argument is a decimal, an error will also be thrown.
BigInt(1.5) // RangeError
BigInt('1.5') // SyntaxError
The BigInt object inherits two instance methods from the Object object.
- BigInt.prototype.toString()
- BigInt.prototype.valueOf()
It also inherits one instance method from the Number object.
BigInt.prototype.toLocaleString()
Additionally, it provides three static methods.
- BigInt.asUintN(width, BigInt): converts the given BigInt to the corresponding value between 0 and 2^width - 1.
- BigInt.asIntN(width, BigInt): converts the given BigInt to the corresponding value between -2^(width - 1) and 2^(width - 1) - 1.
- BigInt.parseInt(string[, radix]): similar to Number.parseInt(), converts a string to a BigInt with the specified radix.
const max = 2n ** (64n - 1n) - 1n;
BigInt.asIntN(64, max)
// 9223372036854775807n
BigInt.asIntN(64, max + 1n)
// -9223372036854775808n
BigInt.asUintN(64, max + 1n)
// 9223372036854775808n
In the code above, max is the maximum value representable by a 64-bit signed BigInt. If 1n is added to this value, BigInt.asIntN() returns a negative value because the new bit is interpreted as the sign bit. BigInt.asUintN(), having no sign bit, can correctly return the result.
If BigInt.asIntN() and BigInt.asUintN() specify fewer bits than the number itself has, the leading bits are discarded.
const max = 2n ** (64n - 1n) - 1n;
BigInt.asIntN(32, max) // -1n
BigInt.asUintN(32, max) // 4294967295n
In the code above, max is a 64-bit BigInt. When converted to 32 bits, the leading 32 bits are all discarded.
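This truncation behavior makes the two methods handy for emulating fixed-width integer arithmetic. For example, 8-bit wrap-around:

```javascript
// Emulating 8-bit unsigned wrap-around (illustration)
BigInt.asUintN(8, 255n + 1n) // 0n
BigInt.asUintN(8, 257n)      // 1n
// The same low 8 bits reinterpreted as a signed value
BigInt.asIntN(8, 128n)       // -128n
```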
Below is an example of BigInt.parseInt().
// Comparison of Number.parseInt() and BigInt.parseInt()
Number.parseInt('9007199254740993', 10)
// 9007199254740992
BigInt.parseInt('9007199254740993', 10)
// 9007199254740993n
In the code above, because the significant digits exceed the maximum limit, Number.parseInt returns an imprecise result, while BigInt.parseInt correctly returns the corresponding BigInt.
For binary arrays, BigInt introduces two new types: BigUint64Array and BigInt64Array. Both return 64-bit BigInt values. The DataView object's instance methods DataView.prototype.getBigInt64() and DataView.prototype.getBigUint64() also return BigInt.
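For example, in an environment with BigInt typed-array support, a BigInt64Array can hold 64-bit values that exceed Number's safe integer range, and DataView can read them back:

```javascript
const arr = new BigInt64Array(1);
arr[0] = 9007199254740993n; // beyond Number's safe integer range
arr[0]                      // 9007199254740993n

const view = new DataView(new ArrayBuffer(8));
view.setBigUint64(0, 2n ** 63n);
view.getBigUint64(0)        // 9223372036854775808n
```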
# Conversion Rules
The three methods Boolean(), Number(), and String() can be used to convert BigInt to boolean, numeric, and string types respectively.
Boolean(0n) // false
Boolean(1n) // true
Number(1n) // 1
String(1n) // "1"
In the code above, note the last example: when converting to a string, the n suffix disappears.
Additionally, the logical NOT operator (!) can also convert BigInt to boolean.
!0n // true
!1n // false
# Mathematical Operations
For mathematical operations, the BigInt +, -, *, and ** binary operators behave the same as the Number type. The division operator / discards the decimal part and returns an integer.
9n / 5n
// 1n
Almost all numeric operators can be used with BigInt, with two exceptions.
- The unsigned right shift operator >>>
- The unary plus operator +
Both operators throw errors when used with BigInt. The former is because >>> is unsigned, but BigInt is always signed, making the operation meaningless — it is completely equivalent to the signed right shift operator >>. The latter is because the unary + operator always returns a Number type in asm.js, so +1n is specified to throw an error to avoid breaking asm.js.
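The signed shift still works as expected, and both forbidden operators fail at runtime with a TypeError:

```javascript
4n >> 1n // 2n (the signed right shift is allowed)

try {
  1n >>> 1n; // unsigned right shift
} catch (e) {
  console.log(e instanceof TypeError); // true
}

try {
  +1n; // unary plus
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```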
BigInt cannot be mixed with regular numbers in arithmetic operations.
1n + 1.3 // error
The error above occurs because regardless of whether BigInt or Number is returned, precision would be lost. For example, with the expression (2n**53n + 1n) + 0.5, if BigInt is returned, the 0.5 decimal part is lost; if Number is returned, the effective precision is limited to 53 bits, causing precision degradation.
For the same reason, if a standard library function expects a Number argument but receives a BigInt, it will throw an error.
// Incorrect usage
Math.sqrt(4n) // error
// Correct usage
Math.sqrt(Number(4n)) // 2
In the code above, Math.sqrt expects a Number argument. If BigInt is passed, it throws an error — the type must first be converted using the Number method.
In asm.js, |0 appended to a number returns a 32-bit integer. According to the rule that BigInt cannot be mixed with Number type, performing operations between BigInt and |0 will throw an error.
1n | 0 // error
# Other Operations
The boolean value corresponding to BigInt is consistent with the Number type: 0n converts to false, and all other values convert to true.
if (0n) {
  console.log('if');
} else {
  console.log('else');
}
// else
In the code above, 0n corresponds to false, so the else clause is entered.
Comparison operators (such as >) and equality operators (==) allow BigInt to be mixed with other types of values, as this does not cause precision loss.
0n < 1 // true
0n < true // true
0n == 0 // true
0n == false // true
0n === 0 // false
When BigInt is mixed with strings in operations, it is first converted to a string, then the operation is performed.
'' + 123n // "123"