---
title: prime_sieve_linear
---
# Linear Sieve
Given a number $n$, find all prime numbers in a segment $[2;n]$.
The standard way of solving this task is to use [the sieve of Eratosthenes](sieve-of-eratosthenes.md). This algorithm is very simple, but it has runtime $O(n \log \log n)$.
Although there are a lot of known algorithms with sublinear runtime (i.e. $o(n)$), the algorithm described below is interesting for its simplicity: it isn't any more complex than the classic sieve of Eratosthenes.
Besides, the algorithm given here calculates **factorizations of all numbers** in the segment $[2; n]$ as a side effect, and that can be helpful in many practical applications.
The weakness of this algorithm is that it uses more memory than the classic sieve of Eratosthenes: it requires an array of $n$ numbers, while for the classic sieve of Eratosthenes it is enough to have $n$ bits of memory (which is 32 times less).
Thus, it makes sense to use the described algorithm only for numbers up to the order of $10^7$, and not greater.
The algorithm is due to Paul Pritchard. It is a variant of Algorithm 3.3 in (Pritchard, 1987: see the references at the end of the article).
## Algorithm
Our goal is to calculate **minimum prime factor** $lp [i]$ for every number $i$ in the segment $[2; n]$.
Besides, we need to store the list of all the found prime numbers - let's call it $pr []$.
We'll initialize the values $lp [i]$ with zeros, which means that we assume all numbers are prime. During the algorithm execution this array will be filled gradually.
Now we'll go through the numbers from 2 to $n$. We have two cases for the current number $i$:
- $lp[i] = 0$ - that means that $i$ is prime, i.e. we haven't found any smaller factors for it.
Hence, we assign $lp [i] = i$ and add $i$ to the end of the list $pr[]$.
- $lp[i] \neq 0$ - that means that $i$ is composite, and its minimum prime factor is $lp [i]$.
In both cases we update the values of $lp []$ for the numbers that are divisible by $i$. However, our goal is to do this in such a way that each value of $lp []$ is set at most once. We can do it as follows:
Let's consider numbers $x_j = i \cdot p_j$, where $p_j$ are all prime numbers less than or equal to $lp [i]$ (this is why we need to store the list of all prime numbers).
We'll set a new value $lp [x_j] = p_j$ for all numbers of this form.
The proof of correctness of this algorithm and its runtime can be found after the implementation.
## Implementation
```cpp
const int N = 10000000;
vector<int> lp(N+1);
vector<int> pr;
for (int i=2; i <= N; ++i) {
if (lp[i] == 0) {
lp[i] = i;
pr.push_back(i);
}
for (int j = 0; i * pr[j] <= N; ++j) {
lp[i * pr[j]] = pr[j];
if (pr[j] == lp[i]) {
break;
}
}
}
```
## Correctness Proof
We need to prove that the algorithm sets all values $lp []$ correctly, and that every value will be set exactly once. Hence, the algorithm will have linear runtime, since all the remaining actions of the algorithm obviously work in $O(n)$.
Notice that every number $i$ has exactly one representation in form:
$$i = lp [i] \cdot x,$$
where $lp [i]$ is the minimal prime factor of $i$, and the number $x$ doesn't have any prime factors less than $lp [i]$, i.e.
$$lp [i] \le lp [x].$$
Now, let's compare this with the actions of our algorithm: in fact, for every $x$ it goes through all prime numbers it could be multiplied by, i.e. all prime numbers up to $lp [x]$ inclusive, in order to get the numbers in the form given above.
Hence, the algorithm will go through every composite number exactly once, setting the correct values $lp []$ there. Q.E.D.
## Runtime and Memory
Although the running time of $O(n)$ is better than $O(n \log \log n)$ of the classic sieve of Eratosthenes, the difference between them is not so big.
In practice the linear sieve runs about as fast as a typical implementation of the sieve of Eratosthenes.
In comparison to optimized versions of the sieve of Eratosthenes, e.g. the segmented sieve, it is much slower.
Considering the memory requirements of this algorithm - an array $lp []$ of length $n$ and an array $pr []$ of length $\frac n {\ln n}$ - this algorithm seems to be worse than the classic sieve in every way.
However, its redeeming quality is that this algorithm calculates an array $lp []$, which allows us to find the factorization of any number in the segment $[2; n]$ in time proportional to the size of that factorization. Moreover, using just one extra array will allow us to avoid divisions when looking for the factorization.
Knowing the factorizations of all numbers is very useful for some tasks, and this algorithm is one of the few which allow finding them in linear time.
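As an illustration, here is a minimal sketch (not part of the original article; the helper name `factorize` is ours) of how the $lp []$ array computed above can be used to factorize a number:
```cpp
// Sketch: factorize x (2 <= x <= N) using the lp[] array built by the linear sieve above.
// Each iteration strips off the smallest prime factor, so the loop runs once per prime factor.
vector<int> factorize(int x) {
    vector<int> factors;
    while (x > 1) {
        factors.push_back(lp[x]);
        x /= lp[x];
    }
    return factors; // e.g. factorize(12) yields {2, 2, 3}
}
```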
## References
- Paul Pritchard, **Linear Prime-Number Sieves: a Family Tree**, Science of Computer Programming, vol. 9 (1987), pp.17-35.
---
title: big_integer
---
# Arbitrary-Precision Arithmetic
Arbitrary-precision arithmetic, also known as "bignum" or simply "long arithmetic", is a set of data structures and algorithms which allows processing numbers much greater than those that fit in standard data types. Here are several types of arbitrary-precision arithmetic.
## Classical Integer Long Arithmetic
The main idea is that the number is stored as an array of its "digits" in some base. The most frequently used bases are decimal, powers of decimal ($10^4$ or $10^9$) and binary.
Operations on numbers in this form are performed using "school" algorithms of column addition, subtraction, multiplication and division. It's also possible to use fast multiplication algorithms: the fast Fourier transform and the Karatsuba algorithm.
Here we describe long arithmetic only for non-negative integers. To extend the algorithms to handle negative integers, one has to introduce and maintain an additional "negative number" flag or use a two's complement integer representation.
### Data Structure
We'll store numbers as a `vector<int>`, in which each element is a single "digit" of the number.
```cpp
typedef vector<int> lnum;
```
To improve performance we'll use $10^9$ as the base, so that each "digit" of the long number contains 9 decimal digits at once.
```cpp
const int base = 1000*1000*1000;
```
Digits will be stored in order from least to most significant. All operations will be implemented so that after each of them the result doesn't have any leading zeros, as long as the operands didn't have any leading zeros either. All operations which might result in a number with leading zeros should be followed by code which removes them. Note that in this representation there are two valid notations for the number zero: an empty vector, and a vector with a single zero digit.
### Output
Printing the long integer is the easiest operation. First we print the last element of the vector (or 0 if the vector is empty), followed by the rest of the elements padded with leading zeros if necessary so that they are exactly 9 digits long.
```cpp
printf ("%d", a.empty() ? 0 : a.back());
for (int i=(int)a.size()-2; i>=0; --i)
printf ("%09d", a[i]);
```
Note that we cast `a.size()` to integer to avoid unsigned integer underflow if the vector contains fewer than 2 elements.
### Input
To read a long integer, read its notation into a `string` and then convert it to "digits":
```cpp
for (int i=(int)s.length(); i>0; i-=9)
if (i < 9)
a.push_back (atoi (s.substr (0, i).c_str()));
else
a.push_back (atoi (s.substr (i-9, 9).c_str()));
```
If we use an array of `char` instead of a `string`, the code will be even shorter:
```cpp
for (int i=(int)strlen(s); i>0; i-=9) {
s[i] = 0;
a.push_back (atoi (i>=9 ? s+i-9 : s));
}
```
If the input can contain leading zeros, they can be removed as follows:
```cpp
while (a.size() > 1 && a.back() == 0)
a.pop_back();
```
### Addition
Increment long integer $a$ by $b$ and store result in $a$:
```cpp
int carry = 0;
for (size_t i=0; i<max(a.size(),b.size()) || carry; ++i) {
if (i == a.size())
a.push_back (0);
a[i] += carry + (i < b.size() ? b[i] : 0);
carry = a[i] >= base;
if (carry) a[i] -= base;
}
```
### Subtraction
Decrement long integer $a$ by $b$ ($a \ge b$) and store result in $a$:
```cpp
int carry = 0;
for (size_t i=0; i<b.size() || carry; ++i) {
a[i] -= carry + (i < b.size() ? b[i] : 0);
carry = a[i] < 0;
if (carry) a[i] += base;
}
while (a.size() > 1 && a.back() == 0)
a.pop_back();
```
Note that after performing subtraction we remove leading zeros to maintain the invariant that our long integers don't have leading zeros.
### Multiplication by short integer
Multiply long integer $a$ by short integer $b$ ($b < base$) and store result in $a$:
```cpp
int carry = 0;
for (size_t i=0; i<a.size() || carry; ++i) {
if (i == a.size())
a.push_back (0);
long long cur = carry + a[i] * 1ll * b;
a[i] = int (cur % base);
carry = int (cur / base);
}
while (a.size() > 1 && a.back() == 0)
a.pop_back();
```
Additional optimization: if runtime is extremely important, you can try to replace the two divisions with one by computing only the quotient (variable `carry`) and then deriving the remainder from it using multiplication. This usually makes the code faster, though not dramatically.
### Multiplication by long integer
Multiply long integers $a$ and $b$ and store result in $c$:
```cpp
lnum c (a.size()+b.size());
for (size_t i=0; i<a.size(); ++i)
for (int j=0, carry=0; j<(int)b.size() || carry; ++j) {
long long cur = c[i+j] + a[i] * 1ll * (j < (int)b.size() ? b[j] : 0) + carry;
c[i+j] = int (cur % base);
carry = int (cur / base);
}
while (c.size() > 1 && c.back() == 0)
c.pop_back();
```
### Division by short integer
Divide long integer $a$ by short integer $b$ ($b < base$), store integer result in $a$ and remainder in `carry`:
```cpp
int carry = 0;
for (int i=(int)a.size()-1; i>=0; --i) {
long long cur = a[i] + carry * 1ll * base;
a[i] = int (cur / b);
carry = int (cur % b);
}
while (a.size() > 1 && a.back() == 0)
a.pop_back();
```
## Long Integer Arithmetic for Factorization Representation
The idea is to store the integer as its factorization, i.e. the powers of primes which divide it.
This approach is very easy to implement, and allows doing multiplication and division easily (asymptotically faster than the classical method), but not addition or subtraction. It is also very memory-efficient compared to the classical approach.
This method is often used for calculations modulo a non-prime number $M$; in this case a number is stored as the powers of the divisors of $M$ which divide the number, plus the remainder modulo $M$.
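As a minimal sketch (our own illustration, not from the article), multiplication and division in this representation are just addition and subtraction of prime exponents; the `FactNum` name is hypothetical:
```cpp
#include <map>
using namespace std;

// Hypothetical type: a number stored as a map from prime to its exponent.
typedef map<long long, int> FactNum;

// Multiplication: add the exponents of every prime factor.
FactNum multiply(const FactNum& a, const FactNum& b) {
    FactNum res = a;
    for (auto [p, e] : b)
        res[p] += e;
    return res;
}

// Division (assuming a is divisible by b): subtract exponents, dropping zero entries.
FactNum divide(const FactNum& a, const FactNum& b) {
    FactNum res = a;
    for (auto [p, e] : b) {
        res[p] -= e;
        if (res[p] == 0)
            res.erase(p);
    }
    return res;
}
```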
## Long Integer Arithmetic in prime modulos (Garner Algorithm)
The idea is to choose a set of prime numbers (typically they are small enough to fit into standard integer data type) and to store an integer as a vector of remainders from division of the integer by each of those primes.
The Chinese remainder theorem states that this representation is sufficient to uniquely restore any number from 0 to the product of these primes minus one. The [Garner algorithm](chinese-remainder-theorem.md) allows restoring the number from such a representation to a normal integer.
This method allows saving memory compared to the classical approach (though the savings are not as dramatic as in the factorization representation). Besides, it allows performing fast addition, subtraction and multiplication in time proportional to the number of primes used as moduli (see the [Chinese remainder theorem](chinese-remainder-theorem.md) article for an implementation).
The tradeoff is that converting the integer back to normal form is rather laborious and requires implementing classical arbitrary-precision arithmetic with multiplication. Besides, this method doesn't support division.
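A minimal sketch of this representation (our own illustration; the concrete moduli below are arbitrary primes chosen for the example):
```cpp
#include <vector>
using namespace std;

// Example set of prime moduli (an assumption for this sketch).
const vector<long long> mods = {1000000007, 998244353, 1000000009};

// A number is stored as its remainders modulo each prime.
vector<long long> to_residues(long long x) {
    vector<long long> r;
    for (long long m : mods)
        r.push_back(x % m);
    return r;
}

// Addition and multiplication are performed independently in each modulus.
vector<long long> add(const vector<long long>& a, const vector<long long>& b) {
    vector<long long> r(mods.size());
    for (size_t i = 0; i < mods.size(); ++i)
        r[i] = (a[i] + b[i]) % mods[i];
    return r;
}

vector<long long> mul(const vector<long long>& a, const vector<long long>& b) {
    vector<long long> r(mods.size());
    for (size_t i = 0; i < mods.size(); ++i)
        r[i] = a[i] * b[i] % mods[i]; // fits in long long since the moduli are about 1e9
    return r;
}
```
Converting such a residue vector back to a normal integer is exactly the job of the Garner algorithm mentioned above.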
## Fractional Arbitrary-Precision Arithmetic
Fractions occur in programming competitions less frequently than integers, and long arithmetic is much trickier to implement for fractions, so programming competitions feature only a small subset of fractional long arithmetic.
### Arithmetic in Irreducible Fractions
A number is represented as an irreducible fraction $\frac{a}{b}$, where $a$ and $b$ are integers. All operations on fractions can be represented as operations on integer numerators and denominators of these fractions. Usually this requires using classical arbitrary-precision arithmetic for storing numerator and denominator, but sometimes a built-in 64-bit integer data type suffices.
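A minimal sketch of such a fraction type using built-in 64-bit integers (our own illustration; with a classical bignum as numerator and denominator the idea is the same):
```cpp
#include <numeric> // std::gcd (C++17)
using namespace std;

// A fraction a/b kept in lowest terms with a positive denominator.
struct Fraction {
    long long a, b; // numerator, denominator
    Fraction(long long a_ = 0, long long b_ = 1) : a(a_), b(b_) { reduce(); }
    void reduce() {
        if (b < 0) { a = -a; b = -b; }
        long long g = gcd(a < 0 ? -a : a, b);
        if (g) { a /= g; b /= g; }
    }
    Fraction operator+(const Fraction& o) const { return Fraction(a * o.b + o.a * b, b * o.b); }
    Fraction operator*(const Fraction& o) const { return Fraction(a * o.a, b * o.b); }
};
```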
### Storing Floating Point Position as Separate Type
Sometimes a problem requires handling very small or very large numbers without allowing overflow or underflow. The built-in `double` data type uses 8-10 bytes and allows values of the exponent in the $[-308; 308]$ range, which sometimes might be insufficient.
The approach is very simple: a separate integer variable is used to store the value of the exponent, and after each operation the floating-point number is normalized, i.e. returned to the $[0.1; 1)$ interval by adjusting the exponent accordingly.
When two such numbers are multiplied or divided, their exponents are added or subtracted, respectively. When numbers are added or subtracted, they have to be brought to a common exponent first by multiplying one of them by 10 raised to the power equal to the difference of the exponent values.
As a final note, the exponent base doesn't have to equal 10. Based on the internal representation of floating-point numbers, it makes most sense to use 2 as the exponent base.
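A minimal sketch of such a type (our own illustration; base 10 is used for clarity, although, as noted above, base 2 usually makes more sense):
```cpp
#include <cmath>
#include <algorithm>

// A number stored as mantissa * 10^exponent, with the mantissa normalized to [0.1, 1).
struct BigFloat {
    double mantissa = 0;
    long long exponent = 0;

    void normalize() {
        if (mantissa == 0) { exponent = 0; return; }
        while (std::abs(mantissa) >= 1)  { mantissa /= 10; ++exponent; }
        while (std::abs(mantissa) < 0.1) { mantissa *= 10; --exponent; }
    }

    BigFloat operator*(const BigFloat& o) const {
        BigFloat r{mantissa * o.mantissa, exponent + o.exponent};
        r.normalize();
        return r;
    }

    BigFloat operator+(const BigFloat& o) const {
        BigFloat a = *this, b = o;
        if (a.exponent < b.exponent) std::swap(a, b);
        // bring b to the exponent of a before adding the mantissas
        b.mantissa *= std::pow(10.0, double(b.exponent - a.exponent));
        BigFloat r{a.mantissa + b.mantissa, a.exponent};
        r.normalize();
        return r;
    }
};
```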
## Practice Problems
* [UVA - How Many Fibs?](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1124)
* [UVA - Product](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=1047)
* [UVA - Maximum Sub-sequence Product](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=728)
* [SPOJ - Fast Multiplication](http://www.spoj.com/problems/MUL/en/)
* [SPOJ - GCD2](http://www.spoj.com/problems/GCD2/)
* [UVA - Division](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1024)
* [UVA - Fibonacci Freeze](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=436)
* [UVA - Krakovia](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1866)
* [UVA - Simplifying Fractions](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=1755)
* [UVA - 500!](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=564)
* [Hackerrank - Factorial digit sum](https://www.hackerrank.com/contests/projecteuler/challenges/euler020/problem)
* [UVA - Immortal Rabbits](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=4803)
* [SPOJ - 0110SS](http://www.spoj.com/problems/IWGBS/)
* [Codeforces - Notepad](http://codeforces.com/contest/17/problem/D)
---
title: gray_code
---
# Gray code
Gray code is a binary numeral system where two successive values differ in only one bit.
For example, the sequence of Gray codes for 3-bit numbers is: 000, 001, 011, 010, 110, 111, 101, 100, so $G(4) = 6$.
This code was invented by Frank Gray in 1953.
## Finding Gray code
Let's look at the bits of number $n$ and the bits of number $G(n)$. Notice that the $i$-th bit of $G(n)$ equals 1 only when the $i$-th bit of $n$ equals 1 and the $(i+1)$-th bit equals 0, or the other way around (the $i$-th bit equals 0 and the $(i+1)$-th bit equals 1). Thus, $G(n) = n \oplus (n >> 1)$:
```cpp
int g (int n) {
return n ^ (n >> 1);
}
```
## Finding inverse Gray code
Given Gray code $g$, restore the original number $n$.
We will move from the most significant bits to the least significant ones (the least significant bit has index 1 and the most significant bit has index $k$). The relation between the bits $n_i$ of number $n$ and the bits $g_i$ of number $g$:
$$\begin{align}
n_k &= g_k, \\
n_{k-1} &= g_{k-1} \oplus n_k = g_k \oplus g_{k-1}, \\
n_{k-2} &= g_{k-2} \oplus n_{k-1} = g_k \oplus g_{k-1} \oplus g_{k-2}, \\
n_{k-3} &= g_{k-3} \oplus n_{k-2} = g_k \oplus g_{k-1} \oplus g_{k-2} \oplus g_{k-3}, \\
\vdots
\end{align}$$
The easiest way to write it in code is:
```cpp
int rev_g (int g) {
int n = 0;
for (; g; g >>= 1)
n ^= g;
return n;
}
```
## Practical applications
Gray codes have some useful applications, sometimes quite unexpected:
* Gray code of $n$ bits forms a Hamiltonian cycle on a hypercube, where each bit corresponds to one dimension.
* Gray codes are used to minimize the errors in digital-to-analog signals conversion (for example, in sensors).
* Gray code can be used to solve the Towers of Hanoi problem.
Let $n$ denote number of disks. Start with Gray code of length $n$ which
consists of all zeroes ($G(0)$) and move between consecutive Gray codes (from $G(i)$ to $G(i+1)$).
Let the $i$-th bit of the current Gray code represent the $i$-th disk
(the least significant bit corresponds to the smallest disk and the most significant bit to the biggest disk).
Since exactly one bit changes on each step, we can treat changing $i$-th bit as moving $i$-th disk.
Notice that there is exactly one move option for each disk (except the smallest one) on each step (except the start and finish positions).
There are always two move options for the smallest disk, but there is a strategy which will always lead to the answer:
if $n$ is odd, then the sequence of the smallest disk moves looks like $f \to t \to r \to f \to t \to r \to \dots$
(where $f$ is the initial rod, $t$ is the terminal rod and $r$ is the remaining rod), and
if $n$ is even: $f \to r \to t \to f \to r \to t \to \dots$ (a short code sketch of this strategy follows after this list).
* Gray codes are also used in genetic algorithms theory.
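The following is a minimal sketch (our own illustration, not from the original article) of the Towers of Hanoi strategy described above; pegs are numbered $0$ (initial rod $f$), $1$ (remaining rod $r$) and $2$ (terminal rod $t$):
```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch: move n disks from peg 0 to peg 2 using the Gray code observation above.
// Between G(i-1) and G(i) exactly one bit changes, and its index tells us which disk to move.
int main() {
    int n = 3;                                        // number of disks (example value)
    vector<vector<int>> peg(3);
    for (int d = n; d >= 1; --d) peg[0].push_back(d); // disk 1 is the smallest

    int small = 0;                                    // peg currently holding disk 1
    int step = (n % 2 == 0) ? 1 : 2;                  // 0->1->2 if n is even, 0->2->1 if n is odd

    for (long long i = 1; i < (1LL << n); ++i) {
        int disk = __builtin_ctzll(i) + 1;            // position of the flipped bit = disk to move
        if (disk == 1) {
            int to = (small + step) % 3;
            peg[to].push_back(peg[small].back());
            peg[small].pop_back();
            cout << "move disk 1: " << small << " -> " << to << "\n";
            small = to;
        } else {
            // the only legal move is between the two pegs not holding disk 1
            int a = (small + 1) % 3, b = (small + 2) % 3;
            bool a_to_b = peg[b].empty() || (!peg[a].empty() && peg[a].back() < peg[b].back());
            int from = a_to_b ? a : b, to = a_to_b ? b : a;
            peg[to].push_back(peg[from].back());
            peg[from].pop_back();
            cout << "move disk " << disk << ": " << from << " -> " << to << "\n";
        }
    }
}
```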
## Practice Problems
* <a href="http://codeforces.com/problemsets/acmsguru/problem/99999/249">SGU #249 <b>"Matrix"</b> [Difficulty: medium]</a>
---
title: diofant_2_equation
---
# Linear Diophantine Equation
A Linear Diophantine Equation (in two variables) is an equation of the general form:
$$ax + by = c$$
where $a$, $b$, $c$ are given integers, and $x$, $y$ are unknown integers.
In this article, we consider several classical problems on these equations:
* finding one solution
* finding all solutions
* finding the number of solutions and the solutions themselves in a given interval
* finding a solution with minimum value of $x + y$
## The degenerate case
A degenerate case that needs to be taken care of is when $a = b = 0$. It is easy to see that we either have no solutions or infinitely many solutions, depending on whether $c = 0$ or not. In the rest of this article, we will ignore this case.
## Analytic solution
When $a \neq 0$ and $b \neq 0$, the equation $ax+by=c$ can be equivalently treated as either of the following:
\begin{gather}
ax \equiv c \pmod b,\newline
by \equiv c \pmod a.
\end{gather}
Without loss of generality, assume that $b \neq 0$ and consider the first equation. When $a$ and $b$ are co-prime, the solution to it is given as
$$x \equiv ca^{-1} \pmod b,$$
where $a^{-1}$ is the [modular inverse](module-inverse.md) of $a$ modulo $b$.
When $a$ and $b$ are not co-prime, values of $ax$ modulo $b$ for all integer $x$ are divisible by $g=\gcd(a, b)$, so the solution only exists when $c$ is divisible by $g$. In this case, one of solutions can be found by reducing the equation by $g$:
$$(a/g) x \equiv (c/g) \pmod{b/g}.$$
By the definition of $g$, the numbers $a/g$ and $b/g$ are co-prime, so the solution is given explicitly as
$$\begin{cases}
x \equiv (c/g)(a/g)^{-1}\pmod{b/g},\\
y = \frac{c-ax}{b}.
\end{cases}$$
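For instance (a small worked example of our own), take $a = 6$, $b = 4$, $c = 8$. Here $g = \gcd(6, 4) = 2$ and $2 \mid 8$, so solutions exist. The reduced congruence is $3x \equiv 4 \equiv 0 \pmod 2$, i.e. $x \equiv 0 \pmod 2$; taking $x = 0$ gives $y = \frac{8 - 6 \cdot 0}{4} = 2$, and indeed $6 \cdot 0 + 4 \cdot 2 = 8$.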
## Algorithmic solution
To find one solution of the Diophantine equation with 2 unknowns, you can use the [Extended Euclidean algorithm](extended-euclid-algorithm.md). First, assume that $a$ and $b$ are non-negative. When we apply the Extended Euclidean algorithm to $a$ and $b$, we can find their greatest common divisor $g$ and two numbers $x_g$ and $y_g$ such that:
$$a x_g + b y_g = g$$
If $c$ is divisible by $g = \gcd(a, b)$, then the given Diophantine equation has a solution, otherwise it does not have any solution. The proof is straight-forward: a linear combination of two numbers is divisible by their common divisor.
Now suppose that $c$ is divisible by $g$; then we have:
$$a \cdot x_g \cdot \frac{c}{g} + b \cdot y_g \cdot \frac{c}{g} = c$$
Therefore one of the solutions of the Diophantine equation is:
$$x_0 = x_g \cdot \frac{c}{g},$$
$$y_0 = y_g \cdot \frac{c}{g}.$$
The above idea still works when $a$ or $b$ or both of them are negative. We only need to change the sign of $x_0$ and $y_0$ when necessary.
Finally, we can implement this idea as follows (note that this code does not consider the case $a = b = 0$):
```{.cpp file=linear_diophantine_any}
int gcd(int a, int b, int& x, int& y) {
if (b == 0) {
x = 1;
y = 0;
return a;
}
int x1, y1;
int d = gcd(b, a % b, x1, y1);
x = y1;
y = x1 - y1 * (a / b);
return d;
}
bool find_any_solution(int a, int b, int c, int &x0, int &y0, int &g) {
g = gcd(abs(a), abs(b), x0, y0);
if (c % g) {
return false;
}
x0 *= c / g;
y0 *= c / g;
if (a < 0) x0 = -x0;
if (b < 0) y0 = -y0;
return true;
}
```
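A quick usage sketch (our own example) for the equation $6x + 4y = 8$:
```cpp
int x0, y0, g;
if (find_any_solution(6, 4, 8, x0, y0, g)) {
    // g == 2 and 6 * x0 + 4 * y0 == 8 holds; one possible output is x0 = 4, y0 = -4
}
```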
## Getting all solutions
From one solution $(x_0, y_0)$, we can obtain all the solutions of the given equation.
Let $g = \gcd(a, b)$ and let $x_0, y_0$ be integers which satisfy the following:
$$a \cdot x_0 + b \cdot y_0 = c$$
Now, we should see that adding $b / g$ to $x_0$, and, at the same time subtracting $a / g$ from $y_0$ will not break the equality:
$$a \cdot \left(x_0 + \frac{b}{g}\right) + b \cdot \left(y_0 - \frac{a}{g}\right) = a \cdot x_0 + b \cdot y_0 + a \cdot \frac{b}{g} - b \cdot \frac{a}{g} = c$$
Obviously, this process can be repeated again, so all the numbers of the form:
$$x = x_0 + k \cdot \frac{b}{g}$$
$$y = y_0 - k \cdot \frac{a}{g}$$
are solutions of the given Diophantine equation.
Moreover, this is the set of all possible solutions of the given Diophantine equation.
## Finding the number of solutions and the solutions in a given interval
From the previous section, it should be clear that if we don't impose any restrictions on the solutions, there would be an infinite number of them. So in this section, we add some restrictions on the intervals of $x$ and $y$, and we will try to count and enumerate all the solutions.
Let there be two intervals: $[min_x; max_x]$ and $[min_y; max_y]$ and let's say we only want to find the solutions in these two intervals.
Note that if $a$ or $b$ is $0$, then the problem only has one solution. We don't consider this case here.
First, we can find a solution which has the minimum value of $x$ such that $x \ge min_x$. To do this, we first find any solution of the Diophantine equation. Then, we shift this solution to get $x \ge min_x$ (using what we know about the set of all solutions from the previous section). This can be done in $O(1)$.
Denote this minimum value of $x$ by $l_{x1}$.
Similarly, we can find the maximum value of $x$ which satisfy $x \le max_x$. Denote this maximum value of $x$ by $r_{x1}$.
Similarly, we can find the minimum value of $y$ $(y \ge min_y)$ and the maximum value of $y$ $(y \le max_y)$. Denote the corresponding values of $x$ by $l_{x2}$ and $r_{x2}$.
The final solution is all solutions with $x$ in the intersection of $[l_{x1}, r_{x1}]$ and $[l_{x2}, r_{x2}]$. Let us denote this intersection by $[l_x, r_x]$.
Following is the code implementing this idea.
Notice that we divide $a$ and $b$ at the beginning by $g$.
Since the equation $a x + b y = c$ is equivalent to the equation $\frac{a}{g} x + \frac{b}{g} y = \frac{c}{g}$, we can use this one instead and have $\gcd(\frac{a}{g}, \frac{b}{g}) = 1$, which simplifies the formulas.
```{.cpp file=linear_diophantine_all}
void shift_solution(int & x, int & y, int a, int b, int cnt) {
x += cnt * b;
y -= cnt * a;
}
int find_all_solutions(int a, int b, int c, int minx, int maxx, int miny, int maxy) {
int x, y, g;
if (!find_any_solution(a, b, c, x, y, g))
return 0;
a /= g;
b /= g;
int sign_a = a > 0 ? +1 : -1;
int sign_b = b > 0 ? +1 : -1;
shift_solution(x, y, a, b, (minx - x) / b);
if (x < minx)
shift_solution(x, y, a, b, sign_b);
if (x > maxx)
return 0;
int lx1 = x;
shift_solution(x, y, a, b, (maxx - x) / b);
if (x > maxx)
shift_solution(x, y, a, b, -sign_b);
int rx1 = x;
shift_solution(x, y, a, b, -(miny - y) / a);
if (y < miny)
shift_solution(x, y, a, b, -sign_a);
if (y > maxy)
return 0;
int lx2 = x;
shift_solution(x, y, a, b, -(maxy - y) / a);
if (y > maxy)
shift_solution(x, y, a, b, sign_a);
int rx2 = x;
if (lx2 > rx2)
swap(lx2, rx2);
int lx = max(lx1, lx2);
int rx = min(rx1, rx2);
if (lx > rx)
return 0;
return (rx - lx) / abs(b) + 1;
}
```
Once we have $l_x$ and $r_x$, it is also simple to enumerate all the solutions. We just need to iterate through $x = l_x + k \cdot \frac{b}{g}$ for all $k \ge 0$ until $x = r_x$, and find the corresponding $y$ values using the equation $a x + b y = c$.
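A minimal sketch of this enumeration (our own illustration; it assumes $l_x$ and $r_x$ were produced by `find_all_solutions` above, so that $x = l_x$ itself belongs to a solution):
```cpp
// Print every solution with x in [lx, rx]; consecutive solutions differ by b/g in x.
void enumerate_solutions(int a, int b, int c, int g, int lx, int rx) {
    int step = abs(b / g);
    for (int x = lx; x <= rx; x += step) {
        int y = (c - a * x) / b; // exact division, since (x, y) solves a*x + b*y = c
        cout << "x = " << x << ", y = " << y << '\n';
    }
}
```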
## Find the solution with minimum value of $x + y$ { data-toc-label='Find the solution with minimum value of <script type="math/tex">x + y</script>' }
Here, $x$ and $y$ also need to be given some restriction, otherwise, the answer may become negative infinity.
The idea is similar to previous section: We find any solution of the Diophantine equation, and then shift the solution to satisfy some conditions.
Finally, use the knowledge of the set of all solutions to find the minimum:
$$x' = x + k \cdot \frac{b}{g},$$
$$y' = y - k \cdot \frac{a}{g}.$$
Note that $x + y$ change as follows:
$$x' + y' = x + y + k \cdot \left(\frac{b}{g} - \frac{a}{g}\right) = x + y + k \cdot \frac{b-a}{g}$$
If $a < b$, we need to select the smallest possible value of $k$. If $a > b$, we need to select the largest possible value of $k$. If $a = b$, all solutions will have the same sum $x + y$.
## Practice Problems
* [Spoj - Crucial Equation](http://www.spoj.com/problems/CEQU/)
* [SGU 106](http://codeforces.com/problemsets/acmsguru/problem/99999/106)
* [Codeforces - Ebony and Ivory](http://codeforces.com/contest/633/problem/A)
* [Codechef - Get AC in one go](https://www.codechef.com/problems/COPR16G)
* [LightOj - Solutions to an equation](http://www.lightoj.com/volume_showproblem.php?problem=1306)
---
title: discrete_log
---
# Discrete Logarithm
The discrete logarithm is an integer $x$ satisfying the equation
$$a^x \equiv b \pmod m$$
for given integers $a$, $b$ and $m$.
The discrete logarithm does not always exist, for instance there is no solution to $2^x \equiv 3 \pmod 7$. There is no simple condition to determine if the discrete logarithm exists.
In this article, we describe the **Baby-step giant-step** algorithm, an algorithm to compute the discrete logarithm proposed by Shanks in 1971, which has the time complexity $O(\sqrt{m})$. This is a **meet-in-the-middle** algorithm because it uses the technique of separating tasks in half.
## Algorithm
Consider the equation:
$$a^x \equiv b \pmod m,$$
where $a$ and $m$ are relatively prime.
Let $x = np - q$, where $n$ is some pre-selected constant (we will describe how to select $n$ later). $p$ is known as **giant step**, since increasing it by one increases $x$ by $n$. Similarly, $q$ is known as **baby step**.
Obviously, any number $x$ in the interval $[0; m)$ can be represented in this form, where $p \in [1; \lceil \frac{m}{n} \rceil ]$ and $q \in [0; n]$.
Then, the equation becomes:
$$a^{np - q} \equiv b \pmod m.$$
Using the fact that $a$ and $m$ are relatively prime, we obtain:
$$a^{np} \equiv ba^q \pmod m$$
This new equation can be rewritten in a simplified form:
$$f_1(p) = f_2(q),$$
where $f_1(p) = a^{np} \bmod m$ and $f_2(q) = b a^q \bmod m$.
This problem can be solved using the meet-in-the-middle method as follows:
* Calculate $f_1$ for all possible arguments $p$. Sort the array of value-argument pairs.
* For all possible arguments $q$, calculate $f_2$ and look for the corresponding $p$ in the sorted array using binary search.
## Complexity
We can calculate $f_1(p)$ in $O(\log m)$ using the [binary exponentiation algorithm](binary-exp.md). Similarly for $f_2(q)$.
In the first step of the algorithm, we need to calculate $f_1$ for every possible argument $p$ and then sort the values. Thus, this step has complexity:
$$O\left(\left\lceil \frac{m}{n} \right\rceil \left(\log m + \log \left\lceil \frac{m}{n} \right\rceil \right)\right) = O\left( \left\lceil \frac {m}{n} \right\rceil \log m\right)$$
In the second step of the algorithm, we need to calculate $f_2(q)$ for every possible argument $q$ and then do a binary search on the array of values of $f_1$, thus this step has complexity:
$$O\left(n \left(\log m + \log \frac{m}{n} \right) \right) = O\left(n \log m\right).$$
Now, when we add these two complexities, we get $\log m$ multiplied by the sum of $n$ and $m/n$, which is minimal when $n = m/n$, which means, to achieve optimal performance, $n$ should be chosen such that:
$$n = \sqrt{m}.$$
Then, the complexity of the algorithm becomes:
$$O(\sqrt {m} \log m).$$
## Implementation
### The simplest implementation
In the following code, the function `powmod` calculates $a^b \pmod m$ and the function `solve` produces a proper solution to the problem.
It returns $-1$ if there is no solution and returns one of the possible solutions otherwise.
```cpp
int powmod(int a, int b, int m) {
int res = 1;
while (b > 0) {
if (b & 1) {
res = (res * 1ll * a) % m;
}
a = (a * 1ll * a) % m;
b >>= 1;
}
return res;
}
int solve(int a, int b, int m) {
a %= m, b %= m;
int n = sqrt(m) + 1;
map<int, int> vals;
for (int p = 1; p <= n; ++p)
vals[powmod(a, p * n, m)] = p;
for (int q = 0; q <= n; ++q) {
int cur = (powmod(a, q, m) * 1ll * b) % m;
if (vals.count(cur)) {
int ans = vals[cur] * n - q;
return ans;
}
}
return -1;
}
```
In this code, we used `map` from the C++ standard library to store the values of $f_1$.
Internally, `map` uses a red-black tree to store values.
Thus this code is a little bit slower than if we had used an array and binary searched, but is much easier to write.
Notice that our code assumes $0^0 = 1$, i.e. the code will compute $0$ as solution for the equation $0^x \equiv 1 \pmod m$ and also as solution for $0^x \equiv 0 \pmod 1$.
This is an often used convention in algebra, but it's also not universally accepted in all areas.
Sometimes $0^0$ is simply undefined.
If you don't like our convention, then you need to handle the case $a=0$ separately:
```cpp
if (a == 0)
return b == 0 ? 1 : -1;
```
Another thing to note is that, if there are multiple arguments $p$ that map to the same value of $f_1$, we only store one such argument.
This works in this case because we only want to return one possible solution.
If we need to return all possible solutions, we need to change `map<int, int>` to, say, `map<int, vector<int>>`.
We also need to change the second step accordingly.
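As an illustrative check (our own example, not from the original article), `solve(3, 13, 17)` may return $20$ rather than the minimal exponent $4$: the answer is still valid, since $3^{20} \equiv 3^4 \equiv 81 \equiv 13 \pmod{17}$ by Fermat's little theorem. The implementation below shows how to obtain the minimal answer directly.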
## Improved implementation
A possible improvement is to get rid of binary exponentiation.
This can be done by keeping a variable that is multiplied by $a$ each time we increase $q$ and a variable that is multiplied by $a^n$ each time we increase $p$.
With this change, the complexity of the algorithm is still the same, but now the $\log$ factor is only for the `map`.
Instead of a `map`, we can also use a hash table (`unordered_map` in C++) which has the average time complexity $O(1)$ for inserting and searching.
Problems often ask for the minimum $x$ which satisfies the solution.
It is possible to get all answers and take the minimum, or reduce the first found answer using [Euler's theorem](phi-function.md#toc-tgt-2), but we can be smart about the order in which we calculate values and ensure the first answer we find is the minimum.
```{.cpp file=discrete_log}
// Returns minimum x for which a ^ x % m = b % m, a and m are coprime.
int solve(int a, int b, int m) {
a %= m, b %= m;
int n = sqrt(m) + 1;
int an = 1;
for (int i = 0; i < n; ++i)
an = (an * 1ll * a) % m;
unordered_map<int, int> vals;
for (int q = 0, cur = b; q <= n; ++q) {
vals[cur] = q;
cur = (cur * 1ll * a) % m;
}
for (int p = 1, cur = 1; p <= n; ++p) {
cur = (cur * 1ll * an) % m;
if (vals.count(cur)) {
int ans = n * p - vals[cur];
return ans;
}
}
return -1;
}
```
The complexity is $O(\sqrt{m})$ using `unordered_map`.
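Continuing the example of our own from the previous section, this version returns the minimal answer: `solve(3, 13, 17)` yields $4$, since $3^4 = 81 \equiv 13 \pmod{17}$.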
## When $a$ and $m$ are not coprime { data-toc-label='When a and m are not coprime' }
Let $g = \gcd(a, m)$, and $g > 1$. Clearly $a^x \bmod m$ for every $x \ge 1$ will be divisible by $g$.
If $g \nmid b$, there is no solution for $x$.
If $g \mid b$, let $a = g \alpha, b = g \beta, m = g \nu$.
$$
\begin{aligned}
a^x & \equiv b \mod m \\\
(g \alpha) a^{x - 1} & \equiv g \beta \mod g \nu \\\
\alpha a^{x-1} & \equiv \beta \mod \nu
\end{aligned}
$$
The baby-step giant-step algorithm can be easily extended to solve $ka^{x} \equiv b \pmod m$ for $x$.
```{.cpp file=discrete_log_extended}
// Returns minimum x for which a ^ x % m = b % m.
int solve(int a, int b, int m) {
a %= m, b %= m;
int k = 1, add = 0, g;
while ((g = gcd(a, m)) > 1) {
if (b == k)
return add;
if (b % g)
return -1;
b /= g, m /= g, ++add;
k = (k * 1ll * a / g) % m;
}
int n = sqrt(m) + 1;
int an = 1;
for (int i = 0; i < n; ++i)
an = (an * 1ll * a) % m;
unordered_map<int, int> vals;
for (int q = 0, cur = b; q <= n; ++q) {
vals[cur] = q;
cur = (cur * 1ll * a) % m;
}
for (int p = 1, cur = k; p <= n; ++p) {
cur = (cur * 1ll * an) % m;
if (vals.count(cur)) {
int ans = n * p - vals[cur] + add;
return ans;
}
}
return -1;
}
```
The time complexity remains $O(\sqrt{m})$ as before since the initial reduction to coprime $a$ and $m$ is done in $O(\log^2 m)$.
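A few checks for this version (again a hypothetical driver, assuming the extended `solve` above is in scope); the first assertion matches the worked example above:
```cpp
#include <cassert>

int main() {
    assert(solve(2, 4, 12) == 2);   // a and m share the factor 2; 2^2 = 4
    assert(solve(2, 3, 5) == 3);    // coprime inputs still work as before
    assert(solve(2, 3, 7) == -1);   // 2^x mod 7 only takes the values 1, 2, 4
}
```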
## Practice Problems
* [Spoj - Power Modulo Inverted](http://www.spoj.com/problems/MOD/)
* [Topcoder - SplittingFoxes3](https://community.topcoder.com/stat?c=problem_statement&pm=14386&rd=16801)
* [CodeChef - Inverse of a Function](https://www.codechef.com/problems/INVXOR/)
* [Hard Equation](https://codeforces.com/gym/101853/problem/G) (assume that $0^0$ is undefined)
* [CodeChef - Chef and Modular Sequence](https://www.codechef.com/problems/CHEFMOD)
## References
* [Wikipedia - Baby-step giant-step](https://en.wikipedia.org/wiki/Baby-step_giant-step)
* [Answer by Zander on Mathematics StackExchange](https://math.stackexchange.com/a/133054)
---
title: Factorial modulo p
---
# Factorial modulo $p$
In some cases it is necessary to consider complex formulas modulo some prime $p$, containing factorials in both numerator and denominator, such as those you encounter in the formula for Binomial coefficients.
We consider the case when $p$ is relatively small.
This problem only makes sense when the factorials appear in both the numerator and the denominator of fractions.
Otherwise $p!$ and subsequent terms will reduce to zero.
But in fractions the factors of $p$ can cancel, and the resulting expression will be non-zero modulo $p$.
Thus, formally the task is: you want to calculate $n! \bmod p$, without taking the factors of $p$ that appear in the factorial into account.
Imagine you write down the prime factorization of $n!$, remove all factors $p$, and compute the product modulo $p$.
We will denote this *modified* factorial with $n!_{\%p}$.
For instance $7!_{\%3} \equiv 1 \cdot 2 \cdot \underbrace{1}_{3} \cdot 4 \cdot 5 \cdot \underbrace{2}_{6} \cdot 7 \equiv 2 \bmod 3$.
Learning how to effectively calculate this modified factorial allows us to quickly calculate the value of the various combinatorial formulas (for example, [Binomial coefficients](../combinatorics/binomial-coefficients.md)).
## Algorithm
Let's write this modified factorial explicitly.
$$\begin{eqnarray}
n!_{\%p} &=& 1 \cdot 2 \cdot 3 \cdot \ldots \cdot (p-2) \cdot (p-1) \cdot \underbrace{1}_{p} \cdot (p+1) \cdot (p+2) \cdot \ldots \cdot (2p-1) \cdot \underbrace{2}_{2p} \\\
& &\quad \cdot (2p+1) \cdot \ldots \cdot (p^2-1) \cdot \underbrace{1}_{p^2} \cdot (p^2 +1) \cdot \ldots \cdot n \pmod{p} \\\\
&=& 1 \cdot 2 \cdot 3 \cdot \ldots \cdot (p-2) \cdot (p-1) \cdot \underbrace{1}_{p} \cdot 1 \cdot 2 \cdot \ldots \cdot (p-1) \cdot \underbrace{2}_{2p} \cdot 1 \cdot 2 \\\
& &\quad \cdot \ldots \cdot (p-1) \cdot \underbrace{1}_{p^2} \cdot 1 \cdot 2 \cdot \ldots \cdot (n \bmod p) \pmod{p}
\end{eqnarray}$$
It can be clearly seen that the factorial is divided into several blocks of the same length, except for the last one.
$$\begin{eqnarray}
n!_{\%p}&=& \underbrace{1 \cdot 2 \cdot 3 \cdot \ldots \cdot (p-2) \cdot (p-1) \cdot 1}_{1\text{st}} \cdot \underbrace{1 \cdot 2 \cdot 3 \cdot \ldots \cdot (p-2) \cdot (p-1) \cdot 2}_{2\text{nd}} \cdot \ldots \\\\
& & \cdot \underbrace{1 \cdot 2 \cdot 3 \cdot \ldots \cdot (p-2) \cdot (p-1) \cdot 1}_{p\text{th}} \cdot \ldots \cdot \quad \underbrace{1 \cdot 2 \cdot \ldots \cdot (n \bmod p)}_{\text{tail}} \pmod{p}.
\end{eqnarray}$$
The main part of each block is easy to count: it's just $(p-1)!\ \mathrm{mod}\ p$.
We can compute that programmatically or just apply Wilson's theorem, which states that $(p-1)! \bmod p = -1$ for any prime $p$.
We have exactly $\lfloor \frac{n}{p} \rfloor$ such blocks, therefore we need to raise $-1$ to the power of $\lfloor \frac{n}{p} \rfloor$.
This can be done in logarithmic time using [Binary Exponentiation](binary-exp.md); however you can also notice that the result will switch between $-1$ and $1$, so we only need to look at the parity of the exponent and multiply by $-1$ if the parity is odd.
And instead of a multiplication, we can also just subtract the current result from $p$.
The value of the last partial block can be calculated separately in $O(p)$.
This leaves only the last element of each block.
If we hide the already handled elements, we can see the following pattern:
$$n!_{\%p} = \underbrace{ \ldots \cdot 1 } \cdot \underbrace{ \ldots \cdot 2} \cdot \ldots \cdot \underbrace{ \ldots \cdot (p-1)} \cdot \underbrace{ \ldots \cdot 1 } \cdot \underbrace{ \ldots \cdot 1} \cdot \underbrace{ \ldots \cdot 2} \cdots$$
This again is a *modified* factorial, only with a much smaller dimension.
It's $\lfloor n / p \rfloor !_{\%p}$.
Thus, during the calculation of the *modified* factorial $n!_{\%p}$ we did $O(p)$ operations and are left with the calculation of $\lfloor n / p \rfloor !_{\%p}$.
We have a recursive formula.
The recursion depth is $O(\log_p n)$, and therefore the complete asymptotic behavior of the algorithm is $O(p \log_p n)$.
Notice, if you precompute the factorials $0!,~ 1!,~ 2!,~ \dots,~ (p-1)!$ modulo $p$, then the complexity will just be $O(\log_p n)$.
## Implementation
We don't need recursion because this is a case of tail recursion and thus can be easily implemented using iteration.
In the following implementation we precompute the factorials $0!,~ 1!,~ \dots,~ (p-1)!$, and thus have the runtime $O(p + \log_p n)$.
If you need to call the function multiple times, then you can do the precomputation outside of the function and do the computation of $n!_{\%p}$ in $O(\log_p n)$ time.
```cpp
int factmod(int n, int p) {
vector<int> f(p);
f[0] = 1;
for (int i = 1; i < p; i++)
f[i] = f[i-1] * i % p;
int res = 1;
while (n > 1) {
if ((n/p) % 2)
res = p - res;
res = res * f[n%p] % p;
n /= p;
}
return res;
}
```
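As a quick check against the example from the beginning of the article (a hypothetical driver, assuming `factmod` from above is in scope): removing the factors of $3$ from $7! = 5040$ leaves $560$, and $560 \bmod 3 = 2$.
```cpp
#include <cassert>

int main() {
    assert(factmod(7, 3) == 2);   // 7!_{%3} = 2, matching the example above
}
```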
Alternatively, if you only have limited memory and can't afford to store all factorials, you can also just remember the factorials that you need, sort them, and then compute them in one sweep by computing the factorials $0!,~ 1!,~ 2!,~ \dots,~ (p-1)!$ in a loop without storing them explicitly.
## Multiplicity of $p$
If we want to compute a Binomial coefficient modulo $p$, then we additionally need the multiplicity of the $p$ in $n$, i.e. the number of times $p$ occurs in the prime factorization of $n$, or number of times we erased $p$ during the computation of the *modified* factorial.
[Legendre's formula](https://en.wikipedia.org/wiki/Legendre%27s_formula) gives us a way to compute this in $O(\log_p n)$ time.
The formula gives the multiplicity $\nu_p$ as:
$$\nu_p(n!) = \sum_{i=1}^{\infty} \left\lfloor \frac{n}{p^i} \right\rfloor$$
Thus we get the implementation:
```cpp
int multiplicity_factorial(int n, int p) {
int count = 0;
do {
n /= p;
count += n;
} while (n);
return count;
}
```
This formula can be proven very easily using the same ideas that we did in the previous sections.
Remove all elements that don't contain the factor $p$.
This leaves $\lfloor n/p \rfloor$ elements remaining.
If we remove the factor $p$ from each of them, we get the product $1 \cdot 2 \cdots \lfloor n/p \rfloor = \lfloor n/p \rfloor !$, and again we have a recursion.
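Putting the two pieces together, here is a minimal sketch of computing a binomial coefficient modulo a prime $p$. It assumes a helper `powmod(a, b, p)` (binary exponentiation, not shown here) for the modular inverse via Fermat's little theorem, and uses `factmod` and `multiplicity_factorial` from above.
```cpp
// Sketch: C(n, k) mod p for prime p, assuming powmod(a, b, p) computes a^b mod p.
int binomial_mod_p(int n, int k, int p) {
    if (k < 0 || k > n)
        return 0;
    // count the factors p in C(n, k) = n! / (k! (n-k)!)
    int nu = multiplicity_factorial(n, p) - multiplicity_factorial(k, p)
           - multiplicity_factorial(n - k, p);
    if (nu > 0)
        return 0;                       // p divides the binomial coefficient
    // all factors p cancel, so the modified factorials give the answer
    int num = factmod(n, p);
    int den = (int)(factmod(k, p) * 1ll * factmod(n - k, p) % p);
    return (int)(num * 1ll * powmod(den, p - 2, p) % p);   // Fermat inverse
}
```
For example, $\binom{7}{3} = 35 \equiv 2 \pmod 3$, while $\binom{7}{2} = 21$ is divisible by $3$, so the multiplicity check returns $0$ directly.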
---
title: primitive_root
---
# Primitive Root
## Definition
In modular arithmetic, a number $g$ is called a `primitive root modulo n` if every number coprime to $n$ is congruent to a power of $g$ modulo $n$. Mathematically, $g$ is a `primitive root modulo n` if and only if for any integer $a$ such that $\gcd(a, n) = 1$, there exists an integer $k$ such that:
$g^k \equiv a \pmod n$.
$k$ is then called the `index` or `discrete logarithm` of $a$ to the base $g$ modulo $n$. $g$ is also called the `generator` of the multiplicative group of integers modulo $n$.
In particular, for the case where $n$ is a prime, the powers of a primitive root run through all numbers from $1$ to $n-1$.
## Existence
A primitive root modulo $n$ exists if and only if:
* $n$ is 1, 2, 4, or
* $n$ is a power of an odd prime number $(n = p^k)$, or
* $n$ is twice the power of an odd prime number $(n = 2 \cdot p^k)$.
This theorem was proved by Gauss in 1801.
## Relation with the Euler function
Let $g$ be a primitive root modulo $n$. Then we can show that the smallest number $k$ for which $g^k \equiv 1 \pmod n$ is equal to $\phi (n)$. Moreover, the reverse is also true, and this fact will be used in this article to find a primitive root.
Furthermore, the number of primitive roots modulo $n$, if there are any, is equal to $\phi (\phi (n) )$.
## Algorithm for finding a primitive root
A naive algorithm is to consider all numbers in the range $[1, n-1]$, and check whether each one is a primitive root by calculating all of its powers and verifying that they are all different. This algorithm has complexity $O(g \cdot n)$, which would be too slow. In this section, we propose a faster algorithm using several well-known theorems.
From the previous section, we know that if the smallest number $k$ for which $g^k \equiv 1 \pmod n$ is $\phi (n)$, then $g$ is a primitive root. Since for any number $a$ relatively prime to $n$ Euler's theorem gives $a ^ { \phi (n) } \equiv 1 \pmod n$, to check whether $g$ is a primitive root it is enough to check that for all $d$ less than $\phi (n)$ we have $g^d \not \equiv 1 \pmod n$. However, this algorithm is still too slow.
From Lagrange's theorem, we know that the multiplicative order of any number modulo $n$ must be a divisor of $\phi (n)$. Thus, it is sufficient to verify for all proper divisors $d \mid \phi (n)$ that $g^d \not \equiv 1 \pmod n$. This is already a much faster algorithm, but we can still do better.
Factorize $\phi (n) = p_1 ^ {a_1} \cdots p_s ^ {a_s}$. We prove that in the previous algorithm, it is sufficient to consider only the values of $d$ which have the form $\frac { \phi (n) } {p_j}$. Indeed, let $d$ be any proper divisor of $\phi (n)$. Then, obviously, there exists such $j$ that $d \mid \frac { \phi (n) } {p_j}$, i.e. $d \cdot k = \frac { \phi (n) } {p_j}$. However, if $g^d \equiv 1 \pmod n$, we would get:
$g ^ { \frac { \phi (n)} {p_j} } \equiv g ^ {d \cdot k} \equiv (g^d) ^k \equiv 1^k \equiv 1 \pmod n$.
i.e. among the numbers of the form $\frac {\phi (n)} {p_i}$, there would be at least one such that the conditions were not met.
Now we have a complete algorithm for finding the primitive root:
* First, find $\phi (n)$ and factorize it.
* Then iterate through all numbers $g \in [1, n]$, and for each number, check whether it is a primitive root as follows:
* Calculate all $g ^ { \frac {\phi (n)} {p_i}} \pmod n$.
* If all the calculated values are different from $1$, then $g$ is a primitive root.
Running time of this algorithm is $O(Ans \cdot \log \phi (n) \cdot \log n)$ (assume that $\phi (n)$ has $\log \phi (n)$ divisors).
Shoup (1990, 1992) proved, assuming the [generalized Riemann hypothesis](http://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis), that the smallest primitive root $g$ is $O(\log^6 p)$.
## Implementation
The following code assumes that the modulus `p` is a prime number. To make it work for any value of `p`, we must add the calculation of $\phi (p)$.
```cpp
int powmod (int a, int b, int p) {
int res = 1;
while (b)
if (b & 1)
res = int (res * 1ll * a % p), --b;
else
a = int (a * 1ll * a % p), b >>= 1;
return res;
}
int generator (int p) {
vector<int> fact;
int phi = p-1, n = phi;
for (int i=2; i*i<=n; ++i)
if (n % i == 0) {
fact.push_back (i);
while (n % i == 0)
n /= i;
}
if (n > 1)
fact.push_back (n);
for (int res=2; res<=p; ++res) {
bool ok = true;
for (size_t i=0; i<fact.size() && ok; ++i)
ok &= powmod (res, phi / fact[i], p) != 1;
if (ok) return res;
}
return -1;
}
```
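For instance (a hypothetical check, assuming `generator` from above is in scope), $3$ is the smallest primitive root modulo $7$: its successive powers $3, 2, 6, 4, 5, 1$ run through all residues from $1$ to $6$, while $2$ fails because $2^3 \equiv 1 \pmod 7$.
```cpp
#include <cassert>

int main() {
    assert(generator(7) == 3);
}
```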
---
title: Binary Exponentiation by Factoring
---
# Binary Exponentiation by Factoring
Consider a problem of computing $ax^y \pmod{2^d}$, given integers $a$, $x$, $y$ and $d \geq 3$, where $x$ is odd.
The algorithm below allows us to solve this problem with $O(d)$ additions and bitwise operations and a single multiplication by $y$.
Due to the structure of the multiplicative group modulo $2^d$, any number $x$ such that $x \equiv 1 \pmod 4$ can be represented as
$$
x \equiv b^{L(x)} \pmod{2^d},
$$
where $b \equiv 5 \pmod 8$. Without loss of generality we assume that $x \equiv 1 \pmod 4$, as we can reduce $x \equiv 3 \pmod 4$ to $x \equiv 1 \pmod 4$ by substituting $x \mapsto -x$ and $a \mapsto (-1)^{y} a$. With this notation, $ax^y$ is represented as
$$
a x^y \equiv a b^{yL(x)} \pmod{2^d}.
$$
The core idea of the algorithm is to simplify the computation of $L(x)$ and $b^{y L(x)}$ using the fact that we're working modulo $2^d$. For reasons that will be apparent later on, we'll be working with $4L(x)$ rather than $L(x)$, but taken modulo $2^d$ instead of $2^{d-2}$.
In this article, we will cover the implementation for $32$-bit integers. Let
* `mbin_log_32(r, x)` be a function that computes $r+4L(x) \pmod{2^d}$;
* `mbin_exp_32(r, x)` be a function that computes $r b^{\frac{x}{4}} \pmod{2^d}$;
* `mbin_power_odd_32(a, x, y)` be a function that computes $ax^y \pmod{2^d}$.
Then `mbin_power_odd_32` is implemented as follows:
```cpp
uint32_t mbin_power_odd_32(uint32_t rem, uint32_t base, uint32_t exp) {
if (base & 2) {
/* divider is considered negative */
base = -base;
/* check if result should be negative */
if (exp & 1) {
rem = -rem;
}
}
return (mbin_exp_32(rem, mbin_log_32(0, base) * exp));
}
```
## Computing 4L(x) from x
Let $x$ be an odd number such that $x \equiv 1 \pmod 4$. It can be represented as
$$
x \equiv (2^{a_1}+1)\dots(2^{a_k}+1) \pmod{2^d},
$$
where $1 < a_1 < \dots < a_k < d$. Here $L(\cdot)$ is well-defined for each multiplier, as they're equal to $1$ modulo $4$. Hence,
$$
4L(x) \equiv 4L(2^{a_1}+1)+\dots+4L(2^{a_k}+1) \pmod{2^{d}}.
$$
So, if we precompute $t_n = 4L(2^n+1)$ for all $1 < n < d$, we will be able to compute $4L(x)$ for any number $x$.
For 32-bit integers, we can use the following table:
```cpp
const uint32_t mbin_log_32_table[32] = {
0x00000000, 0x00000000, 0xd3cfd984, 0x9ee62e18,
0xe83d9070, 0xb59e81e0, 0xa17407c0, 0xce601f80,
0xf4807f00, 0xe701fe00, 0xbe07fc00, 0xfc1ff800,
0xf87ff000, 0xf1ffe000, 0xe7ffc000, 0xdfff8000,
0xffff0000, 0xfffe0000, 0xfffc0000, 0xfff80000,
0xfff00000, 0xffe00000, 0xffc00000, 0xff800000,
0xff000000, 0xfe000000, 0xfc000000, 0xf8000000,
0xf0000000, 0xe0000000, 0xc0000000, 0x80000000,
};
```
In practice, a slightly different approach is used than the one described above. Rather than finding the factorization of $x$, we successively multiply $x$ by factors $2^n+1$ until we turn it into $1$ modulo $2^d$. In this way, we find the representation of $x^{-1}$, that is
$$
x (2^{a_1}+1)\dots(2^{a_k}+1) \equiv 1 \pmod {2^d}.
$$
To do this, we iterate over $n$ such that $1 < n < d$. If the current $x$ has $n$-th bit set, we multiply $x$ with $2^n+1$, which is conveniently done in C++ as `x = x + (x << n)`. This won't change bits lower than $n$, but will turn the $n$-th bit to zero, because $x$ is odd.
With all this in mind, the function `mbin_log_32(r, x)` is implemented as follows:
```cpp
uint32_t mbin_log_32(uint32_t r, uint32_t x) {
uint8_t n;
for (n = 2; n < 32; n++) {
if (x & (1 << n)) {
x = x + (x << n);
r -= mbin_log_32_table[n];
}
}
return r;
}
```
Note that $4L(x) = -4L(x^{-1})$, so instead of adding $4L(2^n+1)$, we subtract it from $r$, which starts out as $0$.
## Computing x from 4L(x)
Note that for $k \geq 1$ it holds that
$$
(a 2^{k}+1)^2 = a^2 2^{2k} +a 2^{k+1}+1 = b2^{k+1}+1,
$$
from which (by repeated squaring) we can deduce that
$$
(2^a+1)^{2^b} \equiv 1 \pmod{2^{a+b}}.
$$
Applying this result to the number $2^n+1$ (taking the exponents $a = n$ and $b = d-n$ in the congruence above), we deduce that the multiplicative order of $2^n+1$ modulo $2^d$ is a divisor of $2^{d-n}$.
This, in turn, means that $L(2^n+1)$ must be divisible by $2^{n-2}$: the order of the base $b \equiv 5 \pmod 8$ is $2^{d-2}$ and the order of $b^y$ is $2^{d-2-v}$, where $2^v$ is the highest power of $2$ that divides $y$, so we need
$$
2^{d-n} \equiv 0 \pmod{2^{d-2-v}},
$$
thus $v$ must be greater than or equal to $n-2$. This is a bit ugly, and to mitigate it we said in the beginning that we work with $4L(x)$ instead of $L(x)$, so that $4L(2^n+1)$ is divisible by $2^n$. Now, if we know $4L(x)$, we can uniquely decompose it into a sum of terms $4L(2^n+1)$ by checking the bits of $4L(x)$ one by one. If the $n$-th bit is set to $1$, we will multiply the result with $2^n+1$ and reduce the current $4L(x)$ by $4L(2^n+1)$.
Thus, `mbin_exp_32` is implemented as follows:
```cpp
uint32_t mbin_exp_32(uint32_t r, uint32_t x) {
uint8_t n;
for (n = 2; n < 32; n++) {
if (x & (1 << n)) {
r = r + (r << n);
x -= mbin_log_32_table[n];
}
}
return r;
}
```
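As a quick sanity check (a hypothetical test driver, assuming the three routines and the table above are in scope), the result can be compared against naive repeated multiplication, where unsigned overflow conveniently performs the reduction modulo $2^{32}$:
```cpp
#include <cassert>
#include <cstdint>

// Naive reference: rem * base^exp, reduced modulo 2^32 by unsigned overflow.
uint32_t naive_power_32(uint32_t rem, uint32_t base, uint32_t exp) {
    while (exp--)
        rem *= base;
    return rem;
}

int main() {
    for (uint32_t base = 1; base < 100; base += 2)   // the base must be odd
        for (uint32_t exp = 0; exp < 50; exp++)
            assert(mbin_power_odd_32(3, base, exp) == naive_power_32(3, base, exp));
}
```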
## Further optimizations
It is possible to halve the number of iterations if you note that $4L(2^{d-1}+1)=2^{d-1}$ and that for $2n \geq d$ it holds that
$$
(2^n+1)^2 \equiv 2^{2n} + 2^{n+1}+1 \equiv 2^{n+1}+1 \pmod{2^d},
$$
which allows us to deduce that $4L(2^n+1)=2^n$ for $2n \geq d$. So, you could simplify the algorithm by only going up to $\frac{d}{2}$ and then use the fact above to compute the remaining part with bitwise operations:
```cpp
uint32_t mbin_log_32(uint32_t r, uint32_t x) {
uint8_t n;
for (n = 2; n != 16; n++) {
if (x & (1 << n)) {
x = x + (x << n);
r -= mbin_log_32_table[n];
}
}
r -= (x & 0xFFFF0000);
return r;
}
uint32_t mbin_exp_32(uint32_t r, uint32_t x) {
uint8_t n;
for (n = 2; n != 16; n++) {
if (x & (1 << n)) {
r = r + (r << n);
x -= mbin_log_32_table[n];
}
}
r *= 1 - (x & 0xFFFF0000);
return r;
}
```
## Computing logarithm table
To compute log-table, one could modify the [Pohlig–Hellman algorithm](https://en.wikipedia.org/wiki/Pohlig–Hellman_algorithm) for the case when modulo is a power of $2$.
Our main task here is to compute $x$ such that $g^x \equiv y \pmod{2^d}$, where $g=5$ and $y$ is a number of the form $2^n+1$.
Squaring both sides $k$ times, we arrive at
$$
g^{2^k x} \equiv y^{2^k} \pmod{2^d}.
$$
Note that the order of $g$ is $2^{d-2}$, hence after squaring both sides $d-3$ times the left hand side becomes either $g^{2^{d-3}} \neq 1$ or $g^0 = 1$, depending on whether the lowest bit of $x$ is $1$ or $0$; this allows us to determine the lowest bit of $x$ by comparing $y^{2^{d-3}}$ with $1$. Now assume that $x=x_0 + 2^k x_1$, where $x_0$ is a known part and $x_1$ is not yet known. Then
$$
g^{x_0+2^k x_1} \equiv y \pmod{2^d}.
$$
Multiplying both parts with $g^{-x_0}$, we get
$$
g^{2^k x_1} \equiv (g^{-x_0} y) \pmod{2^d}.
$$
Now, squaring both sides $d-k-3$ times, we can obtain the next bit of $x$ in the same way, eventually recovering all its bits.
## References
* [M30, Hans Petter Selasky, 2009](https://ia601602.us.archive.org/29/items/B-001-001-251/B-001-001-251.pdf#page=640)
---
title: reverse_element
---
# Modular Multiplicative Inverse
## Definition
A [modular multiplicative inverse](http://en.wikipedia.org/wiki/Modular_multiplicative_inverse) of an integer $a$ is an integer $x$ such that $a \cdot x$ is congruent to $1$ modulo some modulus $m$.
To write it in a formal way: we want to find an integer $x$ so that
$$a \cdot x \equiv 1 \mod m.$$
We will also denote $x$ simply with $a^{-1}$.
We should note that the modular inverse does not always exist. For example, let $m = 4$, $a = 2$.
By checking all possible values modulo $m$, it should become clear that we cannot find $a^{-1}$ satisfying the above equation.
It can be proven that the modular inverse exists if and only if $a$ and $m$ are relatively prime (i.e. $\gcd(a, m) = 1$).
In this article, we present two methods for finding the modular inverse in case it exists, and one method for finding the modular inverse for all numbers in linear time.
## Finding the Modular Inverse using Extended Euclidean algorithm
Consider the following equation (with unknown $x$ and $y$):
$$a \cdot x + m \cdot y = 1$$
This is a [Linear Diophantine equation in two variables](linear-diophantine-equation.md).
As shown in the linked article, when $\gcd(a, m) = 1$, the equation has a solution which can be found using the [extended Euclidean algorithm](extended-euclid-algorithm.md).
Note that $\gcd(a, m) = 1$ is also the condition for the modular inverse to exist.
Now, if we take modulo $m$ of both sides, we can get rid of $m \cdot y$, and the equation becomes:
$$a \cdot x \equiv 1 \mod m$$
Thus, the modular inverse of $a$ is $x$.
The implementation is as follows:
```cpp
int x, y;
int g = extended_euclidean(a, m, x, y);
if (g != 1) {
cout << "No solution!";
}
else {
x = (x % m + m) % m;
cout << x << endl;
}
```
Notice the way we modify `x`:
the resulting `x` from the extended Euclidean algorithm may be negative, so `x % m` might also be negative, and we first have to add `m` to make it positive.
<div id="fermat-euler"></div>
## Finding the Modular Inverse using Binary Exponentiation
Another method for finding modular inverse is to use Euler's theorem, which states that the following congruence is true if $a$ and $m$ are relatively prime:
$$a^{\phi (m)} \equiv 1 \mod m$$
$\phi$ is [Euler's Totient function](phi-function.md).
Again, note that $a$ and $m$ being relatively prime was also the condition for the modular inverse to exist.
If $m$ is a prime number, this simplifies to [Fermat's little theorem](http://en.wikipedia.org/wiki/Fermat's_little_theorem):
$$a^{m - 1} \equiv 1 \mod m$$
Multiply both sides of the above equations by $a^{-1}$, and we get:
* For an arbitrary (but coprime) modulus $m$: $a ^ {\phi (m) - 1} \equiv a ^{-1} \mod m$
* For a prime modulus $m$: $a ^ {m - 2} \equiv a ^ {-1} \mod m$
From these results, we can easily find the modular inverse using the [binary exponentiation algorithm](binary-exp.md), which works in $O(\log m)$ time.
Even though this method is easier to understand than the method described in the previous paragraph, in the case when $m$ is not a prime number, we need to calculate Euler's phi function, which involves a factorization of $m$ and might be very hard. If the prime factorization of $m$ is known, then the complexity of this method is $O(\log m)$.
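As a minimal, self-contained sketch (assuming $m$ is prime and $\gcd(a, m) = 1$), the inverse can be computed directly as $a^{m-2} \bmod m$ with binary exponentiation:
```cpp
// Sketch: a^(m-2) mod m via binary exponentiation gives a^(-1) mod m for prime m.
int inverse_prime_mod(int a, int m) {
    long long res = 1, base = a % m;
    for (int e = m - 2; e > 0; e >>= 1) {
        if (e & 1)
            res = res * base % m;
        base = base * base % m;
    }
    return (int)res;
}
```
For example, `inverse_prime_mod(3, 7)` returns $5$, and indeed $3 \cdot 5 = 15 \equiv 1 \pmod 7$.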
<div id="finding-the-modular-inverse-using-euclidean-division"></div>
## Finding the modular inverse for prime moduli using Euclidean Division
Given a prime modulus $m > a$ (or we can apply modulo to make it smaller in 1 step), according to [Euclidean Division](https://en.wikipedia.org/wiki/Euclidean_division)
$$m = k \cdot a + r$$
where $k = \left\lfloor \frac{m}{a} \right\rfloor$ and $r = m \bmod a$, then
$$
\begin{align*}
& \implies & 0 & \equiv k \cdot a + r & \mod m \\
& \iff & r & \equiv -k \cdot a & \mod m \\
& \iff & r \cdot a^{-1} & \equiv -k & \mod m \\
& \iff & a^{-1} & \equiv -k \cdot r^{-1} & \mod m
\end{align*}
$$
Note that this reasoning does not hold if $m$ is not prime, since the existence of $a^{-1}$ does not imply the existence of $r^{-1}$
in the general case. To see this, let's try to calculate $5^{-1}$ modulo $12$ with the above formula. We would like to arrive at $5$,
since $5 \cdot 5 \equiv 1 \bmod 12$. However, $12 = 2 \cdot 5 + 2$, and we have $k=2$ and $r=2$, with $2$ being not invertible modulo $12$.
If the modulus is prime however, all $a$ with $0 < a < m$ are invertible modulo $m$, and we can have the following recursive function (in C++) for computing the modular inverse for number $a$ with respect to $m$
```{.cpp file=modular_inverse_euclidean_division}
int inv(int a) {
return a <= 1 ? a : m - (long long)(m/a) * inv(m % a) % m;
}
```
The exact time complexity of this recursion is not known. It is somewhere between $O(\frac{\log m}{\log\log m})$ and $O(m^{\frac{1}{3} - \frac{2}{177} + \epsilon})$.
See [On the length of Pierce expansions](https://arxiv.org/abs/2211.08374).
In practice this implementation is fast, e.g. for the modulus $10^9 + 7$ it will always finish in less than 50 iterations.
<div id="mod-inv-all-num"></div>
Applying this formula, we can also precompute the modular inverse for every number in the range $[1, m-1]$ in $O(m)$.
```{.cpp file=modular_inverse_euclidean_division_all}
inv[1] = 1;
for(int a = 2; a < m; ++a)
inv[a] = m - (long long)(m/a) * inv[m%a] % m;
```
## Finding the modular inverse for array of numbers modulo $m$
Suppose we are given an array and we want to find the modular inverse for all numbers in it (all of them are invertible).
Instead of computing the inverse for every number, we can expand the fraction by the prefix product (excluding itself) and suffix product (excluding itself), and end up only computing a single inverse instead.
$$
\begin{align}
x_i^{-1} &= \frac{1}{x_i} = \frac{\overbrace{x_1 \cdot x_2 \cdots x_{i-1}}^{\text{prefix}_{i-1}} \cdot ~1~ \cdot \overbrace{x_{i+1} \cdot x_{i+2} \cdots x_n}^{\text{suffix}_{i+1}}}{x_1 \cdot x_2 \cdots x_{i-1} \cdot x_i \cdot x_{i+1} \cdot x_{i+2} \cdots x_n} \\
&= \text{prefix}_{i-1} \cdot \text{suffix}_{i+1} \cdot \left(x_1 \cdot x_2 \cdots x_n\right)^{-1}
\end{align}
$$
In the code we can just make a prefix product array (excluding the element itself, starting from the identity element), compute the modular inverse for the product of all numbers, and then multiply it by the prefix product and the suffix product (again excluding the element itself).
The suffix product is computed by iterating from the back to the front.
```cpp
std::vector<int> invs(const std::vector<int> &a, int m) {
int n = a.size();
if (n == 0) return {};
std::vector<int> b(n);
int v = 1;
for (int i = 0; i != n; ++i) {
b[i] = v;
v = static_cast<long long>(v) * a[i] % m;
}
int x, y;
extended_euclidean(v, m, x, y);
x = (x % m + m) % m;
for (int i = n - 1; i >= 0; --i) {
b[i] = static_cast<long long>(x) * b[i] % m;
x = static_cast<long long>(x) * a[i] % m;
}
return b;
}
```
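A small example (a hypothetical driver, assuming `extended_euclidean` and the function above are in scope): modulo $5$, the inverses of $1, 2, 3, 4$ are $1, 3, 2, 4$, and only a single extended Euclidean call is made.
```cpp
#include <cassert>
#include <vector>

int main() {
    std::vector<int> r = invs({1, 2, 3, 4}, 5);
    assert((r == std::vector<int>{1, 3, 2, 4}));
}
```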
## Practice Problems
* [UVa 11904 - One Unit Machine](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3055)
* [Hackerrank - Longest Increasing Subsequence Arrays](https://www.hackerrank.com/contests/world-codesprint-5/challenges/longest-increasing-subsequence-arrays)
* [Codeforces 300C - Beautiful Numbers](http://codeforces.com/problemset/problem/300/C)
* [Codeforces 622F - The Sum of the k-th Powers](http://codeforces.com/problemset/problem/622/F)
* [Codeforces 717A - Festival Organization](http://codeforces.com/problemset/problem/717/A)
* [Codeforces 896D - Nephren Runs a Cinema](http://codeforces.com/problemset/problem/896/D)
---
title
- Original
---
# Bit manipulation
## Binary number
A **binary number** is a number expressed in the base-2 (binary) numeral system, a method of mathematical expression which uses only two symbols: typically "0" (zero) and "1" (one).
We say that a certain bit is **set**, if it is one, and **cleared** if it is zero.
The binary number $(a_k a_{k-1} \dots a_1 a_0)_2$ represents the number:
$$(a_k a_{k-1} \dots a_1 a_0)_2 = a_k \cdot 2^k + a_{k-1} \cdot 2^{k-1} + \dots + a_1 \cdot 2^1 + a_0 \cdot 2^0.$$
For instance the binary number $1101_2$ represents the number $13$:
$$\begin{align}
1101_2 &= 1 \cdot 2^3 + 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 \\
&= 1\cdot 8 + 1 \cdot 4 + 0 \cdot 2 + 1 \cdot 1 = 13
\end{align}$$
Computers represent integers as binary numbers.
Positive integers (both signed and unsigned) are represented directly with their binary digits, while negative values of signed integers are usually represented with the [Two's complement](https://en.wikipedia.org/wiki/Two%27s_complement).
```cpp
unsigned int unsigned_number = 13;
assert(unsigned_number == 0b1101);
int positive_signed_number = 13;
assert(positive_signed_number == 0b1101);
int negative_signed_number = -13;
assert(negative_signed_number == 0b1111'1111'1111'1111'1111'1111'1111'0011);
```
CPUs can manipulate these bits very quickly with specific operations.
For some problems we can use these binary representations to our advantage and speed up the execution time.
And for some problems (typically in combinatorics or dynamic programming) where we want to track which objects we have already picked from a given set, we can just use a large enough integer where each binary digit represents an object, and we set or clear that digit depending on whether we pick or drop the object.
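For instance, using the operators introduced in the next section, maintaining such a mask could look like the following sketch (the variable names are only illustrative):
```cpp
int mask = 0;                     // empty set: no object picked yet
mask |= (1 << 2);                 // pick object 2
mask |= (1 << 4);                 // pick object 4
mask &= ~(1 << 2);                // drop object 2 again
bool picked4 = mask & (1 << 4);   // check whether object 4 is currently picked (true)
```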
## Bit operators
All of the operators introduced below are essentially instant (same speed as an addition) on a CPU for fixed-length integers.
### Bitwise operators
- $\&$ : The bitwise AND operator compares each bit of its first operand with the corresponding bit of its second operand.
If both bits are 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
- $|$ : The bitwise inclusive OR operator compares each bit of its first operand with the corresponding bit of its second operand.
If one of the two bits is 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
- $\wedge$ : The bitwise exclusive OR (XOR) operator compares each bit of its first operand with the corresponding bit of its second operand.
If one bit is 0 and the other bit is 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
- $\sim$ : The bitwise complement (NOT) operator flips each bit of a number: if a bit is set the operator clears it, and if it is cleared the operator sets it.
Examples:
```
n = 01011000
n-1 = 01010111
--------------------
n & (n-1) = 01010000
```
```
n = 01011000
n-1 = 01010111
--------------------
n | (n-1) = 01011111
```
```
n = 01011000
n-1 = 01010111
--------------------
n ^ (n-1) = 00001111
```
```
n = 01011000
--------------------
~n = 10100111
```
### Shift operators
There are two operators for shifting bits.
- $\gg$ Shifts a number to the right by removing the last few binary digits of the number.
Each shift by one represents an integer division by 2, so a right shift by $k$ represents an integer division by $2^k$.
E.g. $5 \gg 2 = 101_2 \gg 2 = 1_2 = 1$ which is the same as $\frac{5}{2^2} = \frac{5}{4} = 1$.
For a computer though shifting some bits is a lot faster than doing divisions.
- $\ll$ Shifts a number to the left by appending zero digits.
In a similar fashion to a right shift by $k$, a left shift by $k$ represents a multiplication by $2^k$.
E.g. $5 \ll 3 = 101_2 \ll 3 = 101000_2 = 40$ which is the same as $5 \cdot 2^3 = 5 \cdot 8 = 40$.
Notice however that for a fixed-length integer this means dropping the leftmost digits, and if you shift too much you end up with the number $0$.
## Useful tricks
### Set/flip/clear a bit
Using bitwise shifts and some basic bitwise operations we can easily set, flip or clear a bit.
$1 \ll x$ is a number with only the $x$-th bit set, while $\sim(1 \ll x)$ is a number with all bits set except the $x$-th bit.
- $n ~|~ (1 \ll x)$ sets the $x$-th bit in the number $n$
- $n ~\wedge~ (1 \ll x)$ flips the $x$-th bit in the number $n$
- $n ~\&~ \sim(1 \ll x)$ clears the $x$-th bit in the number $n$
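These three operations translate directly into C++; a small sketch (the function names are only for illustration):
```cpp
unsigned int set_bit(unsigned int n, int x)   { return n | (1u << x); }
unsigned int flip_bit(unsigned int n, int x)  { return n ^ (1u << x); }
unsigned int clear_bit(unsigned int n, int x) { return n & ~(1u << x); }
```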
### Check if a bit is set
The value of the $x$-th bit can be checked by shifting the number $x$ positions to the right, so that the $x$-th bit ends up in the units place, after which we can extract it by performing a bitwise AND with 1.
``` cpp
bool is_set(unsigned int number, int x) {
return (number >> x) & 1;
}
```
### Check if an integer is a power of 2
A power of two is a number that has only a single bit set (e.g. $32 = 0010~0000_2$), while the predecessor of that number has that digit not set and all the digits after it set ($31 = 0001~1111_2$).
So the bitwise AND of a number with its predecessor will always be 0, as they don't have any common digits set.
You can easily check that this only happens for powers of two and for the number $0$, which already has no digit set.
``` cpp
bool isPowerOfTwo(unsigned int n) {
return n && !(n & (n - 1));
}
```
### Clear the rightmost set bit
The expression $n ~\&~ (n-1)$ can be used to turn off the rightmost set bit of a number $n$.
This works because the expression $n-1$ flips all bits after the rightmost set bit of $n$, including the rightmost set bit.
So all those digits differ from the original number, and by doing a bitwise AND they are all set to 0, giving you the original number $n$ with the rightmost set bit cleared.
For example, consider the number $52 = 0011~0100_2$:
```
n = 00110100
n-1 = 00110011
--------------------
n & (n-1) = 00110000
```
### Brian Kernighan's algorithm
We can count the number of bits set with the above expression.
The idea is to consider only the set bits of an integer by turning off its rightmost set bit (after counting it), so the next iteration of the loop considers the next rightmost set bit.
``` cpp
int countSetBits(int n)
{
int count = 0;
while (n)
{
n = n & (n - 1);
count++;
}
return count;
}
```
### Additional tricks
- $n ~\&~ (n + 1)$ clears all trailing ones: $0011~0111_2 \rightarrow 0011~0000_2$.
- $n ~|~ (n + 1)$ sets the last cleared bit: $0011~0101_2 \rightarrow 0011~0111_2$.
- $n ~\&~ -n$ extracts the last set bit: $0011~0100_2 \rightarrow 0000~0100_2$.
Many more can be found in the book [Hacker's Delight](https://en.wikipedia.org/wiki/Hacker%27s_Delight).
### Language and compiler support
C++ supports some of those operations since C++20 via the [bit](https://en.cppreference.com/w/cpp/header/bit) standard library:
- `has_single_bit`: checks if the number is a power of two
- `bit_ceil` / `bit_floor`: round up/down to the next power of two
- `rotl` / `rotr`: rotate the bits in the number
- `countl_zero` / `countr_zero` / `countl_one` / `countr_one`: count the leading/trailing zeros/ones
- `popcount`: count the number of set bits
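For example, a few of these C++20 functions in action (a small illustrative sketch):
```cpp
#include <bit>
#include <cassert>

int main() {
    unsigned int n = 0b0001'0010'1100;
    assert(std::popcount(n) == 4);                    // four bits are set
    assert(!std::has_single_bit(n));                  // n is not a power of two
    assert(std::bit_floor(n) == 0b0001'0000'0000u);   // largest power of two <= n
    assert(std::countr_zero(n) == 2);                 // two trailing zeros
}
```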
Additionally, there are also predefined functions in some compilers that help working with bits.
E.g. GCC defines a list at [Built-in Functions Provided by GCC](https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html) that also work in older versions of C++:
- `__builtin_popcount(unsigned int)` returns the number of set bits (`__builtin_popcount(0b0001'0010'1100) == 4`)
- `__builtin_ffs(int)` returns the 1-based index of the first (rightmost) set bit (`__builtin_ffs(0b0001'0010'1100) == 3`)
- `__builtin_clz(unsigned int)` returns the count of leading zeros (`__builtin_clz(0b0001'0010'1100) == 23`)
- `__builtin_ctz(unsigned int)` returns the count of trailing zeros (`__builtin_ctz(0b0001'0010'1100) == 2`)
_Note that some of the operations (both the C++20 functions and the Compiler Built-in ones) might be quite slow in GCC if you don't enable a specific compiler target with `#pragma GCC target("popcnt")`._
## Practice Problems
* [Codeforces - Raising Bacteria](https://codeforces.com/problemset/problem/579/A)
* [Codeforces - Fedor and New Game](https://codeforces.com/problemset/problem/467/B)
* [Codeforces - And Then There Were K](https://codeforces.com/problemset/problem/1527/A)
---
title
fibonacci_numbers
---
# Fibonacci Numbers
The Fibonacci sequence is defined as follows:
$$F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}$$
The first elements of the sequence ([OEIS A000045](http://oeis.org/A000045)) are:
$$0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...$$
## Properties
Fibonacci numbers possess a lot of interesting properties. Here are a few of them:
* Cassini's identity:
$$F_{n-1} F_{n+1} - F_n^2 = (-1)^n$$
* The "addition" rule:
$$F_{n+k} = F_k F_{n+1} + F_{k-1} F_n$$
* Applying the previous identity to the case $k = n$, we get:
$$F_{2n} = F_n (F_{n+1} + F_{n-1})$$
* From this we can prove by induction that for any positive integer $k$, $F_{nk}$ is a multiple of $F_n$.
* The converse is also true: if $F_m$ is a multiple of $F_n$, then $m$ is a multiple of $n$.
* GCD identity:
$$GCD(F_m, F_n) = F_{GCD(m, n)}$$
* Fibonacci numbers are the worst possible inputs for the Euclidean algorithm (see Lamé's theorem in [Euclidean algorithm](euclid-algorithm.md))
## Fibonacci Coding
We can use the sequence to encode positive integers into binary code words. According to Zeckendorf's theorem, any natural number $N$ can be uniquely represented as a sum of Fibonacci numbers:
$$N = F_{k_1} + F_{k_2} + \ldots + F_{k_r}$$
such that $k_1 \ge k_2 + 2,\ k_2 \ge k_3 + 2,\ \ldots,\ k_r \ge 2$ (i.e.: the representation cannot use two consecutive Fibonacci numbers).
It follows that any number can be uniquely encoded in the Fibonacci coding.
We can describe this representation with binary codes $d_0 d_1 d_2 \dots d_s 1$, where $d_i$ is $1$ if $F_{i+2}$ is used in the representation.
A final $1$ is appended to indicate the end of the code word.
Notice that this is the only occurrence where two consecutive 1-bits appear.
$$\begin{eqnarray}
1 &=& 1 &=& F_2 &=& (11)_F \\
2 &=& 2 &=& F_3 &=& (011)_F \\
6 &=& 5 + 1 &=& F_5 + F_2 &=& (10011)_F \\
8 &=& 8 &=& F_6 &=& (000011)_F \\
9 &=& 8 + 1 &=& F_6 + F_2 &=& (100011)_F \\
19 &=& 13 + 5 + 1 &=& F_7 + F_5 + F_2 &=& (1001011)_F
\end{eqnarray}$$
The encoding of an integer $n$ can be done with a simple greedy algorithm:
1. Iterate through the Fibonacci numbers from the largest to the smallest until you find one less than or equal to $n$.
2. Suppose this number was $F_i$. Subtract $F_i$ from $n$ and put a $1$ in the $i-2$ position of the code word (indexing from 0 from the leftmost to the rightmost bit).
3. Repeat until there is no remainder.
4. Add a final $1$ to the codeword to indicate its end.
To decode a code word, first remove the final $1$. Then, if the $i$-th bit is set (indexing from 0 from the leftmost to the rightmost bit), sum $F_{i+2}$ to the number.
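A short sketch of the greedy encoding described above (the helper name `fib_encode` is only for illustration); e.g. `fib_encode(19)` yields `"1001011"`:
```cpp
#include <string>
#include <vector>

std::string fib_encode(int n) {
    // collect all Fibonacci numbers F_2, F_3, ... that are <= n
    std::vector<int> fib = {1};
    for (int a = 1, b = 2; b <= n; ) {
        fib.push_back(b);
        int next = a + b;
        a = b;
        b = next;
    }
    std::string code(fib.size(), '0');
    for (int i = fib.size() - 1; i >= 0; i--) {
        if (fib[i] <= n) {    // greedily take the largest Fibonacci number that still fits
            code[i] = '1';
            n -= fib[i];
        }
    }
    return code + '1';        // the final 1 marks the end of the code word
}
```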
## Formulas for the $n^{\text{th}}$ Fibonacci number { data-toc-label="Formulas for the <script type='math/tex'>n</script>-th Fibonacci number" }
### Closed-form expression
There is a formula known as "Binet's formula", even though it was already known to de Moivre:
$$F_n = \frac{\left(\frac{1 + \sqrt{5}}{2}\right)^n - \left(\frac{1 - \sqrt{5}}{2}\right)^n}{\sqrt{5}}$$
This formula is easy to prove by induction, but it can be deduced with the help of the concept of generating functions or by solving a functional equation.
You can immediately notice that the second term's absolute value is always less than $1$, and it also decreases very rapidly (exponentially). Hence the value of the first term alone is "almost" $F_n$. This can be written strictly as:
$$F_n = \left[\frac{\left(\frac{1 + \sqrt{5}}{2}\right)^n}{\sqrt{5}}\right]$$
where the square brackets denote rounding to the nearest integer.
As these two formulas would require very high accuracy when working with fractional numbers, they are of little use in practical calculations.
### Fibonacci in linear time
The $n$-th Fibonacci number can be easily found in $O(n)$ by computing the numbers one by one up to $n$. However, there are also faster ways, as we will see.
We start with an iterative approach that takes advantage of the formula $F_n = F_{n-1} + F_{n-2}$: we simply compute the values one after the other, keeping track of the last two of them (the base cases being $F_0 = 0$ and $F_1 = 1$).
```{.cpp file=fibonacci_linear}
int fib(int n) {
int a = 0;
int b = 1;
for (int i = 0; i < n; i++) {
int tmp = a + b;
a = b;
b = tmp;
}
return a;
}
```
In this way, we obtain a linear solution, $O(n)$ time, computing all the values of the sequence prior to $F_n$ along the way.
### Matrix form
It is easy to prove the following relation:
$$\begin{pmatrix} 1 & 1 \cr 1 & 0 \cr\end{pmatrix} ^ n = \begin{pmatrix} F_{n+1} & F_{n} \cr F_{n} & F_{n-1} \cr\end{pmatrix}$$
Thus, in order to find $F_n$ in $O(\log n)$ time, we must raise the matrix to the $n$-th power (see [Binary exponentiation](binary-exp.md)).
```{.cpp file=fibonacci_matrix}
struct matrix {
long long mat[2][2];
    friend matrix operator*(const matrix &a, const matrix &b) {
matrix c;
for (int i = 0; i < 2; i++) {
for (int j = 0; j < 2; j++) {
c.mat[i][j] = 0;
for (int k = 0; k < 2; k++) {
c.mat[i][j] += a.mat[i][k] * b.mat[k][j];
}
}
}
return c;
}
};
matrix matpow(matrix base, long long n) {
matrix ans{ {
{1, 0},
{0, 1}
} };
while (n) {
if(n&1)
ans = ans*base;
base = base*base;
n >>= 1;
}
return ans;
}
long long fib(int n) {
matrix base{ {
{1, 1},
{1, 0}
} };
return matpow(base, n).mat[0][1];
}
```
### Fast Doubling Method
By expanding the above matrix expression for $n = 2\cdot k$,
$$
\begin{pmatrix}
F_{2k+1} & F_{2k}\\
F_{2k} & F_{2k-1}
\end{pmatrix}
=
\begin{pmatrix}
1 & 1\\
1 & 0
\end{pmatrix}^{2k}
=
\begin{pmatrix}
F_{k+1} & F_{k}\\
F_{k} & F_{k-1}
\end{pmatrix}
^2
$$
we can find these simpler equations:
$$ \begin{align}
F_{2k+1} &= F_{k+1}^2 + F_{k}^2 \\
F_{2k} &= F_k(F_{k+1}+F_{k-1}) = F_k (2F_{k+1} - F_{k})\\
\end{align}.$$
Thus, using the above two equations, Fibonacci numbers can be calculated easily with the following code:
```{.cpp file=fibonacci_doubling}
pair<int, int> fib (int n) {
if (n == 0)
return {0, 1};
auto p = fib(n >> 1);
int c = p.first * (2 * p.second - p.first);
int d = p.first * p.first + p.second * p.second;
if (n & 1)
return {d, c + d};
else
return {c, d};
}
```
The above code returns $F_n$ and $F_{n+1}$ as a pair.
## Periodicity modulo p
Consider the Fibonacci sequence modulo $p$. We will prove the sequence is periodic.
Let us prove this by contradiction. Consider the first $p^2 + 1$ pairs of Fibonacci numbers taken modulo $p$:
$$(F_0,\ F_1),\ (F_1,\ F_2),\ \ldots,\ (F_{p^2},\ F_{p^2 + 1})$$
There can only be $p$ different remainders modulo $p$, and at most $p^2$ different pairs of remainders, so there are at least two identical pairs among them. This is sufficient to prove the sequence is periodic, as a Fibonacci number is determined solely by its two predecessors. Hence if two pairs of consecutive numbers repeat, the numbers after the pair will also repeat in the same fashion.
We now choose two pairs of identical remainders with the smallest indices in the sequence. Let the pairs be $(F_a,\ F_{a + 1})$ and $(F_b,\ F_{b + 1})$. We will prove that $a = 0$. If this were false, there would be two previous pairs $(F_{a-1},\ F_a)$ and $(F_{b-1},\ F_b)$, which, by the property of Fibonacci numbers, would also be equal. However, this contradicts the fact that we had chosen pairs with the smallest indices, completing our proof that there is no pre-period (i.e. the numbers are periodic starting from $F_0$).
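For small moduli, the length of this period (the so-called Pisano period) can also be found directly; a brute-force sketch (the function name is only for illustration):
```cpp
// Find the period of the Fibonacci sequence modulo p (assumes p >= 2)
// by searching for the next occurrence of the starting pair (0, 1).
int fib_period(int p) {
    int a = 0, b = 1, length = 0;
    do {
        int next = (a + b) % p;
        a = b;
        b = next;
        length++;
    } while (a != 0 || b != 1);
    return length;
}
```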
## Practice Problems
* [SPOJ - Euclid Algorithm Revisited](http://www.spoj.com/problems/MAIN74/)
* [SPOJ - Fibonacci Sum](http://www.spoj.com/problems/FIBOSUM/)
* [HackerRank - Is Fibo](https://www.hackerrank.com/challenges/is-fibo/problem)
* [Project Euler - Even Fibonacci numbers](https://www.hackerrank.com/contests/projecteuler/challenges/euler002/problem)
* [DMOJ - Fibonacci Sequence](https://dmoj.ca/problem/fibonacci)
* [DMOJ - Fibonacci Sequence (Harder)](https://dmoj.ca/problem/fibonacci2)
* [DMOJ UCLV - Numbered sequence of pencils](https://dmoj.uclv.edu.cu/problem/secnum)
* [DMOJ UCLV - Fibonacci 2D](https://dmoj.uclv.edu.cu/problem/fibonacci)
* [DMOJ UCLV - fibonacci calculation](https://dmoj.uclv.edu.cu/problem/fibonaccicalculatio)
* [LightOJ - Number Sequence](https://lightoj.com/problem/number-sequence)
* [Codeforces - C. Fibonacci](https://codeforces.com/problemset/gymProblem/102644/C)
* [Codeforces - A. Hexadecimal's theorem](https://codeforces.com/problemset/problem/199/A)
* [Codeforces - B. Blackboard Fibonacci](https://codeforces.com/problemset/problem/217/B)
* [Codeforces - E. Fibonacci Number](https://codeforces.com/problemset/problem/193/E)
---
title
euler_function
---
# Euler's totient function
Euler's totient function, also known as **phi-function** $\phi (n)$, counts the number of integers between 1 and $n$ inclusive, which are coprime to $n$. Two numbers are coprime if their greatest common divisor equals $1$ ($1$ is considered to be coprime to any number).
Here are values of $\phi(n)$ for the first few positive integers:
$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 \\\\ \hline
\phi(n) & 1 & 1 & 2 & 2 & 4 & 2 & 6 & 4 & 6 & 4 & 10 & 4 & 12 & 6 & 8 & 8 & 16 & 6 & 18 & 8 & 12 \\\\ \hline
\end{array}$$
## Properties
The following properties of the Euler totient function are sufficient to calculate it for any number:
- If $p$ is a prime number, then $\gcd(p, q) = 1$ for all $1 \le q < p$. Therefore we have:
$$\phi (p) = p - 1.$$
- If $p$ is a prime number and $k \ge 1$, then there are exactly $p^k / p$ numbers between $1$ and $p^k$ that are divisible by $p$.
Which gives us:
$$\phi(p^k) = p^k - p^{k-1}.$$
- If $a$ and $b$ are relatively prime, then:
\[\phi(a b) = \phi(a) \cdot \phi(b).\]
This relation is not trivial to see. It follows from the [Chinese remainder theorem](chinese-remainder-theorem.md). The Chinese remainder theorem guarantees that for each $0 \le x < a$ and each $0 \le y < b$, there exists a unique $0 \le z < a b$ with $z \equiv x \pmod{a}$ and $z \equiv y \pmod{b}$. It's not hard to show that $z$ is coprime to $a b$ if and only if $x$ is coprime to $a$ and $y$ is coprime to $b$. Therefore the amount of integers coprime to $a b$ is equal to the product of the corresponding amounts for $a$ and $b$.
- In general, for not coprime $a$ and $b$, the equation
\[\phi(ab) = \phi(a) \cdot \phi(b) \cdot \dfrac{d}{\phi(d)}\]
with $d = \gcd(a, b)$ holds.
Thus, using the first three properties, we can compute $\phi(n)$ through the factorization of $n$ (decomposition of $n$ into a product of its prime factors).
If $n = {p_1}^{a_1} \cdot {p_2}^{a_2} \cdots {p_k}^{a_k}$, where $p_i$ are prime factors of $n$,
$$\begin{align}
\phi (n) &= \phi ({p_1}^{a_1}) \cdot \phi ({p_2}^{a_2}) \cdots \phi ({p_k}^{a_k}) \\\\
&= \left({p_1}^{a_1} - {p_1}^{a_1 - 1}\right) \cdot \left({p_2}^{a_2} - {p_2}^{a_2 - 1}\right) \cdots \left({p_k}^{a_k} - {p_k}^{a_k - 1}\right) \\\\
&= p_1^{a_1} \cdot \left(1 - \frac{1}{p_1}\right) \cdot p_2^{a_2} \cdot \left(1 - \frac{1}{p_2}\right) \cdots p_k^{a_k} \cdot \left(1 - \frac{1}{p_k}\right) \\\\
&= n \cdot \left(1 - \frac{1}{p_1}\right) \cdot \left(1 - \frac{1}{p_2}\right) \cdots \left(1 - \frac{1}{p_k}\right)
\end{align}$$
## Implementation
Here is an implementation using factorization in $O(\sqrt{n})$:
```cpp
int phi(int n) {
int result = n;
for (int i = 2; i * i <= n; i++) {
if (n % i == 0) {
while (n % i == 0)
n /= i;
result -= result / i;
}
}
if (n > 1)
result -= result / n;
return result;
}
```
## Euler totient function from $1$ to $n$ in $O(n \log\log{n})$ { #etf_1_to_n data-toc-label="Euler totient function from 1 to n in <script type=\"math/tex\">O(n log log n)</script>" }
If we need the totient of all numbers between $1$ and $n$, then factorizing all $n$ numbers is not efficient.
We can use the same idea as the [Sieve of Eratosthenes](sieve-of-eratosthenes.md).
It is still based on the property shown above, but instead of updating the temporary result for each prime factor for each number, we find all prime numbers and for each one update the temporary results of all numbers that are divisible by that prime number.
Since this approach is basically identical to the Sieve of Eratosthenes, the complexity will also be the same: $O(n \log \log n)$
```cpp
void phi_1_to_n(int n) {
vector<int> phi(n + 1);
for (int i = 0; i <= n; i++)
phi[i] = i;
for (int i = 2; i <= n; i++) {
if (phi[i] == i) {
for (int j = i; j <= n; j += i)
phi[j] -= phi[j] / i;
}
}
}
```
## Divisor sum property { #divsum}
This interesting property was established by Gauss:
$$ \sum_{d|n} \phi{(d)} = n$$
Here the sum is over all positive divisors $d$ of $n$.
For instance the divisors of 10 are 1, 2, 5 and 10.
Hence $\phi{(1)} + \phi{(2)} + \phi{(5)} + \phi{(10)} = 1 + 1 + 4 + 4 = 10$.
### Finding the totient from 1 to $n$ using the divisor sum property { data-toc-label="Finding the totient from 1 to n using the divisor sum property" }
The divisor sum property also allows us to compute the totient of all numbers between 1 and $n$.
This implementation is a little simpler than the previous implementation based on the Sieve of Eratosthenes, but it also has a slightly worse complexity: $O(n \log n)$
```cpp
void phi_1_to_n(int n) {
vector<int> phi(n + 1);
phi[0] = 0;
phi[1] = 1;
for (int i = 2; i <= n; i++)
phi[i] = i - 1;
for (int i = 2; i <= n; i++)
for (int j = 2 * i; j <= n; j += i)
phi[j] -= phi[i];
}
```
## Application in Euler's theorem { #application }
The most famous and important property of Euler's totient function is expressed in **Euler's theorem**:
$$a^{\phi(m)} \equiv 1 \pmod m \quad \text{if } a \text{ and } m \text{ are relatively prime.}$$
In the particular case when $m$ is prime, Euler's theorem turns into **Fermat's little theorem**:
$$a^{m - 1} \equiv 1 \pmod m$$
Euler's theorem and Euler's totient function occur quite often in practical applications, for example both are used to compute the [modular multiplicative inverse](module-inverse.md).
As an immediate consequence we also get the equivalence:
$$a^n \equiv a^{n \bmod \phi(m)} \pmod m$$
This allows computing $x^n \bmod m$ for very big $n$, especially if $n$ is the result of another computation, since it allows $n$ to be computed modulo $\phi(m)$.
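For example, to compute $a^{b^c} \bmod m$ for $a$ coprime to $m$, the huge exponent $b^c$ can first be reduced modulo $\phi(m)$. A sketch, reusing the `phi` implementation from above; `binpow` is a standard binary exponentiation helper (see [Binary exponentiation](binary-exp.md)) and the function names are only illustrative:
```cpp
// binpow(a, b, m) computes a^b mod m by binary exponentiation.
long long binpow(long long a, long long b, long long m) {
    a %= m;
    long long res = 1;
    while (b > 0) {
        if (b & 1)
            res = res * a % m;
        a = a * a % m;
        b >>= 1;
    }
    return res;
}

// a^(b^c) mod m for gcd(a, m) = 1: reduce the exponent b^c modulo phi(m) first.
long long pow_tower(long long a, long long b, long long c, int m) {
    long long e = binpow(b, c, phi(m));
    return binpow(a, e, m);
}
```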
## Generalization
There is a less known version of the last equivalence, that allows computing $x^n \bmod m$ efficiently for not coprime $x$ and $m$.
For arbitrary $x, m$ and $n \geq \log_2 m$:
$$x^{n}\equiv x^{\phi(m)+[n \bmod \phi(m)]} \mod m$$
Proof:
Let $p_1, \dots, p_t$ be common prime divisors of $x$ and $m$, and $k_i$ their exponents in $m$.
With those we define $a = p_1^{k_1} \dots p_t^{k_t}$, which makes $\frac{m}{a}$ coprime to $x$.
And let $k$ be the smallest number such that $a$ divides $x^k$.
Assuming $n \ge k$, we can write:
$$\begin{align}x^n \bmod m &= \frac{x^k}{a}ax^{n-k}\bmod m \\
&= \frac{x^k}{a}\left(ax^{n-k}\bmod m\right) \bmod m \\
&= \frac{x^k}{a}\left(ax^{n-k}\bmod a \frac{m}{a}\right) \bmod m \\
&=\frac{x^k}{a} a \left(x^{n-k} \bmod \frac{m}{a}\right)\bmod m \\
&= x^k\left(x^{n-k} \bmod \frac{m}{a}\right)\bmod m
\end{align}$$
The equivalence between the third and fourth line follows from the fact that $ab \bmod ac = a(b \bmod c)$.
Indeed if $b = cd + r$ with $r < c$, then $ab = acd + ar$ with $ar < ac$.
Since $x$ and $\frac{m}{a}$ are coprime, we can apply Euler's theorem and get the efficient (since $k$ is very small; in fact $k \le \log_2 m$) formula:
$$x^n \bmod m = x^k\left(x^{n-k \bmod \phi(\frac{m}{a})} \bmod \frac{m}{a}\right)\bmod m.$$
This formula is difficult to apply, but we can use it to analyze the behavior of $x^n \bmod m$. We can see that the sequence of powers $(x^1 \bmod m, x^2 \bmod m, x^3 \bmod m, \dots)$ enters a cycle of length $\phi\left(\frac{m}{a}\right)$ after the first $k$ (or less) elements.
$\phi\left(\frac{m}{a}\right)$ divides $\phi(m)$ (because $a$ and $\frac{m}{a}$ are coprime we have $\phi(a) \cdot \phi\left(\frac{m}{a}\right) = \phi(m)$), therefore we can also say that the period has length $\phi(m)$.
And since $\phi(m) \ge \log_2 m \ge k$, we can conclude the desired, much simpler, formula:
$$ x^n \equiv x^{\phi(m)} x^{(n - \phi(m)) \bmod \phi(m)} \bmod m \equiv x^{\phi(m)+[n \bmod \phi(m)]} \mod m.$$
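A sketch of this generalized formula, reusing `phi` and the `binpow` helper from the previous snippet (small exponents are handled directly, since the formula requires $n \geq \log_2 m$):
```cpp
// x^n mod m for arbitrary x and m, using x^n = x^(phi(m) + (n mod phi(m))) (mod m),
// which holds whenever n >= log2(m).
long long pow_general(long long x, long long n, int m) {
    if (n < 32)                  // for m fitting in an int, log2(m) < 31, so only such small n need care
        return binpow(x, n, m);
    int p = phi(m);
    return binpow(x, p + n % p, m);
}
```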
## Practice Problems
* [SPOJ #4141 "Euler Totient Function" [Difficulty: CakeWalk]](http://www.spoj.com/problems/ETF/)
* [UVA #10179 "Irreducible Basic Fractions" [Difficulty: Easy]](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1120)
* [UVA #10299 "Relatives" [Difficulty: Easy]](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1240)
* [UVA #11327 "Enumerating Rational Numbers" [Difficulty: Medium]](http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2302)
* [TIMUS #1673 "Admission to Exam" [Difficulty: High]](http://acm.timus.ru/problem.aspx?space=1&num=1673)
* [UVA 10990 - Another New Function](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1931)
* [Codechef - Golu and Sweetness](https://www.codechef.com/problems/COZIE)
* [SPOJ - LCM Sum](http://www.spoj.com/problems/LCMSUM/)
* [GYM - Simple Calculations (F)](http://codeforces.com/gym/100975)
* [UVA 13132 - Laser Mirrors](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=5043)
* [SPOJ - GCDEX](http://www.spoj.com/problems/GCDEX/)
* [UVA 12995 - Farey Sequence](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=4878)
* [SPOJ - Totient in Permutation (easy)](http://www.spoj.com/problems/TIP1/)
* [LOJ - Mathematically Hard](http://lightoj.com/volume_showproblem.php?problem=1007)
* [SPOJ - Totient Extreme](http://www.spoj.com/problems/DCEPCA03/)
* [SPOJ - Playing with GCD](http://www.spoj.com/problems/NAJPWG/)
* [SPOJ - G Force](http://www.spoj.com/problems/DCEPC12G/)
* [SPOJ - Smallest Inverse Euler Totient Function](http://www.spoj.com/problems/INVPHI/)
* [Codeforces - Power Tower](http://codeforces.com/problemset/problem/906/D)
* [Kattis - Exponial](https://open.kattis.com/problems/exponial)
* [LeetCode - 372. Super Pow](https://leetcode.com/problems/super-pow/)
* [Codeforces - The Holmes Children](http://codeforces.com/problemset/problem/776/E)
---
title
- Original
---
# Number of divisors / sum of divisors
In this article we discuss how to compute the number of divisors $d(n)$ and the sum of divisors $\sigma(n)$ of a given number $n$.
## Number of divisors
It should be obvious that the prime factorization of a divisor $d$ has to be a subset of the prime factorization of $n$, e.g. $6 = 2 \cdot 3$ is a divisor of $60 = 2^2 \cdot 3 \cdot 5$.
So we only need to find all different subsets of the prime factorization of $n$.
Usually the number of subsets is $2^x$ for a set with $x$ elements.
However, this is no longer true if there are repeated elements in the set. In our case some prime factors may appear multiple times in the prime factorization of $n$.
If a prime factor $p$ appears $e$ times in the prime factorization of $n$, then we can use the factor $p$ up to $e$ times in the subset.
Which means we have $e+1$ choices.
Therefore if the prime factorization of $n$ is $p_1^{e_1} \cdot p_2^{e_2} \cdots p_k^{e_k}$, where $p_i$ are distinct prime numbers, then the number of divisors is:
$$d(n) = (e_1 + 1) \cdot (e_2 + 1) \cdots (e_k + 1)$$
A way of thinking about it is the following:
* If there is only one distinct prime divisor $n = p_1^{e_1}$, then there are obviously $e_1 + 1$ divisors ($1, p_1, p_1^2, \dots, p_1^{e_1}$).
* If there are two distinct prime divisors $n = p_1^{e_1} \cdot p_2^{e_2}$, then you can arrange all divisors in the form of a table.
$$\begin{array}{c|ccccc}
& 1 & p_2 & p_2^2 & \dots & p_2^{e_2} \\\\\hline
1 & 1 & p_2 & p_2^2 & \dots & p_2^{e_2} \\\\
p_1 & p_1 & p_1 \cdot p_2 & p_1 \cdot p_2^2 & \dots & p_1 \cdot p_2^{e_2} \\\\
p_1^2 & p_1^2 & p_1^2 \cdot p_2 & p_1^2 \cdot p_2^2 & \dots & p_1^2 \cdot p_2^{e_2} \\\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\\\
p_1^{e_1} & p_1^{e_1} & p_1^{e_1} \cdot p_2 & p_1^{e_1} \cdot p_2^2 & \dots & p_1^{e_1} \cdot p_2^{e_2} \\\\
\end{array}$$
So the number of divisors is trivially $(e_1 + 1) \cdot (e_2 + 1)$.
* A similar argument can be made if there are more than two distinct prime factors.
```cpp
long long numberOfDivisors(long long num) {
    long long total = 1;
    for (int i = 2; (long long)i * i <= num; i++) {
        if (num % i == 0) {
            int e = 0;
            do {
                e++;
                num /= i;
            } while (num % i == 0);
            total *= e + 1;
        }
    }
    if (num > 1) {
        total *= 2;
    }
    return total;
}
```
## Sum of divisors
We can use the same argument as in the previous section.
* If there is only one distinct prime divisor $n = p_1^{e_1}$, then the sum is:
$$1 + p_1 + p_1^2 + \dots + p_1^{e_1} = \frac{p_1^{e_1 + 1} - 1}{p_1 - 1}$$
* If there are two distinct prime divisors $n = p_1^{e_1} \cdot p_2^{e_2}$, then we can make the same table as before.
The only difference is that now we want to compute the sum instead of counting the elements.
It is easy to see that the sum of each combination can be expressed as:
$$\left(1 + p_1 + p_1^2 + \dots + p_1^{e_1}\right) \cdot \left(1 + p_2 + p_2^2 + \dots + p_2^{e_2}\right)$$
$$ = \frac{p_1^{e_1 + 1} - 1}{p_1 - 1} \cdot \frac{p_2^{e_2 + 1} - 1}{p_2 - 1}$$
* In general, for $n = p_1^{e_1} \cdot p_2^{e_2} \cdots p_k^{e_k}$ we get the formula:
$$\sigma(n) = \frac{p_1^{e_1 + 1} - 1}{p_1 - 1} \cdot \frac{p_2^{e_2 + 1} - 1}{p_2 - 1} \cdots \frac{p_k^{e_k + 1} - 1}{p_k - 1}$$
```cpp
long long SumOfDivisors(long long num) {
    long long total = 1;
    for (int i = 2; (long long)i * i <= num; i++) {
        if (num % i == 0) {
            int e = 0;
            do {
                e++;
                num /= i;
            } while (num % i == 0);
            long long sum = 0, pow = 1;
            do {
                sum += pow;
                pow *= i;
            } while (e-- > 0);
            total *= sum;
        }
    }
    if (num > 1) {
        total *= (1 + num);
    }
    return total;
}
```
## Multiplicative functions
A multiplicative function is a function $f(x)$ which satisfies
$$f(a \cdot b) = f(a) \cdot f(b)$$
if $a$ and $b$ are coprime.
Both $d(n)$ and $\sigma(n)$ are multiplicative functions.
Multiplicative functions have a huge variety of interesting properties, which can be very useful in number theory problems.
For instance the Dirichlet convolution of two multiplicative functions is also multiplicative.
## Practice Problems
- [SPOJ - COMDIV](https://www.spoj.com/problems/COMDIV/)
- [SPOJ - DIVSUM](https://www.spoj.com/problems/DIVSUM/)
- [SPOJ - DIVSUM2](https://www.spoj.com/problems/DIVSUM2/)
|
Number of divisors / sum of divisors
|
---
title
chinese_theorem
---
# Chinese Remainder Theorem
The Chinese Remainder Theorem (which will be referred to as CRT in the rest of this article) was discovered by Chinese mathematician Sun Zi.
## Formulation
Let $m = m_1 \cdot m_2 \cdots m_k$, where $m_i$ are pairwise coprime. In addition to $m_i$, we are also given a system of congruences
$$\left\{\begin{array}{rcl}
a & \equiv & a_1 \pmod{m_1} \\
a & \equiv & a_2 \pmod{m_2} \\
& \vdots & \\
a & \equiv & a_k \pmod{m_k}
\end{array}\right.$$
where $a_i$ are some given constants. The original form of CRT then states that the given system of congruences always has *exactly one* solution modulo $m$.
E.g. the system of congruences
$$\left\{\begin{array}{rcl}
a & \equiv & 2 \pmod{3} \\
a & \equiv & 3 \pmod{5} \\
a & \equiv & 2 \pmod{7}
\end{array}\right.$$
has the solution $23$ modulo $105$, because $23 \bmod{3} = 2$, $23 \bmod{5} = 3$, and $23 \bmod{7} = 2$.
We can write down every solution as $23 + 105\cdot k$ for $k \in \mathbb{Z}$.
### Corollary
A consequence of the CRT is that the equation
$$x \equiv a \pmod{m}$$
is equivalent to the system of equations
$$\left\{\begin{array}{rcl}
x & \equiv & a_1 \pmod{m_1} \\
& \vdots & \\
x & \equiv & a_k \pmod{m_k}
\end{array}\right.$$
(As above, assume that $m = m_1 m_2 \cdots m_k$ and $m_i$ are pairwise coprime).
## Solution for Two Moduli
Consider a system of two equations for coprime $m_1, m_2$:
$$
\left\{\begin{align}
a &\equiv a_1 \pmod{m_1} \\
a &\equiv a_2 \pmod{m_2} \\
\end{align}\right.
$$
We want to find a solution for $a \pmod{m_1 m_2}$. Using the [Extended Euclidean Algorithm](extended-euclid-algorithm.md) we can find Bézout coefficients $n_1, n_2$ such that
$$n_1 m_1 + n_2 m_2 = 1.$$
In fact $n_1$ and $n_2$ are just the [modular inverses](module-inverse.md) of $m_1$ and $m_2$ modulo $m_2$ and $m_1$.
We have $n_1 m_1 \equiv 1 \pmod{m_2}$ so $n_1 \equiv m_1^{-1} \pmod{m_2}$, and vice versa $n_2 \equiv m_2^{-1} \pmod{m_1}$.
With those two coefficients we can define a solution:
$$a = a_1 n_2 m_2 + a_2 n_1 m_1 \bmod{m_1 m_2}$$
It's easy to verify that this is indeed a solution by computing $a \bmod{m_1}$ and $a \bmod{m_2}$.
$$
\begin{array}{rcll}
a & \equiv & a_1 n_2 m_2 + a_2 n_1 m_1 & \pmod{m_1}\\
& \equiv & a_1 (1 - n_1 m_1) + a_2 n_1 m_1 & \pmod{m_1}\\
& \equiv & a_1 - a_1 n_1 m_1 + a_2 n_1 m_1 & \pmod{m_1}\\
& \equiv & a_1 & \pmod{m_1}
\end{array}
$$
Notice that the Chinese Remainder Theorem also guarantees that only one solution exists modulo $m_1 m_2$.
This is also easy to prove.
Let's assume that you have two different solutions $x$ and $y$.
Because $x \equiv a_i \pmod{m_i}$ and $y \equiv a_i \pmod{m_i}$, it follows that $x - y \equiv 0 \pmod{m_i}$ and therefore $x - y \equiv 0 \pmod{m_1 m_2}$, or equivalently $x \equiv y \pmod{m_1 m_2}$.
So $x$ and $y$ are actually the same solution.
## Solution for General Case
### Inductive Solution
As $m_1 m_2$ is coprime to $m_3$, we can repeatedly apply the solution for two moduli to handle any number of moduli.
First you compute $b_2 := a \pmod{m_1 m_2}$ using the first two congruences,
then you can compute $b_3 := a \pmod{m_1 m_2 m_3}$ using the congruences $a \equiv b_2 \pmod{m_1 m_2}$ and $a \equiv a_3 \pmod {m_3}$, etc.
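A minimal sketch of this inductive approach could look as follows. It assumes a helper `mod_inv(a, m)` (see [Modular Multiplicative Inverse](module-inverse.md)), pairwise coprime moduli given as (remainder, modulus) pairs, and that all intermediate products fit into `long long`:

```cpp
long long crt_inductive(vector<pair<long long, long long>> const& congruences) {
    long long a = congruences[0].first, m = congruences[0].second;
    for (size_t i = 1; i < congruences.size(); i++) {
        long long a_i = congruences[i].first, m_i = congruences[i].second;
        // solve a + m * t == a_i (mod m_i)  =>  t == (a_i - a) * m^{-1} (mod m_i)
        long long t = (a_i - a) % m_i * mod_inv(m % m_i, m_i) % m_i;
        if (t < 0)
            t += m_i;
        a += m * t;  // new solution modulo m * m_i
        m *= m_i;
    }
    return a;
}
```

For the example above this builds $b_2 = 8 \pmod{15}$ from the first two congruences and then $b_3 = 23 \pmod{105}$.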
### Direct Construction
A direct construction similar to Lagrange interpolation is possible.
Let $M_i := \prod_{j \neq i} m_j$, the product of all moduli but $m_i$, and $N_i := M_i^{-1} \bmod{m_i}$ the modular inverse of $M_i$ modulo $m_i$.
Then a solution to the system of congruences is:
$$a \equiv \sum_{i=1}^k a_i M_i N_i \pmod{m_1 m_2 \cdots m_k}$$
We can check this is indeed a solution, by computing $a \bmod{m_i}$ for all $i$.
Because $M_j$ is a multiple of $m_i$ for $i \neq j$ we have
$$\begin{array}{rcll}
a & \equiv & \sum_{j=1}^k a_j M_j N_j & \pmod{m_i} \\
& \equiv & a_i M_i N_i & \pmod{m_i} \\
& \equiv & a_i M_i M_i^{-1} & \pmod{m_i} \\
& \equiv & a_i & \pmod{m_i}
\end{array}$$
### Implementation
```{.cpp file=chinese_remainder_theorem}
struct Congruence {
    long long a, m;
};

long long chinese_remainder_theorem(vector<Congruence> const& congruences) {
    long long M = 1;
    for (auto const& congruence : congruences) {
        M *= congruence.m;
    }
    long long solution = 0;
    for (auto const& congruence : congruences) {
        long long a_i = congruence.a;
        long long M_i = M / congruence.m;
        long long N_i = mod_inv(M_i, congruence.m);
        solution = (solution + a_i * M_i % M * N_i) % M;
    }
    return solution;
}
```
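For example, the system from the beginning of this article can be solved like this (assuming `mod_inv` from the [Modular Multiplicative Inverse](module-inverse.md) article is available):

```cpp
vector<Congruence> congruences = {{2, 3}, {3, 5}, {2, 7}};
long long a = chinese_remainder_theorem(congruences);
// a == 23, the unique solution modulo 3 * 5 * 7 = 105
```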
## Solution for not coprime moduli
As mentioned, the algorithm above only works for coprime moduli $m_1, m_2, \dots m_k$.
In the not coprime case, a system of congruences has exactly one solution modulo $\text{lcm}(m_1, m_2, \dots, m_k)$, or has no solution at all.
E.g. in the following system, the first congruence implies that the solution is odd, and the second congruence implies that the solution is even.
It's not possible that a number is both odd and even, therefore there is clearly no solution.
$$\left\{\begin{align}
a & \equiv 1 \pmod{4} \\
a & \equiv 2 \pmod{6}
\end{align}\right.$$
It is pretty simple to determine whether a system has a solution.
And if it has one, we can use the original algorithm to solve a slightly modified system of congruences.
A single congruence $a \equiv a_i \pmod{m_i}$ is equivalent to the system of congruences $a \equiv a_i \pmod{p_j^{n_j}}$ where $p_1^{n_1} p_2^{n_2}\cdots p_k^{n_k}$ is the prime factorization of $m_i$.
With this fact, we can modify the system of congruences into a system, that only has prime powers as moduli.
E.g. the above system of congruences is equivalent to:
$$\left\{\begin{array}{ll}
a \equiv 1 & \pmod{4} \\
a \equiv 2 \equiv 0 & \pmod{2} \\
a \equiv 2 & \pmod{3}
\end{array}\right.$$
Because some of the original moduli had common factors, we will get several congruences with moduli based on the same prime, possibly with different prime powers.
You can observe that the congruence with the highest prime power modulus will be the strongest congruence of all congruences based on the same prime number.
Either it will give a contradiction with some other congruence, or it will already imply all the other congruences.
In our case, the first congruence $a \equiv 1 \pmod{4}$ implies $a \equiv 1 \pmod{2}$, and therefore contradicts the second congruence $a \equiv 0 \pmod{2}$.
Therefore this system of congruences has no solution.
If there are no contradictions, then the system of equations has a solution.
We can ignore all congruences except the ones with the highest prime power moduli.
These moduli are now coprime, and therefore we can solve this one with the algorithm discussed in the sections above.
E.g. the following system has a solution modulo $\text{lcm}(10, 12) = 60$.
$$\left\{\begin{align}
a & \equiv 3 \pmod{10} \\
a & \equiv 5 \pmod{12}
\end{align}\right.$$
This system of congruences is equivalent to the following system of congruences:
$$\left\{\begin{align}
a & \equiv 3 \equiv 1 \pmod{2} \\
a & \equiv 3 \equiv 3 \pmod{5} \\
a & \equiv 5 \equiv 1 \pmod{4} \\
a & \equiv 5 \equiv 2 \pmod{3}
\end{align}\right.$$
The only congruences with moduli based on the same prime are $a \equiv 1 \pmod{4}$ and $a \equiv 1 \pmod{2}$.
The first one already implies the second one, so we can ignore the second one, and solve the following system with coprime moduli instead:
$$\left\{\begin{align}
a & \equiv 3 \equiv 3 \pmod{5} \\
a & \equiv 5 \equiv 1 \pmod{4} \\
a & \equiv 5 \equiv 2 \pmod{3}
\end{align}\right.$$
It has the solution $53 \pmod{60}$, and indeed $53 \bmod{10} = 3$ and $53 \bmod{12} = 5$.
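Alternatively, instead of factorizing the moduli, one can merge the congruences two at a time with the extended Euclidean algorithm. The following is only a sketch: it assumes a `long long` version of the `gcd(a, b, x, y)` function from the [Extended Euclidean Algorithm](extended-euclid-algorithm.md) article and that all intermediate values fit into `long long`.

```cpp
// merge a == a1 (mod m1) and a == a2 (mod m2); the moduli don't have to be coprime
pair<long long, long long> merge(long long a1, long long m1, long long a2, long long m2) {
    long long x, y;
    long long g = gcd(m1, m2, x, y);   // m1 * x + m2 * y == g
    if ((a2 - a1) % g != 0)
        return {-1, -1};               // contradiction, no solution
    long long lcm = m1 / g * m2;
    long long t = (a2 - a1) / g % (m2 / g) * x % (m2 / g);
    long long res = (a1 + m1 * t) % lcm;
    if (res < 0)
        res += lcm;
    return {res, lcm};                 // e.g. merge(3, 10, 5, 12) gives {53, 60}
}
```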
## Garner's Algorithm
Another consequence of the CRT is that we can represent big numbers using an array of small integers.
Instead of doing a lot of computations with very large numbers, which might be expensive (think of doing divisions with 1000-digit numbers), you can pick a couple of coprime moduli and represent the large number as a system of congruences, and perform all operations on the system of equations.
Any number $a$ less than $m_1 m_2 \cdots m_k$ can be represented as an array $a_1, \ldots, a_k$, where $a \equiv a_i \pmod{m_i}$.
By using the above algorithm, you can again reconstruct the large number whenever you need it.
Alternatively you can represent the number in the **mixed radix** representation:
$$a = x_1 + x_2 m_1 + x_3 m_1 m_2 + \ldots + x_k m_1 \cdots m_{k-1} \text{ with }x_i \in [0, m_i)$$
Garner's algorithm, which is discussed in the dedicated article [Garner's algorithm](garners-algorithm.md), computes the coefficients $x_i$.
And with those coefficients you can restore the full number.
|
---
title
chinese_theorem
---
# Chinese Remainder Theorem
The Chinese Remainder Theorem (which will be referred to as CRT in the rest of this article) was discovered by Chinese mathematician Sun Zi.
## Formulation
Let $m = m_1 \cdot m_2 \cdots m_k$, where $m_i$ are pairwise coprime. In addition to $m_i$, we are also given a system of congruences
$$\left\{\begin{array}{rcl}
a & \equiv & a_1 \pmod{m_1} \\
a & \equiv & a_2 \pmod{m_2} \\
& \vdots & \\
a & \equiv & a_k \pmod{m_k}
\end{array}\right.$$
where $a_i$ are some given constants. The original form of CRT then states that the given system of congruences always has *exactly one* solution modulo $m$.
E.g. the system of congruences
$$\left\{\begin{array}{rcl}
a & \equiv & 2 \pmod{3} \\
a & \equiv & 3 \pmod{5} \\
a & \equiv & 2 \pmod{7}
\end{array}\right.$$
has the solution $23$ modulo $105$, because $23 \bmod{3} = 2$, $23 \bmod{5} = 3$, and $23 \bmod{7} = 2$.
We can write down every solution as $23 + 105\cdot k$ for $k \in \mathbb{Z}$.
### Corollary
A consequence of the CRT is that the equation
$$x \equiv a \pmod{m}$$
is equivalent to the system of equations
$$\left\{\begin{array}{rcl}
x & \equiv & a_1 \pmod{m_1} \\
& \vdots & \\
x & \equiv & a_k \pmod{m_k}
\end{array}\right.$$
(As above, assume that $m = m_1 m_2 \cdots m_k$ and $m_i$ are pairwise coprime).
## Solution for Two Moduli
Consider a system of two equations for coprime $m_1, m_2$:
$$
\left\{\begin{align}
a &\equiv a_1 \pmod{m_1} \\
a &\equiv a_2 \pmod{m_2} \\
\end{align}\right.
$$
We want to find a solution for $a \pmod{m_1 m_2}$. Using the [Extended Euclidean Algorithm](extended-euclid-algorithm.md) we can find Bézout coefficients $n_1, n_2$ such that
$$n_1 m_1 + n_2 m_2 = 1.$$
In fact $n_1$ and $n_2$ are just the [modular inverses](module-inverse.md) of $m_1$ and $m_2$ modulo $m_2$ and $m_1$.
We have $n_1 m_1 \equiv 1 \pmod{m_2}$ so $n_1 \equiv m_1^{-1} \pmod{m_2}$, and vice versa $n_2 \equiv m_2^{-1} \pmod{m_1}$.
With those two coefficients we can define a solution:
$$a = a_1 n_2 m_2 + a_2 n_1 m_1 \bmod{m_1 m_2}$$
It's easy to verify that this is indeed a solution by computing $a \bmod{m_1}$ and $a \bmod{m_2}$.
$$
\begin{array}{rcll}
a & \equiv & a_1 n_2 m_2 + a_2 n_1 m_1 & \pmod{m_1}\\
& \equiv & a_1 (1 - n_1 m_1) + a_2 n_1 m_1 & \pmod{m_1}\\
& \equiv & a_1 - a_1 n_1 m_1 + a_2 n_1 m_1 & \pmod{m_1}\\
& \equiv & a_1 & \pmod{m_1}
\end{array}
$$
Notice that the Chinese Remainder Theorem also guarantees that only one solution exists modulo $m_1 m_2$.
This is also easy to prove.
Let's assume that you have two different solutions $x$ and $y$.
Because $x \equiv a_i \pmod{m_i}$ and $y \equiv a_i \pmod{m_i}$, it follows that $x - y \equiv 0 \pmod{m_i}$ and therefore $x - y \equiv 0 \pmod{m_1 m_2}$, or equivalently $x \equiv y \pmod{m_1 m_2}$.
So $x$ and $y$ are actually the same solution.
## Solution for General Case
### Inductive Solution
As $m_1 m_2$ is coprime to $m_3$, we can repeatedly apply the solution for two moduli to handle any number of moduli.
First you compute $b_2 := a \pmod{m_1 m_2}$ using the first two congruences,
then you can compute $b_3 := a \pmod{m_1 m_2 m_3}$ using the congruences $a \equiv b_2 \pmod{m_1 m_2}$ and $a \equiv a_3 \pmod {m_3}$, etc.
### Direct Construction
A direct construction similar to Lagrange interpolation is possible.
Let $M_i := \prod_{j \neq i} m_j$, the product of all moduli but $m_i$, and $N_i := M_i^{-1} \bmod{m_i}$ the modular inverse of $M_i$ modulo $m_i$.
Then a solution to the system of congruences is:
$$a \equiv \sum_{i=1}^k a_i M_i N_i \pmod{m_1 m_2 \cdots m_k}$$
We can check this is indeed a solution, by computing $a \bmod{m_i}$ for all $i$.
Because $M_j$ is a multiple of $m_i$ for $i \neq j$ we have
$$\begin{array}{rcll}
a & \equiv & \sum_{j=1}^k a_j M_j N_j & \pmod{m_i} \\
& \equiv & a_i M_i N_i & \pmod{m_i} \\
& \equiv & a_i M_i M_i^{-1} & \pmod{m_i} \\
& \equiv & a_i & \pmod{m_i}
\end{array}$$
### Implementation
```{.cpp file=chinese_remainder_theorem}
struct Congruence {
    long long a, m;
};

long long chinese_remainder_theorem(vector<Congruence> const& congruences) {
    long long M = 1;
    for (auto const& congruence : congruences) {
        M *= congruence.m;
    }
    long long solution = 0;
    for (auto const& congruence : congruences) {
        long long a_i = congruence.a;
        long long M_i = M / congruence.m;
        long long N_i = mod_inv(M_i, congruence.m);
        solution = (solution + a_i * M_i % M * N_i) % M;
    }
    return solution;
}
```
## Solution for not coprime moduli
As mentioned, the algorithm above only works for coprime moduli $m_1, m_2, \dots m_k$.
In the not coprime case, a system of congruences has exactly one solution modulo $\text{lcm}(m_1, m_2, \dots, m_k)$, or has no solution at all.
E.g. in the following system, the first congruence implies that the solution is odd, and the second congruence implies that the solution is even.
It's not possible that a number is both odd and even, therefore there is clearly no solution.
$$\left\{\begin{align}
a & \equiv 1 \pmod{4} \\
a & \equiv 2 \pmod{6}
\end{align}\right.$$
It is pretty simple to determine whether a system has a solution.
And if it has one, we can use the original algorithm to solve a slightly modified system of congruences.
A single congruence $a \equiv a_i \pmod{m_i}$ is equivalent to the system of congruences $a \equiv a_i \pmod{p_j^{n_j}}$ where $p_1^{n_1} p_2^{n_2}\cdots p_k^{n_k}$ is the prime factorization of $m_i$.
With this fact, we can modify the system of congruences into a system, that only has prime powers as moduli.
E.g. the above system of congruences is equivalent to:
$$\left\{\begin{array}{ll}
a \equiv 1 & \pmod{4} \\
a \equiv 2 \equiv 0 & \pmod{2} \\
a \equiv 2 & \pmod{3}
\end{array}\right.$$
Because some of the original moduli had common factors, we will get several congruences with moduli based on the same prime, possibly with different prime powers.
You can observe that the congruence with the highest prime power modulus will be the strongest congruence of all congruences based on the same prime number.
Either it will give a contradiction with some other congruence, or it will already imply all the other congruences.
In our case, the first congruence $a \equiv 1 \pmod{4}$ implies $a \equiv 1 \pmod{2}$, and therefore contradicts the second congruence $a \equiv 0 \pmod{2}$.
Therefore this system of congruences has no solution.
If there are no contradictions, then the system of equations has a solution.
We can ignore all congruences except the ones with the highest prime power moduli.
These moduli are now coprime, and therefore we can solve this one with the algorithm discussed in the sections above.
E.g. the following system has a solution modulo $\text{lcm}(10, 12) = 60$.
$$\left\{\begin{align}
a & \equiv 3 \pmod{10} \\
a & \equiv 5 \pmod{12}
\end{align}\right.$$
This system of congruences is equivalent to the following system of congruences:
$$\left\{\begin{align}
a & \equiv 3 \equiv 1 \pmod{2} \\
a & \equiv 3 \equiv 3 \pmod{5} \\
a & \equiv 5 \equiv 1 \pmod{4} \\
a & \equiv 5 \equiv 2 \pmod{3}
\end{align}\right.$$
The only congruences with moduli based on the same prime are $a \equiv 1 \pmod{4}$ and $a \equiv 1 \pmod{2}$.
The first one already implies the second one, so we can ignore the second one, and solve the following system with coprime moduli instead:
$$\left\{\begin{align}
a & \equiv 3 \equiv 3 \pmod{5} \\
a & \equiv 5 \equiv 1 \pmod{4} \\
a & \equiv 5 \equiv 2 \pmod{3}
\end{align}\right.$$
It has the solution $53 \pmod{60}$, and indeed $53 \bmod{10} = 3$ and $53 \bmod{12} = 5$.
## Garner's Algorithm
Another consequence of the CRT is that we can represent big numbers using an array of small integers.
Instead of doing a lot of computations with very large numbers, which might be expensive (think of doing divisions with 1000-digit numbers), you can pick a couple of coprime moduli and represent the large number as a system of congruences, and perform all operations on the system of equations.
Any number $a$ less than $m_1 m_2 \cdots m_k$ can be represented as an array $a_1, \ldots, a_k$, where $a \equiv a_i \pmod{m_i}$.
By using the above algorithm, you can again reconstruct the large number whenever you need it.
Alternatively you can represent the number in the **mixed radix** representation:
$$a = x_1 + x_2 m_1 + x_3 m_1 m_2 + \ldots + x_k m_1 \cdots m_{k-1} \text{ with }x_i \in [0, m_i)$$
Garner's algorithm, which is discussed in the dedicated article [Garner's algorithm](garners-algorithm.md), computes the coefficients $x_i$.
And with those coefficients you can restore the full number.
## Practice Problems:
* [Google Code Jam - Golf Gophers](https://codingcompetitions.withgoogle.com/codejam/round/0000000000051635/0000000000104f1a#problem)
* [Hackerrank - Number of sequences](https://www.hackerrank.com/contests/w22/challenges/number-of-sequences)
* [Codeforces - Remainders Game](http://codeforces.com/problemset/problem/687/B)
|
Chinese Remainder Theorem
|
---
title
extended_euclid_algorithm
---
# Extended Euclidean Algorithm
While the [Euclidean algorithm](euclid-algorithm.md) calculates only the greatest common divisor (GCD) of two integers $a$ and $b$, the extended version also finds a way to represent GCD in terms of $a$ and $b$, i.e. coefficients $x$ and $y$ for which:
$$a \cdot x + b \cdot y = \gcd(a, b)$$
It's important to note that by [Bézout's identity](https://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity) we can always find such a representation. For instance, $\gcd(55, 80) = 5$, therefore we can represent $5$ as a linear combination with the terms $55$ and $80$: $55 \cdot 3 + 80 \cdot (-2) = 5$
A more general form of that problem is discussed in the article about [Linear Diophantine Equations](linear-diophantine-equation.md).
It will build upon this algorithm.
## Algorithm
We will denote the GCD of $a$ and $b$ with $g$ in this section.
The changes to the original algorithm are very simple.
If we recall the algorithm, we can see that the algorithm ends with $b = 0$ and $a = g$.
For these parameters we can easily find coefficients, namely $g \cdot 1 + 0 \cdot 0 = g$.
Starting from these coefficients $(x, y) = (1, 0)$, we can go backwards up the recursive calls.
All we need to do is to figure out how the coefficients $x$ and $y$ change during the transition from $(a, b)$ to $(b, a \bmod b)$.
Let us assume we found the coefficients $(x_1, y_1)$ for $(b, a \bmod b)$:
$$b \cdot x_1 + (a \bmod b) \cdot y_1 = g$$
and we want to find the pair $(x, y)$ for $(a, b)$:
$$ a \cdot x + b \cdot y = g$$
We can represent $a \bmod b$ as:
$$ a \bmod b = a - \left\lfloor \frac{a}{b} \right\rfloor \cdot b$$
Substituting this expression in the coefficient equation of $(x_1, y_1)$ gives:
$$ g = b \cdot x_1 + (a \bmod b) \cdot y_1 = b \cdot x_1 + \left(a - \left\lfloor \frac{a}{b} \right\rfloor \cdot b \right) \cdot y_1$$
and after rearranging the terms:
$$g = a \cdot y_1 + b \cdot \left( x_1 - y_1 \cdot \left\lfloor \frac{a}{b} \right\rfloor \right)$$
We found the values of $x$ and $y$:
$$\begin{cases}
x = y_1 \\
y = x_1 - y_1 \cdot \left\lfloor \frac{a}{b} \right\rfloor
\end{cases} $$
## Implementation
```{.cpp file=extended_gcd}
int gcd(int a, int b, int& x, int& y) {
    if (b == 0) {
        x = 1;
        y = 0;
        return a;
    }
    int x1, y1;
    int d = gcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - y1 * (a / b);
    return d;
}
```
The recursive function above returns the GCD and assigns the values of the coefficients to `x` and `y` (which are passed by reference to the function).
This implementation of extended Euclidean algorithm produces correct results for negative integers as well.
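For example, for the numbers from the introduction a call could look like this:

```cpp
int x, y;
int g = gcd(55, 80, x, y);
// g == 5, x == 3, y == -2, and indeed 55 * 3 + 80 * (-2) == 5
```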
## Iterative version
It's also possible to write the Extended Euclidean algorithm in an iterative way.
Because it avoids recursion, the code will run a little bit faster than the recursive one.
```{.cpp file=extended_gcd_iter}
int gcd(int a, int b, int& x, int& y) {
    x = 1, y = 0;
    int x1 = 0, y1 = 1, a1 = a, b1 = b;
    while (b1) {
        int q = a1 / b1;
        tie(x, x1) = make_tuple(x1, x - q * x1);
        tie(y, y1) = make_tuple(y1, y - q * y1);
        tie(a1, b1) = make_tuple(b1, a1 - q * b1);
    }
    return a1;
}
```
If you look closely at the variables `a1` and `b1`, you can notice that they take exactly the same values as in the iterative version of the normal [Euclidean algorithm](euclid-algorithm.md). So the algorithm will at least compute the correct GCD.
To see why the algorithm also computes the correct coefficients, you can check that the following invariants will hold at any time (before the while loop, and at the end of each iteration): $x \cdot a + y \cdot b = a_1$ and $x_1 \cdot a + y_1 \cdot b = b_1$.
It's trivial to see that these two equations are satisfied at the beginning.
And you can check that the update in the loop iteration will still keep those equalities valid.
At the end we know that $a_1$ contains the GCD, so $x \cdot a + y \cdot b = g$.
Which means that we have found the required coefficients.
You can even optimize the code more, and remove the variables $a_1$ and $b_1$ from the code, and just reuse $a$ and $b$.
However if you do so, you lose the ability to argue about the invariants.
|
---
title
extended_euclid_algorithm
---
# Extended Euclidean Algorithm
While the [Euclidean algorithm](euclid-algorithm.md) calculates only the greatest common divisor (GCD) of two integers $a$ and $b$, the extended version also finds a way to represent GCD in terms of $a$ and $b$, i.e. coefficients $x$ and $y$ for which:
$$a \cdot x + b \cdot y = \gcd(a, b)$$
It's important to note that by [Bézout's identity](https://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity) we can always find such a representation. For instance, $\gcd(55, 80) = 5$, therefore we can represent $5$ as a linear combination with the terms $55$ and $80$: $55 \cdot 3 + 80 \cdot (-2) = 5$
A more general form of that problem is discussed in the article about [Linear Diophantine Equations](linear-diophantine-equation.md).
It will build upon this algorithm.
## Algorithm
We will denote the GCD of $a$ and $b$ with $g$ in this section.
The changes to the original algorithm are very simple.
If we recall the algorithm, we can see that the algorithm ends with $b = 0$ and $a = g$.
For these parameters we can easily find coefficients, namely $g \cdot 1 + 0 \cdot 0 = g$.
Starting from these coefficients $(x, y) = (1, 0)$, we can go backwards up the recursive calls.
All we need to do is to figure out how the coefficients $x$ and $y$ change during the transition from $(a, b)$ to $(b, a \bmod b)$.
Let us assume we found the coefficients $(x_1, y_1)$ for $(b, a \bmod b)$:
$$b \cdot x_1 + (a \bmod b) \cdot y_1 = g$$
and we want to find the pair $(x, y)$ for $(a, b)$:
$$ a \cdot x + b \cdot y = g$$
We can represent $a \bmod b$ as:
$$ a \bmod b = a - \left\lfloor \frac{a}{b} \right\rfloor \cdot b$$
Substituting this expression in the coefficient equation of $(x_1, y_1)$ gives:
$$ g = b \cdot x_1 + (a \bmod b) \cdot y_1 = b \cdot x_1 + \left(a - \left\lfloor \frac{a}{b} \right\rfloor \cdot b \right) \cdot y_1$$
and after rearranging the terms:
$$g = a \cdot y_1 + b \cdot \left( x_1 - y_1 \cdot \left\lfloor \frac{a}{b} \right\rfloor \right)$$
We found the values of $x$ and $y$:
$$\begin{cases}
x = y_1 \\
y = x_1 - y_1 \cdot \left\lfloor \frac{a}{b} \right\rfloor
\end{cases} $$
## Implementation
```{.cpp file=extended_gcd}
int gcd(int a, int b, int& x, int& y) {
    if (b == 0) {
        x = 1;
        y = 0;
        return a;
    }
    int x1, y1;
    int d = gcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - y1 * (a / b);
    return d;
}
```
The recursive function above returns the GCD and assigns the values of the coefficients to `x` and `y` (which are passed by reference to the function).
This implementation of extended Euclidean algorithm produces correct results for negative integers as well.
## Iterative version
It's also possible to write the Extended Euclidean algorithm in an iterative way.
Because it avoids recursion, the code will run a little bit faster than the recursive one.
```{.cpp file=extended_gcd_iter}
int gcd(int a, int b, int& x, int& y) {
    x = 1, y = 0;
    int x1 = 0, y1 = 1, a1 = a, b1 = b;
    while (b1) {
        int q = a1 / b1;
        tie(x, x1) = make_tuple(x1, x - q * x1);
        tie(y, y1) = make_tuple(y1, y - q * y1);
        tie(a1, b1) = make_tuple(b1, a1 - q * b1);
    }
    return a1;
}
```
If you look closely at the variables `a1` and `b1`, you can notice that they take exactly the same values as in the iterative version of the normal [Euclidean algorithm](euclid-algorithm.md). So the algorithm will at least compute the correct GCD.
To see why the algorithm also computes the correct coefficients, you can check that the following invariants will hold at any time (before the while loop, and at the end of each iteration): $x \cdot a + y \cdot b = a_1$ and $x_1 \cdot a + y_1 \cdot b = b_1$.
It's trivial to see that these two equations are satisfied at the beginning.
And you can check that the update in the loop iteration will still keep those equalities valid.
At the end we know that $a_1$ contains the GCD, so $x \cdot a + y \cdot b = g$.
Which means that we have found the required coefficients.
You can even optimize the code more, and remove the variables $a_1$ and $b_1$ from the code, and just reuse $a$ and $b$.
However if you do so, you lose the ability to argue about the invariants.
## Practice Problems
* [10104 - Euclid Problem](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=1045)
* [GYM - (J) Once Upon A Time](http://codeforces.com/gym/100963)
* [UVA - 12775 - Gift Dilemma](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=4628)
|
Extended Euclidean Algorithm
|
---
title
fft_multiply
---
# Fast Fourier transform
In this article we will discuss an algorithm that allows us to multiply two polynomials of length $n$ in $O(n \log n)$ time, which is better than the trivial multiplication which takes $O(n^2)$ time.
Obviously multiplying two long numbers can also be reduced to multiplying polynomials, so two long numbers can be multiplied in $O(n \log n)$ time (where $n$ is the number of digits in the numbers).
The discovery of the **Fast Fourier transformation (FFT)** is attributed to Cooley and Tukey, who published an algorithm in 1965.
But in fact the FFT has been discovered repeatedly before; its importance was simply not understood before the invention of modern computers.
Some researchers attribute the discovery of the FFT to Runge and König in 1924.
But actually Gauss had already developed such a method in 1805; he just never published it.
Notice that the FFT algorithm presented here runs in $O(n \log n)$ time, but it doesn't work for multiplying arbitrarily big polynomials with arbitrarily large coefficients or for multiplying arbitrarily big integers.
It can easily handle polynomials of size $10^5$ with small coefficients, or multiplying two numbers of size $10^6$, but at some point the range and the precision of the used floating point numbers will no longer be enough to give accurate results.
That is usually enough for solving competitive programming problems, but there are also more complex variations that can perform arbitrary large polynomial/integer multiplications.
E.g. in 1971 Schönhage and Strassen developed a variation for multiplying arbitrarily large numbers that applies the FFT recursively over ring structures and runs in $O(n \log n \log \log n)$.
And recently (in 2019) Harvey and van der Hoeven published an algorithm that runs in true $O(n \log n)$.
## Discrete Fourier transform
Let there be a polynomial of degree $n - 1$:
$$A(x) = a_0 x^0 + a_1 x^1 + \dots + a_{n-1} x^{n-1}$$
Without loss of generality we assume that $n$ - the number of coefficients - is a power of $2$.
If $n$ is not a power of $2$, then we simply add the missing terms $a_i x^i$ and set the coefficients $a_i$ to $0$.
The theory of complex numbers tells us that the equation $x^n = 1$ has $n$ complex solutions (called the $n$-th roots of unity), and the solutions are of the form $w_{n, k} = e^{\frac{2 k \pi i}{n}}$ with $k = 0 \dots n-1$.
Additionally these complex numbers have some very interesting properties:
e.g. the principal $n$-th root $w_n = w_{n, 1} = e^{\frac{2 \pi i}{n}}$ can be used to describe all other $n$-th roots: $w_{n, k} = (w_n)^k$.
The **discrete Fourier transform (DFT)** of the polynomial $A(x)$ (or equivalently the vector of coefficients $(a_0, a_1, \dots, a_{n-1})$) is defined as the values of the polynomial at the points $x = w_{n, k}$, i.e. it is the vector:
$$\begin{align}
\text{DFT}(a_0, a_1, \dots, a_{n-1}) &= (y_0, y_1, \dots, y_{n-1}) \\
&= (A(w_{n, 0}), A(w_{n, 1}), \dots, A(w_{n, n-1})) \\
&= (A(w_n^0), A(w_n^1), \dots, A(w_n^{n-1}))
\end{align}$$
Similarly the **inverse discrete Fourier transform** is defined:
The inverse DFT of values of the polynomial $(y_0, y_1, \dots, y_{n-1})$ are the coefficients of the polynomial $(a_0, a_1, \dots, a_{n-1})$.
$$\text{InverseDFT}(y_0, y_1, \dots, y_{n-1}) = (a_0, a_1, \dots, a_{n-1})$$
Thus, if a direct DFT computes the values of the polynomial at the $n$-th roots of unity, the inverse DFT can restore the coefficients of the polynomial using those values.
### Application of the DFT: fast multiplication of polynomials
Let there be two polynomials $A$ and $B$.
We compute the DFT for each of them: $\text{DFT}(A)$ and $\text{DFT}(B)$.
What happens if we multiply these polynomials?
Obviously at each point the values are simply multiplied, i.e.
$$(A \cdot B)(x) = A(x) \cdot B(x).$$
This means that if we multiply the vectors $\text{DFT}(A)$ and $\text{DFT}(B)$ - by multiplying each element of one vector by the corresponding element of the other vector - then we get nothing other than the DFT of the polynomial $\text{DFT}(A \cdot B)$:
$$\text{DFT}(A \cdot B) = \text{DFT}(A) \cdot \text{DFT}(B)$$
Finally, applying the inverse DFT, we obtain:
$$A \cdot B = \text{InverseDFT}(\text{DFT}(A) \cdot \text{DFT}(B))$$
By the product of the two DFTs on the right we mean the pairwise product of the vector elements.
This can be computed in $O(n)$ time.
If we can compute the DFT and the inverse DFT in $O(n \log n)$, then we can compute the product of the two polynomials (and consequently also two long numbers) with the same time complexity.
It should be noted, that the two polynomials should have the same degree.
Otherwise the two result vectors of the DFT have different length.
We can accomplish this by adding coefficients with the value $0$.
And also, since the result of the product of two polynomials is a polynomial of degree $2 (n - 1)$, we have to double the degrees of each polynomial (again by padding $0$s).
From a vector with $n$ values we cannot reconstruct the desired polynomial with $2n - 1$ coefficients.
### Fast Fourier Transform
The **fast Fourier transform** is a method that allows computing the DFT in $O(n \log n)$ time.
The basic idea of the FFT is to apply divide and conquer.
We divide the coefficient vector of the polynomial into two vectors, recursively compute the DFT for each of them, and combine the results to compute the DFT of the complete polynomial.
So let there be a polynomial $A(x)$ with degree $n - 1$, where $n$ is a power of $2$, and $n > 1$:
$$A(x) = a_0 x^0 + a_1 x^1 + \dots + a_{n-1} x^{n-1}$$
We divide it into two smaller polynomials, the one containing only the coefficients of the even positions, and the one containing the coefficients of the odd positions:
$$\begin{align}
A_0(x) &= a_0 x^0 + a_2 x^1 + \dots + a_{n-2} x^{\frac{n}{2}-1} \\
A_1(x) &= a_1 x^0 + a_3 x^1 + \dots + a_{n-1} x^{\frac{n}{2}-1}
\end{align}$$
It is easy to see that
$$A(x) = A_0(x^2) + x A_1(x^2).$$
The polynomials $A_0$ and $A_1$ have only half as many coefficients as the polynomial $A$.
If we can compute the $\text{DFT}(A)$ in linear time using $\text{DFT}(A_0)$ and $\text{DFT}(A_1)$, then we get the recurrence $T_{\text{DFT}}(n) = 2 T_{\text{DFT}}\left(\frac{n}{2}\right) + O(n)$ for the time complexity, which results in $T_{\text{DFT}}(n) = O(n \log n)$ by the **master theorem**.
Let's learn how we can accomplish that.
Suppose we have computed the vectors $\left(y_k^0\right)_{k=0}^{n/2-1} = \text{DFT}(A_0)$ and $\left(y_k^1\right)_{k=0}^{n/2-1} = \text{DFT}(A_1)$.
Let us find an expression for $\left(y_k\right)_{k=0}^{n-1} = \text{DFT}(A)$.
For the first $\frac{n}{2}$ values we can just use the previously noted equation $A(x) = A_0(x^2) + x A_1(x^2)$:
$$y_k = y_k^0 + w_n^k y_k^1, \quad k = 0 \dots \frac{n}{2} - 1.$$
However for the second $\frac{n}{2}$ values we need to find a slightly different expression:
$$\begin{align}
y_{k+n/2} &= A\left(w_n^{k+n/2}\right) \\
&= A_0\left(w_n^{2k+n}\right) + w_n^{k + n/2} A_1\left(w_n^{2k+n}\right) \\
&= A_0\left(w_n^{2k} w_n^n\right) + w_n^k w_n^{n/2} A_1\left(w_n^{2k} w_n^n\right) \\
&= A_0\left(w_n^{2k}\right) - w_n^k A_1\left(w_n^{2k}\right) \\
&= y_k^0 - w_n^k y_k^1
\end{align}$$
Here we used again $A(x) = A_0(x^2) + x A_1(x^2)$ and the two identities $w_n^n = 1$ and $w_n^{n/2} = -1$.
Therefore we get the desired formulas for computing the whole vector $(y_k)$:
$$\begin{align}
y_k &= y_k^0 + w_n^k y_k^1, &\quad k = 0 \dots \frac{n}{2} - 1, \\
y_{k+n/2} &= y_k^0 - w_n^k y_k^1, &\quad k = 0 \dots \frac{n}{2} - 1.
\end{align}$$
(This pattern $a + b$ and $a - b$ is sometimes called a **butterfly**.)
Thus we learned how to compute the DFT in $O(n \log n)$ time.
### Inverse FFT
Let the vector $(y_0, y_1, \dots y_{n-1})$ - the values of polynomial $A$ of degree $n - 1$ in the points $x = w_n^k$ - be given.
We want to restore the coefficients $(a_0, a_1, \dots, a_{n-1})$ of the polynomial.
This known problem is called **interpolation**, and there are general algorithms for solving it.
But in this special case (since we know the values of the points at the roots of unity), we can obtain a much simpler algorithm (that is practically the same as the direct FFT).
We can write the DFT, according to its definition, in the matrix form:
$$
\begin{pmatrix}
w_n^0 & w_n^0 & w_n^0 & w_n^0 & \cdots & w_n^0 \\
w_n^0 & w_n^1 & w_n^2 & w_n^3 & \cdots & w_n^{n-1} \\
w_n^0 & w_n^2 & w_n^4 & w_n^6 & \cdots & w_n^{2(n-1)} \\
w_n^0 & w_n^3 & w_n^6 & w_n^9 & \cdots & w_n^{3(n-1)} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_n^0 & w_n^{n-1} & w_n^{2(n-1)} & w_n^{3(n-1)} & \cdots & w_n^{(n-1)(n-1)}
\end{pmatrix} \begin{pmatrix}
a_0 \\ a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_{n-1}
\end{pmatrix} = \begin{pmatrix}
y_0 \\ y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{n-1}
\end{pmatrix}
$$
This matrix is called the **Vandermonde matrix**.
Thus we can compute the vector $(a_0, a_1, \dots, a_{n-1})$ by multiplying the vector $(y_0, y_1, \dots y_{n-1})$ from the left with the inverse of the matrix:
$$
\begin{pmatrix}
a_0 \\ a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_{n-1}
\end{pmatrix} = \begin{pmatrix}
w_n^0 & w_n^0 & w_n^0 & w_n^0 & \cdots & w_n^0 \\
w_n^0 & w_n^1 & w_n^2 & w_n^3 & \cdots & w_n^{n-1} \\
w_n^0 & w_n^2 & w_n^4 & w_n^6 & \cdots & w_n^{2(n-1)} \\
w_n^0 & w_n^3 & w_n^6 & w_n^9 & \cdots & w_n^{3(n-1)} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_n^0 & w_n^{n-1} & w_n^{2(n-1)} & w_n^{3(n-1)} & \cdots & w_n^{(n-1)(n-1)}
\end{pmatrix}^{-1} \begin{pmatrix}
y_0 \\ y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{n-1}
\end{pmatrix}
$$
A quick check can verify that the inverse of the matrix has the following form:
$$
\frac{1}{n}
\begin{pmatrix}
w_n^0 & w_n^0 & w_n^0 & w_n^0 & \cdots & w_n^0 \\
w_n^0 & w_n^{-1} & w_n^{-2} & w_n^{-3} & \cdots & w_n^{-(n-1)} \\
w_n^0 & w_n^{-2} & w_n^{-4} & w_n^{-6} & \cdots & w_n^{-2(n-1)} \\
w_n^0 & w_n^{-3} & w_n^{-6} & w_n^{-9} & \cdots & w_n^{-3(n-1)} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_n^0 & w_n^{-(n-1)} & w_n^{-2(n-1)} & w_n^{-3(n-1)} & \cdots & w_n^{-(n-1)(n-1)}
\end{pmatrix}
$$
Thus we obtain the formula:
$$a_k = \frac{1}{n} \sum_{j=0}^{n-1} y_j w_n^{-k j}$$
Comparing this to the formula for $y_k$
$$y_k = \sum_{j=0}^{n-1} a_j w_n^{k j},$$
we notice that these problems are almost the same, so the coefficients $a_k$ can be found by the same divide and conquer algorithm, as well as the direct FFT, only instead of $w_n^k$ we have to use $w_n^{-k}$, and at the end we need to divide the resulting coefficients by $n$.
Thus the computation of the inverse DFT is almost the same as the calculation of the direct DFT, and it also can be performed in $O(n \log n)$ time.
### Implementation
Here we present a simple recursive **implementation of the FFT** and the inverse FFT, both in one function, since the difference between the forward and the inverse FFT is so minimal.
To store the complex numbers we use the complex type in the C++ STL.
```{.cpp file=fft_recursive}
using cd = complex<double>;
const double PI = acos(-1);

void fft(vector<cd> & a, bool invert) {
    int n = a.size();
    if (n == 1)
        return;

    vector<cd> a0(n / 2), a1(n / 2);
    for (int i = 0; 2 * i < n; i++) {
        a0[i] = a[2*i];
        a1[i] = a[2*i+1];
    }
    fft(a0, invert);
    fft(a1, invert);

    double ang = 2 * PI / n * (invert ? -1 : 1);
    cd w(1), wn(cos(ang), sin(ang));
    for (int i = 0; 2 * i < n; i++) {
        a[i] = a0[i] + w * a1[i];
        a[i + n/2] = a0[i] - w * a1[i];
        if (invert) {
            a[i] /= 2;
            a[i + n/2] /= 2;
        }
        w *= wn;
    }
}
```
The function gets passed a vector of coefficients, and the function will compute the DFT or inverse DFT and store the result again in this vector.
The argument $\text{invert}$ shows whether the direct or the inverse DFT should be computed.
Inside the function we first check if the length of the vector is equal to one, if this is the case then we don't have to do anything.
Otherwise we divide the vector $a$ into two vectors $a0$ and $a1$ and compute the DFT for both recursively.
Then we initialize the value $wn$ and a variable $w$, which will contain the current power of $wn$.
Then the values of the resulting DFT are computed using the above formulas.
If the flag $\text{invert}$ is set, then we replace $wn$ with $wn^{-1}$, and each of the values of the result is divided by $2$ (since this will be done in each level of the recursion, this will end up dividing the final values by $n$).
Using this function we can create a function for **multiplying two polynomials**:
```{.cpp file=fft_multiply}
vector<int> multiply(vector<int> const& a, vector<int> const& b) {
    vector<cd> fa(a.begin(), a.end()), fb(b.begin(), b.end());
    int n = 1;
    while (n < a.size() + b.size())
        n <<= 1;
    fa.resize(n);
    fb.resize(n);

    fft(fa, false);
    fft(fb, false);
    for (int i = 0; i < n; i++)
        fa[i] *= fb[i];
    fft(fa, true);

    vector<int> result(n);
    for (int i = 0; i < n; i++)
        result[i] = round(fa[i].real());
    return result;
}
```
This function works with polynomials with integer coefficients, however you can also adjust it to work with other types.
Since there is some error when working with complex numbers, we need to round the resulting coefficients at the end.
Finally the function for **multiplying** two long numbers practically doesn't differ from the function for multiplying polynomials.
The only thing we have to do afterwards, is to normalize the number:
```cpp
int carry = 0;
for (int i = 0; i < n; i++) {
    result[i] += carry;
    carry = result[i] / 10;
    result[i] %= 10;
}
```
Since the length of the product of two numbers never exceeds the total length of both numbers, the size of the vector is enough to perform all carry operations.
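A minimal sketch (with hypothetical naming) of the complete multiplication of two numbers, given as vectors of decimal digits with the least significant digit first, could look like this:

```cpp
vector<int> multiply_numbers(vector<int> const& a, vector<int> const& b) {
    vector<int> result = multiply(a, b);  // polynomial product of the digit vectors
    int carry = 0;
    for (int i = 0; i < (int)result.size(); i++) {
        result[i] += carry;
        carry = result[i] / 10;
        result[i] %= 10;
    }
    while (result.size() > 1 && result.back() == 0)
        result.pop_back();                // strip leading zeros
    return result;
}
```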
### Improved implementation: in-place computation
To increase the efficiency we will switch from the recursive implementation to an iterative one.
In the above recursive implementation we explicitly separated the vector $a$ into two vectors - the element on the even positions got assigned to one temporary vector, and the elements on odd positions to another.
However if we reorder the elements in a certain way, we don't need to create these temporary vectors (i.e. all the calculations can be done "in-place", right in the vector $A$ itself).
Note that at the first recursion level, the elements whose lowest bit of the position was zero got assigned to the vector $a_0$, and the ones with a one as the lowest bit of the position got assigned to $a_1$.
In the second recursion level the same thing happens, but with the second lowest bit instead, etc.
Therefore if we reverse the bits of the position of each coefficient, and sort them by these reversed values, we get the desired order (it is called the bit-reversal permutation).
For example the desired order for $n = 8$ has the form:
$$a = \bigg\{ \Big[ (a_0, a_4), (a_2, a_6) \Big], \Big[ (a_1, a_5), (a_3, a_7) \Big] \bigg\}$$
Indeed in the first recursion level (surrounded by curly braces), the vector gets divided into two parts $[a_0, a_2, a_4, a_6]$ and $[a_1, a_3, a_5, a_7]$.
As we see, in the bit-reversal permutation this corresponds to simply dividing the vector into two halves: the first $\frac{n}{2}$ elements and the last $\frac{n}{2}$ elements.
Then there is a recursive call for each half.
Let the resulting DFT for each of them be returned in place of the elements themselves (i.e. in the first half and the second half of the vector $a$ respectively).
$$a = \bigg\{ \Big[y_0^0, y_1^0, y_2^0, y_3^0\Big], \Big[y_0^1, y_1^1, y_2^1, y_3^1 \Big] \bigg\}$$
Now we want to combine the two DFTs into one for the complete vector.
The order of the elements is ideal, and we can also perform the union directly in this vector.
We can take the elements $y_0^0$ and $y_0^1$ and perform the butterfly transform.
The place of the resulting two values is the same as the place of the two initial values, so we get:
$$a = \bigg\{ \Big[y_0^0 + w_n^0 y_0^1, y_1^0, y_2^0, y_3^0\Big], \Big[y_0^0 - w_n^0 y_0^1, y_1^1, y_2^1, y_3^1\Big] \bigg\}$$
Similarly we can compute the butterfly transform of $y_1^0$ and $y_1^1$ and put the results in their place, and so on.
As a result we get:
$$a = \bigg\{ \Big[y_0^0 + w_n^0 y_0^1, y_1^0 + w_n^1 y_1^1, y_2^0 + w_n^2 y_2^1, y_3^0 + w_n^3 y_3^1\Big], \Big[y_0^0 - w_n^0 y_0^1, y_1^0 - w_n^1 y_1^1, y_2^0 - w_n^2 y_2^1, y_3^0 - w_n^3 y_3^1\Big] \bigg\}$$
Thus we computed the required DFT from the vector $a$.
Here we described the process of computing the DFT only at the first recursion level, but the same works obviously also for all other levels.
Thus, after applying the bit-reversal permutation, we can compute the DFT in-place, without any additional memory.
This additionally allows us to get rid of the recursion.
We just start at the lowest level, i.e. we divide the vector into pairs and apply the butterfly transform to them.
This gives us the vector $a$ with the work of the last level applied.
In the next step we divide the vector into vectors of size $4$, and again apply the butterfly transform, which gives us the DFT for each block of size $4$.
And so on.
Finally in the last step we obtained the result of the DFTs of both halves of $a$, and by applying the butterfly transform we obtain the DFT for the complete vector $a$.
```{.cpp file=fft_implementation_iterative}
using cd = complex<double>;
const double PI = acos(-1);

int reverse(int num, int lg_n) {
    int res = 0;
    for (int i = 0; i < lg_n; i++) {
        if (num & (1 << i))
            res |= 1 << (lg_n - 1 - i);
    }
    return res;
}

void fft(vector<cd> & a, bool invert) {
    int n = a.size();
    int lg_n = 0;
    while ((1 << lg_n) < n)
        lg_n++;

    for (int i = 0; i < n; i++) {
        if (i < reverse(i, lg_n))
            swap(a[i], a[reverse(i, lg_n)]);
    }

    for (int len = 2; len <= n; len <<= 1) {
        double ang = 2 * PI / len * (invert ? -1 : 1);
        cd wlen(cos(ang), sin(ang));
        for (int i = 0; i < n; i += len) {
            cd w(1);
            for (int j = 0; j < len / 2; j++) {
                cd u = a[i+j], v = a[i+j+len/2] * w;
                a[i+j] = u + v;
                a[i+j+len/2] = u - v;
                w *= wlen;
            }
        }
    }

    if (invert) {
        for (cd & x : a)
            x /= n;
    }
}
```
At first we apply the bit-reversal permutation by swapping each element with the element at the reversed position.
Then in the $\log n$ stages of the algorithm we compute the DFT for each block of the corresponding size $\text{len}$.
For all those blocks we have the same root of unity $\text{wlen}$.
We iterate all blocks and perform the butterfly transform on each of them.
We can further optimize the reversal of the bits.
In the previous implementation we iterated all bits of the index and created the bitwise reversed index.
However we can reverse the bits in a different way.
Suppose that $j$ already contains the reverse of $i$.
Then to go to $i + 1$, we have to increment $i$, and we also have to increment $j$, but in a "reversed" number system.
Adding one in the conventional binary system is equivalent to flipping all trailing ones into zeros and flipping the zero right before them into a one.
Equivalently in the "reversed" number system, we flip all leading ones, and also the next zero.
Thus we get the following implementation:
```{.cpp file=fft_implementation_iterative_opt}
using cd = complex<double>;
const double PI = acos(-1);

void fft(vector<cd> & a, bool invert) {
    int n = a.size();

    for (int i = 1, j = 0; i < n; i++) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j ^= bit;

        if (i < j)
            swap(a[i], a[j]);
    }

    for (int len = 2; len <= n; len <<= 1) {
        double ang = 2 * PI / len * (invert ? -1 : 1);
        cd wlen(cos(ang), sin(ang));
        for (int i = 0; i < n; i += len) {
            cd w(1);
            for (int j = 0; j < len / 2; j++) {
                cd u = a[i+j], v = a[i+j+len/2] * w;
                a[i+j] = u + v;
                a[i+j+len/2] = u - v;
                w *= wlen;
            }
        }
    }

    if (invert) {
        for (cd & x : a)
            x /= n;
    }
}
```
Additionally we can precompute the bit-reversal permutation beforehand.
This is especially useful when the size $n$ is the same for all calls.
But even when we only have three calls (which are necessary for multiplying two polynomials), the effect is noticeable.
Also we can precompute all roots of unity and their powers.
## Number theoretic transform
Now we switch the objective a little bit.
We still want to multiply two polynomials in $O(n \log n)$ time, but this time we want to compute the coefficients modulo some prime number $p$.
Of course for this task we can use the normal DFT and apply the modulo operator to the result.
However, doing so might lead to rounding errors, especially when dealing with large numbers.
The **number theoretic transform (NTT)** has the advantage that it only works with integers, and therefore the results are guaranteed to be correct.
The discrete Fourier transform is based on complex numbers, and the $n$-th roots of unity.
To efficiently compute it, we extensively use properties of the roots (e.g. that there is one root that generates all other roots by exponentiation).
But the same properties hold for the $n$-th roots of unity in modular arithmetic.
An $n$-th root of unity in modular arithmetic (i.e. modulo a prime $p$) is a number $w_n$ that satisfies:
$$\begin{align}
(w_n)^n &= 1 \pmod{p}, \\
(w_n)^k &\ne 1 \pmod{p}, \quad 1 \le k < n.
\end{align}$$
The other $n-1$ roots can be obtained as powers of the root $w_n$.
To apply it in the fast Fourier transform algorithm, we need a root to exist for some $n$, which is a power of $2$, and also for all smaller powers.
We can notice the following interesting property:
$$\begin{align}
(w_n^2)^m = w_n^n &= 1 \pmod{p}, \quad \text{with } m = \frac{n}{2}\\
(w_n^2)^k = w_n^{2k} &\ne 1 \pmod{p}, \quad 1 \le k < m.
\end{align}$$
Thus if $w_n$ is a $n$-th root of unity, then $w_n^2$ is a $\frac{n}{2}$-th root of unity.
And consequently for all smaller powers of two there exist roots of the required degree, and they can be computed using $w_n$.
For computing the inverse DFT, we need the inverse $w_n^{-1}$ of $w_n$.
But for a prime modulus the inverse always exists.
Thus all the properties that we need from the complex roots are also available in modular arithmetic, provided that we have a large enough modulus $p$ for which an $n$-th root of unity exists.
For example we can take the following values: modulus $p = 7340033$, $w_{2^{20}} = 5$.
If this modulus is not enough, we need to find a different pair.
We can use the fact that for moduli of the form $p = c 2^k + 1$ (with $p$ prime), there always exists a $2^k$-th root of unity.
It can be shown that $g^c$ is such a $2^k$-th root of unity, where $g$ is a [primitive root](primitive-root.md) of $p$.
```{.cpp file=fft_implementation_modular_arithmetic}
const int mod = 7340033;
const int root = 5;
const int root_1 = 4404020;
const int root_pw = 1 << 20;

void fft(vector<int> & a, bool invert) {
    int n = a.size();

    for (int i = 1, j = 0; i < n; i++) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j ^= bit;

        if (i < j)
            swap(a[i], a[j]);
    }

    for (int len = 2; len <= n; len <<= 1) {
        int wlen = invert ? root_1 : root;
        for (int i = len; i < root_pw; i <<= 1)
            wlen = (int)(1LL * wlen * wlen % mod);

        for (int i = 0; i < n; i += len) {
            int w = 1;
            for (int j = 0; j < len / 2; j++) {
                int u = a[i+j], v = (int)(1LL * a[i+j+len/2] * w % mod);
                a[i+j] = u + v < mod ? u + v : u + v - mod;
                a[i+j+len/2] = u - v >= 0 ? u - v : u - v + mod;
                w = (int)(1LL * w * wlen % mod);
            }
        }
    }

    if (invert) {
        int n_1 = inverse(n, mod);
        for (int & x : a)
            x = (int)(1LL * x * n_1 % mod);
    }
}
```
Here the function `inverse` computes the modular inverse (see [Modular Multiplicative Inverse](module-inverse.md)).
The constants `mod`, `root`, `root_pw` determine the modulus and the root, and `root_1` is the inverse of `root` modulo `mod`.
In practice this implementation is slower than the implementation using complex numbers (due to the huge number of modulo operations), but it has some advantages such as less memory usage and no rounding errors.
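If a larger modulus is needed, a commonly used choice (not discussed further in this article) is the prime $998244353 = 119 \cdot 2^{23} + 1$, which has $3$ as a primitive root, so $3^{119}$ is a $2^{23}$-rd root of unity. Assuming a modular exponentiation helper `binpow(a, b, mod)` and the `inverse` function from above, the constants could be set up as:

```cpp
const int mod = 998244353;              // 119 * 2^23 + 1, prime
const int root = binpow(3, 119, mod);   // a 2^23-rd root of unity
const int root_1 = inverse(root, mod);  // its modular inverse
const int root_pw = 1 << 23;
```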
## Multiplication with arbitrary modulus
Here we want to achieve the same goal as in the previous section: multiplying two polynomials $A(x)$ and $B(x)$, and computing the coefficients modulo some number $M$.
The number theoretic transform only works for certain prime numbers.
What about the case when the modulus is not of the desired form?
One option would be to perform multiple number theoretic transforms with different prime numbers of the form $c 2^k + 1$, then apply the [Chinese Remainder Theorem](chinese-remainder-theorem.md) to compute the final coefficients.
Another option is to split the polynomials $A(x)$ and $B(x)$ into two smaller polynomials each
$$\begin{align}
A(x) &= A_1(x) + A_2(x) \cdot C \\
B(x) &= B_1(x) + B_2(x) \cdot C
\end{align}$$
with $C \approx \sqrt{M}$.
The product of $A(x)$ and $B(x)$ can then be represented as:
$$A(x) \cdot B(x) = A_1(x) \cdot B_1(x) + \left(A_1(x) \cdot B_2(x) + A_2(x) \cdot B_1(x)\right)\cdot C + \left(A_2(x) \cdot B_2(x)\right)\cdot C^2$$
The polynomials $A_1(x)$, $A_2(x)$, $B_1(x)$ and $B_2(x)$ contain only coefficients smaller than $\sqrt{M}$, therefore the coefficients of all the appearing products are smaller than $M \cdot n$, which is usually small enough to handle with typical floating point types.
This approach therefore requires computing the products of polynomials with smaller coefficients (by using the normal FFT and inverse FFT), and then the original product can be restored using modular addition and multiplication in $O(n)$ time.
## Applications
DFT can be used in a huge variety of other problems, which at the first glance have nothing to do with multiplying polynomials.
### All possible sums
We are given two arrays $a[]$ and $b[]$.
We have to find all possible sums $a[i] + b[j]$, and for each sum count how often it appears.
For example for $a = [1,~ 2,~ 3]$ and $b = [2,~ 4]$ we get:
the sum $3$ can be obtained in $1$ way, the sum $4$ also in $1$ way, $5$ in $2$, $6$ in $1$, $7$ in $1$.
We construct for the arrays $a$ and $b$ two polynomials $A$ and $B$.
The numbers of the array will act as the exponents in the polynomial ($a[i] \Rightarrow x^{a[i]}$); and the coefficients of this term will be how often the number appears in the array.
Then, by multiplying these two polynomials in $O(n \log n)$ time, we get a polynomial $C$, where the exponents will tell us which sums can be obtained, and the coefficients tell us how often.
To demonstrate this on the example:
$$(1 x^1 + 1 x^2 + 1 x^3) (1 x^2 + 1 x^4) = 1 x^3 + 1 x^4 + 2 x^5 + 1 x^6 + 1 x^7$$
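A short sketch of this idea, reusing the `multiply` function defined above (and assuming all array values are non-negative and small enough for the frequency arrays to fit in memory):

```cpp
// c[s] = number of pairs (i, j) with a[i] + b[j] == s
vector<int> all_sums(vector<int> const& a, vector<int> const& b) {
    int max_a = *max_element(a.begin(), a.end());
    int max_b = *max_element(b.begin(), b.end());
    vector<int> fa(max_a + 1, 0), fb(max_b + 1, 0);
    for (int x : a) fa[x]++;  // coefficient of x^v counts how often v appears
    for (int x : b) fb[x]++;
    return multiply(fa, fb);
}
```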
### All possible scalar products
We are given two arrays $a[]$ and $b[]$ of length $n$.
We have to compute the scalar products of $a$ with every cyclic shift of $b$.
We generate two new arrays of size $2n$:
We reverse $a$ and append $n$ zeros to it.
And we just append $b$ to itself.
When we multiply these two arrays as polynomials, and look at the coefficients $c[n-1],~ c[n],~ \dots,~ c[2n-2]$ of the product $c$, we get:
$$c[k] = \sum_{i+j=k} a[i] b[j]$$
And since all the elements $a[i] = 0$ for $i \ge n$:
$$c[k] = \sum_{i=0}^{n-1} a[i] b[k-i]$$
It is easy to see that this sum is just the scalar product of the vector $a$ with the $(k - (n - 1))$-th cyclic left shift of $b$.
Thus these coefficients are the answer to the problem, and we were still able to obtain it in $O(n \log n)$ time.
Note here that $c[2n-1]$ also gives us the $n$-th cyclic shift but that is the same as the $0$-th cyclic shift so we don't need to consider that separately into our answer.
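A corresponding sketch, again reusing the `multiply` function from above:

```cpp
// products[k] = scalar product of a with the k-th cyclic left shift of b
vector<int> cyclic_scalar_products(vector<int> a, vector<int> const& b) {
    int n = a.size();
    reverse(a.begin(), a.end());
    a.resize(2 * n, 0);                       // reversed a followed by n zeros
    vector<int> bb(b);
    bb.insert(bb.end(), b.begin(), b.end());  // b appended to itself
    vector<int> c = multiply(a, bb);
    return vector<int>(c.begin() + (n - 1), c.begin() + (2 * n - 1));
}
```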
### Two stripes
We are given two Boolean stripes (cyclic arrays of values $0$ and $1$) $a$ and $b$.
We want to find all ways to attach the first stripe to the second one, such that at no position we have a $1$ of the first stripe next to a $1$ of the second stripe.
The problem doesn't actually differ much from the previous problem.
Attaching two stripes just means that we perform a cyclic shift on the second array, and we can attach the two stripes, if scalar product of the two arrays is $0$.
### String matching
We are given two strings, a text $T$ and a pattern $P$, consisting of lowercase letters.
We have to compute all the occurrences of the pattern in the text.
We create a polynomial for each string ($T[i]$ and $P[i]$ are numbers between $0$ and $25$ corresponding to the $26$ letters of the alphabet):
$$A(x) = a_0 x^0 + a_1 x^1 + \dots + a_{n-1} x^{n-1}, \quad n = |T|$$
with
$$a_i = \cos(\alpha_i) + i \sin(\alpha_i), \quad \alpha_i = \frac{2 \pi T[i]}{26}.$$
And
$$B(x) = b_0 x^0 + b_1 x^1 + \dots + b_{m-1} x^{m-1}, \quad m = |P|$$
with
$$b_i = \cos(\beta_i) - i \sin(\beta_i), \quad \beta_i = \frac{2 \pi P[m-i-1]}{26}.$$
Notice that the expression $P[m-i-1]$ explicitly reverses the pattern.
The $(m-1+i)$-th coefficient of the product of the two polynomials $C(x) = A(x) \cdot B(x)$ will tell us whether the pattern appears in the text at position $i$.
$$c_{m-1+i} = \sum_{j = 0}^{m-1} a_{i+j} \cdot b_{m-1-j} = \sum_{j=0}^{m-1} \left(\cos(\alpha_{i+j}) + i \sin(\alpha_{i+j})\right) \cdot \left(\cos(\beta_j) - i \sin(\beta_j)\right)$$
with $\alpha_{i+j} = \frac{2 \pi T[i+j]}{26}$ and $\beta_j = \frac{2 \pi P[j]}{26}$
If there is a match, than $T[i+j] = P[j]$, and therefore $\alpha_{i+j} = \beta_j$.
This gives (using the Pythagorean trigonometric identity):
$$\begin{align}
c_{m-1+i} &= \sum_{j = 0}^{m-1} \left(\cos(\alpha_{i+j}) + i \sin(\alpha_{i+j})\right) \cdot \left(\cos(\alpha_{i+j}) - i \sin(\alpha_{i+j})\right) \\
&= \sum_{j = 0}^{m-1} \cos(\alpha_{i+j})^2 + \sin(\alpha_{i+j})^2 = \sum_{j = 0}^{m-1} 1 = m
\end{align}$$
If there isn't a match, then at least a character is different, which leads that one of the products $a_{i+1} \cdot b_{m-1-j}$ is not equal to $1$, which leads to the coefficient $c_{m-1+i} \ne m$.
### String matching with wildcards
This is an extension of the previous problem.
This time we allow that the pattern contains the wildcard character $\*$, which can match every possible letter.
E.g. the pattern $a*c$ appears in the text $abccaacc$ at exactly three positions, at index $0$, index $4$ and index $5$.
We create the exact same polynomials, except that we set $b_i = 0$ if $P[m-i-1] = *$.
If $x$ is the number of wildcards in $P$, then we will have a match of $P$ in $T$ at index $i$ if $c_{m-1+i} = m - x$.
|
---
title
fft_multiply
---
# Fast Fourier transform
In this article we will discuss an algorithm that allows us to multiply two polynomials of length $n$ in $O(n \log n)$ time, which is better than the trivial multiplication which takes $O(n^2)$ time.
Obviously also multiplying two long numbers can be reduced to multiplying polynomials, so also two long numbers can be multiplied in $O(n \log n)$ time (where $n$ is the number of digits in the numbers).
The discovery of the **Fast Fourier transformation (FFT)** is attributed to Cooley and Tukey, who published an algorithm in 1965.
In fact the FFT had been discovered repeatedly before, but its importance was not understood before the invention of modern computers.
Some researchers attribute the discovery of the FFT to Runge and König in 1924.
However, Gauss had already developed such a method in 1805, but never published it.
Notice, that the FFT algorithm presented here runs in $O(n \log n)$ time, but it doesn't work for multiplying arbitrary big polynomials with arbitrary large coefficients or for multiplying arbitrary big integers.
It can easily handle polynomials of size $10^5$ with small coefficients, or multiplying two numbers of size $10^6$, but at some point the range and the precision of the used floating point numbers will no longer be enough to give accurate results.
That is usually enough for solving competitive programming problems, but there are also more complex variations that can perform arbitrary large polynomial/integer multiplications.
E.g. in 1971 Schönhage and Strassen developed a variation for multiplying arbitrarily large numbers that applies the FFT recursively on ring structures and runs in $O(n \log n \log \log n)$.
And recently (in 2019) Harvey and van der Hoeven published an algorithm that runs in true $O(n \log n)$.
## Discrete Fourier transform
Let there be a polynomial of degree $n - 1$:
$$A(x) = a_0 x^0 + a_1 x^1 + \dots + a_{n-1} x^{n-1}$$
Without loss of generality we assume that $n$ - the number of coefficients - is a power of $2$.
If $n$ is not a power of $2$, then we simply add the missing terms $a_i x^i$ and set the coefficients $a_i$ to $0$.
The theory of complex numbers tells us that the equation $x^n = 1$ has $n$ complex solutions (called the $n$-th roots of unity), and the solutions are of the form $w_{n, k} = e^{\frac{2 k \pi i}{n}}$ with $k = 0 \dots n-1$.
Additionally these complex numbers have some very interesting properties:
e.g. the principal $n$-th root $w_n = w_{n, 1} = e^{\frac{2 \pi i}{n}}$ can be used to describe all other $n$-th roots: $w_{n, k} = (w_n)^k$.
The **discrete Fourier transform (DFT)** of the polynomial $A(x)$ (or equivalently the vector of coefficients $(a_0, a_1, \dots, a_{n-1})$) is defined as the values of the polynomial at the points $x = w_{n, k}$, i.e. it is the vector:
$$\begin{align}
\text{DFT}(a_0, a_1, \dots, a_{n-1}) &= (y_0, y_1, \dots, y_{n-1}) \\
&= (A(w_{n, 0}), A(w_{n, 1}), \dots, A(w_{n, n-1})) \\
&= (A(w_n^0), A(w_n^1), \dots, A(w_n^{n-1}))
\end{align}$$
Similarly the **inverse discrete Fourier transform** is defined:
The inverse DFT of values of the polynomial $(y_0, y_1, \dots, y_{n-1})$ are the coefficients of the polynomial $(a_0, a_1, \dots, a_{n-1})$.
$$\text{InverseDFT}(y_0, y_1, \dots, y_{n-1}) = (a_0, a_1, \dots, a_{n-1})$$
Thus, if the direct DFT computes the values of the polynomial at the $n$-th roots of unity, the inverse DFT restores the coefficients of the polynomial from those values.
### Application of the DFT: fast multiplication of polynomials
Let there be two polynomials $A$ and $B$.
We compute the DFT for each of them: $\text{DFT}(A)$ and $\text{DFT}(B)$.
What happens if we multiply these polynomials?
Obviously at each point the values are simply multiplied, i.e.
$$(A \cdot B)(x) = A(x) \cdot B(x).$$
This means that if we multiply the vectors $\text{DFT}(A)$ and $\text{DFT}(B)$ - by multiplying each element of one vector by the corresponding element of the other vector - then we get nothing other than the DFT of the polynomial $\text{DFT}(A \cdot B)$:
$$\text{DFT}(A \cdot B) = \text{DFT}(A) \cdot \text{DFT}(B)$$
Finally, applying the inverse DFT, we obtain:
$$A \cdot B = \text{InverseDFT}(\text{DFT}(A) \cdot \text{DFT}(B))$$
By the product of the two DFTs on the right we mean the pairwise product of the vector elements.
This can be computed in $O(n)$ time.
If we can compute the DFT and the inverse DFT in $O(n \log n)$, then we can compute the product of the two polynomials (and consequently also two long numbers) with the same time complexity.
It should be noted that the two polynomials should have the same degree.
Otherwise the two result vectors of the DFT have different lengths.
We can accomplish this by adding coefficients with the value $0$.
And also, since the result of the product of two polynomials is a polynomial of degree $2 (n - 1)$, we have to double the degrees of each polynomial (again by padding $0$s).
From a vector with $n$ values we cannot reconstruct the desired polynomial with $2n - 1$ coefficients.
### Fast Fourier Transform
The **fast Fourier transform** is a method that allows computing the DFT in $O(n \log n)$ time.
The basic idea of the FFT is to apply divide and conquer.
We divide the coefficient vector of the polynomial into two vectors, recursively compute the DFT for each of them, and combine the results to compute the DFT of the complete polynomial.
So let there be a polynomial $A(x)$ with degree $n - 1$, where $n$ is a power of $2$, and $n > 1$:
$$A(x) = a_0 x^0 + a_1 x^1 + \dots + a_{n-1} x^{n-1}$$
We divide it into two smaller polynomials, the one containing only the coefficients of the even positions, and the one containing the coefficients of the odd positions:
$$\begin{align}
A_0(x) &= a_0 x^0 + a_2 x^1 + \dots + a_{n-2} x^{\frac{n}{2}-1} \\
A_1(x) &= a_1 x^0 + a_3 x^1 + \dots + a_{n-1} x^{\frac{n}{2}-1}
\end{align}$$
It is easy to see that
$$A(x) = A_0(x^2) + x A_1(x^2).$$
The polynomials $A_0$ and $A_1$ have only half as many coefficients as the polynomial $A$.
If we can compute the $\text{DFT}(A)$ in linear time using $\text{DFT}(A_0)$ and $\text{DFT}(A_1)$, then we get the recurrence $T_{\text{DFT}}(n) = 2 T_{\text{DFT}}\left(\frac{n}{2}\right) + O(n)$ for the time complexity, which results in $T_{\text{DFT}}(n) = O(n \log n)$ by the **master theorem**.
Let's learn how we can accomplish that.
Suppose we have computed the vectors $\left(y_k^0\right)_{k=0}^{n/2-1} = \text{DFT}(A_0)$ and $\left(y_k^1\right)_{k=0}^{n/2-1} = \text{DFT}(A_1)$.
Let us find an expression for $\left(y_k\right)_{k=0}^{n-1} = \text{DFT}(A)$.
For the first $\frac{n}{2}$ values we can just use the previously noted equation $A(x) = A_0(x^2) + x A_1(x^2)$:
$$y_k = y_k^0 + w_n^k y_k^1, \quad k = 0 \dots \frac{n}{2} - 1.$$
However for the second $\frac{n}{2}$ values we need to find a slightly different expression:
$$\begin{align}
y_{k+n/2} &= A\left(w_n^{k+n/2}\right) \\
&= A_0\left(w_n^{2k+n}\right) + w_n^{k + n/2} A_1\left(w_n^{2k+n}\right) \\
&= A_0\left(w_n^{2k} w_n^n\right) + w_n^k w_n^{n/2} A_1\left(w_n^{2k} w_n^n\right) \\
&= A_0\left(w_n^{2k}\right) - w_n^k A_1\left(w_n^{2k}\right) \\
&= y_k^0 - w_n^k y_k^1
\end{align}$$
Here we used again $A(x) = A_0(x^2) + x A_1(x^2)$ and the two identities $w_n^n = 1$ and $w_n^{n/2} = -1$.
Therefore we get the desired formulas for computing the whole vector $(y_k)$:
$$\begin{align}
y_k &= y_k^0 + w_n^k y_k^1, &\quad k = 0 \dots \frac{n}{2} - 1, \\
y_{k+n/2} &= y_k^0 - w_n^k y_k^1, &\quad k = 0 \dots \frac{n}{2} - 1.
\end{align}$$
(This pattern $a + b$ and $a - b$ is sometimes called a **butterfly**.)
Thus we learned how to compute the DFT in $O(n \log n)$ time.
### Inverse FFT
Let the vector $(y_0, y_1, \dots y_{n-1})$ - the values of polynomial $A$ of degree $n - 1$ in the points $x = w_n^k$ - be given.
We want to restore the coefficients $(a_0, a_1, \dots, a_{n-1})$ of the polynomial.
This known problem is called **interpolation**, and there are general algorithms for solving it.
But in this special case (since we know the values of the polynomial at the roots of unity), we can obtain a much simpler algorithm (that is practically the same as the direct FFT).
We can write the DFT, according to its definition, in the matrix form:
$$
\begin{pmatrix}
w_n^0 & w_n^0 & w_n^0 & w_n^0 & \cdots & w_n^0 \\
w_n^0 & w_n^1 & w_n^2 & w_n^3 & \cdots & w_n^{n-1} \\
w_n^0 & w_n^2 & w_n^4 & w_n^6 & \cdots & w_n^{2(n-1)} \\
w_n^0 & w_n^3 & w_n^6 & w_n^9 & \cdots & w_n^{3(n-1)} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_n^0 & w_n^{n-1} & w_n^{2(n-1)} & w_n^{3(n-1)} & \cdots & w_n^{(n-1)(n-1)}
\end{pmatrix} \begin{pmatrix}
a_0 \\ a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_{n-1}
\end{pmatrix} = \begin{pmatrix}
y_0 \\ y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{n-1}
\end{pmatrix}
$$
This matrix is called the **Vandermonde matrix**.
Thus we can compute the vector $(a_0, a_1, \dots, a_{n-1})$ by multiplying the vector $(y_0, y_1, \dots y_{n-1})$ from the left with the inverse of the matrix:
$$
\begin{pmatrix}
a_0 \\ a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_{n-1}
\end{pmatrix} = \begin{pmatrix}
w_n^0 & w_n^0 & w_n^0 & w_n^0 & \cdots & w_n^0 \\
w_n^0 & w_n^1 & w_n^2 & w_n^3 & \cdots & w_n^{n-1} \\
w_n^0 & w_n^2 & w_n^4 & w_n^6 & \cdots & w_n^{2(n-1)} \\
w_n^0 & w_n^3 & w_n^6 & w_n^9 & \cdots & w_n^{3(n-1)} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_n^0 & w_n^{n-1} & w_n^{2(n-1)} & w_n^{3(n-1)} & \cdots & w_n^{(n-1)(n-1)}
\end{pmatrix}^{-1} \begin{pmatrix}
y_0 \\ y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{n-1}
\end{pmatrix}
$$
A quick check can verify that the inverse of the matrix has the following form:
$$
\frac{1}{n}
\begin{pmatrix}
w_n^0 & w_n^0 & w_n^0 & w_n^0 & \cdots & w_n^0 \\
w_n^0 & w_n^{-1} & w_n^{-2} & w_n^{-3} & \cdots & w_n^{-(n-1)} \\
w_n^0 & w_n^{-2} & w_n^{-4} & w_n^{-6} & \cdots & w_n^{-2(n-1)} \\
w_n^0 & w_n^{-3} & w_n^{-6} & w_n^{-9} & \cdots & w_n^{-3(n-1)} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_n^0 & w_n^{-(n-1)} & w_n^{-2(n-1)} & w_n^{-3(n-1)} & \cdots & w_n^{-(n-1)(n-1)}
\end{pmatrix}
$$
Thus we obtain the formula:
$$a_k = \frac{1}{n} \sum_{j=0}^{n-1} y_j w_n^{-k j}$$
Comparing this to the formula for $y_k$
$$y_k = \sum_{j=0}^{n-1} a_j w_n^{k j},$$
we notice that these two problems are almost the same, so the coefficients $a_k$ can be found by the same divide and conquer algorithm as the direct FFT, only instead of $w_n^k$ we have to use $w_n^{-k}$, and at the end we need to divide the resulting coefficients by $n$.
Thus the computation of the inverse DFT is almost the same as the calculation of the direct DFT, and it also can be performed in $O(n \log n)$ time.
### Implementation
Here we present a simple recursive **implementation of the FFT** and the inverse FFT, both in one function, since the difference between the forward and the inverse FFT is so minimal.
To store the complex numbers we use the complex type in the C++ STL.
```{.cpp file=fft_recursive}
using cd = complex<double>;
const double PI = acos(-1);
void fft(vector<cd> & a, bool invert) {
int n = a.size();
if (n == 1)
return;
vector<cd> a0(n / 2), a1(n / 2);
for (int i = 0; 2 * i < n; i++) {
a0[i] = a[2*i];
a1[i] = a[2*i+1];
}
fft(a0, invert);
fft(a1, invert);
double ang = 2 * PI / n * (invert ? -1 : 1);
cd w(1), wn(cos(ang), sin(ang));
for (int i = 0; 2 * i < n; i++) {
a[i] = a0[i] + w * a1[i];
a[i + n/2] = a0[i] - w * a1[i];
if (invert) {
a[i] /= 2;
a[i + n/2] /= 2;
}
w *= wn;
}
}
```
The function gets passed a vector of coefficients, and the function will compute the DFT or inverse DFT and store the result again in this vector.
The argument $\text{invert}$ shows whether the direct or the inverse DFT should be computed.
Inside the function we first check if the length of the vector is equal to one; if this is the case, then we don't have to do anything.
Otherwise we divide the vector $a$ into two vectors $a0$ and $a1$ and compute the DFT for both recursively.
Then we initialize the value $wn$ and a variable $w$, which will contain the current power of $wn$.
Then the values of the resulting DFT are computed using the above formulas.
If the flag $\text{invert}$ is set, then we replace $wn$ with $wn^{-1}$, and each of the values of the result is divided by $2$ (since this will be done in each level of the recursion, this will end up dividing the final values by $n$).
Using this function we can create a function for **multiplying two polynomials**:
```{.cpp file=fft_multiply}
vector<int> multiply(vector<int> const& a, vector<int> const& b) {
vector<cd> fa(a.begin(), a.end()), fb(b.begin(), b.end());
int n = 1;
while (n < a.size() + b.size())
n <<= 1;
fa.resize(n);
fb.resize(n);
fft(fa, false);
fft(fb, false);
for (int i = 0; i < n; i++)
fa[i] *= fb[i];
fft(fa, true);
vector<int> result(n);
for (int i = 0; i < n; i++)
result[i] = round(fa[i].real());
return result;
}
```
This function works with polynomials with integer coefficients, however you can also adjust it to work with other types.
Since there is some error when working with complex numbers, we need to round the resulting coefficients at the end.
Finally the function for **multiplying** two long numbers practically doesn't differ from the function for multiplying polynomials.
The only thing we have to do afterwards is to normalize the number:
```cpp
int carry = 0;
for (int i = 0; i < n; i++) {
result[i] += carry;
carry = result[i] / 10;
result[i] %= 10;
}
```
Since the length of the product of two numbers never exceeds the total length of both numbers, the size of the vector is enough to perform all carry operations.
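As a rough usage sketch (assuming the `multiply` function from above; the function name `multiply_numbers` and the digit handling are only illustrative), multiplying two decimal numbers given as strings could look like this, with the digits stored in reverse order so that the index corresponds to the exponent:
```cpp
// Sketch: multiply two non-negative decimal numbers given as strings,
// using the FFT-based multiply() from above.
// Digits are stored in reverse order (least significant digit first).
string multiply_numbers(string const& x, string const& y) {
    vector<int> a(x.rbegin(), x.rend()), b(y.rbegin(), y.rend());
    for (int & d : a) d -= '0';
    for (int & d : b) d -= '0';

    vector<int> result = multiply(a, b);

    // normalize the number by propagating the carries
    int carry = 0;
    for (size_t i = 0; i < result.size(); i++) {
        result[i] += carry;
        carry = result[i] / 10;
        result[i] %= 10;
    }

    // convert back to a string, dropping leading zeros
    string s;
    for (int i = (int)result.size() - 1; i >= 0; i--)
        s += char('0' + result[i]);
    size_t pos = s.find_first_not_of('0');
    return pos == string::npos ? "0" : s.substr(pos);
}
```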
### Improved implementation: in-place computation
To increase the efficiency we will switch from the recursive implementation to an iterative one.
In the above recursive implementation we explicitly separated the vector $a$ into two vectors - the element on the even positions got assigned to one temporary vector, and the elements on odd positions to another.
However if we reorder the elements in a certain way, we don't need to create these temporary vectors (i.e. all the calculations can be done "in-place", right in the vector $a$ itself).
Note that at the first recursion level, the elements whose lowest bit of the position was zero got assigned to the vector $a_0$, and the ones with a one as the lowest bit of the position got assigned to $a_1$.
In the second recursion level the same thing happens, but with the second lowest bit instead, etc.
Therefore if we reverse the bits of the position of each coefficient, and sort them by these reversed values, we get the desired order (it is called the bit-reversal permutation).
For example the desired order for $n = 8$ has the form:
$$a = \bigg\{ \Big[ (a_0, a_4), (a_2, a_6) \Big], \Big[ (a_1, a_5), (a_3, a_7) \Big] \bigg\}$$
Indeed in the first recursion level (surrounded by curly braces), the vector gets divided into two parts $[a_0, a_2, a_4, a_6]$ and $[a_1, a_3, a_5, a_7]$.
As we see, in the bit-reversal permutation this corresponds to simply dividing the vector into two halves: the first $\frac{n}{2}$ elements and the last $\frac{n}{2}$ elements.
Then there is a recursive call for each half.
Let the resulting DFT for each of them be returned in place of the elements themselves (i.e. in the first half and the second half of the vector $a$, respectively):
$$a = \bigg\{ \Big[y_0^0, y_1^0, y_2^0, y_3^0\Big], \Big[y_0^1, y_1^1, y_2^1, y_3^1 \Big] \bigg\}$$
Now we want to combine the two DFTs into one for the complete vector.
The order of the elements is ideal, and we can also perform the union directly in this vector.
We can take the elements $y_0^0$ and $y_0^1$ and perform the butterfly transform.
The place of the resulting two values is the same as the place of the two initial values, so we get:
$$a = \bigg\{ \Big[y_0^0 + w_n^0 y_0^1, y_1^0, y_2^0, y_3^0\Big], \Big[y_0^0 - w_n^0 y_0^1, y_1^1, y_2^1, y_3^1\Big] \bigg\}$$
Similarly we can compute the butterfly transform of $y_1^0$ and $y_1^1$ and put the results in their place, and so on.
As a result we get:
$$a = \bigg\{ \Big[y_0^0 + w_n^0 y_0^1, y_1^0 + w_n^1 y_1^1, y_2^0 + w_n^2 y_2^1, y_3^0 + w_n^3 y_3^1\Big], \Big[y_0^0 - w_n^0 y_0^1, y_1^0 - w_n^1 y_1^1, y_2^0 - w_n^2 y_2^1, y_3^0 - w_n^3 y_3^1\Big] \bigg\}$$
Thus we computed the required DFT from the vector $a$.
Here we described the process of computing the DFT only at the first recursion level, but the same works obviously also for all other levels.
Thus, after applying the bit-reversal permutation, we can compute the DFT in-place, without any additional memory.
This additionally allows us to get rid of the recursion.
We just start at the lowest level, i.e. we divide the vector into pairs and apply the butterfly transform to them.
This gives us the vector $a$ with the work of the lowest recursion level already applied.
In the next step we divide the vector into vectors of size $4$, and again apply the butterfly transform, which gives us the DFT for each block of size $4$.
And so on.
Finally in the last step we obtained the result of the DFTs of both halves of $a$, and by applying the butterfly transform we obtain the DFT for the complete vector $a$.
```{.cpp file=fft_implementation_iterative}
using cd = complex<double>;
const double PI = acos(-1);
int reverse(int num, int lg_n) {
int res = 0;
for (int i = 0; i < lg_n; i++) {
if (num & (1 << i))
res |= 1 << (lg_n - 1 - i);
}
return res;
}
void fft(vector<cd> & a, bool invert) {
int n = a.size();
int lg_n = 0;
while ((1 << lg_n) < n)
lg_n++;
for (int i = 0; i < n; i++) {
if (i < reverse(i, lg_n))
swap(a[i], a[reverse(i, lg_n)]);
}
for (int len = 2; len <= n; len <<= 1) {
double ang = 2 * PI / len * (invert ? -1 : 1);
cd wlen(cos(ang), sin(ang));
for (int i = 0; i < n; i += len) {
cd w(1);
for (int j = 0; j < len / 2; j++) {
cd u = a[i+j], v = a[i+j+len/2] * w;
a[i+j] = u + v;
a[i+j+len/2] = u - v;
w *= wlen;
}
}
}
if (invert) {
for (cd & x : a)
x /= n;
}
}
```
At first we apply the bit-reversal permutation by swapping each element with the element at the reversed position.
Then, during the $\log n$ stages of the algorithm, we compute the DFT for each block of the corresponding size $\text{len}$.
For all those blocks we have the same root of unity $\text{wlen}$.
We iterate all blocks and perform the butterfly transform on each of them.
We can further optimize the reversal of the bits.
In the previous implementation we iterated all bits of the index and created the bitwise reversed index.
However we can reverse the bits in a different way.
Suppose that $j$ already contains the reverse of $i$.
Then to go to $i + 1$, we have to increment $i$, and we also have to increment $j$, but in a "reversed" number system.
Adding one in the conventional binary system is equivalent to flipping all trailing ones into zeros and flipping the zero right before them into a one.
Equivalently, in the "reversed" number system we flip all leading ones, and also the next zero.
Thus we get the following implementation:
```{.cpp file=fft_implementation_iterative_opt}
using cd = complex<double>;
const double PI = acos(-1);
void fft(vector<cd> & a, bool invert) {
int n = a.size();
for (int i = 1, j = 0; i < n; i++) {
int bit = n >> 1;
for (; j & bit; bit >>= 1)
j ^= bit;
j ^= bit;
if (i < j)
swap(a[i], a[j]);
}
for (int len = 2; len <= n; len <<= 1) {
double ang = 2 * PI / len * (invert ? -1 : 1);
cd wlen(cos(ang), sin(ang));
for (int i = 0; i < n; i += len) {
cd w(1);
for (int j = 0; j < len / 2; j++) {
cd u = a[i+j], v = a[i+j+len/2] * w;
a[i+j] = u + v;
a[i+j+len/2] = u - v;
w *= wlen;
}
}
}
if (invert) {
for (cd & x : a)
x /= n;
}
}
```
Additionally we can precompute the bit-reversal permutation beforehand.
This is especially useful when the size $n$ is the same for all calls.
But even when we only have three calls (which are necessary for multiplying two polynomials), the effect is noticeable.
Also we can precompute all roots of unity and their powers.
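A possible sketch of such a precomputation (the names `rev` and `roots` are purely illustrative and not part of the implementation above), assuming the FFT size stays fixed between calls:
```cpp
// Sketch: precompute the bit-reversal permutation and the powers of the
// n-th root of unity once, so that repeated FFT calls of the same size n
// don't have to recompute them.
int n;              // FFT size, a power of two, fixed for all calls
vector<int> rev;    // rev[i] = bitwise reversed index of i
vector<cd> roots;   // roots[k] = w_n^k

void precompute(int size) {
    n = size;
    int lg_n = __builtin_ctz(n);
    rev.assign(n, 0);
    for (int i = 1; i < n; i++)
        rev[i] = (rev[i >> 1] >> 1) | ((i & 1) << (lg_n - 1));
    roots.assign(n, 1);
    double ang = 2 * PI / n;
    for (int k = 1; k < n; k++)
        roots[k] = cd(cos(ang * k), sin(ang * k));
}
```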
## Number theoretic transform
Now we switch the objective a little bit.
We still want to multiply two polynomials in $O(n \log n)$ time, but this time we want to compute the coefficients modulo some prime number $p$.
Of course for this task we can use the normal DFT and apply the modulo operator to the result.
However, doing so might lead to rounding errors, especially when dealing with large numbers.
The **number theoretic transform (NTT)** has the advantage that it only works with integers, and therefore the results are guaranteed to be correct.
The discrete Fourier transform is based on complex numbers, and the $n$-th roots of unity.
To efficiently compute it, we extensively use properties of the roots (e.g. that there is one root that generates all other roots by exponentiation).
But the same properties hold for the $n$-th roots of unity in modular arithmetic.
An $n$-th root of unity modulo a prime $p$ is a number $w_n$ that satisfies:
$$\begin{align}
(w_n)^n &= 1 \pmod{p}, \\
(w_n)^k &\ne 1 \pmod{p}, \quad 1 \le k < n.
\end{align}$$
The other $n-1$ roots can be obtained as powers of the root $w_n$.
To apply it in the fast Fourier transform algorithm, we need a root to exist for some $n$, which is a power of $2$, and also for all smaller powers.
We can notice the following interesting property:
$$\begin{align}
(w_n^2)^m = w_n^n &= 1 \pmod{p}, \quad \text{with } m = \frac{n}{2}\\
(w_n^2)^k = w_n^{2k} &\ne 1 \pmod{p}, \quad 1 \le k < m.
\end{align}$$
Thus if $w_n$ is an $n$-th root of unity, then $w_n^2$ is a $\frac{n}{2}$-th root of unity.
And consequently for all smaller powers of two there exist roots of the required degree, and they can be computed using $w_n$.
For computing the inverse DFT, we need the inverse $w_n^{-1}$ of $w_n$.
But for a prime modulus the inverse always exists.
Thus all the properties that we need from the complex roots are also available in modular arithmetic, provided that we have a large enough modulus $p$ for which an $n$-th root of unity exists.
For example we can take the following values: modulus $p = 7340033$, $w_{2^{20}} = 5$.
If this modulus is not enough, we need to find a different pair.
We can use the fact that for moduli of the form $p = c \cdot 2^k + 1$ (with $p$ prime), there always exists a $2^k$-th root of unity.
It can be shown that $g^c$ is such a $2^k$-th root of unity, where $g$ is a [primitive root](primitive-root.md) of $p$.
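As a small sketch of how such a root could be obtained (assuming a helper `generator(p)` that finds a primitive root of $p$, e.g. the one described in the linked article):
```cpp
// Sketch: for a prime p = c * 2^k + 1, a 2^k-th root of unity is g^c,
// where g is a primitive root of p.
long long power_mod(long long a, long long e, long long p) {
    long long res = 1 % p;
    a %= p;
    while (e) {
        if (e & 1)
            res = (__int128)res * a % p;
        a = (__int128)a * a % p;
        e >>= 1;
    }
    return res;
}

long long find_root(long long p, long long c) { // p = c * 2^k + 1, p prime
    long long g = generator(p); // primitive root of p (assumed helper)
    return power_mod(g, c, p);  // g^c is a 2^k-th root of unity modulo p
}
```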
```{.cpp file=fft_implementation_modular_arithmetic}
const int mod = 7340033;
const int root = 5;
const int root_1 = 4404020;
const int root_pw = 1 << 20;
void fft(vector<int> & a, bool invert) {
int n = a.size();
for (int i = 1, j = 0; i < n; i++) {
int bit = n >> 1;
for (; j & bit; bit >>= 1)
j ^= bit;
j ^= bit;
if (i < j)
swap(a[i], a[j]);
}
for (int len = 2; len <= n; len <<= 1) {
int wlen = invert ? root_1 : root;
for (int i = len; i < root_pw; i <<= 1)
wlen = (int)(1LL * wlen * wlen % mod);
for (int i = 0; i < n; i += len) {
int w = 1;
for (int j = 0; j < len / 2; j++) {
int u = a[i+j], v = (int)(1LL * a[i+j+len/2] * w % mod);
a[i+j] = u + v < mod ? u + v : u + v - mod;
a[i+j+len/2] = u - v >= 0 ? u - v : u - v + mod;
w = (int)(1LL * w * wlen % mod);
}
}
}
if (invert) {
int n_1 = inverse(n, mod);
for (int & x : a)
x = (int)(1LL * x * n_1 % mod);
}
}
```
Here the function `inverse` computes the modular inverse (see [Modular Multiplicative Inverse](module-inverse.md)).
The constants `mod`, `root`, `root_pw` determine the modulus and the root, and `root_1` is the inverse of `root` modulo `mod`.
In practice this implementation is slower than the implementation using complex numbers (due to the huge number of modulo operations), but it has some advantages such as less memory usage and no rounding errors.
## Multiplication with arbitrary modulus
Here we want to achieve the same goal as in the previous section: multiplying two polynomials $A(x)$ and $B(x)$, and computing the coefficients modulo some number $M$.
The number theoretic transform only works for certain prime numbers.
What about the case when the modulus is not of the desired form?
One option would be to perform multiple number theoretic transforms with different prime numbers of the form $c 2^k + 1$, then apply the [Chinese Remainder Theorem](chinese-remainder-theorem.md) to compute the final coefficients.
Another option is to split each of the polynomials $A(x)$ and $B(x)$ into two smaller polynomials:
$$\begin{align}
A(x) &= A_1(x) + A_2(x) \cdot C \\
B(x) &= B_1(x) + B_2(x) \cdot C
\end{align}$$
with $C \approx \sqrt{M}$.
The product of $A(x)$ and $B(x)$ can then be represented as:
$$A(x) \cdot B(x) = A_1(x) \cdot B_1(x) + \left(A_1(x) \cdot B_2(x) + A_2(x) \cdot B_1(x)\right)\cdot C + \left(A_2(x) \cdot B_2(x)\right)\cdot C^2$$
The polynomials $A_1(x)$, $A_2(x)$, $B_1(x)$ and $B_2(x)$ contain only coefficients smaller than $\sqrt{M}$, therefore the coefficients of all the appearing products are smaller than $M \cdot n$, which is usually small enough to handle with typical floating point types.
This approach therefore requires computing the products of polynomials with smaller coefficients (by using the normal FFT and inverse FFT), and then the original product can be restored using modular addition and multiplication in $O(n)$ time.
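The following is a minimal sketch of this splitting approach, assuming the complex `fft` from above; the helper `mult_ll` is just a variant of `multiply` that returns the coefficients as `long long`, since the intermediate products can exceed the `int` range. For simplicity it computes four sub-products; in practice the number of transforms can be reduced.
```cpp
// Sketch: multiply A and B modulo an arbitrary M by splitting each
// coefficient into a low and a high part with C ~ sqrt(M).
vector<long long> mult_ll(vector<int> const& a, vector<int> const& b) {
    vector<cd> fa(a.begin(), a.end()), fb(b.begin(), b.end());
    int n = 1;
    while (n < (int)(a.size() + b.size()))
        n <<= 1;
    fa.resize(n);
    fb.resize(n);
    fft(fa, false);
    fft(fb, false);
    for (int i = 0; i < n; i++)
        fa[i] *= fb[i];
    fft(fa, true);
    vector<long long> res(n);
    for (int i = 0; i < n; i++)
        res[i] = llround(fa[i].real());
    return res;
}

vector<int> multiply_mod(vector<int> const& A, vector<int> const& B, int M) {
    int C = (int)sqrt(M) + 1; // splitting constant, roughly sqrt(M)
    vector<int> A1(A.size()), A2(A.size()), B1(B.size()), B2(B.size());
    for (size_t i = 0; i < A.size(); i++) {
        A1[i] = A[i] % C;
        A2[i] = A[i] / C;
    }
    for (size_t i = 0; i < B.size(); i++) {
        B1[i] = B[i] % C;
        B2[i] = B[i] / C;
    }
    auto A1B1 = mult_ll(A1, B1), A1B2 = mult_ll(A1, B2);
    auto A2B1 = mult_ll(A2, B1), A2B2 = mult_ll(A2, B2);
    vector<int> result(A1B1.size());
    for (size_t i = 0; i < result.size(); i++) {
        long long cur = A1B1[i] % M;
        cur = (cur + (A1B2[i] + A2B1[i]) % M * C) % M;
        cur = (cur + A2B2[i] % M * C % M * C) % M;
        result[i] = cur;
    }
    return result;
}
```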
## Applications
DFT can be used in a huge variety of other problems, which at the first glance have nothing to do with multiplying polynomials.
### All possible sums
We are given two arrays $a[]$ and $b[]$.
We have to find all possible sums $a[i] + b[j]$, and for each sum count how often it appears.
For example for $a = [1,~ 2,~ 3]$ and $b = [2,~ 4]$ we get:
the sum $3$ can be obtained in $1$ way, the sum $4$ also in $1$ way, $5$ in $2$ ways, $6$ in $1$ way, and $7$ in $1$ way.
We construct for the arrays $a$ and $b$ two polynomials $A$ and $B$.
The numbers of the array will act as the exponents in the polynomial ($a[i] \Rightarrow x^{a[i]}$); and the coefficients of this term will be how often the number appears in the array.
Then, by multiplying these two polynomials in $O(n \log n)$ time, we get a polynomial $C$, where the exponents will tell us which sums can be obtained, and the coefficients tell us how often.
To demonstrate this on the example:
$$(1 x^1 + 1 x^2 + 1 x^3) (1 x^2 + 1 x^4) = 1 x^3 + 1 x^4 + 2 x^5 + 1 x^6 + 1 x^7$$
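A minimal sketch of this construction (assuming the FFT-based `multiply` from above and non-negative array entries):
```cpp
// Sketch: count how often each sum a[i] + b[j] appears.
// The returned vector c satisfies: c[s] = number of pairs with a[i] + b[j] = s.
vector<int> all_sums(vector<int> const& a, vector<int> const& b) {
    int maxA = *max_element(a.begin(), a.end());
    int maxB = *max_element(b.begin(), b.end());
    vector<int> A(maxA + 1, 0), B(maxB + 1, 0);
    for (int x : a) A[x]++; // coefficient of x^{a[i]}
    for (int x : b) B[x]++; // coefficient of x^{b[j]}
    return multiply(A, B);
}
```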
### All possible scalar products
We are given two arrays $a[]$ and $b[]$ of length $n$.
We have to compute the products of $a$ with every cyclic shift of $b$.
We generate two new arrays of size $2n$:
We reverse $a$ and append $n$ zeros to it.
And we just append $b$ to itself.
When we multiply these two arrays as polynomials, and look at the coefficients $c[n-1],~ c[n],~ \dots,~ c[2n-2]$ of the product $c$, we get:
$$c[k] = \sum_{i+j=k} a[i] b[j]$$
And since all the elements $a[i] = 0$ for $i \ge n$:
$$c[k] = \sum_{i=0}^{n-1} a[i] b[k-i]$$
It is easy to see that this sum is just the scalar product of the vector $a$ with the $(k - (n - 1))$-th cyclic left shift of $b$.
Thus these coefficients are the answer to the problem, and we were still able to obtain it in $O(n \log n)$ time.
Note here that $c[2n-1]$ also gives us the $n$-th cyclic shift, but that is the same as the $0$-th cyclic shift, so we don't need to consider it separately in our answer.
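A short sketch of this construction (again assuming the FFT-based `multiply` from above):
```cpp
// Sketch: compute the scalar product of a with every cyclic shift of b.
// The returned vector p satisfies: p[k] = scalar product of a with the
// k-th cyclic left shift of b.
vector<int> all_cyclic_products(vector<int> a, vector<int> const& b) {
    int n = a.size();
    reverse(a.begin(), a.end());
    a.resize(2 * n, 0);                      // reversed a followed by n zeros
    vector<int> bb(b);
    bb.insert(bb.end(), b.begin(), b.end()); // b concatenated with itself
    vector<int> c = multiply(a, bb);
    return vector<int>(c.begin() + (n - 1), c.begin() + (2 * n - 1));
}
```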
### Two stripes
We are given two Boolean stripes (cyclic arrays of values $0$ and $1$) $a$ and $b$.
We want to find all ways to attach the first stripe to the second one, such that at no position we have a $1$ of the first stripe next to a $1$ of the second stripe.
The problem doesn't actually differ much from the previous problem.
Attaching two stripes just means that we perform a cyclic shift on the second array, and we can attach the two stripes, if scalar product of the two arrays is $0$.
### String matching
We are given two strings, a text $T$ and a pattern $P$, consisting of lowercase letters.
We have to compute all the occurrences of the pattern in the text.
We create a polynomial for each string ($T[i]$ and $P[i]$ are numbers between $0$ and $25$ corresponding to the $26$ letters of the alphabet):
$$A(x) = a_0 x^0 + a_1 x^1 + \dots + a_{n-1} x^{n-1}, \quad n = |T|$$
with
$$a_i = \cos(\alpha_i) + i \sin(\alpha_i), \quad \alpha_i = \frac{2 \pi T[i]}{26}.$$
And
$$B(x) = b_0 x^0 + b_1 x^1 + \dots + b_{m-1} x^{m-1}, \quad m = |P|$$
with
$$b_i = \cos(\beta_i) - i \sin(\beta_i), \quad \beta_i = \frac{2 \pi P[m-i-1]}{26}.$$
Notice that the expression $P[m-i-1]$ explicitly reverses the pattern.
The $(m-1+i)$-th coefficient of the product of the two polynomials $C(x) = A(x) \cdot B(x)$ will tell us whether the pattern appears in the text at position $i$.
$$c_{m-1+i} = \sum_{j = 0}^{m-1} a_{i+j} \cdot b_{m-1-j} = \sum_{j=0}^{m-1} \left(\cos(\alpha_{i+j}) + i \sin(\alpha_{i+j})\right) \cdot \left(\cos(\beta_j) - i \sin(\beta_j)\right)$$
with $\alpha_{i+j} = \frac{2 \pi T[i+j]}{26}$ and $\beta_j = \frac{2 \pi P[j]}{26}$
If there is a match, then $T[i+j] = P[j]$, and therefore $\alpha_{i+j} = \beta_j$.
This gives (using the Pythagorean trigonometric identity):
$$\begin{align}
c_{m-1+i} &= \sum_{j = 0}^{m-1} \left(\cos(\alpha_{i+j}) + i \sin(\alpha_{i+j})\right) \cdot \left(\cos(\alpha_{i+j}) - i \sin(\alpha_{i+j})\right) \\
&= \sum_{j = 0}^{m-1} \cos(\alpha_{i+j})^2 + \sin(\alpha_{i+j})^2 = \sum_{j = 0}^{m-1} 1 = m
\end{align}$$
If there isn't a match, then at least one character is different, so one of the products $a_{i+j} \cdot b_{m-1-j}$ is not equal to $1$, and therefore the coefficient $c_{m-1+i} \ne m$.
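A minimal sketch of this method (assuming the complex `fft` from above; the tolerance for the floating point comparison is chosen loosely):
```cpp
// Sketch: find all positions where pattern P occurs in text T,
// using one multiplication of the two complex polynomials described above.
vector<int> match_positions(string const& T, string const& P) {
    int n = T.size(), m = P.size();
    vector<cd> A(n), B(m);
    for (int i = 0; i < n; i++) {
        double alpha = 2 * PI * (T[i] - 'a') / 26;
        A[i] = cd(cos(alpha), sin(alpha));
    }
    for (int i = 0; i < m; i++) {
        double beta = 2 * PI * (P[m - i - 1] - 'a') / 26; // reversed pattern
        B[i] = cd(cos(beta), -sin(beta));
    }
    int sz = 1;
    while (sz < n + m)
        sz <<= 1;
    A.resize(sz);
    B.resize(sz);
    fft(A, false);
    fft(B, false);
    for (int i = 0; i < sz; i++)
        A[i] *= B[i];
    fft(A, true);
    vector<int> positions;
    for (int i = 0; i + m <= n; i++) {
        // the coefficient c_{m-1+i} equals m exactly when there is a match
        if (abs(A[m - 1 + i].real() - m) < 1e-3)
            positions.push_back(i);
    }
    return positions;
}
```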
### String matching with wildcards
This is an extension of the previous problem.
This time we allow the pattern to contain the wildcard character $*$, which can match every possible letter.
E.g. the pattern $a*c$ appears in the text $abccaacc$ at exactly three positions, at index $0$, index $4$ and index $5$.
We create the exact same polynomials, except that we set $b_i = 0$ if $P[m-i-1] = *$.
If $x$ is the number of wildcards in $P$, then we will have a match of $P$ in $T$ at index $i$ if $c_{m-1+i} = m - x$.
## Practice problems
- [SPOJ - POLYMUL](http://www.spoj.com/problems/POLYMUL/)
- [SPOJ - MAXMATCH](http://www.spoj.com/problems/MAXMATCH/)
- [SPOJ - ADAMATCH](http://www.spoj.com/problems/ADAMATCH/)
- [Codeforces - Yet Another String Matching Problem](http://codeforces.com/problemset/problem/954/I)
- [Codeforces - Lightsabers (hard)](http://codeforces.com/problemset/problem/958/F3)
- [Codeforces - Running Competition](https://codeforces.com/contest/1398/problem/G)
- [Kattis - A+B Problem](https://open.kattis.com/problems/aplusb)
- [Kattis - K-Inversions](https://open.kattis.com/problems/kinversions)
- [Codeforces - Dasha and cyclic table](http://codeforces.com/contest/754/problem/E)
- [CodeChef - Expected Number of Customers](https://www.codechef.com/COOK112A/problems/MMNN01)
- [CodeChef - Power Sum](https://www.codechef.com/SEPT19A/problems/PSUM)
- [Codeforces - Centroid Probabilities](https://codeforces.com/problemset/problem/1667/E)
---
title
diofant_1_equation
---
# Linear Congruence Equation
This equation is of the form:
$$a \cdot x \equiv b \pmod n,$$
where $a$, $b$ and $n$ are given integers and $x$ is an unknown integer.
It is required to find the value $x$ from the interval $[0, n-1]$ (clearly, on the entire number line there can be infinitely many solutions that differ from each other by $n \cdot k$, where $k$ is any integer). If the solution is not unique, we will also consider how to get all the solutions.
## Solution by finding the inverse element
Let us first consider a simpler case where $a$ and $n$ are **coprime** ($\gcd(a, n) = 1$).
Then one can find the [inverse](module-inverse.md) of $a$, and by multiplying both sides of the equation with it, we get a **unique** solution.
$$x \equiv b \cdot a ^ {- 1} \pmod n$$
Now consider the case where $a$ and $n$ are **not coprime** ($\gcd(a, n) \ne 1$).
Then the solution will not always exist (for example $2 \cdot x \equiv 1 \pmod 4$ has no solution).
Let $g = \gcd(a, n)$, i.e. the [greatest common divisor](euclid-algorithm.md) of $a$ and $n$ (which in this case is greater than one).
Then, if $b$ is not divisible by $g$, there is no solution. In fact, for any $x$ the left side of the equation, $a \cdot x \bmod n$, is always divisible by $g$, while the right-hand side is not, hence it follows that there are no solutions.
If $g$ divides $b$, then by dividing both sides of the equation by $g$ (i.e. dividing $a$, $b$ and $n$ by $g$), we receive a new equation:
$$a^\prime \cdot x \equiv b^\prime \pmod{n^\prime}$$
in which $a^\prime$ and $n^\prime$ are already relatively prime, and we have already learned how to handle such an equation.
We get $x^\prime$ as solution for $x$.
It is clear that this $x^\prime$ will also be a solution of the original equation.
However it will **not be the only solution**.
It can be shown that the original equation has exactly $g$ solutions, and they will look like this:
$$x_i \equiv (x^\prime + i\cdot n^\prime) \pmod n \quad \text{for } i = 0 \ldots g-1$$
Summarizing, we can say that the **number of solutions** of the linear congruence equation is equal to either $g = \gcd(a, n)$ or to zero.
## Solution with the Extended Euclidean Algorithm
We can rewrite the linear congruence to the following Diophantine equation:
$$a \cdot x + n \cdot k = b,$$
where $x$ and $k$ are unknown integers.
The method of solving this equation is described in the corresponding article [Linear Diophantine equations](linear-diophantine-equation.md) and it consists of applying the [Extended Euclidean Algorithm](extended-euclid-algorithm.md).
It also describes the method of obtaining all solutions of this equation from one found solution, and incidentally this method, when carefully considered, is absolutely equivalent to the method described in the previous section.
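As a rough sketch combining both sections (assuming an `extended_gcd(a, b, x, y)` helper that returns $g = \gcd(a, b)$ and fills $x$, $y$ with $a x + b y = g$, as in the linked article; the function name `solve_congruence` is only illustrative), and assuming $n > 0$ and $0 \le b < n$:
```cpp
// Sketch: find all solutions of a * x = b (mod n) in the range [0, n-1].
vector<long long> solve_congruence(long long a, long long b, long long n) {
    long long x, y;
    long long g = extended_gcd(a, n, x, y); // a*x + n*y = g
    if (b % g != 0)
        return {};                          // no solutions
    long long n_ = n / g;
    // x * (b / g) solves the reduced equation (a/g) * x = (b/g) (mod n/g)
    long long x0 = (long long)((__int128)(x % n_) * ((b / g) % n_) % n_);
    if (x0 < 0)
        x0 += n_;
    vector<long long> solutions(g);
    for (long long i = 0; i < g; i++)
        solutions[i] = x0 + i * n_;         // all g solutions modulo n
    return solutions;
}
```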
---
title
- Original
---
# Montgomery Multiplication
Many algorithms in number theory, like [prime testing](primality_tests.md) or [integer factorization](factorization.md), and in cryptography, like RSA, require lots of operations modulo a large number.
A multiplication like $x y \bmod{n}$ is quite slow to compute with the typical algorithms, since it requires a division to know how many times $n$ has to be subtracted from the product.
And division is a really expensive operation, especially with big numbers.
The **Montgomery (modular) multiplication** is a method that allows computing such multiplications faster.
Instead of dividing the product and subtracting $n$ multiple times, it adds multiples of $n$ to cancel out the lower bits and then just discards the lower bits.
## Montgomery representation
However the Montgomery multiplication doesn't come for free.
The algorithm works only in the **Montgomery space**.
And we need to transform our numbers into that space, before we can start multiplying.
For the space we need a positive integer $r \ge n$ coprime to $n$, i.e. $\gcd(n, r) = 1$.
In practice we always choose $r$ to be $2^m$ for a positive integer $m$, since multiplications, divisions and modulo $r$ operations can then be efficiently implemented using shifts and other bit operations.
$n$ will be an odd number in pretty much all applications, since it is not hard to factorize an even number.
So every power of $2$ will be coprime to $n$.
The representative $\bar{x}$ of a number $x$ in the Montgomery space is defined as:
$$\bar{x} := x \cdot r \bmod n$$
Notice that the transformation is exactly the kind of multiplication that we want to optimize.
So this is still an expensive operation.
However you only need to transform a number once into the space.
As soon as you are in the Montgomery space, you can perform as many operations as you want efficiently.
And at the end you transform the final result back.
So as long as you are doing lots of operations modulo $n$, this will be no problem.
Inside the Montgomery space you can still perform most operations as usual.
You can add two elements ($x \cdot r + y \cdot r \equiv (x + y) \cdot r \bmod n$), subtract, check for equality, and even compute the greatest common divisor of a number with $n$ (since $\gcd(n, r) = 1$).
All with the usual algorithms.
However this is not the case for multiplication.
We expect the result to be:
$$\bar{x} * \bar{y} = \overline{x \cdot y} = (x \cdot y) \cdot r \bmod n.$$
But the normal multiplication will give us:
$$\bar{x} \cdot \bar{y} = (x \cdot y) \cdot r \cdot r \bmod n.$$
Therefore the multiplication in the Montgomery space is defined as:
$$\bar{x} * \bar{y} := \bar{x} \cdot \bar{y} \cdot r^{-1} \bmod n.$$
## Montgomery reduction
The multiplication of two numbers in the Montgomery space requires an efficient computation of $x \cdot r^{-1} \bmod n$.
This operation is called the **Montgomery reduction**, and is also known as the algorithm **REDC**.
Because $\gcd(n, r) = 1$, we know that there are two numbers $r^{-1}$ and $n^{\prime}$ with $0 < r^{-1}, n^{\prime} < n$ with
$$r \cdot r^{-1} + n \cdot n^{\prime} = 1.$$
Both $r^{-1}$ and $n^{\prime}$ can be computed using the [Extended Euclidean algorithm](extended-euclid-algorithm.md).
Using this identity we can write $x \cdot r^{-1}$ as:
$$\begin{aligned}
x \cdot r^{-1} &= x \cdot r \cdot r^{-1} / r = x \cdot (-n \cdot n^{\prime} + 1) / r \\
&= (-x \cdot n \cdot n^{\prime} + x) / r \equiv (-x \cdot n \cdot n^{\prime} + l \cdot r \cdot n + x) / r \bmod n\\
&\equiv ((-x \cdot n^{\prime} + l \cdot r) \cdot n + x) / r \bmod n
\end{aligned}$$
The equivalences hold for any arbitrary integer $l$.
This means, that we can add or subtract an arbitrary multiple of $r$ to $x \cdot n^{\prime}$, or in other words, we can compute $q := x \cdot n^{\prime}$ modulo $r$.
This gives us the following algorithm to compute $x \cdot r^{-1} \bmod n$:
```text
function reduce(x):
q = (x mod r) * n' mod r
a = (x - q * n) / r
if a < 0:
a += n
return a
```
Since $x < n \cdot n < r \cdot n$ (even if $x$ is the product of a multiplication) and $q \cdot n < r \cdot n$ we know that $-n < (x - q \cdot n) / r < n$.
Therefore the final modulo operation is implemented using a single check and one addition.
As we see, we can perform the Montgomery reduction without any heavy modulo operations.
If we choose $r$ as a power of $2$, the modulo operations and divisions in the algorithm can be computed using bitmasking and shifting.
A second application of the Montgomery reduction is to transfer a number back from the Montgomery space into the normal space.
## Fast inverse trick
For computing the inverse $n^{\prime} := n^{-1} \bmod r$ efficiently, we can use the following trick (which is inspired by Newton's method):
$$a \cdot x \equiv 1 \bmod 2^k \Longrightarrow a \cdot x \cdot (2 - a \cdot x) \equiv 1 \bmod 2^{2k}$$
This can easily be proven.
If we have $a \cdot x = 1 + m \cdot 2^k$, then we have:
$$\begin{aligned}
a \cdot x \cdot (2 - a \cdot x) &= 2 \cdot a \cdot x - (a \cdot x)^2 \\
&= 2 \cdot (1 + m \cdot 2^k) - (1 + m \cdot 2^k)^2 \\
&= 2 + 2 \cdot m \cdot 2^k - 1 - 2 \cdot m \cdot 2^k - m^2 \cdot 2^{2k} \\
&= 1 - m^2 \cdot 2^{2k} \\
&\equiv 1 \bmod 2^{2k}.
\end{aligned}$$
This means we can start with $x = 1$ as the inverse of $a$ modulo $2^1$, apply the trick a few times and in each iteration we double the number of correct bits of $x$.
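For instance, with $r = 2^{64}$ the computation of $n^{\prime}$ could look like the following sketch (unsigned overflow is exactly arithmetic modulo $2^{64}$, so plain 64 bit multiplications already perform the reduction modulo $r$ for free):
```cpp
// Sketch: compute n' = n^{-1} mod 2^64 for an odd n using the trick above.
uint64_t inverse64(uint64_t n) {
    uint64_t x = 1;             // correct modulo 2^1, since n is odd
    for (int i = 0; i < 6; i++) // 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 bits
        x *= 2 - n * x;
    return x;
}
```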
## Implementation
Using the GCC compiler we can still compute $x \cdot y \bmod n$ efficiently when all three numbers are 64 bit integers, since the compiler supports 128 bit integers with the types `__int128` and `__uint128_t`.
```cpp
long long result = (__int128)x * y % n;
```
However there is no type for 256 bit integers.
Therefore we will show here an implementation of a 128 bit multiplication.
```cpp
using u64 = uint64_t;
using u128 = __uint128_t;
using i128 = __int128_t;
struct u256 {
u128 high, low;
static u256 mult(u128 x, u128 y) {
u64 a = x >> 64, b = x;
u64 c = y >> 64, d = y;
// (a*2^64 + b) * (c*2^64 + d) =
// (a*c) * 2^128 + (a*d + b*c)*2^64 + (b*d)
u128 ac = (u128)a * c;
u128 ad = (u128)a * d;
u128 bc = (u128)b * c;
u128 bd = (u128)b * d;
u128 carry = (u128)(u64)ad + (u128)(u64)bc + (bd >> 64u);
u128 high = ac + (ad >> 64u) + (bc >> 64u) + (carry >> 64u);
u128 low = (ad << 64u) + (bc << 64u) + bd;
return {high, low};
}
};
struct Montgomery {
Montgomery(u128 n) : mod(n), inv(1) {
for (int i = 0; i < 7; i++)
inv *= 2 - n * inv;
}
u128 init(u128 x) {
x %= mod;
for (int i = 0; i < 128; i++) {
x <<= 1;
if (x >= mod)
x -= mod;
}
return x;
}
u128 reduce(u256 x) {
u128 q = x.low * inv;
i128 a = x.high - u256::mult(q, mod).high;
if (a < 0)
a += mod;
return a;
}
u128 mult(u128 a, u128 b) {
return reduce(u256::mult(a, b));
}
u128 mod, inv;
};
```
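A minimal usage sketch of the structs above (the function name `mulmod` is only illustrative; $n$ has to be odd so that the inverse modulo $r = 2^{128}$ exists):
```cpp
// Sketch: compute x * y mod n via Montgomery multiplication.
u128 mulmod(u128 x, u128 y, u128 n) {
    Montgomery mont(n);
    u128 xm = mont.init(x);          // transform x into Montgomery space
    u128 ym = mont.init(y);          // transform y into Montgomery space
    u128 prodm = mont.mult(xm, ym);  // product, still in Montgomery space
    return mont.reduce({0, prodm});  // one more reduction leaves the space
}
```
In practice the `Montgomery` object would of course be created once and reused for many multiplications modulo the same $n$.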
## Fast transformation
The current method of transforming a number into Montgomery space is pretty slow.
There are faster ways.
You can notice the following relation:
$$\bar{x} := x \cdot r \bmod n = x \cdot r^2 / r = x * r^2$$
Transforming a number into the space is just a multiplication inside the space of the number with $r^2$.
Therefore we can precompute $r^2 \bmod n$ and just perform a multiplication instead of shifting the number 128 times.
In the following code we initialize `r2` with `-n % n`, which is equivalent to $r - n \equiv r \bmod n$, and shift it left 4 times to get $r \cdot 2^4 \bmod n$.
This number can be interpreted as $2^4$ in Montgomery space.
If we square it $5$ times, we get $(2^4)^{2^5} = (2^4)^{32} = 2^{128} = r$ in Montgomery space, which is exactly $r^2 \bmod n$.
```cpp
struct Montgomery {
Montgomery(u128 n) : mod(n), inv(1), r2(-n % n) {
for (int i = 0; i < 7; i++)
inv *= 2 - n * inv;
for (int i = 0; i < 4; i++) {
r2 <<= 1;
if (r2 >= mod)
r2 -= mod;
}
for (int i = 0; i < 5; i++)
r2 = mult(r2, r2);
}
u128 init(u128 x) {
return mult(x, r2);
}
u128 mod, inv, r2;
};
```
---
title
- Original
---
# Primality tests
This article describes multiple algorithms to determine if a number is prime or not.
## Trial division
By definition a prime number doesn't have any divisors other than $1$ and itself.
A composite number has at least one additional divisor, let's call it $d$.
Naturally $\frac{n}{d}$ is also a divisor of $n$.
It's easy to see, that either $d \le \sqrt{n}$ or $\frac{n}{d} \le \sqrt{n}$, therefore one of the divisors $d$ and $\frac{n}{d}$ is $\le \sqrt{n}$.
We can use this information to check for primality.
We try to find a non-trivial divisor, by checking if any of the numbers between $2$ and $\sqrt{n}$ is a divisor of $n$.
If it is a divisor, then $n$ is definitely not prime, otherwise it is.
```cpp
bool isPrime(int x) {
for (int d = 2; d * d <= x; d++) {
if (x % d == 0)
return false;
}
return x >= 2;
}
```
This is the simplest form of a prime check.
You can optimize this function quite a bit, for instance by only checking all odd numbers in the loop, since the only even prime number is 2.
Multiple such optimizations are described in the article about [integer factorization](factorization.md).
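For instance, the mentioned optimization of skipping the even divisors could look like this sketch:
```cpp
// Sketch: trial division that handles 2 separately and then
// only tests odd divisors.
bool isPrimeOdd(int x) {
    if (x < 2)
        return false;
    if (x % 2 == 0)
        return x == 2;
    for (int d = 3; d * d <= x; d += 2) {
        if (x % d == 0)
            return false;
    }
    return true;
}
```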
## Fermat primality test
This is a probabilistic test.
Fermat's little theorem (see also [Euler's totient function](phi-function.md)) states, that for a prime number $p$ and a coprime integer $a$ the following equation holds:
$$a^{p-1} \equiv 1 \bmod p$$
In general this theorem doesn't hold for composite numbers.
This can be used to create a primality test.
We pick an integer $2 \le a \le p - 2$, and check if the equation holds or not.
If it doesn't hold, i.e. $a^{p-1} \not\equiv 1 \bmod p$, we know that $p$ cannot be a prime number.
In this case we call the base $a$ a *Fermat witness* for the compositeness of $p$.
However it is also possible, that the equation holds for a composite number.
So if the equation holds, we don't have a proof for primality.
We only can say that $p$ is *probably prime*.
If it turns out that the number is actually composite, we call the base $a$ a *Fermat liar*.
By running the test for all possible bases $a$, we can actually prove that a number is prime.
However this is not done in practice, since this is a lot more effort than just doing *trial division*.
Instead the test will be repeated multiple times with random choices for $a$.
If we find no witness for the compositeness, it is very likely that the number is in fact prime.
```cpp
bool probablyPrimeFermat(int n, int iter=5) {
if (n < 4)
return n == 2 || n == 3;
for (int i = 0; i < iter; i++) {
int a = 2 + rand() % (n - 3);
if (binpower(a, n - 1, n) != 1)
return false;
}
return true;
}
```
We use [Binary Exponentiation](binary-exp.md) to efficiently compute the power $a^{p-1}$.
There is one piece of bad news though:
there exist some composite numbers where $a^{n-1} \equiv 1 \bmod n$ holds for all $a$ coprime to $n$, for instance the number $561 = 3 \cdot 11 \cdot 17$.
Such numbers are called *Carmichael numbers*.
The Fermat primality test can identify these numbers only if we have immense luck and choose a base $a$ with $\gcd(a, n) \ne 1$.
The Fermat test is still used in practice, as it is very fast and Carmichael numbers are very rare.
E.g. there only exist 646 such numbers below $10^9$.
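To see concretely how a Carmichael number fools the test, here is a small standalone sketch (not part of the original article; `power_mod` is our own helper) that verifies that every base coprime to $561$ is a Fermat liar:
```cpp
#include <iostream>
#include <numeric>

// helper: modular exponentiation, sufficient for small moduli like 561
long long power_mod(long long base, long long e, long long mod) {
    long long result = 1;
    base %= mod;
    while (e) {
        if (e & 1)
            result = result * base % mod;
        base = base * base % mod;
        e >>= 1;
    }
    return result;
}

int main() {
    const long long n = 561; // 3 * 11 * 17, the smallest Carmichael number
    bool all_liars = true;
    for (long long a = 2; a <= n - 2; a++) {
        if (std::gcd(a, n) != 1)
            continue; // bases sharing a factor with n are the only possible witnesses
        if (power_mod(a, n - 1, n) != 1)
            all_liars = false;
    }
    std::cout << (all_liars ? "every coprime base is a Fermat liar" : "found a witness") << '\n';
}
```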
## Miller-Rabin primality test
The Miller-Rabin test extends the ideas from the Fermat test.
For an odd number $n$, $n-1$ is even and we can factor out all powers of 2.
We can write:
$$n - 1 = 2^s \cdot d,~\text{with}~d~\text{odd}.$$
This allows us to factorize the equation of Fermat's little theorem:
$$\begin{array}{rl}
a^{n-1} \equiv 1 \bmod n &\Longleftrightarrow a^{2^s d} - 1 \equiv 0 \bmod n \\\\
&\Longleftrightarrow (a^{2^{s-1} d} + 1) (a^{2^{s-1} d} - 1) \equiv 0 \bmod n \\\\
&\Longleftrightarrow (a^{2^{s-1} d} + 1) (a^{2^{s-2} d} + 1) (a^{2^{s-2} d} - 1) \equiv 0 \bmod n \\\\
&\quad\vdots \\\\
&\Longleftrightarrow (a^{2^{s-1} d} + 1) (a^{2^{s-2} d} + 1) \cdots (a^{d} + 1) (a^{d} - 1) \equiv 0 \bmod n \\\\
\end{array}$$
If $n$ is prime, then $n$ has to divide one of these factors.
And in the Miller-Rabin primality test we check exactly that statement, which is a stricter version of the statement of the Fermat test.
For a base $2 \le a \le n-2$ we check if either
$$a^d \equiv 1 \bmod n$$
holds or
$$a^{2^r d} \equiv -1 \bmod n$$
holds for some $0 \le r \le s - 1$.
If we find a base $a$ which doesn't satisfy any of the above equalities, then we have found a *witness* for the compositeness of $n$.
In this case we have proven that $n$ is not a prime number.
Similar to the Fermat test, it is also possible that the set of equations is satisfied for a composite number.
In that case the base $a$ is called a *strong liar*.
If a base $a$ satisfies the equations (one of them), $n$ is only a *strong probable prime*.
However, there are no numbers like the Carmichael numbers, where all non-trivial bases lie.
In fact it is possible to show, that at most $\frac{1}{4}$ of the bases can be strong liars.
If $n$ is composite, we have a probability of $\ge 75\%$ that a random base will tell us that it is composite.
By doing multiple iterations, choosing different random bases, we can tell with very high probability if the number is truly prime or if it is composite.
Here is an implementation for 64-bit integers.
```cpp
using u64 = uint64_t;
using u128 = __uint128_t;
u64 binpower(u64 base, u64 e, u64 mod) {
u64 result = 1;
base %= mod;
while (e) {
if (e & 1)
result = (u128)result * base % mod;
base = (u128)base * base % mod;
e >>= 1;
}
return result;
}
bool check_composite(u64 n, u64 a, u64 d, int s) {
u64 x = binpower(a, d, n);
if (x == 1 || x == n - 1)
return false;
for (int r = 1; r < s; r++) {
x = (u128)x * x % n;
if (x == n - 1)
return false;
}
return true;
};
bool MillerRabin(u64 n, int iter=5) { // returns true if n is probably prime, else returns false.
if (n < 4)
return n == 2 || n == 3;
int s = 0;
u64 d = n - 1;
while ((d & 1) == 0) {
d >>= 1;
s++;
}
for (int i = 0; i < iter; i++) {
int a = 2 + rand() % (n - 3);
if (check_composite(n, a, d, s))
return false;
}
return true;
}
```
Before the Miller-Rabin test you can test additionally if one of the first few prime numbers is a divisor.
This can speed up the test by a lot, since most composite numbers have very small prime divisors.
E.g. $88\%$ of all numbers have a prime factor smaller than $100$.
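As a sketch of this optimization (the wrapper name `isPrimeFast` is ours, not from the article), one can first try a handful of small primes and only fall back to the probabilistic test afterwards:
```cpp
// hedged sketch: quick small-prime filter in front of the probabilistic test above
bool isPrimeFast(u64 n, int iter = 5) {
    if (n < 2)
        return false;
    for (u64 p : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}) {
        if (n == p)
            return true;
        if (n % p == 0)
            return false;
    }
    return MillerRabin(n, iter);
}
```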
### Deterministic version
Miller showed that it is possible to make the algorithm deterministic by only checking all bases $\le O((\ln n)^2)$.
Bach later gave a concrete bound: it is only necessary to test all bases $a \le 2 \ln(n)^2$.
This is still a pretty large number of bases.
So people have invested quite a lot of computation power into finding lower bounds.
It turns out, for testing a 32 bit integer it is only necessary to check the first 4 prime bases: 2, 3, 5 and 7.
The smallest composite number that fails this test is $3,215,031,751 = 151 \cdot 751 \cdot 28351$.
And for testing 64 bit integer it is enough to check the first 12 prime bases: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37.
This results in the following deterministic implementation:
```cpp
bool MillerRabin(u64 n) { // returns true if n is prime, else returns false.
if (n < 2)
return false;
int r = 0;
u64 d = n - 1;
while ((d & 1) == 0) {
d >>= 1;
r++;
}
for (int a : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}) {
if (n == a)
return true;
if (check_composite(n, a, d, r))
return false;
}
return true;
}
```
It's also possible to do the check with only 7 bases: 2, 325, 9375, 28178, 450775, 9780504 and 1795265022.
However, since these numbers (except 2) are not prime, you need to check additionally if the number you are checking is equal to any prime divisor of those bases: 2, 3, 5, 13, 19, 73, 193, 407521, 299210837.
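A hedged sketch of that variant (the function name is ours), reusing `check_composite` from above:
```cpp
// deterministic test for 64-bit integers with only 7 bases (sketch)
bool MillerRabin7(u64 n) {
    if (n < 2)
        return false;
    // the bases are not all prime, so n equal to a prime divisor of a base is
    // answered directly; the divisibility check also acts as trial division
    for (u64 p : {2, 3, 5, 13, 19, 73, 193, 407521, 299210837}) {
        if (n == p)
            return true;
        if (n % p == 0)
            return false;
    }
    int r = 0;
    u64 d = n - 1;
    while ((d & 1) == 0) {
        d >>= 1;
        r++;
    }
    for (u64 a : {2, 325, 9375, 28178, 450775, 9780504, 1795265022}) {
        if (check_composite(n, a, d, r))
            return false;
    }
    return true;
}
```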
## Practice Problems
- [SPOJ - Prime or Not](https://www.spoj.com/problems/PON/)
---
title
euclid_algorithm
---
# Euclidean algorithm for computing the greatest common divisor
Given two non-negative integers $a$ and $b$, we have to find their **GCD** (greatest common divisor), i.e. the largest number which is a divisor of both $a$ and $b$.
It's commonly denoted by $\gcd(a, b)$. Mathematically it is defined as:
$$\gcd(a, b) = \max \{k > 0 : (k \mid a) \text{ and } (k \mid b) \}$$
(here the symbol "$\mid$" denotes divisibility, i.e. "$k \mid a$" means "$k$ divides $a$")
When one of the numbers is zero, while the other is non-zero, their greatest common divisor, by definition, is the second number. When both numbers are zero, their greatest common divisor is undefined (it can be any arbitrarily large number), but it is convenient to define it as zero as well to preserve the associativity of $\gcd$. Which gives us a simple rule: if one of the numbers is zero, the greatest common divisor is the other number.
The Euclidean algorithm, discussed below, allows us to find the greatest common divisor of two numbers $a$ and $b$ in $O(\log \min(a, b))$.
The algorithm was first described in Euclid's "Elements" (circa 300 BC), but it is possible that the algorithm has even earlier origins.
## Algorithm
Originally, the Euclidean algorithm was formulated as follows: subtract the smaller number from the larger one until one of the numbers is zero. Indeed, if $g$ divides $a$ and $b$, it also divides $a-b$. On the other hand, if $g$ divides $a-b$ and $b$, then it also divides $a = b + (a-b)$, which means that the sets of the common divisors of $\{a, b\}$ and $\{b,a-b\}$ coincide.
Note that $a$ remains the larger number until $b$ is subtracted from it at least $\left\lfloor\frac{a}{b}\right\rfloor$ times. Therefore, to speed things up, $a-b$ is substituted with $a-\left\lfloor\frac{a}{b}\right\rfloor b = a \bmod b$. Then the algorithm is formulated in an extremely simple way:
$$\gcd(a, b) = \begin{cases}a,&\text{if }b = 0 \\ \gcd(b, a \bmod b),&\text{otherwise.}\end{cases}$$
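For example, for $a = 42$ and $b = 24$ the recursion unfolds as

$$\gcd(42, 24) = \gcd(24, 18) = \gcd(18, 6) = \gcd(6, 0) = 6.$$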
## Implementation
```cpp
int gcd (int a, int b) {
if (b == 0)
return a;
else
return gcd (b, a % b);
}
```
Using the ternary operator in C++, we can write it as a one-liner.
```cpp
int gcd (int a, int b) {
return b ? gcd (b, a % b) : a;
}
```
And finally, here is a non-recursive implementation:
```cpp
int gcd (int a, int b) {
while (b) {
a %= b;
swap(a, b);
}
return a;
}
```
Note that since C++17, `gcd` is implemented as a [standard function](https://en.cppreference.com/w/cpp/numeric/gcd) in C++.
## Time Complexity
The running time of the algorithm is estimated by Lamé's theorem, which establishes a surprising connection between the Euclidean algorithm and the Fibonacci sequence:
If $a > b \geq 1$ and $b < F_n$ for some $n$, the Euclidean algorithm performs at most $n-2$ recursive calls.
Moreover, it is possible to show that the upper bound of this theorem is optimal. When $a = F_n$ and $b = F_{n-1}$, $\gcd(a, b)$ will perform exactly $n-2$ recursive calls. In other words, consecutive Fibonacci numbers are the worst case input for Euclid's algorithm.
Given that Fibonacci numbers grow exponentially, we get that the Euclidean algorithm works in $O(\log \min(a, b))$.
Another way to estimate the complexity is to notice that $a \bmod b$ for the case $a \geq b$ is at least $2$ times smaller than $a$, so the larger number is reduced at least in half on each iteration of the algorithm.
## Least common multiple
Calculating the least common multiple (commonly denoted **LCM**) can be reduced to calculating the GCD with the following simple formula:
$$\text{lcm}(a, b) = \frac{a \cdot b}{\gcd(a, b)}$$
Thus, LCM can be calculated using the Euclidean algorithm with the same time complexity:
A possible implementation, that cleverly avoids integer overflows by first dividing $a$ by the GCD, is given here:
```cpp
int lcm (int a, int b) {
return a / gcd(a, b) * b;
}
```
## Binary GCD
The Binary GCD algorithm is an optimization to the normal Euclidean algorithm.
The slow part of the normal algorithm is the modulo operations. Modulo operations, although we see them as $O(1)$, are a lot slower than simpler operations like addition, subtraction or bitwise operations.
So it would be better to avoid those.
It turns out, that you can design a fast GCD algorithm that avoids modulo operations.
It's based on a few properties:
- If both numbers are even, then we can factor out a two of both and compute the GCD of the remaining numbers: $\gcd(2a, 2b) = 2 \gcd(a, b)$.
- If one of the numbers is even and the other one is odd, then we can remove the factor 2 from the even one: $\gcd(2a, b) = \gcd(a, b)$ if $b$ is odd.
- If both numbers are odd, then subtracting one number from the other will not change the GCD: $\gcd(a, b) = \gcd(b, a-b)$
Using only these properties, and some fast bitwise functions from GCC, we can implement a fast version:
```cpp
int gcd(int a, int b) {
if (!a || !b)
return a | b;
unsigned shift = __builtin_ctz(a | b);
a >>= __builtin_ctz(a);
do {
b >>= __builtin_ctz(b);
if (a > b)
swap(a, b);
b -= a;
} while (b);
return a << shift;
}
```
Notice, that such an optimization is usually not necessary, and most programming languages already have a GCD function in their standard libraries.
E.g. C++17 has such a function `std::gcd` in the `numeric` header.
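For illustration, a minimal sketch assuming a C++17 compiler (`std::lcm` is available in the same header):
```cpp
#include <iostream>
#include <numeric>

int main() {
    std::cout << std::gcd(24, 18) << '\n'; // prints 6
    std::cout << std::lcm(24, 18) << '\n'; // prints 72
}
```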
## Practice Problems
- [Codechef - GCD and LCM](https://www.codechef.com/problems/FLOW016)
---
title
kth_order_statistics
---
# $K$th order statistic in $O(N)$
Given an array $A$ of size $N$ and a number $K$. The problem is to find the $K$-th order statistic of the array, i.e. its $K$-th smallest element.
The basic idea is to use the idea of the quicksort algorithm. The algorithm itself is simple; it is more difficult to prove that it runs in $O(N)$ on average, in contrast to quicksort.
## Implementation (not recursive)
```cpp
// note: this function treats the data as 1-based, i.e. the values are expected
// in a[1..n] (so the vector must have size at least n+1) and 1 <= k <= n
template <class T>
T order_statistics (std::vector<T> a, unsigned n, unsigned k)
{
using std::swap;
for (unsigned l=1, r=n; ; )
{
if (r <= l+1)
{
// the current part size is either 1 or 2, so it is easy to find the answer
if (r == l+1 && a[r] < a[l])
swap (a[l], a[r]);
return a[k];
}
// ordering a[l], a[l+1], a[r]
unsigned mid = (l + r) >> 1;
swap (a[mid], a[l+1]);
if (a[l] > a[r])
swap (a[l], a[r]);
if (a[l+1] > a[r])
swap (a[l+1], a[r]);
if (a[l] > a[l+1])
swap (a[l], a[l+1]);
// performing division
// barrier is a[l + 1], i.e. median among a[l], a[l + 1], a[r]
unsigned
i = l+1,
j = r;
const T
cur = a[l+1];
for (;;)
{
while (a[++i] < cur) ;
while (a[--j] > cur) ;
if (i > j)
break;
swap (a[i], a[j]);
}
// inserting the barrier
a[l+1] = a[j];
a[j] = cur;
// we continue to work in that part, which must contain the required element
if (j >= k)
r = j-1;
if (j <= k)
l = i;
}
}
```
## Notes
* The randomized algorithm above is named [quickselect](https://en.wikipedia.org/wiki/Quickselect). You should randomly shuffle $A$ before calling it, or use a random element as a barrier, for it to run properly. There are also deterministic algorithms that solve the specified problem in linear time, such as [median of medians](https://en.wikipedia.org/wiki/Median_of_medians).
* A deterministic linear solution is implemented in the C++ standard library as [std::nth_element](https://en.cppreference.com/w/cpp/algorithm/nth_element); see the sketch after these notes.
* Finding $K$ smallest elements can be reduced to finding $K$-th element with a linear overhead, as they're exactly the elements that are smaller than $K$-th.
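A minimal sketch of the `std::nth_element` alternative mentioned in the notes (the sample values are arbitrary):
```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = {7, 2, 9, 4, 1};
    // place the element with index 2 (0-based) where it would end up after sorting
    std::nth_element(a.begin(), a.begin() + 2, a.end());
    std::cout << a[2] << '\n'; // prints 4, the 3rd smallest element
}
```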
## Practice Problems
- [CODECHEF: Median](https://www.codechef.com/problems/CD1IT1)
---
title
longest_increasing_subseq_log
---
# Longest increasing subsequence
We are given an array with $n$ numbers: $a[0 \dots n-1]$.
The task is to find the longest, strictly increasing, subsequence in $a$.
Formally we look for the longest sequence of indices $i_1, \dots i_k$ such that
$$i_1 < i_2 < \dots < i_k,\quad
a[i_1] < a[i_2] < \dots < a[i_k]$$
In this article we discuss multiple algorithms for solving this task.
Also we will discuss some other problems, that can be reduced to this problem.
## Solution in $O(n^2)$ with dynamic programming {data-toc-label="Solution in O(n^2) with dynamic programming"}
Dynamic programming is a very general technique that allows us to solve a huge class of problems.
Here we apply the technique for our specific task.
First we will search only for the **length** of the longest increasing subsequence, and only later learn how to restore the subsequence itself.
### Finding the length
To accomplish this task, we define an array $d[0 \dots n-1]$, where $d[i]$ is the length of the longest increasing subsequence that ends in the element at index $i$.
!!! example
$$\begin{array}{ll}
a &= \{8, 3, 4, 6, 5, 2, 0, 7, 9, 1\} \\
d &= \{1, 1, 2, 3, 3, 1, 1, 4, 5, 2\}
\end{array}$$
The longest increasing subsequence that ends at index 4 is $\{3, 4, 5\}$ with a length of 3, the longest ending at index 8 is either $\{3, 4, 5, 7, 9\}$ or $\{3, 4, 6, 7, 9\}$, both having length 5, and the longest ending at index 9 is $\{0, 1\}$ having length 2.
We will compute this array gradually: first $d[0]$, then $d[1]$, and so on.
After this array is computed, the answer to the problem will be the maximum value in the array $d[]$.
So let the current index be $i$.
I.e. we want to compute the value $d[i]$ and all previous values $d[0], \dots, d[i-1]$ are already known.
Then there are two options:
- $d[i] = 1$: the required subsequence consists only of the element $a[i]$.
- $d[i] > 1$: The subsequence will end in $a[i]$, and right before it will be some number $a[j]$ with $j < i$ and $a[j] < a[i]$.
It's easy to see, that the subsequence ending in $a[j]$ will itself be one of the longest increasing subsequences that ends in $a[j]$.
The number $a[i]$ just extends that longest increasing subsequence by one number.
Therefore, we can just iterate over all $j < i$ with $a[j] < a[i]$, and take the longest sequence that we get by appending $a[i]$ to the longest increasing subsequence ending in $a[j]$.
The longest increasing subsequence ending in $a[j]$ has length $d[j]$, extending it by one gives the length $d[j] + 1$.
$$d[i] = \max_{\substack{j < i \\\\ a[j] < a[i]}} \left(d[j] + 1\right)$$
If we combine these two cases we get the final answer for $d[i]$:
$$d[i] = \max\left(1, \max_{\substack{j < i \\\\ a[j] < a[i]}} \left(d[j] + 1\right)\right)$$
### Implementation
Here is an implementation of the algorithm described above, which computes the length of the longest increasing subsequence.
```{.cpp file=lis_n2}
int lis(vector<int> const& a) {
int n = a.size();
vector<int> d(n, 1);
for (int i = 0; i < n; i++) {
for (int j = 0; j < i; j++) {
if (a[j] < a[i])
d[i] = max(d[i], d[j] + 1);
}
}
int ans = d[0];
for (int i = 1; i < n; i++) {
ans = max(ans, d[i]);
}
return ans;
}
```
### Restoring the subsequence
So far we only learned how to find the length of the subsequence, but not how to find the subsequence itself.
To be able to restore the subsequence we generate an additional auxiliary array $p[0 \dots n-1]$ that we will compute alongside the array $d[]$.
$p[i]$ will be the index $j$ of the second last element in the longest increasing subsequence ending in $i$.
In other words the index $p[i]$ is the same index $j$ at which the highest value $d[i]$ was obtained.
This auxiliary array $p[]$ points in some sense to the ancestors.
Then to derive the subsequence, we just start at the index $i$ with the maximal $d[i]$, and follow the ancestors until we deduced the entire subsequence, i.e. until we reach the element with $d[i] = 1$.
### Implementation of restoring
We will change the code from the previous sections a little bit.
We will compute the array $p[]$ alongside $d[]$, and afterwards compute the subsequence.
For convenience we originally assign the ancestors with $p[i] = -1$.
For elements with $d[i] = 1$, the ancestors value will remain $-1$, which will be slightly more convenient for restoring the subsequence.
```{.cpp file=lis_n2_restore}
vector<int> lis(vector<int> const& a) {
int n = a.size();
vector<int> d(n, 1), p(n, -1);
for (int i = 0; i < n; i++) {
for (int j = 0; j < i; j++) {
if (a[j] < a[i] && d[i] < d[j] + 1) {
d[i] = d[j] + 1;
p[i] = j;
}
}
}
int ans = d[0], pos = 0;
for (int i = 1; i < n; i++) {
if (d[i] > ans) {
ans = d[i];
pos = i;
}
}
vector<int> subseq;
while (pos != -1) {
subseq.push_back(a[pos]);
pos = p[pos];
}
reverse(subseq.begin(), subseq.end());
return subseq;
}
```
### Alternative way of restoring the subsequence
It is also possible to restore the subsequence without the auxiliary array $p[]$.
We can simply recalculate the current value of $d[i]$ and also see how the maximum was reached.
This method leads to a slightly longer code, but in return we save some memory.
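A hedged sketch of this idea (the function name is ours), reusing the array $d[]$ computed by the first implementation:
```cpp
// restore one LIS from a[] and d[] without a parent array (sketch)
vector<int> restore_lis(vector<int> const& a, vector<int> const& d) {
    if (a.empty())
        return {};
    // start at a position with maximal d[i] ...
    int pos = max_element(d.begin(), d.end()) - d.begin();
    vector<int> subseq = {a[pos]};
    int need = d[pos] - 1;
    // ... and rescan to the left for an element that could have produced this value
    for (int i = pos - 1; i >= 0 && need > 0; i--) {
        if (d[i] == need && a[i] < subseq.back()) {
            subseq.push_back(a[i]);
            need--;
        }
    }
    reverse(subseq.begin(), subseq.end());
    return subseq;
}
```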
## Solution in $O(n \log n)$ with dynamic programming and binary search {data-toc-label="Solution in O(n log n) with dynamic programming and binary search"}
In order to obtain a faster solution for the problem, we construct a different dynamic programming solution that runs in $O(n^2)$, and then later improve it to $O(n \log n)$.
We will use the dynamic programming array $d[0 \dots n]$.
This time $d[l]$ doesn't correspond to the element $a[i]$ or to a prefix of the array.
$d[l]$ will be the smallest element at which an increasing subsequence of length $l$ ends.
Initially we assume $d[0] = -\infty$ and for all other lengths $d[l] = \infty$.
We will again gradually process the numbers, first $a[0]$, then $a[1]$, etc, and in each step maintain the array $d[]$ so that it is up to date.
!!! example
Given the array $a = \{8, 3, 4, 6, 5, 2, 0, 7, 9, 1\}$, here are all their prefixes and their dynamic programming array.
Notice, that the values of the array don't always change at the end.
$$
\begin{array}{ll}
\text{prefix} = \{\} &\quad d = \{-\infty, \infty, \dots\}\\
\text{prefix} = \{8\} &\quad d = \{-\infty, 8, \infty, \dots\}\\
\text{prefix} = \{8, 3\} &\quad d = \{-\infty, 3, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4\} &\quad d = \{-\infty, 3, 4, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4, 6\} &\quad d = \{-\infty, 3, 4, 6, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4, 6, 5\} &\quad d = \{-\infty, 3, 4, 5, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2\} &\quad d = \{-\infty, 2, 4, 5, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0\} &\quad d = \{-\infty, 0, 4, 5, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0, 7\} &\quad d = \{-\infty, 0, 4, 5, 7, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0, 7, 9\} &\quad d = \{-\infty, 0, 4, 5, 7, 9, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0, 7, 9, 1\} &\quad d = \{-\infty, 0, 1, 5, 7, 9, \infty, \dots \}\\
\end{array}
$$
When we process $a[i]$, we can ask ourselves:
what do the conditions have to be, so that we write the current number $a[i]$ into the $d[0 \dots n]$ array?
We set $d[l] = a[i]$, if there is a longest increasing sequence of length $l$ that ends in $a[i]$, and there is no longest increasing sequence of length $l$ that ends in a smaller number.
Similar to the previous approach, if we remove the number $a[i]$ from the longest increasing sequence of length $l$, we get another longest increasing sequence of length $l -1$.
So we want to extend a longest increasing sequence of length $l - 1$ by the number $a[i]$, and obviously the longest increasing sequence of length $l - 1$ that ends with the smallest element will work the best, in other words the sequence of length $l-1$ that ends in element $d[l-1]$.
There is a longest increasing sequence of length $l - 1$ that we can extend with the number $a[i]$, exactly if $d[l-1] < a[i]$.
So we can just iterate over each length $l$, and check if we can extend a longest increasing sequence of length $l - 1$ by checking the criteria.
Additionally we also need to check, if we maybe have already found a longest increasing sequence of length $l$ with a smaller number at the end.
So we only update if $a[i] < d[l]$.
After processing all the elements of $a[]$ the length of the desired subsequence is the largest $l$ with $d[l] < \infty$.
```{.cpp file=lis_method2_n2}
int lis(vector<int> const& a) {
int n = a.size();
const int INF = 1e9;
vector<int> d(n+1, INF);
d[0] = -INF;
for (int i = 0; i < n; i++) {
for (int l = 1; l <= n; l++) {
if (d[l-1] < a[i] && a[i] < d[l])
d[l] = a[i];
}
}
int ans = 0;
for (int l = 0; l <= n; l++) {
if (d[l] < INF)
ans = l;
}
return ans;
}
```
We now make two important observations.
1. The array $d$ will always be sorted:
$d[l-1] < d[l]$ for all $l = 1 \dots n$.
This is trivial, as you can just remove the last element from the increasing subsequence of length $l$, and you get an increasing subsequence of length $l-1$ with a smaller ending number.
2. The element $a[i]$ will only update at most one value $d[l]$.
This follows immediately from the above implementation.
There can only be one place in the array with $d[l-1] < a[i] < d[l]$.
Thus we can find this element in the array $d[]$ using [binary search](../num_methods/binary_search.md) in $O(\log n)$.
In fact we can simply look in the array $d[]$ for the first number that is strictly greater than $a[i]$, and we try to update this element in the same way as the above implementation.
### Implementation
This gives us the improved $O(n \log n)$ implementation:
```{.cpp file=lis_method2_nlogn}
int lis(vector<int> const& a) {
int n = a.size();
const int INF = 1e9;
vector<int> d(n+1, INF);
d[0] = -INF;
for (int i = 0; i < n; i++) {
int l = upper_bound(d.begin(), d.end(), a[i]) - d.begin();
if (d[l-1] < a[i] && a[i] < d[l])
d[l] = a[i];
}
int ans = 0;
for (int l = 0; l <= n; l++) {
if (d[l] < INF)
ans = l;
}
return ans;
}
```
### Restoring the subsequence
It is also possible to restore the subsequence using this approach.
This time we have to maintain two auxiliary arrays.
One that tells us the index of the elements in $d[]$.
And again we have to create an array of "ancestors" $p[i]$.
$p[i]$ will be the index of the previous element for the optimal subsequence ending in element $i$.
It's easy to maintain these two arrays in the course of iteration over the array $a[]$ alongside the computations of $d[]$.
And at the end it is not difficult to restore the desired subsequence using these arrays.
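A hedged sketch of this bookkeeping (names are ours; the conventions follow the implementation above):
```cpp
// O(n log n) LIS with restoration (sketch)
vector<int> lis_with_restore(vector<int> const& a) {
    int n = a.size();
    const int INF = 1e9;
    vector<int> d(n+1, INF), idx(n+1, -1), p(n, -1);
    d[0] = -INF;
    for (int i = 0; i < n; i++) {
        int l = upper_bound(d.begin(), d.end(), a[i]) - d.begin();
        if (d[l-1] < a[i] && a[i] < d[l]) {
            d[l] = a[i];
            idx[l] = i;      // index of the element currently stored in d[l]
            p[i] = idx[l-1]; // ancestor: end of the best subsequence of length l-1
        }
    }
    int ans = 0;
    for (int l = 0; l <= n; l++) {
        if (d[l] < INF)
            ans = l;
    }
    vector<int> subseq;
    for (int cur = idx[ans]; cur != -1; cur = p[cur])
        subseq.push_back(a[cur]);
    reverse(subseq.begin(), subseq.end());
    return subseq;
}
```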
## Solution in $O(n \log n)$ with data structures {data-toc-label="Solution in O(n log n) with data structures"}
Instead of the above method for computing the longest increasing subsequence in $O(n \log n)$ we can also solve the problem in a different way: using some simple data structures.
Let's go back to the first method.
Remember that $d[i]$ is the value $d[j] + 1$ with $j < i$ and $a[j] < a[i]$.
Thus if we define an additional array $t[]$ such that
$$t[a[i]] = d[i],$$
then the problem of computing the value $d[i]$ is equivalent to finding the **maximum value in a prefix** of the array $t[]$:
$$d[i] = \max\left(t[0 \dots a[i] - 1]\right) + 1$$
The problem of finding the maximum of a prefix of an array (which changes) is a standard problem that can be solved by many different data structures.
For instance we can use a [Segment tree](../data_structures/segment_tree.md) or a [Fenwick tree](../data_structures/fenwick.md).
This method has obviously some **shortcomings**:
in terms of length and complexity of the implementation this approach will be worse than the method using binary search.
In addition if the input numbers $a[i]$ are especially large, then we would have to use some tricks, like compressing the numbers (i.e. renumber them from $0$ to $n-1$), or use a dynamic segment tree (only generate the branches of the tree that are important).
Otherwise the memory consumption will be too high.
On the other hand this method has also some **advantages**:
with this method you don't have to think about any tricky properties in the dynamic programming solution.
And this approach allows us to generalize the problem very easily (see below).
## Related tasks
Here are several problems that are closely related to the problem of finding the longest increasing subsequence.
### Longest non-decreasing subsequence
This is in fact nearly the same problem.
Only now it is allowed to use identical numbers in the subsequence.
The solution is essentially also nearly the same.
We just have to change the inequality signs, and make a slight modification to the binary search.
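A hedged sketch under the conventions of the implementation above (the function name is ours); with `upper_bound` it turns out to be enough to make the comparison with $d[l-1]$ non-strict:
```cpp
// longest non-decreasing subsequence (sketch)
int longest_non_decreasing(vector<int> const& a) {
    int n = a.size();
    const int INF = 1e9;
    vector<int> d(n+1, INF);
    d[0] = -INF;
    for (int i = 0; i < n; i++) {
        int l = upper_bound(d.begin(), d.end(), a[i]) - d.begin();
        if (d[l-1] <= a[i] && a[i] < d[l]) // <= instead of < allows equal values
            d[l] = a[i];
    }
    int ans = 0;
    for (int l = 0; l <= n; l++) {
        if (d[l] < INF)
            ans = l;
    }
    return ans;
}
```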
### Number of longest increasing subsequences
We can use the first discussed method, either the $O(n^2)$ version or the version using data structures.
We only have to additionally store in how many ways we can obtain longest increasing subsequences ending in the values $d[i]$.
The number of ways to form a longest increasing subsequence ending in $a[i]$ is the sum of the counts over all $j < i$ with $a[j] < a[i]$ for which $d[j]$ is maximal.
There can be multiple such $j$, so we need to sum all of them.
Using a Segment tree this approach can also be implemented in $O(n \log n)$.
It is not possible to use the binary search approach for this task.
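For the quadratic method this only needs one extra counter array; a hedged sketch (names are ours):
```cpp
// count the number of longest increasing subsequences in O(n^2) (sketch)
long long count_lis(vector<int> const& a) {
    int n = a.size();
    if (n == 0)
        return 0;
    vector<int> d(n, 1);
    vector<long long> cnt(n, 1);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < i; j++) {
            if (a[j] < a[i]) {
                if (d[j] + 1 > d[i]) {
                    d[i] = d[j] + 1;
                    cnt[i] = cnt[j];   // strictly better length: restart the count
                } else if (d[j] + 1 == d[i]) {
                    cnt[i] += cnt[j];  // same length: accumulate the ways
                }
            }
        }
    }
    int best = *max_element(d.begin(), d.end());
    long long total = 0;
    for (int i = 0; i < n; i++) {
        if (d[i] == best)
            total += cnt[i];
    }
    return total;
}
```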
### Smallest number of non-increasing subsequences covering a sequence
For a given array with $n$ numbers $a[0 \dots n - 1]$ we have to colorize the numbers in the smallest number of colors, so that each color forms a non-increasing subsequence.
To solve this, we notice that the minimum number of required colors is equal to the length of the longest increasing subsequence.
**Proof**:
We need to prove the **duality** of these two problems.
Let's denote by $x$ the length of the longest increasing subsequence and by $y$ the least number of non-increasing subsequences that form a cover.
We need to prove that $x = y$.
It is clear that $y < x$ is not possible, because if we have $x$ strictly increasing elements, then no two can be part of the same non-increasing subsequence.
Therefore we have $y \ge x$.
We now show that $y > x$ is not possible by contradiction.
Suppose that $y > x$.
Then we consider any optimal set of $y$ non-increasing subsequences.
We transform this set in the following way:
as long as there are two subsequences such that the first begins before the second, and the first starts with a number greater than or equal to the start of the second, we unhook this starting number and attach it to the beginning of the second.
After a finite number of steps we have $y$ subsequences, and their starting numbers will form an increasing subsequence of length $y$.
Since we assumed that $y > x$ we reached a contradiction.
Thus it follows that $y = x$.
**Restoring the sequences**:
The desired partition of the sequence into subsequences can be done greedily.
I.e. go from left to right and assign the current number to the subsequence ending with the minimal number which is greater than or equal to the current one.
|
---
title
longest_increasing_subseq_log
---
# Longest increasing subsequence
We are given an array with $n$ numbers: $a[0 \dots n-1]$.
The task is to find the longest, strictly increasing, subsequence in $a$.
Formally we look for the longest sequence of indices $i_1, \dots i_k$ such that
$$i_1 < i_2 < \dots < i_k,\quad
a[i_1] < a[i_2] < \dots < a[i_k]$$
In this article we discuss multiple algorithms for solving this task.
Also we will discuss some other problems, that can be reduced to this problem.
## Solution in $O(n^2)$ with dynamic programming {data-toc-label="Solution in O(n^2) with dynamic programming"}
Dynamic programming is a very general technique that allows to solve a huge class of problems.
Here we apply the technique for our specific task.
First we will search only for the **length** of the longest increasing subsequence, and only later learn how to restore the subsequence itself.
### Finding the length
To accomplish this task, we define an array $d[0 \dots n-1]$, where $d[i]$ is the length of the longest increasing subsequence that ends in the element at index $i$.
!!! example
$$\begin{array}{ll}
a &= \{8, 3, 4, 6, 5, 2, 0, 7, 9, 1\} \\
d &= \{1, 1, 2, 3, 3, 1, 1, 4, 5, 2\}
\end{array}$$
The longest increasing subsequence that ends at index 4 is $\{3, 4, 5\}$ with a length of 3, the longest ending at index 8 is either $\{3, 4, 5, 7, 9\}$ or $\{3, 4, 6, 7, 9\}$, both having length 5, and the longest ending at index 9 is $\{0, 1\}$ having length 2.
We will compute this array gradually: first $d[0]$, then $d[1]$, and so on.
After this array is computed, the answer to the problem will be the maximum value in the array $d[]$.
So let the current index be $i$.
I.e. we want to compute the value $d[i]$ and all previous values $d[0], \dots, d[i-1]$ are already known.
Then there are two options:
- $d[i] = 1$: the required subsequence consists only of the element $a[i]$.
- $d[i] > 1$: The subsequence will end it $a[i]$, and right before it will be some number $a[j]$ with $j < i$ and $a[j] < a[i]$.
It's easy to see, that the subsequence ending in $a[j]$ will itself be one of the longest increasing subsequences that ends in $a[j]$.
The number $a[i]$ just extends that longest increasing subsequence by one number.
Therefore, we can just iterate over all $j < i$ with $a[j] < a[i]$, and take the longest sequence that we get by appending $a[i]$ to the longest increasing subsequence ending in $a[j]$.
The longest increasing subsequence ending in $a[j]$ has length $d[j]$, extending it by one gives the length $d[j] + 1$.
$$d[i] = \max_{\substack{j < i \\\\ a[j] < a[i]}} \left(d[j] + 1\right)$$
If we combine these two cases we get the final answer for $d[i]$:
$$d[i] = \max\left(1, \max_{\substack{j < i \\\\ a[j] < a[i]}} \left(d[j] + 1\right)\right)$$
### Implementation
Here is an implementation of the algorithm described above, which computes the length of the longest increasing subsequence.
```{.cpp file=lis_n2}
int lis(vector<int> const& a) {
int n = a.size();
vector<int> d(n, 1);
for (int i = 0; i < n; i++) {
for (int j = 0; j < i; j++) {
if (a[j] < a[i])
d[i] = max(d[i], d[j] + 1);
}
}
int ans = d[0];
for (int i = 1; i < n; i++) {
ans = max(ans, d[i]);
}
return ans;
}
```
### Restoring the subsequence
So far we only learned how to find the length of the subsequence, but not how to find the subsequence itself.
To be able to restore the subsequence we generate an additional auxiliary array $p[0 \dots n-1]$ that we will compute alongside the array $d[]$.
$p[i]$ will be the index $j$ of the second last element in the longest increasing subsequence ending in $i$.
In other words the index $p[i]$ is the same index $j$ at which the highest value $d[i]$ was obtained.
This auxiliary array $p[]$ points in some sense to the ancestors.
Then to derive the subsequence, we just start at the index $i$ with the maximal $d[i]$, and follow the ancestors until we deduced the entire subsequence, i.e. until we reach the element with $d[i] = 1$.
### Implementation of restoring
We will change the code from the previous sections a little bit.
We will compute the array $p[]$ alongside $d[]$, and afterwards compute the subsequence.
For convenience we originally assign the ancestors with $p[i] = -1$.
For elements with $d[i] = 1$, the ancestors value will remain $-1$, which will be slightly more convenient for restoring the subsequence.
```{.cpp file=lis_n2_restore}
vector<int> lis(vector<int> const& a) {
int n = a.size();
vector<int> d(n, 1), p(n, -1);
for (int i = 0; i < n; i++) {
for (int j = 0; j < i; j++) {
if (a[j] < a[i] && d[i] < d[j] + 1) {
d[i] = d[j] + 1;
p[i] = j;
}
}
}
int ans = d[0], pos = 0;
for (int i = 1; i < n; i++) {
if (d[i] > ans) {
ans = d[i];
pos = i;
}
}
vector<int> subseq;
while (pos != -1) {
subseq.push_back(a[pos]);
pos = p[pos];
}
reverse(subseq.begin(), subseq.end());
return subseq;
}
```
### Alternative way of restoring the subsequence
It is also possible to restore the subsequence without the auxiliary array $p[]$.
We can simply recalculate the current value of $d[i]$ and also see how the maximum was reached.
This method leads to a slightly longer code, but in return we save some memory.
## Solution in $O(n \log n)$ with dynamic programming and binary search {data-toc-label="Solution in O(n log n) with dynamic programming and binary search"}
In order to obtain a faster solution for the problem, we construct a different dynamic programming solution that runs in $O(n^2)$, and then later improve it to $O(n \log n)$.
We will use the dynamic programming array $d[0 \dots n]$.
This time $d[l]$ doesn't corresponds to the element $a[i]$ or to an prefix of the array.
$d[l]$ will be the smallest element at which an increasing subsequence of length $l$ ends.
Initially we assume $d[0] = -\infty$ and for all other lengths $d[l] = \infty$.
We will again gradually process the numbers, first $a[0]$, then $a[1]$, etc, and in each step maintain the array $d[]$ so that it is up to date.
!!! example
Given the array $a = \{8, 3, 4, 6, 5, 2, 0, 7, 9, 1\}$, here are all their prefixes and their dynamic programming array.
Notice, that the values of the array don't always change at the end.
$$
\begin{array}{ll}
\text{prefix} = \{\} &\quad d = \{-\infty, \infty, \dots\}\\
\text{prefix} = \{8\} &\quad d = \{-\infty, 8, \infty, \dots\}\\
\text{prefix} = \{8, 3\} &\quad d = \{-\infty, 3, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4\} &\quad d = \{-\infty, 3, 4, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4, 6\} &\quad d = \{-\infty, 3, 4, 6, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4, 6, 5\} &\quad d = \{-\infty, 3, 4, 5, \infty, \dots\}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2\} &\quad d = \{-\infty, 2, 4, 5, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0\} &\quad d = \{-\infty, 0, 4, 5, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0, 7\} &\quad d = \{-\infty, 0, 4, 5, 7, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0, 7, 9\} &\quad d = \{-\infty, 0, 4, 5, 7, 9, \infty, \dots \}\\
\text{prefix} = \{8, 3, 4, 6, 5, 2, 0, 7, 9, 1\} &\quad d = \{-\infty, 0, 1, 5, 7, 9, \infty, \dots \}\\
\end{array}
$$
When we process $a[i]$, we can ask ourselves.
What have the conditions to be, that we write the current number $a[i]$ into the $d[0 \dots n]$ array?
We set $d[l] = a[i]$, if there is a longest increasing sequence of length $l$ that ends in $a[i]$, and there is no longest increasing sequence of length $l$ that ends in a smaller number.
Similar to the previous approach, if we remove the number $a[i]$ from the longest increasing sequence of length $l$, we get another longest increasing sequence of length $l -1$.
So we want to extend a longest increasing sequence of length $l - 1$ by the number $a[i]$, and obviously the longest increasing sequence of length $l - 1$ that ends with the smallest element will work the best, in other words the sequence of length $l-1$ that ends in element $d[l-1]$.
There is a longest increasing sequence of length $l - 1$ that we can extend with the number $a[i]$, exactly if $d[l-1] < a[i]$.
So we can just iterate over each length $l$, and check if we can extend a longest increasing sequence of length $l - 1$ by checking the criteria.
Additionally we also need to check, if we maybe have already found a longest increasing sequence of length $l$ with a smaller number at the end.
So we only update if $a[i] < d[l]$.
After processing all the elements of $a[]$ the length of the desired subsequence is the largest $l$ with $d[l] < \infty$.
```{.cpp file=lis_method2_n2}
int lis(vector<int> const& a) {
int n = a.size();
const int INF = 1e9;
vector<int> d(n+1, INF);
d[0] = -INF;
for (int i = 0; i < n; i++) {
for (int l = 1; l <= n; l++) {
if (d[l-1] < a[i] && a[i] < d[l])
d[l] = a[i];
}
}
int ans = 0;
for (int l = 0; l <= n; l++) {
if (d[l] < INF)
ans = l;
}
return ans;
}
```
We now make two important observations.
1. The array $d$ will always be sorted:
$d[l-1] < d[l]$ for all $i = 1 \dots n$.
This is trivial, as you can just remove the last element from the increasing subsequence of length $l$, and you get a increasing subsequence of length $l-1$ with a smalller ending number.
2. The element $a[i]$ will only update at most one value $d[l]$.
This follows immediately from the above implementation.
There can only be one place in the array with $d[l-1] < a[i] < d[l]$.
Thus we can find this element in the array $d[]$ using [binary search](../num_methods/binary_search.md) in $O(\log n)$.
In fact we can simply look in the array $d[]$ for the first number that is strictly greater than $a[i]$, and we try to update this element in the same way as the above implementation.
### Implementation
This gives us the improved $O(n \log n)$ implementation:
```{.cpp file=lis_method2_nlogn}
int lis(vector<int> const& a) {
int n = a.size();
const int INF = 1e9;
vector<int> d(n+1, INF);
d[0] = -INF;
for (int i = 0; i < n; i++) {
int l = upper_bound(d.begin(), d.end(), a[i]) - d.begin();
if (d[l-1] < a[i] && a[i] < d[l])
d[l] = a[i];
}
int ans = 0;
for (int l = 0; l <= n; l++) {
if (d[l] < INF)
ans = l;
}
return ans;
}
```
### Restoring the subsequence
It is also possible to restore the subsequence using this approach.
This time we have to maintain two auxiliary arrays.
One that tells us the index of the elements in $d[]$.
And again we have to create an array of "ancestors" $p[i]$.
$p[i]$ will be the index of the previous element for the optimal subsequence ending in element $i$.
It's easy to maintain these two arrays in the course of iteration over the array $a[]$ alongside the computations of $d[]$.
And at the end it is not difficult to restore the desired subsequence using these arrays.
## Solution in $O(n \log n)$ with data structures {data-toc-label="Solution in O(n log n) with data structures"}
Instead of the above method for computing the longest increasing subsequence in $O(n \log n)$ we can also solve the problem in a different way: using some simple data structures.
Let's go back to the first method.
Remember that $d[i]$ is the value $d[j] + 1$ with $j < i$ and $a[j] < a[i]$.
Thus if we define an additional array $t[]$ such that
$$t[a[i]] = d[i],$$
then the problem of computing the value $d[i]$ is equivalent to finding the **maximum value in a prefix** of the array $t[]$:
$$d[i] = \max\left(t[0 \dots a[i] - 1] + 1\right)$$
The problem of finding the maximum of a prefix of an array (which changes) is a standard problem that can be solved by many different data structures.
For instance we can use a [Segment tree](../data_structures/segment_tree.md) or a [Fenwick tree](../data_structures/fenwick.md).
This method has obviously some **shortcomings**:
in terms of length and complexity of the implementation this approach will be worse than the method using binary search.
In addition if the input numbers $a[i]$ are especially large, then we would have to use some tricks, like compressing the numbers (i.e. renumber them from $0$ to $n-1$), or use a dynamic segment tree (only generate the branches of the tree that are important).
Otherwise the memory consumption will be too high.
On the other hand this method has also some **advantages**:
with this method you don't have to think about any tricky properties in the dynamic programming solution.
And this approach allows us to generalize the problem very easily (see below).
## Related tasks
Here are several problems that are closely related to the problem of finding the longest increasing subsequence.
### Longest non-decreasing subsequence
This is in fact nearly the same problem.
Only now it is allowed to use identical numbers in the subsequence.
The solution is essentially also nearly the same.
We just have to change the inequality signs, and make a slightly modification to the binary search.
### Number of longest increasing subsequences
We can use the first discussed method, either the $O(n^2)$ version or the version using data structures.
We only have to additionally store in how many ways we can obtain longest increasing subsequences ending in the values $d[i]$.
The number of ways to form a longest increasing subsequences ending in $a[i]$ is the sum of all ways for all longest increasing subsequences ending in $j$ where $d[j]$ is maximal.
There can be multiple such $j$, so we need to sum all of them.
Using a Segment tree this approach can also be implemented in $O(n \log n)$.
It is not possible to use the binary search approach for this task.
### Smallest number of non-increasing subsequences covering a sequence
For a given array with $n$ numbers $a[0 \dots n - 1]$ we have to colorize the numbers in the smallest number of colors, so that each color forms a non-increasing subsequence.
To solve this, we notice that the minimum number of required colors is equal to the length of the longest increasing subsequence.
**Proof**:
We need to prove the **duality** of these two problems.
Let's denote by $x$ the length of the longest increasing subsequence and by $y$ the least number of non-increasing subsequences that form a cover.
We need to prove that $x = y$.
It is clear that $y < x$ is not possible, because if we have $x$ strictly increasing elements, than no two can be part of the same non-increasing subsequence.
Therefore we have $y \ge x$.
We now show that $y > x$ is not possible by contradiction.
Suppose that $y > x$.
Then we consider any optimal set of $y$ non-increasing subsequences.
We transform this set in the following way:
as long as there are two subsequences such that the first one begins before the second one, and the first one starts with a number greater than or equal to the number the second one starts with, we unhook this starting number and attach it to the beginning of the second one.
After a finite number of steps we have $y$ subsequences, and their starting numbers will form an increasing subsequence of length $y$.
Since we assumed that $y > x$ we reached a contradiction.
Thus it follows that $y = x$.
**Restoring the sequences**:
The desired partition of the sequence into subsequences can be done greedily.
I.e. go from left to right and assign the current number to the subsequence ending with the minimal number which is greater than or equal to the current one.
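A possible sketch of this greedy assignment (the helper name and the use of a `multiset` are our choice, not prescribed by the article): keep the last element of every subsequence built so far, and for each new number pick the subsequence whose last element is the smallest one that is still greater than or equal to it.

```cpp
// Sketch: partition a[] into the minimum number of non-increasing subsequences.
// color[i] is the id of the subsequence that a[i] is assigned to;
// the number of different ids equals the length of the LIS.
vector<int> partition_non_increasing(vector<int> const& a) {
    vector<int> color(a.size());
    multiset<pair<int,int>> tails; // {last element of a subsequence, its id}
    for (size_t i = 0; i < a.size(); i++) {
        auto it = tails.lower_bound({a[i], -1}); // minimal tail >= a[i]
        if (it == tails.end()) {
            color[i] = (int)tails.size(); // no suitable subsequence: open a new one
        } else {
            color[i] = it->second;
            tails.erase(it);
        }
        tails.insert({a[i], color[i]});
    }
    return color;
}
```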
## Practice Problems
- [ACMSGURU - "North-East"](http://codeforces.com/problemsets/acmsguru/problem/99999/521)
- [Codeforces - LCIS](http://codeforces.com/problemset/problem/10/D)
- [Codeforces - Tourist](http://codeforces.com/contest/76/problem/F)
- [SPOJ - DOSA](https://www.spoj.com/problems/DOSA/)
- [SPOJ - HMLIS](https://www.spoj.com/problems/HMLIS/)
- [SPOJ - ONEXLIS](https://www.spoj.com/problems/ONEXLIS/)
- [SPOJ - SUPPER](http://www.spoj.com/problems/SUPPER/)
- [Topcoder - AutoMarket](https://community.topcoder.com/stat?c=problem_statement&pm=3937&rd=6532)
- [Topcoder - BridgeArrangement](https://community.topcoder.com/stat?c=problem_statement&pm=2967&rd=5881)
- [Topcoder - IntegerSequence](https://community.topcoder.com/stat?c=problem_statement&pm=5922&rd=8075)
- [UVA - Back To Edit Distance](https://onlinejudge.org/external/127/12747.pdf)
- [UVA - Happy Birthday](https://onlinejudge.org/external/120/12002.pdf)
- [UVA - Tiling Up Blocks](https://onlinejudge.org/external/11/1196.pdf)
---
title: MEX (minimal excluded) of a sequence
---
# MEX (minimal excluded) of a sequence
Given an array $A$ of size $N$, you have to find the minimal non-negative integer that is not present in the array. That number is commonly called the **MEX** (minimal excluded).
$$
\begin{align}
\text{mex}(\{0, 1, 2, 4, 5\}) &= 3 \\
\text{mex}(\{0, 1, 2, 3, 4\}) &= 5 \\
\text{mex}(\{1, 2, 3, 4, 5\}) &= 0 \\
\end{align}
$$
Notice, that the MEX of an array of size $N$ can never be bigger than $N$ itself.
The easiest approach is to create a set of all elements in the array $A$, so that we can quickly check if a number is part of the array or not.
Then we can check all numbers from $0$ to $N$: if the current number is not present in the set, we return it.
## Implementation
The following algorithm runs in $O(N \log N)$ time.
```{.cpp file=mex_simple}
int mex(vector<int> const& A) {
set<int> b(A.begin(), A.end());
int result = 0;
while (b.count(result))
++result;
return result;
}
```
If an algorithm requires an $O(N)$ MEX computation, this is possible by using a boolean array instead of a set.
Notice that this boolean array needs to be as big as the biggest possible array size.
```{.cpp file=mex_linear}
int mex(vector<int> const& A) {
static bool used[MAX_N+1] = { 0 };
// mark the given numbers
for (int x : A) {
if (x <= MAX_N)
used[x] = true;
}
// find the mex
int result = 0;
while (used[result])
++result;
// clear the array again
for (int x : A) {
if (x <= MAX_N)
used[x] = false;
}
return result;
}
```
This approach is fast, but it only works well if you have to compute the MEX once.
If you need to compute the MEX over and over, e.g. because your array keeps changing, then it is not efficient.
For that, we need something better.
## MEX with array updates
In this problem you need to change individual numbers in the array, and compute the new MEX of the array after each such update.
We need a better data structure that handles such queries efficiently.
One approach would be to take the frequency of each number from $0$ to $N$, and build a tree-like data structure over it.
E.g. a segment tree or a treap.
Each node represents a range of numbers, and together with the total frequency in the range, you additionally store the number of distinct values present in that range.
It's possible to update this data structure in $O(\log N)$ time, and also find the MEX in $O(\log N)$ time, by doing a binary search for the MEX.
If the node representing the range $[0, \lfloor N/2 \rfloor)$ doesn't contain $\lfloor N/2 \rfloor$ many distinct numbers, then one is missing and the MEX is smaller than $\lfloor N/2 \rfloor$, and you can recurse in the left branch of the tree. Otherwise it is at least $\lfloor N/2 \rfloor$, and you can recurse in the right branch of the tree.
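As a hedged sketch of this tree-based idea (structure and method names are ours, not from the article), one can keep a segment tree over the values $0 \dots N-1$ in which every node stores how many distinct values of its range are currently present:

```cpp
// Sketch: MEX with point updates via a segment tree over the values 0..n-1.
struct MexTree {
    int n;
    vector<int> cnt;  // per node: number of distinct values present in its range
    vector<int> freq; // per value: how many times it occurs in the array
    MexTree(int n) : n(n), cnt(4 * n, 0), freq(n, 0) {}

    void modify(int node, int lo, int hi, int value, int delta) {
        if (lo == hi) { cnt[node] += delta; return; }
        int mid = (lo + hi) / 2;
        if (value <= mid) modify(2 * node, lo, mid, value, delta);
        else modify(2 * node + 1, mid + 1, hi, value, delta);
        cnt[node] = cnt[2 * node] + cnt[2 * node + 1];
    }
    void add(int value)    { if (value < n && freq[value]++ == 0) modify(1, 0, n - 1, value, +1); }
    void remove(int value) { if (value < n && --freq[value] == 0) modify(1, 0, n - 1, value, -1); }

    int mex() {
        if (cnt[1] == n) return n; // every value 0..n-1 is present
        int node = 1, lo = 0, hi = n - 1;
        while (lo < hi) { // descend towards the leftmost incomplete range
            int mid = (lo + hi) / 2;
            if (cnt[2 * node] < mid - lo + 1) { node = 2 * node; hi = mid; }
            else { node = 2 * node + 1; lo = mid + 1; }
        }
        return lo;
    }
};
```

An update of $A[i]$ then consists of `remove(old value)` followed by `add(new value)`, and the MEX is obtained with `mex()`, all in $O(\log N)$.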
It's also possible to use the standard library data structures `map` and `set` (based on an approach explained [here](https://codeforces.com/blog/entry/81287?#comment-677837)).
With a `map` we will remember the frequency of each number, and with the `set` we represent the numbers that are currently missing from the array.
Since a `set` is ordered, `*set.begin()` will be the MEX.
In total we need $O(N \log N)$ precomputation, and afterwards the MEX can be computed in $O(1)$ and an update can be performed in $O(\log N)$.
```{.cpp file=mex_updates}
class Mex {
private:
map<int, int> frequency;
set<int> missing_numbers;
vector<int> A;
public:
Mex(vector<int> const& A) : A(A) {
for (int i = 0; i <= A.size(); i++)
missing_numbers.insert(i);
for (int x : A) {
++frequency[x];
missing_numbers.erase(x);
}
}
int mex() {
return *missing_numbers.begin();
}
void update(int idx, int new_value) {
if (--frequency[A[idx]] == 0)
missing_numbers.insert(A[idx]);
A[idx] = new_value;
++frequency[new_value];
missing_numbers.erase(new_value);
}
};
```
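A small, made-up usage example for the class above:

```cpp
vector<int> a = {0, 1, 3, 0};
Mex m(a);
cout << m.mex() << "\n"; // prints 2
m.update(2, 2);          // the array becomes {0, 1, 2, 0}
cout << m.mex() << "\n"; // prints 3
```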
## Practice Problems
- [AtCoder: Neq Min](https://atcoder.jp/contests/hhkb2020/tasks/hhkb2020_c)
- [Codeforces: Replace by MEX](https://codeforces.com/contest/1375/problem/D)
- [Codeforces: Vitya and Strange Lesson](https://codeforces.com/problemset/problem/842/D)
- [Codeforces: MEX Queries](https://codeforces.com/contest/817/problem/F)
---
title
rmq
---
# Range Minimum Query
You are given an array $A[1..N]$.
You have to answer incoming queries of the form $(L, R)$, which ask to find the minimum element in array $A$ between positions $L$ and $R$ inclusive.
RMQ can appear in problems directly or can be applied in some other tasks, e.g. the [Lowest Common Ancestor](../graph/lca.md) problem.
## Solution
There are lots of possible approaches and data structures that you can use to solve the RMQ task.
The ones that are explained on this site are listed below.
First, the approaches that allow modifications to the array between answering queries:
- [Sqrt-decomposition](../data_structures/sqrt_decomposition.md) - answers each query in $O(\sqrt{N})$, preprocessing done in $O(N)$.
Pros: a very simple data structure. Cons: worse complexity.
- [Segment tree](../data_structures/segment_tree.md) - answers each query in $O(\log N)$, preprocessing done in $O(N)$.
Pros: good time complexity. Cons: larger amount of code compared to the other data structures.
- [Fenwick tree](../data_structures/fenwick.md) - answers each query in $O(\log N)$, preprocessing done in $O(N \log N)$.
Pros: the shortest code, good time complexity. Cons: Fenwick tree can only be used for queries with $L = 1$, so it is not applicable to many problems.
And here are the approaches that only work on static arrays, i.e. it is not possible to change a value in the array without recomputing the complete data structure.
- [Sparse Table](../data_structures/sparse-table.md) - answers each query in $O(1)$, preprocessing done in $O(N \log N)$.
Pros: simple data structure, excellent time complexity.
- [Sqrt Tree](../data_structures/sqrt-tree.md) - answers queries in $O(1)$, preprocessing done in $O(N \log \log N)$. Pros: fast. Cons: Complicated to implement.
- [Disjoint Set Union / Arpa's Trick](../data_structures/disjoint_set_union.md#arpa) - answers queries in $O(1)$, preprocessing in $O(n)$. Pros: short, fast. Cons: only works if all queries are known in advance, i.e. only supports off-line processing of the queries.
- [Cartesian Tree](../graph/rmq_linear.md) and [Farach-Colton and Bender algorithm](../graph/lca_farachcoltonbender.md) - answers queries in $O(1)$, preprocessing in $O(n)$. Pros: optimal complexity. Cons: large amount of code.
Note: Preprocessing is the preliminary processing of the given array by building the corresponding data structure for it.
## Practice Problems
- [SPOJ: Range Minimum Query](http://www.spoj.com/problems/RMQSQ/)
- [CODECHEF: Chef And Array](https://www.codechef.com/problems/FRMQ)
- [Codeforces: Array Partition](https://codeforces.com/contest/1454/problem/F)
---
title
randomized_heap
---
# Randomized Heap
A randomized heap is a heap that, through using randomization, allows us to perform all operations in expected logarithmic time.
A **min heap** is a binary tree in which the value of each vertex is less than or equal to the values of its children.
Thus the minimum of the tree is always in the root vertex.
A max heap can be defined in a similar way: by replacing less with greater.
The default operations of a heap are:
- Adding a value
- Extracting the minimum
- Removing the minimum
- Merging two heaps (without deleting duplicates)
- Removing an arbitrary element (if its position in the tree is known)
A randomized heap can perform all these operations in expected $O(\log n)$ time with a very simple implementation.
## Data structure
We can immediately describe the structure of the binary heap:
```{.cpp file=randomized_heap_structure}
struct Tree {
int value;
Tree * l = nullptr;
Tree * r = nullptr;
};
```
In the vertex we store a value.
In addition we have pointers to the left and right children, which point to null if the corresponding child doesn't exist.
## Operations
It is not difficult to see, that all operations can be reduced to a single one: **merging** two heaps into one.
Indeed, adding a new value to the heap is equivalent to merging the heap with a heap consisting of a single vertex with that value.
Finding a minimum doesn't require any operation at all - the minimum is simply the value at the root.
Removing the minimum is equivalent to the result of merging the left and right children of the root vertex.
And removing an arbitrary element is similar.
We merge the children of the vertex and replace the vertex with the result of the merge.
So we actually only need to implement the operation of merging two heaps.
All other operations are trivially reduced to this operation.
Let two heaps $T_1$ and $T_2$ be given.
It is clear that the root of each of these heaps contains its minimum.
So the root of the resulting heap will be the minimum of these two values.
So we compare both values, and use the smaller one as the new root.
Now we have to combine the children of the selected vertex with the remaining heap.
For this we select one of the children, and merge it with the remaining heap.
Thus we again have the operation of merging two heaps.
Sooner or later this process will end (the number of such steps is limited by the sum of the heights of the two heaps).
To achieve logarithmic complexity on average, we need to specify a method for choosing one of the two children so that the average path length is logarithmic.
It is not difficult to guess, that we will make this decision **randomly**.
Thus the implementation of the merging operation is as follows:
```{.cpp file=randomized_heap_merge}
Tree* merge(Tree* t1, Tree* t2) {
if (!t1 || !t2)
return t1 ? t1 : t2;
if (t2->value < t1->value)
swap(t1, t2);
if (rand() & 1)
swap(t1->l, t1->r);
t1->l = merge(t1->l, t2);
return t1;
}
```
Here we first check if one of the heaps is empty; in that case we don't need to perform any merge action at all.
Otherwise we make the heap `t1` the one with the smaller value (by swapping `t1` and `t2` if necessary).
We want to merge the left child of `t1` with `t2`, therefore we randomly swap the children of `t1`, and then perform the merge.
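For completeness, here is a hedged sketch of the remaining operations expressed through `merge` (the helper names are ours, not from the article):

```cpp
Tree* insert(Tree* root, int value) {
    return merge(root, new Tree{value}); // merge with a single-vertex heap
}

int get_min(Tree* root) {
    return root->value; // assumes the heap is not empty
}

Tree* extract_min(Tree* root) {
    Tree* rest = merge(root->l, root->r); // merge the two children of the root
    delete root;
    return rest;
}
```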
## Complexity
We introduce the random variable $h(T)$ which will denote the **length of the random path** from the root to the leaf (the length in the number of edges).
It is clear that the algorithm `merge` performs $O(h(T_1) + h(T_2))$ steps.
Therefore to understand the complexity of the operations, we must look into the random variable $h(T)$.
### Expected value
We claim that the expected value of $h(T)$ can be estimated from above by the logarithm of the number of vertices in the heap:
$$\mathbf{E} h(T) \le \log(n+1)$$
This can be easily proven by induction.
Let $L$ and $R$ be the left and the right subtrees of the root $T$, and $n_L$ and $n_R$ the number of vertices in them ($n = n_L + n_R + 1$).
The following shows the induction step:
$$\begin{align}
\mathbf{E} h(T) &= 1 + \frac{\mathbf{E} h(L) + \mathbf{E} h(R)}{2}
\le 1 + \frac{\log(n_L + 1) + \log(n_R + 1)}{2} \\\\
&= 1 + \log\sqrt{(n_L + 1)(n_R + 1)} = \log 2\sqrt{(n_L + 1)(n_R + 1)} \\\\
&\le \log \frac{2\left((n_L + 1) + (n_R + 1)\right)}{2} = \log(n_L + n_R + 2) = \log(n+1)
\end{align}$$
### Exceeding the expected value
Of course we are still not happy.
The expected value of $h(T)$ doesn't say anything about the worst case.
It is still possible that the lengths of the paths from the root to the vertices are on average much greater than $\log(n + 1)$ for a specific tree.
Let us prove that exceeding the expected value is indeed very small:
$$P\{h(T) > (c+1) \log n\} < \frac{1}{n^c}$$
for any positive constant $c$.
Here we denote by $P$ the set of paths from the root of the heap to the leaves where the length exceeds $(c+1) \log n$.
Note that for any path $p$ of length $|p|$ the probability that it will be chosen as random path is $2^{-|p|}$.
Therefore we get:
$$P\{h(T) > (c+1) \log n\} = \sum_{p \in P} 2^{-|p|} < \sum_{p \in P} 2^{-(c+1) \log n} = |P| n^{-(c+1)} \le n^{-c}$$
### Complexity of the algorithm
Thus the algorithm `merge`, and hence all other operations expressed with it, can be performed in $O(\log n)$ on average.
Moreover for any positive constant $\epsilon$ there is a positive constant $c$, such that the probability that the operation will require more than $c \log n$ steps is less than $n^{-\epsilon}$ (in some sense this describes the worst case behavior of the algorithm).
---
title
fenwick_tree
---
# Fenwick Tree
Let $f$ be some group operation (a binary associative function over a set with an identity element and inverse elements) and $A$ be an array of integers of length $N$.
Fenwick tree is a data structure which:
* calculates the value of function $f$ in the given range $[l, r]$ (i.e. $f(A_l, A_{l+1}, \dots, A_r)$) in $O(\log N)$ time;
* updates the value of an element of $A$ in $O(\log N)$ time;
* requires $O(N)$ memory, or in other words, exactly the same memory required for $A$;
* is easy to use and code, especially, in the case of multidimensional arrays.
The most common application of Fenwick tree is _calculating the sum of a range_ (i.e. using addition over the set of integers $\mathbb{Z}$: $f(A_1, A_2, \dots, A_k) = A_1 + A_2 + \dots + A_k$).
Fenwick tree is also called **Binary Indexed Tree**, or just **BIT** for short.
Fenwick tree was first described in a paper titled "A new data structure for cumulative frequency tables" (Peter M. Fenwick, 1994).
## Description
### Overview
For the sake of simplicity, we will assume that function $f$ is just a *sum function*.
Given an array of integers $A[0 \dots N-1]$.
A Fenwick tree is just an array $T[0 \dots N-1]$, where each of its elements is equal to the sum of elements of $A$ in some range $[g(i), i]$:
$$T_i = \sum_{j = g(i)}^{i}{A_j},$$
where $g$ is some function that satisfies $0 \le g(i) \le i$.
We will define the function in the next few paragraphs.
The data structure is called a tree, because there is a nice representation of it as a tree, although we don't need to model an actual tree with nodes and edges.
We will only need to maintain the array $T$ to handle all queries.
**Note:** The Fenwick tree presented here uses zero-based indexing.
Many people will actually use a version of the Fenwick tree that uses one-based indexing.
Therefore you will also find an alternative implementation using one-based indexing in the implementation section.
Both versions are equivalent in terms of time and memory complexity.
Now we can write some pseudo-code for the two operations mentioned above - get the sum of elements of $A$ in the range $[0, r]$ and update (increase) some element $A_i$:
```python
def sum(int r):
res = 0
while (r >= 0):
res += t[r]
r = g(r) - 1
return res
def increase(int i, int delta):
for all j with g(j) <= i <= j:
t[j] += delta
```
The function `sum` works as follows:
1. first, it adds the sum of the range $[g(r), r]$ (i.e. $T[r]$) to the `result`
2. then, it "jumps" to the range $[g(g(r)-1), g(r)-1]$, and adds this range's sum to the `result`
3. and so on, until it "jumps" from $[0, g(g( \dots g(r)-1 \dots -1)-1)]$ to $[g(-1), -1]$; that is where the `sum` function stops jumping.
The function `increase` works with the same analogy, but "jumps" in the direction of increasing indices:
1. sums of the ranges $[g(j), j]$ that satisfy the condition $g(j) \le i \le j$ are increased by `delta` , that is `t[j] += delta`. Therefore we updated all elements in $T$ that correspond to ranges in which $A_i$ lies.
It is obvious that the complexity of both `sum` and `increase` depends on the function $g$.
There are lots of ways to choose the function $g$, as long as $0 \le g(i) \le i$ for all $i$.
For instance the function $g(i) = i$ works, which results just in $T = A$, and therefore summation queries are slow.
We can also take the function $g(i) = 0$.
This will correspond to prefix sum arrays, which means that finding the sum of the range $[0, i]$ will only take constant time, but updates are slow.
The clever part of the Fenwick algorithm is that it uses a special definition of the function $g$ that can handle both operations in $O(\log N)$ time.
### Definition of $g(i)$ { data-toc-label='Definition of <script type="math/tex">g(i)</script>' }
The computation of $g(i)$ is defined using the following simple operation:
we replace all trailing $1$ bits in the binary representation of $i$ with $0$ bits.
In other words, if the least significant digit of $i$ in binary is $0$, then $g(i) = i$.
And otherwise the least significant digit is a $1$, and we take this $1$ and all other trailing $1$s and flip them.
For instance we get
$$\begin{align}
g(11) = g(1011_2) = 1000_2 &= 8 \\\\
g(12) = g(1100_2) = 1100_2 &= 12 \\\\
g(13) = g(1101_2) = 1100_2 &= 12 \\\\
g(14) = g(1110_2) = 1110_2 &= 14 \\\\
g(15) = g(1111_2) = 0000_2 &= 0 \\\\
\end{align}$$
There exists a simple implementation using bitwise operations for the non-trivial operation described above:
$$g(i) = i ~\&~ (i+1),$$
where $\&$ is the bitwise AND operator. It is not hard to convince yourself that this solution does the same thing as the operation described above.
Now, we just need to find a way to iterate over all $j$'s, such that $g(j) \le i \le j$.
It is easy to see that we can find all such $j$'s by starting with $i$ and flipping the last unset bit.
We will call this operation $h(j)$.
For example, for $i = 10$ we have:
$$\begin{align}
10 &= 0001010_2 \\\\
h(10) = 11 &= 0001011_2 \\\\
h(11) = 15 &= 0001111_2 \\\\
h(15) = 31 &= 0011111_2 \\\\
h(31) = 63 &= 0111111_2 \\\\
\vdots &
\end{align}$$
Unsurprisingly, there also exists a simple way to perform $h$ using bitwise operations:
$$h(j) = j ~\|~ (j+1),$$
where $\|$ is the bitwise OR operator.
The following image shows a possible interpretation of the Fenwick tree as a tree.
The nodes of the tree show the ranges they cover.
## Implementation
### Finding sum in one-dimensional array
Here we present an implementation of the Fenwick tree for sum queries and single updates.
The normal Fenwick tree can only answer sum queries of the type $[0, r]$ using `sum(int r)`, however we can also answer other queries of the type $[l, r]$ by computing two sums $[0, r]$ and $[0, l-1]$ and subtract them.
This is handled in the `sum(int l, int r)` method.
Also this implementation supports two constructors.
You can create a Fenwick tree initialized with zeros, or you can convert an existing array into the Fenwick form.
```{.cpp file=fenwick_sum}
struct FenwickTree {
vector<int> bit; // binary indexed tree
int n;
FenwickTree(int n) {
this->n = n;
bit.assign(n, 0);
}
FenwickTree(vector<int> const &a) : FenwickTree(a.size()) {
for (size_t i = 0; i < a.size(); i++)
add(i, a[i]);
}
int sum(int r) {
int ret = 0;
for (; r >= 0; r = (r & (r + 1)) - 1)
ret += bit[r];
return ret;
}
int sum(int l, int r) {
return sum(r) - sum(l - 1);
}
void add(int idx, int delta) {
for (; idx < n; idx = idx | (idx + 1))
bit[idx] += delta;
}
};
```
### Linear construction
The above implementation requires $O(N \log N)$ time.
It's possible to improve that to $O(N)$ time.
The idea is, that the number $a[i]$ at index $i$ will contribute to the range stored in $bit[i]$, and to all ranges that the index $i | (i + 1)$ contributes to.
So by adding the numbers in order, you only have to push the current sum further to the next range, where it will then get pushed further to the next range, and so on.
```cpp
FenwickTree(vector<int> const &a) : FenwickTree(a.size()){
for (int i = 0; i < n; i++) {
bit[i] += a[i];
int r = i | (i + 1);
if (r < n) bit[r] += bit[i];
}
}
```
### Finding minimum of $[0, r]$ in one-dimensional array { data-toc-label='Finding minimum of <script type="math/tex">[0, r]</script> in one-dimensional array' }
It is obvious that there is no easy way of finding the minimum of the range $[l, r]$ using a Fenwick tree, as a Fenwick tree can only answer queries of type $[0, r]$.
Additionally, each time a value is `update`'d, the new value has to be smaller than the current value.
Both of these significant limitations exist because the $\min$ operation together with the set of integers doesn't form a group, as there are no inverse elements.
```{.cpp file=fenwick_min}
struct FenwickTreeMin {
vector<int> bit;
int n;
const int INF = (int)1e9;
FenwickTreeMin(int n) {
this->n = n;
bit.assign(n, INF);
}
FenwickTreeMin(vector<int> a) : FenwickTreeMin(a.size()) {
for (size_t i = 0; i < a.size(); i++)
update(i, a[i]);
}
int getmin(int r) {
int ret = INF;
for (; r >= 0; r = (r & (r + 1)) - 1)
ret = min(ret, bit[r]);
return ret;
}
void update(int idx, int val) {
for (; idx < n; idx = idx | (idx + 1))
bit[idx] = min(bit[idx], val);
}
};
```
Note: it is possible to implement a Fenwick tree that can handle arbitrary minimum range queries and arbitrary updates.
The paper [Efficient Range Minimum Queries using Binary Indexed Trees](http://ioinformatics.org/oi/pdf/v9_2015_39_44.pdf) describes such an approach.
However with that approach you need to maintain a second binary indexed tree over the data, with a slightly different structure, since one tree is not enough to store the values of all elements in the array.
The implementation is also a lot harder compared to the normal implementation for sums.
### Finding sum in two-dimensional array
As claimed before, it is very easy to implement a Fenwick tree for a multidimensional array.
```cpp
struct FenwickTree2D {
vector<vector<int>> bit;
int n, m;
// init(...) { ... }
int sum(int x, int y) {
int ret = 0;
for (int i = x; i >= 0; i = (i & (i + 1)) - 1)
for (int j = y; j >= 0; j = (j & (j + 1)) - 1)
ret += bit[i][j];
return ret;
}
void add(int x, int y, int delta) {
for (int i = x; i < n; i = i | (i + 1))
for (int j = y; j < m; j = j | (j + 1))
bit[i][j] += delta;
}
};
```
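The omitted initialization could look as follows (a sketch, not part of the original code): allocate an $n \times m$ table filled with zeros.

```cpp
FenwickTree2D(int n, int m) : n(n), m(m) {
    bit.assign(n, vector<int>(m, 0));
}
```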
### One-based indexing approach
For this approach we change the requirements and definition for $T[]$ and $g()$ a little bit.
We want $T[i]$ to store the sum of $[g(i)+1; i]$.
This changes the implementation a little bit, and allows for a similar nice definition for $g(i)$:
```python
def sum(int r):
res = 0
while (r > 0):
res += t[r]
r = g(r)
return res
def increase(int i, int delta):
for all j with g(j) < i <= j:
t[j] += delta
```
The computation of $g(i)$ is defined as toggling the last set $1$ bit in the binary representation of $i$.
$$\begin{align}
g(7) = g(111_2) = 110_2 &= 6 \\\\
g(6) = g(110_2) = 100_2 &= 4 \\\\
g(4) = g(100_2) = 000_2 &= 0 \\\\
\end{align}$$
The last set bit can be extracted using $i ~\&~ (-i)$, so the operation can be expressed as:
$$g(i) = i - (i ~\&~ (-i)).$$
And it's not hard to see that you need to change all values $T[j]$ in the sequence $i,~ h(i),~ h(h(i)),~ \dots$ when you want to update $A[i]$, where $h(i)$ is defined as:
$$h(i) = i + (i ~\&~ (-i)).$$
As you can see, the main benefit of this approach is that the binary operations complement each other very nicely.
The following implementation can be used like the other implementations, however it uses one-based indexing internally.
```{.cpp file=fenwick_sum_onebased}
struct FenwickTreeOneBasedIndexing {
vector<int> bit; // binary indexed tree
int n;
FenwickTreeOneBasedIndexing(int n) {
this->n = n + 1;
bit.assign(n + 1, 0);
}
FenwickTreeOneBasedIndexing(vector<int> a)
: FenwickTreeOneBasedIndexing(a.size()) {
for (size_t i = 0; i < a.size(); i++)
add(i, a[i]);
}
int sum(int idx) {
int ret = 0;
for (++idx; idx > 0; idx -= idx & -idx)
ret += bit[idx];
return ret;
}
int sum(int l, int r) {
return sum(r) - sum(l - 1);
}
void add(int idx, int delta) {
for (++idx; idx < n; idx += idx & -idx)
bit[idx] += delta;
}
};
```
## Range operations
A Fenwick tree can support the following range operations:
1. Point Update and Range Query
2. Range Update and Point Query
3. Range Update and Range Query
### 1. Point Update and Range Query
This is just the ordinary Fenwick tree as explained above.
### 2. Range Update and Point Query
Using simple tricks we can also do the reverse operations: increasing ranges and querying for single values.
Let the Fenwick tree be initialized with zeros.
Suppose that we want to increment the interval $[l, r]$ by $x$.
We make two point update operations on Fenwick tree which are `add(l, x)` and `add(r+1, -x)`.
If we want to get the value of $A[i]$, we just need to take the prefix sum using the ordinary range sum method.
To see why this is true, we can just focus on the previous increment operation again.
If $i < l$, then the two update operations have no effect on the query and we get the sum $0$.
If $i \in [l, r]$, then we get the answer $x$ because of the first update operation.
And if $i > r$, then the second update operation will cancel the effect of first one.
The following implementation uses one-based indexing.
```cpp
void add(int idx, int val) {
for (++idx; idx < n; idx += idx & -idx)
bit[idx] += val;
}
void range_add(int l, int r, int val) {
add(l, val);
add(r + 1, -val);
}
int point_query(int idx) {
int ret = 0;
for (++idx; idx > 0; idx -= idx & -idx)
ret += bit[idx];
return ret;
}
```
Note: of course it is also possible to increase a single point $A[i]$ with `range_add(i, i, val)`.
### 3. Range Updates and Range Queries
To support both range updates and range queries we will use two BITs namely $B_1[]$ and $B_2[]$, initialized with zeros.
Suppose that we want to increment the interval $[l, r]$ by the value $x$.
Similarly as in the previous method, we perform two point updates on $B_1$: `add(B1, l, x)` and `add(B1, r+1, -x)`.
And we also update $B_2$. The details will be explained later.
```python
def range_add(l, r, x):
add(B1, l, x)
add(B1, r+1, -x)
add(B2, l, x*(l-1))
    add(B2, r+1, -x*r)
```
After the range update $(l, r, x)$ the range sum query should return the following values:
$$
sum[0, i]=
\begin{cases}
0 & i < l \\\\
x \cdot (i-(l-1)) & l \le i \le r \\\\
x \cdot (r-l+1) & i > r \\\\
\end{cases}
$$
We can write the range sum as the difference of two terms, where we use $B_1$ for the first term and $B_2$ for the second term.
The difference of the queries will give us prefix sum over $[0, i]$.
$$\begin{align}
sum[0, i] &= sum(B_1, i) \cdot i - sum(B_2, i) \\\\
&= \begin{cases}
0 \cdot i - 0 & i < l\\\\
x \cdot i - x \cdot (l-1) & l \le i \le r \\\\
0 \cdot i - (x \cdot (l-1) - x \cdot r) & i > r \\\\
\end{cases}
\end{align}
$$
The last expression is exactly equal to the required terms.
Thus we can use $B_2$ for shaving off the extra terms when we multiply $sum(B_1, i) \cdot i$.
We can find arbitrary range sums by computing the prefix sums for $l-1$ and $r$ and taking the difference of them again.
```python
def add(b, idx, x):
while idx <= N:
b[idx] += x
idx += idx & -idx
def range_add(l,r,x):
add(B1, l, x)
add(B1, r+1, -x)
add(B2, l, x*(l-1))
add(B2, r+1, -x*r)
def sum(b, idx):
total = 0
while idx > 0:
total += b[idx]
idx -= idx & -idx
return total
def prefix_sum(idx):
return sum(B1, idx)*idx - sum(B2, idx)
def range_sum(l, r):
return prefix_sum(r) - prefix_sum(l-1)
```
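For reference, here is a hedged C++ sketch of the same two-BIT technique with one-based indexing (the struct and method names are ours):

```cpp
struct FenwickTreeRangeRange {
    int n;
    vector<long long> B1, B2; // the two BITs described above

    FenwickTreeRangeRange(int n) : n(n), B1(n + 1, 0), B2(n + 1, 0) {}

    void add(vector<long long> &b, int idx, long long x) {
        for (; idx <= n; idx += idx & -idx)
            b[idx] += x;
    }
    void range_add(int l, int r, long long x) { // 1-based, inclusive
        add(B1, l, x);
        add(B1, r + 1, -x);
        add(B2, l, x * (l - 1));
        add(B2, r + 1, -x * r);
    }
    long long sum(vector<long long> &b, int idx) {
        long long total = 0;
        for (; idx > 0; idx -= idx & -idx)
            total += b[idx];
        return total;
    }
    long long prefix_sum(int idx) {
        return sum(B1, idx) * idx - sum(B2, idx);
    }
    long long range_sum(int l, int r) {
        return prefix_sum(r) - prefix_sum(l - 1);
    }
};
```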
## Practice Problems
* [UVA 12086 - Potentiometers](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=24&page=show_problem&problem=3238)
* [LOJ 1112 - Curious Robin Hood](http://www.lightoj.com/volume_showproblem.php?problem=1112)
* [LOJ 1266 - Points in Rectangle](http://www.lightoj.com/volume_showproblem.php?problem=1266 "2D Fenwick Tree")
* [Codechef - SPREAD](http://www.codechef.com/problems/SPREAD)
* [SPOJ - CTRICK](http://www.spoj.com/problems/CTRICK/)
* [SPOJ - MATSUM](http://www.spoj.com/problems/MATSUM/)
* [SPOJ - DQUERY](http://www.spoj.com/problems/DQUERY/)
* [SPOJ - NKTEAM](http://www.spoj.com/problems/NKTEAM/)
* [SPOJ - YODANESS](http://www.spoj.com/problems/YODANESS/)
* [SRM 310 - FloatingMedian](https://community.topcoder.com/stat?c=problem_statement&pm=6551&rd=9990)
* [SPOJ - Ada and Behives](http://www.spoj.com/problems/ADABEHIVE/)
* [Hackerearth - Counting in Byteland](https://www.hackerearth.com/practice/data-structures/advanced-data-structures/fenwick-binary-indexed-trees/practice-problems/algorithm/counting-in-byteland/)
* [DevSkill - Shan and String (archived)](http://web.archive.org/web/20210322010617/https://devskill.com/CodingProblems/ViewProblem/300)
* [Codeforces - Little Artem and Time Machine](http://codeforces.com/contest/669/problem/E)
* [Codeforces - Hanoi Factory](http://codeforces.com/contest/777/problem/E)
* [SPOJ - Tulip and Numbers](http://www.spoj.com/problems/TULIPNUM/)
* [SPOJ - SUMSUM](http://www.spoj.com/problems/SUMSUM/)
* [SPOJ - Sabir and Gifts](http://www.spoj.com/problems/SGIFT/)
* [SPOJ - The Permutation Game Again](http://www.spoj.com/problems/TPGA/)
* [SPOJ - Zig when you Zag](http://www.spoj.com/problems/ZIGZAG2/)
* [SPOJ - Crayon](http://www.spoj.com/problems/CRAYON/)
* [SPOJ - Weird Points](http://www.spoj.com/problems/DCEPC705/)
* [SPOJ - Its a Murder](http://www.spoj.com/problems/DCEPC206/)
* [SPOJ - Bored of Suffixes and Prefixes](http://www.spoj.com/problems/KOPC12G/)
* [SPOJ - Mega Inversions](http://www.spoj.com/problems/TRIPINV/)
* [Codeforces - Subsequences](http://codeforces.com/contest/597/problem/C)
* [Codeforces - Ball](http://codeforces.com/contest/12/problem/D)
* [GYM - The Kamphaeng Phet's Chedis](http://codeforces.com/gym/101047/problem/J)
* [Codeforces - Garlands](http://codeforces.com/contest/707/problem/E)
* [Codeforces - Inversions after Shuffle](http://codeforces.com/contest/749/problem/E)
* [GYM - Cairo Market](http://codeforces.com/problemset/gymProblem/101055/D)
* [Codeforces - Goodbye Souvenir](http://codeforces.com/contest/849/problem/E)
* [SPOJ - Ada and Species](http://www.spoj.com/problems/ADACABAA/)
* [Codeforces - Thor](https://codeforces.com/problemset/problem/704/A)
* [CSES - Forest Queries II](https://cses.fi/problemset/task/1739/)
* [Latin American Regionals 2017 - Fundraising](http://matcomgrader.com/problem/9346/fundraising/)
## Other sources
* [Fenwick tree on Wikipedia](http://en.wikipedia.org/wiki/Fenwick_tree)
* [Binary indexed trees tutorial on TopCoder](https://www.topcoder.com/community/data-science/data-science-tutorials/binary-indexed-trees/)
* [Range updates and queries ](https://programmingcontests.quora.com/Tutorial-Range-Updates-in-Fenwick-Tree)
---
title
sparse-table
---
# Sparse Table
Sparse Table is a data structure that allows answering range queries.
It can answer most range queries in $O(\log n)$, but its true power is answering range minimum queries (or equivalent range maximum queries).
For those queries it can compute the answer in $O(1)$ time.
The only drawback of this data structure is that it can only be used on _immutable_ arrays.
This means that the array cannot be changed between two queries.
If any element in the array changes, the complete data structure has to be recomputed.
## Intuition
Any non-negative number can be uniquely represented as a sum of decreasing powers of two.
This is just a variant of the binary representation of a number.
E.g. $13 = (1101)_2 = 8 + 4 + 1$.
For a number $x$ there can be at most $\lceil \log_2 x \rceil$ summands.
By the same reasoning any interval can be uniquely represented as a union of intervals with lengths that are decreasing powers of two.
E.g. $[2, 14] = [2, 9] \cup [10, 13] \cup [14, 14]$, where the complete interval has length 13, and the individual intervals have the lengths 8, 4 and 1 respectively.
And also here the union consists of at most $\lceil \log_2(\text{length of interval}) \rceil$ many intervals.
The main idea behind Sparse Tables is to precompute all answers for range queries with power of two length.
Afterwards a different range query can be answered by splitting the range into ranges with power of two lengths, looking up the precomputed answers, and combining them to receive a complete answer.
## Precomputation
We will use a 2-dimensional array for storing the answers to the precomputed queries.
$\text{st}[i][j]$ will store the answer for the range $[j, j + 2^i - 1]$ of length $2^i$.
The size of the 2-dimensional array will be $(K + 1) \times \text{MAXN}$, where $\text{MAXN}$ is the biggest possible array length.
$\text{K}$ has to satisfy $\text{K} \ge \lfloor \log_2 \text{MAXN} \rfloor$, because $2^{\lfloor \log_2 \text{MAXN} \rfloor}$ is the biggest power of two range that we have to support.
For arrays with reasonable length ($\le 10^7$ elements), $K = 25$ is a good value.
The $\text{MAXN}$ dimension is second to allow (cache friendly) consecutive memory accesses.
```{.cpp file=sparsetable_definition}
int st[K + 1][MAXN];
```
Because the range $[j, j + 2^i - 1]$ of length $2^i$ splits nicely into the ranges $[j, j + 2^{i - 1} - 1]$ and $[j + 2^{i - 1}, j + 2^i - 1]$, both of length $2^{i - 1}$, we can generate the table efficiently using dynamic programming:
```{.cpp file=sparsetable_generation}
std::copy(array.begin(), array.end(), st[0]);
for (int i = 1; i <= K; i++)
for (int j = 0; j + (1 << i) <= N; j++)
st[i][j] = f(st[i - 1][j], st[i - 1][j + (1 << (i - 1))]);
```
The function $f$ will depend on the type of query.
For range sum queries it will compute the sum, for range minimum queries it will compute the minimum.
The time complexity of the precomputation is $O(\text{N} \log \text{N})$.
## Range Sum Queries
For this type of queries, we want to find the sum of all values in a range.
Therefore the natural definition of the function $f$ is $f(x, y) = x + y$.
We can construct the data structure with:
```{.cpp file=sparsetable_sum_generation}
long long st[K + 1][MAXN];
std::copy(array.begin(), array.end(), st[0]);
for (int i = 1; i <= K; i++)
for (int j = 0; j + (1 << i) <= N; j++)
st[i][j] = st[i - 1][j] + st[i - 1][j + (1 << (i - 1))];
```
To answer the sum query for the range $[L, R]$, we iterate over all powers of two, starting from the biggest one.
As soon as a power of two $2^i$ is smaller or equal to the length of the range ($= R - L + 1$), we process the first part of range $[L, L + 2^i - 1]$, and continue with the remaining range $[L + 2^i, R]$.
```{.cpp file=sparsetable_sum_query}
long long sum = 0;
for (int i = K; i >= 0; i--) {
if ((1 << i) <= R - L + 1) {
sum += st[i][L];
L += 1 << i;
}
}
```
Time complexity for a Range Sum Query is $O(K) = O(\log \text{MAXN})$.
## Range Minimum Queries (RMQ)
These are the queries where the Sparse Table shines.
When computing the minimum of a range, it doesn't matter if we process a value in the range once or twice.
Therefore instead of splitting a range into multiple ranges, we can also split the range into only two overlapping ranges with power of two length.
E.g. we can split the range $[1, 6]$ into the ranges $[1, 4]$ and $[3, 6]$.
The range minimum of $[1, 6]$ is clearly the same as the minimum of the range minimum of $[1, 4]$ and the range minimum of $[3, 6]$.
So we can compute the minimum of the range $[L, R]$ with:
$$\min(\text{st}[i][L], \text{st}[i][R - 2^i + 1]) \quad \text{ where } i = \log_2(R - L + 1)$$
This requires that we are able to compute $\log_2(R - L + 1)$ fast.
You can accomplish that by precomputing all logarithms:
```{.cpp file=sparse_table_log_table}
int lg[MAXN+1];
lg[1] = 0;
for (int i = 2; i <= MAXN; i++)
lg[i] = lg[i/2] + 1;
```
Alternatively, log can be computed on the fly in constant space and time:
```c++
// C++20
#include <bit>
int log2_floor(unsigned long i) {
return std::bit_width(i) - 1;
}
// pre C++20
int log2_floor(unsigned long long i) {
return i ? __builtin_clzll(1) - __builtin_clzll(i) : -1;
}
```
[This benchmark](https://quick-bench.com/q/Zghbdj_TEkmw4XG2nqOpD3tsJ8U) shows that using `lg` array is slower because of cache misses.
Afterwards we need to precompute the Sparse Table structure. This time we define $f$ with $f(x, y) = \min(x, y)$.
```{.cpp file=sparse_table_minimum_generation}
int st[K + 1][MAXN];
std::copy(array.begin(), array.end(), st[0]);
for (int i = 1; i <= K; i++)
for (int j = 0; j + (1 << i) <= N; j++)
st[i][j] = min(st[i - 1][j], st[i - 1][j + (1 << (i - 1))]);
```
And the minimum of a range $[L, R]$ can be computed with:
```{.cpp file=sparse_table_minimum_query}
int i = lg[R - L + 1];
int minimum = min(st[i][L], st[i][R - (1 << i) + 1]);
```
Time complexity for a Range Minimum Query is $O(1)$.
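Putting the pieces together, a complete sparse table for range minimum queries could look as follows. This is a sketch of my own (the `SparseTable` wrapper, 0-based indices, and inclusive query ranges are assumptions), combining the precomputation and the query from above:
```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Sketch: sparse table answering range minimum queries on a fixed array.
struct SparseTable {
    vector<vector<int>> st;
    vector<int> lg;

    SparseTable(const vector<int> &a) {
        int n = a.size();
        lg.assign(n + 1, 0);
        for (int i = 2; i <= n; i++)
            lg[i] = lg[i / 2] + 1;
        int K = lg[n] + 1;                 // number of levels: floor(log2 n) + 1
        st.assign(K, vector<int>(n));
        st[0] = a;
        for (int i = 1; i < K; i++)
            for (int j = 0; j + (1 << i) <= n; j++)
                st[i][j] = min(st[i - 1][j], st[i - 1][j + (1 << (i - 1))]);
    }

    // minimum of a[L..R], requires L <= R
    int query(int L, int R) const {
        int i = lg[R - L + 1];
        return min(st[i][L], st[i][R - (1 << i) + 1]);
    }
};
```
For example, a `SparseTable` built over `{3, 1, 4, 1, 5}` answers `query(2, 4)` with $1$, the minimum of the subarray $\{4, 1, 5\}$.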
## Similar data structures supporting more types of queries
One of the main weaknesses of the $O(1)$ approach discussed in the previous section is that it only supports queries of [idempotent functions](https://en.wikipedia.org/wiki/Idempotence).
I.e. it works great for range minimum queries, but it is not possible to answer range sum queries using this approach.
There are similar data structures that can handle any type of associative functions and answer range queries in $O(1)$.
One of them is called [Disjoint Sparse Table](https://discuss.codechef.com/questions/117696/tutorial-disjoint-sparse-table).
Another one would be the [Sqrt Tree](sqrt-tree.md).
## Practice Problems
* [SPOJ - RMQSQ](http://www.spoj.com/problems/RMQSQ/)
* [SPOJ - THRBL](http://www.spoj.com/problems/THRBL/)
* [Codechef - MSTICK](https://www.codechef.com/problems/MSTICK)
* [Codechef - SEAD](https://www.codechef.com/problems/SEAD)
* [Codeforces - CGCDSSQ](http://codeforces.com/contest/475/problem/D)
* [Codeforces - R2D2 and Droid Army](http://codeforces.com/problemset/problem/514/D)
* [Codeforces - Maximum of Maximums of Minimums](http://codeforces.com/problemset/problem/872/B)
* [SPOJ - Miraculous](http://www.spoj.com/problems/TNVFC1M/)
* [DevSkill - Multiplication Interval (archived)](http://web.archive.org/web/20200922003506/https://devskill.com/CodingProblems/ViewProblem/19)
* [Codeforces - Animals and Puzzles](http://codeforces.com/contest/713/problem/D)
* [Codeforces - Trains and Statistics](http://codeforces.com/contest/675/problem/E)
* [SPOJ - Postering](http://www.spoj.com/problems/POSTERIN/)
* [SPOJ - Negative Score](http://www.spoj.com/problems/RPLN/)
* [SPOJ - A Famous City](http://www.spoj.com/problems/CITY2/)
* [SPOJ - Diferencija](http://www.spoj.com/problems/DIFERENC/)
* [Codeforces - Turn off the TV](http://codeforces.com/contest/863/problem/E)
* [Codeforces - Map](http://codeforces.com/contest/15/problem/D)
* [Codeforces - Awards for Contestants](http://codeforces.com/contest/873/problem/E)
* [Codeforces - Longest Regular Bracket Sequence](http://codeforces.com/contest/5/problem/C)
* [Codeforces - Array Stabilization (GCD version)](http://codeforces.com/problemset/problem/1547/F)
---
title
treap
---
# Treap (Cartesian tree)
A treap is a data structure which combines binary tree and binary heap (hence the name: tree + heap $\Rightarrow$ Treap).
More specifically, treap is a data structure that stores pairs $(X, Y)$ in a binary tree in such a way that it is a binary search tree by $X$ and a binary heap by $Y$.
If some node of the tree contains values $(X_0, Y_0)$, all nodes in the left subtree have $X \leq X_0$, all nodes in the right subtree have $X_0 \leq X$, and all nodes in both left and right subtrees have $Y \leq Y_0$.
A treap is also often referred to as a "cartesian tree", as it is easy to embed it in a Cartesian plane:
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/e/e4/Treap.svg" width="350px"/>
</center>
Treaps were proposed by Raimund Seidel and Cecilia Aragon in 1989.
## Advantages of such data organisation
In such implementation, $X$ values are the keys (and at same time the values stored in the treap), and $Y$ values are called **priorities**. Without priorities, the treap would be a regular binary search tree by $X$, and one set of $X$ values could correspond to a lot of different trees, some of them degenerate (for example, in the form of a linked list), and therefore extremely slow (the main operations would have $O(N)$ complexity).
At the same time, **priorities** (when they're unique) allow us to **uniquely** specify the tree that will be constructed (of course, it does not depend on the order in which values are added), which can be proven using the corresponding theorem. Obviously, if you **choose the priorities randomly**, you will get non-degenerate trees on average, which will ensure $O(\log N)$ complexity for the main operations. Hence another name for this data structure - **randomized binary search tree**.
## Operations
A treap provides the following operations:
- **Insert (X,Y)** in $O(\log N)$.
Adds a new node to the tree. One possible variant is to pass only $X$ and generate $Y$ randomly inside the operation.
- **Search (X)** in $O(\log N)$.
Looks for a node with the specified key value $X$. The implementation is the same as for an ordinary binary search tree.
- **Erase (X)** in $O(\log N)$.
Looks for a node with the specified key value $X$ and removes it from the tree.
- **Build ($X_1$, ..., $X_N$)** in $O(N)$.
Builds a tree from a list of values. This can be done in linear time (assuming that $X_1, ..., X_N$ are sorted).
- **Union ($T_1$, $T_2$)** in $O(M \log (N/M))$.
Merges two trees, assuming that all the elements are different. It is possible to achieve the same complexity if duplicate elements should be removed during merge.
- **Intersect ($T_1$, $T_2$)** in $O(M \log (N/M))$.
Finds the intersection of two trees (i.e. their common elements). We will not consider the implementation of this operation here.
In addition, due to the fact that a treap is a binary search tree, it can implement other operations, such as finding the $K$-th largest element or finding the index of an element.
## Implementation Description
In terms of implementation, each node contains $X$, $Y$ and pointers to the left ($L$) and right ($R$) children.
We will implement all the required operations using just two auxiliary operations: Split and Merge.
### Split
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/6/69/Treap_split.svg" width="450px"/>
</center>
**Split ($T$, $X$)** separates tree $T$ in 2 subtrees $L$ and $R$ trees (which are the return values of split) so that $L$ contains all elements with key $X_L \le X$, and $R$ contains all elements with key $X_R > X$. This operation has $O (\log N)$ complexity and is implemented using a clean recursion:
1. If the value of the root node (R) is $\le X$, then `L` would at least consist of `R->L` and `R`. We then call split on `R->R`, and note its split result as `L'` and `R'`. Finally, `L` would also contain `L'`, whereas `R = R'`.
2. If the value of the root node (R) is $> X$, then `R` would at least consist of `R` and `R->R`. We then call split on `R->L`, and note its split result as `L'` and `R'`. Finally, `L=L'`, whereas `R` would also contain `R'`.
Thus, the split algorithm is:
1. decide which subtree the root node would belong to (left or right)
2. recursively call split on one of its children
3. create the final result by reusing the recursive split call.
### Merge
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/a/a8/Treap_merge.svg" width="500px"/>
</center>
**Merge ($T_1$, $T_2$)** combines two subtrees $T_1$ and $T_2$ and returns the new tree. This operation also has $O (\log N)$ complexity. It works under the assumption that $T_1$ and $T_2$ are ordered (all keys $X$ in $T_1$ are smaller than keys in $T_2$). Thus, we need to combine these trees without violating the order of priorities $Y$. To do this, we choose as the root the tree which has higher priority $Y$ in the root node, and recursively call Merge for the other tree and the corresponding subtree of the selected root node.
### Insert
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/3/35/Treap_insert.svg" width="500px"/>
</center>
Now implementation of **Insert ($X$, $Y$)** becomes obvious. First we descend in the tree (as in a regular binary search tree by X), and stop at the first node in which the priority value is less than $Y$. We have found the place where we will insert the new element. Next, we call **Split (T, X)** on the subtree starting at the found node, and use returned subtrees $L$ and $R$ as left and right children of the new node.
Alternatively, insert can be done by splitting the initial treap on $X$ and doing $2$ merges with the new node (see the picture).
### Erase
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/6/62/Treap_erase.svg" width="500px"/>
</center>
Implementation of **Erase ($X$)** is also clear. First we descend in the tree (as in a regular binary search tree by $X$), looking for the element we want to delete. Once the node is found, we call **Merge** on its children and put the return value of the operation in the place of the element we're deleting.
Alternatively, we can factor out the subtree holding $X$ with $2$ split operations and merge the remaining treaps (see the picture).
### Build
We implement **Build** operation with $O (N \log N)$ complexity using $N$ **Insert** calls.
### Union
**Union ($T_1$, $T_2$)** has theoretical complexity $O (M \log (N / M))$, but in practice it works very well, probably with a very small hidden constant. Let's assume without loss of generality that $T_1 \rightarrow Y > T_2 \rightarrow Y$, i. e. root of $T_1$ will be the root of the result. To get the result, we need to merge trees $T_1 \rightarrow L$, $T_1 \rightarrow R$ and $T_2$ in two trees which could be children of $T_1$ root. To do this, we call Split ($T_2$, $T_1\rightarrow X$), thus splitting $T_2$ in two parts L and R, which we then recursively combine with children of $T_1$: Union ($T_1 \rightarrow L$, $L$) and Union ($T_1 \rightarrow R$, $R$), thus getting left and right subtrees of the result.
## Implementation
```cpp
struct item {
int key, prior;
item *l, *r;
item () { }
item (int key) : key(key), prior(rand()), l(NULL), r(NULL) { }
item (int key, int prior) : key(key), prior(prior), l(NULL), r(NULL) { }
};
typedef item* pitem;
```
This is our item definition. Note that there are two child pointers, an integer key (for the BST), and an integer priority (for the heap). The priority is assigned using a random number generator.
```cpp
void split (pitem t, int key, pitem & l, pitem & r) {
if (!t)
l = r = NULL;
else if (t->key <= key)
split (t->r, key, t->r, r), l = t;
else
split (t->l, key, l, t->l), r = t;
}
```
`t` is the treap to split, and `key` is the BST value by which to split. Note that we do not `return` the result values anywhere, instead, we just use them like so:
```cpp
pitem l = nullptr, r = nullptr;
split(t, 5, l, r);
// `cnt` is the subtree size field introduced in "Maintaining the sizes of subtrees" below
if (l) cout << "Left subtree size: " << l->cnt << endl;
if (r) cout << "Right subtree size: " << r->cnt << endl;
```
This `split` function can be tricky to understand, as it has both pointers (`pitem`) as well as reference to those pointers (`pitem &l`). Let us understand in words what the function call `split(t, k, l, r)` intends: "split treap `t` by value `k` into two treaps, and store the left treaps in `l` and right treap in `r`". Great! Now, let us apply this definition to the two recursive calls, using the case work we analyzed in the previous section: (The first if condition is a trivial base case for an empty treap)
1. When the root node value is $\le$ key, we call `split (t->r, key, t->r, r)`, which means: "split treap `t->r` (right subtree of `t`) by value `key` and store the left subtree in `t->r` and right subtree in `r`". After that, we set `l = t`. Note now that the `l` result value contains `t->l`, `t` as well as `t->r` (which is the result of the recursive call we made) all already merged in the correct order! You should pause to ensure that this result of `l` and `r` corresponds exactly with what we discussed earlier in Implementation Description.
2. When the root node value is greater than key, we call `split (t->l, key, l, t->l)`, which means: "split treap `t->l` (left subtree of `t`) by value `key` and store the left subtree in `l` and right subtree in `t->l`". After that, we set `r = t`. Note now that the `r` result value contains `t->l` (which is the result of the recursive call we made), `t` as well as `t->r`, all already merged in the correct order! You should pause to ensure that this result of `l` and `r` corresponds exactly with what we discussed earlier in Implementation Description.
If you're still having trouble understanding the implementation, you should look at it _inductively_, that is: do *not* try to break down the recursive calls over and over again. Assume the split implementation works correct on empty treap, then try to run it for a single node treap, then a two node treap, and so on, each time reusing your knowledge that split on smaller treaps works.
```cpp
void insert (pitem & t, pitem it) {
if (!t)
t = it;
else if (it->prior > t->prior)
split (t, it->key, it->l, it->r), t = it;
else
insert (t->key <= it->key ? t->r : t->l, it);
}
void merge (pitem & t, pitem l, pitem r) {
if (!l || !r)
t = l ? l : r;
else if (l->prior > r->prior)
merge (l->r, l->r, r), t = l;
else
merge (r->l, l, r->l), t = r;
}
void erase (pitem & t, int key) {
if (t->key == key) {
pitem th = t;
merge (t, t->l, t->r);
delete th;
}
else
erase (key < t->key ? t->l : t->r, key);
}
pitem unite (pitem l, pitem r) {
if (!l || !r) return l ? l : r;
if (l->prior < r->prior) swap (l, r);
pitem lt, rt;
split (r, l->key, lt, rt);
l->l = unite (l->l, lt);
l->r = unite (l->r, rt);
return l;
}
```
## Maintaining the sizes of subtrees
To extend the functionality of the treap, it is often necessary to store the number of nodes in the subtree of each node - field `int cnt` in the `item` structure. For example, it can be used to find the K-th largest element of the tree in $O (\log N)$, or to find the index of an element in the sorted list with the same complexity. The implementation of these operations is the same as for a regular binary search tree.
When a tree changes (nodes are added or removed etc.), `cnt` of some nodes should be updated accordingly. We'll create two functions: `cnt()` will return the current value of `cnt` or 0 if the node does not exist, and `upd_cnt()` will update the value of `cnt` for this node assuming that for its children L and R the values of `cnt` have already been updated. Evidently it's sufficient to add calls of `upd_cnt()` to the end of `insert`, `erase`, `split` and `merge` to keep `cnt` values up-to-date.
```cpp
int cnt (pitem t) {
return t ? t->cnt : 0;
}
void upd_cnt (pitem t) {
if (t)
t->cnt = 1 + cnt(t->l) + cnt (t->r);
}
```
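As an illustration of what the subtree sizes enable, here is one possible sketch (my own, not from the original article) of finding the node with the $K$-th smallest key, with $K$ being 0-indexed and the `cnt` values assumed to be kept up to date:
```cpp
// Returns the node holding the k-th smallest key (0-indexed), or NULL if k is out of range.
pitem kth (pitem t, int k) {
    while (t) {
        int left = cnt(t->l);
        if (k < left)
            t = t->l;            // answer is in the left subtree
        else if (k == left)
            return t;            // exactly `left` keys are smaller, so this is the k-th
        else {
            k -= left + 1;       // skip the left subtree and the root
            t = t->r;
        }
    }
    return NULL;
}
```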
## Building a Treap in $O (N)$ in offline mode {data-toc-label="Building a Treap in O(N) in offline mode"}
Given a sorted list of keys, it is possible to construct a treap faster than by inserting the keys one at a time which takes $O(N \log N)$. Since the keys are sorted, a balanced binary search tree can be easily constructed in linear time. The heap values $Y$ are initialized randomly and then can be heapified independent of the keys $X$ to [build the heap](https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap) in $O(N)$.
```cpp
void heapify (pitem t) {
if (!t) return;
pitem max = t;
if (t->l != NULL && t->l->prior > max->prior)
max = t->l;
if (t->r != NULL && t->r->prior > max->prior)
max = t->r;
if (max != t) {
swap (t->prior, max->prior);
heapify (max);
}
}
pitem build (int * a, int n) {
// Construct a treap on values {a[0], a[1], ..., a[n - 1]}
if (n == 0) return NULL;
int mid = n / 2;
pitem t = new item (a[mid], rand ());
t->l = build (a, mid);
t->r = build (a + mid + 1, n - mid - 1);
heapify (t);
    upd_cnt(t);
return t;
}
```
Note: calling `upd_cnt(t)` is only necessary if you need the subtree sizes.
The approach above always provides a perfectly balanced tree, which is generally good for practical purposes, but at the cost of not preserving the priorities that were initially assigned to each node. Thus, this approach cannot be used to solve the following problem:
!!! example "[acmsguru - Cartesian Tree](https://codeforces.com/problemsets/acmsguru/problem/99999/155)"
Given a sequence of pairs $(x_i, y_i)$, construct a cartesian tree on them. All $x_i$ and all $y_i$ are unique.
Note that in this problem priorities are not random, hence just inserting vertices one by one could provide a quadratic solution.
One of possible solutions here is to find for each element the closest elements to the left and to the right which have a smaller priority than this element. Among these two elements, the one with the larger priority must be the parent of the current element.
This problem is solvable with a [minimum stack](./stack_queue_modification.md) modification in linear time:
```cpp
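// Note: this snippet uses C++20 features (`std::ranges::subrange`, `auto` function parameters)
// and assumes the `item` struct from above is extended with a parent pointer `p` initialized to nullptr.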
void connect(auto from, auto to) {
vector<pitem> st;
for(auto it: ranges::subrange(from, to)) {
while(!st.empty() && st.back()->prior > it->prior) {
st.pop_back();
}
if(!st.empty()) {
if(!it->p || it->p->prior < st.back()->prior) {
it->p = st.back();
}
}
st.push_back(it);
}
}
pitem build(int *x, int *y, int n) {
vector<pitem> nodes(n);
for(int i = 0; i < n; i++) {
nodes[i] = new item(x[i], y[i]);
}
connect(nodes.begin(), nodes.end());
connect(nodes.rbegin(), nodes.rend());
for(int i = 0; i < n; i++) {
if(nodes[i]->p) {
if(nodes[i]->p->key < nodes[i]->key) {
nodes[i]->p->r = nodes[i];
} else {
nodes[i]->p->l = nodes[i];
}
}
}
return nodes[min_element(y, y + n) - y];
}
```
## Implicit Treaps
Implicit treap is a simple modification of the regular treap which is a very powerful data structure. In fact, implicit treap can be considered as an array with the following procedures implemented (all in $O (\log N)$ in the online mode):
- Inserting an element in the array in any location
- Removal of an arbitrary element
- Finding sum, minimum / maximum element etc. on an arbitrary interval
- Addition, painting on an arbitrary interval
- Reversing elements on an arbitrary interval
The idea is that the keys should be the zero-based **indices** of the elements in the array. But we will not store these values explicitly (otherwise, for example, inserting an element would cause changes of the key in $O (N)$ nodes of the tree).
Note that the key of a node is the number of nodes less than it (such nodes can be present not only in its left subtree but also in left subtrees of its ancestors).
More specifically, the **implicit key** for some node T is the number of vertices $cnt (T \rightarrow L)$ in the left subtree of this node plus similar values $cnt (P \rightarrow L) + 1$ for each ancestor P of the node T, if T is in the right subtree of P.
Now it's clear how to calculate the implicit key of current node quickly. Since in all operations we arrive to any node by descending in the tree, we can just accumulate this sum and pass it to the function. If we go to the left subtree, the accumulated sum does not change, if we go to the right subtree it increases by $cnt (T \rightarrow L) +1$.
Here are the new implementations of **Split** and **Merge**:
```cpp
void merge (pitem & t, pitem l, pitem r) {
if (!l || !r)
t = l ? l : r;
else if (l->prior > r->prior)
merge (l->r, l->r, r), t = l;
else
merge (r->l, l, r->l), t = r;
upd_cnt (t);
}
void split (pitem t, pitem & l, pitem & r, int key, int add = 0) {
if (!t)
return void( l = r = 0 );
int cur_key = add + cnt(t->l); //implicit key
if (key <= cur_key)
split (t->l, l, t->l, key, add), r = t;
else
split (t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
upd_cnt (t);
}
```
In the implementation above, after the call of $split(T, T_1, T_2, k)$, the tree $T_1$ will consist of first $k$ elements of $T$ (that is, of elements having their implicit key less than $k$) and $T_2$ will consist of all the rest.
Now let's consider the implementation of various operations on implicit treaps:
- **Insert element**.
Suppose we need to insert an element at position $pos$. We divide the treap into two parts, which correspond to arrays $[0..pos-1]$ and $[pos..sz]$; to do this we call $split(T, T_1, T_2, pos)$. Then we can combine tree $T_1$ with the new vertex by calling $merge(T_1, T_1, \text{new item})$ (it is easy to see that all preconditions are met). Finally, we combine trees $T_1$ and $T_2$ back into $T$ by calling $merge(T, T_1, T_2)$.
- **Delete element**.
This operation is even easier: find the element to be deleted $T$, perform merge of its children $L$ and $R$, and replace the element $T$ with the result of merge. In fact, element deletion in the implicit treap is exactly the same as in the regular treap.
- Find **sum / minimum**, etc. on the interval.
First, create an additional field $F$ in the `item` structure to store the value of the target function for this node's subtree. This field is easy to maintain similarly to maintaining sizes of subtrees: create a function which calculates this value for a node based on values for its children and add calls of this function in the end of all functions which modify the tree.
Second, we need to know how to process a query for an arbitrary interval $[A; B]$.
To get a part of tree which corresponds to the interval $[A; B]$, we need to call $split(T, T_2, T_3, B+1)$, and then $split(T_2, T_1, T_2, A)$: after this $T_2$ will consist of all the elements in the interval $[A; B]$, and only of them. Therefore, the response to the query will be stored in the field $F$ of the root of $T_2$. After the query is answered, the tree has to be restored by calling $merge(T, T_1, T_2)$ and $merge(T, T, T_3)$.
- **Addition / painting** on the interval.
We act similarly to the previous paragraph, but instead of the field F we will store a field `add` which will contain the added value for the subtree (or the value to which the subtree is painted). Before performing any operation we have to "push" this value correctly - i.e. change $T \rightarrow L \rightarrow add$ and $T \rightarrow R \rightarrow add$, and to clean up `add` in the parent node. This way after any changes to the tree the information will not be lost.
- **Reverse** on the interval.
This is again similar to the previous operation: we have to add boolean flag `rev` and set it to true when the subtree of the current node has to be reversed. "Pushing" this value is a bit complicated - we swap children of this node and set this flag to true for them.
Here is an example implementation of the implicit treap with reverse on the interval. For each node we store field called `value` which is the actual value of the array element at current position. We also provide implementation of the function `output()`, which outputs an array that corresponds to the current state of the implicit treap.
```cpp
typedef struct item * pitem;
struct item {
int prior, value, cnt;
bool rev;
pitem l, r;
};
int cnt (pitem it) {
return it ? it->cnt : 0;
}
void upd_cnt (pitem it) {
if (it)
it->cnt = cnt(it->l) + cnt(it->r) + 1;
}
void push (pitem it) {
if (it && it->rev) {
it->rev = false;
swap (it->l, it->r);
if (it->l) it->l->rev ^= true;
if (it->r) it->r->rev ^= true;
}
}
void merge (pitem & t, pitem l, pitem r) {
push (l);
push (r);
if (!l || !r)
t = l ? l : r;
else if (l->prior > r->prior)
merge (l->r, l->r, r), t = l;
else
merge (r->l, l, r->l), t = r;
upd_cnt (t);
}
void split (pitem t, pitem & l, pitem & r, int key, int add = 0) {
if (!t)
return void( l = r = 0 );
push (t);
int cur_key = add + cnt(t->l);
if (key <= cur_key)
split (t->l, l, t->l, key, add), r = t;
else
split (t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
upd_cnt (t);
}
void reverse (pitem t, int l, int r) {
pitem t1, t2, t3;
split (t, t1, t2, l);
split (t2, t2, t3, r-l+1);
t2->rev ^= true;
merge (t, t1, t2);
merge (t, t, t3);
}
void output (pitem t) {
if (!t) return;
push (t);
output (t->l);
printf ("%d ", t->value);
output (t->r);
}
```
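As a usage sketch (my own, not part of the original article; it assumes the definitions above are in scope, plus `<cstdlib>` for `rand`), the following builds an implicit treap over the array $[1, 2, 3, 4, 5]$ by appending single-node treaps, inserts a value in the middle with one split and two merges, and reverses a subarray:
```cpp
// Helper to create a single-element treap node.
pitem make_node (int value) {
    pitem it = new item();   // value-initialization: rev = false, l = r = NULL
    it->prior = rand();
    it->value = value;
    it->cnt = 1;
    return it;
}

int main () {
    pitem t = NULL;
    for (int v = 1; v <= 5; v++)
        merge (t, t, make_node(v));      // array: 1 2 3 4 5

    // insert value 10 at position 2 (0-based): one split + two merges
    pitem t1, t2;
    split (t, t1, t2, 2);
    merge (t1, t1, make_node(10));
    merge (t, t1, t2);                   // array: 1 2 10 3 4 5

    reverse (t, 1, 4);                   // reverse positions [1..4]: 1 4 3 10 2 5
    output (t);                          // prints: 1 4 3 10 2 5
}
```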
## Literature
* [Blelloch, Reid-Miller "Fast Set Operations Using Treaps"](https://www.cs.cmu.edu/~scandal/papers/treaps-spaa98.pdf)
|
---
title
treap
---
# Treap (Cartesian tree)
A treap is a data structure which combines binary tree and binary heap (hence the name: tree + heap $\Rightarrow$ Treap).
More specifically, treap is a data structure that stores pairs $(X, Y)$ in a binary tree in such a way that it is a binary search tree by $X$ and a binary heap by $Y$.
If some node of the tree contains values $(X_0, Y_0)$, all nodes in the left subtree have $X \leq X_0$, all nodes in the right subtree have $X_0 \leq X$, and all nodes in both left and right subtrees have $Y \leq Y_0$.
A treap is also often referred to as a "cartesian tree", as it is easy to embed it in a Cartesian plane:
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/e/e4/Treap.svg" width="350px"/>
</center>
Treaps have been proposed by Raimund Siedel and Cecilia Aragon in 1989.
## Advantages of such data organisation
In such implementation, $X$ values are the keys (and at same time the values stored in the treap), and $Y$ values are called **priorities**. Without priorities, the treap would be a regular binary search tree by $X$, and one set of $X$ values could correspond to a lot of different trees, some of them degenerate (for example, in the form of a linked list), and therefore extremely slow (the main operations would have $O(N)$ complexity).
At the same time, **priorities** (when they're unique) allow to **uniquely** specify the tree that will be constructed (of course, it does not depend on the order in which values are added), which can be proven using corresponding theorem. Obviously, if you **choose the priorities randomly**, you will get non-degenerate trees on average, which will ensure $O(\log N)$ complexity for the main operations. Hence another name of this data structure - **randomized binary search tree**.
## Operations
A treap provides the following operations:
- **Insert (X,Y)** in $O(\log N)$.
Adds a new node to the tree. One possible variant is to pass only $X$ and generate $Y$ randomly inside the operation.
- **Search (X)** in $O(\log N)$.
Looks for a node with the specified key value $X$. The implementation is the same as for an ordinary binary search tree.
- **Erase (X)** in $O(\log N)$.
Looks for a node with the specified key value $X$ and removes it from the tree.
- **Build ($X_1$, ..., $X_N$)** in $O(N)$.
Builds a tree from a list of values. This can be done in linear time (assuming that $X_1, ..., X_N$ are sorted).
- **Union ($T_1$, $T_2$)** in $O(M \log (N/M))$.
Merges two trees, assuming that all the elements are different. It is possible to achieve the same complexity if duplicate elements should be removed during merge.
- **Intersect ($T_1$, $T_2$)** in $O(M \log (N/M))$.
Finds the intersection of two trees (i.e. their common elements). We will not consider the implementation of this operation here.
In addition, due to the fact that a treap is a binary search tree, it can implement other operations, such as finding the $K$-th largest element or finding the index of an element.
## Implementation Description
In terms of implementation, each node contains $X$, $Y$ and pointers to the left ($L$) and right ($R$) children.
We will implement all the required operations using just two auxiliary operations: Split and Merge.
### Split
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/6/69/Treap_split.svg" width="450px"/>
</center>
**Split ($T$, $X$)** separates tree $T$ in 2 subtrees $L$ and $R$ trees (which are the return values of split) so that $L$ contains all elements with key $X_L \le X$, and $R$ contains all elements with key $X_R > X$. This operation has $O (\log N)$ complexity and is implemented using a clean recursion:
1. If the value of the root node (R) is $\le X$, then `L` would at least consist of `R->L` and `R`. We then call split on `R->R`, and note its split result as `L'` and `R'`. Finally, `L` would also contain `L'`, whereas `R = R'`.
2. If the value of the root node (R) is $> X$, then `R` would at least consist of `R` and `R->R`. We then call split on `R->L`, and note its split result as `L'` and `R'`. Finally, `L=L'`, whereas `R` would also contain `R'`.
Thus, the split algorithm is:
1. decide which subtree the root node would belong to (left or right)
2. recursively call split on one of its children
3. create the final result by reusing the recursive split call.
### Merge
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/a/a8/Treap_merge.svg" width="500px"/>
</center>
**Merge ($T_1$, $T_2$)** combines two subtrees $T_1$ and $T_2$ and returns the new tree. This operation also has $O (\log N)$ complexity. It works under the assumption that $T_1$ and $T_2$ are ordered (all keys $X$ in $T_1$ are smaller than keys in $T_2$). Thus, we need to combine these trees without violating the order of priorities $Y$. To do this, we choose as the root the tree which has higher priority $Y$ in the root node, and recursively call Merge for the other tree and the corresponding subtree of the selected root node.
### Insert
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/3/35/Treap_insert.svg" width="500px"/>
</center>
Now implementation of **Insert ($X$, $Y$)** becomes obvious. First we descend in the tree (as in a regular binary search tree by X), and stop at the first node in which the priority value is less than $Y$. We have found the place where we will insert the new element. Next, we call **Split (T, X)** on the subtree starting at the found node, and use returned subtrees $L$ and $R$ as left and right children of the new node.
Alternatively, insert can be done by splitting the initial treap on $X$ and doing $2$ merges with the new node (see the picture).
### Erase
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/6/62/Treap_erase.svg" width="500px"/>
</center>
Implementation of **Erase ($X$)** is also clear. First we descend in the tree (as in a regular binary search tree by $X$), looking for the element we want to delete. Once the node is found, we call **Merge** on it children and put the return value of the operation in the place of the element we're deleting.
Alternatively, we can factor out the subtree holding $X$ with $2$ split operations and merge the remaining treaps (see the picture).
### Build
We implement **Build** operation with $O (N \log N)$ complexity using $N$ **Insert** calls.
### Union
**Union ($T_1$, $T_2$)** has theoretical complexity $O (M \log (N / M))$, but in practice it works very well, probably with a very small hidden constant. Let's assume without loss of generality that $T_1 \rightarrow Y > T_2 \rightarrow Y$, i. e. root of $T_1$ will be the root of the result. To get the result, we need to merge trees $T_1 \rightarrow L$, $T_1 \rightarrow R$ and $T_2$ in two trees which could be children of $T_1$ root. To do this, we call Split ($T_2$, $T_1\rightarrow X$), thus splitting $T_2$ in two parts L and R, which we then recursively combine with children of $T_1$: Union ($T_1 \rightarrow L$, $L$) and Union ($T_1 \rightarrow R$, $R$), thus getting left and right subtrees of the result.
## Implementation
```cpp
struct item {
int key, prior;
item *l, *r;
item () { }
item (int key) : key(key), prior(rand()), l(NULL), r(NULL) { }
item (int key, int prior) : key(key), prior(prior), l(NULL), r(NULL) { }
};
typedef item* pitem;
```
This is our item defintion. Note there are two child pointers, and an integer key (for the BST) and an integer priority (for the heap). The priority is assigned using a random number generator.
```cpp
void split (pitem t, int key, pitem & l, pitem & r) {
if (!t)
l = r = NULL;
else if (t->key <= key)
split (t->r, key, t->r, r), l = t;
else
split (t->l, key, l, t->l), r = t;
}
```
`t` is the treap to split, and `key` is the BST value by which to split. Note that we do not `return` the result values anywhere, instead, we just use them like so:
```cpp
pitem l = nullptr, r = nullptr;
split(t, 5, l, r);
if (l) cout << "Left subtree size: " << (l->size) << endl;
if (r) cout << "Right subtree size: " << (r->size) << endl;
```
This `split` function can be tricky to understand, as it has both pointers (`pitem`) as well as reference to those pointers (`pitem &l`). Let us understand in words what the function call `split(t, k, l, r)` intends: "split treap `t` by value `k` into two treaps, and store the left treaps in `l` and right treap in `r`". Great! Now, let us apply this definition to the two recursive calls, using the case work we analyzed in the previous section: (The first if condition is a trivial base case for an empty treap)
1. When the root node value is $\le$ key, we call `split (t->r, key, t->r, r)`, which means: "split treap `t->r` (right subtree of `t`) by value `key` and store the left subtree in `t->r` and right subtree in `r`". After that, we set `l = t`. Note now that the `l` result value contains `t->l`, `t` as well as `t->r` (which is the result of the recursive call we made) all already merged in the correct order! You should pause to ensure that this result of `l` and `r` corresponds exactly with what we discussed earlier in Implementation Description.
2. When the root node value is greater than key, we call `split (t->l, key, l, t->l)`, which means: "split treap `t->l` (left subtree of `t`) by value `key` and store the left subtree in `l` and right subtree in `t->l`". After that, we set `r = t`. Note now that the `r` result value contains `t->l` (which is the result of the recursive call we made), `t` as well as `t->r`, all already merged in the correct order! You should pause to ensure that this result of `l` and `r` corresponds exactly with what we discussed earlier in Implementation Description.
If you're still having trouble understanding the implementation, you should look at it _inductively_, that is: do *not* try to break down the recursive calls over and over again. Assume the split implementation works correct on empty treap, then try to run it for a single node treap, then a two node treap, and so on, each time reusing your knowledge that split on smaller treaps works.
```cpp
void insert (pitem & t, pitem it) {
if (!t)
t = it;
else if (it->prior > t->prior)
split (t, it->key, it->l, it->r), t = it;
else
insert (t->key <= it->key ? t->r : t->l, it);
}
void merge (pitem & t, pitem l, pitem r) {
if (!l || !r)
t = l ? l : r;
else if (l->prior > r->prior)
merge (l->r, l->r, r), t = l;
else
merge (r->l, l, r->l), t = r;
}
void erase (pitem & t, int key) {
if (t->key == key) {
pitem th = t;
merge (t, t->l, t->r);
delete th;
}
else
erase (key < t->key ? t->l : t->r, key);
}
pitem unite (pitem l, pitem r) {
if (!l || !r) return l ? l : r;
if (l->prior < r->prior) swap (l, r);
pitem lt, rt;
split (r, l->key, lt, rt);
l->l = unite (l->l, lt);
l->r = unite (l->r, rt);
return l;
}
```
## Maintaining the sizes of subtrees
To extend the functionality of the treap, it is often necessary to store the number of nodes in subtree of each node - field `int cnt` in the `item` structure. For example, it can be used to find K-th largest element of tree in $O (\log N)$, or to find the index of the element in the sorted list with the same complexity. The implementation of these operations will be the same as for the regular binary search tree.
When a tree changes (nodes are added or removed etc.), `cnt` of some nodes should be updated accordingly. We'll create two functions: `cnt()` will return the current value of `cnt` or 0 if the node does not exist, and `upd_cnt()` will update the value of `cnt` for this node assuming that for its children L and R the values of `cnt` have already been updated. Evidently it's sufficient to add calls of `upd_cnt()` to the end of `insert`, `erase`, `split` and `merge` to keep `cnt` values up-to-date.
```cpp
int cnt (pitem t) {
return t ? t->cnt : 0;
}
void upd_cnt (pitem t) {
if (t)
t->cnt = 1 + cnt(t->l) + cnt (t->r);
}
```
## Building a Treap in $O (N)$ in offline mode {data-toc-label="Building a Treap in O(N) in offline mode"}
Given a sorted list of keys, it is possible to construct a treap faster than by inserting the keys one at a time which takes $O(N \log N)$. Since the keys are sorted, a balanced binary search tree can be easily constructed in linear time. The heap values $Y$ are initialized randomly and then can be heapified independent of the keys $X$ to [build the heap](https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap) in $O(N)$.
```cpp
void heapify (pitem t) {
if (!t) return;
pitem max = t;
if (t->l != NULL && t->l->prior > max->prior)
max = t->l;
if (t->r != NULL && t->r->prior > max->prior)
max = t->r;
if (max != t) {
swap (t->prior, max->prior);
heapify (max);
}
}
pitem build (int * a, int n) {
// Construct a treap on values {a[0], a[1], ..., a[n - 1]}
if (n == 0) return NULL;
int mid = n / 2;
pitem t = new item (a[mid], rand ());
t->l = build (a, mid);
t->r = build (a + mid + 1, n - mid - 1);
heapify (t);
upd_cnt(t)
return t;
}
```
Note: calling `upd_cnt(t)` is only necessary if you need the subtree sizes.
The approach above always provides a perfectly balanced tree, which is generally good for practical purposes, but at the cost of not preserving the priorities that were initially assigned to each node. Thus, this approach is not feasible to solve the following problem:
!!! example "[acmsguru - Cartesian Tree](https://codeforces.com/problemsets/acmsguru/problem/99999/155)"
Given a sequence of pairs $(x_i, y_i)$, construct a cartesian tree on them. All $x_i$ and all $y_i$ are unique.
Note that in this problem priorities are not random, hence just inserting vertices one by one could provide a quadratic solution.
One of possible solutions here is to find for each element the closest elements to the left and to the right which have a smaller priority than this element. Among these two elements, the one with the larger priority must be the parent of the current element.
This problem is solvable with a [minimum stack](./stack_queue_modification.md) modification in linear time:
```cpp
void connect(auto from, auto to) {
vector<pitem> st;
for(auto it: ranges::subrange(from, to)) {
while(!st.empty() && st.back()->prior > it->prior) {
st.pop_back();
}
if(!st.empty()) {
if(!it->p || it->p->prior < st.back()->prior) {
it->p = st.back();
}
}
st.push_back(it);
}
}
pitem build(int *x, int *y, int n) {
vector<pitem> nodes(n);
for(int i = 0; i < n; i++) {
nodes[i] = new item(x[i], y[i]);
}
connect(nodes.begin(), nodes.end());
connect(nodes.rbegin(), nodes.rend());
for(int i = 0; i < n; i++) {
if(nodes[i]->p) {
if(nodes[i]->p->key < nodes[i]->key) {
nodes[i]->p->r = nodes[i];
} else {
nodes[i]->p->l = nodes[i];
}
}
}
return nodes[min_element(y, y + n) - y];
}
```
## Implicit Treaps
Implicit treap is a simple modification of the regular treap which is a very powerful data structure. In fact, implicit treap can be considered as an array with the following procedures implemented (all in $O (\log N)$ in the online mode):
- Inserting an element in the array in any location
- Removal of an arbitrary element
- Finding sum, minimum / maximum element etc. on an arbitrary interval
- Addition, painting on an arbitrary interval
- Reversing elements on an arbitrary interval
The idea is that the keys should be null-based **indices** of the elements in the array. But we will not store these values explicitly (otherwise, for example, inserting an element would cause changes of the key in $O (N)$ nodes of the tree).
Note that the key of a node is the number of nodes less than it (such nodes can be present not only in its left subtree but also in left subtrees of its ancestors).
More specifically, the **implicit key** for some node T is the number of vertices $cnt (T \rightarrow L)$ in the left subtree of this node plus similar values $cnt (P \rightarrow L) + 1$ for each ancestor P of the node T, if T is in the right subtree of P.
Now it's clear how to calculate the implicit key of current node quickly. Since in all operations we arrive to any node by descending in the tree, we can just accumulate this sum and pass it to the function. If we go to the left subtree, the accumulated sum does not change, if we go to the right subtree it increases by $cnt (T \rightarrow L) +1$.
Here are the new implementations of **Split** and **Merge**:
```cpp
void merge (pitem & t, pitem l, pitem r) {
if (!l || !r)
t = l ? l : r;
else if (l->prior > r->prior)
merge (l->r, l->r, r), t = l;
else
merge (r->l, l, r->l), t = r;
upd_cnt (t);
}
void split (pitem t, pitem & l, pitem & r, int key, int add = 0) {
if (!t)
return void( l = r = 0 );
int cur_key = add + cnt(t->l); //implicit key
if (key <= cur_key)
split (t->l, l, t->l, key, add), r = t;
else
split (t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
upd_cnt (t);
}
```
In the implementation above, after the call of $split(T, T_1, T_2, k)$, the tree $T_1$ will consist of first $k$ elements of $T$ (that is, of elements having their implicit key less than $k$) and $T_2$ will consist of all the rest.
Now let's consider the implementation of various operations on implicit treaps:
- **Insert element**.
Suppose we need to insert an element at position $pos$. We divide the treap into two parts, which correspond to arrays $[0..pos-1]$ and $[pos..sz]$; to do this we call $split(T, T_1, T_2, pos)$. Then we can combine tree $T_1$ with the new vertex by calling $merge(T_1, T_1, \text{new item})$ (it is easy to see that all preconditions are met). Finally, we combine trees $T_1$ and $T_2$ back into $T$ by calling $merge(T, T_1, T_2)$.
- **Delete element**.
This operation is even easier: find the element to be deleted $T$, perform merge of its children $L$ and $R$, and replace the element $T$ with the result of merge. In fact, element deletion in the implicit treap is exactly the same as in the regular treap.
- Find **sum / minimum**, etc. on the interval.
First, create an additional field $F$ in the `item` structure to store the value of the target function for this node's subtree. This field is easy to maintain similarly to maintaining sizes of subtrees: create a function which calculates this value for a node based on values for its children and add calls of this function in the end of all functions which modify the tree.
Second, we need to know how to process a query for an arbitrary interval $[A; B]$.
To get a part of tree which corresponds to the interval $[A; B]$, we need to call $split(T, T_2, T_3, B+1)$, and then $split(T_2, T_1, T_2, A)$: after this $T_2$ will consist of all the elements in the interval $[A; B]$, and only of them. Therefore, the response to the query will be stored in the field $F$ of the root of $T_2$. After the query is answered, the tree has to be restored by calling $merge(T, T_1, T_2)$ and $merge(T, T, T_3)$.
- **Addition / painting** on the interval.
We act similarly to the previous paragraph, but instead of the field F we will store a field `add` which will contain the added value for the subtree (or the value to which the subtree is painted). Before performing any operation we have to "push" this value correctly - i.e. change $T \rightarrow L \rightarrow add$ and $T \rightarrow R \rightarrow add$, and to clean up `add` in the parent node. This way after any changes to the tree the information will not be lost.
- **Reverse** on the interval.
This is again similar to the previous operation: we have to add boolean flag `rev` and set it to true when the subtree of the current node has to be reversed. "Pushing" this value is a bit complicated - we swap children of this node and set this flag to true for them.
Here is an example implementation of the implicit treap with reverse on the interval. For each node we store a field called `value`, which is the actual value of the array element at the current position. We also provide an implementation of the function `output()`, which outputs the array that corresponds to the current state of the implicit treap.
```cpp
typedef struct item * pitem;
struct item {
int prior, value, cnt;
bool rev;
pitem l, r;
};
int cnt (pitem it) {
return it ? it->cnt : 0;
}
void upd_cnt (pitem it) {
if (it)
it->cnt = cnt(it->l) + cnt(it->r) + 1;
}
void push (pitem it) {
if (it && it->rev) {
it->rev = false;
swap (it->l, it->r);
if (it->l) it->l->rev ^= true;
if (it->r) it->r->rev ^= true;
}
}
void merge (pitem & t, pitem l, pitem r) {
push (l);
push (r);
if (!l || !r)
t = l ? l : r;
else if (l->prior > r->prior)
merge (l->r, l->r, r), t = l;
else
merge (r->l, l, r->l), t = r;
upd_cnt (t);
}
void split (pitem t, pitem & l, pitem & r, int key, int add = 0) {
if (!t)
return void( l = r = 0 );
push (t);
int cur_key = add + cnt(t->l);
if (key <= cur_key)
split (t->l, l, t->l, key, add), r = t;
else
split (t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
upd_cnt (t);
}
void reverse (pitem t, int l, int r) {
pitem t1, t2, t3;
split (t, t1, t2, l);
split (t2, t2, t3, r-l+1);
t2->rev ^= true;
merge (t, t1, t2);
merge (t, t, t3);
}
void output (pitem t) {
if (!t) return;
push (t);
output (t->l);
printf ("%d ", t->value);
output (t->r);
}
```
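For completeness, here is a minimal sketch (not part of the original code) of the insert and delete operations described above, built on the `split`/`merge` helpers; the node constructor `new_item` is a hypothetical helper and assumes the usual `rand()`-based priorities.

```cpp
// Hypothetical helper: create a node holding a single array element.
pitem new_item(int value) {
    pitem it = new item();
    it->prior = rand();
    it->value = value;
    it->cnt = 1;
    it->rev = false;
    it->l = it->r = nullptr;
    return it;
}

// Insert `value` so that it becomes the element at position pos.
void insert(pitem & t, int pos, int value) {
    pitem t1, t2;
    split(t, t1, t2, pos);            // first pos elements go to t1
    merge(t1, t1, new_item(value));   // append the new element to t1
    merge(t, t1, t2);                 // reattach the tail
}

// Erase the element at position pos.
void erase(pitem & t, int pos) {
    pitem t1, t2, t3;
    split(t, t1, t2, pos);            // [0..pos-1] and [pos..]
    split(t2, t2, t3, 1);             // t2 now holds only the element at pos
    merge(t, t1, t3);                 // reassemble the treap without it
    delete t2;
}
```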
## Literature
* [Blelloch, Reid-Miller "Fast Set Operations Using Treaps"](https://www.cs.cmu.edu/~scandal/papers/treaps-spaa98.pdf)
## Practice Problems
* [SPOJ - Ada and Aphids](http://www.spoj.com/problems/ADAAPHID/)
* [SPOJ - Ada and Harvest](http://www.spoj.com/problems/ADACROP/)
* [Codeforces - Radio Stations](http://codeforces.com/contest/762/problem/E)
* [SPOJ - Ghost Town](http://www.spoj.com/problems/COUNT1IT/)
* [SPOJ - Arrangement Validity](http://www.spoj.com/problems/IITWPC4D/)
* [SPOJ - All in One](http://www.spoj.com/problems/ALLIN1/)
* [Codeforces - Dog Show](http://codeforces.com/contest/847/problem/D)
* [Codeforces - Yet Another Array Queries Problem](http://codeforces.com/contest/863/problem/D)
* [SPOJ - Mean of Array](http://www.spoj.com/problems/MEANARR/)
* [SPOJ - TWIST](http://www.spoj.com/problems/TWIST/)
* [SPOJ - KOILINE](http://www.spoj.com/problems/KOILINE/)
* [CodeChef - The Prestige](https://www.codechef.com/problems/PRESTIGE)
* [Codeforces - T-Shirts](https://codeforces.com/contest/702/problem/F)
* [Codeforces - Wizards and Roads](https://codeforces.com/problemset/problem/167/D)
* [Codeforces - Yaroslav and Points](https://codeforces.com/contest/295/problem/E)
---
title: Deleting from a data structure in O(T(n) log n)
---
# Deleting from a data structure in $O(T(n)\log n)$
Suppose you have a data structure which allows adding elements in **true** $O(T(n))$, i.e. worst-case rather than amortized time.
This article will describe a technique that allows deletion in $O(T(n)\log n)$ offline.
## Algorithm
Each element lives in the data structure for some segments of time between additions and deletions.
Let's build a segment tree over the queries.
Each time segment during which some element is alive splits into $O(\log n)$ nodes of the tree.
Let's put each query when we want to know something about the structure into the corresponding leaf.
Now to process all queries we will run a DFS on the segment tree.
When entering the node we will add all the elements that are inside this node.
Then we will go further to the children of this node or answer the queries (if the node is a leaf).
When leaving the node, we must undo the additions.
Note that if we change the structure in $O(T(n))$ we can roll back the changes in $O(T(n))$ by keeping a stack of changes.
Note that rollbacks break amortized complexity; this is why the DSU below uses union by rank without path compression, so that every operation costs a true $O(\log n)$ and can be undone exactly.
## Notes
The idea of creating a segment tree over segments when something is alive may be used not only for data structure problems.
See some problems below.
## Implementation
This implementation is for the [dynamic connectivity](https://en.wikipedia.org/wiki/Dynamic_connectivity) problem.
It can add edges, remove edges and count the number of connected components.
```{.cpp file=dynamic-conn}
struct dsu_save {
int v, rnkv, u, rnku;
dsu_save() {}
dsu_save(int _v, int _rnkv, int _u, int _rnku)
: v(_v), rnkv(_rnkv), u(_u), rnku(_rnku) {}
};
struct dsu_with_rollbacks {
vector<int> p, rnk;
int comps;
stack<dsu_save> op;
dsu_with_rollbacks() {}
dsu_with_rollbacks(int n) {
p.resize(n);
rnk.resize(n);
for (int i = 0; i < n; i++) {
p[i] = i;
rnk[i] = 0;
}
comps = n;
}
int find_set(int v) {
return (v == p[v]) ? v : find_set(p[v]);
}
bool unite(int v, int u) {
v = find_set(v);
u = find_set(u);
if (v == u)
return false;
comps--;
if (rnk[v] > rnk[u])
swap(v, u);
op.push(dsu_save(v, rnk[v], u, rnk[u]));
p[v] = u;
if (rnk[u] == rnk[v])
rnk[u]++;
return true;
}
void rollback() {
if (op.empty())
return;
dsu_save x = op.top();
op.pop();
comps++;
p[x.v] = x.v;
rnk[x.v] = x.rnkv;
p[x.u] = x.u;
rnk[x.u] = x.rnku;
}
};
struct query {
int v, u;
bool united;
query(int _v, int _u) : v(_v), u(_u) {
}
};
struct QueryTree {
vector<vector<query>> t;
dsu_with_rollbacks dsu;
int T;
QueryTree() {}
QueryTree(int _T, int n) : T(_T) {
dsu = dsu_with_rollbacks(n);
t.resize(4 * T + 4);
}
void add_to_tree(int v, int l, int r, int ul, int ur, query& q) {
if (ul > ur)
return;
if (l == ul && r == ur) {
t[v].push_back(q);
return;
}
int mid = (l + r) / 2;
add_to_tree(2 * v, l, mid, ul, min(ur, mid), q);
add_to_tree(2 * v + 1, mid + 1, r, max(ul, mid + 1), ur, q);
}
void add_query(query q, int l, int r) {
add_to_tree(1, 0, T - 1, l, r, q);
}
void dfs(int v, int l, int r, vector<int>& ans) {
for (query& q : t[v]) {
q.united = dsu.unite(q.v, q.u);
}
if (l == r)
ans[l] = dsu.comps;
else {
int mid = (l + r) / 2;
dfs(2 * v, l, mid, ans);
dfs(2 * v + 1, mid + 1, r, ans);
}
for (query q : t[v]) {
if (q.united)
dsu.rollback();
}
}
vector<int> solve() {
vector<int> ans(T);
dfs(1, 0, T - 1, ans);
return ans;
}
};
```
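Here is a hypothetical driver (not part of the original implementation, and assuming the standard headers and `using namespace std` as in the snippet above) that shows how edge lifetimes are mapped to time segments: an edge added at time $t_1$ and removed at time $t_2$ lives on the segment $[t_1, t_2 - 1]$, and edges that are never removed live until the last query.

```cpp
int main() {
    int n, q;
    scanf("%d %d", &n, &q);                 // number of vertices and queries (q >= 1)
    QueryTree qt(q, n);
    map<pair<int, int>, int> added_at;      // time at which an edge was added
    vector<char> is_count(q, false);
    for (int i = 0; i < q; i++) {
        char type;
        scanf(" %c", &type);
        if (type == '?') {
            is_count[i] = true;             // "how many components?" query
        } else {
            int u, v;
            scanf("%d %d", &u, &v);
            if (u > v) swap(u, v);
            if (type == '+') {
                added_at[{u, v}] = i;       // edge becomes alive at time i
            } else {
                query e(u, v);              // edge was alive on [added_at, i-1]
                qt.add_query(e, added_at[{u, v}], i - 1);
                added_at.erase({u, v});
            }
        }
    }
    for (auto& kv : added_at) {             // edges that were never removed
        query e(kv.first.first, kv.first.second);
        qt.add_query(e, kv.second, q - 1);
    }
    vector<int> ans = qt.solve();
    for (int i = 0; i < q; i++)
        if (is_count[i])
            printf("%d\n", ans[i]);
}
```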
## Problems
- [Codeforces - Connect and Disconnect](https://codeforces.com/gym/100551/problem/A)
- [Codeforces - Addition on Segments](https://codeforces.com/contest/981/problem/E)
- [Codeforces - Extending Set of Points](https://codeforces.com/contest/1140/problem/F)
---
title
segment_tree
---
# Segment Tree
A Segment Tree is a data structure that stores information about array intervals as a tree. This allows answering range queries over an array efficiently, while still being flexible enough to allow quick modification of the array.
This includes finding the sum of consecutive array elements $a[l \dots r]$, or finding the minimum element in such a range in $O(\log n)$ time.
Between answering such queries, the Segment Tree allows modifying the array by replacing one element, or even changing the elements of a whole subsegment (e.g. assigning all elements $a[l \dots r]$ to any value, or adding a value to all elements in the subsegment).
In general, a Segment Tree is a very flexible data structure, and a huge number of problems can be solved with it.
Additionally, it is also possible to apply more complex operations and answer more complex queries (see [Advanced versions of Segment Trees](segment_tree.md#advanced-versions-of-segment-trees)).
In particular the Segment Tree can be easily generalized to larger dimensions.
For instance, with a two-dimensional Segment Tree you can answer sum or minimum queries over some subrectangle of a given matrix in only $O(\log^2 n)$ time.
One important property of Segment Trees is that they require only a linear amount of memory.
The standard Segment Tree requires $4n$ vertices for working on an array of size $n$.
## Simplest form of a Segment Tree
To start easy, we consider the simplest form of a Segment Tree.
We want to answer sum queries efficiently.
The formal definition of our task is:
Given an array $a[0 \dots n-1]$, the Segment Tree must be able to find the sum of elements between the indices $l$ and $r$ (i.e. computing the sum $\sum_{i=l}^r a[i]$), and also handle changing values of the elements in the array (i.e. perform assignments of the form $a[i] = x$).
The Segment Tree should be able to process **both** queries in $O(\log n)$ time.
This is an improvement over the simpler approaches.
A naive array implementation - just using a simple array - can update elements in $O(1)$, but requires $O(n)$ to compute each sum query.
And precomputed prefix sums can compute sum queries in $O(1)$, but updating an array element requires $O(n)$ changes to the prefix sums.
### Structure of the Segment Tree
We can take a divide-and-conquer approach when it comes to array segments.
We compute and store the sum of the elements of the whole array, i.e. the sum of the segment $a[0 \dots n-1]$.
We then split the array into two halves $a[0 \dots n/2-1]$ and $a[n/2 \dots n-1]$, compute the sum of each half, and store the sums.
Each of these two halves in turn are split in half, and so on until all segments reach size $1$.
We can view these segments as forming a binary tree:
the root of this tree is the segment $a[0 \dots n-1]$, and each vertex (except leaf vertices) has exactly two child vertices.
This is why the data structure is called "Segment Tree", even though in most implementations the tree is not constructed explicitly (see [Implementation](segment_tree.md#implementation)).
Here is a visual representation of such a Segment Tree over the array $a = [1, 3, -2, 8, -7]$:

From this short description of the data structure, we can already conclude that a Segment Tree only requires a linear number of vertices.
The first level of the tree contains a single vertex (the root), the second level contains two vertices, the third contains four, and so on, until the number of vertices reaches $n$.
Thus the number of vertices in the worst case can be estimated by the sum $1 + 2 + 4 + \dots + 2^{\lceil\log_2 n\rceil} \lt 2^{\lceil\log_2 n\rceil + 1} \lt 4n$.
It is worth noting that whenever $n$ is not a power of two, not all levels of the Segment Tree will be completely filled.
We can see that behavior in the image.
For now we can forget about this fact, but it will become important later during the implementation.
The height of the Segment Tree is $O(\log n)$, because when going down from the root to the leaves the size of the segments decreases approximately by half.
### Construction
Before constructing the segment tree, we need to decide:
1. the *value* that gets stored at each node of the segment tree.
For example, in a sum segment tree, a node would store the sum of the elements in its range $[l, r]$.
2. the *merge* operation that merges two siblings in a segment tree.
For example, in a sum segment tree, the two nodes corresponding to the ranges $a[l_1 \dots r_1]$ and $a[l_2 \dots r_2]$ would be merged into a node corresponding to the range $a[l_1 \dots r_2]$ by adding the values of the two nodes.
Note that a vertex is a "leaf vertex" if its corresponding segment covers only one value of the original array. Such vertices lie at the lowest levels of the Segment Tree, and their value is equal to the corresponding element $a[i]$.
Now, for construction of the segment tree, we start at the bottom level (the leaf vertices) and assign them their respective values. On the basis of these values, we can compute the values of the previous level, using the `merge` function.
And on the basis of those, we can compute the values of the previous, and repeat the procedure until we reach the root vertex.
It is convenient to describe this operation recursively in the other direction, i.e., from the root vertex to the leaf vertices. The construction procedure, if called on a non-leaf vertex, does the following:
1. recursively construct the values of the two child vertices
2. merge the computed values of these children.
We start the construction at the root vertex, and hence, we are able to compute the entire segment tree.
The time complexity of this construction is $O(n)$, assuming that the merge operation runs in constant time (the merge operation gets called $n - 1$ times, once for each internal vertex of the Segment Tree).
### Sum queries
For now we are going to answer sum queries. As an input we receive two integers $l$ and $r$, and we have to compute the sum of the segment $a[l \dots r]$ in $O(\log n)$ time.
To do this, we will traverse the Segment Tree and use the precomputed sums of the segments.
Let's assume that we are currently at the vertex that covers the segment $a[tl \dots tr]$.
There are three possible cases.
The easiest case is when the segment $a[l \dots r]$ is equal to the corresponding segment of the current vertex (i.e. $a[l \dots r] = a[tl \dots tr]$), then we are finished and can return the precomputed sum that is stored in the vertex.
Alternatively the segment of the query can fall completely into the domain of either the left or the right child.
Recall that the left child covers the segment $a[tl \dots tm]$ and the right child covers the segment $a[tm + 1 \dots tr]$ with $tm = (tl + tr) / 2$.
In this case we can simply go to the child vertex whose corresponding segment covers the query segment, and execute the algorithm described here with that vertex.
And then there is the last case, the query segment intersects with both children.
In this case we have no other option than to make two recursive calls, one for each child.
First we go to the left child, compute a partial answer for this vertex (i.e. the sum of values of the intersection between the segment of the query and the segment of the left child), then go to the right child, compute the partial answer using that vertex, and then combine the answers by adding them.
In other words, since the left child represents the segment $a[tl \dots tm]$ and the right child the segment $a[tm+1 \dots tr]$, we compute the sum query $a[l \dots tm]$ using the left child, and the sum query $a[tm+1 \dots r]$ using the right child.
So processing a sum query is a function that recursively calls itself once with either the left or the right child (without changing the query boundaries), or twice, once for the left and once for the right child (by splitting the query into two subqueries).
And the recursion ends whenever the boundaries of the current query segment coincide with the boundaries of the segment of the current vertex.
In that case the answer will be the precomputed value of the sum of this segment, which is stored in the tree.
In other words, the calculation of the query is a traversal of the tree, which spreads through all necessary branches of the tree, and uses the precomputed sum values of the segments in the tree.
Obviously we will start the traversal from the root vertex of the Segment Tree.
The procedure is illustrated in the following image.
Again the array $a = [1, 3, -2, 8, -7]$ is used, and here we want to compute the sum $\sum_{i=2}^4 a[i]$.
The colored vertices will be visited, and we will use the precomputed values of the green vertices.
This gives us the result $-2 + 1 = -1$.

Why is the complexity of this algorithm $O(\log n)$?
To show this complexity we look at each level of the tree.
It turns out that on each level we visit no more than four vertices.
And since the height of the tree is $O(\log n)$, we receive the desired running time.
We can show that this proposition (at most four vertices per level) is true by induction.
At the first level, we only visit one vertex, the root vertex, so here we visit fewer than four vertices.
Now let's look at an arbitrary level.
By induction hypothesis, we visit at most four vertices.
If we only visit at most two vertices, the next level has at most four vertices. That is trivial, because each vertex can only cause at most two recursive calls.
So let's assume that we visit three or four vertices in the current level.
From those vertices, we will analyze the vertices in the middle more carefully.
Since the sum query asks for the sum of a continuous subarray, we know that segments corresponding to the visited vertices in the middle will be completely covered by the segment of the sum query.
Therefore these vertices will not make any recursive calls.
So only the leftmost and the rightmost vertex have the potential to make recursive calls.
And those two create at most four recursive calls in total, so the next level also satisfies the assertion.
We can say that one branch approaches the left boundary of the query, and the second branch approaches the right one.
Therefore we visit at most $4 \log n$ vertices in total, and that is equal to a running time of $O(\log n)$.
In conclusion the query works by dividing the input segment into several sub-segments for which all the sums are already precomputed and stored in the tree.
And if we stop partitioning whenever the query segment coincides with the vertex segment, then we only need $O(\log n)$ such segments, which gives the effectiveness of the Segment Tree.
### Update queries
Now we want to modify a specific element in the array, let's say we want to do the assignment $a[i] = x$.
And we have to rebuild the Segment Tree, such that it corresponds to the new, modified array.
This query is easier than the sum query.
Each level of a Segment Tree forms a partition of the array.
Therefore an element $a[i]$ only contributes to one segment from each level.
Thus only $O(\log n)$ vertices need to be updated.
It is easy to see, that the update request can be implemented using a recursive function.
The function gets passed the current tree vertex, and it recursively calls itself with one of the two child vertices (the one that contains $a[i]$ in its segment), and after that recomputes its sum value, similar to how it is done in the build method (that is, as the sum of its two children).
Again here is a visualization using the same array.
Here we perform the update $a[2] = 3$.
The green vertices are the vertices that we visit and update.

### Implementation ### { #implementation}
The main consideration is how to store the Segment Tree.
Of course we can define a $\text{Vertex}$ struct and create objects that store the boundaries of the segment, its sum, and additionally also pointers to its child vertices.
However, this requires storing a lot of redundant information in the form of pointers.
We will use a simple trick to make this a lot more efficient by using an _implicit data structure_: Only storing the sums in an array.
(A similar method is used for binary heaps).
We store the sum of the root vertex at index 1, the sums of its two child vertices at indices 2 and 3, the sums of the children of those two vertices at indices 4 to 7, and so on.
With 1-indexing, conveniently the left child of a vertex at index $i$ is stored at index $2i$, and the right one at index $2i + 1$.
Equivalently, the parent of a vertex at index $i$ is stored at $i/2$ (integer division).
This simplifies the implementation a lot.
We don't need to store the structure of the tree in memory.
It is defined implicitly.
We only need one array which contains the sums of all segments.
As noted before, we need to store at most $4n$ vertices.
It might be less, but for convenience we always allocate an array of size $4n$.
There will be some elements in the sum array, that will not correspond to any vertices in the actual tree, but this doesn't complicate the implementation.
So, we store the Segment Tree simply as an array $t[]$ with a size of four times the input size $n$:
```{.cpp file=segment_tree_implementation_definition}
int n, t[4*MAXN];
```
The procedure for constructing the Segment Tree from a given array $a[]$ looks like this:
it is a recursive function with the parameters $a[]$ (the input array), $v$ (the index of the current vertex), and the boundaries $tl$ and $tr$ of the current segment.
In the main program this function will be called with the parameters of the root vertex: $v = 1$, $tl = 0$, and $tr = n - 1$.
```{.cpp file=segment_tree_implementation_build}
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = a[tl];
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = t[v*2] + t[v*2+1];
}
}
```
Further the function for answering sum queries is also a recursive function, which receives as parameters information about the current vertex/segment (i.e. the index $v$ and the boundaries $tl$ and $tr$) and also the information about the boundaries of the query, $l$ and $r$.
In order to simplify the code, this function always does two recursive calls, even if only one is necessary - in that case the superfluous recursive call will have $l > r$, and this can easily be caught using an additional check at the beginning of the function.
```{.cpp file=segment_tree_implementation_sum}
int sum(int v, int tl, int tr, int l, int r) {
if (l > r)
return 0;
if (l == tl && r == tr) {
return t[v];
}
int tm = (tl + tr) / 2;
return sum(v*2, tl, tm, l, min(r, tm))
+ sum(v*2+1, tm+1, tr, max(l, tm+1), r);
}
```
Finally the update query. The function will also receive information about the current vertex/segment, and additionally also the parameter of the update query (i.e. the position of the element and its new value).
```{.cpp file=segment_tree_implementation_update}
void update(int v, int tl, int tr, int pos, int new_val) {
if (tl == tr) {
t[v] = new_val;
} else {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
t[v] = t[v*2] + t[v*2+1];
}
}
```
### Memory efficient implementation
Most people use the implementation from the previous section. If you look at the array `t` you can see that it follows the numbering of the tree nodes in the order of a BFS traversal (level-order traversal).
Using this traversal the children of vertex $v$ are $2v$ and $2v + 1$ respectively.
However if $n$ is not a power of two, this method will skip some indices and leave some parts of the array `t` unused.
The memory consumption is limited by $4n$, even though a Segment Tree of an array of $n$ elements requires only $2n - 1$ vertices.
However it can be reduced.
We renumber the vertices of the tree in the order of an Euler tour traversal (pre-order traversal), and we write all these vertices next to each other.
Let's look at a vertex at index $v$, let it be responsible for the segment $[l, r]$, and let $mid = \dfrac{l + r}{2}$.
It is obvious that the left child will have the index $v + 1$.
The left child is responsible for the segment $[l, mid]$, i.e. in total there will be $2 * (mid - l + 1) - 1$ vertices in the left child's subtree.
Thus we can compute the index of the right child of $v$. The index will be $v + 2 * (mid - l + 1)$.
By this numbering we achieve a reduction of the necessary memory to $2n$.
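As a minimal sketch (not given in the original text), the sum Segment Tree from above could look as follows with this numbering, assuming the root is stored at index $1$ and the array $t$ has size $2 \cdot MAXN$:

```cpp
int n, t[2*MAXN];

void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = a[tl];
    } else {
        int tm = (tl + tr) / 2;
        int vl = v + 1;                    // left child directly follows v
        int vr = v + 2 * (tm - tl + 1);    // right child comes after the whole left subtree
        build(a, vl, tl, tm);
        build(a, vr, tm+1, tr);
        t[v] = t[vl] + t[vr];
    }
}

int sum(int v, int tl, int tr, int l, int r) {
    if (l > r)
        return 0;
    if (l == tl && r == tr)
        return t[v];
    int tm = (tl + tr) / 2;
    int vl = v + 1, vr = v + 2 * (tm - tl + 1);
    return sum(vl, tl, tm, l, min(r, tm))
         + sum(vr, tm+1, tr, max(l, tm+1), r);
}
```

The calls look the same as before, e.g. `build(a, 1, 0, n-1)` and `sum(1, 0, n-1, l, r)`.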
## <a name="advanced-versions-of-segment-trees"></a>Advanced versions of Segment Trees
A Segment Tree is a very flexible data structure, and allows variations and extensions in many different directions.
Let's try to categorize them below.
### More complex queries
It can be quite easy to change the Segment Tree in a direction, such that it computes different queries (e.g. computing the minimum / maximum instead of the sum), but it also can be very nontrivial.
#### Finding the maximum
Let us slightly change the condition of the problem described above: instead of querying the sum, we will now make maximum queries.
The tree will have exactly the same structure as the tree described above.
We only need to change the way $t[v]$ is computed in the $\text{build}$ and $\text{update}$ functions.
$t[v]$ will now store the maximum of the corresponding segment.
And we also need to change the calculation of the returned value of the $\text{sum}$ function (replacing the summation by the maximum).
Of course this problem can be easily changed into computing the minimum instead of the maximum.
Instead of showing an implementation to this problem, the implementation will be given to a more complex version of this problem in the next section.
#### Finding the maximum and the number of times it appears
This task is very similar to the previous one.
In addition of finding the maximum, we also have to find the number of occurrences of the maximum.
To solve this problem, we store a pair of numbers at each vertex in the tree:
In addition to the maximum we also store the number of occurrences of it in the corresponding segment.
Determining the correct pair to store at $t[v]$ can still be done in constant time using the information of the pairs stored at the child vertices.
Combining two such pairs should be done in a separate function, since this will be an operation that we will do while building the tree, while answering maximum queries and while performing modifications.
```{.cpp file=segment_tree_maximum_and_count}
pair<int, int> t[4*MAXN];
pair<int, int> combine(pair<int, int> a, pair<int, int> b) {
if (a.first > b.first)
return a;
if (b.first > a.first)
return b;
return make_pair(a.first, a.second + b.second);
}
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = make_pair(a[tl], 1);
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
pair<int, int> get_max(int v, int tl, int tr, int l, int r) {
if (l > r)
return make_pair(-INF, 0);
if (l == tl && r == tr)
return t[v];
int tm = (tl + tr) / 2;
return combine(get_max(v*2, tl, tm, l, min(r, tm)),
get_max(v*2+1, tm+1, tr, max(l, tm+1), r));
}
void update(int v, int tl, int tr, int pos, int new_val) {
if (tl == tr) {
t[v] = make_pair(new_val, 1);
} else {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
```
#### Compute the greatest common divisor / least common multiple
In this problem we want to compute the GCD / LCM of all numbers of given ranges of the array.
This interesting variation of the Segment Tree can be solved in exactly the same way as the Segment Trees we derived for sum / minimum / maximum queries:
it is enough to store the GCD / LCM of the corresponding segment in each vertex of the tree.
Combining two vertices is done by computing the GCD / LCM of the values of the two vertices.
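For instance, a minimal sketch for the GCD variant (assuming the same global array $t[]$ of size $4 \cdot MAXN$ as before; `__gcd` is the GCC helper, `std::gcd` can be used since C++17) might look like this:

```cpp
void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = a[tl];
    } else {
        int tm = (tl + tr) / 2;
        build(a, v*2, tl, tm);
        build(a, v*2+1, tm+1, tr);
        t[v] = __gcd(t[v*2], t[v*2+1]);
    }
}

int query_gcd(int v, int tl, int tr, int l, int r) {
    if (l > r)
        return 0;  // gcd(0, x) = x, so 0 acts as the neutral element
    if (l == tl && r == tr)
        return t[v];
    int tm = (tl + tr) / 2;
    return __gcd(query_gcd(v*2, tl, tm, l, min(r, tm)),
                 query_gcd(v*2+1, tm+1, tr, max(l, tm+1), r));
}
```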
#### Counting the number of zeros, searching for the $k$-th zero { #counting-zero-search-kth data-toc-label="Counting the number of zeros, searching for the k-th zero"}
In this problem we want to find the number of zeros in a given range, and additionally find the index of the $k$-th zero using a second function.
Again we have to change the stored values of the tree a bit:
This time we will store the number of zeros in each segment in $t[]$.
It is pretty clear how to implement the $\text{build}$, $\text{update}$ and $\text{count_zero}$ functions: we can simply use the ideas from the sum query problem.
Thus we solved the first part of the problem.
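For example, a minimal sketch of $\text{count_zero}$ (mirroring the sum query above; $\text{build}$ and $\text{update}$ only differ in storing the number of zeros instead of the sum) could be:

```cpp
int count_zero(int v, int tl, int tr, int l, int r) {
    if (l > r)
        return 0;
    if (l == tl && r == tr)
        return t[v];                    // t[v] stores the number of zeros in [tl, tr]
    int tm = (tl + tr) / 2;
    return count_zero(v*2, tl, tm, l, min(r, tm))
         + count_zero(v*2+1, tm+1, tr, max(l, tm+1), r);
}
```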
Now we learn how to solve the problem of finding the $k$-th zero in the array $a[]$.
To do this task, we will descend the Segment Tree, starting at the root vertex, and moving each time to either the left or the right child, depending on which segment contains the $k$-th zero.
In order to decide to which child we need to go, it is enough to look at the number of zeros appearing in the segment corresponding to the left vertex.
If this precomputed count is greater or equal to $k$, it is necessary to descend to the left child, and otherwise descend to the right child.
Notice, if we chose the right child, we have to subtract the number of zeros of the left child from $k$.
In the implementation we can handle the special case, $a[]$ containing less than $k$ zeros, by returning -1.
```{.cpp file=segment_tree_kth_zero}
int find_kth(int v, int tl, int tr, int k) {
if (k > t[v])
return -1;
if (tl == tr)
return tl;
int tm = (tl + tr) / 2;
if (t[v*2] >= k)
return find_kth(v*2, tl, tm, k);
else
return find_kth(v*2+1, tm+1, tr, k - t[v*2]);
}
```
#### Searching for an array prefix with a given amount
The task is as follows:
for a given value $x$ we have to quickly find the smallest index $i$ such that the sum of the first $i$ elements of the array $a[]$ is greater or equal to $x$ (assuming that the array $a[]$ only contains non-negative values).
This task can be solved using binary search, computing the sum of the prefixes with the Segment Tree.
However this will lead to a $O(\log^2 n)$ solution.
Instead we can use the same idea as in the previous section, and find the position by descending the tree:
by moving each time to the left or the right, depending on the sum of the left child.
Thus finding the answer in $O(\log n)$ time.
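A minimal sketch of this descent (not given in the original text, assuming the sum Segment Tree $t[]$ from above, non-negative array values and $x \ge 1$) could look like this; it returns the index at which the prefix ends, or $-1$ if even the total sum is smaller than $x$:

```cpp
int find_prefix(int v, int tl, int tr, int x) {
    if (t[v] < x)
        return -1;                  // even the sum of the whole array is too small
    while (tl != tr) {
        int tm = (tl + tr) / 2;
        if (t[v*2] >= x) {          // the prefix already ends inside the left child
            v = v*2;
            tr = tm;
        } else {                    // otherwise subtract the left sum and go right
            x -= t[v*2];
            v = v*2+1;
            tl = tm+1;
        }
    }
    return tl;                      // a[0..tl] is the shortest prefix with sum >= x
}
```

It is called as `find_prefix(1, 0, n-1, x)` and works in $O(\log n)$.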
#### Searching for the first element greater than a given amount
The task is as follows:
for a given value $x$ and a range $a[l \dots r]$ find the smallest $i$ in the range $a[l \dots r]$, such that $a[i]$ is greater than $x$.
This task can be solved using binary search over max prefix queries with the Segment Tree.
However, this will lead to a $O(\log^2 n)$ solution.
Instead, we can use the same idea as in the previous sections, and find the position by descending the tree:
by moving each time to the left or the right, depending on the maximum value of the left child.
Thus finding the answer in $O(\log n)$ time.
```{.cpp file=segment_tree_first_greater}
int get_first(int v, int lv, int rv, int l, int r, int x) {
if(lv > r || rv < l) return -1;
if(l <= lv && rv <= r) {
if(t[v] <= x) return -1;
while(lv != rv) {
int mid = lv + (rv-lv)/2;
if(t[2*v] > x) {
v = 2*v;
rv = mid;
}else {
v = 2*v+1;
lv = mid+1;
}
}
return lv;
}
int mid = lv + (rv-lv)/2;
int rs = get_first(2*v, lv, mid, l, r, x);
if(rs != -1) return rs;
return get_first(2*v+1, mid+1, rv, l ,r, x);
}
```
#### Finding subsegments with the maximal sum
Here again we receive a range $a[l \dots r]$ for each query, this time we have to find a subsegment $a[l^\prime \dots r^\prime]$ such that $l \le l^\prime$ and $r^\prime \le r$ and the sum of the elements of this segment is maximal.
As before we also want to be able to modify individual elements of the array.
The elements of the array can be negative, and the optimal subsegment can be empty (e.g. if all elements are negative).
This problem is a non-trivial usage of a Segment Tree.
This time we will store four values for each vertex:
the sum of the segment, the maximum prefix sum, the maximum suffix sum, and the sum of the maximal subsegment in it.
In other words for each segment of the Segment Tree the answer is already precomputed as well as the answers for segments touching the left and the right boundaries of the segment.
How to build a tree with such data?
Again we compute it in a recursive fashion:
we first compute all four values for the left and the right child, and then combine those to obtain the four values for the current vertex.
Note the answer for the current vertex is either:
* the answer of the left child, which means that the optimal subsegment is entirely placed in the segment of the left child
* the answer of the right child, which means that the optimal subsegment is entirely placed in the segment of the right child
* the sum of the maximum suffix sum of the left child and the maximum prefix sum of the right child, which means that the optimal subsegment intersects with both children.
Hence the answer to the current vertex is the maximum of these three values.
Computing the maximum prefix / suffix sum is even easier.
Here is the implementation of the $\text{combine}$ function, which receives only data from the left and right child, and returns the data of the current vertex.
```{.cpp file=segment_tree_maximal_sum_subsegments1}
struct data {
int sum, pref, suff, ans;
};
data combine(data l, data r) {
data res;
res.sum = l.sum + r.sum;
res.pref = max(l.pref, l.sum + r.pref);
res.suff = max(r.suff, r.sum + l.suff);
res.ans = max(max(l.ans, r.ans), l.suff + r.pref);
return res;
}
```
Using the $\text{combine}$ function it is easy to build the Segment Tree.
We can implement it in exactly the same way as in the previous implementations.
To initialize the leaf vertices, we additionally create the auxiliary function $\text{make_data}$, which will return a $\text{data}$ object holding the information of a single value.
```{.cpp file=segment_tree_maximal_sum_subsegments2}
data make_data(int val) {
data res;
res.sum = val;
res.pref = res.suff = res.ans = max(0, val);
return res;
}
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = make_data(a[tl]);
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
void update(int v, int tl, int tr, int pos, int new_val) {
if (tl == tr) {
t[v] = make_data(new_val);
} else {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
```
It only remains to show how to compute the answer to a query.
To answer it, we go down the tree as before, breaking the query into several subsegments that coincide with the segments of the Segment Tree, and combine the answers in them into a single answer for the query.
Then it should be clear, that the work is exactly the same as in the simple Segment Tree, but instead of summing / minimizing / maximizing the values, we use the $\text{combine}$ function.
```{.cpp file=segment_tree_maximal_sum_subsegments3}
data query(int v, int tl, int tr, int l, int r) {
if (l > r)
return make_data(0);
if (l == tl && r == tr)
return t[v];
int tm = (tl + tr) / 2;
return combine(query(v*2, tl, tm, l, min(r, tm)),
query(v*2+1, tm+1, tr, max(l, tm+1), r));
}
```
### <a name="saving-the-entire-subarrays-in-each-vertex"></a>Saving the entire subarrays in each vertex
This is a separate subsection that stands apart from the others, because at each vertex of the Segment Tree we don't store information about the corresponding segment in compressed form (sum, minimum, maximum, ...), but store all elements of the segment.
Thus the root of the Segment Tree will store all elements of the array, the left child vertex will store the first half of the array, the right vertex the second half, and so on.
In the simplest application of this technique we store the elements in sorted order.
In more complex versions the elements are not stored in lists, but more advanced data structures (sets, maps, ...).
But all these methods have the common factor, that each vertex requires linear memory (i.e. proportional to the length of the corresponding segment).
The first natural question, when considering these Segment Trees, is about memory consumption.
Intuitively this might look like $O(n^2)$ memory, but it turns out that the complete tree will only need $O(n \log n)$ memory.
Why is this so?
Quite simply, because each element of the array falls into $O(\log n)$ segments (remember the height of the tree is $O(\log n)$).
So in spite of the apparent extravagance of such a Segment Tree, it consumes only slightly more memory than the usual Segment Tree.
Several typical applications of this data structure are described below.
It is worth noting the similarity of these Segment Trees with 2D data structures (in fact this is a 2D data structure, but with rather limited capabilities).
#### Find the smallest number greater or equal to a specified number. No modification queries.
We want to answer queries of the following form:
for three given numbers $(l, r, x)$ we have to find the minimal number in the segment $a[l \dots r]$ which is greater than or equal to $x$.
We construct a Segment Tree.
In each vertex we store a sorted list of all numbers occurring in the corresponding segment, like described above.
How to build such a Segment Tree as effectively as possible?
As always we approach this problem recursively: let the lists of the left and right children already be constructed, and we want to build the list for the current vertex.
From this view the operation is now trivial and can be accomplished in linear time:
We only need to combine the two sorted lists into one, which can be done by iterating over them using two pointers.
The C++ STL already has an implementation of this algorithm.
Because of this structure of the Segment Tree and its similarity to the merge sort algorithm, the data structure is also often called a "Merge Sort Tree".
```{.cpp file=segment_tree_smallest_number_greater1}
vector<int> t[4*MAXN];
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = vector<int>(1, a[tl]);
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
merge(t[v*2].begin(), t[v*2].end(), t[v*2+1].begin(), t[v*2+1].end(),
back_inserter(t[v]));
}
}
```
We already know that the Segment Tree constructed in this way will require $O(n \log n)$ memory.
And thanks to this implementation its construction also takes $O(n \log n)$ time, after all each list is constructed in linear time with respect to its size.
Now consider the answer to the query.
We will go down the tree, like in the regular Segment Tree, breaking our segment $a[l \dots r]$ into several subsegments (into at most $O(\log n)$ pieces).
It is clear that the answer to the whole query is the minimum of the answers of these subqueries.
So now we only need to understand how to respond to a query on one such subsegment that corresponds to some vertex of the tree.
We are at some vertex of the Segment Tree and we want to compute the answer to the query, i.e. find the minimum number greater than or equal to a given number $x$.
Since the vertex contains the list of elements in sorted order, we can simply perform a binary search on this list and return the first number greater than or equal to $x$.
Thus the answer to the query in one segment of the tree takes $O(\log n)$ time, and the entire query is processed in $O(\log^2 n)$.
```{.cpp file=segment_tree_smallest_number_greater2}
int query(int v, int tl, int tr, int l, int r, int x) {
if (l > r)
return INF;
if (l == tl && r == tr) {
vector<int>::iterator pos = lower_bound(t[v].begin(), t[v].end(), x);
if (pos != t[v].end())
return *pos;
return INF;
}
int tm = (tl + tr) / 2;
return min(query(v*2, tl, tm, l, min(r, tm), x),
query(v*2+1, tm+1, tr, max(l, tm+1), r, x));
}
```
The constant $\text{INF}$ is equal to some large number that is bigger than all numbers in the array.
Its usage means, that there is no number greater than or equal to $x$ in the segment.
It has the meaning of "there is no answer in the given interval".
#### Find the smallest number greater or equal to a specified number. With modification queries.
This task is similar to the previous.
The last approach has a disadvantage: it was not possible to modify the array between answering queries.
Now we want to do exactly this: a modification query will do the assignment $a[i] = y$.
The solution is similar to the solution of the previous problem, but instead of sorted lists at each vertex of the Segment Tree, we will store a balanced search tree that allows quickly searching for numbers, deleting numbers, and inserting new numbers.
Since the array can contain repeated numbers, the natural choice is the data structure $\text{multiset}$.
The construction of such a Segment Tree is done in pretty much the same way as in the previous problem, only now we need to combine $\text{multiset}$s and not sorted lists.
This leads to a construction time of $O(n \log^2 n)$ (in general merging two red-black trees can be done in linear time, but the C++ STL doesn't guarantee this time complexity).
The $\text{query}$ function is also almost equivalent, only now the $\text{lower_bound}$ member function of the $\text{multiset}$ should be called instead ($\text{std::lower_bound}$ only works in $O(\log n)$ time if used with random-access iterators).
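A minimal sketch of the construction and the query (not given in the original text, assuming $t$ is now declared as `multiset<int> t[4*MAXN]`) could be:

```cpp
void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v].insert(a[tl]);
    } else {
        int tm = (tl + tr) / 2;
        build(a, v*2, tl, tm);
        build(a, v*2+1, tm+1, tr);
        t[v].insert(t[v*2].begin(), t[v*2].end());      // combine the children's multisets
        t[v].insert(t[v*2+1].begin(), t[v*2+1].end());
    }
}

int query(int v, int tl, int tr, int l, int r, int x) {
    if (l > r)
        return INF;
    if (l == tl && r == tr) {
        auto pos = t[v].lower_bound(x);                  // member lower_bound, O(log n)
        return pos != t[v].end() ? *pos : INF;
    }
    int tm = (tl + tr) / 2;
    return min(query(v*2, tl, tm, l, min(r, tm), x),
               query(v*2+1, tm+1, tr, max(l, tm+1), r, x));
}
```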
Finally the modification request.
To process it, we must go down the tree, and modify all $\text{multiset}$s of the corresponding segments that contain the affected element.
We simply delete the old value of this element (but only one occurrence), and insert the new value.
```cpp
void update(int v, int tl, int tr, int pos, int new_val) {
t[v].erase(t[v].find(a[pos]));
t[v].insert(new_val);
if (tl != tr) {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
} else {
a[pos] = new_val;
}
}
```
Processing of this modification query also takes $O(\log^2 n)$ time.
#### Find the smallest number greater or equal to a specified number. Acceleration with "fractional cascading".
We have the same problem statement, we want to find the minimal number greater than or equal to $x$ in a segment, but this time in $O(\log n)$ time.
We will improve the time complexity using the technique "fractional cascading".
Fractional cascading is a simple technique that allows you to improve the running time of multiple binary searches, which are conducted at the same time.
Our previous approach to the search query was, that we divide the task into several subtasks, each of which is solved with a binary search.
Fractional cascading allows you to replace all of these binary searches with a single one.
The simplest and most obvious example of fractional cascading is the following problem:
there are $k$ sorted lists of numbers, and we must find in each list the first number greater than or equal to the given number.
Instead of performing a binary search for each list, we could merge all lists into one big sorted list.
Additionally for each element $y$ we store a list of results of searching for $y$ in each of the $k$ lists.
Therefore if we want to find the smallest number greater than or equal to $x$, we just need to perform one single binary search, and from the list of indices we can determine the smallest number in each list.
This approach however requires $O(n \cdot k)$ memory ($n$ is the combined length of all lists), which can be quite inefficient.
Fractional cascading reduces this memory complexity to $O(n)$ memory, by creating from the $k$ input lists $k$ new lists, in which each list contains the corresponding list and additionally also every second element of the following new list.
Using this structure it is only necessary to store two indices, the index of the element in the original list, and the index of the element in the following new list.
So this approach only uses $O(n)$ memory, and still can answer the queries using a single binary search.
But for our application we do not need the full power of fractional cascading.
In our Segment Tree a vertex will contain the sorted list of all elements that occur in either the left or the right subtrees (like in the Merge Sort Tree).
Additionally to this sorted list, we store two positions for each element.
For an element $y$ we store the smallest index $i$, such that the $i$th element in the sorted list of the left child is greater or equal to $y$.
And we store the smallest index $j$, such that the $j$th element in the sorted list of the right child is greater or equal to $y$.
These values can be computed in parallel to the merging step when we build the tree.
How does this speed up the queries?
Remember, in the normal solution we did a binary search in every node.
But with this modification, we can avoid all except one.
To answer a query, we simply do a binary search in the root node.
This gives us the smallest element $y \ge x$ in the complete array, but it also gives us two positions.
The index of the smallest element greater or equal to $y$ in the list of the left child, and the index of the smallest element greater or equal to $y$ in the list of the right child. Notice that searching for $\ge y$ is the same as searching for $\ge x$, since our array doesn't contain any elements between $x$ and $y$.
In the normal Merge Sort Tree solution we would compute these indices via binary search, but with the help of the precomputed values we can just look them up in $O(1)$.
And we can repeat that until we visited all nodes that cover our query interval.
To summarize, as usual we touch $O(\log n)$ nodes during a query. In the root node we do a binary search, and in all other nodes we only do constant work.
This means the complexity for answering a query is $O(\log n)$.
But notice, that this uses three times more memory than a normal Merge Sort Tree, which already uses a lot of memory ($O(n \log n)$).
It is straightforward to apply this technique to a problem, that doesn't require any modification queries.
The two positions are just integers and can easily be computed by counting when merging the two sorted sequences.
It is still possible to also allow modification queries, but that complicates the entire code.
Instead of integers, you need to store the sorted array as `multiset`, and instead of indices you need to store iterators.
And you need to work very carefully, so that you increment or decrement the correct iterators during a modification query.
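As a minimal sketch (not part of the original text, without modification queries): for every position $p$ in a node's sorted list, `lo[v][p]` / `hi[v][p]` store the position of the first element greater or equal to `t[v][p]` in the left / right child's list. These arrays are computed with two pointers during the merge, and the query carries the current position down the tree instead of doing a binary search in every node.

```cpp
vector<int> t[4*MAXN];              // sorted elements per node (Merge Sort Tree)
vector<int> lo[4*MAXN], hi[4*MAXN]; // cascading positions into the left / right child

void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = {a[tl]};
        return;
    }
    int tm = (tl + tr) / 2;
    build(a, v*2, tl, tm);
    build(a, v*2+1, tm+1, tr);
    merge(t[v*2].begin(), t[v*2].end(), t[v*2+1].begin(), t[v*2+1].end(),
          back_inserter(t[v]));
    int i = 0, j = 0;
    for (int y : t[v]) {            // position of the first element >= y in each child
        while (i < (int)t[v*2].size() && t[v*2][i] < y) i++;
        while (j < (int)t[v*2+1].size() && t[v*2+1][j] < y) j++;
        lo[v].push_back(i);
        hi[v].push_back(j);
    }
}

// p is the position of the first element >= x in t[v] (t[v].size() if there is none)
int query(int v, int tl, int tr, int l, int r, int p) {
    if (l > r)
        return INF;
    if (l == tl && r == tr)
        return p < (int)t[v].size() ? t[v][p] : INF;
    int tm = (tl + tr) / 2;
    int pl = p < (int)t[v].size() ? lo[v][p] : (int)t[v*2].size();
    int pr = p < (int)t[v].size() ? hi[v][p] : (int)t[v*2+1].size();
    return min(query(v*2, tl, tm, l, min(r, tm), pl),
               query(v*2+1, tm+1, tr, max(l, tm+1), r, pr));
}

int solve(int l, int r, int x) {    // one binary search at the root, O(1) per visited node
    int p = lower_bound(t[1].begin(), t[1].end(), x) - t[1].begin();
    return query(1, 0, n-1, l, r, p);
}
```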
#### Other possible variations
This technique implies a whole new class of possible applications.
Instead of storing a $\text{vector}$ or a $\text{multiset}$ in each vertex, other data structures can be used:
other Segment Trees (somewhat discussed in [Generalization to higher dimensions](segment_tree.md#generalization-to-higher-dimensions)), Fenwick Trees, Cartesian trees, etc.
### Range updates (Lazy Propagation)
All problems in the above sections discussed modification queries that only affected a single element of the array each.
However the Segment Tree allows applying modification queries to an entire segment of contiguous elements, and perform the query in the same time $O(\log n)$.
#### Addition on segments
We begin by considering problems of the simplest form: the modification query should add a number $x$ to all numbers in the segment $a[l \dots r]$.
The second query, that we are supposed to answer, simply asks for the value of $a[i]$.
To make the addition query efficient, we store at each vertex in the Segment Tree how much we should add to all numbers in the corresponding segment.
For example, if the query "add 3 to the whole array $a[0 \dots n-1]$" comes, then we place the number 3 in the root of the tree.
In general we have to place this number to multiple segments, which form a partition of the query segment.
Thus we don't have to change all $O(n)$ values, but only $O(\log n)$ many.
If now there comes a query that asks the current value of a particular array entry, it is enough to go down the tree and add up all values found along the way.
```cpp
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = a[tl];
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = 0;
}
}
void update(int v, int tl, int tr, int l, int r, int add) {
if (l > r)
return;
if (l == tl && r == tr) {
t[v] += add;
} else {
int tm = (tl + tr) / 2;
update(v*2, tl, tm, l, min(r, tm), add);
update(v*2+1, tm+1, tr, max(l, tm+1), r, add);
}
}
int get(int v, int tl, int tr, int pos) {
if (tl == tr)
return t[v];
int tm = (tl + tr) / 2;
if (pos <= tm)
return t[v] + get(v*2, tl, tm, pos);
else
return t[v] + get(v*2+1, tm+1, tr, pos);
}
```
#### Assignment on segments
Suppose now that the modification query asks to assign each element of a certain segment $a[l \dots r]$ to some value $p$.
As a second query we will again consider reading the value of the array $a[i]$.
To perform this modification query on a whole segment, you have to store at each vertex of the Segment Tree whether the corresponding segment is covered entirely with the same value or not.
This allows us to make a "lazy" update:
instead of changing all segments in the tree that cover the query segment, we only change some, and leave others unchanged.
A marked vertex will mean, that every element of the corresponding segment is assigned to that value, and actually also the complete subtree should only contain this value.
In a sense we are lazy and delay writing the new value to all those vertices.
We can do this tedious task later, if this is necessary.
So after the modification query is executed, some parts of the tree become irrelevant - some modifications remain unfulfilled in it.
For example if a modification query "assign a number to the whole array $a[0 \dots n-1]$" gets executed, in the Segment Tree only a single change is made - the number is placed in the root of the tree and this vertex gets marked.
The remaining segments remain unchanged, although in fact the number should be placed in the whole tree.
Suppose now that the second modification query says, that the first half of the array $a[0 \dots n/2]$ should be assigned with some other number.
To process this query we must assign each element in the whole left child of the root vertex with that number.
But before we do this, we must first sort out the root vertex.
The subtlety here is that the right half of the array should still be assigned to the value of the first query, and at the moment there is no information for the right half stored.
The way to solve this is to push the information of the root to its children, i.e. if the root of the tree was assigned with any number, then we assign the left and the right child vertices with this number and remove the mark of the root.
After that, we can assign the left child with the new value, without losing any necessary information.
Summarizing we get:
for any queries (a modification or reading query) during the descent along the tree we should always push information from the current vertex into both of its children.
We can understand this in such a way, that when we descend the tree we apply delayed modifications, but exactly as much as necessary (so as not to degrade the complexity of $O(\log n)$).
For the implementation we need to make a $\text{push}$ function, which will receive the current vertex, and it will push the information of this vertex to both of its children.
We will call this function at the beginning of the query functions (but we will not call it from the leaves, because there is no need to push information from them any further).
```cpp
void push(int v) {
if (marked[v]) {
t[v*2] = t[v*2+1] = t[v];
marked[v*2] = marked[v*2+1] = true;
marked[v] = false;
}
}
void update(int v, int tl, int tr, int l, int r, int new_val) {
if (l > r)
return;
if (l == tl && tr == r) {
t[v] = new_val;
marked[v] = true;
} else {
push(v);
int tm = (tl + tr) / 2;
update(v*2, tl, tm, l, min(r, tm), new_val);
update(v*2+1, tm+1, tr, max(l, tm+1), r, new_val);
}
}
int get(int v, int tl, int tr, int pos) {
if (tl == tr) {
return t[v];
}
push(v);
int tm = (tl + tr) / 2;
if (pos <= tm)
return get(v*2, tl, tm, pos);
else
return get(v*2+1, tm+1, tr, pos);
}
```
Notice: the function $\text{get}$ can also be implemented in a different way:
do not make delayed updates, but immediately return the value $t[v]$ if $marked[v]$ is true.
#### Adding on segments, querying for maximum
Now the modification query is to add a number to all elements in a range, and the reading query is to find the maximum in a range.
So for each vertex of the Segment Tree we have to store the maximum of the corresponding subsegment.
The interesting part is how to recompute these values during a modification request.
For this purpose we store an additional value for each vertex.
In this value we store the addends we haven't propagated to the child vertices.
Before traversing to a child vertex, we call $\text{push}$ and propagate the value to both children.
We have to do this in both the $\text{update}$ function and the $\text{query}$ function.
```cpp
void push(int v) {
t[v*2] += lazy[v];
lazy[v*2] += lazy[v];
t[v*2+1] += lazy[v];
lazy[v*2+1] += lazy[v];
lazy[v] = 0;
}
void update(int v, int tl, int tr, int l, int r, int addend) {
if (l > r)
return;
if (l == tl && tr == r) {
t[v] += addend;
lazy[v] += addend;
} else {
push(v);
int tm = (tl + tr) / 2;
update(v*2, tl, tm, l, min(r, tm), addend);
update(v*2+1, tm+1, tr, max(l, tm+1), r, addend);
t[v] = max(t[v*2], t[v*2+1]);
}
}
int query(int v, int tl, int tr, int l, int r) {
if (l > r)
return -INF;
if (l == tl && tr == r)
return t[v];
push(v);
int tm = (tl + tr) / 2;
return max(query(v*2, tl, tm, l, min(r, tm)),
query(v*2+1, tm+1, tr, max(l, tm+1), r));
}
```
### <a name="generalization-to-higher-dimensions"></a>Generalization to higher dimensions
A Segment Tree can be generalized quite naturally to higher dimensions.
If in the one-dimensional case we split the indices of the array into segments, then in the two-dimensional we make an ordinary Segment Tree with respect to the first indices, and for each segment we build an ordinary Segment Tree with respect to the second indices.
#### Simple 2D Segment Tree
A matrix $a[0 \dots n-1, 0 \dots m-1]$ is given, and we have to find the sum (or minimum/maximum) on some submatrix $a[x_1 \dots x_2, y_1 \dots y_2]$, as well as perform modifications of individual matrix elements (i.e. queries of the form $a[x][y] = p$).
So we build a 2D Segment Tree: first the Segment Tree using the first coordinate ($x$), then the second ($y$).
To make the construction process more understandable, you can forget for a while that the matrix is two-dimensional, and only leave the first coordinate.
We will construct an ordinary one-dimensional Segment Tree using only the first coordinate.
But instead of storing a number in a segment, we store an entire Segment Tree:
i.e. at this moment we remember that we also have a second coordinate; but because at this moment the first coordinate is already fixed to some interval $[l \dots r]$, we actually work with such a strip $a[l \dots r, 0 \dots m-1]$ and for it we build a Segment Tree.
Here is the implementation of the construction of a 2D Segment Tree.
It actually represents two separate blocks:
the construction of a Segment Tree along the $x$ coordinate ($\text{build}_x$), and the $y$ coordinate ($\text{build}_y$).
For the leaf nodes in $\text{build}_y$ we have to separate two cases:
when the current segment of the first coordinate $[tlx \dots trx]$ has length 1, and when it has a length greater than one. In the first case, we just take the corresponding value from the matrix, and in the second case we can combine the values of the two Segment Trees from the left and the right child in the coordinate $x$.
```cpp
void build_y(int vx, int lx, int rx, int vy, int ly, int ry) {
if (ly == ry) {
if (lx == rx)
t[vx][vy] = a[lx][ly];
else
t[vx][vy] = t[vx*2][vy] + t[vx*2+1][vy];
} else {
int my = (ly + ry) / 2;
build_y(vx, lx, rx, vy*2, ly, my);
build_y(vx, lx, rx, vy*2+1, my+1, ry);
t[vx][vy] = t[vx][vy*2] + t[vx][vy*2+1];
}
}
void build_x(int vx, int lx, int rx) {
if (lx != rx) {
int mx = (lx + rx) / 2;
build_x(vx*2, lx, mx);
build_x(vx*2+1, mx+1, rx);
}
build_y(vx, lx, rx, 1, 0, m-1);
}
```
Such a Segment Tree still uses a linear amount of memory (linear in the number of matrix elements $n m$), but with a larger constant: $16 n m$.
It is clear that the described procedure $\text{build}_x$ also works in linear time.
Now we turn to processing of queries. We will answer the two-dimensional query using the same principle:
first, break the query along the first coordinate, and then, for every reached vertex, call the corresponding Segment Tree along the second coordinate.
```cpp
int sum_y(int vx, int vy, int tly, int try_, int ly, int ry) {
if (ly > ry)
return 0;
if (ly == tly && try_ == ry)
return t[vx][vy];
int tmy = (tly + try_) / 2;
return sum_y(vx, vy*2, tly, tmy, ly, min(ry, tmy))
+ sum_y(vx, vy*2+1, tmy+1, try_, max(ly, tmy+1), ry);
}
int sum_x(int vx, int tlx, int trx, int lx, int rx, int ly, int ry) {
if (lx > rx)
return 0;
if (lx == tlx && trx == rx)
return sum_y(vx, 1, 0, m-1, ly, ry);
int tmx = (tlx + trx) / 2;
return sum_x(vx*2, tlx, tmx, lx, min(rx, tmx), ly, ry)
+ sum_x(vx*2+1, tmx+1, trx, max(lx, tmx+1), rx, ly, ry);
}
```
This function works in $O(\log n \log m)$ time, since it first descends the tree in the first coordinate, and for each traversed vertex in the tree it makes a query in the corresponding Segment Tree along the second coordinate.
Finally we consider the modification query.
We want to learn how to modify the Segment Tree in accordance with the change in the value of some element $a[x][y] = p$.
It is clear that the changes will occur only in those vertices of the first Segment Tree that cover the coordinate $x$ (and there will be $O(\log n)$ of them), and for the Segment Trees corresponding to them the changes will only occur at those vertices that cover the coordinate $y$ (and there will be $O(\log m)$ of them).
Therefore the implementation will not be very different from the one-dimensional case, only now we first descend the first coordinate, and then the second.
```cpp
void update_y(int vx, int lx, int rx, int vy, int ly, int ry, int x, int y, int new_val) {
if (ly == ry) {
if (lx == rx)
t[vx][vy] = new_val;
else
t[vx][vy] = t[vx*2][vy] + t[vx*2+1][vy];
} else {
int my = (ly + ry) / 2;
if (y <= my)
update_y(vx, lx, rx, vy*2, ly, my, x, y, new_val);
else
update_y(vx, lx, rx, vy*2+1, my+1, ry, x, y, new_val);
t[vx][vy] = t[vx][vy*2] + t[vx][vy*2+1];
}
}
void update_x(int vx, int lx, int rx, int x, int y, int new_val) {
if (lx != rx) {
int mx = (lx + rx) / 2;
if (x <= mx)
update_x(vx*2, lx, mx, x, y, new_val);
else
update_x(vx*2+1, mx+1, rx, x, y, new_val);
}
update_y(vx, lx, rx, 1, 0, m-1, x, y, new_val);
}
```
#### Compression of 2D Segment Tree
Let the problem be the following: there are $n$ points on the plane given by their coordinates $(x_i, y_i)$ and queries of the form "count the number of points lying in the rectangle $((x_1, y_1), (x_2, y_2))$".
It is clear that in the case of such a problem it becomes unreasonably wasteful to construct a two-dimensional Segment Tree with $O(n^2)$ elements.
Most of this memory will be wasted, since each single point can only get into $O(\log n)$ segments of the tree along the first coordinate, and therefore the total "useful" size of all tree segments on the second coordinate is $O(n \log n)$.
So we proceed as follows:
at each vertex of the Segment Tree with respect to the first coordinate we store a Segment Tree constructed only by those second coordinates that occur in the current segment of the first coordinates.
In other words, when constructing a Segment Tree inside some vertex with index $vx$ and the boundaries $tlx$ and $trx$, we only consider those points that fall into this interval $x \in [tlx, trx]$, and build a Segment Tree just using them.
Thus we will achieve that each Segment Tree on the second coordinate will occupy exactly as much memory as it should.
As a result, the total amount of memory will decrease to $O(n \log n)$.
We can still answer the queries in $O(\log^2 n)$ time: we just have to make a binary search on the second coordinate, which does not worsen the complexity.
But modification queries will be impossible with this structure:
in fact if a new point appears, we have to add a new element in the middle of some Segment Tree along the second coordinate, which cannot be done efficiently.
In conclusion we note that the two-dimensional Segment Tree compressed in the described way becomes practically equivalent to the variation of the one-dimensional Segment Tree (see [Saving the entire subarrays in each vertex](segment_tree.md#saving-the-entire-subarrays-in-each-vertex)).
In particular the two-dimensional Segment Tree is just a special case of storing a subarray in each vertex of the tree.
It follows that if you have to abandon a two-dimensional Segment Tree due to the impossibility of executing a query, it makes sense to try to replace the nested Segment Tree with some more powerful data structure, for example a Cartesian tree.
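To make the idea concrete, here is a simplified sketch for the counting problem above (all names, including $\text{MAXN}$, are assumptions, and the points are assumed to be sorted by $x$ beforehand). Instead of a nested Segment Tree it stores, at each vertex of the $x$-tree, the occurring $y$-coordinates in sorted order, which, as noted, is practically equivalent for counting queries.
```cpp
vector<int> ys[4*MAXN];   // sorted y-coordinates of all points in this x-range

void build(pair<int,int> pts[], int v, int tl, int tr) {
    if (tl == tr) {
        ys[v] = {pts[tl].second};
    } else {
        int tm = (tl + tr) / 2;
        build(pts, v*2, tl, tm);
        build(pts, v*2+1, tm+1, tr);
        merge(ys[v*2].begin(), ys[v*2].end(), ys[v*2+1].begin(), ys[v*2+1].end(),
              back_inserter(ys[v]));
    }
}

// number of points with x-index in [l, r] and y-coordinate in [y1, y2]
int count_points(int v, int tl, int tr, int l, int r, int y1, int y2) {
    if (l > r)
        return 0;
    if (l == tl && r == tr)
        return upper_bound(ys[v].begin(), ys[v].end(), y2)
             - lower_bound(ys[v].begin(), ys[v].end(), y1);
    int tm = (tl + tr) / 2;
    return count_points(v*2, tl, tm, l, min(r, tm), y1, y2)
         + count_points(v*2+1, tm+1, tr, max(l, tm+1), r, y1, y2);
}
```
The query rectangle $[x_1, x_2] \times [y_1, y_2]$ is first translated into an index range $[l, r]$ on the sorted $x$-coordinates (e.g. with $\text{lower_bound}$ / $\text{upper_bound}$) before calling $\text{count_points}$.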
### Preserving the history of its values (Persistent Segment Tree)
A persistent data structure is a data structure that remembers its previous state for each modification.
This allows us to access any version of this data structure that interests us and execute a query on it.
Segment Tree is a data structure that can be turned into a persistent data structure efficiently (both in time and memory consumption).
We want to avoid copying the complete tree before each modification, and we don't want to lose the $O(\log n)$ time behavior for answering range queries.
In fact, any change request in the Segment Tree leads to a change in the data of only $O(\log n)$ vertices along the path starting from the root.
So if we store the Segment Tree using pointers (i.e. a vertex stores pointers to the left and the right child vertices), then when performing the modification query, we simply need to create new vertices instead of changing the available vertices.
Vertices that are not affected by the modification query can still be used by pointing the pointers to the old vertices.
Thus for a modification query $O(\log n)$ new vertices will be created, including a new root vertex of the Segment Tree, and the entire previous version of the tree rooted at the old root vertex will remain unchanged.
Let's give an example implementation for the simplest Segment Tree: when there is only a query asking for sums, and modification queries of single elements.
```cpp
struct Vertex {
Vertex *l, *r;
int sum;
Vertex(int val) : l(nullptr), r(nullptr), sum(val) {}
Vertex(Vertex *l, Vertex *r) : l(l), r(r), sum(0) {
if (l) sum += l->sum;
if (r) sum += r->sum;
}
};
Vertex* build(int a[], int tl, int tr) {
if (tl == tr)
return new Vertex(a[tl]);
int tm = (tl + tr) / 2;
return new Vertex(build(a, tl, tm), build(a, tm+1, tr));
}
int get_sum(Vertex* v, int tl, int tr, int l, int r) {
if (l > r)
return 0;
if (l == tl && tr == r)
return v->sum;
int tm = (tl + tr) / 2;
return get_sum(v->l, tl, tm, l, min(r, tm))
+ get_sum(v->r, tm+1, tr, max(l, tm+1), r);
}
Vertex* update(Vertex* v, int tl, int tr, int pos, int new_val) {
if (tl == tr)
return new Vertex(new_val);
int tm = (tl + tr) / 2;
if (pos <= tm)
return new Vertex(update(v->l, tl, tm, pos, new_val), v->r);
else
return new Vertex(v->l, update(v->r, tm+1, tr, pos, new_val));
}
```
For each modification of the Segment Tree we will receive a new root vertex.
To quickly jump between two different versions of the Segment Tree, we need to store these roots in an array.
To use a specific version of the Segment Tree we simply call the query using the appropriate root vertex.
With the approach described above almost any Segment Tree can be turned into a persistent data structure.
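For example, a minimal usage sketch for the implementation above (the array $a$, its size $n$, the position $pos$, the value $val$, and the query range $l$, $r$ are assumed to come from the surrounding problem):
```cpp
vector<Vertex*> roots;
roots.push_back(build(a, 0, n-1));                        // version 0: the initial array
roots.push_back(update(roots.back(), 0, n-1, pos, val));  // version 1: after one assignment

// sum over a[l..r] as it was in version 0, i.e. before the assignment
int old_sum = get_sum(roots[0], 0, n-1, l, r);
```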
#### Finding the $k$-th smallest number in a range {data-toc-label="Finding the k-th smallest number in a range"}
This time we have to answer queries of the form "What is the $k$-th smallest element in the range $a[l \dots r]$?".
This query can be answered using a binary search and a Merge Sort Tree, but the time complexity for a single query would be $O(\log^3 n)$.
We will accomplish the same task using a persistent Segment Tree in $O(\log n)$.
First we will discuss a solution for a simpler problem:
We will only consider arrays in which the elements are bound by $0 \le a[i] \lt n$.
And we only want to find the $k$-th smallest element in some prefix of the array $a$.
It will be very easy to extend the developed ideas later to unrestricted arrays and unrestricted range queries.
Note that we will be using 1-based indexing for $a$.
We will use a Segment Tree that counts all appearing numbers, i.e. in the Segment Tree we will store the histogram of the array.
So the leaf vertices will store how often the values $0$, $1$, $\dots$, $n-1$ appear in the array, and the other vertices store how many numbers from some range occur in the array.
In other words we create a regular Segment Tree with sum queries over the histogram of the array.
But instead of creating all $n$ Segment Trees for every possible prefix, we will create one persistent one, that will contain the same information.
We will start with an empty Segment Tree (all counts will be $0$) pointed to by $root_0$, and add the elements $a[1]$, $a[2]$, $\dots$, $a[n]$ one after another.
For each modification we will receive a new root vertex, let's call $root_i$ the root of the Segment Tree after inserting the first $i$ elements of the array $a$.
The Segment Tree rooted at $root_i$ will contain the histogram of the prefix $a[1 \dots i]$.
Using this Segment Tree we can find in $O(\log n)$ time the position of the $k$-th element using the same technique discussed in [Counting the number of zeros, searching for the $k$-th zero](segment_tree.md#counting-zero-search-kth).
Now to the unrestricted version of the problem.
First, for the restriction on the queries:
instead of only performing these queries over a prefix of $a$, we want to be able to use any arbitrary segment $a[l \dots r]$.
Here we need a Segment Tree that represents the histogram of the elements in the range $a[l \dots r]$.
It is easy to see that such a Segment Tree is just the difference between the Segment Tree rooted at $root_{r}$ and the Segment Tree rooted at $root_{l-1}$, i.e. every vertex in the $[l \dots r]$ Segment Tree can be computed with the vertex of the $root_{r}$ tree minus the vertex of the $root_{l-1}$ tree.
In the implementation of the $\text{find_kth}$ function this can be handled by passing two vertex pointers and computing the count/sum of the current segment as the difference of the two counts/sums of the vertices.
Here are the modified $\text{build}$, $\text{update}$ and $\text{find_kth}$ functions:
```{.cpp file=kth_smallest_persistent_segment_tree}
Vertex* build(int tl, int tr) {
if (tl == tr)
return new Vertex(0);
int tm = (tl + tr) / 2;
return new Vertex(build(tl, tm), build(tm+1, tr));
}
Vertex* update(Vertex* v, int tl, int tr, int pos) {
if (tl == tr)
return new Vertex(v->sum+1);
int tm = (tl + tr) / 2;
if (pos <= tm)
return new Vertex(update(v->l, tl, tm, pos), v->r);
else
return new Vertex(v->l, update(v->r, tm+1, tr, pos));
}
int find_kth(Vertex* vl, Vertex *vr, int tl, int tr, int k) {
if (tl == tr)
return tl;
int tm = (tl + tr) / 2, left_count = vr->l->sum - vl->l->sum;
if (left_count >= k)
return find_kth(vl->l, vr->l, tl, tm, k);
return find_kth(vl->r, vr->r, tm+1, tr, k-left_count);
}
```
As already written above, we need to store the root of the initial Segment Tree, and also all the roots after each update.
Here is the code for building a persistent Segment Tree over a vector `a` with elements in the range `[0, MAX_VALUE]`.
```{.cpp file=kth_smallest_persistent_segment_tree_build}
int tl = 0, tr = MAX_VALUE + 1;
std::vector<Vertex*> roots;
roots.push_back(build(tl, tr));
for (int i = 0; i < a.size(); i++) {
roots.push_back(update(roots.back(), tl, tr, a[i]));
}
// find the 5th smallest number from the subarray [a[2], a[3], ..., a[19]]
int result = find_kth(roots[2], roots[20], tl, tr, 5);
```
Now to the restrictions on the array elements:
We can actually transform any array to such an array by index compression.
The smallest element in the array gets assigned the value 0, the second smallest the value 1, and so forth.
It is easy to generate lookup tables (e.g. using $\text{map}$) that convert a value to its index and vice versa in $O(\log n)$ time.
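A minimal sketch of such a compression (assuming the values are stored in a vector `a`); here a sorted vector is used instead of a $\text{map}$, which gives the same $O(\log n)$ lookups:
```cpp
vector<int> vals(a.begin(), a.end());
sort(vals.begin(), vals.end());
vals.erase(unique(vals.begin(), vals.end()), vals.end());

// value -> compressed index in O(log n); vals[idx] converts back in O(1)
auto compress = [&](int value) {
    return int(lower_bound(vals.begin(), vals.end(), value) - vals.begin());
};

for (int &x : a)
    x = compress(x);   // now 0 <= a[i] < vals.size()
```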
### Dynamic segment tree
(Called so because its shape is dynamic and the nodes are usually dynamically allocated.
Also known as _implicit segment tree_ or _sparse segment tree_.)
Previously, we considered cases when we have the ability to build the original segment tree. But what should we do if the array is conceptually filled with some default element, yet its size is too large to build the tree completely in advance?
We can solve this problem by creating the segment tree lazily (incrementally). Initially, we will create only the root, and we will create the other vertices only when we need them.
In this case, we will use the implementation with pointers (before going to a vertex's children, check whether they have been created, and if not, create them).
Each query still has complexity $O(\log n)$, which is small enough for most use-cases (e.g. $\log_2 10^9 \approx 30$).
In this implementation we have two queries, adding a value to a position (initially all values are $0$), and computing the sum of all values in a range.
`Vertex(0, n)` will be the root vertex of the implicit tree.
```cpp
struct Vertex {
int left, right;
int sum = 0;
Vertex *left_child = nullptr, *right_child = nullptr;
Vertex(int lb, int rb) {
left = lb;
right = rb;
}
void extend() {
if (!left_child && left + 1 < right) {
int t = (left + right) / 2;
left_child = new Vertex(left, t);
right_child = new Vertex(t, right);
}
}
void add(int k, int x) {
extend();
sum += x;
if (left_child) {
if (k < left_child->right)
left_child->add(k, x);
else
right_child->add(k, x);
}
}
int get_sum(int lq, int rq) {
if (lq <= left && right <= rq)
return sum;
if (max(left, lq) >= min(right, rq))
return 0;
extend();
return left_child->get_sum(lq, rq) + right_child->get_sum(lq, rq);
}
};
```
Obviously this idea can be extended in lots of different ways. E.g. by adding support for range updates via lazy propagation.
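For instance, here is a minimal sketch (all names are assumptions) of a dynamic Segment Tree over the half-open range $[0, n)$ that supports adding a value to a whole range and querying the sum of a range, combining the lazy-propagation idea from above with on-demand node creation:
```cpp
struct Node {
    long long left, right;            // this node covers the half-open segment [left, right)
    long long sum = 0, lazy = 0;      // lazy = pending addend for every position in the segment
    Node *left_child = nullptr, *right_child = nullptr;

    Node(long long lb, long long rb) : left(lb), right(rb) {}

    void extend() {                   // create the children only when they are needed
        if (!left_child && left + 1 < right) {
            long long mid = (left + right) / 2;
            left_child = new Node(left, mid);
            right_child = new Node(mid, right);
        }
    }
    void apply(long long x) {         // add x to every position of this segment
        sum += x * (right - left);
        lazy += x;
    }
    void push() {                     // propagate the pending addend to the children
        if (lazy && left_child) {
            left_child->apply(lazy);
            right_child->apply(lazy);
            lazy = 0;
        }
    }
    void add(long long lq, long long rq, long long x) {   // add x on [lq, rq)
        if (rq <= left || right <= lq)
            return;
        if (lq <= left && right <= rq) {
            apply(x);
            return;
        }
        extend();
        push();
        left_child->add(lq, rq, x);
        right_child->add(lq, rq, x);
        sum = left_child->sum + right_child->sum;
    }
    long long get_sum(long long lq, long long rq) {        // sum over [lq, rq)
        if (rq <= left || right <= lq)
            return 0;
        if (lq <= left && right <= rq)
            return sum;
        extend();
        push();
        return left_child->get_sum(lq, rq) + right_child->get_sum(lq, rq);
    }
};
```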
---
title
segment_tree
---
# Segment Tree
A Segment Tree is a data structure that stores information about array intervals as a tree. This allows answering range queries over an array efficiently, while still being flexible enough to allow quick modification of the array.
This includes finding the sum of consecutive array elements $a[l \dots r]$, or finding the minimum element in such a range in $O(\log n)$ time.
Between answering such queries, the Segment Tree allows modifying the array by replacing one element, or even changing the elements of a whole subsegment (e.g. assigning all elements $a[l \dots r]$ to any value, or adding a value to all elements in the subsegment).
In general, a Segment Tree is a very flexible data structure, and a huge number of problems can be solved with it.
Additionally, it is also possible to apply more complex operations and answer more complex queries (see [Advanced versions of Segment Trees](segment_tree.md#advanced-versions-of-segment-trees)).
In particular the Segment Tree can be easily generalized to larger dimensions.
For instance, with a two-dimensional Segment Tree you can answer sum or minimum queries over some subrectangle of a given matrix in only $O(\log^2 n)$ time.
One important property of Segment Trees is that they require only a linear amount of memory.
The standard Segment Tree requires $4n$ vertices for working on an array of size $n$.
## Simplest form of a Segment Tree
To start easy, we consider the simplest form of a Segment Tree.
We want to answer sum queries efficiently.
The formal definition of our task is:
Given an array $a[0 \dots n-1]$, the Segment Tree must be able to find the sum of elements between the indices $l$ and $r$ (i.e. computing the sum $\sum_{i=l}^r a[i]$), and also handle changing values of the elements in the array (i.e. perform assignments of the form $a[i] = x$).
The Segment Tree should be able to process **both** queries in $O(\log n)$ time.
This is an improvement over the simpler approaches.
A naive array implementation - just using a simple array - can update elements in $O(1)$, but requires $O(n)$ to compute each sum query.
And precomputed prefix sums can compute sum queries in $O(1)$, but updating an array element requires $O(n)$ changes to the prefix sums.
### Structure of the Segment Tree
We can take a divide-and-conquer approach when it comes to array segments.
We compute and store the sum of the elements of the whole array, i.e. the sum of the segment $a[0 \dots n-1]$.
We then split the array into two halves $a[0 \dots n/2-1]$ and $a[n/2 \dots n-1]$, compute the sum of each half, and store them.
Each of these two halves is in turn split in half, and so on, until all segments reach size $1$.
We can view these segments as forming a binary tree:
the root of this tree is the segment $a[0 \dots n-1]$, and each vertex (except leaf vertices) has exactly two child vertices.
This is why the data structure is called "Segment Tree", even though in most implementations the tree is not constructed explicitly (see [Implementation](segment_tree.md#implementation)).
Here is a visual representation of such a Segment Tree over the array $a = [1, 3, -2, 8, -7]$:

From this short description of the data structure, we can already conclude that a Segment Tree only requires a linear number of vertices.
The first level of the tree contains a single node (the root), the second level contains two vertices, the third contains four vertices, and so on, until the number of vertices reaches $n$.
Thus the number of vertices in the worst case can be estimated by the sum $1 + 2 + 4 + \dots + 2^{\lceil\log_2 n\rceil} \lt 2^{\lceil\log_2 n\rceil + 1} \lt 4n$.
It is worth noting that whenever $n$ is not a power of two, not all levels of the Segment Tree will be completely filled.
We can see that behavior in the image.
For now we can forget about this fact, but it will become important later during the implementation.
The height of the Segment Tree is $O(\log n)$, because when going down from the root to the leaves the size of the segments decreases approximately by half.
### Construction
Before constructing the segment tree, we need to decide:
1. the *value* that gets stored at each node of the segment tree.
For example, in a sum segment tree, a node would store the sum of the elements in its range $[l, r]$.
2. the *merge* operation that merges two siblings in a segment tree.
For example, in a sum segment tree, the two nodes corresponding to the ranges $a[l_1 \dots r_1]$ and $a[l_2 \dots r_2]$ would be merged into a node corresponding to the range $a[l_1 \dots r_2]$ by adding the values of the two nodes.
Note that a vertex is a "leaf vertex", if its corresponding segment covers only one value in the original array. It is present at the lowermost level of a segment tree. Its value would be equal to the (corresponding) element $a[i]$.
Now, for construction of the segment tree, we start at the bottom level (the leaf vertices) and assign them their respective values. On the basis of these values, we can compute the values of the previous level, using the `merge` function.
And on the basis of those, we can compute the values of the previous, and repeat the procedure until we reach the root vertex.
It is convenient to describe this operation recursively in the other direction, i.e., from the root vertex to the leaf vertices. The construction procedure, if called on a non-leaf vertex, does the following:
1. recursively construct the values of the two child vertices
2. merge the computed values of these children.
We start the construction at the root vertex, and hence, we are able to compute the entire segment tree.
The time complexity of this construction is $O(n)$, assuming that the merge operation is constant time (the merge operation gets called $n$ times, which is equal to the number of internal nodes in the segment tree).
### Sum queries
For now we are going to answer sum queries. As an input we receive two integers $l$ and $r$, and we have to compute the sum of the segment $a[l \dots r]$ in $O(\log n)$ time.
To do this, we will traverse the Segment Tree and use the precomputed sums of the segments.
Let's assume that we are currently at the vertex that covers the segment $a[tl \dots tr]$.
There are three possible cases.
The easiest case is when the segment $a[l \dots r]$ is equal to the corresponding segment of the current vertex (i.e. $a[l \dots r] = a[tl \dots tr]$), then we are finished and can return the precomputed sum that is stored in the vertex.
Alternatively the segment of the query can fall completely into the domain of either the left or the right child.
Recall that the left child covers the segment $a[tl \dots tm]$ and the right vertex covers the segment $a[tm + 1 \dots tr]$ with $tm = (tl + tr) / 2$.
In this case we can simply go to the child vertex whose corresponding segment covers the query segment, and execute the algorithm described here with that vertex.
And then there is the last case, the query segment intersects with both children.
In this case we have no other option than to make two recursive calls, one for each child.
First we go to the left child, compute a partial answer for this vertex (i.e. the sum of values of the intersection between the segment of the query and the segment of the left child), then go to the right child, compute the partial answer using that vertex, and then combine the answers by adding them.
In other words, since the left child represents the segment $a[tl \dots tm]$ and the right child the segment $a[tm+1 \dots tr]$, we compute the sum query $a[l \dots tm]$ using the left child, and the sum query $a[tm+1 \dots r]$ using the right child.
So processing a sum query is a function that recursively calls itself once with either the left or the right child (without changing the query boundaries), or twice, once for the left and once for the right child (by splitting the query into two subqueries).
And the recursion ends whenever the boundaries of the current query segment coincide with the boundaries of the segment of the current vertex.
In that case the answer will be the precomputed value of the sum of this segment, which is stored in the tree.
In other words, the calculation of the query is a traversal of the tree, which spreads through all necessary branches of the tree, and uses the precomputed sum values of the segments in the tree.
Obviously we will start the traversal from the root vertex of the Segment Tree.
The procedure is illustrated in the following image.
Again the array $a = [1, 3, -2, 8, -7]$ is used, and here we want to compute the sum $\sum_{i=2}^4 a[i]$.
The colored vertices will be visited, and we will use the precomputed values of the green vertices.
This gives us the result $-2 + 1 = -1$.

Why is the complexity of this algorithm $O(\log n)$?
To show this complexity we look at each level of the tree.
It turns out that for each level we visit no more than four vertices.
And since the height of the tree is $O(\log n)$, we receive the desired running time.
We can show that this proposition (at most four vertices each level) is true by induction.
At the first level, we only visit one vertex, the root vertex, so here we visit less than four vertices.
Now let's look at an arbitrary level.
By induction hypothesis, we visit at most four vertices.
If we only visit at most two vertices, the next level has at most four vertices. That is trivial, because each vertex can only cause at most two recursive calls.
So let's assume that we visit three or four vertices in the current level.
From those vertices, we will analyze the vertices in the middle more carefully.
Since the sum query asks for the sum of a continuous subarray, we know that segments corresponding to the visited vertices in the middle will be completely covered by the segment of the sum query.
Therefore these vertices will not make any recursive calls.
So only the leftmost and the rightmost vertex have the potential to make recursive calls.
And those will create at most four recursive calls in total, so the next level will also satisfy the assertion.
We can say that one branch approaches the left boundary of the query, and the second branch approaches the right one.
Therefore we visit at most $4 \log n$ vertices in total, and that is equal to a running time of $O(\log n)$.
In conclusion the query works by dividing the input segment into several sub-segments for which all the sums are already precomputed and stored in the tree.
And if we stop partitioning whenever the query segment coincides with the vertex segment, then we only need $O(\log n)$ such segments, which gives the effectiveness of the Segment Tree.
### Update queries
Now we want to modify a specific element in the array, let's say we want to do the assignment $a[i] = x$.
And we have to rebuild the Segment Tree, such that it corresponds to the new, modified array.
This query is easier than the sum query.
Each level of a Segment Tree forms a partition of the array.
Therefore an element $a[i]$ only contributes to one segment from each level.
Thus only $O(\log n)$ vertices need to be updated.
It is easy to see, that the update request can be implemented using a recursive function.
The function gets passed the current tree vertex, and it recursively calls itself with one of the two child vertices (the one that contains $a[i]$ in its segment), and after that recomputes its sum value, similar to how it is done in the build method (that is, as the sum of its two children).
Again here is a visualization using the same array.
Here we perform the update $a[2] = 3$.
The green vertices are the vertices that we visit and update.

### Implementation ### { #implementation}
The main consideration is how to store the Segment Tree.
Of course we can define a $\text{Vertex}$ struct and create objects, that store the boundaries of the segment, its sum and additionally also pointers to its child vertices.
However, this requires storing a lot of redundant information in the form of pointers.
We will use a simple trick to make this a lot more efficient by using an _implicit data structure_: Only storing the sums in an array.
(A similar method is used for binary heaps).
The sum of the root vertex is stored at index 1, the sums of its two child vertices at indices 2 and 3, the sums of the children of those two vertices at indices 4 to 7, and so on.
With 1-indexing, conveniently the left child of a vertex at index $i$ is stored at index $2i$, and the right one at index $2i + 1$.
Equivalently, the parent of a vertex at index $i$ is stored at $i/2$ (integer division).
This simplifies the implementation a lot.
We don't need to store the structure of the tree in memory.
It is defined implicitly.
We only need one array which contains the sums of all segments.
As noted before, we need to store at most $4n$ vertices.
It might be less, but for convenience we always allocate an array of size $4n$.
There will be some elements in the sum array, that will not correspond to any vertices in the actual tree, but this doesn't complicate the implementation.
So, we store the Segment Tree simply as an array $t[]$ with a size of four times the input size $n$:
```{.cpp file=segment_tree_implementation_definition}
int n, t[4*MAXN];
```
The procedure for constructing the Segment Tree from a given array $a[]$ looks like this:
it is a recursive function with the parameters $a[]$ (the input array), $v$ (the index of the current vertex), and the boundaries $tl$ and $tr$ of the current segment.
In the main program this function will be called with the parameters of the root vertex: $v = 1$, $tl = 0$, and $tr = n - 1$.
```{.cpp file=segment_tree_implementation_build}
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = a[tl];
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = t[v*2] + t[v*2+1];
}
}
```
Further the function for answering sum queries is also a recursive function, which receives as parameters information about the current vertex/segment (i.e. the index $v$ and the boundaries $tl$ and $tr$) and also the information about the boundaries of the query, $l$ and $r$.
In order to simplify the code, this function always does two recursive calls, even if only one is necessary - in that case the superfluous recursive call will have $l > r$, and this can easily be caught using an additional check at the beginning of the function.
```{.cpp file=segment_tree_implementation_sum}
int sum(int v, int tl, int tr, int l, int r) {
if (l > r)
return 0;
if (l == tl && r == tr) {
return t[v];
}
int tm = (tl + tr) / 2;
return sum(v*2, tl, tm, l, min(r, tm))
+ sum(v*2+1, tm+1, tr, max(l, tm+1), r);
}
```
Finally the update query. The function will also receive information about the current vertex/segment, and additionally also the parameter of the update query (i.e. the position of the element and its new value).
```{.cpp file=segment_tree_implementation_update}
void update(int v, int tl, int tr, int pos, int new_val) {
if (tl == tr) {
t[v] = new_val;
} else {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
t[v] = t[v*2] + t[v*2+1];
}
}
```
### Memory efficient implementation
Most people use the implementation from the previous section. If you look at the array `t` you can see that it follows the numbering of the tree nodes in the order of a BFS traversal (level-order traversal).
Using this traversal the children of vertex $v$ are $2v$ and $2v + 1$ respectively.
However if $n$ is not a power of two, this method will skip some indices and leave some parts of the array `t` unused.
The memory consumption is limited by $4n$, even though a Segment Tree of an array of $n$ elements requires only $2n - 1$ vertices.
However it can be reduced.
We renumber the vertices of the tree in the order of an Euler tour traversal (pre-order traversal), and we write all these vertices next to each other.
Let's look at a vertex at index $v$, let it be responsible for the segment $[l, r]$, and let $mid = \dfrac{l + r}{2}$.
It is obvious that the left child will have the index $v + 1$.
The left child is responsible for the segment $[l, mid]$, i.e. in total there will be $2 * (mid - l + 1) - 1$ vertices in the left child's subtree.
Thus we can compute the index of the right child of $v$. The index will be $v + 2 * (mid - l + 1)$.
By this numbering we achieve a reduction of the necessary memory to $2n$.
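A minimal sketch of this numbering for the sum Segment Tree (assumed names; the root is vertex $0$ and covers $[0, n-1]$, so an array of size $2n$ suffices):
```cpp
int n, t[2 * MAXN];

void build(int a[], int v, int l, int r) {
    if (l == r) {
        t[v] = a[l];
    } else {
        int mid = (l + r) / 2;
        build(a, v + 1, l, mid);                       // left child at v + 1
        build(a, v + 2 * (mid - l + 1), mid + 1, r);   // right child at v + 2 * (mid - l + 1)
        t[v] = t[v + 1] + t[v + 2 * (mid - l + 1)];
    }
}

int sum(int v, int l, int r, int ql, int qr) {
    if (ql > qr)
        return 0;
    if (ql == l && qr == r)
        return t[v];
    int mid = (l + r) / 2;
    return sum(v + 1, l, mid, ql, min(qr, mid))
         + sum(v + 2 * (mid - l + 1), mid + 1, r, max(ql, mid + 1), qr);
}
```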
## <a name="advanced-versions-of-segment-trees"></a>Advanced versions of Segment Trees
A Segment Tree is a very flexible data structure, and allows variations and extensions in many different directions.
Let's try to categorize them below.
### More complex queries
It can be quite easy to change the Segment Tree in a direction, such that it computes different queries (e.g. computing the minimum / maximum instead of the sum), but it also can be very nontrivial.
#### Finding the maximum
Let us slightly change the condition of the problem described above: instead of querying the sum, we will now make maximum queries.
The tree will have exactly the same structure as the tree described above.
We only need to change the way $t[v]$ is computed in the $\text{build}$ and $\text{update}$ functions.
$t[v]$ will now store the maximum of the corresponding segment.
And we also need to change the calculation of the returned value of the $\text{sum}$ function (replacing the summation by the maximum).
Of course this problem can be easily changed into computing the minimum instead of the maximum.
Instead of showing an implementation for this problem, the implementation will be given for a more complex version of this problem in the next section.
#### Finding the maximum and the number of times it appears
This task is very similar to the previous one.
In addition to finding the maximum, we also have to find the number of occurrences of the maximum.
To solve this problem, we store a pair of numbers at each vertex in the tree:
In addition to the maximum we also store the number of occurrences of it in the corresponding segment.
Determining the correct pair to store at $t[v]$ can still be done in constant time using the information of the pairs stored at the child vertices.
Combining two such pairs should be done in a separate function, since this will be an operation that we will do while building the tree, while answering maximum queries and while performing modifications.
```{.cpp file=segment_tree_maximum_and_count}
pair<int, int> t[4*MAXN];
pair<int, int> combine(pair<int, int> a, pair<int, int> b) {
if (a.first > b.first)
return a;
if (b.first > a.first)
return b;
return make_pair(a.first, a.second + b.second);
}
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = make_pair(a[tl], 1);
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
pair<int, int> get_max(int v, int tl, int tr, int l, int r) {
if (l > r)
return make_pair(-INF, 0);
if (l == tl && r == tr)
return t[v];
int tm = (tl + tr) / 2;
return combine(get_max(v*2, tl, tm, l, min(r, tm)),
get_max(v*2+1, tm+1, tr, max(l, tm+1), r));
}
void update(int v, int tl, int tr, int pos, int new_val) {
if (tl == tr) {
t[v] = make_pair(new_val, 1);
} else {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
```
#### Compute the greatest common divisor / least common multiple
In this problem we want to compute the GCD / LCM of all numbers of given ranges of the array.
This interesting variation of the Segment Tree can be solved in exactly the same way as the Segment Trees we derived for sum / minimum / maximum queries:
it is enough to store the GCD / LCM of the corresponding segment in each vertex of the tree.
Combining two vertices can be done by computing the GCD / LCM of both vertices.
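A minimal sketch for range GCD queries (assumed names; $\text{gcd}$ is $\text{std::gcd}$ from `<numeric>`, C++17):
```cpp
int t[4 * MAXN];

void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = a[tl];
    } else {
        int tm = (tl + tr) / 2;
        build(a, v*2, tl, tm);
        build(a, v*2+1, tm+1, tr);
        t[v] = gcd(t[v*2], t[v*2+1]);
    }
}

int query_gcd(int v, int tl, int tr, int l, int r) {
    if (l > r)
        return 0;                    // gcd(0, x) = x, so 0 acts as the neutral element
    if (l == tl && r == tr)
        return t[v];
    int tm = (tl + tr) / 2;
    return gcd(query_gcd(v*2, tl, tm, l, min(r, tm)),
               query_gcd(v*2+1, tm+1, tr, max(l, tm+1), r));
}
```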
#### Counting the number of zeros, searching for the $k$-th zero { #counting-zero-search-kth data-toc-label="Counting the number of zeros, searching for the k-th zero"}
In this problem we want to find the number of zeros in a given range, and additionally find the index of the $k$-th zero using a second function.
Again we have to change the stored values of the tree a bit:
This time we will store the number of zeros in each segment in $t[]$.
It is pretty clear how to implement the $\text{build}$, $\text{update}$ and $\text{count_zero}$ functions: we can simply use the ideas from the sum query problem.
Thus we solved the first part of the problem.
Now we learn how to solve the problem of finding the $k$-th zero in the array $a[]$.
To do this task, we will descend the Segment Tree, starting at the root vertex, and moving each time to either the left or the right child, depending on which segment contains the $k$-th zero.
In order to decide to which child we need to go, it is enough to look at the number of zeros appearing in the segment corresponding to the left vertex.
If this precomputed count is greater than or equal to $k$, we descend into the left child, and otherwise we descend into the right child.
Notice that if we choose the right child, we have to subtract the number of zeros of the left child from $k$.
In the implementation we can handle the special case of $a[]$ containing fewer than $k$ zeros by returning $-1$.
```{.cpp file=segment_tree_kth_zero}
int find_kth(int v, int tl, int tr, int k) {
if (k > t[v])
return -1;
if (tl == tr)
return tl;
int tm = (tl + tr) / 2;
if (t[v*2] >= k)
return find_kth(v*2, tl, tm, k);
else
return find_kth(v*2+1, tm+1, tr, k - t[v*2]);
}
```
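For completeness, a minimal sketch of the corresponding $\text{build}$ function (the $\text{count_zero}$ query is literally the sum query from above over these values):
```cpp
void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = (a[tl] == 0);   // a leaf stores 1 if the element is zero, else 0
    } else {
        int tm = (tl + tr) / 2;
        build(a, v*2, tl, tm);
        build(a, v*2+1, tm+1, tr);
        t[v] = t[v*2] + t[v*2+1];
    }
}
```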
#### Searching for an array prefix with a given amount
The task is as follows:
for a given value $x$ we have to quickly find the smallest index $i$ such that the sum of the first $i$ elements of the array $a[]$ is greater than or equal to $x$ (assuming that the array $a[]$ only contains non-negative values).
This task can be solved using binary search, computing the sum of the prefixes with the Segment Tree.
However this will lead to a $O(\log^2 n)$ solution.
Instead we can use the same idea as in the previous section, and find the position by descending the tree:
by moving each time to the left or the right, depending on the sum of the left child.
Thus finding the answer in $O(\log n)$ time.
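A minimal sketch of this descent over the sum Segment Tree from above (assumed names; it returns the index of the last element of the found prefix, or $-1$ if the total sum is smaller than $x$):
```cpp
int find_prefix(int v, int tl, int tr, int x) {
    if (t[v] < x)
        return -1;                 // the whole array sums to less than x
    while (tl != tr) {
        int tm = (tl + tr) / 2;
        if (t[v*2] >= x) {         // the prefix already ends inside the left child
            v = v*2;
            tr = tm;
        } else {                   // take the whole left part and continue in the right child
            x -= t[v*2];
            v = v*2+1;
            tl = tm+1;
        }
    }
    return tl;
}
```
It is called as `find_prefix(1, 0, n-1, x)`.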
#### Searching for the first element greater than a given amount
The task is as follows:
for a given value $x$ and a range $a[l \dots r]$ find the smallest $i$ in the range $a[l \dots r]$, such that $a[i]$ is greater than $x$.
This task can be solved using binary search over max prefix queries with the Segment Tree.
However, this will lead to a $O(\log^2 n)$ solution.
Instead, we can use the same idea as in the previous sections, and find the position by descending the tree:
by moving each time to the left or the right, depending on the maximum value of the left child.
Thus finding the answer in $O(\log n)$ time.
```{.cpp file=segment_tree_first_greater}
int get_first(int v, int lv, int rv, int l, int r, int x) {
if(lv > r || rv < l) return -1;
if(l <= lv && rv <= r) {
if(t[v] <= x) return -1;
while(lv != rv) {
int mid = lv + (rv-lv)/2;
if(t[2*v] > x) {
v = 2*v;
rv = mid;
}else {
v = 2*v+1;
lv = mid+1;
}
}
return lv;
}
int mid = lv + (rv-lv)/2;
int rs = get_first(2*v, lv, mid, l, r, x);
if(rs != -1) return rs;
return get_first(2*v+1, mid+1, rv, l ,r, x);
}
```
#### Finding subsegments with the maximal sum
Here again we receive a range $a[l \dots r]$ for each query, this time we have to find a subsegment $a[l^\prime \dots r^\prime]$ such that $l \le l^\prime$ and $r^\prime \le r$ and the sum of the elements of this segment is maximal.
As before we also want to be able to modify individual elements of the array.
The elements of the array can be negative, and the optimal subsegment can be empty (e.g. if all elements are negative).
This problem is a non-trivial usage of a Segment Tree.
This time we will store four values for each vertex:
the sum of the segment, the maximum prefix sum, the maximum suffix sum, and the sum of the maximal subsegment in it.
In other words for each segment of the Segment Tree the answer is already precomputed as well as the answers for segments touching the left and the right boundaries of the segment.
How to build a tree with such data?
Again we compute it in a recursive fashion:
we first compute all four values for the left and the right child, and then combine those to obtain the four values for the current vertex.
Note that the answer for the current vertex is either:
* the answer of the left child, which means that the optimal subsegment is entirely placed in the segment of the left child
* the answer of the right child, which means that the optimal subsegment is entirely placed in the segment of the right child
* the sum of the maximum suffix sum of the left child and the maximum prefix sum of the right child, which means that the optimal subsegment intersects with both children.
Hence the answer to the current vertex is the maximum of these three values.
Computing the maximum prefix / suffix sum is even easier.
Here is the implementation of the $\text{combine}$ function, which receives only data from the left and right child, and returns the data of the current vertex.
```{.cpp file=segment_tree_maximal_sum_subsegments1}
struct data {
int sum, pref, suff, ans;
};
data combine(data l, data r) {
data res;
res.sum = l.sum + r.sum;
res.pref = max(l.pref, l.sum + r.pref);
res.suff = max(r.suff, r.sum + l.suff);
res.ans = max(max(l.ans, r.ans), l.suff + r.pref);
return res;
}
```
Using the $\text{combine}$ function it is easy to build the Segment Tree.
We can implement it in exactly the same way as in the previous implementations.
To initialize the leaf vertices, we additionally create the auxiliary function $\text{make_data}$, which will return a $\text{data}$ object holding the information of a single value.
```{.cpp file=segment_tree_maximal_sum_subsegments2}
data make_data(int val) {
data res;
res.sum = val;
res.pref = res.suff = res.ans = max(0, val);
return res;
}
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = make_data(a[tl]);
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
void update(int v, int tl, int tr, int pos, int new_val) {
if (tl == tr) {
t[v] = make_data(new_val);
} else {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
t[v] = combine(t[v*2], t[v*2+1]);
}
}
```
It only remains to show how to compute the answer to a query.
To answer it, we go down the tree as before, breaking the query into several subsegments that coincide with the segments of the Segment Tree, and combine the answers in them into a single answer for the query.
Then it should be clear, that the work is exactly the same as in the simple Segment Tree, but instead of summing / minimizing / maximizing the values, we use the $\text{combine}$ function.
```{.cpp file=segment_tree_maximal_sum_subsegments3}
data query(int v, int tl, int tr, int l, int r) {
if (l > r)
return make_data(0);
if (l == tl && r == tr)
return t[v];
int tm = (tl + tr) / 2;
return combine(query(v*2, tl, tm, l, min(r, tm)),
query(v*2+1, tm+1, tr, max(l, tm+1), r));
}
```
### <a name="saving-the-entire-subarrays-in-each-vertex"></a>Saving the entire subarrays in each vertex
This is a separate subsection that stands apart from the others, because at each vertex of the Segment Tree we don't store information about the corresponding segment in compressed form (sum, minimum, maximum, ...), but store all elements of the segment.
Thus the root of the Segment Tree will store all elements of the array, the left child vertex will store the first half of the array, the right vertex the second half, and so on.
In its simplest application of this technique we store the elements in sorted order.
In more complex versions the elements are not stored in lists, but in more advanced data structures (sets, maps, ...).
But all these methods have the common factor, that each vertex requires linear memory (i.e. proportional to the length of the corresponding segment).
The first natural question, when considering these Segment Trees, is about memory consumption.
Intuitively this might look like $O(n^2)$ memory, but it turns out that the complete tree will only need $O(n \log n)$ memory.
Why is this so?
Quite simply, because each element of the array falls into $O(\log n)$ segments (remember the height of the tree is $O(\log n)$).
So in spite of the apparent extravagance of such a Segment Tree, it consumes only slightly more memory than the usual Segment Tree.
Several typical applications of this data structure are described below.
It is worth noting the similarity of these Segment Trees with 2D data structures (in fact this is a 2D data structure, but with rather limited capabilities).
#### Find the smallest number greater or equal to a specified number. No modification queries.
We want to answer queries of the following form:
for three given numbers $(l, r, x)$ we have to find the minimal number in the segment $a[l \dots r]$ which is greater than or equal to $x$.
We construct a Segment Tree.
In each vertex we store a sorted list of all numbers occurring in the corresponding segment, like described above.
How to build such a Segment Tree as effectively as possible?
As always we approach this problem recursively: let the lists of the left and right children already be constructed, and we want to build the list for the current vertex.
From this view the operation is now trivial and can be accomplished in linear time:
We only need to combine the two sorted lists into one, which can be done by iterating over them using two pointers.
The C++ STL already has an implementation of this algorithm.
Because of this structure of the Segment Tree and its similarity to the merge sort algorithm, the data structure is also often called a "Merge Sort Tree".
```{.cpp file=segment_tree_smallest_number_greater1}
vector<int> t[4*MAXN];
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = vector<int>(1, a[tl]);
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
merge(t[v*2].begin(), t[v*2].end(), t[v*2+1].begin(), t[v*2+1].end(),
back_inserter(t[v]));
}
}
```
We already know that the Segment Tree constructed in this way will require $O(n \log n)$ memory.
And thanks to this implementation its construction also takes $O(n \log n)$ time; after all, each list is constructed in linear time with respect to its size.
Now consider the answer to the query.
We will go down the tree, like in the regular Segment Tree, breaking our segment $a[l \dots r]$ into several subsegments (into at most $O(\log n)$ pieces).
It is clear that the answer to the whole query is the minimum of the answers of these subqueries.
So now we only need to understand, how to respond to a query on one such subsegment that corresponds with some vertex of the tree.
We are at some vertex of the Segment Tree and we want to compute the answer to the query, i.e. find the minimum number greater than or equal to a given number $x$.
Since the vertex contains the list of elements in sorted order, we can simply perform a binary search on this list and return the first number, greater than or equal to $x$.
Thus the answer to the query in one segment of the tree takes $O(\log n)$ time, and the entire query is processed in $O(\log^2 n)$.
```{.cpp file=segment_tree_smallest_number_greater2}
int query(int v, int tl, int tr, int l, int r, int x) {
if (l > r)
return INF;
if (l == tl && r == tr) {
vector<int>::iterator pos = lower_bound(t[v].begin(), t[v].end(), x);
if (pos != t[v].end())
return *pos;
return INF;
}
int tm = (tl + tr) / 2;
return min(query(v*2, tl, tm, l, min(r, tm), x),
query(v*2+1, tm+1, tr, max(l, tm+1), r, x));
}
```
The constant $\text{INF}$ is equal to some large number that is bigger than all numbers in the array.
Returning it means that there is no number greater than or equal to $x$ in the segment, i.e. that "there is no answer in the given interval".
#### Find the smallest number greater or equal to a specified number. With modification queries.
This task is similar to the previous.
The last approach has a disadvantage: it was not possible to modify the array between answering queries.
Now we want to do exactly this: a modification query will do the assignment $a[i] = y$.
The solution is similar to the solution of the previous problem, but instead of sorted lists at each vertex of the Segment Tree, we will store a balanced search structure that allows us to quickly search for numbers, delete numbers, and insert new numbers.
Since the array can contain repeated numbers, the natural choice is the data structure $\text{multiset}$.
The construction of such a Segment Tree is done in pretty much the same way as in the previous problem, only now we need to combine $\text{multiset}$s and not sorted lists.
This leads to a construction time of $O(n \log^2 n)$ (in general merging two red-black trees can be done in linear time, but the C++ STL doesn't guarantee this time complexity).
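A minimal sketch of this construction (assumed names):
```cpp
multiset<int> t[4*MAXN];

void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v].insert(a[tl]);
    } else {
        int tm = (tl + tr) / 2;
        build(a, v*2, tl, tm);
        build(a, v*2+1, tm+1, tr);
        // the multiset of a vertex is the union of the multisets of its children
        t[v].insert(t[v*2].begin(), t[v*2].end());
        t[v].insert(t[v*2+1].begin(), t[v*2+1].end());
    }
}
```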
The $\text{query}$ function is also almost equivalent, only now the $\text{lower_bound}$ member function of the $\text{multiset}$ should be called instead ($\text{std::lower_bound}$ only works in $O(\log n)$ time if used with random-access iterators).
Finally the modification request.
To process it, we must go down the tree, and modify all $\text{multiset}$s of the corresponding segments that contain the affected element.
We simply delete the old value of this element (but only one occurrence), and insert the new value.
```cpp
void update(int v, int tl, int tr, int pos, int new_val) {
t[v].erase(t[v].find(a[pos]));
t[v].insert(new_val);
if (tl != tr) {
int tm = (tl + tr) / 2;
if (pos <= tm)
update(v*2, tl, tm, pos, new_val);
else
update(v*2+1, tm+1, tr, pos, new_val);
} else {
a[pos] = new_val;
}
}
```
Processing of this modification query also takes $O(\log^2 n)$ time.
#### Find the smallest number greater or equal to a specified number. Acceleration with "fractional cascading".
We have the same problem statement, we want to find the minimal number greater than or equal to $x$ in a segment, but this time in $O(\log n)$ time.
We will improve the time complexity using the technique "fractional cascading".
Fractional cascading is a simple technique that allows you to improve the running time of multiple binary searches, which are conducted at the same time.
Our previous approach to the search query was, that we divide the task into several subtasks, each of which is solved with a binary search.
Fractional cascading allows you to replace all of these binary searches with a single one.
The simplest and most obvious example of fractional cascading is the following problem:
there are $k$ sorted lists of numbers, and we must find in each list the first number greater than or equal to the given number.
Instead of performing a binary search for each list, we could merge all lists into one big sorted list.
Additionally for each element $y$ we store a list of results of searching for $y$ in each of the $k$ lists.
Therefore if we want to find the smallest number greater than or equal to $x$, we just need to perform one single binary search, and from the list of indices we can determine the smallest number in each list.
This approach however requires $O(n \cdot k)$ memory ($n$ is the length of the combined lists), which can be quite inefficient.
Fractional cascading reduces this memory complexity to $O(n)$ memory, by creating from the $k$ input lists $k$ new lists, in which each list contains the corresponding list and additionally also every second element of the following new list.
Using this structure it is only necessary to store two indices, the index of the element in the original list, and the index of the element in the following new list.
So this approach only uses $O(n)$ memory, and still can answer the queries using a single binary search.
But for our application we do not need the full power of fractional cascading.
In our Segment Tree a vertex will contain the sorted list of all elements that occur in either the left or the right subtrees (like in the Merge Sort Tree).
Additionally to this sorted list, we store two positions for each element.
For an element $y$ we store the smallest index $i$, such that the $i$th element in the sorted list of the left child is greater or equal to $y$.
And we store the smallest index $j$, such that the $j$th element in the sorted list of the right child is greater or equal to $y$.
These values can be computed in parallel to the merging step when we build the tree.
How does this speed up the queries?
Remember, in the normal solution we did a binary search in every node.
But with this modification, we can avoid all except one.
To answer a query, we simply do a binary search in the root node.
This gives us the smallest element $y \ge x$ in the complete array, but it also gives us two positions:
the index of the smallest element $\ge y$ in the left child, and the index of the smallest element $\ge y$ in the right child. Notice that $\ge y$ is the same as $\ge x$, since our array doesn't contain any elements between $x$ and $y$.
In the normal Merge Sort Tree solution we would compute these indices via binary search, but with the help of the precomputed values we can just look them up in $O(1)$.
And we can repeat that until we have visited all nodes that cover our query interval.
To summarize, as usual we touch $O(\log n)$ nodes during a query. In the root node we do a binary search, and in all other nodes we only do constant work.
This means the complexity for answering a query is $O(\log n)$.
But notice, that this uses three times more memory than a normal Merge Sort Tree, which already uses a lot of memory ($O(n \log n)$).
It is straightforward to apply this technique to a problem, that doesn't require any modification queries.
The two positions are just integers and can easily be computed by counting when merging the two sorted sequences.
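A minimal sketch of this variant without modification queries (all names, including $\text{MAXN}$ and $\text{INF}$, are assumptions). During the merge we remember, for every position of a vertex's list, how far the merge had advanced in each child; if we know the position of the first element $\ge x$ in the parent's list, these stored values are exactly the positions of the first elements $\ge x$ in the children's lists.
```cpp
vector<int> t[4*MAXN];
vector<int> lpos[4*MAXN], rpos[4*MAXN];  // for every position in t[v]: where to continue in the children

void build(int a[], int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = {a[tl]};
        return;
    }
    int tm = (tl + tr) / 2;
    build(a, v*2, tl, tm);
    build(a, v*2+1, tm+1, tr);
    vector<int> &L = t[v*2], &R = t[v*2+1];
    int i = 0, j = 0, nl = L.size(), nr = R.size();
    while (i < nl || j < nr) {
        lpos[v].push_back(i);              // first element of the left child not merged yet
        rpos[v].push_back(j);              // first element of the right child not merged yet
        if (j == nr || (i < nl && L[i] <= R[j]))
            t[v].push_back(L[i++]);
        else
            t[v].push_back(R[j++]);
    }
    lpos[v].push_back(nl);                 // sentinel entries for "no element >= x"
    rpos[v].push_back(nr);
}

// pos = index of the first element >= x in t[v] (t[v].size() if there is none)
int query(int v, int tl, int tr, int l, int r, int pos) {
    if (l > r)
        return INF;
    if (l == tl && r == tr)
        return pos < (int)t[v].size() ? t[v][pos] : INF;
    int tm = (tl + tr) / 2;
    return min(query(v*2, tl, tm, l, min(r, tm), lpos[v][pos]),
               query(v*2+1, tm+1, tr, max(l, tm+1), r, rpos[v][pos]));
}
```
The single binary search happens once at the root: `query(1, 0, n-1, l, r, lower_bound(t[1].begin(), t[1].end(), x) - t[1].begin())`.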
It is still possible to also allow modification queries, but that complicates the entire code.
Instead of integers, you need to store the sorted array as `multiset`, and instead of indices you need to store iterators.
And you need to work very carefully, so that you increment or decrement the correct iterators during a modification query.
#### Other possible variations
This technique implies a whole new class of possible applications.
Instead of storing a $\text{vector}$ or a $\text{multiset}$ in each vertex, other data structures can be used:
other Segment Trees (somewhat discussed in [Generalization to higher dimensions](segment_tree.md#generalization-to-higher-dimensions)), Fenwick Trees, Cartesian trees, etc.
### Range updates (Lazy Propagation)
All problems in the above sections discussed modification queries that only affected a single element of the array each.
However the Segment Tree allows applying modification queries to an entire segment of contiguous elements, and to perform such a query in the same $O(\log n)$ time.
#### Addition on segments
We begin by considering problems of the simplest form: the modification query should add a number $x$ to all numbers in the segment $a[l \dots r]$.
The second query that we are supposed to answer asks simply for the value of $a[i]$.
To make the addition query efficient, we store at each vertex in the Segment Tree how much we should add to all numbers in the corresponding segment.
For example, if the query "add 3 to the whole array $a[0 \dots n-1]$" comes, then we place the number 3 in the root of the tree.
In general we have to place this number to multiple segments, which form a partition of the query segment.
Thus we don't have to change all $O(n)$ values, but only $O(\log n)$ many.
If now there comes a query that asks the current value of a particular array entry, it is enough to go down the tree and add up all values found along the way.
```cpp
void build(int a[], int v, int tl, int tr) {
if (tl == tr) {
t[v] = a[tl];
} else {
int tm = (tl + tr) / 2;
build(a, v*2, tl, tm);
build(a, v*2+1, tm+1, tr);
t[v] = 0;
}
}
void update(int v, int tl, int tr, int l, int r, int add) {
if (l > r)
return;
if (l == tl && r == tr) {
t[v] += add;
} else {
int tm = (tl + tr) / 2;
update(v*2, tl, tm, l, min(r, tm), add);
update(v*2+1, tm+1, tr, max(l, tm+1), r, add);
}
}
int get(int v, int tl, int tr, int pos) {
if (tl == tr)
return t[v];
int tm = (tl + tr) / 2;
if (pos <= tm)
return t[v] + get(v*2, tl, tm, pos);
else
return t[v] + get(v*2+1, tm+1, tr, pos);
}
```
#### Assignment on segments
Suppose now that the modification query asks to assign each element of a certain segment $a[l \dots r]$ to some value $p$.
As a second query we will again consider reading the value of the array $a[i]$.
To perform this modification query on a whole segment, you have to store at each vertex of the Segment Tree whether the corresponding segment is covered entirely with the same value or not.
This allows us to make a "lazy" update:
instead of changing all segments in the tree that cover the query segment, we only change some, and leave others unchanged.
A marked vertex will mean, that every element of the corresponding segment is assigned to that value, and actually also the complete subtree should only contain this value.
In a sense we are lazy and delay writing the new value to all those vertices.
We can do this tedious task later, if this is necessary.
So after the modification query is executed, some parts of the tree become irrelevant - some modifications remain unfulfilled in it.
For example if a modification query "assign a number to the whole array $a[0 \dots n-1]$" gets executed, in the Segment Tree only a single change is made - the number is placed in the root of the tree and this vertex gets marked.
The remaining segments remain unchanged, although in fact the number should be placed in the whole tree.
Suppose now that the second modification query says, that the first half of the array $a[0 \dots n/2]$ should be assigned with some other number.
To process this query we must assign each element in the whole left child of the root vertex with that number.
But before we do this, we must first sort out the root vertex.
The subtlety here is that the right half of the array should still be assigned to the value of the first query, and at the moment there is no information for the right half stored.
The way to solve this is to push the information of the root to its children, i.e. if the root of the tree was assigned with any number, then we assign the left and the right child vertices with this number and remove the mark of the root.
After that, we can assign the left child the new value, without losing any necessary information.
Summarizing we get:
for every query (a modification or reading query), during the descent along the tree we should always push information from the current vertex into both of its children.
We can understand this in such a way, that when we descend the tree, we apply delayed modifications, but only as much as necessary (so as not to degrade the complexity of $O(\log n)$).
For the implementation we need to make a $\text{push}$ function, which will receive the current vertex, and push the information from that vertex to both of its children.
We will call this function at the beginning of the query functions (but we will not call it from the leaves, because there is no need to push information from them any further).
```cpp
void push(int v) {
if (marked[v]) {
t[v*2] = t[v*2+1] = t[v];
marked[v*2] = marked[v*2+1] = true;
marked[v] = false;
}
}
void update(int v, int tl, int tr, int l, int r, int new_val) {
if (l > r)
return;
if (l == tl && tr == r) {
t[v] = new_val;
marked[v] = true;
} else {
push(v);
int tm = (tl + tr) / 2;
update(v*2, tl, tm, l, min(r, tm), new_val);
update(v*2+1, tm+1, tr, max(l, tm+1), r, new_val);
}
}
int get(int v, int tl, int tr, int pos) {
if (tl == tr) {
return t[v];
}
push(v);
int tm = (tl + tr) / 2;
if (pos <= tm)
return get(v*2, tl, tm, pos);
else
return get(v*2+1, tm+1, tr, pos);
}
```
Notice: the function $\text{get}$ can also be implemented in a different way:
do not make delayed updates, but immediately return the value $t[v]$ if $marked[v]$ is true.
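A possible sketch of this alternative $\text{get}$ (note that an unmarked leaf still has to return its own value $t[v]$):

```cpp
int get(int v, int tl, int tr, int pos) {
    // if the vertex is marked, every element of its segment equals t[v];
    // if it is a leaf, t[v] is the element itself
    if (marked[v] || tl == tr)
        return t[v];
    int tm = (tl + tr) / 2;
    if (pos <= tm)
        return get(v*2, tl, tm, pos);
    else
        return get(v*2+1, tm+1, tr, pos);
}
```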
#### Adding on segments, querying for maximum
Now the modification query is to add a number to all elements in a range, and the reading query is to find the maximum in a range.
So for each vertex of the Segment Tree we have to store the maximum of the corresponding subsegment.
The interesting part is how to recompute these values during a modification request.
For this purpose we store an additional value for each vertex.
In this value we store the addends we haven't propagated to the child vertices.
Before traversing to a child vertex, we call $\text{push}$ and propagate the value to both children.
We have to do this in both the $\text{update}$ function and the $\text{query}$ function.
```cpp
void push(int v) {
t[v*2] += lazy[v];
lazy[v*2] += lazy[v];
t[v*2+1] += lazy[v];
lazy[v*2+1] += lazy[v];
lazy[v] = 0;
}
void update(int v, int tl, int tr, int l, int r, int addend) {
if (l > r)
return;
if (l == tl && tr == r) {
t[v] += addend;
lazy[v] += addend;
} else {
push(v);
int tm = (tl + tr) / 2;
update(v*2, tl, tm, l, min(r, tm), addend);
update(v*2+1, tm+1, tr, max(l, tm+1), r, addend);
t[v] = max(t[v*2], t[v*2+1]);
}
}
int query(int v, int tl, int tr, int l, int r) {
if (l > r)
return -INF;
if (l == tl && tr == r)
return t[v];
push(v);
int tm = (tl + tr) / 2;
return max(query(v*2, tl, tm, l, min(r, tm)),
query(v*2+1, tm+1, tr, max(l, tm+1), r));
}
```
### <a name="generalization-to-higher-dimensions"></a>Generalization to higher dimensions
A Segment Tree can be generalized quite naturally to higher dimensions.
If in the one-dimensional case we split the indices of the array into segments, then in the two-dimensional we make an ordinary Segment Tree with respect to the first indices, and for each segment we build an ordinary Segment Tree with respect to the second indices.
#### Simple 2D Segment Tree
A matrix $a[0 \dots n-1, 0 \dots m-1]$ is given, and we have to find the sum (or minimum/maximum) on some submatrix $a[x_1 \dots x_2, y_1 \dots y_2]$, as well as perform modifications of individual matrix elements (i.e. queries of the form $a[x][y] = p$).
So we build a 2D Segment Tree: first the Segment Tree using the first coordinate ($x$), then the second ($y$).
To make the construction process more understandable, you can forget for a while that the matrix is two-dimensional, and only leave the first coordinate.
We will construct an ordinary one-dimensional Segment Tree using only the first coordinate.
But instead of storing a number in a segment, we store an entire Segment Tree:
i.e. at this moment we remember that we also have a second coordinate; but because at this moment the first coordinate is already fixed to some interval $[l \dots r]$, we actually work with such a strip $a[l \dots r, 0 \dots m-1]$ and for it we build a Segment Tree.
Here is the implementation of the construction of a 2D Segment Tree.
It actually represents two separate blocks:
the construction of a Segment Tree along the $x$ coordinate ($\text{build}_x$), and the $y$ coordinate ($\text{build}_y$).
For the leaf nodes in $\text{build}_y$ we have to separate two cases:
when the current segment of the first coordinate $[tlx \dots trx]$ has length 1, and when it has a length greater than one. In the first case, we just take the corresponding value from the matrix, and in the second case we can combine the values of two Segment Trees from the left and the right son in the coordinate $x$.
```cpp
void build_y(int vx, int lx, int rx, int vy, int ly, int ry) {
if (ly == ry) {
if (lx == rx)
t[vx][vy] = a[lx][ly];
else
t[vx][vy] = t[vx*2][vy] + t[vx*2+1][vy];
} else {
int my = (ly + ry) / 2;
build_y(vx, lx, rx, vy*2, ly, my);
build_y(vx, lx, rx, vy*2+1, my+1, ry);
t[vx][vy] = t[vx][vy*2] + t[vx][vy*2+1];
}
}
void build_x(int vx, int lx, int rx) {
if (lx != rx) {
int mx = (lx + rx) / 2;
build_x(vx*2, lx, mx);
build_x(vx*2+1, mx+1, rx);
}
build_y(vx, lx, rx, 1, 0, m-1);
}
```
Such a Segment Tree still uses a linear amount of memory, but with a larger constant: $16 n m$.
It is clear that the described procedure $\text{build}_x$ also works in linear time.
Now we turn to processing of queries. We will answer the two-dimensional query using the same principle:
first break the query along the first coordinate, and then, for every reached vertex, call the corresponding Segment Tree of the second coordinate.
```cpp
int sum_y(int vx, int vy, int tly, int try_, int ly, int ry) {
if (ly > ry)
return 0;
if (ly == tly && try_ == ry)
return t[vx][vy];
int tmy = (tly + try_) / 2;
return sum_y(vx, vy*2, tly, tmy, ly, min(ry, tmy))
+ sum_y(vx, vy*2+1, tmy+1, try_, max(ly, tmy+1), ry);
}
int sum_x(int vx, int tlx, int trx, int lx, int rx, int ly, int ry) {
if (lx > rx)
return 0;
if (lx == tlx && trx == rx)
return sum_y(vx, 1, 0, m-1, ly, ry);
int tmx = (tlx + trx) / 2;
return sum_x(vx*2, tlx, tmx, lx, min(rx, tmx), ly, ry)
+ sum_x(vx*2+1, tmx+1, trx, max(lx, tmx+1), rx, ly, ry);
}
```
This function works in $O(\log n \log m)$ time, since it first descends the tree in the first coordinate, and for each traversed vertex in the tree it makes a query in the corresponding Segment Tree along the second coordinate.
Finally we consider the modification query.
We want to learn how to modify the Segment Tree in accordance with the change in the value of some element $a[x][y] = p$.
It is clear that the changes will occur only in those vertices of the first Segment Tree that cover the coordinate $x$ (and there will be $O(\log n)$ of them), and for the Segment Trees corresponding to them the changes will only occur at those vertices that cover the coordinate $y$ (and there will be $O(\log m)$ of them).
Therefore the implementation will not be very different from the one-dimensional case, only now we first descend the first coordinate, and then the second.
```cpp
void update_y(int vx, int lx, int rx, int vy, int ly, int ry, int x, int y, int new_val) {
if (ly == ry) {
if (lx == rx)
t[vx][vy] = new_val;
else
t[vx][vy] = t[vx*2][vy] + t[vx*2+1][vy];
} else {
int my = (ly + ry) / 2;
if (y <= my)
update_y(vx, lx, rx, vy*2, ly, my, x, y, new_val);
else
update_y(vx, lx, rx, vy*2+1, my+1, ry, x, y, new_val);
t[vx][vy] = t[vx][vy*2] + t[vx][vy*2+1];
}
}
void update_x(int vx, int lx, int rx, int x, int y, int new_val) {
if (lx != rx) {
int mx = (lx + rx) / 2;
if (x <= mx)
update_x(vx*2, lx, mx, x, y, new_val);
else
update_x(vx*2+1, mx+1, rx, x, y, new_val);
}
update_y(vx, lx, rx, 1, 0, m-1, x, y, new_val);
}
```
#### Compression of 2D Segment Tree
Let the problem be the following: there are $n$ points on the plane given by their coordinates $(x_i, y_i)$ and queries of the form "count the number of points lying in the rectangle $((x_1, y_1), (x_2, y_2))$".
It is clear that in the case of such a problem it becomes unreasonably wasteful to construct a two-dimensional Segment Tree with $O(n^2)$ elements.
Most of this memory will be wasted, since each single point can only get into $O(\log n)$ segments of the tree along the first coordinate, and therefore the total "useful" size of all tree segments on the second coordinate is $O(n \log n)$.
So we proceed as follows:
at each vertex of the Segment Tree with respect to the first coordinate we store a Segment Tree constructed only by those second coordinates that occur in the current segment of the first coordinates.
In other words, when constructing a Segment Tree inside some vertex with index $vx$ and the boundaries $tlx$ and $trx$, we only consider those points that fall into this interval $x \in [tlx, trx]$, and build a Segment Tree just using them.
Thus we will achieve that each Segment Tree on the second coordinate will occupy exactly as much memory as it should.
As a result, the total amount of memory will decrease to $O(n \log n)$.
We can still answer the queries in $O(\log^2 n)$ time; we just have to make a binary search on the second coordinate, but this will not worsen the complexity.
But modification queries will be impossible with this structure:
in fact, if a new point appears, we have to add a new element in the middle of some Segment Tree along the second coordinate, which cannot be done effectively.
In conclusion we note that the two-dimensional Segment Tree contracted in the described way becomes practically equivalent to the modification of the one-dimensional Segment Tree (see [Saving the entire subarrays in each vertex](segment_tree.md#saving-the-entire-subarrays-in-each-vertex)).
In particular the two-dimensional Segment Tree is just a special case of storing a subarray in each vertex of the tree.
It follows that if you have to abandon a two-dimensional Segment Tree due to the impossibility of executing a query, it makes sense to try replacing the nested Segment Tree with some more powerful data structure, for example a Cartesian tree.
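For illustration, here is a hedged sketch of this compressed structure written as a one-dimensional Segment Tree over the points sorted by $x$, where each vertex stores the sorted $y$-coordinates of its points (the names `pts` and `count_points` are illustrative, not part of the text above):

```cpp
int n;                     // number of points
vector<vector<int>> t;     // call t.resize(4*n) before build; t[v] holds sorted y's

// pts must be sorted by x before calling build(pts, 1, 0, n-1)
void build(vector<pair<int,int>>& pts, int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = { pts[tl].second };
    } else {
        int tm = (tl + tr) / 2;
        build(pts, v*2, tl, tm);
        build(pts, v*2+1, tm+1, tr);
        merge(t[v*2].begin(), t[v*2].end(),
              t[v*2+1].begin(), t[v*2+1].end(),
              back_inserter(t[v]));
    }
}

// number of points with x-index in [l, r] and y-coordinate in [y1, y2]
int count_points(int v, int tl, int tr, int l, int r, int y1, int y2) {
    if (l > r)
        return 0;
    if (l == tl && r == tr)
        return upper_bound(t[v].begin(), t[v].end(), y2) -
               lower_bound(t[v].begin(), t[v].end(), y1);
    int tm = (tl + tr) / 2;
    return count_points(v*2, tl, tm, l, min(r, tm), y1, y2)
         + count_points(v*2+1, tm+1, tr, max(l, tm+1), r, y1, y2);
}
```

For a rectangle query $((x_1, y_1), (x_2, y_2))$ one would first binary search for the range of indices $[l, r]$ of points with $x_1 \le x \le x_2$ in the sorted order, which keeps the total query time at $O(\log^2 n)$.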
### Preserving the history of its values (Persistent Segment Tree)
A persistent data structure is a data structure that remembers its previous state for each modification.
This allows us to access any version of the data structure that interests us and execute a query on it.
Segment Tree is a data structure that can be turned into a persistent data structure efficiently (both in time and memory consumption).
We want to avoid copying the complete tree before each modification, and we don't want to lose the $O(\log n)$ time behavior for answering range queries.
In fact, any change request in the Segment Tree leads to a change in the data of only $O(\log n)$ vertices along the path starting from the root.
So if we store the Segment Tree using pointers (i.e. a vertex stores pointers to the left and the right child vertices), then when performing the modification query, we simply need to create new vertices instead of changing the available vertices.
Vertices that are not affected by the modification query can still be used by pointing the pointers to the old vertices.
Thus for a modification query $O(\log n)$ new vertices will be created, including a new root vertex of the Segment Tree, and the entire previous version of the tree rooted at the old root vertex will remain unchanged.
Let's give an example implementation for the simplest Segment Tree: when there is only a query asking for sums, and modification queries of single elements.
```cpp
struct Vertex {
Vertex *l, *r;
int sum;
Vertex(int val) : l(nullptr), r(nullptr), sum(val) {}
Vertex(Vertex *l, Vertex *r) : l(l), r(r), sum(0) {
if (l) sum += l->sum;
if (r) sum += r->sum;
}
};
Vertex* build(int a[], int tl, int tr) {
if (tl == tr)
return new Vertex(a[tl]);
int tm = (tl + tr) / 2;
return new Vertex(build(a, tl, tm), build(a, tm+1, tr));
}
int get_sum(Vertex* v, int tl, int tr, int l, int r) {
if (l > r)
return 0;
if (l == tl && tr == r)
return v->sum;
int tm = (tl + tr) / 2;
return get_sum(v->l, tl, tm, l, min(r, tm))
+ get_sum(v->r, tm+1, tr, max(l, tm+1), r);
}
Vertex* update(Vertex* v, int tl, int tr, int pos, int new_val) {
if (tl == tr)
return new Vertex(new_val);
int tm = (tl + tr) / 2;
if (pos <= tm)
return new Vertex(update(v->l, tl, tm, pos, new_val), v->r);
else
return new Vertex(v->l, update(v->r, tm+1, tr, pos, new_val));
}
```
For each modification of the Segment Tree we will receive a new root vertex.
To quickly jump between two different versions of the Segment Tree, we need to store these roots in an array.
To use a specific version of the Segment Tree we simply call the query using the appropriate root vertex.
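For example, a short usage sketch (here `a` is an `int` array of size `n`, and `pos`, `val`, `l`, `r` are placeholder query parameters):

```cpp
vector<Vertex*> roots;
roots.push_back(build(a, 0, n-1));                        // version 0: the original array
roots.push_back(update(roots.back(), 0, n-1, pos, val));  // version 1: after a[pos] = val
int old_sum = get_sum(roots[0], 0, n-1, l, r);            // query the old version
int new_sum = get_sum(roots[1], 0, n-1, l, r);            // query the new version
```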
With the approach described above almost any Segment Tree can be turned into a persistent data structure.
#### Finding the $k$-th smallest number in a range {data-toc-label="Finding the k-th smallest number in a range"}
This time we have to answer queries of the form "What is the $k$-th smallest element in the range $a[l \dots r]$?".
This query can be answered using a binary search and a Merge Sort Tree, but the time complexity for a single query would be $O(\log^3 n)$.
We will accomplish the same task using a persistent Segment Tree in $O(\log n)$.
First we will discuss a solution for a simpler problem:
We will only consider arrays in which the elements are bound by $0 \le a[i] \lt n$.
And we only want to find the $k$-th smallest element in some prefix of the array $a$.
It will be very easy to extend the developed ideas later to unrestricted arrays and unrestricted range queries.
Note that we will be using 1 based indexing for $a$.
We will use a Segment Tree that counts all appearing numbers, i.e. in the Segment Tree we will store the histogram of the array.
So the leaf vertices will store how often the values $0$, $1$, $\dots$, $n-1$ appear in the array, and the other vertices store how many numbers from some value range appear in the array.
In other words we create a regular Segment Tree with sum queries over the histogram of the array.
But instead of creating all $n$ Segment Trees for every possible prefix, we will create one persistent one, that will contain the same information.
We will start with an empty Segment Tree (all counts will be $0$) pointed to by $root_0$, and add the elements $a[1]$, $a[2]$, $\dots$, $a[n]$ one after another.
For each modification we will receive a new root vertex, let's call $root_i$ the root of the Segment Tree after inserting the first $i$ elements of the array $a$.
The Segment Tree rooted at $root_i$ will contain the histogram of the prefix $a[1 \dots i]$.
Using this Segment Tree we can find in $O(\log n)$ time the position of the $k$-th element using the same technique discussed in [Counting the number of zeros, searching for the $k$-th zero](segment_tree.md#counting-zero-search-kth).
Now to the not-restricted version of the problem.
First for the restriction on the queries:
Instead of only performing these queries over a prefix of $a$, we want to use any arbitrary segments $a[l \dots r]$.
Here we need a Segment Tree that represents the histogram of the elements in the range $a[l \dots r]$.
It is easy to see that such a Segment Tree is just the difference between the Segment Tree rooted at $root_{r}$ and the Segment Tree rooted at $root_{l-1}$, i.e. every vertex in the $[l \dots r]$ Segment Tree can be computed with the vertex of the $root_{r}$ tree minus the vertex of the $root_{l-1}$ tree.
In the implementation of the $\text{find_kth}$ function this can be handled by passing two vertex pointers and computing the count/sum of the current segment as the difference of the two counts/sums of the vertices.
Here are the modified $\text{build}$, $\text{update}$ and $\text{find_kth}$ functions:
```{.cpp file=kth_smallest_persistent_segment_tree}
Vertex* build(int tl, int tr) {
if (tl == tr)
return new Vertex(0);
int tm = (tl + tr) / 2;
return new Vertex(build(tl, tm), build(tm+1, tr));
}
Vertex* update(Vertex* v, int tl, int tr, int pos) {
if (tl == tr)
return new Vertex(v->sum+1);
int tm = (tl + tr) / 2;
if (pos <= tm)
return new Vertex(update(v->l, tl, tm, pos), v->r);
else
return new Vertex(v->l, update(v->r, tm+1, tr, pos));
}
int find_kth(Vertex* vl, Vertex *vr, int tl, int tr, int k) {
if (tl == tr)
return tl;
int tm = (tl + tr) / 2, left_count = vr->l->sum - vl->l->sum;
if (left_count >= k)
return find_kth(vl->l, vr->l, tl, tm, k);
return find_kth(vl->r, vr->r, tm+1, tr, k-left_count);
}
```
As already written above, we need to store the root of the initial Segment Tree, and also all the roots after each update.
Here is the code for building a persistent Segment Tree over a vector `a` with elements in the range `[0, MAX_VALUE]`.
```{.cpp file=kth_smallest_persistent_segment_tree_build}
int tl = 0, tr = MAX_VALUE + 1;
std::vector<Vertex*> roots;
roots.push_back(build(tl, tr));
for (int i = 0; i < a.size(); i++) {
roots.push_back(update(roots.back(), tl, tr, a[i]));
}
// find the 5th smallest number from the subarray [a[2], a[3], ..., a[19]]
int result = find_kth(roots[2], roots[20], tl, tr, 5);
```
Now to the restrictions on the array elements:
We can actually transform any array to such an array by index compression.
The smallest element in the array gets assigned the value 0, the second smallest the value 1, and so forth.
It is easy to generate lookup tables (e.g. using a $\text{map}$) that convert a value to its index and vice versa in $O(\log n)$ time.
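A hedged sketch of such a compression (the names `to_index` and `to_value` are illustrative):

```cpp
vector<int> a = {500, 7, 500, 42};   // original values
map<int, int> to_index;              // original value -> compressed index
for (int x : a)
    to_index[x] = 0;                 // collect the distinct values (indices filled below)
vector<int> to_value;                // compressed index -> original value
int idx = 0;
for (auto &p : to_index) {           // map iterates over the keys in increasing order
    p.second = idx++;
    to_value.push_back(p.first);
}
for (int &x : a)
    x = to_index[x];                 // a becomes {2, 0, 2, 1}
```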
### Dynamic segment tree
(Called so because its shape is dynamic and the nodes are usually dynamically allocated.
Also known as _implicit segment tree_ or _sparse segment tree_.)
Previously, we considered cases when we have the ability to build the original segment tree. But what to do if the array is filled with some default element, but its size does not allow us to build the tree completely in advance?
We can solve this problem by creating a segment tree lazily (incrementally). Initially, we will create only the root, and we will create the other vertices only when we need them.
In this case, we will use the implementation with pointers (before going to the vertex children, check whether they are created, and if not, create them).
Each query has still only the complexity $O(\log n)$, which is small enough for most use-cases (e.g. $\log_2 10^9 \approx 30$).
In this implementation we have two queries, adding a value to a position (initially all values are $0$), and computing the sum of all values in a range.
`Vertex(0, n)` will be the root vertex of the implicit tree.
```cpp
struct Vertex {
int left, right;
int sum = 0;
Vertex *left_child = nullptr, *right_child = nullptr;
Vertex(int lb, int rb) {
left = lb;
right = rb;
}
void extend() {
if (!left_child && left + 1 < right) {
int t = (left + right) / 2;
left_child = new Vertex(left, t);
right_child = new Vertex(t, right);
}
}
void add(int k, int x) {
extend();
sum += x;
if (left_child) {
if (k < left_child->right)
left_child->add(k, x);
else
right_child->add(k, x);
}
}
int get_sum(int lq, int rq) {
if (lq <= left && right <= rq)
return sum;
if (max(left, lq) >= min(right, rq))
return 0;
extend();
return left_child->get_sum(lq, rq) + right_child->get_sum(lq, rq);
}
};
```
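A short usage sketch of this structure (the bounds and values are arbitrary; note that `get_sum` uses half-open ranges $[lq, rq)$):

```cpp
int n = 1000000000;              // the tree covers [0, n); vertices are created lazily
Vertex root(0, n);
root.add(5, 10);                 // a[5] += 10
root.add(999999999, 3);          // a[999999999] += 3
int s1 = root.get_sum(0, n);     // 13: the whole range
int s2 = root.get_sum(0, 1000);  // 10: only index 5 falls into [0, 1000)
```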
Obviously this idea can be extended in lots of different ways. E.g. by adding support for range updates via lazy propagation.
## Practice Problems
* [SPOJ - KQUERY](http://www.spoj.com/problems/KQUERY/) [Persistent segment tree / Merge sort tree]
* [Codeforces - Xenia and Bit Operations](https://codeforces.com/problemset/problem/339/D)
* [UVA 11402 - Ahoy, Pirates!](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2397)
* [SPOJ - GSS3](http://www.spoj.com/problems/GSS3/)
* [Codeforces - Distinct Characters Queries](https://codeforces.com/problemset/problem/1234/D)
* [Codeforces - Knight Tournament](https://codeforces.com/contest/356/problem/A) [For beginners]
* [Codeforces - Ant colony](https://codeforces.com/contest/474/problem/F)
* [Codeforces - Drazil and Park](https://codeforces.com/contest/515/problem/E)
* [Codeforces - Circular RMQ](https://codeforces.com/problemset/problem/52/C)
* [Codeforces - Lucky Array](https://codeforces.com/contest/121/problem/E)
* [Codeforces - The Child and Sequence](https://codeforces.com/contest/438/problem/D)
* [Codeforces - DZY Loves Fibonacci Numbers](https://codeforces.com/contest/446/problem/C) [Lazy propagation]
* [Codeforces - Alphabet Permutations](https://codeforces.com/problemset/problem/610/E)
* [Codeforces - Eyes Closed](https://codeforces.com/problemset/problem/895/E)
* [Codeforces - Kefa and Watch](https://codeforces.com/problemset/problem/580/E)
* [Codeforces - A Simple Task](https://codeforces.com/problemset/problem/558/E)
* [Codeforces - SUM and REPLACE](https://codeforces.com/problemset/problem/920/F)
* [Codeforces - XOR on Segment](https://codeforces.com/problemset/problem/242/E) [Lazy propagation]
* [Codeforces - Please, another Queries on Array?](https://codeforces.com/problemset/problem/1114/F) [Lazy propagation]
* [COCI - Deda](https://oj.uz/problem/view/COCI17_deda) [Last element smaller or equal to x / Binary search]
* [Codeforces - The Untended Antiquity](https://codeforces.com/problemset/problem/869/E) [2D]
* [CSES - Hotel Queries](https://cses.fi/problemset/task/1143)
* [CSES - Polynomial Queries](https://cses.fi/problemset/task/1736)
* [CSES - Range Updates and Sums](https://cses.fi/problemset/task/1735)
---
title
sqrt_tree
---
# Sqrt Tree
Given an array $a$ that contains $n$ elements and an operation $\circ$ that satisfies the associative property: $(x \circ y) \circ z = x \circ (y \circ z)$ for any $x$, $y$, $z$.
So, such operations as $\gcd$, $\min$, $\max$, $+$, $\text{and}$, $\text{or}$, $\text{xor}$, etc. satisfy these conditions.
Also we have some queries $q(l, r)$. For each query, we need to compute $a_l \circ a_{l+1} \circ \dots \circ a_r$.
Sqrt Tree can process such queries in $O(1)$ time with $O(n \cdot \log \log n)$ preprocessing time and $O(n \cdot \log \log n)$ memory.
## Description
### Building sqrt decomposition
Let's make a [sqrt decomposition](/data_structures/sqrt_decomposition.html). We divide our array into $\sqrt{n}$ blocks, each block having size $\sqrt{n}$. For each block, we compute:
1. Answers to the queries that lie in the block and begin at the beginning of the block ($\text{prefixOp}$)
2. Answers to the queries that lie in the block and end at the end of the block ($\text{suffixOp}$)
And we'll compute an additional array:
3. $\text{between}_{i, j}$ (for $i \le j$) - answer to the query that begins at the start of block $i$ and ends at the end of block $j$. Note that we have $\sqrt{n}$ blocks, so the size of this array will be $O(\sqrt{n}^2) = O(n)$.
Let's see the example.
Let $\circ$ be $+$ (we calculate sum on a segment) and we have the following array $a$:
`{1, 2, 3, 4, 5, 6, 7, 8, 9}`
It will be divided into three blocks: `{1, 2, 3}`, `{4, 5, 6}` and `{7, 8, 9}`.
For the first block $\text{prefixOp}$ is `{1, 3, 6}` and $\text{suffixOp}$ is `{6, 5, 3}`.
For the second block $\text{prefixOp}$ is `{4, 9, 15}` and $\text{suffixOp}$ is `{15, 11, 6}`.
For the third block $\text{prefixOp}$ is `{7, 15, 24}` and $\text{suffixOp}$ is `{24, 17, 9}`.
$\text{between}$ array is:
~~~~~
{
{6, 21, 45},
{0, 15, 39},
{0, 0, 24}
}
~~~~~
(we assume that invalid elements where $i > j$ are filled with zeroes)
It's easy to see that these arrays can be calculated in $O(n)$ time and memory.
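For instance, a hedged sketch of computing these three arrays for the top level of the decomposition with $\circ = +$ (the variable names are illustrative; the full implementation is given at the end of the article):

~~~~~cpp
vector<long long> a = {1, 2, 3, 4, 5, 6, 7, 8, 9};
int n = a.size();
int bSz = max(1, (int)sqrt(n));              // block size, ~sqrt(n)
int bCnt = (n + bSz - 1) / bSz;              // number of blocks
vector<long long> prefixOp(n), suffixOp(n);
vector<vector<long long>> between(bCnt, vector<long long>(bCnt, 0));
for (int b = 0; b < bCnt; b++) {
    int l = b * bSz, r = min(n, l + bSz);
    prefixOp[l] = a[l];
    for (int i = l + 1; i < r; i++)
        prefixOp[i] = prefixOp[i-1] + a[i];
    suffixOp[r-1] = a[r-1];
    for (int i = r - 2; i >= l; i--)
        suffixOp[i] = a[i] + suffixOp[i+1];
}
for (int i = 0; i < bCnt; i++) {
    long long cur = 0;
    for (int j = i; j < bCnt; j++) {
        cur += suffixOp[j * bSz];            // suffixOp at a block start = sum of the whole block
        between[i][j] = cur;
    }
}
// for this example: between[0] = {6, 21, 45}, between[1][1..2] = {15, 39}, between[2][2] = 24
~~~~~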
We can already answer some queries using these arrays. If the query doesn't fit into one block, we can divide it into three parts: a suffix of a block, then a segment of contiguous whole blocks, and then a prefix of some block. We can answer such a query by applying our operation to some value from $\text{suffixOp}$, then some value from $\text{between}$, then some value from $\text{prefixOp}$.
But if we have queries that entirely fit into one block, we cannot process them using these three arrays. So, we need to do something.
### Making a tree
The only queries we cannot answer are those that fit entirely in one block. But what **if we build the same structure as described above for each block?** Yes, we can do it. And we do it recursively, until we reach a block size of $1$ or $2$. Answers for such blocks can be calculated easily in $O(1)$.
So, we get a tree. Each node of the tree represents some segment of the array. A node that represents an array segment of size $k$ has $\sqrt{k}$ children - one for each block. Also each node contains the three arrays described above for the segment it contains. The root of the tree represents the entire array. Nodes with segment lengths $1$ or $2$ are leaves.
Also it's obvious that the height of this tree is $O(\log \log n)$, because if some vertex of the tree represents an array with length $k$, then its children have length $\sqrt{k}$. $\log(\sqrt{k}) = \frac{\log{k}}{2}$, so $\log k$ decreases two times every layer of the tree and so its height is $O(\log \log n)$. The time for building and memory usage will be $O(n \cdot \log \log n)$, because every element of the array appears exactly once on each layer of the tree.
Now we can answer the queries in $O(\log \log n)$. We can go down on the tree until we meet a segment with length $1$ or $2$ (answer for it can be calculated in $O(1)$ time) or meet the first segment in which our query doesn't fit entirely into one block. See the first section on how to answer the query in this case.
OK, now we can do $O(\log \log n)$ per query. Can it be done faster?
### Optimizing the query complexity
One of the most obvious optimizations is to binary search for the tree node we need. Using binary search, we can reach $O(\log \log \log n)$ complexity per query. Can we do it even faster?
The answer is yes. Let's assume the following two things:
1. Each block size is a power of two.
2. All the blocks are equal on each layer.
To reach this, we can add some zero elements to our array so that its size becomes a power of two.
When we use this, some blocks may become up to twice as large in order to be a power of two, but a block is still $O(\sqrt{k})$ in size and we keep linear complexity for building the arrays in a segment.
Now, we can easily check if the query fits entirely into a block of size $2^k$. Let's write the endpoints of the query, $l$ and $r$ (we use 0-indexing), in binary form. For instance, let's assume $k=4, l=39, r=46$. The binary representation of $l$ and $r$ is:
$l = 39_{10} = 100111_2$
$r = 46_{10} = 101110_2$
Remember that all segments on one layer have equal size, and all blocks on one layer also have equal size (in our case, their size is $2^k = 2^4 = 16$). The blocks cover the array entirely, so the first block covers elements $(0 - 15)$ ($(000000_2 - 001111_2)$ in binary), the second one covers elements $(16 - 31)$ ($(010000_2 - 011111_2)$ in binary) and so on. We see that the indices of the positions covered by one block may differ only in the $k$ (in our case, $4$) lowest bits. In our case $l$ and $r$ have equal bits except the four lowest, so they lie in one block.
So, we need to check that nothing more than the $k$ lowest bits differ (i.e. $l\ \text{xor}\ r$ doesn't exceed $2^k-1$).
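A one-line sketch of this check (assuming `l`, `r` and the block-size exponent `k` are already known):

~~~~~cpp
bool same_block = ((l ^ r) >> k) == 0;   // true iff l and r differ only in the k lowest bits
~~~~~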
Using this observation, we can find a layer that is suitable to answer the query quickly. How to do this:
1. For each $i$ that doesn't exceed the array size, we find the highest bit that is equal to $1$. To do this quickly, we use DP and a precalculated array.
2. Now, for each $q(l, r)$ we find the highest bit of $l\ \text{xor}\ r$ and, using this information, it's easy to choose the layer on which we can process the query easily. We can also use a precalculated array here.
For more details, see the code below.
So, using this, we can answer the queries in $O(1)$ each. Hooray! :)
## Updating elements
We can also update elements in Sqrt Tree. Both single element updates and updates on a segment are supported.
### Updating a single element
Consider a query $\text{update}(x, val)$ that does the assignment $a_x = val$. We need to perform this query fast enough.
#### Naive approach
First, let's take a look at what changes in the tree when a single element changes. Consider a tree node with length $l$ and its arrays: $\text{prefixOp}$, $\text{suffixOp}$ and $\text{between}$. It is easy to see that only $O(\sqrt{l})$ elements of $\text{prefixOp}$ and $\text{suffixOp}$ change (only inside the block with the changed element). $O(l)$ elements are changed in $\text{between}$. Therefore, $O(l)$ elements in the tree node are updated.
We remember that any element $x$ is present in exactly one tree node at each layer. Root node (layer $0$) has length $O(n)$, nodes on layer $1$ have length $O(\sqrt{n})$, nodes on layer $2$ have length $O(\sqrt{\sqrt{n}})$, etc. So the time complexity per update is $O(n + \sqrt{n} + \sqrt{\sqrt{n}} + \dots) = O(n)$.
But it's too slow. Can it be done faster?
#### A sqrt-tree inside the sqrt-tree
Note that the bottleneck of updating is rebuilding $\text{between}$ of the root node. To optimize the tree, let's get rid of this array! Instead of the $\text{between}$ array, we store another sqrt-tree for the root node. Let's call it $\text{index}$. It plays the same role as $\text{between}$: it answers the queries on segments of blocks. Note that the rest of the tree nodes don't have $\text{index}$, they keep their $\text{between}$ arrays.
A sqrt-tree is _indexed_, if its root node has $\text{index}$. A sqrt-tree with $\text{between}$ array in its root node is _unindexed_. Note that $\text{index}$ **is _unindexed_ itself**.
So, we have the following algorithm for updating an _indexed_ tree:
* Update $\text{prefixOp}$ and $\text{suffixOp}$ in $O(\sqrt{n})$.
* Update $\text{index}$. It has length $O(\sqrt{n})$ and we need to update only one item in it (that represents the changed block). So, the time complexity for this step is $O(\sqrt{n})$. We can use the algorithm described in the beginning of this section (the "slow" one) to do it.
* Go into the child node that represents the changed block and update it in $O(\sqrt{n})$ with the "slow" algorithm.
Note that the query complexity is still $O(1)$: we need to use $\text{index}$ in query no more than once, and this will take $O(1)$ time.
So, total time complexity for updating a single element is $O(\sqrt{n})$. Hooray! :)
### Updating a segment
A Sqrt-tree can also do things like assigning a value on a segment. $\text{massUpdate}(x, l, r)$ means $a_i = x$ for all $l \le i \le r$.
There are two approaches to do this: one of them does $\text{massUpdate}$ in $O(\sqrt{n}\cdot \log \log n)$, keeping $O(1)$ per query. The second one does $\text{massUpdate}$ in $O(\sqrt{n})$, but the query complexity becomes $O(\log \log n)$.
We will do lazy propagation in the same way as it is done in segment trees: we mark some nodes as _lazy_, meaning that we'll push them when it's necessary. But one thing is different from segment trees: pushing a node is expensive, so it cannot be done in queries. On the layer $0$, pushing a node takes $O(\sqrt{n})$ time. So, we don't push nodes inside queries, we only look if the current node or its parent are _lazy_, and just take it into account while performing queries.
#### First approach
In the first approach, we say that only nodes on layer $1$ (with length $O(\sqrt{n})$) can be _lazy_. When pushing such a node, it updates its entire subtree including itself in $O(\sqrt{n}\cdot \log \log n)$. The $\text{massUpdate}$ process is done as follows:
* Consider the nodes on layer $1$ and blocks corresponding to them.
* Some blocks are entirely covered by $\text{massUpdate}$. Mark them as _lazy_ in $O(\sqrt{n})$.
* Some blocks are partially covered. Note there are no more than two blocks of this kind. Rebuild them in $O(\sqrt{n}\cdot \log \log n)$. If they were _lazy_, take it into account.
* Update $\text{prefixOp}$ and $\text{suffixOp}$ for partially covered blocks in $O(\sqrt{n})$ (because there are only two such blocks).
* Rebuild the $\text{index}$ in $O(\sqrt{n}\cdot \log \log n)$.
So we can do $\text{massUpdate}$ fast. But how does lazy propagation affect queries? The queries get the following modifications:
* If our query entirely lies in a _lazy_ block, calculate it and take _lazy_ into account. $O(1)$.
* If our query consists of many blocks, some of which are _lazy_, we need to take care of the _lazy_ marks only on the leftmost and the rightmost block. The rest of the blocks are calculated using $\text{index}$, which already knows the answer for a _lazy_ block (because it is rebuilt after each modification). $O(1)$.
The query complexity still remains $O(1)$.
#### Second approach
In this approach, each node can be _lazy_ (except the root). Even nodes in $\text{index}$ can be _lazy_. So, while processing a query, we have to look for _lazy_ tags in all the parent nodes, i.e. the query complexity will be $O(\log \log n)$.
But $\text{massUpdate}$ becomes faster. It looks in the following way:
* Some blocks are fully covered with $\text{massUpdate}$. So, _lazy_ tags are added to them. It is $O(\sqrt{n})$.
* Update $\text{prefixOp}$ and $\text{suffixOp}$ for partially covered blocks in $O(\sqrt{n})$ (because there are only two such blocks).
* Do not forget to update the index. It is $O(\sqrt{n})$ (we use the same $\text{massUpdate}$ algorithm).
* Update $\text{between}$ array for _unindexed_ subtrees.
* Go into the nodes representing partially covered blocks and call $\text{massUpdate}$ recursively.
Note that when we do the recursive call, we do prefix or suffix $\text{massUpdate}$. But for prefix and suffix updates we can have no more than one partially covered child. So, we visit one node on layer $1$, two nodes on layer $2$ and two nodes on any deeper level. So, the time complexity is $O(\sqrt{n} + \sqrt{\sqrt{n}} + \dots) = O(\sqrt{n})$. The approach here is similar to the segment tree mass update.
## Implementation
The following implementation of Sqrt Tree can perform the following operations: build in $O(n \cdot \log \log n)$, answer queries in $O(1)$ and update an element in $O(\sqrt{n})$.
~~~~~cpp
SqrtTreeItem op(const SqrtTreeItem &a, const SqrtTreeItem &b);
inline int log2Up(int n) {
int res = 0;
while ((1 << res) < n) {
res++;
}
return res;
}
class SqrtTree {
private:
int n, lg, indexSz;
vector<SqrtTreeItem> v;
vector<int> clz, layers, onLayer;
vector< vector<SqrtTreeItem> > pref, suf, between;
inline void buildBlock(int layer, int l, int r) {
pref[layer][l] = v[l];
for (int i = l+1; i < r; i++) {
pref[layer][i] = op(pref[layer][i-1], v[i]);
}
suf[layer][r-1] = v[r-1];
for (int i = r-2; i >= l; i--) {
suf[layer][i] = op(v[i], suf[layer][i+1]);
}
}
inline void buildBetween(int layer, int lBound, int rBound, int betweenOffs) {
int bSzLog = (layers[layer]+1) >> 1;
int bCntLog = layers[layer] >> 1;
int bSz = 1 << bSzLog;
int bCnt = (rBound - lBound + bSz - 1) >> bSzLog;
for (int i = 0; i < bCnt; i++) {
SqrtTreeItem ans;
for (int j = i; j < bCnt; j++) {
SqrtTreeItem add = suf[layer][lBound + (j << bSzLog)];
ans = (i == j) ? add : op(ans, add);
between[layer-1][betweenOffs + lBound + (i << bCntLog) + j] = ans;
}
}
}
inline void buildBetweenZero() {
int bSzLog = (lg+1) >> 1;
for (int i = 0; i < indexSz; i++) {
v[n+i] = suf[0][i << bSzLog];
}
build(1, n, n + indexSz, (1 << lg) - n);
}
inline void updateBetweenZero(int bid) {
int bSzLog = (lg+1) >> 1;
v[n+bid] = suf[0][bid << bSzLog];
update(1, n, n + indexSz, (1 << lg) - n, n+bid);
}
void build(int layer, int lBound, int rBound, int betweenOffs) {
if (layer >= (int)layers.size()) {
return;
}
int bSz = 1 << ((layers[layer]+1) >> 1);
for (int l = lBound; l < rBound; l += bSz) {
int r = min(l + bSz, rBound);
buildBlock(layer, l, r);
build(layer+1, l, r, betweenOffs);
}
if (layer == 0) {
buildBetweenZero();
} else {
buildBetween(layer, lBound, rBound, betweenOffs);
}
}
void update(int layer, int lBound, int rBound, int betweenOffs, int x) {
if (layer >= (int)layers.size()) {
return;
}
int bSzLog = (layers[layer]+1) >> 1;
int bSz = 1 << bSzLog;
int blockIdx = (x - lBound) >> bSzLog;
int l = lBound + (blockIdx << bSzLog);
int r = min(l + bSz, rBound);
buildBlock(layer, l, r);
if (layer == 0) {
updateBetweenZero(blockIdx);
} else {
buildBetween(layer, lBound, rBound, betweenOffs);
}
update(layer+1, l, r, betweenOffs, x);
}
inline SqrtTreeItem query(int l, int r, int betweenOffs, int base) {
if (l == r) {
return v[l];
}
if (l + 1 == r) {
return op(v[l], v[r]);
}
int layer = onLayer[clz[(l - base) ^ (r - base)]];
int bSzLog = (layers[layer]+1) >> 1;
int bCntLog = layers[layer] >> 1;
int lBound = (((l - base) >> layers[layer]) << layers[layer]) + base;
int lBlock = ((l - lBound) >> bSzLog) + 1;
int rBlock = ((r - lBound) >> bSzLog) - 1;
SqrtTreeItem ans = suf[layer][l];
if (lBlock <= rBlock) {
SqrtTreeItem add = (layer == 0) ? (
query(n + lBlock, n + rBlock, (1 << lg) - n, n)
) : (
between[layer-1][betweenOffs + lBound + (lBlock << bCntLog) + rBlock]
);
ans = op(ans, add);
}
ans = op(ans, pref[layer][r]);
return ans;
}
public:
inline SqrtTreeItem query(int l, int r) {
return query(l, r, 0, 0);
}
inline void update(int x, const SqrtTreeItem &item) {
v[x] = item;
update(0, 0, n, 0, x);
}
SqrtTree(const vector<SqrtTreeItem>& a)
: n((int)a.size()), lg(log2Up(n)), v(a), clz(1 << lg), onLayer(lg+1) {
clz[0] = 0;
for (int i = 1; i < (int)clz.size(); i++) {
clz[i] = clz[i >> 1] + 1;
}
int tlg = lg;
while (tlg > 1) {
onLayer[tlg] = (int)layers.size();
layers.push_back(tlg);
tlg = (tlg+1) >> 1;
}
for (int i = lg-1; i >= 0; i--) {
onLayer[i] = max(onLayer[i], onLayer[i+1]);
}
int betweenLayers = max(0, (int)layers.size() - 1);
int bSzLog = (lg+1) >> 1;
int bSz = 1 << bSzLog;
indexSz = (n + bSz - 1) >> bSzLog;
v.resize(n + indexSz);
pref.assign(layers.size(), vector<SqrtTreeItem>(n + indexSz));
suf.assign(layers.size(), vector<SqrtTreeItem>(n + indexSz));
between.assign(betweenLayers, vector<SqrtTreeItem>((1 << lg) + bSz));
build(0, 0, n, 0);
}
};
~~~~~
## Problems
[CodeChef - SEGPROD](https://www.codechef.com/NOV17/problems/SEGPROD)
---
title
sqrt_decomposition
---
# Sqrt Decomposition
Sqrt Decomposition is a method (or a data structure) that allows you to perform some common operations (finding sum of the elements of the sub-array, finding the minimal/maximal element, etc.) in $O(\sqrt n)$ operations, which is much faster than $O(n)$ for the trivial algorithm.
First we describe the data structure for one of the simplest applications of this idea, then show how to generalize it to solve some other problems, and finally look at a slightly different use of this idea: splitting the input requests into sqrt blocks.
## Sqrt-decomposition based data structure
Given an array $a[0 \dots n-1]$, implement a data structure that allows to find the sum of the elements $a[l \dots r]$ for arbitrary $l$ and $r$ in $O(\sqrt n)$ operations.
### Description
The basic idea of sqrt decomposition is preprocessing. We'll divide the array $a$ into blocks of length approximately $\sqrt n$, and for each block $i$ we'll precalculate the sum of elements in it $b[i]$.
We can assume that both the size of the block and the number of blocks are equal to $\sqrt n$ rounded up:
$$ s = \lceil \sqrt n \rceil $$
Then the array $a$ is divided into blocks in the following way:
$$ \underbrace{a[0], a[1], \dots, a[s-1]}_{\text{b[0]}}, \underbrace{a[s], \dots, a[2s-1]}_{\text{b[1]}}, \dots, \underbrace{a[(s-1) \cdot s], \dots, a[n-1]}_{\text{b[s-1]}} $$
The last block may have fewer elements than the others (if $n$ not a multiple of $s$), it is not important to the discussion (as it can be handled easily).
Thus, for each block $k$, we know the sum of elements on it $b[k]$:
$$ b[k] = \sum\limits_{i=k\cdot s}^{\min {(n-1,(k+1)\cdot s - 1})} a[i] $$
So, we have calculated the values of $b[k]$ (this required $O(n)$ operations). How can they help us to answer each query $[l, r]$ ?
Notice that if the interval $[l, r]$ is long enough, it will contain several whole blocks, and for those blocks we can find the sum of elements in them in a single operation. As a result, the interval $[l, r]$ will contain parts of only two blocks, and we'll have to calculate the sum of elements in these parts trivially.
Thus, in order to calculate the sum of elements on the interval $[l, r]$ we only need to sum the elements of the two "tails":
$[l\dots (k + 1)\cdot s-1]$ and $[p\cdot s\dots r]$ , and sum the values $b[i]$ in all the blocks from $k + 1$ to $p-1$:
$$ \sum\limits_{i=l}^r a[i] = \sum\limits_{i=l}^{(k+1) \cdot s-1} a[i] + \sum\limits_{i=k+1}^{p-1} b[i] + \sum\limits_{i=p\cdot s}^r a[i] $$
_Note: When $k = p$, i.e. $l$ and $r$ belong to the same block, the formula can't be applied, and the sum should be calculated trivially._
This approach allows us to significantly reduce the number of operations. Indeed, the size of each "tail" does not exceed the block length $s$, and the number of blocks in the sum does not exceed $s$. Since we have chosen $s \approx \sqrt n$, the total number of operations required to find the sum of elements on the interval $[l, r]$ is $O(\sqrt n)$.
### Implementation
Let's start with the simplest implementation:
```cpp
// input data
int n;
vector<int> a (n);
// preprocessing
int len = (int) sqrt (n + .0) + 1; // size of the block and the number of blocks
vector<int> b (len);
for (int i=0; i<n; ++i)
b[i / len] += a[i];
// answering the queries
for (;;) {
int l, r;
// read input data for the next query
int sum = 0;
for (int i=l; i<=r; )
if (i % len == 0 && i + len - 1 <= r) {
// if the whole block starting at i belongs to [l, r]
sum += b[i / len];
i += len;
}
else {
sum += a[i];
++i;
}
}
```
This implementation has unreasonably many division operations (which are much slower than other arithmetical operations). Instead, we can calculate the indices of the blocks $c_l$ and $c_r$ which contain indices $l$ and $r$, and loop through blocks $c_l+1 \dots c_r-1$ with separate processing of the "tails" in blocks $c_l$ and $c_r$. This approach corresponds to the last formula in the description, and makes the case $c_l = c_r$ a special case.
```cpp
int sum = 0;
int c_l = l / len, c_r = r / len;
if (c_l == c_r)
for (int i=l; i<=r; ++i)
sum += a[i];
else {
for (int i=l, end=(c_l+1)*len-1; i<=end; ++i)
sum += a[i];
for (int i=c_l+1; i<=c_r-1; ++i)
sum += b[i];
for (int i=c_r*len; i<=r; ++i)
sum += a[i];
}
```
## Other problems
So far we were discussing the problem of finding the sum of elements of a continuous subarray. This problem can be extended to allow to **update individual array elements**. If an element $a[i]$ changes, it's sufficient to update the value of $b[k]$ for the block to which this element belongs ($k = i / s$) in one operation:
$$ b[k] += a_{new}[i] - a_{old}[i] $$
On the other hand, the task of finding the sum of elements can be replaced with the task of finding minimal/maximal element of a subarray. If this problem has to address individual elements' updates as well, updating the value of $b[k]$ is also possible, but it will require iterating through all values of block $k$ in $O(s) = O(\sqrt{n})$ operations.
Sqrt decomposition can be applied in a similar way to a whole class of other problems: finding the number of zero elements, finding the first non-zero element, counting elements which satisfy a certain property etc.
Another class of problems appears when we need to **update array elements on intervals**: increment existing elements or replace them with a given value.
For example, let's say we can do two types of operations on an array: add a given value $\delta$ to all array elements on interval $[l, r]$ or query the value of element $a[i]$. Let's store the value which has to be added to all elements of block $k$ in $b[k]$ (initially all $b[k] = 0$). During each "add" operation we need to add $\delta$ to $b[k]$ for all blocks which belong to interval $[l, r]$ and to add $\delta$ to $a[i]$ for all elements which belong to the "tails" of the interval. The answer a query $i$ is simply $a[i] + b[i/s]$. This way "add" operation has $O(\sqrt{n})$ complexity, and answering a query has $O(1)$ complexity.
Finally, those two classes of problems can be combined if the task requires doing **both** element updates on an interval and queries on an interval. Both operations can be done with $O(\sqrt{n})$ complexity. This will require two block arrays $b$ and $c$: one to keep track of element updates and another to keep track of answers to the query.
There exist other problems which can be solved using sqrt decomposition, for example, a problem about maintaining a set of numbers which would allow adding/deleting numbers, checking whether a number belongs to the set and finding $k$-th largest number. To solve it one has to store numbers in increasing order, split into several blocks with $\sqrt{n}$ numbers in each. Every time a number is added/deleted, the blocks have to be rebalanced by moving numbers between beginnings and ends of adjacent blocks.
## Mo's algorithm
A similar idea, based on sqrt decomposition, can be used to answer range queries ($Q$) offline in $O((N+Q)\sqrt{N})$.
This might sound like a lot worse than the methods in the previous section, since this is a slightly worse complexity than we had earlier and cannot update values between two queries.
But in a lot of situations this method has advantages.
During a normal sqrt decomposition, we have to precompute the answers for each block, and merge them during answering queries.
In some problems this merging step can be quite problematic.
E.g. when each queries asks to find the **mode** of its range (the number that appears the most often).
For this each block would have to store the count of each number in it in some sort of data structure, and we cannot longer perform the merge step fast enough any more.
**Mo's algorithm** uses a completely different approach, that can answer these kind of queries fast, because it only keeps track of one data structure, and the only operations with it are easy and fast.
The idea is to answer the queries in a special order based on the indices.
We will first answer all queries which have the left index in block 0, then answer all queries which have left index in block 1 and so on.
And also we will have to answer the queries of a block is a special order, namely sorted by the right index of the queries.
As already said we will use a single data structure.
This data structure will store information about the range.
At the beginning this range will be empty.
When we want to answer the next query (in the special order), we simply extend or reduce the range, by adding/removing elements on both sides of the current range, until we transformed it into the query range.
This way, we only need to add or remove a single element once at a time, which should be pretty easy operations in our data structure.
Since we change the order of answering the queries, this is only possible when we are allowed to answer the queries in offline mode.
### Implementation
In Mo's algorithm we use two functions for adding an index and for removing an index from the range which we are currently maintaining.
```cpp
void remove(idx); // TODO: remove value at idx from data structure
void add(idx); // TODO: add value at idx from data structure
int get_answer(); // TODO: extract the current answer of the data structure
int block_size;
struct Query {
int l, r, idx;
bool operator<(Query other) const
{
return make_pair(l / block_size, r) <
make_pair(other.l / block_size, other.r);
}
};
vector<int> mo_s_algorithm(vector<Query> queries) {
vector<int> answers(queries.size());
sort(queries.begin(), queries.end());
// TODO: initialize data structure
int cur_l = 0;
int cur_r = -1;
// invariant: data structure will always reflect the range [cur_l, cur_r]
for (Query q : queries) {
while (cur_l > q.l) {
cur_l--;
add(cur_l);
}
while (cur_r < q.r) {
cur_r++;
add(cur_r);
}
while (cur_l < q.l) {
remove(cur_l);
cur_l++;
}
while (cur_r > q.r) {
remove(cur_r);
cur_r--;
}
answers[q.idx] = get_answer();
}
return answers;
}
```
Based on the problem we can use a different data structure and modify the `add`/`remove`/`get_answer` functions accordingly.
For example if we are asked to find range sum queries then we use a simple integer as data structure, which is $0$ at the beginning.
The `add` function will simply add the value of the position and subsequently update the answer variable.
On the other hand `remove` function will subtract the value at position and subsequently update the answer variable.
And `get_answer` just returns the integer.
For answering mode-queries, we can use a binary search tree (e.g. `map<int, int>`) for storing how often each number appears in the current range, and a second binary search tree (e.g. `set<pair<int, int>>`) for keeping counts of the numbers (e.g. as count-number pairs) in order.
The `add` method removes the current number from the second BST, increases the count in the first one, and inserts the number back into the second one.
`remove` does the same thing, it only decreases the count.
And `get_answer` just looks at second tree and returns the best value in $O(1)$.
### Complexity
Sorting all queries will take $O(Q \log Q)$.
How about the other operations?
How many times will the `add` and `remove` be called?
Let's say the block size is $S$.
If we only look at all queries having the left index in the same block, the queries are sorted by the right index.
Therefore we will call `add(cur_r)` and `remove(cur_r)` only $O(N)$ times for all these queries combined.
This gives $O(\frac{N}{S} N)$ calls for all blocks.
The value of `cur_l` can change by at most $O(S)$ during between two queries.
Therefore we have an additional $O(S Q)$ calls of `add(cur_l)` and `remove(cur_l)`.
For $S \approx \sqrt{N}$ this gives $O((N + Q) \sqrt{N})$ operations in total.
Thus the complexity is $O((N+Q)F\sqrt{N})$ where $O(F)$ is the complexity of `add` and `remove` function.
### Tips for improving runtime
* Block size of precisely $\sqrt{N}$ doesn't always offer the best runtime. For example, if $\sqrt{N}=750$ then it may happen that block size of $700$ or $800$ may run better.
More importantly, don't compute the block size at runtime - make it `const`. Division by constants is well optimized by compilers.
* In odd blocks sort the right index in ascending order and in even blocks sort it in descending order. This will minimize the movement of right pointer, as the normal sorting will move the right pointer from the end back to the beginning at the start of every block. With the improved version this resetting is no more necessary.
```cpp
bool cmp(pair<int, int> p, pair<int, int> q) {
if (p.first / BLOCK_SIZE != q.first / BLOCK_SIZE)
return p < q;
return (p.first / BLOCK_SIZE & 1) ? (p.second < q.second) : (p.second > q.second);
}
```
You can read about even faster sorting approach [here](https://codeforces.com/blog/entry/61203).
|
---
title
sqrt_decomposition
---
# Sqrt Decomposition
Sqrt Decomposition is a method (or a data structure) that allows you to perform some common operations (finding sum of the elements of the sub-array, finding the minimal/maximal element, etc.) in $O(\sqrt n)$ operations, which is much faster than $O(n)$ for the trivial algorithm.
First we describe the data structure for one of the simplest applications of this idea, then show how to generalize it to solve some other problems, and finally look at a slightly different use of this idea: splitting the input requests into sqrt blocks.
## Sqrt-decomposition based data structure
Given an array $a[0 \dots n-1]$, implement a data structure that allows us to find the sum of the elements $a[l \dots r]$ for arbitrary $l$ and $r$ in $O(\sqrt n)$ operations.
### Description
The basic idea of sqrt decomposition is preprocessing. We'll divide the array $a$ into blocks of length approximately $\sqrt n$, and for each block $i$ we'll precalculate the sum of elements in it $b[i]$.
We can assume that both the size of the block and the number of blocks are equal to $\sqrt n$ rounded up:
$$ s = \lceil \sqrt n \rceil $$
Then the array $a$ is divided into blocks in the following way:
$$ \underbrace{a[0], a[1], \dots, a[s-1]}_{\text{b[0]}}, \underbrace{a[s], \dots, a[2s-1]}_{\text{b[1]}}, \dots, \underbrace{a[(s-1) \cdot s], \dots, a[n-1]}_{\text{b[s-1]}} $$
The last block may have fewer elements than the others (if $n$ is not a multiple of $s$); this is not important to the discussion, as it can be handled easily.
Thus, for each block $k$, we know the sum of elements on it $b[k]$:
$$ b[k] = \sum\limits_{i=k\cdot s}^{\min {(n-1,(k+1)\cdot s - 1})} a[i] $$
So, we have calculated the values of $b[k]$ (this required $O(n)$ operations). How can they help us to answer each query $[l, r]$ ?
Notice that if the interval $[l, r]$ is long enough, it will contain several whole blocks, and for those blocks we can find the sum of elements in them in a single operation. As a result, the interval $[l, r]$ will contain parts of only two blocks, and we'll have to calculate the sum of elements in these parts trivially.
Thus, in order to calculate the sum of elements on the interval $[l, r]$ (where $k = \lfloor l / s \rfloor$ is the block containing $l$ and $p = \lfloor r / s \rfloor$ is the block containing $r$), we only need to sum the elements of the two "tails"
$[l\dots (k + 1)\cdot s-1]$ and $[p\cdot s\dots r]$, and add the values $b[i]$ of all the blocks from $k + 1$ to $p-1$:
$$ \sum\limits_{i=l}^r a[i] = \sum\limits_{i=l}^{(k+1) \cdot s-1} a[i] + \sum\limits_{i=k+1}^{p-1} b[i] + \sum\limits_{i=p\cdot s}^r a[i] $$
_Note: When $k = p$, i.e. $l$ and $r$ belong to the same block, the formula can't be applied, and the sum should be calculated trivially._
This approach allows us to significantly reduce the number of operations. Indeed, the size of each "tail" does not exceed the block length $s$, and the number of blocks in the sum does not exceed $s$. Since we have chosen $s \approx \sqrt n$, the total number of operations required to find the sum of elements on the interval $[l, r]$ is $O(\sqrt n)$.
### Implementation
Let's start with the simplest implementation:
```cpp
// input data
int n;
vector<int> a (n);
// preprocessing
int len = (int) sqrt (n + .0) + 1; // size of the block and the number of blocks
vector<int> b (len);
for (int i=0; i<n; ++i)
b[i / len] += a[i];
// answering the queries
for (;;) {
int l, r;
// read input data for the next query
int sum = 0;
for (int i=l; i<=r; )
if (i % len == 0 && i + len - 1 <= r) {
// if the whole block starting at i belongs to [l, r]
sum += b[i / len];
i += len;
}
else {
sum += a[i];
++i;
}
}
```
This implementation has unreasonably many division operations (which are much slower than other arithmetical operations). Instead, we can calculate the indices of the blocks $c_l$ and $c_r$ which contain indices $l$ and $r$, and loop through blocks $c_l+1 \dots c_r-1$ with separate processing of the "tails" in blocks $c_l$ and $c_r$. This approach corresponds to the last formula in the description, and makes the case $c_l = c_r$ a special case.
```cpp
int sum = 0;
int c_l = l / len, c_r = r / len;
if (c_l == c_r)
for (int i=l; i<=r; ++i)
sum += a[i];
else {
for (int i=l, end=(c_l+1)*len-1; i<=end; ++i)
sum += a[i];
for (int i=c_l+1; i<=c_r-1; ++i)
sum += b[i];
for (int i=c_r*len; i<=r; ++i)
sum += a[i];
}
```
## Other problems
So far we were discussing the problem of finding the sum of elements of a continuous subarray. This problem can be extended to allow **updating individual array elements**. If an element $a[i]$ changes, it's sufficient to update the value of $b[k]$ for the block to which this element belongs ($k = i / s$) in one operation:
$$ b[k] += a_{new}[i] - a_{old}[i] $$
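For instance, a point update could look like the following sketch; it reuses `a`, `b` and the block length `len` (called $s$ in the formulas) from the implementation above.

```cpp
// point update: set a[i] = new_value and adjust the sum of its block
void update(int i, int new_value) {
    b[i / len] += new_value - a[i];
    a[i] = new_value;
}
```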
On the other hand, the task of finding the sum of elements can be replaced with the task of finding minimal/maximal element of a subarray. If this problem has to address individual elements' updates as well, updating the value of $b[k]$ is also possible, but it will require iterating through all values of block $k$ in $O(s) = O(\sqrt{n})$ operations.
Sqrt decomposition can be applied in a similar way to a whole class of other problems: finding the number of zero elements, finding the first non-zero element, counting elements which satisfy a certain property etc.
Another class of problems appears when we need to **update array elements on intervals**: increment existing elements or replace them with a given value.
For example, let's say we can do two types of operations on an array: add a given value $\delta$ to all array elements on the interval $[l, r]$, or query the value of element $a[i]$. Let's store the value which has to be added to all elements of block $k$ in $b[k]$ (initially all $b[k] = 0$). During each "add" operation we need to add $\delta$ to $b[k]$ for all blocks which lie completely inside the interval $[l, r]$, and to add $\delta$ to $a[i]$ for all elements which belong to the "tails" of the interval. The answer to a query $i$ is simply $a[i] + b[i/s]$. This way the "add" operation has $O(\sqrt{n})$ complexity, and answering a query has $O(1)$ complexity.
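A possible implementation of this idea is sketched below; following the text, `b[]` is now reused to store the pending addition of each block (so it has to start filled with zeros), while `a[]` and `len` are the same as in the implementation above.

```cpp
// add delta to all elements on [l, r]
void range_add(int l, int r, int delta) {
    int c_l = l / len, c_r = r / len;
    if (c_l == c_r) {
        for (int i = l; i <= r; ++i)
            a[i] += delta;
        return;
    }
    for (int i = l; i < (c_l + 1) * len; ++i)   // left tail
        a[i] += delta;
    for (int k = c_l + 1; k <= c_r - 1; ++k)    // whole blocks
        b[k] += delta;
    for (int i = c_r * len; i <= r; ++i)        // right tail
        a[i] += delta;
}

// current value of a[i]
int point_query(int i) {
    return a[i] + b[i / len];
}
```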
Finally, those two classes of problems can be combined if the task requires doing **both** element updates on an interval and queries on an interval. Both operations can be done with $O(\sqrt{n})$ complexity. This will require two block arrays $b$ and $c$: one to keep track of the block sums and another to keep track of the pending additions to whole blocks.
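A sketch of this combined version is given below, under the assumption that `b[k]` keeps the plain sum of block $k$ (precomputed as in the first implementation) and a second array `c[k]` keeps the pending addition that applies to every element of block $k$ (initially zero).

```cpp
// add delta to all elements on [l, r]; b[] stores block sums, c[] pending additions
void range_add(int l, int r, int delta) {
    int c_l = l / len, c_r = r / len;
    if (c_l == c_r) {
        for (int i = l; i <= r; ++i) { a[i] += delta; b[c_l] += delta; }
        return;
    }
    for (int i = l; i < (c_l + 1) * len; ++i) { a[i] += delta; b[c_l] += delta; }
    for (int k = c_l + 1; k <= c_r - 1; ++k)
        c[k] += delta;
    for (int i = c_r * len; i <= r; ++i) { a[i] += delta; b[c_r] += delta; }
}

// sum of elements on [l, r]; the true value of a[i] is a[i] + c[i / len]
long long range_sum(int l, int r) {
    int c_l = l / len, c_r = r / len;
    long long sum = 0;
    if (c_l == c_r) {
        for (int i = l; i <= r; ++i) sum += a[i] + c[c_l];
        return sum;
    }
    for (int i = l; i < (c_l + 1) * len; ++i) sum += a[i] + c[c_l];
    for (int k = c_l + 1; k <= c_r - 1; ++k)  sum += b[k] + (long long)c[k] * len;
    for (int i = c_r * len; i <= r; ++i)      sum += a[i] + c[c_r];
    return sum;
}
```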
There exist other problems which can be solved using sqrt decomposition, for example, a problem about maintaining a set of numbers which would allow adding/deleting numbers, checking whether a number belongs to the set and finding $k$-th largest number. To solve it one has to store numbers in increasing order, split into several blocks with $\sqrt{n}$ numbers in each. Every time a number is added/deleted, the blocks have to be rebalanced by moving numbers between beginnings and ends of adjacent blocks.
## Mo's algorithm
A similar idea, based on sqrt decomposition, can be used to answer range queries ($Q$) offline in $O((N+Q)\sqrt{N})$.
This might sound a lot worse than the methods in the previous section, since the complexity is slightly worse than what we had earlier and we cannot update values between two queries.
But in a lot of situations this method has advantages.
During a normal sqrt decomposition, we have to precompute the answers for each block, and merge them during answering queries.
In some problems this merging step can be quite problematic.
E.g. when each query asks to find the **mode** of its range (the number that appears most often).
For this, each block would have to store the count of each number in it in some sort of data structure, and we can no longer perform the merge step fast enough.
**Mo's algorithm** uses a completely different approach that can answer these kinds of queries fast, because it only keeps track of one data structure, and the only operations with it are easy and fast.
The idea is to answer the queries in a special order based on the indices.
We will first answer all queries which have the left index in block 0, then answer all queries which have left index in block 1 and so on.
Also, we will answer the queries of a block in a special order, namely sorted by the right index of the queries.
As already said we will use a single data structure.
This data structure will store information about the range.
At the beginning this range will be empty.
When we want to answer the next query (in the special order), we simply extend or reduce the range, by adding/removing elements on both sides of the current range, until we have transformed it into the query range.
This way, we only need to add or remove a single element at a time, which should be a pretty easy operation in our data structure.
Since we change the order of answering the queries, this is only possible when we are allowed to answer the queries in offline mode.
### Implementation
In Mo's algorithm we use two functions for adding an index and for removing an index from the range which we are currently maintaining.
```cpp
void remove(idx); // TODO: remove value at idx from data structure
void add(idx);     // TODO: add value at idx to data structure
int get_answer(); // TODO: extract the current answer of the data structure
int block_size;
struct Query {
int l, r, idx;
bool operator<(Query other) const
{
return make_pair(l / block_size, r) <
make_pair(other.l / block_size, other.r);
}
};
vector<int> mo_s_algorithm(vector<Query> queries) {
vector<int> answers(queries.size());
sort(queries.begin(), queries.end());
// TODO: initialize data structure
int cur_l = 0;
int cur_r = -1;
// invariant: data structure will always reflect the range [cur_l, cur_r]
for (Query q : queries) {
while (cur_l > q.l) {
cur_l--;
add(cur_l);
}
while (cur_r < q.r) {
cur_r++;
add(cur_r);
}
while (cur_l < q.l) {
remove(cur_l);
cur_l++;
}
while (cur_r > q.r) {
remove(cur_r);
cur_r--;
}
answers[q.idx] = get_answer();
}
return answers;
}
```
Based on the problem we can use a different data structure and modify the `add`/`remove`/`get_answer` functions accordingly.
For example, if we are asked to answer range sum queries, then we use a simple integer as the data structure, which is $0$ at the beginning.
The `add` function will simply add the value at the position and subsequently update the answer variable.
On the other hand, the `remove` function will subtract the value at the position and subsequently update the answer variable.
And `get_answer` just returns the integer.
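These three functions could be sketched as follows, assuming the array is stored in a global `vector<int> a` (a hypothetical name) and that a `long long` is used for the answer to avoid overflow.

```cpp
long long cur_answer = 0;

void add(int idx)    { cur_answer += a[idx]; }
void remove(int idx) { cur_answer -= a[idx]; }
long long get_answer() { return cur_answer; }
```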
For answering mode-queries, we can use a binary search tree (e.g. `map<int, int>`) for storing how often each number appears in the current range, and a second binary search tree (e.g. `set<pair<int, int>>`) for keeping counts of the numbers (e.g. as count-number pairs) in order.
The `add` method removes the current number from the second BST, increases the count in the first one, and inserts the number back into the second one.
`remove` does the same thing, except that it decreases the count.
And `get_answer` just looks at the second tree and returns the best value in $O(1)$.
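A sketch of this bookkeeping, again assuming the array is stored in a global `vector<int> a`:

```cpp
map<int, int> cnt;            // number -> how often it appears in the current range
set<pair<int, int>> ordered;  // (count, number) pairs, ordered by count

void add(int idx) {
    int x = a[idx];
    ordered.erase({cnt[x], x}); // no-op if x is not in the range yet
    cnt[x]++;
    ordered.insert({cnt[x], x});
}

void remove(int idx) {
    int x = a[idx];
    ordered.erase({cnt[x], x});
    cnt[x]--;
    if (cnt[x] > 0)
        ordered.insert({cnt[x], x});
}

int get_answer() {
    return ordered.rbegin()->second; // the number with the largest count
}
```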
### Complexity
Sorting all queries will take $O(Q \log Q)$.
How about the other operations?
How many times will the `add` and `remove` be called?
Let's say the block size is $S$.
If we only look at all queries having the left index in the same block, the queries are sorted by the right index.
Therefore we will call `add(cur_r)` and `remove(cur_r)` only $O(N)$ times for all these queries combined.
This gives $O(\frac{N}{S} N)$ calls for all blocks.
The value of `cur_l` can change by at most $O(S)$ between two queries.
Therefore we have an additional $O(S Q)$ calls of `add(cur_l)` and `remove(cur_l)`.
For $S \approx \sqrt{N}$ this gives $O((N + Q) \sqrt{N})$ operations in total.
Thus the complexity is $O((N+Q)F\sqrt{N})$ where $O(F)$ is the complexity of the `add` and `remove` functions.
### Tips for improving runtime
* Block size of precisely $\sqrt{N}$ doesn't always offer the best runtime. For example, if $\sqrt{N}=750$, then a block size of $700$ or $800$ may run better.
More importantly, don't compute the block size at runtime - make it `const`. Division by constants is well optimized by compilers.
* In odd blocks sort the right index in ascending order and in even blocks sort it in descending order. This will minimize the movement of the right pointer, as the normal sorting will move the right pointer from the end back to the beginning at the start of every block. With the improved version this resetting is no longer necessary.
```cpp
bool cmp(pair<int, int> p, pair<int, int> q) {
if (p.first / BLOCK_SIZE != q.first / BLOCK_SIZE)
return p < q;
return (p.first / BLOCK_SIZE & 1) ? (p.second < q.second) : (p.second > q.second);
}
```
You can read about even faster sorting approach [here](https://codeforces.com/blog/entry/61203).
## Practice Problems
* [UVA - 12003 - Array Transformer](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3154)
* [UVA - 11990 Dynamic Inversion](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3141)
* [SPOJ - Give Away](http://www.spoj.com/problems/GIVEAWAY/)
* [Codeforces - Till I Collapse](http://codeforces.com/contest/786/problem/C)
* [Codeforces - Destiny](http://codeforces.com/contest/840/problem/D)
* [Codeforces - Holes](http://codeforces.com/contest/13/problem/E)
* [Codeforces - XOR and Favorite Number](https://codeforces.com/problemset/problem/617/E)
* [Codeforces - Powerful array](http://codeforces.com/problemset/problem/86/D)
* [SPOJ - DQUERY](https://www.spoj.com/problems/DQUERY)
---
title
stacks_for_minima
---
# Minimum stack / Minimum queue
In this article we will consider three problems:
first we will modify a stack in a way that allows us to find the smallest element of the stack in $O(1)$, then we will do the same thing with a queue, and finally we will use these data structures to find the minimum in all subarrays of a fixed length in an array in $O(n)$.
## Stack modification
We want to modify the stack data structure in such a way that it is possible to find the smallest element in the stack in $O(1)$ time, while maintaining the same asymptotic behavior for adding and removing elements from the stack.
Quick reminder: on a stack we only add and remove elements at one end.
To do this, we will not only store the elements in the stack, but we will store them in pairs: the element itself and the minimum in the stack starting from this element and below.
```cpp
stack<pair<int, int>> st;
```
It is clear that finding the minimum in the whole stack consists only of looking at the value `stack.top().second`.
It is also obvious that adding or removing a new element to the stack can be done in constant time.
Implementation:
* Adding an element:
```cpp
int new_min = st.empty() ? new_elem : min(new_elem, st.top().second);
st.push({new_elem, new_min});
```
* Removing an element:
```cpp
int removed_element = st.top().first;
st.pop();
```
* Finding the minimum:
```cpp
int minimum = st.top().second;
```
## Queue modification (method 1)
Now we want to achieve the same operations with a queue, i.e. we want to add elements at the end and remove them from the front.
Here we consider a simple method for modifying a queue.
It has a big disadvantage though, because the modified queue will actually not store all elements.
The key idea is to only store the items in the queue that are needed to determine the minimum.
Namely, we will keep the queue in nondecreasing order (i.e. the smallest value will be stored at the head), and of course not in an arbitrary way: the actual minimum always has to be contained in the queue.
This way the smallest element will always be in the head of the queue.
Before adding a new element to the queue, it is enough to make a "cut":
we will remove all trailing elements of the queue that are larger than the new element, and afterwards add the new element to the queue.
This way we don't break the order of the queue, and we will also not lose the current element if it becomes the minimum at any subsequent step.
None of the removed elements can ever become the minimum, so this operation is allowed.
When we want to extract an element from the head, it actually might not be there (because we removed it previously while adding a smaller element).
Therefore when deleting an element from a queue we need to know the value of the element.
If the head of the queue has the same value, we can safely remove it, otherwise we do nothing.
Consider the implementations of the above operations:
```cpp
deque<int> q;
```
* Finding the minimum:
```cpp
int minimum = q.front();
```
* Adding an element:
```cpp
while (!q.empty() && q.back() > new_element)
q.pop_back();
q.push_back(new_element);
```
* Removing an element:
```cpp
if (!q.empty() && q.front() == remove_element)
q.pop_front();
```
It is clear that on average all these operations only take $O(1)$ time (because every element can only be pushed and popped once).
## Queue modification (method 2)
This is a modification of method 1.
We want to be able to remove elements without knowing which element we have to remove.
We can accomplish that by storing the index for each element in the queue.
And we also remember how many elements we already have added and removed.
```cpp
deque<pair<int, int>> q;
int cnt_added = 0;
int cnt_removed = 0;
```
* Finding the minimum:
```cpp
int minimum = q.front().first;
```
* Adding an element:
```cpp
while (!q.empty() && q.back().first > new_element)
q.pop_back();
q.push_back({new_element, cnt_added});
cnt_added++;
```
* Removing an element:
```cpp
if (!q.empty() && q.front().second == cnt_removed)
q.pop_front();
cnt_removed++;
```
## Queue modification (method 3)
Here we consider another way of modifying a queue to find the minimum in $O(1)$.
This way is somewhat more complicated to implement, but this time we actually store all elements.
And we also can remove an element from the front without knowing its value.
The idea is to reduce the problem to the problem of stacks, which was already solved by us.
So we only need to learn how to simulate a queue using two stacks.
We make two stacks, `s1` and `s2`.
Of course these stacks will be of the modified form, so that we can find the minimum in $O(1)$.
We will add new elements to the stack `s1`, and remove elements from the stack `s2`.
If at any time the stack `s2` is empty, we move all elements from `s1` to `s2` (which essentially reverses the order of those elements).
Finally finding the minimum in a queue involves just finding the minimum of both stacks.
Thus we perform all operations in $O(1)$ on average (each element will be added to stack `s1` once, transferred to `s2` once, and popped from `s2` once).
Implementation:
```cpp
stack<pair<int, int>> s1, s2;
```
* Finding the minimum:
```cpp
if (s1.empty() || s2.empty())
minimum = s1.empty() ? s2.top().second : s1.top().second;
else
minimum = min(s1.top().second, s2.top().second);
```
* Add element:
```cpp
int minimum = s1.empty() ? new_element : min(new_element, s1.top().second);
s1.push({new_element, minimum});
```
* Removing an element:
```cpp
if (s2.empty()) {
while (!s1.empty()) {
int element = s1.top().first;
s1.pop();
int minimum = s2.empty() ? element : min(element, s2.top().second);
s2.push({element, minimum});
}
}
int remove_element = s2.top().first;
s2.pop();
```
## Finding the minimum for all subarrays of fixed length
Suppose we are given an array $A$ of length $N$ and a number $M \le N$.
We have to find the minimum of each subarray of length $M$ in this array, i.e. we have to find:
$$\min_{0 \le i \le M-1} A[i], \min_{1 \le i \le M} A[i], \min_{2 \le i \le M+1} A[i],~\dots~, \min_{N-M \le i \le N-1} A[i]$$
We have to solve this problem in linear time, i.e. $O(n)$.
We can use any of the three modified queues to solve the problem.
The solution should be clear:
we add the first $M$ elements of the array to the queue, find and output their minimum, then add the next element to the queue and remove the first element of the current window, find and output the new minimum, etc.
Since all operations with the queue are performed in constant time on average, the complexity of the whole algorithm will be $O(n)$.
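For example, using method 2 (with the index stored next to each value) the whole procedure could be sketched as follows; the function name and signature are chosen here only for illustration.

```cpp
vector<int> window_minima(const vector<int>& A, int M) {
    deque<pair<int, int>> q;  // pairs (value, index), values nondecreasing
    vector<int> result;
    for (int i = 0; i < (int)A.size(); i++) {
        // add A[i]
        while (!q.empty() && q.back().first > A[i])
            q.pop_back();
        q.push_back({A[i], i});
        // remove the element that left the window [i - M + 1, i]
        if (q.front().second <= i - M)
            q.pop_front();
        // output the minimum once the first window is complete
        if (i >= M - 1)
            result.push_back(q.front().first);
    }
    return result;
}
```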
## Practice Problems
* [Queries with Fixed Length](https://www.hackerrank.com/challenges/queries-with-fixed-length/problem)
* [Binary Land](https://www.codechef.com/MAY20A/problems/BINLAND)
---
title
dsu
---
# Disjoint Set Union
This article discusses the data structure **Disjoint Set Union** or **DSU**.
Often it is also called **Union Find** because of its two main operations.
This data structure provides the following capabilities.
We are given several elements, each of which is a separate set.
A DSU will have an operation to combine any two sets, and it will be able to tell in which set a specific element is.
The classical version also introduces a third operation: it can create a set from a new element.
Thus the basic interface of this data structure consists of only three operations:
- `make_set(v)` - creates a new set consisting of the new element `v`
- `union_sets(a, b)` - merges the two specified sets (the set in which the element `a` is located, and the set in which the element `b` is located)
- `find_set(v)` - returns the representative (also called leader) of the set that contains the element `v`.
This representative is an element of its corresponding set.
It is selected in each set by the data structure itself (and can change over time, namely after `union_sets` calls).
This representative can be used to check if two elements are part of the same set or not.
`a` and `b` are exactly in the same set, if `find_set(a) == find_set(b)`.
Otherwise they are in different sets.
As described in more detail later, the data structure allows you to do each of these operations in almost $O(1)$ time on average.
Also in one of the subsections an alternative structure of a DSU is explained, which achieves a slower average complexity of $O(\log n)$, but can be more powerful than the regular DSU structure.
## Build an efficient data structure
We will store the sets in the form of **trees**: each tree will correspond to one set.
And the root of the tree will be the representative/leader of the set.
In the following image you can see the representation of such trees.

In the beginning, every element starts as a single set, therefore each vertex is its own tree.
Then we combine the set containing the element 1 and the set containing the element 2.
Then we combine the set containing the element 3 and the set containing the element 4.
And in the last step, we combine the set containing the element 1 and the set containing the element 3.
For the implementation this means that we will have to maintain an array `parent` that stores, for each vertex, a reference to its immediate ancestor in the tree.
### Naive implementation
We can already write the first implementation of the Disjoint Set Union data structure.
It will be pretty inefficient at first, but later we can improve it using two optimizations, so that it will take nearly constant time for each function call.
As we said, all the information about the sets of elements will be kept in an array `parent`.
To create a new set (operation `make_set(v)`), we simply create a tree with root in the vertex `v`, meaning that it is its own ancestor.
To combine two sets (operation `union_sets(a, b)`), we first find the representative of the set in which `a` is located, and the representative of the set in which `b` is located.
If the representatives are identical, then we have nothing to do: the sets are already merged.
Otherwise, we can simply specify that one of the representatives is the parent of the other representative - thereby combining the two trees.
Finally the implementation of the find representative function (operation `find_set(v)`):
we simply climb the ancestors of the vertex `v` until we reach the root, i.e. a vertex such that the reference to the ancestor leads to itself.
This operation is easily implemented recursively.
```cpp
void make_set(int v) {
parent[v] = v;
}
int find_set(int v) {
if (v == parent[v])
return v;
return find_set(parent[v]);
}
void union_sets(int a, int b) {
a = find_set(a);
b = find_set(b);
if (a != b)
parent[b] = a;
}
```
However this implementation is inefficient.
It is easy to construct an example, so that the trees degenerate into long chains.
In that case each call `find_set(v)` can take $O(n)$ time.
This is far away from the complexity that we want to have (nearly constant time).
Therefore we will consider two optimizations that will allow us to significantly accelerate the work.
### Path compression optimization
This optimization is designed for speeding up `find_set`.
If we call `find_set(v)` for some vertex `v`, we actually find the representative `p` for all vertices that we visit on the path between `v` and the actual representative `p`.
The trick is to make the paths for all those nodes shorter, by setting the parent of each visited vertex directly to `p`.
You can see the operation in the following image.
On the left there is a tree, and on the right side there is the compressed tree after calling `find_set(7)`, which shortens the paths for the visited nodes 7, 5, 3 and 2.

The new implementation of `find_set` is as follows:
```cpp
int find_set(int v) {
if (v == parent[v])
return v;
return parent[v] = find_set(parent[v]);
}
```
The simple implementation does what was intended:
first find the representative of the set (root vertex), and then in the process of stack unwinding the visited nodes are attached directly to the representative.
This simple modification of the operation already achieves the time complexity $O(\log n)$ per call on average (here without proof).
There is a second modification, that will make it even faster.
### Union by size / rank
In this optimization we will change the `union_sets` operation.
To be precise, we will change which tree gets attached to the other one.
In the naive implementation the second tree always got attached to the first one.
In practice that can lead to trees containing chains of length $O(n)$.
With this optimization we will avoid this by choosing very carefully which tree gets attached.
There are many possible heuristics that can be used.
Most popular are the following two approaches:
In the first approach we use the size of the trees as rank, and in the second one we use the depth of the tree (more precisely, the upper bound on the tree depth, because the depth will get smaller when applying path compression).
In both approaches the essence of the optimization is the same: we attach the tree with the lower rank to the one with the bigger rank.
Here is the implementation of union by size:
```cpp
void make_set(int v) {
parent[v] = v;
size[v] = 1;
}
void union_sets(int a, int b) {
a = find_set(a);
b = find_set(b);
if (a != b) {
if (size[a] < size[b])
swap(a, b);
parent[b] = a;
size[a] += size[b];
}
}
```
And here is the implementation of union by rank based on the depth of the trees:
```cpp
void make_set(int v) {
parent[v] = v;
rank[v] = 0;
}
void union_sets(int a, int b) {
a = find_set(a);
b = find_set(b);
if (a != b) {
if (rank[a] < rank[b])
swap(a, b);
parent[b] = a;
if (rank[a] == rank[b])
rank[a]++;
}
}
```
Both optimizations are equivalent in terms of time and space complexity. So in practice you can use any of them.
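For reference, here is a sketch that simply combines the two snippets above, i.e. path compression together with union by size:

```cpp
void make_set(int v) {
    parent[v] = v;
    size[v] = 1;
}

int find_set(int v) {
    if (v == parent[v])
        return v;
    return parent[v] = find_set(parent[v]); // path compression
}

void union_sets(int a, int b) {
    a = find_set(a);
    b = find_set(b);
    if (a != b) {
        if (size[a] < size[b])
            swap(a, b);                     // attach the smaller tree to the bigger one
        parent[b] = a;
        size[a] += size[b];
    }
}
```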
### Time complexity
As mentioned before, if we combine both optimizations - path compression with union by size / rank - we will reach nearly constant time queries.
It turns out, that the final amortized time complexity is $O(\alpha(n))$, where $\alpha(n)$ is the inverse Ackermann function, which grows very slowly.
In fact it grows so slowly, that it doesn't exceed $4$ for all reasonable $n$ (approximately $n < 10^{600}$).
Amortized complexity is the total time per operation, evaluated over a sequence of multiple operations.
The idea is to guarantee the total time of the entire sequence, while allowing single operations to be much slower than the amortized time.
E.g. in our case a single call might take $O(\log n)$ in the worst case, but if we do $m$ such calls back to back we will end up with an average time of $O(\alpha(n))$.
We will also not present a proof for this time complexity, since it is quite long and complicated.
Also, it's worth mentioning that DSU with union by size / rank, but without path compression works in $O(\log n)$ time per query.
### Linking by index / coin-flip linking
Both union by rank and union by size require that you store additional data for each set, and maintain these values during each union operation.
There also exists a randomized algorithm that simplifies the union operation a little bit: linking by index.
We assign each set a random value called the index, and we attach the set with the smaller index to the one with the larger one.
It is likely that a bigger set will have a bigger index than the smaller set, therefore this operation is closely related to union by size.
In fact it can be proven, that this operation has the same time complexity as union by size.
However in practice it is slightly slower than union by size.
You can find a proof of the complexity and even more union techniques [here](http://www.cis.upenn.edu/~sanjeev/papers/soda14_disjoint_set_union.pdf).
```cpp
void make_set(int v) {
parent[v] = v;
index[v] = rand();
}
void union_sets(int a, int b) {
a = find_set(a);
b = find_set(b);
if (a != b) {
if (index[a] < index[b])
swap(a, b);
parent[b] = a;
}
}
```
It's a common misconception that just flipping a coin, to decide which set we attach to the other, has the same complexity.
However that's not true.
The paper linked above conjectures that coin-flip linking combined with path compression has complexity $\Omega\left(n \frac{\log n}{\log \log n}\right)$.
And in benchmarks it performs a lot worse than union by size/rank or linking by index.
```cpp
void union_sets(int a, int b) {
a = find_set(a);
b = find_set(b);
if (a != b) {
if (rand() % 2)
swap(a, b);
parent[b] = a;
}
}
```
## Applications and various improvements
In this section we consider several applications of the data structure, both the trivial uses and some improvements to the data structure.
### Connected components in a graph
This is one of the obvious applications of DSU.
Formally the problem is defined in the following way:
Initially we have an empty graph.
We have to add vertices and undirected edges, and answer queries of the form $(a, b)$ - "are the vertices $a$ and $b$ in the same connected component of the graph?"
Here we can directly apply the data structure, and get a solution that handles an addition of a vertex or an edge and a query in nearly constant time on average.
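A minimal sketch of how these operations map onto the DSU interface, assuming `make_set`, `union_sets` and `find_set` from the implementation above and vertices numbered $0 \dots n-1$:

```cpp
for (int v = 0; v < n; v++)
    make_set(v);                 // add all n vertices
// adding an undirected edge (a, b):
union_sets(a, b);
// query "are a and b in the same connected component?":
bool same_component = (find_set(a) == find_set(b));
```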
This application is quite important, because nearly the same problem appears in [Kruskal's algorithm for finding a minimum spanning tree](../graph/mst_kruskal.md).
Using DSU we can [improve](../graph/mst_kruskal_with_dsu.md) the $O(m \log n + n^2)$ complexity to $O(m \log n)$.
### Search for connected components in an image
One of the applications of DSU is the following task:
there is an image of $n \times m$ pixels.
Originally all are white, but then a few black pixels are drawn.
You want to determine the size of each white connected component in the final image.
For the solution we simply iterate over all white pixels in the image, for each cell iterate over its four neighbors, and if the neighbor is white call `union_sets`.
Thus we will have a DSU with $n m$ nodes corresponding to image pixels.
The resulting trees in the DSU are the desired connected components.
The problem can also be solved by [DFS](../graph/depth-first-search.md) or [BFS](../graph/breadth-first-search.md), but the method described here has an advantage:
it can process the matrix row by row (i.e. to process a row we only need the previous and the current row, and only need a DSU built for the elements of one row) in $O(\min(n, m))$ memory.
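A sketch of the straightforward (not row-by-row) version is given below; the image representation and the helper `id` are assumptions made only for illustration, and union by size is assumed so that `size[]` holds the component sizes.

```cpp
// image is a vector<string> with '.' for white and '#' for black pixels
int n = image.size(), m = image[0].size();
auto id = [&](int i, int j) { return i * m + j; };  // cell -> DSU element
for (int i = 0; i < n * m; i++)
    make_set(i);
for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++) {
        if (image[i][j] != '.')
            continue;
        // it is enough to look only at the right and bottom neighbors
        if (i + 1 < n && image[i + 1][j] == '.')
            union_sets(id(i, j), id(i + 1, j));
        if (j + 1 < m && image[i][j + 1] == '.')
            union_sets(id(i, j), id(i, j + 1));
    }
// the size of the white component containing (i, j) is size[find_set(id(i, j))]
```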
### Store additional information for each set
DSU allows you to easily store additional information in the sets.
A simple example is the size of the sets:
storing the sizes was already described in the Union by size section (the information was stored by the current representative of the set).
In the same way - by storing it at the representative nodes - you can also store any other information about the sets.
### Compress jumps along a segment / Painting subarrays offline
One common application of the DSU is the following:
There is a set of vertices, and each vertex has an outgoing edge to another vertex.
With DSU you can find the end point which we reach after following all edges from a given starting point, in almost constant time.
A good example of this application is the **problem of painting subarrays**.
We have a segment of length $L$, each element initially has the color 0.
We have to repaint the subarray $[l, r]$ with the color $c$ for each query $(l, r, c)$.
At the end we want to find the final color of each cell.
We assume that we know all the queries in advance, i.e. the task is offline.
For the solution we can make a DSU, which for each cell stores a link to the next unpainted cell.
Thus initially each cell points to itself.
After a requested repaint of a segment, all cells from that segment will point to the cell after the segment.
Now to solve this problem, we consider the queries **in the reverse order**: from last to first.
This way when we execute a query, we only have to paint exactly the unpainted cells in the subarray $[l, r]$.
All other cells already contain their final color.
To quickly iterate over all unpainted cells, we use the DSU.
We find the left-most unpainted cell inside of a segment, repaint it, and with the pointer we move to the next empty cell to the right.
Here we can use the DSU with path compression, but we cannot use union by rank / size (because it is important who becomes the leader after the merge).
Therefore the complexity will be $O(\log n)$ per union (which is also quite fast).
Implementation:
```cpp
for (int i = 0; i <= L; i++) {
make_set(i);
}
for (int i = m-1; i >= 0; i--) {
int l = query[i].l;
int r = query[i].r;
int c = query[i].c;
for (int v = find_set(l); v <= r; v = find_set(v)) {
answer[v] = c;
parent[v] = v + 1;
}
}
```
There is one optimization:
We can use **union by rank**, if we store the next unpainted cell in an additional array `end[]`.
Then we can merge two sets into one according to the rank heuristic (maintaining `end[]` for the merged set), and we obtain the solution in $O(\alpha(n))$.
### Support distances up to representative
Sometimes in specific applications of the DSU you need to maintain the distance between a vertex and the representative of its set (i.e. the path length in the tree from the current node to the root of the tree).
If we don't use path compression, the distance is just the number of recursive calls.
But this will be inefficient.
However it is possible to do path compression, if we store the **distance to the parent** as additional information for each node.
In the implementation it is convenient to use an array of pairs for `parent[]` and the function `find_set` now returns two numbers: the representative of the set, and the distance to it.
```cpp
void make_set(int v) {
parent[v] = make_pair(v, 0);
rank[v] = 0;
}
pair<int, int> find_set(int v) {
if (v != parent[v].first) {
int len = parent[v].second;
parent[v] = find_set(parent[v].first);
parent[v].second += len;
}
return parent[v];
}
void union_sets(int a, int b) {
a = find_set(a).first;
b = find_set(b).first;
if (a != b) {
if (rank[a] < rank[b])
swap(a, b);
parent[b] = make_pair(a, 1);
if (rank[a] == rank[b])
rank[a]++;
}
}
```
### Support the parity of the path length / Checking bipartiteness online
In the same way as computing the path length to the leader, it is possible to maintain the parity of the length of the path to it.
Why is this application in a separate paragraph?
The unusual requirement of storing the parity of the path comes up in the following task:
initially we are given an empty graph, edges can be added to it, and we have to answer queries of the form "is the connected component containing this vertex **bipartite**?".
To solve this problem, we make a DSU for storing of the components and store the parity of the path up to the representative for each vertex.
Thus we can quickly check if adding an edge leads to a violation of the bipartiteness or not:
namely if the ends of the edge lie in the same connected component and have the same parity of the path length to the leader, then adding this edge will produce a cycle of odd length, and the component will lose the bipartiteness property.
The only difficulty that we face is to compute the parity in the union operation (`add_edge` in the implementation below).
If we add an edge $(a, b)$ that connects two connected components into one, then when you attach one tree to another we need to adjust the parity.
Let's derive a formula that computes the parity assigned to the leader of the set that gets attached to the other set.
Let $x$ be the parity of the path length from vertex $a$ up to its leader $A$, $y$ the parity of the path length from vertex $b$ up to its leader $B$, and $t$ the desired parity that we have to assign to $B$ after the merge.
The path consists of three parts:
from $B$ to $b$, from $b$ to $a$ (a single edge, therefore parity $1$), and from $a$ to $A$.
Therefore we obtain the formula ($\oplus$ denotes the XOR operation):
$$t = x \oplus y \oplus 1$$
Thus regardless of how many joins we perform, the parity of the edges is carried from one leader to another.
We give the implementation of the DSU that supports parity. As in the previous section we use a pair to store the ancestor and the parity. In addition for each set we store in the array `bipartite[]` whether it is still bipartite or not.
```cpp
void make_set(int v) {
    parent[v] = make_pair(v, 0);
    rank[v] = 0;
    bipartite[v] = true;
}

pair<int, int> find_set(int v) {
    if (v != parent[v].first) {
        int parity = parent[v].second;
        parent[v] = find_set(parent[v].first);
        parent[v].second ^= parity;
    }
    return parent[v];
}

void add_edge(int a, int b) {
    pair<int, int> pa = find_set(a);
    a = pa.first;
    int x = pa.second;

    pair<int, int> pb = find_set(b);
    b = pb.first;
    int y = pb.second;

    if (a == b) {
        if (x == y)
            bipartite[a] = false;
    } else {
        if (rank[a] < rank[b])
            swap(a, b);
        parent[b] = make_pair(a, x^y^1);
        bipartite[a] &= bipartite[b];
        if (rank[a] == rank[b])
            ++rank[a];
    }
}

bool is_bipartite(int v) {
    return bipartite[find_set(v).first];
}
```
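For example, adding the three edges of a triangle destroys bipartiteness (a usage sketch; it assumes the functions above are in scope together with global arrays `parent[]` of `pair<int,int>`, `rank[]` and `bipartite[]` of sufficient size):

```cpp
// Hypothetical usage of the functions above:
make_set(0); make_set(1); make_set(2);
add_edge(0, 1);
add_edge(1, 2);
// is_bipartite(0) == true: the component is the path 0 - 1 - 2
add_edge(2, 0);
// is_bipartite(0) == false: the new edge closes a cycle of odd length 3
```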
### Offline RMQ (range minimum query) in $O(\alpha(n))$ on average / Arpa's trick { #arpa data-toc-label="Offline RMQ / Arpa's trick"}
We are given an array `a[]` and we have to compute some minima in given segments of the array.
The idea to solve this problem with DSU is the following:
We will iterate over the array and when we are at the `i`th element we will answer all queries `(L, R)` with `R == i`.
To do this efficiently we will keep a DSU using the first `i` elements with the following structure: the parent of an element is the next smaller element to the right of it.
Then using this structure the answer to a query will be `a[find_set(L)]`, the smallest number of the segment starting at `L`.
This approach obviously only works offline, i.e. if we know all queries beforehand.
It is easy to see that we can apply path compression.
And we can also use Union by rank, if we store the actual leader in a separate array.
```cpp
struct Query {
    int L, R, idx;
};

vector<int> answer;
vector<vector<Query>> container;
```
`container[i]` contains all queries with `R == i`.
```cpp
stack<int> s;
for (int i = 0; i < n; i++) {
    while (!s.empty() && a[s.top()] > a[i]) {
        parent[s.top()] = i;
        s.pop();
    }
    s.push(i);
    for (Query q : container[i]) {
        answer[q.idx] = a[find_set(q.L)];
    }
}
```
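Putting the pieces together, a complete self-contained version of this approach could look as follows (the hard-coded array and queries are purely for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Query {
    int L, R, idx;
};

vector<int> parent;

// find_set with path compression
int find_set(int v) {
    return v == parent[v] ? v : parent[v] = find_set(parent[v]);
}

int main() {
    // Illustrative input: the array and the queries (L, R, idx) are hard-coded.
    vector<int> a = {5, 2, 4, 1, 3};
    int n = a.size();
    vector<array<int, 3>> raw = {{0, 2, 0}, {1, 4, 1}, {2, 3, 2}};

    parent.resize(n);
    iota(parent.begin(), parent.end(), 0);

    vector<int> answer(raw.size());
    vector<vector<Query>> container(n);  // container[i] holds the queries with R == i
    for (auto [L, R, idx] : raw)
        container[R].push_back({L, R, idx});

    stack<int> s;
    for (int i = 0; i < n; i++) {
        while (!s.empty() && a[s.top()] > a[i]) {
            parent[s.top()] = i;  // the next smaller element to the right of s.top() is i
            s.pop();
        }
        s.push(i);
        for (Query q : container[i])
            answer[q.idx] = a[find_set(q.L)];
    }

    for (int x : answer)
        cout << x << " ";  // prints "2 1 1": the minima of a[0..2], a[1..4], a[2..3]
    cout << "\n";
}
```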
Nowadays this algorithm is known as Arpa's trick.
It is named after AmirReza Poorakhavan, who independently discovered and popularized this technique, although the algorithm existed already before his discovery.
### Offline LCA (lowest common ancestor in a tree) in $O(\alpha(n))$ on average {data-toc-label="Offline LCA"}
The algorithm for finding the LCA is discussed in the article [Lowest Common Ancestor - Tarjan's off-line algorithm](../graph/lca_tarjan.md).
This algorithm compares favorably with other algorithms for finding the LCA due to its simplicity (especially compared to an optimal algorithm like the one from [Farach-Colton and Bender](../graph/lca_farachcoltonbender.md)).
### Storing the DSU explicitly in a set list / Applications of this idea when merging various data structures
One of the alternative ways of storing the DSU is to keep each set in the form of an **explicitly stored list of its elements**.
At the same time each element also stores the reference to the representative of its set.
At first glance this looks like an inefficient data structure:
when combining two sets we will have to append one list to the end of the other and update the leader reference for all elements of one of the lists.
However, it turns out that the use of a **weighting heuristic** (similar to Union by size) can significantly reduce the asymptotic complexity:
$O(m + n \log n)$ for performing $m$ queries on $n$ elements.
By the weighting heuristic we mean that we will always **add the smaller of the two sets to the bigger set**.
Adding one set to another is easy to implement in `union_sets` and will take time proportional to the size of the added set.
And the search for the leader in `find_set` will take $O(1)$ with this method of storing.
Let us prove the **time complexity** $O(m + n \log n)$ for the execution of $m$ queries.
We will fix an arbitrary element $x$ and count how often it was touched in the merge operation `union_sets`.
When the element $x$ gets touched the first time, the size of the new set will be at least $2$.
When it gets touched the second time, the resulting set will have size of at least $4$, because the smaller set gets added to the bigger one.
And so on.
This means, that $x$ can only be moved in at most $\log n$ merge operations.
Thus the sum over all vertices gives $O(n \log n)$ plus $O(1)$ for each request.
Here is an implementation:
```cpp
vector<int> lst[MAXN];
int parent[MAXN];

void make_set(int v) {
    lst[v] = vector<int>(1, v);
    parent[v] = v;
}

int find_set(int v) {
    return parent[v];
}

void union_sets(int a, int b) {
    a = find_set(a);
    b = find_set(b);
    if (a != b) {
        if (lst[a].size() < lst[b].size())
            swap(a, b);
        while (!lst[b].empty()) {
            int v = lst[b].back();
            lst[b].pop_back();
            parent[v] = a;
            lst[a].push_back(v);
        }
    }
}
```
This idea of adding the smaller part to a bigger part can also be used in a lot of solutions that have nothing to do with DSU.
For example consider the following **problem**:
we are given a tree, each leaf has a number assigned (same number can appear multiple times on different leaves).
We want to compute the number of different numbers in the subtree for every node of the tree.
Applying the same idea to this task, it is possible to obtain the following solution:
we can implement a [DFS](../graph/depth-first-search.md), which will return a pointer to a set of integers - the list of numbers in that subtree.
Then to get the answer for the current node (unless of course it is a leaf), we call DFS for all children of that node, and merge all the received sets together.
The size of the resulting set will be the answer for the current node.
To efficiently combine multiple sets we just apply the above-described recipe:
we merge the sets by simply adding smaller ones to larger.
In the end we get an $O(n \log^2 n)$ solution, because one number will only be added to a set at most $O(\log n)$ times.
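A minimal sketch of this solution (the representation of the tree and the names `adj`, `val`, `res`, as well as the use of `new`/`delete`, are illustrative choices, not taken from the text above):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> adj;  // adjacency list of the tree
vector<int> val;          // val[v] = number written on leaf v (ignored for inner vertices)
vector<int> res;          // res[v] = number of distinct values in the subtree of v

// Returns a pointer to the set of distinct values in the subtree of v.
set<int>* dfs(int v, int p) {
    set<int>* s = new set<int>();
    bool leaf = true;
    for (int to : adj[v]) {
        if (to == p)
            continue;
        leaf = false;
        set<int>* child = dfs(to, v);
        // small-to-large: always insert the smaller set into the bigger one
        if (child->size() > s->size())
            swap(s, child);
        s->insert(child->begin(), child->end());
        delete child;
    }
    if (leaf)
        s->insert(val[v]);
    res[v] = s->size();
    return s;
}
```

After resizing the three vectors to the number of vertices, a single call `dfs(root, -1)` fills `res`; the set returned for the root should be freed by the caller.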
### Storing the DSU by maintaining a clear tree structure / Online bridge finding in $O(\alpha(n))$ on average {data-toc-label="Storing the DSU by maintaining a clear tree structure / Online bridge finding"}
One of the most powerful applications of DSU is that it allows you to store the trees in both a **compressed and an uncompressed** form.
The compressed form can be used for merging of trees and for the verification if two vertices are in the same tree, and the uncompressed form can be used - for example - to search for paths between two given vertices, or other traversals of the tree structure.
In the implementation this means that in addition to the compressed ancestor array `parent[]` we will need to keep the array of uncompressed ancestors `real_parent[]`.
It is trivial that maintaining this additional array will not worsen the complexity:
changes in it only occur when we merge two trees, and only in one element.
On the other hand, when applied in practice, we often need to connect trees by a specified edge rather than by attaching one root to the other.
This means that we have no other choice but to re-root one of the trees (make the corresponding endpoint of the edge the new root of its tree).
At first glance it seems that this re-rooting is very costly and will greatly worsen the time complexity.
Indeed, for rooting a tree at vertex $v$ we must go from the vertex to the old root and change directions in `parent[]` and `real_parent[]` for all nodes on that path.
However in reality it isn't so bad, we can just re-root the smaller of the two trees similar to the ideas in the previous sections, and get $O(\log n)$ on average.
More details (including proof of the time complexity) can be found in the article [Finding Bridges Online](../graph/bridge-searching-online.md).
## Historical retrospective
The data structure DSU has been known for a long time.
This way of storing the structure in the form **of a forest of trees** was apparently first described by Galler and Fisher in 1964 (Galler, Fisher, "An Improved Equivalence Algorithm"), however the complete analysis of the time complexity was conducted much later.
The optimizations path compression and Union by rank were developed by McIlroy and Morris, and independently of them also by Tritter.
Hopcroft and Ullman showed in 1973 the time complexity $O(\log^\star n)$ (Hopcroft, Ullman "Set-merging algorithms") - here $\log^\star$ is the **iterated logarithm** (this is a slow-growing function, but still not as slow as the inverse Ackermann function).
For the first time the evaluation of $O(\alpha(n))$ was shown in 1975 (Tarjan "Efficiency of a Good But Not Linear Set Union Algorithm").
Later in 1985 he, along with Leeuwen, published multiple complexity analyses for several different rank heuristics and ways of compressing the path (Tarjan, Leeuwen "Worst-case Analysis of Set Union Algorithms").
Finally in 1989 Fredman and Saks proved that in the adopted model of computation **any** algorithm for the disjoint set union problem has to work in at least $O(\alpha(n))$ time on average (Fredman, Saks, "The cell probe complexity of dynamic data structures").
## Problems
* [TIMUS - Anansi's Cobweb](http://acm.timus.ru/problem.aspx?space=1&num=1671)
* [Codeforces - Roads not only in Berland](http://codeforces.com/contest/25/problem/D)
* [TIMUS - Parity](http://acm.timus.ru/problem.aspx?space=1&num=1003)
* [SPOJ - Strange Food Chain](http://www.spoj.com/problems/CHAIN/)
* [SPOJ - COLORFUL ARRAY](https://www.spoj.com/problems/CLFLARR/)
* [SPOJ - Consecutive Letters](https://www.spoj.com/problems/CONSEC/)
* [Toph - Unbelievable Array](https://toph.co/p/unbelievable-array)
* [HackerEarth - Lexicographically minimal string](https://www.hackerearth.com/practice/data-structures/disjoint-data-strutures/basics-of-disjoint-data-structures/practice-problems/algorithm/lexicographically-minimal-string-6edc1406/description/)
* [HackerEarth - Fight in Ninja World](https://www.hackerearth.com/practice/algorithms/graphs/breadth-first-search/practice-problems/algorithm/containers-of-choclates-1/)
|
Disjoint Set Union
|
---
title
fenwick_tree
---
# Fenwick Tree
Let $f$ be some group operation (a binary associative function over a set with an identity element and inverse elements) and $A$ be an array of integers of length $N$.
Fenwick tree is a data structure which:
* calculates the value of function $f$ in the given range $[l, r]$ (i.e. $f(A_l, A_{l+1}, \dots, A_r)$) in $O(\log N)$ time;
* updates the value of an element of $A$ in $O(\log N)$ time;
* requires $O(N)$ memory, or in other words, exactly the same memory required for $A$;
* is easy to use and code, especially, in the case of multidimensional arrays.
The most common application of Fenwick tree is _calculating the sum of a range_ (i.e. using addition over the set of integers $\mathbb{Z}$: $f(A_1, A_2, \dots, A_k) = A_1 + A_2 + \dots + A_k$).
Fenwick tree is also called **Binary Indexed Tree**, or just **BIT** for short.
Fenwick tree was first described in a paper titled "A new data structure for cumulative frequency tables" (Peter M. Fenwick, 1994).
## Description
### Overview
For the sake of simplicity, we will assume that function $f$ is just a *sum function*.
Suppose we are given an array of integers $A[0 \dots N-1]$.
A Fenwick tree is just an array $T[0 \dots N-1]$, where each of its elements is equal to the sum of elements of $A$ in some range $[g(i), i]$:
$$T_i = \sum_{j = g(i)}^{i}{A_j},$$
where $g$ is some function that satisfies $0 \le g(i) \le i$.
We will define the function in the next few paragraphs.
The data structure is called a tree, because there is a nice representation of it as a tree, although we don't need to model an actual tree with nodes and edges.
We will only need to maintain the array $T$ to handle all queries.
**Note:** The Fenwick tree presented here uses zero-based indexing.
Many people will actually use a version of the Fenwick tree that uses one-based indexing.
Therefore you will also find an alternative implementation using one-based indexing in the implementation section.
Both versions are equivalent in terms of time and memory complexity.
Now we can write some pseudo-code for the two operations mentioned above - get the sum of elements of $A$ in the range $[0, r]$ and update (increase) some element $A_i$:
```python
def sum(int r):
    res = 0
    while (r >= 0):
        res += t[r]
        r = g(r) - 1
    return res

def increase(int i, int delta):
    for all j with g(j) <= i <= j:
        t[j] += delta
```
The function `sum` works as follows:
1. first, it adds the sum of the range $[g(r), r]$ (i.e. $T[r]$) to the `result`
2. then, it "jumps" to the range $[g(g(r)-1), g(r)-1]$, and adds this range's sum to the `result`
3. and so on, until it "jumps" from $[0, g(g( \dots g(r)-1 \dots -1)-1)]$ to $[g(-1), -1]$; that is where the `sum` function stops jumping.
The function `increase` works with the same analogy, but "jumps" in the direction of increasing indices:
1. sums of the ranges $[g(j), j]$ that satisfy the condition $g(j) \le i \le j$ are increased by `delta` , that is `t[j] += delta`. Therefore we updated all elements in $T$ that correspond to ranges in which $A_i$ lies.
It is obvious that the complexity of both `sum` and `increase` depends on the function $g$.
There are lots of ways to choose the function $g$, as long as $0 \le g(i) \le i$ for all $i$.
For instance the function $g(i) = i$ works, which results just in $T = A$, and therefore summation queries are slow.
We can also take the function $g(i) = 0$.
This will correspond to prefix sum arrays, which means that finding the sum of the range $[0, i]$ will only take constant time, but updates are slow.
The clever part of the Fenwick algorithm is that it uses a special definition of the function $g$ that can handle both operations in $O(\log N)$ time.
### Definition of $g(i)$ { data-toc-label='Definition of <script type="math/tex">g(i)</script>' }
The computation of $g(i)$ is defined using the following simple operation:
we replace all trailing $1$ bits in the binary representation of $i$ with $0$ bits.
In other words, if the least significant digit of $i$ in binary is $0$, then $g(i) = i$.
And otherwise the least significant digit is a $1$, and we take this $1$ and all other trailing $1$s and flip them.
For instance we get
$$\begin{align}
g(11) = g(1011_2) = 1000_2 &= 8 \\\\
g(12) = g(1100_2) = 1100_2 &= 12 \\\\
g(13) = g(1101_2) = 1100_2 &= 12 \\\\
g(14) = g(1110_2) = 1110_2 &= 14 \\\\
g(15) = g(1111_2) = 0000_2 &= 0 \\\\
\end{align}$$
There exists a simple implementation using bitwise operations for the non-trivial operation described above:
$$g(i) = i ~\&~ (i+1),$$
where $\&$ is the bitwise AND operator. It is not hard to convince yourself that this solution does the same thing as the operation described above.
Now, we just need to find a way to iterate over all $j$'s, such that $g(j) \le i \le j$.
It is easy to see that we can find all such $j$'s by starting with $i$ and flipping the last unset bit.
We will call this operation $h(j)$.
For example, for $i = 10$ we have:
$$\begin{align}
10 &= 0001010_2 \\\\
h(10) = 11 &= 0001011_2 \\\\
h(11) = 15 &= 0001111_2 \\\\
h(15) = 31 &= 0011111_2 \\\\
h(31) = 63 &= 0111111_2 \\\\
\vdots &
\end{align}$$
Unsurprisingly, there also exists a simple way to perform $h$ using bitwise operations:
$$h(j) = j ~\|~ (j+1),$$
where $\|$ is the bitwise OR operator.
The following image shows a possible interpretation of the Fenwick tree as tree.
The nodes of the tree show the ranges they cover.
<center></center>
## Implementation
### Finding sum in one-dimensional array
Here we present an implementation of the Fenwick tree for sum queries and single updates.
The normal Fenwick tree can only answer sum queries of the type $[0, r]$ using `sum(int r)`, however we can also answer other queries of the type $[l, r]$ by computing two sums $[0, r]$ and $[0, l-1]$ and subtracting them.
This is handled in the `sum(int l, int r)` method.
Also this implementation supports two constructors.
You can create a Fenwick tree initialized with zeros, or you can convert an existing array into the Fenwick form.
```{.cpp file=fenwick_sum}
struct FenwickTree {
    vector<int> bit;  // binary indexed tree
    int n;

    FenwickTree(int n) {
        this->n = n;
        bit.assign(n, 0);
    }

    FenwickTree(vector<int> const &a) : FenwickTree(a.size()) {
        for (size_t i = 0; i < a.size(); i++)
            add(i, a[i]);
    }

    int sum(int r) {
        int ret = 0;
        for (; r >= 0; r = (r & (r + 1)) - 1)
            ret += bit[r];
        return ret;
    }

    int sum(int l, int r) {
        return sum(r) - sum(l - 1);
    }

    void add(int idx, int delta) {
        for (; idx < n; idx = idx | (idx + 1))
            bit[idx] += delta;
    }
};
```
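For example, a small usage sketch (assuming the `FenwickTree` struct above is in scope; the concrete numbers are only for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// ... FenwickTree from above ...

int main() {
    vector<int> a = {3, 1, 4, 1, 5, 9, 2, 6};
    FenwickTree ft(a);             // build from an existing array
    cout << ft.sum(0, 3) << "\n";  // 3 + 1 + 4 + 1 = 9
    cout << ft.sum(2, 5) << "\n";  // 4 + 1 + 5 + 9 = 19
    ft.add(3, 10);                 // a[3] is increased from 1 to 11
    cout << ft.sum(0, 3) << "\n";  // 3 + 1 + 4 + 11 = 19
}
```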
### Linear construction
The above implementation requires $O(N \log N)$ time.
It's possible to improve that to $O(N)$ time.
The idea is that the number $a[i]$ at index $i$ will contribute to the range stored in $bit[i]$, and to all ranges that the index $i | (i + 1)$ contributes to.
So by adding the numbers in order, you only have to push the current sum further to the next range, where it will then get pushed further to the next range, and so on.
```cpp
FenwickTree(vector<int> const &a) : FenwickTree(a.size()) {
    for (int i = 0; i < n; i++) {
        bit[i] += a[i];
        int r = i | (i + 1);
        if (r < n) bit[r] += bit[i];
    }
}
```
```
### Finding minimum of $[0, r]$ in one-dimensional array { data-toc-label='Finding minimum of <script type="math/tex">[0, r]</script> in one-dimensional array' }
It is obvious that there is no easy way of finding minimum of range $[l, r]$ using Fenwick tree, as Fenwick tree can only answer queries of type $[0, r]$.
Additionally, each time a value is `update`'d, the new value has to be smaller than the current value.
Both significant limitations are because the $min$ operation together with the set of integers doesn't form a group, as there are no inverse elements.
```{.cpp file=fenwick_min}
struct FenwickTreeMin {
    vector<int> bit;
    int n;
    const int INF = (int)1e9;

    FenwickTreeMin(int n) {
        this->n = n;
        bit.assign(n, INF);
    }

    FenwickTreeMin(vector<int> a) : FenwickTreeMin(a.size()) {
        for (size_t i = 0; i < a.size(); i++)
            update(i, a[i]);
    }

    int getmin(int r) {
        int ret = INF;
        for (; r >= 0; r = (r & (r + 1)) - 1)
            ret = min(ret, bit[r]);
        return ret;
    }

    void update(int idx, int val) {
        for (; idx < n; idx = idx | (idx + 1))
            bit[idx] = min(bit[idx], val);
    }
};
```
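A small usage sketch (assuming the `FenwickTreeMin` struct above is in scope); note that updates may only decrease the stored values:

```cpp
#include <bits/stdc++.h>
using namespace std;

// ... FenwickTreeMin from above ...

int main() {
    vector<int> a = {7, 3, 9, 5};
    FenwickTreeMin ft(a);
    cout << ft.getmin(2) << "\n";  // min(7, 3, 9) = 3
    ft.update(2, 1);               // a[2] decreases from 9 to 1
    cout << ft.getmin(2) << "\n";  // min(7, 3, 1) = 1
}
```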
Note: it is possible to implement a Fenwick tree that can handle arbitrary minimum range queries and arbitrary updates.
The paper [Efficient Range Minimum Queries using Binary Indexed Trees](http://ioinformatics.org/oi/pdf/v9_2015_39_44.pdf) describes such an approach.
However with that approach you need to maintain a second binary indexed tree over the data, with a slightly different structure, since one tree is not enough to store the values of all elements in the array.
The implementation is also a lot harder compared to the normal implementation for sums.
### Finding sum in two-dimensional array
As claimed before, it is very easy to implement a Fenwick tree for a multidimensional array.
```cpp
struct FenwickTree2D {
    vector<vector<int>> bit;
    int n, m;

    // init(...) { ... }

    int sum(int x, int y) {
        int ret = 0;
        for (int i = x; i >= 0; i = (i & (i + 1)) - 1)
            for (int j = y; j >= 0; j = (j & (j + 1)) - 1)
                ret += bit[i][j];
        return ret;
    }

    void add(int x, int y, int delta) {
        for (int i = x; i < n; i = i | (i + 1))
            for (int j = y; j < m; j = j | (j + 1))
                bit[i][j] += delta;
    }
};
```
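The constructor is omitted in the snippet above; one possible way to fill in the elided `init(...)` is simply to allocate an $n \times m$ table of zeros (an illustrative sketch, not part of the original code):

```cpp
// Possible init() (illustrative): store the dimensions and allocate an n-by-m table of zeros.
void init(int n, int m) {
    this->n = n;
    this->m = m;
    bit.assign(n, vector<int>(m, 0));
}
```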
### One-based indexing approach
For this approach we change the requirements and definition for $T[]$ and $g()$ a little bit.
We want $T[i]$ to store the sum of $[g(i)+1; i]$.
This changes the implementation a little bit, and allows for a similar nice definition for $g(i)$:
```python
def sum(int r):
    res = 0
    while (r > 0):
        res += t[r]
        r = g(r)
    return res

def increase(int i, int delta):
    for all j with g(j) < i <= j:
        t[j] += delta
```
The computation of $g(i)$ is defined as:
toggling the last set $1$ bit in the binary representation of $i$.
$$\begin{align}
g(7) = g(111_2) = 110_2 &= 6 \\\\
g(6) = g(110_2) = 100_2 &= 4 \\\\
g(4) = g(100_2) = 000_2 &= 0 \\\\
\end{align}$$
The last set bit can be extracted using $i ~\&~ (-i)$, so the operation can be expressed as:
$$g(i) = i - (i ~\&~ (-i)).$$
And it's not hard to see that you need to change all values $T[j]$ in the sequence $i,~ h(i),~ h(h(i)),~ \dots$ when you want to update $A[i]$, where $h(i)$ is defined as:
$$h(i) = i + (i ~\&~ (-i)).$$
As you can see, the main benefit of this approach is that the binary operations complement each other very nicely.
The following implementation can be used like the other implementations, however it uses one-based indexing internally.
```{.cpp file=fenwick_sum_onebased}
struct FenwickTreeOneBasedIndexing {
    vector<int> bit;  // binary indexed tree
    int n;

    FenwickTreeOneBasedIndexing(int n) {
        this->n = n + 1;
        bit.assign(n + 1, 0);
    }

    FenwickTreeOneBasedIndexing(vector<int> a)
        : FenwickTreeOneBasedIndexing(a.size()) {
        for (size_t i = 0; i < a.size(); i++)
            add(i, a[i]);
    }

    int sum(int idx) {
        int ret = 0;
        for (++idx; idx > 0; idx -= idx & -idx)
            ret += bit[idx];
        return ret;
    }

    int sum(int l, int r) {
        return sum(r) - sum(l - 1);
    }

    void add(int idx, int delta) {
        for (++idx; idx < n; idx += idx & -idx)
            bit[idx] += delta;
    }
};
```
```
## Range operations
A Fenwick tree can support the following range operations:
1. Point Update and Range Query
2. Range Update and Point Query
3. Range Update and Range Query
### 1. Point Update and Range Query
This is just the ordinary Fenwick tree as explained above.
### 2. Range Update and Point Query
Using simple tricks we can also do the reverse operations: increasing ranges and querying for single values.
Let the Fenwick tree be initialized with zeros.
Suppose that we want to increment the interval $[l, r]$ by $x$.
We make two point update operations on Fenwick tree which are `add(l, x)` and `add(r+1, -x)`.
If we want to get the value of $A[i]$, we just need to take the prefix sum using the ordinary range sum method.
To see why this is true, we can just focus on the previous increment operation again.
If $i < l$, then the two update operations have no effect on the query and we get the sum $0$.
If $i \in [l, r]$, then we get the answer $x$ because of the first update operation.
And if $i > r$, then the second update operation will cancel the effect of first one.
The following implementation uses one-based indexing.
```cpp
void add(int idx, int val) {
    for (++idx; idx < n; idx += idx & -idx)
        bit[idx] += val;
}

void range_add(int l, int r, int val) {
    add(l, val);
    add(r + 1, -val);
}

int point_query(int idx) {
    int ret = 0;
    for (++idx; idx > 0; idx -= idx & -idx)
        ret += bit[idx];
    return ret;
}
```
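A brief usage sketch (it assumes the functions above together with a zero-initialized global `bit` vector and `n` larger than every index used, say `n = 10`):

```cpp
// Hypothetical usage of the functions above:
range_add(1, 4, 5);       // increase A[1..4] by 5
range_add(3, 6, 2);       // increase A[3..6] by 2
int v0 = point_query(0);  // 0
int v2 = point_query(2);  // 5
int v4 = point_query(4);  // 5 + 2 = 7
int v6 = point_query(6);  // 2
```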
Note: of course it is also possible to increase a single point $A[i]$ with `range_add(i, i, val)`.
### 3. Range Updates and Range Queries
To support both range updates and range queries we will use two BITs namely $B_1[]$ and $B_2[]$, initialized with zeros.
Suppose that we want to increment the interval $[l, r]$ by the value $x$.
Similarly as in the previous method, we perform two point updates on $B_1$: `add(B1, l, x)` and `add(B1, r+1, -x)`.
And we also update $B_2$. The details will be explained later.
```python
def range_add(l, r, x):
    add(B1, l, x)
    add(B1, r+1, -x)
    add(B2, l, x*(l-1))
    add(B2, r+1, -x*r)
```
After the range update $(l, r, x)$ the range sum query should return the following values:
$$
sum[0, i]=
\begin{cases}
0 & i < l \\\\
x \cdot (i-(l-1)) & l \le i \le r \\\\
x \cdot (r-l+1) & i > r \\\\
\end{cases}
$$
We can write the range sum as the difference of two terms, where we use $B_1$ for the first term and $B_2$ for the second term.
The difference of the queries will give us the prefix sum over $[0, i]$.
$$\begin{align}
sum[0, i] &= sum(B_1, i) \cdot i - sum(B_2, i) \\\\
&= \begin{cases}
0 \cdot i - 0 & i < l\\\\
x \cdot i - x \cdot (l-1) & l \le i \le r \\\\
0 \cdot i - (x \cdot (l-1) - x \cdot r) & i > r \\\\
\end{cases}
\end{align}
$$
The last expression is exactly equal to the required terms.
Thus we can use $B_2$ for shaving off extra terms when we multiply $B_1[i]\times i$.
We can find arbitrary range sums by computing the prefix sums for $l-1$ and $r$ and taking the difference of them again.
```python
def add(b, idx, x):
    while idx <= N:
        b[idx] += x
        idx += idx & -idx

def range_add(l, r, x):
    add(B1, l, x)
    add(B1, r+1, -x)
    add(B2, l, x*(l-1))
    add(B2, r+1, -x*r)

def sum(b, idx):
    total = 0
    while idx > 0:
        total += b[idx]
        idx -= idx & -idx
    return total

def prefix_sum(idx):
    return sum(B1, idx)*idx - sum(B2, idx)

def range_sum(l, r):
    return prefix_sum(r) - prefix_sum(l-1)
```
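For reference, here is a C++ sketch of the same two-BIT technique (a direct translation of the Python pseudo-code above; the struct name `RangeBIT` and the use of `long long` are illustrative choices):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch: range updates and range sum queries with two one-based BITs B1 and B2.
struct RangeBIT {
    int n;
    vector<long long> B1, B2;
    RangeBIT(int n) : n(n), B1(n + 1, 0), B2(n + 1, 0) {}

    void add(vector<long long> &b, int idx, long long x) {
        for (; idx <= n; idx += idx & -idx)
            b[idx] += x;
    }

    // increase all elements in [l, r] by x (1-based indices, 1 <= l <= r <= n)
    void range_add(int l, int r, long long x) {
        add(B1, l, x);
        add(B1, r + 1, -x);
        add(B2, l, x * (l - 1));
        add(B2, r + 1, -x * r);
    }

    long long sum(vector<long long> &b, int idx) {
        long long total = 0;
        for (; idx > 0; idx -= idx & -idx)
            total += b[idx];
        return total;
    }

    long long prefix_sum(int idx) { return sum(B1, idx) * idx - sum(B2, idx); }

    long long range_sum(int l, int r) { return prefix_sum(r) - prefix_sum(l - 1); }
};
```

For example, on a tree of size $n \ge 5$, `range_add(2, 5, 3)` followed by `range_sum(1, 3)` returns $3 \cdot 2 = 6$.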
|
---
title
fenwick_tree
---
# Fenwick Tree
Let $f$ be some group operation (a binary associative function over a set with an identity element and inverse elements) and $A$ be an array of integers of length $N$.
Fenwick tree is a data structure which:
* calculates the value of function $f$ in the given range $[l, r]$ (i.e. $f(A_l, A_{l+1}, \dots, A_r)$) in $O(\log N)$ time;
* updates the value of an element of $A$ in $O(\log N)$ time;
* requires $O(N)$ memory, or in other words, exactly the same memory required for $A$;
* is easy to use and code, especially, in the case of multidimensional arrays.
The most common application of Fenwick tree is _calculating the sum of a range_ (i.e. using addition over the set of integers $\mathbb{Z}$: $f(A_1, A_2, \dots, A_k) = A_1 + A_2 + \dots + A_k$).
Fenwick tree is also called **Binary Indexed Tree**, or just **BIT** abbreviated.
Fenwick tree was first described in a paper titled "A new data structure for cumulative frequency tables" (Peter M. Fenwick, 1994).
## Description
### Overview
For the sake of simplicity, we will assume that function $f$ is just a *sum function*.
Given an array of integers $A[0 \dots N-1]$.
A Fenwick tree is just an array $T[0 \dots N-1]$, where each of its elements is equal to the sum of elements of $A$ in some range $[g(i), i]$:
$$T_i = \sum_{j = g(i)}^{i}{A_j},$$
where $g$ is some function that satisfies $0 \le g(i) \le i$.
We will define the function in the next few paragraphs.
The data structure is called tree, because there is a nice representation of the data structure as tree, although we don't need to model an actual tree with nodes and edges.
We will only need to maintain the array $T$ to handle all queries.
**Note:** The Fenwick tree presented here uses zero-based indexing.
Many people will actually use a version of the Fenwick tree that uses one-based indexing.
Therefore you will also find an alternative implementation using one-based indexing in the implementation section.
Both versions are equivalent in terms of time and memory complexity.
Now we can write some pseudo-code for the two operations mentioned above - get the sum of elements of $A$ in the range $[0, r]$ and update (increase) some element $A_i$:
```python
def sum(int r):
res = 0
while (r >= 0):
res += t[r]
r = g(r) - 1
return res
def increase(int i, int delta):
for all j with g(j) <= i <= j:
t[j] += delta
```
The function `sum` works as follows:
1. first, it adds the sum of the range $[g(r), r]$ (i.e. $T[r]$) to the `result`
2. then, it "jumps" to the range $[g(g(r)-1), g(r)-1]$, and adds this range's sum to the `result`
3. and so on, until it "jumps" from $[0, g(g( \dots g(r)-1 \dots -1)-1)]$ to $[g(-1), -1]$; that is where the `sum` function stops jumping.
The function `increase` works with the same analogy, but "jumps" in the direction of increasing indices:
1. sums of the ranges $[g(j), j]$ that satisfy the condition $g(j) \le i \le j$ are increased by `delta` , that is `t[j] += delta`. Therefore we updated all elements in $T$ that correspond to ranges in which $A_i$ lies.
It is obvious that the complexity of both `sum` and `increase` depend on the function $g$.
There are lots of ways to choose the function $g$, as long as $0 \le g(i) \le i$ for all $i$.
For instance the function $g(i) = i$ works, which results just in $T = A$, and therefore summation queries are slow.
We can also take the function $g(i) = 0$.
This will correspond to prefix sum arrays, which means that finding the sum of the range $[0, i]$ will only take constant time, but updates are slow.
The clever part of the Fenwick algorithm is that it uses a special definition of the function $g$ that can handle both operations in $O(\log N)$ time.
### Definition of $g(i)$ { data-toc-label='Definition of <script type="math/tex">g(i)</script>' }
The computation of $g(i)$ is defined using the following simple operation:
we replace all trailing $1$ bits in the binary representation of $i$ with $0$ bits.
In other words, if the least significant digit of $i$ in binary is $0$, then $g(i) = i$.
And otherwise the least significant digit is a $1$, and we take this $1$ and all other trailing $1$s and flip them.
For instance we get
$$\begin{align}
g(11) = g(1011_2) = 1000_2 &= 8 \\\\
g(12) = g(1100_2) = 1100_2 &= 12 \\\\
g(13) = g(1101_2) = 1100_2 &= 12 \\\\
g(14) = g(1110_2) = 1110_2 &= 14 \\\\
g(15) = g(1111_2) = 0000_2 &= 0 \\\\
\end{align}$$
There exists a simple implementation using bitwise operations for the non-trivial operation described above:
$$g(i) = i ~\&~ (i+1),$$
where $\&$ is the bitwise AND operator. It is not hard to convince yourself that this solution does the same thing as the operation described above.
Now, we just need to find a way to iterate over all $j$'s, such that $g(j) \le i \le j$.
It is easy to see that we can find all such $j$'s by starting with $i$ and flipping the last unset bit.
We will call this operation $h(j)$.
For example, for $i = 10$ we have:
$$\begin{align}
10 &= 0001010_2 \\\\
h(10) = 11 &= 0001011_2 \\\\
h(11) = 15 &= 0001111_2 \\\\
h(15) = 31 &= 0011111_2 \\\\
h(31) = 63 &= 0111111_2 \\\\
\vdots &
\end{align}$$
Unsurprisingly, there also exists a simple way to perform $h$ using bitwise operations:
$$h(j) = j ~\|~ (j+1),$$
where $\|$ is the bitwise OR operator.
The following image shows a possible interpretation of the Fenwick tree as tree.
The nodes of the tree show the ranges they cover.
<center></center>
## Implementation
### Finding sum in one-dimensional array
Here we present an implementation of the Fenwick tree for sum queries and single updates.
The normal Fenwick tree can only answer sum queries of the type $[0, r]$ using `sum(int r)`, however we can also answer other queries of the type $[l, r]$ by computing two sums $[0, r]$ and $[0, l-1]$ and subtracting them.
This is handled in the `sum(int l, int r)` method.
Also this implementation supports two constructors.
You can create a Fenwick tree initialized with zeros, or you can convert an existing array into the Fenwick form.
```{.cpp file=fenwick_sum}
struct FenwickTree {
vector<int> bit; // binary indexed tree
int n;
FenwickTree(int n) {
this->n = n;
bit.assign(n, 0);
}
FenwickTree(vector<int> const &a) : FenwickTree(a.size()) {
for (size_t i = 0; i < a.size(); i++)
add(i, a[i]);
}
int sum(int r) {
int ret = 0;
for (; r >= 0; r = (r & (r + 1)) - 1)
ret += bit[r];
return ret;
}
int sum(int l, int r) {
return sum(r) - sum(l - 1);
}
void add(int idx, int delta) {
for (; idx < n; idx = idx | (idx + 1))
bit[idx] += delta;
}
};
```
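A short usage sketch of this structure (the values are made up purely for illustration):

```cpp
// build from an array, query a range sum, apply a point update
vector<int> a = {1, 2, 3, 4, 5};
FenwickTree ft(a);
int s1 = ft.sum(1, 3);  // 2 + 3 + 4 = 9
ft.add(2, 10);          // a[2] becomes 13
int s2 = ft.sum(1, 3);  // 2 + 13 + 4 = 19
```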
### Linear construction
The above implementation requires $O(N \log N)$ time.
It's possible to improve that to $O(N)$ time.
The idea is that the number $a[i]$ at index $i$ will contribute to the range stored in $bit[i]$, and to all ranges that the index $i | (i + 1)$ contributes to.
So by adding the numbers in order, you only have to push the current sum further to the next range, where it will then get pushed further to the next range, and so on.
```cpp
FenwickTree(vector<int> const &a) : FenwickTree(a.size()){
for (int i = 0; i < n; i++) {
bit[i] += a[i];
int r = i | (i + 1);
if (r < n) bit[r] += bit[i];
}
}
```
### Finding minimum of $[0, r]$ in one-dimensional array { data-toc-label='Finding minimum of <script type="math/tex">[0, r]</script> in one-dimensional array' }
It is obvious that there is no easy way of finding the minimum of the range $[l, r]$ using a Fenwick tree, as a Fenwick tree can only answer queries of the type $[0, r]$.
Additionally, each time a value is `update`'d, the new value has to be smaller than the current value.
Both significant limitations are because the $min$ operation together with the set of integers doesn't form a group, as there are no inverse elements.
```{.cpp file=fenwick_min}
struct FenwickTreeMin {
vector<int> bit;
int n;
const int INF = (int)1e9;
FenwickTreeMin(int n) {
this->n = n;
bit.assign(n, INF);
}
FenwickTreeMin(vector<int> a) : FenwickTreeMin(a.size()) {
for (size_t i = 0; i < a.size(); i++)
update(i, a[i]);
}
int getmin(int r) {
int ret = INF;
for (; r >= 0; r = (r & (r + 1)) - 1)
ret = min(ret, bit[r]);
return ret;
}
void update(int idx, int val) {
for (; idx < n; idx = idx | (idx + 1))
bit[idx] = min(bit[idx], val);
}
};
```
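A usage sketch (again with made-up values), keeping in mind that updates may only decrease the stored values:

```cpp
// prefix-minimum queries; update(idx, val) is only valid if val is
// not larger than the current value at idx
vector<int> a = {5, 3, 8, 1, 7};
FenwickTreeMin ft(a);
int m1 = ft.getmin(2);  // min(5, 3, 8) = 3
ft.update(2, 0);        // allowed, since 0 < 8
int m2 = ft.getmin(2);  // min(5, 3, 0) = 0
```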
Note: it is possible to implement a Fenwick tree that can handle arbitrary minimum range queries and arbitrary updates.
The paper [Efficient Range Minimum Queries using Binary Indexed Trees](http://ioinformatics.org/oi/pdf/v9_2015_39_44.pdf) describes such an approach.
However with that approach you need to maintain a second binary indexed tree over the data, with a slightly different structure, since one tree is not enough to store the values of all elements in the array.
The implementation is also a lot harder compared to the normal implementation for sums.
### Finding sum in two-dimensional array
As claimed before, it is very easy to implement a Fenwick tree for a multidimensional array.
```cpp
struct FenwickTree2D {
vector<vector<int>> bit;
int n, m;
// init(...) { ... }
int sum(int x, int y) {
int ret = 0;
for (int i = x; i >= 0; i = (i & (i + 1)) - 1)
for (int j = y; j >= 0; j = (j & (j + 1)) - 1)
ret += bit[i][j];
return ret;
}
void add(int x, int y, int delta) {
for (int i = x; i < n; i = i | (i + 1))
for (int j = y; j < m; j = j | (j + 1))
bit[i][j] += delta;
}
};
```
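The omitted `init(...)` could, for instance, be a constructor that creates an $n \times m$ tree filled with zeros; one possible sketch:

```cpp
// a possible replacement for the omitted init(...):
// creates an n-by-m tree filled with zeros
FenwickTree2D(int n, int m) : bit(n, vector<int>(m, 0)), n(n), m(m) {}
```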
### One-based indexing approach
For this approach we change the requirements and definition for $T[]$ and $g()$ a little bit.
We want $T[i]$ to store the sum of $[g(i)+1; i]$.
This changes the implementation a little bit, and allows for a similar nice definition for $g(i)$:
```python
def sum(int r):
res = 0
while (r > 0):
res += t[r]
r = g(r)
return res
def increase(int i, int delta):
for all j with g(j) < i <= j:
t[j] += delta
```
The computation of $g(i)$ is defined as:
toggling of the last set $1$ bit in the binary representation of $i$.
$$\begin{align}
g(7) = g(111_2) = 110_2 &= 6 \\\\
g(6) = g(110_2) = 100_2 &= 4 \\\\
g(4) = g(100_2) = 000_2 &= 0 \\\\
\end{align}$$
The last set bit can be extracted using $i ~\&~ (-i)$, so the operation can be expressed as:
$$g(i) = i - (i ~\&~ (-i)).$$
And it's not hard to see, that you need to change all values $T[j]$ in the sequence $i,~ h(i),~ h(h(i)),~ \dots$ when you want to update $A[j]$, where $h(i)$ is defined as:
$$h(i) = i + (i ~\&~ (-i)).$$
As you can see, the main benefit of this approach is that the binary operations complement each other very nicely.
The following implementation can be used like the other implementations, however it uses one-based indexing internally.
```{.cpp file=fenwick_sum_onebased}
struct FenwickTreeOneBasedIndexing {
vector<int> bit; // binary indexed tree
int n;
FenwickTreeOneBasedIndexing(int n) {
this->n = n + 1;
bit.assign(n + 1, 0);
}
FenwickTreeOneBasedIndexing(vector<int> a)
: FenwickTreeOneBasedIndexing(a.size()) {
for (size_t i = 0; i < a.size(); i++)
add(i, a[i]);
}
int sum(int idx) {
int ret = 0;
for (++idx; idx > 0; idx -= idx & -idx)
ret += bit[idx];
return ret;
}
int sum(int l, int r) {
return sum(r) - sum(l - 1);
}
void add(int idx, int delta) {
for (++idx; idx < n; idx += idx & -idx)
bit[idx] += delta;
}
};
```
## Range operations
A Fenwick tree can support the following range operations:
1. Point Update and Range Query
2. Range Update and Point Query
3. Range Update and Range Query
### 1. Point Update and Range Query
This is just the ordinary Fenwick tree as explained above.
### 2. Range Update and Point Query
Using simple tricks we can also do the reverse operations: increasing ranges and querying for single values.
Let the Fenwick tree be initialized with zeros.
Suppose that we want to increment the interval $[l, r]$ by $x$.
We make two point update operations on the Fenwick tree: `add(l, x)` and `add(r+1, -x)`.
If we want to get the value of $A[i]$, we just need to take the prefix sum using the ordinary range sum method.
To see why this is true, we can just focus on the previous increment operation again.
If $i < l$, then the two update operations have no effect on the query and we get the sum $0$.
If $i \in [l, r]$, then we get the answer $x$ because of the first update operation.
And if $i > r$, then the second update operation will cancel the effect of the first one.
The following implementation uses one-based indexing.
```cpp
void add(int idx, int val) {
for (++idx; idx < n; idx += idx & -idx)
bit[idx] += val;
}
void range_add(int l, int r, int val) {
add(l, val);
add(r + 1, -val);
}
int point_query(int idx) {
int ret = 0;
for (++idx; idx > 0; idx -= idx & -idx)
ret += bit[idx];
return ret;
}
```
Note: of course it is also possible to increase a single point $A[i]$ with `range_add(i, i, val)`.
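Assuming the two methods above are added to the `FenwickTreeOneBasedIndexing` structure, a usage sketch (with made-up values) looks like this:

```cpp
// range updates with point queries on an array of five zeros
FenwickTreeOneBasedIndexing ft(5);  // A = {0, 0, 0, 0, 0}
ft.range_add(1, 3, 7);              // A = {0, 7, 7, 7, 0}
int v1 = ft.point_query(2);         // 7
int v2 = ft.point_query(4);         // 0
```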
### 3. Range Updates and Range Queries
To support both range updates and range queries we will use two BITs namely $B_1[]$ and $B_2[]$, initialized with zeros.
Suppose that we want to increment the interval $[l, r]$ by the value $x$.
Similarly as in the previous method, we perform two point updates on $B_1$: `add(B1, l, x)` and `add(B1, r+1, -x)`.
And we also update $B_2$. The details will be explained later.
```python
def range_add(l, r, x):
    add(B1, l, x)
    add(B1, r+1, -x)
    add(B2, l, x*(l-1))
    add(B2, r+1, -x*r)
```
After the range update $(l, r, x)$ the range sum query should return the following values:
$$
sum[0, i]=
\begin{cases}
0 & i < l \\\\
x \cdot (i-(l-1)) & l \le i \le r \\\\
x \cdot (r-l+1) & i > r \\\\
\end{cases}
$$
We can write the range sum as the difference of two terms, where we use $B_1$ for the first term and $B_2$ for the second term.
The difference of the queries will give us prefix sum over $[0, i]$.
$$\begin{align}
sum[0, i] &= sum(B_1, i) \cdot i - sum(B_2, i) \\\\
&= \begin{cases}
0 \cdot i - 0 & i < l\\\\
x \cdot i - x \cdot (l-1) & l \le i \le r \\\\
0 \cdot i - (x \cdot (l-1) - x \cdot r) & i > r \\\\
\end{cases}
\end{align}
$$
The last expression is exactly equal to the required terms.
Thus we can use $B_2$ to shave off the extra terms that appear when we multiply $sum(B_1, i)$ by $i$.
We can find arbitrary range sums by computing the prefix sums for $l-1$ and $r$ and taking the difference of them again.
```python
def add(b, idx, x):
while idx <= N:
b[idx] += x
idx += idx & -idx
def range_add(l,r,x):
add(B1, l, x)
add(B1, r+1, -x)
add(B2, l, x*(l-1))
add(B2, r+1, -x*r)
def sum(b, idx):
total = 0
while idx > 0:
total += b[idx]
idx -= idx & -idx
return total
def prefix_sum(idx):
return sum(B1, idx)*idx - sum(B2, idx)
def range_sum(l, r):
return prefix_sum(r) - prefix_sum(l-1)
```
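The same idea can be sketched in C++ on top of two instances of the `FenwickTreeOneBasedIndexing` structure from above. Note that this sketch uses 0-based indices externally (unlike the 1-based pseudocode), so the correction terms become $x \cdot l$ and $x \cdot (r+1)$ and the prefix sum is multiplied by $i+1$; the struct name and method names are chosen here only for illustration:

```cpp
struct FenwickTreeRangeUpdateRangeQuery {
    FenwickTreeOneBasedIndexing B1, B2;

    FenwickTreeRangeUpdateRangeQuery(int n) : B1(n), B2(n) {}

    void range_add(int l, int r, int x) {
        B1.add(l, x);
        B1.add(r + 1, -x);
        B2.add(l, x * l);
        B2.add(r + 1, -x * (r + 1));
    }

    int prefix_sum(int idx) {  // sum of A[0..idx]
        return B1.sum(idx) * (idx + 1) - B2.sum(idx);
    }

    int range_sum(int l, int r) {
        return prefix_sum(r) - prefix_sum(l - 1);
    }
};
```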
## Practice Problems
* [UVA 12086 - Potentiometers](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=24&page=show_problem&problem=3238)
* [LOJ 1112 - Curious Robin Hood](http://www.lightoj.com/volume_showproblem.php?problem=1112)
* [LOJ 1266 - Points in Rectangle](http://www.lightoj.com/volume_showproblem.php?problem=1266 "2D Fenwick Tree")
* [Codechef - SPREAD](http://www.codechef.com/problems/SPREAD)
* [SPOJ - CTRICK](http://www.spoj.com/problems/CTRICK/)
* [SPOJ - MATSUM](http://www.spoj.com/problems/MATSUM/)
* [SPOJ - DQUERY](http://www.spoj.com/problems/DQUERY/)
* [SPOJ - NKTEAM](http://www.spoj.com/problems/NKTEAM/)
* [SPOJ - YODANESS](http://www.spoj.com/problems/YODANESS/)
* [SRM 310 - FloatingMedian](https://community.topcoder.com/stat?c=problem_statement&pm=6551&rd=9990)
* [SPOJ - Ada and Behives](http://www.spoj.com/problems/ADABEHIVE/)
* [Hackerearth - Counting in Byteland](https://www.hackerearth.com/practice/data-structures/advanced-data-structures/fenwick-binary-indexed-trees/practice-problems/algorithm/counting-in-byteland/)
* [DevSkill - Shan and String (archived)](http://web.archive.org/web/20210322010617/https://devskill.com/CodingProblems/ViewProblem/300)
* [Codeforces - Little Artem and Time Machine](http://codeforces.com/contest/669/problem/E)
* [Codeforces - Hanoi Factory](http://codeforces.com/contest/777/problem/E)
* [SPOJ - Tulip and Numbers](http://www.spoj.com/problems/TULIPNUM/)
* [SPOJ - SUMSUM](http://www.spoj.com/problems/SUMSUM/)
* [SPOJ - Sabir and Gifts](http://www.spoj.com/problems/SGIFT/)
* [SPOJ - The Permutation Game Again](http://www.spoj.com/problems/TPGA/)
* [SPOJ - Zig when you Zag](http://www.spoj.com/problems/ZIGZAG2/)
* [SPOJ - Cryon](http://www.spoj.com/problems/CRAYON/)
* [SPOJ - Weird Points](http://www.spoj.com/problems/DCEPC705/)
* [SPOJ - Its a Murder](http://www.spoj.com/problems/DCEPC206/)
* [SPOJ - Bored of Suffixes and Prefixes](http://www.spoj.com/problems/KOPC12G/)
* [SPOJ - Mega Inversions](http://www.spoj.com/problems/TRIPINV/)
* [Codeforces - Subsequences](http://codeforces.com/contest/597/problem/C)
* [Codeforces - Ball](http://codeforces.com/contest/12/problem/D)
* [GYM - The Kamphaeng Phet's Chedis](http://codeforces.com/gym/101047/problem/J)
* [Codeforces - Garlands](http://codeforces.com/contest/707/problem/E)
* [Codeforces - Inversions after Shuffle](http://codeforces.com/contest/749/problem/E)
* [GYM - Cairo Market](http://codeforces.com/problemset/gymProblem/101055/D)
* [Codeforces - Goodbye Souvenir](http://codeforces.com/contest/849/problem/E)
* [SPOJ - Ada and Species](http://www.spoj.com/problems/ADACABAA/)
* [Codeforces - Thor](https://codeforces.com/problemset/problem/704/A)
* [CSES - Forest Queries II](https://cses.fi/problemset/task/1739/)
* [Latin American Regionals 2017 - Fundraising](http://matcomgrader.com/problem/9346/fundraising/)
## Other sources
* [Fenwick tree on Wikipedia](http://en.wikipedia.org/wiki/Fenwick_tree)
* [Binary indexed trees tutorial on TopCoder](https://www.topcoder.com/community/data-science/data-science-tutorials/binary-indexed-trees/)
* [Range updates and queries ](https://programmingcontests.quora.com/Tutorial-Range-Updates-in-Fenwick-Tree)
---
title
suffix_automata
---
# Suffix Automaton
A **suffix automaton** is a powerful data structure that allows solving many string-related problems.
For example, you can search for all occurrences of one string in another, or count the amount of different substrings of a given string.
Both tasks can be solved in linear time with the help of a suffix automaton.
Intuitively a suffix automaton can be understood as a compressed form of **all substrings** of a given string.
An impressive fact is, that the suffix automaton contains all this information in a highly compressed form.
For a string of length $n$ it only requires $O(n)$ memory.
Moreover, it can also be built in $O(n)$ time (if we consider the size $k$ of the alphabet as a constant), otherwise both the memory and the time complexity will be $O(n \log k)$.
The linearity of the size of the suffix automaton was first discovered in 1983 by Blumer et al., and in 1985 the first linear algorithms for the construction was presented by Crochemore and Blumer.
## Definition of a suffix automaton
A suffix automaton for a given string $s$ is a minimal **DFA** (deterministic finite automaton / deterministic finite state machine) that accepts all the suffixes of the string $s$.
In other words:
- A suffix automaton is an oriented acyclic graph.
The vertices are called **states**, and the edges are called **transitions** between states.
- One of the states $t_0$ is the **initial state**, and it must be the source of the graph (all other states are reachable from $t_0$).
- Each **transition** is labeled with some character.
All transitions originating from a state must have **different** labels.
- One or multiple states are marked as **terminal states**.
If we start from the initial state $t_0$ and move along transitions to a terminal state, then the labels of the passed transitions must spell one of the suffixes of the string $s$.
Each of the suffixes of $s$ must be spellable using a path from $t_0$ to a terminal state.
- The suffix automaton contains the minimum number of vertices among all automata satisfying the conditions described above.
### Substring property
The simplest and most important property of a suffix automaton is, that it contains information about all substrings of the string $s$.
Any path starting at the initial state $t_0$, if we write down the labels of the transitions, forms a **substring** of $s$.
And conversely every substring of $s$ corresponds to a certain path starting at $t_0$.
In order to simplify the explanations, we will say that the substring **corresponds** to that path (starting at $t_0$ and the labels spell the substring).
And conversely we say that any path **corresponds** to the string spelled by its labels.
One or multiple paths can lead to a state.
Thus, we will say that a state **corresponds** to the set of strings, which correspond to these paths.
### Examples of constructed suffix automata
Here we will show some examples of suffix automata for several simple strings.
We will denote the initial state with blue and the terminal states with green.
For the string $s =~ \text{""}$:

For the string $s =~ \text{"a"}$:

For the string $s =~ \text{"aa"}$:

For the string $s =~ \text{"ab"}$:

For the string $s =~ \text{"aba"}$:

For the string $s =~ \text{"abb"}$:

For the string $s =~ \text{"abbb"}$:

## Construction in linear time
Before we describe the algorithm to construct a suffix automaton in linear time, we need to introduce several new concepts and simple proofs, which will be very important in understanding the construction.
### End positions $endpos$ {data-toc-label="End positions"}
Consider any non-empty substring $t$ of the string $s$.
We will denote with $endpos(t)$ the set of all positions in the string $s$, in which the occurrences of $t$ end. For instance, we have $endpos(\text{"bc"}) = \{2, 4\}$ for the string $\text{"abcbc"}$.
We will call two substrings $t_1$ and $t_2$ $endpos$-equivalent, if their ending sets coincide: $endpos(t_1) = endpos(t_2)$.
Thus all non-empty substrings of the string $s$ can be decomposed into several **equivalence classes** according to their sets $endpos$.
It turns out that in a suffix automaton $endpos$-equivalent substrings **correspond to the same state**.
In other words the number of states in a suffix automaton is equal to the number of equivalence classes among all substrings, plus the initial state.
Each state of a suffix automaton corresponds to one or more substrings having the same value $endpos$.
We will later describe the construction algorithm using this assumption.
We will then see, that all the required properties of a suffix automaton, except for the minimality, are fulfilled.
And the minimality follows from Nerode's theorem (which will not be proven in this article).
We can make some important observations concerning the values $endpos$:
**Lemma 1**:
Two non-empty substrings $u$ and $w$ (with $length(u) \le length(w)$) are $endpos$-equivalent, if and only if the string $u$ occurs in $s$ only in the form of a suffix of $w$.
The proof is obvious.
If $u$ and $w$ have the same $endpos$ values, then $u$ is a suffix of $w$ and appears only in the form of a suffix of $w$ in $s$.
And if $u$ is a suffix of $w$ and appears only in the form as a suffix in $s$, then the values $endpos$ are equal by definition.
**Lemma 2**:
Consider two non-empty substrings $u$ and $w$ (with $length(u) \le length(w)$).
Then their sets $endpos$ either don't intersect at all, or $endpos(w)$ is a subset of $endpos(u)$.
And it depends on if $u$ is a suffix of $w$ or not.
$$\begin{cases}
endpos(w) \subseteq endpos(u) & \text{if } u \text{ is a suffix of } w \\\\
endpos(w) \cap endpos(u) = \emptyset & \text{otherwise}
\end{cases}$$
Proof:
If the sets $endpos(u)$ and $endpos(w)$ have at least one common element, then the strings $u$ and $w$ both end in that position, i.e. $u$ is a suffix of $w$.
But then at every occurrence of $w$ also appears the substring $u$, which means that $endpos(w)$ is a subset of $endpos(u)$.
**Lemma 3**:
Consider an $endpos$-equivalence class.
Sort all the substrings in this class by decreasing length.
Then in the resulting sequence each substring will be one shorter than the previous one, and at the same time will be a suffix of the previous one.
In other words, in a same equivalence class, the shorter substrings are actually suffixes of the longer substrings, and they take all possible lengths in a certain interval $[x; y]$.
Proof:
Fix some $endpos$-equivalence class.
If it only contains one string, then the lemma is obviously true.
Now let's say that the number of strings in the class is greater than one.
According to Lemma 1, two different $endpos$-equivalent strings are always in such a way, that the shorter one is a proper suffix of the longer one.
Consequently, there cannot be two strings of the same length in the equivalence class.
Let's denote by $w$ the longest, and through $u$ the shortest string in the equivalence class.
According to Lemma 1, the string $u$ is a proper suffix of the string $w$.
Consider now any suffix of $w$ with a length in the interval $[length(u); length(w)]$.
It is easy to see, that this suffix is also contained in the same equivalence class.
Because this suffix can only appear in the form of a suffix of $w$ in the string $s$ (since also the shorter suffix $u$ occurs in $s$ only in the form of a suffix of $w$).
Consequently, according to Lemma 1, this suffix is $endpos$-equivalent to the string $w$.
### Suffix links $link$ {data-toc-label="Suffix links"}
Consider some state $v \ne t_0$ in the automaton.
As we know, the state $v$ corresponds to the class of strings with the same $endpos$ values.
And if we denote by $w$ the longest of these strings, then all the other strings are suffixes of $w$.
We also know that the first few suffixes of the string $w$ (if we consider suffixes in descending order of their length) are all contained in this equivalence class, and all other suffixes (at least one other, namely the empty suffix) are in some other classes.
We denote by $t$ the biggest such suffix, and make a suffix link to it.
In other words, a **suffix link** $link(v)$ leads to the state that corresponds to the **longest suffix** of $w$ that is in another $endpos$-equivalence class.
Here we assume that the initial state $t_0$ corresponds to its own equivalence class (containing only the empty string), and for convenience we set $endpos(t_0) = \{-1, 0, \dots, length(s)-1\}$.
**Lemma 4**:
Suffix links form a **tree** with the root $t_0$.
Proof:
Consider an arbitrary state $v \ne t_0$.
A suffix link $link(v)$ leads to a state corresponding to strings with strictly smaller length (this follows from the definition of the suffix links and from Lemma 3).
Therefore, by moving along the suffix links, we will sooner or later come to the initial state $t_0$, which corresponds to the empty string.
**Lemma 5**:
If we construct a tree using the sets $endpos$ (by the rule that the set of a parent node contains the sets of all children as subsets), then the structure will coincide with the tree of suffix links.
Proof:
The fact that we can construct a tree using the sets $endpos$ follows directly from Lemma 2 (that any two sets either do not intersect or one is contained in the other).
Let us now consider an arbitrary state $v \ne t_0$, and its suffix link $link(v)$.
From the definition of the suffix link and from Lemma 2 it follows that
$$endpos(v) \subseteq endpos(link(v)),$$
which together with the previous lemma proves the assertion:
the tree of suffix links is essentially a tree of sets $endpos$.
Here is an **example** of a tree of suffix links in the suffix automaton built for the string $\text{"abcbc"}$.
The nodes are labeled with the longest substring from the corresponding equivalence class.

### Recap
Before proceeding to the algorithm itself, we recap the accumulated knowledge, and introduce a few auxiliary notations.
- The substrings of the string $s$ can be decomposed into equivalence classes according to their end positions $endpos$.
- The suffix automaton consists of the initial state $t_0$, as well as of one state for each $endpos$-equivalence class.
- For each state $v$ one or multiple substrings match.
We denote by $longest(v)$ the longest such string, and through $len(v)$ its length.
We denote by $shortest(v)$ the shortest such substring, and its length with $minlen(v)$.
Then all the strings corresponding to this state are different suffixes of the string $longest(v)$ and have all possible lengths in the interval $[minlen(v); len(v)]$.
- For each state $v \ne t_0$ a suffix link is defined as a link, that leads to a state that corresponds to the suffix of the string $longest(v)$ of length $minlen(v) - 1$.
The suffix links form a tree with the root in $t_0$, and at the same time this tree forms an inclusion relationship between the sets $endpos$.
- We can express $minlen(v)$ for $v \ne t_0$ using the suffix link $link(v)$ as:
$$minlen(v) = len(link(v)) + 1$$
- If we start from an arbitrary state $v_0$ and follow the suffix links, then sooner or later we will reach the initial state $t_0$.
In this case we obtain a sequence of disjoint intervals $[minlen(v_i); len(v_i)]$, which in union forms the continuous interval $[0; len(v_0)]$.
### Algorithm
Now we can proceed to the algorithm itself.
The algorithm will be **online**, i.e. we will add the characters of the string one by one, and modify the automaton accordingly in each step.
To achieve linear memory consumption, we will only store the values $len$, $link$ and a list of transitions in each state.
We will not label terminal states (but we will later show how to arrange these labels after constructing the suffix automaton).
Initially the automaton consists of a single state $t_0$, which will be the index $0$ (the remaining states will receive the indices $1, 2, \dots$).
We assign it $len = 0$ and $link = -1$ for convenience ($-1$ will be a fictional, non-existing state).
Now the whole task boils down to implementing the process of **adding one character** $c$ to the end of the current string.
Let us describe this process:
- Let $last$ be the state corresponding to the entire string before adding the character $c$.
(Initially we set $last = 0$, and we will change $last$ in the last step of the algorithm accordingly.)
- Create a new state $cur$, and assign it with $len(cur) = len(last) + 1$.
The value $link(cur)$ is not known at the time.
- Now we do the following procedure:
We start at the state $last$.
While there isn't a transition through the letter $c$, we will add a transition to the state $cur$, and follow the suffix link.
If at some point there already exists a transition through the letter $c$, then we will stop and denote this state with $p$.
- If we haven't found such a state $p$ and instead reached the fictitious state $-1$, then we can just assign $link(cur) = 0$ and leave.
- Suppose now that we have found a state $p$, from which there exists a transition through the letter $c$.
We will denote the state, to which the transition leads, with $q$.
- Now we have two cases. Either $len(p) + 1 = len(q)$, or not.
- If $len(p) + 1 = len(q)$, then we can simply assign $link(cur) = q$ and leave.
- Otherwise it is a bit more complicated.
It is necessary to **clone** the state $q$:
we create a new state $clone$, copy all the data from $q$ (suffix link and transitions) except the value $len$.
We will assign $len(clone) = len(p) + 1$.
After cloning we direct the suffix link from $cur$ to $clone$, and also from $q$ to $clone$.
Finally we need to walk from the state $p$ back using suffix links as long as there is a transition through $c$ to the state $q$, and redirect all those to the state $clone$.
- In any of the three cases, after completing the procedure, we update the value $last$ with the state $cur$.
If we also want to know which states are **terminal** and which are not, then we can find all terminal states after constructing the complete suffix automaton for the entire string $s$.
To do this, we take the state corresponding to the entire string (stored in the variable $last$), and follow its suffix links until we reach the initial state.
We will mark all visited states as terminal.
It is easy to understand that by doing so we will mark exactly the states corresponding to all the suffixes of the string $s$, which are exactly the terminal states.
In the next section we will look in detail at each step and show its **correctness**.
Here we only note that, since we only create one or two new states for each character of $s$, the suffix automaton contains a **linear number of states**.
The linearity of the number of transitions, and in general the linearity of the runtime of the algorithm, is less clear, and will be proven after we have proven the correctness.
### Correctness
- We will call a transition $(p, q)$ **continuous** if $len(p) + 1 = len(q)$.
Otherwise, i.e. when $len(p) + 1 < len(q)$, the transition will be called **non-continuous**.
As we can see from the description of the algorithm, continuous and non-continuous transitions will lead to different cases of the algorithm.
Continuous transitions are fixed, and will never change again.
In contrast non-continuous transition may change, when new letters are added to the string (the end of the transition edge may change).
- To avoid ambiguity we will denote the string, for which the suffix automaton was built before adding the current character $c$, with $s$.
- The algorithm begins with creating a new state $cur$, which will correspond to the entire string $s + c$.
It is clear why we have to create a new state.
Together with the new character a new equivalence class is created.
- After creating a new state we traverse by suffix links starting from the state corresponding to the entire string $s$.
For each state we try to add a transition with the character $c$ to the new state $cur$.
Thus we append to each suffix of $s$ the character $c$.
However we can only add these new transitions, if they don't conflict with an already existing one.
Therefore as soon as we find an already existing transition with $c$ we have to stop.
- In the simplest case we reached the fictitious state $-1$.
This means we added the transition with $c$ to all suffixes of $s$.
This also means, that the character $c$ hasn't been part of the string $s$ before.
Therefore the suffix link of $cur$ has to lead to the state $0$.
- In the second case we came across an existing transition $(p, q)$.
This means that we tried to add a string $x + c$ (where $x$ is a suffix of $s$) to the automaton that **already exists** in it (the string $x + c$ already appears as a substring of $s$).
Since we assume that the automaton for the string $s$ is built correctly, we should not add a new transition here.
However there is a difficulty.
To which state should the suffix link from the state $cur$ lead?
We have to make a suffix link to a state, in which the longest string is exactly $x + c$, i.e. the $len$ of this state should be $len(p) + 1$.
However it is possible that such a state doesn't exist yet, i.e. $len(q) > len(p) + 1$.
In this case we have to create such a state, by **splitting** the state $q$.
- If the transition $(p, q)$ turns out to be continuous, then $len(q) = len(p) + 1$.
In this case everything is simple.
We direct the suffix link from $cur$ to the state $q$.
- Otherwise the transition is non-continuous, i.e. $len(q) > len(p) + 1$.
This means that the state $q$ corresponds to not only the suffix of $s + c$ with length $len(p) + 1$, but also to longer substrings of $s$.
We can do nothing other than **splitting** the state $q$ into two sub-states, so that the first one has length $len(p) + 1$.
How can we split a state?
We **clone** the state $q$, which gives us the state $clone$, and we set $len(clone) = len(p) + 1$.
We copy all the transitions from $q$ to $clone$, because we don't want to change the paths that traverse through $q$.
Also we set the suffix link from $clone$ to the target of the suffix link of $q$, and set the suffix link of $q$ to $clone$.
And after splitting the state, we set the suffix link from $cur$ to $clone$.
In the last step we change some of the transitions to $q$, we redirect them to $clone$.
Which transitions do we have to change?
It is enough to redirect only the transitions corresponding to all the suffixes of the string $w + c$ (where $w$ is the longest string of $p$), i.e. we need to continue to move along the suffix links, starting from the vertex $p$ until we reach the fictitious state $-1$ or a transition that leads to a different state than $q$.
### Linear number of operations
First we immediately make the assumption that the size of the alphabet is **constant**.
If this is not the case, then it will not be possible to talk about the linear time complexity.
The list of transitions from one vertex will be stored in a balanced tree, which allows you to quickly perform key search operations and adding keys.
Therefore if we denote with $k$ the size of the alphabet, then the asymptotic behavior of the algorithm will be $O(n \log k)$ with $O(n)$ memory.
However if the alphabet is small enough, then you can sacrifice memory by avoiding balanced trees, and store the transitions at each vertex as an array of length $k$ (for quick searching by key) and a dynamic list (to quickly traverse all available keys).
Thus we reach the $O(n)$ time complexity for the algorithm, but at a cost of $O(n k)$ memory complexity.
So we will consider the size of the alphabet to be constant, i.e. each operation of searching for a transition on a character, adding a transition, searching for the next transition - all these operations can be done in $O(1)$.
If we consider all parts of the algorithm, then there are three places in which the linear complexity is not obvious:
- The first place is the traversal through the suffix links from the state $last$, adding transitions with the character $c$.
- The second place is the copying of transitions when the state $q$ is cloned into a new state $clone$.
- The third place is changing the transitions leading to $q$, redirecting them to $clone$.
We use the fact that the size of the suffix automaton (both in number of states and in the number of transitions) is **linear**.
(The proof of the linearity of the number of states is the algorithm itself, and the proof of the linearity of the number of transitions is given below, after the implementation of the algorithm.)
Thus the total complexity of the **first and second places** is obvious; after all, each operation adds only one new transition (amortized) to the automaton.
It remains to estimate the total complexity of the **third place**, in which we redirect transitions, that pointed originally to $q$, to $clone$.
We denote $v = longest(p)$.
This is a suffix of the string $s$, and with each iteration its length decreases, and therefore the position of $v$ as a suffix of the string $s$ increases monotonically with each iteration.
In this case, if before the first iteration of the loop the corresponding string $v$ was at depth $k$ ($k \ge 2$) from $last$ (counting the depth as the number of suffix links), then after the last iteration the string $v + c$ will be the $2$-nd suffix link on the path from $cur$ (which will become the new value of $last$).
Thus, each iteration of this loop leads to the fact that the position of the string $longest(link(link(last)))$ as a suffix of the current string increases monotonically.
Therefore this loop cannot be executed more than $n$ times in total, which is what we wanted to prove.
### Implementation
First we describe a data structure that will store all information about a specific state ($len$, $link$ and the list of transitions).
If necessary you can add a terminal flag here, as well as other information.
We will store the list of transitions in the form of a $map$, which allows us to achieve total $O(n)$ memory and $O(n \log k)$ time for processing the entire string.
```{.cpp file=suffix_automaton_struct}
struct state {
int len, link;
map<char, int> next;
};
```
The suffix automaton itself will be stored in an array of these structures $state$.
We store the current size $sz$ and also the variable $last$, the state corresponding to the entire string at the moment.
```{.cpp file=suffix_automaton_def}
const int MAXLEN = 100000;
state st[MAXLEN * 2];
int sz, last;
```
We give a function that initializes a suffix automaton (creating a suffix automaton with a single state).
```{.cpp file=suffix_automaton_init}
void sa_init() {
st[0].len = 0;
st[0].link = -1;
sz++;
last = 0;
}
```
And finally we give the implementation of the main function, which adds the next character to the end of the current string, rebuilding the automaton accordingly.
```{.cpp file=suffix_automaton_extend}
void sa_extend(char c) {
int cur = sz++;
st[cur].len = st[last].len + 1;
int p = last;
while (p != -1 && !st[p].next.count(c)) {
st[p].next[c] = cur;
p = st[p].link;
}
if (p == -1) {
st[cur].link = 0;
} else {
int q = st[p].next[c];
if (st[p].len + 1 == st[q].len) {
st[cur].link = q;
} else {
int clone = sz++;
st[clone].len = st[p].len + 1;
st[clone].next = st[q].next;
st[clone].link = st[q].link;
while (p != -1 && st[p].next[c] == q) {
st[p].next[c] = clone;
p = st[p].link;
}
st[q].link = st[cur].link = clone;
}
}
last = cur;
}
```
As mentioned above, if you sacrifice memory ($O(n k)$, where $k$ is the size of the alphabet), then you can achieve a construction time of $O(n)$ for the automaton, even for an arbitrary alphabet size $k$.
But for this you will have to store an array of size $k$ in each state (for quickly jumping to the transition of a letter), and additionally a list of all transitions (to quickly iterate over them).
## Additional properties
### Number of states
The number of states in a suffix automaton of the string $s$ of length $n$ **doesn't exceed** $2n - 1$ (for $n \ge 2$).
The proof is the construction algorithm itself, since initially the automaton consists of one state, and in the first and second iteration only a single state will be created, and in the remaining $n-2$ steps at most $2$ states will be created each.
However we can also **show** this estimation **without knowing the algorithm**.
Let us recall that the number of states is equal to the number of different sets $endpos$.
In addition these sets $endpos$ form a tree (a parent vertex contains all children sets in its set).
Consider this tree and transform it a little bit:
as long as it has an internal vertex with only one child (which means that the set of the child misses at least one position from the parent set), we create a new child with the set of the missing positions.
In the end we have a tree in which each inner vertex has a degree greater than one, and the number of leaves does not exceed $n$.
Therefore there are no more than $2n - 1$ vertices in such a tree.
This bound of the number of states can actually be achieved for each $n$.
A possible string is:
$$\text{"abbb}\dots \text{bbb"}$$
In each iteration, starting at the third one, the algorithm will split a state, resulting in exactly $2n - 1$ states.
### Number of transitions
The number of transitions in a suffix automaton of a string $s$ of length $n$ **doesn't exceed** $3n - 4$ (for $n \ge 3$).
Let us prove this:
Let us first estimate the number of continuous transitions.
Consider a spanning tree of the longest paths in the automaton starting in the state $t_0$.
This skeleton will consist of only the continuous edges, and therefore their number is less than the number of states, i.e. it does not exceed $2n - 2$.
Now let us estimate the number of non-continuous transitions.
Let the current non-continuous transition be $(p, q)$ with the character $c$.
We take the correspondent string $u + c + w$, where the string $u$ corresponds to the longest path from the initial state to $p$, and $w$ to the longest path from $q$ to any terminal state.
On one hand, each such string $u + c + w$ will be different for different non-continuous transitions (since the strings $u$ and $w$ are formed only by continuous transitions).
On the other hand each such string $u + c + w$, by the definition of the terminal states, will be a suffix of the entire string $s$.
Since there are only $n$ non-empty suffixes of $s$, and none of the strings $u + c + w$ can coincide with $s$ (because the entire string corresponds to a path of only continuous transitions), the total number of non-continuous transitions does not exceed $n - 1$.
Combining these two estimates gives us the bound $3n - 3$.
However, since the maximum number of states can only be achieved with the test case $\text{"abbb}\dots\text{bbb"}$ and this case has clearly fewer than $3n - 3$ transitions, we get the tighter bound of $3n - 4$ for the number of transitions in a suffix automaton.
This bound can also be achieved with the string:
$$\text{"abbb}\dots \text{bbbc"}$$
## Applications
Here we look at some tasks that can be solved using the suffix automaton.
For the simplicity we assume that the alphabet size $k$ is constant, which allows us to consider the complexity of appending a character and the traversal as constant.
### Check for occurrence
Given a text $T$ and multiple patterns $P$.
We have to check whether or not the strings $P$ appear as a substring of $T$.
We build a suffix automaton of the text $T$ in $O(length(T))$ time.
To check if a pattern $P$ appears in $T$, we follow the transitions, starting from $t_0$, according to the characters of $P$.
If at some point there doesn't exists a transition, then the pattern $P$ doesn't appear as a substring of $T$.
If we can process the entire string $P$ this way, then the string appears in $T$.
It is clear that this will take $O(length(P))$ time for each string $P$.
Moreover the algorithm actually finds the length of the longest prefix of $P$ that appears in the text.
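A possible implementation on top of the `st[]` array built by `sa_init()` / `sa_extend()` (the function name is chosen here just for illustration):

```cpp
// returns true if P occurs in T as a substring,
// assuming the suffix automaton for T has already been built
bool contains(string const& P) {
    int v = 0;
    for (char c : P) {
        if (!st[v].next.count(c))
            return false;  // no transition: P is not a substring of T
        v = st[v].next[c];
    }
    return true;
}
```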
### Number of different substrings
Given a string $S$.
You want to compute the number of different substrings.
Let us build a suffix automaton for the string $S$.
Each substring of $S$ corresponds to some path in the automaton.
Therefore the number of different substrings is equal to the number of different paths in the automaton starting at $t_0$.
Given that the suffix automaton is a directed acyclic graph, the number of different ways can be computed using dynamic programming.
Namely, let $d[v]$ be the number of different paths starting at the state $v$ (including the path of length zero).
Then we have the recursion:
$$d[v] = 1 + \sum_{w : (v, w, c) \in DAWG} d[w]$$
I.e. $d[v]$ can be expressed as the sum of answers for all ends of the transitions of $v$.
The number of different substrings is the value $d[t_0] - 1$ (since we don't count the empty substring).
Total time complexity: $O(length(S))$
Alternatively, we can take advantage of the fact that each state $v$ corresponds to substrings with lengths in the interval $[minlen(v), len(v)]$.
Therefore, since $minlen(v) = 1 + len(link(v))$, the number of distinct substrings at state $v$ is $len(v) - minlen(v) + 1 = len(v) - (1 + len(link(v))) + 1 = len(v) - len(link(v))$.
This is demonstrated succinctly below:
```cpp
long long get_diff_strings(){
long long tot = 0;
for(int i = 1; i < sz; i++) {
tot += st[i].len - st[st[i].link].len;
}
return tot;
}
```
While this is also $O(length(S))$, it requires no extra space and no recursive calls, consequently running faster in practice.
### Total length of all different substrings
Given a string $S$.
We want to compute the total length of all its various substrings.
The solution is similar to the previous one, only now it is necessary to consider two quantities for the dynamic programming part:
the number of different substrings $d[v]$ and their total length $ans[v]$.
We already described how to compute $d[v]$ in the previous task.
The value $ans[v]$ can be computed using the recursion:
$$ans[v] = \sum_{w : (v, w, c) \in DAWG} d[w] + ans[w]$$
We take the answer of each adjacent vertex $w$, and add to it $d[w]$ (since every substring is one character longer when starting from the state $v$).
Again this task can be computed in $O(length(S))$ time.
Alternatively, we can again take advantage of the fact that each state $v$ corresponds to substrings with lengths in the interval $[minlen(v), len(v)]$.
Using $minlen(v) = 1 + len(link(v))$ and the arithmetic series formula $S_n = n \cdot \frac{a_1+a_n}{2}$ (where $S_n$ denotes the sum of $n$ terms, $a_1$ the first term, and $a_n$ the last), we can compute the total length of the substrings at a state in constant time. We then sum up these totals for each state $v \neq t_0$ in the automaton. This is shown by the code below:
```cpp
long long get_tot_len_diff_substings() {
long long tot = 0;
for(int i = 1; i < sz; i++) {
long long shortest = st[st[i].link].len + 1;
long long longest = st[i].len;
long long num_strings = longest - shortest + 1;
long long cur = num_strings * (longest + shortest) / 2;
tot += cur;
}
return tot;
}
```
This approach runs in $O(length(S))$ time, but experimentally runs 20x faster than the memoized dynamic programming version on randomized strings. It requires no extra space and no recursion.
### Lexicographically $k$-th substring {data-toc-label="Lexicographically k-th substring"}
Given a string $S$.
We have to answer multiple queries.
For each given number $K_i$ we have to find the $K_i$-th string in the lexicographically ordered list of all substrings.
The solution of this problem is based on the idea of the previous two problems.
The lexicographically $k$-th substring corresponds to the lexicographically $k$-th path in the suffix automaton.
Therefore after counting the number of paths from each state, we can easily search for the $k$-th path starting from the root of the automaton.
This takes $O(length(S))$ time for preprocessing and then $O(length(ans) \cdot k)$ for each query (where $ans$ is the answer for the query and $k$ is the size of the alphabet).
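A sketch of the query part, assuming the path counts $d[v]$ from the previous problems have already been computed into a global array (here hypothetically called `d[]`, with `d[v]` including the empty path):

```cpp
// long long d[MAXLEN * 2];  // assumed precomputed: number of paths from v,
//                           // including the path of length zero
// returns the k-th (1-indexed) smallest distinct non-empty substring,
// assuming 1 <= k <= d[0] - 1
string kth_substring(long long k) {
    string res;
    int v = 0;
    while (k > 0) {
        for (auto [c, u] : st[v].next) {  // map iterates in increasing character order
            if (d[u] >= k) {              // the answer continues with character c
                res += c;
                v = u;
                k--;                      // the string stopping at u uses one path
                break;
            }
            k -= d[u];                    // skip all substrings going through u
        }
    }
    return res;
}
```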
### Smallest cyclic shift
Given a string $S$.
We want to find the lexicographically smallest cyclic shift.
We construct a suffix automaton for the string $S + S$.
Then the automaton will contain in itself as paths all the cyclic shifts of the string $S$.
Consequently the problem is reduced to finding the lexicographically smallest path of length $length(S)$, which can be done in a trivial way: we start in the initial state and greedily pass through the transitions with the minimal character.
Total time complexity is $O(length(S))$.
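A sketch of this greedy walk, assuming the automaton for $S + S$ has already been built with `sa_init()` and `sa_extend()` (the function name is illustrative):

```cpp
// returns the lexicographically smallest cyclic shift of S,
// assuming the suffix automaton was built for the string S + S
string smallest_cyclic_shift(string const& S) {
    string res;
    int v = 0;
    for (size_t i = 0; i < S.size(); i++) {
        auto it = st[v].next.begin();  // the map is ordered, so this is the minimal character
        res += it->first;
        v = it->second;
    }
    return res;
}
```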
### Number of occurrences
For a given text $T$.
We have to answer multiple queries.
For each given pattern $P$ we have to find out how many times the string $P$ appears in the string $T$ as substring.
We construct the suffix automaton for the text $T$.
Next we do the following preprocessing:
for each state $v$ in the automaton we calculate the number $cnt[v]$ that is equal to the size of the set $endpos(v)$.
In fact all strings corresponding to the same state $v$ appear in the text $T$ an equal amount of times, which is equal to the number of positions in the set $endpos$.
However we cannot construct the sets $endpos$ explicitly, therefore we only consider their sizes $cnt$.
To compute them we proceed as follows.
For each state, if it was not created by cloning (and if it is not the initial state $t_0$), we initialize it with $cnt = 1$.
Then we will go through all states in decreasing order of their length $len$, and add the current value $cnt[v]$ to the suffix links:
$$cnt[link(v)] \text{ += } cnt[v]$$
This gives the correct value for each state.
Why is this correct?
The total number of states obtained _not_ via cloning is exactly $length(T)$, and the first $i$ of them appeared when we added the first $i$ characters.
Consequently for each of these states we count the corresponding position at which it was processed.
Therefore initially we have $cnt = 1$ for each such state, and $cnt = 0$ for all other.
Then we apply the following operation for each $v$: $cnt[link(v)] \text{ += } cnt[v]$.
The meaning behind this is, that if a string $v$ appears $cnt[v]$ times, then also all its suffixes appear at the exact same end positions, therefore also $cnt[v]$ times.
Why don't we overcount in this procedure (i.e. don't count some position twice)?
Because we add the positions of a state to only one other state, so it can not happen that one state directs its positions to another state twice in two different ways.
Thus we can compute the quantities $cnt$ for all states in the automaton in $O(length(T))$ time.
After that we can answer a query by just looking up the value $cnt[t]$, where $t$ is the state corresponding to the pattern, if such a state exists.
Otherwise answer with $0$.
Answering a query takes $O(length(P))$ time.
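A possible implementation of this preprocessing, assuming an extra boolean field `is_clone` is added to `state` and set inside `sa_extend()` whenever a state is created by cloning (the same flag appears again in the section on all occurrence positions below):

```cpp
// cnt[v] = |endpos(v)|, computed once after the automaton for T is built;
// cnt[0] is not used
long long cnt[MAXLEN * 2];

void compute_cnt() {
    vector<int> order;
    for (int v = 1; v < sz; v++) {
        cnt[v] = st[v].is_clone ? 0 : 1;
        order.push_back(v);
    }
    // process the states in decreasing order of len,
    // pushing each cnt value to the suffix link
    sort(order.begin(), order.end(),
         [](int a, int b) { return st[a].len > st[b].len; });
    for (int v : order)
        cnt[st[v].link] += cnt[v];
}
```

A query for a pattern $P$ then traverses the automaton with the characters of $P$ and returns `cnt[t]` for the reached state $t$, or $0$ if the traversal fails.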
### First occurrence position
Given a text $T$ and multiple queries.
For each query string $P$ we want to find the position of the first occurrence of $P$ in the string $T$ (the position of the beginning of $P$).
We again construct a suffix automaton.
Additionally we precompute the position $firstpos$ for all states in the automaton, i.e. for each state $v$ we want to find the position $firstpos[v]$ of the end of the first occurrence.
In other words, we want to find in advance the minimal element of each set $endpos$ (since we obviously cannot maintain all sets $endpos$ explicitly).
To maintain these positions $firstpos$ we extend the function `sa_extend()`.
When we create a new state $cur$, we set:
$$firstpos(cur) = len(cur) - 1$$
And when we clone a vertex $q$ as $clone$, we set:
$$firstpos(clone) = firstpos(q)$$
(since the only other option for a value would be $firstpos(cur)$ which is definitely too big)
Thus the answer for a query is simply $firstpos(t) - length(P) + 1$, where $t$ is the state corresponding to the string $P$.
Answering a query again takes only $O(length(P))$ time.
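In code, the modifications could be sketched as the following two assignments inside `sa_extend()`, assuming a `first_pos` field is added to `state` (as it is in the next section):

```cpp
// right after creating cur in sa_extend():
st[cur].first_pos = st[cur].len - 1;

// right after creating clone in sa_extend():
st[clone].first_pos = st[q].first_pos;

// answering a query for a pattern P that ends in state t:
// the first occurrence starts at st[t].first_pos - (int)P.size() + 1
```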
### All occurrence positions
This time we have to display all positions of the occurrences in the string $T$.
Again we construct a suffix automaton for the text $T$.
Similar as in the previous task we compute the position $firstpos$ for all states.
Clearly $firstpos(t)$ is part of the answer, if $t$ is the state corresponding to a query string $P$.
So we took into account the state of the automaton containing $P$.
What other states do we need to take into account?
All states that correspond to strings for which $P$ is a suffix.
In other words we need to find all the states that can reach the state $t$ via suffix links.
Therefore to solve the problem we need to save for each state a list of suffix references leading to it.
The answer to the query will then contain all $firstpos$ for each state that we can find with a DFS / BFS starting from the state $t$ using only the suffix references.
This workaround will work in time $O(answer(P))$, because we will not visit a state twice (because only one suffix link leaves each state, so there cannot be two different paths leading to the same state).
We only must take into account that two different states can have the same $firstpos$ value.
This happens if one state was obtained by cloning another.
However, this doesn't ruin the complexity, since each state can only have at most one clone.
Moreover, we can also get rid of the duplicate positions, if we don't output the positions from the cloned states.
In fact a state, that a cloned state can reach, is also reachable from the original state.
Thus if we remember the flag `is_cloned` for each state, we can simply ignore the cloned states and only output $firstpos$ for all other states.
Here are some implementation sketches:
```cpp
struct state {
...
bool is_clone;
int first_pos;
vector<int> inv_link;
};
// after constructing the automaton
for (int v = 1; v < sz; v++) {
st[st[v].link].inv_link.push_back(v);
}
// output all positions of occurrences
void output_all_occurrences(int v, int P_length) {
if (!st[v].is_clone)
cout << st[v].first_pos - P_length + 1 << endl;
for (int u : st[v].inv_link)
output_all_occurrences(u, P_length);
}
```
### Shortest non-appearing string
Given a string $S$ and a certain alphabet.
We have to find a string of smallest length, that doesn't appear in $S$.
We will apply dynamic programming on the suffix automaton built for the string $S$.
Let $d[v]$ be the answer for the node $v$, i.e. we already processed part of the substring, are currently in the state $v$, and want to find the smallest number of characters that have to be added to find a non-existent transition.
Computing $d[v]$ is very simple.
If there is no transition for at least one character of the alphabet, then $d[v] = 1$.
Otherwise one character is not enough, and so we need to take the minimum of all answers of all transitions:
$$d[v] = 1 + \min_{w:(v,w,c) \in SA} d[w].$$
The answer to the problem will be $d[t_0]$, and the actual string can be restored using the computed array $d[]$.
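A sketch of this dynamic programming, assuming a lowercase alphabet $\text{'a'} \dots \text{'z'}$ (the names `calc_d` and `shortest_absent` are chosen only for illustration):

```cpp
const int INF = 1e9;
int d[MAXLEN * 2];  // 0 means "not computed yet", since every answer is >= 1

// d[v] = smallest number of characters needed to leave the automaton from state v
int calc_d(int v) {
    if (d[v]) return d[v];
    for (char c = 'a'; c <= 'z'; c++)
        if (!st[v].next.count(c))
            return d[v] = 1;  // some character has no transition
    d[v] = INF;
    for (auto& p : st[v].next)
        d[v] = min(d[v], 1 + calc_d(p.second));
    return d[v];
}

// restore one shortest string that does not appear in S (its length is calc_d(0))
string shortest_absent() {
    string res;
    int v = 0;
    while (true) {
        for (char c = 'a'; c <= 'z'; c++) {
            if (!st[v].next.count(c))
                return res + c;  // d[v] == 1, stop with this character
            if (calc_d(st[v].next[c]) + 1 == calc_d(v)) {
                res += c;
                v = st[v].next[c];
                break;
            }
        }
    }
}
```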
### Longest common substring of two strings
Given two strings $S$ and $T$.
We have to find the longest common substring, i.e. such a string $X$ that appears as substring in $S$ and also in $T$.
We construct a suffix automaton for the string $S$.
We will now take the string $T$, and for each prefix look for the longest suffix of this prefix in $S$.
In other words, for each position in the string $T$, we want to find the longest common substring of $S$ and $T$ ending in that position.
For this we will use two variables, the **current state** $v$, and the **current length** $l$.
These two variables will describe the current matching part: its length and the state that corresponds to it.
Initially $v = t_0$ and $l = 0$, i.e. the match is empty.
Now let us describe how we can add a character $T[i]$ and recalculate the answer for it.
- If there is a transition from $v$ with the character $T[i]$, then we simply follow the transition and increase $l$ by one.
- If there is no such transition, we have to shorten the current matching part, which means that we need to follow the suffix link: $v = link(v)$.
At the same time, the current length has to be shortened.
Obviously we need to assign $l = len(v)$, since after passing through the suffix link we end up in a state whose longest corresponding string has length $len(v)$.
- If there is still no transition using the required character, we repeat and again go through the suffix link and decrease $l$, until we find a transition or we reach the fictional state $-1$ (which means that the symbol $T[i]$ doesn't appear at all in $S$, so we assign $v = l = 0$).
The answer to the task will be the maximum of all the values $l$.
The complexity of this part is $O(length(T))$, since in one move we can either increase $l$ by one, or make several passes through the suffix links, each one ends up reducing the value $l$.
Implementation:
```cpp
string lcs(string S, string T) {
    sa_init();
    for (int i = 0; i < S.size(); i++)
        sa_extend(S[i]);
    int v = 0, l = 0, best = 0, bestpos = 0;
    for (int i = 0; i < T.size(); i++) {
        while (v && !st[v].next.count(T[i])) {
            v = st[v].link;
            l = st[v].len;
        }
        if (st[v].next.count(T[i])) {
            v = st[v].next[T[i]];
            l++;
        }
        if (l > best) {
            best = l;
            bestpos = i;
        }
    }
    return T.substr(bestpos - best + 1, best);
}
```
### Largest common substring of multiple strings
There are $k$ strings $S_i$ given.
We have to find the longest common substring, i.e. such a string $X$ that appears as substring in each string $S_i$.
We join all strings into one large string $T$, separating the strings by special characters $D_i$ (one for each string):
$$T = S_1 + D_1 + S_2 + D_2 + \dots + S_k + D_k.$$
Then we construct the suffix automaton for the string $T$.
Now we need to find a string in the automaton, which is contained in all the strings $S_i$, and this can be done by using the special added characters.
Note that if a substring is included in some string $S_j$, then in the suffix automaton there exists a path starting from this substring containing the character $D_j$ and not containing the other characters $D_1, \dots, D_{j-1}, D_{j+1}, \dots, D_k$.
Thus we need to calculate the attainability, which tells us for each state of the machine and each symbol $D_i$ if there exists such a path.
This can easily be computed by DFS or BFS and dynamic programming.
After that, the answer to the problem will be the string $longest(v)$ for the state $v$, from which paths exist for all special characters.
|
---
title
suffix_automata
---
# Suffix Automaton
A **suffix automaton** is a powerful data structure that allows solving many string-related problems.
For example, you can search for all occurrences of one string in another, or count the amount of different substrings of a given string.
Both tasks can be solved in linear time with the help of a suffix automaton.
Intuitively a suffix automaton can be understood as a compressed form of **all substrings** of a given string.
An impressive fact is, that the suffix automaton contains all this information in a highly compressed form.
For a string of length $n$ it only requires $O(n)$ memory.
Moreover, it can also be built in $O(n)$ time (if we consider the size $k$ of the alphabet as a constant), otherwise both the memory and the time complexity will be $O(n \log k)$.
The linearity of the size of the suffix automaton was first discovered in 1983 by Blumer et al., and in 1985 the first linear algorithms for the construction was presented by Crochemore and Blumer.
## Definition of a suffix automaton
A suffix automaton for a given string $s$ is a minimal **DFA** (deterministic finite automaton / deterministic finite state machine) that accepts all the suffixes of the string $s$.
In other words:
- A suffix automaton is an oriented acyclic graph.
The vertices are called **states**, and the edges are called **transitions** between states.
- One of the states $t_0$ is the **initial state**, and it must be the source of the graph (all other states are reachable from $t_0$).
- Each **transition** is labeled with some character.
All transitions originating from a state must have **different** labels.
- One or multiple states are marked as **terminal states**.
If we start from the initial state $t_0$ and move along transitions to a terminal state, then the labels of the passed transitions must spell one of the suffixes of the string $s$.
Each of the suffixes of $s$ must be spellable using a path from $t_0$ to a terminal state.
- The suffix automaton contains the minimum number of vertices among all automata satisfying the conditions described above.
### Substring property
The simplest and most important property of a suffix automaton is, that it contains information about all substrings of the string $s$.
Any path starting at the initial state $t_0$, if we write down the labels of the transitions, forms a **substring** of $s$.
And conversely every substring of $s$ corresponds to a certain path starting at $t_0$.
In order to simplify the explanations, we will say that the substring **corresponds** to that path (starting at $t_0$ and the labels spell the substring).
And conversely we say that any path **corresponds** to the string spelled by its labels.
One or multiple paths can lead to a state.
Thus, we will say that a state **corresponds** to the set of strings, which correspond to these paths.
### Examples of constructed suffix automata
Here we will show some examples of suffix automata for several simple strings.
We will denote the initial state with blue and the terminal states with green.
For the string $s =~ \text{""}$:

For the string $s =~ \text{"a"}$:

For the string $s =~ \text{"aa"}$:

For the string $s =~ \text{"ab"}$:

For the string $s =~ \text{"aba"}$:

For the string $s =~ \text{"abb"}$:

For the string $s =~ \text{"abbb"}$:

## Construction in linear time
Before we describe the algorithm to construct a suffix automaton in linear time, we need to introduce several new concepts and simple proofs, which will be very important in understanding the construction.
### End positions $endpos$ {data-toc-label="End positions"}
Consider any non-empty substring $t$ of the string $s$.
We will denote with $endpos(t)$ the set of all positions in the string $s$, in which the occurrences of $t$ end. For instance, we have $endpos(\text{"bc"}) = \{2, 4\}$ for the string $\text{"abcbc"}$.
We will call two substrings $t_1$ and $t_2$ $endpos$-equivalent, if their ending sets coincide: $endpos(t_1) = endpos(t_2)$.
Thus all non-empty substrings of the string $s$ can be decomposed into several **equivalence classes** according to their sets $endpos$.
It turns out, that in a suffix automaton $endpos$-equivalent substrings **correspond to the same state**.
In other words the number of states in a suffix automaton is equal to the number of equivalence classes among all substrings, plus the initial state.
Each state of a suffix automaton corresponds to one or more substrings having the same value $endpos$.
We will later describe the construction algorithm using this assumption.
We will then see, that all the required properties of a suffix automaton, except for the minimality, are fulfilled.
And the minimality follows from Nerode's theorem (which will not be proven in this article).
We can make some important observations concerning the values $endpos$:
**Lemma 1**:
Two non-empty substrings $u$ and $w$ (with $length(u) \le length(w)$) are $endpos$-equivalent, if and only if the string $u$ occurs in $s$ only in the form of a suffix of $w$.
The proof is obvious.
If $u$ and $w$ have the same $endpos$ values, then $u$ is a suffix of $w$ and appears only in the form of a suffix of $w$ in $s$.
And if $u$ is a suffix of $w$ and appears in $s$ only in the form of a suffix of $w$, then the values $endpos$ are equal by definition.
**Lemma 2**:
Consider two non-empty substrings $u$ and $w$ (with $length(u) \le length(w)$).
Then their sets $endpos$ either don't intersect at all, or $endpos(w)$ is a subset of $endpos(u)$.
And it depends on if $u$ is a suffix of $w$ or not.
$$\begin{cases}
endpos(w) \subseteq endpos(u) & \text{if } u \text{ is a suffix of } w \\\\
endpos(w) \cap endpos(u) = \emptyset & \text{otherwise}
\end{cases}$$
Proof:
If the sets $endpos(u)$ and $endpos(w)$ have at least one common element, then the strings $u$ and $w$ both end in that position, i.e. $u$ is a suffix of $w$.
But then at every occurrence of $w$ also appears the substring $u$, which means that $endpos(w)$ is a subset of $endpos(u)$.
**Lemma 3**:
Consider an $endpos$-equivalence class.
Sort all the substrings in this class by decreasing length.
Then in the resulting sequence each substring will be one shorter than the previous one, and at the same time will be a suffix of the previous one.
In other words, in the same equivalence class, the shorter substrings are actually suffixes of the longer substrings, and they take all possible lengths in a certain interval $[x; y]$.
Proof:
Fix some $endpos$-equivalence class.
If it only contains one string, then the lemma is obviously true.
Now let's say that the number of strings in the class is greater than one.
According to Lemma 1, for two different $endpos$-equivalent strings, the shorter one is always a proper suffix of the longer one.
Consequently, there cannot be two strings of the same length in the equivalence class.
Let's denote by $w$ the longest, and through $u$ the shortest string in the equivalence class.
According to Lemma 1, the string $u$ is a proper suffix of the string $w$.
Consider now any suffix of $w$ with a length in the interval $[length(u); length(w)]$.
It is easy to see, that this suffix is also contained in the same equivalence class.
Because this suffix can only appear in the form of a suffix of $w$ in the string $s$ (since also the shorter suffix $u$ occurs in $s$ only in the form of a suffix of $w$).
Consequently, according to Lemma 1, this suffix is $endpos$-equivalent to the string $w$.
### Suffix links $link$ {data-toc-label="Suffix links"}
Consider some state $v \ne t_0$ in the automaton.
As we know, the state $v$ corresponds to the class of strings with the same $endpos$ values.
And if we denote by $w$ the longest of these strings, then all the other strings are suffixes of $w$.
We also know the first few suffixes of a string $w$ (if we consider suffixes in descending order of their length) are all contained in this equivalence class, and all other suffixes (at least one other - the empty suffix) are in some other classes.
We denote by $t$ the biggest such suffix, and make a suffix link to it.
In other words, a **suffix link** $link(v)$ leads to the state that corresponds to the **longest suffix** of $w$ that is in another $endpos$-equivalence class.
Here we assume that the initial state $t_0$ corresponds to its own equivalence class (containing only the empty string), and for convenience we set $endpos(t_0) = \{-1, 0, \dots, length(s)-1\}$.
**Lemma 4**:
Suffix links form a **tree** with the root $t_0$.
Proof:
Consider an arbitrary state $v \ne t_0$.
A suffix link $link(v)$ leads to a state corresponding to strings with strictly smaller length (this follows from the definition of the suffix links and from Lemma 3).
Therefore, by moving along the suffix links, we will sooner or later come to the initial state $t_0$, which corresponds to the empty string.
**Lemma 5**:
If we construct a tree using the sets $endpos$ (by the rule that the set of a parent node contains the sets of all children as subsets), then the structure will coincide with the tree of suffix links.
Proof:
The fact that we can construct a tree using the sets $endpos$ follows directly from Lemma 2 (that any two sets either do not intersect or one is contained in the other).
Let us now consider an arbitrary state $v \ne t_0$, and its suffix link $link(v)$.
From the definition of the suffix link and from Lemma 2 it follows that
$$endpos(v) \subseteq endpos(link(v)),$$
which together with the previous lemma proves the assertion:
the tree of suffix links is essentially a tree of sets $endpos$.
Here is an **example** of a tree of suffix links in the suffix automaton build for the string $\text{"abcbc"}$.
The nodes are labeled with the longest substring from the corresponding equivalence class.

### Recap
Before proceeding to the algorithm itself, we recap the accumulated knowledge, and introduce a few auxiliary notations.
- The substrings of the string $s$ can be decomposed into equivalence classes according to their end positions $endpos$.
- The suffix automaton consists of the initial state $t_0$, as well as of one state for each $endpos$-equivalence class.
- Each state $v$ corresponds to one or multiple substrings.
We denote by $longest(v)$ the longest such string, and through $len(v)$ its length.
We denote by $shortest(v)$ the shortest such substring, and its length with $minlen(v)$.
Then all the strings corresponding to this state are different suffixes of the string $longest(v)$ and have all possible lengths in the interval $[minlen(v); len(v)]$.
- For each state $v \ne t_0$ a suffix link is defined as a link, that leads to a state that corresponds to the suffix of the string $longest(v)$ of length $minlen(v) - 1$.
The suffix links form a tree with the root in $t_0$, and at the same time this tree forms an inclusion relationship between the sets $endpos$.
- We can express $minlen(v)$ for $v \ne t_0$ using the suffix link $link(v)$ as:
$$minlen(v) = len(link(v)) + 1$$
- If we start from an arbitrary state $v_0$ and follow the suffix links, then sooner or later we will reach the initial state $t_0$.
In this case we obtain a sequence of disjoint intervals $[minlen(v_i); len(v_i)]$, which in union forms the continuous interval $[0; len(v_0)]$.
### Algorithm
Now we can proceed to the algorithm itself.
The algorithm will be **online**, i.e. we will add the characters of the string one by one, and modify the automaton accordingly in each step.
To achieve linear memory consumption, we will only store the values $len$, $link$ and a list of transitions in each state.
We will not label terminal states (but we will later show how to arrange these labels after constructing the suffix automaton).
Initially the automaton consists of a single state $t_0$, which will be the index $0$ (the remaining states will receive the indices $1, 2, \dots$).
We assign it $len = 0$ and $link = -1$ for convenience ($-1$ will be a fictional, non-existing state).
Now the whole task boils down to implementing the process of **adding one character** $c$ to the end of the current string.
Let us describe this process:
- Let $last$ be the state corresponding to the entire string before adding the character $c$.
(Initially we set $last = 0$, and we will change $last$ in the last step of the algorithm accordingly.)
- Create a new state $cur$, and assign it with $len(cur) = len(last) + 1$.
The value $link(cur)$ is not known at the time.
- Now we do the following procedure:
We start at the state $last$.
While there isn't a transition through the letter $c$, we will add a transition to the state $cur$, and follow the suffix link.
If at some point there already exists a transition through the letter $c$, then we will stop and denote this state with $p$.
- If we haven't found such a state $p$ and instead reached the fictitious state $-1$, then we can just assign $link(cur) = 0$ and leave.
- Suppose now that we have found a state $p$, from which there exists a transition through the letter $c$.
We will denote the state, to which the transition leads, with $q$.
- Now we have two cases. Either $len(p) + 1 = len(q)$, or not.
- If $len(p) + 1 = len(q)$, then we can simply assign $link(cur) = q$ and leave.
- Otherwise it is a bit more complicated.
It is necessary to **clone** the state $q$:
we create a new state $clone$, copy all the data from $q$ (suffix link and transitions) except the value $len$.
We will assign $len(clone) = len(p) + 1$.
After cloning we direct the suffix link from $cur$ to $clone$, and also from $q$ to $clone$.
Finally we need to walk from the state $p$ back using suffix links as long as there is a transition through $c$ to the state $q$, and redirect all those to the state $clone$.
- In any of the three cases, after completing the procedure, we update the value $last$ with the state $cur$.
If we also want to know which states are **terminal** and which are not, then we can find all terminal states after constructing the complete suffix automaton for the entire string $s$.
To do this, we take the state corresponding to the entire string (stored in the variable $last$), and follow its suffix links until we reach the initial state.
We will mark all visited states as terminal.
It is easy to understand that by doing so we will mark exactly the states corresponding to all the suffixes of the string $s$, which are exactly the terminal states.
In the next section we will look in detail at each step and show its **correctness**.
Here we only note that, since we only create one or two new states for each character of $s$, the suffix automaton contains a **linear number of states**.
The linearity of the number of transitions, and in general the linearity of the runtime of the algorithm, are less clear, and they will be proven after we prove the correctness.
### Correctness
- We will call a transition $(p, q)$ **continuous** if $len(p) + 1 = len(q)$.
Otherwise, i.e. when $len(p) + 1 < len(q)$, the transition will be called **non-continuous**.
As we can see from the description of the algorithm, continuous and non-continuous transitions will lead to different cases of the algorithm.
Continuous transitions are fixed, and will never change again.
In contrast non-continuous transitions may change, when new letters are added to the string (the end of the transition edge may change).
- To avoid ambiguity we will denote the string, for which the suffix automaton was built before adding the current character $c$, with $s$.
- The algorithm begins with creating a new state $cur$, which will correspond to the entire string $s + c$.
It is clear why we have to create a new state.
Together with the new character a new equivalence class is created.
- After creating a new state we traverse by suffix links starting from the state corresponding to the entire string $s$.
For each state we try to add a transition with the character $c$ to the new state $cur$.
Thus we append to each suffix of $s$ the character $c$.
However we can only add these new transitions, if they don't conflict with an already existing one.
Therefore as soon as we find an already existing transition with $c$ we have to stop.
- In the simplest case we reached the fictitious state $-1$.
This means we added the transition with $c$ to all suffixes of $s$.
This also means, that the character $c$ hasn't been part of the string $s$ before.
Therefore the suffix link of $cur$ has to lead to the state $0$.
- In the second case we came across an existing transition $(p, q)$.
This means that we tried to add a string $x + c$ (where $x$ is a suffix of $s$) that **already exists** in the automaton (the string $x + c$ already appears as a substring of $s$).
Since we assume that the automaton for the string $s$ is built correctly, we should not add a new transition here.
However there is a difficulty.
To which state should the suffix link from the state $cur$ lead?
We have to make a suffix link to a state, in which the longest string is exactly $x + c$, i.e. the $len$ of this state should be $len(p) + 1$.
However it is possible, that such a state doesn't exist yet, i.e. $len(q) > len(p) + 1$.
In this case we have to create such a state, by **splitting** the state $q$.
- If the transition $(p, q)$ turns out to be continuous, then $len(q) = len(p) + 1$.
In this case everything is simple.
We direct the suffix link from $cur$ to the state $q$.
- Otherwise the transition is non-continuous, i.e. $len(q) > len(p) + 1$.
This means that the state $q$ corresponds to not only the suffix of $s + c$ with length $len(p) + 1$, but also to longer substrings of $s$.
We can do nothing other than **splitting** the state $q$ into two sub-states, so that the first one has length $len(p) + 1$.
How can we split a state?
We **clone** the state $q$, which gives us the state $clone$, and we set $len(clone) = len(p) + 1$.
We copy all the transitions from $q$ to $clone$, because we don't want to change the paths that traverse through $q$.
Also we set the suffix link from $clone$ to the target of the suffix link of $q$, and set the suffix link of $q$ to $clone$.
And after splitting the state, we set the suffix link from $cur$ to $clone$.
In the last step we change some of the transitions to $q$, we redirect them to $clone$.
Which transitions do we have to change?
It is enough to redirect only the transitions corresponding to all the suffixes of the string $w + c$ (where $w$ is the longest string of $p$), i.e. we need to continue to move along the suffix links, starting from the vertex $p$ until we reach the fictitious state $-1$ or a transition that leads to a different state than $q$.
### Linear number of operations
First we immediately make the assumption that the size of the alphabet is **constant**.
If this is not the case, then it will not be possible to talk about the linear time complexity.
The list of transitions from one vertex will be stored in a balanced tree, which allows you to quickly perform key search operations and adding keys.
Therefore if we denote with $k$ the size of the alphabet, then the asymptotic behavior of the algorithm will be $O(n \log k)$ with $O(n)$ memory.
However if the alphabet is small enough, then you can sacrifice memory by avoiding balanced trees, and store the transitions at each vertex as an array of length $k$ (for quick searching by key) and a dynamic list (to quickly traverse all available keys).
Thus we reach the $O(n)$ time complexity for the algorithm, but at a cost of $O(n k)$ memory complexity.
So we will consider the size of the alphabet to be constant, i.e. each operation of searching for a transition on a character, adding a transition, searching for the next transition - all these operations can be done in $O(1)$.
If we consider all parts of the algorithm, then it contains three places in the algorithm in which the linear complexity is not obvious:
- The first place is the traversal through the suffix links from the state $last$, adding transitions with the character $c$.
- The second place is the copying of transitions when the state $q$ is cloned into a new state $clone$.
- Third place is changing the transition leading to $q$, redirecting them to $clone$.
We use the fact that the size of the suffix automaton (both in number of states and in the number of transitions) is **linear**.
(The proof of the linearity of the number of states is the algorithm itself, and the proof of the linearity of the number of transitions is given below, after the implementation of the algorithm.)
Thus the total complexity of the **first and second places** is obvious, after all each operation adds only one amortized new transition to the automaton.
It remains to estimate the total complexity of the **third place**, in which we redirect transitions, that pointed originally to $q$, to $clone$.
We denote $v = longest(p)$.
This is a suffix of the string $s$, and with each iteration its length decreases - therefore the starting position of $v$ as a suffix of the string $s$ increases monotonically with each iteration.
In this case, if before the first iteration of the loop the corresponding string $v$ was at depth $k$ ($k \ge 2$) from $last$ (counting the depth as the number of suffix links), then after the last iteration the string $v + c$ will be the second suffix link on the path from $cur$ (which will become the new value of $last$).
Thus, each iteration of this loop leads to the fact that the position of the string $longest(link(link(last)))$ as a suffix of the current string increases monotonically.
Therefore this loop cannot be executed more than $n$ times in total, which was to be proven.
### Implementation
First we describe a data structure that will store all information about a specific state ($len$, $link$ and the list of transitions).
If necessary you can add a terminal flag here, as well as other information.
We will store the list of transitions in the form of a $map$, which allows us to achieve total $O(n)$ memory and $O(n \log k)$ time for processing the entire string.
```{.cpp file=suffix_automaton_struct}
struct state {
int len, link;
map<char, int> next;
};
```
The suffix automaton itself will be stored in an array of these structures $state$.
We store the current size $sz$ and also the variable $last$, the state corresponding to the entire string at the moment.
```{.cpp file=suffix_automaton_def}
const int MAXLEN = 100000;
state st[MAXLEN * 2];
int sz, last;
```
We give a function that initializes a suffix automaton (creating a suffix automaton with a single state).
```{.cpp file=suffix_automaton_init}
void sa_init() {
st[0].len = 0;
st[0].link = -1;
sz++;
last = 0;
}
```
And finally we give the implementation of the main function - which adds the next character to the end of the current string, rebuilding the automaton accordingly.
```{.cpp file=suffix_automaton_extend}
void sa_extend(char c) {
int cur = sz++;
st[cur].len = st[last].len + 1;
int p = last;
while (p != -1 && !st[p].next.count(c)) {
st[p].next[c] = cur;
p = st[p].link;
}
if (p == -1) {
st[cur].link = 0;
} else {
int q = st[p].next[c];
if (st[p].len + 1 == st[q].len) {
st[cur].link = q;
} else {
int clone = sz++;
st[clone].len = st[p].len + 1;
st[clone].next = st[q].next;
st[clone].link = st[q].link;
while (p != -1 && st[p].next[c] == q) {
st[p].next[c] = clone;
p = st[p].link;
}
st[q].link = st[cur].link = clone;
}
}
last = cur;
}
```
As mentioned above, if you sacrifice memory ($O(n k)$, where $k$ is the size of the alphabet), then you can achieve a build time of $O(n)$ for the automaton, even for an arbitrary alphabet size $k$.
But for this you will have to store an array of size $k$ in each state (for quickly jumping to the transition of a letter), and additionally a list of all transitions (to quickly iterate over all available transitions).
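If the terminal states are needed, they can be marked after the entire string has been processed, exactly as described above: follow the suffix links from $last$ down to the initial state. A minimal sketch, assuming a hypothetical `bool is_terminal` flag (not part of the structure above) is added to `state` and initialized to `false`:

```cpp
// marks the states corresponding to all suffixes of the processed string as terminal;
// assumes a hypothetical `bool is_terminal` field was added to `state`
void sa_mark_terminals() {
    int p = last;                  // state corresponding to the entire string
    while (p != -1) {              // walk down the suffix links to the initial state
        st[p].is_terminal = true;  // the initial state is marked too (empty suffix)
        p = st[p].link;
    }
}
```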
## Additional properties
### Number of states
The number of states in a suffix automaton of the string $s$ of length $n$ **doesn't exceed** $2n - 1$ (for $n \ge 2$).
The proof is the construction algorithm itself: initially the automaton consists of one state, in each of the first two iterations only a single state is created, and in each of the remaining $n-2$ steps at most $2$ states are created.
However we can also **show** this estimation **without knowing the algorithm**.
Let us recall that the number of states is equal to the number of different sets $endpos$.
In addition these sets $endpos$ form a tree (a parent vertex contains the sets of all its children as subsets).
Consider this tree and transform it a little bit:
as long as it has an internal vertex with only one child (which means that the set of the child misses at least one position from the parent set), we create a new child with the set of the missing positions.
In the end we have a tree in which each inner vertex has a degree greater than one, and the number of leaves does not exceed $n$.
Therefore there are no more than $2n - 1$ vertices in such a tree.
This bound of the number of states can actually be achieved for each $n$.
A possible string is:
$$\text{"abbb}\dots \text{bbb"}$$
In each iteration, starting at the third one, the algorithm will split a state, resulting in exactly $2n - 1$ states.
### Number of transitions
The number of transitions in a suffix automaton of a string $s$ of length $n$ **doesn't exceed** $3n - 4$ (for $n \ge 3$).
Let us prove this:
Let us first estimate the number of continuous transitions.
Consider a spanning tree of the longest paths in the automaton starting in the state $t_0$.
This skeleton will consist of only the continuous edges, and therefore their number is less than the number of states, i.e. it does not exceed $2n - 2$.
Now let us estimate the number of non-continuous transitions.
Let the current non-continuous transition be $(p, q)$ with the character $c$.
We take the corresponding string $u + c + w$, where the string $u$ corresponds to the longest path from the initial state to $p$, and $w$ to the longest path from $q$ to any terminal state.
On one hand, each such string $u + c + w$ will be different for different non-continuous transitions (since the strings $u$ and $w$ are formed only by continuous transitions).
On the other hand each such string $u + c + w$, by the definition of the terminal states, will be a suffix of the entire string $s$.
Since there are only $n$ non-empty suffixes of $s$, and none of the strings $u + c + w$ can be equal to the entire string $s$ (because the path for the entire string only contains continuous transitions), the total number of non-continuous transitions does not exceed $n - 1$.
Combining these two estimates gives us the bound $3n - 3$.
However, since the maximum number of states can only be achieved with the test case $\text{"abbb}\dots \text{bbb"}$ and this case has clearly less than $3n - 3$ transitions, we get the tighter bound of $3n - 4$ for the number of transitions in a suffix automaton.
This bound can also be achieved with the string:
$$\text{"abbb}\dots \text{bbbc"}$$
## Applications
Here we look at some tasks that can be solved using the suffix automaton.
For the simplicity we assume that the alphabet size $k$ is constant, which allows us to consider the complexity of appending a character and the traversal as constant.
### Check for occurrence
Given a text $T$ and multiple patterns $P$.
We have to check whether or not the strings $P$ appear as a substring of $T$.
We build a suffix automaton of the text $T$ in $O(length(T))$ time.
To check if a pattern $P$ appears in $T$, we follow the transitions, starting from $t_0$, according to the characters of $P$.
If at some point there doesn't exists a transition, then the pattern $P$ doesn't appear as a substring of $T$.
If we can process the entire string $P$ this way, then the string appears in $T$.
It is clear that this will take $O(length(P))$ time for each string $P$.
Moreover the algorithm actually finds the length of the longest prefix of $P$ that appears in the text.
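As an illustration, a possible check function is sketched below; it assumes the automaton `st[]` from the implementation above was already built for the text $T$ with `sa_init()` and `sa_extend()`:

```cpp
// returns true if P occurs in T as a substring;
// assumes the suffix automaton st[] was built for the text T beforehand
bool contains(const string& P) {
    int v = 0;                           // start in the initial state t0
    for (char c : P) {
        auto it = st[v].next.find(c);
        if (it == st[v].next.end())
            return false;                // no transition: P is not a substring of T
        v = it->second;
    }
    return true;
}
```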
### Number of different substrings
Given a string $S$.
You want to compute the number of different substrings.
Let us build a suffix automaton for the string $S$.
Each substring of $S$ corresponds to some path in the automaton.
Therefore the number of different substrings is equal to the number of different paths in the automaton starting at $t_0$.
Given that the suffix automaton is a directed acyclic graph, the number of different paths can be computed using dynamic programming.
Namely, let $d[v]$ be the number of paths starting at the state $v$ (including the path of length zero).
Then we have the recursion:
$$d[v] = 1 + \sum_{w : (v, w, c) \in DAWG} d[w]$$
I.e. $d[v]$ can be expressed as the sum of answers for all ends of the transitions of $v$.
The number of different substrings is the value $d[t_0] - 1$ (since we don't count the empty substring).
Total time complexity: $O(length(S))$
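A possible memoized implementation of this dynamic programming, assuming the automaton `st[]` and its size `sz` from the implementation above:

```cpp
// d[v] = number of paths in the automaton starting at state v (including the empty path)
vector<long long> d;

long long count_paths(int v) {
    if (d[v] != -1)
        return d[v];
    long long res = 1;                   // the empty path
    for (auto [c, u] : st[v].next)
        res += count_paths(u);
    return d[v] = res;
}

long long count_distinct_substrings() {
    d.assign(sz, -1);
    return count_paths(0) - 1;           // subtract the empty substring
}
```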
Alternatively, we can take advantage of the fact that each state $v$ matches to substrings of length $[minlen(v),len(v)]$.
Therefore, given $minlen(v) = 1 + len(link(v))$, we have total distinct substrings at state $v$ being $len(v) - minlen(v) + 1 = len(v) - (1 + len(link(v))) + 1 = len(v) - len(link(v))$.
This is demonstrated succinctly below:
```cpp
long long get_diff_strings(){
long long tot = 0;
for(int i = 1; i < sz; i++) {
tot += st[i].len - st[st[i].link].len;
}
return tot;
}
```
While this is also $O(length(S))$, it requires no extra space and no recursive calls, consequently running faster in practice.
### Total length of all different substrings
Given a string $S$.
We want to compute the total length of all its various substrings.
The solution is similar to the previous one, only now it is necessary to consider two quantities for the dynamic programming part:
the number of different substrings $d[v]$ and their total length $ans[v]$.
We already described how to compute $d[v]$ in the previous task.
The value $ans[v]$ can be computed using the recursion:
$$ans[v] = \sum_{w : (v, w, c) \in DAWG} d[w] + ans[w]$$
We take the answer of each adjacent vertex $w$, and add to it $d[w]$ (since every substring is one character longer when starting from the state $v$).
Again this task can be computed in $O(length(S))$ time.
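A sketch of this dynamic programming, again assuming the automaton `st[]` and `sz` from the implementation above:

```cpp
// d2[v]  = number of paths starting at state v (including the empty path)
// ans[v] = total length of the strings spelled by these paths
vector<long long> d2, ans;

void dfs_lengths(int v) {
    if (d2[v] != -1)
        return;                          // already computed
    d2[v] = 1;
    ans[v] = 0;
    for (auto [c, u] : st[v].next) {
        dfs_lengths(u);
        d2[v] += d2[u];
        ans[v] += ans[u] + d2[u];        // every path through u becomes one character longer
    }
}

long long total_length_of_distinct_substrings() {
    d2.assign(sz, -1);
    ans.assign(sz, 0);
    dfs_lengths(0);
    return ans[0];
}
```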
Alternatively, we can, again, take advantage of the fact that each state $v$ matches to substrings of length $[minlen(v),len(v)]$.
Using $minlen(v) = 1 + len(link(v))$ and the arithmetic series formula $S_n = n \cdot \frac{a_1+a_n}{2}$ (where $S_n$ denotes the sum of $n$ terms, $a_1$ the first term, and $a_n$ the last), we can compute the total length of the substrings at a state in constant time. We then sum up these totals for each state $v \neq t_0$ in the automaton. This is shown by the code below:
```cpp
long long get_tot_len_diff_substings() {
long long tot = 0;
for(int i = 1; i < sz; i++) {
long long shortest = st[st[i].link].len + 1;
long long longest = st[i].len;
long long num_strings = longest - shortest + 1;
long long cur = num_strings * (longest + shortest) / 2;
tot += cur;
}
return tot;
}
```
This approach runs in $O(length(S))$ time, but experimentally runs 20x faster than the memoized dynamic programming version on randomized strings. It requires no extra space and no recursion.
### Lexicographically $k$-th substring {data-toc-label="Lexicographically k-th substring"}
Given a string $S$.
We have to answer multiple queries.
For each given number $K_i$ we have to find the $K_i$-th string in the lexicographically ordered list of all substrings.
The solution of this problem is based on the idea of the previous two problems.
The lexicographically $k$-th substring corresponds to the lexicographically $k$-th path in the suffix automaton.
Therefore after counting the number of paths from each state, we can easily search for the $k$-th path starting from the root of the automaton.
This takes $O(length(S))$ time for preprocessing and then $O(length(ans) \cdot k)$ for each query (where $ans$ is the answer for the query and $k$ is the size of the alphabet).
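A possible query routine, assuming the path counts $d[v]$ from the counting sketch above have already been computed and that $K_i$ does not exceed the total number of substrings:

```cpp
// returns the k-th (1-indexed) lexicographically smallest distinct substring;
// assumes d[v] (number of paths starting at v, including the empty path) is precomputed
string kth_substring(long long k) {
    string res;
    int v = 0;
    while (k > 0) {
        for (auto [c, u] : st[v].next) { // a map iterates over the characters in sorted order
            if (d[u] >= k) {
                res += c;                // descend into this subtree
                v = u;
                k--;                     // the string res itself is counted here
                break;
            }
            k -= d[u];                   // skip all substrings starting with res + c
        }
    }
    return res;
}
```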
### Smallest cyclic shift
Given a string $S$.
We want to find the lexicographically smallest cyclic shift.
We construct a suffix automaton for the string $S + S$.
Then the automaton will contain in itself as paths all the cyclic shifts of the string $S$.
Consequently the problem is reduced to finding the lexicographically smallest path of length $length(S)$, which can be done in a trivial way: we start in the initial state and greedily pass through the transitions with the minimal character.
Total time complexity is $O(length(S))$.
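A short sketch of this greedy traversal, assuming the automaton was built for $S + S$ using `sa_init()` and `sa_extend()`:

```cpp
// returns the lexicographically smallest cyclic shift of S (n = length of S);
// assumes the suffix automaton st[] was already built for the string S + S
string smallest_cyclic_shift(int n) {
    string res;
    int v = 0;
    for (int i = 0; i < n; i++) {
        auto it = st[v].next.begin();    // the map keeps transitions sorted by character
        res += it->first;                // greedily take the smallest possible character
        v = it->second;
    }
    return res;
}
```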
### Number of occurrences
For a given text $T$.
We have to answer multiple queries.
For each given pattern $P$ we have to find out how many times the string $P$ appears in the string $T$ as substring.
We construct the suffix automaton for the text $T$.
Next we do the following preprocessing:
for each state $v$ in the automaton we calculate the number $cnt[v]$ that is equal to the size of the set $endpos(v)$.
In fact all strings corresponding to the same state $v$ appear in the text $T$ an equal amount of times, which is equal to the number of positions in the set $endpos$.
However we cannot construct the sets $endpos$ explicitly, therefore we only consider their sizes $cnt$.
To compute them we proceed as follows.
For each state, if it was not created by cloning (and if it is not the initial state $t_0$), we initialize it with $cnt = 1$.
Then we will go through all states in decreasing order of their length $len$, and add the current value $cnt[v]$ to the suffix links:
$$cnt[link(v)] \text{ += } cnt[v]$$
This gives the correct value for each state.
Why is this correct?
The total number of states obtained _not_ via cloning is exactly $length(T)$, and the first $i$ of them appeared when we added the first $i$ characters.
Consequently for each of these states we count the corresponding position at which it was processed.
Therefore initially we have $cnt = 1$ for each such state, and $cnt = 0$ for all other.
Then we apply the following operation for each $v$: $cnt[link(v)] \text{ += } cnt[v]$.
The meaning behind this is, that if a string $v$ appears $cnt[v]$ times, then also all its suffixes appear at the exact same end positions, therefore also $cnt[v]$ times.
Why don't we overcount in this procedure (i.e. don't count some position twice)?
Because we add the positions of a state to only one other state, so it can not happen that one state directs its positions to another state twice in two different ways.
Thus we can compute the quantities $cnt$ for all states in the automaton in $O(length(T))$ time.
After that we can answer a query by simply looking up the value $cnt[t]$, where $t$ is the state corresponding to the pattern $P$, if such a state exists.
Otherwise answer with $0$.
Answering a query takes $O(length(P))$ time.
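A sketch of this preprocessing; it assumes a `bool is_clone` flag was added to `state` and is set inside `sa_extend()` whenever a state is created by cloning (the same flag is used in the sketch further below for finding all occurrence positions):

```cpp
// computes cnt[v] = |endpos(v)| for every state;
// assumes a `bool is_clone` flag in `state`, set in sa_extend() for cloned states
vector<long long> cnt;

void compute_cnt() {
    cnt.assign(sz, 0);
    for (int v = 1; v < sz; v++)
        if (!st[v].is_clone)
            cnt[v] = 1;                  // one end position per non-cloned state
    // process the states in decreasing order of len and push cnt along the suffix links
    vector<int> order(sz);
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(),
         [](int a, int b) { return st[a].len > st[b].len; });
    for (int v : order)
        if (st[v].link != -1)
            cnt[st[v].link] += cnt[v];
}
```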
### First occurrence position
Given a text $T$ and multiple queries.
For each query string $P$ we want to find the position of the first occurrence of $P$ in the string $T$ (the position of the beginning of $P$).
We again construct a suffix automaton.
Additionally we precompute the position $firstpos$ for all states in the automaton, i.e. for each state $v$ we want to find the position $firstpos[v]$ of the end of the first occurrence.
In other words, we want to find in advance the minimal element of each set $endpos$ (since we obviously cannot maintain all sets $endpos$ explicitly).
To maintain these positions $firstpos$ we extend the function `sa_extend()`.
When we create a new state $cur$, we set:
$$firstpos(cur) = len(cur) - 1$$
And when we clone a vertex $q$ as $clone$, we set:
$$firstpos(clone) = firstpos(q)$$
(since the only other option for a value would be $firstpos(cur)$ which is definitely too big)
Thus the answer for a query is simply $firstpos(t) - length(P) + 1$, where $t$ is the state corresponding to the string $P$.
Answering a query again takes only $O(length(P))$ time.
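A possible version of `sa_extend()` with these two additional assignments; it assumes an `int first_pos` field in `state` (the same field is used in the sketch of the next section):

```cpp
// sa_extend() extended with the firstpos computation;
// assumes an `int first_pos` field was added to `state`
void sa_extend_firstpos(char c) {
    int cur = sz++;
    st[cur].len = st[last].len + 1;
    st[cur].first_pos = st[cur].len - 1;           // end of the first occurrence of the new string
    int p = last;
    while (p != -1 && !st[p].next.count(c)) {
        st[p].next[c] = cur;
        p = st[p].link;
    }
    if (p == -1) {
        st[cur].link = 0;
    } else {
        int q = st[p].next[c];
        if (st[p].len + 1 == st[q].len) {
            st[cur].link = q;
        } else {
            int clone = sz++;
            st[clone].len = st[p].len + 1;
            st[clone].next = st[q].next;
            st[clone].link = st[q].link;
            st[clone].first_pos = st[q].first_pos; // the clone inherits the first occurrence of q
            while (p != -1 && st[p].next[c] == q) {
                st[p].next[c] = clone;
                p = st[p].link;
            }
            st[q].link = st[cur].link = clone;
        }
    }
    last = cur;
}
```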
### All occurrence positions
This time we have to display all positions of the occurrences in the string $T$.
Again we construct a suffix automaton for the text $T$.
Similar as in the previous task we compute the position $firstpos$ for all states.
Clearly $firstpos(t)$ is part of the answer, if $t$ is the state corresponding to a query string $P$.
So we took into account the state of the automaton containing $P$.
What other states do we need to take into account?
All states that correspond to strings for which $P$ is a suffix.
In other words we need to find all the states that can reach the state $t$ via suffix links.
Therefore to solve the problem we need to save for each state a list of the suffix links leading to it (the inverse suffix links).
The answer to the query will then contain all the values $firstpos$ of the states that we find with a DFS / BFS starting from the state $t$ and using only the inverse suffix links.
This traversal will work in $O(answer(P))$ time, because we will not visit a state twice (since only one suffix link leaves each state, there cannot be two different paths leading to the same state).
We only must take into account that two different states can have the same $firstpos$ value.
This happens if one state was obtained by cloning another.
However, this doesn't ruin the complexity, since each state can only have at most one clone.
Moreover, we can also get rid of the duplicate positions, if we don't output the positions from the cloned states.
In fact, the original state $q$ of any clone lies in the inverse-link subtree of that clone (since $link(q) = clone$) and has the same $firstpos$ value, so nothing is lost by skipping the clone.
Thus if we remember the flag `is_cloned` for each state, we can simply ignore the cloned states and only output $firstpos$ for all other states.
Here are some implementation sketches:
```cpp
struct state {
...
bool is_clone;
int first_pos;
vector<int> inv_link;
};
// after constructing the automaton
for (int v = 1; v < sz; v++) {
st[st[v].link].inv_link.push_back(v);
}
// output all positions of occurrences
void output_all_occurrences(int v, int P_length) {
if (!st[v].is_clone)
cout << st[v].first_pos - P_length + 1 << endl;
for (int u : st[v].inv_link)
output_all_occurrences(u, P_length);
}
```
### Shortest non-appearing string
Given a string $S$ and a certain alphabet.
We have to find a string of smallest length, that doesn't appear in $S$.
We will apply dynamic programming on the suffix automaton built for the string $S$.
Let $d[v]$ be the answer for the node $v$, i.e. we already processed part of the substring, are currently in the state $v$, and want to find the smallest number of characters that have to be added to find a non-existent transition.
Computing $d[v]$ is very simple.
If there is at least one character of the alphabet for which there is no transition from $v$, then $d[v] = 1$.
Otherwise one character is not enough, and so we need to take the minimum of all answers of all transitions:
$$d[v] = 1 + \min_{w:(v,w,c) \in SA} d[w].$$
The answer to the problem will be $d[t_0]$, and the actual string can be restored using the computed array $d[]$.
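A sketch of this dynamic programming for a lowercase alphabet of an assumed size `K`, together with the reconstruction of the answer (the automaton `st[]` is assumed to be built for $S$):

```cpp
// dmin[v] = minimal number of characters needed from state v to leave the automaton;
// K is the assumed alphabet size ('a' .. 'a' + K - 1)
const int K = 26;
vector<int> dmin;

int calc(int v) {
    if (dmin[v] != -1)
        return dmin[v];
    dmin[v] = INT_MAX;
    for (int i = 0; i < K; i++) {
        char c = 'a' + i;
        if (!st[v].next.count(c))
            return dmin[v] = 1;          // a single missing character is enough
        dmin[v] = min(dmin[v], 1 + calc(st[v].next[c]));
    }
    return dmin[v];
}

string shortest_non_appearing() {
    dmin.assign(sz, -1);
    calc(0);
    string res;
    int v = 0;
    while (true) {
        for (int i = 0; i < K; i++) {
            char c = 'a' + i;
            if (!st[v].next.count(c))    // this character completes the answer
                return res + c;
            if (dmin[st[v].next[c]] == dmin[v] - 1) {
                res += c;                // follow a transition on an optimal path
                v = st[v].next[c];
                break;
            }
        }
    }
}
```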
### Longest common substring of two strings
Given two strings $S$ and $T$.
We have to find the longest common substring, i.e. such a string $X$ that appears as substring in $S$ and also in $T$.
We construct a suffix automaton for the string $S$.
We will now take the string $T$, and for each prefix look for the longest suffix of this prefix in $S$.
In other words, for each position in the string $T$, we want to find the longest common substring of $S$ and $T$ ending in that position.
For this we will use two variables, the **current state** $v$, and the **current length** $l$.
These two variables will describe the current matching part: its length and the state that corresponds to it.
Initially $v = t_0$ and $l = 0$, i.e. the match is empty.
Now let us describe how we can add a character $T[i]$ and recalculate the answer for it.
- If there is a transition from $v$ with the character $T[i]$, then we simply follow the transition and increase $l$ by one.
- If there is no such transition, we have to shorten the current matching part, which means that we need to follow the suffix link: $v = link(v)$.
At the same time, the current length has to be shortened.
Obviously we need to assign $l = len(v)$, since after passing through the suffix link the longest string corresponding to the new state has length $len(v)$, which can be smaller than the current value of $l$.
- If there is still no transition using the required character, we repeat and again go through the suffix link and decrease $l$, until we find a transition or we reach the fictional state $-1$ (which means that the symbol $T[i]$ doesn't appear at all in $S$, so we assign $v = l = 0$).
The answer to the task will be the maximum of all the values $l$.
The complexity of this part is $O(length(T))$, since in one move we can either increase $l$ by one, or make several passes through the suffix links, each one ends up reducing the value $l$.
Implementation:
```cpp
string lcs (string S, string T) {
sa_init();
for (int i = 0; i < S.size(); i++)
sa_extend(S[i]);
int v = 0, l = 0, best = 0, bestpos = 0;
for (int i = 0; i < T.size(); i++) {
while (v && !st[v].next.count(T[i])) {
v = st[v].link;
l = st[v].len;
}
if (st[v].next.count(T[i])) {
v = st[v].next[T[i]];
l++;
}
if (l > best) {
best = l;
bestpos = i;
}
}
return T.substr(bestpos - best + 1, best);
}
```
### Largest common substring of multiple strings
There are $k$ strings $S_i$ given.
We have to find the longest common substring, i.e. such a string $X$ that appears as substring in each string $S_i$.
We join all strings into one large string $T$, separating the strings by special characters $D_i$ (one for each string):
$$T = S_1 + D_1 + S_2 + D_2 + \dots + S_k + D_k.$$
Then we construct the suffix automaton for the string $T$.
Now we need to find a string in the automaton that is contained in all the strings $S_i$, and this can be done by using the added special characters.
Note that if a substring is included in some string $S_j$, then there exists a path in the suffix automaton starting from the state of this substring that contains the character $D_j$ and does not contain the other separator characters $D_1, \dots, D_{j-1}, D_{j+1}, \dots, D_k$.
Thus we need to calculate the reachability, which tells us for each state of the automaton and each symbol $D_i$ whether there exists such a path.
This can easily be computed by DFS or BFS and dynamic programming.
After that, the answer to the problem will be the string $longest(v)$ for the state $v$, from which such paths exist for all special characters.
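A possible sketch of this reachability computation via a bitmask over the separators; it assumes the automaton `st[]` was built for the concatenated string $T$, that the hypothetical helpers `is_separator(c)` and `sep_index(c)` identify the characters $D_i$, and that $k$ is small enough to fit into a 64-bit mask. The states are processed in decreasing order of $len$, which is one possible way to carry out the dynamic programming on the DAG:

```cpp
// returns the length of the longest common substring of S_1, ..., S_k;
// assumes the automaton st[] was built for T = S_1 + D_1 + ... + S_k + D_k and
// hypothetical helpers is_separator(c) / sep_index(c) identifying the separators
int lcs_of_many(int k) {
    // decreasing len gives a reverse topological order of the transition DAG,
    // since every transition leads to a state with strictly larger len
    vector<int> order(sz);
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(),
         [](int a, int b) { return st[a].len > st[b].len; });

    vector<long long> reach(sz, 0);      // bitmask of separators reachable without crossing others
    int best = 0;
    for (int v : order) {
        for (auto [c, u] : st[v].next) {
            if (is_separator(c))
                reach[v] |= 1LL << sep_index(c);   // the path stops at this separator
            else
                reach[v] |= reach[u];
        }
        if (v != 0 && reach[v] == (1LL << k) - 1)  // every separator is reachable
            best = max(best, st[v].len);
    }
    return best;                          // the answer string is longest(v) for the best state v
}
```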
## Practice Problems
- [CSES - Finding Patterns](https://cses.fi/problemset/task/2102)
- [CSES - Counting Patterns](https://cses.fi/problemset/task/2103)
- [CSES - String Matching](https://cses.fi/problemset/task/1753)
- [CSES - Patterns Positions](https://cses.fi/problemset/task/2104)
- [CSES - Distinct Substrings](https://cses.fi/problemset/task/2105)
- [CSES - Word Combinations](https://cses.fi/problemset/task/1731)
- [CSES - String Distribution](https://cses.fi/problemset/task/2110)
- [AtCoder - K-th Substring](https://atcoder.jp/contests/abc097/tasks/arc097_a)
- [SPOJ - SUBLEX](https://www.spoj.com/problems/SUBLEX/)
- [Codeforces - Cyclical Quest](https://codeforces.com/problemset/problem/235/C)
- [Codeforces - String](https://codeforces.com/contest/128/problem/B)
|
Suffix Automaton
|
---
title
expressions_parsing
---
# Expression parsing
A string containing a mathematical expression with numbers and various operators is given.
We have to compute the value of it in $O(n)$, where $n$ is the length of the string.
The algorithm discussed here translates an expression into the so-called **reverse Polish notation** (explicitly or implicitly), and evaluates this expression.
## Reverse Polish notation
The reverse Polish notation is a form of writing mathematical expressions, in which the operators are located after their operands.
For example the following expression
$$a + b * c * d + (e - f) * (g * h + i)$$
can be written in reverse Polish notation in the following way:
$$a b c * d * + e f - g h * i + * +$$
The reverse Polish notation was developed by the Australian philosopher and computer science specialist Charles Hamblin in the mid 1950s on the basis of the Polish notation, which was proposed in 1920 by the Polish mathematician Jan Łukasiewicz.
The convenience of the reverse Polish notation is, that expressions in this form are very **easy to evaluate** in linear time.
We use a stack, which is initially empty.
We will iterate over the operands and operators of the expression in reverse Polish notation.
If the current element is a number, then we put the value on top of the stack, if the current element is an operator, then we get the top two elements from the stack, perform the operation, and put the result back on top of the stack.
In the end there will be exactly one element left in the stack, which will be the value of the expression.
Obviously this simple evaluation runs in $O(n)$ time.
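For illustration, a small sketch of such an evaluation; the tokens (integers and the operators $+$ $-$ $*$ $/$) are assumed to be separated by spaces, which is a simplification made only for this sketch:

```cpp
// evaluates an expression in reverse Polish notation with space-separated tokens
int evaluate_rpn(const string& expr) {
    stack<int> st;
    stringstream ss(expr);
    string token;
    while (ss >> token) {
        if (token == "+" || token == "-" || token == "*" || token == "/") {
            int r = st.top(); st.pop();
            int l = st.top(); st.pop();
            if (token == "+") st.push(l + r);
            else if (token == "-") st.push(l - r);
            else if (token == "*") st.push(l * r);
            else st.push(l / r);
        } else {
            st.push(stoi(token));        // a number token
        }
    }
    return st.top();                     // exactly one value remains: the result
}
```

For example, `evaluate_rpn("2 3 4 * +")` returns $14$, corresponding to the expression $2 + 3 \cdot 4$.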
## Parsing of simple expressions
For the time being we only consider a simplified problem:
we assume that all operators are **binary** (i.e. they take two arguments), and all are **left-associative** (if the priorities are equal, they get executed from left to right).
Parentheses are allowed.
We will set up two stacks: one for numbers, and one for operators and parentheses.
Initially both stacks are empty.
For the second stack we will maintain the condition that all operations are ordered by strict descending priority.
If there are parentheses on the stack, then each block of operators (corresponding to one pair of parentheses) is ordered, and the entire stack is not necessarily ordered.
We will iterate over the characters of the expression from left to right.
If the current character is a digit, then we put the value of this number on the stack.
If the current character is an opening parenthesis, then we put it on the stack.
If the current character is a closing parenthesis, then we execute all operators on the stack until we reach the opening parenthesis (in other words we perform all operations inside the parentheses).
Finally if the current character is an operator, then while the top of the stack has an operator with the same or higher priority, we will execute this operation, and put the new operation on the stack.
After we processed the entire string, some operators might still be in the stack, so we execute them.
Here is the implementation of this method for the four operators $+$ $-$ $*$ $/$:
```{.cpp file=expression_parsing_simple}
bool delim(char c) {
return c == ' ';
}
bool is_op(char c) {
return c == '+' || c == '-' || c == '*' || c == '/';
}
int priority (char op) {
if (op == '+' || op == '-')
return 1;
if (op == '*' || op == '/')
return 2;
return -1;
}
void process_op(stack<int>& st, char op) {
int r = st.top(); st.pop();
int l = st.top(); st.pop();
switch (op) {
case '+': st.push(l + r); break;
case '-': st.push(l - r); break;
case '*': st.push(l * r); break;
case '/': st.push(l / r); break;
}
}
int evaluate(string& s) {
stack<int> st;
stack<char> op;
for (int i = 0; i < (int)s.size(); i++) {
if (delim(s[i]))
continue;
if (s[i] == '(') {
op.push('(');
} else if (s[i] == ')') {
while (op.top() != '(') {
process_op(st, op.top());
op.pop();
}
op.pop();
} else if (is_op(s[i])) {
char cur_op = s[i];
while (!op.empty() && priority(op.top()) >= priority(cur_op)) {
process_op(st, op.top());
op.pop();
}
op.push(cur_op);
} else {
int number = 0;
while (i < (int)s.size() && isalnum(s[i]))
number = number * 10 + s[i++] - '0';
--i;
st.push(number);
}
}
while (!op.empty()) {
process_op(st, op.top());
op.pop();
}
return st.top();
}
```
Thus we learned how to calculate the value of an expression in $O(n)$, at the same time we implicitly used the reverse Polish notation.
By slightly modifying the above implementation it is also possible to obtain the expression in reverse Polish notation in an explicit form.
## Unary operators
Now suppose that the expression also contains **unary** operators (operators that take one argument).
The unary plus and unary minus are common examples of such operators.
One of the differences in this case, is that we need to determine whether the current operator is a unary or a binary one.
You can notice that before a unary operator there is always another operator, an opening parenthesis, or nothing at all (if it is at the very beginning of the expression).
On the contrary before a binary operator there will always be an operand (number) or a closing parenthesis.
Thus it is easy to flag whether the next operator can be unary or not.
Additionally we need to execute a unary and a binary operator differently.
And we need to choose the priority of a unary operator to be higher than the priorities of all binary operators.
In addition it should be noted, that some unary operators (e.g. unary plus and unary minus) are actually **right-associative**.
## Right-associativity
Right-associative means, that whenever the priorities are equal, the operators must be evaluated from right to left.
As noted above, unary operators are usually right-associative.
Another example of a right-associative operator is the exponentiation operator ($a \wedge b \wedge c$ is usually perceived as $a^{b^c}$ and not as $(a^b)^c$).
What difference do we need to make in order to correctly handle right-associative operators?
It turns out that the changes are very minimal.
The only difference will be, if the priorities are equal we will postpone the execution of the right-associative operation.
The only line that needs to be replaced is
```cpp
while (!op.empty() && priority(op.top()) >= priority(cur_op))
```
with
```cpp
while (!op.empty() && (
(left_assoc(cur_op) && priority(op.top()) >= priority(cur_op)) ||
(!left_assoc(cur_op) && priority(op.top()) > priority(cur_op))
))
```
where `left_assoc` is a function that decides if an operator is left-associative or not.
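For example, a possible `left_assoc` could look as follows; here `^` is used as a hypothetical right-associative exponentiation operator and is not handled by the rest of the code in this article:

```cpp
// decides whether an operator is left-associative;
// '^' (exponentiation) serves as a hypothetical right-associative example
bool left_assoc(char op) {
    return op != '^';
}
```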
Here is an implementation for the binary operators $+$ $-$ $*$ $/$ and the unary operators $+$ and $-$.
```{.cpp file=expression_parsing_unary}
bool delim(char c) {
return c == ' ';
}
bool is_op(char c) {
return c == '+' || c == '-' || c == '*' || c == '/';
}
bool is_unary(char c) {
return c == '+' || c=='-';
}
int priority (char op) {
if (op < 0) // unary operator
return 3;
if (op == '+' || op == '-')
return 1;
if (op == '*' || op == '/')
return 2;
return -1;
}
void process_op(stack<int>& st, char op) {
if (op < 0) {
int l = st.top(); st.pop();
switch (-op) {
case '+': st.push(l); break;
case '-': st.push(-l); break;
}
} else {
int r = st.top(); st.pop();
int l = st.top(); st.pop();
switch (op) {
case '+': st.push(l + r); break;
case '-': st.push(l - r); break;
case '*': st.push(l * r); break;
case '/': st.push(l / r); break;
}
}
}
int evaluate(string& s) {
stack<int> st;
stack<char> op;
bool may_be_unary = true;
for (int i = 0; i < (int)s.size(); i++) {
if (delim(s[i]))
continue;
if (s[i] == '(') {
op.push('(');
may_be_unary = true;
} else if (s[i] == ')') {
while (op.top() != '(') {
process_op(st, op.top());
op.pop();
}
op.pop();
may_be_unary = false;
} else if (is_op(s[i])) {
char cur_op = s[i];
if (may_be_unary && is_unary(cur_op))
cur_op = -cur_op;
while (!op.empty() && (
(cur_op >= 0 && priority(op.top()) >= priority(cur_op)) ||
(cur_op < 0 && priority(op.top()) > priority(cur_op))
)) {
process_op(st, op.top());
op.pop();
}
op.push(cur_op);
may_be_unary = true;
} else {
int number = 0;
while (i < (int)s.size() && isalnum(s[i]))
number = number * 10 + s[i++] - '0';
--i;
st.push(number);
may_be_unary = false;
}
}
while (!op.empty()) {
process_op(st, op.top());
op.pop();
}
return st.top();
}
```
|
Expression parsing
|
---
title
prefix_function
---
# Prefix function. Knuth–Morris–Pratt algorithm
## Prefix function definition
You are given a string $s$ of length $n$.
The **prefix function** for this string is defined as an array $\pi$ of length $n$, where $\pi[i]$ is the length of the longest proper prefix of the substring $s[0 \dots i]$ which is also a suffix of this substring.
A proper prefix of a string is a prefix that is not equal to the string itself.
By definition, $\pi[0] = 0$.
Mathematically the definition of the prefix function can be written as follows:
$$\pi[i] = \max_ {k = 0 \dots i} \{k : s[0 \dots k-1] = s[i-(k-1) \dots i] \}$$
For example, prefix function of string "abcabcd" is $[0, 0, 0, 1, 2, 3, 0]$, and prefix function of string "aabaaab" is $[0, 1, 0, 1, 2, 2, 3]$.
## Trivial Algorithm
An algorithm which follows the definition of prefix function exactly is the following:
```{.cpp file=prefix_slow}
vector<int> prefix_function(string s) {
int n = (int)s.length();
vector<int> pi(n);
for (int i = 0; i < n; i++)
for (int k = 0; k <= i; k++)
if (s.substr(0, k) == s.substr(i-k+1, k))
pi[i] = k;
return pi;
}
```
It is easy to see that its complexity is $O(n^3)$, which has room for improvement.
## Efficient Algorithm
This algorithm was proposed by Knuth and Pratt and, independently of them, by Morris in 1977.
It was used as the main function of a substring search algorithm.
### First optimization
The first important observation is that the values of the prefix function can increase by at most one.
Indeed, otherwise, if $\pi[i + 1] \gt \pi[i] + 1$, then we can take this suffix ending in position $i + 1$ with the length $\pi[i + 1]$ and remove the last character from it.
We end up with a suffix ending in position $i$ with the length $\pi[i + 1] - 1$, which is better than $\pi[i]$, i.e. we get a contradiction.
The following illustration shows this contradiction.
The longest proper suffix at position $i$ that also is a prefix is of length $2$, and at position $i+1$ it is of length $4$.
Therefore the string $s_0 ~ s_1 ~ s_2 ~ s_3$ is equal to the string $s_{i-2} ~ s_{i-1} ~ s_i ~ s_{i+1}$, which means that also the strings $s_0 ~ s_1 ~ s_2$ and $s_{i-2} ~ s_{i-1} ~ s_i$ are equal, therefore $\pi[i]$ has to be $3$.
$$\underbrace{\overbrace{s_0 ~ s_1}^{\pi[i] = 2} ~ s_2 ~ s_3}_{\pi[i+1] = 4} ~ \dots ~ \underbrace{s_{i-2} ~ \overbrace{s_{i-1} ~ s_{i}}^{\pi[i] = 2} ~ s_{i+1}}_{\pi[i+1] = 4}$$
Thus when moving to the next position, the value of the prefix function can either increase by one, stay the same, or decrease by some amount.
This fact already allows us to reduce the complexity of the algorithm to $O(n^2)$, because in one step the prefix function can grow at most by one.
In total the function can grow by at most $n$ over the whole string, and therefore it can also decrease by at most $n$ in total.
This means we only have to perform $O(n)$ string comparisons, each taking $O(n)$ time, and reach the complexity $O(n^2)$.
### Second optimization
Let's go further: we want to get rid of the string comparisons.
To accomplish this, we have to use all the information computed in the previous steps.
So let us compute the value of the prefix function $\pi$ for $i + 1$.
If $s[i+1] = s[\pi[i]]$, then we can say with certainty that $\pi[i+1] = \pi[i] + 1$, since we already know that the suffix at position $i$ of length $\pi[i]$ is equal to the prefix of length $\pi[i]$.
This is illustrated again with an example.
$$\underbrace{\overbrace{s_0 ~ s_1 ~ s_2}^{\pi[i]} ~ \overbrace{s_3}^{s_3 = s_{i+1}}}_{\pi[i+1] = \pi[i] + 1} ~ \dots ~ \underbrace{\overbrace{s_{i-2} ~ s_{i-1} ~ s_{i}}^{\pi[i]} ~ \overbrace{s_{i+1}}^{s_3 = s_{i + 1}}}_{\pi[i+1] = \pi[i] + 1}$$
If this is not the case, $s[i+1] \neq s[\pi[i]]$, then we need to try a shorter string.
In order to speed things up, we would like to immediately move to the longest length $j \lt \pi[i]$, such that the prefix property in the position $i$ holds, i.e. $s[0 \dots j-1] = s[i-j+1 \dots i]$:
$$\overbrace{\underbrace{s_0 ~ s_1}_j ~ s_2 ~ s_3}^{\pi[i]} ~ \dots ~ \overbrace{s_{i-3} ~ s_{i-2} ~ \underbrace{s_{i-1} ~ s_{i}}_j}^{\pi[i]} ~ s_{i+1}$$
Indeed, if we find such a length $j$, then we again only need to compare the characters $s[i+1]$ and $s[j]$.
If they are equal, then we can assign $\pi[i+1] = j + 1$.
Otherwise we will need to find the largest value smaller than $j$, for which the prefix property holds, and so on.
It can happen that this goes until $j = 0$.
If then $s[i+1] = s[0]$, we assign $\pi[i+1] = 1$, and $\pi[i+1] = 0$ otherwise.
So we already have a general scheme of the algorithm.
The only question left is how we can efficiently find these lengths $j$.
Let's recap:
for the current length $j$ at the position $i$ for which the prefix property holds, i.e. $s[0 \dots j-1] = s[i-j+1 \dots i]$, we want to find the greatest $k \lt j$, for which the prefix property holds.
$$\overbrace{\underbrace{s_0 ~ s_1}_k ~ s_2 ~ s_3}^j ~ \dots ~ \overbrace{s_{i-3} ~ s_{i-2} ~ \underbrace{s_{i-1} ~ s_{i}}_k}^j ~s_{i+1}$$
The illustration shows, that this has to be the value of $\pi[j-1]$, which we already calculated earlier.
### Final algorithm
So we finally can build an algorithm that doesn't perform any string comparisons and only performs $O(n)$ actions.
Here is the final procedure:
- We compute the prefix values $\pi[i]$ in a loop by iterating from $i = 1$ to $i = n-1$ ($\pi[0]$ just gets assigned with $0$).
- To calculate the current value $\pi[i]$ we set the variable $j$ denoting the length of the best suffix for $i-1$. Initially $j = \pi[i-1]$.
- Test if the suffix of length $j+1$ is also a prefix by comparing $s[j]$ and $s[i]$.
If they are equal then we assign $\pi[i] = j + 1$, otherwise we reduce $j$ to $\pi[j-1]$ and repeat this step.
- If we have reached the length $j = 0$ and still don't have a match, then we assign $\pi[i] = 0$ and go to the next index $i + 1$.
### Implementation
The implementation ends up being surprisingly short and expressive.
```{.cpp file=prefix_fast}
vector<int> prefix_function(string s) {
int n = (int)s.length();
vector<int> pi(n);
for (int i = 1; i < n; i++) {
int j = pi[i-1];
while (j > 0 && s[i] != s[j])
j = pi[j-1];
if (s[i] == s[j])
j++;
pi[i] = j;
}
return pi;
}
```
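For instance (this small check is not part of the article's code), calling the function on the second example string from the definition section reproduces the values stated there:

```cpp
vector<int> pi = prefix_function("aabaaab");
// pi == {0, 1, 0, 1, 2, 2, 3}, matching the example above
```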
This is an **online** algorithm, i.e. it processes the data as it arrives - for example, you can read the string characters one by one and process them immediately, finding the value of prefix function for each next character.
The algorithm still requires storing the string itself and the previously calculated values of prefix function, but if we know beforehand the maximum value $M$ the prefix function can take on the string, we can store only $M+1$ first characters of the string and the same number of values of the prefix function.
## Applications
### Search for a substring in a string. The Knuth-Morris-Pratt algorithm
The task is the classical application of the prefix function.
Given a text $t$ and a string $s$, we want to find and display the positions of all occurrences of the string $s$ in the text $t$.
For convenience we denote with $n$ the length of the string $s$ and with $m$ the length of the text $t$.
We generate the string $s + \# + t$, where $\#$ is a separator that appears neither in $s$ nor in $t$.
Let us calculate the prefix function for this string.
Now think about the meaning of the values of the prefix function, except for the first $n + 1$ entries (which belong to the string $s$ and the separator).
By definition the value $\pi[i]$ shows the longest length of a substring ending in position $i$ that coincides with the prefix.
But in our case this is nothing more than the largest block that coincides with $s$ and ends at position $i$.
This length cannot be bigger than $n$ due to the separator.
But if the equality $\pi[i] = n$ is achieved, then the string $s$ appears completely at this position, i.e. it ends at position $i$.
Just do not forget that the positions are indexed in the string $s + \# + t$.
Thus if at some position $i$ we have $\pi[i] = n$, then at the position $i - (n + 1) - n + 1 = i - 2n$ in the string $t$ the string $s$ appears.
As already mentioned in the description of the prefix function computation, if we know that the prefix values never exceed a certain value, then we do not need to store the entire string and the entire function, but only its beginning.
In our case this means that we only need to store the string $s + \#$ and the values of the prefix function for it.
We can read one character at a time of the string $t$ and calculate the current value of the prefix function.
Thus the Knuth-Morris-Pratt algorithm solves the problem in $O(n + m)$ time and $O(n)$ memory.
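The following sketch implements the search exactly as described above (the function name `find_occurrences` and the choice of returning 0-indexed positions in $t$ are made here, not part of the article):

```cpp
vector<int> find_occurrences(string const& s, string const& t) {
    string cur = s + '#' + t;
    int n = s.size();
    vector<int> pi = prefix_function(cur);
    vector<int> positions;
    for (int i = n + 1; i < (int)cur.size(); i++) {
        if (pi[i] == n)                      // s ends at position i of cur
            positions.push_back(i - 2 * n);  // start of the occurrence inside t
    }
    return positions;
}
```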
### Counting the number of occurrences of each prefix
Here we discuss two problems at once.
Given a string $s$ of length $n$.
In the first variation of the problem we want to count the number of appearances of each prefix $s[0 \dots i]$ in the same string.
In the second variation of the problem another string $t$ is given and we want to count the number of appearances of each prefix $s[0 \dots i]$ in $t$.
First we solve the first problem.
Consider the value of the prefix function $\pi[i]$ at a position $i$.
By definition it means that the prefix of length $\pi[i]$ of the string $s$ occurs and ends at position $i$, and no longer prefix does.
At the same time shorter prefixes can end at this position.
It is not difficult to see, that we have the same question that we already answered when we computed the prefix function itself:
Given a prefix of length $j$ that is a suffix ending at position $i$, what is the next smaller prefix $\lt j$ that is also a suffix ending at position $i$?
Thus at the position $i$ ends the prefix of length $\pi[i]$, the prefix of length $\pi[\pi[i] - 1]$, the prefix $\pi[\pi[\pi[i] - 1] - 1]$, and so on, until the index becomes zero.
Thus we can compute the answer in the following way.
```{.cpp file=prefix_count_each_prefix}
vector<int> ans(n + 1);
for (int i = 0; i < n; i++)
ans[pi[i]]++;
for (int i = n-1; i > 0; i--)
ans[pi[i-1]] += ans[i];
for (int i = 0; i <= n; i++)
ans[i]++;
```
Here for each value of the prefix function we first count how many times it occurs in the array $\pi$, and then compute the final answers:
if we know that the prefix of length $i$ appears exactly $\text{ans}[i]$ times, then this number must be added to the number of occurrences of its longest suffix that is also a prefix.
At the end we need to add $1$ to each result, since we also have to count the original prefixes themselves.
Now let us consider the second problem.
We apply the trick from Knuth-Morris-Pratt:
we create the string $s + \# + t$ and compute its prefix function.
The only difference to the first task is that we are only interested in the prefix values that relate to the string $t$, i.e. $\pi[i]$ for $i \ge n + 1$.
With those values we can perform the exact same computations as in the first task.
### The number of different substrings in a string
Given a string $s$ of length $n$.
We want to compute the number of different substrings appearing in it.
We will solve this problem iteratively.
Namely we will learn, knowing the current number of different substrings, how to recompute this count by adding a character to the end.
So let $k$ be the current number of different substrings in $s$, and we add the character $c$ to the end of $s$.
Obviously some new substrings ending in $c$ will appear.
We want to count these new substrings that didn't appear before.
We take the string $t = s + c$ and reverse it.
Now the task is transformed into computing how many prefixes there are that don't appear anywhere else.
If we compute the maximal value $\pi_{\text{max}}$ of the prefix function of the reversed string $t$, then the longest prefix of the reversed string that also occurs elsewhere in it has length $\pi_{\text{max}}$.
Clearly also all prefixes of smaller length appear in it.
Therefore the number of new substrings appearing when we add a new character $c$ is $|s| + 1 - \pi_{\text{max}}$.
So for each character appended we can compute the number of new substrings in $O(n)$ time, which gives a time complexity of $O(n^2)$ in total.
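Here is a sketch of this method (the function `count_unique_substrings` is an illustration added here, not part of the article's code; it reuses the `prefix_function` implementation from above and assumes the usual headers):

```cpp
int count_unique_substrings(string const& s) {
    int count = 0;
    string cur;
    for (char c : s) {
        cur += c;
        string t(cur.rbegin(), cur.rend());           // reversed current string
        vector<int> pi = prefix_function(t);
        int pi_max = *max_element(pi.begin(), pi.end());
        count += (int)cur.size() - pi_max;            // |s| + 1 - pi_max new substrings
    }
    return count;
}
```

For example, `count_unique_substrings("aba")` returns $5$ (the substrings $a$, $b$, $ab$, $ba$, $aba$).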
It is worth noting, that we can also compute the number of different substrings by appending the characters at the beginning, or by deleting characters from the beginning or the end.
### Compressing a string
Given a string $s$ of length $n$.
We want to find the shortest "compressed" representation of the string, i.e. we want to find a string $t$ of smallest length such that $s$ can be represented as a concatenation of one or more copies of $t$.
It is clear, that we only need to find the length of $t$. Knowing the length, the answer to the problem will be the prefix of $s$ with this length.
Let us compute the prefix function for $s$.
Using the last value of it we define the value $k = n - \pi[n - 1]$.
We will show, that if $k$ divides $n$, then $k$ will be the answer, otherwise there is no effective compression and the answer is $n$.
Let $n$ be divisible by $k$.
Then the string can be partitioned into blocks of the length $k$.
By definition of the prefix function, the prefix of length $n - k$ will be equal with its suffix.
But this means that the last block is equal to the block before.
And the block before has to be equal to the block before it.
And so on.
As a result, it turns out that all blocks are equal, therefore we can compress the string $s$ to length $k$.
Of course we still need to show that this is actually the optimum.
Indeed, if there was a compression of length smaller than $k$, then the prefix function at the end would be greater than $n - k$.
Therefore $k$ is really the answer.
Now let us assume that $n$ is not divisible by $k$.
We show that this implies that the length of the answer is $n$.
We prove it by contradiction.
Assume there exists a shorter answer: a compression of length $p$, where $p$ divides $n$.
Then the last value of the prefix function has to be greater than $n - p$, i.e. the suffix will partially cover the first block.
Now consider the second block of the string.
The prefix is equal to the suffix, both of them cover this block, and their displacement relative to each other is $k$, which does not divide the block length $p$ (otherwise $k$ would divide $n$); therefore all the characters of the block have to be identical.
But then the string consists of only one character repeated over and over, hence we can compress it to a string of size $1$, which gives $k = 1$, and $k$ divides $n$.
Contradiction.
$$\overbrace{s_0 ~ s_1 ~ s_2 ~ s_3}^p ~ \overbrace{s_4 ~ s_5 ~ s_6 ~ s_7}^p$$
$$s_0 ~ s_1 ~ s_2 ~ \underbrace{\overbrace{s_3 ~ s_4 ~ s_5 ~ s_6}^p ~ s_7}_{\pi[7] = 5}$$
$$s_4 = s_3, ~ s_5 = s_4, ~ s_6 = s_5, ~ s_7 = s_6 ~ \Rightarrow ~ s_0 = s_1 = s_2 = s_3$$
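The criterion above translates directly into code; the following sketch (the function name `compress_length` is chosen here and is not part of the article) returns the length of the shortest compressed representation, reusing the `prefix_function` implementation from above:

```cpp
int compress_length(string const& s) {
    int n = s.size();
    vector<int> pi = prefix_function(s);
    int k = n - pi[n - 1];
    return (n % k == 0) ? k : n;  // shortest period if it divides n, otherwise no compression
}
```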
### Building an automaton according to the prefix function
Let's return to the concatenation of the two strings through a separator, i.e. for the strings $s$ and $t$ we compute the prefix function for the string $s + \# + t$.
Obviously, since $\#$ is a separator, the value of the prefix function will never exceed $|s|$.
It follows that it is sufficient to store only the string $s + \#$ and the values of the prefix function for it, and we can compute the prefix function for all subsequent characters on the fly:
$$\underbrace{s_0 ~ s_1 ~ \dots ~ s_{n-1} ~ \#}_{\text{need to store}} ~ \underbrace{t_0 ~ t_1 ~ \dots ~ t_{m-1}}_{\text{do not need to store}}$$
Indeed, in such a situation, knowing the next character $c \in t$ and the value of the prefix function of the previous position is enough information to compute the next value of the prefix function, without using any previous characters of the string $t$ and the value of the prefix function in them.
In other words, we can construct an **automaton** (a finite state machine): the state in it is the current value of the prefix function, and the transition from one state to another will be performed via the next character.
Thus, even without having the string $t$, we can construct such a transition table $(\text{old}_\pi, c) \rightarrow \text{new}_\pi$ using the same algorithm as for calculating the prefix function values:
```{.cpp file=prefix_automaton_slow}
void compute_automaton(string s, vector<vector<int>>& aut) {
s += '#';
int n = s.size();
vector<int> pi = prefix_function(s);
aut.assign(n, vector<int>(26));
for (int i = 0; i < n; i++) {
for (int c = 0; c < 26; c++) {
int j = i;
while (j > 0 && 'a' + c != s[j])
j = pi[j-1];
if ('a' + c == s[j])
j++;
aut[i][c] = j;
}
}
}
```
However, in this form the algorithm runs in $O(26 n^2)$ time for the lowercase letters of the alphabet.
Note that we can apply dynamic programming and use the already calculated parts of the table.
Whenever we go from the value $j$ to the value $\pi[j-1]$, we actually mean that the transition $(j, c)$ leads to the same state as the transition $(\pi[j-1], c)$, and that answer is already precisely computed.
```{.cpp file=prefix_automaton_fast}
void compute_automaton(string s, vector<vector<int>>& aut) {
s += '#';
int n = s.size();
vector<int> pi = prefix_function(s);
aut.assign(n, vector<int>(26));
for (int i = 0; i < n; i++) {
for (int c = 0; c < 26; c++) {
if (i > 0 && 'a' + c != s[i])
aut[i][c] = aut[pi[i-1]][c];
else
aut[i][c] = i + ('a' + c == s[i]);
}
}
}
```
As a result we construct the automaton in $O(26 n)$ time.
When is such an automaton useful?
To begin with, remember that we use the prefix function for the string $s + \# + t$ and its values mostly for a single purpose: find all occurrences of the string $s$ in the string $t$.
Therefore the most obvious benefit of this automaton is the **acceleration of calculating the prefix function** for the string $s + \# + t$.
By building the automaton for $s + \#$, we no longer need to store the string $s$ or the values of the prefix function in it.
All transitions are already computed in the table.
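As a small illustration (not from the article; the function name and signature are assumptions made here), one can stream the characters of $t$ through the automaton and count the occurrences of $s$, assuming `aut` was built by `compute_automaton` for $s$ and both strings consist of lowercase letters:

```cpp
int count_occurrences(string const& t, int s_length, vector<vector<int>> const& aut) {
    int state = 0, count = 0;          // state 0 corresponds to the prefix function value 0
    for (char c : t) {
        state = aut[state][c - 'a'];   // one table lookup per character of t
        if (state == s_length)         // the prefix function reached |s|: an occurrence ends here
            count++;
    }
    return count;
}
```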
But there is a second, less obvious, application.
We can use the automaton when the string $t$ is a **gigantic string constructed using some rules**.
This can for instance be the Gray strings, or a string formed by a recursive combination of several short strings from the input.
For completeness we will solve such a problem:
given a number $k \le 10^5$ and a string $s$ of length $\le 10^5$.
We have to compute the number of occurrences of $s$ in the $k$-th Gray string.
Recall that the Gray strings are defined in the following way:
$$\begin{align}
g_1 &= \text{"a"}\\
g_2 &= \text{"aba"}\\
g_3 &= \text{"abacaba"}\\
g_4 &= \text{"abacabadabacaba"}
\end{align}$$
In such cases even constructing the string $t$ will be impossible, because of its astronomical length.
The $k$-th Gray string is $2^k-1$ characters long.
However we can calculate the value of the prefix function at the end of the string efficiently, by only knowing the value of the prefix function at the start.
In addition to the automaton itself, we also compute values $G[i][j]$ - the value of the automaton after processing the string $g_i$ starting with the state $j$.
And additionally we compute the values $K[i][j]$ - the number of occurrences of $s$ encountered during the processing of $g_i$, starting with the state $j$.
Actually $K[i][j]$ is the number of times that the prefix function took the value $|s|$ while performing the operations.
The answer to the problem will then be $K[k][0]$.
How can we compute these values?
First the basic values are $G[0][j] = j$ and $K[0][j] = 0$.
And all subsequent values can be calculated from the previous values and using the automaton.
To calculate the value for some $i$ we remember that the string $g_i$ consists of $g_{i-1}$, the $i$-th character of the alphabet, and $g_{i-1}$ again.
Thus the automaton will go into the state:
$$\text{mid} = \text{aut}[G[i-1][j]][i]$$
$$G[i][j] = G[i-1][\text{mid}]$$
The values for $K[i][j]$ can also be easily counted.
$$K[i][j] = K[i-1][j] + (\text{mid} == |s|) + K[i-1][\text{mid}]$$
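These recurrences can be implemented directly on top of `compute_automaton`. The sketch below is not the article's code: the function name, the rolling arrays, and the simplifying assumptions are additions made here. It takes the middle character of $g_i$ to be the $i$-th lowercase letter, so it only handles $k \le 26$, and it keeps the counts without a modulus, so they can overflow for larger instances.

```cpp
long long count_in_gray(string const& s, int k, vector<vector<int>> const& aut) {
    int m = s.size();
    int states = aut.size();                   // |s| + 1 states: prefix function values 0..|s|
    vector<int> G(states), nG(states);         // G[j]: state after processing g_{i-1} from state j
    vector<long long> K(states, 0), nK(states);
    for (int j = 0; j < states; j++)
        G[j] = j;                              // g_0 is the empty string
    for (int i = 1; i <= k; i++) {
        for (int j = 0; j < states; j++) {
            int mid = aut[G[j]][i - 1];        // feed the i-th letter of the alphabet
            nG[j] = G[mid];                    // then process the second copy of g_{i-1}
            nK[j] = K[j] + (mid == m) + K[mid];
        }
        G = nG;
        K = nK;
    }
    return K[0];                               // occurrences of s in g_k
}
```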
So we can solve the problem for Gray strings, and similarly also a huge number of other similar problems.
For example the exact same method also solves the following problem:
we are given a string $s$ and some patterns $t_i$, each of which is specified as follows:
it is a string of ordinary characters, and there might be some recursive insertions of the previous strings of the form $t_k^{\text{cnt}}$, which means that at this place we have to insert the string $t_k$ $\text{cnt}$ times.
An example of such patterns:
$$\begin{align}
t_1 &= \text{"abdeca"}\\
t_2 &= \text{"abc"} + t_1^{30} + \text{"abd"}\\
t_3 &= t_2^{50} + t_1^{100}\\
t_4 &= t_2^{10} + t_3^{100}
\end{align}$$
The recursive substitutions blow the string up, so that their lengths can reach the order of $100^{100}$.
We have to find the number of times the string $s$ appears in each of the strings.
The problem can be solved in the same way by constructing the automaton of the prefix function, and then calculating the transitions for each pattern by using the previous results.
## Practice Problems
* [UVA # 455 "Periodic Strings"](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=396)
* [UVA # 11022 "String Factoring"](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1963)
* [UVA # 11452 "Dancing the Cheeky-Cheeky"](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=2447)
* [UVA 12604 - Caesar Cipher](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=4282)
* [UVA 12467 - Secret Word](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3911)
* [UVA 11019 - Matrix Matcher](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1960)
* [SPOJ - Pattern Find](http://www.spoj.com/problems/NAJPF/)
* [SPOJ - A Needle in the Haystack](https://www.spoj.com/problems/NHAY/)
* [Codeforces - Anthem of Berland](http://codeforces.com/contest/808/problem/G)
* [Codeforces - MUH and Cube Walls](http://codeforces.com/problemset/problem/471/D)
* [Codeforces - Prefixes and Suffixes](https://codeforces.com/contest/432/problem/D)
|
Prefix function. Knuth–Morris–Pratt algorithm
|
---
title
duval_algorithm
---
# Lyndon factorization
## Lyndon factorization
First let us define the notion of the Lyndon factorization.
A string is called **simple** (or a Lyndon word), if it is strictly **smaller than** any of its own nontrivial **suffixes**.
Examples of simple strings are: $a$, $b$, $ab$, $aab$, $abb$, $ababb$, $abcd$.
It can be shown that a string is simple, if and only if it is strictly **smaller than** all its nontrivial **cyclic shifts**.
Next, let there be a given string $s$.
The **Lyndon factorization** of the string $s$ is a factorization $s = w_1 w_2 \dots w_k$, where all strings $w_i$ are simple, and they are in non-increasing order $w_1 \ge w_2 \ge \dots \ge w_k$.
It can be shown, that for any string such a factorization exists and that it is unique.
## Duval algorithm
The Duval algorithm constructs the Lyndon factorization in $O(n)$ time using $O(1)$ additional memory.
First let us introduce another notion:
a string $t$ is called **pre-simple**, if it has the form $t = w w \dots w \overline{w}$, where $w$ is a simple string and $\overline{w}$ is a prefix of $w$ (possibly empty).
A simple string is also pre-simple.
The Duval algorithm is greedy.
At any point during its execution, the string $s$ will actually be divided into three strings $s = s_1 s_2 s_3$, where the Lyndon factorization for $s_1$ is already found and finalized, the string $s_2$ is pre-simple (and we know the length of the simple string in it), and $s_3$ is completely untouched.
In each iteration the Duval algorithm takes the first character of the string $s_3$ and tries to append it to the string $s_2$.
If $s_2$ is no longer pre-simple, then the Lyndon factorization for some part of $s_2$ becomes known, and this part goes to $s_1$.
Let's describe the algorithm in more detail.
The pointer $i$ will always point to the beginning of the string $s_2$.
The outer loop will be executed as long as $i < n$.
Inside the loop we use two additional pointers: $j$, which points to the beginning of $s_3$, and $k$, which points to the character in $s_2$ that we are currently comparing against.
We want to add the character $s[j]$ to the string $s_2$, which requires a comparison with the character $s[k]$.
There can be three different cases:
- $s[j] = s[k]$: if this is the case, then adding the symbol $s[j]$ to $s_2$ doesn't violate its pre-simplicity.
So we simply increment the pointers $j$ and $k$.
- $s[j] > s[k]$: here, the string $s_2 + s[j]$ becomes simple.
We can increment $j$ and reset $k$ back to the beginning of $s_2$, so that the next character can be compared with the beginning of the simple word.
- $s[j] < s[k]$: the string $s_2 + s[j]$ is no longer pre-simple.
Therefore we will split the pre-simple string $s_2$ into its simple strings and the remainder, possibly empty.
The simple string will have the length $j - k$.
In the next iteration we start again with the remaining $s_2$.
### Implementation
Here we present the implementation of the Duval algorithm, which will return the desired Lyndon factorization of a given string $s$.
```{.cpp file=duval_algorithm}
vector<string> duval(string const& s) {
int n = s.size();
int i = 0;
vector<string> factorization;
while (i < n) {
int j = i + 1, k = i;
while (j < n && s[k] <= s[j]) {
if (s[k] < s[j])
k = i;
else
k++;
j++;
}
while (i <= k) {
factorization.push_back(s.substr(i, j - k));
i += j - k;
}
}
return factorization;
}
```
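For example (this check is not part of the article's code), the factorization of the string "banana" is $b$, $an$, $an$, $a$, which is non-increasing and consists only of simple words:

```cpp
vector<string> factors = duval("banana");
// factors == {"b", "an", "an", "a"}
```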
### Complexity
Let us estimate the running time of this algorithm.
The **outer while loop** does not exceed $n$ iterations, since at the end of each iteration $i$ increases.
Also the second inner while loop runs in $O(n)$ overall, since it only outputs the final factorization.
So we are only interested in the **first inner while loop**.
How many iterations does it perform in the worst case?
It's easy to see that the simple words that we identify in each iteration of the outer loop are longer than the remainder that we additionally compared.
Therefore also the sum of the remainders will be smaller than $n$, which means that we only perform at most $O(n)$ iterations of the first inner while loop.
In fact the total number of character comparisons will not exceed $4n - 3$.
## Finding the smallest cyclic shift
Let there be a string $s$.
We construct the Lyndon factorization for the string $s + s$ (in $O(n)$ time).
We will look for a simple string in the factorization which starts at a position less than $n$ (i.e. it starts in the first instance of $s$), and ends at a position greater than or equal to $n$ (i.e. it ends in the second instance of $s$).
It turns out that the position of the start of this simple string will be the beginning of the desired smallest cyclic shift.
This can be easily verified using the definition of the Lyndon decomposition.
The beginning of the simple block can be found easily - just remember the pointer $i$ at the beginning of each iteration of the outer loop, which indicates the beginning of the current pre-simple string.
So we get the following implementation:
```{.cpp file=smallest_cyclic_string}
string min_cyclic_string(string s) {
s += s;
int n = s.size();
int i = 0, ans = 0;
while (i < n / 2) {
ans = i;
int j = i + 1, k = i;
while (j < n && s[k] <= s[j]) {
if (s[k] < s[j])
k = i;
else
k++;
j++;
}
while (i <= k)
i += j - k;
}
return s.substr(ans, n / 2);
}
```
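A quick check (not part of the article's code): the cyclic shifts of $dabc$ are $dabc$, $abcd$, $bcda$, $cdab$, and the smallest one is $abcd$:

```cpp
string smallest = min_cyclic_string("dabc");
// smallest == "abcd"
```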
## Problems
- [UVA #719 - Glass Beads](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=660)
|
Lyndon factorization
|
---
title
string_hashes
---
# String Hashing
Hashing algorithms are helpful in solving a lot of problems.
We want to solve the problem of comparing strings efficiently.
The brute force way of doing so is just to compare the letters of both strings, which has a time complexity of $O(\min(n_1, n_2))$ if $n_1$ and $n_2$ are the sizes of the two strings.
We want to do better.
The idea behind string hashing is the following: we map each string into an integer and compare those instead of the strings.
Doing this allows us to reduce the execution time of the string comparison to $O(1)$.
For the conversion, we need a so-called **hash function**.
The goal of it is to convert a string into an integer, the so-called **hash** of the string.
The following condition has to hold: if two strings $s$ and $t$ are equal ($s = t$), then also their hashes have to be equal ($\text{hash}(s) = \text{hash}(t)$).
Otherwise, we will not be able to compare strings.
Notice, the opposite direction doesn't have to hold.
If the hashes are equal ($\text{hash}(s) = \text{hash}(t)$), then the strings do not necessarily have to be equal.
E.g. a valid hash function would be simply $\text{hash}(s) = 0$ for each $s$.
Now, this is just a stupid example, because this function will be completely useless, but it is a valid hash function.
The reason why the opposite direction doesn't have to hold is that there are exponentially many strings.
If we only want this hash function to distinguish between all strings consisting of lowercase characters of length smaller than 15, then already the hash wouldn't fit into a 64-bit integer (e.g. unsigned long long) any more, because there are so many of them.
And of course, we don't want to compare arbitrary long integers, because this will also have the complexity $O(n)$.
So usually we want the hash function to map strings onto numbers of a fixed range $[0, m)$, then comparing strings is just a comparison of two integers with a fixed length.
And of course, we want $\text{hash}(s) \neq \text{hash}(t)$ to be very likely if $s \neq t$.
That's the important part that you have to keep in mind.
Using hashing will not be 100% deterministically correct, because two completely different strings might have the same hash (the hashes collide).
However, in a wide majority of tasks, this can be safely ignored as the probability of the hashes of two different strings colliding is still very small.
And we will discuss some techniques in this article how to keep the probability of collisions very low.
## Calculation of the hash of a string
A good and widely used way to define the hash of a string $s$ of length $n$ is
$$\begin{align}
\text{hash}(s) &= s[0] + s[1] \cdot p + s[2] \cdot p^2 + ... + s[n-1] \cdot p^{n-1} \mod m \\
&= \sum_{i=0}^{n-1} s[i] \cdot p^i \mod m,
\end{align}$$
where $p$ and $m$ are some chosen, positive numbers.
It is called a **polynomial rolling hash function**.
It is reasonable to make $p$ a prime number roughly equal to the number of characters in the input alphabet.
For example, if the input is composed of only lowercase letters of the English alphabet, $p = 31$ is a good choice.
If the input may contain both uppercase and lowercase letters, then $p = 53$ is a possible choice.
The code in this article will use $p = 31$.
Obviously $m$ should be a large number since the probability of two random strings colliding is about $\frac{1}{m}$.
Sometimes $m = 2^{64}$ is chosen, since then the integer overflows of 64-bit integers work exactly like the modulo operation.
However, there exists a method, which generates colliding strings (which work independently from the choice of $p$).
So in practice, $m = 2^{64}$ is not recommended.
A good choice for $m$ is some large prime number.
The code in this article will just use $m = 10^9+9$.
This is a large number, but still small enough so that we can perform multiplication of two values using 64-bit integers.
Here is an example of calculating the hash of a string $s$, which contains only lowercase letters.
We convert each character of $s$ to an integer.
Here we use the conversion $a \rightarrow 1$, $b \rightarrow 2$, $\dots$, $z \rightarrow 26$.
Converting $a \rightarrow 0$ is not a good idea, because then the hashes of the strings $a$, $aa$, $aaa$, $\dots$ all evaluate to $0$.
```{.cpp file=hashing_function}
long long compute_hash(string const& s) {
const int p = 31;
const int m = 1e9 + 9;
long long hash_value = 0;
long long p_pow = 1;
for (char c : s) {
hash_value = (hash_value + (c - 'a' + 1) * p_pow) % m;
p_pow = (p_pow * p) % m;
}
return hash_value;
}
```
Precomputing the powers of $p$ might give a performance boost.
## Example tasks
### Search for duplicate strings in an array of strings
Problem: Given a list of $n$ strings $s_i$, each no longer than $m$ characters, find all the duplicate strings and divide them into groups.
From the obvious algorithm involving sorting the strings, we would get a time complexity of $O(n m \log n)$ where the sorting requires $O(n \log n)$ comparisons and each comparison take $O(m)$ time.
However, by using hashes, we reduce the comparison time to $O(1)$, giving us an algorithm that runs in $O(n m + n \log n)$ time.
We calculate the hash for each string, sort the hashes together with the indices, and then group the indices by identical hashes.
```{.cpp file=hashing_group_identical_strings}
vector<vector<int>> group_identical_strings(vector<string> const& s) {
int n = s.size();
vector<pair<long long, int>> hashes(n);
for (int i = 0; i < n; i++)
hashes[i] = {compute_hash(s[i]), i};
sort(hashes.begin(), hashes.end());
vector<vector<int>> groups;
for (int i = 0; i < n; i++) {
if (i == 0 || hashes[i].first != hashes[i-1].first)
groups.emplace_back();
groups.back().push_back(hashes[i].second);
}
return groups;
}
```
### Fast hash calculation of substrings of given string
Problem: Given a string $s$ and indices $i$ and $j$, find the hash of the substring $s [i \dots j]$.
By definition, we have:
$$\text{hash}(s[i \dots j]) = \sum_{k = i}^j s[k] \cdot p^{k-i} \mod m$$
Multiplying by $p^i$ gives:
$$\begin{align}
\text{hash}(s[i \dots j]) \cdot p^i &= \sum_{k = i}^j s[k] \cdot p^k \mod m \\
&= \text{hash}(s[0 \dots j]) - \text{hash}(s[0 \dots i-1]) \mod m
\end{align}$$
So by knowing the hash value of each prefix of the string $s$, we can compute the hash of any substring directly using this formula.
The only problem that we face in calculating it is that we must be able to divide $\text{hash}(s[0 \dots j]) - \text{hash}(s[0 \dots i-1])$ by $p^i$.
Therefore we need to find the [modular multiplicative inverse](../algebra/module-inverse.md) of $p^i$ and then perform multiplication with this inverse.
We can precompute the inverse of every $p^i$, which allows computing the hash of any substring of $s$ in $O(1)$ time.
However, there does exist an easier way.
In most cases, rather than calculating the hashes of substring exactly, it is enough to compute the hash multiplied by some power of $p$.
Suppose we have two hashes of two substrings, one multiplied by $p^i$ and the other by $p^j$.
If $i < j$ then we multiply the first hash by $p^{j-i}$, otherwise, we multiply the second hash by $p^{i-j}$.
By doing this, we get both the hashes multiplied by the same power of $p$ (which is the maximum of $i$ and $j$) and now these hashes can be compared easily with no need for any division.
## Applications of Hashing
Here are some typical applications of Hashing:
* [Rabin-Karp algorithm](rabin-karp.md) for pattern matching in a string in $O(n)$ time
* Calculating the number of different substrings of a string in $O(n^2)$ (see below)
* Calculating the number of palindromic substrings in a string.
### Determine the number of different substrings in a string
Problem: Given a string $s$ of length $n$, consisting only of lowercase English letters, find the number of different substrings in this string.
To solve this problem, we iterate over all substring lengths $l = 1 \dots n$.
For every substring length $l$ we construct an array of hashes of all substrings of length $l$ multiplied by the same power of $p$.
The number of different elements in the array is equal to the number of distinct substrings of length $l$ in the string.
This number is added to the final answer.
For convenience, we will use $h[i]$ as the hash of the prefix with $i$ characters, and define $h[0] = 0$.
```{.cpp file=hashing_count_unique_substrings}
int count_unique_substrings(string const& s) {
int n = s.size();
const int p = 31;
const int m = 1e9 + 9;
vector<long long> p_pow(n);
p_pow[0] = 1;
for (int i = 1; i < n; i++)
p_pow[i] = (p_pow[i-1] * p) % m;
vector<long long> h(n + 1, 0);
for (int i = 0; i < n; i++)
h[i+1] = (h[i] + (s[i] - 'a' + 1) * p_pow[i]) % m;
int cnt = 0;
for (int l = 1; l <= n; l++) {
unordered_set<long long> hs;
for (int i = 0; i <= n - l; i++) {
long long cur_h = (h[i + l] + m - h[i]) % m;
cur_h = (cur_h * p_pow[n-i-1]) % m;
hs.insert(cur_h);
}
cnt += hs.size();
}
return cnt;
}
```
Notice, that $O(n^2)$ is not the best possible time complexity for this problem.
A solution with $O(n \log n)$ is described in the article about [Suffix Arrays](suffix-array.md), and it's even possible to compute it in $O(n)$ using a [Suffix Tree](./suffix-tree-ukkonen.md) or a [Suffix Automaton](./suffix-automaton.md).
## Improve no-collision probability
Quite often the above mentioned polynomial hash is good enough, and no collisions will happen during tests.
Remember, the probability that collision happens is only $\approx \frac{1}{m}$.
For $m = 10^9 + 9$ the probability is $\approx 10^{-9}$ which is quite low.
But notice, that we only did one comparison.
What if we compared a string $s$ with $10^6$ different strings.
The probability that at least one collision happens is now $\approx 10^{-3}$.
And if we want to compare $10^6$ different strings with each other (e.g. by counting how many unique strings exists), then the probability of at least one collision happening is already $\approx 1$.
It is pretty much guaranteed that this task will end with a collision and returns the wrong result.
There is a really easy trick to get better probabilities.
We can just compute two different hashes for each string (by using two different $p$, and/or different $m$, and compare these pairs instead.
If $m$ is about $10^9$ for each of the two hash functions than this is more or less equivalent as having one hash function with $m \approx 10^{18}$.
When comparing $10^6$ strings with each other, the probability that at least one collision happens is now reduced to $\approx 10^{-6}$.
|
---
title
string_hashes
---
# String Hashing
Hashing algorithms are helpful in solving a lot of problems.
We want to solve the problem of comparing strings efficiently.
The brute force way of doing so is just to compare the letters of both strings, which has a time complexity of $O(\min(n_1, n_2))$ if $n_1$ and $n_2$ are the sizes of the two strings.
We want to do better.
The idea behind string hashing is the following: we map each string to an integer and compare those instead of the strings.
Doing this allows us to reduce the execution time of the string comparison to $O(1)$.
For the conversion, we need a so-called **hash function**.
The goal of it is to convert a string into an integer, the so-called **hash** of the string.
The following condition has to hold: if two strings $s$ and $t$ are equal ($s = t$), then also their hashes have to be equal ($\text{hash}(s) = \text{hash}(t)$).
Otherwise, we will not be able to compare strings.
Notice that the opposite direction doesn't have to hold.
If the hashes are equal ($\text{hash}(s) = \text{hash}(t)$), then the strings do not necessarily have to be equal.
E.g. a valid hash function would be simply $\text{hash}(s) = 0$ for every $s$.
This is, of course, a silly example, because such a function is completely useless, but it is a valid hash function.
The reason why the opposite direction doesn't have to hold is that there are exponentially many strings.
If we wanted the hash function to distinguish between all strings consisting of lowercase characters of length smaller than 15, the hash would no longer fit into a 64-bit integer (e.g. unsigned long long), because there are so many of them.
And of course, we don't want to compare arbitrarily long integers, because this will also have the complexity $O(n)$.
So usually we want the hash function to map strings onto numbers of a fixed range $[0, m)$, then comparing strings is just a comparison of two integers with a fixed length.
And of course, we want $\text{hash}(s) \neq \text{hash}(t)$ to be very likely if $s \neq t$.
That's the important part that you have to keep in mind.
Using hashing will not be 100% deterministically correct, because two completely different strings might have the same hash (the hashes collide).
However, in a wide majority of tasks, this can be safely ignored as the probability of the hashes of two different strings colliding is still very small.
We will discuss some techniques in this article for keeping the probability of collisions very low.
## Calculation of the hash of a string
A good and widely used way to define the hash of a string $s$ of length $n$ is
$$\begin{align}
\text{hash}(s) &= s[0] + s[1] \cdot p + s[2] \cdot p^2 + ... + s[n-1] \cdot p^{n-1} \mod m \\
&= \sum_{i=0}^{n-1} s[i] \cdot p^i \mod m,
\end{align}$$
where $p$ and $m$ are some chosen, positive numbers.
It is called a **polynomial rolling hash function**.
It is reasonable to make $p$ a prime number roughly equal to the number of characters in the input alphabet.
For example, if the input is composed of only lowercase letters of the English alphabet, $p = 31$ is a good choice.
If the input may contain both uppercase and lowercase letters, then $p = 53$ is a possible choice.
The code in this article will use $p = 31$.
Obviously $m$ should be a large number, since the probability of two random strings colliding is about $\frac{1}{m}$.
Sometimes $m = 2^{64}$ is chosen, since then the integer overflows of 64-bit integers work exactly like the modulo operation.
However, there exists a method that generates colliding strings, and it works independently of the choice of $p$.
So in practice, $m = 2^{64}$ is not recommended.
A good choice for $m$ is some large prime number.
The code in this article will just use $m = 10^9+9$.
This is a large number, but still small enough so that we can perform multiplication of two values using 64-bit integers.
Here is an example of calculating the hash of a string $s$, which contains only lowercase letters.
We convert each character of $s$ to an integer.
Here we use the conversion $a \rightarrow 1$, $b \rightarrow 2$, $\dots$, $z \rightarrow 26$.
Converting $a \rightarrow 0$ is not a good idea, because then the hashes of the strings $a$, $aa$, $aaa$, $\dots$ all evaluate to $0$.
```{.cpp file=hashing_function}
long long compute_hash(string const& s) {
const int p = 31;
const int m = 1e9 + 9;
long long hash_value = 0;
long long p_pow = 1;
for (char c : s) {
hash_value = (hash_value + (c - 'a' + 1) * p_pow) % m;
p_pow = (p_pow * p) % m;
}
return hash_value;
}
```
Precomputing the powers of $p$ might give a performance boost.
## Example tasks
### Search for duplicate strings in an array of strings
Problem: Given a list of $n$ strings $s_i$, each no longer than $m$ characters, find all the duplicate strings and divide them into groups.
From the obvious algorithm involving sorting the strings, we would get a time complexity of $O(n m \log n)$, where the sorting requires $O(n \log n)$ comparisons and each comparison takes $O(m)$ time.
However, by using hashes, we reduce the comparison time to $O(1)$, giving us an algorithm that runs in $O(n m + n \log n)$ time.
We calculate the hash for each string, sort the hashes together with the indices, and then group the indices by identical hashes.
```{.cpp file=hashing_group_identical_strings}
vector<vector<int>> group_identical_strings(vector<string> const& s) {
int n = s.size();
vector<pair<long long, int>> hashes(n);
for (int i = 0; i < n; i++)
hashes[i] = {compute_hash(s[i]), i};
sort(hashes.begin(), hashes.end());
vector<vector<int>> groups;
for (int i = 0; i < n; i++) {
if (i == 0 || hashes[i].first != hashes[i-1].first)
groups.emplace_back();
groups.back().push_back(hashes[i].second);
}
return groups;
}
```
### Fast hash calculation of substrings of given string
Problem: Given a string $s$ and indices $i$ and $j$, find the hash of the substring $s [i \dots j]$.
By definition, we have:
$$\text{hash}(s[i \dots j]) = \sum_{k = i}^j s[k] \cdot p^{k-i} \mod m$$
Multiplying by $p^i$ gives:
$$\begin{align}
\text{hash}(s[i \dots j]) \cdot p^i &= \sum_{k = i}^j s[k] \cdot p^k \mod m \\
&= \text{hash}(s[0 \dots j]) - \text{hash}(s[0 \dots i-1]) \mod m
\end{align}$$
So by knowing the hash value of each prefix of the string $s$, we can compute the hash of any substring directly using this formula.
The only problem that we face in calculating it is that we must be able to divide $\text{hash}(s[0 \dots j]) - \text{hash}(s[0 \dots i-1])$ by $p^i$.
Therefore we need to find the [modular multiplicative inverse](../algebra/module-inverse.md) of $p^i$ and then perform multiplication with this inverse.
We can precompute the inverse of every $p^i$, which allows computing the hash of any substring of $s$ in $O(1)$ time.
However, there does exist an easier way.
In most cases, rather than calculating the hash of a substring exactly, it is enough to compute the hash multiplied by some power of $p$.
Suppose we have two hashes of two substrings, one multiplied by $p^i$ and the other by $p^j$.
If $i < j$ then we multiply the first hash by $p^{j-i}$, otherwise, we multiply the second hash by $p^{i-j}$.
By doing this, we get both the hashes multiplied by the same power of $p$ (which is the maximum of $i$ and $j$) and now these hashes can be compared easily with no need for any division.
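To make this concrete, here is a small sketch (not code from the original article): it precomputes prefix hashes and compares two substrings of the same length without any modular division, assuming `p_pow` holds precomputed powers of $p$ modulo $m$; the names `prefix_hashes` and `substrings_equal` are chosen just for this illustration.

```cpp
// Sketch: h[i] is the hash of the prefix with i characters, so
// h[a + len] - h[a] equals the hash of s[a..a+len-1] multiplied by p^a (mod m).
vector<long long> prefix_hashes(string const& s, vector<long long> const& p_pow, int m) {
    vector<long long> h(s.size() + 1, 0);
    for (size_t i = 0; i < s.size(); i++)
        h[i+1] = (h[i] + (s[i] - 'a' + 1) * p_pow[i]) % m;
    return h;
}

// Compare s[a1..a1+len-1] and s[a2..a2+len-1]: bring both differences to the same
// power of p (namely p^(a1 + a2)) instead of dividing by p^a1 or p^a2.
bool substrings_equal(vector<long long> const& h, vector<long long> const& p_pow,
                      int m, int a1, int a2, int len) {
    long long h1 = (h[a1 + len] + m - h[a1]) % m;
    long long h2 = (h[a2 + len] + m - h[a2]) % m;
    return h1 * p_pow[a2] % m == h2 * p_pow[a1] % m;
}
```

Equality of the two products means the substrings are equal with high probability, exactly as argued above.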
## Applications of Hashing
Here are some typical applications of Hashing:
* [Rabin-Karp algorithm](rabin-karp.md) for pattern matching in a string in $O(n)$ time
* Calculating the number of different substrings of a string in $O(n^2)$ (see below)
* Calculating the number of palindromic substrings in a string.
### Determine the number of different substrings in a string
Problem: Given a string $s$ of length $n$, consisting only of lowercase English letters, find the number of different substrings in this string.
To solve this problem, we iterate over all substring lengths $l = 1 \dots n$.
For every substring length $l$ we construct an array of hashes of all substrings of length $l$ multiplied by the same power of $p$.
The number of different elements in the array is equal to the number of distinct substrings of length $l$ in the string.
This number is added to the final answer.
For convenience, we will use $h[i]$ as the hash of the prefix with $i$ characters, and define $h[0] = 0$.
```{.cpp file=hashing_count_unique_substrings}
int count_unique_substrings(string const& s) {
int n = s.size();
const int p = 31;
const int m = 1e9 + 9;
vector<long long> p_pow(n);
p_pow[0] = 1;
for (int i = 1; i < n; i++)
p_pow[i] = (p_pow[i-1] * p) % m;
vector<long long> h(n + 1, 0);
for (int i = 0; i < n; i++)
h[i+1] = (h[i] + (s[i] - 'a' + 1) * p_pow[i]) % m;
int cnt = 0;
for (int l = 1; l <= n; l++) {
unordered_set<long long> hs;
for (int i = 0; i <= n - l; i++) {
long long cur_h = (h[i + l] + m - h[i]) % m;
cur_h = (cur_h * p_pow[n-i-1]) % m;
hs.insert(cur_h);
}
cnt += hs.size();
}
return cnt;
}
```
Notice, that $O(n^2)$ is not the best possible time complexity for this problem.
A solution with $O(n \log n)$ is described in the article about [Suffix Arrays](suffix-array.md), and it's even possible to compute it in $O(n)$ using a [Suffix Tree](./suffix-tree-ukkonen.md) or a [Suffix Automaton](./suffix-automaton.md).
## Improve no-collision probability
Quite often the above mentioned polynomial hash is good enough, and no collisions will happen during tests.
Remember, the probability that a collision happens is only $\approx \frac{1}{m}$.
For $m = 10^9 + 9$ the probability is $\approx 10^{-9}$, which is quite low.
But notice that we only did one comparison.
What if we compared a string $s$ with $10^6$ different strings?
The probability that at least one collision happens is now $\approx 10^{-3}$.
And if we want to compare $10^6$ different strings with each other (e.g. by counting how many unique strings exist), then the probability of at least one collision happening is already $\approx 1$.
It is pretty much guaranteed that this task will end with a collision and return the wrong result.
There is a really easy trick to get better probabilities.
We can just compute two different hashes for each string (by using two different $p$, and/or different $m$) and compare these pairs instead.
If $m$ is about $10^9$ for each of the two hash functions, then this is more or less equivalent to having one hash function with $m \approx 10^{18}$.
When comparing $10^6$ strings with each other, the probability that at least one collision happens is now reduced to $\approx 10^{-6}$.
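As a sketch of this trick (not from the original article; the second modulus $10^9 + 7$ and the function name are assumptions made here), the two hashes can simply be computed together and stored as a pair:

```cpp
pair<long long, long long> compute_double_hash(string const& s) {
    const int p = 31;
    const int m1 = 1e9 + 9, m2 = 1e9 + 7;   // two different large primes
    long long h1 = 0, h2 = 0, p_pow1 = 1, p_pow2 = 1;
    for (char c : s) {
        h1 = (h1 + (c - 'a' + 1) * p_pow1) % m1;
        h2 = (h2 + (c - 'a' + 1) * p_pow2) % m2;
        p_pow1 = (p_pow1 * p) % m1;
        p_pow2 = (p_pow2 * p) % m2;
    }
    return {h1, h2};   // two strings are treated as equal only if both components match
}
```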
## Practice Problems
* [Good Substrings - Codeforces](https://codeforces.com/contest/271/problem/D)
* [A Needle in the Haystack - SPOJ](http://www.spoj.com/problems/NHAY/)
* [String Hashing - Kattis](https://open.kattis.com/problems/hashing)
* [Double Profiles - Codeforces](http://codeforces.com/problemset/problem/154/C)
* [Password - Codeforces](http://codeforces.com/problemset/problem/126/B)
* [SUB_PROB - SPOJ](http://www.spoj.com/problems/SUB_PROB/)
* [INSQ15_A](https://www.codechef.com/problems/INSQ15_A)
* [SPOJ - Ada and Spring Cleaning](http://www.spoj.com/problems/ADACLEAN/)
* [GYM - Text Editor](http://codeforces.com/gym/101466/problem/E)
* [12012 - Detection of Extraterrestrial](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=3163)
* [Codeforces - Games on a CD](http://codeforces.com/contest/727/problem/E)
* [UVA 11855 - Buzzwords](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2955)
* [Codeforces - Santa Claus and a Palindrome](http://codeforces.com/contest/752/problem/D)
* [Codeforces - String Compression](http://codeforces.com/contest/825/problem/F)
* [Codeforces - Palindromic Characteristics](http://codeforces.com/contest/835/problem/D)
* [SPOJ - Test](http://www.spoj.com/problems/CF25E/)
* [Codeforces - Palindrome Degree](http://codeforces.com/contest/7/problem/D)
* [Codeforces - Deletion of Repeats](http://codeforces.com/contest/19/problem/C)
* [HackerRank - Gift Boxes](https://www.hackerrank.com/contests/womens-codesprint-5/challenges/gift-boxes)
---
title
ukkonen
---
# Suffix Tree. Ukkonen's Algorithm
*This article is a stub and doesn't contain any descriptions. For a description of the algorithm, refer to other sources, such as [Algorithms on Strings, Trees, and Sequences](https://www.cs.cmu.edu/afs/cs/project/pscico-guyb/realworld/www/slidesF06/cmuonly/gusfield.pdf) by Dan Gusfield.*
This algorithm builds a suffix tree for a given string $s$ of length $n$ in $O(n \log k)$ time, where $k$ is the size of the alphabet (if $k$ is considered to be a constant, the asymptotic behavior is linear).
The input to the algorithm consists of the string $s$ and its length $n$, which are passed as global variables.
The main function `build_tree` builds a suffix tree. It is stored as an array of structures `node`, where `node[0]` is the root of the tree.
In order to simplify the code, the edges are stored in the same structures: for each vertex its structure `node` stores the information about the edge between it and its parent. Overall each `node` stores the following information:
* `(l, r)` - left and right boundaries of the substring `s[l..r-1]`, which corresponds to the edge leading into this node,
* `par` - the parent node,
* `link` - the suffix link,
* `next` - the list of edges going out from this node.
```cpp
string s;
int n;
struct node {
int l, r, par, link;
map<char,int> next;
node (int l=0, int r=0, int par=-1)
: l(l), r(r), par(par), link(-1) {}
int len() { return r - l; }
int &get (char c) {
if (!next.count(c)) next[c] = -1;
return next[c];
}
};
const int MAXN = 200200; // assumed bound on the number of nodes (at least 2n); not defined in the original snippet
node t[MAXN];
int sz;
struct state {
int v, pos;
state (int v, int pos) : v(v), pos(pos) {}
};
state ptr (0, 0);
state go (state st, int l, int r) {
while (l < r)
if (st.pos == t[st.v].len()) {
st = state (t[st.v].get( s[l] ), 0);
if (st.v == -1) return st;
}
else {
if (s[ t[st.v].l + st.pos ] != s[l])
return state (-1, -1);
if (r-l < t[st.v].len() - st.pos)
return state (st.v, st.pos + r-l);
l += t[st.v].len() - st.pos;
st.pos = t[st.v].len();
}
return st;
}
int split (state st) {
if (st.pos == t[st.v].len())
return st.v;
if (st.pos == 0)
return t[st.v].par;
node v = t[st.v];
int id = sz++;
t[id] = node (v.l, v.l+st.pos, v.par);
t[v.par].get( s[v.l] ) = id;
t[id].get( s[v.l+st.pos] ) = st.v;
t[st.v].par = id;
t[st.v].l += st.pos;
return id;
}
int get_link (int v) {
if (t[v].link != -1) return t[v].link;
if (t[v].par == -1) return 0;
int to = get_link (t[v].par);
return t[v].link = split (go (state(to,t[to].len()), t[v].l + (t[v].par==0), t[v].r));
}
void tree_extend (int pos) {
for(;;) {
state nptr = go (ptr, pos, pos+1);
if (nptr.v != -1) {
ptr = nptr;
return;
}
int mid = split (ptr);
int leaf = sz++;
t[leaf] = node (pos, n, mid);
t[mid].get( s[pos] ) = leaf;
ptr.v = get_link (mid);
ptr.pos = t[ptr.v].len();
if (!mid) break;
}
}
void build_tree() {
sz = 1;
for (int i=0; i<n; ++i)
tree_extend (i);
}
```
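As a usage sketch (not part of the original article), once `build_tree()` has been called, the tree can be used to test whether a pattern occurs in $s$ as a substring by walking down from the root; the helper `contains` and the example values below are assumptions of this sketch.

```cpp
// Sketch: descend from the root, matching pat character by character along the edges.
bool contains(string const& pat) {
    int v = 0, pos = 0;                    // current node and matched length on its incoming edge
    for (char c : pat) {
        if (pos == t[v].len()) {           // exactly at a node: follow the outgoing edge starting with c
            v = t[v].get(c);
            if (v == -1) return false;     // no such edge, so pat is not a substring of s
            pos = 0;
        }
        if (s[t[v].l + pos] != c)          // mismatch in the middle of an edge
            return false;
        pos++;
    }
    return true;
}

// Usage (hypothetical values):
//   s = "abacaba"; n = s.size(); build_tree();
//   contains("acab") -> true, contains("abc") -> false
```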
## Compressed Implementation
This compressed implementation was proposed by [freopen](http://codeforces.com/profile/freopen).
```cpp
const int N=1000000,INF=1000000000;
string a;
int t[N][26],l[N],r[N],p[N],s[N],tv,tp,ts,la;
void ukkadd (int c) {
suff:;
if (r[tv]<tp) {
if (t[tv][c]==-1) { t[tv][c]=ts; l[ts]=la;
p[ts++]=tv; tv=s[tv]; tp=r[tv]+1; goto suff; }
tv=t[tv][c]; tp=l[tv];
}
if (tp==-1 || c==a[tp]-'a') tp++; else {
l[ts+1]=la; p[ts+1]=ts;
l[ts]=l[tv]; r[ts]=tp-1; p[ts]=p[tv]; t[ts][c]=ts+1; t[ts][a[tp]-'a']=tv;
l[tv]=tp; p[tv]=ts; t[p[ts]][a[l[ts]]-'a']=ts; ts+=2;
tv=s[p[ts-2]]; tp=l[ts-2];
while (tp<=r[ts-2]) { tv=t[tv][a[tp]-'a']; tp+=r[tv]-l[tv]+1;}
if (tp==r[ts-2]+1) s[ts-2]=tv; else s[ts-2]=ts;
tp=r[tv]-(tp-r[ts-2])+2; goto suff;
}
}
void build() {
ts=2;
tv=0;
tp=0;
fill(r,r+N,(int)a.size()-1);
s[0]=1;
l[0]=-1;
r[0]=-1;
l[1]=-1;
r[1]=-1;
memset (t, -1, sizeof t);
fill(t[1],t[1]+26,0);
for (la=0; la<(int)a.size(); ++la)
ukkadd (a[la]-'a');
}
```
Same code with comments:
```cpp
const int N=1000000, // maximum possible number of nodes in suffix tree
INF=1000000000; // infinity constant
string a; // input string for which the suffix tree is being built
int t[N][26], // array of transitions (state, letter)
l[N], // left...
r[N], // ...and right boundaries of the substring of a which correspond to incoming edge
p[N], // parent of the node
s[N], // suffix link
tv, // the node of the current suffix (if we're mid-edge, the lower node of the edge)
tp, // position in the string which corresponds to the position on the edge (between l[tv] and r[tv], inclusive)
ts, // the number of nodes
la; // the current character in the string
void ukkadd(int c) { // add character c to the tree
suff:; // we'll return here after each transition to the suffix (and will add character again)
if (r[tv]<tp) { // check whether we're still within the boundaries of the current edge
// if we're not, find the next edge. If it doesn't exist, create a leaf and add it to the tree
if (t[tv][c]==-1) {t[tv][c]=ts;l[ts]=la;p[ts++]=tv;tv=s[tv];tp=r[tv]+1;goto suff;}
tv=t[tv][c];tp=l[tv];
} // otherwise just proceed to the next edge
if (tp==-1 || c==a[tp]-'a')
tp++; // if the letter on the edge equal c, go down that edge
else {
// otherwise split the edge in two with middle in node ts
l[ts]=l[tv];r[ts]=tp-1;p[ts]=p[tv];t[ts][a[tp]-'a']=tv;
// add leaf ts+1. It corresponds to transition through c.
t[ts][c]=ts+1;l[ts+1]=la;p[ts+1]=ts;
// update info for the current node - remember to mark ts as parent of tv
l[tv]=tp;p[tv]=ts;t[p[ts]][a[l[ts]]-'a']=ts;ts+=2;
// prepare for descent
// tp will mark where are we in the current suffix
tv=s[p[ts-2]];tp=l[ts-2];
// while the current suffix is not over, descend
while (tp<=r[ts-2]) {tv=t[tv][a[tp]-'a'];tp+=r[tv]-l[tv]+1;}
// if we're in a node, add a suffix link to it, otherwise add the link to ts
// (we'll create ts on next iteration).
if (tp==r[ts-2]+1) s[ts-2]=tv; else s[ts-2]=ts;
// add tp to the new edge and return to add letter to suffix
tp=r[tv]-(tp-r[ts-2])+2;goto suff;
}
}
void build() {
ts=2;
tv=0;
tp=0;
fill(r,r+N,(int)a.size()-1);
// initialize data for the root of the tree
s[0]=1;
l[0]=-1;
r[0]=-1;
l[1]=-1;
r[1]=-1;
memset (t, -1, sizeof t);
fill(t[1],t[1]+26,0);
// add the text to the tree, letter by letter
for (la=0; la<(int)a.size(); ++la)
ukkadd (a[la]-'a');
}
```
## Practice Problems
* [UVA 10679 - I Love Strings!!!](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1620)
---
title
palindromes_count
---
# Manacher's Algorithm - Finding all sub-palindromes in $O(N)$
## Statement
Given a string $s$ of length $n$, find all the pairs $(i, j)$ such that the substring $s[i\dots j]$ is a palindrome. A string $t$ is a palindrome when $t = t_{rev}$ ($t_{rev}$ is the reversed string of $t$).
## More precise statement
In the worst case a string might have up to $O(n^2)$ palindromic substrings, and at first glance it seems that there is no linear algorithm for this problem.
But the information about the palindromes can be kept **in a compact way**: for each position $i$ we will find the number of non-empty palindromes centered at this position.
Palindromes with a common center form a contiguous chain: if we have a palindrome of length $l$ centered at $i$, we also have palindromes of lengths $l-2$, $l-4$, and so on, centered at $i$. Therefore, we will collect the information about all palindromic substrings in this way.
Palindromes of odd and even lengths are accounted for separately as $d_{odd}[i]$ and $d_{even}[i]$. For the palindromes of even length we assume that they're centered in the position $i$ if their two central characters are $s[i]$ and $s[i-1]$.
For instance, string $s = abababc$ has three palindromes with odd length with centers in the position $s[3] = b$, i. e. $d_{odd}[3] = 3$:
$$a\ \overbrace{b\ a\ \underbrace{b}_{s_3}\ a\ b}^{d_{odd}[3]=3} c$$
And string $s = cbaabd$ has two palindromes with even length with centers in the position $s[3] = a$, i. e. $d_{even}[3] = 2$:
$$c\ \overbrace{b\ a\ \underbrace{a}_{s_3}\ b}^{d_{even}[3]=2} d$$
It's a surprising fact that there is an algorithm, which is simple enough, that calculates these "palindromity arrays" $d_{odd}[]$ and $d_{even}[]$ in linear time. The algorithm is described in this article.
## Solution
In general, this problem has many solutions: with [String Hashing](/string/string-hashing.html) it can be solved in $O(n\cdot \log n)$, and with [Suffix Trees](/string/suffix-tree-ukkonen.html) and fast LCA this problem can be solved in $O(n)$.
But the method described here is **considerably** simpler and has a smaller hidden constant in its time and memory complexity. This algorithm was discovered by **Glenn K. Manacher** in 1975.
Another modern way to solve this problem and to deal with palindromes in general is through the so-called palindromic tree, or eertree.
## Trivial algorithm
To avoid ambiguities in the further description, let us define what the "trivial algorithm" is.
It is the algorithm that does the following: for each center position $i$ it tries to increase the answer by one as long as possible, comparing a pair of corresponding characters each time.
Such an algorithm is slow: it can calculate the answer only in $O(n^2)$.
The implementation of the trivial algorithm is:
```cpp
vector<int> manacher_odd(string s) {
int n = s.size();
s = "$" + s + "^";
vector<int> p(n + 2);
for(int i = 1; i <= n; i++) {
while(s[i - p[i]] == s[i + p[i]]) {
p[i]++;
}
}
return vector<int>(begin(p) + 1, end(p) - 1);
}
```
Terminal characters `$` and `^` were used to avoid dealing with ends of the string separately.
## Manacher's algorithm
We describe the algorithm to find all the sub-palindromes with odd length, i. e. to calculate $d_{odd}[]$.
For fast calculation we'll maintain the **borders $(l, r)$** of the rightmost found (sub-)palindrome (i. e. the current rightmost (sub-)palindrome is $s[l+1] s[l+2] \dots s[r-1]$). Initially we set $l = 0, r = 1$, which corresponds to the empty string.
So, we want to calculate $d_{odd}[i]$ for the next $i$, and all the previous values in $d_{odd}[]$ have been already calculated. We do the following:
* If $i$ is outside the current sub-palindrome, i. e. $i \geq r$, we'll just launch the trivial algorithm.
So we'll increase $d_{odd}[i]$ consecutively and check each time if the current rightmost substring $[i - d_{odd}[i]\dots i + d_{odd}[i]]$ is a palindrome. When we find the first mismatch or meet the boundaries of $s$, we'll stop. In this case we've finally calculated $d_{odd}[i]$. After this, we must not forget to update $(l, r)$. $r$ should be updated in such a way that it represents the last index of the current rightmost sub-palindrome.
* Now consider the case when $i \le r$. We'll try to extract some information from the already calculated values in $d_{odd}[]$. So, let's find the "mirror" position of $i$ in the sub-palindrome $(l, r)$, i.e. we'll get the position $j = l + (r - i)$, and we check the value of $d_{odd}[j]$. Because $j$ is the position symmetrical to $i$ with respect to $(l+r)/2$, we can **almost always** assign $d_{odd}[i] = d_{odd}[j]$. Illustration of this (palindrome around $j$ is actually "copied" into the palindrome around $i$):
$$
\ldots\
\overbrace{
s_{l+1}\ \ldots\
\underbrace{
s_{j-d_{odd}[j]+1}\ \ldots\ s_j\ \ldots\ s_{j+d_{odd}[j]-1}\
}_\text{palindrome}\
\ldots\
\underbrace{
s_{i-d_{odd}[j]+1}\ \ldots\ s_i\ \ldots\ s_{i+d_{odd}[j]-1}\
}_\text{palindrome}\
\ldots\ s_{r-1}\
}^\text{palindrome}\
\ldots
$$
But there is a **tricky case** to be handled correctly: when the "inner" palindrome reaches the borders of the "outer" one, i. e. $j - d_{odd}[j] \le l$ (or, which is the same, $i + d_{odd}[j] \ge r$). Because the symmetry outside the "outer" palindrome is not guaranteed, just assigning $d_{odd}[i] = d_{odd}[j]$ will be incorrect: we do not have enough data to state that the palindrome in the position $i$ has the same length.
Actually, we should restrict the length of our palindrome for now, i. e. assign $d_{odd}[i] = r - i$, to handle such situations correctly. After this we'll run the trivial algorithm which will try to increase $d_{odd}[i]$ while it's possible.
Illustration of this case (the palindrome with center $j$ is restricted to fit the "outer" palindrome):
$$
\ldots\
\overbrace{
\underbrace{
s_{l+1}\ \ldots\ s_j\ \ldots\ s_{j+(j-l)-1}\
}_\text{palindrome}\
\ldots\
\underbrace{
s_{i-(r-i)+1}\ \ldots\ s_i\ \ldots\ s_{r-1}
}_\text{palindrome}\
}^\text{palindrome}\
\underbrace{
\ldots \ldots \ldots \ldots \ldots
}_\text{try moving here}
$$
The illustration shows that although the palindrome with center $j$ could be larger and extend outside the "outer" palindrome, at position $i$ we can only use the part that entirely fits into the "outer" palindrome. But the answer for the position $i$ ($d_{odd}[i]$) can be much bigger than this part, so next we'll run our trivial algorithm that will try to grow it outside our "outer" palindrome, i. e. to the region "try moving here".
Again, we should not forget to update the values $(l, r)$ after calculating each $d_{odd}[i]$.
## Complexity of Manacher's algorithm
At first glance it's not obvious that this algorithm has linear time complexity, because we often run the naive algorithm while searching for the answer at a particular position.
However, a more careful analysis shows that the algorithm is linear. In fact, the [Z-function building algorithm](/string/z-function.html), which looks similar to this algorithm, also works in linear time.
We can notice that every iteration of the trivial algorithm increases $r$ by one. Also, $r$ cannot be decreased during the algorithm. So the trivial algorithm will make $O(n)$ iterations in total.
Other parts of Manacher's algorithm work obviously in linear time. Thus, we get $O(n)$ time complexity.
## Implementation of Manacher's algorithm
For calculating $d_{odd}[]$, we get the following code. Things to note:
- $i$ is the index of the center letter of the current palindrome.
- If $i$ exceeds $r$, $d_{odd}[i]$ is initialized to 0.
- If $i$ does not exceed $r$, $d_{odd}[i]$ is either initialized to $d_{odd}[j]$, where $j$ is the mirror position of $i$ in $(l,r)$, or $d_{odd}[i]$ is restricted to the size of the "outer" palindrome.
- The while loop denotes the trivial algorithm. We launch it irrespective of the initial value of $p[i]$.
- If the size of palindrome centered at $i$ is $x$, then $d_{odd}[i]$ stores $\frac{x+1}{2}$.
```cpp
vector<int> manacher_odd(string s) {
int n = s.size();
s = "$" + s + "^";
vector<int> p(n + 2);
int l = 1, r = 1;
for(int i = 1; i <= n; i++) {
p[i] = max(0, min(r - i, p[l + (r - i)]));
while(s[i - p[i]] == s[i + p[i]]) {
p[i]++;
}
if(i + p[i] > r) {
l = i - p[i], r = i + p[i];
}
}
return vector<int>(begin(p) + 1, end(p) - 1);
}
```
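As a quick check (a hand-verified usage sketch, not part of the original article), running this implementation on the example string from the statement section gives:

```cpp
vector<int> d_odd = manacher_odd("abababc");
// d_odd == {1, 2, 3, 3, 2, 1, 1}; in particular d_odd[3] == 3,
// matching the illustration in the statement section
```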
## Working with parities
Although it is possible to implement Manacher's algorithm for odd and even lengths separately, the implementation of the version for even lengths is often deemed more difficult, as it is less natural and easily leads to off-by-one errors.
To mitigate this, it is possible to reduce the whole problem to the case when we only deal with the palindromes of odd length. To do this, we can put an additional `#` character between each letter in the string and also in the beginning and the end of the string:
$$abcbcba \to \#a\#b\#c\#b\#c\#b\#a\#,$$
$$d = [1,2,1,2,1,4,1,8,1,4,1,2,1,2,1].$$
As you can see, $d[2i]=2 d_{even}[i]+1$ and $d[2i+1]=2 d_{odd}[i]$, where $d$ denotes the Manacher array for odd-length palindromes in the `#`-joined string, while $d_{odd}$ and $d_{even}$ correspond to the arrays defined above in the initial string.
Indeed, `#` characters do not affect the odd-length palindromes, which are still centered in the initial string's characters, but now even-length palindromes of the initial string are odd-length palindromes of the new string centered in `#` characters.
Note that $d[2i]$ and $d[2i+1]$ are essentially the lengths, increased by $1$, of the longest even- and odd-length palindromes centered at position $i$, respectively.
The reduction is implemented in the following way:
```cpp
vector<int> manacher(string s) {
string t;
for(auto c: s) {
t += string("#") + c;
}
auto res = manacher_odd(t + "#");
return vector<int>(begin(res) + 1, end(res) - 1);
}
```
For simplicity, splitting the array into $d_{odd}$ and $d_{even}$ as well as their explicit calculation is omitted.
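As one further application (a sketch built on the conventions above, not part of the original article), the joined array can be used to count all palindromic substrings directly. Letter centers end up at the even indices of the array returned by `manacher`, `#` centers at the odd indices, and in both cases the number of palindromes centered there is the stored value divided by two (integer division), so the total is just the sum of these halves:

```cpp
long long count_palindromic_substrings(string const& s) {
    long long total = 0;
    for (int x : manacher(s))
        total += x / 2;   // each center contributes floor(x / 2) palindromic substrings
    return total;
}
// e.g. count_palindromic_substrings("abcbcba") == 12
```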
## Problems
- [Library Checker - Enumerate Palindromes](https://judge.yosupo.jp/problem/enumerate_palindromes)
- [Longest Palindrome](https://cses.fi/problemset/task/1111)
- [UVA 11475 - Extend to Palindrome](https://onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=26&page=show_problem&problem=2470)
- [GYM - (Q) QueryreuQ](https://codeforces.com/gym/101806/problem/Q)
- [CF - Prefix-Suffix Palindrome](https://codeforces.com/contest/1326/problem/D2)
- [SPOJ - Number of Palindromes](https://www.spoj.com/problems/NUMOFPAL/)
- [Kattis - Palindromes](https://open.kattis.com/problems/palindromes)
---
title
string_tandems
---
# Finding repetitions
Given a string $s$ of length $n$.
A **repetition** is two occurrences of a string in a row.
In other words a repetition can be described by a pair of indices $i < j$ such that the substring $s[i \dots j]$ consists of two identical strings written after each other.
The challenge is to **find all repetitions** in a given string $s$.
Or a simplified task: find **any** repetition or find the **longest** repetition.
The algorithm described here was published in 1982 by Main and Lorentz.
## Example
Consider the repetitions in the following example string:
$$acababaee$$
The string contains the following three repetitions:
- $s[2 \dots 5] = abab$
- $s[3 \dots 6] = baba$
- $s[7 \dots 7] = ee$
Another example:
$$abaaba$$
Here there are only two repetitions
- $s[0 \dots 5] = abaaba$
- $s[2 \dots 3] = aa$
## Number of repetitions
In general there can be up to $O(n^2)$ repetitions in a string of length $n$.
An obvious example is a string consisting of $n$ times the same letter, in this case any substring of even length is a repetition.
In general any periodic string with a short period will contain a lot of repetitions.
On the other hand this fact does not prevent computing the number of repetitions in $O(n \log n)$ time, because the algorithm can give the repetitions in compressed form, in groups of several pieces at once.
There is even a concept that describes groups of periodic substrings with tuples of size four.
It has been proven that the number of such groups is at most linear with respect to the string length.
Also, here are some more interesting results related to the number of repetitions:
- The number of primitive repetitions (those whose halves are not repetitions) is at most $O(n \log n)$.
- If we encode repetitions with tuples of numbers (called Crochemore triples) $(i,~ p,~ r)$ (where $i$ is the position of the beginning, $p$ the length of the repeating substring, and $r$ the number of repetitions), then all repetitions can be described with $O(n \log n)$ such triples.
- Fibonacci strings, defined as
\[\begin{align}
t_0 &= a, \\\\
t_1 &= b, \\\\
t_i &= t_{i-1} + t_{i-2},
\end{align}\]
are "strongly" periodic.
The number of repetitions in the Fibonacci string $t_n$, even in compressed form with Crochemore triples, is $O(f_n \log f_n)$, where $f_n$ denotes the length of $t_n$.
The number of primitive repetitions is also $O(f_n \log f_n)$.
## Main-Lorentz algorithm
The idea behind the Main-Lorentz algorithm is **divide-and-conquer**.
It splits the initial string into halves, and computes the number of repetitions that lie completely in each half using two recursive calls.
Then comes the difficult part.
The algorithm finds all repetitions starting in the first half and ending in the second half (which we will call **crossing repetitions**).
This is the essential part of the Main-Lorentz algorithm, and we will discuss it in detail here.
The complexity of divide-and-conquer algorithms is well researched.
The [master theorem](https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)) says that we will end up with an $O(n \log n)$ algorithm if we can compute the crossing repetitions in $O(n)$ time.
### Search for crossing repetitions
So we want to find all such repetitions that start in the first half of the string, let's call it $u$, and end in the second half, let's call it $v$:
$$s = u + v$$
Their lengths are approximately equal to the length of $s$ divided by two.
Consider an arbitrary repetition and look at the middle character (more precisely the first character of the second half of the repetition).
I.e. if the repetition is a substring $s[i \dots j]$, then the middle character is at position $(i + j + 1) / 2$.
We call a repetition **left** or **right** depending on in which string this character is located: in the string $u$ or in the string $v$.
In other words a repetition is called left if the majority of it lies in $u$, otherwise we call it right.
We will now discuss how to find **all left repetitions**.
Finding all right repetitions can be done in the same way.
Let us denote the length of the left repetition by $2l$ (i.e. each half of the repetition has length $l$).
Consider the first character of the repetition falling into the string $v$ (it is at position $|u|$ in the string $s$).
It coincides with the character $l$ positions before it; let's denote this position by $cntr$.
We will fixate this position $cntr$, and **look for all repetitions at this position** $cntr$.
For example:
$$c ~ \underset{cntr}{a} ~ c ~ | ~ a ~ d ~ a$$
The vertical line divides the two halves.
Here we fixated the position $cntr = 1$, and at this position we find the repetition $caca$.
It is clear that if we fixate the position $cntr$, we simultaneously fixate the half-length of the possible repetitions: $l = |u| - cntr$.
Once we know how to find these repetitions, we will iterate over all possible values for $cntr$ from $0$ to $|u|-1$, and find all left crossing repetitions of half-length $l = |u|,~ |u|-1,~ \dots,~ 1$.
### Criterion for left crossing repetitions
Now, how can we find all such repetitions for a fixated $cntr$?
Keep in mind that there still can be multiple such repetitions.
Let's again look at a visualization, this time for the repetition $abcabc$:
$$\overbrace{a}^{l_1} ~ \overbrace{\underset{cntr}{b} ~ c}^{l_2} ~ \overbrace{a}^{l_1} ~ | ~ \overbrace{b ~ c}^{l_2}$$
Here we denoted the lengths of the two pieces of the repetition with $l_1$ and $l_2$:
$l_1$ is the length of the repetition up to the position $cntr-1$, and $l_2$ is the length of the repetition from $cntr$ to the end of the half of the repetition.
We have $2l = l_1 + l_2 + l_1 + l_2$ as the total length of the repetition.
Let us generate **necessary and sufficient** conditions for such a repetition at position $cntr$ of length $2l = 2(l_1 + l_2) = 2(|u| - cntr)$:
- Let $k_1$ be the largest number such that the first $k_1$ characters before the position $cntr$ coincide with the last $k_1$ characters in the string $u$:
$$
u[cntr - k_1 \dots cntr - 1] = u[|u| - k_1 \dots |u| - 1]
$$
- Let $k_2$ be the largest number such that the $k_2$ characters starting at position $cntr$ coincide with the first $k_2$ characters in the string $v$:
$$
u[cntr \dots cntr + k_2 - 1] = v[0 \dots k_2 - 1]
$$
- Then we have a repetition exactly for any pair $(l_1,~ l_2)$ with
$$
\begin{align}
l_1 &\le k_1, \\\\
l_2 &\le k_2. \\\\
\end{align}
$$
To summarize:
- We fixate a specific position $cntr$.
- All repetitions which we will find now have length $2l = 2(|u| - cntr)$.
There might be multiple such repetitions, they depend on the lengths $l_1$ and $l_2 = l - l_1$.
- We find $k_1$ and $k_2$ as described above.
- Then all suitable repetitions are the ones for which the lengths of the pieces $l_1$ and $l_2$ satisfy the conditions:
$$
\begin{align}
l_1 + l_2 &= l = |u| - cntr \\\\
l_1 &\le k_1, \\\\
l_2 &\le k_2. \\\\
\end{align}
$$
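To make this criterion concrete, here is how it plays out on the $abcabc$ picture above (this worked check is an illustration added here; the values of $k_1$ and $k_2$ follow directly from the definitions).
In that picture $u = abca$, $v = bc$ and $cntr = 1$, hence $l = |u| - cntr = 3$, $k_1 = 1$ (only the single character $a$ before $cntr$ matches the end of $u$) and $k_2 = 2$ (the two characters $bc$ starting at $cntr$ match the beginning of $v$).
The only pair satisfying

$$
\begin{align}
l_1 + l_2 &= 3, \\\\
l_1 &\le 1, \\\\
l_2 &\le 2 \\\\
\end{align}
$$

is $(l_1,~ l_2) = (1,~ 2)$, which corresponds exactly to the repetition $abcabc$ of length $2l = 6$.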
Therefore the only remaining part is how we can compute the values $k_1$ and $k_2$ quickly for every position $cntr$.
Luckily we can compute them in $O(1)$ using the [Z-function](../string/z-function.md):
- We can find the value $k_1$ for each position by calculating the Z-function of the string $\overline{u}$ (i.e. the reversed string $u$).
Then the value $k_1$ for a particular $cntr$ will be equal to the corresponding value of the array of the Z-function.
- To precompute all values $k_2$, we calculate the Z-function of the string $v + \# + u$ (i.e. the string $v$ concatenated with the separator character $\#$ and the string $u$).
Again we just need to look up the corresponding value in the Z-function to get the $k_2$ value.
So this is enough to find all left crossing repetitions.
### Right crossing repetitions
For computing the right crossing repetitions we act similarly:
we define the center $cntr$ as the character corresponding to the last character in the string $u$.
Then the length $k_1$ will be defined as the largest number of characters before the position $cntr$ (inclusive) that coincide with the last characters of the string $u$.
And the length $k_2$ will be defined as the largest number of characters starting at $cntr + 1$ that coincide with the characters of the string $v$.
Thus we can find the values $k_1$ and $k_2$ by computing the Z-function for the strings $\overline{u} + \# + \overline{v}$ and $v$.
After that we can find the repetitions by looking at all positions $cntr$, and use the same criterion as we had for left crossing repetitions.
### Implementation
The implementation of the Main-Lorentz algorithm finds all repetitions in the form of tuples of four numbers $(cntr,~ l,~ k_1,~ k_2)$ in $O(n \log n)$ time.
If you only want to find the number of repetitions in a string, or only want to find the longest repetition in a string, this information is enough and the runtime will still be $O(n \log n)$.
Notice that if you want to expand these tuples to get the starting and end position of each repetition, then the runtime becomes $O(n^2)$ (remember that there can be $O(n^2)$ repetitions).
In this implementation we will do so, and store all found repetitions in a vector of pairs of start and end indices.
```{.cpp file=main_lorentz}
vector<int> z_function(string const& s) {
int n = s.size();
vector<int> z(n);
for (int i = 1, l = 0, r = 0; i < n; i++) {
if (i <= r)
z[i] = min(r-i+1, z[i-l]);
while (i + z[i] < n && s[z[i]] == s[i+z[i]])
z[i]++;
if (i + z[i] - 1 > r) {
l = i;
r = i + z[i] - 1;
}
}
return z;
}
int get_z(vector<int> const& z, int i) {
if (0 <= i && i < (int)z.size())
return z[i];
else
return 0;
}
vector<pair<int, int>> repetitions;
void convert_to_repetitions(int shift, bool left, int cntr, int l, int k1, int k2) {
for (int l1 = max(1, l - k2); l1 <= min(l, k1); l1++) {
if (left && l1 == l) break;
int l2 = l - l1;
int pos = shift + (left ? cntr - l1 : cntr - l - l1 + 1);
repetitions.emplace_back(pos, pos + 2*l - 1);
}
}
void find_repetitions(string s, int shift = 0) {
int n = s.size();
if (n == 1)
return;
int nu = n / 2;
int nv = n - nu;
string u = s.substr(0, nu);
string v = s.substr(nu);
string ru(u.rbegin(), u.rend());
string rv(v.rbegin(), v.rend());
find_repetitions(u, shift);
find_repetitions(v, shift + nu);
vector<int> z1 = z_function(ru);
vector<int> z2 = z_function(v + '#' + u);
vector<int> z3 = z_function(ru + '#' + rv);
vector<int> z4 = z_function(v);
for (int cntr = 0; cntr < n; cntr++) {
int l, k1, k2;
if (cntr < nu) {
l = nu - cntr;
k1 = get_z(z1, nu - cntr);
k2 = get_z(z2, nv + 1 + cntr);
} else {
l = cntr - nu + 1;
k1 = get_z(z3, nu + 1 + nv - 1 - (cntr - nu));
k2 = get_z(z4, (cntr - nu) + 1);
}
if (k1 + k2 >= l)
convert_to_repetitions(shift, cntr < nu, cntr, l, k1, k2);
}
}
```
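As a small usage sketch (not part of the original code; it assumes the snippets above are compiled together with the usual headers and `using namespace std`), the following driver runs `find_repetitions` on the string $acababaee$ used as an example earlier and prints the collected repetitions:

```cpp
int main() {
    find_repetitions("acababaee");
    // The repetitions of this string are s[2..5] = abab, s[3..6] = baba
    // and s[7..8] = ee; each is printed as a pair of start and end indices.
    for (auto& rep : repetitions)
        cout << rep.first << " " << rep.second << "\n";
}
```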
---
title
suffix_array
---
# Suffix Array
## Definition
Let $s$ be a string of length $n$. The $i$-th suffix of $s$ is the substring $s[i \ldots n - 1]$.
A **suffix array** contains integers that represent the **starting indexes** of all the suffixes of a given string, after the aforementioned suffixes are sorted.
As an example look at the string $s = abaab$.
All suffixes are as follows
$$\begin{array}{ll}
0. & abaab \\
1. & baab \\
2. & aab \\
3. & ab \\
4. & b
\end{array}$$
After sorting these strings:
$$\begin{array}{ll}
2. & aab \\
3. & ab \\
0. & abaab \\
4. & b \\
1. & baab
\end{array}$$
Therefore the suffix array for $s$ will be $(2,~ 3,~ 0,~ 4,~ 1)$.
As a data structure it is widely used in areas such as data compression, bioinformatics and, in general, in any area that deals with strings and string matching problems.
## Construction
### $O(n^2 \log n)$ approach {data-toc-label="O(n^2 log n) approach"}
This is the most naive approach.
Get all the suffixes and sort them using quicksort or mergesort and simultaneously retain their original indices.
Sorting uses $O(n \log n)$ comparisons, and since comparing two strings will additionally take $O(n)$ time, we get the final complexity of $O(n^2 \log n)$.
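For illustration only (this sketch is not part of the article's code; `suffix_array_naive` is a name chosen here, and the usual headers and `using namespace std` of the other snippets are assumed), the naive approach can be written as:

```cpp
// Naive O(n^2 log n) construction: sort the suffix start positions,
// comparing the suffixes character by character (O(n) per comparison).
vector<int> suffix_array_naive(string const& s) {
    int n = s.size();
    vector<int> p(n);
    iota(p.begin(), p.end(), 0);
    sort(p.begin(), p.end(), [&](int i, int j) {
        return s.compare(i, n - i, s, j, n - j) < 0;
    });
    return p;
}
```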
### $O(n \log n)$ approach {data-toc-label="O(n log n) approach"}
Strictly speaking the following algorithm will not sort the suffixes, but rather the cyclic shifts of a string.
However we can very easily derive an algorithm for sorting suffixes from it:
it is enough to append an arbitrary character to the end of the string which is smaller than any character from the string.
It is common to use the symbol \$.
Then the order of the sorted cyclic shifts is equivalent to the order of the sorted suffixes, as demonstrated here with the string $dabbb$.
$$\begin{array}{lll}
1. & abbb\$d & abbb \\
4. & b\$dabb & b \\
3. & bb\$dab & bb \\
2. & bbb\$da & bbb \\
0. & dabbb\$ & dabbb
\end{array}$$
Since we are going to sort cyclic shifts, we will consider **cyclic substrings**.
We will use the notation $s[i \dots j]$ for the substring of $s$ even if $i > j$.
In this case we actually mean the string $s[i \dots n-1] + s[0 \dots j]$.
In addition we will take all indices modulo the length of $s$, and will omit the modulo operation for simplicity.
The algorithm we discuss will perform $\lceil \log n \rceil + 1$ iterations.
In the $k$-th iteration ($k = 0 \dots \lceil \log n \rceil$) we sort the $n$ cyclic substrings of $s$ of length $2^k$.
After the $\lceil \log n \rceil$-th iteration the substrings of length $2^{\lceil \log n \rceil} \ge n$ will be sorted, so this is equivalent to sorting the cyclic shifts altogether.
In each iteration of the algorithm, in addition to the permutation $p[0 \dots n-1]$, where $p[i]$ is the index of the $i$-th substring (starting at $i$ and with length $2^k$) in the sorted order, we will also maintain an array $c[0 \dots n-1]$, where $c[i]$ corresponds to the **equivalence class** to which the substring belongs.
This is necessary because some of the substrings will be identical, and the algorithm needs to treat them equally.
For convenience the classes will be labeled by numbers starting from zero.
In addition the numbers $c[i]$ will be assigned in such a way that they preserve information about the order:
if one substring is smaller than the other, then it should also have a smaller class label.
The number of equivalence classes will be stored in a variable $\text{classes}$.
Let's look at an example.
Consider the string $s = aaba$.
The cyclic substrings and the corresponding arrays $p[]$ and $c[]$ are given for each iteration:
$$\begin{array}{cccc}
0: & (a,~ a,~ b,~ a) & p = (0,~ 1,~ 3,~ 2) & c = (0,~ 0,~ 1,~ 0)\\
1: & (aa,~ ab,~ ba,~ aa) & p = (0,~ 3,~ 1,~ 2) & c = (0,~ 1,~ 2,~ 0)\\
2: & (aaba,~ abaa,~ baaa,~ aaab) & p = (3,~ 0,~ 1,~ 2) & c = (1,~ 2,~ 3,~ 0)\\
\end{array}$$
It is worth noting that the array $p[]$ is not unique.
For example in the $0$-th iteration it could also be $p = (3,~ 1,~ 0,~ 2)$ or $p = (3,~ 0,~ 1,~ 2)$.
All of these options permute the substrings into sorted order, so they are all valid.
At the same time the array $c[]$ is fixed, there can be no ambiguities.
Let us now focus on the implementation of the algorithm.
We will write a function that takes a string $s$ and returns the permutation of the sorted cyclic shifts.
```{.cpp file=suffix_array_sort_cyclic1}
vector<int> sort_cyclic_shifts(string const& s) {
int n = s.size();
const int alphabet = 256;
```
At the beginning (in the **$0$-th iteration**) we must sort the cyclic substrings of length $1$, that is we have to sort all characters of the string and divide them into equivalence classes (same symbols get assigned to the same class).
This can be done trivially, for example, by using **counting sort**.
For each character we count how many times it appears in the string, and then use this information to create the array $p[]$.
After that we go through the array $p[]$ and construct $c[]$ by comparing adjacent characters.
```{.cpp file=suffix_array_sort_cyclic2}
vector<int> p(n), c(n), cnt(max(alphabet, n), 0);
for (int i = 0; i < n; i++)
cnt[s[i]]++;
for (int i = 1; i < alphabet; i++)
cnt[i] += cnt[i-1];
for (int i = 0; i < n; i++)
p[--cnt[s[i]]] = i;
c[p[0]] = 0;
int classes = 1;
for (int i = 1; i < n; i++) {
if (s[p[i]] != s[p[i-1]])
classes++;
c[p[i]] = classes - 1;
}
```
Now we have to talk about the iteration step.
Let's assume we have already performed the $k-1$-th step and computed the values of the arrays $p[]$ and $c[]$ for it.
We want to compute the values for the $k$-th step in $O(n)$ time.
Since we perform this step $O(\log n)$ times, the complete algorithm will have a time complexity of $O(n \log n)$.
To do this, note that the cyclic substrings of length $2^k$ consists of two substrings of length $2^{k-1}$ which we can compare with each other in $O(1)$ using the information from the previous phase - the values of the equivalence classes $c[]$.
Thus, for two substrings of length $2^k$ starting at position $i$ and $j$, all necessary information to compare them is contained in the pairs $(c[i],~ c[i + 2^{k-1}])$ and $(c[j],~ c[j + 2^{k-1}])$.
$$\dots
\overbrace{
\underbrace{s_i \dots s_{i+2^{k-1}-1}}_{\text{length} = 2^{k-1},~ \text{class} = c[i]}
\quad
\underbrace{s_{i+2^{k-1}} \dots s_{i+2^k-1}}_{\text{length} = 2^{k-1},~ \text{class} = c[i + 2^{k-1}]}
}^{\text{length} = 2^k}
\dots
\overbrace{
\underbrace{s_j \dots s_{j+2^{k-1}-1}}_{\text{length} = 2^{k-1},~ \text{class} = c[j]}
\quad
\underbrace{s_{j+2^{k-1}} \dots s_{j+2^k-1}}_{\text{length} = 2^{k-1},~ \text{class} = c[j + 2^{k-1}]}
}^{\text{length} = 2^k}
\dots
$$
This gives us a very simple solution:
**sort** the substrings of length $2^k$ **by these pairs of numbers**.
This will give us the required order $p[]$.
However a normal comparison sort runs in $O(n \log n)$ time, with which we are not satisfied.
This would only give us an algorithm for constructing a suffix array in $O(n \log^2 n)$ time.
How do we quickly perform such a sorting of the pairs?
Since the elements of the pairs do not exceed $n$, we can use counting sort again.
However sorting pairs with counting sort is not the most efficient.
To achieve a better hidden constant in the complexity, we will use another trick.
We use here the technique on which **radix sort** is based: to sort the pairs we first sort them by the second element, and then by the first element (with a stable sort, i.e. sorting without breaking the relative order of equal elements).
However the second elements were already sorted in the previous iteration.
Thus, in order to sort the pairs by the second elements, we just need to subtract $2^{k-1}$ from the indices in $p[]$ (e.g. if the smallest substring of length $2^{k-1}$ starts at position $i$, then the substring of length $2^k$ with the smallest second half starts at $i - 2^{k-1}$).
So only by simple subtractions we can sort the second elements of the pairs in $p[]$.
Now we need to perform a stable sort by the first elements.
As already mentioned, this can be accomplished with counting sort.
The only thing left is to compute the equivalence classes $c[]$, but as before this can be done by simply iterating over the sorted permutation $p[]$ and comparing neighboring pairs.
Here is the remaining implementation.
We use temporary arrays $pn[]$ and $cn[]$ to store the permutation by the second elements and the new equivalence class indices.
```{.cpp file=suffix_array_sort_cyclic3}
vector<int> pn(n), cn(n);
for (int h = 0; (1 << h) < n; ++h) {
for (int i = 0; i < n; i++) {
pn[i] = p[i] - (1 << h);
if (pn[i] < 0)
pn[i] += n;
}
fill(cnt.begin(), cnt.begin() + classes, 0);
for (int i = 0; i < n; i++)
cnt[c[pn[i]]]++;
for (int i = 1; i < classes; i++)
cnt[i] += cnt[i-1];
for (int i = n-1; i >= 0; i--)
p[--cnt[c[pn[i]]]] = pn[i];
cn[p[0]] = 0;
classes = 1;
for (int i = 1; i < n; i++) {
pair<int, int> cur = {c[p[i]], c[(p[i] + (1 << h)) % n]};
pair<int, int> prev = {c[p[i-1]], c[(p[i-1] + (1 << h)) % n]};
if (cur != prev)
++classes;
cn[p[i]] = classes - 1;
}
c.swap(cn);
}
return p;
}
```
The algorithm requires $O(n \log n)$ time and $O(n)$ memory. For simplicity we used the complete ASCII range as alphabet.
If it is known that the string only contains a subset of characters, e.g. only lowercase letters, then the implementation can be optimized, but the optimization factor would likely be insignificant, as the size of the alphabet only matters on the first iteration. Every other iteration depends on the number of equivalence classes, which may quickly reach $O(n)$ even if initially it was a string over the alphabet of size $2$.
Also note that this algorithm only sorts the cyclic shifts.
As mentioned at the beginning of this section we can generate the sorted order of the suffixes by appending a character that is smaller than all other characters of the string, and sorting the cyclic shifts of the resulting string $s + \$$.
This will give the suffix array of $s$, only prepended with the index $|s|$ (the suffix consisting of the appended character alone), which we simply remove.
```{.cpp file=suffix_array_construction}
vector<int> suffix_array_construction(string s) {
s += "$";
vector<int> sorted_shifts = sort_cyclic_shifts(s);
sorted_shifts.erase(sorted_shifts.begin());
return sorted_shifts;
}
```
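As a quick usage sketch (again assuming the usual headers and `using namespace std` of these snippets), running the construction on the example string $abaab$ from the definition section reproduces the suffix array computed there:

```cpp
int main() {
    vector<int> sa = suffix_array_construction("abaab");
    for (int idx : sa)
        cout << idx << " ";   // expected output: 2 3 0 4 1
    cout << "\n";
}
```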
## Applications
### Finding the smallest cyclic shift
The algorithm above sorts all cyclic shifts (without appending a character to the string), and therefore $p[0]$ gives the position of the smallest cyclic shift.
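In code this is a one-liner (a sketch assuming `sort_cyclic_shifts` from above; the wrapper name is chosen here for illustration):

```cpp
// Returns the starting position of the lexicographically smallest
// cyclic shift of s (no extra character is appended).
int smallest_cyclic_shift_start(string const& s) {
    return sort_cyclic_shifts(s)[0];
}
```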
### Finding a substring in a string
The task is to find a string $s$ inside some text $t$ online - we know the text $t$ beforehand, but not the string $s$.
We can create the suffix array for the text $t$ in $O(|t| \log |t|)$ time.
Now we can look for the substring $s$ in the following way.
The occurrence of $s$ must be a prefix of some suffix from $t$.
Since we sorted all the suffixes we can perform a binary search for $s$ in $p$.
Comparing the current suffix and the substring $s$ within the binary search can be done in $O(|s|)$ time, therefore the complexity for finding the substring is $O(|s| \log |t|)$.
Also notice that if the substring occurs multiple times in $t$, then all occurrences will be next to each other in $p$.
Therefore the number of occurrences can be found with a second binary search, and all occurrences can be printed easily.
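A possible sketch of this binary search (not from the original article; `count_occurrences` is a name chosen here, and `p` is assumed to be the suffix array of the text `t`, e.g. from `suffix_array_construction(t)`):

```cpp
// Counts the occurrences of s in t by binary searching, in the suffix
// array p of t, for the range of suffixes that have s as a prefix.
int count_occurrences(string const& t, vector<int> const& p, string const& s) {
    auto lo = lower_bound(p.begin(), p.end(), s, [&](int start, string const& pat) {
        return t.compare(start, pat.size(), pat) < 0;   // suffix prefix < pattern
    });
    auto hi = upper_bound(p.begin(), p.end(), s, [&](string const& pat, int start) {
        return t.compare(start, pat.size(), pat) > 0;   // pattern < suffix prefix
    });
    return hi - lo;
}
```

If the returned count is positive, `*lo` is the starting position of one occurrence (the lexicographically smallest suffix beginning with `s`).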
### Comparing two substrings of a string
We want to be able to compare two substrings of the same length of a given string $s$ in $O(1)$ time, i.e. checking if the first substring is smaller than the second one.
For this we construct the suffix array in $O(|s| \log |s|)$ time and store all the intermediate results of the equivalence classes $c[]$.
Using this information we can compare any two substrings whose length is equal to a power of two in $O(1)$:
for this it is sufficient to compare the equivalence classes of both substrings.
Now we want to generalize this method to substrings of arbitrary length.
Let's compare two substrings of length $l$ with the starting indices $i$ and $j$.
We find the largest length of a block that is placed inside a substring of this length: the greatest $k$ such that $2^k \le l$.
Then comparing the two substrings can be replaced by comparing two overlapping blocks of length $2^k$:
first you need to compare the two blocks starting at $i$ and $j$, and if these are equal then compare the two blocks ending in positions $i + l - 1$ and $j + l - 1$:
$$\dots
\overbrace{\underbrace{s_i \dots s_{i+l-2^k} \dots s_{i+2^k-1}}_{2^k} \dots s_{i+l-1}}^{\text{first}}
\dots
\overbrace{\underbrace{s_j \dots s_{j+l-2^k} \dots s_{j+2^k-1}}_{2^k} \dots s_{j+l-1}}^{\text{second}}
\dots$$
$$\dots
\overbrace{s_i \dots \underbrace{s_{i+l-2^k} \dots s_{i+2^k-1} \dots s_{i+l-1}}_{2^k}}^{\text{first}}
\dots
\overbrace{s_j \dots \underbrace{s_{j+l-2^k} \dots s_{j+2^k-1} \dots s_{j+l-1}}_{2^k}}^{\text{second}}
\dots$$
Here is the implementation of the comparison.
Note that it is assumed that the function gets called with the already calculated $k$.
$k$ can be computed with $\lfloor \log l \rfloor$, but it is more efficient to precompute all $k$ values for every $l$.
See for instance the article about the [Sparse Table](../data_structures/sparse-table.md), which uses a similar idea and computes all $\log$ values.
```cpp
// Compares the substrings s[i..i+l-1] and s[j..j+l-1] in O(1).
// c[k] is the array of equivalence classes saved from the k-th iteration of
// the construction, n is the string length, and k = floor(log2(l)) is
// assumed to be precomputed by the caller.
int compare(int i, int j, int l, int k) {
    pair<int, int> a = {c[k][i], c[k][(i+l-(1 << k))%n]};
    pair<int, int> b = {c[k][j], c[k][(j+l-(1 << k))%n]};
    return a == b ? 0 : a < b ? -1 : 1;
}
```
### Longest common prefix of two substrings with additional memory
For a given string $s$ we want to compute the longest common prefix (**LCP**) of two arbitrary suffixes with position $i$ and $j$.
The method described here uses $O(|s| \log |s|)$ additional memory.
A completely different approach that will only use a linear amount of memory is described in the next section.
We construct the suffix array in $O(|s| \log |s|)$ time, and remember the intermediate results of the arrays $c[]$ from each iteration.
Let's compute the LCP for two suffixes starting at $i$ and $j$.
We can compare any two substrings with a length equal to a power of two in $O(1)$.
To do this, we compare the suffixes block by block, going through the powers of two from the highest down: if the substrings of the current length are the same, we add this length to the answer and continue checking the LCP to the right of the equal part, i.e. $i$ and $j$ get increased by the current power of two.
```cpp
// LCP of the suffixes starting at positions i and j, using the class arrays
// c[k] saved from every iteration of the construction.
// The indices are reduced modulo n because i and j may run past the end of
// the string while they are increased inside the loop.
int lcp(int i, int j) {
    int ans = 0;
    for (int k = log_n; k >= 0; k--) {
        if (c[k][i % n] == c[k][j % n]) {
            ans += 1 << k;
            i += 1 << k;
            j += 1 << k;
        }
    }
    return ans;
}
```
Here `log_n` denotes a constant that is equal to the logarithm of $n$ in base $2$ rounded down.
### Longest common prefix of two substrings without additional memory
We have the same task as in the previous section.
We have to compute the longest common prefix (**LCP**) for two suffixes of a string $s$.
Unlike the previous method this one will only use $O(|s|)$ memory.
The result of the preprocessing will be the LCP array (which is itself an important source of information about the string, and is therefore also used to solve other tasks).
LCP queries can be answered by performing RMQ queries (range minimum queries) in this array, so for different implementations it is possible to achieve logarithmic and even constant query time.
The basis for this algorithm is the following idea:
we will compute the longest common prefix for each **pair of adjacent suffixes in the sorted order**.
In other words we construct an array $\text{lcp}[0 \dots n-2]$, where $\text{lcp}[i]$ is equal to the length of the longest common prefix of the suffixes starting at $p[i]$ and $p[i+1]$.
This array will give us an answer for any two adjacent suffixes of the string.
Then the answer for arbitrary two suffixes, not necessarily neighboring ones, can be obtained from this array.
In fact, let the request be to compute the LCP of the suffixes $p[i]$ and $p[j]$.
Then the answer to this query will be $\min(lcp[i],~ lcp[i+1],~ \dots,~ lcp[j-1])$.
Thus if we have such an array $\text{lcp}$, then the problem is reduced to [RMQ](../sequences/rmq.md) (range minimum query), for which there are many solutions with different complexities.
So the main task is to **build** this array $\text{lcp}$.
We will use **Kasai's algorithm**, which can compute this array in $O(n)$ time.
Let's look at two adjacent suffixes in the sorted order (order of the suffix array).
Let their starting positions be $i$ and $j$ and their $\text{lcp}$ equal to $k > 0$.
If we remove the first letter of both suffixes - i.e. we take the suffixes $i+1$ and $j+1$ - then it should be obvious that the $\text{lcp}$ of these two is $k - 1$.
However we cannot use this value and write it in the $\text{lcp}$ array, because these two suffixes might not be next to each other in the sorted order.
The suffix $i+1$ will of course be smaller than the suffix $j+1$, but there might be some suffixes between them.
However, recall that the LCP of two suffixes is the minimum of the $\text{lcp}$ values of all adjacent pairs between them in the sorted order; therefore the LCP of any two suffixes from that interval is at least $k-1$, in particular also the LCP of the suffix $i+1$ and its immediate successor in the sorted order.
And possibly it can be bigger.
Now we can already implement the algorithm.
We will iterate over the suffixes in order of decreasing length, i.e. over the starting positions $i = 0, 1, \dots, n-1$. This way we can reuse the last value $k$, since going from the suffix starting at $i$ to the suffix starting at $i+1$ is exactly the same as removing its first letter.
We will need an additional array $\text{rank}$, which will give us the position of a suffix in the sorted list of suffixes.
```{.cpp file=suffix_array_lcp_construction}
vector<int> lcp_construction(string const& s, vector<int> const& p) {
int n = s.size();
vector<int> rank(n, 0);
for (int i = 0; i < n; i++)
rank[p[i]] = i;
int k = 0;
vector<int> lcp(n-1, 0);
for (int i = 0; i < n; i++) {
if (rank[i] == n - 1) {
k = 0;
continue;
}
int j = p[rank[i] + 1];
while (i + k < n && j + k < n && s[i+k] == s[j+k])
k++;
lcp[rank[i]] = k;
if (k)
k--;
}
return lcp;
}
```
It is easy to see that we decrease $k$ at most $O(n)$ times (at most once per iteration, except for $\text{rank}[i] == n-1$, where we directly reset it to $0$), and since the LCP between two strings is at most $n-1$, we will also increase $k$ only $O(n)$ times.
Therefore the algorithm runs in $O(n)$ time.
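For completeness, here is a sketch (an addition, not the article's code; `LCPQuery` is a name chosen here) of answering LCP queries for two arbitrary suffixes with a sparse-table RMQ over the $\text{lcp}$ array built above. With a precomputed logarithm table the query would be $O(1)$; the small loop used here to find the largest power of two keeps the sketch short at the price of $O(\log n)$ per query.

```cpp
// Range-minimum over lcp[]: the LCP of the suffixes a and b equals the
// minimum of lcp[l..r-1], where l and r are their positions in the suffix array.
struct LCPQuery {
    vector<int> rank;
    vector<vector<int>> table;  // table[k][i] = min of lcp[i .. i + 2^k - 1]

    LCPQuery(vector<int> const& p, vector<int> const& lcp) {
        int n = p.size(), m = lcp.size();
        rank.resize(n);
        for (int i = 0; i < n; i++)
            rank[p[i]] = i;
        table.push_back(lcp);
        for (int k = 1; (1 << k) <= m; k++) {
            table.emplace_back(m - (1 << k) + 1);
            for (int i = 0; i + (1 << k) <= m; i++)
                table[k][i] = min(table[k-1][i], table[k-1][i + (1 << (k-1))]);
        }
    }

    // LCP of the suffixes starting at positions a and b (a != b)
    int query(int a, int b) {
        int l = rank[a], r = rank[b];
        if (l > r) swap(l, r);
        int k = 0;
        while ((2 << k) <= r - l)   // largest k with 2^k <= r - l
            k++;
        return min(table[k][l], table[k][r - (1 << k)]);
    }
};
```

For the example string $abaab$ (where $p = (2,~ 3,~ 0,~ 4,~ 1)$ and $\text{lcp} = (1,~ 2,~ 0,~ 1)$), `LCPQuery(p, lcp).query(0, 3)` returns $2$, the length of the common prefix $ab$ of the suffixes $abaab$ and $ab$.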
### Number of different substrings
We preprocess the string $s$ by computing the suffix array and the LCP array.
Using this information we can compute the number of different substrings in the string.
To do this, we will think about which **new** substrings begin at position $p[0]$, then at $p[1]$, etc.
In fact we take the suffixes in sorted order and see what prefixes give new substrings.
Thus we will not overlook any by accident.
Because the suffixes are sorted, it is clear that the current suffix $p[i]$ will give new substrings for all its prefixes, except for the prefixes that coincide with prefixes of the suffix $p[i-1]$.
Thus, all its prefixes except the first $\text{lcp}[i-1]$ ones give new substrings.
Since the length of the current suffix is $n - p[i]$, this means that $n - p[i] - \text{lcp}[i-1]$ new substrings start at $p[i]$.
Summing over all the suffixes, we get the final answer:
$$\sum_{i=0}^{n-1} (n - p[i]) - \sum_{i=0}^{n-2} \text{lcp}[i] = \frac{n^2 + n}{2} - \sum_{i=0}^{n-2} \text{lcp}[i]$$
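As a usage sketch (not part of the article; `count_distinct_substrings` is a name chosen here, and it assumes `suffix_array_construction` and `lcp_construction` from above), the formula can be evaluated directly:

```cpp
// Number of distinct substrings of s: n(n+1)/2 minus the sum of the lcp array.
long long count_distinct_substrings(string const& s) {
    long long n = s.size();
    vector<int> p = suffix_array_construction(s);
    vector<int> lcp = lcp_construction(s, p);
    long long result = n * (n + 1) / 2;
    for (int x : lcp)
        result -= x;
    return result;
}
```

For the example string $abaab$ this yields $15 - (1 + 2 + 0 + 1) = 11$ distinct substrings.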
## Practice Problems
* [Uva 760 - DNA Sequencing](http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=24&page=show_problem&problem=701)
* [Uva 1223 - Editor](http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=24&page=show_problem&problem=3664)
* [Codechef - Tandem](https://www.codechef.com/problems/TANDEM)
* [Codechef - Substrings and Repetitions](https://www.codechef.com/problems/ANUSAR)
* [Codechef - Entangled Strings](https://www.codechef.com/problems/TANGLED)
* [Codeforces - Martian Strings](http://codeforces.com/problemset/problem/149/E)
* [Codeforces - Little Elephant and Strings](http://codeforces.com/problemset/problem/204/E)
* [SPOJ - Ada and Terramorphing](http://www.spoj.com/problems/ADAPHOTO/)
* [SPOJ - Ada and Substring](http://www.spoj.com/problems/ADASTRNG/)
* [UVA - 1227 - The longest constant gene](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=3668)
* [SPOJ - Longest Common Substring](http://www.spoj.com/problems/LCS/en/)
* [UVA 11512 - GATTACA](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2507)
* [LA 7502 - Suffixes and Palindromes](https://icpcarchive.ecs.baylor.edu/index.php?option=com_onlinejudge&Itemid=8&category=720&page=show_problem&problem=5524)
* [GYM - Por Costel and the Censorship Committee](http://codeforces.com/gym/100923/problem/D)
* [UVA 1254 - Top 10](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3695)
* [UVA 12191 - File Recover](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3343)
* [UVA 12206 - Stammering Aliens](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=3358)
* [Codechef - Jarvis and LCP](https://www.codechef.com/problems/INSQ16F)
* [LA 3943 - Liking's Letter](https://icpcarchive.ecs.baylor.edu/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=1944)
* [UVA 11107 - Life Forms](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2048)
* [UVA 12974 - Exquisite Strings](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=862&page=show_problem&problem=4853)
* [UVA 10526 - Intellectual Property](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=1467)
* [UVA 12338 - Anti-Rhyme Pairs](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=3760)
* [DevSkills Reconstructing Blue Print of Life (archived)](http://web.archive.org/web/20210126015936/https://devskill.com/CodingProblems/ViewProblem/328)
* [UVA 12191 - File Recover](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3343)
* [SPOJ - Suffix Array](http://www.spoj.com/problems/SARRAY/)
* [LA 4513 - Stammering Aliens](https://icpcarchive.ecs.baylor.edu/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2514)
* [SPOJ - LCS2](http://www.spoj.com/problems/LCS2/)
* [Codeforces - Fake News (hard)](http://codeforces.com/contest/802/problem/I)
* [SPOJ - Longest Commong Substring](http://www.spoj.com/problems/LONGCS/)
* [SPOJ - Lexicographical Substring Search](http://www.spoj.com/problems/SUBLEX/)
* [Codeforces - Forbidden Indices](http://codeforces.com/contest/873/problem/F)
* [Codeforces - Tricky and Clever Password](http://codeforces.com/contest/30/problem/E)
* [LA 6856 - Circle of digits](https://icpcarchive.ecs.baylor.edu/index.php?option=onlinejudge&page=show_problem&problem=4868)
---
title
z_function
---
# Z-function and its calculation
Suppose we are given a string $s$ of length $n$. The **Z-function** for this string is an array of length $n$ where the $i$-th element is equal to the greatest number of characters starting from the position $i$ that coincide with the first characters of $s$.
In other words, $z[i]$ is the length of the longest string that is, at the same time, a prefix of $s$ and a prefix of the suffix of $s$ starting at $i$.
**Note.** In this article, to avoid ambiguity, we assume $0$-based indexes; that is: the first character of $s$ has index $0$ and the last one has index $n-1$.
The first element of Z-function, $z[0]$, is generally not well defined. In this article we will assume it is zero (although it doesn't change anything in the algorithm implementation).
This article presents an algorithm for calculating the Z-function in $O(n)$ time, as well as various applications of it.
## Examples
For example, here are the values of the Z-function computed for different strings:
* "aaaaa" - $[0, 4, 3, 2, 1]$
* "aaabaab" - $[0, 2, 1, 0, 2, 1, 0]$
* "abacaba" - $[0, 0, 1, 0, 3, 0, 1]$
## Trivial algorithm
The formal definition can be turned directly into the following elementary $O(n^2)$ implementation.
```cpp
vector<int> z_function_trivial(string s) {
int n = s.size();
vector<int> z(n);
for (int i = 1; i < n; i++) {
while (i + z[i] < n && s[z[i]] == s[i + z[i]]) {
z[i]++;
}
}
return z;
}
```
We just iterate through every position $i$ and update $z[i]$ for each one of them, starting from $z[i] = 0$ and incrementing it as long as we don't find a mismatch (and as long as we don't reach the end of the string).
Of course, this is not an efficient implementation. We will now show the construction of an efficient implementation.
## Efficient algorithm to compute the Z-function
To obtain an efficient algorithm we will compute the values of $z[i]$ in turn from $i = 1$ to $n - 1$ but at the same time, when computing a new value, we'll try to make the best use possible of the previously computed values.
For the sake of brevity, let's call **segment matches** those substrings that coincide with a prefix of $s$. For example, the value of the desired Z-function $z[i]$ is the length of the segment match starting at position $i$ (and that ends at position $i + z[i] - 1$).
To do this, we will keep **the $[l, r)$ indices of the rightmost segment match**. That is, among all detected segments we will keep the one that ends rightmost. In a way, the index $r$ can be seen as the "boundary" to which our string $s$ has been scanned by the algorithm; everything beyond that point is not yet known.
Then, if the current index (for which we have to compute the next value of the Z-function) is $i$, we have one of two options:
* $i \geq r$ -- the current position is **outside** of what we have already processed.
We will then compute $z[i]$ with the **trivial algorithm** (that is, just comparing values one by one). Note that in the end, if $z[i] > 0$, we'll have to update the indices of the rightmost segment, because it's guaranteed that the new $r = i + z[i]$ is better than the previous $r$.
* $i < r$ -- the current position is inside the current segment match $[l, r)$.
Then we can use the already calculated Z-values to initialize $z[i]$ to something better than zero, possibly even to a quite large value.
For this, we observe that the substrings $s[l \dots r)$ and $s[0 \dots r-l)$ **match**. This means that as an initial approximation for $z[i]$ we can take the value already computed for the corresponding segment $s[0 \dots r-l)$, and that is $z[i-l]$.
However, the value $z[i-l]$ could be too large: when applied to position $i$ it could exceed the index $r$. This is not allowed because we know nothing about the characters to the right of $r$: they may differ from those required.
Here is **an example** of a similar scenario:
$$ s = "aaaabaa" $$
When we get to the last position ($i = 6$), the current match segment will be $[5, 7)$. Position $6$ will then match position $6 - 5 = 1$, for which the value of the Z-function is $z[1] = 3$. Obviously, we cannot initialize $z[6]$ to $3$, it would be completely incorrect. The maximum value we could initialize it to is $1$ -- because it's the largest value that doesn't bring us beyond the index $r$ of the match segment $[l, r)$.
Thus, as an **initial approximation** for $z[i]$ we can safely take:
$$ z_0[i] = \min(r - i,\; z[i-l]) $$
After having $z[i]$ initialized to $z_0[i]$, we try to increment $z[i]$ by running the **trivial algorithm** -- because in general, after the border $r$, we cannot know if the segment will continue to match or not.
Thus, the whole algorithm is split in two cases, which differ only in **the initial value** of $z[i]$: in the first case it's assumed to be zero, in the second case it is determined by the previously computed values (using the above formula). After that, both branches of this algorithm can be reduced to the implementation of **the trivial algorithm**, which starts immediately after we specify the initial value.
The algorithm turns out to be very simple. Despite the fact that on each iteration the trivial algorithm is run, we have made significant progress, having an algorithm that runs in linear time. Later on we will prove that the running time is linear.
## Implementation
Implementation turns out to be rather concise:
```cpp
vector<int> z_function(string s) {
int n = s.size();
vector<int> z(n);
int l = 0, r = 0;
for(int i = 1; i < n; i++) {
if(i < r) {
z[i] = min(r - i, z[i - l]);
}
while(i + z[i] < n && s[z[i]] == s[i + z[i]]) {
z[i]++;
}
if(i + z[i] > r) {
l = i;
r = i + z[i];
}
}
return z;
}
```
### Comments on this implementation
The whole solution is given as a function which returns an array of length $n$ -- the Z-function of $s$.
Array $z$ is initially filled with zeros. The current rightmost match segment is assumed to be $[0; 0)$ (that is, a deliberately small segment which doesn't contain any $i$).
Inside the loop for $i = 1 \dots n - 1$ we first determine the initial value $z[i]$ -- it will either remain zero or be computed using the above formula.
Thereafter, the trivial algorithm attempts to increase the value of $z[i]$ as much as possible.
In the end, if it's required (that is, if $i + z[i] > r$), we update the rightmost match segment $[l, r)$.
## Asymptotic behavior of the algorithm
We will prove that the above algorithm has a running time that is linear in the length of the string -- thus, it's $O(n)$.
The proof is very simple.
We are interested in the nested `while` loop, since everything else is just a bunch of constant operations which sums up to $O(n)$.
We will show that **each iteration** of the `while` loop will increase the right border $r$ of the match segment.
To do that, we will consider both branches of the algorithm:
* $i \geq r$
In this case, either the `while` loop won't make any iteration (if $s[0] \ne s[i]$), or it will take a few iterations, starting at position $i$, each time moving one character to the right. After that, the right border $r$ will necessarily be updated.
So we have found that, when $i \geq r$, each iteration of the `while` loop increases the value of the new $r$ index.
* $i < r$
In this case, we initialize $z[i]$ to a certain value $z_0$ given by the above formula. Let's compare this initial value $z_0$ to the value $r - i$. We will have three cases:
* $z_0 < r - i$
We prove that in this case no iteration of the `while` loop will take place.
It's easy to prove, for example, by contradiction: if the `while` loop made at least one iteration, it would mean that initial approximation $z[i] = z_0$ was inaccurate (less than the match's actual length). But since $s[l \dots r)$ and $s[0 \dots r-l)$ are the same, this would imply that $z[i-l]$ holds the wrong value (less than it should be).
Thus, since $z[i-l]$ is correct and it is less than $r - i$, it follows that this value coincides with the required value $z[i]$.
* $z_0 = r - i$
In this case, the `while` loop can make a few iterations, but each of them will lead to an increase in the value of the $r$ index because we will start comparing from $s[r]$, which will climb beyond the $[l, r)$ interval.
* $z_0 > r - i$
This option is impossible, by definition of $z_0$.
So, we have proved that each iteration of the inner loop makes the $r$ pointer advance to the right. Since $r$ can't be more than $n-1$, this means that the inner loop won't make more than $n-1$ iterations.
As the rest of the algorithm obviously works in $O(n)$, we have proved that the whole algorithm for computing Z-functions runs in linear time.
## Applications
We will now consider some uses of Z-functions for specific tasks.
These applications will be largely similar to applications of [prefix function](prefix-function.md).
### Search the substring
To avoid confusion, we call $t$ the **string of text**, and $p$ the **pattern**. The problem is: find all occurrences of the pattern $p$ inside the text $t$.
To solve this problem, we create a new string $s = p + \diamond + t$, that is, we apply string concatenation to $p$ and $t$ but we also put a separator character $\diamond$ in the middle (we'll choose $\diamond$ so that it will certainly not be present anywhere in the strings $p$ or $t$).
Compute the Z-function for $s$. Then, for any $i$ in the interval $[0; \; \operatorname{length}(t) - 1]$, we will consider the corresponding value $k = z[i + \operatorname{length}(p) + 1]$. If $k$ is equal to $\operatorname{length}(p)$ then we know there is one occurrence of $p$ in the $i$-th position of $t$, otherwise there is no occurrence of $p$ in the $i$-th position of $t$.
The running time (and memory consumption) is $O(\operatorname{length}(t) + \operatorname{length}(p))$.
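For illustration, here is a minimal sketch of this matcher; it assumes the `z_function` implementation above and that the character `#` occurs neither in $p$ nor in $t$ (the function name is hypothetical).

```cpp
// Find all positions where pattern p occurs in text t, using the Z-function.
vector<int> find_occurrences(string const& t, string const& p) {
    string s = p + '#' + t;           // '#' acts as the separator character
    vector<int> z = z_function(s);
    vector<int> occurrences;
    for (int i = 0; i < (int)t.size(); i++) {
        if (z[i + (int)p.size() + 1] == (int)p.size())
            occurrences.push_back(i); // p starts at position i of t
    }
    return occurrences;
}
```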
### Number of distinct substrings in a string
Given a string $s$ of length $n$, count the number of distinct substrings of $s$.
We'll solve this problem iteratively. That is: knowing the current number of different substrings, recalculate this amount after adding to the end of $s$ one character.
So, let $k$ be the current number of distinct substrings of $s$. We append a new character $c$ to $s$. Obviously, there can be some new substrings ending in this new character $c$ (namely, all those strings that end with this symbol and that we haven't encountered yet).
Take the string $t = s + c$ and reverse it (write its characters in reverse order). Our task is now to count how many prefixes of $t$ are not found anywhere else in $t$. Let's compute the Z-function of $t$ and find its maximum value $z_{max}$. Obviously, $t$'s prefix of length $z_{max}$ occurs also somewhere in the middle of $t$, and clearly so do all shorter prefixes; on the other hand, no longer prefix can occur elsewhere, otherwise some value of the Z-function would exceed $z_{max}$.
So, we have found that the number of new substrings that appear when symbol $c$ is appended to $s$ is equal to $\operatorname{length}(t) - z_{max}$.
Each such recalculation takes $O(\operatorname{length}(t))$ time, so the running time of this solution is $O(n^2)$ for a string of length $n$.
It's worth noting that in exactly the same way we can recalculate, still in $O(n)$ time, the number of distinct substrings when appending a character in the beginning of the string, as well as when removing it (from the end or the beginning).
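A direct sketch of this iterative recount, reusing the `z_function` above (the helper name is only for illustration):

```cpp
// Count distinct substrings of s by appending one character at a time.
long long count_distinct_substrings(string const& s) {
    long long count = 0;
    string t;
    for (char c : s) {
        t += c;
        string rev(t.rbegin(), t.rend());  // reverse the current prefix
        vector<int> z = z_function(rev);
        int z_max = 0;
        for (int zi : z)
            z_max = max(z_max, zi);
        count += (int)t.size() - z_max;    // new substrings ending at c
    }
    return count;
}
```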
### String compression
Given a string $s$ of length $n$. Find its shortest "compressed" representation, that is: find a string $t$ of shortest length such that $s$ can be represented as a concatenation of one or more copies of $t$.
A solution is: compute the Z-function of $s$, loop through all $i$ such that $i$ divides $n$. Stop at the first $i$ such that $i + z[i] = n$. Then, the string $s$ can be compressed to the length $i$.
The proof for this fact is the same as the solution which uses the [prefix function](prefix-function.md).
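A short sketch of this check (the helper name is hypothetical):

```cpp
// Length of the shortest string t such that s is a concatenation of copies of t.
int compress(string const& s) {
    int n = s.size();
    vector<int> z = z_function(s);
    for (int i = 1; i < n; i++) {
        if (n % i == 0 && i + z[i] == n)
            return i;  // s consists of n/i copies of its prefix of length i
    }
    return n;          // s cannot be compressed
}
```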
## Practice Problems
* [eolymp - Blocks of string](https://www.eolymp.com/en/problems/1309)
* [Codeforces - Password [Difficulty: Easy]](http://codeforces.com/problemset/problem/126/B)
* [UVA # 455 "Periodic Strings" [Difficulty: Medium]](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=396)
* [UVA # 11022 "String Factoring" [Difficulty: Medium]](http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1963)
* [UVa 11475 - Extend to Palindrome](http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=24&page=show_problem&problem=2470)
* [LA 6439 - Pasti Pas!](https://icpcarchive.ecs.baylor.edu/index.php?option=com_onlinejudge&Itemid=8&category=588&page=show_problem&problem=4450)
* [Codechef - Chef and Strings](https://www.codechef.com/problems/CHSTR)
* [Codeforces - Prefixes and Suffixes](http://codeforces.com/problemset/problem/432/D)
---
title
rabin_karp
---
# Rabin-Karp Algorithm for string matching
This algorithm is based on the concept of hashing, so if you are not familiar with string hashing, refer to the [string hashing](string-hashing.md) article.
This algorithm was authored by Rabin and Karp in 1987.
Problem: Given two strings - a pattern $s$ and a text $t$, determine if the pattern appears in the text and if it does, enumerate all its occurrences in $O(|s| + |t|)$ time.
Algorithm: Calculate the hash for the pattern $s$.
Calculate hash values for all the prefixes of the text $t$.
Now, we can compare a substring of length $|s|$ with $s$ in constant time using the calculated hashes.
So, compare each substring of length $|s|$ with the pattern. This will take a total of $O(|t|)$ time.
Hence the final complexity of the algorithm is $O(|t| + |s|)$: $O(|s|)$ is required for calculating the hash of the pattern and $O(|t|)$ for comparing each substring of length $|s|$ with the pattern.
## Implementation
```{.cpp file=rabin_karp}
vector<int> rabin_karp(string const& s, string const& t) {
const int p = 31;
const int m = 1e9 + 9;
int S = s.size(), T = t.size();
vector<long long> p_pow(max(S, T));
p_pow[0] = 1;
for (int i = 1; i < (int)p_pow.size(); i++)
p_pow[i] = (p_pow[i-1] * p) % m;
vector<long long> h(T + 1, 0);
for (int i = 0; i < T; i++)
h[i+1] = (h[i] + (t[i] - 'a' + 1) * p_pow[i]) % m;
long long h_s = 0;
for (int i = 0; i < S; i++)
h_s = (h_s + (s[i] - 'a' + 1) * p_pow[i]) % m;
vector<int> occurrences;
for (int i = 0; i + S - 1 < T; i++) {
long long cur_h = (h[i+S] + m - h[i]) % m;
if (cur_h == h_s * p_pow[i] % m)
occurrences.push_back(i);
}
return occurrences;
}
```
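A small usage example (a hypothetical driver, assuming the `rabin_karp` function above is in scope):

```cpp
#include <iostream>
#include <string>
#include <vector>
using namespace std;

// rabin_karp(...) as defined above

int main() {
    string pattern = "aba", text = "abacaba";
    for (int pos : rabin_karp(pattern, text))
        cout << pos << ' ';   // prints: 0 4
    cout << '\n';
}
```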
## Practice Problems
* [SPOJ - Pattern Find](http://www.spoj.com/problems/NAJPF/)
* [Codeforces - Good Substrings](http://codeforces.com/problemset/problem/271/D)
* [Codeforces - Palindromic characteristics](https://codeforces.com/problemset/problem/835/D)
* [Leetcode - Longest Duplicate Substring](https://leetcode.com/problems/longest-duplicate-substring/)
---
title
aho_corasick
---
# Aho-Corasick algorithm
The Aho-Corasick algorithm allows us to quickly search for multiple patterns in a text.
The set of pattern strings is also called a _dictionary_.
We will denote the total length of its constituent strings by $m$ and the size of the alphabet by $k$.
The algorithm constructs a finite state automaton based on a trie in $O(m k)$ time and then uses it to process the text.
The algorithm was proposed by Alfred Aho and Margaret Corasick in 1975.
## Construction of the trie
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/e/e2/Trie.svg" width="400px">
<br>
<i>A trie based on words "Java", "Rad", "Rand", "Rau", "Raum" and "Rose".</i>
<br>
<i>The <a href="https://commons.wikimedia.org/wiki/File:Trie.svg">image</a> by [nd](https://de.wikipedia.org/wiki/Benutzer:Nd) is distributed under <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en">CC BY-SA 3.0</a> license.</i>
</center>
Formally, a trie is a rooted tree, where each edge of the tree is labeled with some letter
and outgoing edges of a vertex have distinct labels.
We will identify each vertex in the trie with the string formed by the labels on the path from the root to that vertex.
Each vertex will also have a flag $\text{output}$ which will be set
if the vertex corresponds to a pattern in the dictionary.
Accordingly, a trie for a set of strings is a trie such that each $\text{output}$ vertex corresponds to one string from the set, and conversely, each string of the set corresponds to one $\text{output}$ vertex.
We now describe how to construct a trie for a given set of strings in linear time with respect to their total length.
We introduce a structure for the vertices of the tree:
```{.cpp file=aho_corasick_trie_definition}
const int K = 26;
struct Vertex {
int next[K];
bool output = false;
Vertex() {
fill(begin(next), end(next), -1);
}
};
vector<Vertex> trie(1);
```
Here, we store the trie as an array of $\text{Vertex}$.
Each $\text{Vertex}$ contains the flag $\text{output}$ and the edges in the form of an array $\text{next}[]$, where $\text{next}[i]$ is the index of the vertex that we reach by following the character $i$, or $-1$ if there is no such edge.
Initially, the trie consists of only one vertex - the root - with the index $0$.
Now we implement a function that will add a string $s$ to the trie.
The implementation is simple:
we start at the root node, and as long as there are edges corresponding to the characters of $s$ we follow them.
If there is no edge for one character, we generate a new vertex and connect it with an edge.
At the end of the process we mark the last vertex with the flag $\text{output}$.
```{.cpp file=aho_corasick_trie_add}
void add_string(string const& s) {
int v = 0;
for (char ch : s) {
int c = ch - 'a';
if (trie[v].next[c] == -1) {
trie[v].next[c] = trie.size();
trie.emplace_back();
}
v = trie[v].next[c];
}
trie[v].output = true;
}
```
This implementation obviously runs in linear time,
and since every vertex stores $k$ links, it will use $O(m k)$ memory.
It is possible to decrease the memory consumption to $O(m)$ by using a map instead of an array in each vertex.
However, this will increase the time complexity to $O(m \log k)$.
## Construction of an automaton
Suppose we have built a trie for the given set of strings.
Now let's look at it from a different side.
If we look at any vertex,
the string that corresponds to it is a prefix of one or more strings in the set, thus each vertex of the trie can be interpreted as a position in one or more strings from the set.
In fact, the trie vertices can be interpreted as states in a **finite deterministic automaton**.
From any state we can transition - using some input letter - to other states, i.e., to another position in the set of strings.
For example, if there is only one string $abc$ in the dictionary, and we are standing at vertex $ab$, then using the letter $c$ we can go to the vertex $abc$.
Thus we can understand the edges of the trie as transitions in an automaton according to the corresponding letter.
However, in an automaton we need to have transitions for each combination of a state and a letter.
If we try to perform a transition using a letter, and there is no corresponding edge in the trie, then we nevertheless must go into some state.
More precisely, suppose we are in a state corresponding to a string $t$, and we want to transition to a different state using the character $c$.
If there is an edge labeled with this letter $c$, then we can simply go over this edge, and get the vertex corresponding to $t + c$.
If there is no such edge, since we want to maintain the invariant that the current state is the longest partial match in the processed string, we must find the longest string in the trie that's a proper suffix of the string $t$, and try to perform a transition from there.
For example, let the trie be constructed by the strings $ab$ and $bc$, and we are currently at the vertex corresponding to $ab$, which is an $\text{output}$ vertex.
To transition with the letter $c$, we are forced to go to the state corresponding to the string $b$, and from there follow the edge with the letter $c$.
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/9/90/A_diagram_of_the_Aho-Corasick_string_search_algorithm.svg" width="300px">
<br>
<i>An Aho-Corasick automaton based on words "a", "ab", "bc", "bca", "c" and "caa".</i>
<br>
<i>Blue arrows are suffix links, green arrows are terminal links.</i>
</center>
A **suffix link** for a vertex $p$ is an edge that points to the vertex corresponding to the longest proper suffix of the string of vertex $p$ (among the suffixes that are present in the trie).
The only special case is the root of the trie, whose suffix link will point to itself.
Now we can reformulate the statement about the transitions in the automaton like this:
while there is no transition from the current vertex of the trie using the current letter (or until we reach the root), we follow the suffix link.
Thus we reduced the problem of constructing an automaton to the problem of finding suffix links for all vertices of the trie.
However, we will build these suffix links, oddly enough, using the transitions constructed in the automaton.
The suffix links of the root vertex and all its immediate children point to the root vertex.
For any vertex $v$ deeper in the tree, we can calculate the suffix link as follows:
if $p$ is the ancestor of $v$ with $c$ being the letter labeling the edge from $p$ to $v$,
go to $p$,
then follow its suffix link, and perform the transition with the letter $c$ from there.
Thus, the problem of finding the transitions has been reduced to the problem of finding suffix links, and the problem of finding suffix links has been reduced to the problem of finding a suffix link and a transition, except for vertices closer to the root.
So we have a recursive dependence that we can resolve in linear time.
Let's move to the implementation.
Note that we now will store the ancestor $p$ and the character $pch$ of the edge from $p$ to $v$ for each vertex $v$.
Also, at each vertex we will store the suffix link $\text{link}$ (or $-1$ if it hasn't been calculated yet), and in the array $\text{go}[k]$ the transitions in the machine for each symbol (again $-1$ if it hasn't been calculated yet).
```{.cpp file=aho_corasick_automaton}
const int K = 26;
struct Vertex {
int next[K];
bool output = false;
int p = -1;
char pch;
int link = -1;
int go[K];
Vertex(int p=-1, char ch='$') : p(p), pch(ch) {
fill(begin(next), end(next), -1);
fill(begin(go), end(go), -1);
}
};
vector<Vertex> t(1);
void add_string(string const& s) {
int v = 0;
for (char ch : s) {
int c = ch - 'a';
if (t[v].next[c] == -1) {
t[v].next[c] = t.size();
t.emplace_back(v, ch);
}
v = t[v].next[c];
}
t[v].output = true;
}
int go(int v, char ch);
int get_link(int v) {
if (t[v].link == -1) {
if (v == 0 || t[v].p == 0)
t[v].link = 0;
else
t[v].link = go(get_link(t[v].p), t[v].pch);
}
return t[v].link;
}
int go(int v, char ch) {
int c = ch - 'a';
if (t[v].go[c] == -1) {
if (t[v].next[c] != -1)
t[v].go[c] = t[v].next[c];
else
t[v].go[c] = v == 0 ? 0 : go(get_link(v), ch);
}
return t[v].go[c];
}
```
It is easy to see that thanks to memoization of the suffix links and transitions,
the total time for finding all suffix links and transitions will be linear.
For an illustration of the concept refer to slide number 103 of the [Stanford slides](http://web.stanford.edu/class/archive/cs/cs166/cs166.1166/lectures/02/Slides02.pdf).
### BFS-based construction
Instead of computing transitions and suffix links with recursive calls to `go` and `get_link`, it is possible to compute them bottom-up starting from the root.
(In fact, when the dictionary consists of only one string, we obtain the familiar Knuth-Morris-Pratt algorithm.)
This approach will have some advantages over the one described above as, instead of the total length $m$, its running time depends only on the number of vertices $n$ in the trie. Moreover, it is possible to adapt it for large alphabets using a persistent array data structure, thus making the construction time $O(n \log k)$ instead of $O(mk)$, which is a significant improvement granted that $m$ may go up to $n^2$.
We can reason inductively using the fact that BFS from the root traverses vertices in order of increasing length.
We may assume that when we reach a vertex $v$, its suffix link $u = link[v]$ has already been computed, and that the transitions from all vertices of smaller depth are already fully computed as well.
Assume that at the moment we stand in a vertex $v$ and consider a character $c$. We essentially have two cases:
1. $go[v][c] = -1$. In this case, we may assign $go[v][c] = go[u][c]$, which is already known by the induction hypothesis;
2. $go[v][c] = w \neq -1$. In this case, we may assign $link[w] = go[u][c]$.
In this way, we spend $O(1)$ time per each pair of a vertex and a character, making the running time $O(mk)$. The major overhead here is that we copy a lot of transitions from $u$ in the first case, while the transitions of the second case form the trie and sum up to $m$ over all vertices. To avoid the copying of $go[u][c]$, we may use a persistent array data structure, using which we initially copy $go[u]$ into $go[v]$ and then only update values for characters in which the transition would differ. This leads to the $O(m \log k)$ algorithm.
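A possible sketch of this BFS-based construction, reusing the `Vertex` structure and the array `t` defined above (the function name is only for illustration):

```cpp
// Compute all suffix links and transitions bottom-up with a BFS from the root.
void build_automaton() {
    queue<int> q;
    t[0].link = 0;
    for (int c = 0; c < K; c++) {
        int u = t[0].next[c];
        if (u == -1) {
            t[0].go[c] = 0;          // no edge: stay at the root
        } else {
            t[0].go[c] = u;
            t[u].link = 0;           // children of the root link to the root
            q.push(u);
        }
    }
    while (!q.empty()) {
        int v = q.front();
        q.pop();
        int u = t[v].link;
        for (int c = 0; c < K; c++) {
            int w = t[v].next[c];
            if (w == -1) {
                t[v].go[c] = t[u].go[c];   // case 1: copy the transition of the suffix link
            } else {
                t[v].go[c] = w;
                t[w].link = t[u].go[c];    // case 2: suffix link of the child
                q.push(w);
            }
        }
    }
}
```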
## Applications
### Find all strings from a given set in a text
We are given a set of strings and a text.
We have to print all occurrences of all strings from the set in the given text in $O(\text{len} + \text{ans})$, where $\text{len}$ is the length of the text and $\text{ans}$ is the size of the answer.
We construct an automaton for this set of strings.
We will now process the text letter by letter using the automaton,
starting at the root of the trie.
If we are at any time at state $v$, and the next letter is $c$, then we transition to the next state with $\text{go}(v, c)$, thereby either increasing the length of the current match substring by $1$, or decreasing it by following a suffix link.
How can we find out for a state $v$, if there are any matches with strings for the set?
First, it is clear that if we stand on a $\text{output}$ vertex, then the string corresponding to the vertex ends at this position in the text.
However this is by no means the only possible case of achieving a match:
if we can reach one or more $\text{output}$ vertices by moving along the suffix links, then there will be also a match corresponding to each found $\text{output}$ vertex.
A simple example demonstrating this situation can be created using the set of strings $\{dabce, abc, bc\}$ and the text $dabc$.
Thus if we store in each $\text{output}$ vertex the index of the string corresponding to it (or the list of indices if duplicate strings appear in the set), then we can find in $O(n)$ time the indices of all strings which match the current state, by simply following the suffix links from the current vertex to the root.
This is not the most efficient solution, since this results in $O(n ~ \text{len})$ complexity overall.
However, this can be optimized by computing and storing the nearest $\text{output}$ vertex that is reachable using suffix links (this is sometimes called the **exit link**).
We can compute this value lazily in linear time.
Thus for each vertex we can advance in $O(1)$ time to the next marked vertex in the suffix link path, i.e. to the next match.
Thus for each match we spend $O(1)$ time, and therefore we reach the complexity $O(\text{len} + \text{ans})$.
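A hedged sketch of how this could look (not the article's code): we add a hypothetical field `exit` to `Vertex`, initialized to `-1`, and use the root (vertex `0`, which never corresponds to a string of the set) as the sentinel meaning "no further match".
```cpp
int get_exit(int v) {
    if (t[v].exit == -1) {
        int u = get_link(v);
        if (t[u].output)
            t[v].exit = u;                          // the suffix link itself is a match
        else
            t[v].exit = (u == 0) ? 0 : get_exit(u);
    }
    return t[v].exit;
}
// scanning a string text: report every occurrence ending at the current position
int state = 0;
for (char ch : text) {
    state = go(state, ch);
    for (int v = t[state].output ? state : get_exit(state); v != 0; v = get_exit(v)) {
        // a string of the set corresponding to vertex v ends here
    }
}
```
Each `exit` value is computed at most once, so the preprocessing stays linear, and the inner loop of the scan is entered once per reported occurrence.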
If you only want to count the occurrences and not find the indices themselves, you can calculate the number of marked vertices in the suffix link path for each vertex $v$.
This can be calculated in $O(n)$ time in total.
Thus we can sum up all matches in $O(\text{len})$.
### Finding the lexicographically smallest string of a given length that doesn't match any given strings
A set of strings and a length $L$ is given.
We have to find a string of length $L$ that does not contain any of the given strings, and output the lexicographically smallest such string.
We can construct the automaton for the set of strings.
Recall that $\text{output}$ vertices are the states where we have a match with a string from the set.
Since in this task we have to avoid matches, we are not allowed to enter such states.
On the other hand we can enter all other vertices.
Thus we delete all "bad" vertices from the machine, and in the remaining graph of the automaton we find the lexicographically smallest path of length $L$.
This task can be solved in $O(L)$ for example by [depth first search](../graph/depth-first-search.md).
### Finding the shortest string containing all given strings
Here we use the same ideas.
For each vertex we store a mask that denotes the strings which match at this state.
Then the problem can be reformulated as follows:
initially being in the state $(v = \text{root},~ \text{mask} = 0)$, we want to reach the state $(v,~ \text{mask} = 2^n - 1)$, where $n$ is the number of strings in the set.
When we transition from one state to another using a letter, we update the mask accordingly.
By running a [breadth first search](../graph/breadth-first-search.md) we can find a path to the state $(v,~ \text{mask} = 2^n - 1)$ with the smallest length.
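A hedged sketch of this search (not the article's code), computing only the minimal length; reconstructing the string itself would additionally require storing a parent pointer per state. Here `full_mask[v]` is a hypothetical precomputed bitmask of the patterns that match when the automaton is in state `v` (its own output plus everything reachable by suffix links):
```cpp
int shortest_length(int cnt) {
    int V = t.size();
    int FULL = (1 << cnt) - 1;
    vector<vector<int>> dist(V, vector<int>(FULL + 1, -1));
    queue<pair<int,int>> q;
    dist[0][full_mask[0]] = 0;
    q.push({0, full_mask[0]});
    while (!q.empty()) {
        auto [v, mask] = q.front();
        q.pop();
        if (mask == FULL)
            return dist[v][mask];   // BFS: the first time the mask is full, the length is minimal
        for (int c = 0; c < K; c++) {
            int to = go(v, 'a' + c);
            int nmask = mask | full_mask[to];
            if (dist[to][nmask] == -1) {
                dist[to][nmask] = dist[v][mask] + 1;
                q.push({to, nmask});
            }
        }
    }
    return -1;                      // unreachable if all patterns are non-empty
}
```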
### Finding the lexicographically smallest string of length $L$ containing $k$ strings {data-toc-label="Finding the lexicographically smallest string of length L containing k strings"}
As in the previous problem, we calculate for each vertex the number of matches that correspond to it (that is the number of marked vertices reachable using suffix links).
We reformulate the problem: the current state is determined by a triple of numbers $(v,~ \text{len},~ \text{cnt})$, and we want to reach from the state $(\text{root},~ 0,~ 0)$ the state $(v,~ L,~ k)$, where $v$ can be any vertex.
Thus we can find such a path using depth first search (and if the search looks at the edges in their natural order, then the found path will automatically be the lexicographically smallest).
## Problems
- [UVA #11590 - Prefix Lookup](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2637)
- [UVA #11171 - SMS](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=2112)
- [UVA #10679 - I Love Strings!!](https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1620)
- [Codeforces - x-prime Substrings](https://codeforces.com/problemset/problem/1400/F)
- [Codeforces - Frequency of String](http://codeforces.com/problemset/problem/963/D)
- [CodeChef - TWOSTRS](https://www.codechef.com/MAY20A/problems/TWOSTRS)
## References
- [Stanford's CS166 - Aho-Corasick Automata](http://web.stanford.edu/class/archive/cs/cs166/cs166.1166/lectures/02/Slides02.pdf) ([Condensed](http://web.stanford.edu/class/archive/cs/cs166/cs166.1166/lectures/02/Small02.pdf))
---
title
matrix_rank
---
# Finding the rank of a matrix
**The rank of a matrix** is the largest number of linearly independent rows/columns of the matrix. The rank is not only defined for square matrices.
The rank of a matrix can also be defined as the largest order of any non-zero minor in the matrix.
Let the matrix be rectangular and have size $N \times M$.
Note that if the matrix is square and its determinant is non-zero, then the rank is $N$ ($=M$); otherwise it will be less. Generally, the rank of a matrix does not exceed $\min (N, M)$.
## Algorithm
You can search for the rank using [Gaussian elimination](linear-system-gauss.md). We will perform the same operations as when solving the system or finding its determinant. But if at any step in the $i$-th column there is no row with a non-zero entry among those we haven't selected already, then we skip this step.
Otherwise, if we have found a row with a non-zero element in the $i$-th column during the $i$-th step, then we mark this row as selected, increase the rank by one (initially the rank is set equal to $0$), and perform the usual elimination: subtract suitable multiples of this row from all the other rows.
## Complexity
This algorithm runs in $\mathcal{O}(n^3)$.
## Implementation
```{.cpp file=matrix-rank}
const double EPS = 1E-9;
int compute_rank(vector<vector<double>> A) {
int n = A.size();
int m = A[0].size();
int rank = 0;
vector<bool> row_selected(n, false);
for (int i = 0; i < m; ++i) {
int j;
for (j = 0; j < n; ++j) {
if (!row_selected[j] && abs(A[j][i]) > EPS)
break;
}
if (j != n) {
++rank;
row_selected[j] = true;
for (int p = i + 1; p < m; ++p)
A[j][p] /= A[j][i];
for (int k = 0; k < n; ++k) {
if (k != j && abs(A[k][i]) > EPS) {
for (int p = i + 1; p < m; ++p)
A[k][p] -= A[j][p] * A[k][i];
}
}
}
}
return rank;
}
```
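A small hypothetical usage example (not part of the article): the second row below is twice the first one, so only two rows are linearly independent.
```cpp
vector<vector<double>> A = {
    {1, 2, 3},
    {2, 4, 6},
    {0, 1, 1}
};
cout << compute_rank(A) << endl;  // prints 2
```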
## Problems
* [TIMUS1041 Nikifor](http://acm.timus.ru/problem.aspx?space=1&num=1041)
---
title
determinant_gauss
---
# Calculating the determinant of a matrix by Gauss
Problem: Given a matrix $A$ of size $N \times N$. Compute its determinant.
## Algorithm
We use the ideas of the [Gauss method for solving systems of linear equations](linear-system-gauss.md).
We will perform the same steps as in the solution of systems of linear equations, excluding only the division of the current row by its pivot element $a_{ii}$. These operations will not change the absolute value of the determinant of the matrix. When we exchange two rows of the matrix, however, the sign of the determinant can change.
After applying Gauss on the matrix, we receive a diagonal matrix, whose determinant is just the product of the elements on the diagonal. The sign, as previously mentioned, can be determined by the number of exchanged rows (if odd, then the sign of the determinant should be reversed). Thus, we can use the Gauss algorithm to compute the determinant of the matrix in complexity $O(N^3)$.
It should be noted that if at some point we do not find a non-zero cell in the current column, the algorithm stops and returns 0.
## Implementation
```cpp
const double EPS = 1E-9;
int n;                                            // matrix size, assumed to be read beforehand
vector < vector<double> > a (n, vector<double> (n));  // the matrix itself, assumed to be filled
double det = 1;
for (int i=0; i<n; ++i) {
    // choose as pivot the row with the largest |a[k][i]|
    int k = i;
    for (int j=i+1; j<n; ++j)
        if (abs (a[j][i]) > abs (a[k][i]))
            k = j;
    if (abs (a[k][i]) < EPS) {
        det = 0;                                  // no non-zero pivot: the determinant is zero
        break;
    }
    swap (a[i], a[k]);
    if (i != k)
        det = -det;                               // a row swap flips the sign of the determinant
    det *= a[i][i];
    for (int j=i+1; j<n; ++j)
        a[i][j] /= a[i][i];
    // eliminate column i from all other rows
    for (int j=0; j<n; ++j)
        if (j != i && abs (a[j][i]) > EPS)
            for (int k=i+1; k<n; ++k)
                a[j][k] -= a[i][k] * a[j][i];
}
cout << det;
```
## Practice Problems
* [Codeforces - Wizards and Bets](http://codeforces.com/contest/167/problem/E)
---
title
linear_systems_gauss
---
# Gauss method for solving system of linear equations
Given a system of $n$ linear algebraic equations (SLAE) with $m$ unknowns. You are asked to solve the system: to determine if it has no solution, exactly one solution or infinite number of solutions. And in case it has at least one solution, find any of them.
Formally, the problem is formulated as follows: solve the system:
$$\begin{align}
a_{11} x_1 + a_{12} x_2 + &\dots + a_{1m} x_m = b_1 \\
a_{21} x_1 + a_{22} x_2 + &\dots + a_{2m} x_m = b_2\\
&\vdots \\
a_{n1} x_1 + a_{n2} x_2 + &\dots + a_{nm} x_m = b_n
\end{align}$$
where the coefficients $a_{ij}$ (for $i$ from 1 to $n$, $j$ from 1 to $m$) and $b_i$ ($i$ from 1 to $n$) are known and the variables $x_i$ ($i$ from 1 to $m$) are unknowns.
This problem also has a simple matrix representation:
$$Ax = b,$$
where $A$ is a matrix of size $n \times m$ of coefficients $a_{ij}$ and $b$ is the column vector of size $n$.
It is worth noting that the method presented in this article can also be used to solve the equation modulo any number $p$, i.e.:
$$\begin{align}
a_{11} x_1 + a_{12} x_2 + &\dots + a_{1m} x_m \equiv b_1 \pmod p \\
a_{21} x_1 + a_{22} x_2 + &\dots + a_{2m} x_m \equiv b_2 \pmod p \\
&\vdots \\
a_{n1} x_1 + a_{n2} x_2 + &\dots + a_{nm} x_m \equiv b_n \pmod p
\end{align}$$
## Gauss
Strictly speaking, the method described below should be called "Gauss-Jordan", or Gauss-Jordan elimination, because it is a variation of the Gauss method, described by Jordan in 1887.
## Overview
The algorithm is a `sequential elimination` of the variables in each equation, until each equation has only one remaining variable. If $n = m$, you can think of it as transforming the matrix $A$ into the identity matrix, and solving the equation in this obvious case, where the solution is unique and equal to the coefficient $b_i$.
Gaussian elimination is based on two simple transformations:
* It is possible to exchange two equations
* Any equation can be replaced by a linear combination of that row (with non-zero coefficient), and some other rows (with arbitrary coefficients).
In the first step, the Gauss-Jordan algorithm divides the first row by $a_{11}$. Then, the algorithm adds the first row to the remaining rows such that the coefficients in the first column all become zeros. To achieve this, we add to the $i$-th row the first row multiplied by $-a_{i1}$. Note that this operation must also be performed on vector $b$. In a sense, it behaves as if vector $b$ were the $(m+1)$-th column of matrix $A$.
As a result, after the first step, the first column of matrix $A$ will consist of $1$ in the first row and $0$ in the other rows.
Similarly, we perform the second step of the algorithm, where we consider the second column and the second row. First, the second row is divided by $a_{22}$, then it is subtracted from the other rows so that the whole second column becomes $0$ (except for the second row).
We continue this process for all columns of matrix $A$. If $n = m$, then $A$ will become the identity matrix.
## Search for the pivoting element
The described scheme left out many details. At the $i$th step, if $a_{ii}$ is zero, we cannot apply directly the described method. Instead, we must first `select a pivoting row`: find one row of the matrix where the $i$th column is non-zero, and then swap the two rows.
Note that, here we swap rows but not columns. This is because if you swap columns, then when you find a solution, you must remember to swap back to correct places. Thus, swapping rows is much easier to do.
In many implementations, when $a_{ii} \neq 0$, you can see people still swap the $i$th row with some pivoting row, using some heuristic such as choosing the pivoting row with the maximum absolute value of $a_{ji}$. This heuristic is used to reduce the value range of the matrix in later steps. Without this heuristic, even for matrices of size about $20$, the error will be too big and can cause overflow for floating-point data types in C++.
## Degenerate cases
In the case where $m = n$ and the system is non-degenerate (i.e. it has non-zero determinant, and has unique solution), the algorithm described above will transform $A$ into identity matrix.
Now we consider the `general case`, where $n$ and $m$ are not necessarily equal, and the system can be degenerate. In these cases, the pivoting element in the $i$th step may not be found. This means that in the $i$th column, starting from the current row, all entries are zero. In this case, either there is no possible value of variable $x_i$ (meaning the SLAE has no solution), or $x_i$ is an independent variable and can take an arbitrary value. When implementing Gauss-Jordan, you should continue the work for subsequent variables and just skip the $i$th column (this is equivalent to removing the $i$th column of the matrix).
So, some of the variables in the process can be found to be independent. When the number of variables $m$ is greater than the number of equations $n$, at least $m - n$ independent variables will be found.
In general, if you find at least one independent variable, it can take any arbitrary value, while the other (dependent) variables are expressed through it. This means that when we work in the field of real numbers, the system potentially has infinitely many solutions. But you should remember that when there are independent variables, the SLAE can have no solution at all. This happens when the remaining untreated equations have at least one non-zero constant term. You can check this by assigning zeros to all independent variables, calculating the other variables, and then plugging them into the original SLAE to check whether they satisfy it.
## Implementation
Following is an implementation of Gauss-Jordan. Choosing the pivot row is done with a heuristic: choosing the maximum absolute value in the current column.
The input to the function `gauss` is the system matrix $a$. The last column of this matrix is vector $b$.
The function returns the number of solutions of the system $(0, 1,\textrm{or } \infty)$. If at least one solution exists, then it is returned in the vector $ans$.
```{.cpp file=gauss}
const double EPS = 1e-9;
const int INF = 2; // it doesn't actually have to be infinity or a big number
int gauss (vector < vector<double> > a, vector<double> & ans) {
int n = (int) a.size();
int m = (int) a[0].size() - 1;
vector<int> where (m, -1);
for (int col=0, row=0; col<m && row<n; ++col) {
int sel = row;
for (int i=row; i<n; ++i)
if (abs (a[i][col]) > abs (a[sel][col]))
sel = i;
if (abs (a[sel][col]) < EPS)
continue;
for (int i=col; i<=m; ++i)
swap (a[sel][i], a[row][i]);
where[col] = row;
for (int i=0; i<n; ++i)
if (i != row) {
double c = a[i][col] / a[row][col];
for (int j=col; j<=m; ++j)
a[i][j] -= a[row][j] * c;
}
++row;
}
ans.assign (m, 0);
for (int i=0; i<m; ++i)
if (where[i] != -1)
ans[i] = a[where[i]][m] / a[where[i]][i];
for (int i=0; i<n; ++i) {
double sum = 0;
for (int j=0; j<m; ++j)
sum += ans[j] * a[i][j];
if (abs (sum - a[i][m]) > EPS)
return 0;
}
for (int i=0; i<m; ++i)
if (where[i] == -1)
return INF;
return 1;
}
```
Implementation notes:
* The function uses two pointers - the current column $col$ and the current row $row$.
* For each variable $x_i$, the value $where(i)$ is the row in which the $i$-th column has its pivot (non-zero) entry. This vector is needed because some variables can be independent.
* In this implementation, the current $i$th line is not divided by $a_{ii}$ as described above, so in the end the matrix is not the identity matrix (though apparently dividing the $i$th line can help reduce errors).
* After finding a solution, it is inserted back into the matrix - to check whether the system has at least one solution or not. If the test solution is successful, then the function returns 1 or $\infty$, depending on whether there is at least one independent variable.
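As a small hypothetical usage example (not from the article), consider the system $x + y = 3$, $x - y = 1$; the last entry of each row is the constant term $b_i$:
```cpp
vector<vector<double>> a = {
    {1,  1, 3},
    {1, -1, 1}
};
vector<double> ans;
int solutions = gauss(a, ans);
// solutions == 1, ans == {2, 1}
```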
## Complexity
Now we should estimate the complexity of this algorithm. The algorithm consists of $m$ phases, in each phase:
* Search and reshuffle the pivoting row. This takes $O(n + m)$ when using the heuristic mentioned above.
* If the pivot element in the current column is found - then we must add this equation to all other equations, which takes time $O(nm)$.
So, the final complexity of the algorithm is $O(\min (n, m) \cdot nm)$.
In case $n = m$, the complexity is simply $O(n^3)$.
Note that when the SLAE is considered not over the real numbers, but modulo two, the system can be solved much faster, as described below.
## Acceleration of the algorithm
The previous implementation can be sped up by two times, by dividing the algorithm into two phases: forward and reverse:
* Forward phase: Similar to the previous implementation, but the current row is only added to the rows after it. As a result, we obtain a triangular matrix instead of a diagonal one.
* Reverse phase: When the matrix is triangular, we first calculate the value of the last variable. Then we plug this value in to find the value of the next variable, then these two values to find the next one, and so on.
The reverse phase only takes $O(nm)$, which is much faster than the forward phase. In the forward phase, we reduce the number of operations by half, thus reducing the running time of the implementation.
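For illustration, here is a hedged sketch of the two-phase scheme for the square non-degenerate case ($n = m$ and a unique solution assumed); it is not the article's code, just a demonstration of the forward/reverse split:
```cpp
vector<double> gauss_two_phase(vector<vector<double>> a, vector<double> b) {
    int n = a.size();
    for (int col = 0; col < n; ++col) {
        // pick the pivot with the largest absolute value and move it up
        int sel = col;
        for (int i = col; i < n; ++i)
            if (abs(a[i][col]) > abs(a[sel][col]))
                sel = i;
        swap(a[sel], a[col]);
        swap(b[sel], b[col]);
        // forward phase: eliminate the current column only in the rows below
        for (int i = col + 1; i < n; ++i) {
            double c = a[i][col] / a[col][col];
            for (int j = col; j < n; ++j)
                a[i][j] -= a[col][j] * c;
            b[i] -= b[col] * c;
        }
    }
    // reverse phase: back substitution on the triangular matrix
    vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {
        double sum = b[i];
        for (int j = i + 1; j < n; ++j)
            sum -= a[i][j] * x[j];
        x[i] = sum / a[i][i];
    }
    return x;
}
```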
## Solving modular SLAE
For solving the SLAE modulo some number, we can still use the described algorithm. However, in case the modulus is equal to two, we can perform Gauss-Jordan elimination much more effectively using bitwise operations and the C++ bitset data type:
```cpp
int gauss (vector < bitset<N> > a, int n, int m, bitset<N> & ans) {
vector<int> where (m, -1);
for (int col=0, row=0; col<m && row<n; ++col) {
for (int i=row; i<n; ++i)
if (a[i][col]) {
swap (a[i], a[row]);
break;
}
if (! a[row][col])
continue;
where[col] = row;
for (int i=0; i<n; ++i)
if (i != row && a[i][col])
a[i] ^= a[row];
++row;
}
// The rest of implementation is the same as above
}
```
Since we use bit compression, the implementation is not only shorter, but also 32 times faster.
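For completeness, here is a hedged sketch of the omitted part, mirroring the floating-point version above; over $\mathbb{Z}_2$ every pivot equals $1$, so no division is needed, multiplication becomes `&` and addition becomes `^`:
```cpp
    // continuation of the bitset gauss() from above (hedged sketch)
    ans.reset();
    for (int i = 0; i < m; ++i)
        if (where[i] != -1)
            ans[i] = (bool) a[where[i]][m];
    for (int i = 0; i < n; ++i) {
        int sum = 0;
        for (int j = 0; j < m; ++j)
            sum ^= (int) ans[j] & (int) a[i][j];
        if (sum != (int) a[i][m])
            return 0;
    }
    for (int i = 0; i < m; ++i)
        if (where[i] == -1)
            return 2;   // infinitely many solutions
    return 1;
```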
## A little note about different heuristics of choosing pivoting row
There is no general rule for what heuristics to use.
The heuristic used in the previous implementation works quite well in practice. It also turns out to give almost the same answers as "full pivoting", where the pivot is searched for amongst all elements of the whole remaining submatrix (starting from the current row and current column).
Though, you should note that both heuristics depend on how much the original equations were scaled. For example, if one of the equations was multiplied by $10^6$, then this equation is almost certain to be chosen as the pivot in the first step. This seems rather strange, so a natural improvement is a more complicated heuristic, called `implicit pivoting`.
Implicit pivoting compares elements as if both rows were normalized, so that the maximum element would be unity. To implement this technique, one needs to maintain the maximum in each row (or maintain each row so that its maximum is unity, but this can lead to an increase in the accumulated error).
## Improve the solution
Despite various heuristics, the Gauss-Jordan algorithm can still lead to large errors in special matrices even of size $50 - 100$.
Therefore, the resulting Gauss-Jordan solution must sometimes be improved by applying a simple numerical method - for example, the method of simple iteration.
Thus, the solution becomes a two-step process: first, the Gauss-Jordan algorithm is applied, and then a numerical method is run, taking the solution of the first step as its initial approximation.
## Practice Problems
* [Spoj - Xor Maximization](http://www.spoj.com/problems/XMAX/)
* [Codechef - Knight Moving](https://www.codechef.com/SEP12/problems/KNGHTMOV)
* [Lightoj - Graph Coloring](http://lightoj.com/volume_showproblem.php?problem=1279)
* [UVA 12910 - Snakes and Ladders](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=4775)
* [TIMUS1042 Central Heating](http://acm.timus.ru/problem.aspx?space=1&num=1042)
* [TIMUS1766 Humpty Dumpty](http://acm.timus.ru/problem.aspx?space=1&num=1766)
* [TIMUS1266 Kirchhoff's Law](http://acm.timus.ru/problem.aspx?space=1&num=1266)
* [Codeforces - No game no life](https://codeforces.com/problemset/problem/1411/G)
---
title: Calculating the determinant using Kraut method
- Original
---
# Calculating the determinant using Kraut method in $O(N^3)$
In this article, we'll describe how to find the determinant of a matrix using the Kraut method, which works in $O(N^3)$.
The Kraut algorithm finds a decomposition of the matrix $A$ as $A = L U$, where $L$ is a lower triangular and $U$ an upper triangular matrix. Without loss of generality, we can assume that all the diagonal elements of $L$ are equal to 1. Once we know these matrices, it is easy to calculate the determinant of $A$: it is equal to the product of all the elements on the main diagonal of the matrix $U$.
There is a theorem stating that an invertible matrix has an LU-decomposition, and it is unique, if and only if all its leading principal minors are non-zero. We consider only such decompositions in which the diagonal of matrix $L$ consists of ones.
Let $A$ be the matrix and $N$ - its size. We will find the elements of the matrices $L$ and $U$ using the following steps:
1. Let $L_{i i} = 1$ for $i = 1, 2, ..., N$.
2. For each $j = 1, 2, ..., N$ perform:
- For $i = 1, 2, ..., j$ find values
\[U_{ij} = A_{ij} - \sum_{k=1}^{i-1} L_{ik} \cdot U_{kj}\]
- Next, for $i = j+1, j+2, ..., N$ find values
\[L_{ij} = \frac{1}{U_{jj}} \left(A_{ij} - \sum_{k=1}^{j-1} L_{ik} \cdot U_{kj} \right).\]
## Implementation
```java
static BigInteger det (BigDecimal a [][], int n) {
try {
	// if some row consists entirely of zeros, the determinant is zero
	for (int i=0; i<n; i++) {
	    boolean nonzero = false;
	    for (int j=0; j<n; j++)
		if (a[i][j].signum() != 0)
		    nonzero = true;
	    if (!nonzero)
		return BigInteger.ZERO;
	}
	// implicit pivoting: remember the scale 1 / max|a[i][j]| of each row
	BigDecimal scaling [] = new BigDecimal [n];
for (int i=0; i<n; i++) {
BigDecimal big = new BigDecimal (BigInteger.ZERO);
for (int j=0; j<n; j++)
if (a[i][j].abs().compareTo (big) > 0)
big = a[i][j].abs();
scaling[i] = (new BigDecimal (BigInteger.ONE)) .divide
(big, 100, BigDecimal.ROUND_HALF_EVEN);
}
	int sign = 1;   // sign of the determinant, flipped on every row swap
for (int j=0; j<n; j++) {
for (int i=0; i<j; i++) {
BigDecimal sum = a[i][j];
for (int k=0; k<i; k++)
sum = sum.subtract (a[i][k].multiply (a[k][j]));
a[i][j] = sum;
}
BigDecimal big = new BigDecimal (BigInteger.ZERO);
int imax = -1;
for (int i=j; i<n; i++) {
BigDecimal sum = a[i][j];
for (int k=0; k<j; k++)
sum = sum.subtract (a[i][k].multiply (a[k][j]));
a[i][j] = sum;
BigDecimal cur = sum.abs();
cur = cur.multiply (scaling[i]);
if (cur.compareTo (big) >= 0) {
big = cur;
imax = i;
}
}
if (j != imax) {
for (int k=0; k<n; k++) {
BigDecimal t = a[j][k];
a[j][k] = a[imax][k];
a[imax][k] = t;
}
BigDecimal t = scaling[imax];
scaling[imax] = scaling[j];
scaling[j] = t;
sign = -sign;
}
if (j != n-1)
for (int i=j+1; i<n; i++)
a[i][j] = a[i][j].divide
(a[j][j], 100, BigDecimal.ROUND_HALF_EVEN);
}
BigDecimal result = new BigDecimal (1);
if (sign == -1)
result = result.negate();
for (int i=0; i<n; i++)
result = result.multiply (a[i][i]);
return result.divide
(BigDecimal.valueOf(1), 0, BigDecimal.ROUND_HALF_EVEN).toBigInteger();
}
catch (Exception e) {
return BigInteger.ZERO;
}
}
```
---
title
sprague_grundy
---
# Sprague-Grundy theorem. Nim
## Introduction
This theorem describes so-called **impartial** two-player games,
i.e. those in which the available moves and winning/losing depend only on the state of the game.
In other words, the only difference between the two players is that one of them moves first.
Additionally, we assume that the game has **perfect information**, i.e. no information is hidden from the players (they know the rules and the possible moves).
It is assumed that the game is **finite**, i.e. after a certain number of moves, one of the players will end up in a losing position — from which they can't move to another position.
Conversely, the player who set up this position for the opponent wins.
Understandably, there are no draws in this game.
Such games can be completely described by a *directed acyclic graph*: the vertices are game states and the edges are transitions (moves).
A vertex without outgoing edges is a losing vertex (a player who must make a move from this vertex loses).
Since there are no draws, we can classify all game states as either **winning** or **losing**.
Winning states are those from which there is a move that causes inevitable defeat of the other player, even with their best response.
Losing states are those from which all moves lead to winning states for the other player.
Summarizing, a state is winning if there is at least one transition to a losing state, and losing if there is no such transition.
Our task is to classify the states of a given game.
The theory of such games was independently developed by Roland Sprague in 1935 and Patrick Michael Grundy in 1939.
## Nim
This game obeys the restrictions described above.
Moreover, *any* perfect-information impartial two-player game can be reduced to the game of Nim.
Studying this game will allow us to solve all other similar games, but more on that later.
Historically this game was popular in ancient times.
Its origin is probably in China — or at least the game *Jianshizi* is very similar to it.
In Europe the earliest references to it are from the 16th century.
The name was given by Charles Bouton, who in 1901 published a full analysis of this game.
### Game description
There are several piles, each with several stones.
In a move a player can take any positive number of stones from any one pile and throw them away.
A player loses if they can't make a move, which happens when all the piles are empty.
The game state is unambiguously described by a multiset of positive integers.
A move consists of strictly decreasing a chosen integer (if it becomes zero, it is removed from the set).
### The solution
The solution by Charles L. Bouton looks like this:
**Theorem.**
The current player has a winning strategy if and only if the xor-sum of the pile sizes is non-zero.
The xor-sum of a sequence $a$ is $a_1 \oplus a_2 \oplus \ldots \oplus a_n$, where $\oplus$ is the *bitwise exclusive or*.
**Proof.**
The key to the proof is the presence of a **symmetric strategy for the opponent**.
We show that once a player is in a position with the xor-sum equal to zero, they won't be able to make it non-zero in the long term —
if they transition to a position with a non-zero xor-sum, the opponent will always have a move returning the xor-sum back to zero.
We will prove the theorem by mathematical induction.
For an empty Nim (where all the piles are empty i.e. the multiset is empty) the xor-sum is zero and the theorem is true.
Now suppose we are in a non-empty state.
Using the assumption of induction (and the acyclicity of the game) we assume that the theorem is proven for all states reachable from the current one.
Then the proof splits into two parts:
if for the current position the xor-sum $s = 0$, we have to prove that this state is losing, i.e. all reachable states have xor-sum $t \neq 0$.
If $s \neq 0$, we have to prove that there is a move leading to a state with $t = 0$.
* Let $s = 0$ and let's consider any move.
This move reduces the size of a pile $x$ to a size $y$.
Using elementary properties of $\oplus$, we have
\[ t = s \oplus x \oplus y = 0 \oplus x \oplus y = x \oplus y \]
Since $y < x$, $y \oplus x$ can't be zero, so $t \neq 0$.
That means any reachable state is a winning one (by the assumption of induction), so we are in a losing position.
* Let $s \neq 0$.
Consider the binary representation of the number $s$.
Let $d$ be the index of its leading (biggest value) non-zero bit.
Our move will be on a pile whose size's bit number $d$ is set (it must exist, otherwise the bit wouldn't be set in $s$).
We will reduce its size $x$ to $y = x \oplus s$.
All bits at positions greater than $d$ in $x$ and $y$ match and bit $d$ is set in $x$ but not set in $y$.
Therefore, $y < x$, which is all we need for a move to be legal.
Now we have:
\[ t = s \oplus x \oplus y = s \oplus x \oplus (s \oplus x) = 0 \]
This means we found a reachable losing state (by the assumption of induction) and the current state is winning.
**Corollary.**
Any state of Nim can be replaced by an equivalent state as long as the xor-sum doesn't change.
Moreover, when analyzing a Nim with several piles, we can replace it with a single pile of size $s$.
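As a quick illustration of Bouton's theorem and of the constructive move from its proof, here is a small sketch that decides whether the current Nim position is winning and, if it is, finds one winning move. The pile sizes and the output messages are arbitrary illustrative choices.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> piles = {3, 4, 5};  // an arbitrary example position
    int s = 0;
    for (int x : piles)
        s ^= x;                     // xor-sum of the pile sizes
    if (s == 0) {
        cout << "current player loses with optimal play\n";
    } else {
        // find a pile whose size has the leading bit of s set; reducing it
        // to x ^ s makes the xor-sum zero (the move from the proof)
        for (size_t i = 0; i < piles.size(); i++) {
            int y = piles[i] ^ s;
            if (y < piles[i]) {
                cout << "take " << piles[i] - y << " stones from pile " << i << "\n";
                break;
            }
        }
    }
}
```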
## The equivalence of impartial games and Nim (Sprague-Grundy theorem)
Now we will learn how to find, for any game state of any impartial game, a corresponding state of Nim.
### Lemma about Nim with increases
We consider the following modification to Nim: we also allow **adding stones to a chosen pile**.
The exact rules about how and when increasing is allowed **do not interest us**, however the rules should keep our game **acyclic**. In later sections, example games are considered.
**Lemma.**
The addition of increasing to Nim doesn't change how winning and losing states are determined.
In other words, increases are useless, and we don't have to use them in a winning strategy.
**Proof.**
Suppose a player added stones to a pile. Then his opponent can simply undo his move — decrease the number back to the previous value.
Since the game is acyclic, sooner or later the current player won't be able to use an increase move and will have to do the usual Nim move.
### Sprague-Grundy theorem
Let's consider a state $v$ of a two-player impartial game and let $v_i$ be the states reachable from it (where $i \in \{ 1, 2, \dots, k \} , k \ge 0$).
To this state, we can assign a fully equivalent game of Nim with one pile of size $x$.
The number $x$ is called the Grundy value or nim-value of state $v$.
Moreover, this number can be found in the following recursive way:
$$ x = \text{mex}\ \{ x_1, \ldots, x_k \}, $$
where $x_i$ is the Grundy value for state $v_i$ and the function $\text{mex}$ (*minimum excludant*) is the smallest non-negative integer not found in the given set.
Viewing the game as a graph, we can gradually calculate the Grundy values starting from vertices without outgoing edges.
Grundy value being equal to zero means a state is losing.
**Proof.**
We will use a proof by induction.
For vertices without a move, the value $x$ is the $\text{mex}$ of an empty set, which is zero.
That is correct, since an empty Nim is losing.
Now consider any other vertex $v$.
By induction, we assume the values $x_i$ corresponding to its reachable vertices are already calculated.
Let $p = \text{mex}\ \{ x_1, \ldots, x_k \}$.
Then we know that for any integer $i \in [0, p)$ there exists a reachable vertex with Grundy value $i$.
This means $v$ is **equivalent to a state of the game of Nim with increases with one pile of size $p$**.
In such a game we have transitions to piles of every size smaller than $p$ and possibly transitions to piles with sizes greater than $p$.
Therefore, $p$ is indeed the desired Grundy value for the currently considered state.
## Application of the theorem
Finally, we describe an algorithm to determine the win/loss outcome of a game, which is applicable to any impartial two-player game.
To calculate the Grundy value of a given state you need to:
* Get all possible transitions from this state
* Each transition can lead to a **sum of independent games** (one game in the degenerate case).
Calculate the Grundy value for each independent game and xor-sum them.
Of course xor does nothing if there is just one game.
* After we calculated Grundy values for each transition we find the state's value as the $\text{mex}$ of these numbers.
* If the value is zero, then the current state is losing, otherwise it is winning.
In comparison to the previous section, we take into account the fact that there can be transitions to combined games.
We consider them a Nim with pile sizes equal to the independent games' Grundy values.
We can xor-sum them just like usual Nim according to Bouton's theorem.
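A minimal sketch of this procedure for the degenerate case, where every move leads to a single game, could look as follows. The function `moves` is a hypothetical placeholder that has to enumerate the states reachable from a given state of the concrete game; for a sum of games one would xor the Grundy values of the parts before inserting them into the set.

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> moves(int state);   // hypothetical: states reachable from `state`

map<int, int> memo;

int grundy(int state) {
    auto it = memo.find(state);
    if (it != memo.end())
        return it->second;
    set<int> reachable;
    for (int next : moves(state))
        reachable.insert(grundy(next));   // for a sum of games: xor the parts' values
    int g = 0;                            // mex of the reachable Grundy values
    while (reachable.count(g))
        g++;
    return memo[state] = g;
}
```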
## Patterns in Grundy values
Very often when solving specific tasks using Grundy values, it may be beneficial to **study the table of the values** in search of patterns.
In many games, which may seem rather difficult for theoretical analysis,
the Grundy values turn out to be periodic or of an easily understandable form.
In the overwhelming majority of cases the observed pattern turns out to be true and can be proved by induction if desired.
However, Grundy values are far from *always* containing such regularities and even for some very simple games, the problem asking if those regularities exist is still open (e.g. "Grundy's game").
## Example games
### Crosses-crosses
**The rules.**
Consider a checkered strip of size $1 \times n$. In one move, the player must put one cross, but it is forbidden to put two crosses next to each other (in adjacent cells). As usual, the player without a valid move loses.
**The solution.**
When a player puts a cross in any cell, we can think of the strip being split into two independent parts:
to the left of the cross and to the right of it.
In this case, the cell with a cross, as well as its left and right neighbours are destroyed — nothing more can be put in them.
Therefore, if we number the cells from $1$ to $n$ then putting the cross in position $1 < i < n$ breaks the strip
into two strips of length $i-2$ and $n-i-1$ i.e. we go to the sum of games $i-2$ and $n-i-1$.
For the edge case of the cross being marked on position $1$ or $n$, we go to the game $n-2$.
Thus, the Grundy value $g(n)$ has the form:
$$g(n) = \text{mex} \Bigl( \{ g(n-2) \} \cup \{g(i-2) \oplus g(n-i-1) \mid 2 \leq i \leq n-1\} \Bigr) .$$
So we've got an $O(n^2)$ solution.
In fact, $g(n)$ has a period of length 34 starting with $n=52$.
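A direct sketch of the $O(n^2)$ computation above (using a set for the mex, for clarity) could look like this; printing its output for large $n$ is how the period mentioned above can be observed:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Grundy values g(0..n) for the crosses-crosses game on a 1 x len strip.
vector<int> crosses_grundy(int n) {
    vector<int> g(n + 1, 0);
    for (int len = 1; len <= n; len++) {
        set<int> reachable;
        for (int i = 1; i <= len; i++) {
            int left  = max(i - 2, 0);        // strip to the left of the cross
            int right = max(len - i - 1, 0);  // strip to the right of the cross
            reachable.insert(g[left] ^ g[right]);
        }
        while (reachable.count(g[len]))
            g[len]++;                         // mex of the reachable values
    }
    return g;
}
```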
## Practice Problems
- [KATTIS S-Nim](https://open.kattis.com/problems/snim)
- [CodeForces - Marbles (2018-2019 ACM-ICPC Brazil Subregional)](https://codeforces.com/gym/101908/problem/B)
- [KATTIS - Cuboid Slicing Game](https://open.kattis.com/problems/cuboidslicinggame)
- [HackerRank - Tower Breakers, Revisited!](https://www.hackerrank.com/contests/5-days-of-game-theory/challenges/tower-breakers-2)
- [HackerRank - Tower Breakers, Again!](https://www.hackerrank.com/contests/5-days-of-game-theory/challenges/tower-breakers-3/problem)
- [HackerRank - Chessboard Game, Again!](https://www.hackerrank.com/contests/5-days-of-game-theory/challenges/a-chessboard-game)
|
Sprague-Grundy theorem. Nim
|
---
title
games_on_graphs
---
# Games on arbitrary graphs
Let a game be played by two players on an arbitrary graph $G$.
I.e. the current state of the game is a certain vertex.
The players perform moves by turns, and move from the current vertex to an adjacent vertex using a connecting edge.
Depending on the game, the person that is unable to move will either lose or win the game.
We consider the most general case, the case of an arbitrary directed graph with cycles.
It is our task to determine, given an initial state, who will win the game if both players play with optimal strategies or determine that the result of the game will be a draw.
We will solve this problem very efficiently.
We will find the solution for all possible starting vertices of the graph in linear time with respect to the number of edges: $O(m)$.
## Description of the algorithm
We will call a vertex a winning vertex, if the player starting at this state will win the game, if they play optimally (regardless of what turns the other player makes).
Similarly, we will call a vertex a losing vertex, if the player starting at this vertex will lose the game, if the opponent plays optimally.
For some of the vertices of the graph, we already know in advance that they are winning or losing vertices: namely all vertices that have no outgoing edges.
Also we have the following **rules**:
- if a vertex has an outgoing edge that leads to a losing vertex, then the vertex itself is a winning vertex.
- if all outgoing edges of a certain vertex lead to winning vertices, then the vertex itself is a losing vertex.
- if at some point there are still undefined vertices, and none of them fits the first or the second rule, then each of these vertices, when used as a starting vertex, will lead to a draw if both players play optimally.
Thus, we can define an algorithm which runs in $O(n m)$ time immediately.
We go through all vertices and try to apply the first or second rule, and repeat.
However, we can accelerate this procedure, and get the complexity down to $O(m)$.
We will go over all the vertices, for which we initially know if they are winning or losing states.
For each of them, we start a [depth first search](../graph/depth-first-search.md).
This DFS will move back over the reversed edges.
First of all, it will not enter vertices which already are defined as winning or losing vertices.
And further, if the search goes from a losing vertex to an undefined vertex, then we mark this one as a winning vertex, and continue the DFS using this new vertex.
If we go from a winning vertex to an undefined vertex, then we must check whether all edges from this one lead to winning vertices.
We can perform this test in $O(1)$ by storing the number of edges that lead to a winning vertex for each vertex.
So if we go from a winning vertex to an undefined one, then we increase the counter, and check if this number is equal to the number of outgoing edges.
If this is the case, we can mark this vertex as a losing vertex, and continue the DFS from this vertex.
Otherwise we don't know yet, if this vertex is a winning or losing vertex, and therefore it doesn't make sense to keep continuing the DFS using it.
In total we visit every winning and every losing vertex exactly once (undefined vertices are not visited), and we go over each edge also at most one time.
Hence the complexity is $O(m)$.
## Implementation
Here is the implementation of such a DFS.
We assume that the variable `adj_rev` stores the adjacency list for the graph in **reversed** form, i.e. instead of storing the edge $(i, j)$ of the graph, we store $(j, i)$.
Also for each vertex we assume that the outgoing degree is already computed.
```cpp
vector<vector<int>> adj_rev;
vector<bool> winning;
vector<bool> losing;
vector<bool> visited;
vector<int> degree;
void dfs(int v) {
visited[v] = true;
for (int u : adj_rev[v]) {
if (!visited[u]) {
if (losing[v])
winning[u] = true;
else if (--degree[u] == 0)
losing[u] = true;
else
continue;
dfs(u);
}
}
}
```
## Example: "Policeman and thief"
Here is a concrete example of such a game.
There is an $m \times n$ board.
Some of the cells cannot be entered.
The initial coordinates of the police officer and of the thief are known.
One of the cells is the exit.
If the policeman and the thief are located at the same cell at any moment, the policeman wins.
If the thief is at the exit cell (without the policeman also being on the cell), then the thief wins.
The policeman can walk in all 8 directions, the thief only in 4 (along the coordinate axis).
Both the policeman and the thief will take turns moving.
However they also can skip a turn if they want to.
The first move is made by the policeman.
We will now **construct the graph**.
For this we must formalize the rules of the game.
The current state of the game is determined by the coordinates of the police officer $P$, the coordinates of the thief $T$, and also by whose turn it is, let's call this variable $P_{\text{turn}}$ (which is true when it is the policeman's turn).
Therefore a vertex of the graph is determined by the triple $(P, T, P_{\text{turn}})$
The graph then can be easily constructed, simply by following the rules of the game.
Next we need to determine which vertices are winning and which are losing ones initially.
There is a **subtle point** here.
The winning / losing vertices depend, in addition to the coordinates, also on $P_{\text{turn}}$, i.e. whose turn it is.
If it is the policeman's turn, then the vertex is a winning vertex, if the coordinates of the policeman and the thief coincide, and the vertex is a losing one if it is not a winning one and the thief is on the exit vertex.
If it is the thief's turn, then a vertex is a losing vertex, if the coordinates of the two players coincide, and it is a winning vertex if it is not a losing one, and the thief is at the exit vertex.
The only remaining decision before implementing is whether you want to build the graph **explicitly** or just construct it **on the fly**.
On one hand, building the graph explicitly will be a lot easier and there is less chance of making mistakes.
On the other hand, it will increase the amount of code and the running time will be slower than if you build the graph on the fly.
The following implementation will construct the graph explicitly:
```cpp
struct State {
int P, T;
bool Pstep;
};
vector<State> adj_rev[100][100][2]; // [P][T][Pstep]
bool winning[100][100][2];
bool losing[100][100][2];
bool visited[100][100][2];
int degree[100][100][2];
void dfs(State v) {
visited[v.P][v.T][v.Pstep] = true;
for (State u : adj_rev[v.P][v.T][v.Pstep]) {
if (!visited[u.P][u.T][u.Pstep]) {
if (losing[v.P][v.T][v.Pstep])
winning[u.P][u.T][u.Pstep] = true;
else if (--degree[u.P][u.T][u.Pstep] == 0)
losing[u.P][u.T][u.Pstep] = true;
else
continue;
dfs(u);
}
}
}
int main() {
int n, m;
cin >> n >> m;
vector<string> a(n);
for (int i = 0; i < n; i++)
cin >> a[i];
for (int P = 0; P < n*m; P++) {
for (int T = 0; T < n*m; T++) {
for (int Pstep = 0; Pstep <= 1; Pstep++) {
int Px = P/m, Py = P%m, Tx = T/m, Ty = T%m;
if (a[Px][Py]=='*' || a[Tx][Ty]=='*')
continue;
bool& win = winning[P][T][Pstep];
bool& lose = losing[P][T][Pstep];
if (Pstep) {
win = Px==Tx && Py==Ty;
lose = !win && a[Tx][Ty] == 'E';
} else {
lose = Px==Tx && Py==Ty;
win = !lose && a[Tx][Ty] == 'E';
}
if (win || lose)
continue;
State st = {P,T,!Pstep};
adj_rev[P][T][Pstep].push_back(st);
st.Pstep = Pstep;
degree[P][T][Pstep]++;
const int dx[] = {-1, 0, 1, 0, -1, -1, 1, 1};
const int dy[] = {0, 1, 0, -1, -1, 1, -1, 1};
for (int d = 0; d < (Pstep ? 8 : 4); d++) {
int PPx = Px, PPy = Py, TTx = Tx, TTy = Ty;
if (Pstep) {
PPx += dx[d];
PPy += dy[d];
} else {
TTx += dx[d];
TTy += dy[d];
}
if (PPx >= 0 && PPx < n && PPy >= 0 && PPy < m && a[PPx][PPy] != '*' &&
TTx >= 0 && TTx < n && TTy >= 0 && TTy < m && a[TTx][TTy] != '*')
{
adj_rev[PPx*m+PPy][TTx*m+TTy][!Pstep].push_back(st);
++degree[P][T][Pstep];
}
}
}
}
}
for (int P = 0; P < n*m; P++) {
for (int T = 0; T < n*m; T++) {
for (int Pstep = 0; Pstep <= 1; Pstep++) {
if ((winning[P][T][Pstep] || losing[P][T][Pstep]) && !visited[P][T][Pstep])
dfs({P, T, (bool)Pstep});
}
}
}
int P_st, T_st;
for (int i = 0; i < n; i++) {
for (int j = 0; j < m; j++) {
if (a[i][j] == 'P')
P_st = i*m+j;
else if (a[i][j] == 'T')
T_st = i*m+j;
}
}
if (winning[P_st][T_st][true]) {
cout << "Police catches the thief" << endl;
} else if (losing[P_st][T_st][true]) {
cout << "The thief escapes" << endl;
} else {
cout << "Draw" << endl;
}
}
```
|
---
title
games_on_graphs
---
# Games on arbitrary graphs
Let a game be played by two players on an arbitrary graph $G$.
I.e. the current state of the game is a certain vertex.
The players perform moves by turns, and move from the current vertex to an adjacent vertex using a connecting edge.
Depending on the game, the person that is unable to move will either lose or win the game.
We consider the most general case, the case of an arbitrary directed graph with cycles.
It is our task to determine, given an initial state, who will win the game if both players play with optimal strategies or determine that the result of the game will be a draw.
We will solve this problem very efficiently.
We will find the solution for all possible starting vertices of the graph in linear time with respect to the number of edges: $O(m)$.
## Description of the algorithm
We will call a vertex a winning vertex, if the player starting at this state will win the game, if they play optimally (regardless of what turns the other player makes).
Similarly, we will call a vertex a losing vertex, if the player starting at this vertex will lose the game, if the opponent plays optimally.
For some of the vertices of the graph, we already know in advance that they are winning or losing vertices: namely all vertices that have no outgoing edges.
Also we have the following **rules**:
- if a vertex has an outgoing edge that leads to a losing vertex, then the vertex itself is a winning vertex.
- if all outgoing edges of a certain vertex lead to winning vertices, then the vertex itself is a losing vertex.
- if at some point there are still undefined vertices, and none of them fits the first or the second rule, then each of these vertices, when used as a starting vertex, will lead to a draw if both players play optimally.
Thus, we can define an algorithm which runs in $O(n m)$ time immediately.
We go through all vertices and try to apply the first or second rule, and repeat.
However, we can accelerate this procedure, and get the complexity down to $O(m)$.
We will go over all the vertices, for which we initially know if they are winning or losing states.
For each of them, we start a [depth first search](../graph/depth-first-search.md).
This DFS will move back over the reversed edges.
First of all, it will not enter vertices which already are defined as winning or losing vertices.
And further, if the search goes from a losing vertex to an undefined vertex, then we mark this one as a winning vertex, and continue the DFS using this new vertex.
If we go from a winning vertex to an undefined vertex, then we must check whether all edges from this one lead to winning vertices.
We can perform this test in $O(1)$ by storing the number of edges that lead to a winning vertex for each vertex.
So if we go from a winning vertex to an undefined one, then we increase the counter, and check if this number is equal to the number of outgoing edges.
If this is the case, we can mark this vertex as a losing vertex, and continue the DFS from this vertex.
Otherwise we don't know yet, if this vertex is a winning or losing vertex, and therefore it doesn't make sense to keep continuing the DFS using it.
In total we visit every winning and every losing vertex exactly once (undefined vertices are not visited), and we go over each edge also at most one time.
Hence the complexity is $O(m)$.
## Implementation
Here is the implementation of such a DFS.
We assume that the variable `adj_rev` stores the adjacency list for the graph in **reversed** form, i.e. instead of storing the edge $(i, j)$ of the graph, we store $(j, i)$.
Also for each vertex we assume that the outgoing degree is already computed.
```cpp
vector<vector<int>> adj_rev;
vector<bool> winning;
vector<bool> losing;
vector<bool> visited;
vector<int> degree;
void dfs(int v) {
visited[v] = true;
for (int u : adj_rev[v]) {
if (!visited[u]) {
if (losing[v])
winning[u] = true;
else if (--degree[u] == 0)
losing[u] = true;
else
continue;
dfs(u);
}
}
}
```
## Example: "Policeman and thief"
Here is a concrete example of such a game.
There is an $m \times n$ board.
Some of the cells cannot be entered.
The initial coordinates of the police officer and of the thief are known.
One of the cells is the exit.
If the policeman and the thief are located at the same cell at any moment, the policeman wins.
If the thief is at the exit cell (without the policeman also being on the cell), then the thief wins.
The policeman can walk in all 8 directions, the thief only in 4 (along the coordinate axis).
Both the policeman and the thief will take turns moving.
However they also can skip a turn if they want to.
The first move is made by the policeman.
We will now **construct the graph**.
For this we must formalize the rules of the game.
The current state of the game is determined by the coordinates of the police officer $P$, the coordinates of the thief $T$, and also by whose turn it is, let's call this variable $P_{\text{turn}}$ (which is true when it is the policeman's turn).
Therefore a vertex of the graph is determined by the triple $(P, T, P_{\text{turn}})$
The graph then can be easily constructed, simply by following the rules of the game.
Next we need to determine which vertices are winning and which are losing ones initially.
There is a **subtle point** here.
The winning / losing vertices depend, in addition to the coordinates, also on $P_{\text{turn}}$, i.e. whose turn it is.
If it is the policeman's turn, then the vertex is a winning vertex, if the coordinates of the policeman and the thief coincide, and the vertex is a losing one if it is not a winning one and the thief is on the exit vertex.
If it is the thief's turn, then a vertex is a losing vertex, if the coordinates of the two players coincide, and it is a winning vertex if it is not a losing one, and the thief is at the exit vertex.
The only remaining decision before implementing is whether you want to build the graph **explicitly** or just construct it **on the fly**.
On one hand, building the graph explicitly will be a lot easier and there is less chance of making mistakes.
On the other hand, it will increase the amount of code and the running time will be slower than if you build the graph on the fly.
The following implementation will construct the graph explicitly:
```cpp
struct State {
int P, T;
bool Pstep;
};
vector<State> adj_rev[100][100][2]; // [P][T][Pstep]
bool winning[100][100][2];
bool losing[100][100][2];
bool visited[100][100][2];
int degree[100][100][2];
void dfs(State v) {
visited[v.P][v.T][v.Pstep] = true;
for (State u : adj_rev[v.P][v.T][v.Pstep]) {
if (!visited[u.P][u.T][u.Pstep]) {
if (losing[v.P][v.T][v.Pstep])
winning[u.P][u.T][u.Pstep] = true;
else if (--degree[u.P][u.T][u.Pstep] == 0)
losing[u.P][u.T][u.Pstep] = true;
else
continue;
dfs(u);
}
}
}
int main() {
int n, m;
cin >> n >> m;
vector<string> a(n);
for (int i = 0; i < n; i++)
cin >> a[i];
for (int P = 0; P < n*m; P++) {
for (int T = 0; T < n*m; T++) {
for (int Pstep = 0; Pstep <= 1; Pstep++) {
int Px = P/m, Py = P%m, Tx = T/m, Ty = T%m;
if (a[Px][Py]=='*' || a[Tx][Ty]=='*')
continue;
bool& win = winning[P][T][Pstep];
bool& lose = losing[P][T][Pstep];
if (Pstep) {
win = Px==Tx && Py==Ty;
lose = !win && a[Tx][Ty] == 'E';
} else {
lose = Px==Tx && Py==Ty;
win = !lose && a[Tx][Ty] == 'E';
}
if (win || lose)
continue;
State st = {P,T,!Pstep};
adj_rev[P][T][Pstep].push_back(st);
st.Pstep = Pstep;
degree[P][T][Pstep]++;
const int dx[] = {-1, 0, 1, 0, -1, -1, 1, 1};
const int dy[] = {0, 1, 0, -1, -1, 1, -1, 1};
for (int d = 0; d < (Pstep ? 8 : 4); d++) {
int PPx = Px, PPy = Py, TTx = Tx, TTy = Ty;
if (Pstep) {
PPx += dx[d];
PPy += dy[d];
} else {
TTx += dx[d];
TTy += dy[d];
}
if (PPx >= 0 && PPx < n && PPy >= 0 && PPy < m && a[PPx][PPy] != '*' &&
TTx >= 0 && TTx < n && TTy >= 0 && TTy < m && a[TTx][TTy] != '*')
{
adj_rev[PPx*m+PPy][TTx*m+TTy][!Pstep].push_back(st);
++degree[P][T][Pstep];
}
}
}
}
}
for (int P = 0; P < n*m; P++) {
for (int T = 0; T < n*m; T++) {
for (int Pstep = 0; Pstep <= 1; Pstep++) {
if ((winning[P][T][Pstep] || losing[P][T][Pstep]) && !visited[P][T][Pstep])
dfs({P, T, (bool)Pstep});
}
}
}
int P_st, T_st;
for (int i = 0; i < n; i++) {
for (int j = 0; j < m; j++) {
if (a[i][j] == 'P')
P_st = i*m+j;
else if (a[i][j] == 'T')
T_st = i*m+j;
}
}
if (winning[P_st][T_st][true]) {
cout << "Police catches the thief" << endl;
} else if (losing[P_st][T_st][true]) {
cout << "The thief escapes" << endl;
} else {
cout << "Draw" << endl;
}
}
```
|
Games on arbitrary graphs
|
---
title
- Original
---
# Binary search
**Binary search** is a method that allows for quicker search of a target value by splitting the search interval into two. Its most common application is searching values in sorted arrays, however the splitting idea is crucial in many other typical tasks.
## Search in sorted arrays
The most typical problem that leads to the binary search is as follows. You're given a sorted array $A_0 \leq A_1 \leq \dots \leq A_{n-1}$, check if $k$ is present within the sequence. The simplest solution would be to check every element one by one and compare it with $k$ (a so-called linear search). This approach works in $O(n)$, but doesn't utilize the fact that the array is sorted.
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/8/83/Binary_Search_Depiction.svg" width="800px">
<br>
<i>Binary search of the value $7$ in an array</i>.
<br>
<i>The <a href="https://commons.wikimedia.org/wiki/File:Binary_Search_Depiction.svg">image</a> by [AlwaysAngry](https://commons.wikimedia.org/wiki/User:AlwaysAngry) is distributed under <a href="https://creativecommons.org/licenses/by-sa/4.0/deed.en">CC BY-SA 4.0</a></i> license.
</center>
Now assume that we know two indices $L < R$ such that $A_L \leq k \leq A_R$. Because the array is sorted, we can deduce that $k$ either occurs among $A_L, A_{L+1}, \dots, A_R$ or doesn't occur in the array at all. If we pick an arbitrary index $M$ such that $L < M < R$ and check whether $k$ is less than or greater than $A_M$, we have two possible cases:
1. $A_L \leq k \leq A_M$. In this case, we reduce the problem from $[L, R]$ to $[L, M]$;
1. $A_M \leq k \leq A_R$. In this case, we reduce the problem from $[L, R]$ to $[M, R]$.
When it is impossible to pick $M$, that is, when $R = L + 1$, we directly compare $k$ with $A_L$ and $A_R$. Otherwise we would want to pick $M$ in such manner that it reduces the active segment to a single element as quickly as possible _in the worst case_.
In the worst case we will always be left with the larger of the segments $[L, M]$ and $[M, R]$, so the reduction would be from $R-L$ to $\max(M-L, R-M)$. To minimize this value, we should pick $M \approx \frac{L+R}{2}$, then
$$
M-L \approx \frac{R-L}{2} \approx R-M.
$$
In other words, from the worst-case scenario perspective it is optimal to always pick $M$ in the middle of $[L, R]$ and split it in half. Thus, the active segment halves on each step until it becomes of size $1$. So, if the process needs $h$ steps, in the end it reduces the difference between $R$ and $L$ from $R-L$ to $\frac{R-L}{2^h} \approx 1$, giving us the equation $2^h \approx R-L$.
Taking $\log_2$ on both sides, we get $h \approx \log_2(R-L) \in O(\log n)$.
Logarithmic number of steps is drastically better than that of linear search. For example, for $n \approx 2^{20} \approx 10^6$ you'd need to make approximately a million operations for linear search, but only around $20$ operations with the binary search.
### Lower bound and upper bound
It is often convenient to find the position of the first element that is not less than $k$ (called the lower bound of $k$ in the array) or the position of the first element that is greater than $k$ (called the upper bound of $k$) rather than the exact position of the element.
Together, lower and upper bounds produce a possibly empty half-interval of the array elements that are equal to $k$. To check whether $k$ is present in the array it's enough to find its lower bound and check if the corresponding element equates to $k$.
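In C++ the standard library already provides both queries for sorted ranges (`std::lower_bound` and `std::upper_bound`), so a typical usage sketch looks like this; the array and the searched value are illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {1, 3, 3, 5, 8};               // sorted array, example values
    int k = 3;
    auto lo = lower_bound(a.begin(), a.end(), k);  // first element not less than k
    auto hi = upper_bound(a.begin(), a.end(), k);  // first element greater than k
    bool present = lo != a.end() && *lo == k;
    cout << "present: " << present
         << ", occurrences: " << (hi - lo) << "\n";
}
```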
### Implementation
The explanation above provides a rough description of the algorithm. For the implementation details, we'd need to be more precise.
We will maintain a pair $L < R$ such that $A_L \leq k < A_R$. Meaning that the active search interval is $[L, R)$. We use half-interval here instead of a segment $[L, R]$ as it turns out to require less corner case work.
When $R = L+1$, we can deduce from definitions above that $R$ is the upper bound of $k$. It is convenient to initialize $R$ with past-the-end index, that is $R=n$ and $L$ with before-the-beginning index, that is $L=-1$. It is fine as long as we never evaluate $A_L$ and $A_R$ in our algorithm directly, formally treating it as $A_L = -\infty$ and $A_R = +\infty$.
Finally, to be specific about the value of $M$ we pick, we will stick with $M = \lfloor \frac{L+R}{2} \rfloor$.
Then the implementation could look like this:
```cpp
... // a sorted array is stored as a[0], a[1], ..., a[n-1]
int l = -1, r = n;
while(r - l > 1) {
int m = (l + r) / 2;
if(k < a[m]) {
r = m; // a[l] <= k < a[m] <= a[r]
} else {
l = m; // a[l] <= a[m] <= k < a[r]
}
}
```
During the execution of the algorithm, we never evaluate either $A_L$ or $A_R$, as $L < M < R$. In the end, $L$ will be the index of the last element that is not greater than $k$ (or $-1$ if there is no such element) and $R$ will be the index of the first element larger than $k$ (or $n$ if there is no such element).
## Search on arbitrary predicate
Let $f : \{0,1,\dots, n-1\} \to \{0, 1\}$ be a boolean function defined on $0,1,\dots,n-1$ such that it is monotonous, that is
$$
f(0) \leq f(1) \leq \dots \leq f(n-1).
$$
The binary search, the way it is described above, finds the partition of the array by the predicate $f(M)$, holding the boolean value of $k < A_M$ expression. In other words, binary search finds the unique index $L$ such that $f(L) = 0$ and $f(R)=f(L+1)=1$.
It is possible to use an arbitrary monotonous predicate instead of $k < A_M$. It is particularly useful when the computation of $f(k)$ requires too much time to be done for every possible value.
```cpp
... // f(i) is a boolean function such that f(0) <= ... <= f(n-1)
int l = -1, r = n;
while(r - l > 1) {
int m = (l + r) / 2;
if(f(m)) {
r = m; // 0 = f(l) < f(m) = 1
} else {
l = m; // 0 = f(m) < f(r) = 1
}
}
```
### Binary search on the answer
Such a situation often occurs when we're asked to compute some value, but we're only capable of checking whether this value is at least $i$. For example, you're given an array $a_1,\dots,a_n$ and you're asked to find the maximum floored average sum
$$
\left \lfloor \frac{a_l + a_{l+1} + \dots + a_r}{r-l+1} \right\rfloor
$$
among all possible pairs of $l,r$ such that $r-l \geq x$. One of the simplest ways to solve this problem is to check whether the answer is at least $\lambda$, that is if there is a pair $l, r$ such that the following is true:
$$
\frac{a_l + a_{l+1} + \dots + a_r}{r-l+1} \geq \lambda.
$$
Equivalently, it rewrites as
$$
(a_l - \lambda) + (a_{l+1} - \lambda) + \dots + (a_r - \lambda) \geq 0,
$$
so now we need to check whether there is a subarray of a new array $a_i - \lambda$ of length at least $x+1$ with non-negative sum, which is doable with some prefix sums.
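A sketch of the check "is the answer at least $\lambda$?" described above, using prefix sums of $a_i - \lambda$ and a running minimum over admissible left borders; an outer binary search over $\lambda$ would then call this check repeatedly:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Is there a subarray of length at least x + 1 whose average is >= lambda?
bool at_least(const vector<long long>& a, int x, double lambda) {
    int n = a.size();
    vector<double> pref(n + 1, 0);
    for (int i = 0; i < n; i++)
        pref[i + 1] = pref[i] + (a[i] - lambda);
    double best = pref[0];                 // minimal prefix usable as a left border
    for (int r = x + 1; r <= n; r++) {
        best = min(best, pref[r - (x + 1)]);
        if (pref[r] - best >= 0)           // non-negative sum of a_i - lambda
            return true;
    }
    return false;
}
```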
## Continuous search
Let $f : \mathbb R \to \mathbb R$ be a real-valued function that is continuous on a segment $[L, R]$.
Without loss of generality assume that $f(L) \leq f(R)$. From [intermediate value theorem](https://en.wikipedia.org/wiki/Intermediate_value_theorem) it follows that for any $y \in [f(L), f(R)]$ there is $x \in [L, R]$ such that $f(x) = y$. Note that, unlike previous paragraphs, the function is _not_ required to be monotonous.
The value $x$ could be approximated up to $\pm\delta$ in $O\left(\log \frac{R-L}{\delta}\right)$ time for any specific value of $\delta$. The idea is essentially the same, if we take $M \in (L, R)$ then we would be able to reduce the search interval to either $[L, M]$ or $[M, R]$ depending on whether $f(M)$ is larger than $y$. One common example here would be finding roots of odd-degree polynomials.
For example, let $f(x)=x^3 + ax^2 + bx + c$. Then $f(L) \to -\infty$ and $f(R) \to +\infty$ with $L \to -\infty$ and $R \to +\infty$. Which means that it is always possible to find sufficiently small $L$ and sufficiently large $R$ such that $f(L) < 0$ and $f(R) > 0$. Then, it is possible to find with binary search arbitrarily small interval containing $x$ such that $f(x)=0$.
## Search with powers of 2
Another noteworthy way to do binary search is, instead of maintaining an active segment, to maintain the current pointer $i$ and the current power $k$. The pointer starts at $i=L$ and then on each iteration one tests the predicate at point $i+2^k$. If the predicate is still $0$, the pointer is advanced from $i$ to $i+2^k$, otherwise it stays the same, then the power $k$ is decreased by $1$.
This paradigm is widely used in tasks around trees, such as finding lowest common ancestor of two vertices or finding an ancestor of a specific vertex that has a certain height. It could also be adapted to e.g. find the $k$-th non-zero element in a Fenwick tree.
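With the same monotonous predicate $f$ and the same conventions as before ($f$ defined on $0,\dots,n-1$, before-the-beginning index $-1$), a sketch of this pointer-and-power search could look as follows:

```cpp
... // f(i) is a boolean function such that f(0) <= ... <= f(n-1)
int i = -1;
for (int k = 30; k >= 0; k--)                  // 2^30 must be at least n
    if (i + (1 << k) < n && !f(i + (1 << k)))
        i += (1 << k);                         // advance while the predicate is still 0
// now i is the last index with f(i) == 0 (or -1), and i + 1 is the first with f == 1
```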
|
---
title
- Original
---
# Binary search
**Binary search** is a method that allows for quicker search of a target value by splitting the search interval into two. Its most common application is searching values in sorted arrays, however the splitting idea is crucial in many other typical tasks.
## Search in sorted arrays
The most typical problem that leads to the binary search is as follows. You're given a sorted array $A_0 \leq A_1 \leq \dots \leq A_{n-1}$, check if $k$ is present within the sequence. The simplest solution would be to check every element one by one and compare it with $k$ (a so-called linear search). This approach works in $O(n)$, but doesn't utilize the fact that the array is sorted.
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/8/83/Binary_Search_Depiction.svg" width="800px">
<br>
<i>Binary search of the value $7$ in an array</i>.
<br>
<i>The <a href="https://commons.wikimedia.org/wiki/File:Binary_Search_Depiction.svg">image</a> by [AlwaysAngry](https://commons.wikimedia.org/wiki/User:AlwaysAngry) is distributed under <a href="https://creativecommons.org/licenses/by-sa/4.0/deed.en">CC BY-SA 4.0</a></i> license.
</center>
Now assume that we know two indices $L < R$ such that $A_L \leq k \leq A_R$. Because the array is sorted, we can deduce that $k$ either occurs among $A_L, A_{L+1}, \dots, A_R$ or doesn't occur in the array at all. If we pick an arbitrary index $M$ such that $L < M < R$ and check whether $k$ is less than or greater than $A_M$, we have two possible cases:
1. $A_L \leq k \leq A_M$. In this case, we reduce the problem from $[L, R]$ to $[L, M]$;
1. $A_M \leq k \leq A_R$. In this case, we reduce the problem from $[L, R]$ to $[M, R]$.
When it is impossible to pick $M$, that is, when $R = L + 1$, we directly compare $k$ with $A_L$ and $A_R$. Otherwise we would want to pick $M$ in such manner that it reduces the active segment to a single element as quickly as possible _in the worst case_.
In the worst case we will always be left with the larger of the segments $[L, M]$ and $[M, R]$, so the reduction would be from $R-L$ to $\max(M-L, R-M)$. To minimize this value, we should pick $M \approx \frac{L+R}{2}$, then
$$
M-L \approx \frac{R-L}{2} \approx R-M.
$$
In other words, from the worst-case scenario perspective it is optimal to always pick $M$ in the middle of $[L, R]$ and split it in half. Thus, the active segment halves on each step until it becomes of size $1$. So, if the process needs $h$ steps, in the end it reduces the difference between $R$ and $L$ from $R-L$ to $\frac{R-L}{2^h} \approx 1$, giving us the equation $2^h \approx R-L$.
Taking $\log_2$ on both sides, we get $h \approx \log_2(R-L) \in O(\log n)$.
Logarithmic number of steps is drastically better than that of linear search. For example, for $n \approx 2^{20} \approx 10^6$ you'd need to make approximately a million operations for linear search, but only around $20$ operations with the binary search.
### Lower bound and upper bound
It is often convenient to find the position of the first element that is not less than $k$ (called the lower bound of $k$ in the array) or the position of the first element that is greater than $k$ (called the upper bound of $k$) rather than the exact position of the element.
Together, lower and upper bounds produce a possibly empty half-interval of the array elements that are equal to $k$. To check whether $k$ is present in the array it's enough to find its lower bound and check if the corresponding element equates to $k$.
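In C++ the standard library already provides both queries for sorted ranges (`std::lower_bound` and `std::upper_bound`), so a typical usage sketch looks like this; the array and the searched value are illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {1, 3, 3, 5, 8};               // sorted array, example values
    int k = 3;
    auto lo = lower_bound(a.begin(), a.end(), k);  // first element not less than k
    auto hi = upper_bound(a.begin(), a.end(), k);  // first element greater than k
    bool present = lo != a.end() && *lo == k;
    cout << "present: " << present
         << ", occurrences: " << (hi - lo) << "\n";
}
```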
### Implementation
The explanation above provides a rough description of the algorithm. For the implementation details, we'd need to be more precise.
We will maintain a pair $L < R$ such that $A_L \leq k < A_R$. Meaning that the active search interval is $[L, R)$. We use half-interval here instead of a segment $[L, R]$ as it turns out to require less corner case work.
When $R = L+1$, we can deduce from definitions above that $R$ is the upper bound of $k$. It is convenient to initialize $R$ with past-the-end index, that is $R=n$ and $L$ with before-the-beginning index, that is $L=-1$. It is fine as long as we never evaluate $A_L$ and $A_R$ in our algorithm directly, formally treating it as $A_L = -\infty$ and $A_R = +\infty$.
Finally, to be specific about the value of $M$ we pick, we will stick with $M = \lfloor \frac{L+R}{2} \rfloor$.
Then the implementation could look like this:
```cpp
... // a sorted array is stored as a[0], a[1], ..., a[n-1]
int l = -1, r = n;
while(r - l > 1) {
int m = (l + r) / 2;
if(k < a[m]) {
r = m; // a[l] <= k < a[m] <= a[r]
} else {
l = m; // a[l] <= a[m] <= k < a[r]
}
}
```
During the execution of the algorithm, we never evaluate either $A_L$ or $A_R$, as $L < M < R$. In the end, $L$ will be the index of the last element that is not greater than $k$ (or $-1$ if there is no such element) and $R$ will be the index of the first element larger than $k$ (or $n$ if there is no such element).
## Search on arbitrary predicate
Let $f : \{0,1,\dots, n-1\} \to \{0, 1\}$ be a boolean function defined on $0,1,\dots,n-1$ such that it is monotonous, that is
$$
f(0) \leq f(1) \leq \dots \leq f(n-1).
$$
The binary search, the way it is described above, finds the partition of the array by the predicate $f(M)$, holding the boolean value of $k < A_M$ expression. In other words, binary search finds the unique index $L$ such that $f(L) = 0$ and $f(R)=f(L+1)=1$.
It is possible to use an arbitrary monotonous predicate instead of $k < A_M$. It is particularly useful when the computation of $f(k)$ requires too much time to be done for every possible value.
```cpp
... // f(i) is a boolean function such that f(0) <= ... <= f(n-1)
int l = -1, r = n;
while(r - l > 1) {
int m = (l + r) / 2;
if(f(m)) {
r = m; // 0 = f(l) < f(m) = 1
} else {
l = m; // 0 = f(m) < f(r) = 1
}
}
```
### Binary search on the answer
Such a situation often occurs when we're asked to compute some value, but we're only capable of checking whether this value is at least $i$. For example, you're given an array $a_1,\dots,a_n$ and you're asked to find the maximum floored average sum
$$
\left \lfloor \frac{a_l + a_{l+1} + \dots + a_r}{r-l+1} \right\rfloor
$$
among all possible pairs of $l,r$ such that $r-l \geq x$. One of the simplest ways to solve this problem is to check whether the answer is at least $\lambda$, that is if there is a pair $l, r$ such that the following is true:
$$
\frac{a_l + a_{l+1} + \dots + a_r}{r-l+1} \geq \lambda.
$$
Equivalently, it rewrites as
$$
(a_l - \lambda) + (a_{l+1} - \lambda) + \dots + (a_r - \lambda) \geq 0,
$$
so now we need to check whether there is a subarray of a new array $a_i - \lambda$ of length at least $x+1$ with non-negative sum, which is doable with some prefix sums.
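A sketch of the check "is the answer at least $\lambda$?" described above, using prefix sums of $a_i - \lambda$ and a running minimum over admissible left borders; an outer binary search over $\lambda$ would then call this check repeatedly:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Is there a subarray of length at least x + 1 whose average is >= lambda?
bool at_least(const vector<long long>& a, int x, double lambda) {
    int n = a.size();
    vector<double> pref(n + 1, 0);
    for (int i = 0; i < n; i++)
        pref[i + 1] = pref[i] + (a[i] - lambda);
    double best = pref[0];                 // minimal prefix usable as a left border
    for (int r = x + 1; r <= n; r++) {
        best = min(best, pref[r - (x + 1)]);
        if (pref[r] - best >= 0)           // non-negative sum of a_i - lambda
            return true;
    }
    return false;
}
```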
## Continuous search
Let $f : \mathbb R \to \mathbb R$ be a real-valued function that is continuous on a segment $[L, R]$.
Without loss of generality assume that $f(L) \leq f(R)$. From [intermediate value theorem](https://en.wikipedia.org/wiki/Intermediate_value_theorem) it follows that for any $y \in [f(L), f(R)]$ there is $x \in [L, R]$ such that $f(x) = y$. Note that, unlike previous paragraphs, the function is _not_ required to be monotonous.
The value $x$ could be approximated up to $\pm\delta$ in $O\left(\log \frac{R-L}{\delta}\right)$ time for any specific value of $\delta$. The idea is essentially the same, if we take $M \in (L, R)$ then we would be able to reduce the search interval to either $[L, M]$ or $[M, R]$ depending on whether $f(M)$ is larger than $y$. One common example here would be finding roots of odd-degree polynomials.
For example, let $f(x)=x^3 + ax^2 + bx + c$. Then $f(L) \to -\infty$ and $f(R) \to +\infty$ with $L \to -\infty$ and $R \to +\infty$. Which means that it is always possible to find sufficiently small $L$ and sufficiently large $R$ such that $f(L) < 0$ and $f(R) > 0$. Then, it is possible to find with binary search arbitrarily small interval containing $x$ such that $f(x)=0$.
## Search with powers of 2
Another noteworthy way to do binary search is, instead of maintaining an active segment, to maintain the current pointer $i$ and the current power $k$. The pointer starts at $i=L$ and then on each iteration one tests the predicate at point $i+2^k$. If the predicate is still $0$, the pointer is advanced from $i$ to $i+2^k$, otherwise it stays the same, then the power $k$ is decreased by $1$.
This paradigm is widely used in tasks around trees, such as finding lowest common ancestor of two vertices or finding an ancestor of a specific vertex that has a certain height. It could also be adapted to e.g. find the $k$-th non-zero element in a Fenwick tree.
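With the same monotonous predicate $f$ and the same conventions as before ($f$ defined on $0,\dots,n-1$, before-the-beginning index $-1$), a sketch of this pointer-and-power search could look as follows:

```cpp
... // f(i) is a boolean function such that f(0) <= ... <= f(n-1)
int i = -1;
for (int k = 30; k >= 0; k--)                  // 2^30 must be at least n
    if (i + (1 << k) < n && !f(i + (1 << k)))
        i += (1 << k);                         // advance while the predicate is still 0
// now i is the last index with f(i) == 0 (or -1), and i + 1 is the first with f == 1
```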
## Practice Problems
* [LeetCode - Find First and Last Position of Element in Sorted Array](https://leetcode.com/problems/find-first-and-last-position-of-element-in-sorted-array/)
* [LeetCode - Search Insert Position](https://leetcode.com/problems/search-insert-position/)
* [LeetCode - Sqrt(x)](https://leetcode.com/problems/sqrtx/)
* [LeetCode - First Bad Version](https://leetcode.com/problems/first-bad-version/)
* [LeetCode - Valid Perfect Square](https://leetcode.com/problems/valid-perfect-square/)
* [LeetCode - Guess Number Higher or Lower](https://leetcode.com/problems/guess-number-higher-or-lower/)
* [LeetCode - Search a 2D Matrix II](https://leetcode.com/problems/search-a-2d-matrix-ii/)
* [Codeforces - Interesting Drink](https://codeforces.com/problemset/problem/706/B/)
* [Codeforces - Magic Powder - 1](https://codeforces.com/problemset/problem/670/D1)
* [Codeforces - Another Problem on Strings](https://codeforces.com/problemset/problem/165/C)
* [Codeforces - Frodo and pillows](https://codeforces.com/problemset/problem/760/B)
* [Codeforces - GukiZ hates Boxes](https://codeforces.com/problemset/problem/551/C)
* [Codeforces - Enduring Exodus](https://codeforces.com/problemset/problem/645/C)
* [Codeforces - Chip 'n Dale Rescue Rangers](https://codeforces.com/problemset/problem/590/B)
|
Binary search
|
---
title
ternary_search
---
# Ternary Search
We are given a function $f(x)$ which is unimodal on an interval $[l, r]$. By unimodal function, we mean one of two behaviors of the function:
1. The function strictly increases first, reaches a maximum (at a single point or over an interval), and then strictly decreases.
2. The function strictly decreases first, reaches a minimum, and then strictly increases.
In this article, we will assume the first scenario.
The second scenario is completely symmetrical to the first.
The task is to find the maximum of function $f(x)$ on the interval $[l, r]$.
## Algorithm
Consider any 2 points $m_1$, and $m_2$ in this interval: $l < m_1 < m_2 < r$. We evaluate the function at $m_1$ and $m_2$, i.e. find the values of $f(m_1)$ and $f(m_2)$. Now, we get one of three options:
- $f(m_1) < f(m_2)$
The desired maximum can not be located on the left side of $m_1$, i.e. on the interval $[l, m_1]$, since either both points $m_1$ and $m_2$ or just $m_1$ belong to the area where the function increases. In either case, this means that we have to search for the maximum in the segment $[m_1, r]$.
- $f(m_1) > f(m_2)$
This situation is symmetrical to the previous one: the maximum can not be located on the right side of $m_2$, i.e. on the interval $[m_2, r]$, and the search space is reduced to the segment $[l, m_2]$.
- $f(m_1) = f(m_2)$
We can see that either both of these points belong to the area where the value of the function is maximized, or $m_1$ is in the area of increasing values and $m_2$ is in the area of descending values (here we used the strictness of function increasing/decreasing). Thus, the search space is reduced to $[m_1, m_2]$. To simplify the code, this case can be combined with any of the previous cases.
Thus, based on the comparison of the values in the two inner points, we can replace the current interval $[l, r]$ with a new, shorter interval $[l^\prime, r^\prime]$. Repeatedly applying the described procedure to the interval, we can get an arbitrarily short interval. Eventually, its length will be less than a certain pre-defined constant (accuracy), and the process can be stopped. This is a numerical method, so we can assume that after that the function reaches its maximum at all points of the last interval $[l, r]$. Without loss of generality, we can take $f(l)$ as the return value.
We didn't impose any restrictions on the choice of points $m_1$ and $m_2$. This choice will define the convergence rate and the accuracy of the implementation. The most common way is to choose the points so that they divide the interval $[l, r]$ into three equal parts. Thus, we have
$$m_1 = l + \frac{(r - l)}{3}$$
$$m_2 = r - \frac{(r - l)}{3}$$
If $m_1$ and $m_2$ are chosen to be closer to each other, the convergence rate will increase slightly.
### Run time analysis
$$T(n) = T({2n}/{3}) + 1 = \Theta(\log n)$$
It can be visualized as follows: every time after evaluating the function at points $m_1$ and $m_2$, we are essentially ignoring about one third of the interval, either the left or right one. Thus the size of the search space is ${2n}/{3}$ of the original one.
Applying [Master's Theorem](https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)), we get the desired complexity estimate.
### The case of the integer arguments
If $f(x)$ takes an integer parameter, the interval $[l, r]$ becomes discrete. Since we did not impose any restrictions on the choice of points $m_1$ and $m_2$, the correctness of the algorithm is not affected. $m_1$ and $m_2$ can still be chosen to divide $[l, r]$ into 3 approximately equal parts.
The difference occurs in the stopping criterion of the algorithm. Ternary search will have to stop when $(r - l) < 3$, because in that case we can no longer select $m_1$ and $m_2$ to be different from each other as well as from $l$ and $r$, and this can cause an infinite loop. Once $(r - l) < 3$, the remaining pool of candidate points $(l, l + 1, \ldots, r)$ needs to be checked to find the point which produces the maximum value $f(x)$.
## Implementation
```cpp
double ternary_search(double l, double r) {
double eps = 1e-9; //set the error limit here
while (r - l > eps) {
double m1 = l + (r - l) / 3;
double m2 = r - (r - l) / 3;
double f1 = f(m1); //evaluates the function at m1
double f2 = f(m2); //evaluates the function at m2
if (f1 < f2)
l = m1;
else
r = m2;
}
return f(l); //return the maximum of f(x) in [l, r]
}
```
Here `eps` is in fact the absolute error (not taking into account errors due to the inaccurate calculation of the function).
Instead of the criterion `r - l > eps`, we can select a constant number of iterations as a stopping criterion. The number of iterations should be chosen to ensure the required accuracy. Typically, in most programming challenges the error limit is ${10}^{-6}$ and thus 200 - 300 iterations are sufficient. Also, the number of iterations doesn't depend on the values of $l$ and $r$, so the number of iterations corresponds to the required relative error.
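For the integer version of the problem described above, a possible sketch following the stopping criterion $(r - l) < 3$ could look like this; here `f` takes an integer argument and the function returns the argument that maximizes it:

```cpp
... // f(x) is the unimodal function, now taking an integer argument
int ternary_search_int(int l, int r) {
    while (r - l >= 3) {
        int m1 = l + (r - l) / 3;
        int m2 = r - (r - l) / 3;
        if (f(m1) < f(m2))
            l = m1;
        else
            r = m2;
    }
    int best = l;                       // check the few remaining candidates
    for (int x = l + 1; x <= r; x++)
        if (f(x) > f(best))
            best = x;
    return best;                        // argument maximizing f on [l, r]
}
```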
|
---
title
ternary_search
---
# Ternary Search
We are given a function $f(x)$ which is unimodal on an interval $[l, r]$. By unimodal function, we mean one of two behaviors of the function:
1. The function strictly increases first, reaches a maximum (at a single point or over an interval), and then strictly decreases.
2. The function strictly decreases first, reaches a minimum, and then strictly increases.
In this article, we will assume the first scenario.
The second scenario is completely symmetrical to the first.
The task is to find the maximum of function $f(x)$ on the interval $[l, r]$.
## Algorithm
Consider any 2 points $m_1$, and $m_2$ in this interval: $l < m_1 < m_2 < r$. We evaluate the function at $m_1$ and $m_2$, i.e. find the values of $f(m_1)$ and $f(m_2)$. Now, we get one of three options:
- $f(m_1) < f(m_2)$
The desired maximum can not be located on the left side of $m_1$, i.e. on the interval $[l, m_1]$, since either both points $m_1$ and $m_2$ or just $m_1$ belong to the area where the function increases. In either case, this means that we have to search for the maximum in the segment $[m_1, r]$.
- $f(m_1) > f(m_2)$
This situation is symmetrical to the previous one: the maximum can not be located on the right side of $m_2$, i.e. on the interval $[m_2, r]$, and the search space is reduced to the segment $[l, m_2]$.
- $f(m_1) = f(m_2)$
We can see that either both of these points belong to the area where the value of the function is maximized, or $m_1$ is in the area of increasing values and $m_2$ is in the area of descending values (here we used the strictness of function increasing/decreasing). Thus, the search space is reduced to $[m_1, m_2]$. To simplify the code, this case can be combined with any of the previous cases.
Thus, based on the comparison of the values in the two inner points, we can replace the current interval $[l, r]$ with a new, shorter interval $[l^\prime, r^\prime]$. Repeatedly applying the described procedure to the interval, we can get an arbitrarily short interval. Eventually, its length will be less than a certain pre-defined constant (accuracy), and the process can be stopped. This is a numerical method, so we can assume that after that the function reaches its maximum at all points of the last interval $[l, r]$. Without loss of generality, we can take $f(l)$ as the return value.
We didn't impose any restrictions on the choice of points $m_1$ and $m_2$. This choice will define the convergence rate and the accuracy of the implementation. The most common way is to choose the points so that they divide the interval $[l, r]$ into three equal parts. Thus, we have
$$m_1 = l + \frac{(r - l)}{3}$$
$$m_2 = r - \frac{(r - l)}{3}$$
If $m_1$ and $m_2$ are chosen to be closer to each other, the convergence rate will increase slightly.
### Run time analysis
$$T(n) = T({2n}/{3}) + 1 = \Theta(\log n)$$
It can be visualized as follows: every time after evaluating the function at points $m_1$ and $m_2$, we are essentially ignoring about one third of the interval, either the left or right one. Thus the size of the search space is ${2n}/{3}$ of the original one.
Applying [Master's Theorem](https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)), we get the desired complexity estimate.
### The case of the integer arguments
If $f(x)$ takes an integer parameter, the interval $[l, r]$ becomes discrete. Since we did not impose any restrictions on the choice of points $m_1$ and $m_2$, the correctness of the algorithm is not affected. $m_1$ and $m_2$ can still be chosen to divide $[l, r]$ into 3 approximately equal parts.
The difference occurs in the stopping criterion of the algorithm. Ternary search will have to stop when $(r - l) < 3$, because in that case we can no longer select $m_1$ and $m_2$ to be different from each other as well as from $l$ and $r$, and this can cause an infinite loop. Once $(r - l) < 3$, the remaining pool of candidate points $(l, l + 1, \ldots, r)$ needs to be checked to find the point which produces the maximum value $f(x)$.
## Implementation
```cpp
double ternary_search(double l, double r) {
double eps = 1e-9; //set the error limit here
while (r - l > eps) {
double m1 = l + (r - l) / 3;
double m2 = r - (r - l) / 3;
double f1 = f(m1); //evaluates the function at m1
double f2 = f(m2); //evaluates the function at m2
if (f1 < f2)
l = m1;
else
r = m2;
}
return f(l); //return the maximum of f(x) in [l, r]
}
```
Here `eps` is in fact the absolute error (not taking into account errors due to the inaccurate calculation of the function).
Instead of the criterion `r - l > eps`, we can select a constant number of iterations as a stopping criterion. The number of iterations should be chosen to ensure the required accuracy. Typically, in most programming challenges the error limit is ${10}^{-6}$ and thus 200 - 300 iterations are sufficient. Also, the number of iterations doesn't depend on the values of $l$ and $r$, so the number of iterations corresponds to the required relative error.
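For the integer version of the problem described above, a possible sketch following the stopping criterion $(r - l) < 3$ could look like this; here `f` takes an integer argument and the function returns the argument that maximizes it:

```cpp
... // f(x) is the unimodal function, now taking an integer argument
int ternary_search_int(int l, int r) {
    while (r - l >= 3) {
        int m1 = l + (r - l) / 3;
        int m2 = r - (r - l) / 3;
        if (f(m1) < f(m2))
            l = m1;
        else
            r = m2;
    }
    int best = l;                       // check the few remaining candidates
    for (int x = l + 1; x <= r; x++)
        if (f(x) > f(best))
            best = x;
    return best;                        // argument maximizing f on [l, r]
}
```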
## Practice Problems
- [Codechef - Race time](https://www.codechef.com/problems/AMCS03)
- [Hackerearth - Rescuer](https://www.hackerearth.com/problem/algorithm/rescuer-2d2495cb/)
- [Spoj - Building Construction](http://www.spoj.com/problems/KOPC12A/)
- [Codeforces - Weakness and Poorness](http://codeforces.com/problemset/problem/578/C)
* [LOJ - Closest Distance](http://lightoj.com/volume_showproblem.php?problem=1146)
* [GYM - Dome of Circus (D)](http://codeforces.com/gym/101309)
* [UVA - Galactic Taxes](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=4898)
* [GYM - Chasing the Cheetahs (A)](http://codeforces.com/gym/100829)
* [UVA - 12197 - Trick or Treat](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=3349)
* [SPOJ - Building Construction](http://www.spoj.com/problems/KOPC12A/)
* [Codeforces - Devu and his Brother](https://codeforces.com/problemset/problem/439/D)
* [Codechef - Is This JEE ](https://www.codechef.com/problems/ICM2003)
* [Codeforces - Restorer Distance](https://codeforces.com/contest/1355/problem/E)
* [TIMUS 1719 Kill the Shaitan-Boss](https://acm.timus.ru/problem.aspx?space=1&num=1719)
* [TIMUS 1913 Titan Ruins: Alignment of Forces](https://acm.timus.ru/problem.aspx?space=1&num=1913)
|
Ternary Search
|
---
title
roots_newton
---
# Newton's method for finding roots
This is an iterative method invented by Isaac Newton around 1664. However, this method is also sometimes called the Raphson method, since Raphson invented the same algorithm a few years after Newton, but his article was published much earlier.
The task is as follows. Given the following equation:
$$f(x) = 0$$
We want to solve the equation. More precisely, we want to find one of its roots (it is assumed that the root exists). It is assumed that $f(x)$ is continuous and differentiable on an interval $[a, b]$.
## Algorithm
The input parameters of the algorithm consist of not only the function $f(x)$ but also the initial approximation - some $x_0$, with which the algorithm starts.
<p align="center">
<img src="./roots_newton.png" alt="plot_f(x)">
</p>
Suppose we have already calculated $x_i$, calculate $x_{i+1}$ as follows. Draw the tangent to the graph of the function $f(x)$ at the point $x = x_i$, and find the point of intersection of this tangent with the $x$-axis. $x_{i+1}$ is set equal to the $x$-coordinate of the point found, and we repeat the whole process from the beginning.
It is not difficult to obtain the following formula,
$$ x_{i+1} = x_i - \frac{f(x_i)}{f^\prime(x_i)} $$
First, we calculate the slope $f'(x_i)$, the derivative of $f(x)$ at the point $x_i$, and then determine the equation of the tangent:
$$ y - f(x_i) = f'(x_i)(x - x_i) $$
The tangent intersects the x-axis at the coordinate $y = 0$, $x = x_{i+1}$:
$$ - f(x_i) = f'(x_i)(x_{i+1} - x_i) $$
Now, solving the equation we get the value of $x_{i+1}$.
It is intuitively clear that if the function $f(x)$ is "good" (smooth), and $x_i$ is close enough to the root, then $x_{i+1}$ will be even closer to the desired root.
The rate of convergence is quadratic, which, conditionally speaking, means that the number of exact digits in the approximate value $x_i$ doubles with each iteration.
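As a generic sketch of this iteration (the function pointers `f` and `df`, the iteration cap and the stopping threshold are illustrative assumptions, not part of the method itself):

```cpp
#include <cmath>

// One possible generic form of the iteration x_{i+1} = x_i - f(x_i) / f'(x_i).
double newton_root(double x0, double (*f)(double), double (*df)(double)) {
    double x = x0;
    for (int iter = 0; iter < 100; iter++) {   // cap the number of iterations
        double step = f(x) / df(x);
        x -= step;
        if (std::fabs(step) < 1e-12)           // stop when the update is tiny
            break;
    }
    return x;
}
```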
## Application for calculating the square root
Let's use the calculation of square root as an example of Newton's method.
If we substitute $f(x) = x^2 - n$, then after simplifying the expression, we get:
$$ x_{i+1} = \frac{x_i + \frac{n}{x_i}}{2} $$
The first typical variant of the problem is when a rational number $n$ is given, and its root must be calculated with some accuracy `eps`:
```cpp
double sqrt_newton(double n) {
const double eps = 1E-15;
double x = 1;
for (;;) {
double nx = (x + n / x) / 2;
if (abs(x - nx) < eps)
break;
x = nx;
}
return x;
}
```
Another common variant of the problem is when we need to calculate the integer root (for the given $n$ find the largest $x$ such that $x^2 \le n$). Here it is necessary to slightly change the termination condition of the algorithm, since it may happen that $x$ will start to "jump" near the answer. Therefore, we add a condition that if the value $x$ has decreased in the previous step, and it tries to increase at the current step, then the algorithm must be stopped.
```cpp
int isqrt_newton(int n) {
int x = 1;
bool decreased = false;
for (;;) {
int nx = (x + n / x) >> 1;
if (x == nx || nx > x && decreased)
break;
decreased = nx < x;
x = nx;
}
return x;
}
```
Finally, we are given the third variant - for the case of bignum arithmetic. Since the number $n$ can be large enough, it makes sense to pay attention to the initial approximation. Obviously, the closer it is to the root, the faster the result will be achieved. It is simple enough and effective to take the initial approximation as the number $2^{\textrm{bits}/2}$, where $\textrm{bits}$ is the number of bits in the number $n$. Here is the Java code that demonstrates this variant:
```java
public static BigInteger isqrtNewton(BigInteger n) {
    // initial approximation: 2^(bits/2), where bits is the bit length of n
    BigInteger a = BigInteger.ONE.shiftLeft(n.bitLength() / 2);
    boolean p_dec = false;
    for (;;) {
        BigInteger b = n.divide(a).add(a).shiftRight(1);
        if (a.compareTo(b) == 0 || (a.compareTo(b) < 0 && p_dec))
            break;
        p_dec = a.compareTo(b) > 0;
        a = b;
    }
    return a;
}
```
For example, this code is executed in $60$ milliseconds for $n = 10^{1000}$, and if we remove the improved selection of the initial approximation (just starting with $1$), then it will be executed in about $120$ milliseconds.
## Practice Problems
- [UVa 10428 - The Roots](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=16&page=show_problem&problem=1369)
---
title
simpson_integrating
---
# Integration by Simpson's formula
We are going to calculate the value of a definite integral
$$\int_a ^ b f (x) dx$$
The solution described here was published in one of the dissertations of **Thomas Simpson** in 1743.
## Simpson's formula
Let $n$ be some natural number. We divide the integration segment $[a, b]$ into $2n$ equal parts:
$$x_i = a + i h, ~~ i = 0 \ldots 2n,$$
$$h = \frac {b-a} {2n}.$$
Now we calculate the integral separately on each of the segments $[x_ {2i-2}, x_ {2i}]$, $i = 1 \ldots n$, and then add all the values.
Consider one of these segments $[x_{2i-2}, x_{2i}]$, $i = 1 \ldots n$. Replace the function $f(x)$ on it with the parabola $P(x)$ passing through the 3 points $(x_{2i-2}, f(x_{2i-2}))$, $(x_{2i-1}, f(x_{2i-1}))$ and $(x_{2i}, f(x_{2i}))$. Such a parabola always exists and is unique; it can be found analytically, for instance using Lagrange polynomial interpolation.
All that remains is to integrate this polynomial. Doing so for a general function $f$ yields a remarkably simple expression:
$$\int_{x_{2i-2}}^{x_{2i}} f(x) ~dx \approx \int_{x_{2i-2}}^{x_{2i}} P(x) ~dx = \left(f(x_{2i-2}) + 4f(x_{2i-1}) + f(x_{2i})\right)\frac{h}{3} $$
Adding these values over all segments, we obtain the final **Simpson's formula**:
$$\int_a^b f(x) dx \approx \left(f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + 2 f(x_4) + \ldots + 4 f(x_{2n-1}) + f(x_{2n})\right)\frac{h}{3} $$
## Error
The error of approximating an integral by Simpson's formula applied to the whole segment $[a, b]$ at once (i.e. with $n = 1$) is
$$ -\tfrac{1}{90} \left(\tfrac{b-a}{2}\right)^5 f^{(4)}(\xi)$$
where $\xi$ is some number between $a$ and $b$.
The error is asymptotically proportional to $(b-a)^5$. However, the derivation via polynomial interpolation above only suggests an error proportional to $(b-a)^4$. Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval $[a, b]$.
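For the composite formula above (the segment split into $2n$ parts of width $h$), summing the per-pair errors gives the standard estimate, stated here as a known fact rather than derived in this article, and assuming $f^{(4)}$ is continuous on $[a, b]$:
$$ -\frac{(b-a)\, h^4}{180} f^{(4)}(\xi), \qquad \xi \in (a, b), $$
so the error decreases as the fourth power of the step: halving $h$ reduces it roughly by a factor of $16$.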
## Implementation
Here, $f(x)$ is some user-defined function.
```cpp
const int N = 1000 * 1000; // number of steps (already multiplied by 2)
double simpson_integration(double a, double b){
double h = (b - a) / N;
double s = f(a) + f(b); // a = x_0 and b = x_2n
for (int i = 1; i <= N - 1; ++i) { // Refer to final Simpson's formula
double x = a + h * i;
s += f(x) * ((i & 1) ? 4 : 2);
}
s *= h / 3;
return s;
}
```
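As a usage sketch (the integrand $x \sin x$ and the integration limits are illustrative choices, not from the original article), the snippet can be completed as follows:
```cpp
#include <cmath>
#include <iostream>

const int N = 1000 * 1000; // number of steps (already multiplied by 2)

// Example integrand: the exact value of the integral over [0, pi] is pi.
double f(double x) {
    return x * std::sin(x);
}

// Same routine as above, repeated here only to keep the example self-contained.
double simpson_integration(double a, double b) {
    double h = (b - a) / N;
    double s = f(a) + f(b);
    for (int i = 1; i <= N - 1; ++i)
        s += f(a + h * i) * ((i & 1) ? 4 : 2);
    return s * h / 3;
}

int main() {
    const double PI = std::acos(-1.0);
    std::cout << simpson_integration(0, PI) << '\n'; // prints approximately 3.14159
}
```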
## Practice Problems
* [URI - Environment Protection](https://www.urionlinejudge.com.br/judge/en/problems/view/1297)