
I think there's a misunderstanding here. Can you please be specific about which operation you're saying scales with the inputs n, c? The GCD algorithm is performed using O(1) polynomial multiplications, which is possible because the coefficients for the polynomial multiplications are determined by n and c.

EDIT: To make my position clear, what I am saying is this:

1. The presented algorithm will have variable computation time, but not variable asymptotic complexity,

2. The algorithm uses O(1) (i.e. a constant number of) polynomial multiplications in the worst case, and

3. The worst-case bound does not change with either n or c, though they are used in the algorithm to calibrate the divsteps such that no more than O(1) operations are required.

I will concede it's possible I'm misunderstanding the paper itself, but I don't see that here.
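(For context, here is a rough Python sketch of the divstep iteration being discussed, paraphrased from my reading of the paper's definition of divstep(δ, f, g); f is assumed odd, and the real algorithm runs a fixed, precomputed number of steps rather than looping until g = 0:)

```python
def divstep(delta, f, g):
    # One division step, as I read the paper's definition (f odd throughout).
    if delta > 0 and g % 2 == 1:
        return 1 - delta, g, (g - f) // 2        # swap-and-subtract case
    return 1 + delta, f, (g + (g % 2) * f) // 2  # g even: halve; g odd: add f first
    # Note: Python's % gives 1 for negative odd g, which is what we want here.

def gcd_via_divsteps(f, g):
    # Variable-time version: iterate until g hits 0. The paper instead
    # fixes the iteration count up front from the input bit-length.
    delta = 1
    while g != 0:
        delta, f, g = divstep(delta, f, g)
    return abs(f)
```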



The title of the paper is "Fast constant-time gcd computation and modular inversion". The algorithm presented in the paper does not compute GCDs or modular inverses in time O(1). The time taken for either of those tasks is O(n (log n)^(2+o(1))).

The algorithm presented in the paper also doesn't take a bounded number of polynomial multiplications, though I'm not sure why those should be the relevant thing to count. In section 5.4 they say they split a size-n problem into two problems whose sizes add up to n, using O(1) polynomial multiplications of size about n, and that their algorithm takes them both to be of size ~ n/2. That means O(n) polynomial multiplications, but most of them are small, so the total cost for given n is, as they say in section 5.5, Theta(log n) times the cost of polynomial multiplication, provided a fast polynomial multiplication algorithm is used.
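(A toy tally of the section-5.4 splitting makes the count concrete; the per-split constant C = 4 below is a made-up stand-in for the paper's O(1):)

```python
def count_mults(n, C=4):
    # Size-n problem -> two problems of sizes summing to n, paying
    # C = O(1) polynomial multiplications per split (section 5.4's scheme).
    if n <= 1:
        return 0
    half = n // 2
    return C + count_mults(half, C) + count_mults(n - half, C)
```

For n a power of two this comes out to C·(n − 1) multiplications, i.e. Θ(n) of them in total -- but half of them sit at the bottom level on constant-size operands, and weighting each level by a superlinear M(·) is what collapses the total to Θ(M(n) log n).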

The worst-case bound, both for the number of polynomial multiplications and for the actual running time, does depend on n. (The former is of order n but, again, most of the polynomial multiplications are small; the latter is of order about n (log n)^2.)

(As for the cryptographic "constant time" notion, my understanding from the paper is that with a naive timing model -- e.g., assuming that all memory accesses take equal time -- their algorithm is exactly constant-time for fixed input size. They say in a footnote near the end of section 7: "There is considerable variation in the cycle counts, more than 3% between quartiles, presumably depending on the mapping from virtual addresses to physical addresses. There is no dependence on the secret input being inverted.")
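(The "no secret-dependent branches" flavor of constant time can be sketched in a few lines. This is my own illustration, not code from the paper: both branch outcomes are computed every iteration and selected arithmetically, and the loop runs a fixed number of times depending only on the public bit-length d. The 3d + 6 iteration count is a deliberately generous stand-in for the paper's sharper ~2.88d bound, and f must be odd as in the paper:)

```python
def divstep_ct(delta, f, g):
    # One branchless division step: compute both outcomes, then select
    # with the secret-dependent bit s used only arithmetically.
    s = int(delta > 0) & (g & 1)  # 1 iff delta > 0 and g is odd
    delta, f, g = (s * (1 - delta) + (1 - s) * (1 + delta),
                   s * g + (1 - s) * f,
                   (s * (g - f) + (1 - s) * (g + (g & 1) * f)) // 2)
    return delta, f, g

def gcd_ct(f, g, d):
    # Fixed iteration count depending only on the public size d (bits);
    # 3*d + 6 is assumed sufficient given the paper's ~2.88*d bound.
    delta = 1
    for _ in range(3 * d + 6):
        delta, f, g = divstep_ct(delta, f, g)
    return abs(f)  # once g reaches 0, further divsteps leave f frozen at +/- gcd
```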



