Derandomization from Worst-Case Assumptions: Error-correcting codes and worst-case to average-case reductions

Abstract

Definition: CC^ρ(f) ≥ s if no s-sized circuit computes f with probability ρ on a random input. That is, for every circuit family {C_n} with |C_n| ≤ s(n),

    Pr_{x ←_R {0,1}^n} [C_n(x) = f(x)] < ρ.

If CC^{1−1/(100n)}(f) ≥ s we say that f is "mildly hard on the average" for s-sized circuits (every such circuit fails on at least a 1/(100n) fraction of the inputs), and if CC^1(f) ≥ s we say that f is "worst-case hard" for s-sized circuits (every such circuit fails on at least one input).

Assumption 1: ∃f ∈ E such that CC^{1−1/(100n)}(f) ≥ 2^{εn} for some constant ε > 0. That is, for every large enough n and every 2^{εn}-sized circuit C,

    Pr_{x ←_R {0,1}^n} [C(x) = f(x)] ≤ 1 − 1/(100n).
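As an illustration of the quantity CC^ρ(f) (an added sketch, not part of the original notes): the Python snippet below models a "circuit" as an arbitrary callable and measures its agreement with f by brute force over {0,1}^n. The helper names agreement and mildly_hard_against are hypothetical, and a real hardness statement quantifies over all s-sized circuits, not over a finite list of candidates.

```python
from itertools import product
from typing import Callable

Bits = tuple[int, ...]

def agreement(f: Callable[[Bits], int], c: Callable[[Bits], int], n: int) -> float:
    """Compute Pr_{x <-R {0,1}^n}[c(x) = f(x)] by exhaustive enumeration."""
    inputs = list(product((0, 1), repeat=n))
    return sum(c(x) == f(x) for x in inputs) / len(inputs)

def mildly_hard_against(f: Callable[[Bits], int],
                        candidates: list[Callable[[Bits], int]], n: int) -> bool:
    """CC^{1-1/(100n)}-style check against a finite candidate list: every
    candidate must agree with f on at most a 1 - 1/(100n) fraction of inputs."""
    threshold = 1.0 - 1.0 / (100 * n)
    return all(agreement(f, c, n) <= threshold for c in candidates)

if __name__ == "__main__":
    n = 8
    parity = lambda x: sum(x) % 2          # stand-in for the hard function f
    const0 = lambda x: 0                   # a trivial "circuit"
    print(agreement(parity, const0, n))    # 0.5: agrees on exactly half the inputs
    print(mildly_hard_against(parity, [const0], n))  # True for this one candidate
```

Parity is of course easy for small circuits; it only stands in for f here to make the agreement computation concrete.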


Similar Articles

Hardness Amplification and Error Correcting Codes

We pointed out in earlier chapters (e.g., Chapter ??) the distinction between worst-case hardness and average-case hardness. For example, the problem of finding the smallest factor of every given integer seems difficult on worst-case instances, and yet is trivial for at least half the integers – namely, the even ones. We also saw that functions that are average-case hard have many uses, notably in...
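To make this blurb's factoring example concrete (a toy sketch added here, not taken from the chapter itself): the heuristic below outputs the smallest factor correctly for every even integer, namely 2, and gives up otherwise, so it succeeds on at least half of all inputs while saying nothing about the hard odd instances.

```python
import random

def smallest_factor_heuristic(m: int) -> int | None:
    """Answer 2 for even m > 0 (always its smallest prime factor); give up otherwise."""
    return 2 if m % 2 == 0 else None  # None = "don't know" on the hard instances

# Roughly half of random 100-bit integers are even, so the success rate is ~1/2.
trials = 10_000
hits = sum(smallest_factor_heuristic(random.getrandbits(100)) is not None
           for _ in range(trials))
print(hits / trials)  # ≈ 0.5
```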


New connections between derandomization, worst-case complexity and average-case complexity

We show that a mild derandomization assumption together with the worst-case hardness of NP implies the average-case hardness of a language in non-deterministic quasi-polynomial time. Previously, such connections were only known for high classes such as EXP and PSPACE. There has been a long line of research trying to explain our failure in proving worst-case to average-case reductions within NP [F...


Worst-Case to Average-Case Reductions Revisited

A fundamental goal of computational complexity (and foundations of cryptography) is to find a polynomial-time samplable distribution (e.g., the uniform distribution) and a language in NTIME(f(n)) for some polynomial function f, such that the language is hard on the average with respect to this distribution, given that NP is worst-case hard (i.e. NP ≠ P, or NP ⊈ BPP). Currently, no such resul...


Voting Rules As Error-Correcting Codes

We present the first model of optimal voting under adversarial noise. From this viewpoint, voting rules are seen as error-correcting codes: their goal is to correct errors in the input rankings and recover a ranking that is close to the ground truth. We derive worst-case bounds on the relation between the average accuracy of the input votes, and the accuracy of the output ranking. Empirical res...


Bridging Shannon and Hamming: List Error-Correction with Optimal Rate

Error-correcting codes tackle the fundamental problem of recovering from errors during data communication and storage. A basic issue in coding theory concerns the modeling of the channel noise. Shannon’s theory models the channel as a stochastic process with a known probability law. Hamming suggested a combinatorial approach where the channel causes worst-case errors subject only to a limit on ...



Published: 2006