For each of the following functions, state and prove a tight bound. That is, state a function g(n) such that f(n) ∈ Θ(g(n)) and prove this relationship. You may prove it by using the definitions of O, Θ, and Ω, or by the limit test.
4 ⋅ 2^(log2 n) - 4
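Assuming the intended function is f(n) = 4 ⋅ 2^(log2 n) - 4 (so that 2^(log2 n) = n), the limit test gives a tight bound directly; a minimal worked derivation:

\[ f(n) = 4 \cdot 2^{\log_2 n} - 4 = 4n - 4 \]
\[ \lim_{n \to \infty} \frac{f(n)}{n} = \lim_{n \to \infty} \left( 4 - \frac{4}{n} \right) = 4 > 0 \]

Since the limit is a positive constant, f(n) ∈ Θ(n).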
var doLinearSearch = function(array, targetValue) {
  for (var guess = 0; guess < array.length; guess++) {
    if (array[guess] === targetValue) {
      return guess; // found it!
    }
  }
  return -1; // didn't find it
};
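A quick usage sketch of the function above (the sample array and calls are hypothetical, just to exercise both outcomes):

// Hypothetical inputs to illustrate the found and not-found cases.
var primes = [2, 3, 5, 7, 11, 13];
console.log(doLinearSearch(primes, 11)); // 4 (index where 11 was found)
console.log(doLinearSearch(primes, 4));  // -1 (not present; this is the worst case)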
Let's denote the size of the array (array.length) by n. The maximum number of times that the for-loop can run is n, and this worst case occurs when the value being searched for is not present in the array.
Each time the for-loop iterates, it has to do several things:
- compare guess with array.length
- compare array[guess] with targetValue
- possibly return the value of guess
- increment guess
Each of these little computations takes a constant amount of time each time it executes. If the for-loop iterates n times, then the time for all n iterations is c1 ⋅ n, where c1 is the sum of the times for the computations in one loop iteration. Now, we cannot say here what the value of c1 is, because it depends on the speed of the computer, the programming language used, the compiler or interpreter that translates the source program into runnable code, and other factors.
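To see the c1 ⋅ n behavior concretely, one can count iterations; a minimal instrumented sketch (doLinearSearchCounted and its return shape are illustrative additions, not part of the original code):

// Variant of doLinearSearch that also reports how many times the loop ran.
var doLinearSearchCounted = function(array, targetValue) {
  var iterations = 0;
  for (var guess = 0; guess < array.length; guess++) {
    iterations++;
    if (array[guess] === targetValue) {
      return { index: guess, iterations: iterations }; // found after `iterations` loop passes
    }
  }
  return { index: -1, iterations: iterations }; // worst case: iterations === array.length
};

Searching for a missing value reports iterations equal to array.length, matching the worst-case analysis above.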
This code has a little bit of extra overhead, for setting up the for-loop (including initializing guess to 0) and possibly returning -1 at the end. Let's call the time for this overhead c2, which is also a constant. Therefore, the total time for linear search in the worst case is c1 ⋅ n + c2.
As we've argued, the constant factor c1 and the low-order term c2 don't tell us about the rate of growth of the running time. What's significant is that the worst-case running time of linear search grows like the array size n. The notation we use for this running time is Θ(n). That's the Greek letter "theta," and we say "big-Theta of n" or just "Theta of n."
When we say that a particular running time is Θ(n), we're saying that once n gets large enough, the running time is at least k1 ⋅ n and at most k2 ⋅ n for some constants k1 and k2. Here's how to think of Θ(n):
For small values of n, we don't care how the running time compares with k1 ⋅ n or k2 ⋅ n. But once n gets large enough (at or beyond some threshold n0), the running time must be sandwiched between k1 ⋅ n and k2 ⋅ n. As long as these constants k1 and k2 exist, we say that the running time is Θ(n).
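Spelled out with quantifiers (T(n) here stands for the running time), the sandwich condition is:

\[ \exists\, k_1 > 0,\ k_2 > 0,\ n_0 \ge 0 \ \text{such that} \ k_1 \cdot n \le T(n) \le k_2 \cdot n \ \text{for all} \ n \ge n_0. \]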
We are not restricted to just n in big-Θ notation. We can use any function, such as n^2, n log2 n, or any other function of n. Here's how to think of a running time that is Θ(f(n)) for some function f(n):
Once n gets large enough, the running time is between k1 ⋅ f(n) and k2 ⋅ f(n).
In practice, we just drop constant factors and low-order terms. Another advantage of using big-Θ notation is that we don't have to worry about which time units we're using. For example, suppose that you calculate that a running time is 6n^2 + 100n + 300 microseconds. Or maybe it's milliseconds. When you use big-Θ notation, you don't say. You also drop the factor 6 and the low-order terms 100n + 300, and you just say that the running time is Θ(n^2).
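For that example, witnesses are easy to exhibit; one valid choice among many:

\[ 6n^2 \le 6n^2 + 100n + 300 \le 6n^2 + 100n^2 + 300n^2 = 406n^2 \quad \text{for all } n \ge 1, \]

so k1 = 6, k2 = 406, and n0 = 1 confirm that 6n^2 + 100n + 300 is Θ(n^2).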
When we use big-Θ notation, we're saying that we have an asymptotically tight bound on the running time. "Asymptotically" because it matters only for large values of n. "Tight bound" because we've nailed the running time to within a constant factor above and below.
4. (10 points) One can obtain an asymptotically tight bound directly by computing a limit as n goes to infinity. Essentially, if the ratio of functions f(n) and g(n) converges to a positive constant as n goes to infinity, then f(n) ∈ Θ(g(n)). Let f and g be two functions such that lim_{n→∞} f(n)/g(n) exists and is equal to some number c > 0. Then f(n) ∈ Θ(g(n)). Prove the above statement.
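A standard proof sketch from the definition of a limit (choosing ε = c/2 is one convenient option):

\[ \text{For } \varepsilon = \tfrac{c}{2}, \ \exists\, n_0 \ \text{such that} \ \left| \frac{f(n)}{g(n)} - c \right| < \tfrac{c}{2} \ \text{for all } n \ge n_0, \]
\[ \text{hence} \ \tfrac{c}{2}\, g(n) \le f(n) \le \tfrac{3c}{2}\, g(n) \ \text{for all } n \ge n_0, \]

which is exactly f(n) ∈ Θ(g(n)) with k1 = c/2 and k2 = 3c/2 (assuming g(n) > 0 for sufficiently large n).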
Analyze the runtime of the two C functions below and give a tight runtime bound for each. Both functions have the same best-case and worst-case runtime (so this is not an issue). Since we want a "tight" runtime bound, your final answer should be in big-Θ form. Show your work! "The runtime of foo() is Θ(<something>)" is not sufficient even if <something> happens to be correct. In other words, convince the reader of the correctness of your answer....
Let f(n) and g(n) be asymptotically positive functions. Prove or disprove each of the following conjectures.
a) f(n) = O(g(n)) implies g(n) = Ω(f(n)).
b) f(n) = O(g(n)) implies g(n) = O(f(n)).
c) f(n) + g(n) = Θ(min(f(n), g(n))).
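For intuition on conjectures b) and c), one standard counterexample pair is f(n) = n and g(n) = n^2 (a sketch; any pair with different growth rates works):

\[ f(n) = n = O(n^2) = O(g(n)), \ \text{but} \ g(n) = n^2 \ne O(n) = O(f(n)), \]
\[ f(n) + g(n) = n + n^2 = \Theta(n^2), \ \text{whereas} \ \Theta(\min(f(n), g(n))) = \Theta(n). \]

Conjecture a), by contrast, is true and follows directly from the definitions.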
Need help with 1,2,3 thank you.
1. Order of growth (20 points) Order the following functions according to their order of growth, from the lowest to the highest. If you think that two functions are of the same order (i.e., f(n) ∈ Θ(g(n))), put them in the same group: log(n!), n, log log n, log n, n log(n), n^2, √n, (n-1)!, 2^n, n!, 3^n, 2^(n+1). 2. Asymptotic Notation (20 points) For each pair of functions in the table below, determine whether...
Prove or disprove the following statements, using the relationships among typical growth-rate functions seen in class.
a) n^15 log n + n^9 is O(n^9 log n)
b) 15^7 n^5 + 5n^4 + 8000000n^2 + n is Θ(n^3)
c) n^n is Ω(n!)
d) 0.01n^9 + 800000n^7 is O(n^9)
e) n^14 + 0.0000001n^5 is Ω(n^13)
f) n! is O(3^n)
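As a sample of the expected style of argument, part c) follows from a termwise comparison (a sketch):

\[ n! = 1 \cdot 2 \cdots n \le n \cdot n \cdots n = n^n \quad \text{for all } n \ge 1, \]

so n^n ≥ n! everywhere, and n^n = Ω(n!) with k = 1 and n0 = 1.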
7. Prove the following assertions by using the definitions of the notations involved, or disprove them by giving a specific counterexample.
a. If t(n) ∈ O(g(n)), then g(n) ∈ Ω(t(n)).
b. Θ(αg(n)) = Θ(g(n)), where α > 0.
c. Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
d. For any two nonnegative functions t(n) and g(n) defined on the set of nonnegative integers, either t(n) ∈ O(g(n)), or t(n) ∈ Ω(g(n)), or both.
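For part a., the symmetry falls straight out of the definitions (a sketch):

\[ t(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 \ \Longrightarrow \ g(n) \ge \tfrac{1}{c} \cdot t(n) \ \text{for all } n \ge n_0, \]

so g(n) ∈ Ω(t(n)) with constant 1/c and the same n0.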
Let f(n) = 5n^2. Prove that f(n) = O(n^3). Let f(n) = 7n^2. Prove that f(n) = Ω(n). Let f(n) = 3n. Prove that f(n) = ω(√n). Let f(n) = 3n + 2. Prove that f(n) = Θ(n). Let k > 0 and c > 0 be any positive constants. Prove that (n + k)^c = O(n^c). Prove that lg(n!) = O(n lg n). Let g(n) = log10(n). Prove that g(n) = Θ(lg n). (hint: use the change-of-base identity log_b a = log_c a / log_c b)...
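For the lg(n!) part, one common upper-bound argument (a sketch, not the only route):

\[ \lg(n!) = \sum_{i=1}^{n} \lg i \le \sum_{i=1}^{n} \lg n = n \lg n, \]

so lg(n!) = O(n lg n) with k = 1 and n0 = 1.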
1. (10 pts) For each of the following pairs of functions f(n) and g(n), indicate whether f = O(g), f = Ω(g), or both (in which case f = Θ(g)). You do not need to explain your answer. (a) n vs ... (b) n - 1 vs n + 1 (c) 1000n vs 0.01n^2 (d) 10n^2 vs n(lg n)^2 (e) ... (f) 3^n vs ... (g) 4^n vs ... 2. (12 pts) Sort the following functions by increasing order of growth. For every pair of consecutive functions f(n) and g(n) in the...
1. For each of the following pairs of functions, prove that f(n) = O(g(n)) and/or that g(n) = O(f(n)), or explain why one or the other is not true. (a) 2^(n+1) vs 2^n (b) 2^(2n) vs 2^n (c) 4^n vs 2^(2n) (d) 2^n vs 4^n (e) log_a n vs log_b n, where a and b are constants greater than 1. Show that you understand why this restriction on a and b was given. (f) log(O(1) ⋅ n) vs log n....
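For part (e), the change-of-base identity shows why the restriction on a and b matters (a sketch):

\[ \log_a n = \frac{\log_b n}{\log_b a}, \]

so log_a n and log_b n differ only by the constant factor 1/log_b a; with a, b > 1 that factor is a finite positive constant, so each function is Θ of the other.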
In this problem you will prove there is a function that is in O(n^3) and Ω(n) but is not in Θ(n^d) for any 1 ≤ d ≤ 3. State a function f(n) that is in O(n^3) and Ω(n) but is not in Θ(n^d) for any 1 ≤ d ≤ 3. Prove that f(n) ∉ Θ(n^d) for any 1 ≤ d ≤ 3.
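One standard construction alternates between the two growth rates (a sketch; other choices work too):

\[ f(n) = \begin{cases} n & \text{if } n \text{ is even,} \\ n^3 & \text{if } n \text{ is odd.} \end{cases} \]

Then f(n) = O(n^3) and f(n) = Ω(n), but for any fixed 1 ≤ d ≤ 3 the ratio f(n)/n^d either tends to 0 along the even n (when d > 1) or is unbounded along the odd n (when d < 3), so no constants k1, k2 can sandwich f(n) between k1 ⋅ n^d and k2 ⋅ n^d.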