Question

For each one of the following functions, state and prove a tight bound. That is, state a function g(n) such that f(n) ∈ Θ(g(n)) and prove this relationship. You may prove it by using the definitions of O, Θ, and Ω, or the limit test.

f(n) = 4 · 2^(log₂ n) - 4

Answer #1
var doLinearSearch = function(array, targetValue) {
  for (var guess = 0; guess < array.length; guess++) {
    if (array[guess] === targetValue) {
      return guess; // found it!
    }
  }
  return -1; // didn't find it
};

Let's denote the size of the array (array.length) by n. The maximum number of times that the for-loop can run is n, and this worst case occurs when the value being searched for is not present in the array.
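For example, with a hypothetical five-element array (the values here are just a test case), a hit can stop the loop early, while a miss forces all n iterations:

var numbers = [3, 7, 12, 25, 40];  // n = 5
doLinearSearch(numbers, 12);  // returns 2; the loop ran only 3 times
doLinearSearch(numbers, 99);  // returns -1; the loop ran all 5 times (the worst case)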

Each time the for-loop iterates, it has to do several things:

  • compare guess with array.length
  • compare array[guess] with targetValue
  • possibly return the value of guess
  • increment guess

Each of these little computations takes a constant amount of time each time it executes. If the for-loop iterates n times, then the time for all n iterations is c_1 \cdot n, where c_1 is the sum of the times for the computations in one loop iteration. Now, we cannot say here what the value of c_1 is, because it depends on the speed of the computer, the programming language used, the compiler or interpreter that translates the source program into runnable code, and other factors.
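To make the counting argument concrete, here is a sketch of an instrumented copy of the function (the counter and the name doLinearSearchCounted are illustrative additions, not part of the original code):

var doLinearSearchCounted = function(array, targetValue) {
  var steps = 0;
  for (var guess = 0; guess < array.length; guess++) {
    steps += 1;                       // compared guess with array.length
    steps += 1;                       // compared array[guess] with targetValue
    if (array[guess] === targetValue) {
      return { index: guess, steps: steps };
    }
    steps += 1;                       // incremented guess
  }
  return { index: -1, steps: steps }; // worst case: all n iterations ran
};

doLinearSearchCounted([3, 7, 12, 25, 40], 99);                 // { index: -1, steps: 15 }
doLinearSearchCounted([3, 7, 12, 25, 40, 1, 2, 5, 8, 9], 99);  // { index: -1, steps: 30 }

Doubling n doubles the step count: the total is a constant per iteration times n, which is exactly the c_1 \cdot n term.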

The search function also has a little bit of extra overhead, for setting up the for-loop (including initializing guess to 0) and possibly returning -1 at the end. Let's call the time for this overhead c_2, which is also a constant. Therefore, the total time for linear search in the worst case is c_1 \cdot n + c_2.

As we've argued, the constant factor c_1 and the low-order term c_2 don't tell us about the rate of growth of the running time. What's significant is that the worst-case running time of linear search grows like the array size n. The notation we use for this running time is \Theta(n). That's the Greek letter "theta," and we say "big-Theta of n" or just "Theta of n."

When we say that a particular running time is \Theta(n), we're saying that once n gets large enough, the running time is at least k_1 \cdot n and at most k_2 \cdot n for some constants k_1 and k_2. Here's how to think of \Theta(n):

For small values of n, we don't care how the running time compares with k_1 \cdot n or k_2 \cdot n. But once n gets large enough, at or beyond some threshold n_0, the running time must be sandwiched between k_1 \cdot n and k_2 \cdot n. As long as these constants k_1 and k_2 exist, we say that the running time is \Theta(n).
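Written out formally, this sandwich is the standard definition of big-Θ, stated here for a general function g(n); it is the definition the question asks you to work from:

f(n) \in \Theta(g(n)) \iff \exists\, k_1 > 0,\ k_2 > 0,\ n_0 \text{ such that } k_1 \cdot g(n) \le f(n) \le k_2 \cdot g(n) \text{ for all } n \ge n_0.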

We are not restricted to just n in big-Θ notation. We can use any function, such as n^2, n \log_2 n, or any other function of n. Here's how to think of a running time that is \Theta(f(n)) for some function f(n):

Once n gets large enough, the running time is between k_1 \cdot f(n) and k_2 \cdot f(n).

In practice, we just drop constant factors and low-order terms. Another advantage of using big-Θ notation is that we don't have to worry about which time units we're using. For example, suppose that you calculate that a running time is 6n^2 + 100n + 300 microseconds. Or maybe it's milliseconds. When you use big-Θ notation, you don't say. You also drop the factor 6 and the low-order terms 100n + 300, and you just say that the running time is \Theta(n^2).
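The limit test mentioned in the question captures the same idea numerically: if f(n)/g(n) settles toward a positive constant, the bound is tight. A quick sketch using this example's numbers (the names f and g here are just for illustration):

var f = function(n) { return 6 * n * n + 100 * n + 300; };
var g = function(n) { return n * n; };
// The ratio approaches the constant 6, so f(n) is Θ(n^2) by the limit test.
[10, 100, 1000, 10000].forEach(function(n) {
  console.log("n = " + n + ": f(n)/g(n) = " + f(n) / g(n));
});
// n = 10: 19,   n = 100: 7.03,   n = 1000: 6.1003,   n = 10000: 6.010003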

When we use big-Θ notation, we're saying that we have an asymptotically tight bound on the running time. "Asymptotically" because it matters only for large values of n. "Tight bound" because we've nailed the running time to within a constant factor above and below.
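Finally, apply all of this to the function in the question. Reading the transcribed 4*2log2(n) - 4 as f(n) = 4 \cdot 2^{\log_2 n} - 4 (the exponent appears to have been lost in transcription), the identity 2^{\log_2 n} = n collapses it to a linear function, and a tight bound follows directly from the definition above:

f(n) = 4 \cdot 2^{\log_2 n} - 4 = 4n - 4.

Claim: f(n) \in \Theta(n), that is, g(n) = n.

Proof. Take k_1 = 2, k_2 = 4, and n_0 = 2. For all n \ge 2:
  k_1 \cdot n = 2n \le 4n - 4, since 2n - 4 \ge 0 when n \ge 2;
  4n - 4 \le 4n = k_2 \cdot n, since -4 \le 0.
So 2n \le f(n) \le 4n for all n \ge 2, and therefore f(n) \in \Theta(n).

Equivalently, by the limit test: \lim_{n \to \infty} (4n - 4)/n = 4, a finite positive constant, so f(n) \in \Theta(n).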
