Question

Explain how to analyze an algorithm to determine its input size and its Big-O, Big Theta, and/or Big Omega.
Answer #1

INPUT SIZE

If an algorithm takes a single number as input, we typically measure input size as the number of bits in that number. Sometimes, however, we take the input number itself as the "size". For example, if the task is "generate n objects such that they satisfy the following constraints", then n would be the natural choice for the size. However, for something like "determine whether the set of divisors of a number n has a subset whose sum has properties X and Y", the input size would be log₂ n (the number of bits in n).
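As a concrete illustration (a small Python sketch, not part of the original answer; `input_size_bits` and `input_size_list` are names chosen here for clarity):

```python
def input_size_bits(n):
    # Size of a single positive integer n: the number of bits needed
    # to write it, i.e. floor(log2(n)) + 1. Python exposes this directly.
    return n.bit_length()

def input_size_list(items):
    # Size of a list input: usually just the number of items.
    return len(items)
```

For instance, n = 1000 has a 10-bit representation, so an algorithm whose running time is linear in n is actually exponential in the input size measured in bits.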

If the input is a list, typically we use the number of items in the list, but there might be cases where we consider the sum of the numbers of bits of all the numbers in the list instead.

BIG O

Big O notation describes how an algorithm's running time grows as its input grows. This lets us understand how a piece of code will scale; it is a measure of algorithmic efficiency.

Calculating Big O

To calculate Big O, you can go through each line of code, establish whether it's O(1), O(n), etc., and then sum the results at the end. For example, you might end up with O(4 + 5n), where the 4 represents four O(1) operations and 5n represents five O(n) operations.

There’s an easier way to calculate this however, and that’s done with the following four rules:

Assume the worst
Remove constants
Use different terms for inputs
Drop non-dominant terms
Taking each of these in turn:

1. Assume the worst

When calculating Big O, you always consider the worst case. For example, if you loop over an array looking for an item, it could be the first item or it could be the last. Since we can't be certain, we must assume O(n) in this instance.
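A minimal sketch of that situation (an illustrative example, not from the original answer):

```python
def contains(items, target):
    # Worst case: the target is the last item, or absent entirely,
    # so the loop examines all n items -> O(n).
    # It might also be the first item, but Big O assumes the worst.
    for item in items:
        if item == target:
            return True
    return False
```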

2. Remove constants

This is a little bit trickier but bear with me. Consider this code:
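The original snippet is not shown here; a sketch consistent with the description below (a few O(1) operations, a loop over half the input, and 100 extra constant-time steps, giving O(1 + n/2 + 100)) might look like:

```python
def funny_calculation(items):
    total = 0                     # O(1)
    midpoint = len(items) // 2    # O(1)

    # Loop over half of the input: O(n/2)
    for i in range(midpoint):
        total += items[i]

    # 100 extra constant-time operations: O(100), still a constant
    for _ in range(100):
        total += 1

    return total                  # overall: O(1 + n/2 + 100)
```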


We have a function which does several things. Some operations are O(1), such as simple assignments, whereas others are O(n), such as a loop over the input.

We could express the Big O of this as O(1 + n/2 + 100), but it's really too specific. We can drop the O(1) operations because, as a rule of thumb, they are likely to be insignificant compared to our O(n) operations. If we are passing a million items in our input array, then an extra 100 constant-time operations aren't our main concern.

So then we have O(n/2). As n gets larger and larger, the constant factor 1/2 matters less and less to how the function scales, so ultimately our Big O for this function becomes O(n).

3. Different terms for inputs

If we were to loop twice over the same array, our Big O is technically O(2n), but we drop the constant and are left with O(n). Now we need to think about different terms for inputs. What does that mean?
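The snippet the answer refers to is missing; a minimal sketch with two independent inputs might be:

```python
def process_both(list_a, list_b):
    steps = 0
    # First loop touches every item of list_a: O(a)
    for _ in list_a:
        steps += 1
    # Second loop touches every item of list_b: O(b)
    for _ in list_b:
        steps += 1
    # The loops are sequential, not nested, so the work adds: O(a + b)
    return steps
```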


If we look at the code above, we can see we now have two inputs. One could be one item long; the other could contain a million items. So our function is no longer O(n); it is O(a + b). The n we use in our notation is arbitrary, so we need to reflect both inputs in our notation.

In this instance our loops aren't nested. If they were, our code would be O(a × b) (or O(n²) over a single input), which would be a potential target for refactoring. Typically, if loops are nested, you multiply rather than add the variables.

4. Drop non-dominant terms

Take a look at this code:
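The original code is not shown; a sketch matching the description below (one flat loop followed by one nested loop) could be:

```python
def sum_then_pairs(numbers):
    # Single loop: O(n)
    total = 0
    for x in numbers:
        total += x

    # Nested loop: O(n^2) -- visits every ordered pair of items
    pairs = []
    for x in numbers:
        for y in numbers:
            pairs.append((x, y))

    # Overall: O(n + n^2)
    return total, pairs
```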


So we have a single loop at the top, which is O(n), and then a nested loop, which is O(n²). Our total is O(n + n²). Just as we removed constants in rule #2, here we ask: which of the two terms is more important to us?

In this case we drop O(n) and retain O(n²) as this is more important. It is the dominant term as it has a much heavier impact on performance.

BIG THETA


If f(n) and g(n) are positive-valued functions of a positive argument n, then f(n) ∈ Θ(g(n)) if there exist positive constants c1, c2 and n1 such that

c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n1.

Let's take an example (one concrete pair of functions consistent with the constants below):

let f(n) = 7n² + n

g(n) = n²

c1 = 5, c2 = 8 and n1 = 1, since 5n² ≤ 7n² + n ≤ 8n² for all n ≥ 1.
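As a quick sanity check (using, for illustration, f(n) = 7n² + n and g(n) = n², a pair consistent with the constants c1 = 5, c2 = 8, n1 = 1), the inequality can be verified numerically over a sample range:

```python
def theta_bounds_hold(f, g, c1, c2, n1, upto):
    # Check c1*g(n) <= f(n) <= c2*g(n) for every n in [n1, upto).
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n1, upto))

ok = theta_bounds_hold(lambda n: 7 * n * n + n,   # f(n) = 7n^2 + n
                       lambda n: n * n,           # g(n) = n^2
                       c1=5, c2=8, n1=1, upto=1000)
```

A finite check is not a proof, of course, but it is a useful way to catch a wrongly chosen c1 or c2.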

Among all the notations, Θ notation gives the best intuition about the rate of growth of a function because it gives a tight bound, unlike big-O and big-Ω, which give only an upper and a lower bound respectively.

Θ tells us that g(n) tracks f(n) as closely as possible: the rate of growth of g(n) matches the rate of growth of f(n).

How to calculate


Big-theta notation represents the following rule:

For any two functions f(n), g(n), if f(n)/g(n) and g(n)/f(n) are both bounded as n grows to infinity, then f = Θ(g) and g = Θ(f). In that case, g is both an upper bound and a lower bound on the growth of f.

Here's an example algorithm (in Python):

def find_minimum(lst):
    minimum = float("inf")    # start above any possible value
    for value in lst:
        if value < minimum:
            minimum = value
    return minimum

We wish to evaluate the cost function c(n), where n is the size of the input list. This algorithm performs one comparison for every item in the list, so c(n) = n.

c(n)/n = 1, which remains bounded as n goes to infinity, so c(n) grows no faster than n. This is what is meant by big-O notation, c(n) = O(n). Conversely, n/c(n) = 1 also remains bounded, so c(n) grows no slower than n. Since it grows neither slower nor faster, it must grow at the same rate. This is what is meant by theta notation, c(n) = Θ(n).

Note that c(n)/n² is also bounded, so c(n) = O(n²) as well — big-O notation is merely an upper bound on the complexity, so any O(n) function is also O(n²), O(n³)...

However, since n²/c(n) = n is not bounded, then c(n) ≠ Θ(n²). This is the interesting property of big-theta notation: it's both an upper bound and a lower bound on the complexity.

BIG OMEGA

Sometimes we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. For this we use big-Ω notation (the Greek letter "omega").
Big Omega is a lower bound on running time: a curve that lies below our function's curve, at least after some point and forever toward infinity. A function will never run faster (in less time) than its Big Omega bound.
Since best-case scenarios are far rarer in practice than worst cases (which often coincide with the average case), it's not hard to see why Big O is more popular than Big Omega.
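To make the best-case/worst-case distinction concrete (an illustrative sketch, not from the original answer):

```python
def search_steps(items, target):
    # Counts the comparisons a linear search makes.
    # Best case: target is the first item -> 1 step, so the search is Omega(1).
    # Worst case: target is last or absent -> n steps, so it is O(n).
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps
```

The same algorithm is thus Ω(1) and O(n); only when the two bounds coincide can we state a Θ bound.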
