Variations on O and Ω
Some authors define \(\Omega\) in a slightly different way than we do; let’s use \(\mathop \Omega \limits^\infty\) (read “omega infinity”) for this alternative definition. We say that \(f(n) = \mathop \Omega \limits^\infty (g(n))\) if there exists a positive constant \(c\) such that \(f(n) \geq c \cdot g(n) \geq 0\) for infinitely many integers \(n\).
a. Show that for any two functions \(f(n)\) and \(g(n)\) that are asymptotically nonnegative, either \(f(n) = O(g(n))\) or \(f(n) = \mathop \Omega \limits^\infty (g(n))\) or both, whereas this is not true if we use \(\Omega\) in place of \(\mathop \Omega \limits^\infty\).
Most of the time, \(f(n) = O(g(n))\) or \(f(n) = \Omega(g(n))\) or both, so consider the case in which neither statement holds. In that case \(f(n)\) and \(g(n)\) must keep crossing over one another as \(n \rightarrow \infty\), since no constant multiple of \(g(n)\) bounds \(f(n)\) from above for all sufficiently large \(n\), and none bounds it from below either. In particular, because \(f(n) \neq O(g(n))\), the inequality \(f(n) \leq c \cdot g(n)\) fails for infinitely many \(n\) no matter which constant \(c\) we pick; taking \(c = 1\) and using the fact that both functions are asymptotically nonnegative, we get \(f(n) \geq g(n) \geq 0\) for infinitely many \(n\), which is exactly the condition for \(f(n) = \mathop \Omega \limits^\infty (g(n))\). An example of a pair for which neither \(f(n) = O(g(n))\) nor \(f(n) = \Omega(g(n))\) holds is \(f(n) = n\) for odd \(n\) and \(f(n) = 0\) for even \(n\), with \(g(n) = 1\).
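For concreteness, here is one way to check the three relations for this example pair (the odd/even oscillation is just one convenient choice):

\[
\begin{aligned}
f(n) &\neq O(g(n)) && \text{since } f(n) = n > c \cdot 1 \text{ for every odd } n > c,\\
f(n) &\neq \Omega(g(n)) && \text{since } f(n) = 0 < c \cdot 1 \text{ for every even } n \text{ and every } c > 0,\\
f(n) &= \mathop \Omega \limits^\infty (g(n)) && \text{since } f(n) = n \geq 1 \cdot g(n) \geq 0 \text{ for every odd } n \geq 1.
\end{aligned}
\]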
Now, since \(f(n) = \Omega(g(n)) \implies f(n) = \mathop \Omega \limits^\infty (g(n))\) (a bound that holds for all \(n \geq n_0\) certainly holds for infinitely many \(n\)), every remaining case is covered as well, and the statement is true for all pairs of asymptotically nonnegative functions: \(f(n) = O(g(n))\) or \(f(n) = \mathop \Omega \limits^\infty (g(n))\) or both.
b. Describe the potential advantages and disadvantages of using \(\mathop \Omega \limits^\infty\) instead of \(\Omega\) to characterize the running times of programs.
Advantage: As shown in part a., using \(\mathop \Omega \limits^\infty\) lets us relate every pair of asymptotically nonnegative functions with a single simple statement, whereas with \(\Omega\) we must add an awkward caveat for the functions that keep crossing over one another.
Disadvantage: Because \(f(n) = \Omega(g(n)) \implies f(n) = \mathop \Omega \limits^\infty (g(n))\) but not conversely, \(\mathop \Omega \limits^\infty\) is the weaker claim: it only guarantees the lower bound for infinitely many \(n\), not for all sufficiently large \(n\), so characterizing a running time with \(\mathop \Omega \limits^\infty\) can obscure otherwise straightforward information that \(\Omega\) would convey.
Some authors also define \(O\) in a slightly different manner; let’s use \(O'\) for the alternative definition. We say that \(f(n) = O'(g(n))\) if and only if \(\lvert f(n) \rvert = O(g(n))\).
c. What happens to each direction of the “if and only if” in Theorem 3.1 if we substitute \(O'\) for \(O\) but still use \(\Omega\)?
For reference, Theorem 3.1 states: for any two functions \(f(n)\) and \(g(n)\), we have \(f(n) = \Theta(g(n))\) if and only if \(f(n) = O(g(n))\) and \(f(n) = \Omega(g(n))\).
Substituting \(O'(g(n))\) for \(O(g(n))\) in Theorem 3.1 does not affect the left-to-right direction, because \(f(n) = O(g(n)) \implies f(n) = O'(g(n))\): by definition every member \(f(n) \in \Theta(g(n))\) is asymptotically nonnegative, so taking the absolute value of \(f(n)\) does not change its value for sufficiently large \(n\), and \(\lvert f(n) \rvert = O(g(n))\) follows.
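Spelled out as a chain, with \(c_1\), \(c_2\), and \(n_0\) taken from the \(\Theta\) bound, for all \(n \geq n_0\):

\[
f(n) = \Theta(g(n)) \implies 0 \leq c_1 g(n) \leq f(n) \leq c_2 g(n) \implies \lvert f(n) \rvert = f(n) \leq c_2 g(n) \implies f(n) = O'(g(n)).
\]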
The right-to-left direction also continues to hold after the substitution. The key observation is that \(f(n) = \Omega(g(n))\) already forces \(f(n)\) to be asymptotically nonnegative: for all \(n \geq n_0\) we have \(f(n) \geq c \cdot g(n) \geq 0\), so \(\lvert f(n) \rvert = f(n)\) for those \(n\). Combined with \(f(n) = O'(g(n))\), i.e. \(\lvert f(n) \rvert = O(g(n))\), this gives \(f(n) = O(g(n))\), and Theorem 3.1 then yields \(f(n) = \Theta(g(n))\). (A candidate counterexample such as \(f(n) = -n\) and \(g(n) = n\) does not work, because \(-n \neq \Omega(n)\): the definition of \(\Omega\) requires \(0 \leq c \cdot g(n) \leq f(n)\).)
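As an inequality chain, with \(c_1\) and \(n_1\) from the \(\Omega\) bound and \(c_2\) and \(n_2\) from the \(O'\) bound, for all \(n \geq \max(n_1, n_2)\):

\[
0 \leq c_1 g(n) \leq f(n) = \lvert f(n) \rvert \leq c_2 g(n),
\]

which is exactly the definition of \(f(n) = \Theta(g(n))\).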
Some authors define \(\widetilde{O}\) (read “soft-oh”) to mean \(O\) with logarithmic factors ignored:
\(\widetilde{O} (g(n)) = \{ f(n) :\) there exist positive constants \(c\), \(k\), and \(n_0\) such that \(0 \leq f(n) \leq cg(n)\lg^k(n) \ \forall \ n \geq n_0 \}\).
d. Define \(\widetilde{\Omega}\) and \(\widetilde{\Theta}\) in a similar manner. Prove the corresponding analog to Theorem 3.1.
\(\widetilde{\Omega}(g(n)) = \{ f(n) :\) there exist positive constants \(c\), \(k\), and \(n_0\) such that \(0 \leq c g(n) / \lg^k(n) \leq f(n) \ \forall \ n \geq n_0 \}\). (Dividing by the polylogarithmic factor, rather than multiplying by it, is what lets a lower bound ignore logarithmic factors; among other things it keeps \(g(n) = \widetilde{\Omega}(g(n))\).)

\(\widetilde{\Theta}(g(n)) = \{ f(n) :\) there exist positive constants \(c_1\), \(c_2\), \(k\), and \(n_0\) such that \(0 \leq c_1 g(n) / \lg^k(n) \leq f(n) \leq c_2 g(n) \lg^k(n) \ \forall \ n \geq n_0 \}\).
The corresponding analog to Theorem 3.1 is: for any two functions \(f(n)\) and \(g(n)\), we have \(f(n) = \widetilde{\Theta}(g(n))\) if and only if \(f(n) = \widetilde{O}(g(n))\) and \(f(n) = \widetilde{\Omega}(g(n))\). The left-to-right direction is immediate, since the \(\widetilde{\Theta}\) inequalities already contain a \(\widetilde{O}\) bound (with \(c_2\) and \(k\)) and a \(\widetilde{\Omega}\) bound (with \(c_1\) and \(k\)). For the right-to-left direction, the two bounds may come with different exponents \(k_1\) and \(k_2\); taking \(k = \max(k_1, k_2)\) only loosens each bound once \(\lg(n) \geq 1\), so both hold simultaneously with a single exponent \(k\) for all \(n \geq \max(n_1, n_2, 2)\), which is the definition of \(f(n) = \widetilde{\Theta}(g(n))\).
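In symbols, suppose \(0 \leq f(n) \leq c_2 g(n) \lg^{k_2}(n)\) for all \(n \geq n_2\) and \(0 \leq c_1 g(n) / \lg^{k_1}(n) \leq f(n)\) for all \(n \geq n_1\). With \(k = \max(k_1, k_2)\), for all \(n \geq \max(n_1, n_2, 2)\) we have \(\lg(n) \geq 1\) and therefore

\[
0 \leq c_1 \frac{g(n)}{\lg^{k}(n)} \leq c_1 \frac{g(n)}{\lg^{k_1}(n)} \leq f(n) \leq c_2 g(n) \lg^{k_2}(n) \leq c_2 g(n) \lg^{k}(n),
\]

so \(f(n) = \widetilde{\Theta}(g(n))\) with constants \(c_1\), \(c_2\), \(k\), and \(n_0 = \max(n_1, n_2, 2)\).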