Chapter #12 Solutions - Mark Allen Weiss - Data Structures and Algorithm Analysis in Java - 3rd Edition

12.1 Prove that the amortized cost of a top-down splay is O(logN). - Get solution

12.2 Prove that there exist access sequences that require 2 logN rotations per access for bottom-up splaying. Show that a similar result holds for top-down splaying. - Get solution

12.3 Modify the splay tree to support queries for the kth smallest item. - Get solution

12.4 Compare, empirically, the simplified top-down splay with the originally described top-down splay. - Get solution

12.5 Write the deletion procedure for red-black trees. - Get solution

12.6 Prove that the height of a red-black tree is at most 2 logN, and that this bound cannot be substantially lowered. - Get solution

12.7 Show that every AVL tree can be colored as a red-black tree. Are all red-black trees AVL? - Get solution

12.8 Draw a suffix tree and show the suffix array and LCP array for the following input strings:
a. ABCABCABC
b. MISSISSIPPI - Get solution

12.9 Once the suffix array is constructed, the short routine shown in Figure 12.50 can be invoked from Figure 12.32 to create the longest common prefix array.
a. In the code, what does rank[i] represent?
b. Suppose that LCP[rank[i]] = h. Show that LCP[rank[i+1]] ≥ h − 1.
c. Show that the algorithm in Figure 12.50 correctly computes the LCP array.
d. Prove that the algorithm in Figure 12.50 runs in linear time.
/**
 * Create the LCP array from the suffix array
 * @param s the input array populated from 0..N-1, with available pos N
 * @param sa the already-computed suffix array 0..N-1
 * @param LCP the resulting LCP array 0..N-1
 */
public static void makeLCPArray( int [ ] s, int [ ] sa, int [ ] LCP )
{
    int N = sa.length;
    int [ ] rank = new int[ N ];

    s[ N ] = -1;
    for( int i = 0; i < N; i++ )
        rank[ sa[ i ] ] = i;

    int h = 0;
    for( int i = 0; i < N; i++ )
        if( rank[ i ] > 0 )
        {
            int j = sa[ rank[ i ] - 1 ];

            while( s[ i + h ] == s[ j + h ] )
                h++;

            LCP[ rank[ i ] ] = h;
            if( h > 0 )
                h--;
        }
}
Get solution
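
As a usage illustration only (this fragment is not from the text), the code below builds the suffix array of a small string by brute force and then calls the makeLCPArray routine above. The class name LCPDemo, the sample string "BANANA", and the brute-force sort are assumptions for the example; note that the input array s needs one extra slot because makeLCPArray writes a sentinel into s[N].

import java.util.Arrays;

public class LCPDemo
{
    public static void main( String [ ] args )
    {
        String text = "BANANA";           // arbitrary sample input
        int N = text.length( );

        int [ ] s = new int[ N + 1 ];     // one extra slot: makeLCPArray sets s[N] = -1
        for( int i = 0; i < N; i++ )
            s[ i ] = text.charAt( i );

        // Brute-force suffix array: sort the suffix starting positions lexicographically.
        Integer [ ] order = new Integer[ N ];
        for( int i = 0; i < N; i++ )
            order[ i ] = i;
        Arrays.sort( order, ( a, b ) -> text.substring( a ).compareTo( text.substring( b ) ) );

        int [ ] sa = new int[ N ];
        for( int i = 0; i < N; i++ )
            sa[ i ] = order[ i ];

        int [ ] LCP = new int[ N ];
        makeLCPArray( s, sa, LCP );       // routine from Figure 12.50, assumed to be in this class

        System.out.println( "SA:  " + Arrays.toString( sa ) );
        System.out.println( "LCP: " + Arrays.toString( LCP ) );
    }
}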

12.10 Suppose that in the linear-time suffix array construction algorithm, instead of constructing three groups, we construct seven groups, using, for k = 0, 1, 2, 3, 4, 5, 6,
Sk = < S[7i + k] S[7i + k + 1] S[7i + k + 2] ... S[7i + k + 6] for i = 0, 1, 2, ... >
a. Show that with a recursive call to S3S5S6, we have enough information to sort the other four groups S0, S1, S2, and S4.
b. Show that this partitioning leads to a linear-time algorithm. - Get solution

12.11 Implement the insertion routine for treaps nonrecursively by maintaining a stack. Is it worth the effort? - Get solution

12.12 We can make treaps self-adjusting by using the number of accesses as a priority and performing rotations as needed after each access. Compare this method with the randomized strategy. Alternatively, generate a random number each time an item X is accessed. If this number is smaller than X’s current priority, use it as X’s new priority (performing the appropriate rotation). - Get solution
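
To make the second (randomized) strategy concrete, here is a hedged, self-contained sketch rather than the book's treap code: each successful access draws a fresh random priority, keeps it only if it is smaller than the current one, and rotates the node toward the root on the way back up, exactly the way a treap insertion restores heap order on priorities. The class name, integer keys, and the rotation helpers are illustrative assumptions.

import java.util.Random;

class SelfAdjustingTreap
{
    private static class Node
    {
        int key;
        int priority;                 // smaller priority = closer to the root
        Node left, right;
        Node( int k, int p ) { key = k; priority = p; }
    }

    private final Random rand = new Random( );
    private Node root;

    public void insert( int key )
    {
        root = insert( root, key, rand.nextInt( Integer.MAX_VALUE ) );
    }

    // Access key; if it is found, possibly lower its priority and rotate it upward.
    public boolean access( int key )
    {
        boolean [ ] found = { false };
        root = access( root, key, found );
        return found[ 0 ];
    }

    private Node insert( Node t, int key, int priority )
    {
        if( t == null )
            return new Node( key, priority );
        if( key < t.key )
        {
            t.left = insert( t.left, key, priority );
            if( t.left.priority < t.priority )
                t = rotateWithLeftChild( t );
        }
        else if( key > t.key )
        {
            t.right = insert( t.right, key, priority );
            if( t.right.priority < t.priority )
                t = rotateWithRightChild( t );
        }
        return t;                     // duplicate keys are ignored in this sketch
    }

    private Node access( Node t, int key, boolean [ ] found )
    {
        if( t == null )
            return null;
        if( key < t.key )
        {
            t.left = access( t.left, key, found );
            if( t.left != null && t.left.priority < t.priority )
                t = rotateWithLeftChild( t );      // bubble the accessed node upward
        }
        else if( key > t.key )
        {
            t.right = access( t.right, key, found );
            if( t.right != null && t.right.priority < t.priority )
                t = rotateWithRightChild( t );
        }
        else
        {
            found[ 0 ] = true;
            int candidate = rand.nextInt( Integer.MAX_VALUE );
            if( candidate < t.priority )           // keep only a smaller priority
                t.priority = candidate;
        }
        return t;
    }

    private Node rotateWithLeftChild( Node t )     // left child becomes the subtree root
    {
        Node l = t.left;
        t.left = l.right;
        l.right = t;
        return l;
    }

    private Node rotateWithRightChild( Node t )    // right child becomes the subtree root
    {
        Node r = t.right;
        t.right = r.left;
        r.left = t;
        return r;
    }
}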

12.13 Show that if the items are sorted, then a treap can be constructed in linear time, even if the priorities are not sorted. - Get solution

12.14 Implement red-black trees without using the nullNode sentinel. How much coding effort is saved by using the sentinel? - Get solution

12.15 Suppose we store, for each node, the number of null links in its subtree; call this the node’s weight. Adopt the following strategy: If the left and right subtrees have weights that are not within a factor of 2 of each other, then completely rebuild the subtree rooted at the node (a minimal sketch of this strategy follows this exercise). Show the following:
a. We can rebuild a node in O(S), where S is the weight of the node.
b. The algorithm has amortized cost of O(logN) per insertion.
c. We can rebuild a node in a k-d tree in O(S log S) time, where S is the weight of the node.
d. We can apply the algorithm to k-d trees, at a cost of O(log² N) per insertion. - Get solution
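
Below is a minimal sketch of the weight-based rebuilding strategy for an ordinary binary search tree only (the k-d tree variants of parts (c) and (d) are not shown). As in the exercise, a node's weight is the number of null links in its subtree; the class layout, method names, and integer keys are assumptions made purely for illustration.

import java.util.ArrayList;
import java.util.List;

class WeightRebuiltBST
{
    private static class Node
    {
        int key;
        int weight = 2;               // a leaf has two null links
        Node left, right;
        Node( int k ) { key = k; }
    }

    private Node root;

    private static int weight( Node t )
    {
        return t == null ? 1 : t.weight;   // a null link contributes 1 to its parent's weight
    }

    public void insert( int key )
    {
        root = insert( root, key );
    }

    private Node insert( Node t, int key )
    {
        if( t == null )
            return new Node( key );

        if( key < t.key )
            t.left = insert( t.left, key );
        else if( key > t.key )
            t.right = insert( t.right, key );
        else
            return t;                 // duplicate keys are ignored in this sketch

        t.weight = weight( t.left ) + weight( t.right );

        // Rebuild when the subtree weights are not within a factor of 2 of each other.
        if( weight( t.left ) > 2 * weight( t.right )
                || weight( t.right ) > 2 * weight( t.left ) )
            t = rebuild( t );
        return t;
    }

    // Flatten the subtree in sorted order and rebuild it perfectly balanced;
    // this is the O(S) rebuilding step referred to in part (a).
    private Node rebuild( Node t )
    {
        List<Node> nodes = new ArrayList<>( );
        flatten( t, nodes );
        return buildBalanced( nodes, 0, nodes.size( ) - 1 );
    }

    private void flatten( Node t, List<Node> out )
    {
        if( t == null )
            return;
        flatten( t.left, out );
        out.add( t );
        flatten( t.right, out );
    }

    private Node buildBalanced( List<Node> nodes, int lo, int hi )
    {
        if( lo > hi )
            return null;
        int mid = ( lo + hi ) / 2;
        Node t = nodes.get( mid );
        t.left = buildBalanced( nodes, lo, mid - 1 );
        t.right = buildBalanced( nodes, mid + 1, hi );
        t.weight = weight( t.left ) + weight( t.right );
        return t;
    }
}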

12.16 Suppose we call rotateWithLeftChild on an arbitrary 2-d tree. Explain in detail all the reasons that the result is no longer a usable 2-d tree. - Get solution

12.17 Implement the insertion and range search for the k-d tree. Do not use recursion. - Get solution

12.18 Determine the time for partial match query for values of p corresponding to k = 3, 4, and 5. - Get solution

12.19 For a perfectly balanced k-d tree, derive the worst-case running time of a range query that is quoted in the text (see p. 581). - Get solution

12.20 The 2-d heap is a data structure that allows each item to have two individual keys. deleteMin can be performed with respect to either of these keys. The 2-d heap is a complete binary tree with the following order property: For any node X at even depth, the item stored at X has the smallest key #1 in its subtree, while for any node X at odd depth, the item stored at X has the smallest key #2 in its subtree.
a. Draw a possible 2-d heap for the items (1, 10), (2, 9), (3, 8), (4, 7), (5, 6).
b. How do we find the item with minimum key #1?
c. How do we find the item with minimum key #2?
d. Give an algorithm to insert a new item into the 2-d heap.
e. Give an algorithm to perform deleteMin with respect to either key.
f. Give an algorithm to perform buildHeap in linear time. (A small order-property checking sketch follows this exercise.) - Get solution
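
The hedged sketch below only restates the 2-d heap order property in code: for an array-stored complete binary tree of (key #1, key #2) pairs, every node at an even depth must hold the smallest key #1 in its subtree and every node at an odd depth the smallest key #2. The array layout and all names are assumptions; the sketch answers none of the parts above.

class TwoDHeapCheck
{
    // items[i][0] is key #1 and items[i][1] is key #2; the complete binary tree
    // is stored with the root at index 0 and the children of i at 2i+1 and 2i+2.
    public static boolean hasHeapOrder( int [ ][ ] items )
    {
        return check( items, 0, 0 );
    }

    private static boolean check( int [ ][ ] items, int pos, int depth )
    {
        if( pos >= items.length )
            return true;

        int whichKey = depth % 2;     // even depth -> key #1, odd depth -> key #2
        if( items[ pos ][ whichKey ] != minInSubtree( items, pos, whichKey ) )
            return false;

        return check( items, 2 * pos + 1, depth + 1 )
            && check( items, 2 * pos + 2, depth + 1 );
    }

    private static int minInSubtree( int [ ][ ] items, int pos, int whichKey )
    {
        if( pos >= items.length )
            return Integer.MAX_VALUE;
        return Math.min( items[ pos ][ whichKey ],
               Math.min( minInSubtree( items, 2 * pos + 1, whichKey ),
                         minInSubtree( items, 2 * pos + 2, whichKey ) ) );
    }

    public static void main( String [ ] args )
    {
        // A tiny valid example (arbitrary pairs, not the items from part (a)).
        int [ ][ ] heap = { { 1, 7 }, { 3, 2 }, { 4, 5 } };
        System.out.println( hasHeapOrder( heap ) );    // prints true
    }
}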

12.21 Generalize the preceding exercise to obtain a k-d heap, in which each item can have k individual keys. You should be able to obtain the following bounds: insert in O(logN), deleteMin in O(2^k logN), and buildHeap in O(kN). - Get solution

12.22 Show that the k-d heap can be used to implement a double-ended priority queue. - Get solution

12.23 Abstractly, generalize the k-d heap so that only levels that branch on key #1 have
two children (all others have one).
a. Do we need links?
b. Clearly, the basic algorithms still work; what are the new time bounds? - Get solution

12.24 Use a k-d tree to implement deleteMin. What would you expect the average running time to be for a random tree? - Get solution

12.25 Use a k-d heap to implement a double-ended queue that also supports deleteMin. - Get solution

12.26 Implement the pairing heap with a nullNode sentinel. - Get solution

12.27 Show that the amortized cost of each operation is O(logN) for the pairing heap algorithm in the text. - Get solution

12.28 An alternative method for combineSiblings is to place all of the siblings on a queue, and repeatedly dequeue and merge the first two items on the queue, placing the result at the end of the queue. Implement this variation. - Get solution
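
Here is a hedged sketch of the queue-based combineSiblings variant. The pairing heap in the text keeps children in a leftChild/nextSibling layout; to stay self-contained this fragment instead uses a plain child list together with a compareAndLink helper patterned after the text's description, so the node representation and names are assumptions, not the book's code.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class QueuePairingHeapSketch
{
    private static class Node
    {
        int key;
        List<Node> children = new ArrayList<>( );
        Node( int k ) { key = k; }
    }

    // Attach the root with the larger key below the root with the smaller key
    // (min-heap order) and return the winner.
    private static Node compareAndLink( Node first, Node second )
    {
        if( second.key < first.key )
        {
            second.children.add( first );
            return second;
        }
        first.children.add( second );
        return first;
    }

    // The variant of Exercise 12.28: place all siblings on a queue, then repeatedly
    // dequeue two trees, merge them, and enqueue the result until one tree remains.
    static Node combineSiblings( List<Node> siblings )
    {
        if( siblings.isEmpty( ) )
            return null;

        Queue<Node> q = new ArrayDeque<>( siblings );
        while( q.size( ) > 1 )
            q.add( compareAndLink( q.remove( ), q.remove( ) ) );
        return q.remove( );
    }
}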

12.29 Show that using a stack instead of a queue in the previous exercise is bad, by giving a sequence that leads to Ω(N) cost per operation. This is the left-to-right single-pass merge. - Get solution

12.30 Without decreaseKey, we can remove parent links. How competitive is the result with the skew heap? - Get solution

12.31 Assume that each of the following is represented as a tree with child and parent references. Explain how to implement a decreaseKey operation.
a. Binary heap
b. Splay tree - Get solution

12.32 When viewed graphically, each node in a 2-d tree partitions the plane into regions. For instance, Figure 12.51 shows the first five insertions into the 2-d tree in Figure 12.39. The first insertion, of p1, splits the plane into a left part and a right part. The second insertion, of p2, splits the left part into a top part and a bottom part, and so on.
a. For a given set of N items, does the order of insertion affect the final partition?
b. If two different insertion sequences result in the same tree, is the same partition
produced?
c. Give a formula for the number of regions that result from the partition after N
insertions.
d. Show the final partition for the 2-d tree in Figure 12.39. - Get solution

12.33 An alternative to the 2-d tree is the quad tree. Figure 12.52 shows how a plane is partitioned by a quad tree. Initially we have a region (which is often a square, but need not be). Each region may store one point. If a second point is inserted into a region, then the region is split into four equal-sized quadrants (northeast, southeast, southwest, and northwest). If this places the points in different quadrants (as when p2 is inserted), we are done; otherwise, we continue splitting recursively (as is done when p5 is inserted).
a. For a given set of N items, does the order of insertion affect the final partition?
b. Show the final partition if the same elements that were in the 2-d tree in
Figure 12.39 are inserted into the quad tree. - Get solution

12.34 A tree data structure can store the quad tree. We maintain the bounds of the original region. The tree root represents the original region. Each node is either a leaf that stores an inserted item, or has exactly four children, representing four quadrants. To perform a search, we begin at the root and repeatedly branch to an appropriate quadrant until a leaf (or null entry) is reached. (A minimal search sketch follows this exercise.)
a. Draw the quad tree that corresponds to Figure 12.52.
b. What factors influence how deep the (quad) tree will be?
c. Describe an algorithm that performs an orthogonal range query in a quad tree. - Get solution
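
The following is a hedged sketch of the search just described, under one particular set of assumptions: an internal node keeps an array of four children ordered NE, SE, SW, NW, a leaf stores at most one point, and the region bounds are carried down as the search descends. The names, the double-coordinate points, and the omission of insertion are simplifications for illustration.

class QuadTreeSketch
{
    static class Node
    {
        double [ ] point;     // non-null only in a leaf that currently stores a point
        Node [ ] quad;        // non-null only in an internal node: { NE, SE, SW, NW }
    }

    // Search for (x, y) in the quad tree rooted at root, whose region is
    // [xLo, xHi] x [yLo, yHi], by repeatedly branching to the containing quadrant.
    static boolean contains( Node root, double x, double y,
                             double xLo, double xHi, double yLo, double yHi )
    {
        Node t = root;
        while( t != null && t.quad != null )          // stop at a leaf or a null entry
        {
            double midX = ( xLo + xHi ) / 2, midY = ( yLo + yHi ) / 2;
            int q;                                    // quadrant index: 0=NE, 1=SE, 2=SW, 3=NW
            if( x >= midX )
            { q = ( y >= midY ) ? 0 : 1; xLo = midX; }
            else
            { q = ( y >= midY ) ? 3 : 2; xHi = midX; }
            if( y >= midY ) yLo = midY; else yHi = midY;

            t = t.quad[ q ];
        }
        return t != null && t.point != null
                && t.point[ 0 ] == x && t.point[ 1 ] == y;
    }
}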

Chapter #11 Solutions - Mark Allen Weiss - Data Structures and Algorithm Analysis in Java - 3rd Edition

11.1 When do M consecutive insertions into a binomial queue take less than 2M time units? - Get solution

11.2 Suppose a binomial queue of N = 2^k − 1 elements is built. Alternately perform M insert and deleteMin pairs. Clearly, each operation takes O(logN) time. Why does this not contradict the amortized bound of O(1) for insertion? - Get solution

11.3 Show that the amortized bound of O(logN) for the skew heap operations described in the text cannot be converted to a worst-case bound, by giving a sequence of operations that lead to a merge requiring Ω(N) time. - Get solution

11.4 Show how to merge two skew heaps with one top-down pass and reduce the merge cost to O(1) amortized time. - Get solution

11.5 Extend skew heaps to support the decreaseKey operation in O(logN) amortized time. - Get solution

11.6 Implement Fibonacci heaps and compare their performance with that of binary heaps when used in Dijkstra’s algorithm. - Get solution


11.7 A standard implementation of Fibonacci heaps requires four links per node (parent, child, and two siblings). Show how to reduce the number of links, at the cost of at most a constant factor in the running time. - Get solution

11.8 Show that the amortized time of a zig-zig splay is at most 3(Rf(X) − Ri(X)). - Get solution

11.9 By changing the potential function, it is possible to prove different bounds for splaying. Let the weight function W(i) be some function assigned to each node in the tree, and let S(i) be the sum of the weights of all the nodes in the subtree rooted at i, including i itself. The special case W(i) = 1 for all nodes corresponds to the function used in the proof of the splaying bound. Let N be the number of nodes in the tree, and let M be the number of accesses. Prove the following two theorems:
a. The total access time is O(M + (M + N) logN).
b. If qi is the number of times that item i is accessed, and qi > 0 for all i, then the total access time is O(M + Σ qi log(M/qi)). - Get solution

11.10 a. Show how to implement the merge operation on splay trees so that any sequence of N−1 merges starting from N single-element trees takes O(N log² N) time.
b. Improve the bound to O(N logN). - Get solution

11.11 In Chapter 5, we described rehashing: When a table becomes more than half full, a new table twice as large is constructed, and the entire old table is rehashed. Give a formal amortized analysis, with potential function, to show that the amortized cost of an insertion is still O(1). - Get solution
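
Purely as a reminder of the policy being analyzed (the amortized argument itself is left to the exercise), here is a hedged sketch of a toy hash set that doubles the moment an insertion would make it more than half full. Linear probing, integer keys, and all of the names are simplifying assumptions and are not the book's hash table code.

class DoublingHashTable
{
    private int [ ] table = new int[ 8 ];
    private boolean [ ] occupied = new boolean[ 8 ];
    private int currentSize;

    public void insert( int x )
    {
        if( contains( x ) )
            return;
        if( 2 * ( currentSize + 1 ) > table.length )
            rehash( );                       // the occasional, expensive step
        place( x );
        currentSize++;
    }

    public boolean contains( int x )
    {
        return occupied[ probe( x ) ];
    }

    // Double the table and reinsert every element; this costs O(currentSize),
    // which is exactly what the amortized analysis must pay for.
    private void rehash( )
    {
        int [ ] oldTable = table;
        boolean [ ] oldOccupied = occupied;
        table = new int[ 2 * oldTable.length ];
        occupied = new boolean[ table.length ];
        for( int i = 0; i < oldTable.length; i++ )
            if( oldOccupied[ i ] )
                place( oldTable[ i ] );
    }

    private void place( int x )
    {
        int i = probe( x );
        table[ i ] = x;
        occupied[ i ] = true;
    }

    // Linear probing: walk from the hash position to the first free or matching slot.
    private int probe( int x )
    {
        int i = Math.floorMod( x, table.length );
        while( occupied[ i ] && table[ i ] != x )
            i = ( i + 1 ) % table.length;
        return i;
    }
}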

11.12 What is the maximum depth of a Fibonacci heap? - Get solution

11.13 A deque with heap order is a data structure consisting of a list of items, on which
the following operations are possible:
push(x): Insert item x on the front end of the deque.
pop(): Remove the front item from the deque and return it.
inject(x): Insert item x on the rear end of the deque.
eject(): Remove the rear item from the deque and return it.
findMin(): Return the smallest item from the deque (breaking ties arbitrarily).
a. Describe how to support these operations in constant amortized time per operation.
b. Describe how to support these operations in constant worst-case time per operation. (A minimal interface sketch follows this exercise.) - Get solution
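
Purely to pin down the ADT, here is a minimal interface sketch for the deque with heap order; the interface name and the generic bound are illustrative assumptions, and neither part (a) nor part (b) is implemented here.

public interface MinDeque<AnyType extends Comparable<? super AnyType>>
{
    void push( AnyType x );       // insert x on the front end of the deque
    AnyType pop( );               // remove and return the front item
    void inject( AnyType x );     // insert x on the rear end of the deque
    AnyType eject( );             // remove and return the rear item
    AnyType findMin( );           // return the smallest item, breaking ties arbitrarily
}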

11.14 Show that the binomial queues actually support merging in O(1) amortized time. Define the potential of a binomial queue to be the number of trees plus the rank of the largest tree. - Get solution

11.15 Suppose that in an attempt to save time, we splay on every second tree operation.
Does the amortized cost remain logarithmic? - Get solution

11.16 Using the potential function in the proof of the splay tree bound, what is the maximum and minimum potential of a splay tree? By how much can the potential function decrease in one splay? By how much can the potential function increase in one splay? You may give Big-Oh answers. - Get solution

11.17 As a result of a splay, most of the nodes on the access path are moved halfway towards the root, while a couple of nodes on the path move down one level. This suggests using the sum over all nodes of the logarithm of each node’s depth as a potential function.
a. What is the maximum value of the potential function?
b. What is the minimum value of the potential function?
c. The difference in the answers to parts (a) and (b) gives some indication that this potential function isn't too good. Show that a splaying operation could increase the potential by Ω(N/logN). - Get solution