Chapter 12 Solutions - Mark Allen Weiss - Data Structures and Algorithm Analysis in Java - 3rd Edition

12.1 Prove that the amortized cost of a top-down splay is O(log N). - Get solution

12.2 Prove that there exist access sequences that require 2 log N rotations per access for bottom-up splaying. Show that a similar result holds for top-down splaying. - Get solution

12.3 Modify the splay tree to support queries for the kth smallest item. - Get solution

12.4 Compare, empirically, the simplified top-down splay with the originally described top-down splay. - Get solution

12.5 Write the deletion procedure for red-black trees. - Get solution

12.6 Prove that the height of a red-black tree is at most 2 log N, and that this bound cannot be substantially lowered. - Get solution

12.7 Show that every AVL tree can be colored as a red-black tree. Are all red-black trees AVL? - Get solution

12.8 Draw a suffix tree and show the suffix array and LCP array for the following input strings:
a. ABCABCABC
b. MISSISSIPPI - Get solution

12.9 Once the suffix array is constructed, the short routine shown in Figure 12.50 can be invoked from Figure 12.32 to create the longest common prefix array.
a. In the code, what does rank[i] represent?
b. Suppose that LCP[rank[i]] = h. Show that LCP[rank[i+1]] ≥ h − 1.
c. Show that the algorithm in Figure 12.50 correctly computes the LCP array.
d. Prove that the algorithm in Figure 12.50 runs in linear time.
/*
 * Create the LCP array from the suffix array
 * @param s the input array populated from 0..N-1, with available pos N
 * @param sa the already-computed suffix array 0..N-1
 * @param LCP the resulting LCP array 0..N-1
 */
public static void makeLCPArray( int [ ] s, int [ ] sa, int [ ] LCP )
{
    int N = sa.length;
    int [ ] rank = new int[ N ];

    s[ N ] = -1;
    for( int i = 0; i < N; i++ )
        rank[ sa[ i ] ] = i;

    int h = 0;
    for( int i = 0; i < N; i++ )
        if( rank[ i ] > 0 )
        {
            int j = sa[ rank[ i ] - 1 ];

            while( s[ i + h ] == s[ j + h ] )
                h++;

            LCP[ rank[ i ] ] = h;
            if( h > 0 )
                h--;
        }
}
Get solution
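
A minimal driver sketch (not from the text; the class and helper names are invented for illustration) showing the calling convention: the characters go into an int array with one extra slot, because makeLCPArray writes the sentinel into s[N]. The naive suffix-array helper is only there to make the example self-contained; the text's linear-time construction would be used in practice, and makeLCPArray from Figure 12.50 is assumed to sit in the same class.

import java.util.Arrays;

public class LCPDemo
{
    // Naive suffix array via sorted suffix comparison -- illustration only;
    // a linear-time construction would be used in practice.
    static int[ ] naiveSuffixArray( String str )
    {
        int N = str.length( );
        Integer[ ] idx = new Integer[ N ];
        for( int i = 0; i < N; i++ )
            idx[ i ] = i;
        Arrays.sort( idx, ( a, b ) -> str.substring( a ).compareTo( str.substring( b ) ) );

        int[ ] sa = new int[ N ];
        for( int i = 0; i < N; i++ )
            sa[ i ] = idx[ i ];
        return sa;
    }

    public static void main( String[ ] args )
    {
        String str = "MISSISSIPPI";
        int N = str.length( );

        int[ ] s = new int[ N + 1 ];     // one extra slot: makeLCPArray writes s[ N ]
        for( int i = 0; i < N; i++ )
            s[ i ] = str.charAt( i );

        int[ ] sa = naiveSuffixArray( str );
        int[ ] LCP = new int[ N ];
        makeLCPArray( s, sa, LCP );      // routine from Figure 12.50, assumed in this class

        System.out.println( Arrays.toString( sa ) );
        System.out.println( Arrays.toString( LCP ) );
    }
}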

12.10 Suppose that in the linear-time suffix array construction algorithm, instead of constructing three groups, we construct seven groups, using, for k = 0, 1, 2, 3, 4, 5, 6,
Sk = < S[7i + k] S[7i + k + 1] S[7i + k + 2] ... S[7i + k + 6] for i = 0, 1, 2, ... >
a. Show that with a recursive call to S3S5S6, we have enough information to sort the other four groups S0, S1, S2, and S4.
b. Show that this partitioning leads to a linear-time algorithm. - Get solution
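
A literal reading of the definition, as a sketch (class and method names are invented for illustration): group Sk collects the overlapping 7-tuples that begin at positions 7i + k, and positions running past the end of S are padded here with a 0 stand-in for the usual end-of-string sentinel.

import java.util.ArrayList;
import java.util.List;

// Sketch: form the seven groups S0..S6 of overlapping 7-tuples; tuple i of
// group k starts at position 7*i + k.
class SevenGroups
{
    static List<List<int[ ]>> buildGroups( int[ ] S )
    {
        List<List<int[ ]>> groups = new ArrayList<>( );
        for( int k = 0; k < 7; k++ )
        {
            List<int[ ]> tuples = new ArrayList<>( );
            for( int start = k; start < S.length; start += 7 )
            {
                int[ ] t = new int[ 7 ];
                for( int d = 0; d < 7; d++ )
                    t[ d ] = start + d < S.length ? S[ start + d ] : 0;
                tuples.add( t );
            }
            groups.add( tuples );
        }
        return groups;
    }
}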

12.11 Implement the insertion routine for treaps nonrecursively by maintaining a stack. Is it worth the effort? - Get solution

12.12 We can make treaps self-adjusting by using the number of accesses as a priority and performing rotations as needed after each access. Compare this method with the randomized strategy. Alternatively, generate a random number each time an item X is accessed. If this number is smaller than X’s current priority, use it as X’s new priority (performing the appropriate rotation). - Get solution
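
A minimal sketch of the second strategy (the random re-draw on access), assuming a simple recursive treap with min-heap order on priorities; all names here are illustrative, not from the text.

import java.util.Random;

// Sketch: each access of key x re-draws a random priority at the accessed
// node, keeps it only if it is smaller, and lets rotations on the way back
// up the recursion restore the min-heap order on priorities.
class SelfAdjustingTreap
{
    private static class Node
    {
        int key, priority;
        Node left, right;
        Node( int k, int p ) { key = k; priority = p; }
    }

    private Node root;
    private final Random rnd = new Random( );

    public void access( int x )
    { root = access( x, root ); }

    private Node access( int x, Node t )
    {
        if( t == null )
            return null;

        if( x < t.key )
        {
            t.left = access( x, t.left );
            if( t.left != null && t.left.priority < t.priority )
                t = rotateWithLeftChild( t );
        }
        else if( x > t.key )
        {
            t.right = access( x, t.right );
            if( t.right != null && t.right.priority < t.priority )
                t = rotateWithRightChild( t );
        }
        else    // found: re-draw the priority and keep it only if smaller
        {
            int candidate = rnd.nextInt( Integer.MAX_VALUE );
            if( candidate < t.priority )
                t.priority = candidate;
        }
        return t;
    }

    private Node rotateWithLeftChild( Node k2 )
    {
        Node k1 = k2.left;
        k2.left = k1.right;
        k1.right = k2;
        return k1;
    }

    private Node rotateWithRightChild( Node k1 )
    {
        Node k2 = k1.right;
        k1.right = k2.left;
        k2.left = k1;
        return k2;
    }
}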

12.13 Show that if the items are sorted, then a treap can be constructed in linear time, even if the priorities are not sorted. - Get solution
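
One construction that achieves this, sketched with illustrative names (min-heap order on priorities assumed): because the keys arrive in sorted order, each new node joins the right spine, and a stack holding that spine gives linear total work, since every node is pushed once and popped at most once.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: build a treap from keys given in sorted order in linear time,
// regardless of the priorities.
class TreapBuilder
{
    static class Node
    {
        int key, priority;
        Node left, right;
        Node( int k, int p ) { key = k; priority = p; }
    }

    // keys[] must be sorted; priorities[] is arbitrary.
    static Node build( int[ ] keys, int[ ] priorities )
    {
        Deque<Node> spine = new ArrayDeque<>( );   // right spine, root at the bottom
        Node root = null;

        for( int i = 0; i < keys.length; i++ )
        {
            Node n = new Node( keys[ i ], priorities[ i ] );
            Node lastPopped = null;

            // Pop spine nodes with larger priority; they become n's left subtree.
            while( !spine.isEmpty( ) && spine.peek( ).priority > n.priority )
                lastPopped = spine.pop( );
            n.left = lastPopped;

            if( spine.isEmpty( ) )
                root = n;                   // n is the new root
            else
                spine.peek( ).right = n;    // n extends the right spine
            spine.push( n );
        }
        return root;
    }
}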

12.14 Implement red-black trees without using the nullNode sentinel. How much coding effort is saved by using the sentinel? - Get solution

12.15 Suppose we store, for each node, the number of null links in its subtree; call this the node's weight. Adopt the following strategy: If the left and right subtrees have weights that are not within a factor of 2 of each other, then completely rebuild the subtree rooted at the node (a sketch of this strategy appears after the exercise). Show the following:
a. We can rebuild a node in O(S), where S is the weight of the node.
b. The algorithm has amortized cost of O(log N) per insertion.
c. We can rebuild a node in a k-d tree in O(S log S) time, where S is the weight of the node.
d. We can apply the algorithm to k-d trees, at a cost of O(log² N) per insertion. - Get solution
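
A sketch of the rebuilding strategy, with illustrative names (a null subtree contributes one null link, so it has weight 1): flattening the offending subtree with an inorder walk and rebuilding it from the middle out gives the O(S) rebuild of part (a).

import java.util.ArrayList;
import java.util.List;

// Sketch: weight-based rebuilding.  Each node stores its weight (number of
// null links in its subtree); after an insertion, any node whose children's
// weights differ by more than a factor of 2 has its subtree rebuilt.
class RebuildingTree
{
    static class Node
    {
        int key, weight;            // weight = null links in this subtree
        Node left, right;
        Node( int k ) { key = k; weight = 2; }
    }

    private static int weight( Node t ) { return t == null ? 1 : t.weight; }

    private static boolean unbalanced( Node t )
    {
        int l = weight( t.left ), r = weight( t.right );
        return l > 2 * r || r > 2 * l;
    }

    // Part (a): rebuild in O(S) -- flatten to sorted order, then split in half.
    static Node rebuild( Node t )
    {
        List<Node> nodes = new ArrayList<>( );
        flatten( t, nodes );
        return buildBalanced( nodes, 0, nodes.size( ) - 1 );
    }

    private static void flatten( Node t, List<Node> out )
    {
        if( t == null ) return;
        flatten( t.left, out );
        out.add( t );
        flatten( t.right, out );
    }

    private static Node buildBalanced( List<Node> nodes, int lo, int hi )
    {
        if( lo > hi ) return null;
        int mid = ( lo + hi ) / 2;
        Node t = nodes.get( mid );
        t.left = buildBalanced( nodes, lo, mid - 1 );
        t.right = buildBalanced( nodes, mid + 1, hi );
        t.weight = weight( t.left ) + weight( t.right );
        return t;
    }

    static Node insert( int x, Node t )
    {
        if( t == null ) return new Node( x );
        if( x < t.key ) t.left = insert( x, t.left );
        else if( x > t.key ) t.right = insert( x, t.right );
        t.weight = weight( t.left ) + weight( t.right );
        return unbalanced( t ) ? rebuild( t ) : t;
    }
}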

12.16 Suppose we call rotateWithLeftChild on an arbitrary 2-d tree. Explain in detail all the reasons that the result is no longer a usable 2-d tree. - Get solution

12.17 Implement the insertion and range search for the k-d tree. Do not use recursion. - Get solution

12.18 Determine the time for partial match query for values of p corresponding to k = 3, 4, and 5. - Get solution

12.19 For a perfectly balanced k-d tree, derive the worst-case running time of a range query that is quoted in the text (see p. 581). - Get solution

12.20 The 2-d heap is a data structure that allows each item to have two individual keys. deleteMin can be performed with respect to either of these keys. The 2-d heap is a complete binary tree with the following order property: For any node X at even depth, the item stored at X has the smallest key #1 in its subtree, while for any node X at odd depth, the item stored at X has the smallest key #2 in its subtree.
a. Draw a possible 2-d heap for the items (1, 10), (2, 9), (3, 8), (4, 7), (5, 6).
b. How do we find the item with minimum key #1?
c. How do we find the item with minimum key #2?
d. Give an algorithm to insert a new item into the 2-d heap.
e. Give an algorithm to perform deleteMin with respect to either key.
f. Give an algorithm to perform buildHeap in linear time. - Get solution
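
A minimal sketch for parts (b) and (c), assuming the 2-d heap is stored in an array like an ordinary binary heap (1-based; names are illustrative): the root is at even depth, so it holds the overall minimum of key #1, and the overall minimum of key #2 must be at the root or at one of its two children, which are at odd depth.

// Sketch: the two findMin operations on an array-stored 2-d heap.
class TwoDHeapView
{
    static class Item
    {
        int key1, key2;
        Item( int k1, int k2 ) { key1 = k1; key2 = k2; }
    }

    Item[ ] items;      // 1-based, as in a standard binary heap
    int size;

    TwoDHeapView( Item[ ] items, int size )
    { this.items = items; this.size = size; }

    Item findMinKey1( )
    {
        return items[ 1 ];               // root has the smallest key #1 in its subtree
    }

    Item findMinKey2( )
    {
        Item best = items[ 1 ];          // min key #2 is at the root or a depth-1 node
        for( int child = 2; child <= 3 && child <= size; child++ )
            if( items[ child ].key2 < best.key2 )
                best = items[ child ];
        return best;
    }
}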

12.21 Generalize the preceding exercise to obtain a k-d heap, in which each item can have k individual keys. You should be able to obtain the following bounds: insert in O(log N), deleteMin in O(2^k log N), and buildHeap in O(kN). - Get solution

12.22 Show that the k-d heap can be used to implement a double-ended priority queue. - Get solution

12.23 Abstractly, generalize the k-d heap so that only levels that branch on key #1 have two children (all others have one).
a. Do we need links?
b. Clearly, the basic algorithms still work; what are the new time bounds? - Get solution

12.24 Use a k-d tree to implement deleteMin. What would you expect the average running time to be for a random tree? - Get solution

12.25 Use a k-d heap to implement a double-ended queue that also supports deleteMin. - Get solution

12.26 Implement the pairing heap with a nullNode sentinel. - Get solution

12.27 Show that the amortized cost of each operation is O(log N) for the pairing heap algorithm in the text. - Get solution

12.28 An alternative method for combineSiblings is to place all of the siblings on a queue, and repeatedly dequeue and merge the first two items on the queue, placing the result at the end of the queue. Implement this variation. - Get solution
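
A self-contained sketch of this variation, using a minimal pairing-heap node with only element, leftChild, and nextSibling fields (the text's version also keeps prev links for decreaseKey, omitted here; names are illustrative).

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: queue-based combineSiblings for a pairing heap.
class QueuePairingMerge
{
    static class PairNode
    {
        int element;
        PairNode leftChild, nextSibling;
        PairNode( int x ) { element = x; }
    }

    // Attach the larger root as the leftmost child of the smaller root.
    static PairNode compareAndLink( PairNode first, PairNode second )
    {
        if( second.element < first.element )
        {
            PairNode t = first; first = second; second = t;
        }
        second.nextSibling = first.leftChild;
        first.leftChild = second;
        return first;
    }

    // Enqueue all siblings, then repeatedly merge the front two trees and
    // put the result at the back, until one tree remains.
    static PairNode combineSiblings( PairNode firstSibling )
    {
        Queue<PairNode> q = new ArrayDeque<>( );
        for( PairNode p = firstSibling; p != null; )
        {
            PairNode next = p.nextSibling;
            p.nextSibling = null;
            q.add( p );
            p = next;
        }
        while( q.size( ) > 1 )
            q.add( compareAndLink( q.remove( ), q.remove( ) ) );
        return q.isEmpty( ) ? null : q.remove( );
    }
}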

12.29 Show that using a stack instead of a queue in the previous exercise is bad, by giving a sequence that leads to Θ(N) cost per operation. This is the left-to-right single-pass merge. - Get solution

12.30 Without decreaseKey, we can remove parent links. How competitive is the result with the skew heap? - Get solution

12.31 Assume that each of the following is represented as a tree with child and parent references. Explain how to implement a decreaseKey operation.
a. Binary heap
b. Splay tree - Get solution
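
A sketch for part (a), assuming the binary heap's nodes carry explicit left, right, and parent references (names are illustrative): lower the element, then percolate it up by swapping values with the parent while heap order is violated.

// Sketch: decreaseKey in a pointer-based binary heap.
class LinkedBinaryHeap
{
    static class HeapNode
    {
        int element;
        HeapNode left, right, parent;
    }

    static void decreaseKey( HeapNode p, int delta )
    {
        p.element -= delta;
        while( p.parent != null && p.element < p.parent.element )
        {
            int tmp = p.element;            // swap elements, not nodes,
            p.element = p.parent.element;   // so the tree shape is untouched
            p.parent.element = tmp;
            p = p.parent;
        }
    }
}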

12.32 When viewed graphically, each node in a 2-d tree partitions the plane into regions. For instance, Figure 12.51 shows the first five insertions into the 2-d tree in Figure 12.39. The first insertion, of p1, splits the plane into a left part and a right part. The second insertion, of p2, splits the left part into a top part and a bottom part, and so on.
a. For a given set of N items, does the order of insertion affect the final partition?
b. If two different insertion sequences result in the same tree, is the same partition produced?
c. Give a formula for the number of regions that result from the partition after N insertions.
d. Show the final partition for the 2-d tree in Figure 12.39. - Get solution

12.33 An alternative to the 2-d tree is the quad tree. Figure 12.52 shows how a plane is partitioned by a quad tree. Initially we have a region (which is often a square, but need not be). Each region may store one point. If a second point is inserted into a region, then the region is split into four equal-sized quadrants (northeast, southeast, southwest, and northwest). If this places the points in different quadrants (as when p2 is inserted), we are done; otherwise, we continue splitting recursively (as is done when p5 is inserted). A sketch of this splitting rule follows the exercise.
a. For a given set of N items, does the order of insertion affect the final partition?
b. Show the final partition if the same elements that were in the 2-d tree in Figure 12.39 are inserted into the quad tree. - Get solution
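
The sketch below (all names invented for illustration; duplicate points are assumed not to occur, since two equal points could never be separated by splitting) follows the description above: a region holds at most one point, and inserting a second point splits the region into four equal quadrants and pushes both points further down, recursively.

// Sketch: quad-tree insertion with recursive region splitting.
class QuadTree
{
    private static class Node
    {
        double[ ] point;             // the single stored point, or null
        Node ne, se, sw, nw;         // all null until the region is split
        boolean isLeaf( ) { return ne == null; }
    }

    private final double x0, y0, x1, y1;   // bounds of the original region
    private final Node root = new Node( );

    QuadTree( double x0, double y0, double x1, double y1 )
    { this.x0 = x0; this.y0 = y0; this.x1 = x1; this.y1 = y1; }

    public void insert( double x, double y )
    { insert( root, new double[ ] { x, y }, x0, y0, x1, y1 ); }

    private void insert( Node t, double[ ] p, double lx, double ly, double hx, double hy )
    {
        if( t.isLeaf( ) )
        {
            if( t.point == null )             // empty region: just store the point
            {
                t.point = p;
                return;
            }
            double[ ] old = t.point;          // occupied region: split it...
            t.point = null;
            t.ne = new Node( ); t.se = new Node( );
            t.sw = new Node( ); t.nw = new Node( );
            insert( t, old, lx, ly, hx, hy ); // ...and push the old point down
        }
        double mx = ( lx + hx ) / 2, my = ( ly + hy ) / 2;
        if( p[ 0 ] >= mx )
        {
            if( p[ 1 ] >= my ) insert( t.ne, p, mx, my, hx, hy );
            else               insert( t.se, p, mx, ly, hx, my );
        }
        else
        {
            if( p[ 1 ] >= my ) insert( t.nw, p, lx, my, mx, hy );
            else               insert( t.sw, p, lx, ly, mx, my );
        }
    }
}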

12.34 A tree data structure can store the quad tree. We maintain the bounds of the original region. The tree root represents the original region. Each node either is a leaf that stores an inserted item or has exactly four children, representing the four quadrants. To perform a search, we begin at the root and repeatedly branch to the appropriate quadrant until a leaf (or null entry) is reached.
a. Draw the quad tree that corresponds to Figure 12.52.
b. What factors influence how deep the (quad) tree will be?
c. Describe an algorithm that performs an orthogonal range query in a quad tree. (A sketch follows the exercise.) - Get solution
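
A sketch for part (c), assuming the node representation just described (a leaf stores at most one point; an internal node has exactly four quadrant children); the recursion visits a quadrant only when its region can intersect the query rectangle. All names are illustrative.

import java.util.List;

// Sketch: orthogonal range query on a quad tree.
class QuadRangeQuery
{
    static class Node
    {
        double[ ] point;             // non-null only in an occupied leaf
        Node[ ] child;               // null for a leaf, else length 4:
                                     // 0 = sw, 1 = se, 2 = nw, 3 = ne
    }

    // Report all stored points whose coordinates lie in [qlx,qhx] x [qly,qhy];
    // (lx,ly)-(hx,hy) are the bounds of the region represented by t.
    static void query( Node t, double lx, double ly, double hx, double hy,
                       double qlx, double qly, double qhx, double qhy, List<double[ ]> out )
    {
        if( t == null || qlx > hx || qhx < lx || qly > hy || qhy < ly )
            return;                  // region misses the query rectangle
        if( t.child == null )
        {
            if( t.point != null && t.point[ 0 ] >= qlx && t.point[ 0 ] <= qhx
                                && t.point[ 1 ] >= qly && t.point[ 1 ] <= qhy )
                out.add( t.point );
            return;
        }
        double mx = ( lx + hx ) / 2, my = ( ly + hy ) / 2;
        query( t.child[ 0 ], lx, ly, mx, my, qlx, qly, qhx, qhy, out );   // sw
        query( t.child[ 1 ], mx, ly, hx, my, qlx, qly, qhx, qhy, out );   // se
        query( t.child[ 2 ], lx, my, mx, hy, qlx, qly, qhx, qhy, out );   // nw
        query( t.child[ 3 ], mx, my, hx, hy, qlx, qly, qhx, qhy, out );   // ne
    }
}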