KthSmallestElementInABST [source code]
public class KthSmallestElementInABST {
static
/******************************************************************************/
class Solution {
    int count;

    public int kthSmallest(TreeNode root, int k) {
        count = k;
        return in_order(root).val;
    }

    TreeNode in_order(TreeNode root) {
        if (root == null)
            return null;
        TreeNode left = in_order(root.left);
        if (left != null)
            return left;
        return --count == 0 ? root : in_order(root.right);
    }
}
/******************************************************************************/

public static void main(String[] args) {
    KthSmallestElementInABST.Solution tester = new KthSmallestElementInABST.Solution();
}
}
The brute-force approach is straightforward: just walk the whole tree while maintaining a size-k PriorityQueue. But that clearly doesn't meet the complexity bound the hint asks for.
I thought about this one for a long time and eventually gave up.
The solution above is modeled on the best solution from the discussion; it runs in 0ms (93%), so it should be optimal, and the code is very concise.
This problem ended up taking quite a lot of time, for several reasons: although it looks simple, there is actually a lot to learn from it, and studying over a holiday while half-playing is not very efficient.
It also reminded me of a standard way of handling BSTs that I summarized before: just treat the BST as an array. As long as you can traverse a BST in-order, traversing it is no different from traversing an array.
Of course, the final understanding of this problem goes beyond that.
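To make the "BST as array" point concrete, here is a minimal self-contained sketch (the tree-building helpers and class name are mine, for illustration only): an in-order walk of any BST emits its keys in ascending order, so the kth smallest is just the (k-1)-th element of that walk.

```java
import java.util.ArrayList;
import java.util.List;

public class InorderIsSorted {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int v) { val = v; }
    }

    // Plain BST insert, only used to build a sample tree.
    static TreeNode insert(TreeNode root, int v) {
        if (root == null) return new TreeNode(v);
        if (v < root.val) root.left = insert(root.left, v);
        else root.right = insert(root.right, v);
        return root;
    }

    // In-order walk: for any BST, the collected keys come out ascending.
    static void inorder(TreeNode n, List<Integer> out) {
        if (n == null) return;
        inorder(n.left, out);
        out.add(n.val);
        inorder(n.right, out);
    }

    // So the kth smallest is just index k-1 of the walk.
    static int kthSmallest(TreeNode root, int k) {
        List<Integer> keys = new ArrayList<>();
        inorder(root, keys);
        return keys.get(k - 1);
    }

    static int kthOfSample(int k) {
        TreeNode root = null;
        for (int v : new int[]{5, 3, 8, 1, 4, 7, 9}) root = insert(root, v);
        return kthSmallest(root, k);
    }

    public static void main(String[] args) {
        System.out.println(kthOfSample(3)); // 4 (sorted order: 1 3 4 5 7 8 9)
    }
}
```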
Here is one solution from the discussion:
public int kthSmallest(TreeNode root, int k) {
    int count = countNodes(root.left);
    if (k <= count) {
        return kthSmallest(root.left, k);
    } else if (k > count + 1) {
        return kthSmallest(root.right, k - 1 - count); // 1 is counted as current node
    }
    return root.val;
}

public int countNodes(TreeNode n) {
    if (n == null) return 0;
    return 1 + countNodes(n.left) + countNodes(n.right);
}
The author considers this solution preferable, but its complexity is actually quite bad, arguably worse than just traversing with a PriorityQueue. It looks like it uses the BST property, but it uses it poorly. The algorithm has a quicksort flavor, a hint of divide-and-conquer, but it is a clumsy imitation.
Many people downthread put this algorithm at average N log N and N^2 worst case. The N^2 worst case is easy to see with the degenerate, list-like tree: the count at the root takes N, the next level takes N-1, and so on down — a sum-1-to-N situation, hence N^2. For a balanced tree, though, the picture is different from what I first thought: each step only counts the left subtree of the current node, and the recursion descends into just one child, so the counting costs along the search path are roughly N/2 + N/4 + N/8 + ..., a geometric series that sums to O(N). An aggregate analysis — asking how many times each node gets counted — agrees: a node is counted once per ancestor on the search path whose counted left subtree contains it. So the quadratic blow-up really comes from unbalanced trees; I'm still not sure exactly how people arrive at the average N log N figure.
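To check the quadratic worst case concretely, here is an instrumented copy of the counting solution above (the visit counter, chain builder, and class name are my additions for illustration): on a left-skewed chain of n nodes, finding k = 1 makes countNodes touch (n-1) + (n-2) + ... + 0 = n(n-1)/2 nodes.

```java
public class QuadraticCountDemo {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int v) { val = v; }
    }

    static long visits; // how many non-null nodes countNodes touches

    static int countNodes(TreeNode n) {
        if (n == null) return 0;
        visits++;
        return 1 + countNodes(n.left) + countNodes(n.right);
    }

    // Same logic as the discussion solution above.
    static int kthSmallest(TreeNode root, int k) {
        int count = countNodes(root.left);
        if (k <= count) return kthSmallest(root.left, k);
        if (k > count + 1) return kthSmallest(root.right, k - 1 - count);
        return root.val;
    }

    // Degenerate BST: n -> n-1 -> ... -> 1, all through left pointers.
    static TreeNode leftChain(int n) {
        TreeNode root = null;
        for (int v = 1; v <= n; v++) {
            TreeNode node = new TreeNode(v);
            node.left = root;
            root = node;
        }
        return root;
    }

    static long visitsForK1(int n) {
        visits = 0;
        kthSmallest(leftChain(n), 1);
        return visits; // (n-1) + (n-2) + ... + 0 = n(n-1)/2
    }

    public static void main(String[] args) {
        System.out.println(visitsForK1(100)); // 4950 = 100 * 99 / 2
    }
}
```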
Here is another approach by the same author:
// better keep these two variables in a wrapper class
private static int number = 0;
private static int count = 0;

public int kthSmallest(TreeNode root, int k) {
    count = k;
    helper(root);
    return number;
}

public void helper(TreeNode n) {
    if (n.left != null) helper(n.left);
    count--;
    if (count == 0) {
        number = n.val;
        return;
    }
    if (n.right != null) helper(n.right);
}
The idea of this algorithm isn't complicated either, and it is more sensible than the previous one: that one looked principled but had terrible complexity, while this one actually uses the in-order hint. Can it reach the O(H) the hint claims? The naive reading is O(H + K): descend to the minimum, then count off k nodes, which exploits the BST structure nicely. Note, though, that as written it doesn't actually stop early: after count hits 0 deep in the tree, the ancestors still execute count-- (driving it negative) and still recurse into their right subtrees, so the worst case is O(N). An extra guard such as `if (count <= 0) return;` at the top of helper would make it genuinely O(H + K). The O(H) connection still seems plausible — for a degenerate line, the worst case is walking to the bottom of the height — and with that guard this is probably about as good as it gets.
Looking back, this algorithm isn't all that hard to come up with; everything flows quite naturally. I don't know whether I'm just rusty, but I didn't solve this one. I do want to internalize this approach, because this way of walking a BST looks like a very general technique.
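For comparison, here is the countdown-style walk from the solution at the top of this file, wrapped into a self-contained testable form (the insert/sample helpers and class name are mine): unlike the helper above, once the kth node is found, the non-null return value short-circuits every pending ancestor call, so it really does stop early.

```java
public class CountdownInorderDemo {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int v) { val = v; }
    }

    static int count;

    static int kthSmallest(TreeNode root, int k) {
        count = k;
        return inOrder(root).val;
    }

    static TreeNode inOrder(TreeNode root) {
        if (root == null) return null;
        TreeNode left = inOrder(root.left);
        if (left != null) return left;             // kth already found below: unwind
        return --count == 0 ? root : inOrder(root.right);
    }

    static TreeNode insert(TreeNode root, int v) { // only used to build a sample tree
        if (root == null) return new TreeNode(v);
        if (v < root.val) root.left = insert(root.left, v);
        else root.right = insert(root.right, v);
        return root;
    }

    static int kthOfSample(int k) {
        TreeNode root = null;
        for (int v : new int[]{5, 3, 8, 1, 4, 7, 9}) root = insert(root, v);
        return kthSmallest(root, k);
    }

    public static void main(String[] args) {
        System.out.println(kthOfSample(4)); // 5 (sorted order: 1 3 4 5 7 8 9)
    }
}
```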
He also posted an iterative version later:
public int kthSmallest(TreeNode root, int k) {
    Stack<TreeNode> st = new Stack<>();
    while (root != null) {
        st.push(root);
        root = root.left;
    }
    while (k != 0) {
        TreeNode n = st.pop();
        k--;
        if (k == 0) return n.val;
        TreeNode right = n.right;
        while (right != null) {
            st.push(right);
            right = right.left;
        }
    }
    return -1; // never hit if k is valid
}
I'll skip a deep dive for now — I really don't remember iterative DFS — but it shouldn't be hard. The advantage of the iterative form here is that a premature exit is easier to implement.
At a glance, I do roughly remember the idea: push the whole left spine. But from what I vaguely recall of my earlier notes, this particular form may not be the best one, because it needs a separate initial push before the main loop; the better form avoids that.
Overall this author is quite strong: all three algorithms have their own merits, and the optimal solution is among them.
He later wrote a new solution for a changed requirement — it's in Python, but not especially hard to follow:
note: requirement has been changed a bit since last time I visited that the counting could be looked up frequently and BST itself could be altered (inserted/deleted) by multiple times, so that's the main reason that I stored them in an array.
class Solution(object):
    def kthSmallest(self, root, k):
        """
        :type root: TreeNode
        :type k: int
        :rtype: int
        """
        count = []
        self.helper(root, count)
        return count[k - 1]

    def helper(self, node, count):
        if not node:
            return
        self.helper(node.left, count)
        count.append(node.val)
        self.helper(node.right, count)
One thing worth recalling here: in-order traversal is itself a weakened form of the BST property — namely, that the in-order traversal of a BST yields an ordered list.
But I'm still not sure I fully follow his explanation. The approach itself is simple: walk the tree once and dump everything into an array (in-order). Why does that help with the follow-up's "frequently" requirement?
One way to make sense of it: treat it like memoization. Re-run the helper only after a modification; between modifications, queries for different k just read straight out of the array.
His code doesn't show that memoization, but in a real interview it would be enough to explain it.
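A minimal sketch of that memoization idea (the class and method names are mine; the discussion shows no such code): cache the in-order array, invalidate it on every modification, and rebuild it lazily on the next query, so a burst of kth queries between modifications costs O(1) each after one O(n) rebuild.

```java
import java.util.ArrayList;
import java.util.List;

public class CachedKth {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int v) { val = v; }
    }

    private TreeNode root;
    private List<Integer> sorted; // null means the cache is stale

    CachedKth(TreeNode root) { this.root = root; }

    void insert(int v) {
        root = insertNode(root, v);
        sorted = null;            // invalidate the cache on modification
    }

    int kthSmallest(int k) {
        if (sorted == null) {     // rebuild: one O(n) in-order walk
            sorted = new ArrayList<>();
            inorder(root, sorted);
        }
        return sorted.get(k - 1); // O(1) while the cache is fresh
    }

    private static TreeNode insertNode(TreeNode n, int v) {
        if (n == null) return new TreeNode(v);
        if (v < n.val) n.left = insertNode(n.left, v);
        else n.right = insertNode(n.right, v);
        return n;
    }

    private static void inorder(TreeNode n, List<Integer> out) {
        if (n == null) return;
        inorder(n.left, out);
        out.add(n.val);
        inorder(n.right, out);
    }

    // Demo: 2nd smallest before and after inserting 4.
    static int[] demo() {
        CachedKth t = new CachedKth(null);
        for (int v : new int[]{5, 3, 8}) t.insert(v);
        int before = t.kthSmallest(2); // sorted: [3, 5, 8] -> 5
        t.insert(4);                   // cache invalidated here
        int after = t.kthSmallest(2);  // sorted: [3, 4, 5, 8] -> 4
        return new int[]{before, after};
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1]); // 5 4
    }
}
```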
Here is another way to write the same in-order idea:
public class Solution {
    int count = 0;
    int current = 0;

    public int kthSmallest(TreeNode root, int k) {
        inOrderTraversal(root, k);
        return current;
    }

    public void inOrderTraversal(TreeNode root, int k) {
        if (count == k) return;
        if (root.left != null) inOrderTraversal(root.left, k);
        if (count == k) return; // break after already found kth smallest
        count++;
        if (count == k) {
            current = root.val;
            return;
        }
        if (root.right != null) inOrderTraversal(root.right, k);
    }
}
The overall idea is the same.
Here is a good analysis from further down the thread:
@caikehe said in 3 ways implemented in JAVA (Python): Binary Search, in-order iterative & recursive:
Actually the first idea is not bad, if we augment the TreeNode data structure, we can get O(h) time solution. Here (http://www.geeksforgeeks.org/find-k-th-smallest-element-in-bst-order-statistics-in-bst/) we can find very detailed description. What angelvivienne did is just implementing "lCount" by using the recursive method.
Method 2: Augmented Tree Data Structure.
The idea is to maintain rank of each node. We can keep track of elements in a subtree of any node while building the tree. Since we need K-th smallest element, we can maintain number of elements of left subtree in every node.
Assume that the root is having N nodes in its left subtree. If K = N + 1, root is K-th node. If K < N, we will continue our search (recursion) for the Kth smallest element in the left subtree of root. If K > N + 1, we continue our search in the right subtree for the (K – N – 1)-th smallest element. Note that we need the count of elements in left subtree only.
Time complexity: O(h) where h is height of tree.
This analysis from 2015 is quite good, even if it isn't especially complicated.
Note, echoing my own analysis above: the O(H) bound is genuinely not that intuitive, but this kind of analysis comes up often in BST problems. Don't settle for a naive O(N) — that bound is far too loose; an algorithm that successfully exploits the BST property can usually reach O(H).
The answer above was first posted by a 3000-point heavyweight; later someone supplied code for it:
@yzxu1101 said in 3 ways implemented in JAVA (Python): Binary Search, in-order iterative & recursive:
O(h) (h = height) time complexity by modifying the TreeNode structure to add a left-subtree node count, then finding the kth smallest element based on it (http://www.geeksforgeeks.org/find-k-th-smallest-element-in-bst-order-statistics-in-bst/)
1. Travel the tree by level and insert each node into a TreeNodeWithCount tree.
2. Find the kth smallest in the TreeNodeWithCount tree.
public class TreeNodeWithCount {
    int val;
    int lCount;
    TreeNodeWithCount left;
    TreeNodeWithCount right;
    TreeNodeWithCount(int x) { val = x; }
}

public int kthSmallest(TreeNode root, int k) {
    if (root == null) return -1;
    TreeNodeWithCount rootWithCount = createBSTWithCount(root);
    return kthSmallestWithCount(rootWithCount, k);
}

public TreeNodeWithCount createBSTWithCount(TreeNode root) {
    Queue<TreeNode> queue = new LinkedList<TreeNode>();
    queue.add(root);
    TreeNodeWithCount rootWithCount = null;
    while (!queue.isEmpty()) {
        TreeNode node = queue.remove();
        TreeNodeWithCount nodeWithCount = new TreeNodeWithCount(node.val);
        rootWithCount = insertBSTWithCount(rootWithCount, nodeWithCount);
        if (node.left != null) queue.add(node.left);
        if (node.right != null) queue.add(node.right);
    }
    return rootWithCount;
}

public TreeNodeWithCount insertBSTWithCount(TreeNodeWithCount rootWithCount, TreeNodeWithCount nodeWithCount) {
    TreeNodeWithCount cur = rootWithCount, parent = rootWithCount;
    while (cur != null) {
        parent = cur;
        if (nodeWithCount.val < cur.val) {
            cur.lCount++;
            cur = cur.left;
        } else {
            cur = cur.right;
        }
    }
    if (rootWithCount == null) {
        rootWithCount = nodeWithCount;
    } else if (nodeWithCount.val < parent.val) {
        parent.left = nodeWithCount;
    } else {
        parent.right = nodeWithCount;
    }
    return rootWithCount;
}

public int kthSmallestWithCount(TreeNodeWithCount rootWithCount, int k) {
    while (rootWithCount != null) {
        if (k == rootWithCount.lCount + 1) {
            return rootWithCount.val;
        } else if (k <= rootWithCount.lCount) {
            rootWithCount = rootWithCount.left;
        } else {
            k = k - rootWithCount.lCount - 1;
            rootWithCount = rootWithCount.right;
        }
    }
    return -1;
}
They explain the overall idea clearly. The point of this algorithm is to fix the flaw in that earlier author's first approach — too much repeated work, hence bad complexity. By augmenting the tree to maintain each node's rank, no repeated counting is needed.
The implementation is not trivial, though, especially the insert: when your BST maintains ranks, what is the most convenient way to insert? That is genuinely worth thinking about. It looks like an ordinary BST insert, but it is a bit more interesting.
The core structure is simple enough — it just rebuilds the tree — but that makes the total cost hard to pin down; it is certainly more than O(N) and depends heavily on the insertion order. Here the code inserts in BFS order over the original tree, which reproduces the original shape, so the rebuild costs the sum of node depths, i.e. O(N·H). Is BFS order necessarily best? Hard to say — no argument is given. But it is the most direct choice, because you have the final root from the very first insert; with other orders you might keep having to replace the root, which gets messy.
Once the counts are in place, the actual search for k is simple.
If I had the basic algorithms from Sedgewick's book down cold, I might have been able to write this out on the spot. Fundamentals really are the kind of thing you don't know you need until you need them.
Someone said insert/delete is average lg N; I don't quite remember and may need to check the book. I recall a problem about rebuilding a BST directly from a serialization — I forget whether it had an O(N) solution. If it does, could this rebuild also be done in O(N)? Probably not that simple, because the rebuild here also has to maintain ranks, and I don't know whether that deserialization algorithm can do that.
For the iterative version of the in-order solution above: here is someone else who noticed the same verbosity issue I raised and offered an improvement:
public int kthSmallest(TreeNode root, int k) {
    Stack<TreeNode> stack = new Stack<TreeNode>();
    TreeNode p = root;
    int count = 0;
    while (!stack.isEmpty() || p != null) {
        if (p != null) {
            stack.push(p); // just like recursion
            p = p.left;
        } else {
            TreeNode node = stack.pop();
            if (++count == k) return node.val;
            p = node.right;
        }
    }
    return Integer.MIN_VALUE;
}
This answer is from 2015, so plenty of people could already write iterative DFS fluently back then.
The code is very concise: the two OR-ed conditions in the while are distinguished inside the body and each handled in turn, with the count increment inlined. This author is clearly a very practiced programmer.
Here is another algorithm that uses augmentation:
@WHJ425 said in What if you could modify the BST node's structure?:
If we could add a count field in the BST node class, it will take O(n) time when we calculate the count value for the whole tree, but after that, it will take O(logn) time when insert/delete a node or calculate the kth smallest element.
public class Solution {
    public int kthSmallest(TreeNode root, int k) {
        TreeNodeWithCount rootWithCount = buildTreeWithCount(root);
        return kthSmallest(rootWithCount, k);
    }

    private TreeNodeWithCount buildTreeWithCount(TreeNode root) {
        if (root == null) return null;
        TreeNodeWithCount rootWithCount = new TreeNodeWithCount(root.val);
        rootWithCount.left = buildTreeWithCount(root.left);
        rootWithCount.right = buildTreeWithCount(root.right);
        if (rootWithCount.left != null) rootWithCount.count += rootWithCount.left.count;
        if (rootWithCount.right != null) rootWithCount.count += rootWithCount.right.count;
        return rootWithCount;
    }

    private int kthSmallest(TreeNodeWithCount rootWithCount, int k) {
        if (k <= 0 || k > rootWithCount.count) return -1;
        if (rootWithCount.left != null) {
            if (rootWithCount.left.count >= k) return kthSmallest(rootWithCount.left, k);
            if (rootWithCount.left.count == k - 1) return rootWithCount.val;
            return kthSmallest(rootWithCount.right, k - 1 - rootWithCount.left.count);
        } else {
            if (k == 1) return rootWithCount.val;
            return kthSmallest(rootWithCount.right, k - 1);
        }
    }

    class TreeNodeWithCount {
        int val;
        int count;
        TreeNodeWithCount left;
        TreeNodeWithCount right;
        TreeNodeWithCount(int x) { val = x; count = 1; }
    }
}
What this algorithm maintains is actually simpler: just the subtree count itself — no rank needed — and computing the counts looks even simpler, a single recursion does it. The algorithm itself is very intuitive.
Here is another solution proposed below it:
class TreeNodeWithCount {
public:
    int val;
    int count;
    TreeNodeWithCount* left;
    TreeNodeWithCount* right;
    TreeNodeWithCount(int x) { val = x; count = 1; left = NULL; right = NULL; }
};

class Solution {
public:
    TreeNodeWithCount* buildTreeWithCount(TreeNode* root, int& n) {
        if (!root) return NULL;
        TreeNodeWithCount* rootWithCount = new TreeNodeWithCount(root->val);
        rootWithCount->left = buildTreeWithCount(root->left, n);
        rootWithCount->count = n++;
        rootWithCount->right = buildTreeWithCount(root->right, n);
        return rootWithCount;
    }

    int kthSmallest(TreeNodeWithCount* rootWithCount, int k) {
        if (rootWithCount->count > k)
            return kthSmallest(rootWithCount->left, k);
        else if (rootWithCount->count == k)
            return rootWithCount->val;
        else
            return kthSmallest(rootWithCount->right, k);
    }

    int kthSmallest(TreeNode* root, int k) {
        int n = 1;
        TreeNodeWithCount* rootWithCount = buildTreeWithCount(root, n);
        return kthSmallest(rootWithCount, k);
    }
};
This idea is also simple: his count field stores the in-order index directly. That works just as well — the effect is similar, and in fact this "order" is the same thing as the rank from before.
Here is yet another version:
public class Solution {
    Map<Integer, Integer> map;

    public int kthSmallest(TreeNode root, int k) {
        map = new HashMap<>();
        initMap(root);
        TreeNode node = root;
        int val = map.get(node.val);
        while (val != k - 1) {
            if (val >= k) {
                node = node.left;
            } else {
                k -= val + 1;
                node = node.right;
            }
            val = map.get(node.val);
        }
        return node.val;
    }

    private int initMap(TreeNode root) {
        int res = 1;
        if (root.left == null) {
            map.put(root.val, 0);
        } else {
            int left = initMap(root.left);
            map.put(root.val, left);
            res += left;
        }
        if (root.right == null) {
            return res;
        }
        return res + initMap(root.right);
    }
}
The Map here still stores the rank concept, but his helper (which preprocesses the tree) is interesting: its main job is to record each node's rank in the map, while its return value is the subtree node count. That's a fairly sophisticated recursion design. Note that res starts at 1, because the node itself contributes one to the count. The code is concise overall. Also note where the map is initialized: at the start of each kthSmallest call — a placement that shows good coding habits.
The main advantage of this design is that no augmentation of the node structure is needed: one recursive pass over the tree plus the map does it, and it's a guaranteed O(N) — worst case — since no insert operations are involved.
@joe-cai said in What if you could modify the BST node's structure?:
It's not O(log n). O(h) instead. h is the height of the BST.
That take is correct: when discussing tree complexity, stating bounds in terms of H often covers the general case much better, since O(log n) silently assumes the tree is balanced.
Here is yet another person's approach:
@yavinci said in Two Easiest In Order Traverse (Java):
In order traverse for BST gives the natural order of numbers. No need to use array.
Recursive:
int count = 0;
int result = Integer.MIN_VALUE;

public int kthSmallest(TreeNode root, int k) {
    traverse(root, k);
    return result;
}

public void traverse(TreeNode root, int k) {
    if (root == null) return;
    traverse(root.left, k);
    count++;
    if (count == k) result = root.val;
    traverse(root.right, k);
    // can be optimized to: if (count < k) traverse(root.right, k);
}
As you can see, the in-order algorithm itself really is simple. His original version has no premature exit — a small problem, easily fixed.
Iterative:
public int kthSmallest(TreeNode root, int k) {
    Stack<TreeNode> stack = new Stack<TreeNode>();
    TreeNode p = root;
    int count = 0;
    while (!stack.isEmpty() || p != null) {
        if (p != null) {
            stack.push(p); // just like recursion
            p = p.left;
        } else {
            TreeNode node = stack.pop();
            if (++count == k) return node.val;
            p = node.right;
        }
    }
    return Integer.MIN_VALUE;
}
This is the optimized iterative version.
@datahunter said in Simple and Clean Java solution with explanation:
public static int ans = 0;

public int kthSmallest(TreeNode root, int k) {
    helper(root, k);
    return ans;
}

public int helper(TreeNode root, int k) {
    if (root == null) {
        return 0;
    }
    int leftCount = helper(root.left, k);
    int rightCount = helper(root.right, k - leftCount - 1);
    if (k == leftCount + 1) {
        ans = root.val;
    }
    return leftCount + rightCount + 1;
}
We count the number of nodes of left sub tree and right sub tree recursively. Suppose the Kth smallest element is in the right sub tree, then we need to update k as k - leftCount - 1 (leftCount + 1 is the number of nodes of left sub tree plus the root node). Only when k equals leftCount + 1, we find the target.
This is another similar approach: a small tweak to the recursion to good effect, with the subtree node count as the return value. It shares the problem of the solution above, though: no premature exit. And because the helper's return value is meaningful here, you can't simply return early; a better fix is a global flag used to prune the recursion.
Note that the shape here is effectively post-order, but if you add a premature exit it's better to move to in-order, since the "process" step only uses the left result. On reflection, the post-order shape could be defended — if k isn't at the current node, it lies in exactly one of the two subtrees — but after thinking it through, a pruning flag still handles it best: the left descent always runs; then comes the guarded processing; and if there is no hit there and nothing has been pruned yet, the right descent doesn't itself need pruning. So in-order is still preferable.
The arithmetic is also intuitive: the kth element has k - 1 elements to its left, so the check is just k - 1 == leftCount, and the adjusted k passed to the right subtree follows the same logic. Very direct number handling.
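A sketch of that pruning-flag fix (my own variant, not from the post; I also moved the check between the two descents, into in-order position, as suggested above): once `found` is set, every remaining helper call returns immediately. The counts it returns after that point are wrong, but they are never compared against k again, so the recorded answer stands.

```java
public class PrunedCountingKth {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int v) { val = v; }
    }

    static int ans;
    static boolean found;

    static int kthSmallest(TreeNode root, int k) {
        ans = 0;
        found = false;
        helper(root, k);
        return ans;
    }

    // Same counting recursion as above, plus the flag: whole subtrees are
    // skipped once the answer is recorded.
    static int helper(TreeNode root, int k) {
        if (root == null || found) return 0;
        int leftCount = helper(root.left, k);
        if (!found && k == leftCount + 1) { // in-order position: before the right descent
            ans = root.val;
            found = true;
        }
        int rightCount = helper(root.right, k - leftCount - 1);
        return leftCount + rightCount + 1;
    }

    static TreeNode insert(TreeNode root, int v) { // only used to build a sample tree
        if (root == null) return new TreeNode(v);
        if (v < root.val) root.left = insert(root.left, v);
        else root.right = insert(root.right, v);
        return root;
    }

    static int kthOfSample(int k) {
        TreeNode root = null;
        for (int v : new int[]{5, 3, 8, 1, 4, 7, 9}) root = insert(root, v);
        return kthSmallest(root, k);
    }

    public static void main(String[] args) {
        System.out.println(kthOfSample(2)); // 3 (sorted order: 1 3 4 5 7 8 9)
        System.out.println(kthOfSample(6)); // 8
    }
}
```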
Here is another solution:
@yfcheng said in Java divide-and-conquer solution considering augmenting tree structure for the follow-up:
The idea behind the follow-up question is what extra information is required for divide-and-conquer. Basically, if we know the number of nodes in the left subtree, we know the position of the root node in the in-order traversal, which is basically the kth number. The left count can be saved in each node of the tree, and when we are finding the kth number, the complexity is O(lgn).
public class Solution {
    public int kthSmallest(TreeNode root, int k) {
        int left = nodeCount(root.left); // this value can be saved in the root node
        if (left + 1 == k) {
            return root.val;
        } else if (left + 1 < k) {
            return kthSmallest(root.right, k - left - 1);
        } else {
            return kthSmallest(root.left, k);
        }
    }

    private int nodeCount(TreeNode root) {
        if (root == null) {
            return 0;
        }
        return 1 + nodeCount(root.left) + nodeCount(root.right);
    }
}
This is essentially the solution from the beginning that I considered weak, though my complexity analysis at the time wasn't quite on target. Someone below analyzed it: the complexity is actually O(N):
@Tiejun said in Java divide-and-conquer solution considering augmenting tree structure for the follow-up:
Hi yfcheng,
Nice short code, but I think your code is O(n), not O(lgn), because you have to count at least half of the nodes before you get the right answer.
And this: "the left value can be saved in each node of the tree". Are you suggesting modifying the tree node and adding a new member to it, so that each node always know how many children are in its left subtree? If so, whenever the bst is updated, you may also need to update a bunch of nodes to reflect that change.
If I am allowed to add a new member to the node, I would probably add a pointer to each node so that it always knows its parent. With this pointer, it could be much easier to find the next element. I will always hold a pointer P to the kth element. If things change in elements greater than the kth element, take no action. If an element in the first kth element is deleted, we find the next element of P. If a new element less than P is added, find the previous element of P.
Happy coding!
This person not only gives a sound analysis but also proposes a genuinely effective answer to the follow-up. The earlier idea of storing everything in an array doesn't seem good to me: it only helps with frequent kth lookups and does nothing for frequent modifications. The parent-pointer approach here — effectively storing the in-order ordering in place — strikes me as the smarter one.
@Tiejun said in Java divide-and-conquer solution considering augmenting tree structure for the follow-up:
Hi mahk,
Let's define the tree node first
typedef struct tagNODE {
    int val;
    tagNODE *left, *right;
} NODE, *PNODE;
1) lgn + k
BST inorder traversal: first, it takes lgn steps to find the minimum node, then we can find all the following nodes with no effort, that's the beauty of the stack. When we have k nodes, we quit.
c++ code:
PNODE _kth(PNODE head, int& count, int k) {
    if (!head) return NULL;
    PNODE p = _kth(head->left, count, k);
    if (p) return p;
    count++;
    if (count == k) return head;
    p = _kth(head->right, count, k);
    if (p) return p;
    return NULL;
}

PNODE kth(PNODE head, int k) {
    int count = 0;
    return _kth(head, count, k);
}
2) Parent point optimization
The parent pointer optimization should look like this:
Enchanced node:
typedef struct tagNODE {
    int val;
    tagNODE *left, *right;
    tagNODE* parent;
} NODE, *PNODE;
The reason is that, with parent pointer, we don't have to start from the head and we can start from any node. That's why we hold a pointer P to the kth node.
For example, here is an implementation of finding the successor.
// Find the minimum node from subtree p
PNODE minnode(PNODE p) {
    if (!p) return NULL;
    while (p) {
        if (p->left) p = p->left;
        else break;
    }
    return p;
}

// Find the next node of p
PNODE successor(NODE* p) {
    if (!p) return NULL;
    if (p->right) return minnode(p->right);
    else {
        PNODE parent = p->parent;
        while (parent) {
            if (parent->left == p) return parent;
            if (parent->parent && parent->parent->left == parent) return parent->parent;
            parent = parent->parent;
        }
        return NULL;
    }
}
As for the predecessor, there is a similar function.
Ideally, no matter how many nodes the tree has, the time complexity of finding the predecessor or successor of the current kth node P would be something like O(lgk), if k is small enough, almost O(1)
The solution this person gives here is itself the optimal in-order answer.
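The predecessor the post alludes to can be sketched like this in Java (node and helper names are mine; the post gives no predecessor code): it is the mirror image of successor — the maximum of the left subtree if one exists, otherwise the nearest ancestor whose right subtree contains the node.

```java
public class PredecessorDemo {
    static class Node {
        int val;
        Node left, right, parent;
        Node(int v) { val = v; }
    }

    // Rightmost node of a subtree: its maximum key.
    static Node maxNode(Node p) {
        while (p != null && p.right != null) p = p.right;
        return p;
    }

    // In-order predecessor: max of the left subtree if it exists, otherwise
    // walk up while we are a left child; the first ancestor we are right of
    // is the predecessor (null means p is the minimum).
    static Node predecessor(Node p) {
        if (p == null) return null;
        if (p.left != null) return maxNode(p.left);
        Node cur = p, parent = p.parent;
        while (parent != null && parent.left == cur) {
            cur = parent;
            parent = parent.parent;
        }
        return parent;
    }

    static Node insert(Node root, int v) { // keeps parent pointers updated
        if (root == null) return new Node(v);
        if (v < root.val) {
            root.left = insert(root.left, v);
            root.left.parent = root;
        } else {
            root.right = insert(root.right, v);
            root.right.parent = root;
        }
        return root;
    }

    static Node find(Node root, int v) {
        while (root != null && root.val != v)
            root = v < root.val ? root.left : root.right;
        return root;
    }

    static Integer predecessorOf(int v) {
        Node root = null;
        for (int x : new int[]{5, 3, 8, 1, 4, 7, 9}) root = insert(root, x);
        Node p = predecessor(find(root, v));
        return p == null ? null : p.val;
    }

    public static void main(String[] args) {
        System.out.println(predecessorOf(5)); // 4
        System.out.println(predecessorOf(7)); // 5
        System.out.println(predecessorOf(1)); // null (1 is the minimum)
    }
}
```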
Someone proposed a further refinement of the follow-up idea above:
@CodingFlower said in Java divide-and-conquer solution considering augmenting tree structure for the follow-up:
@Tiejun I think we can keep a next (in terms of in-order) pointer on each node. Then delete & insert will have least effect on the big O since we can easily find the precedent and next node.
Problem Description
Given a binary search tree, write a function kthSmallest to find the kth smallest element in it.
Note:
You may assume k is always valid, 1 ≤ k ≤ BST's total elements.
Follow up:
What if the BST is modified (insert/delete operations) often and you need to find the kth smallest frequently? How would you optimize the kthSmallest routine?
Credits:
Special thanks to @ts for adding this problem and creating all test cases.
Difficulty: Medium
Total Accepted: 107K
Total Submissions: 243.8K
Contributor: LeetCode
Companies
bloomberg uber google
Related Topics
binary search tree
Similar Questions
Binary Tree Inorder Traversal Second Minimum Node In a Binary Tree
Hint 1: Try to utilize the property of a BST.
Hint 2: Try in-order traversal. (Credits to @chan13)
Hint 3: What if you could modify the BST node's structure?
Hint 4: The optimal runtime complexity is O(height of BST).