HackerRank Solutions: Algorithm and Data Structure Mastery

In this post, I’ll share insights from my HackerRankCodes project: a collection of solutions to HackerRank challenges spanning multiple programming languages, difficulty levels, and algorithmic domains.

Project Overview

The HackerRankCodes project represents a systematic approach to competitive programming and algorithmic problem-solving. It showcases solutions to problems spanning various difficulty levels, from basic algorithms to advanced data structures and complex optimization challenges.

Problem Categories Covered

Algorithm Fundamentals

  • Sorting Algorithms: Quick sort, merge sort, heap sort implementations
  • Search Algorithms: Binary search, linear search, and advanced search techniques
  • Graph Algorithms: BFS, DFS, shortest path, minimum spanning tree
  • Dynamic Programming: Memoization, tabulation, and optimization problems
  • Greedy Algorithms: Activity selection, fractional knapsack, scheduling

Data Structures

  • Arrays and Strings: Manipulation, searching, and optimization
  • Linked Lists: Traversal, insertion, deletion, and advanced operations
  • Trees: Binary trees, BST, AVL trees, and tree traversals
  • Graphs: Adjacency lists, matrices, and graph algorithms
  • Heaps: Min-heap, max-heap, and priority queue implementations
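To make the heap bullet concrete, here is a minimal sketch of a descending sort driven by a max-heap. Python's heapq is a min-heap, so values are negated; the function name is illustrative, not from the project:

```python
import heapq

def heapsort_desc(values):
    """Sort values in descending order via a max-heap (negated min-heap)."""
    heap = [-v for v in values]
    heapq.heapify(heap)                              # O(n) heap construction
    return [-heapq.heappop(heap) for _ in values]    # n pops, O(log n) each

print(heapsort_desc([3, 1, 4, 1, 5]))  # [5, 4, 3, 1, 1]
```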

Advanced Topics

  • String Algorithms: Pattern matching, suffix arrays, and string processing
  • Number Theory: Prime numbers, modular arithmetic, and mathematical algorithms
  • Combinatorics: Permutations, combinations, and counting problems
  • Geometry: Computational geometry and spatial algorithms

Implementation Examples

Dynamic Programming Solutions

# Longest Common Subsequence (LCS)
def longest_common_subsequence(text1, text2):
    """
    Find the length of the longest common subsequence between two strings.
    Time Complexity: O(m*n)
    Space Complexity: O(m*n)
    """
    m, n = len(text1), len(text2)
    
    # Create DP table
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    
    # Fill DP table
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    
    return dp[m][n]

# Space-optimized version
def lcs_optimized(text1, text2):
    """
    Space-optimized LCS implementation.
    Space Complexity: O(min(m,n))
    """
    if len(text1) < len(text2):
        text1, text2 = text2, text1
    
    prev = [0] * (len(text2) + 1)
    curr = [0] * (len(text2) + 1)
    
    for i in range(1, len(text1) + 1):
        for j in range(1, len(text2) + 1):
            if text1[i-1] == text2[j-1]:
                curr[j] = prev[j-1] + 1
            else:
                curr[j] = max(prev[j], curr[j-1])
        prev, curr = curr, prev
    
    return prev[len(text2)]

# Coin Change Problem
def coin_change(coins, amount):
    """
    Find minimum number of coins needed to make amount.
    Time Complexity: O(amount * len(coins))
    Space Complexity: O(amount)
    """
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    
    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] = min(dp[i], dp[i - coin] + 1)
    
    return dp[amount] if dp[amount] != float('inf') else -1
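The tabulated coin_change above also has a top-down twin, which ties back to the memoization bullet earlier. A hedged sketch (the function name is mine, not from the project):

```python
from functools import lru_cache

def coin_change_topdown(coins, amount):
    """Top-down, memoized counterpart to the bottom-up version above."""
    @lru_cache(maxsize=None)
    def best(rem):
        if rem == 0:
            return 0
        if rem < 0:
            return float('inf')
        # Try every coin and keep the cheapest way to make the remainder
        return min((best(rem - c) + 1 for c in coins), default=float('inf'))

    result = best(amount)
    return result if result != float('inf') else -1

print(coin_change_topdown([1, 2, 5], 11))  # 3  (5 + 5 + 1)
```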

Graph Algorithms

# Dijkstra's Shortest Path Algorithm
import heapq
from collections import defaultdict

def dijkstra(graph, start):
    """
    Find shortest paths from start vertex to all other vertices.
    Time Complexity: O((V + E) log V)
    """
    distances = {vertex: float('inf') for vertex in graph}
    distances[start] = 0
    pq = [(0, start)]
    visited = set()
    
    while pq:
        current_dist, current_vertex = heapq.heappop(pq)
        
        if current_vertex in visited:
            continue
        
        visited.add(current_vertex)
        
        for neighbor, weight in graph[current_vertex]:
            distance = current_dist + weight
            
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(pq, (distance, neighbor))
    
    return distances

# Union-Find (Disjoint Set Union)
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.components = n
    
    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]
    
    def union(self, x, y):
        px, py = self.find(x), self.find(y)
        
        if px == py:
            return False
        
        if self.rank[px] < self.rank[py]:
            px, py = py, px
        
        self.parent[py] = px
        
        if self.rank[px] == self.rank[py]:
            self.rank[px] += 1
        
        self.components -= 1
        return True

# Kruskal's Minimum Spanning Tree
def kruskal_mst(edges, n):
    """
    Find minimum spanning tree using Kruskal's algorithm.
    Time Complexity: O(E log E)
    """
    edges.sort(key=lambda x: x[2])  # Sort by weight
    uf = UnionFind(n)
    mst = []
    
    for u, v, weight in edges:
        if uf.union(u, v):
            mst.append((u, v, weight))
            if len(mst) == n - 1:
                break
    
    return mst
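Kruskal's edge-centric approach above has a vertex-centric sibling, Prim's algorithm, which grows the tree from one vertex using a heap. A minimal sketch under an assumed adjacency-list format `{u: [(v, w), ...]}` (names are illustrative):

```python
import heapq

def prim_mst(adj, start=0):
    """Prim's MST over an undirected adjacency list; returns total weight."""
    visited = {start}
    edges = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(edges)
    total = 0
    while edges and len(visited) < len(adj):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue                      # stale edge into the tree
        visited.add(v)
        total += w
        for nxt, nw in adj[v]:
            if nxt not in visited:
                heapq.heappush(edges, (nw, v, nxt))
    return total

adj = {0: [(1, 4), (2, 1)], 1: [(0, 4), (2, 2)], 2: [(0, 1), (1, 2)]}
print(prim_mst(adj))  # 3  (edges 0-2 and 2-1)
```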

String Algorithms

# Knuth-Morris-Pratt (KMP) Pattern Matching
def kmp_search(text, pattern):
    """
    Find all occurrences of pattern in text using KMP algorithm.
    Time Complexity: O(m + n)
    """
    def build_lps(pattern):
        lps = [0] * len(pattern)
        length = 0
        i = 1
        
        while i < len(pattern):
            if pattern[i] == pattern[length]:
                length += 1
                lps[i] = length
                i += 1
            else:
                if length != 0:
                    length = lps[length - 1]
                else:
                    lps[i] = 0
                    i += 1
        
        return lps
    
    lps = build_lps(pattern)
    i = j = 0
    occurrences = []
    
    while i < len(text):
        if pattern[j] == text[i]:
            i += 1
            j += 1
        
        if j == len(pattern):
            occurrences.append(i - j)
            j = lps[j - 1]
        elif i < len(text) and pattern[j] != text[i]:
            if j != 0:
                j = lps[j - 1]
            else:
                i += 1
    
    return occurrences

# Rabin-Karp Rolling Hash
def rabin_karp_search(text, pattern):
    """
    Find pattern in text using rolling hash.
    Time Complexity: O(m + n) average case
    """
    def rolling_hash(s):
        hash_value = 0
        base = 256
        mod = 10**9 + 7
        
        for char in s:
            hash_value = (hash_value * base + ord(char)) % mod
        
        return hash_value
    
    def update_hash(old_hash, old_char, new_char, pattern_len):
        base = 256
        mod = 10**9 + 7
        
        # Remove old character
        old_hash = (old_hash - ord(old_char) * pow(base, pattern_len - 1, mod)) % mod
        
        # Add new character
        old_hash = (old_hash * base + ord(new_char)) % mod
        
        return old_hash
    
    if len(pattern) > len(text):
        return []
    
    pattern_hash = rolling_hash(pattern)
    text_hash = rolling_hash(text[:len(pattern)])
    
    occurrences = []
    
    if pattern_hash == text_hash and text[:len(pattern)] == pattern:
        occurrences.append(0)
    
    for i in range(len(pattern), len(text)):
        text_hash = update_hash(text_hash, text[i - len(pattern)], text[i], len(pattern))
        
        if pattern_hash == text_hash and text[i - len(pattern) + 1:i + 1] == pattern:
            occurrences.append(i - len(pattern) + 1)
    
    return occurrences

Advanced Data Structures

# Segment Tree for Range Queries
class SegmentTree:
    def __init__(self, arr):
        self.n = len(arr)
        self.tree = [0] * (4 * self.n)
        self.build(arr, 0, 0, self.n - 1)
    
    def build(self, arr, node, start, end):
        if start == end:
            self.tree[node] = arr[start]
        else:
            mid = (start + end) // 2
            self.build(arr, 2 * node + 1, start, mid)
            self.build(arr, 2 * node + 2, mid + 1, end)
            self.tree[node] = self.tree[2 * node + 1] + self.tree[2 * node + 2]
    
    def update(self, idx, val, node=0, start=0, end=None):
        if end is None:
            end = self.n - 1
        
        if start == end:
            self.tree[node] = val
        else:
            mid = (start + end) // 2
            if idx <= mid:
                self.update(idx, val, 2 * node + 1, start, mid)
            else:
                self.update(idx, val, 2 * node + 2, mid + 1, end)
            self.tree[node] = self.tree[2 * node + 1] + self.tree[2 * node + 2]
    
    def query(self, l, r, node=0, start=0, end=None):
        if end is None:
            end = self.n - 1
        
        if r < start or end < l:
            return 0
        
        if l <= start and end <= r:
            return self.tree[node]
        
        mid = (start + end) // 2
        left_sum = self.query(l, r, 2 * node + 1, start, mid)
        right_sum = self.query(l, r, 2 * node + 2, mid + 1, end)
        
        return left_sum + right_sum

# Fenwick Tree (Binary Indexed Tree)
class FenwickTree:
    def __init__(self, size):
        self.size = size
        self.tree = [0] * (size + 1)
    
    def update(self, idx, delta):
        idx += 1  # 1-indexed
        while idx <= self.size:
            self.tree[idx] += delta
            idx += idx & -idx
    
    def query(self, idx):
        idx += 1  # 1-indexed
        result = 0
        while idx > 0:
            result += self.tree[idx]
            idx -= idx & -idx
        return result
    
    def range_query(self, l, r):
        return self.query(r) - self.query(l - 1)
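Both Fenwick methods above hinge on the `idx & -idx` expression, which isolates the lowest set bit of the index via two's complement. A quick standalone check of that trick:

```python
def lowbit(idx):
    """Isolate the lowest set bit of idx (two's-complement trick)."""
    return idx & -idx

# 12 = 0b1100 -> 4, 10 = 0b1010 -> 2, 8 = 0b1000 -> 8, 7 = 0b0111 -> 1
for idx, expected in [(12, 4), (10, 2), (8, 8), (7, 1)]:
    assert lowbit(idx) == expected
print("lowbit ok")
```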

C++ Implementations

// Binary Search Tree Implementation
#include <iostream>
#include <memory>
#include <vector>
#include <utility>  // std::swap

template<typename T>
class BST {
private:
    struct Node {
        T data;
        std::unique_ptr<Node> left;
        std::unique_ptr<Node> right;
        
        Node(T value) : data(value), left(nullptr), right(nullptr) {}
    };
    
    std::unique_ptr<Node> root;
    
    void insert_helper(std::unique_ptr<Node>& node, T value) {
        if (!node) {
            node = std::make_unique<Node>(value);
            return;
        }
        
        if (value < node->data) {
            insert_helper(node->left, value);
        } else if (value > node->data) {
            insert_helper(node->right, value);
        }
    }
    
    bool search_helper(const std::unique_ptr<Node>& node, T value) const {
        if (!node) return false;
        
        if (value == node->data) return true;
        if (value < node->data) return search_helper(node->left, value);
        return search_helper(node->right, value);
    }
    
    void inorder_helper(const std::unique_ptr<Node>& node) const {
        if (!node) return;
        
        inorder_helper(node->left);
        std::cout << node->data << " ";
        inorder_helper(node->right);
    }
    
public:
    void insert(T value) {
        insert_helper(root, value);
    }
    
    bool search(T value) const {
        return search_helper(root, value);
    }
    
    void inorder() const {
        inorder_helper(root);
        std::cout << std::endl;
    }
};

// Quick Sort Implementation
// partition is declared up front so the dependent call below resolves
// to this template rather than std::partition found via ADL.
template<typename T>
int partition(std::vector<T>& arr, int low, int high);

template<typename T>
void quick_sort(std::vector<T>& arr, int low, int high) {
    if (low < high) {
        int pivot_index = partition(arr, low, high);
        quick_sort(arr, low, pivot_index - 1);
        quick_sort(arr, pivot_index + 1, high);
    }
}

template<typename T>
int partition(std::vector<T>& arr, int low, int high) {
    T pivot = arr[high];
    int i = low - 1;
    
    for (int j = low; j < high; j++) {
        if (arr[j] <= pivot) {
            i++;
            std::swap(arr[i], arr[j]);
        }
    }
    
    std::swap(arr[i + 1], arr[high]);
    return i + 1;
}

Ruby Solutions

# Ruby implementation of common algorithms
require 'set'

class Graph
  def initialize
    @adjacency_list = Hash.new { |h, k| h[k] = [] }
  end
  
  def add_edge(from, to, weight = 1)
    @adjacency_list[from] << { to: to, weight: weight }
    @adjacency_list[to] << { to: from, weight: weight } # Undirected graph
  end
  
  def dijkstra(start)
    distances = Hash.new(Float::INFINITY)
    distances[start] = 0
    pq = [[0, start]]
    visited = Set.new
    
    while !pq.empty?
      current_dist, current_vertex = pq.shift
      
      next if visited.include?(current_vertex)
      visited.add(current_vertex)
      
      @adjacency_list[current_vertex].each do |edge|
        neighbor = edge[:to]
        weight = edge[:weight]
        distance = current_dist + weight
        
        if distance < distances[neighbor]
          distances[neighbor] = distance
          pq << [distance, neighbor]
        end
      end
      
      pq.sort! # Simple priority queue
    end
    
    distances
  end
  
  def bfs(start)
    visited = Set.new
    queue = [start]
    result = []
    
    visited.add(start)
    
    while !queue.empty?
      vertex = queue.shift
      result << vertex
      
      @adjacency_list[vertex].each do |edge|
        neighbor = edge[:to]
        unless visited.include?(neighbor)
          visited.add(neighbor)
          queue << neighbor
        end
      end
    end
    
    result
  end
end

# Merge Sort Implementation
def merge_sort(arr)
  return arr if arr.length <= 1
  
  mid = arr.length / 2
  left = merge_sort(arr[0...mid])
  right = merge_sort(arr[mid..-1])
  
  merge(left, right)
end

def merge(left, right)
  result = []
  i = j = 0
  
  while i < left.length && j < right.length
    if left[i] <= right[j]
      result << left[i]
      i += 1
    else
      result << right[j]
      j += 1
    end
  end
  
  result.concat(left[i..-1]) if i < left.length
  result.concat(right[j..-1]) if j < right.length
  
  result
end

# Binary Search
def binary_search(arr, target)
  left, right = 0, arr.length - 1
  
  while left <= right
    mid = (left + right) / 2
    
    if arr[mid] == target
      return mid
    elsif arr[mid] < target
      left = mid + 1
    else
      right = mid - 1
    end
  end
  
  -1
end

Problem-Solving Strategies

Algorithm Design Patterns

  1. Divide and Conquer: Break problems into smaller subproblems
  2. Dynamic Programming: Solve overlapping subproblems efficiently
  3. Greedy Approach: Make locally optimal choices
  4. Backtracking: Explore all possible solutions systematically
  5. Two Pointers: Use two pointers to traverse data structures
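As an illustration of the two-pointer pattern (item 5), a sketch of the classic pair-sum problem on a sorted array; the function name is mine, chosen for illustration:

```python
def pair_with_sum(sorted_arr, target):
    """Return indices (i, j) with sorted_arr[i] + sorted_arr[j] == target,
    or None. The two pointers converge from both ends in O(n)."""
    i, j = 0, len(sorted_arr) - 1
    while i < j:
        s = sorted_arr[i] + sorted_arr[j]
        if s == target:
            return i, j
        if s < target:
            i += 1      # sum too small: advance the left pointer
        else:
            j -= 1      # sum too large: retreat the right pointer
    return None

print(pair_with_sum([1, 3, 5, 7, 9], 12))  # (1, 4): 3 + 9
```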

Optimization Techniques

  1. Time Complexity Analysis: Understand Big O notation
  2. Space Optimization: Reduce memory usage
  3. Mathematical Insights: Use mathematical properties
  4. Data Structure Selection: Choose appropriate data structures
  5. Edge Case Handling: Consider boundary conditions
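A small example of item 3 (mathematical insights): binary exponentiation replaces O(n) repeated multiplication with O(log n) squarings. Python's built-in `pow(base, exp, mod)` already does this; the explicit version below is a sketch for illustration:

```python
def fast_pow(base, exp, mod):
    """Binary exponentiation: O(log exp) modular multiplications."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:                     # current bit set: fold base in
            result = result * base % mod
        base = base * base % mod        # square for the next bit
        exp >>= 1
    return result

print(fast_pow(2, 10, 1_000_000_007))  # 1024
```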

Testing and Validation

# Test framework for algorithm solutions
import unittest
import time
from random import randint

class AlgorithmTestSuite(unittest.TestCase):
    def setUp(self):
        self.test_cases = self.generate_test_cases()
    
    def generate_test_cases(self):
        """Generate comprehensive test cases"""
        return {
            'small': [1, 2, 3, 4, 5],
            'medium': [randint(1, 1000) for _ in range(100)],
            'large': [randint(1, 10000) for _ in range(10000)],
            'edge_cases': [[], [1], [1, 1, 1], [5, 4, 3, 2, 1]]
        }
    
    def _check_sorting_algorithm(self, sort_func):
        """Helper: verify sorting correctness and performance.
        unittest only auto-runs zero-argument test_* methods, so
        parametrized checks like this one are invoked from concrete tests."""
        for case_name, test_data in self.test_cases.items():
            with self.subTest(case=case_name):
                if not test_data:
                    continue
                
                sorted_data = sort_func(test_data.copy())
                expected = sorted(test_data)
                
                self.assertEqual(sorted_data, expected)
                
                # Performance test
                start_time = time.time()
                sort_func(test_data.copy())
                execution_time = time.time() - start_time
                
                # Assert reasonable execution time
                self.assertLess(execution_time, 1.0)  # Should complete within 1 second
    
    def _check_search_algorithm(self, search_func):
        """Helper: verify search correctness (invoked from concrete tests)"""
        test_array = [1, 3, 5, 7, 9, 11, 13, 15]
        
        # Test existing elements
        for i, element in enumerate(test_array):
            result = search_func(test_array, element)
            self.assertEqual(result, i)
        
        # Test non-existing elements
        non_existing = [0, 2, 4, 6, 8, 10, 12, 14, 16]
        for element in non_existing:
            result = search_func(test_array, element)
            self.assertEqual(result, -1)

# Performance benchmarking
def benchmark_algorithm(func, test_data, iterations=1000):
    """Benchmark algorithm performance"""
    times = []
    
    for _ in range(iterations):
        start_time = time.perf_counter()
        func(test_data.copy())
        end_time = time.perf_counter()
        times.append(end_time - start_time)
    
    return {
        'min_time': min(times),
        'max_time': max(times),
        'avg_time': sum(times) / len(times),
        'median_time': sorted(times)[len(times) // 2]
    }

Competitive Programming Tips

Problem Analysis

  1. Read Carefully: Understand all constraints and requirements
  2. Identify Patterns: Recognize common algorithmic patterns
  3. Consider Edge Cases: Handle boundary conditions
  4. Optimize Early: Consider time and space complexity upfront

Implementation Strategy

  1. Start Simple: Begin with brute force if needed
  2. Iterate: Refine solution step by step
  3. Test Thoroughly: Validate with multiple test cases
  4. Optimize: Improve time/space complexity

Common Pitfalls

  1. Integer Overflow: Handle large numbers carefully
  2. Array Bounds: Check array access carefully
  3. Off-by-One Errors: Pay attention to loop boundaries
  4. Memory Management: Avoid memory leaks
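Pitfalls 1 and 3 often meet in binary search: in fixed-width languages, `(low + high) / 2` can overflow, and inclusive vs. exclusive bounds invite off-by-one errors. The overflow-safe midpoint is moot in Python's arbitrary-precision ints, but the habit transfers directly to C++ and Java:

```python
def safe_mid(low, high):
    """Overflow-safe midpoint: low + (high - low) // 2 never exceeds high."""
    return low + (high - low) // 2

print(safe_mid(0, 9))  # 4 -- rounds toward low; watch the loop bounds
```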

Lessons Learned

Algorithmic Thinking

  • Pattern Recognition: Identify common algorithmic patterns
  • Complexity Analysis: Understand time and space complexity
  • Optimization: Balance correctness with efficiency
  • Problem Decomposition: Break complex problems into simpler parts

Programming Skills

  • Language Proficiency: Master multiple programming languages
  • Code Quality: Write clean, readable, and maintainable code
  • Testing: Implement comprehensive testing strategies
  • Documentation: Document algorithms and their complexities

Competitive Programming

  • Speed vs. Accuracy: Balance quick implementation with correctness
  • Problem-Solving Process: Develop systematic approach to problems
  • Practice: Regular practice improves problem-solving skills
  • Learning: Continuously learn new algorithms and techniques

Future Enhancements

Advanced Topics

  • Advanced Data Structures: Implement complex data structures
  • Graph Theory: Explore advanced graph algorithms
  • Number Theory: Study mathematical algorithms
  • Computational Geometry: Implement geometric algorithms

Performance Optimization

  • Algorithm Optimization: Improve existing algorithms
  • Memory Optimization: Reduce space complexity
  • Parallel Algorithms: Implement parallel versions
  • Cache Optimization: Optimize for CPU cache

Conclusion

The HackerRankCodes project demonstrates comprehensive algorithmic problem-solving skills across multiple programming languages. Key achievements include:

  • Algorithm Mastery: Implementation of fundamental and advanced algorithms
  • Multi-Language Proficiency: Solutions in Python, C++, Ruby, and C
  • Problem-Solving Skills: Systematic approach to competitive programming
  • Code Quality: Clean, efficient, and well-documented implementations
  • Testing: Comprehensive testing and validation strategies
  • Performance: Optimized solutions with proper complexity analysis

The project is available on GitHub and serves as a comprehensive resource for learning algorithms and competitive programming.


This project represents my journey in competitive programming and algorithmic problem-solving. The solutions demonstrate systematic thinking, code quality, and continuous learning in the field of computer science algorithms.