Free time complexity analyzer for code patterns. Identify complexity from loops, recursion, and common algorithms. Estimate runtime, get optimization suggestions, and understand how your code scales.
Time complexity determines how your code scales. Whether you're preparing for interviews, optimizing production code, or learning algorithms, this calculator helps you analyze code patterns and understand their performance characteristics.
Time complexity measures how the runtime of an algorithm grows relative to input size. By analyzing code patterns—loops, recursion, and operations—you can predict performance for any input size and identify potential bottlenecks before they become problems.
Time Complexity
T(n) = c · f(n), where c is the constant time per operation.

Know if your code will handle 1 million records before running it. An O(n²) loop that works for 100 items may time out at production scale.
Recognize inefficient patterns like nested loops when hash tables would work. Turn O(n²) into O(n) with the right data structure.
Technical interviews focus heavily on complexity analysis. Quickly identify and explain the complexity of any code pattern.
When code is slow, identify which pattern is the bottleneck. Is it the nested loop? The recursive call? Target your optimization efforts.
Iterating through an array once: for(i=0; i<n; i++). Linear growth—doubling the input doubles the time. Efficient for most use cases.
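As a minimal sketch of this pattern in Python (the `find_max` helper is illustrative, not part of the calculator):

```python
def find_max(items):
    """Single pass over the list: O(n) time, O(1) extra space."""
    best = items[0]
    for x in items:      # the loop body runs exactly n times
        if x > best:
            best = x
    return best
```

Doubling the length of `items` doubles the number of loop iterations, which is exactly what linear growth means.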
Loop within a loop: for(i) for(j). Comparing all pairs, bubble sort. Quadratic growth—use for small n only (<10,000).
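A sketch of the all-pairs pattern, alongside the hash-based rewrite mentioned above (function names are illustrative):

```python
def has_duplicate_quadratic(items):
    """Compare every pair: the inner loop makes O(n^2) comparisons total."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """Same check with a set: one pass, average O(1) per lookup, O(n) total."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answer; only the growth rate differs, which is why the quadratic version is fine for small inputs but not at scale.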
Dividing problem in half: while(n>0) n/=2. Binary search pattern. Extremely efficient—handles billions of elements.
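The classic instance of the halving pattern is binary search. A sketch, assuming the input list is already sorted:

```python
def binary_search(sorted_items, target):
    """Halve the search range each iteration: O(log n) time."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1    # discard the lower half
        else:
            hi = mid - 1    # discard the upper half
    return -1               # not found
```

Because the range halves each step, a billion elements take only about 30 iterations.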
Split, solve, merge: f(n) = 2f(n/2) + O(n). Merge sort, quicksort average. Optimal for comparison-based sorting.
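Merge sort is the textbook instance of this recurrence; a simple (not in-place) sketch:

```python
def merge_sort(items):
    """T(n) = 2T(n/2) + O(n) for the merge step, giving O(n log n)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # solve each half
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # O(n) merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The log n levels of splitting, each with O(n) merge work, are where the n log n comes from.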
Count nested loops: 1 loop = O(n), 2 nested = O(n²), 3 nested = O(n³). For recursion: linear calls = O(n), binary tree calls = O(2^n), divide-and-conquer with merge = O(n log n). The deepest/most frequent operation dominates.
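You can verify the loop-counting rule empirically by counting iterations directly (a throwaway `count_ops` helper, not part of the calculator):

```python
def count_ops(n):
    """Count inner-body executions for 1, 2, and 3 nested loops over n."""
    single = sum(1 for _ in range(n))                                      # O(n)
    double = sum(1 for _ in range(n) for _ in range(n))                    # O(n^2)
    triple = sum(1 for _ in range(n) for _ in range(n) for _ in range(n))  # O(n^3)
    return single, double, triple
```

For n = 10 the counts are 10, 100, and 1,000—each extra level of nesting multiplies the work by n.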
For O(n²), roughly 10,000 items will take about 100 million operations—typically a few seconds. At 100,000 items, you're at 10 billion operations—potentially minutes. For larger inputs, look for O(n log n) or O(n) solutions.
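A back-of-the-envelope estimator for this kind of arithmetic. The 10⁸ operations-per-second figure is an assumed ballpark for simple operations, not a measurement:

```python
import math

def estimate_seconds(n, complexity, ops_per_second=1e8):
    """Rough runtime estimate: operation count / assumed throughput."""
    ops = {
        "n": n,
        "n log n": n * math.log2(max(n, 2)),
        "n^2": n ** 2,
    }[complexity]
    return ops / ops_per_second
```

Under that assumption, 10,000 items at O(n²) is about 1 second, while 100,000 items is about 100 seconds—useful for comparing approaches, not for precise timing.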
Each call makes two recursive calls: fib(n-1) and fib(n-2). This creates a binary tree of calls that roughly doubles at each level. For fib(50), that's tens of billions of calls! Adding memoization reduces it to O(n) by storing computed values.
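One simple way to add memoization in Python is the standard library's `functools.lru_cache` decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci: each value is computed once, so O(n) time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

With the cache, fib(50) makes only about 50 distinct computations instead of tens of billions of calls.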
Estimates give order of magnitude. Actual time depends on hardware, language, memory access patterns, and constant factors. Use estimates to compare approaches and identify if an algorithm is feasible, not for precise timing.
Optimize when: 1) Code is measurably slow in production, 2) Input size will grow significantly, 3) You're in an interview discussing improvements. Don't optimize prematurely—often O(n²) is fine for small n with clearer code.