array/README.md

# Array
Arrays are a basic and essential data structure in computer science. They consist of a fixed-size contiguous memory block and offer O(1) read and write time complexity. As a fundamental element of programming languages, arrays are built into their core.
To provide a real-world analogy, consider an array of athletes preparing for a sprinting match. Each athlete occupies a specific position within the array, typically denoted as 1, 2, …, n. While it is technically possible for each athlete to be in a different position, the positions generally carry some form of significance, such as alphabetical order or seniority within the sport.
## Implementation
In the Go programming language, arrays are values rather than pointers, and a variable of an array type represents the entire array. Whenever an array is passed to a function, a copy is created, resulting in additional memory usage. To avoid this, a pointer to the array can be passed instead.
To define an array in Go, the array size is specified using a constant. Because the size is part of the array's type, there is no need to use the make function to create the array.
```Go
package main

import "fmt"

func main() {
	// A minimal illustrative sketch: the size of an array is a
	// constant part of its type, so no make call is needed.
	const size = 5
	var numbers [size]int

	numbers[0] = 1
	fmt.Println(numbers, len(numbers)) // [1 0 0 0 0] 5
}
```
Although arrays are fundamental data structures in Go, their constant size can make them inflexible and difficult to use in situations where a variable size is required. To address this issue, Go provides [slices](https://blog.golang.org/slices-intro), which are an abstraction of arrays that offer more convenient access to sequential data typically stored in arrays.
Slices enable the addition of values using the `append` function, which allows for dynamic slice resizing. Additionally, selectors of the format `[low:high]` can be used to select or manipulate data in the slice. By utilizing slices instead of arrays, Go programmers gain a more flexible and powerful tool to manage their data structures.
```Go
package main

import "fmt"

func main() {
	// A minimal illustrative sketch: append grows the slice dynamically.
	numbers := []int{1, 2, 3}
	numbers = append(numbers, 4)

	// The [low:high] selector yields a sub-slice.
	fmt.Println(numbers[1:3]) // [2 3]
}
```
## Complexity
Accessing an element within an array using an index has O(1) time complexity. This means that regardless of the size of the array, read and write operations for a given element can be performed in constant time.
While arrays are useful for certain tasks, searching an unsorted array can be a time-consuming O(n) operation. Since the target item could be located anywhere in the array, every element must be checked until the item is found. Due to this limitation, alternative data structures such as trees and hash tables are often more suitable for search operations.
Addition and deletion can be O(n) operations in arrays. Removing an element can create an empty slot that must be eliminated by shifting the remaining items. Similarly, adding items to an array may require shifting existing items to create space for the new item. These inefficiencies can make alternative data structures, such as [trees](../tree) or [hash tables](../hashtable), more suitable for operations involving additions and deletions.
backtracking/README.md

## Implementation
Backtracking algorithms are typically implemented in these steps, illustrated by the sketch after the list:
1. Prune invalid approaches when possible.
2. Generate a partial solution by iterating through available alternatives.
3. Check the validity of the selected alternative according to the problem conditions and rules.
4. Check for solution completion when required.
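
The sketch below illustrates these steps with a hypothetical subset-sum enumerator (an illustrative example, not one of this repository's solutions): it prunes branches that can no longer reach the target, generates partial solutions by trying each alternative, checks validity against the remaining target, and records a solution once it is complete.

```Go
package main

import "fmt"

// solve records in out every combination of nums[i:] that sums
// exactly to target, backtracking after each choice.
func solve(nums []int, i, target int, path []int, out *[][]int) {
	if target == 0 {
		// Solution completion: record a copy of the partial solution.
		*out = append(*out, append([]int(nil), path...))
		return
	}
	if i == len(nums) || target < 0 {
		// Pruning: this branch can no longer lead to a valid solution.
		return
	}
	// Generate alternatives: include nums[i], then try without it.
	solve(nums, i+1, target-nums[i], append(path, nums[i]), out)
	solve(nums, i+1, target, path, out)
}

func main() {
	var out [][]int
	solve([]int{2, 3, 5, 7}, 0, 10, nil, &out)
	fmt.Println(out) // [[2 3 5] [3 7]]
}
```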
## Complexity
The time complexity of backtracking algorithms may vary depending on the problem at hand, but they generally require iterating through possible alternatives and checking for validity at each step. Although backtracking may be the only feasible approach to certain problems, it does not always guarantee an optimal solution. To improve the time complexity of backtracking algorithms, pruning, which involves eliminating known invalid options before iterating through the alternatives, is an effective technique.
## Application
Backtracking is widely used to solve board games, and computers often employ it to select their next moves. Furthermore, the backtracking technique is also applied to graphs and trees through the use of [Depth First Search](../graph/graph#depth-first-search---dfs). It also has applications in object detection and image processing.
bit/README.md

## Implementation
Go provides the following operators for bit manipulation:
```Go
package main

import "fmt"

func main() {
	// A minimal illustrative sketch of Go's bitwise operators.
	a, b := 0b1100, 0b1010

	fmt.Printf("%04b\n", a&b)  // AND:         1000
	fmt.Printf("%04b\n", a|b)  // OR:          1110
	fmt.Printf("%04b\n", a^b)  // XOR:         0110
	fmt.Printf("%04b\n", a&^b) // AND NOT:     0100
	fmt.Printf("%05b\n", a<<1) // left shift:  11000
	fmt.Printf("%04b\n", a>>1) // right shift: 0110
}
```
## Arithmetic by Shifting
Left shifting can be viewed as a multiplication operation by 2 raised to the power of a specified number. Right shifting can be viewed as a division operation by 2 raised to the power of a specified number. For instance, a << b can be interpreted as multiplying a by 2^b, and a >> b can be interpreted as dividing a by 2^b.
```Go
package main

import "fmt"

func main() {
	// A minimal illustrative sketch: shifting as arithmetic.
	a := 3

	fmt.Println(a << 2) // 3 * 2^2 = 12
	fmt.Println(a >> 1) // 3 / 2^1 = 1 (integer division)
}
```
## Cryptography and Other XOR Applications
The XOR operation can be used for basic cryptography. By XORing a message with a key, we can generate an encrypted message. This encrypted message can be shared with someone else who knows the same key. If they XOR the key with the encrypted message, they will obtain the original plaintext message. This method is not secure enough because the key is relatively easy to guess from the encrypted message. The following example demonstrates this process:
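
A minimal sketch of such an XOR cipher, assuming a hypothetical single-byte key (which is exactly why the method is weak; real ciphers use far longer keys):

```Go
package main

import "fmt"

// xorCipher XORs every byte with the key; applying the same
// operation twice restores the original bytes.
func xorCipher(data []byte, key byte) []byte {
	out := make([]byte, len(data))
	for i, b := range data {
		out[i] = b ^ key
	}
	return out
}

func main() {
	key := byte(42) // hypothetical shared secret
	encrypted := xorCipher([]byte("HELLO"), key)
	decrypted := xorCipher(encrypted, key)
	fmt.Println(string(decrypted)) // HELLO
}
```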
## Complexity

Bit manipulation operations are characterized by a constant time complexity. This high level of performance renders them an optimal choice to replace other approaches, especially when working with large data sets. As a result, they are frequently used to achieve better performance.
## Application
Bit manipulation techniques are widely utilized in diverse fields of computing, such as cryptography, data compression, network protocols, and databases, to name a few. Each bitwise operation has its own qualities that make it useful in different scenarios.
AND extracts bit(s) from a larger number. For example, to check if a certain bit is set in a number, we can AND the number with a mask that has only that bit set to 1, and if the result is not 0, then that bit was set. Another application is to clear or reset certain bits in a number by ANDing with a mask that has those bits set to 0.
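
A brief sketch with hypothetical flag values:

```Go
package main

import "fmt"

func main() {
	flags := 0b1011 // hypothetical flag bits

	// Check whether bit 1 is set by ANDing with a mask.
	mask := 1 << 1
	fmt.Println(flags&mask != 0) // true

	// Clear bit 0 by ANDing with a mask that has that bit set to 0.
	fmt.Printf("%04b\n", flags&0b1110) // 1010
}
```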
OR can be useful in solving problems where we want to "set" or "turn on" certain bits in a binary number. For example, if we have a variable flag, which is a binary number representing various options, we can set a particular flag by ORing the variable with a binary number where only the corresponding bit for that flag is 1. This will turn on the flag in the variable without affecting other flags.
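
For instance, a sketch with hypothetical read/write option flags:

```Go
package main

import "fmt"

const (
	read  = 1 << 0 // hypothetical option flags
	write = 1 << 1
)

func main() {
	var flags int

	// Turn on the write flag without affecting other flags.
	flags |= write
	fmt.Printf("%02b\n", flags) // 10
}
```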
XOR can be used for encryption and decryption, as well as error detection and correction. It can also be used to swap two variables without using a temporary variable. Additionally, XOR can be used to solve problems related to finding unique elements in a list or array or to check whether two sets of data have any overlapping elements.
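
Two of these uses, the swap and the unique-element trick, sketched briefly:

```Go
package main

import "fmt"

func main() {
	// Swap two variables without a temporary variable.
	a, b := 3, 5
	a ^= b
	b ^= a
	a ^= b
	fmt.Println(a, b) // 5 3

	// Find the unique element: every duplicated pair cancels out.
	unique := 0
	for _, n := range []int{4, 7, 4, 9, 7} {
		unique ^= n
	}
	fmt.Println(unique) // 9
}
```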
Negation can be used to invert a set of flags or find the two's complement of a number.
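
A brief sketch (in Go, the unary `^` operator performs bitwise negation):

```Go
package main

import "fmt"

func main() {
	x := int8(5) // 00000101

	// Bitwise negation inverts every bit: 11111010 is -6.
	fmt.Println(^x) // -6

	// Adding 1 to the negation yields the two's complement (-x).
	fmt.Println(^x + 1) // -5
}
```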
complexity.md

## Big O
Big O is a mathematical notation commonly used to describe the impact on time or space as input size `n` increases. Seven Big O notations commonly used in algorithm complexity analysis are discussed in the following sections.
```ASCII
[Figure 1] Schematic diagram of Big O for common run times from fastest
to slowest: time t plotted against input size n for O(1), O(Log n),
O(n), and the slower growth rates.
```
To understand the big O notation, let us focus on time complexity and specifically examine the O(n) diagram. This diagram depicts a decline in algorithm performance as input size increases. In contrast, the O(1) diagram represents an algorithm that consistently performs in constant time, with input size having no impact on its efficiency. Consequently, the latter algorithm generally outperforms the former.
However, it is essential to note that this is not always the case. In practice, an O(1) algorithm with a single time-consuming operation might be slower than an O(n) algorithm with multiple operations if the single operation in the first algorithm requires more time to complete than the collective operations in the second algorithm.
The Big O notation of an algorithm can be simplified using the following two rules:
1. Remove the constants. `O(n) + 2*O(n*Log n) + 3*O(K) + 5` is simplified to `O(n) + O(n*Log n) + O(K)`.
2. Remove non-dominant or slower terms. `O(n) + O(n*Log n) + O(K)` is simplified to `O(n*Log n)` because `O(n*Log n)` is the dominant term.
### Constant - O(K) or O(1)
Constant time complexity represents the most efficient scenario for an algorithm, where execution time remains constant regardless of input size. Achieving constant time complexity often involves eliminating loops and recursive calls. Examples:
* Read and write in a [hash table](./hashtable/README.md)
* Enqueue and Dequeue in a [queue](./queue/README.md)
* Push and Pop in a [stack](./stack/README.md)
* Find the minimum or maximum in a [heap](./heap/README.md)
* Remove the last element of a [doubly linked list](./linkedlist/README.md)
* [Max without conditions](./bit/max_function_without_conditions.go)
### Logarithmic - O(Log n)
Attaining logarithmic time complexity in an algorithm is highly desirable as it does not require traversing every input.
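
As an illustrative sketch, binary search is the canonical O(Log n) example; the code below is a generic textbook version rather than one of this repository's implementations:

```Go
package main

import "fmt"

// binarySearch halves the search space at every step,
// so it runs in O(Log n) time on a sorted slice.
func binarySearch(sorted []int, target int) int {
	low, high := 0, len(sorted)-1
	for low <= high {
		mid := (low + high) / 2
		switch {
		case sorted[mid] == target:
			return mid
		case sorted[mid] < target:
			low = mid + 1
		default:
			high = mid - 1
		}
	}
	return -1 // not found
}

func main() {
	fmt.Println(binarySearch([]int{1, 3, 5, 7, 9}, 7)) // 3
}
```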
### Linear - O(n)
Linear time complexity is considered favorable when an algorithm traverses every input with no feasible way to avoid it. Examples:
* Remove the last element in a [singly linked list](./linkedlist/README.md)
* Search an unsorted [array](./array/README.md) or [linked list](./linkedlist/README.md)
* [Number of Islands](./graph/number_of_islands.go)
* [Missing Number](./hashtable/missing_number.go)
### O(n*Log n)
The time complexity of O(n*Log n) is commonly observed when it is necessary to iterate through all inputs while yielding an outcome at the same time through an efficient operation. Sorting is a common example: it is impossible to sort items faster than O(n*Log n) using comparison-based sorting. Examples:
* [Merge Sort](./dnc/merge_sort.go)
* [Quick Sort](./dnc/quick_sort.go)
* [Heap Sort](./heap/heap_sort.go)
* [Knapsack](./greedy/knapsack.go)
* [Find Anagrams](./hashtable/find_anagrams.go)
* In-order traversal of a [Binary Search Tree](./tree/README.md)
### Polynomial - O(n^2)
Polynomial time complexity marks the initial threshold of problematic time complexity for algorithms. This complexity often arises when an algorithm includes nested loops, involving both an inner loop and an outer loop. Examples: