Commit f2cbfa6

grammar fixes (spring1843#93)
1 parent 5548d76 commit f2cbfa6

21 files changed (+156 −123 lines)

array/README.md (+7 −7)

@@ -1,14 +1,14 @@
 # Array

-Arrays are a basic and essential data structure in computer science. They consist of a fixed-size contiguous block of memory and offer O(1) read and write time complexity. As a fundamental element of programming languages, arrays come built-in as part of their core.
+Arrays are a basic and essential data structure in computer science. They consist of a fixed-size contiguous memory block and offer O(1) read and write time complexity. As a fundamental element of programming languages, arrays come built into their core.

-To provide a real-world analogy, consider an array of athletes preparing for a sprinting match. Each athlete occupies a specific position within the array, which is typically denoted as 1, 2,..., n. While it is technically possible for each athlete to be in a different position, the positions generally carry some form of significance, such as alphabetical order or seniority within the sport.
+To provide a real-world analogy, consider an array of athletes preparing for a sprinting match. Each athlete occupies a specific position within the array, typically denoted as 1, 2, ..., n. While it is technically possible for each athlete to be in a different position, the positions generally carry some form of significance, such as alphabetical order or seniority within the sport.

 ## Implementation

-In the Go programming language, arrays are considered values rather than pointers and represent the entirety of the array. Whenever an array is passed to a function, a copy of the array is created, resulting in additional memory usage. However, to avoid this issue, it is possible to pass a pointer to the array instead.
+In the Go programming language, arrays are considered values rather than pointers and represent the entirety of the array. Whenever an array is passed to a function, a copy is created, resulting in additional memory usage. To avoid this issue, it is possible to pass a pointer to the array instead.

-To define an array in Go, it is necessary to specify the size of the array using a constant. By using constants in this manner, it is no longer necessary to utilize the make function to create the array.
+To define an array in Go, it is possible to specify the array size using a constant. By using constants in this manner, it is no longer necessary to use the make function to create the array.

 ```Go
 package main
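The commit's code block is truncated in this view. As a hedged illustration of the value semantics described above (all identifiers and values here are mine, not the repository's), a minimal sketch:

```Go
package main

import "fmt"

const size = 3 // array sizes in Go must be constant expressions

// double receives a copy of the array, so the caller's array is untouched.
func double(arr [size]int) {
	for i := range arr {
		arr[i] *= 2
	}
}

// doubleInPlace receives a pointer and mutates the caller's array directly.
func doubleInPlace(arr *[size]int) {
	for i := range arr {
		arr[i] *= 2
	}
}

func main() {
	nums := [size]int{1, 2, 3}
	double(nums)
	fmt.Println(nums) // [1 2 3]: only the copy was doubled
	doubleInPlace(&nums)
	fmt.Println(nums) // [2 4 6]: mutated through the pointer
}
```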
@@ -24,7 +24,7 @@ func main() {

 Although arrays are fundamental data structures in Go, their constant size can make them inflexible and difficult to use in situations where a variable size is required. To address this issue, Go provides [slices](https://blog.golang.org/slices-intro) which are an abstraction of arrays that offer more convenient access to sequential data typically stored in arrays.

-Slices enable the addition of values using the `append` function, which allows for dynamic resizing of the slice. Additionally, selectors of the format [low:high] can be used to select or manipulate data in the slice. By utilizing slices instead of arrays, Go programmers gain a more flexible and powerful tool to manage their data structures.
+Slices enable the addition of values using the `append` function, which allows for dynamic slice resizing. Additionally, selectors of the format [low:high] can be used to select or manipulate data in the slice. By utilizing slices instead of arrays, Go programmers gain a more flexible and powerful tool to manage their data structures.

 ```Go
 package main
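The slice example is likewise truncated; a short illustrative sketch of `append` and the `[low:high]` selector (not the repository's code):

```Go
package main

import "fmt"

func main() {
	var primes []int // a nil slice; append grows it dynamically
	for _, p := range []int{2, 3, 5, 7, 11} {
		primes = append(primes, p)
	}
	fmt.Println(primes) // [2 3 5 7 11]

	// [low:high] selects indexes low through high-1.
	fmt.Println(primes[1:4]) // [3 5 7]
}
```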
@@ -74,11 +74,11 @@ func main() {

 ## Complexity

-In computer science, the act of accessing an element within an array using an index `i` has an O(1) time complexity. This means that regardless of the size of the array, the read and write operations for a given element can be performed in constant time.
+Accessing an element within an array using an index has O(1) time complexity. This means that regardless of the size of the array, read and write operations for a given element can be performed in constant time.

 While arrays are useful for certain tasks, searching an unsorted array can be a time-consuming O(n) operation. Since the target item could be located anywhere in the array, every element must be checked until the item is found. Due to this limitation, alternative data structures such as trees and hash tables are often more suitable for search operations.

-Both addition and deletion operations on arrays can be O(n) operations in Arrays. The process of removing an element can create an empty slot that must be eliminated by shifting the remaining items. Similarly, adding items to an array may require shifting existing items to create space for the new item. These inefficiencies can make alternative data structures, such as [trees](../tree) or [hash tables](../hashtable), more suitable for managing operations involving additions and deletions.
+Addition and deletion are O(n) operations in arrays. The process of removing an element can create an empty slot that must be eliminated by shifting the remaining items. Similarly, adding items to an array may require shifting existing items to create space for the added item. These inefficiencies can make alternative data structures, such as [trees](../tree) or [hash tables](../hashtable), more suitable for managing operations involving additions and deletions.

 ## Application
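To make the shifting cost concrete, here is a hypothetical `deleteAt` helper (not from the repository): removing index `i` shifts every later element one slot left, which in the worst case touches n-1 items, hence O(n).

```Go
package main

import "fmt"

// deleteAt removes the element at index i by shifting the tail left one slot.
func deleteAt(items []int, i int) []int {
	copy(items[i:], items[i+1:])
	return items[:len(items)-1]
}

func main() {
	fmt.Println(deleteAt([]int{10, 20, 30, 40}, 1)) // [10 30 40]
}
```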

backtracking/README.md (+7 −9)

@@ -6,22 +6,20 @@ Backtracking can be compared to how a person solves a maze or searches for an ex

 ## Implementation

-A backtracking algorithm is typically implemented in these steps:
+Backtracking algorithms are typically implemented in these steps:

-1. Pruning: eliminating invalid approaches when possible
-2. Generating a partial solution by iterating through available alternatives
-3. Checking the validity of the selected alternative according to the problem conditions and rules
-4. Checking for solution completion when required
+1. Prune invalid approaches when possible.
+2. Generate a partial solution by iterating through available alternatives.
+3. Check the validity of the selected alternative according to the problem conditions and rules.
+4. Check for solution completion when required.

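A hedged sketch of these four steps in Go (the problem and every identifier are illustrative, not taken from the repository): find all subsets of a list that sum to a target, pruning alternatives that already overshoot.

```Go
package main

import "fmt"

// combinations collects every subset of nums[start:] that, together with the
// current partial solution, sums to the original target.
func combinations(nums []int, start, remaining int, partial []int, results *[][]int) {
	// Step 4: check for solution completion.
	if remaining == 0 {
		*results = append(*results, append([]int(nil), partial...))
		return
	}
	for i := start; i < len(nums); i++ {
		// Step 1: prune alternatives that cannot lead to a valid solution.
		if nums[i] > remaining {
			continue
		}
		// Step 2: generate a partial solution from the next alternative.
		// Step 3: it still satisfies the rules, so explore it further.
		partial = append(partial, nums[i])
		combinations(nums, i+1, remaining-nums[i], partial, results)
		partial = partial[:len(partial)-1] // backtrack and try the next alternative
	}
}

func main() {
	var results [][]int
	combinations([]int{1, 2, 3, 4, 5}, 0, 6, nil, &results)
	fmt.Println(results) // [[1 2 3] [1 5] [2 4]]
}
```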
 ## Complexity

-The time complexity of backtracking algorithms may vary depending on the problem at hand, but they generally require iterating through possible alternatives and checking for validity at each step. Although backtracking may be the only feasible approach for certain problems, it does not always guarantee an optimal solution. To improve the time complexity of backtracking algorithms, pruning, which involves eliminating known invalid options before iterating through the alternatives, is an effective technique.
-
-In addition, the space complexity of backtracking algorithms is typically not efficient since the recursive process requires maintaining a copy of the state at each step.
+The time complexity of backtracking algorithms may vary depending on the problem at hand; however, they generally require iterating through possible alternatives and checking for validity at each step. Although backtracking may be the only feasible approach to certain problems, it does not always guarantee an optimal solution. To improve the time complexity of backtracking algorithms, pruning, which involves eliminating known invalid options before iterating through the alternatives, is an effective technique.

 ## Application

-Backtracking is widely used to solve board games and is often employed by computers to select their next moves. Furthermore, the backtracking technique is also applied to graphs and trees through the use of [Depth First Search](../graph/graph#depth-first-search---dfs). It also has applications in object detection in image processing.
+Backtracking is widely used to solve board games and computers use it to select their next moves. Furthermore, the backtracking technique is also applied to graphs and trees through the use of [Depth First Search](../graph/graph#depth-first-search---dfs). It also has applications in object detection and image processing.

 ## Rehearsal

bit/README.md (+8 −8)

@@ -13,7 +13,7 @@ AND 1100 OR 1100 XOR 1100 Negation 1100 L Shift 1100 R Shift 1100

 ## Implementation

-Go provides below operators that can be used in bit manipulation:
+Go provides the following operators for bit manipulation:

 ```Go
 package main
@@ -44,7 +44,7 @@ func printBinary(n int) {

 ## Arithmetic by Shifting

-Left shifting can be viewed as a multiplication operation by 2 raised to the power of a specified number, while right shifting can be viewed as a division operation by 2 raised to the power of a specified number. For instance, a << b can be interpreted as multiplying a by 2^b, and a >> b can be interpreted as dividing a by 2^b.
+Left shifting can be viewed as multiplication by 2 raised to the power of a specified number. Right shifting can be viewed as division by 2 raised to the power of a specified number. For instance, a << b can be interpreted as multiplying a by 2^b, and a >> b can be interpreted as dividing a by 2^b.

 ```Go
 package main
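The repository's example is elided in this view; a minimal sketch of shifts as arithmetic (values chosen only for illustration):

```Go
package main

import "fmt"

func main() {
	a := 5
	fmt.Println(a << 3) // 40, same as 5 * 2^3
	fmt.Println(a >> 1) // 2, same as 5 / 2^1 (integer division)
}
```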
@@ -65,7 +65,7 @@ func main() {

 ## Cryptography and Other XOR applications

-The XOR operation can be used to perform basic cryptography. By XORing a message with a key, we can generate an encrypted message. This encrypted message can be shared with someone else who knows the same key. If they XOR the key with the encrypted message, they will obtain the original plaintext message. This method is not secure enough because the key is relatively easy to guess from the encrypted message. The following example demonstrates this process:
+The XOR operation can be used for basic cryptography. By XORing a message with a key, we can generate an encrypted message. This encrypted message can be shared with someone else who knows the same key. If they XOR the key with the encrypted message, they will obtain the original plaintext message. This method is not secure enough because the key is relatively easy to guess from the encrypted message. The following example demonstrates this process:

 ```Go
 package main
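The body of `xorCrypt` is elided in this diff; a plausible self-contained sketch matching the signature in the hunk header below (the key-cycling detail is my assumption, not necessarily the repository's code):

```Go
package main

import "fmt"

// xorCrypt XORs each message byte with a byte of the key, cycling the key.
// Applying it twice with the same key restores the original message.
func xorCrypt(key, message []byte) []byte {
	out := make([]byte, len(message))
	for i, b := range message {
		out[i] = b ^ key[i%len(key)]
	}
	return out
}

func main() {
	key := []byte("secret")
	encrypted := xorCrypt(key, []byte("hello world"))
	fmt.Printf("%s\n", xorCrypt(key, encrypted)) // hello world
}
```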
@@ -91,19 +91,19 @@ func xorCrypt(key, message []byte) []byte {

 ## Complexity

-Bit manipulation operations are characterized by a constant time complexity of O(1). This high level of performance renders them an optimal choice as a replacement for other approaches, especially when working with large data sets. As a result, they are frequently utilized in algorithmic design to optimize the execution of certain operations.
+Bit manipulation operations are characterized by a constant time complexity. This high level of performance renders them an optimal choice to replace other approaches, especially when working with large data sets. As a result, they are frequently used to achieve better performance.

 ## Application

-Bit manipulation techniques are widely utilized in diverse fields of computing, such as cryptography, data compression, network protocols, and databases, to name a few. Each specific bitwise operation has its own qualities that make it useful in different scenarios.
+Bit manipulation techniques are widely utilized in diverse fields of computing, such as cryptography, data compression, network protocols, and databases, to name a few. Each bitwise operation has its own qualities that make it useful in different scenarios.

-AND is used to extract bit(s) from a larger number. For example, to check if a certain bit is set in a number, we can AND the number with a mask that has only that bit set to 1, and if the result is not 0, then that bit was set. Another application is to clear or reset certain bits in a number by ANDing with a mask that has those bits set to 0.
+AND extracts bit(s) from a larger number. For example, to check if a certain bit is set in a number, we can AND the number with a mask that has only that bit set to 1, and if the result is not 0, then that bit was set. Another application is to clear or reset certain bits in a number by ANDing with a mask that has those bits set to 0.

-OR can be useful in solving problems where we want to "set" or "turn on" certain bits in a binary number. For example, if we have a variable flag, which is a binary number representing various options, we can set a particular flag by ORing the variable with a binary number where only the corresponding bit for that flag is 1. This will turn on the flag in the variable without affecting any other flags.
+OR can be useful in solving problems where we want to "set" or "turn on" certain bits in a binary number. For example, if we have a variable flag, which is a binary number representing various options, we can set a particular flag by ORing the variable with a binary number where only the corresponding bit for that flag is 1. This will turn on the flag in the variable without affecting other flags.

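A small sketch of the AND and OR mask patterns just described (the flag names and values are illustrative):

```Go
package main

import "fmt"

const (
	read  = 1 << 0 // 001
	write = 1 << 1 // 010
	exec  = 1 << 2 // 100
)

func main() {
	perms := read
	perms |= write               // OR turns a flag on without touching others
	fmt.Println(perms&exec != 0) // false: AND against a mask tests one bit
	perms = perms &^ write       // AND NOT clears the write flag
	fmt.Printf("%03b\n", perms)  // 001
}
```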
 XOR can be used for encryption and decryption, as well as error detection and correction. It can also be used to swap two variables without using a temporary variable. Additionally, XOR can be used to solve problems related to finding unique elements in a list or array or to check whether two sets of data have any overlapping elements.

-Negation can be used to invert a set of flags or to find the two's complement of a number. In computer architecture, negation is often used in the implementation of logical and arithmetic operations.
+Negation can be used to invert a set of flags or find the two's complement of a number.

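Two of the XOR tricks mentioned above, sketched for illustration (not repository code):

```Go
package main

import "fmt"

func main() {
	// Swap without a temporary variable (a, b = b, a is the idiomatic way).
	a, b := 3, 5
	a ^= b
	b ^= a // b = b ^ (a^b) = old a
	a ^= b // a = (a^b) ^ old a = old b
	fmt.Println(a, b) // 5 3

	// Values appearing twice cancel out; the unique element remains.
	unique := 0
	for _, n := range []int{4, 7, 4, 9, 9} {
		unique ^= n
	}
	fmt.Println(unique) // 7
}
```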
 ## Rehearsal

complexity.md (+19 −17)

@@ -9,10 +9,10 @@ To address these questions, the Big O asymptotic notation, which characterizes h

 ## Big O

-Big O is a mathematical notion commonly used to describe the impact on time or space as input size `n` increases. Seven Big O notations that are commonly used in algorithm complexity analysis are discussed in the following sections.
+Big O is a mathematical notation commonly used to describe the impact on time or space as input size `n` increases. Seven Big O notations commonly used in algorithm complexity analysis are discussed in the following sections.

 ```ASCII
-[Figure 1] Schematic diagrams of Big O for common run times from fastest to slowest.
+[Figure 1] Schematic diagram of Big O for common run times from fastest to slowest.

 O(1)            O(Log n)        O(n)
 ▲               ▲               ▲
@@ -69,24 +69,24 @@ t│ .
 n
 ```

-To understand the big O notation, let us focus on time complexity and specifically examine the O(n) diagram. This diagram depicts a decline in the algorithm's performance as the input size increases. In contrast, the O(1) diagram represents an algorithm that consistently performs in constant time, with the input size having no impact on its efficiency. Consequently, the latter algorithm generally outperforms the former.
+To understand the big O notation, let us focus on time complexity and specifically examine the O(n) diagram. This diagram depicts a decline in algorithm performance as input size increases. In contrast, the O(1) diagram represents an algorithm that consistently performs in constant time, with input size having no impact on its efficiency. Consequently, the latter algorithm generally outperforms the former.

 However, it is essential to note that this is not always the case. In practice, an O(1) algorithm with a single time-consuming operation might be slower than an O(n) algorithm with multiple operations if the single operation in the first algorithm requires more time to complete than the collective operations in the second algorithm.

 The Big O notation of an algorithm can be simplified using the following two rules:

-1. Remove constants. `O(n) + 2*O(n*Log n) + 3*O(K) + 5` is simplified to `O(n) + O(n*Log n) + O(K)`.
-2. Remove non-dominant or slower terms. `O(n) + O(n*Log n) + O(K)` is simplified to `O(n*Log n)` because `O(n*Log n)` is the most dominant term..
+1. Remove the constants. `O(n) + 2*O(n*Log n) + 3*O(K) + 5` is simplified to `O(n) + O(n*Log n) + O(K)`.
+2. Remove non-dominant or slower terms. `O(n) + O(n*Log n) + O(K)` is simplified to `O(n*Log n)` because `O(n*Log n)` is the dominant term.

 ### Constant - O(K) or O(1)

-Constant time complexity represents the most efficient scenario for an algorithm, where the execution time remains constant regardless of the input size. Achieving constant time complexity often involves eliminating loops and recursive calls. Examples:
+Constant time complexity represents the most efficient scenario for an algorithm, where execution time remains constant regardless of input size. Achieving constant time complexity often involves eliminating loops and recursive calls. Examples:

-* Reads and writes in a [hash table](./hashtable/README.md)
+* Read and write in a [hash table](./hashtable/README.md)
 * Enqueue and Dequeue in a [queue](./queue/README.md)
 * Push and Pop in a [stack](./stack/README.md)
-* Finding the minimum or maximum in a [heap](./heap/README.md)
-* Removing the last element of a [doubly linked list](./linkedlist/README.md)
+* Find the minimum or maximum in a [heap](./heap/README.md)
+* Remove the last element of a [doubly linked list](./linkedlist/README.md)
 * [Max without conditions](./bit/max_function_without_conditions.go)

 ### Logarithmic - O(Log n)
@@ -101,36 +101,38 @@ Attaining logarithmic time complexity in an algorithm is highly desirable as it

 ### Linear - O(n)

-Linear time complexity is considered favorable when an algorithm necessitates traversing every input with no feasible way to avoid it. Examples:
+Linear time complexity is considered favorable when an algorithm traverses every input with no feasible way to avoid it. Examples:

-* Removing the last element in a [singly linked list](./linkedlist/README.md)
-* Searching an unsorted [array](./array/README.md) or [linked list](./linkedlist/README.md)
+* Remove the last element in a [singly linked list](./linkedlist/README.md)
+* Search an unsorted [array](./array/README.md) or [linked list](./linkedlist/README.md)
 * [Number of Islands](./graph/number_of_islands.go)
 * [Missing Number](./hashtable/missing_number.go)

 ### O(n*Log n)

-The time complexity of O(n*Log n) is commonly observed when it is necessary to iterate through all inputs and can yield an outcome at the same time through an efficient operation. Sorting is a common example. It's not possible to sort items faster than O(n*Log n). Examples:
+The time complexity of O(n*Log n) is commonly observed when it is necessary to iterate through all inputs and yield an outcome at the same time through an efficient operation. Sorting is a common example. It's impossible to sort items faster than O(n*Log n) using comparisons. Examples:

-* [Merge Sort](./dnc/merge_sort.go) and [Heap Sort](./heap/README.md)
+* [Merge Sort](./dnc/merge_sort.go)
+* [Quick Sort](./dnc/quick_sort.go)
+* [Heap Sort](./heap/heap_sort.go)
 * [Knapsack](./greedy/knapsack.go)
 * [Find Anagrams](./hashtable/find_anagrams.go)
 * In-order traversal of a [Binary Search Tree](./tree/README.md)

 ### Polynomial - O(n^2)

-Polynomial time complexity marks the initial threshold of problematic time complexity for algorithms. This complexity often arises when an algorithm includes nested loops involving both an inner loop and an outer loop. Examples:
+Polynomial time complexity marks the initial threshold of problematic time complexity for algorithms. This complexity often arises when an algorithm includes nested loops involving both an inner loop and an outer loop. Examples:

 * [Bubble Sort](./array/bubble_sort.go)
 * [Cheapest Flight](./graph/cheapest_flights.go)
 * [Remove Invalid Parentheses](./graph/remove_invalid_parentheses.go)

 ### Exponential O(2^n)

-Exponential complexity is considered highly undesirable; however, it represents only the second-worst complexity scenario. Examples:
+Exponential complexity is considered highly undesirable; however, it represents only the second-worst complexity scenario. Examples:

 * [Climbing Stairs](./recursion/climbing_stairs.go)
-* [Tower of Hanoi](./dnc/towers_of_hanoi.go)
+* [Towers of Hanoi](./dnc/towers_of_hanoi.go)
 * [Generate Parentheses](./backtracking/generate_parentheses.go)
 * Basic [Recursive](./recursion/README.md) implementation of Fibonacci
