Commit a08b8bb

Update 0609-find-duplicate-file-in-system.md
1 parent 85369be commit a08b8bb

File tree

1 file changed: +9 -9 lines changed

dsa-solutions/lc-solutions/0600-0699/0609-find-duplicate-file-in-system.md

Lines changed: 9 additions & 9 deletions
@@ -43,16 +43,16 @@ Output: [["root/a/2.txt","root/c/d/4.txt"],["root/a/1.txt","root/c/3.txt"]]
 
 ### Constraints
 
-- $1 <= paths.length <= 2 * 10^4$
-- $1 <= paths[i].length <= 3000$
-- $1 <= sum(paths[i].length) <= 5 * 10^5$
+- $1 \leq \text{paths.length} \leq 2 \times 10^4$
+- $1 \leq paths[i].length \leq 3000$
+- $1 \leq sum(paths[i].length) \leq 5 * 10^5$
 - `paths[i]` consist of English letters, digits, `'/'`, `'.'`, `'('`, `')'`, and `' '`.
 - You may assume no files or directories share the same name in the same directory.
 - You may assume each given directory info represents a unique directory. A single blank space separates the directory path and file info.
 
 ## Solution for Find Duplicate File in System
 
-### Approach #1 Brute Force [Time Limit Exceeded]
+### Approach 1 Brute Force [Time Limit Exceeded]
 
 For the brute force solution, firstly we obtain the directory paths, the filenames and file contents separately by appropriately splitting the elements of the `paths` list. While doing so, we keep on creating a list which contains the full path of every file along with the contents of the file. The list contains data in the form
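As a rough sketch of the brute-force approach described above (an illustrative example, not the code in the diffed file; the `findDuplicate` signature is assumed from the LeetCode problem statement, and the parsing is written from the format given in the constraints):

```python
from typing import List


class Solution:
    # Illustrative brute-force sketch: build (full_path, content) pairs,
    # then compare every file's content with every other file's content.
    def findDuplicate(self, paths: List[str]) -> List[List[str]]:
        files = []  # entries of the form (full_path, content)
        for entry in paths:
            directory, *file_infos = entry.split(" ")
            for file_info in file_infos:
                name, _, rest = file_info.partition("(")
                files.append((directory + "/" + name, rest.rstrip(")")))

        res = []
        used = [False] * len(files)
        for i, (path_i, content_i) in enumerate(files):
            if used[i]:
                continue
            group = [path_i]
            for j in range(i + 1, len(files)):
                if not used[j] and files[j][1] == content_i:
                    group.append(files[j][0])
                    used[j] = True
            if len(group) > 1:
                res.append(group)
        return res
```

The nested pairwise comparison of file contents is what pushes this approach past the time limit on large inputs, as the complexity analysis below notes.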

@@ -192,15 +192,15 @@ class Solution:
 
 ## Complexity Analysis
 
-### Time Complexity: $O(n*x + f^2 * s)$
+### Time Complexity: $O(n \times x + f^2 \times s)$
 
 > **Reason**: Creating the list takes O(n*x), where n is the number of directories and x is the average string length. Every file is compared with every other file. If there are f files with an average size of s, comparing the files takes O(f^2 * s), since a single equals check can take O(s). The worst case is when all files are unique.
 
-### Space Complexity: $O(n*x)$
+### Space Complexity: $O(n \times x)$
 
 > **Reason**: The sizes of the lists `res` and `list` can grow up to n*x.
 
-### Approach #2 Using HashMap
+### Approach 2 Using HashMap
 #### Algorithm
 
 In this approach, we first obtain the directory paths, the file names and their contents separately by appropriately splitting each string in the given `paths` list. In order to find the files with duplicate contents, we make use of a HashMap `map`, which stores the data in the form (contents, list_of_file_paths_with_this_content). For every file's contents, we check whether the same content already exists in the hashmap. If so, we add the current file's path to the list of files corresponding to that content. Otherwise, we create a new entry in the map, with the current contents as the key and the value being a list with a single entry (the current file's path).
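A comparable sketch of the HashMap approach (again illustrative and under the same assumed signature, not the diffed file's code):

```python
from collections import defaultdict
from typing import List


class Solution:
    # Illustrative HashMap sketch: map each content string to the list of
    # full file paths containing it, then keep only the groups of size > 1.
    def findDuplicate(self, paths: List[str]) -> List[List[str]]:
        content_to_paths = defaultdict(list)
        for entry in paths:
            directory, *file_infos = entry.split(" ")
            for file_info in file_infos:
                name, _, rest = file_info.partition("(")
                content_to_paths[rest.rstrip(")")].append(directory + "/" + name)
        return [group for group in content_to_paths.values() if len(group) > 1]
```

Keying the map by content means each file is processed exactly once, which is where the $O(n \times x)$ bound in the complexity analysis below comes from.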
@@ -309,11 +309,11 @@ class Solution:
 
 ## Complexity Analysis
 
-### Time Complexity: $O(n*x)$
+### Time Complexity: $O(n \times x)$
 
 > **Reason**: n strings of average length x are parsed.
 
-### Space Complexity: $O(n*x)$
+### Space Complexity: $O(n \times x)$
 
 > **Reason**: The sizes of `map` and `res` grow up to n*x.
