- `paths[i]` consists of English letters, digits, `'/'`, `'.'`, `'('`, `')'`, and `' '`.
- You may assume no files or directories share the same name in the same directory.
- You may assume each given directory info represents a unique directory. A single blank space separates the directory path and file info.
## Solution for Find Duplicate File in System
### Approach 1: Brute Force [Time Limit Exceeded]
For the brute force solution, we first obtain the directory paths, the file names, and the file contents separately by appropriately splitting each element of the `paths` list. While doing so, we build a list containing the full path of every file along with that file's contents, with entries of the form (file_path, file_contents).
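Below is a minimal Python sketch of this brute-force idea, assuming the standard LeetCode signature `findDuplicate(self, paths: List[str]) -> List[List[str]]`; it is an illustration of the parsing and pairwise comparison described above, not necessarily the exact code from this repository.

```python
from typing import List

class Solution:
    def findDuplicate(self, paths: List[str]) -> List[List[str]]:
        # Build a flat list of (full_path, content) by splitting each entry.
        files = []
        for path in paths:
            parts = path.split(" ")
            directory = parts[0]
            for entry in parts[1:]:
                name, _, content = entry.partition("(")
                files.append((directory + "/" + name, content.rstrip(")")))

        # Compare every file with every other file (quadratic in the number
        # of files), grouping paths that share identical contents.
        n = len(files)
        seen = [False] * n
        result = []
        for i in range(n):
            if seen[i]:
                continue
            group = [files[i][0]]
            for j in range(i + 1, n):
                if not seen[j] and files[j][1] == files[i][1]:
                    seen[j] = True
                    group.append(files[j][0])
            if len(group) > 1:
                result.append(group)
        return result
```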
## Complexity Analysis
### Time Complexity: $O(n \times x + f^2 \times s)$
> **Reason**: Creating the list takes $O(n \times x)$, where $n$ is the number of directories and $x$ is the average string length. Every file is then compared with every other file; if there are $f$ files with an average content size of $s$, these comparisons take $O(f^2 \times s)$, since each equality check can cost $O(s)$. The worst case occurs when all files are unique.
### Space Complexity: $O(n \times x)$
> **Reason**: The sizes of the lists `res` and `list` can grow up to $O(n \times x)$.
### Approach 2: Using HashMap
#### Algorithm
In this approach, we again obtain the directory paths, the file names, and their contents separately by appropriately splitting each string in the given `paths` list. To find the files with duplicate contents, we make use of a HashMap `map`, which stores its data in the form (contents, list_of_file_paths_with_this_content). Thus, for every file's contents, we check whether the same contents already exist in the map. If so, we add the current file's path to the list of files corresponding to those contents. Otherwise, we create a new entry in the map, with the current contents as the key and a list containing only one entry (the current file's path) as the value.
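The following is a minimal Python sketch of this HashMap idea, again assuming the LeetCode signature `findDuplicate(self, paths: List[str]) -> List[List[str]]`; here a `defaultdict` plays the role of the HashMap described above.

```python
from collections import defaultdict
from typing import List

class Solution:
    def findDuplicate(self, paths: List[str]) -> List[List[str]]:
        # Map file contents -> list of full paths that have those contents.
        content_to_paths = defaultdict(list)
        for path in paths:
            parts = path.split(" ")
            directory = parts[0]
            for entry in parts[1:]:
                name, _, content = entry.partition("(")
                content_to_paths[content.rstrip(")")].append(directory + "/" + name)

        # Only contents shared by more than one file are duplicates.
        return [group for group in content_to_paths.values() if len(group) > 1]
```

Grouping by contents in a single pass is what removes the quadratic pairwise comparison of the brute-force approach.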
## Complexity Analysis
### Time Complexity: $O(n \times x)$
> **Reason**: $n$ strings of average length $x$ are parsed.