How to remove duplicated elements from a list without using set()?
Let
a = np.array([1, 1, 1, 1, 1, 1])
b = np.array([2, 2, 2])
be two numpy arrays. Then let
c = [a] + [b] + [b]
Clearly, c contains the element b twice. Now I wish to remove one copy of b from c so that c contains only one a and one b.
To remove duplicated elements from a list I usually use set(). However, this time, if I do
set(c)
I get an error like
TypeError: unhashable type: 'numpy.ndarray'
My understanding is that numpy.ndarray is not hashable.
The list c above is just an example; in practice my c could be very long. So, is there a good way to remove duplicated elements from a list of numpy arrays?
Thx!
Edit: I would expect the result to be c = [a] + [b]
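For reference, a runnable restatement of the setup above (assuming only that NumPy is imported as np):
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1])
b = np.array([2, 2, 2])
c = [a] + [b] + [b]

# set(c) raises TypeError: unhashable type: 'numpy.ndarray'
# because ndarray does not implement hashing; the desired
# result is the two-element list [a, b].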
Solution 1:[1]
You can use this
c = a.tolist() + b.tolist() + b.tolist()
And then
c = set(c)
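Note that this approach converts the arrays to plain Python lists and concatenates them, so set() sees hashable ints; the result is a set of the unique scalar values, not a list of unique arrays. A runnable sketch of what this produces:
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1])
b = np.array([2, 2, 2])

# tolist() yields plain Python ints, which are hashable,
# so set() can deduplicate the concatenated values.
c = a.tolist() + b.tolist() + b.tolist()
c = set(c)
print(c)  # {1, 2}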
Solution 2:[2]
I think the question is the same as the one below.
Removing duplicates from a list of numPy arrays
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1])
b = np.array([2, 2, 2])
arraylist = [a, b, b]

# Use each array's raw bytes as a hashable dictionary key;
# duplicate arrays collapse onto the same key.
L = {array.tostring(): array for array in arraylist}
c = list(L.values())
c
Result:
[array([1, 1, 1, 1, 1, 1]), array([2, 2, 2])]
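As a side note, recent NumPy versions deprecate ndarray.tostring() in favour of the equivalent ndarray.tobytes(); a sketch of the same dictionary trick with the newer name (using the same a and b as above):
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1])
b = np.array([2, 2, 2])
arraylist = [a, b, b]

# The raw bytes of each array act as a hashable dict key,
# so duplicate arrays collapse onto a single entry.
unique = {arr.tobytes(): arr for arr in arraylist}
c = list(unique.values())
print(c)  # [array([1, 1, 1, 1, 1, 1]), array([2, 2, 2])]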
Solution 3:[3]
You can try this with just a few lines of code:
list1 = [7, 7, 3, 3, 8]
list2 = []
for k in list1:
    if k not in list2:
        list2.append(k)
print(list2)  # Output: [7, 3, 8]
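This loop relies on `k not in list2` working for hashable scalars; with numpy arrays that membership test performs element-wise comparison and can error out or behave unexpectedly. One possible adaptation to the question's list of arrays, using np.array_equal (not part of the original answer, just a sketch):
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1])
b = np.array([2, 2, 2])
c = [a, b, b]

unique = []
for arr in c:
    # np.array_equal checks both shape and contents, so each
    # array is kept only the first time it appears.
    if not any(np.array_equal(arr, kept) for kept in unique):
        unique.append(arr)

print(unique)  # [array([1, 1, 1, 1, 1, 1]), array([2, 2, 2])]
This comparison-based loop is quadratic in the number of arrays, so for a very long c the byte-key dictionary from Solution 2 is likely the cheaper option.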
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Paulo44 |
| Solution 2 | |
| Solution 3 | BlackBeans |
