Mekire's solution is pretty good, but we can improve efficiency with a list comprehension and a hash table.

And above all:

**Don’t use *list + list***! Use *extend* instead or, better, *chain* from *itertools*.
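To see the difference on two small example lists: `a + b` allocates a brand-new list, `extend` grows an existing list in place, and `chain` is a lazy iterator that never builds an intermediate list at all (a minimal sketch):

```python
from itertools import chain

a = [1, 2, 3]
b = [4, 5, 6]

# list + list: allocates a new list holding copies of both operands
c = a + b

# extend: mutates the list in place, no intermediate list is created
a_copy = a[:]
a_copy.extend(b)

# chain: lazy, yields items one by one until you materialize it
lazy = chain(a, b)
print(list(lazy))  # [1, 2, 3, 4, 5, 6]
```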

```python
l = list(range(1, 100))

def process_list(_list):
    _list = _list[::2]          # drop every second element first
    _index = 1
    while _index < len(_list):
        _step = _list[_index]
        # hash table (set) of indices to drop: membership tests run in O(1)
        _avoid_indx = {i - 1 for i in range(0, len(_list), _step)}
        # enumerate yields each index directly, avoiding the O(n) _list.index(i) lookup
        _list = [v for idx, v in enumerate(_list) if idx not in _avoid_indx]
        _index += 1
    return _list
```

Now, purely for *didactic* purposes: Python’s list class relies on an algorithmic sleight of hand, the **dynamic array**. It is very efficient, but not for everything (not for real-time processing, for example).

The first key to providing the semantics of a dynamic array is that a list instance maintains an underlying array that frequently has greater capacity than the current length of the list. This is why Python’s implementation of the **append** method shows amortized constant-time behavior (otherwise it would work in linear time).
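You can watch this over-allocation happen by checking `sys.getsizeof` while a list grows: the byte size jumps in steps, not on every append (illustrative sketch; the exact sizes depend on the CPython version):

```python
import sys

data = []
prev_size = sys.getsizeof(data)
for _ in range(32):
    data.append(None)
    size = sys.getsizeof(data)
    if size != prev_size:
        # the size changed: the underlying array was reallocated with spare capacity
        print(f"length {len(data):2d} -> {size} bytes")
        prev_size = size
```

Most appends trigger no reallocation at all, which is exactly where the amortized O(1) behavior comes from.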

Finally, searching for an item in a list runs in O(n) linear time, whereas a hash-table lookup runs in O(1).
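To make that last point concrete, here is a small comparison (an illustrative sketch; absolute timings depend on the machine): a list membership test scans elements one by one, while a set, which uses the same hash-table machinery as a dict’s keys, hashes the value directly:

```python
from timeit import timeit

n = 10_000
as_list = list(range(n))
as_set = set(as_list)    # hash table: same lookup machinery as dict keys

target = n - 1           # worst case for the list: the very last element

t_list = timeit(lambda: target in as_list, number=1_000)  # O(n) scan per test
t_set = timeit(lambda: target in as_set, number=1_000)    # O(1) hash per test

print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```

The gap widens as `n` grows, which is why the index set in `process_list` above is built as a hash table rather than a list.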