flatMap(); the third one is sorted(); the fourth, all sorts of limit() and takeWhile() (the latter appears in JDK 9). Still, I try to avoid them. However, the other day I discovered an operation that parallelizes badly and, depending on how it is used, may not terminate on an infinite stream, but is still too good to pass up. With it you can express, in just a few lines, practically any existing intermediate operation as well as plenty of nonexistent ones. I called the operation headTail():

```java
<R> StreamEx<R> headTail(BiFunction<T, StreamEx<T>, Stream<R>> mapper, Supplier<Stream<R>> supplier)
<R> StreamEx<R> headTail(BiFunction<T, StreamEx<T>, Stream<R>> mapper)
```

The mapper receives the first element of the stream (the head) and a stream of the remaining elements (the tail); the optional supplier produces the result for an empty input.
A simple example:

```java
StreamEx.of("name", "John", "Mary", "Lucy")
        .headTail((head, tail) -> tail.map(str -> head + ": " + str))
        .forEach(System.out::println);
```
```
name: John
name: Mary
name: Lucy
```
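headTail() comes from the StreamEx library. For readers who want to experiment without adding the dependency, here is a toy sketch of the same idea on plain java.util.stream. It is my own illustrative helper, not the library's implementation: it pulls the head eagerly and does none of the tail-stream optimization discussed below, so deep recursion over it would overflow the stack on large inputs.

```java
import java.util.*;
import java.util.function.*;
import java.util.stream.*;

public class HeadTailDemo {
    // Toy headTail: split a stream into its first element and the rest,
    // then let the mapper build the result stream. A null result from the
    // mapper is treated as an empty stream, as in StreamEx.
    static <T, R> Stream<R> headTail(Stream<T> in, BiFunction<T, Stream<T>, Stream<R>> mapper) {
        Iterator<T> it = in.iterator();
        if (!it.hasNext()) return Stream.empty();
        T head = it.next();
        Stream<T> tail = StreamSupport.stream(
                Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false);
        Stream<R> result = mapper.apply(head, tail);
        return result == null ? Stream.empty() : result;
    }

    public static void main(String[] args) {
        List<String> lines = headTail(
                Stream.of("name", "John", "Mary", "Lucy"),
                (head, tail) -> tail.map(str -> head + ": " + str))
                .collect(Collectors.toList());
        System.out.println(lines); // [name: John, name: Mary, name: Lucy]
    }
}
```

Unlike the real operation, this version decides eagerly whether the stream is empty, so it is only suitable for small experiments.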
```java
public static <T, R> StreamEx<R> map(StreamEx<T> input, Function<T, R> mapper) {
    return input.headTail((head, tail) ->
        map(tail, mapper).prepend(mapper.apply(head)));
}
```
Naive recursion like this would eat stack, so the library optimizes headTail() chains that affect only the beginning of the stream, leaving the tail unchanged. I was not very creative and called this feature "tail stream optimization". It is compatible with the intermediate operations prepend (add something to the beginning of the stream), mapFirst (change the first element of the stream without touching the rest) and headTail itself. In principle, it could be extended to the standard skip and dropWhile (from JDK 9), but my library promises that standard operations are fully compatible with the original Stream API, and there would be subtle differences.

Here is limit:

```java
public static <T> StreamEx<T> limit(StreamEx<T> input, int n) {
    return input.headTail((head, tail) ->
        n > 1 ? limit(tail, n - 1).prepend(head) : Stream.of(head));
}
```

This version misbehaves for n == 0: it still emits one element. Fortunately, the mapper may return null, which is treated as an empty stream:

```java
public static <T> StreamEx<T> limit(StreamEx<T> input, int n) {
    return input.headTail((head, tail) ->
        n > 0 ? limit(tail, n - 1).prepend(head) : null);
}
```
```java
limit(StreamEx.of(new Random().ints(0, 1000).boxed().distinct()), 1000)
    .forEach(System.out::println);
```
```java
public static <T> StreamEx<T> skip(StreamEx<T> input, int n) {
    return input.headTail((head, tail) ->
        n > 0 ? skip(tail, n - 1) : tail.prepend(head));
}
```
```java
public static <T, R> StreamEx<R> flatMap(StreamEx<T> input, Function<T, Stream<R>> mapper) {
    return input.headTail((head, tail) ->
        flatMap(tail, mapper).prepend(mapper.apply(head)));
}
```
```java
public static <T> StreamEx<T> peek(StreamEx<T> input, Consumer<T> consumer) {
    return input.headTail((head, tail) -> {
        consumer.accept(head);
        return peek(tail, consumer).prepend(head);
    });
}
```
```java
public static <T> StreamEx<T> filter(StreamEx<T> input, Predicate<T> predicate) {
    return input.<T>headTail((head, tail) ->
        predicate.test(head) ? filter(tail, predicate).prepend(head)
                             : filter(tail, predicate));
}
```
```java
public static <T> StreamEx<T> distinct(StreamEx<T> input) {
    return input.headTail((head, tail) ->
        distinct(tail.filter(n -> !Objects.equals(head, n))).prepend(head));
}
```
```java
private static <T> StreamEx<T> distinct(StreamEx<T> input, Set<T> observed) {
    return input.headTail((head, tail) ->
        observed.add(head) ? distinct(tail, observed).prepend(head)
                           : distinct(tail, observed));
}
```
Set.add returns false if the element was already in the set; in that case we do not prepend the head. This implementation no longer eats stack and is no worse in memory than the standard one. It is worth adding an entry point (with recursive functions you often need a separate public method to kick things off):

```java
public static <T> StreamEx<T> distinct(StreamEx<T> input) {
    return distinct(input, new HashSet<>());
}
```
The sorted operation inevitably buffers the whole input (here, into an ArrayList), and for the first time we need the second headTail argument:

```java
public static <T> StreamEx<T> sorted(StreamEx<T> input) {
    return sorted(input, new ArrayList<>());
}

private static <T> StreamEx<T> sorted(StreamEx<T> input, List<T> buf) {
    return input.headTail((head, tail) -> {
        buf.add(head);
        return sorted(tail, buf);
    }, () -> {
        buf.sort(null);
        return buf.stream();
    });
}
```
This sorted works much like the standard one, and in one respect is even better: if you concatenate two sorted streams, the second one will not be sorted until you have completely read the first. By the way, replacing buf.sort(null) with Collections.shuffle(buf) gives you a more or less decent shuffle, and with Collections.reverse(buf) you can reverse the stream.

Here is takeWhile:

```java
public static <T> StreamEx<T> takeWhile(StreamEx<T> input, Predicate<T> predicate) {
    return input.headTail((head, tail) ->
        predicate.test(head) ? takeWhile(tail, predicate).prepend(head) : null);
}
```
Again we return null when the predicate returns false. dropWhile looks like skip:

```java
public static <T> StreamEx<T> dropWhile(StreamEx<T> input, Predicate<T> predicate) {
    return input.headTail((head, tail) ->
        predicate.test(head) ? dropWhile(tail, predicate) : tail.prepend(head));
}
```
The mirror operation appends a reversed copy of the stream to itself, so a mirror of 1, 2, 3 yields 1, 2, 3, 3, 2, 1. A naive implementation:

```java
public static <T> StreamEx<T> mirror(StreamEx<T> input) {
    return input.headTail((head, tail) ->
        mirror(tail).append(head).prepend(head));
}
```
An alternative implementation collects the reversed half into an ArrayDeque as the stream is read and emits it when the source is exhausted:

```java
public static <T> StreamEx<T> mirror(StreamEx<T> input) {
    return mirror(input, new ArrayDeque<>());
}

private static <T> StreamEx<T> mirror(StreamEx<T> input, Deque<T> buf) {
    return input.headTail((head, tail) -> {
        buf.addFirst(head);
        return mirror(tail, buf).prepend(head);
    }, buf::stream);
}
```
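As a sanity check of what both mirror variants should produce, here is a plain-list model (a helper of my own for illustration, not part of StreamEx):

```java
import java.util.*;

public class MirrorDemo {
    // Plain-list model of mirror: the input followed by its reverse.
    static <T> List<T> mirror(List<T> in) {
        List<T> out = new ArrayList<>(in);
        List<T> rev = new ArrayList<>(in);
        Collections.reverse(rev);
        out.addAll(rev);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(mirror(List.of(1, 2, 3))); // [1, 2, 3, 3, 2, 1]
    }
}
```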
With this version, mirror(StreamEx.of(1, 2, 3, 4, 5)).limit(3) never reaches the reflection point and reads only three elements from the source.

Next, scanLeft: scanLeft(StreamEx.of(1, 2, 3, 4, 5), Integer::sum) should sequentially sum the elements and produce the stream 1, 3, 6, 10, 15.

```java
public static <T> StreamEx<T> scanLeft(StreamEx<T> input, BinaryOperator<T> operator) {
    return input.headTail((head, tail) ->
        scanLeft(tail.mapFirst(cur -> operator.apply(head, cur)), operator).prepend(head));
}
```
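To pin down the expected semantics of scanLeft, here is a plain-Java model (my own illustrative helper, operating on lists rather than streams) that computes the same running reduction:

```java
import java.util.*;
import java.util.function.BinaryOperator;

public class ScanLeftDemo {
    // Each output element is the reduction of all input elements so far;
    // the last emitted value doubles as the accumulator.
    static <T> List<T> scanLeft(List<T> in, BinaryOperator<T> op) {
        List<T> out = new ArrayList<>();
        for (T x : in) {
            out.add(out.isEmpty() ? x : op.apply(out.get(out.size() - 1), x));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(scanLeft(List.of(1, 2, 3, 4, 5), Integer::sum)); // [1, 3, 6, 10, 15]
    }
}
```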
The mapFirst used above is itself expressible via headTail:

```java
public static <T> StreamEx<T> mapFirst(StreamEx<T> input, UnaryOperator<T> operator) {
    return input.headTail((head, tail) -> tail.prepend(operator.apply(head)));
}
```
Sometimes you want a variant of takeWhile that includes not only the elements satisfying the predicate but also the first element that violates it. This is not normally expressible through existing operations or the usual takeWhile, but through headTail it is easy:

```java
public static <T> StreamEx<T> takeWhileClosed(StreamEx<T> input, Predicate<T> predicate) {
    return input.headTail((head, tail) ->
        predicate.test(head) ? takeWhileClosed(tail, predicate).prepend(head)
                             : Stream.of(head));
}
```
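A quick loop model (my own helper, for illustration only) of what takeWhileClosed should return:

```java
import java.util.*;
import java.util.function.Predicate;

public class TakeWhileClosedDemo {
    // Keep elements while the predicate holds, and also keep the first
    // element that breaks it.
    static <T> List<T> takeWhileClosed(List<T> in, Predicate<T> p) {
        List<T> out = new ArrayList<>();
        for (T x : in) {
            out.add(x);
            if (!p.test(x)) break;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(takeWhileClosed(List.of(1, 2, 3, 10, 4), x -> x < 5)); // [1, 2, 3, 10]
    }
}
```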
The every operation takes every n-th element, starting with the first. We could use a skip operation here, but the standard skip does not optimize tails, so we use our redefined skip:

```java
public static <T> StreamEx<T> every(StreamEx<T> input, int n) {
    return input.headTail((head, tail) ->
        every(skip(tail, n - 1), n).prepend(head));
}
```
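An index-based model (an illustrative helper of mine) of what every should keep:

```java
import java.util.*;

public class EveryDemo {
    // every keeps the elements at positions 0, n, 2n, ...
    static <T> List<T> every(List<T> in, int n) {
        List<T> out = new ArrayList<>();
        for (int i = 0; i < in.size(); i += n) {
            out.add(in.get(i));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(every(List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), 3)); // [1, 4, 7, 10]
    }
}
```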
To combine elements into non-overlapping pairs, apply headTail twice:

```java
public static <T, R> StreamEx<R> couples(StreamEx<T> input, BiFunction<T, T, R> mapper) {
    return input.headTail((left, tail1) ->
        tail1.headTail((right, tail2) ->
            couples(tail2, mapper).prepend(mapper.apply(left, right))));
}
```
Overlapping pairs are just as easy: return the right element back into the stream before recursing:

```java
public static <T, R> StreamEx<R> pairMap(StreamEx<T> input, BiFunction<T, T, R> mapper) {
    return input.headTail((left, tail1) ->
        tail1.headTail((right, tail2) ->
            pairMap(tail2.prepend(right), mapper).prepend(mapper.apply(left, right))));
}
```
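To make the difference between the two pairings concrete, here is a plain-Java model of both (my own loop-based helpers, not the library's code):

```java
import java.util.*;
import java.util.function.BiFunction;

public class PairsDemo {
    // couples: non-overlapping pairs; an odd trailing element is dropped.
    static <T, R> List<R> couples(List<T> in, BiFunction<T, T, R> mapper) {
        List<R> out = new ArrayList<>();
        for (int i = 0; i + 1 < in.size(); i += 2) {
            out.add(mapper.apply(in.get(i), in.get(i + 1)));
        }
        return out;
    }

    // pairMap: overlapping adjacent pairs.
    static <T, R> List<R> pairMap(List<T> in, BiFunction<T, T, R> mapper) {
        List<R> out = new ArrayList<>();
        for (int i = 0; i + 1 < in.size(); i++) {
            out.add(mapper.apply(in.get(i), in.get(i + 1)));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(couples(List.of(1, 2, 3, 4, 5), Integer::sum));  // [3, 7]
        System.out.println(pairMap(List.of(1, 3, 6, 10), (a, b) -> b - a)); // [2, 3, 4]
    }
}
```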
A pairMap operation already exists in the library and is, of course, parallelized normally, unlike this implementation via headTail().
batches(StreamEx.of(1, 2, 3, 4, 5, 6, 7), 3) should make a stream of the lists [1, 2, 3], [4, 5, 6], [7]. An extra argument carrying the intermediate buffer helps here:

```java
public static <T> StreamEx<List<T>> batches(StreamEx<T> input, int size) {
    return batches(input, size, Collections.emptyList());
}

private static <T> StreamEx<List<T>> batches(StreamEx<T> input, int size, List<T> cur) {
    return input.headTail((head, tail) -> cur.size() >= size
            ? batches(tail, size, Collections.singletonList(head)).prepend(cur)
            : batches(tail, size, StreamEx.of(cur).append(head).toList()),
        () -> Stream.of(cur));
}
```
Note the second argument, () -> Stream.of(cur), so as not to lose the unfinished batch at the end. For the beauty of the implementation I create a new list every time via StreamEx.of(cur).append(head).toList() rather than mutating the existing one, but it is easy to switch to mutable lists if performance matters.
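The expected chunking can be modeled with index arithmetic on a list (an illustrative helper of mine; note that, unlike the headTail version, it yields no batches at all for an empty input):

```java
import java.util.*;
import java.util.stream.*;

public class BatchesDemo {
    // Slice the input into chunks of at most `size` elements.
    static <T> List<List<T>> batches(List<T> in, int size) {
        return IntStream.range(0, (in.size() + size - 1) / size)
                .mapToObj(i -> in.subList(i * size, Math.min(in.size(), (i + 1) * size)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(batches(List.of(1, 2, 3, 4, 5, 6, 7), 3)); // [[1, 2, 3], [4, 5, 6], [7]]
    }
}
```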
To attach indices to elements, take an abstract function of type BiFunction<Integer, T, R>, which can do whatever it wants with the index and the element:

```java
public static <T, R> StreamEx<R> withIndices(StreamEx<T> input, BiFunction<Integer, T, R> mapper) {
    return withIndices(input, 0, mapper);
}

private static <T, R> StreamEx<R> withIndices(StreamEx<T> input, int idx, BiFunction<Integer, T, R> mapper) {
    return input.headTail((head, tail) ->
        withIndices(tail, idx + 1, mapper).prepend(mapper.apply(idx, head)));
}
```
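For a list, the same pairing of positions and elements can be sketched with IntStream.range (my own illustrative helper):

```java
import java.util.*;
import java.util.function.BiFunction;
import java.util.stream.*;

public class WithIndicesDemo {
    // Apply the mapper to each (index, element) pair.
    static <T, R> List<R> withIndices(List<T> in, BiFunction<Integer, T, R> mapper) {
        return IntStream.range(0, in.size())
                .mapToObj(i -> mapper.apply(i, in.get(i)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(withIndices(List.of("a", "b", "c"), (i, s) -> i + ":" + s)); // [0:a, 1:b, 2:c]
    }
}
```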
dominators(numbers, (a, b) -> a >= b) leaves an increasing subsequence of the original numbers: every element dominated by the last kept one is dropped. The implementation is similar to every, but instead of skip our dropWhile is used:

```java
public static <T> StreamEx<T> dominators(StreamEx<T> input, BiPredicate<T, T> isDominator) {
    return input.headTail((head, tail) ->
        dominators(dropWhile(tail, e -> isDominator.test(head, e)), isDominator)
            .prepend(head));
}
```
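A loop model (an illustrative helper of mine) of the dominators semantics, comparing each element against the last one kept:

```java
import java.util.*;
import java.util.function.BiPredicate;

public class DominatorsDemo {
    // Drop every element dominated by the most recently kept element.
    static <T> List<T> dominators(List<T> in, BiPredicate<T, T> isDominator) {
        List<T> out = new ArrayList<>();
        for (T x : in) {
            if (out.isEmpty() || !isDominator.test(out.get(out.size() - 1), x)) {
                out.add(x);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // (a, b) -> a >= b keeps a strictly increasing subsequence
        System.out.println(dominators(List.of(1, 5, 3, 4, 7, 2, 9), (a, b) -> a >= b)); // [1, 5, 7, 9]
    }
}
```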
appendReduction(numbers, 0, Integer::sum) appends the sum of its elements to a stream of numbers:

```java
public static <T> StreamEx<T> appendReduction(StreamEx<T> input, T identity, BinaryOperator<T> op) {
    return input.headTail((head, tail) ->
        appendReduction(tail, op.apply(identity, head), op).prepend(head),
        () -> Stream.of(identity));
}
```
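The intended result, modeled on a list (my own helper, for illustration): the original elements followed by their reduction.

```java
import java.util.*;
import java.util.function.BinaryOperator;

public class AppendReductionDemo {
    // Emit the input unchanged, then append the reduction of all elements.
    static <T> List<T> appendReduction(List<T> in, T identity, BinaryOperator<T> op) {
        List<T> out = new ArrayList<>(in);
        T acc = identity;
        for (T x : in) {
            acc = op.apply(acc, x);
        }
        out.add(acc);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(appendReduction(List.of(1, 2, 3), 0, Integer::sum)); // [1, 2, 3, 6]
    }
}
```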
Finally, an infinite stream of prime numbers via a recursive sieve:

```java
public static StreamEx<Integer> primes() {
    return sieve(StreamEx.iterate(2, x -> x + 1));
}

private static StreamEx<Integer> sieve(StreamEx<Integer> input) {
    return input.headTail((head, tail) ->
        sieve(tail.filter(n -> n % head != 0)).prepend(head));
}
```
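A trial-division model (an illustrative helper of mine, iterative rather than stream-recursive) of what the first elements of that prime stream should be:

```java
import java.util.*;

public class PrimesDemo {
    // A number is prime if no previously found prime divides it;
    // this mirrors the sieve's chain of filters.
    static List<Integer> primes(int count) {
        List<Integer> out = new ArrayList<>();
        for (int n = 2; out.size() < count; n++) {
            final int x = n;
            if (out.stream().noneMatch(p -> x % p == 0)) {
                out.add(x);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(primes(10)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```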
And all of this without writing a single special Iterator or Spliterator. As I understand it, such constructions are common in the world of functional programming; it is nice that they are possible in Java too.

Source: https://habr.com/ru/post/262139/