Create a Flux that emits long values starting with 0 and incrementing at specified time intervals, after an initial delay, on the specified Scheduler.
Create a Flux that emits long values starting with 0 and incrementing at specified time intervals, on the specified Scheduler. Create a Flux that emits the provided elements and then completes. Create a new Flux that will only emit a single element then onComplete. Emit the last element observed before the complete signal as a Mono, or emit a NoSuchElementException error if the source was empty. Emit the last element observed before the complete signal as a Mono, or emit the defaultValue if the source was empty. Ensure that backpressure signals from downstream subscribers are split into batches capped at the provided prefetchRate when propagated upstream, effectively rate limiting the upstream Publisher.
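As a rough, self-contained sketch of interval, just, last and limitRate as summarized above (the delay, period, prefetch of 10 and the sample values are arbitrary choices, not from the reference):

```java
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class IntervalExample {
    public static void main(String[] args) throws InterruptedException {
        // Emit 0, 1, 2, ... every 100 ms after a 500 ms initial delay,
        // and cap upstream requests at batches of 10 (limitRate).
        Flux<Long> ticks = Flux.interval(Duration.ofMillis(500), Duration.ofMillis(100))
                               .limitRate(10)
                               .take(5);

        ticks.subscribe(i -> System.out.println("tick " + i));

        // last() turns the final element of a finite sequence into a Mono.
        Mono<Integer> lastValue = Flux.just(1, 2, 3).last();
        lastValue.subscribe(v -> System.out.println("last = " + v));

        Thread.sleep(1500); // interval runs on the parallel Scheduler, so wait for it
    }
}
```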
Ensure that backpressure signals from downstream subscribers are split into batches capped at the provided highTide first, then replenishing at the provided lowTide, effectively rate limiting the upstream Publisher. Observe all Reactive Streams signals and trace them using Logger support. Observe Reactive Streams signals matching the passed filter options and trace them using a specific user-provided Logger, at Level.INFO level. Observe Reactive Streams signals matching the passed filter options and trace them using a specific user-provided Logger, at the given Level. Observe Reactive Streams signals matching the passed filter options and trace them using Logger support.
Transform the items emitted by this Flux by applying a synchronous function to each item. Transform incoming onNext, onError and onComplete signals into Signal instances, materializing these signals. Merge data from Publisher sequences contained in an Iterable into an interleaved merged sequence. Merge data from Publisher sequences emitted by the passed Publisher into an interleaved merged sequence.
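A minimal sketch of map and merge as described above (the sample values are made up):

```java
import reactor.core.publisher.Flux;

public class MapMergeExample {
    public static void main(String[] args) {
        // map: apply a synchronous function to each item.
        Flux<Integer> lengths = Flux.just("a", "bb", "ccc").map(String::length);

        // merge: interleave several sources into one sequence (subscribed eagerly).
        Flux<Integer> merged = Flux.merge(lengths, Flux.just(10, 20));

        merged.subscribe(System.out::println);
    }
}
```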
Merge data from provided Publisher sequences into an ordered merged sequence, by picking the smallest values from each source as defined by the provided Comparator. Merge data from provided Publisher sequences into an ordered merged sequence, by picking the smallest values from each source as defined by their natural order. Merge data from this Flux and a Publisher into a reordered merge sequence, by picking the smallest value from each sequence as defined by a provided Comparator. Merge data from Publisher sequences provided in an Iterable into an ordered merged sequence.
Merge data from Publisher sequences emitted by the passed Publisher into an ordered merged sequence. Merge data from this Flux and a Publisher into an interleaved merged sequence. Activate metrics for this sequence, provided there is an instrumentation facade on the classpath (otherwise this method is a pure no-op). Give a name to this sequence, which can be retrieved using Scannable. Create a Flux that will never signal any data, error or completion signal. Emit only the first item emitted by this Flux, into a new Mono. Evaluate each accepted value against the given Class type. To be used by custom operators. Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream.
Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit and for a maximum Duration of ttl as measured on the elastic Scheduler. Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit and for a maximum Duration of ttl as measured on the provided Scheduler.
Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit. Request an unbounded demand and push to the returned Flux, or drop the observed elements if not enough demand is requested downstream. Request an unbounded demand and push to the returned Flux, or drop and notify a dropping Consumer with the observed elements if not enough demand is requested downstream. Request an unbounded demand and push to the returned Flux, or emit an onError from Exceptions.failWithOverflow() if not enough demand is requested downstream.
Request an unbounded demand and push to the returned Flux, or only keep the most recent observed item if not enough demand is requested downstream. Let compatible operators upstream recover from errors by dropping the incriminating element from the sequence and continuing with subsequent elements. Transform an error emitted by this Flux by synchronously applying a function to it if the error matches the given type.
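The onBackpressureXxx variants above might be exercised like this (the 1 ms interval, prefetch of 1 and simulated slow consumer are illustrative assumptions):

```java
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class BackpressureExample {
    public static void main(String[] args) throws InterruptedException {
        Flux.interval(Duration.ofMillis(1))
            // Request an unbounded demand upstream; keep only the most recent
            // element when the downstream cannot keep up.
            .onBackpressureLatest()
            // Process each element slowly on another scheduler to create backpressure.
            .publishOn(Schedulers.single(), 1)
            .doOnNext(i -> sleep(50))
            .take(10)
            .subscribe(i -> System.out.println("got " + i));

        Thread.sleep(1000);
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```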
Transform any error emitted by this Flux by synchronously applying a function to it. Transform an error emitted by this Flux by synchronously applying a function to it if the error matches the given predicate. Subscribe to a fallback publisher when an error matching the given type occurs, using a function to choose the fallback depending on the error. Subscribe to a returned fallback publisher when any error occurs, using a function to choose the fallback depending on the error. Subscribe to a fallback publisher when an error matching a given predicate occurs. Simply emit a captured fallback value when an error of the specified type is observed on this Flux.
Simply emit a captured fallback value when an error matching the given predicate is observed on this Flux. Simply emit a captured fallback value when any error is observed on this Flux. Detaches both the child Subscriber and the Subscription on termination or cancellation. Prepare this Flux by dividing data on a number of 'rails' matching the number of CPU cores, in a round-robin fashion.
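A rough sketch of the onErrorReturn, onErrorResume and onErrorMap operators listed above (the failing division and the fallback values are invented for illustration):

```java
import reactor.core.publisher.Flux;

public class ErrorHandlingExample {
    public static void main(String[] args) {
        Flux<Integer> failing = Flux.just(1, 2, 0, 4)
                                    .map(i -> 10 / i); // throws ArithmeticException on 0

        // onErrorReturn: emit a captured fallback value when an error is observed.
        failing.onErrorReturn(-1)
               .subscribe(v -> System.out.println("onErrorReturn: " + v));

        // onErrorResume: switch to a fallback publisher chosen from the error.
        failing.onErrorResume(ArithmeticException.class, e -> Flux.just(-1, -2))
               .subscribe(v -> System.out.println("onErrorResume: " + v));

        // onErrorMap: translate the error into another exception type.
        failing.onErrorMap(e -> new IllegalStateException("division failed", e))
               .subscribe(v -> {}, e -> System.out.println("onErrorMap: " + e));
    }
}
```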
Prepare this Flux by dividing data on a number of 'rails' matching the provided parallelism parameter, in a round-robin fashion. Prepare this Flux by dividing data on a number of 'rails' matching the provided parallelism parameter, in a round-robin fashion and using a custom prefetch amount and queue for dealing with the source Flux 's values.
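The parallel "rails" described above could be used roughly as follows (the parallelism of 4 and the squaring function are arbitrary):

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class ParallelExample {
    public static void main(String[] args) throws InterruptedException {
        Flux.range(1, 20)
            .parallel(4)                     // divide data onto 4 rails, round-robin
            .runOn(Schedulers.parallel())    // actually run each rail on its own worker
            .map(i -> i * i)
            .sequential()                    // merge the rails back into a single Flux
            .subscribe(i -> System.out.println(Thread.currentThread().getName() + " -> " + i));

        Thread.sleep(500);
    }
}
```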
Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner. Shares a sequence for the duration of a function that may transform it and consume it as many times as necessary without causing multiple subscriptions to the upstream.
Prepare a Mono which shares this Flux sequence and dispatches the first observed item to subscribers in a backpressure-aware manner. Programmatically create a Flux with the capability of emitting multiple elements from a single-threaded producer through the FluxSink API.
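The single-threaded FluxSink-based push factory mentioned above might be used to bridge a callback API roughly like this (the MessageListener interface is a hypothetical stand-in, not a real API):

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

public class PushExample {
    // Hypothetical single-threaded callback API being adapted to a Flux.
    interface MessageListener {
        void onMessage(String msg);
        void onClose();
    }

    static Flux<String> bridge(java.util.function.Consumer<MessageListener> register) {
        // push suits a single-threaded producer; use create for multi-threaded ones.
        return Flux.push((FluxSink<String> sink) -> register.accept(new MessageListener() {
            @Override public void onMessage(String msg) { sink.next(msg); }
            @Override public void onClose() { sink.complete(); }
        }));
    }

    public static void main(String[] args) {
        bridge(listener -> {               // simulate the callback source synchronously
            listener.onMessage("hello");
            listener.onMessage("world");
            listener.onClose();
        }).subscribe(System.out::println);
    }
}
```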
Build a Flux that will only emit a sequence of count incrementing integers, starting from start. Reduce the values from this Flux sequence into a single object matching the type of a seed value. Reduce the values from this Flux sequence into a single object of the same type than the emitted items.
Reduce the values from this Flux sequence into a single object matching the type of a lazily supplied seed value. Repeatedly and indefinitely subscribe to the source upon completion of the previous subscription. Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription.
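A small sketch of range, reduce and the predicate-based repeat described above (seed, combinator and repeat count are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReduceExample {
    public static void main(String[] args) {
        // range(start, count): emit 1..5, then reduce into a single sum with a seed of 0.
        Mono<Integer> sum = Flux.range(1, 5)
                                .reduce(0, Integer::sum);
        sum.subscribe(s -> System.out.println("sum = " + s)); // 15

        // repeat(predicate): re-subscribe after completion while the predicate holds.
        AtomicInteger rounds = new AtomicInteger();
        Flux.just("a", "b")
            .repeat(() -> rounds.incrementAndGet() < 2)
            .subscribe(System.out::println); // a b a b
    }
}
```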
Repeatedly subscribe to this Flux when a companion sequence emits elements in response to the flux completion signal. Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber.
Re-subscribes to this Flux sequence if it signals any error, indefinitely. Re-subscribes to this Flux sequence if it signals any error, for a fixed number of times. Re-subscribes to this Flux sequence up to the specified number of retries if it signals any error that matches the given Predicate, otherwise push the error downstream.
Re-subscribes to this Flux sequence if it signals any error that matches the given Predicate, otherwise push the error downstream. In case of error, retry this Flux up to numRetries times using a randomized exponential backoff strategy (jitter). In case of error, retry this Flux up to numRetries times using a randomized exponential backoff strategy. In case of error, retry this Flux up to numRetries times using a randomized exponential backoff strategy, randomized with a user-provided jitter factor between 0 and 1. Retries this Flux when a companion sequence signals an item in response to this Flux error signal.
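A sketch of retrying with exponential backoff as described above, assuming a Reactor version (3.3.4+) where the deprecated retryBackoff is replaced by retryWhen with reactor.util.retry.Retry; attempt counts and delays are arbitrary:

```java
import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;
import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

public class RetryExample {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger attempts = new AtomicInteger();

        Flux.defer(() -> {
                if (attempts.incrementAndGet() < 3) {
                    return Flux.error(new RuntimeException("attempt " + attempts.get() + " failed"));
                }
                return Flux.just("success on attempt " + attempts.get());
            })
            // Retry up to 3 times with randomized exponential backoff starting at 100 ms.
            .retryWhen(Retry.backoff(3, Duration.ofMillis(100)))
            .subscribe(System.out::println);

        Thread.sleep(1000);
    }
}
```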
Sample this Flux by periodically emitting an item corresponding to that Flux latest emitted value within the periodical time window. Sample this Flux by emitting an item corresponding to that Flux latest emitted value whenever a companion sampler Publisher signals a value. Repeatedly take a value from this Flux then skip the values that follow within a given duration. Repeatedly take a value from this Flux then skip the values that follow before the next signal from a companion sampler Publisher.
Emit the latest value from this Flux only if there were no new values emitted during the window defined by a companion Publisher derived from that particular value. Reduce this Flux values with an accumulator BiFunction and also emit the intermediate results of this function. Reduce this Flux values with the help of an accumulator BiFunction and also emits the intermediate results.
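The scan operator above (a reduce that also emits its intermediate results) can be sketched as follows (seed and combinator are arbitrary):

```java
import reactor.core.publisher.Flux;

public class ScanExample {
    public static void main(String[] args) {
        // scan: like reduce, but the seed and every intermediate accumulation are emitted.
        Flux.just(1, 2, 3, 4)
            .scan(0, Integer::sum)
            .subscribe(System.out::println); // 0 1 3 6 10
    }
}
```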
Returns a new Flux that multicasts (shares) the original Flux. Expect and emit a single item from this Flux source and emit a default value for an empty source, but signal an IndexOutOfBoundsException for a source with more than one element. Expect and emit a single item from this Flux source, and accept an empty source but signal an IndexOutOfBoundsException for a source with more than one element. Skip elements from this Flux emitted within the specified initial duration. Skip elements from this Flux emitted within the specified initial duration, as measured on the provided Scheduler.
Skip the specified number of elements from the beginning of this Flux then emit the remaining elements. Skip a specified number of elements at the end of this Flux sequence. Skips values from this Flux until a Predicate returns true for the value. Skip values from this Flux until a specified Publisher signals an onNext or onComplete. Skips values from this Flux while a Predicate returns true for the value. Sort elements from this Flux by collecting and sorting them in the background then emitting the sorted sequence once this sequence completes. Sort elements from this Flux using a Comparator function, by collecting and sorting elements in the background then emitting the sorted sequence once this sequence completes.
Prepend the given Iterable before this Flux sequence. Prepend the given Publisher sequence to this Flux sequence. Prepend the given values before this Flux sequence.
Subscribe to this Flux and request unbounded demand. Subscribe a Consumer to this Flux that will consume all the elements in the sequence.
Subscribe to this Flux with a Consumer that will consume all the elements in the sequence, as well as a Consumer that will handle errors. Subscribe Consumer to this Flux that will respectively consume all the elements in the sequence, handle errors and react to completion.
Subscribe Consumer to this Flux that will respectively consume all the elements in the sequence, handle errors, react to completion, and request upon subscription. Run subscribe, onSubscribe and request on a specified Scheduler's Scheduler.Worker.
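A sketch of the subscribe variants and subscribeOn described above (the consumers are trivial, and Schedulers.boundedElastic() assumes Reactor 3.3+; any Scheduler would illustrate the same point):

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SubscribeExample {
    public static void main(String[] args) throws InterruptedException {
        Flux.range(1, 3)
            // subscribeOn moves subscribe/onSubscribe/request to the given Scheduler,
            // so the (here trivial) source work runs off the calling thread.
            .subscribeOn(Schedulers.boundedElastic())
            .subscribe(
                v -> System.out.println("next: " + v),        // onNext consumer
                e -> System.err.println("error: " + e),       // onError consumer
                () -> System.out.println("done"));            // onComplete runnable

        Thread.sleep(200);
    }
}
```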
Add behavior side-effect triggered when the Flux completes successfully.
Run subscribe and onSubscribe on a specified Scheduler's Scheduler.Worker. Enrich a potentially empty downstream Context by adding all values from the given Context, producing a new Context that is propagated upstream. Enrich a potentially empty downstream Context by applying a Function to it, producing a new Context that is propagated upstream. Subscribe the given Subscriber to this Flux and return said Subscriber. Switch to an alternative Publisher if this sequence is completed without any data.
Switch to a new Publisher generated via a Function whenever this Flux produces an item. Transform the current Flux once it emits its first element, making a conditional transformation possible. Creates a Flux that mirrors the most recently emitted Publisher, forwarding its data until a new Publisher comes in from the source. Relay values from this Flux until the specified Duration elapses. Relay values from this Flux until the specified Duration elapses, as measured on the specified Scheduler.
Take only the first N values from this Flux , if available. Emit the last N values this Flux emitted before its completion. Relay values from this Flux until the given Predicate matches. Relay values from this Flux until the given Publisher emits.
Relay values from this Flux while a predicate returns TRUE for the values checked before each value is delivered. Let this Flux complete then play signals from a provided Mono. Let this Flux complete then play another Publisher. Propagate a TimeoutException as soon as no item is emitted within the given Duration from the previous emission (or the subscription, for the first item). Switch to a fallback Flux as soon as no item is emitted within the given Duration from the previous emission (or the subscription, for the first item).
Switch to a fallback Flux as soon as no item is emitted within the given Duration from the previous emission (or the subscription, for the first item), as measured on the specified Scheduler. Propagate a TimeoutException as soon as no item is emitted within the given Duration from the previous emission (or the subscription, for the first item), as measured by the specified Scheduler.
Signal a TimeoutException in case the first item from this Flux has not been emitted before the given Publisher emits. Signal a TimeoutException in case the first item from this Flux has not been emitted before the firstTimeout Publisher emits, and whenever each subsequent element is not emitted before a Publisher generated from the latest element signals.
Switch to a fallback Publisher in case the first item from this Flux has not been emitted before the firstTimeout Publisher emits, and whenever each subsequent element is not emitted before a Publisher generated from the latest element signals. Emit a Tuple2 pair of T1 the current clock time in millis (as a Long measured by the parallel Scheduler) and T2 the emitted data (as a T), for each item from this Flux.
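The duration-based timeout with a fallback, as described above, might look like this (the 500 ms source, 200 ms timeout and -1 fallback are arbitrary):

```java
import java.time.Duration;
import reactor.core.publisher.Flux;

public class TimeoutExample {
    public static void main(String[] args) throws InterruptedException {
        Flux<Long> slow = Flux.interval(Duration.ofMillis(500)).take(3);

        slow
            // If no item arrives within 200 ms of the previous one (or of subscription),
            // switch to the fallback Flux instead of propagating a TimeoutException.
            .timeout(Duration.ofMillis(200), Flux.just(-1L))
            .subscribe(
                v -> System.out.println("value: " + v),
                e -> System.err.println("timed out: " + e));

        Thread.sleep(1000);
    }
}
```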
Emit a Tuple2 pair of T1 the current clock time in millis (as a Long measured by the provided Scheduler) and T2 the emitted data (as a T), for each item from this Flux. Transform this Flux into a lazy Iterable blocking on Iterator.next() calls. Transform this Flux into a lazy Stream blocking for each source onNext call. Transform this Flux in order to generate a target Flux. Uses a resource, generated by a supplier for each individual Subscriber, while streaming the values from a Publisher derived from the same resource and makes sure the resource is released if the sequence terminates or the Subscriber cancels.
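The resource-scoped using factory and the blocking toIterable view described above could be sketched as follows (the Resource class is a made-up stand-in for something like a connection or file handle):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import reactor.core.publisher.Flux;

public class UsingExample {
    // Stand-in for a closeable resource.
    static class Resource {
        final AtomicBoolean closed = new AtomicBoolean();
        Flux<String> rows() { return Flux.just("row1", "row2"); }
        void close() { closed.set(true); System.out.println("resource released"); }
    }

    public static void main(String[] args) {
        Flux<String> scoped = Flux.using(
                Resource::new,        // create one resource per Subscriber
                Resource::rows,       // derive the streamed values from it
                Resource::close);     // release it on completion, error or cancel

        scoped.subscribe(System.out::println);

        // toIterable() gives a lazy, blocking view of the same sequence
        // (each iteration is a fresh subscription, hence a fresh resource).
        for (String row : scoped.toIterable()) {
            System.out.println("iterated: " + row);
        }
    }
}
```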
Uses a resource, generated by a Publisher for each individual Subscriber , while streaming the values from a Publisher derived from the same resource. Split this Flux sequence into continuous, non-overlapping windows that open for a windowingTimespan Duration as measured on the parallel Scheduler. Split this Flux sequence into multiple Flux windows that open for a given windowingTimespan Duration , after which it closes with onComplete.
Split this Flux sequence into continuous, non-overlapping windows that open for a windowingTimespan Duration as measured on the provided Scheduler. Split this Flux sequence into multiple Flux windows containing maxSize elements or less for the final window and starting from the first item. Split this Flux sequence into multiple Flux windows of size maxSize, that each open every skip elements in the source. Split this Flux sequence into continuous, non-overlapping windows where the window boundary is signalled by another Publisher. Split this Flux sequence into multiple Flux windows delimited by the given predicate.
Split this Flux sequence into multiple Flux windows delimited by the given predicate and using a prefetch. Split this Flux sequence into potentially overlapping windows controlled by items of a start Publisher and end Publisher derived from the start values. Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements.
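A minimal sketch of the size-based window variant above (the window size of 3 is arbitrary; flatMap/collectList is just one way to materialize the inner windows):

```java
import reactor.core.publisher.Flux;

public class WindowExample {
    public static void main(String[] args) {
        Flux.range(1, 10)
            .window(3)                        // windows of at most 3 elements
            .flatMap(Flux::collectList)       // collect each inner Flux to see its contents
            .subscribe(System.out::println);  // [1,2,3] [4,5,6] [7,8,9] [10]
    }
}
```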
Combine the most recently emitted values from both this Flux and another Publisher through a BiFunction and emits the result. Zip multiple sources together, that is to say wait for all the sources to emit one element and combine these elements once into an output value constructed by the provided combinator. Zip two sources together, that is to say wait for all the sources to emit one element and combine these elements once into a Tuple2.
Zip two sources together, that is to say wait for all the sources to emit one element and combine these elements once into an output value constructed by the provided combinator. Zip three sources together, that is to say wait for all the sources to emit one element and combine these elements once into a Tuple3. Zip four sources together, that is to say wait for all the sources to emit one element and combine these elements once into a Tuple4.
Zip five sources together, that is to say wait for all the sources to emit one element and combine these elements once into a Tuple5. Zip six sources together, that is to say wait for all the sources to emit one element and combine these elements once into a Tuple6. Zip seven sources together, that is to say wait for all the sources to emit one element and combine these elements once into a Tuple7. Zip eight sources together, that is to say wait for all the sources to emit one element and combine these elements once into a Tuple8.
Zip this Flux with another Publisher source, that is to say wait for both to emit one element and combine these elements once into a Tuple2. Zip this Flux with another Publisher source, that is to say wait for both to emit one element and combine these elements using a combinator BiFunction. The operator will continue doing so until any of the sources completes. Zip elements from this Flux with the content of an Iterable, that is to say combine one element from each, pairwise, into a Tuple2. Zip elements from this Flux with the content of an Iterable, that is to say combine one element from each, pairwise, using the given zipper BiFunction.
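The zip variants above can be sketched as follows (sources and combinator are made up):

```java
import reactor.core.publisher.Flux;
import reactor.util.function.Tuple2;

public class ZipExample {
    public static void main(String[] args) {
        Flux<String> names = Flux.just("alice", "bob", "carol");
        Flux<Integer> ages  = Flux.just(30, 25);   // shorter source: zip stops after 2 pairs

        // Tuple2 variant.
        Flux<Tuple2<String, Integer>> pairs = Flux.zip(names, ages);
        pairs.subscribe(t -> System.out.println(t.getT1() + " is " + t.getT2()));

        // Combinator variant.
        names.zipWith(ages, (n, a) -> n + ":" + a)
             .subscribe(System.out::println);
    }
}
```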
Concatenation is achieved by sequentially subscribing to the first source then waiting for it to complete before subscribing to the next, and so on until the last source completes. Any error interrupts the sequence immediately and is forwarded downstream. Errors do not interrupt the main sequence but are propagated after the rest of the sources have had a chance to be concatenated. Errors do not interrupt the main sequence but are propagated after the current concat backlog if delayUntilEnd is false or after all sources have had a chance to be concatenated if delayUntilEnd is true.
This includes emitting elements from multiple threads. This Flux factory is useful if one wants to adapt some other multi-valued async API and not worry about cancellation and backpressure (which is handled by buffering all signals if the downstream can't keep up). For a multi-threaded capable alternative, see create(Consumer). This Flux factory is useful if one wants to adapt some other single-threaded multi-valued async API and not worry about cancellation and backpressure (which is handled by buffering all signals if the downstream can't keep up).
For a multi-threaded capable alternative, see create(Consumer, FluxSink.OverflowStrategy). If the supplier doesn't generate a new instance however, this operator will effectively behave like from(Publisher). The Throwable is generated by a Supplier, invoked each time there is a subscription and allowing for lazy instantiation. A new iterator will be created for each subscriber.
Keep in mind that a Stream cannot be re-used, which can be problematic in case of multiple subscriptions or re-subscription like with repeat or retry. The Stream is closed automatically by the operator on cancellation, error or completion. The stateSupplier may return null. The stateSupplier may return null but your cleanup stateConsumer will need to handle the null case. If demand is not produced in time, an onError will be signalled with an overflow IllegalStateException detailing the tick that couldn't be emitted.
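The state-based generate factory whose stateSupplier is discussed above might be sketched like this (the Fibonacci generator is just an illustration):

```java
import reactor.core.publisher.Flux;

public class GenerateExample {
    public static void main(String[] args) {
        // The state is threaded through each round, and the generator must emit
        // exactly one signal per invocation via the SynchronousSink.
        Flux<Long> fibonacci = Flux.generate(
                () -> new long[] {0, 1},                 // stateSupplier
                (state, sink) -> {
                    sink.next(state[0]);
                    return new long[] {state[1], state[0] + state[1]};
                });

        fibonacci.take(8).subscribe(System.out::println); // 0 1 1 2 3 5 8 13
    }
}
```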
In normal conditions, the Flux will never complete. Runs on the Schedulers.parallel() Scheduler. Unlike concat, inner sources are subscribed to eagerly. Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source. Unlike concat, inner sources are subscribed to eagerly but at most concurrency sources are subscribed to at the same time.
A new Iterator will be created for each subscriber. Unlike concat, sources are subscribed to eagerly. This variant will delay any error until after the rest of the merge backlog has been processed. This is not a sort, as it doesn't consider the whole of each sequence. Instead, this operator considers only one value from each source and picks the smallest of all these values, then replenishes the slot for that picked source.
This is not a sort(Comparator), as it doesn't consider the whole of each sequence. Unlike concat, the inner publishers are subscribed to eagerly. Unlike merge, their emitted values are merged into the final sequence in subscription order. Unlike concat, the inner publishers are subscribed to eagerly but at most maxConcurrency sources at a time. This variant will delay any error until after the rest of the mergeSequential backlog has been processed.
Unlike concat, sources are subscribed to eagerly. Unlike concat, sources are subscribed to eagerly but at most maxConcurrency sources at a time. The resulting Flux will complete once there are no new Publisher in the source (source has completed) and the last mirrored Publisher has also completed. Eager resource cleanup happens just before the source termination and exceptions raised by the cleanup Consumer may override the terminal event.
For an asynchronous version of the cleanup, with distinct paths for onComplete, onError and cancel terminations, see usingWhen(Publisher, Function, Function, Function, Function). Non-eager cleanup will drop any exception. Whenever the resulting sequence terminates, the relevant Function generates a "cleanup" Publisher that is invoked but doesn't change the content of the main sequence.
Instead it just defers the termination unless it errors, in which case the error suppresses the original termination signal. Individual cleanups can also be associated with main sequence cancellation and error terminations. Note that if the resource supplying Publisher emits more than one resource, the subsequent resources are dropped (Operators.onNextDropped). An empty completion or error without at least one onNext signal triggers a short-circuit of the main sequence with the same terminal signal (no resource is established, no cleanup is invoked). Additionally, the terminal signal is replaced by any error that might have happened in the terminating Publisher. Finally, early cancellations will cancel the resource supplying Publisher. Whenever the resulting sequence terminates, a provided Function generates a "cleanup" Publisher that is invoked but doesn't change the content of the main sequence.
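The usingWhen operator described in the surrounding notes might be sketched as follows, assuming a Reactor version that provides the three-argument usingWhen; the AsyncResource class is a hypothetical stand-in for a resource with a reactive lifecycle (e.g. a reactive connection):

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class UsingWhenExample {
    // Hypothetical resource whose acquisition and release are themselves reactive.
    static class AsyncResource {
        Flux<String> query() { return Flux.just("a", "b"); }
        Mono<Void> release() { return Mono.fromRunnable(() -> System.out.println("released")); }
    }

    public static void main(String[] args) {
        Flux<String> scoped = Flux.usingWhen(
                Mono.fromCallable(AsyncResource::new),  // resource-supplying Publisher
                AsyncResource::query,                   // main sequence derived from the resource
                AsyncResource::release);                // asynchronous cleanup Publisher

        scoped.subscribe(System.out::println);
    }
}
```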
The operator will continue doing so until any of the sources completes. Errors will immediately be forwarded. This "Step-Merge" processing is especially useful in Scatter-Gather scenarios. Note that the Publisher sources from the outer Publisher will accumulate into an exhaustive list before starting the zip operation. Emit a single boolean true if all values of this sequence match the Predicate. The implementation uses short-circuit logic and completes with false if the predicate doesn't match a value.
The implementation uses short-circuit logic and completes with false if any value doesn't match the predicate. Returns that value, or null if the Flux completes empty. In case the Flux errors, the original exception is thrown (wrapped in a RuntimeException if it was a checked exception). Note that each blockFirst will trigger a new subscription. If the provided timeout expires, a RuntimeException is thrown.
Note that each blockLast will trigger a new subscription. Buffers can be created with gaps, as a new buffer will be created every time skip values have been emitted by the source. Each buffer will last until the bufferingTimespan has elapsed, thus emitting the bucket in the resulting Flux. Each buffer will last until the bufferingTimespan has elapsed (also measured on the scheduler), thus emitting the bucket in the resulting Flux.
Note that the element that triggers the predicate to return true (and thus closes a buffer) is included as the last element in the emitted buffer. On completion, if the latest buffer is non-empty and has not been closed it is emitted.
However, such a "partial" buffer isn't emitted in case of onError termination. Note that the buffer into which the element that triggers the predicate to return true (and thus closes a buffer) is included depends on the cutBefore parameter. Each buffer continues aggregating values while the given predicate returns true, and a new buffer is created as soon as the predicate returns false. Note that the element that triggers the predicate to return false (and thus closes a buffer) is NOT included in any emitted buffer. Each buffer will last until the corresponding closing companion Publisher emits, thus releasing the buffer to the resulting Flux.
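The predicate-based bufferUntil and bufferWhile behaviors described above can be sketched as follows (data and predicates are illustrative):

```java
import reactor.core.publisher.Flux;

public class BufferExample {
    public static void main(String[] args) {
        // bufferUntil: the element matching the predicate closes (and is included in) the buffer;
        // a trailing partial buffer is emitted on completion.
        Flux.just(1, 2, 3, 10, 4, 5, 11, 6)
            .bufferUntil(i -> i >= 10)
            .subscribe(b -> System.out.println("until: " + b)); // [1,2,3,10] [4,5,11] [6]

        // bufferWhile: the element that fails the predicate closes the buffer
        // but is NOT included in any emitted buffer.
        Flux.just(1, 2, 10, 3, 4, 11, 5)
            .bufferWhile(i -> i < 10)
            .subscribe(b -> System.out.println("while: " + b)); // [1,2] [3,4] [5]
    }
}
```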
The behavior differs depending on how the open and close companions interleave: when the Open signal strictly does not overlap the Close signal, when the Open signal is strictly more frequent than the Close signal, and when the Open signal is exactly coordinated with the Close signal.