Computer Scientists Invent an Efficient New Way to Count
By making use of randomness, a team has created a simple algorithm for estimating large numbers of distinct objects in a stream of data.
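The article doesn't name the algorithm here, but this description matches the CVM distinct-elements estimator (keep a small random sample of items seen so far, halving the sampling probability whenever the buffer fills, then scale the final sample count back up). A minimal Python sketch under that assumption — `cvm_estimate` and its parameters are illustrative names, not from the article:

```python
import random

def cvm_estimate(stream, buffer_size, rng=random):
    """Estimate the number of distinct items in `stream` using only
    O(buffer_size) memory (a sketch of the CVM-style estimator)."""
    p = 1.0       # current sampling probability
    kept = set()  # sampled distinct items; never more than buffer_size
    for item in stream:
        kept.discard(item)            # forget any earlier decision for this item
        if rng.random() < p:
            kept.add(item)            # re-sample it at the current probability
        if len(kept) == buffer_size:  # buffer full: thin it with a coin flip each
            kept = {x for x in kept if rng.random() < 0.5}
            p /= 2
            if len(kept) == buffer_size:
                raise RuntimeError("thinning failed; buffer_size too small")
    return len(kept) / p              # scale the surviving sample back up

# Example: 10,000 events drawn from 500 distinct values,
# estimated with a buffer of only 100 slots.
stream = [i % 500 for i in range(10_000)]
print(cvm_estimate(stream, 100))
```

The `discard`-then-maybe-re-add step is what makes duplicates harmless: each item's fate depends only on the most recent coin flip for it, so repeats don't inflate the count.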
So this is probably useful for the statistics collectors in DBMSes that feed the query planner. Any other use cases jump to mind?
I thought it sounded kind of similar to statistical CPU profiling, where you're sampling the program counter of a given thread to see which functions actually use the most time. Maybe this idea could help increase the sample rate.