Remember the times when software companies shipped software to users, and users used the software offline? No one on the business side even dreamed of knowing what each user was doing with the software at any moment in real time, or what data users stored. Today it looks like this is a MUST-have for any software, and no business is possible without it.
> No one on the business side even dreamed of knowing what each user was doing with the software at any moment in real time
Surely you jest. Not knowing what your users were doing with the software was the number one problem in product development in the '90s. The only solution was user interviews and real-time, in-person observation, which was highly expensive and drove up the cost of software considerably.
> Today it looks like this is a MUST-have for any software, and no business is possible without it
Because more and more software is delivered over the Internet through low-touch sales methods. More often than not, software producers don't even know who their users are - how do you conduct user interviews, expensive as they may be, if you don't even know who your users are?
Trying to run a modern business without a modern analytics stack is functionally equivalent to driving with a blindfold on.
Yes, information can be better than no information, but user analytics is not the holy grail it pretends to be and can be worse than no information at all.
I led a team at a BigCo for some years that had access to mountains of highly detailed user data, which enabled product, eng, and UX folks to ask very specific questions about how successful various changes were in terms of our business and product goals.
An outrageous amount of time was invested in this process: designing, describing, and negotiating experiments, then collecting data over days or weeks, and in almost every case the data was basically inconclusive and every question would spawn more questions. Enormous time was spent digging and sifting through this data, and even people with the best of intentions would almost always disagree about what it was telling us.
Eventually, a quant showed up and explained that exactly zero of the experiments we had ever run were statistically significant, and that if we wanted to achieve significance, we would need to run a 1% experiment on our many millions of users for more than 30 days.
The organization collectively shrugged at this information and went right back to the time-worn rituals of success theater.
What I learned from that experience was that developing great products requires experience, skill, intuition, grit, and visibility, but that visibility might be the least critical of those requirements, not the most.
Put another way, there is almost always a simpler, clearer measure of success that doesn't require spying on everyone all the time and then sifting through information that is actually worse than useless.
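For anyone curious how a number like "a 1% experiment for more than 30 days" falls out of the math, here is a back-of-the-envelope power calculation using the standard two-proportion z-test approximation. The baseline conversion rate, detectable lift, and traffic figures below are my own made-up assumptions, not numbers from the post:

```python
from statistics import NormalDist

def samples_per_arm(p_control, p_variant, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_power = NormalDist().inv_cdf(power)           # desired power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return (z_alpha + z_power) ** 2 * variance / (p_control - p_variant) ** 2

# All numbers below are hypothetical assumptions for illustration only.
baseline = 0.020           # assumed 2% baseline conversion rate
variant = baseline * 1.03  # assumed 3% relative lift we want to detect
daily_users = 5_000_000    # assumed total daily active users
bucket = 0.01              # the "1% experiment" mentioned above

n = samples_per_arm(baseline, variant)
days = 2 * n / (daily_users * bucket)  # 1% bucket split evenly across both arms
print(f"~{n:,.0f} users per arm -> roughly {days:.0f} days at 1% of traffic")
```

With those assumptions you need on the order of a million users per arm, and even with millions of daily users a 1% bucket takes over a month to collect them. Shrink the lift you are trying to detect and the required sample grows with the square of the difference.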
Geez, yeah. I was working on a mid-sized company's ecommerce site, and even with the number of users hitting our site daily, in order to reach any kind of statistical significance that would assure us we weren't wasting our time, we had to wait about 21 days. That limited our cycle time.
My boss ended up telling me that I was moving too slowly and that I needed to move on with the experiments. I was surprised, since he was smart enough to understand statistical significance, but here he turned a blind eye to it.
A/B testing is only the way to go when you can gather millions of samples. For websites, that is much more likely at big companies.
No need to get data at that level of granularity. "How many users do we have?" "Who is our biggest user?" Simple questions like those are simple to answer against a centralized database, and much more difficult to answer when the data is spread across distributed databases.
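To make that concrete, against one centralized store each of those questions is a single aggregate query. The sketch below uses an in-memory SQLite table with made-up table names, columns, and rows; once the same events are sharded across services or regions, you need a scatter-gather or a warehouse load just to get these two numbers:

```python
import sqlite3

# Toy stand-in for the "centralized database"; schema and rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("alice", "login"), ("alice", "purchase"), ("bob", "login"),
     ("carol", "login"), ("alice", "purchase")],
)

# "How many users do we have?"
(user_count,) = conn.execute(
    "SELECT COUNT(DISTINCT user_id) FROM events"
).fetchone()

# "Who is our biggest user?" (here: the user with the most recorded events)
biggest_user, event_count = conn.execute(
    "SELECT user_id, COUNT(*) AS n FROM events "
    "GROUP BY user_id ORDER BY n DESC LIMIT 1"
).fetchone()

print(user_count, biggest_user, event_count)  # 3 alice 3
```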