Last week we held the first-ever summit on Collaborative Intelligence. We coined the term to contrast with both centralized data warehousing and the free-for-all, do-it-yourself model. Neither approach has really satisfied our customers' needs. We gathered in Atlanta to help define a middle way. Coming out of the event, we have settled
We hear it all the time: data engineers and analysts complaining about how hard it is to locate existing SQL assets, whether generated by themselves or by a teammate, for reuse in a new project. Inevitably, after combing through your hard drive and company network, or using email or internal chat tools to beg others
As strong advocates for analytic code sharing and reuse, we're often asked why we don't integrate with GitHub or other Git repositories. The simple answer is that Git often isn't a great fit for sharing SQL code, and it's tedious to rewrite the same SQL over and over. This post explains why, and suggests a better alternative.
Centralized analytics moves too slowly to respond to evolving business needs. There's simply no way a centralized data engineering team can understand and build every analytic capability each team in the business needs…nor can it move fast enough. Businesses can't afford to wait weeks or months for the centralized engineering team
Data is essential for organizations, but remember: if you don't keep your data (and analytic output) up to date and relevant, your analytics become a liability rather than an asset. Both unrevised analysis and analysis done from scratch are prone to the risks of orphan analytics and analytics drift. Data is
Coginiti CEO Rick Hall recently sat down with Chad Perry on the Industrial Evolution Podcast. They had a wide-ranging discussion on analytics at the edge of business, where processes are changing rapidly, and business teams need answers quickly. Their discussion covers the shift from a centralized engineering model to one where business users are empowered