Gianluca Carrera


Do not sell your data!

Leading data initiatives at companies like dunnhumby and Reward, where we tracked billions of transactions worth billions of pounds, I’ve observed a common mistake about data monetization: the belief that selling raw data is a great way to create value. This is not just wrong – it’s potentially damaging to your long-term business prospects. Here’s why, and a better approach.

The Raw Data Trap

Many companies sitting on valuable data assets immediately think about selling that data to interested parties. It’s understandable – you have something others want, so why not sell it directly? But this approach has several important flaws:

The Power of Insights

The first step up the value chain is transforming data into insights. This approach offers several advantages:

The Ultimate Goal: Actionable Outcomes

The highest form of data monetization is turning insights into actions. This is where the real value multiplication happens:

Building a Sustainable Data Business

To successfully monetize data through insights and actions:

The Multiplication Effect

Perhaps the most compelling argument for this approach is the multiplication effect. A single dataset, properly leveraged, can power multiple products serving different use cases at different price points. Each step up the value chain – from data to insights to actions – multiplies your potential revenue. This was one of our ‘killer’ apps at dunnhumby: ‘recycling’ data for multiple use cases and customers. Think about it: would you rather sell your customer data once for £X, or build a sustainable business that generates multiples of £X by solving various high-value problems with that same dataset?

The key is understanding that data’s true value lies not in the data itself, but in its application to solve real business problems. Focus on turning your data into solutions that deliver clear business outcomes, and you’ll build a more valuable, sustainable business. What’s your experience with data monetization?
Have you seen companies succeed with raw data sales, or do you agree that insights and actions are the way to go? #DataMonetization #ProductStrategy #DigitalTransformation #Data #Analytics

What Happens When Content Becomes Infinite and Free?

In an era where AI can generate content at unprecedented scale and speed, we face an intriguing paradox: what’s the value of infinite content in a world of finite attention? Let’s decompose this transformation.

When I was leading product at dunnhumby, we processed over 50 billion customer transactions yearly. The volume of data wasn’t the challenge – extracting meaningful insights that drove business value was. Today, we’re seeing a similar pattern with content, but at an even more dramatic scale. The transformation has multiple layers:

At Yahoo!, we focused heavily on content creation and distribution. Today, that strategy would need radical rethinking. The challenge isn’t creating content – it’s ensuring it reaches the right audience at the right time. This mirrors what we experienced at dunnhumby: data abundance without proper curation and relevance quickly becomes noise.

Think about Netflix’s recommendation algorithm – its value isn’t in its library of 17,000+ titles, but in its ability to surface the right content for each viewer. The same principle will apply across all content platforms. But there’s a crucial difference: while Netflix’s content is professionally produced and vetted, we’re entering an era where content can come from anywhere, created by anyone (or anything).

Meta’s experience with user-generated content offers valuable lessons. They’ve already solved many of the challenges we’re facing with AI-generated content. Their platforms process billions of posts daily, using sophisticated systems to detect quality, filter misinformation, and build trust – exactly what we need for AI content. The real difference isn’t in content volume or validation needs – Meta handles those daily. It’s in the incentive structures. While human creators seek attention and engagement, AI systems can be optimized for different objectives. This actually presents an opportunity: we can program AI to optimize for value creation rather than just engagement.
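The gap between optimizing for engagement and optimizing for value can be made concrete with a toy ranking function. Everything below – the items, the scores, the weighting – is invented purely for illustration; a real platform would derive these signals from far richer data.

```python
# Toy comparison: ranking content by raw engagement vs. by delivered value.
# All items, metrics, and weights are illustrative inventions.

items = [
    {"title": "outrage-bait thread", "clicks": 9500, "time_spent_min": 2, "user_rated_useful": 0.1},
    {"title": "deep how-to guide",   "clicks": 1200, "time_spent_min": 14, "user_rated_useful": 0.9},
    {"title": "celebrity gossip",    "clicks": 7000, "time_spent_min": 3, "user_rated_useful": 0.2},
]

def engagement_score(item):
    # Classic attention metric: clicks weighted by dwell time.
    return item["clicks"] * item["time_spent_min"]

def value_score(item):
    # Value metric: attention only counts insofar as users found it useful.
    return engagement_score(item) * item["user_rated_useful"]

by_engagement = sorted(items, key=engagement_score, reverse=True)
by_value = sorted(items, key=value_score, reverse=True)

print([i["title"] for i in by_engagement])
print([i["title"] for i in by_value])
```

The two rankings disagree: the attention metric promotes the gossip and outrage items, while the value metric surfaces the how-to guide – exactly the shift from engagement frameworks to utility frameworks described here.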
At dunnhumby, we learned that aligning incentives with value creation was crucial for sustainable platforms. This shift in incentive structures reshapes how we think about content quality, trust, and distribution:

Quality assessment moves from engagement metrics to value metrics. We need frameworks that measure actual utility to users, not just their attention. At dunnhumby, we learned to distinguish between high-engagement and high-value customer behaviors – the same principle applies here.

Trust mechanisms shift from reactive to proactive. Instead of moderating after publication, we can build trust signals into the content generation process itself. This requires new reputation systems that evaluate not just authenticity, but consistency in value delivery over time.

Distribution economics need fundamental rethinking. When content can be optimized for specific objectives rather than engagement, traditional monetization models need revision. The challenge becomes aligning platform economics with value creation rather than attention capture. I know, easier said than done!

Implications for Product Strategy

This shift has profound implications for product strategy. When I was building data products at dunnhumby, we learned that value wasn’t in data accumulation but in insight generation. So what will happen with content?

The Platform Evolution

Drawing from my experience at PubMatic and Yahoo!, I see three major shifts coming.

First, we’re witnessing a complete inversion of the value chain. Traditional platforms obsessed over content sourcing and distribution – it was all about getting more content to more people. But that’s becoming meaningless in a world of infinite content. Future platforms will instead focus on filtering and matching. Think about it: your value proposition completely flips from “access to content” to “protection from noise.” This fundamentally changes how platforms need to think about their revenue models.
At Yahoo!, we were constantly pushing for more content volume – today, that would be precisely the wrong strategy.

Second, network effects are being completely redefined. Traditionally, these effects were straightforward: more users meant more content, which attracted more users. But in a world of infinite content, that logic breaks down. Future network effects will center on curation quality – the platforms that build the most trusted curation engines will win. At PubMatic, we saw how quality signals became increasingly important in programmatic advertising. The same principle applies here, but at a much larger scale. User trust and engagement become your moats, and community validation becomes a key feature of your platform.

Third, platforms need to become AI-native from the ground up. This isn’t about bolting AI onto existing architectures – it’s about reimagining platforms where content creation, curation, and distribution are one seamless flow. Real-time personalization isn’t a feature, it’s the foundation. Quality signals need to be built into the core architecture. At dunnhumby, we saw how retailers who treated data as a strategic asset outperformed those who saw it as a byproduct. Similarly, platforms that understand this shift in value creation will outperform those still focused on pure content volume.

Looking Ahead

We’re moving from a world where content was king to one where curation reigns supreme. The value is shifting from creation to discovery, from quantity to relevance. This isn’t just another technological shift – it’s fundamentally changing how we think about value creation in the digital economy. What’s your view on this transformation? How are you thinking about value creation in a world of AI-generated abundance? Are you seeing similar patterns in your industry? #AI #DigitalTransformation #ProductStrategy #Content #DigitalMedia

The Unsexy yet Fundamental Part of AI Projects: Data

In my years leading data and product initiatives, I’ve seen firsthand what really drives successful AI projects. While the focus is often on sophisticated algorithms, cutting-edge models, fancy use cases, and cool demos or prototypes, the reality is far less exciting but much more fundamental and critical: data acquisition, preparation, and management typically account for about 80% of the effort in most AI initiatives.

The 80/20 Rule of AI Projects

You can discover the truth by yourself (at a significant cost), or just trust me and read on. Most of the time, resources, and frustration aren’t spent on developing advanced algorithms or fine-tuning neural networks. Instead, they’re spent on:

Only after these foundational elements are in place can the actual work on AI models begin. And even then, the data challenges continue, as models need to be retrained, monitored, and maintained with fresh, high-quality data. And you might need to rework your source data.

Why Data Preparation Dominates AI Projects

There are several reasons why data work is so prominent in AI projects:

1. The Reality of Enterprise Data

Even after decades of investment in data warehouses and data lakes, a big chunk of enterprise data remains fragmented, inconsistent, and poorly documented. In many organizations, even basic questions like “how many customers do we have?” can yield different answers depending on which system you query. It has happened to me personally on more than one occasion – I have spent weeks understanding how many customers we had, starting from the most important thing: the definition of ‘customer’. You’d be surprised how many definitions you can come up with.

2. Quality, Quality, Quality

Machine learning models amplify the problems in your data. Poor data quality means poor model performance – it’s that simple. As the saying goes: garbage in, garbage out. This reality forces AI teams to spend significant time ensuring data quality before any modeling can begin.
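A few lines of basic plausibility checking can catch many of these quality problems before modeling starts. This is only a minimal sketch: the field names, the sample rows, and the population bound are all illustrative assumptions, not a real pipeline.

```python
# Minimal data sanity checks before modeling.
# Field names, sample rows, and the population bound are illustrative assumptions.

transactions = [
    {"customer_id": "C1", "amount": 12.50},
    {"customer_id": "C2", "amount": -3.00},   # suspicious: negative amount
    {"customer_id": None, "amount": 7.80},    # suspicious: missing customer
    {"customer_id": "C3", "amount": 21.00},
]

COUNTRY_POPULATION = 10_000_000  # external bound: people living in the market

def sanity_report(rows, population):
    """Return a list of human-readable data quality issues."""
    issues = []
    customers = {r["customer_id"] for r in rows if r["customer_id"] is not None}
    if len(customers) > population:
        issues.append("more distinct customers than people in the country")
    missing = sum(1 for r in rows if r["customer_id"] is None)
    if missing:
        issues.append(f"{missing} rows with no customer_id")
    negative = sum(1 for r in rows if r["amount"] < 0)
    if negative:
        issues.append(f"{negative} rows with negative amounts")
    return issues

print(sanity_report(transactions, COUNTRY_POPULATION))
```

Checks like the population bound sound trivial, yet they are exactly the kind of test that exposes a dataset claiming more customers than a country has inhabitants.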
You just have to do it, if you care about good outcomes. On one occasion, despite the customer’s reassurance that their data set was as good as gold, after cleaning and structuring their transactional data it turned out there were significantly more customers in the dataset than people living in the country they operated in! How good was the model prediction going to be?

3. Integration Challenges

AI systems require integrating data from multiple systems – often combining structured data (from databases) with unstructured content (like images, text, or voice recordings). Creating cohesive datasets from these diverse sources is complex and time-consuming. All these integrations also need to be maintained. Pipelines break and need to be fixed.

Real-World Impact

During my time at dunnhumby, our retail analytics succeeded because of maniacal, meticulous attention to data preparation. This was the bedrock of our success. All teams invested heavily in creating clean, well-structured data assets that could be used effectively by AI solutions that delivered measurable ROI. This was at the base of our continued success.

How Organizations Can Respond

For executives sponsoring AI initiatives, understanding this reality leads to several strategic imperatives:

Looking Forward

As AI becomes more central to business operations, the organizations that succeed won’t necessarily be those with the most advanced algorithms. Instead, the winners will be those that have built robust data foundations – what I call “data monetization capabilities” – that enable rapid and reliable deployment of AI. The breakthroughs in AI research make headlines, but the quiet, persistent work of building data infrastructure is what truly enables AI success. For executives embarking on AI transformations, embracing this reality early can be the difference between success and failure.

The Evolution of Product Leadership: From Features to Value Creation, and the AI revolution

In my 20+ year career in product, I’ve witnessed a fundamental transformation in how we approach product leadership. Let me share some observations that might resonate with fellow product leaders. The role of product leadership has evolved from “feature factory manager” to “value creation orchestrator.” This shift isn’t just semantic – it represents a profound change in how we think about and measure product success. Let’s look at this evolution through different lenses.

From Output to Outcomes

Traditional product management was obsessed with outputs: features shipped, story points completed, releases made. Modern product leadership focuses on outcomes: customer value delivered, business metrics moved, strategic objectives achieved. At Truvo, this shift helped us grow digital revenues from €50M to €150M ARR in two years. The driver? MySite, a modular WYSIWYG website builder – think of it as a precursor of today’s WIX – that created immediate value for SMEs. In fact, we acquired 20,000 SME customers in just 12 months. Linking every product decision to measurable customer value made all the difference. But how do we actually measure what truly matters?

Measuring What Really Matters

Drawing from John Doerr’s “Measure What Matters,” we’ve learned that OKRs in product need to link directly to value creation. But here’s what they don’t tell you: implementing value-based OKRs requires a cultural transformation. At dunnhumby, we moved from measuring feature adoption to measuring business impact. For example, instead of tracking how many retailers used our forecasting tool, we measured the reduction in waste and out-of-stocks it delivered – a shift that increased product stickiness and doubled user adoption. The key was connecting product metrics to financial outcomes: revenue uplift, cost reduction, or margin improvement.
But perhaps more importantly, we learned that not everything that matters can be measured, and not everything that can be measured matters. For instance, while we could measure every click in our retail media platform, what really mattered was advertiser ROI – a metric that required close collaboration with customers to define and track properly.

This evolution in measurement needs to flow through the entire organization. Product teams should understand how their daily decisions impact business metrics, engineers should see how their technical choices affect customer value, and stakeholders should evaluate success through outcome-based metrics rather than output-based ones. It’s not just about changing metrics – it requires a fundamental shift in how teams think about success. What capabilities does this transformation require from product leaders?

The Modern Product Leader’s Toolkit

Today’s product leader needs three core capabilities:

How does AI reshape this value creation equation?

The AI Impact

AI isn’t just another technology wave. It’s fundamentally reshaping how we think about value creation in products. The challenge has inverted: from struggling to build what customers want, to choosing which of the infinite possibilities will drive the most value. During my time at dunnhumby, we faced this daily: every process could be automated, every decision augmented, every experience personalized. But successful AI initiatives weren’t determined by technical sophistication. Instead, they were defined by three critical factors:

This shift emphasizes a crucial evolution in product leadership: the ability to navigate through endless technical possibilities to identify true value creators. It’s no longer about building AI capabilities – it’s about orchestrating them into coherent value streams that transform customer businesses.
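Returning to the advertiser-ROI example above, once a definition of incremental sales has been agreed with the customer, the outcome metric itself is simple arithmetic. The numbers and the incrementality share below are invented for illustration; in practice, agreeing what counts as “incremental” is the hard, collaborative part.

```python
# Outcome-based campaign metrics (all numbers illustrative).
# The hard part in practice is agreeing how incremental sales are measured.

ad_spend = 100_000.0            # campaign cost
attributed_sales = 450_000.0    # sales touched by the campaign
incrementality = 0.3            # share of those sales judged truly incremental

incremental_sales = attributed_sales * incrementality
roas = attributed_sales / ad_spend                 # common output-style metric
roi = (incremental_sales - ad_spend) / ad_spend    # outcome-style metric

print(f"ROAS: {roas:.2f}x, incremental ROI: {roi:.0%}")
```

Note how the two metrics tell different stories: a 4.5x ROAS looks impressive, while the incrementality-adjusted ROI is a more honest 35% – the difference between measuring activity and measuring value.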
The most successful product leaders today aren’t those who understand AI best, but those who excel at identifying where AI intersects with maximum customer value and operational reality. What’s really holding organizations back from embracing this evolution?

Cultural Transformation

Perhaps the biggest challenge isn’t technical – it’s cultural. Success requires:

The most successful product organizations I’ve led share one common trait: they’ve moved beyond the feature factory mindset to embrace value creation as their north star. Looking ahead, I believe the next frontier in product leadership will be about orchestrating value creation across increasingly complex ecosystems of products, services, and experiences. The winners will be those who can navigate this complexity while staying laser-focused on customer value creation. What’s your experience with this evolution? How is your organization adapting to this new paradigm? Let’s continue the conversation in the comments. #ProductLeadership #Innovation #DigitalTransformation #AI #CustomerValue

Data Product Management: Features Don’t Drive Value. Insights Do.

At dunnhumby, while building Walmart Luminate, I had an “aha” moment that changed how I think about data products: we were spending too much time discussing features, and not enough talking about insights. This is a common trap: I’ve seen countless organizations treat data products like traditional ones. It simply doesn’t work.

Here’s why: traditional products are about features enabling outcomes. A CRM helps manage customers, an accounting package manages finances, a word processor helps create documents. The value chain is clear and linear. Data products flip this model on its head. Their features and outcomes are essentially the insights they generate and the actions they drive. When we built Walmart Luminate, success wasn’t about adding more features – it was about generating insights that drove better decisions and measurable business outcomes.

A Different Kind of Product

Let me give you a real example. At Yahoo!, our advertising platform started as a simple marketplace for ad space. But the real value emerged when we started layering in data capabilities – audience insights, performance analytics, optimization algorithms. The core product remained the same, but its value multiplied exponentially. This highlights a fundamental truth about data products: their value isn’t linear. At dunnhumby, combining different data sets often created insights worth far more than the sum of their parts. A customer segment analysis combined with promotional data might reveal opportunities nobody had spotted before. But here’s the catch – data products are also more fragile. One unreliable data point, one privacy breach, one quality issue, and you can lose customer trust forever: trust isn’t a feature, it’s the foundation everything else builds on.

Different Skills, Different Mindset

Think about what makes a great traditional product manager: they obsess over user experience, feature prioritization, market fit. All crucial skills. But for data products?
That’s just the starting point. I learned this building teams over time. The best data product managers weren’t necessarily the ones with the strongest traditional product background. They were the ones who could bridge worlds – understanding both the retail business and the possibilities of advanced analytics. They could translate between data science solutions and real business problems.

You also need to think differently about infrastructure. In traditional products, infrastructure supports your features. In data products, your infrastructure choices fundamentally shape what’s possible. Get your data architecture wrong early on, and you’ll pay for it forever. Trust me on this one – I’ve seen it happen more times than I care to remember. This is why I agonized so much about data structures at dunnhumby: where do the data sit, can they travel easily, how many times do they have to travel, what do the data schemas look like, how easily can you access the data, how do they integrate upstream, what’s the refresh rate? These aren’t just technical decisions – they fundamentally shape what products you can build and how much value you can deliver. Once we had to completely change the product because the refresh rate wasn’t what it was supposed to be! We were seconds away from throwing the whole thing out of the window when we had a breakthrough and pivoted toward a different data product. Not different features – a whole different product!

The Evolution: From Internal Tool to Product

Let me walk you through how this typically plays out. I’ve seen this evolution multiple times, and it usually follows three stages.

Stage 1: Internal Focus

Here’s a story from my early dunnhumby days. A retailer came to us wanting to optimize their private label portfolio. Simple request, right? But it perfectly illustrates the first stage of data product evolution. You’re not building for external customers yet. You’re using data to enhance internal operations.
But don’t underestimate this phase – it’s where you learn what makes data valuable in your specific context. The product manager’s role here looks very different. You’re not shipping features; you’re:

The key? Success isn’t just about generating insights – it’s about building the organizational muscle to act on them. I’ve seen plenty of great insights die in PowerPoint decks because organizations weren’t ready to use them.

Stage 2: Adding Intelligence

This is where it gets interesting. You’re taking existing products and enhancing them with data capabilities. Think of it as adding a brain to your existing offerings. At Yahoo!, we transformed our basic ad platform into a sophisticated performance marketing solution by progressively adding data capabilities. Each new data layer – audience insights, performance analytics, optimization algorithms – multiplied the value of the core product. But here’s the trap I see most teams fall into: adding data features just because they can. Every enhancement needs to solve a real problem or create meaningful value. Products will fail if they become over-engineered data platforms rather than solutions to customer problems. If something doesn’t generate value, it isn’t needed, and you shouldn’t put it in. Simple as that.

Stage 3: Data as the Product

This is where data product management truly comes into its own. You’re not enhancing existing products anymore – you’re creating standalone data products. Exciting? Yes. But this is also where I’ve seen many organizations stumble. Building Walmart Luminate was a masterclass in this stage. We weren’t just packaging data – we were creating a suite of products that fundamentally changed how retailers and CPG manufacturers worked together, at the biggest retailer in the world! Every insight was worth tens of millions of dollars. The challenges here are unique:

The Hard Truth

Want to know the biggest mistake I see?
Organizations jumping straight to Stage 3 before mastering Stages 1 and 2. It’s tempting – the allure of data monetization is strong. But it’s like trying to run before you can walk. I’ve learned that successful data products aren’t built – they evolve. You start by proving value internally, then enhance existing products, and finally create standalone offerings. Skip these steps at your peril.

Looking Ahead

Here’s what I know for sure: the future of product management

Platform Economics: Three Waves of Value Creation

Over twenty years ago, at Yahoo!, I witnessed the birth of digital platforms. We thought connecting advertisers with publishers was revolutionary. It was just the beginning. The evolution of platforms tells an interesting story about value creation: from simple matchmaking to data-driven intelligence, to AI-powered orchestration. Each wave fundamentally changed how platforms create and capture value. Think of it this way: I haven’t just witnessed these waves – I’ve helped shape them firsthand. At Yahoo!, processing billions in ad transactions. At PubMatic, connecting thousands of publishers with advertisers. At dunnhumby, orchestrating retail media networks and some of the world’s biggest retail data platforms. Let’s explore what these patterns tell us about the future of platforms.

Wave 1: The Matchmaking Era

The first wave focused on reducing friction. Two fundamental benefits drove platform adoption: search cost reduction and transaction cost reduction. Finding matches became easier. Executing transactions became simpler. Platforms made both cheaper. At Yahoo!, we built one of the first scaled advertising platforms. The mechanics were straightforward: match advertisers with available ad inventory. More publishers attracted more advertisers. More advertisers attracted more publishers. A classic network effect. Three elements defined this era:

Simple Network Effects

Growth was multiplicative, not additive. Take PubMatic as an example: each new premium publisher might attract 50 new advertisers, and each new advertiser might increase publisher yield by 0.5%. Every new participant multiplied value. More publishers, more advertisers, more spend… the flywheel spun faster. That’s platform magic.

Transaction-Based Value

Revenue came from enabling transactions. Take rates were simple – a share of transaction value. At Yahoo!, this meant processing billions in annual advertising revenues – 20 years ago!

Rule-Based Intelligence

Matching was manual or rule-based.
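Rule-based matching of this kind was essentially arithmetic: normalize every bid, whatever its pricing model, to an expected revenue per thousand impressions and take the highest. A toy sketch of that logic, with all advertisers, rates, and click-through rates invented for illustration:

```python
# Toy rule-based ad matching: normalize different pricing models
# to expected revenue per 1,000 impressions and pick the winner.
# All advertisers, rates, and click-through rates are invented.

bids = [
    {"advertiser": "A", "model": "cpm", "cpm": 2.50},                 # pays per impression
    {"advertiser": "B", "model": "cpc", "cpc": 0.40, "ctr": 0.008},   # pays per click
    {"advertiser": "C", "model": "cpc", "cpc": 1.20, "ctr": 0.002},
]

def expected_rpm(bid):
    """Expected revenue per 1,000 impressions – the 'common currency'."""
    if bid["model"] == "cpm":
        return bid["cpm"]
    # For CPC bids: price per click * expected clicks per impression * 1,000.
    return bid["cpc"] * bid["ctr"] * 1000

winner = max(bids, key=expected_rpm)
print(winner["advertiser"], expected_rpm(winner))
```

Smart arithmetic, nothing more: the rule converts every bid to the same unit and compares numbers. There is no learning from outcomes, which is exactly the limitation the next wave addressed.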
Google’s introduction of eCPM was revolutionary – creating a common currency for different ad formats and pricing models. Yet even this innovation was fundamentally rule-based: smart arithmetic rather than true intelligence. This model created billion-dollar businesses. But it had limitations. Market inefficiencies persisted. Value leaked. Most importantly, platforms weren’t learning from their transactions. The real value was hiding in plain sight: the data those transactions generated.

Wave 2: The Data Advantage

Wave 1 reduced costs. Wave 2 created entirely new value. The shift wasn’t about efficiency – it was about possibility. At dunnhumby, we created and operated some of the world’s biggest and most successful three-sided platforms (Walmart and Tesco, for example): shoppers buying products, retailers maximizing sales, CPG manufacturers selling more through retailers. The magic? Data from 800 million shoppers and hundreds of thousands of products optimizing all connections. Each transaction made the platform smarter about consumer behavior. Retailers optimized their offering. CPG manufacturers targeted their promotions better.

From Transactions to Intelligence

The game changed. Purchase data predicted future behavior. Retailers learned what to stock. CPG manufacturers discovered what would sell. Shoppers received offers they wanted. When I was at dunnhumby, this approach helped reignite revenue growth to whole new levels. The value wasn’t in the transaction – it was in the intelligence.

Network Effects 2.0

Data created multi-dimensional network effects. More shoppers generated more data. Better data helped retailers optimize. Optimization attracted CPG investment. Investment improved shopper experiences. Better experiences attracted more shoppers. Each party made the platform more valuable for the others.

The Platform Intelligence Layer

Platforms became intelligence engines, answering complex questions:

A new form of lock-in emerged: intelligence effects.
The more each party used the platform, the more valuable it became for everyone. Once you start, you can’t stop. But even this sophisticated learning system was just a preview of what was coming.

Wave 3: The AI Amplification

Wave 1 reduced costs. Wave 2 created value from existing interactions. Wave 3 is creating capabilities that never existed before. Previous waves created or optimized connections. AI platforms do something different: they create new capabilities on demand.

From Optimization to Creation

ChatGPT and GitHub Copilot showcase this shift. They don’t just optimize workflows – they create new ones. At dunnhumby, we built specific analytics features. Today’s AI platforms create capabilities on the fly, adapting to each unique request.

The New Network Effect

Every interaction expands platform capabilities. One user teaches the agent a new task – and that capability becomes available to everyone. Platforms grow in capability, not just size.

Value Creation at Scale

Traditional platforms replicated services to more users. AI platforms expand capabilities for all users. At Yahoo!, scaling meant more advertisers using the same features. Today’s platforms create new features as they scale.

The Platform Becomes the Product

Here’s the fundamental shift: platforms are becoming active participants in value creation. GitHub Copilot isn’t just connecting developers with code – it’s creating code. The platform transforms from repository to development partner. Take Adobe Creative Cloud. Traditional platforms offered tools and assets – templates, fonts, stock images. Today’s AI-enabled platforms generate images, suggest improvements, automate editing tasks, and create campaign variations. The platform becomes an intelligent creative collaborator. This changes everything about platform economics:

We’re just beginning to understand the implications. But one thing is clear: AI isn’t just another feature for platforms – it’s a fundamental reimagining of what a platform can be.
If you think AI is just an efficiency-generating feature, you are missing the forest for the trees. And that will be an expensive mistake. How is your organization thinking about these platform evolution waves? Still optimizing transactions, or ready to create new capabilities? What’s your forest? Share your thoughts in the comments. #PlatformEconomics #AI #Innovation #DigitalTransformation #ProductStrategy

SaaS vs Agentic AI: Rethinking the Future of Software

During a recent lesson about AI with Massimo Chiriatti at Università Cattolica, a smart student asked about SaaS and agentic AI – specifically about the potential impact on SaaS business models. I wasn’t completely satisfied with my answer to such a profound question, so I decided to think deeper and formulate a more complete response.

The Nature of the Shift

The transition from SaaS to agentic AI represents more than just another technology shift. While SaaS transformed how we distribute and consume software, agentic AI is transforming how we create it. Let’s understand why I think this transformation might fundamentally change the software industry as we know it.

The evolution from on-premise to SaaS was primarily about distribution and consumption. Software remained fundamentally the same – it just lived in the cloud instead of on your desktop. But the shift to agentic AI is different. It’s not about where software lives, but about how it comes to life. Let’s decompose this transformation. SaaS products are built around specific use cases: Salesforce for CRM, Workday for HR, ServiceNow for IT. They’re intent-specific, with predefined features and workflows. Agentic AI fundamentally changes this paradigm. Instead of adapting to the software’s logic, the software adapts to your intent. The same agent can write emails, analyze data, or create presentations – not because it has different modules, but because it understands and adapts to different needs.

At dunnhumby, we spent months building specific analytics features into our customer data platform. Each new capability required careful planning, development, and release management. An agent, instead, can create new capabilities on the fly, adapting to specific user needs as they arise. This shift from fixed functionality to fluid adaptation represents a fundamental change in how software creates value.

The Business Model Challenge

Let’s look at how this shift impacts business models.
The SaaS model thrived on predictability: recurring revenues, clear value metrics, usage-based pricing. Software as a service meant exactly that – a defined service, delivered consistently, at scale. But what happens when your software isn’t a fixed product, but a fluid conversation? At Yahoo!, we built an advertising platform that served hundreds of thousands of advertisers. The value came from standardization – same features, same workflows, same interfaces for everyone. Agentic AI inverts this model. Instead of one solution serving millions, we might see millions of unique solutions, each adapted to individual needs.

This creates interesting challenges in value capture. How do you price an agent that can theoretically do anything? The traditional SaaS metrics – users, features, usage – might not apply. If an agent can (eventually) replace multiple SaaS products, does it command multiple subscriptions? More importantly, who captures this value, and how?

The Disruption Pattern

The way SaaS disrupted on-premise software offers interesting insights into how agentic AI might disrupt SaaS. But the pattern looks quite different. SaaS won by making software more accessible and cost-effective while maintaining the same fundamental value proposition. Agentic AI, instead, is changing the value proposition itself. Let’s analyze where this disruption is likely to start. The first wave isn’t targeting core business processes – it’s focusing on the knowledge work around them. Every SaaS tool requires users to translate their intent into system actions. At dunnhumby, our data scientists spent considerable time translating business questions into SQL queries. At some point, agents will likely eliminate this translation layer entirely – they’ll understand the intent and execute it directly. The integration layer presents another interesting angle. The SaaS ecosystem created data and workflow silos, spawning a multi-billion-dollar integration market.
But agents don’t see silos – they see capabilities. If your agent can seamlessly work across systems, the need for traditional integration diminishes significantly.

The Industry Impact

Not all SaaS businesses are equally vulnerable to this disruption. The pattern isn’t random – it follows lines of resistance and opportunity. The most vulnerable? Tools where users spend significant time translating thoughts into actions. Think about report builders, data visualization tools, basic analytics platforms. At Pubmatic, our business intelligence stack was essentially a complex translation layer between business questions and data answers. That’s natural territory for agents. Then there’s the long tail of niche SaaS solutions. The software industry’s beauty is that (thanks to scale and standardization) it can support highly specialized tools – from email signature management to invoice processing – each solving a specific problem well. But when one agent can handle all these tasks, the economics of specialized SaaS become harder to justify. Of course, there will be high initial training costs which might slow this down, or prevent it in some areas. Some categories will prove more resilient. Mission-critical platforms with deep workflow embedding, compliance-heavy solutions where predictability matters more than flexibility, real-time operational systems where microseconds count – these won’t transition quickly. Their value isn’t in the interface – it’s in the reliability, compliance, and ecosystem they’ve built.

The Coexistence Question

The emergence of cloud computing didn’t kill on-premise software entirely – it created a hybrid reality that still exists today. The transition to agentic AI might follow a similar pattern, but with a very significant difference: while cloud was about location, agents are about creation and interaction. Let’s understand what this means for the future.
Most enterprises today run a complex mix of on-premise and cloud solutions, each chosen based on specific requirements around control, compliance, and capability. Tomorrow’s technology landscape will likely see a similar blend of SaaS and agentic AI, but I think the decision factors will be different: instead of choosing based on deployment models, organizations will choose based on interaction models.

Value Creation in the Agentic Era

I think it is worth giving some thought to the possible evolution of value creation in software. SaaS 1.0 created value through standardized features – the same solution for everyone, delivered efficiently at scale. SaaS 2.0 added value through ecosystems and integrations – platforms that could connect and extend functionality across solutions. The agentic era will likely introduce a fundamentally different value creation model. At dunnhumby, our products created

Why Net Recurring Revenue (NRR) is a fundamental product KPI

One of the important uses of data is product diagnostics. When data people talk about data, they often talk about data monetization, and get all excited about the many ways data can be transformed to extract value. Usually it is about significant quantities of data (either in size or velocity), and how they can be transformed to deliver insights or support processes, amongst the many things you could do. Less exciting to data people, but no less important, is a lower-frequency type of data: product-related financial metrics. They help assess product health along different dimensions. I am often puzzled by how many brilliant data people focus on building products that extract value from data, while missing the use of data to assess that very product’s health. A measure that helps in understanding a product’s health is Net Recurring Revenue – NRR. The NRR formula is simple:

NRR = (start RR + US/CS RR – Churn RR – Down RR) / start RR

Start RR is the recurring revenue at the beginning of the month
US/CS RR is the amount of upsells and cross-sells on existing customers during the month (or quarter, or year)
Churn RR is the recurring revenue lost to customers who churned during the month
Down RR is the amount of revenue lost to product downgrades during the month

Put simply, NRR is a snapshot of how well the product is performing at retaining and upselling customers, without giving any consideration to new customer acquisition. NRR greater than 100% constitutes a more solid base upon which to add new customers. The reason is simple: you make more from existing customers at the end of the month than you did at the beginning, hence retained customers more than compensate for any churn you might have. On the flip side, NRR smaller than 100% can be a problem in the medium to long term. And the farther away from 100, the bigger the problem. This is because a share of new customers will go to compensate for the revenues lost from existing customers during the month.
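As a quick illustration, the formula can be computed directly. A minimal sketch, with made-up figures (the function name and numbers are mine, not a standard library):

```python
def nrr(start_rr, upsell_cross_sell_rr, churn_rr, downgrade_rr):
    """Net Recurring Revenue ratio for a period (month, quarter, or year)."""
    return (start_rr + upsell_cross_sell_rr - churn_rr - downgrade_rr) / start_rr

# Hypothetical figures: £100k starting RR, £12k of upsells/cross-sells,
# £5k churned, £2k lost to downgrades.
print(f"{nrr(100_000, 12_000, 5_000, 2_000):.0%}")  # → 105%
```

With no upsells and £10k of churn on the same base, the same function returns 0.90, i.e. the 90% scenario discussed below.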
If you have an NRR of 90%, and revenues from new clients growing at 15%, you are for all intents and purposes ‘wasting’ 2/3 of your acquisition to compensate for the revenue lost from existing customers. This problem grows as the customer base grows. As many might have experienced, it’s easier to grow new customers fast when the customer base is relatively small (and the corresponding loss on existing customers small for NRR < 100%), while it becomes more difficult when the customer base is significant (and the loss on existing customers significantly bigger for NRR < 100%). NRR > 100% can be considered the ‘tailwinds’ of growth: even with limited new customer acquisition, the product still grows. On the contrary, NRR < 100% is the ‘headwind’ of growth, as a share of new customers simply plugs a gap coming from existing ones. If you have NRR < 100% you clearly have some work to do on a number of fronts, but you would usually start from the product. Is product value aligned to customer expectations? How ‘good’ is the product on several dimensions? What is the customer feedback? What are the driving reasons for customer loss or downgrade? You could also look at pricing, and check if you need different product tiers to mitigate customer loss. And of course, you should try to understand why some customers have downgraded. Downgrades are less concerning than churn, but they nevertheless need to be understood properly to make sure they are not an early sign of churn. A cohort analysis might help here.
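The ‘wasted acquisition’ arithmetic above can be sketched as follows (the function name is illustrative, not standard terminology):

```python
def wasted_share(nrr_pct: float, new_growth_pct: float) -> float:
    """Fraction of new-customer revenue that only backfills losses on the
    existing base when NRR is below 100%."""
    loss = 1.0 - nrr_pct / 100.0       # e.g. NRR 90% → 10% of the base lost
    return loss / (new_growth_pct / 100.0)

# NRR of 90% with new-client revenue growing at 15% of the base:
print(f"{wasted_share(90, 15):.0%} of acquisition plugs the gap")  # → 67%
```

The same two inputs make the scale effect visible: the ratio stays 2/3 regardless of absolute base size, but the absolute revenue being backfilled grows with the base.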

What’s holding up data monetization at many companies

Despite the enthusiasm around data monetization, there are shockingly few companies, at least in Europe, and especially in Italy, that are really leveraging it in a very significant way. This is a massive lost opportunity. If you put aside regulatory frameworks like GDPR – law should never be an excuse, or considered a limiting factor, as it is simply a framework, a constraint, that needs to be abided by – I believe there are three big barriers, or obstacles, that are limiting the value you can extract from data, broadly. You can cross out awareness. Everyone is, or seems to be, or claims to be, aware of the potential embedded in data. There is no lack of articles, stories, statements about the value, the centrality, the potential of data (“Data is the new oil” has been the mantra for a while). Even more so if you put it in the context of AI, where data is a foundational layer – GIGO applies to AI as well: if garbage data is the input, then garbage answers are the output, no matter the deep learning or transformer technology behind it. If it is not awareness, what is it? I believe it boils down to three words: resources, culture, commitment.

RESOURCES

Resources encompass a broad spectrum: from the data itself (often less abundant and of lower quality than believed), to the data-savvy people that can make good use of it – from data engineers (how to deal with data), to data product managers (what to do with data), to people that can sell it (data-savvy commercial people), or that can protect it (security comes to mind, but lawyers as well). The demand for these resources far outstrips supply. Moreover, technical prowess is only half of the equation, and needs to be coupled with the ability to understand the value hidden in data, how to surface it, how to deliver it, and how to extract it in a sustainable and law-abiding way. And how to use data to delight and create significant value for the customer. I find this skill even rarer.
Beyond human resources, let’s not forget assets, technology, capex.

CULTURE

Culture is a key ingredient of leveraging data to create value. Data culture needs to be pervasive across the organization to extract the most value from data. Data monetization cannot be relegated to a specialized group of data-savvy people; it needs to be democratized across the organization, at every level. Each organisation makes, every day, hundreds of decisions, small or big, that can benefit from data. The move from gut-driven decisions, or heuristics, to evidence-based and data-driven decisions is a profound cultural change that needs to be supported by people empowerment, and the willingness to experiment and fail (and learn), at every level of the organization. This is a significant change that makes many organizations extremely uncomfortable. It takes a lot of courage, but it is necessary to reap the benefits of data. Data needs to be turned into action to generate value, and this has to happen at every level in the organization.

COMMITMENT

Commitment signals and spearheads a profound transformation at many organizational levels. Commitment has to be unwavering, but it can’t be blind; it can’t simply be an act of faith. Commitment has to stand on the shoulders of solid business cases showing a positive balance of benefits over costs. This requires a delicate balance of data expertise, commercial acumen, and industry know-how that will often require collaboration across functions and expertise. Not all data initiatives can and will be value accretive, and from there comes the need for solid business cases that support unwavering commitment. The importance of explaining the foundation of such commitment cannot be overstated in the context of supporting a significant and far-reaching cultural change. While the potential for data monetization is vast, bringing it to life requires more than just recognition of the value of data.
It requires a holistic transformation within organizations, encompassing resources, culture, and commitment. Companies must not only invest in the right mix of tangible and intangible assets but also create a data-centric culture that permeates every level of the organization. This culture should champion data-driven decision-making and encourage a mindset of continuous learning and adaptation. Furthermore, unwavering commitment, underpinned by robust business cases, is essential to steer this transformative journey. Overcoming these barriers is not merely a strategic move but a fundamental necessity for organizations aiming to thrive in the data-rich era ahead. By embracing these principles, any company can unlock the true value of their data, turning this untapped potential into an engine of success and innovation.

Understanding Digital Advertising Platforms and Data’s Role

In a recent session with Luiss Business School master’s students focused on performance marketing and platforms, I had the valuable opportunity to streamline and share my insights and experiences with digital advertising platforms. This article is aimed at shedding light on the intricate world of digital advertising platforms and the pivotal role of data in this realm. The essence of digital advertising platforms lies in their ability to connect advertisers with potential customers effectively. These platforms harness the power of data to target audiences with precision, ensuring that marketing messages reach the most relevant viewers. This precision in targeting is what sets digital advertising apart, allowing for optimized digital marketing strategies and increased return on investment.

An advertising platform is an exchange platform with a foundational data-driven insights wrap.

Let’s decompose it. An advertising platform is not an information platform – information is not exchanged for money. Information platforms are often monetized on a subscription basis, while an advertising platform is not (I am excluding tenancy, which is different in form, however, not in substance). What is charged on an advertising platform is the appearance of attention-grabbing advertising in exchange for money. The transaction is finalized (as well as charged) when the ad is placed in front of a user: an advertising platform has, amongst the many types of platforms, the great advantage that transactions cannot be conducted off-platform, which means there is no value leakage. Despite not being an information platform, information (data-driven insights) plays a vital role in the proper functioning of an advertising platform. When an advertiser runs a campaign, it needs a set of data-driven insights as well as information to properly plan the campaign and allocate the right budget to achieve its goals. Not only that.
As the campaign runs, the advertiser will greatly benefit from data-driven insights about its performance, allowing it to course-correct if needed. But all this information, for as valuable as it is, is not charged to the customer, as it is necessary to the smooth and effective operation of the platform. The way this information generates value is indirect: by enhancing the platform’s ability to generate the core value for the advertiser. The more valuable and informative this information is, the more successful the platform will be. For example: if the data-driven insights wrap provides a clear and easy way to measure return on investment, then the customer will more easily invest more budget into the platform, as this information will allow it to adjust the pricing such that the return on investment meets its goals. If the forecasting of exposure, campaign metrics, and budget is accurate, the buyer will more confidently allocate the full budget (everybody knows how agencies hate to underspend!). Over time digital advertising platforms have moved toward ‘actioning’ such insights in an automated way. The reason is clear: value creation grows moving from data, to insights, to action. All the platform bells and whistles like automated bidding, automated budgeting, frequency capping, etc. are a way to increase value generation by turning insights into actions, allowing a more seamless use of the platform, and with that capturing bigger budgets. A/B testing, for example, is a data-driven-insights feature aimed at maximizing platform spend by maximizing client return on investment. Between two creatives, the one with the highest KPI will be promoted to run on the platform with the higher share of exposure. These insights are actioned autonomously by the platform, in another example of ‘actioned insights’.
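A minimal sketch of that promotion logic, assuming hypothetical function names and KPI figures (this is not any real platform’s API; real systems also weigh statistical significance before promoting a winner):

```python
import random

def exposure_shares(kpi_a: float, kpi_b: float, winner_share: float = 0.75):
    """After an A/B test, give the creative with the higher KPI
    (e.g. click-through rate) the larger share of exposure."""
    if kpi_a >= kpi_b:
        return (winner_share, 1 - winner_share)
    return (1 - winner_share, winner_share)

def pick_creative(shares) -> str:
    # Serve creative "A" or "B" proportionally to its exposure share.
    return "A" if random.random() < shares[0] else "B"

# Creative A's 3.1% CTR beats B's 2.4%, so A is promoted to 75% of exposure.
print(exposure_shares(0.031, 0.024))  # → (0.75, 0.25)
```

The point of the sketch is the ‘actioned insight’: the comparison (insight) feeds directly into the serving decision (action) with no human in the loop.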
If you look in a different direction, for example automated creative review or generation, these are ancillary services aimed at reducing platform usage friction and cost. The trigger of such initiatives on the platform operator’s front is of course a reduction in platform operating costs (creative review costs), or a reduction in campaign overhead for the agency / advertiser (automated creative generation), which results in more budget available for the core platform activity: the exchange of money for attention. Whatever way you look at it, you realize that data is a wrap around the exchange platform whose purpose is to increase its effectiveness and its capacity to generate value for customers, attracting higher budgets. What are your thoughts on the evolving landscape of digital advertising platforms? How do you see the role of data in shaping the future of advertising?
