Gianluca Carrera


Embracing Serendipity: The Symbiosis of Personalization and Discovery in Consumer Experiences

In the age of hyper-personalization, it’s easy to fixate on the power of precision. As a CPO with over 20 years of experience leveraging platform and consumer data, I’ve seen the tremendous impact of crafting experiences around individual preferences. However, I’ve also come to appreciate the vital role of serendipity. While personalization caters to consumers’ explicit choices, serendipity taps into the joy of unexpected discovery. The best experiences meld the two, amplifying consumer engagement and loyalty.

The Magic of the Unexpected

Humans are curious creatures. We crave novelty, our brains lighting up when we encounter something fresh and unfamiliar. In a world of infinite content and product options, experiences that surprise and delight stand out. A lot. They shake us from the comfort of our bubbles, inviting us to expand our horizons. Consider the breakout hit of Netflix’s “Stranger Things.” The show’s popularity was fueled not just by its compelling plot, but by its seemingly incongruous mashup of genres. It blended science fiction, horror, and coming-of-age dramedy in a way that felt both novel and familiar.

The Risks of Over-Optimization

On the flip side, over-optimizing for individual preferences risks trapping consumers in an echo chamber of their established interests. Think of Spotify’s “Discover Weekly” playlist. While often scarily on-point, it can sometimes feel like a feedback loop, serving up endless variations of the listener’s existing playlists. The thrill of stumbling upon an entirely new artist or sound is nowhere to be found. This danger extends far beyond music. In the world of e-commerce, an obsessive focus on surfacing products similar to a shopper’s past purchases can kill brand exploration and product discovery. The very data used to personalize experiences can narrow them.

Striking the Balance

The sweet spot lies in balancing personalization and serendipity. This means using data-driven insights as a springboard for cleverly curated discovery. Stitch Fix strikes this balance: by pairing rich preference data with the instincts of human stylists, the service navigates the line between tailored selections and fashion-forward finds. Customers receive items that feel simultaneously “so them” and refreshingly novel. Amazon’s “Frequently Bought Together” offers another example. By suggesting complementary items, the feature builds upon a shopper’s expressed interests while expanding them. A consumer looking at a yoga mat might find an intriguing new book on mindfulness or an innovative water bottle. The experience feels at once personal and serendipitous.

Engineering Serendipity

For platforms and brands, the challenge lies in thoughtfully engineering such moments of serendipity. This requires a nuanced understanding of each consumer’s “adjacent possible” – the realm of experiences related to, but distinct from, their demonstrated interests. By mapping these adjacencies and strategically exposing consumers to them, platforms and brands can foster a sense of delightful discovery without veering into irrelevance. Data, both quantitative and qualitative, points the way. Platforms can mine search queries, browsing patterns, and product pairings to surface latent consumer interests. Social listening can reveal nascent trends from online discussions. Crucially, these insights must be continually balanced with human curation. Algorithmic recommendations can kickstart discovery, but tasteful editorial touches can infuse it with the warmth and wit that consumers crave.
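To make “engineering serendipity” a little more concrete, here is a minimal sketch of a re-ranker that blends a relevance score with a bonus for items adjacent to, but outside, a user’s demonstrated interests. The items, categories, adjacency map, and weights are entirely illustrative assumptions, not a description of how any platform mentioned above actually works.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    category: str
    relevance: float  # similarity to the user's demonstrated interests, 0..1

def serendipity_rerank(items, seen_categories, adjacency, novelty_weight=0.25):
    """Blend personalization with discovery.

    Relevance rewards items close to known tastes; a novelty bonus rewards
    items from categories adjacent to (but not already in) the user's history.
    The weights are illustrative - in practice they would be tuned per platform.
    """
    def score(item):
        is_new_category = item.category not in seen_categories
        is_adjacent = any(item.category in adjacency.get(c, ()) for c in seen_categories)
        novelty = 1.0 if (is_new_category and is_adjacent) else 0.0
        return (1 - novelty_weight) * item.relevance + novelty_weight * novelty
    return sorted(items, key=score, reverse=True)

# Toy example: a yoga-mat shopper sees adjacent-but-new suggestions ranked up,
# while an unrelated item stays at the bottom.
adjacency = {"yoga": ("mindfulness", "hydration")}
items = [
    Item("Premium yoga mat", "yoga", 0.95),
    Item("Mindfulness book", "mindfulness", 0.55),
    Item("Smart water bottle", "hydration", 0.50),
    Item("Lawn mower", "garden", 0.20),
]
for item in serendipity_rerank(items, seen_categories={"yoga"}, adjacency=adjacency):
    print(item.name)
```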
A Symbiotic Future

As consumer choice expands exponentially, personalization’s role in curation has never been more vital. However, a short-sighted pursuit of precision risks replacing the wide-eyed wonder of brick-and-mortar browsing with digital tunnel vision (have you ever entered a mall to buy just milk and left with a full cart? That’s serendipity at work). The antidote is a strategic blend of serendipity and personalization that expands consumer horizons while still catering to individual tastes. For consumers, this approach reduces the echo chamber effect of hyper-targeted recommendations, sparking the excitement of unexpected discovery without sacrificing relevance. Encounters with new products, content, or ideas feel organic and delightful, not odd or misaligned. Thoughtfully curated serendipity also alleviates the privacy concerns some consumers have about highly precise personalization. Experiences can feel deeply personal without relying on invasive data collection or user profiling.

Platform and Business Benefits

For platforms, integrating serendipity alongside personalization offers a range of advantages. It boosts overall content discovery and consumption, breathing life into overlooked offerings. It can help balance inventory in e-commerce settings, driving attention beyond just the most popular products. The engagement implications of this serendipity-personalization symbiosis are transformative. Unexpected yet relevant recommendations spark higher interaction, as consumers are drawn to explore. The promise of delightful discovery drives more frequent visits, turning sporadic users into loyal ones. In e-commerce settings, serendipity can inspire shoppers to venture into new product categories, increasing average order values. For content platforms, well-timed suggestions can ignite content binges.

Beyond boosting core metrics, this approach nurtures a deeper emotional connection. When a platform feels perceptive and pleasantly unpredictable, users form a strong affinity. They’re inspired to share their finds with their network, organically amplifying the platform’s reach. Churn rates fall as users always find reasons to return. In essence, engineering serendipity alongside personalization shifts the consumer relationship from transactional to magical. Users no longer merely consume recommendations; they embark on a journey of discovery. Each visit holds the potential of a new favorite product, a fresh perspective, or an unexpected passion. The platform becomes not just a provider, but a trusted guide.

Serendipity also enriches a platform’s data insights. As consumers explore new terrain based on clever recommendations, a richer picture of their evolving interests emerges. This nuanced understanding can, in turn, refine future personalization efforts. Rather than a vicious cycle of narrowing recommendations, it becomes a virtuous spiral of broadening discovery. By skillfully combining the personal and the novel, platforms and brands can craft digital experiences that feel as unique and surprising as the ones in the real world. In an age of limitless options, it’s these moments of delightful surprise that will define the consumer relationships that last. The businesses that master this dance between data and discovery will not only keep customers coming back, but keep them curious for what’s next. What are your thoughts about personalization? Are you mixing personalization

In products, to do more with less, do less with more

As a lifelong advocate for focused strategies, I’ve consistently championed the power of concentration throughout my career. This approach has been a cornerstone of my professional philosophy, guiding my decision-making and strategic planning across various roles and industries. Product success isn’t about doing more. It’s about doing less, but doing it extraordinarily well. The strategy might be counterintuitive, but it’s effective: do less with more. Let’s see how.

The Math of Concentration

In our pursuit of product excellence, let’s consider a typical scenario. Imagine a yearly team capacity of 200 person-weeks, with each development point equivalent to one person-week. You have 10 initiatives, each requiring 20 development points. Now, we’ll examine two approaches: a distributed approach tackling all 10 initiatives, and a focused approach concentrating on only 4 (a quick back-of-the-envelope sketch of this arithmetic appears at the end of this post).

The distributed approach, spreading resources across all 10 initiatives, paints a picture of breadth. With 5 concurrent initiatives, it takes approximately 12 months to complete them all, with a mean time to market of 6 months per initiative. The total cost? 200 points. The likely outcome? A broad but shallow market impact – jack of all trades, master of none.

Contrast this with the focused approach. By concentrating on just 4 initiatives, with 2 running concurrently, the landscape changes dramatically. The time to complete the key initiatives shrinks to 5-6 months, with a mean time to market of 2.5-3 months. Total cost? A mere 80 points. And you still have the remaining half of the year to improve on those 4 products, with more than double the resources you allocated initially! The potential outcome? Market leadership through focused, powerful product experiences.

The Time-Resource-Impact Triad

The focused approach demonstrates significant advantages. It slashes the development cycle from 12 months to 5-6 months, cuts the resource investment from 200 points to 80, and increases the probability of creating breakthrough initiatives. The net result? More impactful, strategically aligned development.

The Pros of Focus

Critical mass acceleration is the first benefit. Concentrated resources create momentum, like focusing sunlight through a magnifying glass – sparking innovation and rapid development. I call it the product quantum theory: if you do not concentrate enough energy (resources x time), the product won’t make the leap to success, and the energy will be wasted.

Next, we see reduced cognitive load. Fewer initiatives mean less mental overhead for teams. Context switching is a productivity killer: several studies suggest it takes between 5 and 45 minutes to fully return to a task after an interruption, potentially reducing effective working time by 40% or even 80%, according to Gerald Weinberg. By focusing on fewer, more critical initiatives, teams maintain deeper concentration, achieve flow states more easily, and produce higher-quality work.

Efficiency and market domination follow. Cutting time to market from 6 months to 2.5-3 enables companies to capture first-mover advantages, respond faster to customer demands, and iterate rapidly based on feedback. It reduces development costs and boosts investor confidence. In fast-moving industries, this accelerated timeline can mean the difference between market leadership and obsolescence. Also, with more resources, iterations are faster, and the product can move from strength to strength, quickly.

Quality over quantity becomes achievable.
Concentrated resources allow for deeper initiative development, more refined user experiences, and more sophisticated solutions. Ultimately, this leads to more success.

Finally, resource optimization comes into play. The 120 saved development points can be invested in deeper refinement of key product initiatives, extensive user testing, iterative improvements, exploring advanced capabilities, and continuous innovation of core product offerings. In one sentence, they can be used to let the initiative grow to its full potential.

The Potential Cons

It’s important to acknowledge potential downsides. There’s a risk of tunnel vision – concentrating too narrowly might mean missing emerging opportunities. Some markets require a broader approach, so focus can mean missed diversification. Psychologically, teams might feel constrained by fewer initiatives. These risks can be mitigated, though.

Strategic Considerations

This strategy isn’t about recklessness. It’s about surgical precision. The key lies in rigorous prioritization, data-driven initiative selection, continuous market validation, and rapid iteration capabilities. Remember: you can do anything you want, but you cannot do everything. Choose wisely.

Real-World Success Story: Spotify’s Podcast Power Play

Spotify’s podcast strategy serves as a masterclass in “do less with more.” Instead of diversifying across multiple content types or expanding into hardware, Spotify zeroed in on podcasts. They made significant moves in the podcast space, acquiring major production companies like Gimlet Media and The Ringer, and securing exclusive deals with high-profile creators such as Joe Rogan and the Obamas. The results were substantial. Spotify saw a dramatic increase in podcast listeners, growing from 7% of their user base engaging with podcasts in 2019 to 25% by 2021. This growth not only diversified their content offering but also increased user engagement and opened new revenue streams through podcast advertising. By choosing to do “less with more,” Spotify made a significant impact in a new market segment, enhanced their core product offering, and strengthened their position in the audio streaming industry. It’s a textbook case of concentrated resources creating outsized returns. Spotify passed on being in the middle of the pack in many markets to become the market leader in a select few. The results speak for themselves.

A Cautionary Tale: The Yahoo! Peanut Butter Manifesto

The Yahoo! peanut butter manifesto offers a nuanced lesson on the challenges of maintaining focus in a rapidly evolving tech landscape. As someone employed at Yahoo! during those years, I experienced firsthand both the company’s incredible potential and its strategic struggles. In 2006, Senior VP Brad Garlinghouse’s internal memo used the vivid metaphor of thinly spread peanut butter to describe the company’s approach of diversifying across numerous initiatives. Despite being home to some of the brightest minds in tech and pioneering many innovative services, Yahoo! found itself at a crossroads. The company’s vast array of talented teams were working on a multitude of projects, each valuable in its own right. However, this breadth sometimes came
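Picking up the capacity arithmetic from “The Math of Concentration” above, here is a minimal back-of-the-envelope sketch. The only inputs are the figures from the scenario (200 person-weeks of yearly capacity, 10 initiatives of 20 points each); the timing model (capacity split evenly across concurrent initiatives, work done in waves) is a simplifying assumption, not a planning tool.

```python
def scenario(initiatives, points_each, concurrent, weekly_capacity_points):
    """Rough timing for a portfolio where `concurrent` initiatives run in parallel
    and capacity is split evenly among them. Deliberately simplistic."""
    points_per_initiative_per_week = weekly_capacity_points / concurrent
    weeks_per_initiative = points_each / points_per_initiative_per_week
    waves = -(-initiatives // concurrent)  # ceiling division: batches of work
    total_weeks = waves * weeks_per_initiative
    total_points = initiatives * points_each
    return total_weeks, weeks_per_initiative, total_points

# Yearly capacity: 200 person-weeks = 200 points, i.e. roughly 4 points per week.
weekly_capacity = 200 / 52

distributed = scenario(initiatives=10, points_each=20, concurrent=5,
                       weekly_capacity_points=weekly_capacity)
focused = scenario(initiatives=4, points_each=20, concurrent=2,
                   weekly_capacity_points=weekly_capacity)

# Prints roughly the figures quoted in the post: ~12 months / 6 months / 200 points
# for the distributed approach, ~5 months / ~2.4 months / 80 points for the focused one.
for name, (total, per_initiative, cost) in [("distributed", distributed), ("focused", focused)]:
    print(f"{name}: ~{total / 4.33:.0f} months total, "
          f"~{per_initiative / 4.33:.1f} months to market per initiative, {cost} points")
```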

A call to end agreeable AI – lessons from social media

In recent weeks, we’ve witnessed a concerning trend with OpenAI’s latest GPT-4o model, which many users have reported as becoming excessively agreeable – validating even harmful or false statements. This phenomenon isn’t just a temporary technical glitch; it represents a fundamental risk in how AI systems are being optimized. Sam Altman himself acknowledged this issue, noting on X: “The last couple of GPT-4o updates have made the personality too sycophant-y and annoying… and we are working on fixes asap.” But why is this happening in the first place? What we’re seeing is a classic optimization problem. When AI companies prioritize user satisfaction metrics, systems naturally evolve toward telling people what they want to hear rather than what they need to hear. This mirrors exactly what happened with social media algorithms over the past decade – platforms optimized for engagement rather than wellbeing, and we’re still dealing with the societal consequences.

The Social Media Cautionary Tale

The evolution of social media offers a sobering preview of what could happen with AI systems. What began as platforms for connection gradually transformed into sophisticated engagement machines with profound societal impacts:

1. **Algorithmic Amplification**: Social platforms discovered that emotional content – particularly outrage, fear, and tribalism – drove significantly higher engagement. The algorithms were adjusted accordingly, not out of malice but following the optimization imperative.

2. **Echo Chambers**: As engagement metrics became paramount, platforms began showing users primarily what they already agreed with. Research from MIT and Stanford has documented how this algorithmic curation created fragmented information ecosystems where contradictory facts rarely penetrate.

3. **Erosion of Shared Reality**: By 2020, the Edelman Trust Barometer showed that 76% of people worried about false information being used as a weapon – a direct consequence of algorithms that prioritized engagement over accuracy.

4. **Addiction By Design**: Features like infinite scroll, notification systems, and variable reward mechanisms were deliberately engineered to maximize time spent – what Tristan Harris has called “the race to the bottom of the brainstem.”

The Amplified Risks with AI

The risks with people-pleasing AI are potentially more severe than those we’ve seen with social media:

1. **Personal Validation at Scale**: While social media validates through likes and comments, AI can provide direct, personalized validation of even harmful beliefs. Imagine systems that unfailingly agree with conspiratorial thinking, medical misinformation, or destructive personal choices.

2. **Undermining of Expertise**: When AI systems reflexively agree with users, they implicitly devalue expert knowledge. The AI saying “you’re right” becomes more accessible and comfortable than the expert saying “that’s incorrect.”

3. **Cognitive Outsourcing**: Research in psychology shows that we already outsource memory to our devices. With validation-optimized AI, we risk outsourcing critical thinking itself – why struggle with complex analysis when an AI will validate your first instinct?

4. **Institutional Distrust**: If AI systems consistently validate users’ preconceptions, they could accelerate the erosion of trust in traditional sources of authority – from scientific institutions to professional journalism – that might challenge those views.
5. **Psychological Dependency**: Perhaps most concerning is the potential for users to develop emotional dependency on AI validation, creating a relationship where users increasingly seek artificial confirmation rather than human connection or self-reliance.

Corporate and Societal Implications

For organizations deploying AI, these risks demand careful consideration:

– **Decision-Making Integrity**: An overly agreeable AI could validate flawed strategic thinking rather than providing necessary counterarguments.

– **Ethical Responsibility**: Companies deploying AI systems must recognize their role in shaping societal information flows, much as social media companies eventually had to.

– **Regulatory Attention**: As with social media, unaddressed harms from people-pleasing AI will inevitably attract regulatory scrutiny.

The harmful impact of engagement-optimized social media took years to fully comprehend. With AI, we have the advantage of this historical parallel. The question is whether we’ll learn from it or repeat the same optimization mistakes with potentially more profound consequences. True innovation in AI development must create systems that deliberately challenge our thinking – AI that functions as a devil’s advocate, that enters into proper contradictory dialogue, and that pushes back with reasoned arguments when our logic fails. We need AI that’s programmed not to maximize agreement but to maximize intellectual growth through productive disagreement. That’s the AI we truly need. And personally, that’s the AI I want – one that challenges me rather than simply agreeing with me.

I’ve personally experienced this troubling phenomenon multiple times now – where large language models subtly propose or confirm thoughts, interpretations, or ideas that I knew, with proper reasoning and logic, were incorrect. The most dangerous aspect is how naturally it happens – a slight nudge toward agreement, a small confirmation bias, or a gentle reinforcement of a flawed premise. These small validations can add up to significant confirmation of incorrect thinking. Has this happened to you? Have you noticed AI systems becoming more agreeable even when they shouldn’t be? I’d be interested to hear your experiences and what implications you think this might have for your organization and society at large.

#AI #AIEthics #DigitalTransformation #AIStrategy #ProductStrategy #TechLeadership
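As a small illustration of what a “devil’s advocate” assistant could look like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and temperature are my own illustrative assumptions; this is a thought-starter for steering behaviour, not a fix for the issue described above.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt: steer the assistant away from reflexive agreement.
DEVILS_ADVOCATE_PROMPT = (
    "You are a critical thinking partner, not a cheerleader. "
    "Before agreeing with the user, check their claim against known evidence. "
    "If their reasoning is flawed, say so plainly, explain why, and offer the "
    "strongest counterargument you can. Never validate a claim just to please."
)

def challenge(user_claim: str) -> str:
    """Ask the model to push back on a claim rather than confirm it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": DEVILS_ADVOCATE_PROMPT},
            {"role": "user", "content": user_claim},
        ],
        temperature=0.3,  # keep the pushback measured rather than creative
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(challenge("Our conversion rate doubled, so the redesign must be the cause."))
```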

You have an AI agent. Great. Does it have the data?

The rush to implement AI agents within organizations reminds me of the early cloud migration days – lots of enthusiasm, but often overlooking critical foundations. As we experienced at dunnhumby when scaling data platforms, capabilities without connections create frustration, not value.

Here’s the reality: no matter how brilliant your AI agent is, it’s only as good as the data it can access and process. And there’s the rub. For an AI agent to perform well, it needs quality data. But even before we talk about data quality, let’s tackle the more fundamental challenge – data accessibility. For your agent to be effective, it must be able to discover, access, and interpret your enterprise data.

The problem? No industry-accepted protocols exist for how AI agents should discover and access enterprise data. Each organization is essentially building custom bridges between its AI agents and data systems. Organizations succeeding with AI agents today are investing as much in data accessibility infrastructure as they are in the agents themselves. They’re building middleware that serves as a data translator between their agents and enterprise systems. Until standardized protocols emerge (thanks Anthropic for the MCP, and good luck!), this custom programming approach will remain necessary. But the investment pays dividends across every agent use case you’ll deploy.

Remember: a smart agent without data access is like hiring a brilliant consultant and then giving them no information about your business. Has your organization solved the data accessibility challenge for AI agents? What approaches are working?
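To illustrate the kind of middleware “data translator” described above, here is a minimal sketch of a uniform adapter contract an agent could reason over. The class and method names are hypothetical, and a real deployment would add authentication, governance, and, where it fits, an emerging standard such as MCP.

```python
from abc import ABC, abstractmethod
from typing import Any

class DataSource(ABC):
    """Uniform contract the agent sees, regardless of the system behind it."""

    @abstractmethod
    def describe(self) -> dict[str, Any]:
        """Schema and freshness metadata so the agent can decide what to query."""

    @abstractmethod
    def query(self, question: dict[str, Any]) -> list[dict[str, Any]]:
        """Run a structured request and return rows as plain dictionaries."""

class CrmAdapter(DataSource):
    """Hypothetical adapter wrapping a CRM API behind the uniform contract."""

    def describe(self) -> dict[str, Any]:
        return {"entity": "customer",
                "fields": ["id", "segment", "lifetime_value"],
                "refreshed": "daily"}

    def query(self, question: dict[str, Any]) -> list[dict[str, Any]]:
        # In reality: translate `question` into the CRM's own API calls.
        return [{"id": "c-001", "segment": "loyal", "lifetime_value": 1520.0}]

def agent_tool_catalog(sources: list[DataSource]) -> list[dict[str, Any]]:
    """What the agent is actually given: source descriptions it can reason over."""
    return [src.describe() for src in sources]

print(agent_tool_catalog([CrmAdapter()]))
```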

Data and Analytics Fueling UnitedHealth Group’s Powerful Flywheel

The recent 20% stock price drop of UnitedHealth Group (UHG) after a slight earnings miss prompted me to look beyond the short-term market reaction and examine their long-term strategic vision and organizational structure.

The Dual Division Structure: A Strategic Masterpiece

UnitedHealth Group’s business model is built around a strategic dual division structure that creates a powerful synergistic relationship.

UnitedHealthcare Division

The insurance side focuses on providing health benefits through employer-sponsored, individual, Medicare, Medicaid, and military health plans, serving approximately 50 million members.

Optum Division

Operating across three key segments, Optum has become a crucial growth engine and now accounts for approximately half of UHG’s total revenues.

The UHG Flywheel in Action

What’s fascinating is how UHG is combining these divisions to build a comprehensive healthcare marketplace that leverages several powerful business concepts simultaneously, creating a virtuous cycle.

Why This Strategy Is Powerful for an Incumbent

What makes UHG’s approach particularly noteworthy is that, from my perspective observing from afar, they’re successfully implementing a platform strategy that we typically associate with digital-native companies. They’re leveraging their incumbent advantages—scale, capital, and existing relationships—while adopting the nimbleness and data-centricity typical of tech companies. During my time at dunnhumby and in my work with retail and financial services organizations, I’ve seen how difficult it is for traditional companies to truly harness data as a competitive advantage. UHG appears to be doing this effectively by positioning itself as a platform rather than just a traditional insurer.

The Ethical Dimension: Power, Profit, and Patient Outcomes

While UHG’s strategy is undeniably impressive from a business perspective, we must ask: is it acceptable for a profit-driven corporation to wield such immense power through vertical integration and control of healthcare data? This concentration of market power raises important questions, and we might consider alternative structures that maintain the benefits of integration while better aligning incentives with patient health. The fundamental tension remains: can an entity optimize simultaneously for profit and patient outcomes, or do we need different institutional arrangements to truly put health and well-being at the center?

Looking Beyond the Stock Drop

While the market’s reaction to recent earnings might suggest trouble, I believe the long-term strategy remains sound and forward-looking. UHG is building infrastructure that increases switching costs for all participants while simultaneously improving outcomes—a textbook example of how to defend and expand market position. The short-term pressures (Medicare Advantage adjustments, post-pandemic utilization normalization) are real challenges, but they don’t undermine the fundamental strategic direction of this powerful flywheel-based approach. What are your thoughts on healthcare platforms and the potential for incumbents to successfully build digital marketplaces?

#UnitedHealthGroup #HealthcareInnovation #PlatformStrategy #HealthTech #Bigdata #Platforms

Do not sell your data!

Leading data initiatives at companies like dunnhumby and Reward, where we tracked billions of transactions worth billions of pounds, I’ve observed a common misconception about data monetization: the belief that selling raw data is a great way to create value. This is not just wrong – it’s potentially damaging to your long-term business prospects. Here’s why, and a better approach.

The Raw Data Trap

Many companies sitting on valuable data assets immediately think about selling that data to interested parties. It’s understandable – you have something others want, why not sell it directly? But this approach has several important flaws.

The Power of Insights

The first step up the value chain is transforming data into insights, an approach that offers several advantages.

The Ultimate Goal: Actionable Outcomes

The highest form of data monetization is turning insights into actions. This is where the real value multiplication happens.

Building a Sustainable Data Business

To successfully monetize data, build around insights and actions rather than the raw asset.

The Multiplication Effect

Perhaps the most compelling argument for this approach is the multiplication effect. A single dataset, properly leveraged, can power multiple products serving different use cases at different price points. Each step up the value chain – from data to insights to actions – multiplies your potential revenue. This again was one of our ‘killer’ apps at dunnhumby: ‘recycling’ data for multiple use cases and customers. Think about it: would you rather sell your customer data once for £X, or build a sustainable business that generates multiples of £X by solving various high-value problems with that same dataset? The key is understanding that data’s true value lies not in the data itself, but in its application to solve real business problems. Focus on turning your data into solutions that deliver clear business outcomes, and you’ll build a more valuable, sustainable business. What’s your experience with data monetization? Have you seen companies succeed with raw data sales, or do you agree that insights and actions are the way to go?

#DataMonetization #ProductStrategy #DigitalTransformation #Data #Analytics
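As a toy illustration of the data-to-insights-to-actions ladder described above, here is a minimal sketch. The transactions, the “frequently bought together” insight, and the resulting action are all made up for the example; a real pipeline would involve far more rigour.

```python
from collections import Counter
from itertools import combinations

# Raw data: the asset many companies are tempted to sell as-is.
transactions = [
    {"customer": "a", "basket": {"yoga mat", "water bottle"}},
    {"customer": "b", "basket": {"yoga mat", "mindfulness book"}},
    {"customer": "c", "basket": {"yoga mat", "water bottle", "mindfulness book"}},
]

# Insight: which products are frequently bought together?
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t["basket"]), 2):
        pair_counts[pair] += 1
top_pair, count = pair_counts.most_common(1)[0]

# Action: turn the insight into something a client can actually execute on.
action = (f"Bundle '{top_pair[0]}' with '{top_pair[1]}' "
          f"(seen together in {count} of {len(transactions)} baskets).")

print("Insight:", dict(pair_counts))
print("Action:", action)
```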

What Happens When Content Becomes Infinite and Free?

In an era where AI can generate content at unprecedented scale and speed, we face an intriguing paradox: what’s the value of infinite content in a world of finite attention? Let’s decompose this transformation.

When I was leading product at dunnhumby, we processed over 50 billion customer transactions yearly. The volume of data wasn’t the challenge – extracting meaningful insights that drove business value was. Today, we’re seeing a similar pattern with content, but at an even more dramatic scale. The transformation has multiple layers.

At Yahoo!, we focused heavily on content creation and distribution. Today, that strategy would need radical rethinking. The challenge isn’t creating content – it’s ensuring it reaches the right audience at the right time. This mirrors what we experienced at dunnhumby: data abundance without proper curation and relevance quickly becomes noise. Think about Netflix’s recommendation algorithm – its value isn’t in its library of 17,000+ titles, but in its ability to surface the right content for each viewer. The same principle will apply across all content platforms. But there’s a crucial difference: while Netflix’s content is professionally produced and vetted, we’re entering an era where content can come from anywhere, created by anyone (or anything).

Meta’s experience with user-generated content offers valuable lessons. They’ve already solved many challenges we’re facing with AI-generated content. Their platforms process billions of posts daily, using sophisticated systems to detect quality, filter misinformation, and build trust – exactly what we need for AI content. The real difference isn’t in content volume or validation needs – Meta handles those daily. It’s in the incentive structures. While human creators seek attention and engagement, AI systems can be optimized for different objectives. This actually presents an opportunity: we can program AI to optimize for value creation rather than just engagement. At dunnhumby, we learned that aligning incentives with value creation was crucial for sustainable platforms.

This shift in incentive structures reshapes how we think about content quality, trust, and distribution. Quality assessment moves from engagement metrics to value metrics. We need frameworks that measure actual utility to users, not just their attention. At dunnhumby, we learned to distinguish between high-engagement and high-value customer behaviors – the same principle applies here. Trust mechanisms shift from reactive to proactive. Instead of moderating after publication, we can build trust signals into the content generation process itself. This requires new reputation systems that evaluate not just authenticity, but consistency in value delivery over time. Distribution economics need fundamental rethinking. When content can be optimized for specific objectives rather than engagement, traditional monetization models need revision. The challenge becomes aligning platform economics with value creation rather than attention capture. I know, easier said than done!

Implications for Product Strategy

This shift has profound implications for product strategy. When I was building data products at dunnhumby, we learned that value wasn’t in data accumulation but in insight generation. So what will happen with content?

The Platform Evolution

Drawing from my experience at PubMatic and Yahoo!, I see three major shifts coming. First, we’re witnessing a complete inversion of the value chain.
Traditional platforms obsessed over content sourcing and distribution – it was all about getting more content to more people. But that’s becoming meaningless in a world of infinite content. Future platforms will instead focus on filtering and matching. Think about it: your value proposition completely flips from “access to content” to “protection from noise.” This fundamentally changes how platforms need to think about their revenue models. At Yahoo!, we were constantly pushing for more content volume – today, that would be precisely the wrong strategy.

Second, network effects are being completely redefined. Traditionally, these effects were straightforward: more users meant more content, which attracted more users. But in a world of infinite content, that logic breaks down. Future network effects will center on curation quality – the platforms that can build the most trusted curation engines will win. At PubMatic, we saw how quality signals became increasingly important in programmatic advertising. The same principle applies here, but at a much larger scale. User trust and engagement become your moats, and community validation becomes a key feature of your platform.

Third, platforms need to become AI-native from the ground up. This isn’t about bolting AI onto existing architectures – it’s about reimagining platforms where content creation, curation, and distribution are one seamless flow. Real-time personalization isn’t a feature, it’s the foundation. Quality signals need to be built into the core architecture. At dunnhumby, we saw how retailers who treated data as a strategic asset outperformed those who saw it as a byproduct. Similarly, platforms that understand this shift in value creation will outperform those still focused on pure content volume.

Looking Ahead

We’re moving from a world where content was king to one where curation reigns supreme. The value is shifting from creation to discovery, from quantity to relevance. This isn’t just another technological shift – it’s fundamentally changing how we think about value creation in the digital economy. What’s your view on this transformation? How are you thinking about value creation in a world of AI-generated abundance? Are you seeing similar patterns in your industry?

#AI #DigitalTransformation #ProductStrategy #Content #DigitalMedia
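One way to picture the shift from engagement metrics to value metrics is as a ranking function whose weights have been turned around. The sketch below uses entirely invented signals and weights; it illustrates the principle, not any platform’s actual scoring.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    title: str
    clicks: float          # classic engagement signal (normalised 0..1)
    dwell_quality: float   # proxy for useful attention, not just attention
    task_completed: float  # did the user actually get something done?
    return_rate: float     # do people come back to this content over weeks?

def engagement_score(c: ContentSignals) -> float:
    # Attention-capture weighting: clicks dominate.
    return 0.8 * c.clicks + 0.2 * c.dwell_quality

def value_score(c: ContentSignals) -> float:
    # Invented weights: utility and longevity over raw attention capture.
    return (0.1 * c.clicks + 0.3 * c.dwell_quality
            + 0.35 * c.task_completed + 0.25 * c.return_rate)

catalogue = [
    ContentSignals("Outrage-bait listicle", clicks=0.9, dwell_quality=0.2,
                   task_completed=0.1, return_rate=0.1),
    ContentSignals("Practical how-to guide", clicks=0.4, dwell_quality=0.8,
                   task_completed=0.9, return_rate=0.7),
]

# The same catalogue ranks differently under each objective.
for scorer in (engagement_score, value_score):
    ranked = sorted(catalogue, key=scorer, reverse=True)
    print(scorer.__name__, "->", [c.title for c in ranked])
```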

The Unsexy yet Fundamental Part of AI Projects: Data

In my years leading data and product initiatives, I’ve seen firsthand what really drives successful AI projects. While the focus is often on sophisticated algorithms, cutting-edge models, fancy use cases, and cool demos or prototypes, the reality is far less exciting but much more fundamental and critical: data acquisition, preparation, and management typically account for about 80% of the effort in most AI initiatives.

The 80/20 Rule of AI Projects

You can discover this truth by yourself (at a significant cost), or just trust me and read on. Most of the time, resources, and frustration aren’t spent on developing advanced algorithms or fine-tuning neural networks. Instead, they’re spent on acquiring, cleaning, integrating, and managing data. Only after these foundational elements are in place can the actual work on AI models begin. And even then, the data challenges continue as models need to be retrained, monitored, and maintained with fresh, high-quality data. And you might need to rework your source data.

Why Data Preparation Dominates AI Projects

There are several reasons why data work is so prominent in AI projects.

1. The Reality of Enterprise Data

Even after decades of investment in data warehouses and data lakes, a big chunk of enterprise data remains fragmented, inconsistent, and poorly documented. In many organizations, even basic questions like “how many customers do we have?” can yield different answers depending on which system you query. It has happened to me personally on more than one occasion – I have spent weeks working out how many customers we had, starting from the most important thing: the definition of ‘customer’. You’d be surprised how many definitions you can come up with.

2. Quality, Quality, Quality

Machine learning models amplify the problems in your data. Poor data quality means poor model performance – it’s that simple. As the saying goes: garbage in, garbage out. This reality forces AI teams to spend significant time ensuring data quality before any modeling can begin. You just have to do it, if you care about good outcomes. On one occasion, despite the customer’s reassurance that their data set was as good as gold, after cleaning and structuring their transactional data it turned out there were significantly more customers in the dataset than people living in the country they operated in! How good were the model’s predictions going to be?

3. Integration Challenges

AI systems require integrating data from multiple systems – often combining structured data (from databases) with unstructured content (like images, text, or voice recordings). Creating cohesive datasets from these diverse sources is complex and time-consuming. All these integrations also need to be maintained. Pipelines break, and need to be fixed.

Real-World Impact

During my time at dunnhumby, our retail analytics succeeded because of meticulous, almost maniacal attention to data preparation. This was the bedrock of our success. All teams invested heavily in creating clean, well-structured data assets that could be effectively used by AI solutions that delivered measurable ROI. This was the basis of our continued success.

How Organizations Can Respond

For executives sponsoring AI initiatives, understanding this reality leads to several strategic imperatives.

Looking Forward

As AI becomes more central to business operations, the organizations that succeed won’t necessarily be those with the most advanced algorithms.
Instead, the winners will be those that have built robust data foundations – what I call “data monetization capabilities” – that enable rapid and reliable deployment of AI. The breakthroughs in AI research make headlines, but the quiet, persistent work of building data infrastructure is what truly enables AI success. For executives embarking on AI transformations, embracing this reality early can be the difference between success and failure.
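To make the “unsexy” work tangible, here is a minimal sketch of the kind of sanity checks that catch problems like the one in the anecdote above (more “customers” in the data than people in the country). Column names, thresholds, and the sample data are assumptions; real pipelines would typically lean on a dedicated framework such as Great Expectations or dbt tests.

```python
import pandas as pd

def sanity_check_customers(transactions: pd.DataFrame, country_population: int) -> list[str]:
    """Cheap checks that should pass before anyone trains a model."""
    issues = []

    # 1. Missing customer identifiers.
    if transactions["customer_id"].isna().any():
        issues.append("missing customer_id values")

    # 2. Plausibility against an external reference point.
    distinct_customers = transactions["customer_id"].nunique()
    if distinct_customers > country_population:
        issues.append(
            f"{distinct_customers} distinct customers exceeds the country population "
            f"of {country_population} - identity resolution needed"
        )

    # 3. Basic value ranges.
    if (transactions["amount"] <= 0).any():
        issues.append("non-positive transaction amounts present")

    return issues

# Illustrative usage with deliberately broken, made-up data.
df = pd.DataFrame({
    "customer_id": ["c1", "c2", "c3", None],
    "amount": [12.5, 7.0, -3.0, 5.0],
})
print(sanity_check_customers(df, country_population=2))
```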

The Evolution of Product Leadership: From Features to Value Creation, and the AI Revolution

In my 20+ years in product leadership, I’ve witnessed a fundamental transformation in how we approach the discipline. Let me share some observations that might resonate with fellow product leaders. The role of product leadership has evolved from “feature factory manager” to “value creation orchestrator.” This shift isn’t just semantic – it represents a profound change in how we think about and measure product success. Let’s look at this evolution through different lenses.

From Output to Outcomes

Traditional product management was obsessed with outputs: features shipped, story points completed, releases made. Modern product leadership focuses on outcomes: customer value delivered, business metrics moved, strategic objectives achieved. At Truvo, this shift helped us grow digital revenues from €50M to €150M ARR in two years. The driver? MySite, a modular WYSIWYG website builder – think of it as a precursor of today’s Wix – that created immediate value for SMEs. In fact, we acquired 20,000 SME customers in just 12 months. Linking every product decision to measurable customer value made all the difference. But how do we actually measure what truly matters?

Measuring What Really Matters

Drawing from John Doerr’s “Measure What Matters,” we’ve learned that OKRs in product need to link directly to value creation. But here’s what they don’t tell you: implementing value-based OKRs requires a cultural transformation. At dunnhumby, we moved from measuring feature adoption to measuring business impact. For example, instead of tracking how many retailers used our forecasting tool, we measured the reduction in waste and out-of-stocks it delivered – a shift that increased product stickiness and doubled user adoption. The key was connecting product metrics to financial outcomes: revenue uplift, cost reduction, or margin improvement. But perhaps more importantly, we learned that not everything that matters can be measured, and not everything that can be measured matters. For instance, while we could measure every click in our retail media platform, what really mattered was advertiser ROI – a metric that required close collaboration with customers to define and track properly.

This evolution in measurement needs to flow through the entire organization. Product teams should understand how their daily decisions impact business metrics, engineers should see how their technical choices affect customer value, and stakeholders should evaluate success through outcome-based metrics rather than output-based ones. It’s not just about changing metrics – it requires a fundamental shift in how teams think about success. What capabilities does this transformation require from product leaders?

The Modern Product Leader’s Toolkit

Today’s product leader needs three core capabilities. How does AI reshape this value creation equation?

The AI Impact

AI isn’t just another technology wave. It’s fundamentally reshaping how we think about value creation in products. The challenge has inverted: from struggling to build what customers want, to choosing which of the infinite possibilities will drive the most value. During my time at dunnhumby, we faced this daily: every process could be automated, every decision augmented, every experience personalized. But successful AI initiatives weren’t determined by technical sophistication.
Instead, they were defined by three critical factors. This shift emphasizes a crucial evolution in product leadership: the ability to navigate through endless technical possibilities to identify true value creators. It’s no longer about building AI capabilities – it’s about orchestrating them into coherent value streams that transform customer businesses. The most successful product leaders today aren’t those who understand AI best, but those who excel at identifying where AI intersects with maximum customer value and operational reality. What’s really holding organizations back from embracing this evolution?

Cultural Transformation

Perhaps the biggest challenge isn’t technical – it’s cultural, and success requires a genuine cultural shift. The most successful product organizations I’ve led share one common trait: they’ve moved beyond the feature factory mindset to embrace value creation as their north star. Looking ahead, I believe the next frontier in product leadership will be about orchestrating value creation across increasingly complex ecosystems of products, services, and experiences. The winners will be those who can navigate this complexity while keeping laser-focused on customer value creation. What’s your experience with this evolution? How is your organization adapting to this new paradigm? Let’s continue this conversation in the comments.

#ProductLeadership #Innovation #DigitalTransformation #AI #CustomerValue

Data Product Management: Features Don’t Drive Value. Insights Do.

At dunnhumby, while building Walmart Luminate, I had an “aha” moment that changed how I think about data products: we were spending too much time discussing features, and not enough talking about insights. This is a common trap: I’ve seen countless organizations treat data products like traditional ones. It simply doesn’t work. Here’s why: traditional products are about features enabling outcomes. A CRM helps manage customers, an accounting package manages finances, a word processor helps create documents. The value chain is clear and linear. Data products flip this model on its head. Their features and outcomes are essentially the insights they generate and the actions they drive. When we built Walmart Luminate, success wasn’t about adding more features – it was about generating insights that drove better decisions and measurable business outcomes.

A Different Kind of Product

Let me give you a real example. At Yahoo!, our advertising platform started as a simple marketplace for ad space. But the real value emerged when we started layering in data capabilities – audience insights, performance analytics, optimization algorithms. The core product remained the same, but its value multiplied exponentially. This highlights a fundamental truth about data products: their value isn’t linear. At dunnhumby, combining different data sets often created insights worth far more than the sum of their parts. A customer segment analysis combined with promotional data might reveal opportunities nobody had spotted before. But here’s the catch – data products are also more fragile. One unreliable data point, one privacy breach, one quality issue, and you can lose customer trust forever: trust isn’t a feature, it’s the foundation everything else builds on.

Different Skills, Different Mindset

Think about what makes a great traditional product manager: they obsess over user experience, feature prioritization, market fit. All crucial skills. But for data products? That’s just the starting point. I learned this building teams over time. The best data product managers weren’t necessarily the ones with the strongest traditional product background. They were the ones who could bridge worlds – understanding both the retail business and the possibilities of advanced analytics. They could translate between data science solutions and real business problems. You also need to think differently about infrastructure. In traditional products, infrastructure supports your features. In data products, your infrastructure choices fundamentally shape what’s possible. Get your data architecture wrong early on, and you’ll pay for it forever. Trust me on this one – I’ve seen it happen more times than I care to remember.

This is why I agonized so much over data structures at dunnhumby: where do the data sit, can they travel easily, how many times do they have to travel, what do the data schemas look like, how easily can you access the data, how do they integrate upstream, what’s the refresh rate? These aren’t just technical decisions – they fundamentally shape what products you can build and how much value you can deliver. Once we had to completely change the product because the refresh rate wasn’t what it was supposed to be! We were seconds away from throwing the whole thing out of the window when we had a breakthrough and pivoted toward a different data product. Not different features – a whole different product!

The Evolution: From Internal Tool to Product

Let me walk you through how this typically plays out.
I’ve seen this evolution multiple times, and it usually follows three stages.

Stage 1: Internal Focus

Here’s a story from my early dunnhumby days. A retailer came to us wanting to optimize their private label portfolio. Simple request, right? But it perfectly illustrates the first stage of data product evolution. You’re not building for external customers yet. You’re using data to enhance internal operations. But don’t underestimate this phase – it’s where you learn what makes data valuable in your specific context. The product manager’s role here looks very different: you’re not shipping features. The key? Success isn’t just about generating insights – it’s about building the organizational muscle to act on them. I’ve seen plenty of great insights die in PowerPoint decks because organizations weren’t ready to use them.

Stage 2: Adding Intelligence

This is where it gets interesting. You’re taking existing products and enhancing them with data capabilities. Think of it as adding a brain to your existing offerings. At Yahoo!, we transformed our basic ad platform into a sophisticated performance marketing solution by progressively adding data capabilities. Each new data layer – audience insights, performance analytics, optimization algorithms – multiplied the value of the core product. But here’s the trap I see most teams fall into: adding data features just because they can. Every enhancement needs to solve a real problem or create meaningful value. Products will fail if they become over-engineered data platforms rather than solutions to customer problems. If something doesn’t generate value, it isn’t needed and shouldn’t go in. Simple as that.

Stage 3: Data as the Product

This is where data product management truly comes into its own. You’re not enhancing existing products anymore – you’re creating standalone data products. Exciting? Yes. But this is also where I’ve seen many organizations stumble. Building Walmart Luminate was a masterclass in this stage. We weren’t just packaging data – we were creating a suite of products that fundamentally changed how retailers and CPG manufacturers worked together, at the biggest retailer in the world! Every insight was worth tens of millions of dollars. The challenges here are unique.

The Hard Truth

Want to know the biggest mistake I see? Organizations jumping straight to Stage 3 before mastering Stages 1 and 2. It’s tempting – the allure of data monetization is strong. But it’s like trying to run before you can walk. I’ve learned that successful data products aren’t built – they evolve. You start by proving value internally, then enhance existing products, and finally create standalone offerings. Skip these steps at your peril.

Looking Ahead

Here’s what I know for sure: the future of product management
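As a footnote to the data-architecture questions raised earlier in this piece (where the data sit, schemas, access, refresh rate), here is a minimal sketch of how a team might capture them as an explicit data contract. Every field name and the example feed are illustrative assumptions, not a standard or a real dunnhumby artefact.

```python
from dataclasses import dataclass, field

@dataclass
class DataProductContract:
    """The questions worth agonising over, captured as an explicit artefact."""
    name: str
    storage_location: str          # where do the data sit?
    schema: dict[str, str]         # column -> type: what do the data look like?
    access_method: str             # how easily can consumers reach them?
    upstream_sources: list[str] = field(default_factory=list)  # integration points
    refresh_rate: str = "daily"    # the assumption that forced our pivot
    max_copies_allowed: int = 1    # how many times do the data have to travel?

# Hypothetical example feed, not a real product.
example_feed = DataProductContract(
    name="store_transactions_daily",
    storage_location="warehouse://sales/transactions",
    schema={"store_id": "string", "sku": "string", "units": "int", "ts": "timestamp"},
    access_method="SQL view + governed API",
    upstream_sources=["pos_system", "loyalty_platform"],
    refresh_rate="hourly",
)

print(example_feed)
```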
